From: Dominic S. <dom...@gm...> - 2009-12-12 05:19:42

On Thursday 10 of December 2009 18:58:24 Stephen Sinclair wrote:
> > The only solution I can think of is an explicit free() method for
> > the server classes, so you don't need to rely on Python's reference
> > counting/garbage collection to do the right thing. Could you try
> > the attached patch?
>
> What about changing:
>
>     def __dealloc__(self):
>         lo_server_thread_free(self._thread)
>
> to:
>
>     def __dealloc__(self):
>         lo_server_thread_stop(self._thread)
>         lo_server_thread_free(self._thread)
>
> ?
>
> I think the extra _stop() will not cause any harm if it's already
> been called, since liblo tracks the active status of server threads.

The first thing lo_server_thread_free() does is to call
lo_server_thread_stop() if the server is still active, so that additional
call shouldn't make any difference.

The problem here is that we simply don't know when __dealloc__() will be
called by the Python interpreter. In some cases it might not be called
until the interpreter exits, which is fine in most cases, but not if you
want to re-open the same port while the interpreter is still running.
That's why an additional method is needed to reliably close the server.

Dominic

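The patch mentioned above is not attached in the archive. A minimal sketch
of what such an explicit free() method might look like in the .pyx wrapper,
assuming _thread holds the underlying lo_server_thread pointer (the method
name, the NULL guard, and the pointer reset are assumptions, not the actual
patch):

    def free(self):
        # Hypothetical explicit cleanup: close the server and release
        # its port now, instead of waiting for __dealloc__() to run.
        if self._thread != NULL:
            # lo_server_thread_free() stops the thread first if it is
            # still active, then closes the socket.
            lo_server_thread_free(self._thread)
            self._thread = NULL

    def __dealloc__(self):
        # Guard against a double free in case free() was already called.
        if self._thread != NULL:
            lo_server_thread_free(self._thread)

Callers would then invoke free() as soon as they are done with a server,
rather than relying on del or garbage collection to close the port.
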
From: Stephen S. <rad...@gm...> - 2009-12-10 17:58:36

On Wed, Dec 9, 2009 at 8:55 PM, Dominic Sacré <dom...@gm...> wrote:
> Hi Dirk,
>
> On Wednesday 09 of December 2009 10:52:09 Dirk Griffioen wrote:
>> Often when I start/stop/start a python server I get the following
>> error:
>>
>> liblo.ServerError: server error 9904: cannot find free port
>>
>> I think this should not happen: a closed port can be reopened.
>> However, when I take a closer look, the state of the port is
>> FIN_WAIT2, I think because close() is not called locally.
>>
>> If I look at the pyx code I see the following:
>>
>>     def __dealloc__(self):
>>         lo_server_thread_free(self._thread)
>>
>>     def stop(self):
>>         lo_server_thread_stop(self._thread)
>>
>> So the free (which calls close()) is in 'dealloc' - but python is not
>> deterministic in when it calls a destructor, so the socket may or may
>> not be closed when I start it again.
>
> It took me a while to reproduce this problem, although I know I had seen
> it before a few times. I'm still not sure what exactly is going on, as
> this happens even in cases where there are clearly no cyclic references,
> so Python's garbage collector shouldn't even be involved...
>
> Strangely enough, the most reliable test case I've found is a simple
>
> >>> liblo.Server(1234)
>
> i.e. not binding the object to any variable. One would expect the server
> to be deallocated immediately after creation, but it isn't. Only running
> gc.collect() destroys the server and closes its port.
>
> The only solution I can think of is an explicit free() method for the
> server classes, so you don't need to rely on Python's reference
> counting/garbage collection to do the right thing. Could you try the
> attached patch?

What about changing:

    def __dealloc__(self):
        lo_server_thread_free(self._thread)

to:

    def __dealloc__(self):
        lo_server_thread_stop(self._thread)
        lo_server_thread_free(self._thread)

?

I think the extra _stop() will not cause any harm if it's already
been called, since liblo tracks the active status of server threads.

Steve

From: Dominic S. <dom...@gm...> - 2009-12-10 01:57:12

Hi Dirk,

On Wednesday 09 of December 2009 10:52:09 Dirk Griffioen wrote:
> Often when I start/stop/start a python server I get the following
> error:
>
> liblo.ServerError: server error 9904: cannot find free port
>
> I think this should not happen: a closed port can be reopened.
> However, when I take a closer look, the state of the port is
> FIN_WAIT2, I think because close() is not called locally.
>
> If I look at the pyx code I see the following:
>
>     def __dealloc__(self):
>         lo_server_thread_free(self._thread)
>
>     def stop(self):
>         lo_server_thread_stop(self._thread)
>
> So the free (which calls close()) is in 'dealloc' - but python is not
> deterministic in when it calls a destructor, so the socket may or may
> not be closed when I start it again.

It took me a while to reproduce this problem, although I know I had seen
it before a few times. I'm still not sure what exactly is going on, as
this happens even in cases where there are clearly no cyclic references,
so Python's garbage collector shouldn't even be involved...

Strangely enough, the most reliable test case I've found is a simple

>>> liblo.Server(1234)

i.e. not binding the object to any variable. One would expect the server
to be deallocated immediately after creation, but it isn't. Only running
gc.collect() destroys the server and closes its port.

The only solution I can think of is an explicit free() method for the
server classes, so you don't need to rely on Python's reference
counting/garbage collection to do the right thing. Could you try the
attached patch?

Cheers,
Dominic

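A minimal sketch of the test case Dominic describes, assuming pyliblo's
Server class (the port number is arbitrary):

    import gc
    import liblo

    liblo.Server(1234)  # temporary, unbound object; one would expect it to
                        # be deallocated right away, but its port stays open
    gc.collect()        # only now is the server destroyed and the port closed
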
From: Dirk G. <dir...@ba...> - 2009-12-09 09:52:25

Hi,

Often when I start/stop/start a python server I get the following error:

    liblo.ServerError: server error 9904: cannot find free port

I think this should not happen: a closed port can be reopened. However,
when I take a closer look, the state of the port is FIN_WAIT2, I think
because close() is not called locally.

If I look at the pyx code I see the following:

    def __dealloc__(self):
        lo_server_thread_free(self._thread)

    def stop(self):
        lo_server_thread_stop(self._thread)

So the free (which calls close()) is in 'dealloc' - but python is not
deterministic in when it calls a destructor, so the socket may or may
not be closed when I start it again.

Should this be fixed? Or am I mistaken?

Thanks in advance, Dirk

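A minimal sketch of the failure mode described above, assuming pyliblo's
ServerThread class (the port number is arbitrary):

    import liblo

    st = liblo.ServerThread(1234)
    st.start()
    st.stop()    # stops the thread, but does not close the socket
    del st       # __dealloc__() may run much later, or only at exit

    # The old socket may still hold the port, so this can raise:
    # liblo.ServerError: server error 9904: cannot find free port
    st = liblo.ServerThread(1234)
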
From: Steve H. <S.W...@ec...> - 2009-12-03 09:59:53

On 3 Dec 2009, at 09:09, Dirk Griffioen wrote:
> Hi,
>
> (Thanks everyone, in helping me figure this out!)
>
>> Ah, that's about one Ethernet frame IIRC.
>>
>> Total 1500 bytes, but there's some overhead in there.
>
> That makes sense; so what would be an approach to fix?
>
> Btw, I set the MTU of the loopback device to 1500 (it was 16436) but
> the message is still transferred correctly (same as on the lan).
>
> Does this mean the intermediate hops drop something?

Honestly, I had no idea :)

But I can ask one of my guys, who's a BSD sockets / TCP expert, when he
gets into the office.

- Steve

From: Dirk G. <dir...@ba...> - 2009-12-03 09:10:07
|
Hi, (Thanks everyone, in helping me figure this out!) > Ah, that's about one Ethernet frame IIRC. > > Total 1500 bytes, but there's some overhead in there. > That make sense; so what would be an approach to fix ? Btw, I set the MTU of the loopback device to 1500 (it was 16436) but the message is still transferred correctly (same as on the lan). Does this mean the intermettent hops drop something? I was under the impression the tcp layer took care of chopping up the larger messages in sizeable packets ... and I just call 'send' with a message, like liblo does: // Send the data if (a->protocol == LO_UDP) { if (a->ttl >= 0) { unsigned char ttl = (unsigned char)a->ttl; setsockopt(sock,IPPROTO_IP,IP_MULTICAST_TTL,&ttl,sizeof(ttl)); } ret = sendto(sock, data, data_len, MSG_NOSIGNAL, a->ai->ai_addr, a->ai->ai_addrlen); } else { ret = send(sock, data, data_len, MSG_NOSIGNAL); } and that you dont need a loop or something like: #include <sys/socket.h> #define MESSAGE_SIZE 64000 int bytes_sent; int server_sock; int send_left; int send_rc; char *message_ptr; message_ptr = malloc (MESSAGE_SIZE); send_left = MESSAGE_SIZE; while (send_left > 0) { send_rc = send(server_sock, message_ptr, send_left, 0); if send_rc == -1 break; send_left -= send_rc; message_ptr += send_rc; } /* End While Loop */ But I might be mistaken here ... What do you think? Kind regards, Dirk |
From: Steve H. <S.W...@ec...> - 2009-12-02 17:48:52

On 2 Dec 2009, at 17:39, Dirk Griffioen wrote:
> Hi,
>
>> hmm, i thought it was already working on the LAN.
>
> Yes it does.
>
>> another thought:
>> you could also try to send the data through an ssh tunnel.
>> if this works, then the problem is very likely to be in the
>> inbetween hops.
>
> It must be something like that, because when I send the large message
> through a tunnel, it arrives fine.
>
> Go figure ...
>
> The largest message I am able to send without the tunnel is about 1427
> bytes, weird number :)

Ah, that's about one Ethernet frame IIRC.

Total 1500 bytes, but there's some overhead in there.

- Steve

From: Dirk G. <dir...@ba...> - 2009-12-02 17:40:11

Hi,

> hmm, i thought it was already working on the LAN.

Yes it does.

> another thought:
> you could also try to send the data through an ssh tunnel.
> if this works, then the problem is very likely to be in the inbetween hops.

It must be something like that, because when I send the large message
through a tunnel, it arrives fine.

Go figure ...

The largest message I am able to send without the tunnel is about 1427
bytes, weird number :)

> something like:
>
>     client% ssh -L 9000:localhost:9000 username@server
>     client% oscsend localhost 9000 /test i 50
>
> will create a tunnel from the client-machine to the server-machine: a
> port 9000 is opened on the client and all traffic that is sent into that
> is sent through the wire and will be sent on the host to
> "localhost:9000" (localhost now being the host)
> obviously you have to run an ssh-server there as well.

Yep - I'm running ubuntu ...

> btw, i always assume you are using linux. the same should work
> out-of-the-box on osx.
> on w32 you have to install some ssh-server (putty or the like)

Thanks!

From: Dirk G. <dir...@ba...> - 2009-12-02 17:36:14

Thanks for the reply!

> There is actually a limit, there is a constant LO_MAX_MSG_SIZE which
> is the size of the buffer used to read from the socket. It is set to
> 32768. I don't know if this is arbitrary, although the maximum UDP
> datagram size is, I think, 64k, so twice that, and maybe it should be
> changed accordingly.
>
> Either way, this is much larger than 5000 chars, so I don't know where
> the problem is. I'll have to do some testing.
>
> Can you try increasing LO_MAX_MSG_SIZE and see if that changes things?
> Also, can you try determining a more precise size where your message
> stops getting through?

I increased it 4 times, but it did not change anything.

> Lastly, does the same thing happen when you try UDP?

Yes it does, sadly.

From: IOhannes m z. <zmo...@ie...> - 2009-12-02 17:11:12

Stephen Sinclair wrote:
> On Wed, Dec 2, 2009 at 11:44 AM, Dirk Griffioen
>>
>> Ok, I am not sure on the command line: do I need the server or the
>> client (do you maybe have an example?)
>
> I'm not sure, but maybe he means run netcat to receive OSC messages
> and print them to the screen to examine, like so:
>
>     nc -u -l -p 9000 | hexdump -C &
>     oscsend localhost 9000 /test i 50
>
> something like that.
> Use nc without '-u' for TCP mode.

i wanted to comment on that saying that "-u" really means "UDP" mode and
not "TCP/IP" mode. now i see that you say "without" (and not "with")

> By the way, this reminds me, I should have mentioned in my previous
> email, another thing to try is do this over the localhost connection
> and see if it still doesn't work. Then you will know for sure if it
> is network-related.

hmm, i thought it was already working on the LAN.

another thought:
you could also try to send the data through an ssh tunnel.
if this works, then the problem is very likely to be in the inbetween hops.

something like:

    client% ssh -L 9000:localhost:9000 username@server
    client% oscsend localhost 9000 /test i 50

will create a tunnel from the client-machine to the server-machine: a
port 9000 is opened on the client and all traffic that is sent into that
is sent through the wire and will be sent on the host to
"localhost:9000" (localhost now being the host)
obviously you have to run an ssh-server there as well.

btw, i always assume you are using linux. the same should work
out-of-the-box on osx.
on w32 you have to install some ssh-server (putty or the like)

fgamsdr
IOhannes

From: Stephen S. <rad...@gm...> - 2009-12-02 16:52:52

On Wed, Dec 2, 2009 at 11:44 AM, Dirk Griffioen <dir...@ba...> wrote:
> Hi,
>
>> dunno, but unlikely if it works on the lan.
>
> I thought so too, but if I run the osc server on another unrelated
> server somewhere else it is also not capable of receiving larger
> messages ...
>
> (This makes no sense to me)
>
>> try to run the connection through a proxy (i often use netcat for this)
>> and see what comes through as raw data.
>
> Ok, I am not sure on the command line: do I need the server or the
> client (do you maybe have an example?)

I'm not sure, but maybe he means run netcat to receive OSC messages
and print them to the screen to examine, like so:

    nc -u -l -p 9000 | hexdump -C &
    oscsend localhost 9000 /test i 50

something like that.
Use nc without '-u' for TCP mode.

By the way, this reminds me, I should have mentioned in my previous
email, another thing to try is do this over the localhost connection
and see if it still doesn't work. Then you will know for sure if it
is network-related.

Steve

From: Dirk G. <dir...@ba...> - 2009-12-02 16:44:45

Hi,

> dunno, but unlikely if it works on the lan.

I thought so too, but if I run the osc server on another unrelated
server somewhere else it is also not capable of receiving larger
messages ...

(This makes no sense to me)

> try to run the connection through a proxy (i often use netcat for this)
> and see what comes through as raw data.

Ok, I am not sure on the command line: do I need the server or the
client (do you maybe have an example?)

Thanks for the reply!

Best, Dirk

From: Stephen S. <rad...@gm...> - 2009-12-02 16:43:41

On Wed, Dec 2, 2009 at 3:37 AM, Dirk Griffioen <dir...@ba...> wrote:
> Hi,
>
> I have been doing some testing with liblo and tcp and seem to have hit a
> rather strange limit.
>
> The setup is
> - client & server connecting over the internet (an adsl line into the
>   company's lan)
> - client sends a rather large string to the server (around 5000 characters)
> - the message never arrives
> - but with 250 chars it does arrive (!)
>
> Also, when this setup is run on the lan (all internal), it works fine:
> the large message is received fine.
>
> This might have to do with the company firewall, so I will ask - but the
> weird part is that a small message arrives but a large one does not and
> I do not see a real reason for that.
>
> Therefore I'd like to ask, is there a limit somewhere in liblo?

There is actually a limit, there is a constant LO_MAX_MSG_SIZE which
is the size of the buffer used to read from the socket. It is set to
32768. I don't know if this is arbitrary, although the maximum UDP
datagram size is, I think, 64k, so twice that, and maybe it should be
changed accordingly.

Either way, this is much larger than 5000 chars, so I don't know where
the problem is. I'll have to do some testing.

Can you try increasing LO_MAX_MSG_SIZE and see if that changes things?
Also, can you try determining a more precise size where your message
stops getting through?

Lastly, does the same thing happen when you try UDP?

Steve

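A small sketch of the size probe suggested above, assuming pyliblo on the
sending side (the hostname, port, and step sizes are placeholders; check
on the receiving end which payloads actually arrive):

    import liblo

    # Hypothetical remote peer; replace with the real server address.
    target = liblo.Address('server.example.com', 9000, liblo.TCP)

    # Send progressively larger string payloads to bracket the point
    # where messages stop getting through.
    for size in (250, 500, 1000, 1400, 1500, 2000, 5000):
        liblo.send(target, '/test', 'x' * size)
        print('sent payload of %d chars' % size)
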
From: IOhannes m z. <zmo...@ie...> - 2009-12-02 08:44:01

Dirk Griffioen wrote:
> Hi,
>
> I have been doing some testing with liblo and tcp and seem to have hit a
> rather strange limit.
>
> The setup is
> - client & server connecting over the internet (an adsl line into the
>   company's lan)
> - client sends a rather large string to the server (around 5000 characters)
> - the message never arrives
> - but with 250 chars it does arrive (!)
>
> Also, when this setup is run on the lan (all internal), it works fine:
> the large message is received fine.
>
> This might have to do with the company firewall, so I will ask - but the
> weird part is that a small message arrives but a large one does not and
> I do not see a real reason for that.
>
> Therefore I'd like to ask, is there a limit somewhere in liblo?

dunno, but unlikely if it works on the lan.

try to run the connection through a proxy (i often use netcat for this)
and see what comes through as raw data.

m,fgar
IOhannes

From: Dirk G. <dir...@ba...> - 2009-12-02 08:37:28

Hi,

I have been doing some testing with liblo and tcp and seem to have hit a
rather strange limit.

The setup is
- client & server connecting over the internet (an adsl line into the
  company's lan)
- client sends a rather large string to the server (around 5000 characters)
- the message never arrives
- but with 250 chars it does arrive (!)

Also, when this setup is run on the lan (all internal), it works fine:
the large message is received fine.

This might have to do with the company firewall, so I will ask - but the
weird part is that a small message arrives but a large one does not and
I do not see a real reason for that.

Therefore I'd like to ask, is there a limit somewhere in liblo?

Kind regards, Dirk

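For reference, a minimal sketch of this setup using pyliblo, assuming its
ServerThread and Address classes (the hostname and payload sizes are
placeholders, and the two halves run on different machines):

    import liblo

    # Receiving side: a TCP server that logs whatever arrives.
    def log_message(path, args):
        print('received %s with %d chars' % (path, len(args[0])))

    server = liblo.ServerThread(9000, liblo.TCP)
    server.add_method('/test', 's', log_message)
    server.start()

    # Sending side (run on the remote client):
    target = liblo.Address('server.example.com', 9000, liblo.TCP)
    liblo.send(target, '/test', 'x' * 250)    # arrives
    liblo.send(target, '/test', 'x' * 5000)   # reported never to arrive
                                              # over the internet
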