From: IOhannes m z. <zmo...@ie...> - 2010-02-28 16:00:18

Stephen Sinclair wrote:
> Mainly, a feature request in the past has been how to access liblo
> message timetags, and how to get liblo to trigger handlers _before_
> they are due. Historically of course liblo intends to make things
> easier for the user by handling timing automatically by dispatching at
> the correct time. However for some use cases (like message
> forwarding), an application might want to handle messages ahead of
> time and access the timetag information manually.

while we are there: it sometimes might also be useful to be able to
modify the timestamps. that would basically just mean to get a hand on
the OSC-bundle as it arrives, including the timestamp (this is what is
discussed in this thread), and then reschedule the message to be
dispatched by liblo (which, iirc, is currently possible anyhow). i guess
it would just require a hook that overrides the automatic passing of a
bundle to the scheduler.

i would have needed that for my last project (which i have then
implemented in Pd for various other reasons...)

fgmasdr
IOhannes
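For readers unfamiliar with the format involved: an OSC timetag is an NTP-style 32.32 fixed-point timestamp, so "modifying the timestamp" of a bundle before rescheduling it comes down to fixed-point arithmetic. Below is a minimal standalone sketch of that arithmetic; the `timetag` struct mirrors the layout of liblo's lo_timetag, but the type and function names are illustrative, not liblo API:

```c
#include <stdint.h>

/* Offset between the NTP epoch (1900) and the Unix epoch (1970),
 * in seconds, for converting system time to OSC time. */
#define JAN_1970 2208988800UL

/* An OSC timetag: NTP format, 32.32 fixed point (seconds since 1900
 * plus a 32-bit fraction of a second). Illustrative stand-in for
 * liblo's lo_timetag. */
typedef struct {
    uint32_t sec;
    uint32_t frac;
} timetag;

/* Shift a timetag forward by some milliseconds -- the kind of
 * arithmetic a "reschedule" hook would perform before handing the
 * bundle back to the dispatcher. */
static timetag timetag_add_ms(timetag t, uint32_t ms)
{
    /* Convert ms to 32.32 fractional units, then add with carry. */
    uint64_t add = ((uint64_t)ms << 32) / 1000;
    uint64_t frac = (uint64_t)t.frac + add;
    t.sec += (uint32_t)(frac >> 32);
    t.frac = (uint32_t)frac;
    return t;
}
```

Adding 1000 ms carries exactly one whole second, since (1000 << 32) / 1000 is 2^32.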
From: Philippe M. <pm...@ac...> - 2010-02-28 12:05:08

Hi Cam,

I thought including path and typespec might be useful when the same
handler has been registered for multiple methods and one wants to
unregister it for only a given method. But I don't know if this is a
need in real-life use cases. In my case, the simpler API you describe
would be ok.

Thx
Phil

On 28 Feb 2010, at 11:37, Camille Troillard wrote:

> Hi Philippe,
>
> It looks like a sensible idea.
> I actually didn't know that it was possible to register multiple
> handlers for one method.
>
> Why not make the API as simple as:
>
> lo_server_del_method_handler (lo_server s, lo_method_handler h)
>
> I guess we just need to walk through the chain of handlers, and remove
> every occurrence of the one that we want. What do you think?
> I am not sure why path and typespec would be needed, or maybe was it
> just a side effect of writing an email late at night? :-)
>
> Best,
> Cam
>
> On Sun, Feb 28, 2010 at 1:29 AM, Philippe Mougin <pm...@ac...> wrote:
>> Hi,
>>
>> Sometimes in my application I can have multiple different handlers
>> registered for the same method. For example, I can be in a state
>> similar to the one created by:
>>
>> lo_server_add_method(localOSCServer, "/foo/bar", "f", foo_handler, NULL);
>> lo_server_add_method(localOSCServer, "/foo/bar", "f", bar_handler, NULL);
>>
>> Now, I'd like to unregister bar_handler while keeping foo_handler
>> registered. I don't see how to do that with the current API (it seems
>> that I can only remove the whole "/foo/bar" method with all associated
>> handlers).
>>
>> I may well be overlooking something as I'm new to liblo (besides,
>> it's 1 a.m.)
>>
>> If not, do you think it would be a good idea to extend the API with
>> something like lo_server_del_method_handler (lo_server s, const char
>> *path, const char *typespec, lo_method_handler h), which would remove
>> h from the handlers associated with the given path/typespec?
>>
>> Thanks,
>>
>> Philippe Mougin
>> http://www.fscript.org
From: Camille T. <ca...@os...> - 2010-02-28 10:38:21

Hi Philippe,

It looks like a sensible idea. I actually didn't know that it was
possible to register multiple handlers for one method.

Why not make the API as simple as:

lo_server_del_method_handler (lo_server s, lo_method_handler h)

I guess we just need to walk through the chain of handlers, and remove
every occurrence of the one that we want. What do you think? I am not
sure why path and typespec would be needed, or maybe was it just a side
effect of writing an email late at night? :-)

Best,
Cam

On Sun, Feb 28, 2010 at 1:29 AM, Philippe Mougin <pm...@ac...> wrote:
> Hi,
>
> Sometimes in my application I can have multiple different handlers
> registered for the same method. For example, I can be in a state similar to
> the one created by:
>
> lo_server_add_method(localOSCServer, "/foo/bar", "f", foo_handler, NULL);
> lo_server_add_method(localOSCServer, "/foo/bar", "f", bar_handler, NULL);
>
> Now, I'd like to unregister bar_handler while keeping foo_handler
> registered. I don't see how to do that with the current API (it seems that I
> can only remove the whole "/foo/bar" method with all associated handlers).
>
> I may well be overlooking something as I'm new to liblo (besides, it's 1
> a.m.)
>
> If not, do you think it would be a good idea to extend the API with
> something like lo_server_del_method_handler (lo_server s, const char *path,
> const char *typespec, lo_method_handler h), which would remove h from the
> handlers associated with the given path/typespec?
>
> Thanks,
>
> Philippe Mougin
> http://www.fscript.org
From: Philippe M. <pm...@ac...> - 2010-02-28 02:55:38

Hi,

Sometimes in my application I can have multiple different handlers
registered for the same method. For example, I can be in a state similar
to the one created by:

lo_server_add_method(localOSCServer, "/foo/bar", "f", foo_handler, NULL);
lo_server_add_method(localOSCServer, "/foo/bar", "f", bar_handler, NULL);

Now, I'd like to unregister bar_handler while keeping foo_handler
registered. I don't see how to do that with the current API (it seems
that I can only remove the whole "/foo/bar" method with all associated
handlers).

I may well be overlooking something as I'm new to liblo (besides, it's
1 a.m.)

If not, do you think it would be a good idea to extend the API with
something like lo_server_del_method_handler (lo_server s, const char
*path, const char *typespec, lo_method_handler h), which would remove h
from the handlers associated with the given path/typespec?

Thanks,

Philippe Mougin
http://www.fscript.org
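What Philippe asks for amounts to deleting one entry from a per-method chain of handlers. Here is a standalone sketch of that walk; the struct and function names are hypothetical simplifications for illustration, not liblo's real internals, and the two dummy handlers stand in for Philippe's foo_handler/bar_handler:

```c
#include <stdlib.h>

/* Hypothetical, simplified stand-in for liblo's internals: each method
 * keeps its handlers in a singly linked chain. */
typedef int (*handler_fn)(const char *path, void *user_data);

typedef struct handler_node {
    handler_fn fn;
    struct handler_node *next;
} handler_node;

/* Prepend a handler to the chain (as repeated add_method calls would). */
static handler_node *add_handler(handler_node *head, handler_fn fn)
{
    handler_node *n = malloc(sizeof(*n));
    n->fn = fn;
    n->next = head;
    return n;
}

/* Remove every occurrence of fn from the chain and return the new head.
 * This is the walk Camille describes: the handler pointer alone is
 * enough to identify what to drop. */
static handler_node *del_handler(handler_node *head, handler_fn fn)
{
    handler_node **pp = &head;
    while (*pp) {
        if ((*pp)->fn == fn) {
            handler_node *dead = *pp;
            *pp = dead->next;
            free(dead);
        } else {
            pp = &(*pp)->next;
        }
    }
    return head;
}

/* Dummy handlers for demonstration. */
static int foo_handler(const char *path, void *user_data)
{ (void)path; (void)user_data; return 0; }
static int bar_handler(const char *path, void *user_data)
{ (void)path; (void)user_data; return 0; }
```

The pointer-to-pointer walk removes matches without special-casing the list head, which is why the simpler one-argument API would be easy to implement.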
From: Stephen S. <rad...@gm...> - 2010-02-27 18:10:40

I guess I will apply this patch; from what I can tell, the MS
documentation of the socket() function says that closesocket() is the
correct closing function.

On the subject of Windows, a few users of some of my programs have
mentioned that on Windows 7, lo_server_new() sometimes opens the
_wrong_ port. Haven't figured out why.

Steve

On Sat, Feb 27, 2010 at 11:23 AM, Nicholas J Humfrey <nj...@ae...> wrote:
> Oh, apologies, I missed that.
>
> nick.
>
>
> On 27 Feb 2010, at 05:40, Stephen Sinclair <rad...@gm...> wrote:
>
>> I believe he did, in lo_types_internal?
>>
>> Steve
>>
>> On Thu, Feb 25, 2010 at 9:18 AM, Nicholas J Humfrey <nj...@ae...> wrote:
>>>
>>> closesocket() does not exist on Unix, so the code will have to be
>>> wrapped up in a macro.
>>>
>>> nick.
>>>
>>> On 25 Feb 2010, at 13:49, Mok Keith <ek...@gm...> wrote:
>>>
>>>> Hi all,
>>>>
>>>> socket created in Windows must be closed with closesocket instead of
>>>> using close function.
>>>> Otherwise the socket will not close actually upon lo_server_free.
>>>> Defined closesocket to close in lo_types_internal.h in case of
>>>> non-windows system.
>>>> Below is the patch against latest git tree.
>>>>
>>>> Keith
>>>>
>>>> [...]
From: Nicholas J H. <nj...@ae...> - 2010-02-27 16:24:55

Oh, apologies, I missed that.

nick.

On 27 Feb 2010, at 05:40, Stephen Sinclair <rad...@gm...> wrote:
> I believe he did, in lo_types_internal?
>
> Steve
>
> On Thu, Feb 25, 2010 at 9:18 AM, Nicholas J Humfrey <nj...@ae...>
> wrote:
>> closesocket() does not exist on Unix, so the code will have to be
>> wrapped up in a macro.
>>
>> nick.
>>
>> On 25 Feb 2010, at 13:49, Mok Keith <ek...@gm...> wrote:
>>
>>> Hi all,
>>>
>>> socket created in Windows must be closed with closesocket instead of
>>> using close function.
>>> Otherwise the socket will not close actually upon lo_server_free.
>>> Defined closesocket to close in lo_types_internal.h in case of
>>> non-windows system.
>>> Below is the patch against latest git tree.
>>>
>>> Keith
>>>
>>> [...]
From: Stephen S. <rad...@gm...> - 2010-02-27 05:40:16

I believe he did, in lo_types_internal?

Steve

On Thu, Feb 25, 2010 at 9:18 AM, Nicholas J Humfrey <nj...@ae...> wrote:
> closesocket() does not exist on Unix, so the code will have to be
> wrapped up in a macro.
>
> nick.
>
>
> On 25 Feb 2010, at 13:49, Mok Keith <ek...@gm...> wrote:
>
>> Hi all,
>>
>> socket created in Windows must be closed with closesocket instead of
>> using close function.
>> Otherwise the socket will not close actually upon lo_server_free.
>> Defined closesocket to close in lo_types_internal.h in case of
>> non-windows system.
>> Below is the patch against latest git tree.
>>
>> Keith
>>
>> [...]
From: David R. <da...@dr...> - 2010-02-26 06:37:16

On Wed, 2010-02-24 at 17:49 -0500, Stephen Sinclair wrote:
> On Fri, Feb 19, 2010 at 4:55 PM, David Robillard <da...@dr...> wrote:
> > On Thu, 2010-02-18 at 13:21 -0500, Stephen Sinclair wrote:
> >> On Tue, Feb 16, 2010 at 6:41 PM, David Robillard <da...@dr...> wrote:
> >> > P.S. It would also be possible for apps that use lo_server_recv and
> >> > friends directly to know about bundles if it only dispatched one message
> >> > at a time and lo_server_events_pending worked as advertised, but the
> >> > callback is more useful and generic anyway
> >>
> >> By the way, I don't actually think this is an appropriate use of
> >> events_pending(), but in what way is it broken?
> >
> > I was expecting each individual message of a bundle to be dispatched
> > independently, thus events_pending() could be used to check if you're in
> > a bundle. Unfortunately the entire bundle is dispatched at once, so this
> > is not the case.
> >
> > A proper callback is better anyway, so whatever
>
> Right. Technically they aren't "pending" until liblo sees them, and
> since it goes through the bundle one at a time, it hasn't seen the
> rest of the messages in the bundle yet so they aren't yet pending.
>
> I wonder if this should be changed/fixed?
>
> Perhaps not, because I think events_pending() should be used mostly to
> check if there are _future_ messages, not more _now_ messages.

It's probably best to leave it as-is; it's not the right way to check
for more _now_ messages anyway.

Attached is a revised bundle handlers patch which uses separate
callbacks and passes the timetag to the start handler.

Cheers,

-dr
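To make the shape of the proposal concrete, here is a standalone sketch of dispatching a bundle through separate start/end callbacks, with the timetag passed to the start handler as described. All names and signatures are illustrative assumptions, not the actual patch or liblo's API:

```c
#include <stddef.h>

/* Illustrative timetag and callback shapes: a start handler that
 * receives the bundle's timetag, and a separate end handler. */
typedef struct { unsigned sec, frac; } osc_timetag;

typedef int (*bundle_start_cb)(osc_timetag when, void *user);
typedef int (*bundle_end_cb)(void *user);

/* Dispatch one bundle of n_msgs messages, bracketing the per-message
 * handler calls with the start/end callbacks. Returns -1 if the start
 * handler vetoes the bundle, 0 otherwise. */
static int dispatch_bundle(osc_timetag when, int n_msgs,
                           int (*msg_handler)(int idx, void *user),
                           bundle_start_cb start, bundle_end_cb end,
                           void *user)
{
    int i;
    if (start && start(when, user))
        return -1;              /* start handler may veto the bundle */
    for (i = 0; i < n_msgs; i++)
        msg_handler(i, user);
    if (end)
        end(user);
    return 0;
}

/* Example callbacks: tally what happened into an int accumulator. */
static int on_start(osc_timetag when, void *user)
{ (void)when; *(int *)user += 100; return 0; }
static int on_end(void *user)
{ *(int *)user += 1000; return 0; }
static int on_msg(int idx, void *user)
{ (void)idx; *(int *)user += 1; return 0; }
```

Keeping the start and end callbacks separate (rather than one combined hook) is what lets an application bracket a bundle's messages, for example to forward them atomically or apply the timetag itself.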
From: Nicholas J H. <nj...@ae...> - 2010-02-25 14:19:35

closesocket() does not exist on Unix, so the code will have to be
wrapped up in a macro.

nick.

On 25 Feb 2010, at 13:49, Mok Keith <ek...@gm...> wrote:
> Hi all,
>
> socket created in Windows must be closed with closesocket instead of
> using close function.
> Otherwise the socket will not close actually upon lo_server_free.
> Defined closesocket to close in lo_types_internal.h in case of
> non-windows system.
> Below is the patch against latest git tree.
>
> Keith
>
> [...]
From: Mok K. <ek...@gm...> - 2010-02-25 13:57:21

Hi all,

A socket created on Windows must be closed with closesocket() instead of
the close() function; otherwise the socket is not actually closed upon
lo_server_free. I defined closesocket to close in lo_types_internal.h
for non-Windows systems. Below is the patch against the latest git tree.

Keith

------------------------
diff --git a/src/address.c b/src/address.c
index 1a7c2b1..684318a 100644
--- a/src/address.c
+++ b/src/address.c
@@ -197,7 +197,7 @@ void lo_address_free(lo_address a)
 {
     if (a) {
         if (a->socket != -1) {
-            close(a->socket);
+            closesocket(a->socket);
         }
         if (a->host)
             free(a->host);
diff --git a/src/lo_types_internal.h b/src/lo_types_internal.h
index 5bb8a9b..5f5f9e0 100644
--- a/src/lo_types_internal.h
+++ b/src/lo_types_internal.h
@@ -18,6 +18,7 @@
 #include <winsock2.h>
 #include <ws2tcpip.h>
 #else
+#define closesocket close
 #include <netdb.h>
 #endif

diff --git a/src/send.c b/src/send.c
index da7ad65..f14cacf 100644
--- a/src/send.c
+++ b/src/send.c
@@ -319,7 +319,7 @@ static int create_socket(lo_address a)
         if ((connect(a->socket, a->ai->ai_addr, a->ai->ai_addrlen))) {
             a->errnum = geterror();
             a->errstr = NULL;
-            close(a->socket);
+            closesocket(a->socket);
             a->socket = -1;
             return -1;
         }
@@ -358,7 +358,7 @@ static int create_socket(lo_address a)
         if ((connect(a->socket, (struct sockaddr *) &sa, sizeof(sa))) < 0) {
             a->errnum = geterror();
             a->errstr = NULL;
-            close(a->socket);
+            closesocket(a->socket);
             a->socket = -1;
             return -1;
         }
@@ -431,7 +431,7 @@ static int send_data(lo_address a, lo_server from, char *data,

     if (ret == -1) {
         if (a->protocol == LO_TCP) {
-            close(a->socket);
+            closesocket(a->socket);
             a->socket = -1;
         }

diff --git a/src/server.c b/src/server.c
index a32bf81..a27205f 100644
--- a/src/server.c
+++ b/src/server.c
@@ -463,7 +463,7 @@ void lo_server_free(lo_server s)
                 lo_client_sockets.tcp = -1;
             }

-            close(s->sockets[i].fd);
+            closesocket(s->sockets[i].fd);
             s->sockets[i].fd = -1;
         }
     }
@@ -555,7 +555,7 @@ void *lo_server_recv_raw_stream(lo_server s, size_t * size)
         if (s->sockets[i].revents == POLLERR
             || s->sockets[i].revents == POLLHUP) {
             if (i > 0) {
-                close(s->sockets[i].fd);
+                closesocket(s->sockets[i].fd);
                 lo_server_del_socket(s, i, s->sockets[i].fd);
                 continue;
             } else
@@ -600,14 +600,14 @@ void *lo_server_recv_raw_stream(lo_server s, size_t * size)
         }

         if (i < 0) {
-            close(sock);
+            closesocket(sock);
             return NULL;
         }

         ret = recv(sock, &read_size, sizeof(read_size), 0);
         read_size = ntohl(read_size);
         if (read_size > LO_MAX_MSG_SIZE || ret <= 0) {
-            close(sock);
+            closesocket(sock);
             lo_server_del_socket(s, i, sock);
             if (ret > 0)
                 lo_throw(s, LO_TOOBIG, "Message too large", "recv()");
@@ -615,7 +615,7 @@ void *lo_server_recv_raw_stream(lo_server s, size_t * size)
         }
         ret = recv(sock, buffer, read_size, 0);
         if (ret <= 0) {
-            close(sock);
+            closesocket(sock);
             lo_server_del_socket(s, i, sock);
             continue;
         }
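The idea behind the patch, shown standalone: one compile-time shim so that every call site can use closesocket() regardless of platform. This is a sketch of the same technique, not liblo's code; the function name is mine:

```c
/* Portability shim from the patch: on Windows, sockets are not plain
 * file descriptors and must be closed with closesocket(); Unix has no
 * such function, so it is aliased to close(). */
#ifdef _WIN32
#include <winsock2.h>
#else
#include <sys/socket.h>
#include <unistd.h>
#define closesocket close
#endif

/* Open and immediately close a UDP socket through the shim.
 * Returns 0 on success, -1 on failure. */
static int open_and_close_udp(void)
{
    int fd = (int)socket(AF_INET, SOCK_DGRAM, 0);
    if (fd == -1)
        return -1;
    return closesocket(fd);   /* one call site, both platforms */
}
```

Defining the macro only on the non-Windows branch (as the patch does in lo_types_internal.h) keeps every closing call site identical across platforms, which is why the diff can be a mechanical close-to-closesocket substitution.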
From: Stephen S. <rad...@gm...> - 2010-02-24 23:01:18

Thanks! I'll try to read up on this and test it soon.

Steve

On Wed, Feb 24, 2010 at 11:16 AM, Mok Keith <ek...@gm...> wrote:
> Hi all,
>
> liblo uses a trial-and-error random routine to search for an unused
> tcp/udp port. However, this is not necessary: when port zero is passed,
> the system will pick an unused port for you. We can get the port
> assigned by the system using getsockname.
>
> Below is the patch against the latest git tree.
>
> Keith
>
> ----
> diff --git a/src/server.c b/src/server.c
> index a32bf81..2b959f3 100644
> --- a/src/server.c
> +++ b/src/server.c
> @@ -160,8 +160,6 @@ lo_server lo_server_new_with_proto_internal(const char *group,
>      lo_server s;
>      struct addrinfo *ai = NULL, *it, *used;
>      struct addrinfo hints;
> -    int tries = 0;
> -    char pnum[16];
>      const char *service;
>      char hostname[LO_HOST_SIZE];
>
> @@ -256,66 +254,60 @@ lo_server lo_server_new_with_proto_internal(const char *group,
>      hints.ai_flags = AI_PASSIVE;
>
>      if (!port) {
> -        service = pnum;
> +        service = "0"; /* system will assign portno automatically */
>      } else {
>          service = port;
>      }
> -    do {
> -        if (!port) {
> -            /* not a good way to get random numbers, but its not critical */
> -            snprintf(pnum, 15, "%ld", 10000 + ((unsigned int) rand() +
> -                     time(NULL)) % 10000);
> -        }
>
> -        int ret = getaddrinfo(NULL, service, &hints, &ai);
> -        if (ret != 0) {
> -            lo_throw(s, ret, gai_strerror(ret), NULL);
> -            freeaddrinfo(ai);
> +    int ret = getaddrinfo(NULL, service, &hints, &ai);
> +    if (ret != 0) {
> +        lo_throw(s, ret, gai_strerror(ret), NULL);
> +        freeaddrinfo(ai);
>
> -            return NULL;
> -        }
> +        return NULL;
> +    }
>
> -        used = NULL;
> -        s->ai = ai;
> -        s->sockets[0].fd = -1;
> -        s->port = 0;
> +    used = NULL;
> +    s->ai = ai;
> +    s->sockets[0].fd = -1;
> +    s->port = 0;
>
> -        for (it = ai; it && s->sockets[0].fd == -1; it = it->ai_next) {
> -            used = it;
> -            s->sockets[0].fd = socket(it->ai_family, hints.ai_socktype, 0);
> -        }
> -        if (s->sockets[0].fd == -1) {
> -            int err = geterror();
> -            used = NULL;
> -            lo_throw(s, err, strerror(err), "socket()");
> +    for (it = ai; it && s->sockets[0].fd == -1; it = it->ai_next) {
> +        used = it;
> +        s->sockets[0].fd = socket(it->ai_family, hints.ai_socktype, 0);
> +    }
> +    if (s->sockets[0].fd == -1) {
> +        int err = geterror();
> +        used = NULL;
> +        lo_throw(s, err, strerror(err), "socket()");
>
> -            lo_server_free(s);
> -            return NULL;
> -        }
> +        lo_server_free(s);
> +        return NULL;
> +    }
>
> -        /* Join multicast group if specified. */
> -        /* This must be done before bind() on POSIX, but after bind() Windows. */
> +    /* Join multicast group if specified. */
> +    /* This must be done before bind() on POSIX, but after bind() Windows. */
>  #ifndef WIN32
> -        if (group != NULL)
> -            if (lo_server_join_multicast_group(s, group))
> -                return NULL;
> +    if (group != NULL)
> +        if (lo_server_join_multicast_group(s, group))
> +            return NULL;
>  #endif
>
> -        if ((used != NULL) &&
> -            (bind(s->sockets[0].fd, used->ai_addr, used->ai_addrlen) < 0)) {
> -            int err = geterror();
> -            if (err == EINVAL || err == EADDRINUSE) {
> -                used = NULL;
> -                continue;
> -            }
> +    if ((used != NULL) &&
> +        (bind(s->sockets[0].fd, used->ai_addr, used->ai_addrlen) < 0)) {
> +        int err = geterror();
> +        if (err == EINVAL || err == EADDRINUSE) {
> +            used = NULL;
> +            lo_server_free(s);
> +            return NULL;
> +        }
>
> -            lo_throw(s, err, strerror(err), "bind()");
> -            lo_server_free(s);
> +        lo_throw(s, err, strerror(err), "bind()");
> +        lo_server_free(s);
>
> -            return NULL;
> -        }
> -    } while (!used && tries++ < 16);
> +        return NULL;
> +    }
>
>      /* Join multicast group if specified (see above). */
>  #ifdef WIN32
> @@ -379,13 +371,19 @@ lo_server lo_server_new_with_proto_internal(const char *group,
>      s->hostname = strdup(hostname);
>
>      if (used->ai_family == PF_INET6) {
> -        struct sockaddr_in6 *addr = (struct sockaddr_in6 *) used->ai_addr;
> +        struct sockaddr_in6 addr;
> +        socklen_t slen;
> +        slen = sizeof(addr);
> +        getsockname(s->sockets[0].fd, (struct sockaddr *)&addr, &slen);
>
> -        s->port = htons(addr->sin6_port);
> +        s->port = htons(addr.sin6_port);
>      } else if (used->ai_family == PF_INET) {
> -        struct sockaddr_in *addr = (struct sockaddr_in *) used->ai_addr;
> +        struct sockaddr_in addr;
> +        socklen_t slen;
> +        slen = sizeof(addr);
> +        getsockname(s->sockets[0].fd, (struct sockaddr *)&addr, &slen);
>
> -        s->port = htons(addr->sin_port);
> +        s->port = htons(addr.sin_port);
>      } else {
>          lo_throw(s, LO_UNKNOWNPROTO, "unknown protocol family", NULL);
>          s->port = atoi(port);
From: Stephen S. <rad...@gm...> - 2010-02-24 22:54:22
On Fri, Feb 19, 2010 at 4:55 PM, David Robillard <da...@dr...> wrote:
> On Thu, 2010-02-18 at 13:21 -0500, Stephen Sinclair wrote:
>> On Tue, Feb 16, 2010 at 6:41 PM, David Robillard <da...@dr...> wrote:
>> > P.S. It would also be possible for apps that use lo_server_recv and
>> > friends directly to know about bundles if it only dispatched one message
>> > at a time and lo_server_events_pending worked as advertised, but the
>> > callback is more useful and generic anyway.
>>
>> By the way, I don't actually think this is an appropriate use of
>> events_pending(), but in what way is it broken?
>
> I was expecting each individual message of a bundle to be dispatched
> independently, thus events_pending() could be used to check if you're in
> a bundle. Unfortunately the entire bundle is dispatched at once, so this
> is not the case.
>
> A proper callback is better anyway, so whatever.

Right. Technically they aren't "pending" until liblo sees them, and
since it goes through the bundle one at a time, it hasn't seen the rest
of the messages in the bundle yet, so they aren't yet pending.

I wonder if this should be changed/fixed? Perhaps not, because I think
events_pending() should be used mostly to check if there are _future_
messages, not more _now_ messages.

Steve
From: Stephen S. <rad...@gm...> - 2010-02-24 22:46:47
On Fri, Feb 19, 2010 at 4:54 PM, David Robillard <da...@dr...> wrote:
> On Thu, 2010-02-18 at 13:11 -0500, Stephen Sinclair wrote:
>> Hi,
>>
>> Thanks for the patch.
>>
>> I'm generally okay with this proposal. I think if we are going to
>> consider bundles, this is inextricably linked to how OSC deals with
>> timetags. We should be considering these two ideas simultaneously,
>> since there are some open issues with how liblo currently deals with
>> timetags.
>>
>> Mainly, a feature request in the past has been how to access liblo
>> message timetags, and how to get liblo to trigger handlers _before_
>> they are due. Historically of course liblo intends to make things
>> easier for the user by handling timing automatically by dispatching at
>> the correct time. However for some use cases (like message
>> forwarding), an application might want to handle messages ahead of
>> time and access the timetag information manually.
>>
>> Well, since the timetag information in OSC is actually in the bundle,
>> perhaps this "bundle handler" API could provide timetag information to
>> the callback.
>>
>> Missing is a way to tell liblo not to queue messages. I suppose this
>> could be done as a lo_server option, or alternatively on a per-message
>> or per-handler basis. I'm not sure which would be most useful.
>>
>> void lo_server_enable_queue(bool)?
>
> I suppose these things are somewhat tied, but I personally just use
> immediate everything and don't worry about scheduling.

Me too. As I said though, there have been requests for this. e.g., bug 2858774:
http://sourceforge.net/tracker/?func=detail&aid=2858774&group_id=116064&atid=673869

> It would be useful to be able to tell liblo to either schedule, or not
> schedule and pass the time stamps on though. I suppose the bundle start
> callback should have a timestamp parameter that would be set in that
> case, then?

Yes, that's what I was thinking. Anyways, if we have a handler for
bundles, I don't see why not pass all available information to the user,
including the timestamp. The question I'm not sure of is whether telling
liblo not to schedule should be system-wide, per-server, or per-handler.

>> On Tue, Feb 16, 2010 at 6:41 PM, David Robillard <da...@dr...> wrote:
>> >
>> > Me too. I also require this bundle functionality. Knowing about
>> > bundles is very important for certain things - bundles provide
>> > grouping/ordering/atomicity semantics defined in the OSC standard that
>> > are very useful and sometimes necessary. Having no ordering whatsoever
>> > is /extremely/ limiting if you think about it - a lot of things just
>> > assume they will magically get some sort of ordering and mostly work
>> > most of the time if you're lucky and have a good network etc. Less than
>> > ideal...
>>
>> From my understanding, OSC was designed to be used in a stateless
>> manner, so that ordering should not really matter. Obviously, many
>> people are taking it beyond this original use case, so I suppose it
>> makes sense to extend liblo to handle such things.
>
> The other Steve has said things along these lines as well, but I don't
> see where this opinion comes from. There is certainly nothing in the
> spec suggesting it. The spec defines bundles, which are atomic, and
> ordered. Obviously this stuff is there for a reason, and should be
> supported.

My understanding is that bundles are the mechanism to be used when
messages should be considered to happen at the exact same time. I said
that OSC was designed with statelessness in mind, but I agree that it
doesn't have to be that way. However, the point is that it is
packet-oriented, not stream-oriented.

> (It's not hard to find even simple cases where ordering is needed
> anyway, note that the most trivial "set the parameter" use is not
> stateless ("set"))
>
> There are also audible reasons why an atomic chunk of messages is
> important. Say you have two parameters x and y, currently x=2.0 and
> y=4.0. You want to set x=5.0 and y=6.0:
>
> A: /x 5.0
> B:
> C: /y 6.0
>
> What's the state at point B? Is it consistent? What if x=5.0 and y=4.0
> is an unstable set of parameters that will produce a nasty noise you
> don't want? Even if it doesn't, you have some indeterminate time in
> there where the parameters (and thus the sound) is not what you want.
> For presets and such where there's a ton of parameters this quickly
> becomes a very real problem.

This seems to be a problem that the receiver has to figure out for
itself, right? Like the "buddy" object in Max/MSP. But I agree, there
are situations where it does make sense to bundle messages, like your
example. I guess what you're saying is that with a bundle callback, an
application could be made to ignore /X and /Y messages unless they are
bundled together?

In any case, no worries, I am convinced that we should add something
like your patch. Sorry, I haven't had time to work on it lately (paper
due this weekend) but I'll try to come up with a suggestion soon.

>> > I just independently came up with this callback idea and noticed this
>> > thread while writing a new message about it, so I think it's a good
>> > idea :)
>> >
>> > Nested bundles don't seem to be a problem, since the app can just
>> > maintain a simple stack of bundle records (or whatever) and keep track.
>> >
>> > Are there any problems with this approach? It seems workable and
>> > feasible to me
>>
>> I can't really think of any other way to do it anyways. I suppose the
>> only other alternative would be to have a function that reports a
>> bundle counter, but this wouldn't provide as much information.
>
> I originally thought the message could have a flag to say "I'm in a
> bundle" or something along those lines, but the callback way makes much
> more sense.

Agreed. More backwards-compatible and seems to be more liblo-like.

>> The only important change I can think of to your patch would be to
>> provide timetag information on LO_BUNDLE_BEGIN. Since this would
>> require an extra argument, perhaps instead of introducing a new enum
>> it would be better to have two different handlers?
>
> Yeah, this would be justification for having a different begin and end
> handler, since the timestamp argument doesn't make sense for the latter.
> Originally I chose the enum just to reduce the number of callbacks, but
> with the timestamp I agree two is better. I'll tinker the patch around.

Cool.

Steve
From: Mok K. <ek...@gm...> - 2010-02-24 16:21:42
Hi all,

liblo uses a trial-and-error random routine to search for an unused
tcp/udp port. However, this is not necessary: by passing port zero to
the socket, the system will pick an unused port for you. We can then get
the port assigned by the system using getsockname().

Below is the patch against the latest git tree.

Keith

----
diff --git a/src/server.c b/src/server.c
index a32bf81..2b959f3 100644
--- a/src/server.c
+++ b/src/server.c
@@ -160,8 +160,6 @@ lo_server lo_server_new_with_proto_internal(const char *group,
     lo_server s;
     struct addrinfo *ai = NULL, *it, *used;
     struct addrinfo hints;
-    int tries = 0;
-    char pnum[16];
     const char *service;
     char hostname[LO_HOST_SIZE];
 
@@ -256,66 +254,60 @@ lo_server lo_server_new_with_proto_internal(const char *group,
     hints.ai_flags = AI_PASSIVE;
 
     if (!port) {
-        service = pnum;
+        service = "0";  /* system will assign portno automatically */
     } else {
         service = port;
     }
-    do {
-        if (!port) {
-            /* not a good way to get random numbers, but its not critical */
-            snprintf(pnum, 15, "%ld", 10000 + ((unsigned int) rand() +
-                                               time(NULL)) % 10000);
-        }
 
-        int ret = getaddrinfo(NULL, service, &hints, &ai);
-        if (ret != 0) {
-            lo_throw(s, ret, gai_strerror(ret), NULL);
-            freeaddrinfo(ai);
+    int ret = getaddrinfo(NULL, service, &hints, &ai);
+    if (ret != 0) {
+        lo_throw(s, ret, gai_strerror(ret), NULL);
+        freeaddrinfo(ai);
 
-            return NULL;
-        }
+        return NULL;
+    }
 
-        used = NULL;
-        s->ai = ai;
-        s->sockets[0].fd = -1;
-        s->port = 0;
+    used = NULL;
+    s->ai = ai;
+    s->sockets[0].fd = -1;
+    s->port = 0;
 
-        for (it = ai; it && s->sockets[0].fd == -1; it = it->ai_next) {
-            used = it;
-            s->sockets[0].fd = socket(it->ai_family, hints.ai_socktype, 0);
-        }
-        if (s->sockets[0].fd == -1) {
-            int err = geterror();
-            used = NULL;
-            lo_throw(s, err, strerror(err), "socket()");
+    for (it = ai; it && s->sockets[0].fd == -1; it = it->ai_next) {
+        used = it;
+        s->sockets[0].fd = socket(it->ai_family, hints.ai_socktype, 0);
+    }
+    if (s->sockets[0].fd == -1) {
+        int err = geterror();
+        used = NULL;
+        lo_throw(s, err, strerror(err), "socket()");
 
-            lo_server_free(s);
-            return NULL;
-        }
+        lo_server_free(s);
+        return NULL;
+    }
 
-        /* Join multicast group if specified. */
-        /* This must be done before bind() on POSIX, but after bind() Windows. */
+    /* Join multicast group if specified. */
+    /* This must be done before bind() on POSIX, but after bind() Windows. */
 #ifndef WIN32
-        if (group != NULL)
-            if (lo_server_join_multicast_group(s, group))
-                return NULL;
+    if (group != NULL)
+        if (lo_server_join_multicast_group(s, group))
+            return NULL;
 #endif
 
-        if ((used != NULL) &&
-            (bind(s->sockets[0].fd, used->ai_addr, used->ai_addrlen) <
-             0)) {
-            int err = geterror();
-            if (err == EINVAL || err == EADDRINUSE) {
-                used = NULL;
-                continue;
-            }
+    if ((used != NULL) &&
+        (bind(s->sockets[0].fd, used->ai_addr, used->ai_addrlen) <
+         0)) {
+        int err = geterror();
+        if (err == EINVAL || err == EADDRINUSE) {
+            used = NULL;
+            lo_server_free(s);
+            return NULL;
+        }
 
-            lo_throw(s, err, strerror(err), "bind()");
-            lo_server_free(s);
+        lo_throw(s, err, strerror(err), "bind()");
+        lo_server_free(s);
 
-            return NULL;
-        }
-    } while (!used && tries++ < 16);
+        return NULL;
+    }
 
     /* Join multicast group if specified (see above). */
 #ifdef WIN32
@@ -379,13 +371,19 @@ lo_server lo_server_new_with_proto_internal(const char *group,
     s->hostname = strdup(hostname);
 
     if (used->ai_family == PF_INET6) {
-        struct sockaddr_in6 *addr = (struct sockaddr_in6 *) used->ai_addr;
+        struct sockaddr_in6 addr;
+        socklen_t slen;
+        slen = sizeof(addr);
+        getsockname(s->sockets[0].fd, (struct sockaddr *)&addr, &slen);
 
-        s->port = htons(addr->sin6_port);
+        s->port = htons(addr.sin6_port);
     } else if (used->ai_family == PF_INET) {
-        struct sockaddr_in *addr = (struct sockaddr_in *) used->ai_addr;
+        struct sockaddr_in addr;
+        socklen_t slen;
+        slen = sizeof(addr);
+        getsockname(s->sockets[0].fd, (struct sockaddr *)&addr, &slen);
 
-        s->port = htons(addr->sin_port);
+        s->port = htons(addr.sin_port);
     } else {
         lo_throw(s, LO_UNKNOWNPROTO, "unknown protocol family", NULL);
         s->port = atoi(port);
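The technique the patch relies on can be seen in isolation in a minimal standalone sketch using plain BSD sockets (no liblo; the function name here is mine, not part of the library): bind with port 0 so the kernel picks a free port, then recover the assigned port with getsockname().

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Bind a UDP socket to an OS-assigned ("ephemeral") port and return
 * that port in host byte order, or -1 on error.  This mirrors what the
 * patch does: bind with port 0, then read the port via getsockname().
 * If fd_out is non-NULL the bound socket is returned through it;
 * otherwise it is closed before returning. */
int bind_ephemeral_udp_port(int *fd_out)
{
    struct sockaddr_in addr;
    socklen_t slen = sizeof(addr);
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(0);    /* 0 = let the kernel pick a free port */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        getsockname(fd, (struct sockaddr *)&addr, &slen) < 0) {
        close(fd);
        return -1;
    }
    if (fd_out)
        *fd_out = fd;
    else
        close(fd);
    return ntohs(addr.sin_port);
}
```

One small aside: for a 16-bit port, htons() and ntohs() perform the same byte swap on common platforms, so the patch's `s->port = htons(addr.sin_port)` yields the right value, but ntohs() is the conventional spelling when converting from network to host order, as above.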
From: David R. <da...@dr...> - 2010-02-19 21:55:27
On Thu, 2010-02-18 at 13:21 -0500, Stephen Sinclair wrote:
> On Tue, Feb 16, 2010 at 6:41 PM, David Robillard <da...@dr...> wrote:
> > P.S. It would also be possible for apps that use lo_server_recv and
> > friends directly to know about bundles if it only dispatched one message
> > at a time and lo_server_events_pending worked as advertised, but the
> > callback is more useful and generic anyway.
>
> By the way, I don't actually think this is an appropriate use of
> events_pending(), but in what way is it broken?

I was expecting each individual message of a bundle to be dispatched
independently, thus events_pending() could be used to check if you're in
a bundle. Unfortunately the entire bundle is dispatched at once, so this
is not the case.

A proper callback is better anyway, so whatever.

-dr
From: David R. <da...@dr...> - 2010-02-19 21:54:20
On Thu, 2010-02-18 at 13:11 -0500, Stephen Sinclair wrote:
> Hi,
>
> Thanks for the patch.
>
> I'm generally okay with this proposal. I think if we are going to
> consider bundles, this is inextricably linked to how OSC deals with
> timetags. We should be considering these two ideas simultaneously,
> since there are some open issues with how liblo currently deals with
> timetags.
>
> Mainly, a feature request in the past has been how to access liblo
> message timetags, and how to get liblo to trigger handlers _before_
> they are due. Historically of course liblo intends to make things
> easier for the user by handling timing automatically by dispatching at
> the correct time. However for some use cases (like message
> forwarding), an application might want to handle messages ahead of
> time and access the timetag information manually.
>
> Well, since the timetag information in OSC is actually in the bundle,
> perhaps this "bundle handler" API could provide timetag information to
> the callback.
>
> Missing is a way to tell liblo not to queue messages. I suppose this
> could be done as a lo_server option, or alternatively on a per-message
> or per-handler basis. I'm not sure which would be most useful.
>
> void lo_server_enable_queue(bool)?

I suppose these things are somewhat tied, but I personally just use
immediate everything and don't worry about scheduling.

It would be useful to be able to tell liblo to either schedule, or not
schedule and pass the time stamps on though. I suppose the bundle start
callback should have a timestamp parameter that would be set in that
case, then?

> On Tue, Feb 16, 2010 at 6:41 PM, David Robillard <da...@dr...> wrote:
> >
> > Me too. I also require this bundle functionality. Knowing about
> > bundles is very important for certain things - bundles provide
> > grouping/ordering/atomicity semantics defined in the OSC standard that
> > are very useful and sometimes necessary. Having no ordering whatsoever
> > is /extremely/ limiting if you think about it - a lot of things just
> > assume they will magically get some sort of ordering and mostly work
> > most of the time if you're lucky and have a good network etc. Less than
> > ideal...
>
> From my understanding, OSC was designed to be used in a stateless
> manner, so that ordering should not really matter. Obviously, many
> people are taking it beyond this original use case, so I suppose it
> makes sense to extend liblo to handle such things.

The other Steve has said things along these lines as well, but I don't
see where this opinion comes from. There is certainly nothing in the
spec suggesting it. The spec defines bundles, which are atomic, and
ordered. Obviously this stuff is there for a reason, and should be
supported.

(It's not hard to find even simple cases where ordering is needed
anyway; note that the most trivial "set the parameter" use is not
stateless ("set").)

There are also audible reasons why an atomic chunk of messages is
important. Say you have two parameters x and y, currently x=2.0 and
y=4.0. You want to set x=5.0 and y=6.0:

A: /x 5.0
B:
C: /y 6.0

What's the state at point B? Is it consistent? What if x=5.0 and y=4.0
is an unstable set of parameters that will produce a nasty noise you
don't want? Even if it doesn't, you have some indeterminate time in
there where the parameters (and thus the sound) are not what you want.
For presets and such where there's a ton of parameters this quickly
becomes a very real problem.

Anyway...

> > I just independently came up with this callback idea and noticed this
> > thread while writing a new message about it, so I think it's a good
> > idea :)
> >
> > Nested bundles don't seem to be a problem, since the app can just
> > maintain a simple stack of bundle records (or whatever) and keep track.
> >
> > Are there any problems with this approach? It seems workable and
> > feasible to me
>
> I can't really think of any other way to do it anyways. I suppose the
> only other alternative would be to have a function that reports a
> bundle counter, but this wouldn't provide as much information.

I originally thought the message could have a flag to say "I'm in a
bundle" or something along those lines, but the callback way makes much
more sense.

> The only important change I can think of to your patch would be to
> provide timetag information on LO_BUNDLE_BEGIN. Since this would
> require an extra argument, perhaps instead of introducing a new enum
> it would be better to have two different handlers?

Yeah, this would be justification for having a different begin and end
handler, since the timestamp argument doesn't make sense for the latter.
Originally I chose the enum just to reduce the number of callbacks, but
with the timestamp I agree two is better. I'll tinker the patch around.

Cheers,
-dr
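David's x/y example above can be made concrete with a small receiver-side sketch: while inside a bundle, parameter writes are staged rather than applied, and at bundle end they are committed in one step, so an observer never sees the mixed state at point B (x=5.0 with the old y=4.0). All names here are illustrative, not liblo API.

```c
/* Receiver-side atomic application of bundled parameter updates.
 * Outside a bundle, set_x/set_y take effect immediately; inside one,
 * they only touch a staging copy that bundle_end() commits at once. */
struct params { double x, y; };

static struct params live = { 2.0, 4.0 };   /* current, observable state */
static struct params pending;               /* staged state during a bundle */
static int in_bundle;

void bundle_begin(void) { pending = live; in_bundle = 1; }
void bundle_end(void)   { live = pending; in_bundle = 0; }

void set_x(double v) { if (in_bundle) pending.x = v; else live.x = v; }
void set_y(double v) { if (in_bundle) pending.y = v; else live.y = v; }

/* What the synthesis side (or any observer) actually sees. */
struct params observe(void) { return live; }
```

With the begin/end callbacks proposed in this thread, an application could install exactly this kind of staging logic and get the bundle's atomicity guarantee without liblo itself having to understand the parameters.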
From: Stephen S. <rad...@gm...> - 2010-02-18 18:21:37
On Tue, Feb 16, 2010 at 6:41 PM, David Robillard <da...@dr...> wrote:
> P.S. It would also be possible for apps that use lo_server_recv and
> friends directly to know about bundles if it only dispatched one message
> at a time and lo_server_events_pending worked as advertised, but the
> callback is more useful and generic anyway.

By the way, I don't actually think this is an appropriate use of
events_pending(), but in what way is it broken?

Steve
From: Stephen S. <rad...@gm...> - 2010-02-18 18:11:47
Hi,

Thanks for the patch.

I'm generally okay with this proposal. I think if we are going to
consider bundles, this is inextricably linked to how OSC deals with
timetags. We should be considering these two ideas simultaneously,
since there are some open issues with how liblo currently deals with
timetags.

Mainly, a feature request in the past has been how to access liblo
message timetags, and how to get liblo to trigger handlers _before_
they are due. Historically of course liblo intends to make things
easier for the user by handling timing automatically by dispatching at
the correct time. However for some use cases (like message
forwarding), an application might want to handle messages ahead of
time and access the timetag information manually.

Well, since the timetag information in OSC is actually in the bundle,
perhaps this "bundle handler" API could provide timetag information to
the callback.

Missing is a way to tell liblo not to queue messages. I suppose this
could be done as a lo_server option, or alternatively on a per-message
or per-handler basis. I'm not sure which would be most useful.

void lo_server_enable_queue(bool)?

On Tue, Feb 16, 2010 at 6:41 PM, David Robillard <da...@dr...> wrote:
>
> Me too. I also require this bundle functionality. Knowing about
> bundles is very important for certain things - bundles provide
> grouping/ordering/atomicity semantics defined in the OSC standard that
> are very useful and sometimes necessary. Having no ordering whatsoever
> is /extremely/ limiting if you think about it - a lot of things just
> assume they will magically get some sort of ordering and mostly work
> most of the time if you're lucky and have a good network etc. Less than
> ideal...

From my understanding, OSC was designed to be used in a stateless
manner, so that ordering should not really matter. Obviously, many
people are taking it beyond this original use case, so I suppose it
makes sense to extend liblo to handle such things.

> I just independently came up with this callback idea and noticed this
> thread while writing a new message about it, so I think it's a good
> idea :)
>
> Nested bundles don't seem to be a problem, since the app can just
> maintain a simple stack of bundle records (or whatever) and keep track.
>
> Are there any problems with this approach? It seems workable and
> feasible to me

I can't really think of any other way to do it anyways. I suppose the
only other alternative would be to have a function that reports a
bundle counter, but this wouldn't provide as much information.

The only important change I can think of to your patch would be to
provide timetag information on LO_BUNDLE_BEGIN. Since this would
require an extra argument, perhaps instead of introducing a new enum
it would be better to have two different handlers?

Steve
From: David R. <da...@dr...> - 2010-02-17 00:53:52
On Tue, 2010-02-16 at 18:41 -0500, David Robillard wrote:
> On Thu, 2009-11-12 at 09:47 +0100, Camille Troillard wrote:
> > On Thu, Nov 12, 2009 at 8:27 AM, Albert Graef <Dr....@t-...> wrote:
> > > But maybe a routine could be added to register a method with a server
> > > (thread) which reports the beginning and end of each bundle? I don't
> > > know whether that's feasible, but this would maintain backward
> > > compatibility with previous liblo versions and would allow an
> > > application to reconstruct the (possibly recursive) bundle structure if
> > > it needs to.
> >
> > I think this is a great idea!
>
> Me too. I also require this bundle functionality. Knowing about
> bundles is very important for certain things - bundles provide
> grouping/ordering/atomicity semantics defined in the OSC standard that
> are very useful and sometimes necessary. Having no ordering whatsoever
> is /extremely/ limiting if you think about it - a lot of things just
> assume they will magically get some sort of ordering and mostly work
> most of the time if you're lucky and have a good network etc. Less than
> ideal...
>
> I just independently came up with this callback idea and noticed this
> thread while writing a new message about it, so I think it's a good
> idea :)
>
> Nested bundles don't seem to be a problem, since the app can just
> maintain a simple stack of bundle records (or whatever) and keep track.
>
> Are there any problems with this approach? It seems workable and
> feasible to me

Attached is a preliminary patch adding this support.

Cheers,
-dr
From: David R. <da...@dr...> - 2010-02-17 00:07:02
On Thu, 2009-11-12 at 09:47 +0100, Camille Troillard wrote:
> On Thu, Nov 12, 2009 at 8:27 AM, Albert Graef <Dr....@t-...> wrote:
> > But maybe a routine could be added to register a method with a server
> > (thread) which reports the beginning and end of each bundle? I don't
> > know whether that's feasible, but this would maintain backward
> > compatibility with previous liblo versions and would allow an
> > application to reconstruct the (possibly recursive) bundle structure if
> > it needs to.
>
> I think this is a great idea!

Me too. I also require this bundle functionality. Knowing about
bundles is very important for certain things - bundles provide
grouping/ordering/atomicity semantics defined in the OSC standard that
are very useful and sometimes necessary. Having no ordering whatsoever
is /extremely/ limiting if you think about it - a lot of things just
assume they will magically get some sort of ordering and mostly work
most of the time if you're lucky and have a good network etc. Less than
ideal...

I just independently came up with this callback idea and noticed this
thread while writing a new message about it, so I think it's a good
idea :)

Nested bundles don't seem to be a problem, since the app can just
maintain a simple stack of bundle records (or whatever) and keep track.

Are there any problems with this approach? It seems workable and
feasible to me.

-dr

P.S. It would also be possible for apps that use lo_server_recv and
friends directly to know about bundles if it only dispatched one message
at a time and lo_server_events_pending worked as advertised, but the
callback is more useful and generic anyway.
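The "simple stack of bundle records" idea above fits in a few lines: with a begin/end callback pair, the application only needs a depth counter plus a stack of timetags to know which (possibly nested) bundle each dispatched message belongs to. A minimal sketch, with illustrative names (not liblo API):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_BUNDLE_DEPTH 32

/* Application-side bookkeeping driven by bundle begin/end callbacks.
 * tt_stack[depth - 1] holds the timetag of the innermost open bundle. */
static struct { uint32_t sec, frac; } tt_stack[MAX_BUNDLE_DEPTH];
static int depth;

void on_bundle_begin(uint32_t sec, uint32_t frac)
{
    assert(depth < MAX_BUNDLE_DEPTH);
    tt_stack[depth].sec  = sec;
    tt_stack[depth].frac = frac;
    depth++;
}

void on_bundle_end(void)
{
    assert(depth > 0);
    depth--;
}

/* Depth 0 means "top-level message, not inside any bundle"; message
 * handlers can consult this to decide how to group incoming messages. */
int current_bundle_depth(void)
{
    return depth;
}
```

Nesting falls out for free: each begin pushes, each end pops, and a message handler that checks current_bundle_depth() can reconstruct the full recursive bundle structure Albert describes.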