From: Stephen S. <rad...@gm...> - 2014-09-29 10:00:33
On Mon, Sep 29, 2014 at 3:43 AM, <to...@tr...> wrote:

> On Sun, September 28, 2014 23:50, Stephen Sinclair wrote:
>> I'm not sure if the misunderstanding is on my side or yours, but the
>> UDP max message size is due to how UDP works, not due to anything
>> regarding OSC or liblo.
>
> yes, i think so far i can follow. this is IIUC the reason why in the
> fixmax branch there is a new constant LO_MAX_UDP_MSG_SIZE that
> reflects the limit given by network specs, and the "old"
> LO_MAX_MSG_SIZE with half the size is kept for compatibility.
>
> my question was, how could a program that relies on liblo find out
> what max UDP size the *currently installed* liblo supports? If this
> is a version that still uses LO_MAX_MSG_SIZE then it's limited to
> 32768 (sending and receiving), and if it's a newer version it could
> be 65535. How could that be determined at runtime?

Well, unfortunately, since there is no API for this constant in the
current version of liblo, it's impossible to determine it with a
function call. However, if the function lo_server_max_msg_size()
exists in the library (determined e.g. using autoconf's AC_CHECK_FUNC),
then you can call it with:

    int prev_max_size = lo_server_max_msg_size(s, 0);

This will return the previous maximum message size without changing
it. If it returns -1, the message size is unlimited. At least, this is
the API that I am proposing in the fixmax branch; comments are welcome.
If this function doesn't exist, the max should be assumed to be
LO_MAX_MSG_SIZE = 32k. (I could add this comment to the header,
perhaps.)

>> I won't say right out that it's wrong, but it's a little unusual to
>> be sending audio data via OSC messages. Usually OSC is considered a
>> control protocol, like MIDI. Maybe you could give us the bigger
>> picture of what you are trying to accomplish?
>
> i keep hearing that, "OSC is for control data" :) i agree it's (not
> just) for that. if you're interested in the program, it's all on
> github under audio-rxtx, but this wasn't the focus of my mail (i just
> mentioned a nice side effect once we have 65k max UDP, which again
> was a side effect of asking for larger TCP). MIDI has similarities,
> but not only for control data. sysex was often used to transfer
> sample/wave data, so that's nothing new or weird, but it could be
> done say in 1 MB blocks using TCP with modern OSC clients today. my
> 2 cents :)

It's true that OSC can be used to send any old data using blobs, but
fundamentally it is a message-based protocol. The impact is that,
afaik, no known OSC implementation supports reception of
unlimited-size messages. You could consider this an "underdefined" bug
in the OSC spec if you wish, but basically cross-implementation
compatibility would require all OSC implementations to support some
kind of incremental interface (like callbacks, for example). Since
this isn't really the current state of things, using very large
messages won't be well supported across implementations. Additionally,
as IOhannes mentions, the semantics of bundles really don't agree well
with the idea. This discussion could be taken to the OSC_dev list if
you want to continue it.

I would strongly suggest that for large transfers you open a dedicated
TCP (or HTTP) connection, in parallel to your OSC port. Note that this
would also avoid blocking your control data while transferring large
amounts of audio/sysex data, for example. Don't try to pound in screws
with a hammer.

My recommendation is that OSC messages stay under 64k for
compatibility reasons, as that is basically an ad-hoc convention.
However, I don't mind expanding liblo to support larger maximums on
explicit request from the application code.

Thinking on the sysex thing further: system state should probably be
transferred in a more semantic way than as a single OSC blob, i.e. as
a series of messages defining the (timestamped) value of each (named)
variable. TCP is appropriate for this, since UDP messages may get
lost.

Steve
From: <to...@tr...> - 2014-09-29 01:43:52
Hi Steve

On Sun, September 28, 2014 23:50, Stephen Sinclair wrote:
> I'm not sure if the misunderstanding is on my side or yours, but the
> UDP max message size is due to how UDP works, not due to anything
> regarding OSC or liblo.

yes, i think so far i can follow. this is IIUC the reason why in the
fixmax branch there is a new constant LO_MAX_UDP_MSG_SIZE that reflects
the limit given by network specs, and the "old" LO_MAX_MSG_SIZE with
half the size is kept for compatibility.

my question was, how could a program that relies on liblo find out what
max UDP size the *currently installed* liblo supports? If this is a
version that still uses LO_MAX_MSG_SIZE then it's limited to 32768
(sending and receiving), and if it's a newer version it could be 65535.
How could that be determined at runtime?

> I won't say right out that it's wrong, but it's a little unusual to
> be sending audio data via OSC messages. Usually OSC is considered a
> control protocol, like MIDI. Maybe you could give us the bigger
> picture of what you are trying to accomplish?

i keep hearing that, "OSC is for control data" :) i agree it's (not
just) for that. if you're interested in the program, it's all on github
under audio-rxtx, but this wasn't the focus of my mail (i just
mentioned a nice side effect once we have 65k max UDP, which again was
a side effect of asking for larger TCP). MIDI has similarities, but not
only for control data. sysex was often used to transfer sample/wave
data, so that's nothing new or weird, but it could be done say in 1 MB
blocks using TCP with modern OSC clients today. my 2 cents :)

Tom

> On Sun, Sep 28, 2014 at 6:09 PM, <to...@tr...> wrote:
>> Hello
>>
>> i've played with the fixmax branch and did some tests with UDP. The
>> higher (double the size) limit for UDP makes an important difference
>> for some applications (here: it doubled the audio channel count that
>> can be transmitted).
>>
>> What would be an elegant way to let OSC programs know their max
>> message size (of the liblo currently in use, not at compile time)?
>>
>> Each client could ask the version of the local liblo and derive caps
>> from the version number. However, some of the max constants could be
>> changed prior to compilation, so that's not reliable.
>>
>> Checking for constants also has some issues; it will depend on the
>> liblo version the program was compiled against. Scenario: client A
>> is compiled with liblo 0.26. The source could do
>>
>>     #ifdef LO_MAX_UDP_MESSAGE_SIZE
>>     ...
>>
>> that works, however if the liblo version changes, the program can't
>> dynamically use the now-available LO_MAX_UDP_MESSAGE_SIZE (maybe i'm
>> wrong here).
>>
>> How could an OSC program find out the max UDP msg size at runtime of
>> the local liblo in place, not the one it was compiled against
>> (considering there might be NO LO_MAX_UDP_MESSAGE_SIZE available)?
>>
>> Best regards
>> Tom
>
> liblo-devel mailing list lib...@li...
> https://lists.sourceforge.net/lists/listinfo/liblo-devel
From: Stephen S. <rad...@gm...> - 2014-09-28 21:50:45
Hi Tom,

I'm not sure if the misunderstanding is on my side or yours, but the
UDP max message size is due to how UDP works, not due to anything
regarding OSC or liblo.

I won't say right out that it's wrong, but it's a little unusual to be
sending audio data via OSC messages. Usually OSC is considered a
control protocol, like MIDI. Maybe you could give us the bigger picture
of what you are trying to accomplish?

Steve

On Sun, Sep 28, 2014 at 6:09 PM, <to...@tr...> wrote:
> Hello
>
> i've played with the fixmax branch and did some tests with UDP. The
> higher (double the size) limit for UDP makes an important difference
> for some applications (here: it doubled the audio channel count that
> can be transmitted).
>
> What would be an elegant way to let OSC programs know their max
> message size (of the liblo currently in use, not at compile time)?
>
> Each client could ask the version of the local liblo and derive caps
> from the version number. However, some of the max constants could be
> changed prior to compilation, so that's not reliable.
>
> Checking for constants also has some issues; it will depend on the
> liblo version the program was compiled against. Scenario: client A is
> compiled with liblo 0.26. The source could do
>
>     #ifdef LO_MAX_UDP_MESSAGE_SIZE
>     ...
>
> that works, however if the liblo version changes, the program can't
> dynamically use the now-available LO_MAX_UDP_MESSAGE_SIZE (maybe i'm
> wrong here).
>
> How could an OSC program find out the max UDP msg size at runtime of
> the local liblo in place, not the one it was compiled against
> (considering there might be NO LO_MAX_UDP_MESSAGE_SIZE available)?
>
> Best regards
> Tom
From: <to...@tr...> - 2014-09-28 16:09:23
Hello

i've played with the fixmax branch and did some tests with UDP. The
higher (double the size) limit for UDP makes an important difference
for some applications (here: it doubled the audio channel count that
can be transmitted).

What would be an elegant way to let OSC programs know their max message
size (of the liblo currently in use, not at compile time)?

Each client could ask the version of the local liblo and derive caps
from the version number. However, some of the max constants could be
changed prior to compilation, so that's not reliable.

Checking for constants also has some issues; it will depend on the
liblo version the program was compiled against. Scenario: client A is
compiled with liblo 0.26. The source could do

    #ifdef LO_MAX_UDP_MESSAGE_SIZE
    ...

that works, however if the liblo version changes, the program can't
dynamically use the now-available LO_MAX_UDP_MESSAGE_SIZE (maybe i'm
wrong here).

How could an OSC program find out the max UDP msg size at runtime of
the local liblo in place, not the one it was compiled against
(considering there might be NO LO_MAX_UDP_MESSAGE_SIZE available)?

Best regards
Tom
From: <to...@tr...> - 2014-09-25 13:22:23
On Thu, September 25, 2014 13:49, to...@tr... wrote:
> On Thu, September 25, 2014 09:47, IOhannes m zmoelnig wrote:
>> On 2014-09-19 16:31, to...@tr... wrote:
>>> -the sender *must* create chunks in order to send a continuous
>>> ("unlimited") datastream. the question is how large these chunks
>>> can be at max.
>>
>> this - unfortunately - is either wrong or misses the point. [...]
>>
>> the above code is a DoS attack that will eat away all the memory of
>> a receiver that does not impose a maximum OSC packet size.
>
> I see the problem now. What would be a sensible way to handle this?
> The receiver knows how many bytes it received; it could detect if a
> limit was passed. What then? (The "DoS" sender keeps on sending.)
>
> Testing the fixmax branch, one remaining issue is exactly that case.
> If the SLIP data is larger than the allowed size, the receiver goes
> into a loop (?) that won't stop even if the sender stopped sending.
> Could a receiver "shut down" the session somehow in order to continue
> normally?
>
> Thomas

... scratch that. I just recognized that SLIP is only ever used when
set explicitly via lo_address_set_stream_slip(lo_address t, int enable)
on the sender. I hadn't set that, so the SLIP methods in server.c were
never called!

The case i'm looking at was non-SLIP TCP messages, where on the
receiver side the max allowed size is set via lo_server_max_msg_size().
This works for large messages (the test was 50 MB) when the receiver
was set at runtime to the requested max size. If that size is too
small, the problem starts (which i suspect is not SLIP-related). These
are the two test programs, in experimental shape:
https://github.com/7890/liblo/tree/fixmax/tcp_max_test

Best regards
From: <to...@tr...> - 2014-09-25 11:49:42
On Thu, September 25, 2014 09:47, IOhannes m zmoelnig wrote:
> On 2014-09-19 16:31, to...@tr... wrote:
>> -the sender *must* create chunks in order to send a continuous
>> ("unlimited") datastream. the question is how large these chunks
>> can be at max.
>
> this - unfortunately - is either wrong or misses the point.
>
> the "missing the point" part: the network layer (e.g. "IP") does
> some packetizing on its own, but this is totally transparent to both
> the sender and receiver (application). the maximum size of these
> chunks is the MTU and has an upper limit of 65535 bytes.
>
> the "wrong" part: it's fairly easy to create an OSC message of
> *unlimited* size on the sender side, when you are dealing with a
> serial datastream that is SLIP-encoded:
>
> <snip>
> MARK[]=[0xCE]; // slip packet boundary
> bundle[]=[0x23,0x62,0x75,0x6E,0x64,0x6C,0x65,0]; // "#bundle"
> timestamp[]=[0,0,0,0,0,0,0,0x1]; // now
> msg[]=[47,120,0,0,44,0,0,0]; // "/x"
>
> send(MARK); send(bundle); send(timestamp);
> while(true) { send(len(msg)); send(msg); }
> </snip>
>
> just make sure you never send an END-mark, and the packet keeps
> growing on the receiving side, due to the semantics of the OSC
> bundle (messages within the bundle are atomic, so you cannot simply
> output them as you receive them; instead you have to wait till the
> end of the bundle).
>
> the above code is a DoS attack that will eat away all the memory of a
> receiver that does not impose a maximum OSC packet size.

I see the problem now. What would be a sensible way to handle this?
The receiver knows how many bytes it received; it could detect if a
limit was passed. What then? (The "DoS" sender keeps on sending.)

Testing the fixmax branch, one remaining issue is exactly that case.
If the SLIP data is larger than the allowed size, the receiver goes
into a loop (?) that won't stop even if the sender stopped sending.
Could a receiver "shut down" the session somehow in order to continue
normally?

Thomas
From: IOhannes m z. <zmo...@ie...> - 2014-09-25 07:47:54
while you are generally right, ...

On 2014-09-19 16:31, to...@tr... wrote:
> -the sender *must* create chunks in order to send a continuous
> ("unlimited") datastream. the question is how large these chunks can
> be at max.

this - unfortunately - is either wrong or misses the point.

the "missing the point" part: the network layer (e.g. "IP") does some
packetizing on its own, but this is totally transparent to both the
sender and receiver (application). the maximum size of these chunks is
the MTU and has an upper limit of 65535 bytes.

the "wrong" part: it's fairly easy to create an OSC message of
*unlimited* size on the sender side, when you are dealing with a serial
datastream that is SLIP-encoded:

<snip>
MARK[]=[0xCE]; // slip packet boundary
bundle[]=[0x23,0x62,0x75,0x6E,0x64,0x6C,0x65,0]; // "#bundle"
timestamp[]=[0,0,0,0,0,0,0,0x1]; // now
msg[]=[47,120,0,0,44,0,0,0]; // "/x"

send(MARK); send(bundle); send(timestamp);
while(true) { send(len(msg)); send(msg); }
</snip>

just make sure you never send an END-mark, and the packet keeps growing
on the receiving side, due to the semantics of the OSC bundle (messages
within the bundle are atomic, so you cannot simply output them as you
receive them; instead you have to wait till the end of the bundle).

the above code is a DoS attack that will eat away all the memory of a
receiver that does not impose a maximum OSC packet size.

fgmdr
IOhannes
From: <to...@tr...> - 2014-09-21 13:55:27
Hi again

i often use oscdump to get an idea of what's being sent, so being able
to grep for a path or sender is handy. With just a very small
modification this is working well; please see the attached diff. (If
this doesn't have any other side effects, please consider it for
addition.)

Cheers
Thomas
From: <to...@tr...> - 2014-09-21 13:50:12
Hi Steve,

this is great news! It compiles nicely and i will do some tests.
Thanks for looking into this!

Best regards
Thomas

On Sun, September 21, 2014 15:37, Stephen Sinclair wrote:
> You're generally right. The problem is not any implementation
> difficulties, but rather a lack of consensus among OSC
> implementations.
>
> Anyway, a proposed patch is available on my github "fixmax" branch:
> https://github.com/radarsat1/liblo/tree/fixmax
>
> [...]
From: Stephen S. <rad...@gm...> - 2014-09-21 13:38:06
You're generally right. The problem is not any implementation
difficulties, but rather a lack of consensus among OSC implementations.

Anyway, a proposed patch is available on my github "fixmax" branch:
https://github.com/radarsat1/liblo/tree/fixmax

I added the function,

    /**
     * \brief Set the maximum message size accepted by a server.
     *
     * For UDP servers, the maximum message size cannot exceed 64k, due to
     * the UDP transport specifications. For TCP servers, this number may
     * be larger, but be aware that one or more contiguous blocks of
     * memory of this size may be allocated by liblo. (At least one per
     * connection.)
     *
     * \param s The server on which to operate.
     * \param req_size The new maximum message size to request, 0 if it
     * should not be modified, or -1 if it should be set to unlimited.
     * Note that an unlimited message buffer may make your application
     * open to a denial of service attack.
     * \return The new maximum message size is returned, which may or may
     * not be equal to req_size. If -1 is returned, maximum size is
     * unlimited.
     */
    int lo_server_max_msg_size(lo_server s, int req_size);

As seen in the commit message, I also made the UDP receiver use a
heap-allocated buffer when the expected message size exceeds 4k. This
is probably not super efficient, so some more work may be needed. The
buffer should be allocated and kept in the lo_server struct. (Frankly,
this struct is getting a little out of hand.)

Steve

On Fri, Sep 19, 2014 at 4:31 PM, <to...@tr...> wrote:
> Hi again,
>
> this is a more interesting problem than i thought initially :)
>
> [...]
From: <to...@tr...> - 2014-09-19 14:31:59
|
Hi again, this is a more interesting problem than i thought initially :) from what i learned so far the following can be assumed: -an OSC message from an application POV can be a datastructure that holds a number of typed values and arbitrary byte data (blobs) -for an OSC message to send via UDP, the message size is limited by network specs -for an OSC message to send via TPC, the message size is theoretically unlimited, when just looking at network specs -it's impossible that a sender can create an *OSC message / datastructure* of unlimited size -consequently, a receiver will never be confronted with such a message of unlimited size -the sender *must* create chunks in order to send a continuous ("unlimited") datastream. the question is how large these chunks can be at max. -the lower the message size, the higher the message rate -high message rates have a different effect on the underlying networking layers when using TCP compared to UDP -UDP: more or less send and forget about it. lost network connection, sender doesn't care. -TCP: session-style transmission of data involving a "back"-channel for acks, retransmission, timeouts etc etc -> overall more complex. (lost network connection & recovery seems to work better when having larger chunks / lower rates. i can only speculate that the networring layers get "confused" when there are too many independent sessions alive, or something similar.) -OSC as a spec doesn't mention size limits for max TCP message size. 
-consequently it is left to the implementation -different implementations use different max sizes -sending a message of size x to a receiver that has a max <x implementation will not work -receiver can tell about it or create chunks in some way (this is fully implementation dependant) -on an application level, OSC units could negotiate about max TCP size though that is fully application specific since OSC doesn't impose any semantical protocols per se facit: -it's not possible to be compatible in respect of max TCP size of OSC message structures *over all OSC implementations*. -it seems "cheap" to increase the max TCP size to a value of say 1MB for a specific implementation. other implementations could follow to be compatible and because it's "just a number" to change somewhere in the easiest case i could be wrong in some aspects, please let me know On Fri, September 19, 2014 11:44, Stephen Sinclair wrote: > On Fri, Sep 19, 2014 at 1:09 AM, IOhannes m zmölnig <zmo...@ie...> > wrote: > >> On 09/18/2014 10:34 PM, Stephen Sinclair wrote: >> >>> Aside from the fact that the liblo max message length is just wrong, >>> >>> >>> #define LO_MAX_MSG_SIZE 32768 >>> >>> >>> .. I don't see where you are indicating that it is different for >>> different platforms?? >> >> i'm not. i'm saying that if i use liblo on one side and OscQuirks (i >> just made that up) on the other side, they cannot share this define (on >> a code level). so if OscQuirks accepts to receive packets of lengths up >> to 1500bytes (sounds reasonable with the default MTU :-(), then it's >> moot to discuss the possible incompat problems when changing liblo's >> MAX_MSG_SIZE for >> *sending* from (say) 32768 to (e.g.) 100000. >> >> >>> >>> In any case I think fixing this is an acceptable idea but should take >>> security and sane stack memory behaviour into account, that's all. >>> >> >> yes of course. 
i'm aware that the desired "unbound" packet size (for >> SLIP encoded >> packets) is impossible to implement (with processing that only happens >> after the full packet is received). >> >> my comment was of more general nature: what we need to care for is >> interoperability between different OSC-applications (by adhering to the >> specs) rather than interoperability between different liblo versions >> (by >> not changing a define). the latter comes automatically with the former. > > I agree, but unfortunately the specs say nothing about this. As far > as I understand the spec leaves it as a transport-layer issue. For UDP > this is therefore clear (64k), and as for TCP it is up to the > implementation but I suppose it should support unlimited messages if > possible, although this allows a rather easy DoS attack. > > For the record, in the TCP receiver > (lo_server_recv_raw_stream_socket), the buffer is heap allocated and > the size is increased dynamically as necessary. The size limit is imposed > only for security reasons. It could be easily changed, or even made > specified at run-time. It's just a question of deciding. > > (Note that if the max size becomes user-specified, this _still_ leaves > open the compatibility issues you mention.) > > Part of the reason I haven't worried about maximum TCP message size is > that I never really considered OSC an appropriate protocol for sending > large blobs. If you're transferring files, for example, why not use HTTP? > > > Steve |
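[Editor's note] The chunking idea in the analysis above can be sketched in C. This is purely illustrative and not liblo API: `send_in_chunks` and its callback are made-up names standing in for code that would wrap each piece in an OSC message (e.g. an address like "/xfer" carrying an offset plus a blob), so the receiver can reassemble a stream of bounded messages into one large payload.

```c
#include <stddef.h>

/* Illustrative sketch, not liblo API: split a large payload into chunks
 * no larger than max_chunk. emit() stands in for the actual OSC send
 * call; the offset lets the receiver reassemble the pieces in order. */
typedef void (*chunk_cb)(size_t offset, const char *data, size_t len, void *user);

size_t send_in_chunks(const char *payload, size_t total,
                      size_t max_chunk, chunk_cb emit, void *user)
{
    size_t nchunks = 0;
    for (size_t off = 0; off < total; off += max_chunk) {
        size_t len = total - off;
        if (len > max_chunk)
            len = max_chunk;            /* final chunk may be shorter */
        emit(off, payload + off, len, user);
        nchunks++;
    }
    return nchunks;
}
```

The open question from the message stands: max_chunk must be no larger than what the peer's implementation accepts, and nothing in the OSC spec tells the sender what that is.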
From: Stephen S. <rad...@gm...> - 2014-09-19 09:45:04
|
On Fri, Sep 19, 2014 at 1:09 AM, IOhannes m zmölnig <zmo...@ie...> wrote: > On 09/18/2014 10:34 PM, Stephen Sinclair wrote: >> Aside from the fact that the liblo max message length is just wrong, >> >> #define LO_MAX_MSG_SIZE 32768 >> >> .. I don't see where you are indicating that it is different for >> different platforms?? > > i'm not. > i'm saying that if i use liblo on one side and OscQuirks (i just made > that up) on the other side, they cannot share this define (on a code level). > so if OscQuirks accepts to receive packets of lengths up to 1500bytes > (sounds reasonable with the default MTU :-(), then it's moot to discuss > the possible incompat problems when changing liblo's MAX_MSG_SIZE for > *sending* from (say) 32768 to (e.g.) 100000. > >> >> In any case I think fixing this is an acceptable idea but should take >> security and sane stack memory behaviour into account, that's all. >> > > yes of course. > i'm aware that the desired "unbound" packet size (for SLIP encoded > packets) is impossible to implement (with processing that only happens > after the full packet is received). > > my comment was of more general nature: what we need to care for is > interoperability between different OSC-applications (by adhering to the > specs) rather than interoperability between different liblo versions (by > not changing a define). the latter comes automatically with the former. I agree, but unfortunately the specs say nothing about this. As far as I understand the spec leaves it as a transport-layer issue. For UDP this is therefore clear (64k), and as for TCP it is up to the implementation but I suppose it should support unlimited messages if possible, although this allows a rather easy DoS attack. For the record, in the TCP receiver (lo_server_recv_raw_stream_socket), the buffer is heap allocated and the size is increased dynamically as necessary. The size limit is imposed only for security reasons. It could be easily changed, or even made specified at run-time. 
It's just a question of deciding. (Note that if the max size becomes user-specified, this _still_ leaves open the compatibility issues you mention.) Part of the reason I haven't worried about maximum TCP message size is that I never really considered OSC an appropriate protocol for sending large blobs. If you're transferring files, for example, why not use HTTP? Steve |
From: IOhannes m z. <zmo...@ie...> - 2014-09-18 23:35:06
|
On 09/18/2014 10:34 PM, Stephen Sinclair wrote: > Aside from the fact that the liblo max message length is just wrong, > > #define LO_MAX_MSG_SIZE 32768 > > .. I don't see where you are indicating that it is different for > different platforms?? i'm not. i'm saying that if i use liblo on one side and OscQuirks (i just made that up) on the other side, they cannot share this define (on a code level). so if OscQuirks accepts to receive packets of lengths up to 1500bytes (sounds reasonable with the default MTU :-(), then it's moot to discuss the possible incompat problems when changing liblo's MAX_MSG_SIZE for *sending* from (say) 32768 to (e.g.) 100000. > > In any case I think fixing this is an acceptable idea but should take > security and sane stack memory behaviour into account, that's all. > yes of course. i'm aware that the desired "unbound" packet size (for SLIP encoded packets) is impossible to implement (with processing that only happens after the full packet is received). my comment was of more general nature: what we need to care for is interoperability between different OSC-applications (by adhering to the specs) rather than interoperability between different liblo versions (by not changing a define). the latter comes automatically with the former. gfmsdr IOhannes |
From: <to...@tr...> - 2014-09-18 20:58:19
|
UDP and TCP should probably be handled differently (using different max. sizes and buffers), i'm shooting into the dark On Thu, September 18, 2014 22:55, Stephen Sinclair wrote: > Because the LO_MAX_MSG_SIZE is used at compile-time and changing it > affects the hard-coded behaviour and is therefore part of the definition of > the library. For instance, in server.c, there is a line, > > char buffer[LO_MAX_MSG_SIZE]; > > which allocates the whole buffer contiguously on the stack. If it's > larger than 65k, this is probably not recommended. Hell it probably > shouldn't be done with 32k. It needs to be fixed. > > Steve > > > > On Thu, Sep 18, 2014 at 10:47 PM, <to...@tr...> wrote: > >> On Thu, September 18, 2014 22:34, Stephen Sinclair wrote: >> >>> Aside from the fact that the liblo max message length is just wrong, >>> >>> >>> >>> #define LO_MAX_MSG_SIZE 32768 >>> >>> >>> >>> .. I don't see where you are indicating that it is different for >>> different platforms?? >>> >>> In any case I think fixing this is an acceptable idea but should take >>> security and sane stack memory behaviour into account, that's all. >>> >>> Steve >>> >> >> do you say >> >> #define LO_MAX_MSG_SIZE 65535 >> >> >> would be correct? >> >> >> why not also adding >> >> #define LO_MAX_MSG_SIZE_TCP 65535*x >> >> >> where x is to be defined :) >> >> >>> >>> >>> >>>> afair, the specs do not impose any limit of the packet-size at all. >>>> when using UDP, the transport gives us a hard maximum of 65535 bytes >>>> per |
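[Editor's note] The per-transport split suggested here could look like the following sketch. Names and values are illustrative, not liblo's actual configuration: UDP keeps the hard 64 k transport cap imposed by the 16-bit IP datagram length field, while a stream transport like TCP only needs a security-motivated soft cap that could be made tunable at run time.

```c
#include <stddef.h>

/* Sketch of transport-specific limits (names/values are assumptions,
 * not liblo's): UDP has a hard cap from the transport itself; TCP's
 * cap is purely a guard against runaway allocations. */
#define MAX_UDP_MSG_SIZE      65535u
#define DEFAULT_TCP_MSG_SIZE  (1024u * 1024u)

static size_t max_msg_size(int is_udp, size_t tcp_limit_override)
{
    if (is_udp)
        return MAX_UDP_MSG_SIZE;                      /* hard transport cap */
    return tcp_limit_override ? tcp_limit_override
                              : DEFAULT_TCP_MSG_SIZE; /* soft, tunable cap  */
}
```

As noted later in the thread, even a run-time-configurable TCP limit does not by itself solve cross-implementation compatibility.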
From: Stephen S. <rad...@gm...> - 2014-09-18 20:55:34
|
Because the LO_MAX_MSG_SIZE is used at compile-time and changing it affects the hard-coded behaviour and is therefore part of the definition of the library. For instance, in server.c, there is a line, char buffer[LO_MAX_MSG_SIZE]; which allocates the whole buffer contiguously on the stack. If it's larger than 65k, this is probably not recommended. Hell it probably shouldn't be done with 32k. It needs to be fixed. Steve On Thu, Sep 18, 2014 at 10:47 PM, <to...@tr...> wrote: > On Thu, September 18, 2014 22:34, Stephen Sinclair wrote: >> Aside from the fact that the liblo max message length is just wrong, >> >> >> #define LO_MAX_MSG_SIZE 32768 >> >> >> .. I don't see where you are indicating that it is different for >> different platforms?? >> >> In any case I think fixing this is an acceptable idea but should take >> security and sane stack memory behaviour into account, that's all. >> >> Steve > > do you say > > #define LO_MAX_MSG_SIZE 65535 > > would be correct? > > > why not also adding > > #define LO_MAX_MSG_SIZE_TCP 65535*x > > where x is to be defined :) > > >> >> >> >>> afair, the specs do not impose any limit of the packet-size at all. when >>> using UDP, the transport gives us a hard maximum of 65535 bytes per >>> |
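[Editor's note] A heap-based alternative to the fixed stack buffer could look like this sketch. The names (`grow_buf`, `grow_buf_reserve`) are hypothetical, not liblo internals: the buffer grows geometrically on demand, while an explicit upper bound keeps the DoS guard that a compile-time limit provides.

```c
#include <stdlib.h>

/* Hypothetical sketch, not liblo internals: a growable heap buffer to
 * replace `char buffer[LO_MAX_MSG_SIZE]` on the stack. Capacity doubles
 * until `need` fits, but never exceeds max_cap (the security limit). */
typedef struct {
    char  *data;
    size_t cap;
} grow_buf;

static int grow_buf_reserve(grow_buf *b, size_t need, size_t max_cap)
{
    if (need > max_cap)
        return -1;                    /* keep the DoS guard */
    if (need <= b->cap)
        return 0;                     /* already big enough */
    size_t cap = b->cap ? b->cap : 4096;
    while (cap < need)
        cap *= 2;                     /* geometric growth */
    if (cap > max_cap)
        cap = max_cap;
    char *p = realloc(b->data, cap);
    if (!p)
        return -1;
    b->data = p;
    b->cap  = cap;
    return 0;
}
```

This avoids both the large stack frame and the hard-coded link between message size and the library's ABI; the cap could become a run-time server option.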
From: <to...@tr...> - 2014-09-18 20:47:50
|
On Thu, September 18, 2014 22:34, Stephen Sinclair wrote: > Aside from the fact that the liblo max message length is just wrong, > > > #define LO_MAX_MSG_SIZE 32768 > > > .. I don't see where you are indicating that it is different for > different platforms?? > > In any case I think fixing this is an acceptable idea but should take > security and sane stack memory behaviour into account, that's all. > > Steve do you say #define LO_MAX_MSG_SIZE 65535 would be correct? why not also adding #define LO_MAX_MSG_SIZE_TCP 65535*x where x is to be defined :) > > > >> afair, the specs do not impose any limit of the packet-size at all. when >> using UDP, the transport gives us a hard maximum of 65535 bytes per >> |
From: Stephen S. <rad...@gm...> - 2014-09-18 20:34:12
|
Aside from the fact that the liblo max message length is just wrong, #define LO_MAX_MSG_SIZE 32768 .. I don't see where you are indicating that it is different for different platforms?? In any case I think fixing this is an acceptable idea but should take security and sane stack memory behaviour into account, that's all. Steve On Thu, Sep 18, 2014 at 10:55 AM, IOhannes m zmoelnig <zmo...@ie...> wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA256 > > On 2014-09-16 19:02, to...@tr... wrote: >> one thing that's odd is when liblo has different (compile-time) max >> sizes set on different hosts making them partially incompatible > > hmm, i think this is a weird argument. > > liblo is to communicate with *unknown entities* on the internet. > that is: there is absolutely no guarantee that the peer will use the > same version of liblo, or even liblo at all. (there are a number of > implementations of the OSC protocol stack). > > so the *only* way to ensure compatibility between peers is to follow > the protocol specs on both sides (and as always: be conservative in > the output and liberal in the input). > > > afair, the specs do not impose any limit of the packet-size at all. > when using UDP, the transport gives us a hard maximum of 65535 bytes > per packet, but when using a serial protocol like TCP and the > (specs-compliant) slip-encoding, there is *no* maximum size for a message. > and liblo (at least when receiving) should be able to receive *very* > long messages... 
> > fgmsdr > IOhannes |
From: <to...@tr...> - 2014-09-18 20:30:02
|
On Thu, September 18, 2014 10:55, IOhannes m zmoelnig wrote: > so the *only* way to ensure compatibility between peers is to follow the > protocol specs on both sides (and as always: be conservative in the output > and liberal in the input). > > > afair, the specs do not impose any limit of the packet-size at all. when > using UDP, the transport gives us a hard maximum of 65535 bytes per > packet, but when using a serial protocol like TCP and the > (specs-compliant) slip-encoding, there is *no* maximum size for a > message. and liblo (at least when receiving) should be able to receive > *very* > long messages... > yes, that sounds reasonable. But how would the receiving side pack an unlimited stream into an OSC data structure? There must be a limit somewhere, and sender and receiver must always match. You can even get incompatibilities between differently versioned builds of liblo, so that's a natural thing to happen; the tag "OSC" on a program alone doesn't ensure interop. The main idea was that liblo could send larger TCP messages than today, not necessarily of unlimited size, while keeping compatibility between (at least same-versioned) lo libraries. Cheers Thomas |
From: IOhannes m z. <zmo...@ie...> - 2014-09-18 09:49:58
|
On 2014-09-16 19:02, to...@tr... wrote: > one thing that's odd is when liblo has different (compile-time) max > sizes set on different hosts making them partially incompatible hmm, i think this is a weird argument. liblo is to communicate with *unknown entities* on the internet. that is: there is absolutely no guarantee that the peer will use the same version of liblo, or even liblo at all. (there are a number of implementations of the OSC protocol stack). so the *only* way to ensure compatibility between peers is to follow the protocol specs on both sides (and as always: be conservative in the output and liberal in the input). afair, the specs do not impose any limit of the packet-size at all. when using UDP, the transport gives us a hard maximum of 65535 bytes per packet, but when using a serial protocol like TCP and the (specs-compliant) slip-encoding, there is *no* maximum size for a message. and liblo (at least when receiving) should be able to receive *very* long messages... fgmsdr IOhannes |
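[Editor's note] The spec-compliant SLIP framing mentioned here is what makes message length open-ended on a stream transport: a frame ends wherever the un-escaped END byte appears, so no length prefix is needed. A minimal encoder sketch using the RFC 1055 byte values (illustrative; liblo's own stream code may differ):

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal SLIP framing sketch (RFC 1055 constants). Writes the encoded
 * frame into out and returns the encoded length; out must have room for
 * 2*len + 2 bytes in the worst case. */
enum {
    SLIP_END     = 0xC0,  /* frame delimiter                    */
    SLIP_ESC     = 0xDB,  /* escape introducer                  */
    SLIP_ESC_END = 0xDC,  /* escaped 0xC0 inside the payload    */
    SLIP_ESC_ESC = 0xDD   /* escaped 0xDB inside the payload    */
};

size_t slip_encode(const uint8_t *in, size_t len, uint8_t *out)
{
    size_t o = 0;
    out[o++] = SLIP_END;               /* leading END (double-END style) */
    for (size_t i = 0; i < len; i++) {
        if (in[i] == SLIP_END) {
            out[o++] = SLIP_ESC;
            out[o++] = SLIP_ESC_END;
        } else if (in[i] == SLIP_ESC) {
            out[o++] = SLIP_ESC;
            out[o++] = SLIP_ESC_ESC;
        } else {
            out[o++] = in[i];
        }
    }
    out[o++] = SLIP_END;               /* frame terminator */
    return o;
}
```

The flip side, raised earlier in the thread, is that a decoder cannot know the frame length in advance, so the receiver still needs either a growable buffer or an imposed maximum.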
From: <to...@tr...> - 2014-09-16 17:03:00
|
Hi Steve, one thing that's odd is that liblo can have different (compile-time) max sizes set on different hosts, making them partially incompatible. Can we distinguish the UDP and TCP max sizes, and maybe set the TCP max to a larger value by default (without touching the API), as a non-intrusive step towards larger possible TCP messages? Are there any overlooked obstacles, what do you think? Tom On Tue, August 19, 2014 15:25, Stephen Sinclair wrote: > Hm, well I can't think of an easy way to do that with the current API. > It could be something to consider for a future version of liblo. > Currently the maximum message size is a global compile-time variable. > It could potentially be changed to a run-time global, but some of the > buffering code would definitely need to be changed so as to avoid large > stack allocations. Additionally there is the security consideration that > an attacker could cause problems by sending a never-ending message stream. > > > Steve > > > On Thu, Aug 14, 2014 at 5:34 PM, <to...@tr...> wrote: > >> On Wed, August 13, 2014 23:57, Stephen Sinclair wrote: >>>> >>>> I wondered if it would be possible to not limit the size of TCP >>>> messages so it would be possible to send large messages. >>>> >>> >>> It does make sense. I'm surprised there's ever a need for OSC >>> messages greater than 32k, what are you sending? in any case, in >>> principle there should be no real limit for TCP. The limit just >>> simplifies much of the programming, and I'd have to check over the >>> code again to be sure it doesn't cause any ill effects to increase the >>> limit. Increasing this variable likely causes liblo to allocate much >>> larger buffers, so I'm not sure that's an appropriate fix, even if it >>> works. >>> >>> Steve >>> |
From: Stephen S. <rad...@gm...> - 2014-09-16 08:57:33
|
Excellent, it's a pretty minimal change that should have little impact I think. lo_server_get_url() still works as before. I'll push it. Steve On Tue, Sep 16, 2014 at 6:40 AM, Stéphane Letz <le...@gr...> wrote: > Yes indeed, it solves the problem, thanks ! > > Stéphane > > Le 15 sept. 2014 à 23:06, Stephen Sinclair <rad...@gm...> a écrit : > >> Can you test the branch, "deferredhostname", on >> http://github.com/radarsat1/liblo? >> >> thanks, >> Steve >> >> >> On Fri, Sep 12, 2014 at 4:19 PM, Stéphane Letz <le...@gr...> wrote: >>> Well i don't know the code enough to do that… but possibly. >>> >>> Are you going to correct that? >>> >>> Stéphane >>> >>> >>> Le 12 sept. 2014 à 16:15, Stephen Sinclair <rad...@gm...> a écrit : >>> >>>> Right, so the server gets its own hostname so that it can provide a >>>> URL for contacting it. This may or may not be appropriate depending >>>> on the network, but I guess the right fix would be to defer this >>>> operation until the s->hostname field is needed. Such a change was >>>> already made for lo_address in the past. >>>> >>>> Steve >>>> >>>> >>>> On Fri, Sep 12, 2014 at 4:05 PM, Stéphane Letz <le...@gr...> wrote: >>>>> I found out that "gethostbyname" call in "lo_server_new_with_proto_internal" was the problem. Just removed the code for now, but don't think this is the real fix... >>>>> >>>>> Stéphane >>>>> >>>>> Le 12 sept. 2014 à 15:58, Stephen Sinclair <rad...@gm...> a écrit : >>>>> >>>>>> Is IPv6 enabled? I believe there were some major slow-downs in DNS >>>>>> lookups due to using getnameinfo(), so we disabled it unless IPv6 is >>>>>> enabled. Don't know if that is your problem, but it would be good to >>>>>> eliminate it. >>>>>> >>>>>> Is it only the multicast variety that is slow? 
>>>>>> >>>>>> Steve >>>>>> >>>>>> >>>>>> On Fri, Sep 12, 2014 at 1:58 PM, Stéphane Letz <le...@gr...> wrote: >>>>>>> Hi, >>>>>>> >>>>>>> We are using liblo code on a completely autonomous wireless network (made by 2 Mac OSX laptop + iPad device + Apple Aiport express) >>>>>>> >>>>>>> In this case the "lo_server_thread_new_multicast" takes a huge time to return since it seems an internal time-out is used somewhere to (DNS access? name resolution ? etc…) >>>>>>> >>>>>>> The same code works well as soon as the autonomous wireless in connected again on internet. >>>>>>> >>>>>>> Any idea how to solve this issue? >>>>>>> >>>>>>> Thanks. >>>>>>> >>>>>>> Stéphane Letz |
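[Editor's note] The deferred-hostname fix discussed in this thread amounts to lazy initialization: don't resolve the server's own hostname at creation time (which can block for a long time on a network with no DNS), resolve it the first time the URL is actually requested. A sketch with stand-in names; `resolve_own_hostname` is a placeholder for the blocking gethostbyname()/getnameinfo() path, not liblo code:

```c
#include <stdlib.h>
#include <string.h>

/* Lazy-initialization sketch of the deferred-hostname idea.
 * Hypothetical types/names, not liblo's actual structures. */
typedef struct {
    char *hostname;          /* NULL until first requested */
} server;

static char *resolve_own_hostname(void)
{
    /* placeholder for the potentially slow DNS lookup */
    const char *name = "localhost";
    char *copy = malloc(strlen(name) + 1);
    if (copy)
        strcpy(copy, name);
    return copy;
}

const char *server_hostname(server *s)
{
    if (!s->hostname)                    /* pay for DNS only on demand */
        s->hostname = resolve_own_hostname();
    return s->hostname;
}
```

With this shape, server creation never touches the resolver, so `lo_server_thread_new_multicast`-style calls return immediately even on an isolated network.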
From: Stéphane L. <le...@gr...> - 2014-09-16 04:52:50
|
Yes indeed, it solves the problem, thanks ! Stéphane Le 15 sept. 2014 à 23:06, Stephen Sinclair <rad...@gm...> a écrit : > Can you test the branch, "deferredhostname", on > http://github.com/radarsat1/liblo? > > thanks, > Steve > > > On Fri, Sep 12, 2014 at 4:19 PM, Stéphane Letz <le...@gr...> wrote: >> Well i don't know the code enough to do that… but possibly. >> >> Are you going to correct that? >> >> Stéphane >> >> >> Le 12 sept. 2014 à 16:15, Stephen Sinclair <rad...@gm...> a écrit : >> >>> Right, so the server gets its own hostname so that it can provide a >>> URL for contacting it. This may or may not be appropriate depending >>> on the network, but I guess the right fix would be to defer this >>> operation until the s->hostname field is needed. Such a change was >>> already made for lo_address in the past. >>> >>> Steve >>> >>> >>> On Fri, Sep 12, 2014 at 4:05 PM, Stéphane Letz <le...@gr...> wrote: >>>> I found out that "gethostbyname" call in "lo_server_new_with_proto_internal" was the problem. Just removed the code for now, but don't think this is the real fix... >>>> >>>> Stéphane >>>> >>>> Le 12 sept. 2014 à 15:58, Stephen Sinclair <rad...@gm...> a écrit : >>>> >>>>> Is IPv6 enabled? I believe there were some major slow-downs in DNS >>>>> lookups due to using getnameinfo(), so we disabled it unless IPv6 is >>>>> enabled. Don't know if that is your problem, but it would be good to >>>>> eliminate it. >>>>> >>>>> Is it only the multicast variety that is slow? >>>>> >>>>> Steve >>>>> >>>>> >>>>> On Fri, Sep 12, 2014 at 1:58 PM, Stéphane Letz <le...@gr...> wrote: >>>>>> Hi, >>>>>> >>>>>> We are using liblo code on a completely autonomous wireless network (made by 2 Mac OSX laptop + iPad device + Apple Aiport express) >>>>>> >>>>>> In this case the "lo_server_thread_new_multicast" takes a huge time to return since it seems an internal time-out is used somewhere to (DNS access? name resolution ? 
etc…) >>>>>> >>>>>> The same code works well as soon as the autonomous wireless in connected again on internet. >>>>>> >>>>>> Any idea how to solve this issue? >>>>>> >>>>>> Thanks. >>>>>> >>>>>> Stéphane Letz |
From: Stephen S. <rad...@gm...> - 2014-09-15 20:06:33
|
Can you test the branch, "deferredhostname", on http://github.com/radarsat1/liblo? thanks, Steve On Fri, Sep 12, 2014 at 4:19 PM, Stéphane Letz <le...@gr...> wrote: > Well i don't know the code enough to do that… but possibly. > > Are you going to correct that? > > Stéphane > > > Le 12 sept. 2014 à 16:15, Stephen Sinclair <rad...@gm...> a écrit : > >> Right, so the server gets its own hostname so that it can provide a >> URL for contacting it. This may or may not be appropriate depending >> on the network, but I guess the right fix would be to defer this >> operation until the s->hostname field is needed. Such a change was >> already made for lo_address in the past. >> >> Steve >> >> >> On Fri, Sep 12, 2014 at 4:05 PM, Stéphane Letz <le...@gr...> wrote: >>> I found out that "gethostbyname" call in "lo_server_new_with_proto_internal" was the problem. Just removed the code for now, but don't think this is the real fix... >>> >>> Stéphane >>> >>> Le 12 sept. 2014 à 15:58, Stephen Sinclair <rad...@gm...> a écrit : >>> >>>> Is IPv6 enabled? I believe there were some major slow-downs in DNS >>>> lookups due to using getnameinfo(), so we disabled it unless IPv6 is >>>> enabled. Don't know if that is your problem, but it would be good to >>>> eliminate it. >>>> >>>> Is it only the multicast variety that is slow? >>>> >>>> Steve >>>> >>>> >>>> On Fri, Sep 12, 2014 at 1:58 PM, Stéphane Letz <le...@gr...> wrote: >>>>> Hi, >>>>> >>>>> We are using liblo code on a completely autonomous wireless network (made by 2 Mac OSX laptop + iPad device + Apple Aiport express) >>>>> >>>>> In this case the "lo_server_thread_new_multicast" takes a huge time to return since it seems an internal time-out is used somewhere to (DNS access? name resolution ? etc…) >>>>> >>>>> The same code works well as soon as the autonomous wireless in connected again on internet. >>>>> >>>>> Any idea how to solve this issue? >>>>> >>>>> Thanks. 
>>>>> >>>>> Stéphane Letz |
From: Stéphane L. <le...@gr...> - 2014-09-12 14:19:27
|
Well i don't know the code enough to do that… but possibly. Are you going to correct that? Stéphane Le 12 sept. 2014 à 16:15, Stephen Sinclair <rad...@gm...> a écrit : > Right, so the server gets its own hostname so that it can provide a > URL for contacting it. This may or may not be appropriate depending > on the network, but I guess the right fix would be to defer this > operation until the s->hostname field is needed. Such a change was > already made for lo_address in the past. > > Steve > > > On Fri, Sep 12, 2014 at 4:05 PM, Stéphane Letz <le...@gr...> wrote: >> I found out that "gethostbyname" call in "lo_server_new_with_proto_internal" was the problem. Just removed the code for now, but don't think this is the real fix... >> >> Stéphane >> >> Le 12 sept. 2014 à 15:58, Stephen Sinclair <rad...@gm...> a écrit : >> >>> Is IPv6 enabled? I believe there were some major slow-downs in DNS >>> lookups due to using getnameinfo(), so we disabled it unless IPv6 is >>> enabled. Don't know if that is your problem, but it would be good to >>> eliminate it. >>> >>> Is it only the multicast variety that is slow? >>> >>> Steve >>> >>> >>> On Fri, Sep 12, 2014 at 1:58 PM, Stéphane Letz <le...@gr...> wrote: >>>> Hi, >>>> >>>> We are using liblo code on a completely autonomous wireless network (made by 2 Mac OSX laptop + iPad device + Apple Aiport express) >>>> >>>> In this case the "lo_server_thread_new_multicast" takes a huge time to return since it seems an internal time-out is used somewhere to (DNS access? name resolution ? etc…) >>>> >>>> The same code works well as soon as the autonomous wireless in connected again on internet. >>>> >>>> Any idea how to solve this issue? >>>> >>>> Thanks. >>>> >>>> Stéphane Letz |
From: Stephen S. <rad...@gm...> - 2014-09-12 14:15:22
Right, so the server gets its own hostname so that it can provide a URL for contacting it. This may or may not be appropriate depending on the network, but I guess the right fix would be to defer this operation until the s->hostname field is needed. Such a change was already made for lo_address in the past.

Steve

On Fri, Sep 12, 2014 at 4:05 PM, Stéphane Letz <le...@gr...> wrote:
> I found out that the "gethostbyname" call in
> "lo_server_new_with_proto_internal" was the problem. I just removed
> the code for now, but I don't think this is the real fix...
>
> Stéphane
>
> On 12 Sept 2014 at 15:58, Stephen Sinclair <rad...@gm...> wrote:
>
>> Is IPv6 enabled? I believe there were some major slow-downs in DNS
>> lookups due to using getnameinfo(), so we disabled it unless IPv6 is
>> enabled. I don't know if that is your problem, but it would be good
>> to eliminate it.
>>
>> Is it only the multicast variety that is slow?
>>
>> Steve
>>
>> On Fri, Sep 12, 2014 at 1:58 PM, Stéphane Letz <le...@gr...> wrote:
>>> Hi,
>>>
>>> We are using liblo on a completely autonomous wireless network
>>> (made up of two Mac OS X laptops, an iPad, and an Apple AirPort
>>> Express).
>>>
>>> In this setup, "lo_server_thread_new_multicast" takes a very long
>>> time to return, since it seems an internal timeout is hit somewhere
>>> (DNS access? name resolution? etc.).
>>>
>>> The same code works fine as soon as the autonomous wireless network
>>> is connected to the internet again.
>>>
>>> Any idea how to solve this issue?
>>>
>>> Thanks.
>>>
>>> Stéphane Letz
>>> _______________________________________________
>>> liblo-devel mailing list
>>> lib...@li...
>>> https://lists.sourceforge.net/lists/listinfo/liblo-devel