No, I didn't check with a wire monitor. (Is bad, I know...)
I just know that most consumer ISPs limit the upload pipe in favor of the download pipe. Most consumers have a hard time pushing more than about 20 KB/sec of total upload capacity; in reality it is usually closer to 5 to 15 KB/sec once throttling kicks in.
My users claim to have noticed an improvement after telling PuTTY to connect at 115200 instead of 38400, but given the total upload speed on my end, I am loath to have them connect any faster than that. Since it is a live stream, I suppose it could be a QoS-related problem (where the ISP decides it is ordinary bulk data that can arrive out of order or be throttled, so it gets demoted in favor of VoIP and other high-priority packets).
I will need to do some remote-site testing with Wireshark (or similar) to find out.
I was just hoping for data compression, because it would help with the "tiny pipe" problem.
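To get a rough feel for how much compression could help here: a curses-style game screen is mostly repeated characters, and generic DEFLATE (what SSH's `Compression yes` option uses under the hood) eats that kind of data for breakfast. This is a minimal sketch with a made-up 80x25 frame standing in for a real dfterm screen, not actual measured traffic:

```python
import zlib

# Hypothetical stand-in for a dfterm screen: an 80x25 frame of walls and floor.
frame = ("#" * 80 + "\n") * 2 \
      + ("#" + "." * 78 + "#\n") * 21 \
      + ("#" * 80 + "\n") * 2

raw = (frame * 10).encode()       # ten nearly identical frames back to back
packed = zlib.compress(raw, 6)    # level 6 is the zlib/SSH default

print(len(raw), len(packed))      # packed is a small fraction of raw
```

Real dfterm output won't be this uniform, but repetitive terminal frames routinely shrink by an order of magnitude, which is exactly what a 5-15 KB/sec upload pipe needs.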
I suppose that since my host *is* Linux, I could be brazen and tell my clients to connect over SSH as the limited user I created for dfterm, then have them run a "local" telnet on the server that connects to the loopback to talk to dfterm, letting the SSH daemon handle all the over-the-wire traffic.
Loopback is not physically constrained in how fast it can deliver bits, so that kludge should give me the compression I want with little else getting in the way. However, I don't know whether dfterm can handle multiple connections from the "same host" like that. (E.g., user A and user B are both using compressed SSH sessions and both appear to dfterm as localhost. How does dfterm know which is which?)
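For what it's worth, two loopback connections are still distinguishable at the TCP level: each one arrives with its own ephemeral source port, so a server sees a distinct (IP, port) pair per session even when the IP is identical. Whether dfterm actually keys its sessions on the full pair is an assumption I can't confirm, but the mechanism itself can be sketched like this:

```python
import socket
import threading

# Minimal sketch: a loopback server accepts two clients from "the same host"
# and records the peer address of each accepted connection.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
srv.listen(2)
port = srv.getsockname()[1]

peers = []
def accept_two():
    for _ in range(2):
        conn, addr = srv.accept()
        peers.append(addr)   # addr is the client's (ip, port) pair
        conn.close()

t = threading.Thread(target=accept_two)
t.start()
a = socket.create_connection(("127.0.0.1", port))  # "user A"
b = socket.create_connection(("127.0.0.1", port))  # "user B"
t.join()
a.close(); b.close(); srv.close()

print(peers)  # same IP both times, but two different source ports
```

So as long as dfterm tracks connections by socket rather than by bare IP address, the two SSH users should stay separate.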
Testing is necessary...
I just wish American ISPs were friendlier about people running servers, since that is how the internet was actually designed. As-is, they treat running a server as "srs business!" and think consumers should only ever download. It sucks.