 parallel TCP streams increasing total throughput?
Why is it that multiple, concurrent TCP streams between the same endpoints
result in higher throughput?  It seems like something must be wrong with
TCP if it exhibits that behavior, or that there should be a better fix
than having multiple parallel TCP streams.

The SDSC SRB FAQ[1] says:

    For transferring large files, SRB will normally be significantly
    faster than FTP, SCP, or NFS and the like, because of the SRB's
    parallel I/O capabilities (multiple threads each sending a data stream
    on the network). Sreplicate and Scp use parallel I/O for large-file
    data transfers by default, and you can use the -m option on Sput and
    Sget to select parallel I/O.

We had this problem at work as well, in simply transferring large files
across the building's ethernet; it turned out that opening up multiple TCP
connections dramatically improved total throughput.  I chalked this up to
poor default TCP settings in Windows (as explained in [2]) but maybe there
is more to the story.


[1] http://www.**-**.com/
[2] http://www.**-**.com/

 Wed, 22 Nov 2006 01:34:27 GMT   
Hi Tobin,

 > Why is it that multiple, concurrent TCP streams between the same endpoints
 > results in higher throughput? It seems like something must be wrong with
 > TCP if it exhibits that behavior, or that there should be a better fix
 > than having multiple parallel TCP streams.

I would really appreciate it if you could go into more detail about the
tests you did or read about.
What exactly is transmitted, and between how many hosts?
Unicast or broadcast connections?
How much did throughput increase, and under which circumstances?

I'm really interested in network performance issues (I have some
strange "problems" with my home network, OK, not really, but
I'm not satisfied...)

I'm sorry, I can't help you, because I've never heard that multiple
TCP connections increased throughput... but I have some
questions regarding your internet sources ;-)

Well, I looked over this FAQ and I think this comment has nothing to do with
TCP performance in the first place...
I think it's more a question of HDD performance and system load, or better
ways to handle multiple requests to the same service.
My home LAN (100 Mbps) has a fileserver w/ software RAID 0, and if I'm using
FTP, I get 11.9 MBytes/sec peak throughput when reading large files
(the average is about 10.5 MBytes/s).  I think that's pretty good, and it's
all done over single connections.  Whatever SRB is designed for, I'd be
really surprised if it could do better than this.
OK, it's probably meant for some other task, so there might be applications
where the FTP server app I use would lose the competition :-)

Again: what performance improvement did you see for which tasks?

Really interesting article, I think.  But it does not lead directly to the
conclusion that multiple TCP connections outperform single connections.
I have some ideas, though, but I would expect the performance boost to be
_very_ small in most cases...

Thx in advance!


 Wed, 22 Nov 2006 04:19:49 GMT   
In article <>, (tobin fricke) wrote:

Any individual TCP stream has its throughput limited to Window/RTT, due
to the nature of the sliding window acknowledgement mechanism.  If the
window size is small or the end-to-end latency is high, this throughput
limit may be less than your link bandwidth, so an individual stream
won't be able to fill up the pipe.  Thus, if Window/RTT is 1/4 the speed
of your link, you could theoretically get 4 connections going and that
will get you the maximum combined throughput.
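As a back-of-envelope sketch in Python (the 64 KiB window, 20 ms RTT, and
100 Mbit/s link below are made-up illustrative numbers, chosen so the
per-stream ceiling comes out to roughly 1/4 of the link):

```python
import math

# A single TCP stream can't exceed window / RTT (bandwidth-delay limit).
window_bytes = 64 * 1024      # hypothetical default receive window
rtt_s = 0.020                 # hypothetical 20 ms round-trip time
link_bps = 100_000_000        # hypothetical 100 Mbit/s link

stream_bps = window_bytes * 8 / rtt_s   # per-stream ceiling in bits/s
print(f"per-stream limit: {stream_bps / 1e6:.1f} Mbit/s")   # 26.2 Mbit/s

# Parallel streams needed to fill the pipe (ignoring protocol overhead):
streams_needed = math.ceil(link_bps / stream_bps)
print(f"streams to fill the link: {streams_needed}")        # 4
```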

However, modern TCP implementations generally support window scaling,
which allows for very large windows.  It's not usually used by default,
because most connections don't need it (if you're communicating over
a LAN, the RTT will be very small, so Window/RTT is not usually the
bottleneck).  If the application asks for a large receive buffer,
though, window scaling will be enabled and you should get good
throughput even over a single connection.
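In Python's standard socket API, "asking for a large receive buffer" looks
like the sketch below (the 1 MiB size is an arbitrary example; the OS is free
to clamp or, on Linux, double the requested value):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Request a 1 MiB receive buffer.  This must happen BEFORE connect() so the
# stack can advertise the window-scale option in the SYN (RFC 1323).
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"kernel granted {granted} bytes")   # may differ from the request
sock.close()
```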

Barry Margolin,
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***

 Wed, 22 Nov 2006 05:08:39 GMT   

I ran into this site about a month ago, but have not had time to read
it in detail.  It seems to be an interface for standardizing
"accelerated" transfers -- i.e., it basically acts like the download
accelerators that open multiple streams.  Still, the pipe is only so
big and some way must be found to satisfy _all_ users.  Large,
continuous file transfers have always been problematic -- especially
in nets with variable usage.

This second link is pretty standard fare re: tweaking Windows network
performance, but note that XP's defaults are even better out of the box
than W2K's.  With Linux, similar adjustments (and more) are available
"globally", as well as config options specific to different
applications.

One problem ignored in this second link has to do with the overhead
involved in retransmissions when cwnd sizes get "overly large" for
sudden changes in network conditions -- also, he confuses MSS and MTU.
There are ongoing efforts to standardize on improved
scaling/retransmission algorithms.

The real-world problem faced in most networks is the _variety_ of
traffic that must be accommodated, compounded by the difference between
measured _total_ throughput and the _perceived_ responsiveness or
latency of specific users/apps, e.g., the differences between large FTP
transfers and a telnet (char-by-char packets) session.

In the end, I think you are still faced with the need to monitor,
characterize, and adjust to the traffic on _your_ network.  Software
can help but can't replace good resource management.  VLANs, too, are
very useful when you can segregate usage patterns -- especially at sites
that have re-invented centralized resources (i.e., server farms ;-).

email above disabled

 Wed, 22 Nov 2006 12:58:49 GMT   

This isn't the only effect.  TCP also has a notion of fairness built
into its congestion avoidance algorithms - if some router in the
network is overloaded, all the TCPs involved back off their
transmission rates until the sum of the rates is equal to the capacity
of the link.  By symmetry, the division is fair - each TCP connection
on average gets its fair share of the resource.
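A toy model of that fair division in Python (the function name and link
capacity are mine, and it assumes the idealized case where every connection
converges to an exactly equal share of one bottleneck):

```python
# With n equally-aggressive flows through one bottleneck, each converges to
# capacity / n, so a host opening k connections against m single-connection
# users takes k / (k + m) of the link.
def my_share_bps(capacity_bps, my_conns, other_conns):
    return capacity_bps * my_conns / (my_conns + other_conns)

link = 100_000_000  # hypothetical 100 Mbit/s bottleneck
print(my_share_bps(link, 1, 9) / 1e6)   # 1 flow among 10 total -> 10.0
print(my_share_bps(link, 4, 9) / 1e6)   # 4 flows among 13 total -> ~30.8
```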

If you use multiple TCP connections, you get multiple "votes" - extra
shares of the resource.  This way you can be a bully and push other
users out of the way.  This is an easy way of achieving something that
can also be achieved by tweaking the parameters of the TCP congestion
avoidance algorithm.

Users with high bandwidth requirements are certainly using this
technique on the internet today - see GridFTP for a good example (the
Grid project pushes terabytes of data from supercolliders to computing
sites around the world).

Congestion control and fairness are discussed on the TCP Friendly
website - to avoid "congestion collapse of the internet", even non-TCP
applications (such as media streaming) should implement the same
congestion avoidance algorithm.
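The usual quantitative form of that "TCP-friendly" rate is the Mathis et al.
approximation, rate = (MSS / RTT) * C / sqrt(p) with C = sqrt(3/2); a
non-TCP sender staying at or below it competes fairly with TCP.  A sketch
(the MSS, RTT, and loss numbers are hypothetical):

```python
import math

def tcp_friendly_rate_bps(mss_bytes, rtt_s, loss_prob, c=math.sqrt(1.5)):
    # Mathis et al. steady-state TCP throughput approximation, in bits/s.
    return (mss_bytes * 8 / rtt_s) * c / math.sqrt(loss_prob)

# Hypothetical path: 1460-byte MSS, 100 ms RTT, 1% packet loss.
rate = tcp_friendly_rate_bps(1460, 0.100, 0.01)
print(f"{rate / 1e6:.2f} Mbit/s")   # roughly 1.43 Mbit/s
```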


--KW 8-)
Keith Wansbrough <>
University of Cambridge Computer Laboratory.

 Fri, 24 Nov 2006 18:34:00 GMT   