BitTorrent, the company behind the very popular uTorrent client, has announced that in the next version (currently in alpha) a proprietary UDP-based protocol called uTP will become uTorrent's default transport (see announcement). This prompted a writer over at The Register to predict that the move will cause the internet to melt down. Naturally, such a drastic prediction has stirred up all sorts of discussion; see the Slashdot posts.
The first thing to understand about BitTorrent and TCP (its current default transport) is that the two are a poor fit. BitTorrent typically creates lots of connections for short-lived, high-throughput transfers. Unlike earlier P2P applications (e.g. Kazaa/Napster), files on BitTorrent are split into small chunks, so the chunks can be downloaded from different hosts simultaneously, and likely significantly faster. The chunks themselves are generally quite small (1MB by default), and most people now have high-speed broadband connections, so connections to individual hosts tend to be short. These factors make TCP a terrible fit for BitTorrent. When TCP starts a connection it begins sending very slowly (1 in-flight packet) and builds up exponentially until a packet is lost, which signals the sender that it may be going too fast. While the exponential increase should get the sender to its maximum send rate quickly, it is not quick enough: with the high-speed broadband connections available today, it takes a fair number of round trips for TCP to reach its maximum rate. Some back-of-the-napkin math: using Throughput = 0.75 * (window size / RTT), a 1500-byte packet size, and a 100ms RTT, sustaining 1 MB/s (about 8 Mbps) requires roughly 93 in-flight packets. Starting at 1 packet, it will take slow start 7 round trips to get up to that rate (2^7 = 128), and 8 Mbps isn't even that fast. Since the connections BitTorrent makes are generally short-lived, they will likely spend most of their time in slow start, sending much more slowly than the link actually allows.
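That back-of-the-napkin math can be sketched in a few lines, using the same assumptions (1500-byte packets, 100ms RTT, Throughput = 0.75 * window/RTT); note that the 93-packet figure works out when the target is about 1 megabyte per second:

```python
import math

# Back-of-the-napkin slow-start estimate, using the post's assumptions:
# effective Throughput = 0.75 * (window size / RTT),
# 1500-byte packets, 100ms RTT.
PACKET_SIZE = 1500   # bytes
RTT = 0.100          # seconds

def window_packets(throughput_bytes_per_sec):
    """In-flight packets needed to sustain the given throughput."""
    window_bytes = throughput_bytes_per_sec * RTT / 0.75
    return round(window_bytes / PACKET_SIZE)

def slow_start_rounds(target_packets):
    """Round trips for slow start to grow from 1 packet to the target
    (the window doubles every RTT: 1, 2, 4, 8, ...)."""
    return math.ceil(math.log2(target_packets))

w = window_packets(1_048_576)    # ~1 MB/s target
print(w, slow_start_rounds(w))   # 93 packets, 7 round trips (2^7 = 128)
```

For a transfer that only lasts a handful of round trips, those 7 RTTs of ramp-up are most of the connection's lifetime.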
With all of these factors, it comes as no surprise that the folks at BitTorrent want a protocol of their own that they can optimize for torrenting, especially since torrent traffic is completely different from the kinds of traffic TCP was designed for and performs well with. The protocol and the client are both proprietary, which I am sure will be a point of contention for quite a few people.
The reason some have predicted the meltdown of the internet is that UDP does not employ any congestion-control or congestion-avoidance scheme the way TCP does; a naive UDP sender just blasts away packets as fast as it can. But that is only bare-bones UDP. The nice thing about UDP is that an application can add exactly the services it requires and omit those it doesn't. TCP has so much built in (reliability, in-order delivery, etc.) that hurts performance for this workload that BitTorrent simply chose to build their own protocol on top of UDP, including only the things they need without having to work around the things TCP forces on them.
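To make the "bare-bones" point concrete, here is a minimal sketch of what UDP actually gives an application: raw datagrams and nothing else. No handshake, no acknowledgements, no retransmission, and crucially no congestion control; everything uTP needs on top (acks, pacing, delay measurement) has to be built in the application itself.

```python
import socket

# A single UDP datagram over loopback -- the entirety of what the
# transport provides. Everything else is up to the application.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))        # let the OS pick a free port
addr = recv.getsockname()

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"chunk-0001", addr)   # one datagram, fire and forget

data, _ = recv.recvfrom(2048)
print(data)                        # b'chunk-0001'

send.close()
recv.close()
```

If the sender loops on `sendto` as fast as it can, nothing in the stack pushes back: that is the meltdown scenario the critics fear, and exactly the gap uTP's own congestion control is meant to fill.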
For now, the UDP-based protocol will likely evade many of the throttling mechanisms ISPs (e.g. Comcast) have put in place to curb downloading. I don't believe evasion is the goal, though: moving the traffic to UDP actually makes it much easier to distinguish torrent traffic from other traffic, whereas with TCP it is a somewhat more convoluted process. I suspect this will make it easier for ISPs to classify torrent traffic; the question is what they will do with it.
Update:
I just read this article over at networkperformancedaily.com in which the author talks to the VP at BitTorrent Inc. about why they chose to create their own protocol. Based on what was said, they do in fact seem to be doing the responsible thing, even stating that the protocol needs to be "MORE sensitive to latency than TCP." He also calls it a "scavenger protocol. It scavenges and uses bandwidth that is not being used by other applications at all." I think this is absolutely the correct approach to be taking. As a kicker, the first comment posted on that article was by Richard Bennett, the author of The Register's internet-meltdown piece.
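The "scavenger" idea can be sketched as delay-based rate control: back off as soon as queueing delay rises, long before packets are dropped, so competing TCP traffic keeps the bandwidth. This is a toy illustration in the spirit of the VP's description, not uTP's actual algorithm; the target-delay and gain values are made up.

```python
# Toy delay-based rate controller (illustrative parameters, not uTP's).
TARGET_DELAY_MS = 100.0   # extra queueing delay the sender tolerates adding
GAIN = 0.1                # how strongly the rate reacts each update

def adjust_rate(rate_kbps, measured_queueing_delay_ms):
    """Grow the rate while delay is below target, shrink it once above."""
    off_target = (TARGET_DELAY_MS - measured_queueing_delay_ms) / TARGET_DELAY_MS
    return max(1.0, rate_kbps * (1 + GAIN * off_target))

rate = 500.0
rate = adjust_rate(rate, 20.0)    # link idle, low delay -> rate grows
rate = adjust_rate(rate, 180.0)   # queue building up -> rate backs off
```

Because rising delay is detected before any loss occurs, a sender like this yields to loss-based TCP flows, which is exactly the opposite of the "blast away" behavior the meltdown prediction assumes.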