Tuesday, 26 March 2013

Is there a better way to upgrade the internet?

If we persist in thinking of the internet as an information superhighway, then we’ll continue to handle congestion by adding more lanes, via expensive upgrades in the core network, at the edge and at the last mile. The end result of our love affair with connectivity is a losing proposition for ISPs who are forced to upgrade their networks to meet the ongoing demand for broadband without taking enough of a share from the growing internet economy to meet their margins.


Or so writes Eric Klinker, in the Harvard Business Review blog, in a solid post about how we’re going to manage the growth of the internet. While Klinker sounds like many a telco-funded astroturfer in his worries about ISP profits, he’s actually the CEO of the file-sharing company BitTorrent. And his arguments are worth listening to on both sides of the internet divide: the ISPs, and the content companies looking to ride those pipes.

In the post, which is similar in spirit to one he wrote for GigaOM in 2011, he argues that the problem on the internet is congestion, and that there are far more ways to address congestion than just adding more lanes. And of course, as the CEO of BitTorrent, which has a proprietary file transfer system built on masses of distributed computers, his main idea is distributed computing.

Distributed computing systems work with unprecedented efficiency. You don’t need to build server farms, or new networks, to bring an application to life. Each computer acts as its own server, leveraging existing network connections distributed across the entirety of the internet.

BitTorrent is a primary example of distributed computing systems at work. Each month, via BitTorrent, millions of machines work together to deliver petabytes of data across the web, to millions of users, at zero cost. And BitTorrent isn’t the only example of distributed technology at work today. Skype uses distributed computing systems to deliver calls. Spotify uses distributed computing systems to deliver music.
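
To make the contrast with server farms concrete, here is a minimal, purely illustrative sketch of the swarming idea: a file is split into pieces, and each piece can be fetched from whichever peer happens to hold it, so no single server carries the whole load. The peer names, piece counts, and load-spreading heuristic below are all invented for illustration; this is not the actual BitTorrent protocol.

```python
# Toy sketch of swarm-style delivery: a file is split into pieces and each
# piece is fetched from whichever peer holds it, so no single server carries
# the whole load. Hypothetical data; not the real BitTorrent wire protocol.

PIECES = list(range(8))  # piece indices of one file

# Hypothetical swarm: each peer holds only part of the file.
peers = {
    "peer-a": {0, 1, 2, 3, 4},
    "peer-b": {3, 4, 5, 6, 7},
    "peer-c": {0, 2, 4, 6},
    "peer-d": {1, 3, 5, 7},
}

def plan_download(pieces, swarm):
    """Assign each piece to a holder, spreading the load across the swarm."""
    plan = {}
    for piece in pieces:
        holders = [p for p, have in swarm.items() if piece in have]
        if not holders:
            raise RuntimeError(f"piece {piece} is not available in the swarm")
        # Pick the holder that has been assigned the fewest pieces so far.
        plan[piece] = min(holders, key=lambda p: list(plan.values()).count(p))
    return plan

if __name__ == "__main__":
    for piece, peer in plan_download(PIECES, peers).items():
        print(f"piece {piece} <- {peer}")
```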

The challenges associated with this are obvious. Customers have to download clients in order to use such networks, and peer-to-peer traffic still hits the end user’s connection at the last mile, or over the airwaves and at cell sites on mobile networks, so it can tax ISP networks (although that traffic can be optimized). But with video a huge driver of congestion on the consumer side, it’s a solution that could work, since people will download software in order to watch TV. Even ISPs have tested distributed delivery, trying out the P4P network protocol back in 2008.

Distributed computing would force many popular web services to reconsider how they build their applications and stream their files, which could have a large effect on big web sites such as Facebook and Google, as well as on content companies and content delivery networks. Another option, and one we’re inching toward, is smart routers and prioritization schemes that let the user set their own network parameters to make the best use of the bandwidth they have available. Software-defined networking will also make such prioritization easier and cheaper to manage inside the core telco network.
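
As an illustration of the user-set prioritization idea, here is a hypothetical sketch: a household assigns weights to the traffic classes it cares about, and the available last-mile bandwidth is shared out in proportion to those weights. The class names, weights, and link speed are all made up; a real router would enforce something like this with queuing disciplines rather than a Python script.

```python
# Toy sketch of user-defined prioritization: split the available last-mile
# bandwidth among traffic classes by user-chosen weight. All names and
# numbers are hypothetical.

LINK_MBPS = 50  # assumed downstream capacity

# Weights set by the user: a higher weight means a larger share when the link is full.
weights = {"video-call": 5, "streaming": 3, "downloads": 1, "background": 1}

def allocate(link_mbps, weights, active):
    """Share the link among the currently active classes in proportion to weight."""
    total = sum(weights[c] for c in active)
    return {c: round(link_mbps * weights[c] / total, 1) for c in active}

if __name__ == "__main__":
    # Evening scenario: several classes compete for the link at once.
    print(allocate(LINK_MBPS, weights, active=["video-call", "streaming", "downloads"]))
    # Overnight: only a bulk download is running, so it gets the whole link.
    print(allocate(LINK_MBPS, weights, active=["downloads"]))
```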

There’s also the more controversial idea of ISPs charging more for broadband during peak times, as opposed to current data caps that limit people regardless of whether they download at 2 a.m. or during prime time. True congestion pricing would force users to bear the cost of overburdening the ISP’s network, although ISPs would then have to be open about how often their networks are congested and would risk dampening consumers’ appetite for broadband. My hunch is that neither the ISPs nor the content companies want that to happen, although it’s still far from clear that upgrades are the death knell for the cable and telco companies, as opposed to a painful shift in their margin profiles.
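
For a rough sense of how time-of-use pricing differs from a flat cap, here is a back-of-the-envelope sketch. Every price, cap, and usage figure below is invented purely for illustration, not drawn from any real ISP’s plans.

```python
# Back-of-the-envelope comparison of a flat data cap with congestion
# (time-of-use) pricing. All numbers here are hypothetical.

def capped_bill(gb_used, base=50.0, cap_gb=300, overage_per_gb=1.0):
    """Flat monthly fee plus overage, regardless of when the data moved."""
    return base + max(0, gb_used - cap_gb) * overage_per_gb

def congestion_bill(peak_gb, offpeak_gb, peak_per_gb=0.20, offpeak_per_gb=0.02):
    """Usage-based fee that charges more for traffic during congested hours."""
    return peak_gb * peak_per_gb + offpeak_gb * offpeak_per_gb

if __name__ == "__main__":
    # A 400 GB month: 150 GB during prime time, 250 GB in the middle of the night.
    print(f"flat cap:   ${capped_bill(400):.2f}")
    print(f"congestion: ${congestion_bill(150, 250):.2f}")
```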

Author: Murigi Benson
