The complexity of the Internet is overwhelming some older routers, but these systems can easily be upgraded.
Last week, thousands of unlucky Internet users briefly found their service sluggish or nonexistent because of a hardware problem related to the complexity of supporting the millions of new connections being added every week. But while the continued expansion of the Internet has often prompted calls for a major overhaul, this latest sputter doesn’t mean it is nearing a breaking point.
At fault for the outages were routers that serve as connection points between organizations that move a lot of data around, including Internet service providers, the operators of major data centers, and other large companies. These Border Gateway Protocol (BGP) routers share information with each other about all the possible paths that data can take as it travels across the Internet.
Some older but still widely used BGP routers run software capable of storing a maximum of 512,000 routes. For years that was more than enough. But last week, according to Dyn, a company that specializes in monitoring the structure of the Internet, the total number of routes each BGP router needed to store to function properly hit that 512,000 threshold. Internet service providers began having connectivity problems if their BGP routers could no longer keep track of all the possible routes. The instability of those routers also affected some neighboring routers, widening the effects of the outage.
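The failure mode is simple enough to model in a few lines. The following is a toy sketch in Python, not router firmware; the `ForwardingTable` class and its names are invented for illustration, and real routers store routes in specialized hardware memory rather than a dictionary:

```python
# Toy model of a router's route table with a hard capacity, mimicking
# the 512,000-route limit described above. Purely illustrative.
class ForwardingTable:
    def __init__(self, max_routes):
        self.max_routes = max_routes
        self.routes = {}  # prefix -> next hop

    def install(self, prefix, next_hop):
        """Install or update a route; fail once the table is full."""
        if prefix not in self.routes and len(self.routes) >= self.max_routes:
            raise OverflowError(f"route table full at {self.max_routes} entries")
        self.routes[prefix] = next_hop

# The global routing table crossing this threshold is what caused the outages.
table = ForwardingTable(max_routes=512_000)
```

Once the table is full, any attempt to install a route for a new prefix fails, which is roughly what happened to the affected routers: they could no longer learn new paths, so traffic toward those destinations was dropped or misrouted.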
Last week’s routing problem is the latest evidence that the Internet is outgrowing its aging equipment and protocols, some of which were designed in the 1990s. But it also demonstrates that the infrastructure can often be upgraded on a piecemeal basis rather than overhauled to deal with these growing pains, and that recent fears over the Internet’s health are overblown. After all, the routing problem can be fixed simply by adjusting the configurations of old BGP routers so that they can store up to a million routes.
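On the widely deployed Cisco platforms affected by the incident, that reconfiguration amounts to repartitioning the router's route memory and reloading. The fragment below is an assumption based on Cisco's published guidance for these devices; the exact syntax and limits vary by platform and software version:

```
! Repartition route memory to hold up to roughly 1,000,000 IPv4 routes
! instead of the default 512,000, at the expense of space reserved for
! other route types. A reload is required for the change to take effect.
mls cef maximum-routes ip 1000
```

Because the change only shifts how existing memory is divided up, it is exactly the kind of piecemeal fix the article describes: no new hardware, no new protocol.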
In recent years, some network engineers have argued that the Internet was running out of IP addresses, the numeric identifiers that computers use to communicate with each other (see “The Internet Just Ran Out of Numbers”). In response, network engineers created a new method of allocating IP addresses and routing data between them. Known as Internet Protocol version 6 (IPv6), it allows for 340 trillion trillion trillion IP addresses, while the older method, IPv4, allows for only about 4.3 billion unique addresses.
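Those two figures follow directly from the width of the addresses: IPv4 addresses are 32 bits long and IPv6 addresses are 128 bits, so the address-space sizes are simply powers of two:

```python
# Address-space sizes quoted above, computed from the address widths.
ipv4_space = 2 ** 32    # 32-bit addresses: 4,294,967,296 (~4.3 billion)
ipv6_space = 2 ** 128   # 128-bit addresses: ~3.4e38, i.e. 340 trillion trillion trillion

print(f"{ipv4_space:,}")    # 4,294,967,296
print(f"{ipv6_space:.2e}")  # 3.40e+38
```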
But a surprising thing happened: because of workarounds that let many devices share one address, IPv4 is still dominant, even though the number of devices online now vastly exceeds the number of available addresses. Most sites and network operators aren’t even using IPv6 yet. That’s why the BGP routers were so easy to fix: all it took was telling them to set aside less memory for IPv6 routes and make more room for IPv4 routes.
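The main workaround in question is network address translation (NAT): a router rewrites outgoing traffic so that many internal devices appear to come from a single public address, distinguished only by port number. The sketch below is a simplified illustration with invented names, not a faithful NAT implementation; real NAT also tracks connection state and handles return traffic:

```python
# Toy sketch of NAT, the workaround that lets many devices share one
# public IPv4 address. Names are invented for illustration.
class Nat:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000      # arbitrary starting port for mappings
        self.mappings = {}          # (private_ip, private_port) -> public port

    def translate_out(self, private_ip, private_port):
        """Map an internal socket to a unique port on the shared public address."""
        key = (private_ip, private_port)
        if key not in self.mappings:
            self.mappings[key] = self.next_port
            self.next_port += 1
        return (self.public_ip, self.mappings[key])

nat = Nat("203.0.113.7")
print(nat.translate_out("192.168.1.10", 51515))  # ('203.0.113.7', 40000)
print(nat.translate_out("192.168.1.11", 51515))  # ('203.0.113.7', 40001)
```

Both internal machines present the same public address to the outside world, which is why billions of devices can keep running on IPv4's 4.3 billion addresses.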
To be sure, if the Internet is proving more stable than some engineers had feared, that doesn’t necessarily mean it will remain that way. New connections to the Internet are being added at an accelerating rate. The number of devices online reached 12.5 billion in 2010 and will grow to 25 billion in 2015 and 50 billion by 2020, according to Cisco Systems. The addition of all these new connections will lead service providers and large organizations to change the way they move data around, sometimes by adding new routes in order to maintain the most efficient paths. As the number of routes grows, so too will the amount of memory needed to store them. But the latest incident shows that “the IPv4 model still has plenty of growth left,” says Dyn’s chief scientist, Jim Cowie. And so does the Internet at large.
Originally published as “Despite the Latest Creaks, the Internet Isn’t Close to Breaking” in MIT Technology Review.
© 2014 MIT Technology Review