The gist of this study (by Arbor Networks, the University of Michigan and Merit Network) is that Internet traffic has been shifting from moderately fast connections with a relatively large number of servers to fast connections with a relatively small number of servers. This has apparently been driven by (among other things) the rising amount of video being watched. So, if this seems like a problem, blame YouTube.
Some thoughts I had after reading this:
- Some of this may be accounted for by the movement of Internet-facing servers to server farms, away from on-site servers. Which makes sense: having a popular Web server on your company's WAN connection is inviting a sort of inadvertent, non-malicious DDoS attack.
- Fewer systems carrying the majority of the traffic on the Internet means fewer points of failure. Which means these "Hyper Giants" (the term used in the study summary) had best keep the systems and planning for their security, disaster recovery and redundancy up to snuff. Take a look at Telecom Risk and Security Part 2 – The Carrier Hotel SuperNode | Virtualization Journal or Verizon router glitch slams parts of U.S. to get an idea of what can go wrong when too much traffic depends on too few systems.
- Having applications on the Web is convenient, but it also means that people outside of your organization now have direct influence over your ability to use those applications. This might be something like Google suffering a Gmail outage or a backhoe operator digging in the wrong spot.
Here's the link to the Arbor Networks report:
Two-Year Study of Global Internet Traffic Will Be Presented At NANOG47 | Arbor Networks

Here's a PC World article on the same subject: