How can that be? One premise of cloud computing is that there is a near infinite well of resources to draw from. You need more processing, you bring it online almost instantly. You need more disk storage, you take it. After all, there is plenty to go around. So how can responses that were snappy when the data center was in the basement become sluggish when the same facilities are in the cloud?
It all comes down to connectivity. Everybody knows that electrical signals move at the speed of light, right? You might guess that’s so fast that it shouldn’t make any difference if the wire is a hundred feet long or a thousand miles. At the speed of light a human can’t possibly detect the travel time of electrical impulses over wires and fiber optic cables. That’s right, isn’t it?
It sounds nice in theory, but in practice the speed of light is finite, and communication signals never reach it anyway. Remember that the oft-quoted speed of light applies in a vacuum: 186,000 miles per second when your laser beam is shooting through space. That works out to 186 miles per millisecond, or 10 milliseconds for 1,860 miles. Terrestrial signals can't even go that fast, because any physical medium slows them down. In fiber, you'll be lucky to travel 2/3 as fast, about a millisecond for every 124 miles.
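The arithmetic above is easy to sketch. Here's a minimal back-of-the-envelope calculation, assuming the common rule of thumb that signals in fiber travel at roughly 2/3 the vacuum speed of light (the exact fraction depends on the fiber):

```python
# Back-of-the-envelope propagation delay.
SPEED_OF_LIGHT_MPS = 186_000  # miles per second, in a vacuum
FIBER_FRACTION = 2 / 3        # assumed slowdown in optical fiber

def one_way_delay_ms(miles, fraction=FIBER_FRACTION):
    """One-way propagation delay in milliseconds over a given distance."""
    return miles / (SPEED_OF_LIGHT_MPS * fraction) * 1000

# 1,860 miles takes 10 ms in a vacuum, but about 15 ms in fiber:
print(round(one_way_delay_ms(1860, fraction=1.0), 1))  # 10.0
print(round(one_way_delay_ms(1860), 1))                # 15.0
```

Note that this is one-way propagation only; a request/response round trip doubles it, before any equipment delay is counted.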
Are we forgetting something? You bet. There's no such thing as communicating over one long strand of pure wire or fiber. There's circuitry at both ends, plus amplifiers, regenerators, add-drop multiplexers, and other equipment in between. Those add milliseconds, or tens of milliseconds, more.
That's still nothing compared to what happens when packets are routed on the Internet. They get from point A to point B all right, but they seldom go in a straight line. They hop from router to router to router and eventually reach the destination. There's no guarantee the next packet will take the same route as the last one. There's also no guarantee that a packet will even get there intact. Oh, one is missing? TCP will resend it and all will be well. The file being transferred will certainly be intact at the other end, but how long did it take to replace all the lost packets and wait out the traffic jams congesting certain nodes?
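To get a feel for how retransmissions compound latency, here's a deliberately simplified sketch. It assumes each lost packet costs at least one extra round trip before its replacement arrives; real TCP retransmission timers, windowing, and congestion control generally make the penalty worse, not better:

```python
# Rough best-case estimate of extra delay caused by packet loss,
# assuming one extra round trip per lost packet (a simplification --
# real TCP timeout and congestion behavior adds further delay).
def retransmit_penalty_ms(packets, rtt_ms, loss_rate):
    """Extra delay in milliseconds from retransmissions alone."""
    lost = packets * loss_rate
    return lost * rtt_ms

# 10,000 packets over a 40 ms round-trip path with 1% loss
# adds roughly 4 full seconds just replacing lost packets:
print(retransmit_penalty_ms(10_000, 40, 0.01))  # 4000.0
```

The point is that loss and latency multiply: shaving the round-trip time cuts not just the direct travel delay but every retransmission's cost as well.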
Cloud providers and companies sensitive to lag time, also known as latency, are taking a close look at colocation to have the shortest and most direct communications paths possible. A step beyond even standard colocation facilities is the new cloud exchange service from Telx. It’s branded cloudXchange and it may be the future of data centers.
Telx's breakthrough is to invite cloud service providers to move in with them, literally. A service provider can locate its infrastructure in any or all of Telx's 15 facilities. What it gains is access to a wealth of carriers that have established points of presence within the Telx facilities, plus major corporations, content delivery networks, and others who are just down the hall in the same building. For long-haul connections, Telx has access to low-latency fiber routes between data centers and to worldwide destinations.
This may be where we're all headed. Instead of every company having its own server racks connected directly to the corporate LAN, most infrastructure will be outsourced to a cloud service provider or colocated in the same building to form a hybrid cloud. User connectivity will be over high-speed dedicated lines, perhaps just to the nearest colo facility where service providers keep a portion of their infrastructure. A separate Internet access path will be available to browse the Web, share email with outsiders, and connect with consumers.
Are you a user or provider of cloud services who is unhappy with your network connections? Perhaps you can benefit from an upgrade to higher-bandwidth, lower-latency connectivity to get rid of the lags that are plaguing your business processes.