Startup proposes fiber-based Glass Core as a bold rethink of data center networking

Software Defined Networking (SDN) challenges long-held conventions, and newcomer Fiber Mountain wants to use the SDN momentum to leapfrog ahead and redefine the fundamental approach to data center switching while it’s at it. The promise: 1.5x to 2x the capacity for half the price.

How? By swapping out traditional top-of-rack and other data center switches for optical cross-connects that are all software controlled. The resultant “Glass Core,” as the company calls it, provides “software-controlled fiber optic connectivity emulating the benefits of direct-attached connectivity from any port … to any other server, storage, switch, or router port across the entire data center, regardless of location and with near-zero latency.”

The privately funded company, headed by founder and CEO M. H. Raza, whose career in networking includes stints at ADC Telecommunications, 3Com, Fujitsu BCS and General DataComm, announced its new approach at Interop in New York earlier this week. It’s the kind of bold rethinking of basic data center infrastructure that you don’t see too often.

“Their value proposition changes some of the rules of the game,” says Rohit Mehra, vice president of network infrastructure at IDC. “If they can get into some key accounts, they have a shot at gaining some mind share.”

Raza says the classic approach to networking data center servers always results in “punting everything up to the core” – from top-of-rack switches to end-of-row devices and then up to the core and back down to the destination. The layers add expense and latency, which Fiber Mountain wants to address with a family of products designed to avoid as much packet processing as possible by establishing what amounts to point-to-point fiber links between data center ports.

“I like to call it direct attached,” Raza says. “We create what we call Programmable Light Paths between a point in the network and any other point, so it is almost like a physical layer connection. I say almost because we do have an optical packet exchange in the middle that can switch light from one port to another.”

That central device is the company’s AllPath 4000-Series Optical Exchange, which has fourteen 24-fiber MPO connectors and supports up to 160×160 10G ports. A 10G port requires a fiber pair, and multiple 10G ports can be ganged together to support 40G or 100G requirements.
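The fiber arithmetic behind those figures is straightforward, and the sketch below simply restates it (illustrative only, since the exact internal port mapping isn’t published):

```python
# Back-of-the-envelope port math for the AllPath 4000-Series figures quoted above.
connectors = 14                  # 24-fiber MPO connectors on the chassis
fibers_per_connector = 24
total_fibers = connectors * fibers_per_connector      # 336 fibers
fibers_per_10g_port = 2                               # each 10G port uses a Tx/Rx pair
max_10g_ports = total_fibers // fibers_per_10g_port   # 168 pairs, in line with "up to 160x160"
lanes_per_40g = 4                                     # e.g., 40G as four ganged 10G lanes
print(total_fibers, max_10g_ports, max_10g_ports // lanes_per_40g)  # 336 168 42
```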

The 4000 Exchange is connected via fiber to any of the company’s top-of-rack devices, which are available in different configurations, and all of these devices run Fiber Mountain’s Alpine Orchestration System (AOS) software.

That allows the company’s homegrown AOS SDN controller, which supports OpenFlow APIs (but is otherwise proprietary), to control all of the components as one system. Delivered as a 1U appliance, the controller “knows where all the ports are, what they are connected to, and makes it possible to connect virtually any port to any other port,” Raza says. The controller “allows centralized configuration, control and topology discovery for the entire data center network,” the company reports, and lets administrators define Programmable Light Paths between ports.
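Fiber Mountain hasn’t published the AOS controller interface, but conceptually the any-port-to-any-port model resembles the sketch below, in which the controller URL, endpoint, and field names are hypothetical stand-ins rather than the actual API:

```python
import requests  # generic REST client; the API shown here is hypothetical

# Hypothetical sketch of asking a central SDN controller to cross-connect two
# ports it has discovered in its topology. None of these names come from
# Fiber Mountain's documentation.
CONTROLLER = "https://aos-controller.example.com/api/v1"

def create_light_path(src_port: str, dst_port: str, token: str) -> dict:
    """Request a point-to-point optical path between two data center ports."""
    resp = requests.post(
        f"{CONTROLLER}/light-paths",
        headers={"Authorization": f"Bearer {token}"},
        json={"source": src_port, "destination": dst_port},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Example (hypothetical port names): connect a server port on rack 3 to a
# storage port on rack 7 without hauling the traffic up to a legacy core.
# create_light_path("rack3-tor1:port12", "rack7-tor2:port4", token="...")
```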
How do the numbers work out? Raza uses a typical data center row of 10 racks of servers as the basis for comparison. The traditional approach:

Each rack typically has two top-of-rack switches for redundancy, each of which costs about $50,000 (so $100,000/rack, or $1 million per row of 10 racks).
Each row typically has two end-of-row switches that cost about $75,000 each (another $150,000).
Cabling is usually 5% to 10% of the cost (10% of $1.15 million adds $115,000).
Total: $1.265 million

With the Fiber Mountain approach:
Each Fiber Mountain top-of-rack switch has enough capacity to support two racks, so a fully redundant system for a row of 10 racks is 10 switches, each of which costs $30,000 ($300,000).
The 4000-Series core device set up at the end of an aisle costs roughly $30,000 (and you need two, so $60,000).
Cabling is more expensive because of the fiber used, and while it probably wouldn’t be more than double the expense, for this exercise Raza says to use $300,000.

Total: $660,000. That’s about half, and it doesn’t include the savings that would come from reducing demands on the legacy data center core now that you aren’t “punting everything up” there all the time.
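Written out, the comparison is simple arithmetic; the snippet below just restates the figures Raza quotes above:

```python
# Raza's example: one row of 10 server racks, using the figures quoted above.

# Traditional approach
traditional_tor = 10 * 2 * 50_000        # two $50K top-of-rack switches per rack = $1,000,000
traditional_eor = 2 * 75_000             # two end-of-row switches = $150,000
traditional_cabling = 0.10 * (traditional_tor + traditional_eor)              # ~$115,000
traditional_total = traditional_tor + traditional_eor + traditional_cabling   # $1,265,000

# Fiber Mountain approach
fm_tor = 10 * 30_000        # 10 top-of-rack switches (each covers two racks, doubled for redundancy)
fm_core = 2 * 30_000        # two 4000-Series exchanges at the end of the aisle
fm_cabling = 300_000        # Raza's working number for fiber cabling
fm_total = fm_tor + fm_core + fm_cabling   # $660,000

print(traditional_total, fm_total, round(fm_total / traditional_total, 2))  # 1265000.0 660000 0.52
```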

What’s more, Raza says, “besides lower up-front costs, we also promise great Opex savings because everything is under software control.”

No one, of course, rips out depreciated infrastructure to swap in untested gear, so how does the company stand a chance at gaining a foothold?

Incremental incursion.
Try us in one row, Raza says. Put in our top-of-rack switches, connect the server fibers and the existing top-of-rack switch fibers to them, and connect our switches to one of our cores at the end of the aisle. “Then, if you can get somewhere on fiber only, you can achieve that, or, if you need the legacy switch, you can shift traffic over to that,” he says.

Down the road, connect the end-of-aisle Glass Core directly to other end-of-row switches, bypassing the legacy core altogether. The goal, Raza says, is to direct-connect racks and start to take legacy switching out.

While he is impressed by what he sees, IDC’s Mehra says “the new paradigm comes with risks. What if it doesn’t scale? What if it doesn’t do what they promise? The question is, can they execute in the short term? I would give them six to 12 months to really prove themselves.”

Raza says he has four large New York-based companies considering the technology now, and expects his first deployment to be later this month (October 2014).

