Anticipating the next-gen data center
Agility and flexibility are two of the most popular words to describe the attributes expected from IT in helping achieve future business objectives. But how do you apply those attributes to what many large enterprises still consider the linchpin of IT infrastructure – the data center?
There are not yet many companies like Condé Nast, which recently shuttered its data center to go “all in with the cloud.” Let’s face it: if you’re a content company, albeit one of the select few with a still-thriving print business, moving to an all-cloud strategy makes a lot of sense.
For just about any other industry, cloud may drive new growth and innovation, but the bulk of business is still dependent on heavy-duty data center servers and applications to run the daily operations.
For many, going “all in” on cloud is an unachievable goal, at least in the short term. Why disrupt transactional systems that are working well, only to face the reliability and performance questions that must be resolved to migrate them to the cloud? You can use online tools to calculate the ROI of new cloud projects, but how do you calculate the potential disruption to mission-critical applications?
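To make that point concrete, the arithmetic behind such ROI tools can be sketched in a few lines. The figures and the function below are hypothetical, chosen only for illustration; the article’s caveat is precisely that the one term this math leaves out, disruption to mission-critical systems, is the hardest to quantify.

```python
# Simplified cloud-migration ROI estimate (illustrative only).
# Hypothetical inputs; real calculators weigh many more factors
# (migration labor, refactoring, downtime risk), some of which
# resist quantification entirely.

def simple_cloud_roi(annual_onprem_cost, annual_cloud_cost,
                     migration_cost, years=3):
    """Return ROI as a fraction of the one-time migration cost."""
    savings = (annual_onprem_cost - annual_cloud_cost) * years
    net_gain = savings - migration_cost
    return net_gain / migration_cost

# Example: $500k/yr on-prem vs. $350k/yr cloud, $200k one-time migration.
roi = simple_cloud_roi(500_000, 350_000, 200_000, years=3)
print(f"3-year ROI: {roi:.0%}")  # 3-year ROI: 125%
```

A spreadsheet gives the same answer; the point is that “unknown potential disruption” never appears as a line item in it.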
CIOs are, by nature, fairly cautious, and few are willing to gamble on putting all enterprise applications in the cloud at this stage. Meeting the challenge of tomorrow, however, doesn’t necessarily mean abandoning what is working well today. Enterprises can cost-effectively implement phased network architecture upgrades that enable new levels of application flexibility and business agility.
Many have reduced data center costs by consolidating applications onto fewer servers, and have cut licensing fees and other costs by migrating to a Software-as-a-Service (SaaS) model. This highly virtualized, services-on-demand model is the foundation on which future cloud efforts will build.
As this migration to a virtualized environment continues, the underlying network architecture should also evolve. You wouldn’t want a Boeing 777 to rely on the hydraulics controls of an earlier era, nor should you expect the cloud-based enterprise to perform well on networking that hasn’t evolved to meet new needs and expectations.
Existing network designs must adapt gracefully, one rack at a time, and the network infrastructure must be flexible enough to support both dedicated and virtualized hardware. Each organization needs to determine when, and how far, to converge IP and Fibre Channel traffic; in some cases, convergence may not make sense for applications that require assured high availability.
Data center evolution without revolution does, however, require decisions about the number of layers in each network, the number of switching tiers in each layer, and the management model for virtualization and cloud computing services. The key is to move toward a target design along a well-planned path, using incremental steps to control risk.
Ultimately, this path will lead many to conclude that a virtualized environment requires a flatter network, one that accommodates traffic flowing within and between servers, something that can be achieved with Ethernet Fabrics.