As I wrap up another week of meetings with customers and partners across the US, I look down at my boarding pass and reflect on the viewpoints I'll take with me on my next trip. After this latest trip, I wanted to share some insight into a commonly ignored portion of the data center: network visibility, and in particular visibility into application traffic.
The Big Network Visibility Problem
After speaking with partners and customers about their current application workloads and the business drivers behind their information technology investments, I’ve started to see a common theme:
Most organizations have limited to no idea of what traffic is traversing their network.
The first time it really hit me, I realized how big of a problem this is. As technologists, we all spend countless hours researching and testing the newest data center technologies designed to make applications run more efficiently. Our end goal has always been to design networks that serve applications: providing IP addressing and Layer 2 adjacencies for high availability or fault tolerance to keep these applications running as close to 100% of the time as possible, regardless of location.
Until recently, little attention has been paid to what these applications are doing aside from what network segments they needed access to or where these workloads would go in a disaster recovery (DR) scenario. Very few customers have considered the intricate application flows that link multi-tiered applications, hyperconverged infrastructures, big data analytics pods or virtual desktop deployments. When I asked these customers about how they ensure that these applications are performing properly, many replied that they didn’t have any visibility whatsoever. A minority had rather expensive monitoring investments to provide some visibility – but extensive gaps still remained.
This lack of data center network visibility not only makes it extremely difficult to troubleshoot application issues; it also makes it impossible to understand which applications are in use, and to what extent, across the data center. It is likewise nearly impossible to map the dependencies that different hosts have on the rest of the data center. This living, breathing data center network that we are all hesitant to change may be hiding information we never knew was important.
If you can’t see what’s going on behind the (hopefully green) flashing LEDs on your network switches, where do you start? Let’s explore the monitoring space and where there might be some room for innovation.
How It’s Done Today
In today’s world, network visibility depends heavily on replicating network data of interest out to separate monitoring infrastructure. Traditionally this has been done with switch-based monitoring (SPAN) sessions, where the network switch is configured to replicate desired traffic to a monitoring or packet broker infrastructure. Even with this type of solution, there is still a substantial gap in the traffic that can be analyzed because of hardware limits on how many replication sessions can run at any one time; common top-of-rack switches today support a maximum of four replication sessions. (To get around these limits, physical Layer 1 taps are sometimes used, but they are cumbersome and require a substantial investment in additional hardware.)
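To make this concrete, a switch-based monitoring session is typically configured along these lines (NX-OS-style syntax shown purely for illustration; the interface and VLAN numbers are hypothetical):

```
monitor session 1
  source interface ethernet 1/1 both
  source vlan 100 rx
  destination interface ethernet 1/48
  no shut
```

Every source added to a session draws from the same small pool of hardware replication resources, which is exactly where that four-session ceiling starts to bite.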
Regardless of how traffic is extracted from the network, this approach also wastes monitoring resources because it is usually not possible to filter for only the interesting application traffic. Traditionally these replications are based on physical ports, port aggregations or VLANs. If a network administrator is interested in only a specific application or host, the entire VLAN or port must still be replicated and the traffic filtered at the monitoring device, wasting valuable monitoring link throughput on traffic that is not interesting. To scale these environments, packet brokers are normally deployed to make efficient use of limited and costly monitoring ports.
Another option that attempts to design around these limits is a specialized, hardware-based approach to flow visibility. Solutions on the market today generate flow data natively in an open-standards format known as IPFIX. This approach requires three components: a flow exporter that takes interesting packets and aggregates them into flows; a flow collector that receives and processes those flows; and an analysis application that presents the collected flow data to the end user. Some network switches can natively act as flow exporters, but most data center solutions only provide a sampled export of this data, so missing crucial data is a very real possibility. With all of this specialized hardware, a significant investment is needed, not only in network switching that supports this functionality but also in the separate, dedicated infrastructure to collect and analyze the data. This discussion won’t get into the costs behind this investment, but it’s safe to say you can expect to spend at least $100k USD to get started (with $500k USD not being atypical).
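At its core, what a flow exporter does is simple: it collapses individual packets into per-flow records keyed by the classic 5-tuple. Here is a minimal sketch of that aggregation step in Python (field names and packet data are hypothetical; a real IPFIX exporter also tracks timestamps, TCP flags, and export templates per RFC 7011):

```python
from collections import defaultdict

def aggregate_flows(packets):
    """Collapse packets into flow records keyed by the 5-tuple:
    (source IP, destination IP, protocol, source port, destination port)."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for pkt in packets:
        key = (pkt["src"], pkt["dst"], pkt["proto"], pkt["sport"], pkt["dport"])
        flows[key]["packets"] += 1
        flows[key]["bytes"] += pkt["bytes"]
    return dict(flows)

# Three packets; the first two belong to the same HTTPS flow.
packets = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "proto": "tcp", "sport": 51000, "dport": 443, "bytes": 1500},
    {"src": "10.0.0.1", "dst": "10.0.0.2", "proto": "tcp", "sport": 51000, "dport": 443, "bytes": 900},
    {"src": "10.0.0.3", "dst": "10.0.0.2", "proto": "udp", "sport": 5353, "dport": 53, "bytes": 120},
]

flows = aggregate_flows(packets)
for key, stats in flows.items():
    print(key, stats)
```

The sampled export problem mentioned above is equivalent to feeding only every Nth packet into this loop: the flow records still appear, but their packet and byte counts understate reality.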
Applications are critical to keeping the business running. Lack of visibility shouldn’t be accepted. There has to be a better way!
What if there was a solution that let you keep all of your existing network investments and choose how to get interesting packets off the network, eliminating packet brokers and giving you an easy, single dashboard to analyze this data?
The Pluribus Networks Visibility Solution: Virtualization Centric Fabric
One of the core tenets of the Pluribus VCF architecture is Insight. Insight is our approach to bringing application visibility into the data center without mandatory packet brokers or analysis tools. In fact, if you have any existing investments in network switching, network taps or packet brokers, use them!
If you’ve been following our blog and website over the last few months, you have seen updates around our recent announcements in the network visibility and monitoring space. In fact, the Pluribus team recently came back from Interop 2016 Las Vegas with a “Best of Interop” win in the Performance, Management and Monitoring category. VCF Insight Analytics (VCF-IA) is the newest product release that earned this honor, but the award showcased what the entire Pluribus architecture can do.
The Network is where it’s at
Our belief is that the best place to see what’s traversing the network is…the network! Pluribus’ VCF architecture operates at the flow level and lets you analyze packets regardless of their source. Paired with our award-winning VCF-IA software, this application flow data is searchable and filterable, and it contributes to a business-centric snapshot of the network, your applications, and your business goals. With our newest release, you can now classify applications according to your business needs. Want to see the whole end-to-end picture of what traffic is traversing the network? Check out our Dashboard. Want to filter by business initiative across network or application resources? Curious what traffic patterns looked like over the past 24, 36 or more hours? Would it be helpful to see the top talkers or top applications on the network? Check out our Projects and Reporting features.
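To give a feel for the kind of question a flow-level dashboard answers, a “top talkers” view is, at heart, just a sum-and-sort over aggregated flow records. A minimal sketch (the flow records and field names here are hypothetical, not VCF-IA’s actual data model):

```python
# Hypothetical per-host flow records, as a flow-analytics tool might store them.
flows = [
    {"host": "10.0.0.1", "app": "https", "bytes": 4_200_000},
    {"host": "10.0.0.2", "app": "nfs",   "bytes": 9_800_000},
    {"host": "10.0.0.1", "app": "dns",   "bytes": 12_000},
    {"host": "10.0.0.3", "app": "https", "bytes": 1_100_000},
]

def top_talkers(flows, n=2):
    """Sum bytes per host and return the n busiest hosts, busiest first."""
    totals = {}
    for f in flows:
        totals[f["host"]] = totals.get(f["host"], 0) + f["bytes"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

print(top_talkers(flows))
```

The hard part isn’t this arithmetic; it’s getting a complete, unsampled set of flow records to run it over, which is exactly the gap the fabric-level approach is meant to close.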
Deploy Pluribus VCF Insight Analytics any way you like it
What’s great about the Pluribus VCF architecture, now that VCF-IA is available, is that network monitoring no longer has to be a nice-to-have; it can be a standard feature of your data center. It can even be part of your data center strategy, regardless of when you deployed your last network switch, without fear of major expenditure. Our VCF architecture allows flows to be analyzed from any source without mandating a particular topology. This means you can deploy a VCF fabric as your Top of Rack, End of Row, Spine or Leaf, or even totally outside the data center traffic path as a flow aggregator.
Data center just refreshed? No Problem!
What if you just refreshed your data center in the last few months? No problem, leave everything where it is! The Pluribus VCF architecture allows you to deploy one or many collector appliances to gather important flow data using your existing investments, letting you select the most important sources to analyze with VCF and VCF-IA. All of this data is aggregated and presented to VCF-IA so that you can troubleshoot network connectivity, application flow dependencies and client-server transactions, as well as the performance of specific applications such as VDI, hyperconverged IP storage or even Big Data Hadoop workloads. We even partner with Nutanix to map numerous flows in their architecture into our VCF-IA dashboards. With this level of detail, the possibilities are almost endless.
If you’re considering a data center refresh, I encourage you to ask what you’re getting from the next generation of hardware. Instead of refreshing ports to faster speeds and bigger tables like we’ve all come to expect every 36-48 months, consider what additional features and functionality you can get without ripping and replacing any existing investments. Remember, the same VCF architecture that can collect flow data from your existing network can expose 100% of the flows to and from your servers if deployed in-line at the top of rack or leaf, while still leveraging all of your existing network investments.
Give VCF Insight Analytics a Try!
If you haven’t yet seen VCF-IA, I encourage you to have a closer look and to request a demonstration!
If you’d like a custom demonstration or just want to chat about your thoughts on our approach to network visibility, I’d love to hear from you! Drop me a note at email@example.com.