Using OpenFlow – State of the Art
What is a Flow and the split between Hardware and Software
- Server Based Approach - Put software in the server itself to implement the rules. This is the easiest way, since the server already has to process the packets and can keep track of them. The issue with this approach arises when it is not the physical server but a virtual machine on that server that we want to track. We can still let the hypervisor track the packets, or ask the virtual machine to track them itself. The big disadvantage is that asking the server to do things on your behalf requires a certain level of trust (security holes and digital certificates come to mind), depends on the server's capability, and lowers performance. Since the hypervisor has to measure these things, it needs to see the packets, which makes hardware based virtualization (SR-IOV) hard to adopt. Most of the data center bridging and I/O virtualization standards are today moving toward a hardware based switch in the server, so doing these things in the software layers of the server is not going to remain possible.
- Server Based with H/W Offload - There is more talk around this than real implementation, but it is worth mentioning that people have discussed putting special capabilities in the server NICs to offload some flow processing. The advantages are performance and security (since the hypervisor controls the NIC, a virtual machine cannot circumvent it). The disadvantages are cost and scale. The chips capable of doing this (TCAMs etc.) are expensive, and trying to orchestrate flows across large numbers of servers severely limits scale. We are already seeing the Intel Sandy Bridge architecture come to life with integrated 10GigE NICs; adding TCAMs would increase the base cost by $800-900 and add significant complexity.
- Probe Based Approach - Have probes in the network do it. There are companies that specialize in inserting probes into the network and collecting data, and they can do this quite well, as long as you only want to observe things. If redirection or traffic shaping is needed, these passive probes will not work, and inserting them requires intrusive recabling. Not my favorite approach either.
- Switch Based Approach - Since all the traffic passes through the switches anyway, having them do it makes a lot more sense. Modern switch chips have hardware based CAMs and TCAMs which can take a rule and apply it without adding latency or sacrificing throughput on the packet stream. In my past life, as Architect of Solaris Networking and Network Virtualization, I built the software based approach, but given the growing virtual machine density, SR-IOV type features, and the growing need for analytics and traffic shaping at full performance, I think the switch based approach is far superior. Here, the CAM and TCAM that measure flows are the hardware pieces; the software piece adds and deletes rules on the fly, and OpenFlow provides a pseudo standard that lets a programmer program any switch (a small example follows this list). But the biggest advantages of this approach are scale, ease of use, and administrative separation. The scale comes from orchestrating your flows and policies across far fewer devices (one switch for approximately 50 servers). Also, the people in charge of networks and storage networks are at times different, and keeping the administrative separation is useful, although not required.
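To make the switch based approach concrete, here is a minimal sketch of what programming flow rules from software looks like. It is not tied to any particular vendor: it assumes an OpenFlow capable switch managed through the Open vSwitch ovs-ofctl utility, with a bridge named br0, and addresses and port numbers that are purely illustrative.

    # Match HTTPS traffic headed to a web server and steer it out port 2;
    # the switch TCAM applies the rule at line rate
    ovs-ofctl add-flow br0 "priority=100,tcp,nw_dst=10.1.1.5,tp_dst=443,actions=output:2"

    # Redirection, shaping or dropping are just different actions on a match
    ovs-ofctl add-flow br0 "priority=90,udp,nw_dst=10.1.1.6,actions=drop"

    # Read back the per-flow packet and byte counters the hardware maintains
    ovs-ofctl dump-flows br0

The counters that dump-flows returns are the hardware maintained flow statistics discussed above, which is what makes observation, redirection, and shaping possible without touching the servers at all.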
Now a little overview of the projects and people who are leading the charge in the brave new world of flows and Software Defined Networking. Before raking me over the coals for what is missing, let me clarify that the list below covers what I consider mainstream implementations that apply to the world of data centers today (Disclaimer: I have purposely left out most of the research efforts that did not reach a mainstream product, since there are too many):
- The discussion has to start with project Crossbow, which I believe is the first flow implementation with a dedicated H/W resources approach; it was available in OpenSolaris in 2007 and finally shipped in Solaris 11 (delayed courtesy of the Oracle/Sun merger). The virtual switching in host and H/W based patents (7613132, 7643482, 7613198, 7499463, etc.) were filed by me and fellow conspirators from 2004 onwards and awarded from 2009 onwards. Keep in mind that when Crossbow had virtual switching with a H/W classifier running in OpenSolaris, Xen and the like were just coming out with S/W based bridging. Two commands, flowadm and dladm, allow users to create flows and S/W or H/W based virtual NICs that can be assigned to virtual machines (a short example follows this list). This is the Server Based Approach that ships in a mainstream OS and is pretty widely deployed.
- A similar approach has been adopted by our fellow company Nicira in the form of their NVP architecture. They enhanced the offering by allowing an OpenFlow based orchestrator to control the virtual switching in the host, although their focus has primarily been on the virtualization side and not so much on the application flow side.
- Another of our sister and partner companies, Big Switch Networks, has taken a hybrid approach of orchestrating any OpenFlow capable device, whether a physical switch or a virtual switch inside a hypervisor. Since they are still in partial stealth, it would not be my place to share more details.
- Obviously, every existing network vendor claims to be working on SDN and OpenFlow. But by definition, SDN requires programmability and operating systems to run your programs on, and most of the existing network vendors lack the know-how or the ability to deliver that. They do have rich bank balances, and if they can acquire the right companies and leave them alone, they can potentially bridge the chasm (although it is going to be painful).
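Since the Crossbow entry above mentions flowadm and dladm, here is a short sketch of what the server based approach looks like on a Solaris 11 host. The physical link name net0, the flow and VNIC names, and the bandwidth caps are made-up values for illustration:

    # Create a virtual NIC on physical link net0, capped at 300 Mbps;
    # the VNIC can then be assigned to a virtual machine or zone
    dladm create-vnic -l net0 -p maxbw=300M vnic0

    # Create a flow for local HTTPS traffic with its own bandwidth limit
    flowadm add-flow -l net0 -a transport=tcp,local_port=443 -p maxbw=100M httpsflow

    # Inspect the flows and their properties
    flowadm show-flow
    flowadm show-flowprop httpsflow

This is the host side analogue of what the switch based approach does in TCAMs: a classifier in the NIC or in software separates out the flow, and the flow properties shape it.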