7-layer bean dip and 7-layer OSI network model – coincidence or design?
Where I grew up in the Southwestern U.S., there was a well-understood expression that someone “knows his/her beans”, a compliment that meant that he or she was knowledgeable, skilled, and versatile in some domain – ranging from auto repair or construction to, well, activities that actually dealt with beans such as farming and cooking.
In my analyst debriefings, for example, I’ve learned that industry analysts who cover the application space sometimes don’t fully grasp how the right network infrastructure is critical to clouds in general and applications in particular – and here’s where the beans come in. I’ve found that the best analogy for explaining why the network is crucial to clouds is a comparison to the classic American 7-layer bean dip.
Let’s start with the recipe: there are many out there but the most authentic versions have 7 clearly defined layers, such as the Southwest version shown here:
The purpose of having well-defined, discrete layers is so that the right mix of all 7 ingredients is layered onto each tortilla chip. The bean layer is not visible from the top but has a critical role. It acts as the foundation for all of the layers and keeps the “stack” – from olive to bean – intact on the chip.
The quality of each individual layer is important, but the foundation is critical. For example, while a very tasty bean layer can be made from pinto beans, black beans, or white beans, to my mind refried pinto beans provide the most robust and powerful foundation to support the 6 additional layers of goodness (an important point for later).
The Open System Interconnection model
While it may be coincidence (or the work of other fans of bean dip), the Open System Interconnection (OSI) networking model also has 7 layers.
As with bean dip, the application (“olive”) layer is the most visible and is the focus for many. Yet the capabilities – or limitations – of the underlying cloud network make or break cloud applications. Just as with bean dip, the answer of course is in the layers, especially the foundation!
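For reference, here are the seven OSI layers, bottom to top, paired with dip layers in the spirit of this analogy. Note that the post itself only names beans (layer 1), guacamole (layer 4), onions (layer 6), and olives (layer 7); the pairings for layers 2, 3, and 5 are my own playful guesses:

```python
# The seven OSI layers, bottom (1) to top (7), paired with bean-dip
# layers. Pairings for layers 2, 3, and 5 are illustrative guesses;
# the rest follow the analogy used in this post.
OSI_DIP_LAYERS = {
    1: ("Physical", "refried beans"),
    2: ("Data Link", "sour cream"),
    3: ("Network", "cheese"),
    4: ("Transport", "guacamole"),
    5: ("Session", "salsa"),
    6: ("Presentation", "onions"),
    7: ("Application", "olives"),
}

for num, (osi, dip) in OSI_DIP_LAYERS.items():
    print(f"Layer {num}: {osi} ({dip})")
```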
The Nokia network provides a globally consistent cloud foundation by “federating” across layers 1 through 3 – comprising data centers and public/private clouds. Further, it overcomes limitations in the application and presentation layers – for example, OpenStack® scalability, as explored in our white paper Taking OpenStack from Trial to Production. Without this capability, the software (or dip) stack falls apart.
Enterprises simply cannot provide a rich and consistent “taste” experience when the layers melt together and become silos that either compromise the user experience (which would be like getting nothing but a big gob of sour cream on your chip) or rob resources from the rest of the environment (like a dip without enough guacamole – ugh!). In the cloud context, this makes it hard to provide a repeatable structure that drives both improved market responsiveness and reduced costs.
Other network vendors (let’s call them white and black bean providers) restrict the enterprise’s flexibility by locking in the layer 4 vendors (think limiting the choice of guacamole to a single supplier) that the enterprise can leverage. As a result, the enterprise’s ability to bring a unique and competitive “recipe” to market is compromised, if not eliminated.
A standard workload should be completely portable from dish to dish (or from virtual machine to container). With other vendors’ approaches, different chips must be matched to the dip, or the chip literally breaks under the load. With Nokia’s approach (equivalent to having a strong tortilla chip for the dip), diverse virtualization and container environments are supported consistently, and workloads are completely portable even from a private to a public cloud (or from kitchen to picnic, so to speak).
For an enterprise to cost-effectively provide services at scale, processes must be automated and flexible. In dip terms, each layer must be automatically laid out in order, and the process must automatically accommodate different sizes and shapes of dishes. Nokia’s cloud approach uses declarative policies that are intelligently executed, such as “cover the bottom of the dish with a 1-inch layer of bean spread,” so that all containers have consistent resources.
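To make the “declarative” idea concrete, here is a minimal sketch (hypothetical names and logic, not Nokia’s actual API): the operator states the desired end state, and a reconciliation engine works out the steps, whatever the current state happens to be.

```python
# Minimal sketch of declarative resource policy (hypothetical, not a
# real Nokia API): declare the desired state, and let the engine
# compute the adjustments needed to get there.
def reconcile(desired: dict, current: dict) -> dict:
    """Return per-resource adjustments needed to reach the desired state."""
    actions = {}
    for resource, target in desired.items():
        have = current.get(resource, 0)
        if have != target:
            actions[resource] = target - have  # positive = allocate, negative = release
    return actions

# Policy: every container gets 2 vCPUs and 4 GiB of RAM, regardless of
# its current state ("cover the dish with a 1-inch bean layer").
policy = {"vcpus": 2, "ram_gib": 4}
print(reconcile(policy, {"vcpus": 1}))                # short on both resources
print(reconcile(policy, {"vcpus": 2, "ram_gib": 4}))  # already compliant: nothing to do
```

The point of the declarative style is that the same one-line policy handles every dish size: the engine, not the operator, figures out the per-container steps.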
At the onion / presentation layer (layer 6), Nokia’s cloud approach uses a universal communications platform so that the complexities of each device are removed from the application developer’s role. Further, Nokia’s cloud approach uses network templates so that application developers do not need to continually provide network definitions and security parameters. (This is somewhat equivalent to having containers with a built-in scale that helps the cook judge the relative volumes of ingredients needed, rather than having to measure each time.)
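A rough sketch of how such a network template might work (the field names and schema here are hypothetical, not a real Nokia format): the template bakes in network and security parameters once, and each application supplies only what is unique to it.

```python
# Sketch of a reusable network template (hypothetical fields, not a
# real Nokia schema). Shared network and security settings live in the
# template; applications override only what differs.
WEB_TIER_TEMPLATE = {
    "subnet": "10.0.1.0/24",
    "ingress_ports": [443],
    "default_policy": "zero-trust",  # deny unless explicitly allowed
}

def instantiate(template: dict, app_name: str, **overrides) -> dict:
    """Stamp out an app-specific network definition from a shared template."""
    return {**template, "app": app_name, **overrides}

print(instantiate(WEB_TIER_TEMPLATE, "storefront"))
print(instantiate(WEB_TIER_TEMPLATE, "admin-portal", ingress_ports=[443, 8443]))
```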
Also, without a rich security approach, “party crashers” (aka hackers) will dip in, so to speak. Nokia’s cloud approach leverages multi-layer security provisions – from physical encryption against wiretaps all the way up to a “zero trust” default policy – to guard against a variety of hacker attacks.
To wrap up this analogy, I hope you conclude that I “know my beans” and that the Nokia network is to cloud as beans are to bean dip! And if you relate to this approach, you may also be interested in reading my Enterprise Cloud TCO blog, which focuses on cost advantages.
To explore how the Nokia cloud can help your business, please contact our sales team.
Share your thoughts on this topic by replying below – or join the Twitter discussion with @nokianetworks and @nokia_cloud using #cloud #enterprise