Every so often, we witness a major shift in the networking industry that fundamentally changes the landscape, including product portfolios and investment strategies. The recent advancements in 25G Ethernet, specifically at Top-of-Rack (ToR) server interfaces, can be considered one such occurrence. As we will see, there is good reason why 25GbE ports are poised to become the most prevalent data center server access ports in the next 5 years.
So what is really causing this transition and why is everyone so excited about it? How does it affect your Data Center strategy? Is there a seamless migration path? What about your existing cabling infrastructure? How much will it cost? Will the cost/performance ratio justify your new investment? What is Cisco’s strategy moving forward?
These are all relevant questions one must ask before jumping on the 25G bandwagon. Let us take a closer look at this new movement and help address all of the questions above.
Why now?
The new application-driven world has taken the concept of business continuity to a completely new level. Customers demand applications around the clock. Virtualization is the new normal. Big data is more mainstream than it has ever been. Applications grow more distributed by the minute. A multi-cloud data center strategy is no longer just a nice-to-have. These advancements have made applications more demanding than ever. Isn't it time for the good old network to keep up?
Let us add a little more perspective. On one hand, consider background tasks such as application and network snapshots, application backups, and workload migration for high availability and load balancing. The distributed nature of modern applications has increased this background traffic on top of the real application traffic traversing the network fabric. On the other hand, advancements in server technology have made servers better and faster; they have no problem keeping up with the migration to 100G spine switches.
Therefore, the challenge lies in ensuring that the server-leaf Ethernet downlink connections do not become bottlenecks when the spine switches migrate to 100G.
First, let us address the elephant in the room
Capital expenditure is one of the biggest considerations in the adoption of any new data center technology. The one aspect data center operators dread most is cabling; some consider it to be among the most cumbersome and riskiest tasks in data center management. Most prefer to install cables once and never touch them again unless they must! 25GbE allows for a seamless migration from 10GbE without the need to touch those much-dreaded cables.
Most 10-Gbps data centers use Small Form-Factor Pluggable (SFP+) optical transceivers. For those considering a 25G upgrade, the good news is that, for the most part, the fiber already installed will continue to work at the new speed. You will only need to upgrade the optical transceivers to support the faster 25-Gbps rate.
The case for 25GbE
Until recently, as shown in Table 1, 10 Gbps had been the building block for all higher Ethernet speeds. The fundamental component was the Serializer/Deserializer (SerDes), which operated with a clock speed of about 12.5 GHz to provide the 10-Gbps transfer rate. For example, a 40 Gigabit Ethernet interface is constructed of four parallel SerDes links. Similarly, 100 Gigabit Ethernet interfaces were initially constructed from ten parallel 10-Gbps streams.
Table 1 Ethernet speeds
Port speed (Gbps) | Lane speed (Gbps) | Lanes per port
10 (current)      | 10                | 1
40 (current)      | 10                | 4
25 (new)          | 25                | 1
40 (new)          | 20                | 2
100 (new)         | 25                | 4
In the past several years, rapid technology advancements have made 25-GHz SerDes links economically viable. As of June 2016, 25 Gigabit Ethernet equipment is available on the market using the SFP28 and Quad SFP28 (QSFP28) transceiver form factors.
Thus, data can now be pushed across each serial link 2.5 times as fast as across 10-Gbps interfaces based on the 12.5-GHz clock speed. The new 25 Gigabit Ethernet ports can still operate at 10 Gigabit Ethernet, but at that speed they would not use the full capacity of the connection.
40 Gigabit Ethernet can then be built from two parallel 20-Gbps SerDes links, rather than the four 10-Gbps lanes required today. Similarly, 100 Gigabit Ethernet will quickly move away from the current 10-lane implementations to four 25-Gbps lanes. This reduction in lane count ultimately translates into less cabling between devices, leading to greater efficiency at lower cost: a win-win situation.
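The lane arithmetic behind Table 1 and the paragraph above can be sketched in a few lines of Python (an illustrative sketch; the function name is my own, not from any networking library):

```python
# Port speed is simply the per-lane SerDes speed times the number of
# parallel lanes that make up the port.

def port_speed_gbps(lane_speed_gbps: float, lanes: int) -> float:
    """Aggregate port speed from parallel SerDes lanes."""
    return lane_speed_gbps * lanes

# Current generation, built from 10-Gbps lanes:
assert port_speed_gbps(10, 1) == 10    # 10GbE: one lane
assert port_speed_gbps(10, 4) == 40    # 40GbE: four lanes
assert port_speed_gbps(10, 10) == 100  # first-generation 100GbE: ten lanes

# New generation, built from 25-Gbps lanes:
assert port_speed_gbps(25, 1) == 25    # 25GbE: one lane
assert port_speed_gbps(25, 2) == 50    # 50GbE: two lanes
assert port_speed_gbps(25, 4) == 100   # 100GbE: four lanes

# Moving 100GbE from ten 10-Gbps lanes to four 25-Gbps lanes cuts the
# parallel paths (and the cabling between devices) per port by:
reduction = 1 - 4 / 10
print(f"{reduction:.0%} fewer lanes per 100GbE port")  # prints "60% fewer lanes per 100GbE port"
```

The same single-lane-to-multi-lane pattern explains why a 25-Gbps SerDes is such an attractive building block: one lane already exceeds 10GbE, and only four lanes are needed to reach 100GbE.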
The Cisco Advantage
Switches ready to support 25 and 50 Gigabit Ethernet have been a main focus in the development of next-generation Cisco Nexus® switches. The launch of the Cisco Nexus 93180YC-EX Switch, the industry's first 25-Gbps-capable 1-Rack-Unit (1RU) switch, underscores that focus. What makes this achievement especially significant is that the 93180YC-EX was built using the home-grown Cisco® Cloud Scale Application-Specific Integrated Circuit (ASIC), leveraging 16-nanometer (nm) technology. Since then, Cisco has aggressively continued to build out its 25 and 50 Gigabit Ethernet switch portfolio. The new Cisco Nexus 93180YC-FX Switch supports the full IEEE requirements for 25 Gigabit Ethernet. The advancements in 16-nm technology have enabled Cisco to differentiate itself from its competitors: by adding flow tables, unified ports, congestion control, and other features to its chips, Cisco can offer more to customers at an affordable price.