Coming of age: the new economics of 100GbE
Increased demand and new industry innovations bring
cost-effective 100-gigabit networking to data centers
Introduction
As processor speeds increase and core counts continue to grow, the
need for faster transfer of data to and from the local fabric is
also accelerating. Until recently, 100 Gigabit Ethernet (100GbE)
networking has been dedicated largely to client connections
to service provider networks such as research centers and
telecom carriers. Now, 100GbE is poised to fill the need for
faster data transfer more affordably and in a wider range of
organizations than ever before.
Several new developments are contributing to the reshaping of
data center networks:
• New silicon: ASIC development now allows high-density
100GbE switches in very affordable form factors.
• New optics:
-- New 100GbE optical modules use an x4 electrical interface.
-- Multi-Mode Fiber (MMF) optics have been reduced from
10 fibers (100GBASE-SR10) to 4 fibers (100GBASE-SR4),
allowing the reuse of current 40GbE optical plans to
preserve infrastructure investments.
-- New 100GbE Single-Mode Fiber (SMF) optics using x4
electrical interface consume less power than currently
deployed solutions.
• New capabilities: New switches offer the ability to run five
different high-capacity speeds — 10GbE, 25GbE, 40GbE, 50GbE,
100GbE — off a 100GbE port, providing the flexibility to meet a
wide range of emerging needs.
This paper examines each of these developments and shows how
they are working together to change the networking landscape.
Evolving networking demands
Sidebar: 100GbE market drivers
Factors contributing to an
expanded 100GbE ecosystem
include:
• 25/40/50 network interface cards
(NICs): 25/40/50GbE adoption
on servers will drive the need for
100GbE uplinks at the top of the
rack
• Internet2: The Internet2 backbone
already runs at 100GbE
• Optics: Cost of optics for 100GbE
is no longer a barrier as newer four-lane 1x100GbE QSFP28 transceivers
become readily available from
numerous optics vendors in the
industry this year
• Silicon: 32x100GbE switching
bandwidth is now available on a
single chip—this bandwidth can
also be configured in multiple rates
in any combination of 10, 25, 40, 50
or 100Gb Ethernet.
In today’s data center, growing data
volumes, different types of traffic,
widespread use of virtualization and
other factors have led to far greater
east-west traffic. That is, more data is
staying in the data center, moving back
and forth among servers. Organizations
are also getting more usage from each
server, resulting in increased traffic flow.
This trend toward greater east-west
traffic will continue to become more
pronounced as workloads grow larger.
Organizations considering 100GbE tend
to arrive at a decision point from either
a top-down or bottom-up perspective.
The top-down scenario is typical of
enterprises that are pushing more traffic
into their enterprise and private data
centers. With 100GbE coming into
the enterprise, bigger “pipes” are also
required as data moves down into the
fabric or top of rack.
The bottom-up discussion often occurs
with large Web 2.0 companies that
live and die by internet activity. These
organizations need increasingly more
bandwidth from the server to top of rack,
and 100GbE from top of rack into the
fabric. It follows that all pipes from the
fabric up must also be at least 100GbE.
Movement from 10-lane to
4-lane technology
100GbE has been available for some
time, but has not been economically
feasible except for extreme uses
such as research at Internet2-level
organizations. The cost of fiber optic
modules for 100GbE has been a key
inhibitor to greater adoption. However,
optics are becoming more affordable.
Organizations considering 40GbE today
could soon step up to 100GbE for a very
modest price point jump. Only a few
months ago, that price point increase
would have been much larger (a factor
of 10).
The cost factor is changing because,
with new industry options, the optic
technology itself is changing. Until
recently, most available 100GbE
implementations used 10 lanes of
10.3125 Gb/s. The QSFP28 form factor
instead uses 4 lanes of 25.78125 Gb/s to
achieve 100GbE aggregated throughput.
These lanes may also function as 4
independent ports supporting 25GbE
(for copper and MMF optics, or perhaps SMF
optics).
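To make the lane arithmetic concrete, here is a minimal sketch (not from the paper; the 64b/66b line-coding figure is the standard Ethernet value) showing that both lane schemes deliver the same 100 Gb/s of payload:

```python
# Lane arithmetic for 100GbE: both the legacy 10-lane and the QSFP28
# 4-lane implementations carry 100 Gb/s of payload after 64b/66b coding.

def payload_gbps(lanes: int, lane_rate_gbps: float) -> float:
    """Aggregate payload rate of a multi-lane Ethernet interface."""
    # 64b/66b line coding: 64 payload bits per 66 bits on the wire
    return lanes * lane_rate_gbps * 64 / 66

print(payload_gbps(10, 10.3125))   # legacy 10 x 10.3125 Gb/s -> 100.0
print(payload_gbps(4, 25.78125))   # QSFP28 4 x 25.78125 Gb/s -> 100.0
```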
The new economics of connectivity
While some legacy connectors should
continue to sell in moderate volumes, four-lane
variants like QSFP28 are the wave of
the future because they work more
economically in several ways:
• Recent innovations make four-lane 25Gb/s transceivers less expensive than ten-lane 10Gb/s designs, because the transceiver is simpler and less costly to manufacture.
• The power required to run that
transceiver is much less than required for
a typical 10-lane transceiver.
• Fiber cabling is less expensive in the case of SR4 (100m) or PSM4 (500m) optics because 4 fiber pairs are required instead of the 10 used by 10x10G technology (see the sketch after this list).
• QSFP28 provides an option for direct
attach copper cables (DACs) and active
optical cables (AOCs), helping to further
reduce the cost of 100GbE deployments,
while legacy variants require fiber for all
runs.
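As a small illustration of the cabling point above, here is a sketch using the standard parallel-fiber layouts, in which each lane occupies one transmit fiber and one receive fiber:

```python
# Parallel-fiber counts for multi-mode 100GbE optics: each lane needs
# one TX fiber and one RX fiber.

def fibers_required(lanes: int) -> int:
    return lanes * 2  # one transmit + one receive fiber per lane

print(fibers_required(10))  # 100GBASE-SR10: 20 fibers
print(fibers_required(4))   # 100GBASE-SR4:   8 fibers, the same layout as
                            # 40GBASE-SR4, so existing 40GbE plant is reusable
```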
With the QSFP28 form factor, the four
lanes can be used exclusively or in
combination to enable one or more 10,
25, 40 or 50GbE network connections.
Discussions with Dell customers that
previously centered on 10, 40 and
100GbE networking have expanded to
include 25 and 50GbE technology.
Many mainstream organizations are
beginning to see a need for 25GbE
capacity for server to switch connectivity.
The new multi-rate switches are expected
to serve this market segment well, with
25GbE price points that are closer to
today’s 10GbE than to 40GbE prices.
For only a modest increase in price,
organizations can choose 25 instead of
10GbE switch-to-server connections.
Organizations with large numbers
of high-output servers in their data
center will be the first to find 50GbE
financially compelling. The new 50GbE
solutions can be implemented with
emerging 100GbE multi-rate switches
and the newer-generation NICs on
rack servers. Not every organization will
jump to 25/50GbE right away — some
will continue to use 10 and 40GbE and
change when they are ready or when the
NICs are widely available.
Run lengths with QSFP28
Organizations moving to new QSFP28-based fiber optics will find that run length options are as comprehensive as today's 40GbE offerings, if not more so.
Although few optic vendors are likely to
offer every connector speed and type,
most will offer something, resulting in
a full range of options. These offerings
will satisfy the various standard
reaches on both multi-mode and
single-mode fiber. Whether for 10, 40
or 100 gigabits, the typical short reach
with today’s optics is 100 to 300 meters.
100-meter QSFP28 solutions will be first to market, while longer reaches over multi-mode fiber at 100 gigabits are still being studied. QSFP28 provides longer reach over single-mode fiber, with solutions at 500m, 2km and 10km break-points.
The DAC option in QSFP28 can be a
money-saver for short lengths due to the
lower cost of DAC cable compared to
combined cost of optical modules and
fiber. Today’s copper runs can extend to
7 meters for 10 or 40 gigabit, or 5 meters
for 100 gigabit. Five meters is sufficient for most intra-rack and adjacent-rack cabling needs.
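The reach-versus-cost trade-off can be expressed as a simple selection rule. In the sketch below, the reach limits follow the figures in this section, while the relative link costs (and the AOC reach) are hypothetical placeholders, not vendor pricing:

```python
# Picking the cheapest 100GbE media for a given run length. Reach limits
# follow this section; relative costs are illustrative assumptions.

MEDIA = [
    # (name, max reach in meters, assumed relative cost per link)
    ("QSFP28 DAC (copper)",    5,     1.0),
    ("QSFP28 AOC",             30,    3.0),   # reach and cost assumed
    ("100GBASE-SR4 over MMF",  100,   5.0),
    ("PSM4 over SMF",          500,   7.0),
    ("2km-class SMF optics",   2000,  12.0),
    ("10km-class SMF optics",  10000, 25.0),
]

def cheapest_media(run_m: float) -> str:
    """Lowest-cost option whose reach covers the run length."""
    viable = [m for m in MEDIA if m[1] >= run_m]
    return min(viable, key=lambda m: m[2])[0] if viable else "no option"

print(cheapest_media(3))     # intra-rack: DAC wins
print(cheapest_media(80))    # in-row: SR4 over multi-mode
print(cheapest_media(400))   # longer run: PSM4 over single-mode
```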
The impact of ASIC development
To obtain a 100GbE switch today, organizations have two options. They can buy from a company that has developed its own proprietary ASIC, which is expensive. Alternatively, they can buy from a vendor using merchant silicon. This is usually a lower-cost option, but it has still been expensive at 100 gigabit speeds due to the limited number of 100 gigabit ports per ASIC, which requires multiple ASICs in a switch.
The next generation of 100GbE switch
implementations will be based on high-density, industry-standard ASICs that can
support up to 32 ports of 100GbE in a
single chip. This is a huge leap forward.
Using a single high-density ASIC instead
of lower-density ASICs significantly
drives down the cost of the switch and
reduces energy consumption. Also, the
new high-density ASICs can support
multi-rate speeds of 10/25/40/50
and 100GbE, most yielding 10GbE and
25GbE breakout ports, up to four ports
per QSFP28. With only a single chip
in the switch, the latency is very low
as well, helping to increase workload
performance. Until recently, typical
latency for a 32-port 40GbE switch was
approximately 550 nanoseconds. The
new standard is 32 ports of 100GbE with
even lower latency and attractive price
points.
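A rough sketch of why single-chip density matters: the 32-port figure comes from the text, while the low-density value is an illustrative assumption:

```python
# Lower bound on ASIC count for a 32-port 100GbE switch at different chip
# densities (a real multi-ASIC design also needs extra fabric-interconnect
# chips, so the single-chip savings are even larger than shown).
import math

def asics_needed(target_ports: int, ports_per_asic: int) -> int:
    return math.ceil(target_ports / ports_per_asic)

print(asics_needed(32, 4))    # low-density merchant silicon: at least 8 chips
print(asics_needed(32, 32))   # new high-density ASIC: 1 chip
```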
With the new merchant silicon, multiple vendors will be offering 100GbE-capable platforms. That means vendors will increasingly need to differentiate themselves based on their ability to deliver innovative software solutions rather than relying solely on their ASIC development teams. See the sidebar for a discussion of Dell Open Networking innovations with third-party operating systems.
Flexibility with multi-rate ports
Multi-rate ports are not a new concept in
the industry, but the new optics enable
an explosion in the number of different
speeds possible on a single port. Using a
breakout cable, a QSFP28-based 100GbE
port could break out to four 25GbE
connections or four 10GbE connections,
or potentially 40 or 50Gb Ethernet as
well.
Never before has there been an option
to run five different high-capacity
speeds out of the same port. This variety
of speeds that can be deployed with
the same switch solution means that
organizations can economically meet
the needs of a variety of use cases.
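The lane math behind this flexibility can be sketched for a 32-port QSFP28 switch; the port counts it produces match the multi-rate figures quoted in the sidebar below:

```python
# Port counts available from a 32-port QSFP28 switch, derived from the
# number of 25G-class lanes each speed consumes.

QSFP28_PORTS = 32
LANES_PER_PORT = 4
LANES_NEEDED = {"10GbE": 1, "25GbE": 1, "40GbE": 4, "50GbE": 2, "100GbE": 4}

total_lanes = QSFP28_PORTS * LANES_PER_PORT  # 128 lanes
for speed, lanes in LANES_NEEDED.items():
    print(f"{speed}: up to {total_lanes // lanes} ports")
# 10GbE: 128, 25GbE: 128, 40GbE: 32, 50GbE: 64, 100GbE: 32
```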
Sidebar: Dell Networking Z9100-ON 100GbE fabric switch
Next-generation fixed-form factor
10/25/40/50/100GbE
Industry’s first 100GbE multi-rate
1U switch
• Multi-rate switching with 32 ports 100GbE (QSFP28), 32 ports 40GbE, 128 ports 10GbE, 128 ports 25GbE or 64 ports 50GbE
• Additional two 1/10GbE
SFP+ ports
Built to support future-ready,
high-end data center applications
• Cloud, high-performance
computing (HPC) and Web 2.0
requiring a range of switching rate
options in high-density environments
• Big data analytics with high
performance, ultra-low
latency requirements
Key differentiators
• A range of switching speeds from
10 to 100GbE, giving organizations
flexibility for the life of the investment
• Support for Open Networking Install
Environment (ONIE)
• Flexibility, performance and support
of third-party operating systems
Sidebar: Support for Open Networking
Open Networking brings a
paradigm shift to the network,
mirroring the same cost savings
the industry has seen in the
server industry by moving
from monolithic proprietary
environments to open standards.
Open Networking can reduce
CapEx by up to 65 percent,1
according to Gartner. It can also
reduce OpEx by bringing greater
management agility with third-party tools for orchestration.
The Dell commitment to open
platforms is evidenced by multiple
product innovations:
• Open Networking switches with
support for the Open Network
Install Environment (ONIE)
• Best-of-breed open networking
switches that enable organizations
to run selected network operating
systems from different vendors
• Third-party network operating
systems for Dell switches built for
particular environments, such as
Cumulus Networks, Big Switch
Networks, IP Infusion and Pluribus
Dell is continuing to support open
networking with its QSFP28-based
switch offerings.
[Figure: Data Center Active Fabric architecture with Virtual Link Trunking. Z9100 switches at the Layer 2/Layer 3 aggregation layer connect over 40/100GbE and 100GbE uplinks to top-of-rack (TOR) switches; 10/25GbE servers, blade servers, 10GbE servers and storage attach below, with 100GbE interconnects between the access and aggregation layers.]
Use cases for different switching
rates
In a typical scenario, an organization
might deploy 10 or 25 GbE down to the
servers and use 40/100GbE for uplinks
(see figure).
If an organization wants to replace and
enhance its current fabric, it could use
a new QSFP28-based switch to easily
and economically connect down to
existing 40GbE ports at top of rack, since
the 40GbE connections are already based
on four-lane technology. In the future, if
the organization decides to upgrade its
top-of-rack switches and increase from
40GbE to 100GbE up into the spine, it
can simply change the transceiver on
the QSFP28-based switch already in the
fabric. Still another option is to deploy
10 or 25GbE from an aggregation layer
directly down to the servers, with the
capability for an upgrade in the future.
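As a worked example of sizing such a deployment (the server and uplink counts here are hypothetical, not from the paper):

```python
# Uplink oversubscription for a hypothetical top-of-rack layout.

def oversubscription(down_ports: int, down_gbps: int,
                     up_ports: int, up_gbps: int) -> float:
    """Ratio of southbound server bandwidth to northbound uplink bandwidth."""
    return (down_ports * down_gbps) / (up_ports * up_gbps)

print(oversubscription(48, 25, 4, 100))  # 48 x 25GbE servers, 4 x 100GbE up -> 3.0
print(oversubscription(48, 25, 8, 100))  # doubling the uplinks -> 1.5
```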
Using 40/100GbE for uplinks will also
be a common use case. The customer
can deploy a 32-port, 100GbE switch
with the new optics, paying a bit more
than today’s 40GbE price. When the
organization is ready to upgrade, it
already has a switch that can perform at
100Gbps speed.
Looking ahead: migration from
large users to smaller enterprises
The new multi-rate switches should
initially appeal to telecom operators,
cloud providers, large enterprises
and government labs at U.S.
Department of Energy (DOE) and
Department of Defense (DOD) agencies
that have a mandate to expand their
100GbE footprint.
Additionally, many educational
institutions currently have a large,
expensive switch running one or two
100GbE lines for Internet2. With the new
switches, these institutions will now
be able to economically increase the
number of 100GbE lines accessing the
Internet2 backbone. Given the favorable
economics of the new optics, they can
also transfer that data to every building
on campus at 100-gigabit speed.
Will the new class of 100 gigabit switches
drive down $/bit and enable a move into
mass market deployments in the future?
Several factors may make this migration
a reality, including:
• The versatility of the port speeds, which
enables growing organizations that have a
small fabric to run 10 or 40GbE today, then
make an economical upgrade as needed
• The switch price, which is lower due to drastic reductions in the overall TCO per 100GbE port on the new multi-rate switches
• The fact that organizations can reuse their existing fiber runs just by changing the optics and transceivers, and can get far more bandwidth for the money than expected
• The ease of use of the new technology,
enabling IT to leverage the knowledge it
has already acquired
• Dell’s licensing approach, which gives customers access to all speeds supported in the hardware without additional port-speed licenses
As a steering committee member of the
25 Gigabit Ethernet Consortium and
as a leader in the IEEE 802.3 Ethernet
Working Group, Dell continues to help
drive standards for the new multi-rate switches, and Dell is dedicated
to bringing the benefits of the new
technology to its customers. In fact,
Dell has introduced the industry’s first
100GbE multi-rate switch in a 1U fixed-form factor for aggregation and access
layers (see sidebar).
For more information
To learn more about new optical solutions that take advantage of the QSFP28 four-lane electrical interface, including DAC, AOC and transceiver technologies, and about the Dell Networking Z9100-ON switch, visit:
http://www.dell.com/learn/us/en/04/shared-content~data-sheets~en/documents~dell-networking-z9100spec-sheet.pdf
Conclusion
Considering all these potential uses and
benefits, new 100GbE-capable switches
based on four-lane signaling technology
represent an important step forward
in the performance and economics of
networking.
1 “The Future of Data Center Network Switches Looks ‘Brite’”, Gartner Research Report, November 2014.
© 2015 Dell, Inc. ALL RIGHTS RESERVED. No part of this document may be reproduced or transmitted in any form or by any
means, electronic or mechanical, including photocopying and recording for any purpose without the written permission of Dell,
Inc. (“Dell”).
Dell, the Dell logo and products — as identified in this document — are registered trademarks of Dell, Inc. in the U.S.A. and/or
other countries. All other trademarks and registered trademarks are property of their respective owners.
THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND
TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.
July 2015