The Road to 200 & 400 Gig is Already Paved (and Traveled)

August 15, 2018 / General, Standard and Certification, Industrial Networks

The increasing demand for higher bandwidth to support Big Data has driven the need for ever-increasing Ethernet speeds, starting with 10 Gig in 2004. That was followed in 2010 by 40 Gig, with 4 fibers transmitting and 4 fibers receiving at 10 Gbps (40GBASE-SR4), and by 100 Gig, with 10 fibers transmitting and 10 fibers receiving at 10 Gbps (100GBASE-SR10).

In 2014, IEEE standards based on 25 Gbps per lane led to the introduction of 100 Gig over just 8 fibers (100GBASE-SR4). This paved the way for even higher speeds, and in late 2017 the IEEE officially ratified the 802.3bs standard for 200 and 400 Gig.

Large hyperscale data centers have already deployed these next-generation speeds and are now looking at 800 Gig, and cloud and colocation data centers are gearing up to follow suit. With some enterprise environments starting to look at 200 and 400 Gig, we thought it would be a good time to take a look at the various methods for achieving these speeds and what we can potentially expect in terms of adoption and testing.

7 Ways to Sunday

When it comes to running 200 and 400 Gig, the IEEE needed to support a wide range of applications, from short-reach links in the data center to long-haul links in the service provider outside plant environment. Since multiple parallel fibers and MPO connectors delivering 25 Gbps per lane were already well established for shorter-reach data center links over multimode fiber, the IEEE could easily support 400 Gig over multimode fiber using 32-fiber MPOs (16 fibers transmitting and 16 fibers receiving at 25 Gbps, or 16 × 25 = 400 Gbps in each direction).

While 400GBASE-SR16 enables 400 Gig over 70 meters of OM3 multimode fiber and over 100 meters of OM4 multimode fiber, when it came down to cost and density, it became clear that 32 fibers did not have broad market appeal, especially for longer links. The IEEE therefore needed to define more cost-effective methods, which called for increasing the data rate per lane to 50 Gbps by changing the encoding scheme from simple 2-level Non-Return-to-Zero (NRZ) signaling to 4-level Pulse Amplitude Modulation (PAM4). While PAM4 requires more sophisticated active equipment, it doubles the capacity per lane.
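
To see why PAM4 doubles the capacity per lane, note that NRZ encodes one bit per symbol using two amplitude levels, while PAM4 encodes two bits per symbol using four levels, so the same symbol rate carries twice the data. Below is a minimal, illustrative Python sketch of the two mappings; the level values and function names are our own, not taken from any standard.

```python
# Illustrative comparison of NRZ vs. PAM4 symbol mapping.
# Amplitude values are arbitrary normalized levels chosen for clarity.

def nrz_encode(bits):
    """NRZ: one bit per symbol, two amplitude levels."""
    return [1.0 if b else -1.0 for b in bits]

def pam4_encode(bits):
    """PAM4: two bits per symbol, four amplitude levels (Gray-coded)."""
    levels = {(0, 0): -1.0, (0, 1): -1/3, (1, 1): 1/3, (1, 0): 1.0}
    pairs = zip(bits[0::2], bits[1::2])
    return [levels[p] for p in pairs]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
print(len(nrz_encode(bits)))   # 8 symbols for 8 bits
print(len(pam4_encode(bits)))  # 4 symbols for the same 8 bits
```

The trade-off is that the four PAM4 levels sit closer together, which is why PAM4 options demand more sophisticated active equipment to maintain adequate signal-to-noise margins.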

Further technological advancements also gave way to a data rate per lane of 100 Gbps, as well as enhanced wavelength division multiplexing (WDM) technology that runs multiple signals over a single fiber using different wavelengths. With all of these advances, the IEEE 802.3bs standard combines NRZ, PAM4 and WDM technology to support the following variations of 200 and 400 Gig, covering reaches from 70 meters to 10 kilometers:

  • 400GBASE-SR16 uses NRZ signaling and 16 parallel lanes at 25 Gbps to deliver 400 Gig on up to 100 meters of multimode fiber
  • 200GBASE-DR4 and 400GBASE-DR4 use PAM4 signaling and 4 parallel lanes at 50 and 100 Gbps respectively to deliver 200 and 400 Gig on up to 500 meters of singlemode fiber
  • 200GBASE-FR4 and 400GBASE-FR8 use PAM4 signaling and 4 and 8 WDM lanes respectively at 50 Gbps to deliver 200 and 400 Gig on up to 2 kilometers of singlemode fiber
  • 200GBASE-LR4 and 400GBASE-LR8 use PAM4 signaling and 4 and 8 WDM lanes respectively at 50 Gbps to deliver 200 and 400 Gig on up to 10 kilometers of singlemode fiber
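
For a quick sanity check, each variant's aggregate rate is simply its lane count multiplied by its per-lane rate, whether those lanes are parallel fibers or WDM wavelengths. Here is a short, illustrative Python sketch summarizing the variants listed above (the table structure and variable names are ours):

```python
# Aggregate rate = lane count x Gbps per lane for the 802.3bs variants above.
# Format: (lane type, lane count, Gbps per lane)
variants = {
    "400GBASE-SR16": ("parallel fiber", 16, 25),
    "200GBASE-DR4":  ("parallel fiber", 4, 50),
    "400GBASE-DR4":  ("parallel fiber", 4, 100),
    "200GBASE-FR4":  ("WDM lane", 4, 50),
    "400GBASE-FR8":  ("WDM lane", 8, 50),
    "200GBASE-LR4":  ("WDM lane", 4, 50),
    "400GBASE-LR8":  ("WDM lane", 8, 50),
}

for name, (lane_type, count, gbps) in variants.items():
    print(f"{name}: {count} x {gbps} Gbps {lane_type}s = {count * gbps} Gbps")
```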

Can Multimode Hang On?

When it comes to short-reach data center links, the 400GBASE-SR16 option has optical properties identical to existing 100GBASE-SR4 (including identical insertion loss requirements), which offers backwards compatibility and the ability to configure a 400 Gig port as four 100 Gig ports. Unfortunately, the 32-fiber MPO connector and the cabling density it requires have most industry experts expecting very little adoption.

Before you start thinking that this means the end of multimode fiber, the IEEE 802.3cm standard for 400 Gbps over multimode is currently in development. It includes 400GBASE-SR8, which uses a total of 16 fibers, with 8 fibers transmitting and 8 fibers receiving at 50 Gbps, to deliver 400 Gig on up to 100 meters of multimode. It will also include an SWDM option that would leverage two wavelengths per wideband multimode fiber (i.e., OM5) to bring the fiber count back down to a total of 8 (4 transmitting and 4 receiving at 50 Gbps over two wavelengths).
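
The fiber-count math for these draft options works the same way as before, except that the SWDM option multiplies fibers by wavelengths. A brief, illustrative sketch (the variable names are ours):

```python
# Draft 802.3cm multimode options described above.
# aggregate = fibers per direction x wavelengths per fiber x Gbps per lane
sr8_aggregate  = 8 * 1 * 50   # 16 fibers total (8 Tx / 8 Rx), one wavelength  -> 400 Gbps
swdm_aggregate = 4 * 2 * 50   # 8 fibers total (4 Tx / 4 Rx), two wavelengths -> 400 Gbps
print(sr8_aggregate, swdm_aggregate)
```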

While these new developments will allow some to leverage their existing multimode fiber plants, and drive adoption of wide-band multimode OM5 for others, the short-reach singlemode options of 200GBASE-DR4 and 400GBASE-DR4 that deliver 200 and 400 Gig on up to 500 meters of singlemode fiber will give multimode a run for its money. In larger data centers, especially hyperscale, cloud and colocation environments, the 100 meters supported by multimode may simply not offer enough reach. What was once considered a large data center at 250,000 square feet is now closer to 2 million square feet or more, and most of these environments will find the sweet spot in terms of cost and distance with short-reach singlemode applications.

Testing Won’t Change

When it comes to certifying fiber optic links, Tier 1 certification measures the insertion loss of the entire link. And by now, you should know that different fiber applications have different maximum insertion loss requirements to ensure the loss isn't so high that it prevents the signal from properly reaching the far end.

As we move from 40 and 100 Gig to 200 and 400 Gig, insertion loss is still what matters, so in terms of testing, not much changes. You will still choose your fiber type, select your insertion loss test limit (or set a custom limit), and test. And with Fluke Networks’ CertiFiber® Pro optical loss test set and MultiFiber™ Pro power meter, you’ll be able to easily test both duplex and 8-fiber deployments (by which all fiber roads are divisible, in case you missed that blog).
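
Conceptually, the test adds up the loss contributed by the fiber itself plus every connector pair and splice in the channel, then compares the measured total against the application's limit. Here is an illustrative Python sketch of that budget check; all of the dB values are made-up examples, not limits taken from any standard.

```python
# Illustrative channel insertion loss budget check (example values only).

def channel_loss(fiber_km, db_per_km, connector_pairs, db_per_connector,
                 splices=0, db_per_splice=0.3):
    """Estimated channel insertion loss in dB."""
    return (fiber_km * db_per_km
            + connector_pairs * db_per_connector
            + splices * db_per_splice)

# Example: a 100-meter multimode channel with two connector pairs.
loss = channel_loss(fiber_km=0.1, db_per_km=3.0,
                    connector_pairs=2, db_per_connector=0.5)
limit = 1.9  # hypothetical application limit in dB (look up the real value for your application)
print(f"Estimated loss: {loss:.2f} dB, limit: {limit} dB, pass: {loss <= limit}")
```

Whichever application you are certifying, the workflow stays the same: pick the fiber type and the limit, test, and compare.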
