A well-planned optical infrastructure will typically be used for more than 20 years and will need to remain operational through different equipment solutions and multiple generations of protocols with increasing data rates.
Managing all the requirements of a data center can be a daunting task, but several tools are available for its design. TIA-942, the Telecommunications Infrastructure Standard for Data Centers, provides a comprehensive overview for structured cabling in a data center.
TIA-942 recommends a star topology and defines the following areas and spaces in a typical enterprise data center.
Topology in the Enterprise Data Center:
Telecommunications spaces in a Data Center include a Main Distribution Area (MDA), a Zone Distribution Area (ZDA), and an Equipment Distribution Area (EDA). The MDA contains the main cross-connects (MC) and serves as the central distribution point for the structured cabling of the Data Center (Figure 1). The ZDA, if used, acts as a consolidation point between the MDA and the different zones within the Data Center. Incorporating this architecture into a Data Center cabling design allows trunk cabling to be installed in a single step, providing the flexibility needed to support frequent reconfigurations in the required zones (MACs: moves, adds, and changes). The EDA is the space designated for end equipment, including computer systems and telecommunications equipment.
To meet the requirements of a high-performing Data Center, the cabling infrastructure topology should not be chosen in isolation. The infrastructure topology and product solutions must be considered together.
A structured cabling architecture, combined with a modular cabling solution to provide connectivity as defined by TIA-942, facilitates a flexible and manageable infrastructure. A modular cabling solution consists of pre-terminated trunk distribution cables based on 12-fiber MPO connectors.
These trunk distribution cables connect to modules or harnesses that convert 12-fiber MPO connectors into single-fiber connections. Patch cords are used to connect equipment systems to the modules, completing the system.
Deploying an MPO-based modular cabling system, including MPO-terminated trunk cables, modules, and harnesses, offers significant advantages.
These include a 50% reduction in cable tray space, an 80% improvement in deployment time, and a 70% reduction in cabling within cabinets. A high-density modular solution deployed in a structured cabling topology can easily scale to thousands of ports, significantly reducing the time required for moves, adds, and changes, and thereby lowering operating costs.
The Storage Area:
While a trunk and module cabling system works well in most areas of a data center, the unique requirements of the Storage Area Network (SAN), particularly the SAN Director equipment, often necessitate a specialized solution. Due to the large number of ports on SAN Director equipment, a solution using modules and patch cords can require a significant amount of rack space due to the high patch cord density, and also requires additional management. To address this unique requirement and alleviate the resulting problems, custom harness solutions have been introduced. A harness allows us to take advantage of the density provided by an MPO connector on the patch panel and the use of simple connectors for interconnecting with the electronics. Using 12-fiber cables with harnesses instead of individual patch cords reduces congestion on the SAN Director equipment, as well as in vertical cable organizers and cable trays.
In addition to the benefits of a structured cabling system, an MPO-based cabling infrastructure allows us to easily migrate to the highest data rate technologies, including parallel optical links. This technology will be used in 32, 64, and 128 Gigabit Fibre Channel, and 40 and 100 Gigabit Ethernet (GbE).
Serial transmission using 850 nm VCSELs is currently used for data rates up to 10 GbE. Serial transmission over duplex fiber becomes impractical at data rates above 16 Gb/s because of the limited reliability of 850 nm VCSELs over the temperature ranges found in the Data Center. Therefore, 40 GbE and 100 GbE will use parallel optical links (see Figure 2 and Figure 3).
Parallel optical link technology, including 850 nm VCSEL arrays and OM3 fibers, offers a low-cost solution for high-speed Ethernet data transmission.
Parallel optical link transmission distributes, or multiplexes, the data signal across multiple fibers that transmit and receive simultaneously. At the receiving end, the signals are demultiplexed to reconstruct the original signal. MPO connectivity is used for parallel optical link channels.
The IEEE 802.3ba Working Group began developing a transmission guide for 40 GbE and 100 GbE in January 2008. Among its objectives was a minimum distance of 100 m for laser-optimized 50/125 multimode fiber (OM3).
At the IEEE meeting in May of that year, several proposals were adopted for the draft 40 GbE and 100 GbE standards. Transmission over parallel optical links was chosen as the baseline for 40 GbE and 100 GbE over OM3 fiber. The proposal defines the 40 GbE interface as 4 x 10 Gb/s lanes over four fibers in each direction, and the 100 GbE interface as 10 x 10 Gb/s lanes over ten fibers in each direction.
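The lane-and-fiber arithmetic behind these proposals can be sketched in a few lines. The helper below is illustrative only: it assumes one 10 Gb/s lane per fiber in each direction and counts only the active fibers (a 12-fiber MPO trunk may carry unused fibers in practice).

```python
# Illustrative sketch (not from the standard): fiber counts implied by the
# proposed 40 GbE and 100 GbE parallel interfaces, assuming one 10 Gb/s
# lane per fiber per direction.
def parallel_link(total_gbps, lane_gbps=10):
    """Return (lanes per direction, total active fibers) for a parallel link."""
    lanes = total_gbps // lane_gbps   # e.g. 40 GbE -> 4 lanes
    fibers = 2 * lanes                # separate fibers for transmit and receive
    return lanes, fibers

for rate in (40, 100):
    lanes, fibers = parallel_link(rate)
    print(f"{rate} GbE: {lanes} lanes per direction, {fibers} active fibers")
```

Run as written, this reports 4 lanes and 8 active fibers for 40 GbE, and 10 lanes and 20 active fibers for 100 GbE, matching the four- and ten-fiber-per-direction figures above.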
Fiber bandwidth, delay difference, and insertion loss per connection must all be considered to ensure that the cabling infrastructure will meet future 40 and 100 GbE requirements. When these factors are accounted for, the system will meet the proposed requirement of a 100 m operating distance over OM3 fiber.
OM3 is the only multimode fiber considered for 40 and 100 GbE systems. It is optimized for transmission at 850 nm and has a minimum effective modal bandwidth of 2000 MHz*km. The calculated minimum effective modal bandwidth (minEMBc) is a measure of the system bandwidth of OM3 fiber and provides a more accurate characterization than the differential mode delay (DMD) mask technique. Because minEMBc is a real, scalable measurement, the calculated value reliably predicts system performance.
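As a rough sense of what 2000 MHz*km buys at Data Center distances, the snippet below applies the usual first-order scaling in which the modal bandwidth available over a link is the EMB divided by the link length. The scaling rule is a simplifying assumption; only the 2000 MHz*km figure comes from the text.

```python
# Hedged sketch: first-order modal bandwidth available over a link,
# assuming bandwidth scales as EMB / length (EMB in MHz*km, length in km).
OM3_EMB_MHZ_KM = 2000.0  # minimum effective modal bandwidth of OM3 at 850 nm

def link_bandwidth_mhz(length_m, emb_mhz_km=OM3_EMB_MHZ_KM):
    """Approximate modal bandwidth (MHz) over a link of length_m metres."""
    return emb_mhz_km / (length_m / 1000.0)

print(link_bandwidth_mhz(100))  # 20000.0 MHz, i.e. 20 GHz over a 100 m link
```

Under this approximation, a 100 m OM3 link offers roughly 20 GHz of modal bandwidth, comfortably above what a 10 Gb/s lane requires.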
Excessive Delay Difference:
Optical delay difference, or skew, is the difference in propagation time between light signals traveling through different fibers, and it is an important consideration for parallel optical links. Excessive delay or delay difference across channels can cause bit errors. Delay difference requirements for cabling are under consideration for 40 GbE and 100 GbE, and deploying a cabling infrastructure with a low delay difference helps ensure compliance with the requirements of a wide variety of applications. For example, in InfiniBand, a protocol that transmits over parallel optical links, the maximum allowable delay difference is 0.75 ns.
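To make the 0.75 ns figure concrete, the sketch below converts a skew budget into the fiber length mismatch that would consume it, assuming signals propagate at c/n with a group index of about 1.48 for silica multimode fiber (an assumed value, not from the article).

```python
# Illustrative skew calculation: propagation-time difference between two
# fibers of unequal length, assuming a group index of ~1.48 for silica fiber.
C = 299_792_458        # speed of light in vacuum, m/s
GROUP_INDEX = 1.48     # assumed group index of multimode fiber

def skew_ns(delta_length_m, n=GROUP_INDEX):
    """Delay difference (ns) caused by a fiber length mismatch in metres."""
    return delta_length_m * n / C * 1e9

def max_mismatch_m(skew_budget_ns=0.75, n=GROUP_INDEX):
    """Length mismatch that consumes the entire skew budget."""
    return skew_budget_ns * 1e-9 * C / n

print(f"{max_mismatch_m():.3f} m")  # about 0.152 m of allowable mismatch
```

In other words, the InfiniBand limit of 0.75 ns corresponds to only about 15 cm of length mismatch between fibers in the same parallel link, which is why trunk cables for parallel optics are built with tightly matched fiber lengths.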
Channel insertion loss limits the distance over which a system can operate reliably at a given data rate: as insertion loss per connection increases, the supported distance decreases. The current proposal for 40 and 100 GbE transmission over multimode fiber allows a total connection loss of 1.5 dB over an operating distance of up to 100 m. When designing a Data Center, it is therefore crucial to evaluate the insertion loss specification of each connectivity component. Low-loss components allow maximum flexibility, enabling multiple connections to be introduced into the system link.
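A channel design can be checked against this budget by simply summing the per-connection losses. The per-connection values in the example below are assumed illustrative figures, not numbers from any standard; only the 1.5 dB total comes from the text.

```python
# Hedged sketch: checking a channel against the 1.5 dB total connection-loss
# budget cited for 40/100 GbE over multimode fiber. The individual losses
# used in the example are assumed values for illustration.
BUDGET_DB = 1.5

def check_budget(connection_losses_db, budget_db=BUDGET_DB):
    """Return (total loss in dB, whether the channel fits the budget)."""
    total = sum(connection_losses_db)
    return total, total <= budget_db

# e.g. two MPO trunk connections at 0.5 dB each plus two 0.25 dB patch connections
total, ok = check_budget([0.5, 0.5, 0.25, 0.25])
print(total, ok)  # 1.5 True
```

This is why low-loss components matter: with 0.5 dB connections the budget allows only three mated pairs, while lower-loss connectivity leaves room for the extra cross-connects that a structured topology may require.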
A well-designed cabling architecture, implemented according to the TIA-942 standard and incorporating a modular cabling system, provides the reliability, manageability, scalability, and flexibility required in the Data Center. The use of high-quality, low-loss components ensures that our Data Center will not only meet current requirements but also future ones.
Equinsa Networking is a distributor of Corning Cable Systems in Spain.
Source: Cablingbusiness Digital Magazine
Authors: David Hessong and Daryll Kerns, Private Networks, Corning Cable Systems.
Translated by José Carlos Granja, Private Networks, Corning Cable Systems.
