Highest board performance and lowest latency in the world
World-leading virtualization technologies
World's first to support both cloud data center and cloud campus network
One of the first vendors in China to adopt a multi-process modular OS
Advanced CLOS multi-stage, multi-plane switching architecture with an orthogonal design and zero backplane cabling, ensuring minimal transmission loss, non-blocking full line-rate forwarding on all ports, and continuous bandwidth upgrades and service support
Super-large distributed buffer design (200 ms) prevents packet loss during traffic bursts
The VSU 3.0 technology virtualizes multiple core switches into one logical device. Combined with the VSD technology, which virtualizes one switch into multiple logical devices, VSU 3.0 dramatically simplifies the network structure and pools network resources.
The virtualization ratios of VSU and VSD are 4:1 and 1:16 respectively, the highest in the world, ensuring on-demand network resource allocation.
The RG-N18000 Series supports 1GE/10GE/40GE/100GE ports, so the series can adapt to different bandwidth requirements
Preferred switching architecture for the top-class switches in the industry
Direct orthogonal (90°) connection between switch fabric modules and line cards with no backplane cabling, reducing electromagnetic interference and ensuring high transmission efficiency
Non-blocking packet forwarding within the Newton 18000 switch
Multi-process modularization, ensuring high software stability
Support for hot patching, so services keep running during software upgrades
Support for OpenFlow 1.3, enabling smooth upgrade to an SDN network
Enhanced virtualization features and the combination of VSU and VSD for infinite possibilities
1. TRILL: a data center layer-2 routing protocol that virtualizes the whole data center network into one large layer-2 switch.
2. Automatic migration of VM security policies: when VMs are migrated between physical servers, no per-VM configuration is required, simplifying management and improving security.
3. L2GRE: extends layer-2 connectivity across sites over GRE tunnels, so data center resources at different locations can be centrally managed and allocated.
4. VEPA: VM traffic is diverted to physical switches for forwarding, ensuring flow control and security control.
The VSU technology virtualizes a maximum of 4 physical core switches into one logical device
Uplinks are aggregated on ports across devices, ensuring data load-balancing and improving the bandwidth utilization
The fault recovery time of switches, supervisor modules, and links is only 50 ms, ensuring high availability of services
After virtualization via the VSU, multiple devices can be managed in a unified manner, saving time for configuration management
The VSD technology virtualizes one physical device into a maximum of 16 logical devices
One network can be used as multiple networks, improving the device utilization and enterprise ROI
Different services, user groups, and departments can be isolated as if on physically separate devices, without extra purchases or deployment
VSD isolation ensures security of key services
When VSU and VSD are combined, 4 Newton 18000 switches can first be virtualized into one logical switch, which can then be further divided into up to 16 logical switches
Network resources are fully integrated and pooled. In addition, vertical virtualization is supported, ensuring on-demand resource allocation
Tolly Group, an international third-party authority for testing and certification, proves the superb performance of Newton 18000
Delivered latency as low as 0.532 μs
Provided MAC table capacity up to 512K, ARP table capacity up to 170K and 802.1X concurrent authenticated user capacity up to 100K
Supported rapid authentication at a speed up to 1,200 STAs per second
Supported VSU and VSD
Supported N+1 fabric module redundancy and hot patching
Supported OpenFlow 1.3 features with the SDN controller
Throughput Comparison
Ruijie RG-N18010 Switch Layer 2 40GbE RFC2544 Throughput
192 40GbE ports in a Snake Configuration across 8 Line Cards
(as reported by Spirent TestCenter v4.33)
Note: Eight M18000-24QXS-DB line cards with four M18010-FE-DIII fabric modules were used on one Ruijie RG-N18010 switch. 100% line-rate Layer 2 throughput with 192 40GbE ports was verified for all tested frame sizes with zero frame loss. Aggregated throughput is 7.68Tbps. All traffic passed across line cards instead of passing to ports on the same line card.
Source: Tolly, May 2014 Figure 1
As shown in the test results, the RG-N18000 switch outfitted with 24-port 40GE modules delivered 100% line-rate layer-2 throughput on 192 × 40GE ports in a snake topology for 64- to 9216-byte frames without frame loss.
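For context, the arithmetic behind "100% line rate" on 192 40GbE ports is straightforward. The short sketch below uses standard Ethernet overhead constants (not figures from the Tolly report) to compute the theoretical per-port frame rate and the 7.68 Tbps aggregate.

```python
# Rough sketch: theoretical line-rate math behind the RFC 2544 throughput result.
# PREAMBLE and IFG are standard Ethernet per-frame overheads, not report data.

PORTS = 192
LINK_BPS = 40e9          # 40GbE line rate, bits per second
PREAMBLE = 8             # preamble + start-of-frame delimiter, bytes
IFG = 12                 # inter-frame gap, bytes

def max_frames_per_second(frame_size_bytes: int) -> float:
    """Maximum frames per second one port can carry at line rate."""
    bits_on_wire = (frame_size_bytes + PREAMBLE + IFG) * 8
    return LINK_BPS / bits_on_wire

for size in (64, 512, 1518, 9216):
    fps = max_frames_per_second(size)
    print(f"{size:>5}-byte frames: {fps / 1e6:8.3f} Mpps per port, "
          f"{fps * PORTS / 1e6:10.1f} Mpps across {PORTS} ports")

# Aggregate throughput quoted in the report: 192 ports x 40 Gbit/s
print(f"Aggregate throughput: {PORTS * LINK_BPS / 1e12:.2f} Tbps")
```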
Latency Comparison
Ruijie RG-N18010 Switch Layer 2 40GbE RFC2544 Average Latency
Two 40GbE Ports in Port-to-Port Traffic Configuration
(as reported by Spirent TestCenter v4.33)
Frame Size | 64-byte | 128-byte | 512-byte | 1518-byte | 4096-byte | 9216-byte |
---|---|---|---|---|---|---|
Cut-Through Latency (μs) | 0.532 | 0.545 | 0.588 | 0.586 | 0.586 | 0.586 |
Store-and-Forward LIFO Latency (μs) | 0.650 | 0.651 | 0.666 | 0.665 | 0.665 | 0.665 |
Note: Port 1 and Port 2 on one M18000-24QXS-DB line card were used for the test. For cut-through mode, FIFO latency was captured in the Spirent RFC 2544 latency test. For store-and-forward mode, LIFO latency was captured in the Spirent RFC 2544 latency test suite. Thus, store-and-forward results do not include the time required to store the frame.
Source: Tolly, May 2014 Table 1
As shown in the test results, the forwarding latency of 40GE ports on the RG-N18000 switch was below 1 μs in both cut-through and store-and-forward modes, with a minimum of 0.532 μs, the lowest in the industry.
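To see why sub-microsecond figures are plausible, and why the store-and-forward result is reported LIFO, the sketch below computes the serialization delay of each frame size at 40 Gbit/s. This is plain arithmetic, not data from the report.

```python
# Rough sketch: serialization delay at 40 Gbit/s, to put the latency figures in context.

LINK_BPS = 40e9

def serialization_us(frame_size_bytes: int) -> float:
    """Time to clock one frame onto a 40 Gbit/s link, in microseconds."""
    return frame_size_bytes * 8 / LINK_BPS * 1e6

for size in (64, 128, 512, 1518, 4096, 9216):
    print(f"{size:>5}-byte frame: {serialization_us(size):6.3f} us to serialize")

# A 9216-byte frame alone takes about 1.84 us to serialize, which is why the
# store-and-forward figures are measured LIFO (excluding store time): otherwise
# large-frame latency would be dominated by serialization rather than switching.
```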
Fault Recovery Time Comparison
Ruijie RG-N18000 Series Switch VSU Convergence Performance
(as reported by Spirent TestCenter v4.33)
Failure Scenario | Average Convergence Time |
---|---|
Uplink failure | 30ms |
Main and backup control module switchover (main control module failure) | 0 |
Backup control module failure | 0 |
Main switch failure | 11ms |
Backup switch failure | 11ms |
Note: Convergence time here takes load balancing into account. In the uplink failure, main switch failure and backup switch failure tests, due to load balancing, half of the traffic streams were affected by the failure while the other half were not. The convergence time for the affected streams is reported here. See the test methodology section for details.
Source: Tolly, May 2014 Table 3
Uplink failover: 30 ms
Active/standby supervisor module switchover (active supervisor module failure): 0 ms
Standby supervisor module failure: 0 ms
Active switch failure: 11 ms
Standby switch failure: 11 ms
The RG-N18000 switch fabric modules can work in N+1 redundancy mode. When the switch was outfitted with 192 40GE ports, only three switch fabric modules were required to deliver layer-2 traffic at 100% line rate across the backplane.
The switch outfitted with 192 40GE ports delivered layer-2 traffic at 100% line rate with zero frame loss when any one fabric module was removed; the N18000 delivered stable data forwarding in N+1 redundancy mode.
At the software layer, the Ruijie RGOS 11.X modular OS supports ISSU, stateless process restart, and hot patching, so software can be upgraded during device operation without affecting other services.
Stability
Ruijie RG-N18000 Series Switch Features
Tolly Certified Features |
---|
4-Member Virtual Switch Unit (VSU) with RG-N18010 |
Virtual Switch Device (VSD) - virtualize one RG-N18010 into 12 virtual switch devices |
OpenFlow 1.3 Main Features |
Modular In-Service Software Upgrade (ISSU) for Control Modules |
Stateless Process Restart |
Hot Process Patching |
N+1 Fabric Module Redundancy |
Hot-Swappable Fabric Module |
Source: Tolly, May 2014 Table 4
Power Consumption Analysis
Ruijie RG-N18010 Switch Power Consumption with 192 40GbE ports
(as reported by Fluke 317 Clamp Meter and Fluke 15B Digital Multimeter)
Test Configuration | Apparent Power |
---|---|
Without any line card | 1.456 kVA |
All modules loaded - 0% traffic | 2.573 kVA |
All modules loaded - 30% traffic | 2.742 kVA |
All modules loaded - 100% traffic | 3.108 kVA |
Difference between 100% traffic and without any line card | 1.652 kVA |
Average apparent power per line card with 100% traffic | 206.5 VA |
Average apparent power per 40GbE port with 100% traffic | 8.6 VA |
Note: 1. Apparent power = current × voltage. Real power = apparent power × power factor, and the power factor is ≤ 1, so the real power in watts (which is what the utility company charges customers for) is less than the apparent power reported here.
2. The chassis-level values were measured; the difference, per-line-card and per-port values were calculated from the measured results.
3. One Ruijie RG-N18010 switch was fully loaded with two M18010-CM control modules, four M18010-FE-D III fabric modules, eight power supplies, four fan modules and eight M18000-24QXS-DB line cards. Each line card has 24 40GbE ports.
4. iMIX 4-point traffic in Spirent TestCenter was used as the test traffic.
5. The per 40GbE port’s power consumption does not count the base power consumption of the switch chassis (apparent power without any line card). See Test Methodology section for all calculations.
Source: Tolly, May 2014 Table 5
As shown in the test results, each M18000-24QXS-DB line card of the RG-N18000 switch drew 206.5 VA at full load, and each fully loaded 40GE port drew only 8.6 VA. In the TLC test by the Ministry of Industry and Information Technology (MIIT), the power of a single 10GE port was only 1.76 W, the lowest in the industry.
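The per-line-card and per-port figures follow directly from the chassis measurements. The short sketch below reproduces that arithmetic; the 8-line-card, 24-ports-per-card split comes from the tested configuration described in the note above.

```python
# Rough sketch of the per-card / per-port arithmetic behind the power table.
# Chassis figures are read from the table above.

chassis_no_cards_kva = 1.456      # apparent power with no line cards installed
chassis_full_load_kva = 3.108     # apparent power, all modules, 100% traffic
line_cards = 8
ports_per_card = 24

# Apparent power attributable to the line cards (chassis base load subtracted)
line_card_total_va = (chassis_full_load_kva - chassis_no_cards_kva) * 1000
per_card_va = line_card_total_va / line_cards
per_port_va = per_card_va / ports_per_card

print(f"Per line card: {per_card_va:.1f} VA")    # ~206.5 VA
print(f"Per 40GbE port: {per_port_va:.1f} VA")   # ~8.6 VA
```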
Ultra-Simplified Solution for Campus Networks
To meet new challenges from evolving application environments, the market-leading Ruijie RG-N18000 Series delivers an innovative heterogeneous solution to power campus networks.
The RG-N18000 Series operates as the unified authentication core and gateway in the ultra-simplified network solution. The switch provides centralized authentication for wired and wireless networks on the core device via a built-in or external 802.1X/Portal authentication system, eliminating differences in access-layer device performance and access mode. The RG-N18000 Series supports an ARP capacity of ≥170K, ≥60K concurrent IPv4/IPv6 dual-stack devices with centralized authentication, and an authentication speed of 1,000 devices per second.
Feature highlights supported by the respective sub-solution are illustrated in the figure below and described in the following sections.
The Ruijie RG-N18000 Series can act as the unified authentication core and gateway of the campus network to offer a simplified network experience. As the centralized authentication gateway, the core device assigns security policies in a unified way, while the access and aggregation layers are responsible only for layer-2 forwarding; device maintenance is simpler and performance capacity is no longer a bottleneck. The core-layer device provides rich features, high performance and high reliability, and centralized management of network policies facilitates security monitoring, network expansion and new service development. The RG-N18000 Series supports multiple authentication modes such as Portal, 802.1X and MAC authentication. Different management modes and technologies can be deployed in different campus network scenarios according to user requirements, providing targeted, highly available solutions.
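As a rough illustration of the centralized-authentication flow, the sketch below shows the RADIUS exchange that a Portal/802.1X gateway performs on a user's behalf. It uses the open-source pyrad library with made-up server address, shared secret and credentials; it is a generic example of the protocol leg, not Ruijie's implementation or the RG-SAM+ API.

```python
# Generic sketch of the RADIUS leg of centralized 802.1X/Portal authentication.
# Assumes the open-source pyrad library and a FreeRADIUS-style dictionary file;
# all names and addresses below are placeholders.
from pyrad.client import Client
from pyrad.dictionary import Dictionary
from pyrad import packet

radius = Client(server="10.0.0.10",             # RADIUS server (placeholder)
                secret=b"shared-secret",        # NAS shared secret (placeholder)
                dict=Dictionary("dictionary"))  # standard attribute dictionary file

# Build an Access-Request as the authenticating gateway (NAS) would for a user
req = radius.CreateAuthPacket(code=packet.AccessRequest,
                              User_Name="student01",
                              NAS_Identifier="campus-core-gw")
req["User-Password"] = req.PwCrypt("example-password")

reply = radius.SendPacket(req)
if reply.code == packet.AccessAccept:
    print("Access-Accept: apply the user's policy and open the port")
else:
    print("Access-Reject: keep the port restricted")
```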
World’s Leading Cloud Network Core
●CLOS Non-blocking Architecture
The Ruijie RG-N18000 Series adopts an advanced CLOS multi-plane, multi-stage architecture that completely separates the forwarding and control planes. With independent fabric engines and control engines, all ports run at full line rate in a non-blocking manner, and the architecture leaves room for continuous bandwidth upgrades and service expansion.
Advanced CLOS Architecture
With an orthogonal design for service modules and fabric engines, cross-board traffic is transmitted to the fabric engines through orthogonal connectors. The RG-N18000 Series achieves zero backplane wiring with minimal transmission loss and signal degradation, improving the internal transmission efficiency of the switch.
●Scalable Performance for Future Development
Each slot of the RG-N18000 Series supports 2 Tbps of bandwidth, scalable to 4 Tbps. The series also supports high-density 40GE ports to meet the evolving requirements of cloud computing data centers in the coming decade.
The RG-N18000 Series is market-leading in line-rate packet forwarding: all boards, including the highest-density board, forward 64-byte packets at line rate, ensuring high-speed forwarding with zero packet loss in large-scale data centers.
The RG-N18000 switches offer ultra-low latency, as low as about 0.5 μs, to support high-speed transmission.
The series uses a large distributed buffer design with 200 ms of buffering capacity, absorbing traffic bursts in data centers, high-performance networks and similar environments.
Virtual Switch Unit 3.0 (VSU)
The series supports Virtual Switch Unit 3.0 (VSU) technology, which virtualizes multiple physical devices into one logical unit, greatly reducing the number of network nodes and the administrator workload. Link failover of 50 to 200 ms ensures smooth, uninterrupted transmission of key services. The RG-N18000 Series supports cross-device link aggregation for easy dual uplinks to servers and switches, effectively maximizing the return on bandwidth investment.
Virtual Switch Device (VSD)
The Ruijie RG-N18000 Series delivers the industry's first 1:12 virtualization. One device can be virtualized into multiple virtual units, each with its own configuration management interface and independent hardware resources (e.g. storage, TCAM and hardware forwarding tables). Each virtual unit can be restarted without affecting the others. Users can allocate network resources on demand, so resources of the core switch can be shared across domains and users.
Benefits of 1-to-12 Virtualization
Layer 2 Generic Routing Encapsulation (L2-GRE)
With standard L2-GRE encapsulation, the RG-N18000 switches break geographical boundaries to achieve layer-2 communication between data centers, so data center resources at different locations can be centrally managed and allocated.
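To make the encapsulation concrete, the sketch below builds an Ethernet frame carried inside a GRE/IP tunnel, which is the basic packet structure behind L2-GRE. It uses the open-source Scapy packet library with made-up MAC and IP addresses; it illustrates the encapsulation format only, not the switch's hardware implementation.

```python
# Conceptual sketch of L2-over-GRE: an inner Ethernet frame tunneled between
# two data center gateways. Addresses are placeholders.
from scapy.all import Ether, IP, GRE, sendp

# Inner layer-2 frame as seen inside either data center
inner = Ether(src="02:00:00:00:00:01", dst="02:00:00:00:00:02") / \
        IP(src="192.168.10.11", dst="192.168.10.22")

# Outer IP/GRE header between the two sites; protocol 0x6558 indicates
# transparent Ethernet bridging (an Ethernet payload inside GRE)
tunneled = IP(src="203.0.113.1", dst="198.51.100.1") / GRE(proto=0x6558) / inner

tunneled.show()                               # inspect the encapsulation layers
# sendp(Ether() / tunneled, iface="eth0")     # uncomment to transmit on a lab host
```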
Software-Defined Network (SDN) & OpenFlow
Software Defined Networking is an emerging network architecture where network control is decoupled from forwarding and is directly programmable.
Core Concepts
●Decoupling of control plane and forwarding plane → hardware / network unified abstraction & virtualization, ease of independent development
●Centralized control & distributed forwarding → converts the distributed protocol problem into an algorithm problem
●Open programming interface → softwarization of hardware, programmable devices, scalable network features & higher flexibility
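For a feel of what a decoupled, programmable control plane looks like in code, here is a minimal OpenFlow 1.3 controller application written with the open-source Ryu framework (used here purely for illustration; the RG-IONC controller described below is a separate product). It installs a table-miss flow entry so that packets with no matching rule are punted to the controller, the starting point of most OpenFlow applications.

```python
# Minimal OpenFlow 1.3 controller app (Ryu framework) illustrating centralized control.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class TableMissToController(app_manager.RyuApp):
    """Install a table-miss rule so unmatched packets are sent to the controller."""
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_features(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        match = parser.OFPMatch()                       # wildcard match (lowest priority)
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))
```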
Solution Components
●Hardware Switching Devices:
Ruijie Newton 18000 and S6000 series platforms will fully support OpenFlow modular hardware switching
●SDN Controller RG-IONC
The Ruijie Intelligent OpenFlow Network System is an x86 hardware platform that fully supports OpenFlow and SNMP 2.0, providing the following SDN control service modules:
○Switch/host/topology management, L2/L3 communication
○Traffic editing/path calculation/static routing/DHCP
○MPLS L3 VPN service
○Virtual tenant network service
RG-N18000 Series Offers a Comprehensive SDN Solution
High Reliability & Energy-saving Design
Redundant design of the RG-N18000 Series key components delivers excellent protection: control engine 1+1 redundancy, fabric engine N+1 redundancy, fan N+M redundancy and power module N+M redundancy. All redundant components are hot-swappable to enhance the reliability and availability of the device to the maximum extent. Hot patch is also supported to enable online upgrade of devices.
The series supports GR for OSPF/IS-IS/BGP and BFD for VRRP/OSPF/BGP4/IS-IS/IS-ISv6/MPLS/static routing, enabling fast fault detection for these protocols and reducing fault detection time to less than 50 ms.
The RG-N18000 Series adopts 40nm chip technology, more energy efficient than the traditional 90nm and 65nm. Multi-core CPU supports dynamic power management with all fiber ports adopting non-PHY design to reduce power consumption. All Ethernet ports support the Energy-Efficient Ethernet (EEE) standard to save power under light load.
The internal system is designed for low-voltage power supply with high-efficiency modular power modules, forming a more efficient power supply system. The smart fans support 256 speed levels with precise temperature control, saving energy and controlling noise. The device can run for long periods at high temperature or in harsh environments, yielding significant savings on air-conditioning energy.
Abundant Energy Conservation Features
Multi-processing Modular Operating System
Since 1998, Ruijie has been investing in the R&D of modular operating systems. The RG-N18000 software platform is built on the next-generation RGOS 11.X multi-process modular operating system, integrating loosely coupled service features such as firewall, wireless, IPFIX and authentication into a unified cloud network operating system. The platform also supports full virtualization and offers rich data center and campus network features. Key availability capabilities such as multi-process modules, process backup and hot patching have reached an industry-leading level.
Architecture and Benefits of Multi-process Modular Operating System
Model | RG-N18007 | RG-N18010 | RG-N18012 | RG-N18014 |
---|---|---|---|---|
Module Slots | 7 (2 for control engines) | 10 (2 for control engines) | 12 (2 for control engines) | 14 (2 for control engines) |
Modular Power Slots | 6 (4 for system power; 2 for PoE power) | 10 (8 for system power; 2 for PoE power) | 6 (all for system power) | 8 (all for system power) |
Fan Slots | 1 | 4 | 2 | 5 |
Control Engine Slots | 2 | 2 | 2 | 2 |
Service Module Slots | 5 | 8 | 10 | 12 |
Fabric Engine Slots | N/A | 4 | 4 | 4 |
Switching Capacity | 50Tbps/208.5Tbps | 80Tbps/333.6Tbps | 100Tbps/417Tbps | 120Tbps/500.4Tbps |
Packet Forwarding Rate | 900Mpps | 11,520Mpps | 3,840Mpps | 17,280Mpps |
Max. Number of 10GE Ports | Up to 240 | Up to 768 | Up to 480 | Up to 1,152 |
Max. Number of 40GE Ports | Up to 60 | Up to 192 | Up to 120 | Up to 288 |
Max. Number of 100GE Ports | N/A | Up to 96 | Up to 120 | Up to 144 |
PoE | Support | Support | N/A | N/A |
Port Buffer | Up to 16MB | Up to 24GB | Up to 24MB | Up to 24GB |
ARP Table | Up to 170K |
MAC Address | Up to 512K | Up to 512K | Up to 225K | Up to 512K |
Routing Entries | Up to 384K |
Routing Table Size | Up to 12K/6K | Up to 384K/128K | Up to 384K/128K | Up to 384K/128K |
Multicast Entries | Up to 16K/8K |
ACL Entries | Up to 8K |
VLAN | 4K |
QinQ | Basic QinQ, Flexible QinQ |
Link Aggregation | AP, LACP |
Port Mirroring | Many-to-one mirroring, One-to-many mirroring, Flow-based mirroring, Cross-device mirroring, VLAN-based mirroring, VLAN-filtering mirroring, AP-port mirroring, RSPAN |
Spanning Tree Protocols | STP, RSTP and MSTP |
DHCP | DHCP relay, DHCP snooping, DHCP server, DHCP client |
Multiple Spanning Tree (MST) Instances | 64 (excluding default instance 0) |
Maximum Aggregation Ports (AP) | Up to 256 |
Virtual Routing and Forwarding (VRF) Instances | Up to 2K |
Data Center Unified Network Features | L2 GRE |
VXLAN | VXLAN Layer 2 Bridge, VXLAN Layer 3 Bridge, EVPN VXLAN |
SDN | OpenFlow |
VSU (Virtual Switch Unit) | Up to 2 stack members |
VSD (Virtual Switch Device) | Up to 12 VSD units |
L2 Features | Jumbo Frame, 802.1Q, STP, RSTP, MSTP, Port-based VLAN, Super VLAN, Private VLAN, Protocol-based VLAN, IP subnet-based VLAN, Guest VLAN, GVRP, QinQ, Flexible QinQ, LLDP, ERPS (G.8032) (Note: Guest VLAN is supported only by specific software versions) |
Layer 2 Protocols | IEEE 802.1d (STP), IEEE 802.1w (RSTP), IEEE 802.1s (MSTP), IGMP Snooping, Jumbo Frame (9K bytes), IEEE 802.1ad (QinQ and flexible QinQ), GVRP |
Layer 3 Features | ARP, IPv4/v6, PBR v4/v6 |
Layer 3 Protocols (IPv4) | Ping, Traceroute, Equal-Cost Multi-Path Routing (ECMP), URPF, GRE Tunnel (4 over 6), GRE Tunnel (6 over 4), IPv4 VRF |
Centralized Authentication (with RG-SAM+ Integration) | 60K IPv4/IPv6 dual-stack concurrent users; authentication speed of 1,000 devices per second; authentication modes including 802.1X/Portal/MAC/IPoE; Portal authentication, RADIUS and TACACS+ user authentication; Layer 2 and Layer 3 portal authentication; traffic billing, traffic control and refined management; gateway authentication |
IPv4 Features | Static routing, RIP, OSPF, IS-IS, BGP4, VRRP, Equal-cost routing, Policy-based routing, GRE tunnel |
IPv6 Features | Static routing, OSPFv3, BGP4+, IS-ISv6, MLDv1/v2, VRRPv3, Equal-cost routing, Policy-based routing, Manual tunnel, Auto tunnel, ISATAP tunnel, GRE tunnel |
Basic IPv6 Protocols | DHCP Relay v6, DHCP Server v6, Telnet v6, TFTP Client v6, FTP v6, NTP client v6, NTP server v6 |
IPv6 Routing Protocols | RIP, RIPng, OSPFv2/v3, BGP4, BGP4+, IS-ISv4/v6, Routing Policy |
IPv6 Tunnel Features | 6over4 Manual Tunnel, 6to4 Auto Tunnel, Manual Tunnel, Auto Tunnel, ISATAP Tunnel, IPv4 over IPv6 Tunnel, IPv6 over IPv6 Tunnel, GRE Tunnel (4 over 6), GRE Tunnel (6 over 6) |
Multicast | IGMP v1/v2/v3, IGMP proxy, Multicast routing protocols (PIM-DM, PIM-SM, PIM-SSM), MLD, Multicast static routing |
MPLS | MPLS forwarding, MPLS VPN, VPLS/VPWS, LDP, LSP |
G.8032 | Support |
ACE Capacity | Up to 8K |
ACL | Standard/Extended/Expert ACL; ACL 80; IPv6 ACL |
QoS | 802.1p, Queue scheduling mechanisms (SP, WRR, DRR, WFQ, SP+WFQ, SP+WRR, SP+DRR, and 8 hardware queues at egress ports), RED/WRED, Input/output port-based rate limiting |
IPv6 ACL | Support |
Reliability | Control engine 1+1 redundancy; power supply 1+1 redundancy; Hot-swappable components; Hot patch and online patch upgrade; GR for OSPF/IS-IS/BGP; BFD for VRRP/OSPF/BGP4/ISIS/ISISv6/static routing |
EEE | Support EEE (802.3az) |
Security | NFPP (Network Foundation Protection Policy), CPP (CPU Protection), DAI, ARP Check, Port Security, IP Source Guard, 802.1X, Portal authentication, RADIUS and TACACS+ user login authentication, uRPF, Account privileges and password security policy, Unknown multicast packets not delivered to CPU and unknown unicast suppression, SSHv2 for a secure and encrypted channel for user login |
Manageability | Console/AUX Modem/Telnet/SSH2.0 command line configuration; FTP, TFTP, Xmodem file upload/download management; SNMP v1/v2c/v3; RMON; NTP clock; Fault alarm and self-recovery; System log; sFlow |
Hot Patch | Support |
CWMP | Support |
Smart Temperature Control | Fan speed auto-adjustment; Fan malfunction alerts; Fan status check |
Smart Power Supply | Power control and management |
Other Protocols | DHCP client, DHCP relay, DHCP server, ARP proxy, Syslog |
Dimensions | 442 x 598 x 352.8 | 442 x 836 x 797.3 | 442 x 725 x 708.4 | 442 x 814 x 886.2 |
Rack Height | 8RU | 18RU | 16RU | 20RU |
Weight | 30.2kg | 103.32kg | 105kg(total weight of empty chassis and fans) | 107.55kg |
MTBF | 229K hours | 259K hours | 259K hours | 216K hours |
Power Supply | RG-PA1600I: 90-180V~ 1200W; 180-264V~ 1600W RG-PA600I: 90-180V~ 600W; 180-264V~ 600W RG-PD1600I: -40.5VDC-75VDC ~1600W RG-PD600I: -40.5VDC-75VDC ~600W RG-PA1600I-PL: 90-180V~1000W; 180-264V~1600W RG-PA3000I-PL: 90-180V~ 1200W; 210-264V~ 3000W | RG-PA1600I: 90-180V~ 1200W; 180-264V~ 1600W, 16A RG-PA600I: 90-180V~ 600W; 180-264V~ 600W, 10A RG-PD1600I: -40.5VDC-75VDC ~1400W RG-PD600I: -40.5VDC-75VDC ~600W RG-PA1600I-PL: 90-180V~1000W; 180-264V~1600W (PoE) RG-PA3000I-PL: 90-180 V~1200W; 210-264V~3000W (PoE) | RG-PA1600I: 90-180V~1200W; 180-264V~ 1600W; 16A RG-PA600I: 90-180V~ 600W; 180-264V~ 600W; 10A RG-PD1600I: -40.5VDC-75VDC ~1400W RG-PD600I: -40.5VDC-75VDC ~600W | RG-PA1600I: 90-180V~ 1200W; 180-264V~ 1600W, 16A RG-PA600I: 90-180V~ 600W; 180-264V~ 600W, 10A RG-PD1600I: -40.5VDC-75VDC ~1400W RG-PD600I: -40.5VDC-75VDC ~600W |
Power Consumption | <432W | <730W | <518W | <860W |
PoE Power | <6,000W | <6,000W | N/A | N/A |
Temperature | Operating: 0ºC to 50ºC; Storage: -40ºC to 70ºC |
Humidity | Operating: 10% to 90% RH (non-condensing); Storage: 5% to 95% RH |
Operating Altitude | -500m to 4,000m |
Weight and Typical Power
The table below lists the weight and maximum power consumption of the Newton 18000 switch platform.
Component | Weight | Maximum Power |
---|---|---|
Main Chassis (RG-N18007) | 30.2kg | 432W |
Main Chassis (RG-N18010) | 103.32kg | 730W |
Main Chassis (RG-N18012) | 105kg | 518W |
Main Chassis (RG-N18014) | 107.55kg | 860W |
Control Engine | 1.68kg | 40W |
Control Engine | 2.0kg | 102W |
Control Engine | 2.0kg | 102W |
Control Engine | 1.68kg | 40W |
Control Engine | 2.04kg | 95W |
Control Engine | 1.68kg | 40W |
Control Engine | 2.0kg | 95W |
Control Engine | 3.22kg | 40W |
Control Engine | 3.58kg | 100W |
Fabric Engine | 2.72kg | 90W |
Fabric Engine | 2.8kg | 107W |
Fabric Engine | 3.36kg | 313W |
Fabric Engine | 2.2kg | 107W |
Fabric Engine | 3.76kg | 158W |
Fabric Engine | 4.56kg | 425W |
Power Supply | 2.04kg | N/A |
Power Supply | 1.64kg | N/A |
Power Supply | 1.6kg | N/A |
Power Supply | 1.6kg | N/A |
Power Supply | 1.6kg | N/A |
Power Supply | 1.6kg | N/A |
Power Supply | 1.3kg | N/A |
Line Card & Service Module | 3.76kg | 135W |
Line Card & Service Module | 3.86kg | 175W |
Line Card & Service Module | 5.06kg | 267W |
Line Card & Service Module | 3.7kg | 95W |
Line Card & Service Module | 3.8kg | 175W |
Line Card & Service Module | 4.04kg | 95W |
Line Card & Service Module | 3.76kg | 100W |
Line Card & Service Module | 3.42kg | 85W |
Line Card & Service Module | 3.52kg | 120W |
Line Card & Service Module | 4.20kg | 156W |
Line Card & Service Module | 4.50kg | 250W |
Line Card & Service Module | 5.20kg | 296W |
Line Card & Service Module | 4.25kg | 232W |
Line Card & Service Module | 4.0kg | 208W |
Line Card & Service Module | 3.92kg | 200W |
Line Card & Service Module | 4.95kg | 374W |
Line Card & Service Module | 5.4kg | 366W |
Multiservice Module | 4.58kg | 190W |
Multiservice Module | 4.58kg | 190W |
Multiservice Module | 4.58kg | 190W |
The Ruijie Newton 18000 platform is applicable to a wide range of deployment scenarios. The series can act as the core of a large campus network, a data center network, an integrated data center and campus network, or a large MAN. Respective illustrations are shown below.
Large Campus Network Core
1. Main Chassis & Engine Management
Select the main chassis and control engine according to specific product model.
Model | Description |
---|---|
RG-N18000 Series Main Chassis & Control Engine |
RG-N18014 | 14-slot Chassis with fan (without power supply) |
RG-N18012 | 12-slot Chassis with fan (without power supply) |
RG-N18010 | 10-slot Chassis with fan (without power supply) |
RG-N18007 | 7-slot Chassis with fan (without power supply) |
M18014-CM II | N18014 2nd Generation Control Engine |
M18014-CM | N18014 Control Engine |
M18012-CM II | N18012 2nd Generation Control Engine |
M18012-CM | N18012 Control Engine |
M18010-CM II | N18010 2nd Generation Control Engine |
M18010-CM | N18010 Control Engine |
M18007-CM | N18007 1st Generation Control Engine |
M18007-CM II | N18007 2nd Generation Control Engine |
M18007-CM II Lite | N18007 2nd Generation Lite Control Engine |
2. Power Supply
Select at least one power module, or more for N+M redundancy, according to the power supply requirements of the device.
Model | Description |
---|---|
RG-PA600I | N18000 Power Module (support redundancy, AC, 600W, 10A) |
RG-PD600I | N18000 Power Module (support redundancy, DC, 600W) |
RG-PA1600I | N18000 Power Module (support redundancy, AC, 1600W, 16A) |
RG-PD1600I | N18000 Power Module (support redundancy, DC, 1400W) |
RG-PA1600I-PL | N18000 PoE Power Module (support redundancy, AC, 1600W, 16A) |
RG-PA3000I-PL | N18000 PoE Power Module (support redundancy, AC, 3000W, 16A) |
3. Fabric Engine
Select at least 1 and up to 4 fabric engines. At least 2 are recommended to ensure fabric engine redundancy.
Model | Description |
---|---|
M18014-FE-D III | N18014 D Series 3rd Generation Fabric Engine (For ED and DB series Line Card and Service Module) |
M18014-FE-D I | N18014 D Series 1st Generation Fabric Engine (For ED and DB series Line Card and Service Module) |
M18012-FE-D I | N18012 D Series 1st Generation Fabric Engine |
M18010-FE-D III | N18010 D Series 3rd Generation Fabric Engine (For ED and DB series Line Card and Service Module) |
M18010-FE-D I | N18010 D Series 1st Generation Fabric Engine (For ED and DB series Line Card and Service Module) |
M18010-FE-C I | N18010 C Series 1st Generation Fabric Engine (For CB series Line Card and Service Module) |
4. Line Card & Service Module
Select the host line cards according to your application scenario.
Model | Description |
---|---|
M18000-44SFP4XS-ED | 44 Gigabit Ethernet fiber ports (SFP, LC), 4-port 10GE Ethernet optical interface board (SFP+, LC) |
M18000-44SFP4XS-EF | 44 Gigabit Ethernet fiber ports (SFP, LC), 4-port 10GE Ethernet optical interface board (SFP+, LC) |
M18000-48GT-ED | 48-port Gigabit Ethernet electrical interface board (RJ45) |
M18000-48GT-EF | 48-port Gigabit Ethernet electrical interface board (RJ45) |
M18000-48GT-P-ED | 48-port Gigabit PoE Ethernet electrical interface board (RJ45) |
M18000-24GT20SFP4XS-ED | 24-port Gigabit Ethernet electrical interface board (RJ45), 20 Gigabit Ethernet fiber ports (SFP, LC), 4 10GE Ethernet fiber ports (SFP+, LC) |
M18000-08XS-ED | 8 10GE fiber ports (SFP+, LC) |
M18000-08XS-EF | 8 10GE fiber ports (SFP+, LC) |
M18000-16XT-CB | 16 10GE copper ports (RJ45) |
M18000-48XS-DC | 48 10GE fiber ports (SFP+, LC) |
M18000-24XS4QXS-DC | 24 10GE fiber ports (SFP+, LC) + 4-port 40GE optical interface module (QSFP+, MPO) |
M18000-12QXS-DC | 12 40GE fiber ports (QSFP+, MPO) |
M18000-24QXS-DB | 24 40GE fiber ports (QSFP+, MPO) |
M18000-12CQ-EH | 12 100GE fiber ports (QSFP28) |
Multi-service Line Card |
RG-WALL 1600-B-ED | Firewall card, 2 10GE fiber ports (SFP+, LC) |
M18000-MSC-ED | Service module with gateway authentication and accounting functions, integrated with RG-N18000 series core switches to support authentication, throughput-based accounting, URL audit and flow control |
M18000-WS-ED | WS Series Wireless Controller Module for RG-N18000 Switch Series, 2 1G/10GBASE-X SFP+ ports, 128 APs License by default, maximum 2560 APs or 4000 Wall APs License |
5. Transceiver and Cable
Model | Description |
---|---|
Mini-GBIC-SX-MM850 | 1000BASE-SX mini GBIC Transceiver (850nm) |
Mini-GBIC-LX-SM1310 | 1000BASE-LX mini GBIC Transceiver (1310nm) |
Mini-GBIC-GT | 1000BASE-TX, SFP Transceiver (100m) |
Mini-GBIC-LH40-SM1310 | 1000BASE-LH mini GBIC Transceiver, SM (1310nm, 40km) |
Mini-GBIC-ZX50-SM1550 | 1000BASE-ZX mini GBIC Transceiver (1550nm, 50km) |
Mini-GBIC-ZX80-SM1550 | 1000BASE-ZX mini GBIC Transceiver (1550nm, 80km) |
Mini-GBIC-ZX100-SM1550 | 1000BASE-ZX mini GBIC Transceiver (1550nm, 100km) |
XG-SFP-AOC1M | 10GBASE SFP+ Optical Stack Cable (included both side transceivers) , 1 Meter |
XG-SFP-AOC3M | 10GBASE SFP+ Optical Stack Cable (included both side transceivers), 3 Meter |
XG-SFP-AOC5M | 10GBASE SFP+ Optical Stack Cable (included both side transceivers), 5 Meter |
XG-SFP-AOC10M | 10GBASE SFP+ Optical Stack Cable (included both side transceivers), 10 Meter |
XG-SFP-SR-MM850 | 10GBASE-SR, SFP+ Transceiver, MM (850nm, 300m, LC) |
XG-SFP-LR-SM1310 | 10GBASE-LR, SFP+ Transceiver, SM (1310nm, 10km, LC) |
XG-SFP-ER-SM1550 | 10GBASE-ER, SFP+ Transceiver, SM (1550nm, 40km, LC) |
XG-SFP-ZR-SM1550 | 10GBASE-ZR, SFP+ Transceiver, SM (1550nm, 80km, LC) |
40G-AOC-5M | 40G Cable for QSFP+, 5M |
40G-QSFP-SR-MM850 | 40GBASE-SR, QSFP+ Transceiver, MM (850nm, 100m with OM3 fiber, 150m with OM4 fiber, MPO) |
40G-QSFP-LSR-MM850 | 40GBASE-SR, QSFP+ Transceiver, MM (850nm, 300m with OM3 fiber, 400m with OM4 fiber, MPO) |
40G-QSFP-LR4-SM1310 | 40G LR Single-mode Fiber Module, QSFP+ Transceiver, LC, 10km (1310nm) |
40G-QSFP-LR4-PSM-SM1310 | 40G LR Single-mode 1-to-4 Fiber Module, QSFP+ Transceiver, LC, 10km (1310nm) |
QSFP-MPO8-4LC-SM-1M | 40G Single-mode 1-to-4 Fiber Jumper, MPO/APC-4*LC/PC, 8 cores, 1m, for 40G-QSFP-LR4-PSM-SM1310 |
QSFP-MPO8-4LC-MM-1M | 40G Multimode 1-to-4 Fiber Jumper, MPO/APC-4*LC/PC, 8 cores, 1m, for 40G-QSFP-SR-MM850 and 40G-QSFP-LSR-MM850 |
100G-QSFP-LR4-SM1310 | 100G LR Fiber Module, QSFP28 Transceiver, LC, 10km (1310nm) |
| 100BASE-LX, SFP Transceiver, MM (1310nm, 2km, LC) |
| 100BASE-LH, SFP Transceiver, SM (1310nm, 15km, LC) |
| 1000BASE-LX, SFP Transceiver, BIDI-TX1310/RX1550, 20km, LC |
| 1000BASE-LX, SFP Transceiver, BIDI-TX1550/RX1310, 20km, LC |
| 1000BASE-LH, SFP Transceiver, BIDI-TX1310/RX1550, 40km, LC |
| 1000BASE-LH, SFP Transceiver, BIDI-TX1550/RX1310, 40km, LC |
| Ruijie Console Cable RJ45-to-DB9 (2m) |