Our Network Data Center

 

ClassWorld systems are located in the new state-of-the-art Verio Data Center near Denver, Colorado. Verio, a subsidiary of Tokyo-based NTT Communications, is one of the world's largest hosting and data services companies.

  • Reliable, blackout-free power sources
  • High-bandwidth "pipes" to the Internet
  • Connectivity to multiple networks for low-latency "pings"

Please Note: ClassWorld is not a reseller for Verio and is not otherwise affiliated with Verio or its Hosting Division.

About the Data Center

Our Verio Data Center is a "showplace" located near Verio's corporate headquarters. It features ...

Reliable Power

  • Redundant uninterruptible power supplies (UPS) and switches
  • Redundant back-up diesel generators

Physical Security

  • Fire detection and suppression
  • Multiple levels of access security including security guards, biometric hand scans and video surveillance

Environment Control

  • Redundant air conditioning with separate cooling zones and humidity control
  • Raised floors for even air circulation and secure cable routing

Continuous Monitoring (24/7/365)


Network Connectivity and Bandwidth

ClassWorld's Verio data center sits on a major Point of Presence (POP), with two Qwest OC-3s connecting to Verio Chicago and Verio Palo Alto, and two MCI OC-12s connecting to the same points. The data center is connected to the POP by four OC-192 connections. Because the OC-192s are already in place, bandwidth can be upgraded simply by adding hardware on site; there is no need to wait months for new lines to be installed.
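To put those circuit names in perspective, SONET OC-n circuits run at n × 51.84 Mbit/s, so the capacities above work out as follows (a quick illustrative calculation; the link counts come from the description above, and the arithmetic ignores framing overhead):

```python
# SONET line rates: an OC-n circuit carries n x 51.84 Mbit/s.
OC_BASE_MBPS = 51.84

def oc_rate_mbps(n: int) -> float:
    """Line rate of an OC-n circuit in Mbit/s."""
    return n * OC_BASE_MBPS

# Backbone circuits out of the POP: two OC-3s plus two OC-12s.
backbone_mbps = 2 * oc_rate_mbps(3) + 2 * oc_rate_mbps(12)

# Data-center-to-POP capacity: four OC-192s.
dc_to_pop_mbps = 4 * oc_rate_mbps(192)

print(f"Backbone circuits:  {backbone_mbps:.2f} Mbit/s")
print(f"DC-to-POP capacity: {dc_to_pop_mbps / 1000:.2f} Gbit/s")
```

The OC-192 runs to the POP carry roughly 25 times the capacity of the current backbone circuits, which is why upgrades need only new hardware rather than new fiber.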

Verio network map



Backbone Connectivity

ClassWorld systems in the data center connect to four Cisco 6509 aggregation switch/routers, which in turn have multiple connections via Foundry BigIron 4000 Gigabit Ethernet switches to two Juniper M20 backbone routers. Each Juniper has multiple connections to the backbone OC-n circuits.

The data center itself connects to many different Internet backbones, including UUNET, Sprint, Cable & Wireless, CRL, Qwest, Exodus, AGIS and Net Axs. We also have private, direct-peering DS3s between our location and those of America Online and PSINet. The data center also operates its own DS3 to MAE-East to peer with many of the smaller Tier-1 providers, as well as another DS3 to the ATM switch located there.

By connecting to multiple backbones, traffic can be distributed across many routes. This design also means that network connectivity does not depend on any single Internet backbone: when problems occur, traffic is rerouted automatically, preserving the integrity of the network and continued access for our high-speed server clients. This takes the term "multi-homing" to a whole new level.

Bandwidth utilization is presently about 25% during peak traffic times, leaving substantial headroom. If one of the backbone connections experiences problems, its traffic can simply be rerouted over the other paths, so users continue to receive fast access to sites hosted on our network.
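The headroom claim can be sanity-checked with simple arithmetic. Assuming a hypothetical four equal backbone links (the link count is ours for illustration; only the 25% peak figure comes from the text above), losing one link still leaves the survivors comfortably below capacity:

```python
# Rough headroom check: if peak utilization is 25% of total capacity,
# how loaded is each surviving link after one of N equal links fails?
# (N = 4 is a hypothetical link count used only for illustration.)

def surviving_utilization(peak_util: float, links: int) -> float:
    """Per-link utilization after one of `links` equal links fails,
    assuming traffic is spread evenly over the survivors."""
    total_traffic = peak_util * links      # in units of one link's capacity
    return total_traffic / (links - 1)

util_after_failure = surviving_utilization(0.25, 4)
print(f"Per-link utilization after one failure: {util_after_failure:.0%}")
```

At 25% peak utilization, a single failure pushes the remaining links to only about a third of capacity, so rerouting does not degrade performance.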

In addition, the network runs Border Gateway Protocol version 4 (BGP4). BGP is used by providers with more than one access point to the Internet and helps create a truly redundant network. Ideally, a leased-line failure causes the BGP routing session on the failed line to close, after which the router on a working circuit begins to accept the additional traffic.

In other words, traffic from a down circuit is redistributed across the other circuits, maintaining network integrity. Multi-homed providers that are correctly set up can actually be more reliable than a single-backbone provider because they have multiple paths to multiple providers.
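The failover behavior described above can be sketched in a few lines. This is a toy model of BGP-style path selection, not a real BGP implementation: the provider names are taken from the list above, but the AS-path lengths are hypothetical, and real BGP applies many more tie-breaking rules.

```python
# Toy model of BGP-style failover: each upstream provider advertises a
# path to a destination; when a session drops, its route is withdrawn
# and traffic shifts to the shortest remaining AS path.

# Hypothetical AS-path lengths per upstream for some destination prefix.
routes = {"UUNET": 2, "Sprint": 3, "Qwest": 2}
sessions_up = {"UUNET": True, "Sprint": True, "Qwest": True}

def best_path(routes, sessions_up):
    """Pick the upstream with the shortest AS path among live sessions."""
    live = {p: length for p, length in routes.items() if sessions_up[p]}
    if not live:
        raise RuntimeError("no live upstream sessions")
    return min(live, key=live.get)

print(best_path(routes, sessions_up))   # prefers UUNET (path length 2)

# The leased line to UUNET fails: its BGP session closes, the route is
# withdrawn, and a surviving circuit absorbs the traffic.
sessions_up["UUNET"] = False
print(best_path(routes, sessions_up))   # now Qwest (path length 2)
```

The key property is that no manual intervention is needed: withdrawing the dead route is enough for the next-best path to take over.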

Internal Connectivity

A provider's local area network is too often overlooked as a source of latency. The two main sources of latency on a full-time Internet connection are the user's local area network and the Internet provider's local area network. The local network in our data center is anchored by Cisco 5500 Series Ethernet switches and high-end Cisco routers (such as the Cisco 7513). This top-of-the-line network hardware ensures that data requests reach their destination and leave the network as fast as possible.

We use Ethernet switches instead of hubs for both their speed and their security capabilities. Whereas only one computer plugged into a hub can transmit at a time, all the machines connected to a switch can transmit simultaneously. This means more data can travel through a switch, and each server acts as its own node on the network. Furthermore, since each server is its own node, it is difficult for hackers to trace data packets containing sensitive information (e.g., passwords) to a particular server.
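The speed difference between a hub and a switch comes down to collision domains, and a back-of-the-envelope calculation makes it concrete (port speed and host count here are illustrative assumptions, not figures from the data center):

```python
# Back-of-the-envelope aggregate throughput: shared hub vs. switch.
PORT_MBPS = 100   # hypothetical Fast Ethernet port speed
HOSTS = 8         # hypothetical number of attached servers

# On a hub, every port shares one collision domain, so only one
# host can transmit at a time: aggregate throughput = one port.
hub_aggregate = PORT_MBPS

# On a switch, each port is its own collision domain, so all hosts
# can transmit simultaneously, up to the full rate per port.
switch_aggregate = PORT_MBPS * HOSTS

print(f"Hub aggregate:    {hub_aggregate} Mbit/s")
print(f"Switch aggregate: {switch_aggregate} Mbit/s")
```

With eight hosts, the switch offers eight times the aggregate capacity of a hub built from the same ports, and the gap grows with every host added.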

Servers on the network do not share a single path (such as a T3). Instead, each server connects to a high-speed Ethernet switch, which in turn connects to the core router at the data center. From the core router, data is sent back to the end user across the fastest available path. Whereas statically routing traffic over one path creates a single point of failure, this distributed architecture ensures that users can access data extremely quickly, with multiple paths both into and out of our network.

 

 
 