Network Hardware


The saying that a chain is only as strong as its weakest link definitely applies to the network. For a network operating system like Linux, the network hardware can become a deciding factor in how well the system performs (or at least in how the performance is perceived). It is therefore essential that your network hardware can handle not only the load you have now, but also the load as your network grows.


One of the problems I encountered when researching this material is that there is so much material available on so many different products. In addition, networking covers such a wide range of products that you could write an entire book just on the networking aspects. In fact, there are a number of good books that do just that.


Since I cannot talk about every aspect, I decided to limit my coverage to the network interface card (NIC), which is the first piece of hardware in the long journey between workstation and server. In addition, the most common pieces of hardware on this journey are routers, bridges, hubs, and switches (if you have a twisted pair network).


As its name implies, a router routes the traffic along the network. However, it does more than just decide what path to take. Modern routers have the ability to determine whether the packet should be sent at all. This decision can be based on the port as well as on which machine is sending or receiving the packet. For example, it is common to have a router that only allows connections to a specific machine using only the HTTP or SMTP (email) protocols. Other protocols, or even these protocols to other machines, are blocked. This is the basic functionality of a firewall.
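
To make the idea concrete, here is a minimal sketch in Python of the kind of decision a filtering router makes. The host names, ports, and rule table are purely illustrative and not the configuration of any particular router.

    # A minimal sketch of port- and host-based packet filtering.
    # The rule table below is illustrative only.

    # Each rule: (destination host, destination port) that is allowed through.
    ALLOWED = {
        ("www.example.com", 80),     # HTTP, but only to the web server
        ("mail.example.com", 25),    # SMTP, but only to the mail server
    }

    def forward_packet(dest_host: str, dest_port: int) -> bool:
        """Return True if the router should pass the packet along."""
        return (dest_host, dest_port) in ALLOWED

    # HTTP to the web server gets through; HTTP to any other machine does not.
    print(forward_packet("www.example.com", 80))       # True
    print(forward_packet("files.example.com", 80))     # False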


Typically, routers are a connection between two separate networks. Depending on the router itself, you could have several different networks connected to the same router. In fact, it is possible to have different kinds of physical networks connected to the same router, such as serial (to connect to modems, for example), twisted pair, and optical.


A hub is often called a repeater, because it serves as a hub for the network cables as well as "repeats" the signal, allowing you to transmit over greater distances. A hub is needed when you are using twisted pair cables, and every node (client and server) must be connected to a hub. Since a hub sits at the bottom of the protocol stack, it transmits every type of packet.


Typically, hubs are used to organize the nodes on your network into physical groups. However, they do not perform any logical functions, such as determining which routes to take (that's what a router does). Despite this, most hubs are capable of collision detection.


A modification of a hub is a bridge. Bridges allow you to physically separate network segments and can extend the length of your cables. The difference lies in the fact that the bridge determines whether or not a packet is intended for a machine on the same segment. If it is, the packet can be ignored and not passed through to the other segments.


The key lies in what is called a collision domain. In essence, this is the set of nodes whose packets can collide with each other. The more collisions you have, the worse your network performance, because collisions mean more network traffic and other machines need to wait. By grouping machines that communicate with each other, you reduce the collisions with unrelated machines.


Because a bridge keeps packets within their local collision domain, each domain has fewer collisions. Keep in mind that this only helps when there is a lot of traffic between the nodes within a segment, such as in a work group. If you have a strict client-server model, a bridge may not bring you much advantage.


Another way of significantly reducing collisions is to use a switch. The difference is that the switch analyzes packets to determine the destination and makes a virtual connection between the two ports, thus reducing the number of collisions. Using the store-and-forward method, packets are stored within the switch before being sent along. The cut-through method reads just the header to determine the destination and begins forwarding immediately.
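
As a rough illustration (not how any particular switch is implemented), the practical difference is how much of the frame must arrive before the switch can start sending it out the destination port:

    # A rough sketch of the two forwarding strategies.
    HEADER_BYTES = 14      # destination + source + type fields of the Ethernet header
    FRAME_BYTES = 1514     # a full-sized Ethernet packet

    def bytes_before_forwarding(method: str, frame_len: int = FRAME_BYTES) -> int:
        """How much of the frame must arrive before forwarding can start."""
        if method == "store-and-forward":
            return frame_len        # the whole frame is buffered (and can be checked) first
        if method == "cut-through":
            return HEADER_BYTES     # only the header is needed to pick the port
        raise ValueError(method)

    print(bytes_before_forwarding("store-and-forward"))   # 1514
    print(bytes_before_forwarding("cut-through"))         # 14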


An important aspect to look at is obviously the transfer speed of the card. One common problem I have seen in companies without a dedicated IT organization (and in some cases with one) is forgetting the saying about the weakest link. This happens when they buy 10Mbit cards for their workstations (or are perhaps using older models), but install a 100Mbit card in their server. The problem is that the server can only send at 10Mbit, because that's what the clients can handle.


As we discussed previously, the two most common Ethernet types are twisted pair and thin-wire. Traditional Ethernet was limited to only 10Mbit and has been essentially replaced by FastEthernet, which can handle 100Mbit. The problem is that you may not be able to reuse other existing network components, such as the cables, if you were using thin-wire. The reason is simply that thin-wire is unable to transmit at the higher speed. Twisted pair, on the other hand, can handle it.


One place this is commonly noticed is the connectors on the network cards themselves. You will often find cards with "10/100" or something similar in their name. As you might guess, this indicates they can handle either 10 or 100Mbit, depending on the speed of the hub to which they are connected. I have seen some cards that require you to set the speed either in software or in hardware.


However, my 3Com cards detect the speed the hub uses
and adjust automatically. In my office at home, I have three computers all
hooked through a 10Mbit hub. Since very little data is going through the
network, this was sufficient as well as less expensive. Even so, my 3Com cards
are all 10/100 and adjust to the slower speed. When I upgrade to a faster hub, I
do not need to replace the cards or do any configuration. I just plug the cables
into the new hub and go.


This may sound like a minor point, and it is for my three-node network. However, at work, with hundreds of nodes, it becomes a major issue. Imagine having to change the hardware settings on hundreds of PCs. That means opening the cases, pulling out the card, setting the jumper, putting the card back in, and then closing the case. Granted, most newer cards are plug and play, but are you sure yours is?


Some cards, like my 3Com Fast EtherLink XL 3C905B-COMBO, have connectors for thin-wire, thick-wire, and twisted pair, but only the twisted pair connector supports the 100Mbit speed. Note also that most of the 3Com Fast EtherLink 10/100 cards just have the twisted-pair connector.


Keep in mind that even if you do use the twisted pair connector, you are limited by the speed of the other hardware. I chose a 10Mbit hub because I did not want or need to spend the extra money for a 100Mbit hub. Even in a business, you may not need more. If all of your applications are installed locally, with only the data on the server, you probably won't even come close to using the full 10Mbit. This is especially true if you break your network down into sub-nets separated by routers, or if you are using switches.


However, speed is not the only consideration, particularly in a server. Take the analogy of a 100-mile race between a Ferrari and a Geo Metro. The winner is fairly obvious, unless the Ferrari is loaded with bricks and has to refuel every mile. In some cases, you might have a Ferrari network card that is slowed down by other things.


There are several things your card can do to help, and my 3Com 3C980-TX Fast EtherLink Server NIC is a good example. The first is the ability to combine multiple cards into a single virtual interface. One card is processing a packet while the other is receiving, for example. The load is balanced between the cards to ensure that one is not overburdened.


The next feature is what 3Com calls self-healing drivers. Here the driver monitors the card and takes action based on what it finds. One simple example would be shutting down one card in a virtual set if it appeared to be causing too many errors.


Throughput (the true measure of speed) is increased by using 3Com's Parallel Tasking. Traditionally, network cards transfer data between the card and memory in one direction at a time. 3Com cards can transfer in both directions. In addition, there was a previous limitation with PCI cards that they could transfer a maximum of 64 bytes at once. The newest 3Com cards have increased this to 1514 bytes, the maximum for a standard Ethernet packet. This means that where previous cards might need up to 24 bus transfers to move a full-sized packet, the 3Com card can do it in a single transfer.
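
The arithmetic behind the "up to 24" figure is simple to check; the numbers below are just the ones from the paragraph above.

    # Worked arithmetic behind the "up to 24 bus cycles" figure.
    import math

    max_ethernet_packet = 1514   # bytes in a standard Ethernet packet
    old_burst_size = 64          # bytes per transfer on the older PCI cards

    print(math.ceil(max_ethernet_packet / old_burst_size))   # 24 transfers for a full packet
    print(math.ceil(max_ethernet_packet / 1514))             # 1 transfer with the larger burst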


A moment
ago, I mentioned cases where people would install 100Mbit cards in their server
and 10Mbit cards in their clients. In those cases, they actually had 10 Mbit
hubs, so the problem was as much an issue with the hub as with the speed of the
client cards. In some cases, it actually makes sense to configure your system
like that, but you need a hub that can handle the job.


One solution to the problem is the 3Com SuperStack II Dual Speed Hub. The key is part of the name: "dual speed." As its name implies, it can handle both 10Mbit and 100Mbit connections. It is able to sense the speed on each port and adjust itself for that port. This means that the connection between the hub and the server could be running at 100Mbit, with the connections between the hub and the clients (or maybe just some of the clients) running at 10Mbit.


This ends up increasing overall performance
since the hub can operate in duplex mode. That is, it can send and receive at
the same time. 10 Mbit data is being sent to the hub as it is sending 100Mbit
data to the server.


Some vendors try to save a little by making hubs that "pretend" to run at both 10 and 100Mbit. This is done by having a single port that can handle 100Mbit, which is typically connected to the server. However, this means that if you ever upgrade a single client, you have to upgrade the hub as well. The 3Com solutions automatically make the change for you.


One thing to keep in mind here is the cabling. FastEthernet requires what is referred to as category 5 cabling, whereas 10Mbit can be handled by category 3 or 4. Although you can certainly run the faster speed over category 3 cable, the number of errors increases dramatically. Packets need to be resent, and it can actually turn out to be slower than running at 10Mbit. The 3Com SuperStack addresses this issue by monitoring the frequency and type of errors. Should the errors be too high, it will automatically lower the speed to 10Mbit.


In principle, routers have the same limitations as hubs, in that they can limit, as well as be limited by, the other network components. However, there are several features that we ought to take a look at.


One feature provided by 3Com's NETBuilder routers is what is referred to as bandwidth grooming. Among other things, this allows you to prioritize the traffic on your network based on a number of different criteria. For example, you can give higher priority to specific protocols or specific ports (or both). This is useful for defining priority based on a specific application, a type of connection, and many other cases.


In addition, the NETBuilder series features dual processors. While one processor is handling traditional routing functions, such as processing the packets, the second processor concerns itself with the "grooming" functions, which greatly increases the overall performance.


There is also the issue of security. Many people think of router security only in terms of connections to the Internet. However, some companies are concerned with internal security as well. For example, it is possible with the NETBuilder routers to disallow connections from the warehouse to the main server, except on specifically defined ports. This might give the warehouse machines access to the main database application, but prevent them from poking around the file system.


One thing to keep in mind is that there are a number of differences between the behavior of a Wide Area Network (WAN) and a Local Area Network (LAN). In my opinion, the two most significant differences are that a WAN has slower speeds and that routing of the packets is the dominant behavior, compared to the fast speeds and switching of a LAN. Even if your internal network only runs at 10Mbps, it is still about 160 times faster than a typical 64Kbps WAN connection.
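
The figure of about 160 is easy to verify (using 1024 Kbit per Mbit; with 1000 it works out to roughly 156, the same order of magnitude):

    # Comparing a 10 Mbit LAN to a 64 Kbit WAN link.
    lan_kbps = 10 * 1024   # 10 Mbit expressed in Kbit
    wan_kbps = 64          # a typical leased-line WAN connection

    print(lan_kbps / wan_kbps)   # 160.0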


The result of all of this is that you typically have different kinds of equipment for each. In addition, because of the slower speeds, a WAN has less bandwidth and you are "encouraged" to reduce unnecessary traffic. This is where routing comes in. You want to limit unnecessary and even unwanted traffic. For example, we talked above about the ability of 3Com routers to direct traffic based on specific ports. In some cases, you may want to block specific ports to certain network segments to reduce the traffic, while other ports (and therefore other protocols) are still allowed. One common thing is to restrict broadcast traffic, which the 3Com routers can do.


Another thing we discussed was the ability of the 3Com routers to prioritize the packets. In most cases, applications always use the same range of ports to access other machines. For example, an Oracle database is usually accessed using port 1521. To ensure proper response times, port 1521 could be given priority over something like file data transfer. Files going across the WAN can typically be given lower priority than the database application. The 3Com router thus allows you to manage the performance on each network segment.
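
As a minimal sketch of the idea (the port-to-priority mapping here is my own illustration, not a NETBuilder configuration), packets waiting for a slow WAN link could be queued according to the priority of their destination port:

    # A minimal sketch of port-based prioritization; the mapping is illustrative only.
    import heapq

    PRIORITY = {
        1521: 0,   # Oracle database traffic: highest priority
        20:   2,   # FTP data transfers: lowest priority
    }
    DEFAULT_PRIORITY = 1

    queue = []     # priority queue of (priority, sequence, packet)
    sequence = 0

    def enqueue(dest_port: int, packet: bytes) -> None:
        """Queue a packet for transmission according to its destination port."""
        global sequence
        heapq.heappush(queue, (PRIORITY.get(dest_port, DEFAULT_PRIORITY), sequence, packet))
        sequence += 1

    def send_next() -> bytes:
        """Transmit the highest-priority packet waiting on the link."""
        return heapq.heappop(queue)[2]

    enqueue(20, b"bulk file data")
    enqueue(1521, b"database query")
    print(send_next())   # b"database query" goes out first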


An off-shoot of this is "protocol reservation." As its name implies, a certain portion of the bandwidth is reserved for specific protocols. That means that no matter what other traffic is on the link, the reserved portion will always be available for that protocol.


Another thing to consider is how routing information is transferred between routers. Many routers use what is called "distance vector routing," in which the router determines the shortest path between two nodes. However, you may not want the router to choose the shortest path, since "short" means the number of nodes the packet goes through (hops) and not the length of the cable or the speed. Often such routers will also exchange information even though the network has not changed. In essence, this wastes bandwidth.


Instead, to limit the bandwidth used, you want all packets going to a particular subnet to always use a pre-defined route. This is a capability of "link state" routing. Although this requires more computational power than distance vector routing, it also requires a lot less bandwidth. Since routes are calculated, less data is transferred, so when a link goes down, the updated information reaches the affected routers more quickly and the new route takes effect more quickly, as the computation is faster than the network.
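
As a rough illustration of the link-state idea (not how any particular router implements it), each router keeps a map of the network and computes routes itself, weighting links by cost rather than by hop count. The topology and costs below are made up for the example.

    # A rough sketch of link-state route computation.
    # A real router builds this map from the link-state advertisements it receives.
    import heapq

    # Each entry: router -> {neighbor: link cost}.  Costs can reflect link
    # speed rather than simple hop counts.
    TOPOLOGY = {
        "A": {"B": 1, "C": 10},
        "B": {"A": 1, "C": 1},
        "C": {"A": 10, "B": 1},
    }

    def shortest_paths(source: str) -> dict:
        """Dijkstra's algorithm: cheapest cost from source to every router."""
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            cost, node = heapq.heappop(heap)
            if cost > dist.get(node, float("inf")):
                continue
            for neighbor, link_cost in TOPOLOGY[node].items():
                new_cost = cost + link_cost
                if new_cost < dist.get(neighbor, float("inf")):
                    dist[neighbor] = new_cost
                    heapq.heappush(heap, (new_cost, neighbor))
        return dist

    # A reaches C more cheaply via B (cost 2) than over the direct link (cost 10).
    print(shortest_paths("A"))   # {'A': 0, 'B': 1, 'C': 2}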


Another core aspect of the vendor you choose is the after-sales service. For most companies, the primary concern is the warranty, that is, what happens when a card malfunctions. Most warranties last a year, which is normally long enough to identify any manufacturing defects. However, even within the warranty period, you will generally find that you have to return the card either to the reseller or directly to the manufacturer. Therefore, it is a good idea to have enough spares on hand. Although you might be able to work out an arrangement with either the vendor or the reseller to send you a replacement before they receive the defective card, you could still be down for a couple of days, so spares are still a good idea.

Thin Wire versus Twisted Pair


The fact that twisted pair cabling is less expensive than thin wire is deceiving. For a given length of cable, the cable itself and the connectors are cheaper. However, you must keep in mind that with twisted pair there will be a cable from the hub to each node, including the server. In contrast, thin wire cables are laid between the nodes, forming a "loop".


Let’s take an example with a server and four computers, spaced
evenly every ten feet. You could get away with just forty feet of thin wire cable, as you need
ten feet from the server to the first machine, another ten feet from the first to the second,
and so on.


With twisted pair, let's assume that the hub is right next to the server, so that cable length can be ignored. You need ten feet of cable to the first computer, but twenty feet to the second, thirty feet to the third, and forty feet to the fourth. This means a total of 100 feet. The more computers you have, the greater the difference in cable length.
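
A quick calculation shows how the gap grows with the number of computers, keeping the ten-foot spacing of the example:

    # Worked arithmetic: total cable length for the ten-foot spacing example.
    def thin_wire(nodes: int, spacing: int = 10) -> int:
        """One segment between each pair of neighbors, starting at the server."""
        return nodes * spacing

    def twisted_pair(nodes: int, spacing: int = 10) -> int:
        """A separate run from the hub to each computer: 10 + 20 + ... feet."""
        return sum(spacing * i for i in range(1, nodes + 1))

    print(thin_wire(4), twisted_pair(4))     # 40 feet versus 100 feet
    print(thin_wire(10), twisted_pair(10))   # 100 feet versus 550 feet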


In addition, there is more work.
You cannot just move from computer to computer, adding cable as you go. You lay the cable from the
hub to the first computer, then go back to the hub. You lay the cable from the hub to the
second computer, then go back to the hub, and so forth.


On the other hand, twisted pair is a lot safer. As I mentioned, if the connection to one computer goes down, the rest can still work.

Well, enough of the theory. Reality today is a lot different than it was when both of these technologies were fairly young. Today, most installations have switched to twisted pair, and every new installation I know of does so as well. For the system administrator or network technician, any perceived disadvantage of twisted pair is easily countered by the advantages.

The problems that thin-wire cabling has, such as the "messy" physical connections at the back of the machines and the "loop" nature of the cabling, plus the slower speeds, make thin-wire far less attractive than it was five years ago. Because of the loop, a problem anywhere means problems for everyone. This goes beyond finding connection breaks. For example, if one of the NICs has a problem and is causing interference on the network, all nodes are affected. Added to this is the fact that it is often difficult to determine which NIC is causing the problem. Each node needs to be examined individually, which means much higher costs.

On the other hand, with twisted pair the cabling is easier to manage, problems are easier to troubleshoot, and the system as a whole is easier to administer. As an example, take the company where I currently work. We did some renovations on an existing building, but insisted on a raised (double) floor. Within the floor we laid cabling for both the telephone and the LAN. At several places in each office we installed a "well" with receptacles for the telephone and LAN. Each was then connected to a central location, which in turn provided the connection to other areas of the company. For our LAN, each node is connected to a 100 Mbit switch, which is then connected via optical fiber to other parts of the building and even to another office across town.

For the network technicians, this means all they need to do is plug one end of the cable into the back of the computer and the other into a plug in the nearest floor well. Therefore, they don’t need to worry about ensuring the loop is complete. Just plug and go.

As I mentioned, all of the cables lead to a central location, which is initially just a patch panel. From here the connection is made to the switch. Since it is the physical connection from the patch panel to a switch that determines which segment a computer is on, we can easily patch computers from one segment to another without re-wiring. This allows you to have completely separate networks within the same room. For example, my workstation needs access to customer machines, so I am on one segment. The test machines do not need that access, so they are on a different segment. However, not only can they be in the same room, they can even be plugged into connections in the same floor well.

Another nice thing about this is that the physical connections for both the phone and the LAN are the same. Although the two are physically separate within the patch cabinet, the wires are the same, the patch cables are the same, and so forth. Since the LAN and phone patch panels are physically separate within the cabinet, it is much easier for our network technicians.

Because of the explosion in computer use during the past few years, you will find many motherboards with a NIC built in. Needless to say, this will be twisted-pair and not thin-wire. These NICs, as well as new ones that you can buy separately, typically support auto-switching, duplex 10/100 Mbit operation.