Return to the lecture notes index
Lecture 2: Networking in a Day

Local Area Networks (LANs), Inter-networks, The Internet

Generally speaking, we call a small, homogeneous network a Local Area Network (LAN). When we connect these together, we call the result an inter-network. Colloquially, network usually refers to an inter-network, rather than a LAN. "The Internet" is the name for the global inter-network that we all know and love.

Local Area Networks (LANs)

Okay, we take a bunch of stations and connect them together to form a LAN. Maybe they are all connected to the same wire. Or, maybe they are all connected to the same network switch. Or, maybe they are all within earshot of each other over the air. How do they talk?

They broadcast. In other words, they shout out into the common channel. And, when any station does -- they all hear the broadcast. In the degenerate case of a point-to-point network, there are only two stations, so the only recipient is the intended recipient. But, in the more general case, every station hears the broadcast messages.

Each station has a station id or LAN address. When a message is sent, it includes both the source and destination addresses. Although all stations might well hear all messages -- they ignore all but those for which they are the intended recipient. Only those messages intended for a particular station get passed up by the network software to the application level. It is certainly possible to cheat and listen in on other stations' messages; this is called promiscuous mode. But, this is usually only done for diagnostic (or malicious) purposes. This is one reason that end-to-end encryption can be very important.

The Size of a LAN is Self-Limiting

The size of a LAN is self-limiting, both in terms of physical size and also in terms of the number of stations. The longer a wire, the more attenuation -- the signal is weakened as it travels farther and farther. The greater the distance through the air, the weaker the signal. In the end, measured in physical distance, there is only so far that a signal can travel.

And, beyond that, we've got other problems. The more stations we have sharing a broadcast channel, the less network time exists per station. With not-so-many stations, even those with just modest use, the network can become clogged with collisions, i.e. stations with overlapping transmissions garble each other's signals. Broadcast protocols only work with low contention and bursty loads -- they rely on relatively large periods of quiet time in which to resolve collisions, e.g. retransmit, resulting from the relatively short bursts. Broadcast networks can collapse with utilization as low as 30%.

Stretching LANs with Bridges

It is possible to stretch the size of a LAN by using a bridge. The basic idea is that we can take a bunch of separate physical LANs and connect them together to form a larger logical LAN. The bridges receive and retransmit signals from one network to another, correcting the signal strength, noise, and timing, as they do.

And, modern, active bridges go farther than that. As stations transmit, they make note of the originating LAN. Then, if they later hear a message destined for that station, they send it only to that one LAN, not to all of the connected LANs.

Basically, they maintain a hash table of <station, LAN> pairs. When they hear a message, they update the hash table. Entries in the hash table age out or succumb to cache pressure to make room for new entries. It is only when the bridge does not have an entry for a particular destination station that it needs to broadcast the message onto all connected LANs. In short, the bridges listen carefully and, in so doing, they are able to cache the location of stations and send messages only to the physical segments on which they live.
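To make the idea concrete, here is a minimal Python sketch of that learning logic. The class and method names are invented for illustration; a real bridge does this in hardware and also ages entries out, which this sketch omits:

```python
class LearningBridge:
    """Toy learning bridge: caches which port (segment) each station lives on."""

    def __init__(self, ports):
        self.ports = ports   # one entry per attached LAN segment, e.g. [0, 1, 2]
        self.table = {}      # the <station, LAN> hash table: address -> port

    def handle_frame(self, src, dst, in_port):
        # Learn: the source station must live on the segment this frame came from.
        self.table[src] = in_port
        # Forward: if the destination is known, send only to its segment;
        # otherwise, fall back to broadcasting onto every other segment.
        if dst in self.table:
            return [self.table[dst]]
        return [p for p in self.ports if p != in_port]
```

The first frame from A to B floods every other segment; once B replies, both stations are cached and subsequent traffic goes only to the segment where the destination lives.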

By carving up a big LAN into multiple segments, or building one up from multiple segments, contention is reduced. A message can only collide on either the sender's segment or the receiver's segment, or, depending on the bridge's design, within its own fabric. As long as the bridge knows where the receiver lives, the other segments of the network are unaffected and can support additional transmissions.

Inconveniences of Bridging

It is worth noting that when using bridges, moving a station from one segment to another can be a minor inconvenience. Until the old location ages out of the bridge's hash table, it will send messages destined for the moved host onto its old segment. It is also possible to break a LAN by faking a station on the wrong segment -- and thereby stealing its traffic. To get around these problems, most modern bridges are managed. The system administrator is able to lock stations onto certain segments, disable discovery mode, and delete stale entries.

It is also the case that complex geometries can be challenging for bridged networks. For example, it is often desirable to create multiple paths from segment-to-segment. This allows paths to exist, even if a bridge fails. But, such paths can create cycles. Bridges will resend messages until they appear on both sides of the same bridge. Depending on the dynamic behavior, this can cause messages to be transmitted to a wrong or redundant segment -- or not at all. To solve this problem, most modern bridges include configuration protocols. When enabled by the system administrator, they go into a configuration mode, learn each other's location, elect a root node, and form a spanning tree. This tree breaks the cycles, enabling good communication. In the event that a bridge fails, they can subsequently agree to form a different tree to get around the failure.

Limitations of Bridges

Bridges extend the size of LANs by a bit -- but they surely aren't the global answer. Like anything else, they've got limits. In the case of bridges, memory and failure are the limiting factors. There are just far too many stations on the planet for any one bridge to remember them all. It simply ain't possible. No way, no how.

And, even if magic were to happen to make this possible, it would be challenging to build a spanning tree the size of the globe. There would always be failure. And, they'd always be trying to learn a new tree.

Building up the Protocol Stack

We often talk about the architecture of the network protocol stack in terms of layers.

We're now about to enter the domain of the network layer. We're going to talk about how, instead of scaling up LANs, we can recognize them as separate networks and efficiently communicate messages from one to the next, until we get from the source network to the destination network.

Rethinking the Problem and Hierarchical addresses

So, it is pretty clear that we can't hope to keep track of every host on the Internet. We must, somehow, structure the problem and play with bigger blocks. The way people usually deal with large problems is to impose a hierarchy so that no single level is too large. Consider the organization of corporations, schools, books in a library, files on a computer, &c.

We're going to do the same thing with the Internet. Instead of viewing the entire Internet as one large, flat network, we are going to view it for what it is -- a collection of individual networks. Step one is going to be routing packets from one network to another network. Once there, we'll worry about getting them to the right machine.

To achieve this, we are going to create a new network address -- one that is structured so that it contains both a network number and a host number, rather than the flat address, a.k.a. station id, that we've discussed so far. The station id will still be used within a LAN -- but we'll use this new IP Address to get from one network to another.

IP Address Details

For the moment, let's focus on the venerable IPv4 protocol. Its addresses are 32 bits wide. The left-most bits represent the network number. They are the only bits required to route from network to network. The remaining, right-most, bits represent the host number and are only used once the packet makes its way to the destination network.

In designing IP addresses, they could have decided that, in all cases, the left n bits would be the network number. But, this would leave, in all cases, the right (32-n) bits to represent individual hosts. The problem is, of course, that not all networks are the same size. IBM's network is huge -- not so much for Greg's Garage. There aren't enough addresses to go around if we give Greg's Garage as many IP addresses as we give a multi-national technology conglomerate.

So, what the designers of IP did was to create a few different classes of network address: small networks, medium networks, and large networks. You'll notice that the way they divided up the bits results in very few, very large networks, a lot of mid-sized networks, and a huge number of small networks. And, intuitively, this makes sense -- there are more small organizations than IBMs.

But, given this organization, how can we look at an address to determine if it is class A, B, or C? We look at the first one, two, or three bits of the IP address. Based on these bits, we know whether the address is Class A, Class B, or Class C, and can interpret it correctly.
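As a sketch, the class test can be written as a few bit masks on the first octet (Classes D and E, used for multicast and experimentation, are lumped together here):

```python
def ip_class(addr):
    """Classify a dotted-quad IPv4 address as A, B, or C by its leading bits."""
    first_octet = int(addr.split(".")[0])
    if first_octet & 0b10000000 == 0:           # leading bit:  0...
        return "A"                              # 8 network bits, 24 host bits
    if first_octet & 0b11000000 == 0b10000000:  # leading bits: 10...
        return "B"                              # 16 network bits, 16 host bits
    if first_octet & 0b11100000 == 0b11000000:  # leading bits: 110...
        return "C"                              # 24 network bits, 8 host bits
    return "D/E"                                # multicast / experimental
```

For example, 10.1.2.3 is Class A, 128.2.0.1 is Class B, and 192.168.1.1 is Class C.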

Classless Inter-Domain Routing (CIDR)

Sadly, the routing scheme described above hit limits. In effect, all of the class A and B networks were assigned many ages ago. And, sadly, no one wanted class C addresses -- the networks were just too small to be manageable.

IPv6, a new generation of IP, solves this problem using 128-bit addresses. Unfortunately, the Internet is composed of and used by many networks managed by many organizations. It can't be switched instantly on a "Flag Day". So, instead, organizations are transitioning internally to IPv6, and converting to IPv4 as needed for external use, with the hope of eventually being able to drop IPv4 once it has fallen into disuse. We are a long way from that goal.

For today, we are still using a band-aid from the 1990s: Classless Inter-Domain Routing (CIDR). The basic idea is this. Routers will still route as described above. But, the backbone routers were upgraded to accept classless addresses. Instead of relying on the first few bits to describe the network/host division, CIDR communicates this directly, by writing the number of network bits after the address. For example, an address written as a.b.c.d/15 has 15 network bits and 17 host bits.
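A sketch of that arithmetic in Python -- the function name is invented, and the address below is just an example of a 15-network-bit split:

```python
def cidr_split(cidr):
    """Split a CIDR-notation address into its network and host numbers."""
    addr, prefix = cidr.split("/")
    prefix = int(prefix)
    a, b, c, d = (int(o) for o in addr.split("."))
    as_int = (a << 24) | (b << 16) | (c << 8) | d
    host_bits = 32 - prefix
    network = as_int >> host_bits            # the bits routers route on
    host = as_int & ((1 << host_bits) - 1)   # the bits the destination network uses
    return network, host

network, host = cidr_split("10.2.0.10/15")   # 15 network bits, 17 host bits
```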

The beauty of this is that it allows the combination of adjoining class-C networks into larger networks -- networks large enough to be useful to reasonably sized organizations. Some people call these supernets. Regardless, this system made a whole bunch of new, useful, networks available -- in tunable sizes. It worked so well that we are still using it -- and mostly not IPv6.

Assigning IP Addresses

In the case of Ethernet LAN addresses, the station IDs are, at least in theory, built into the network interfaces and assigned by the vendor. The first bits of the address identify the vendor, the rest are uniquifiers.

But, IP addresses, by virtue of their hierarchical nature, can't work this way. Instead, they need to be assigned according to the network in which they live. In the olden days, this was handled by the sysadmin. You'd simply trade her/him your MAC address for an IP address.

While this works, it ran into two problems. The first is that it made work for sysadmins -- work that could easily be automated. Secondly, since it made work, the assignments were semi-permanent -- they took effort to reassign. And, back in the day, this was okay. But, these days, we all have many devices that might, sometimes, be turned on and need an IP -- I've got 3 computers and a printer in my office, never mind my cellphone. But, typically, only one computer is in use and the printer uses Bluetooth.

It would be great if we had an automated way of getting IP addresses that could be used "for a while" on an as-needed basis. And, we do.

Dynamic Host Configuration Protocol (DHCP)

The Dynamic Host Configuration Protocol (DHCP) allows IP addresses to be assigned on a temporary or quasi-permanent basis. The idea is that we give a pool of IP addresses to a DHCP server, which then leases them to machines. The machines can renew those leases, in effect, permanently. But, if a lease isn't renewed, it will expire, and the IP address can be reclaimed into the assignable pool.

In the simplest configuration, there is one DHCP server per network. When a host boots and wants an IP address, it uses a LAN-level broadcast message to request one. The DHCP server grabs the MAC address, i.e. station ID, from this message and, as did the human sysadmin before, trades it for an IP address. It sends the IP address back via a LAN-level message going directly back to the requestor.

These addresses are leased. The initial request asks for a lease of a particular term. The reply grants the request for some length of time up to, but possibly shorter than, the requested amount. Before the lease expires, the client will automatically renew the request, so that it can keep the IP address.

The client can explicitly release an address, but this is not necessary. Unless the client renews the address, it will expire in a fixed amount of time. And, this is a very important aspect of reliable network and distributed systems: Never loan anything -- lease it. You can never count on a client to give something back. In this case, for example, the client might be turned off, or leave the area, before releasing an address -- invariably, but for the time-limited nature of a lease, the addresses would all end up lost to clients that failed to return them.
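Here's a toy sketch of that lease-don't-loan discipline. Everything here (class name, pool, timing) is illustrative and not the actual DHCP wire protocol, but it shows why expiry makes the system self-healing:

```python
class DhcpServer:
    """Toy DHCP server: leases addresses from a pool; expired leases are reclaimed."""

    def __init__(self, pool, lease_seconds):
        self.free = list(pool)
        self.leases = {}                 # MAC address -> (IP, expiry time)
        self.lease_seconds = lease_seconds

    def request(self, mac, now):
        self.expire(now)
        if mac in self.leases:           # renewal: same IP, new expiry
            ip, _ = self.leases[mac]
        elif self.free:
            ip = self.free.pop(0)        # new lease from the pool
        else:
            return None                  # pool exhausted
        self.leases[mac] = (ip, now + self.lease_seconds)
        return ip

    def expire(self, now):
        # Reclaim any lease not renewed in time -- no explicit release needed.
        for mac, (ip, expiry) in list(self.leases.items()):
            if expiry <= now:
                del self.leases[mac]
                self.free.append(ip)
```

With a one-address pool and a 60-second lease, a client that renews keeps its address forever; a client that disappears silently gives its address back after, at most, 60 seconds.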

But, this system runs into one complication, and an interesting one. Broadcast messages are LAN-level messages -- they are not routed from network to network. Can you imagine an Internet-wide broadcast? Slam! Bye-bye Internet. This means that, in a large enterprise, unless we develop another solution, we'll need to have one DHCP server per network segment -- not one for the entire enterprise.

This isn't so much fun. It means that we'll need to maintain a bunch of DHCP servers. And, more importantly, it means that we'll need to partition our IP addresses statically among them. Sadly, some servers might run out of IP addresses, while others have plenty to spare. It would be much less work, and much more efficient, if we could have one server (ignoring redundancy) for the entire enterprise.

DHCP relays allow us to do exactly this. We put a DHCP relay onto each individual LAN. We set up one DHCP server for the enterprise. When a relay hears a request, it sends it via IP to the enterprise-level DHCP server, which sends the reply back to the relay for delivery to the requestor. In this way, DHCP relays enable a DHCP server to serve multiple network segments, without an inter-network wide broadcast.

It is worth noting that DHCP in this mode is great for clients -- but not for servers. Servers need to have well-known addresses so that clients can find them. Clients, on the other hand, can have different addresses each session, because they communicate their current address with the request, as the sender's IP address. But, no worries: DHCP servers can be programmed to consistently assign IP addresses to the same MAC address, where appropriate.

How Internet Routing Works

At this point, we are viewing our internetwork as what it is -- a collection of networks tied together. Tying these networks together are routers. Ultimately, when a message is sent from one host to another, one of two things is true:

If it is destined for a host on the same network, there is no routing. The host, itself, looks at the destination IP address, notices that it is on the same network, and simply sends the message to the destination using the lower-level protocol. But, if it is destined for a different network, it sends it to the router instead.

The router is a device that ties together several networks. It gets a message because the lower-level MAC address indicates it as the destination. But, it knows that it isn't the real destination, because the higher-level IP address indicates another recipient.

It masks off the host bits of the IP address, so it sees only the network number. Recall that it knows how many bits to mask because it is either a CIDR address, which includes the number of network bits, or a classed address, in which case the first few bits indicate the class of the address and the corresponding division between host and network fields.

Based on this, it looks in a table and forwards the message onto one of the connected networks. If the destination lives on that connected network, it gets sent directly there using the lower-level protocol. Otherwise, the lower-level protocol is still used -- but to send the message to another router, as described above.
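Here is a sketch of that masking and table lookup in Python. It uses longest-prefix matching, which is what real routers do when both a big and a small network match; the table contents and names are made-up examples:

```python
def to_int(addr):
    """Convert a dotted-quad address to a 32-bit integer."""
    a, b, c, d = (int(o) for o in addr.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def lookup(forwarding_table, dest_ip):
    """Find the output port for a destination.
    The table maps (network address, prefix length) -> output port."""
    dest = to_int(dest_ip)
    best = None
    for (network, prefix), port in forwarding_table.items():
        mask = ((1 << prefix) - 1) << (32 - prefix)   # mask off the host bits
        if dest & mask == to_int(network):
            if best is None or prefix > best[0]:
                best = (prefix, port)                 # prefer the longest match
    return best[1] if best else "default"

table = {("10.0.0.0", 8): 1, ("10.2.0.0", 15): 2}     # illustrative entries
```

A packet for 10.2.5.9 matches both entries, but the more specific /15 wins, so it goes out port 2; a packet for 10.9.0.1 matches only the /8 and goes out port 1.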

Routing Protocols

It is important to note that these routers might be connected to many networks. It is even more important to note that these networks might form a graph, with multiple paths between destinations. And, it is yet more important to realize that there might be many, many hops from source to destination.

Given this, how do the routers know which way to send a packet so that it doesn't get lost or go around in circles? The answer is that the routers talk, and, based on that conversation, they build up two tables: one that describes the network, as a whole, known as the routing table and one that describes exactly what the router should do, known as the forwarding table.

We're going to leave the details of how these tables get built to 15-441. Especially since there are different protocols that get the job done and different strategies -- and there are tons of interesting and subtle things about them.

Sub Networks

Back in the day, large organizations used to carve up their networks into subnets. The basic idea is that, to get a packet from one network to another network, only the network portion of the address is needed. Once there, the destination network can really do whatever it would like with the host bits. It is, in fact, the case that they were originally intended to be used in a flat way to represent hosts. But, look, within an individual network, the administrators can do whatever they'd like and not break anyone else. So, they can actually take the left side of the -host- bits and interpret it as a subnetwork number, using only the right portion as the host number. By doing this, one can carve a very large network into small, manageable pieces, reduce collision domains, etc.

But, we've got the same problem we had before -- how many bits are for the subnet number and how many are for the host id? They didn't create subnet classes, but they also weren't quite as clean as CIDR would be many years later. Instead of a simple number, they used a subnet mask: 1s in the bits that represent the subnet number and 0s in the bits that represent the host number. When this mask is logically ANDed with the IP address, it leaves only the subnet number. Technically speaking, the mask need not be dense (e.g. 11110000 rather than something sparse like 10101010), but, in practice, it needs to be dense. Plenty of routers would break if presented with a sparse mask.

Let's make sure we see how this works. Here's an example from Wikipedia:

  IP address       11000000.10101000.00000101.00001010
  Subnet Mask      11111111.11111111.11111111.00000000
  Network Portion  11000000.10101000.00000101.00000000
  Host Portion     00000000.00000000.00000000.00001010

So, routers -within- a particular network are configured to know the network number -and- network mask associated with it. When presented with an address, they perform a logical AND with the network mask to find the subnet number. Routers know which mask to use, because they know the subnet number and mask associated with each of their directly connected legs.
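We can check the example above with Python's standard ipaddress module -- the binary address shown is 192.168.5.10 and the mask is 255.255.255.0:

```python
import ipaddress

# The example above in dotted-decimal: 192.168.5.10 with mask 255.255.255.0.
addr = int(ipaddress.ip_address("192.168.5.10"))
mask = int(ipaddress.ip_address("255.255.255.0"))

network = ipaddress.ip_address(addr & mask)             # the logical AND
host = ipaddress.ip_address(addr & ~mask & 0xFFFFFFFF)  # the leftover bits

print(network)   # 192.168.5.0 -- the network portion
print(host)      # 0.0.0.10    -- the host portion
```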

Because the trade-offs and scale of inter-network (Internet-wide) routing and intranetwork (within an organization) routing are different, the routers use different protocols to exchange routing information and develop the forwarding tables.

Fragmentation and Subnets

Back in the day, organizations loved subnets. Everyone got one: departments large and small. By breaking things into small subnets, which confined traffic to tightly-knit groups, networks became very efficient. Traffic was mostly confined to the segment that contained the sender and the receiver -- and away from unrelated users. This dramatically reduced contention and collisions.

But, by breaking a large network into subnets, we have the same problem we had on an Internet-wide scale: fragmentation of the address space. As organizations grew, some subnets ran out of IP addresses, while others had extras. Organizations were forced to reallocate subnets, and reassign IP addresses to machines. This was inconvenient and time consuming.

Eventually, this was relieved by better network switches and bridges. Quite simply, they got better at remembering more addresses and better at talking to each other. And, they became much more managed -- system administrators were able to remotely program and configure them to describe which MAC addresses belonged where.

The bottom line is that better bridges were able to confine traffic much better, reducing the marginal benefit of subnets -- while, at the same time, the cost of subnets went up as fragmentation became an increasing issue.

Organizations began tearing out subnets and buying better and better switches, flattening out their networks. Maybe it can't be done on a global scale -- but it can be done on, oh, a university-wide scale. And, it works especially well, since organizations, internally, can make coordinated purchasing decisions to ensure hardware compatibility -- something that can't happen on a global scale.

Taking a Pass on Routing

We talked about many of the details associated with the IP protocol and how packets find their way across a network. We talked about the role of routers. We talked about the routing table, the table held by each router that describes the global state of the network. And, we talked about the forwarding table, the table derived from the routing table, also present on each router, that contains the simple mapping from destination network to port number. And, we said that the routers talk to each other in order to collect routing information and then build their own forwarding tables.

But, we did not talk about the details of this router-router conversation. We're going to leave that for a networks class or a later discussion about managing distributed state with lossy communication that is not atomic. It is better approached as a distributed systems problem, once we've gotten some depth, than as a shallowly understood network protocol.

Network Layers, A Reference Model

As we work our way up from network hardware to the application programmer, we are beginning to see the overall organization of a network. This architecture is sometimes described using the following model:

Application Layer: The details of the messages and structures used by a particular application
Transport Layer: Establishment of endpoints and other services commonly used by programmers
Network Layer: Movement of packets from network to network across an inter-network
Link Layer: Management of stations sharing the same channel
Physical layer: Voltages, connector shapes, power levels, light colors, &c

Thus far, we've worked our way up, talking a little about the physical layer, which is really the domain of various engineering disciplines, and a lot about the link and network layers. Today, we are going to begin our discussion of the transport layer.

The Transport Layer

The transport layer establishes an end-to-end abstraction that is useful to the programmer. Part of that is that it needs to hide the hop-by-hop nature of the network layer's routing process. And, part of that is that it needs to establish program-to-program communication, since multiple programs might be running on the same host -- and the network layer just goes hop-to-hop.

In addition to these basic requirements, it must somehow answer the question, "What is a message, and how do we know when we have one?". For example, we often classify transport layers as being either:

Stream-oriented: Data is delivered as a continuous stream of bytes, with no inherent message boundaries
Message-oriented: Data is delivered in discrete chunks (datagrams), each with a defined beginning and end

Protocols are often also classified in terms of their quality of service:

Reliable: The protocol keeps retransmitting until data is acknowledged, so delivery is all-but-guaranteed
Unreliable: The protocol makes a best effort to deliver each message, but makes no promise that it arrives

As we'll talk about soon, unreliable protocols may, or may not, be session-oriented. A session-oriented protocol establishes a relationship between the sender and receiver before any data is exchanged. This session remains in place until it is closed. So, in some sense, the recipient knows to be waiting for communication. Unreliable protocols need not be session-oriented. But, reliable protocols need to be session-oriented so that the sender and receiver can coordinate what has, and what has not, been successfully received.

In the context of Internet protocols, the TCP/IP protocol suite, there are two general-purpose transport protocols:

The User Datagram Protocol (UDP): an unreliable, message-oriented protocol
The Transmission Control Protocol (TCP): a reliable, stream-oriented protocol

User Datagram protocol (UDP)

The biggest value added by UDP to IP is that it adds port numbers. These are critical, because they allow messages to be communicated from one application, or part of an application, to another. By itself, IP just moves messages from host to host. But, from there, what to do with them? Is the packet for an IM client? A Web tab? A music stream? A backup?
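A minimal demonstration using Python's socket API: two UDP sockets on the same machine, where the port number (9999 here is an arbitrary choice) is what steers the datagram to the right program:

```python
import socket

# The receiver claims UDP port 9999 on this host. Any program could bind a
# different port on the same IP address -- that's the whole point of ports.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 9999))

# The sender addresses the datagram to (host, port), not just to the host.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", 9999))

data, addr = receiver.recvfrom(1024)   # addr is the sender's (IP, port)
print(data)                            # b'hello'
receiver.close()
sender.close()
```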

Simple Reliability

It is easy to see how we could create a reliable protocol above UDP. We add a sequence number to each message. We send a message and wait for an ACKnowledgement. We know the maximum round-trip time, and wait at least that long. If we don't get an ACK within that time, we assume that the message got lost en route to the recipient -- even though maybe only the ACK got lost. We resend. When the receiver gets it, it'll ACK, possibly again. There won't be any confusion, even if it is received twice, because the sequence number will enable the duplicate to be detected and discarded. The same is true of a duplicate ACK. If we send more than one copy, and more than one ACK eventually makes its way to the sender, the sender just ignores the duplicates -- it ignores any ACK that is not associated with the present message number.

To this end, it is important to note that only one message is in flight at a time. The time between the sending of a message and when its ACK is received is dead air. For this reason, this type of reliable protocol is often known as a stop-and-wait protocol.
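Here's a toy stop-and-wait sender over a deliberately lossy channel. The channel function stands in for the network plus the receiver, and all the names are invented for illustration; real protocols use timers rather than a retry count:

```python
import random

def stop_and_wait_send(messages, channel, max_tries=100):
    """Send messages one at a time, resending each until it is ACKed."""
    for seq, payload in enumerate(messages):
        for _ in range(max_tries):        # resend on timeout (modeled as None)
            ack = channel(seq, payload)
            if ack == seq:                # ignore any ACK for another message
                break
        else:
            raise TimeoutError("gave up on message %d" % seq)

received = []

def lossy_channel(seq, payload):
    """Drops 30% of transmissions; discards duplicates by sequence number."""
    if random.random() < 0.3:
        return None                       # the message (or its ACK) was lost
    if not received or received[-1][0] != seq:
        received.append((seq, payload))   # deliver, unless it's a duplicate
    return seq                            # the ACK carries the sequence number

stop_and_wait_send(["a", "b", "c"], lossy_channel)
print(received)   # [(0, 'a'), (1, 'b'), (2, 'c')]
```

Despite the losses and duplicates, the receiver ends up with exactly one copy of each message, in order.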

Reliable vs Unreliable

A reliable medium certainly beats one that is not. But, as we now know, a reliable protocol is really just a diligent protocol. It tries, and tries, and tries some more.

But, this is not always desirable. In some cases, a late packet is worthless -- and resending it just wastes network time. This is the case for many types of real-time communication, such as live video or audio, e.g. telephone calls or web cams.

What does one do with a 10-minute-old syllable? If we delay the subsequent syllables by 10 minutes, the call is worthless. And, if we charge forward, we can't exactly introduce a stray word later. It is best to just let it go and hear a brief pause or pop. The same is true of video. We'd rather see a brief freeze and a jump in one part of the frame than have the whole thing delayed.

The Transmission Control Protocol (TCP)

The Transmission Control Protocol (TCP) is a reliable, stream-oriented protocol. The goal of TCP is to establish a reliable connection, much like the ACKed UDP we described above, but without having to stop and wait for each reply.

The basic idea is that we want to keep sending instead of waiting. To do this, we need to make sure that, even though we've sent more messages (segments), we hold on to the old ones until they are ACKed. Since we know the worst-case Round Trip Time (RTT) and the maximum speed at which we can transmit (bits/second), we can figure out how large a buffer we'll need to do this. For this purpose, we use a circular buffer called a window.

We transmit packets from the window, until we hit the end of the window, at which time we have to wait. But, as messages (segments) are ACKed, we check them off. As soon as we check off the head of the window, we can slide it. Basically, we can rotate the head to the back, so we can send another message.

On the receiver side, there is also a window of the same size. It adds the messages (segments) to it as they arrive. Since data needs to be given to the application in order, it is held in the buffer until the head of the buffer has arrived. Once that happens, the one or more messages (segments) at the head can be copied up to the application and the buffer can be rotated allowing for more data to be received.

If messages are lost for any reason, whether, for example, because the network dropped them or because they couldn't be buffered, they can be retransmitted from the sender's window.
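Here is a sketch of the sender-side bookkeeping, counting the window in segments rather than bytes to keep the idea visible (real TCP windows are byte-counted, and the names here are invented):

```python
from collections import deque

class SlidingWindowSender:
    """Toy sliding-window sender: holds unACKed segments, oldest at the head."""

    def __init__(self, size):
        self.size = size
        self.window = deque()        # sequence numbers of unACKed segments

    def can_send(self):
        return len(self.window) < self.size

    def send(self, seq):
        assert self.can_send(), "window full: the sender must wait"
        self.window.append(seq)      # keep a copy until it is ACKed

    def ack(self, seq):
        # A cumulative ACK: everything up to and including seq has arrived,
        # so the head of the window slides forward past it.
        while self.window and self.window[0] <= seq:
            self.window.popleft()
```

With a window of 3, the sender can put segments 0, 1, and 2 in flight before waiting; once segments 0 and 1 are ACKed, the window slides and two more segments can go.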

Different options within TCP can fine-tune the way acknowledgements work. They can also intentionally shrink the window to allow some network time to go unused. The purpose of this is to intentionally slow down transmission to be cooperative and help to reduce congestion.