Errata

  • Jan 12: Changed the name of the server from “http-server” to “httpd” to make it the same as what the provided starter code generates

  • Jan 19: Added a subsection about the autograder

  • Jan 24: Removed the restriction that you have to use pthreads.

  • Jan 25: Added a command-line option for extension 3

  • Jan 25: You can assume that the maximum HTTP request size is 8KB (2^13).

  • Jan 28: The Content-Type for .jpg files should be “image/jpeg”, for .png files it should be “image/png”, and for html it should be “text/html”

  • Jan 28: The format for the Last-Modified header is: Last-Modified: <day-name>, <day> <month> <year> <hour>:<minute>:<second> GMT

  • Feb 3: Make sure your server works on ieng6! We’re going to be building it there and testing it on the ieng6 machines. We won’t be able to test the server on other machines (personal servers, laptops, etc.).

Overview

In this project, you are going to build a simple webserver that implements a subset of the HTTP/1.1 protocol specification (as defined in this document). On top of that base, you can optionally implement some extensions which add new features to your server for a potentially higher maximum grade.

Learning objectives

The goal of this project is to build a simple web server that can receive requests, and send back responses, to web clients. During this project you will:

  • Correctly implement a network protocol from a written specification
  • Master the UNIX sockets API
  • Develop a methodology for testing protocol correctness and performance
  • Use git and GitHub.com for managing source code

Logistics

Triton-HTTP/1.1 Specification for this project

In this project, we are going to be implementing a subset of the full HTTP/1.1 specification (which is many hundreds of pages long when you consider all the extensions and supplemental documents!). Because our implementation differs slightly from the official HTTP spec, we’re calling it “TritonHTTP.” Portions of this content are courtesy of [1], used with permission from the author.

Client/server design

TritonHTTP is a client-server protocol that is layered on top of a reliable stream-oriented transport protocol (typically TCP). Clients issue request messages to the server, and servers reply with response messages. In its most basic form, a single HTTP-level request-reply exchange happens over a single, dedicated TCP connection. The client first connects to the server, sends the HTTP request message, the server replies with an HTTP response, and then the server closes the connection:

Single request protocol exchange

The HTTP protocol is stateless, meaning that each response is provided without reference to a client’s previous interactions with that server [2].

It is possible to exchange more than one HTTP request/response pair on a single TCP connection through a mechanism called “Pipelining.” Adding pipelining support to your server is one of the optional extensions described later in this document.

HTTP messages

HTTP request and response messages are in plain-text format, consisting of a header section and an optional body section. The header section is separated from the body section by a blank line. The header consists of an initial line (which is different between requests and responses), followed by zero or more key-value pairs. Every line is terminated by a CRLF (carriage return followed by a line feed). Thus a message looks like:

<initial line, differs between requests and responses>[CRLF]
Key1: Value1[CRLF]
Key2: Value2[CRLF]
Key3: Value3[CRLF]
[CRLF]
<optional body...>

Messages without a body section still have the trailing CRLF (a blank line) present so that the server knows that it should not expect additional headers. You can assume that HTTP requests are no larger than 8KB (2^13 bytes).
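
For example, a complete, minimal TritonHTTP GET request, written out as a C string literal (the host name is illustrative), looks like this; note the blank line (final CRLF pair) that terminates the header section:

/* A minimal, complete TritonHTTP request as raw bytes. */
const char *request =
    "GET /index.html HTTP/1.1\r\n"
    "Host: localhost:8080\r\n"
    "\r\n";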

Request Initial Line

The initial line of an HTTP request header has three components:

  • The method (in this project that will be GET)
  • The URI
  • The highest HTTP version that the client supports

The method field indicates what kind of request the client is issuing. Most common is a GET request, which indicates that the client wants to download the content indicated by the URI (described next).

The URI is a pointer to the resource that the client is interested in. Examples include /images/myimg.jpg and /classes/fall/cs101/index.html.

The version field takes the form HTTP/x.y, where x.y is the highest version that the client supports. For this course we’ll always use 1.1, so this value should be HTTP/1.1.

The fully formed initial request line would thus look something like:

GET /images/myimg.jpg HTTP/1.1
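
A minimal sketch of parsing this initial line with sscanf() is shown below. The function name, buffer sizes, and error convention are illustrative, and a real server must reject anything that doesn’t match this shape with a 400 error:

#include <stdio.h>
#include <string.h>

/* Splits "GET /path HTTP/1.1" into its three components.
   Caller provides buffers of at least 8, 256, and 16 bytes.
   Returns 0 on success, -1 if the line is malformed (a 400 case). */
int parse_initial_line(const char *line, char *method, char *uri, char *version)
{
    if (sscanf(line, "%7s %255s %15s", method, uri, version) != 3)
        return -1;
    if (strcmp(version, "HTTP/1.1") != 0)
        return -1;  /* this course always uses HTTP/1.1 */
    return 0;
}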

Response Initial Line

The initial line of an HTTP response also has three components, which are slightly different than those in the request line:

  • The highest HTTP version that the server supports
  • A three-digit numeric code indicating the status of the request (e.g., whether it succeeded or failed, including more fine-grained information about how to interpret this response)
  • A human-friendly text description of the return code
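
For example, the initial line of a successful response would be:

HTTP/1.1 200 OK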

HTTP Response code semantics

The first digit of the response code indicates the type of response. These types include:

  • 1xx is informational
  • 2xx is a success type
  • 3xx means that the content the client is looking for is located somewhere else
  • 4xx means that the client’s request had some kind of error in it
  • 5xx means that the server encountered an error while trying to serve the client’s request

For this project, depending on which extensions you implement, you’ll need to support:

  • 200 OK: The request was successful
  • 400 Client Error: The client sent a malformed or invalid request that the server doesn’t understand
  • 403 Forbidden: The request was not served because the client wasn’t allowed to access the requested content
  • 404 Not Found: The requested content wasn’t there
  • 500 Server Error: An error occurred internal to the server. In this project we do not implement plugins, server-side scripting, or other extensions, so you shouldn’t expect to use this return code

HTTP header key-value pairs

After the initial line, an HTTP message can optionally contain zero or more key-value pairs that add additional information about the request or response (called “HTTP headers”). Some keys are specific to request messages, some are specific to response messages, and some can be used with both. These key-value pairs follow the standard outlined in RFC 822.

For this assignment, you must implement or support the following HTTP headers:

  • Request headers:
    • Host (required, 400 client error if not present)
    • You should gracefully handle any other valid request headers that the client sends. Any request headers not in the proper form (e.g., missing a colon) should signal a 400 error.
  • Response headers:
    • Server (required)
    • Last-Modified (required only if return type is 200)
    • Content-Type (required if return type is 200; if you create a custom error page, you can set this to ‘text/html’)
    • Content-Length (required if return type is 200; if you create a custom error page, you can set this to the length of that page)

A custom error page is simply a human-friendly message explaining what went wrong in the case of an error. Custom error pages are optional; however, if you use one, the Content-Type and Content-Length headers have to be set correctly.
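
Putting the required headers together, the header section of a well-formed 200 response for a jpg file might look like the following (the server name, date, and length are illustrative; every line, including the blank one, is CRLF-terminated):

HTTP/1.1 200 OK
Server: TritonHTTP/1.1
Last-Modified: Tue, 28 Jan 2020 18:32:09 GMT
Content-Type: image/jpeg
Content-Length: 12582

<body: the 12582 bytes of the jpg file>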

Project details

Basic web server functionality

At a high level, a web server listens for connections on a socket (bound to a specific address and port on a host machine). Clients connect to this socket and use the above-specified HTTP protocol to retrieve files from the server. For this project, your server will need to be able to serve out HTML files as well as images in jpg and png formats. You do not need to support server-side dynamic pages, Node.js, server-side CGI, etc.

Mapping relative URIs to absolute file paths

Clients make requests to files using a Uniform Resource Identifier, such as /images/crypto/enigma.jpg. One of the key things to keep in mind in building your web server is that the server must translate that relative URI into an absolute filename on the local filesystem. For example, you might decide to keep all the files for your server in ~aturing/cse124/server/www-files/, which we call the document root. When your server gets a request for the above-mentioned enigma.jpg file, it will prepend the document root to the specified file to get an absolute file name of ~aturing/cse124/server/www-files/images/crypto/enigma.jpg. You need to ensure that malformed or malicious URIs cannot “escape” your document root to access other files. For example, if a client submits the URI /images/../../../.ssh/id_dsa, they should not be able to download the ~aturing/.ssh/id_dsa file. If a client uses one or more .. directories in such a way that the server would “escape” the document root, you should return a 404 Not Found error back to the client.
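
One defensive approach, sketched below, is to canonicalize the joined path with realpath() and then verify that the result still lies under the document root. realpath() fails for nonexistent files, which conveniently maps to the 404 case as well. The function name is illustrative, and doc_root is assumed to be an already-canonicalized absolute path:

#include <limits.h>
#include <stdlib.h>
#include <string.h>

/* Returns 1 if the requested path resolves to a file inside doc_root,
   0 otherwise (nonexistent paths and ".." escapes both end up here). */
int path_is_safe(const char *doc_root, const char *joined_path)
{
    char resolved[PATH_MAX];
    if (realpath(joined_path, resolved) == NULL)
        return 0;
    size_t n = strlen(doc_root);
    return strncmp(resolved, doc_root, n) == 0 &&
           (resolved[n] == '/' || resolved[n] == '\0');
}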

Filesystem permissions

After your server maps the client’s request into a file in the document root, you must check to see whether that file actually exists and whether the proper permissions are set on the file (the file has to be “world” readable). If the file does not exist, a file not found error (error code 404) is returned. If the file is present but the proper permissions are not set, a permission denied error (error code 403) is returned. When a 403 error is returned, no information about the real file should be returned in the headers or body section of the reply (e.g., the file size). Otherwise, a 200 OK message is returned along with the contents of the file.
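
A sketch of this check using stat(), assuming the path has already been validated against the document root (the function name is illustrative):

#include <sys/stat.h>

/* Maps a file path to a TritonHTTP status code: 404 if the file is
   missing (or not a regular file), 403 if it is not world-readable,
   200 otherwise. */
int status_for_file(const char *path)
{
    struct stat st;
    if (stat(path, &st) != 0 || !S_ISREG(st.st_mode))
        return 404;
    if (!(st.st_mode & S_IROTH))
        return 403;
    return 200;
}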

You should also note that web servers translate GET / to GET /index.html. That is, index.html is assumed to be the filename if no explicit filename is present. That is why the two URLs http://www.cs.ucsd.edu and http://www.cs.ucsd.edu/index.html return equivalent results. You will need to support this mapping in your server.

When you type a URL into a web browser, the browser will retrieve the contents of the file. If the file is of type text/html, it will parse the html for embedded links (such as images) and then make separate connections to the web server to retrieve the embedded files. If a web page contains 4 images, a total of five separate connections will be made to the web server to retrieve the html and the four image files. The client handles this; your web server only needs to return one response at a time for basic functionality. One of the optional extensions called “HTTP/1.1 Pipelining” enables the client to send multiple requests on a single connection, and the server will respond with multiple responses on that same connection. Pipelining makes more efficient use of the network, as we’ll discuss further in class.

Program structure

At a high level, your program will be structured as follows.

Initialize

Take a port number and document root as command-line arguments.

$ ./httpd 8080 /var/lib/www/htdocs

This tells the server to listen on port 8080, and to serve out documents from the /var/lib/www/htdocs directory. Note that the document root and port number need to be parameters that are passed into your program; do not hard code file paths or ports, as we will be testing your code against our own document root. Also do not assume that the files to serve out are in the same directory as the web server.

Setup server socket and threading

Create a TCP server socket, and arrange so that a thread is spawned (or a thread in a thread pool is retrieved) when a new connection comes in. An optional extension (event-driven design) does not rely on a dedicated thread to handle each incoming connection.
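
One way to set this up, sketched with POSIX threads and with error handling omitted; handle_client is an assumed routine that reads requests from the socket and writes responses:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <stdlib.h>
#include <sys/socket.h>

void *handle_client(void *arg);  /* assumed: takes a malloc'd int fd */

void serve(int port)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    int yes = 1;
    setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 128);

    for (;;) {
        int *client = malloc(sizeof(int));
        *client = accept(srv, NULL, NULL);
        pthread_t tid;
        pthread_create(&tid, NULL, handle_client, client);
        pthread_detach(tid);     /* no join; thread cleans up after itself */
    }
}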

Separating framing from parsing

As we will discuss in class, two key operations that must be performed to build your web server are (1) separating out application-level messages by determining when one message starts and another ends (framing), and (2) processing individual messages to understand their meaning (parsing). In your project, you must separate these steps as follows.

Along the request path (from the client to the server), you will write code that reads from the client socket and produces HTTPMessage structs (C) or objects (C++). You will then have parsing code that turns an HTTPMessage into an HTTPRequest struct or object.

On the response path (from the server back to the client), the reverse will happen: your code will initialize and fill in an HTTPResponse struct/object, and then your framing code will convert that into an HTTPMessage struct/object, which you will then send over the socket back to the client. Your code must separate parsing and framing into separate steps to receive credit.
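
One possible shape for these types in C is sketched below; the field names and the fixed header limit are illustrative:

#define MAX_HEADERS 32

struct Header { char *key; char *value; };

/* Produced by the framing code: one CRLF-delimited message, split into
   its initial line and raw key-value pairs, with no interpretation. */
struct HTTPMessage {
    char *initial_line;
    struct Header headers[MAX_HEADERS];
    int header_count;
};

/* Produced by the parsing code from an HTTPMessage. */
struct HTTPRequest {
    char *method;    /* e.g., "GET" */
    char *uri;       /* e.g., "/images/myimg.jpg" */
    char *version;   /* e.g., "HTTP/1.1" */
    struct Header headers[MAX_HEADERS];
    int header_count;
};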

Implementation

You may choose C or C++ to build your web server, but you must do it in a Unix-like environment with the sockets API we’ve been using in class (i.e., no HTTP libraries). You must keep a Makefile so that we can build your code by simply typing “make”. Make sure that your code builds from a fresh clone of your repository. It should be possible for us to perform the following commands to run your server:

$ git clone git@github.com:...
$ cd <your_repo_directory>
$ make
$ ./httpd [port] [doc_root]

Grading

The points on this project are assigned as follows:

  • 80% is based on the correctness of your web server and extension(s) code
  • 12% is based on the quality of your code
  • 8% is based on the comprehensiveness of your testing code

Correctness (quantitative)

Grade                                 Equiv. letter  Requirements
High pass + picture on wall of fame   A+             Same as Low pass, with all four extensions
High pass                             A              Same as Low pass, plus three of the optional extensions
Pass                                  B              Same as Low pass, plus two of the optional extensions
Low pass                              C              Basic functionality works on all provided test cases and most of the additional test cases
High fail                             D              Basic functionality fails on some of the provided test cases
Low fail                              F              Basic functionality fails on most of the provided test cases, or no submission given

Autograder

On ieng6 we’ve provided an autograding program that we’ll use to evaluate your project. We have populated it with a few of the basic test cases so that you can ensure that you’re on the right track. There are some additional test cases we have not provided to you that we’ll use in testing the final project.

In the directory ~cs124w, you will find:

.
|-- public
|   `-- project1
|       |-- bin
|       |   `-- cse124HttpdTester
|       `-- htdocs
|           |-- LICENSE
|           |-- index.html
|           |-- kitten1.jpg
|           `-- subdir1
|               `-- wolves.jpg

bin/cse124HttpdTester is the autograder; run it against the provided htdocs folder (i.e., with htdocs as your server’s document root).

If your code passes all the basic, provided test cases, then we will grade the extensions. Note that if you fail a significant number of the additional test cases, we may reduce your project grade by a letter grade. The extensions will only be graded if the baseline functionality works for all provided test cases (so make sure that your webserver passes all the provided tests before working on the extensions!).

Quality (qualitative)

We will evaluate code quality using the following rubric.

Readability (4%)
  • Does not meet expectations (0): Code lacks structure; a reader must spend considerable time and effort to understand where functionality is and how parts of the code relate to each other.
  • Meets expectations weakly (2): Code has structure, variables and functions are largely descriptive, and the layout allows a reader to gain an understanding, after some effort, of where functionality is and how different portions of the code relate to each other.
  • Meets expectations strongly (4): Code is well structured and indented, variables and functions are descriptive, and the layout is highly conducive to quickly understanding where functionality is and how different portions of the code relate to each other.
  • Outstanding (4): Code serves as a reference example for others.

Modularity (4%)
  • Does not meet expectations (0): The codebase lacks modularity; the implementations of different tasks and functions are intermixed.
  • Meets expectations weakly (2): Code is divided into well-formed, separate modules.
  • Meets expectations strongly (4): Code is divided into modules that can be developed independently of each other. A reader can gain insight into the design and function of the program through its structure.
  • Outstanding (4): Code is divided into loosely coupled modules that can evolve separately, can be independently tested and reasoned about, and can be developed independently of each other. These modules also serve as implicit documentation of how the program works and shed light on the overall design of the system.

Efficiency (4%)
  • Does not meet expectations (0): Algorithms, data structures, and code structure are designed and implemented in a way that uses excessive resources and is subject to severe performance penalties.
  • Meets expectations weakly (2): Algorithms, data structures, and code structure are designed and implemented to meet specifications in a way that does not needlessly or exceptionally use up resources or hinder performance.
  • Meets expectations strongly (4): Algorithms, data structures, and code structure are designed and implemented to meet specifications in a way that largely balances readability, modularity, testability, evolvability, and performance.
  • Outstanding (4): Algorithms, data structures, and code structure are designed and implemented to meet all specifications in a way that ideally balances readability, modularity, testability, evolvability, and performance.

Extensions

In addition to the basic web server functionality, you can also attempt one or more of the following extensions to the server.

Extension 1: HTTP/1.1 pipelining

Multiple request protocol exchange

Setting up and tearing down TCP connections reduces overall network throughput and efficiency, and so HTTP has a mechanism whereby a client can reuse a TCP connection to a given server. The idea is that the client opens a TCP connection to the server, issues an HTTP request, gets an HTTP reply, and then issues another HTTP request on the already open outbound part of the connection. The server replies with the response, and this can continue through multiple request-reply interactions. The client signals the last request by setting a “Connection: close” header. The server indicates that it will not handle additional requests by setting the “Connection: close” header in the response. Note that the client can issue more than one HTTP request without necessarily waiting for full HTTP replies to be returned.

To support clients that do not properly set the “Connection: close” header, the server must implement a timeout mechanism to know when it should close the connection (otherwise it might just wait forever). For this project, you should set a server timeout of 5 seconds, meaning that if the server doesn’t receive a complete HTTP request from the client after 5 seconds, it closes the connection.
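
One simple way to approximate this, sketched below, is a receive timeout on the client socket. Note that SO_RCVTIMEO times out each individual recv() call after 5 idle seconds; tracking an absolute per-request deadline with select() or poll() matches the “complete request within 5 seconds” wording more precisely:

#include <sys/socket.h>
#include <sys/time.h>

/* Returns 0 on success. After this, recv() on client_fd fails with
   EAGAIN/EWOULDBLOCK once 5 seconds pass with no incoming data. */
int set_request_timeout(int client_fd)
{
    struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };
    return setsockopt(client_fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
}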

Hint: the recv() system call may not return a full HTTP request, meaning that multiple recv() calls may be needed for the server to read in a complete request. On the other hand, if the client issues two back-to-back HTTP requests, it is possible that a single recv() call returns data from both requests. For this reason, you must ensure that you frame/unframe HTTP messages based on the protocol, not based on what recv() returns.
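
A sketch of protocol-based framing that is independent of how recv() chunks the data: accumulate bytes into a buffer (the 8KB request cap bounds its size), then scan for the CRLF-CRLF terminator. Any bytes past the returned length belong to the next pipelined request:

#include <string.h>
#include <sys/types.h>

/* Returns the length of the first complete request in buf (including
   its terminating blank line), or 0 if more data is still needed. */
ssize_t frame_request(const char *buf, size_t len)
{
    for (size_t i = 3; i < len; i++)
        if (memcmp(buf + i - 3, "\r\n\r\n", 4) == 0)
            return (ssize_t)(i + 1);
    return 0;
}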

Extension 2: IP address-based allow/deny rules and enforcement

Some web servers add support for protecting content based on the IP address of the client, such that certain documents are only accessible to clients coming from certain IP addresses. For example, a company might have an “internal” portion of their website that is only accessible to employees who are physically located at the company or logged into the company network via a VPN (virtual private network, a topic we’ll go over later in the term).

To define the access rules based on IP addresses, many web servers, including the Apache server, read from a “.htaccess” file (note the dot in front of the filename). If there is a .htaccess file in a given directory, then the rules in that file apply to that directory. In general, production servers apply the rules from .htaccess files to subdirectories as well, and there are complex rules about how rules in different directories interact with each other and how they are merged together. For this project, we’re going to simplify that whole system and only apply the rules in a .htaccess file to that directory and no others. This simplification should make it easier to attempt this extension.

The format of a .htaccess file is a list of lines, each starting with “allow from” or “deny from”. After the “from” is an IP address range in CIDR format (e.g., xxx.yyy.zzz.www/pp). For example, “allow from 172.22.16.12/24”, “deny from 121.229.0.0/16”, or “deny from 0.0.0.0/0”. Note that 0.0.0.0/0 is a special address that simply means “all IP addresses” (the wildcard address). A rule may also give a fully qualified host and domain name to specify a single machine, for example “allow from mymachine.ucsd.edu”. Rules are applied in order from top to bottom, and the first matching rule wins.

For example:

deny from 172.22.16.18/32
allow from 172.22.16.0/24
allow from 192.168.0.0/16
allow from mymachine.ucsd.edu
deny from 0.0.0.0/0

allows any host in the 172.22.16.0/24 subnet, except for host 172.22.16.18, to access this page, as well as mymachine.ucsd.edu. Hosts in the 192.168.0.0/16 subnet can also access the content. Any other hosts are denied by the default rule on the last line.

When a host is denied, it should receive a 403 error message and the content should not be returned, nor should any metadata about the real file (e.g., its file size).
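
The core of rule matching is a CIDR prefix test. A sketch for IPv4, assuming both addresses have already been parsed into host byte order (e.g., with inet_pton() followed by ntohl()):

#include <stdint.h>

/* Returns 1 if client falls inside rule_addr/prefix_len. The zero-length
   prefix is special-cased because a 32-bit shift by 32 is undefined in C,
   and /0 (the 0.0.0.0/0 wildcard) must match everything. */
int cidr_match(uint32_t client, uint32_t rule_addr, int prefix_len)
{
    uint32_t mask = prefix_len == 0 ? 0 : 0xFFFFFFFFu << (32 - prefix_len);
    return (client & mask) == (rule_addr & mask);
}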

Extension 3: Different strategies for threading

It is somewhat easier to implement concurrency by simply spawning a new worker thread when a request arrives at the system; however, creating that thread (and destroying it after the request finishes processing) incurs overhead and reduces overall performance. An alternative approach is to keep a pre-spawned set or “pool” of threads ready to go. When a request comes in, you simply grab an unused thread and have it handle the request. After the request is handled, instead of destroying the thread, you put it back into the pool (see the sketch below). Note that you have to think through your policy on what happens when more concurrent clients have arrived than you have threads in the thread pool. Take a look at the listen queue in the listen() system call for more information on how to keep a queue of clients.
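
A minimal sketch of such a pool: the accept loop hands each new socket to a bounded queue, and N pre-spawned workers pull from it. Names and the queue capacity are illustrative, and handle_client_fd is an assumed request-handling routine:

#include <pthread.h>

#define QUEUE_CAP 64

void handle_client_fd(int fd);   /* assumed: serves one connection */

static int queue[QUEUE_CAP];
static int head, tail, count;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;

void enqueue_client(int fd)      /* called by the accept loop */
{
    pthread_mutex_lock(&lock);
    while (count == QUEUE_CAP)   /* queue full: accept loop blocks here */
        pthread_cond_wait(&not_full, &lock);
    queue[tail] = fd;
    tail = (tail + 1) % QUEUE_CAP;
    count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
}

void *worker(void *arg)          /* N of these are spawned at startup */
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0)
            pthread_cond_wait(&not_empty, &lock);
        int fd = queue[head];
        head = (head + 1) % QUEUE_CAP;
        count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&lock);
        handle_client_fd(fd);
    }
    return NULL;
}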

In this extension, you will implement concurrency via both (1) a thread-per-connection model, and (2) a fixed-size thread pool. You should compare the latency of issuing small requests between both models. Quantitatively evaluate both the latency and throughput of these two approaches, as a function of the number of concurrent clients and the size of the thread pool. In particular, answer these questions:

Q1. Does the latency of a small request vary between the two threading models for pools of size 5 when evaluated at concurrency levels of {2,3,4,5}?

Q2. Does the throughput of large requests vary between the two threading models for pools of size 5 when evaluated at concurrency levels of {2,3,4,5}?

Q3. Does the latency of a small request vary between the two threading models for pools of size 5 when evaluated at concurrency levels of {10,20}?

Q4. Does the throughput of large requests vary between the two threading models for pools of size 5 when evaluated at concurrency levels of {10,20}?

Put your answers for these questions in a file called EXTENSION3.txt in the base directory of your project.

Note: You will need a way to specify which mode the server should be in, and so for that you can add an additional command-line argument. It should have the form:

$ ./httpd [port] [doc_root] pool N

for the thread-pool mode with N threads in the pool. For example, “./httpd 8080 htdocs pool 5” would specify a thread pool size of 5.

$ ./httpd [port] [doc_root] nopool

for the thread-per-connection mode.

Extension 4: A fully non-blocking, event-driven design (Warning!)

Threads provide a useful abstraction for thinking about concurrency in a networked application, because as a programmer you only have to reason about the state of one request. However, if the number of concurrent clients exceeds the amount of parallelism available in your server, then the OS will have to start scheduling multiple threads of execution on the same processor/core. This incurs overhead, since switching between these threads takes time.

An alternative way to implement a concurrent networked application is via an event-driven, non-blocking design, in which a single processor or core handles multiple client requests concurrently. The idea is that a single thread of control handles a portion of a given request; when that request would block waiting on another resource (such as a disk read, or space freeing up in a network socket buffer), the thread instead saves information about the state of the request and goes on to work on a portion of another request. Such a thread is called “event-driven” because it simply goes from one event to another, processing a request as much as it can before having to switch over to another request.

Event-driven design is closely tied to non-blocking file and socket APIs, because an event-driven system cannot afford to block on an API call. Why is that? Because if the thread goes to sleep waiting on, say, a disk access to return, then all clients would be blocked and performance would suffer. Thus the programmer (you!) must be careful to ensure that none of the APIs you call block.

To implement a single-threaded, event-driven, non-blocking web server, you will need to rely on non-blocking file and socket APIs, and use select() or epoll to determine when events are ready to be handled. You must keep a per-request data structure that ensures you can make progress on all in-flight requests, and must augment that structure with bookkeeping information on the state each client is in.
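
A skeleton of such a loop built on select() is sketched below; the per-connection state machine (advance_client) is the hard part and is only assumed here:

#include <fcntl.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

int advance_client(int fd);      /* assumed: resumes this client's request
                                    from saved state without blocking;
                                    returns 0 when the connection is done */

static void make_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

void event_loop(int srv)
{
    make_nonblocking(srv);
    fd_set active, readable;
    FD_ZERO(&active);
    FD_SET(srv, &active);
    int maxfd = srv;
    for (;;) {
        readable = active;       /* select() mutates its fd_set argument */
        select(maxfd + 1, &readable, NULL, NULL, NULL);
        for (int fd = 0; fd <= maxfd; fd++) {
            if (!FD_ISSET(fd, &readable))
                continue;
            if (fd == srv) {     /* a new connection is ready to accept */
                int c = accept(srv, NULL, NULL);
                make_nonblocking(c);
                FD_SET(c, &active);
                if (c > maxfd)
                    maxfd = c;
            } else if (advance_client(fd) == 0) {
                close(fd);       /* client finished or errored out */
                FD_CLR(fd, &active);
            }
        }
    }
}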

If you want to attempt this extension, please see me before starting, as it is quite difficult and error-prone. However, the advantage of this extension is that, if you’re successful, you’ll have a deep mastery of network programming and will be ready to implement highly concurrent and high-performance network code.

Reminders

  • Make sure to separate framing and parsing in your code to receive credit
  • Make sure your basic web server code works against the provided autograder before working on the extensions

Acknowledgements

  [1] https://www.jmarshall.com/easy/http/

  [2] There is a mechanism called “Cookies” which enables the server to send some state to the client, that the client then sends back to the server the next time it connects, so that it appears like the server keeps state. However, if you delete that cookie, or use a different web browser to reconnect to the server, all that state is lost.