You may have heard about Node.js, and how it maximizes scalability. But what is the big innovation behind Node? HTTP servers have been around for a very long time (Apache is nearly 20 years old); what have we been missing for so long?
Let’s re-invent it together!
What is an HTTP server?
Writing a proof-of-concept web server is easy. All you have to do is follow these steps:
- Listen on a TCP port
- Whenever a client tries to open a connection, accept it
- Parse the text sent by the client (which is actually an HTTP request)
- Process said request
- Reply with a textual answer (this is your HTTP response)
Let’s build a sample web server: it will reply “Hello” to any incoming request.
// CAUTION: This code is *extremely* dirty, it's aimed at being as compact as possible
// to get straight to the point. We'll kick you if you use it "for real".
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <netdb.h>

void processRequest(int socket) {
    int requestBufferSize = 8096;
    char * request = malloc(requestBufferSize);
    recv(socket, request, requestBufferSize, 0);
    free(request); // Actually we don't care about the client's request
    char * response = "HTTP/1.1 200 OK\nContent-Type: application/json\n\n{\"hello\": \"world\"}\n";
    send(socket, response, strlen(response), 0); // Let's send our static response
    close(socket);
}
int main(int argc, char * argv[]) {
    struct addrinfo hints;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = PF_INET;       // We want an IP socket
    hints.ai_protocol = IPPROTO_TCP; // That'll speak TCP
    hints.ai_socktype = SOCK_STREAM; // TCP is stream-based
    hints.ai_flags = AI_PASSIVE;     // We'll want to be a server

    struct addrinfo * res;
    getaddrinfo("localhost", "8080", &hints, &res); // Let's set up our server on "localhost:8080"

    int serverSocket = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    int reuse = 1;
    setsockopt(serverSocket, SOL_SOCKET, SO_REUSEADDR, &reuse, sizeof(reuse));
    bind(serverSocket, res->ai_addr, res->ai_addrlen); // Let's bind to the actual socket
    listen(serverSocket, 10000); // And listen, with a 10'000-client backlog

    while (1) {
        struct sockaddr clientAddress;
        socklen_t clientAddressLength = sizeof(clientAddress);
        // The call to "accept" will *wait* until a client actually knocks on the door of our server
        int currentClientSocket = accept(serverSocket, &clientAddress, &clientAddressLength);
        processRequest(currentClientSocket); // Let's process that client
    }
}
As you can see, it’s extremely simple. Granted, we’re doing absolutely no error checking and this code should remain as far as possible from a production environment, yet it does make a point: the core logic of a web server isn’t complicated.
Now, this server does absolutely no processing whatsoever. Let’s simulate some work: we’re going to write an HTTP server with some delay. Let’s say we’ll answer clients 5 seconds after we’ve received their request. We won’t do anything at all during those 5 seconds, but it will be a good simulation of some actual work (reading a file from disk, for example).
The synchronous model
So, we want to reply 5 seconds after receiving the request. The classical approach is to use the sleep function.
void processRequest(int socket) {
    int requestBufferSize = 8096;
    char * request = malloc(requestBufferSize);
    recv(socket, request, requestBufferSize, 0);
    free(request);
    // And now, some magic...
    sleep(5);
    // Ta-daa! Easy, wasn't it?
    char * response = "HTTP/1.1 200 OK\nContent-Type: application/json\n\n{\"hello\": \"world\"}\n";
    send(socket, response, strlen(response), 0);
    close(socket);
}
The single process case
Simple, right? Of course. Now what if another client makes a request while we’re still processing the first one? Well, they won’t get any answer: we’re sleeping!
Sub-processes and threads to the rescue
If you ask a developer how to fix this, their answer will most likely be “just fork a thread”. And that does make sense! So our web server becomes:
pthread_t thread;
while (1) {
    struct sockaddr clientAddress;
    socklen_t clientAddressLength = sizeof(clientAddress);
    long currentClientSocket = accept(serverSocket, &clientAddress, &clientAddressLength);
    pthread_create(&thread, NULL, threadRoutine, (void *)currentClientSocket);
    pthread_detach(thread);
}
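The article never shows the `threadRoutine` passed to `pthread_create`. Here is a minimal sketch of what it could look like; the `processRequest` stub below is a hypothetical stand-in for the article's version (which reads the request, sleeps 5 seconds, and writes the HTTP response):

```c
#include <pthread.h>
#include <stdio.h>

// Hypothetical stand-in for the article's processRequest, for illustration only.
void processRequest(int socket) {
    printf("replying on socket %d\n", socket);
}

// The thread routine: unpack the client socket (which was smuggled in
// through the void * argument) and process it. The thread was detached,
// so nobody will join it; it simply dies when the work is done.
void * threadRoutine(void * arg) {
    int clientSocket = (int)(long)arg;
    processRequest(clientSocket);
    return NULL;
}
```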
And it does the job: for each incoming request the server will fork a thread, so it will still be able to process other incoming requests.
This is the way the vast majority of web servers work, with some refinements of course:
- To avoid resource exhaustion, servers generally cap the maximum number of threads they’ll spawn, and they’ll try to re-use threads that are done working (this is called a threadpool).
- It is also possible to spawn multiple processes, and dispatch the work to said processes.
- You can even mix’n’match and have multiple processes, each with multiple threads.
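To make the threadpool refinement concrete, here is a hedged sketch using pthreads. All the names (`pushJob`, `startPool`, the sizes) are invented for illustration, and `processRequest` is again a stand-in: a fixed set of workers pulls client sockets from a shared queue instead of spawning one thread per connection.

```c
#include <pthread.h>
#include <stdio.h>

#define WORKER_COUNT 4
#define QUEUE_SIZE 256

static int jobs[QUEUE_SIZE];
static int jobCount = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t hasJobs = PTHREAD_COND_INITIALIZER;

// Hypothetical stand-in for the article's processRequest
static void processRequest(int socket) {
    printf("worker handling socket %d\n", socket);
}

// Called from the accept loop: enqueue a client socket and wake a worker
void pushJob(int clientSocket) {
    pthread_mutex_lock(&lock);
    if (jobCount < QUEUE_SIZE)
        jobs[jobCount++] = clientSocket;
    pthread_cond_signal(&hasJobs);
    pthread_mutex_unlock(&lock);
}

// Each worker loops forever: sleep until a job arrives, pop it, process it
static void * worker(void * arg) {
    (void)arg;
    while (1) {
        pthread_mutex_lock(&lock);
        while (jobCount == 0)
            pthread_cond_wait(&hasJobs, &lock);
        int clientSocket = jobs[--jobCount];
        pthread_mutex_unlock(&lock);
        processRequest(clientSocket);
    }
    return NULL;
}

// Start the pool once, at server startup: the threads are re-used forever
void startPool(void) {
    pthread_t t;
    for (int i = 0; i < WORKER_COUNT; i++) {
        pthread_create(&t, NULL, worker, NULL);
        pthread_detach(t);
    }
}
```

The accept loop then calls `pushJob(currentClientSocket)` instead of `pthread_create`, so thread creation cost is paid once rather than per request.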
The asynchronous model
So, problem solved, right? Meh, not really. It turns out that using threads and subprocesses comes with a cost. It’s not noticeable at first, but if you use many threads your program will spend a significant part of its time switching contexts, and that will end up being a bottleneck. Let’s get back to our “sleep” example: we should be able to handle millions of concurrent users, even with modest hardware, since after all we’re only waiting! But if you hit the threaded version of our code with several thousand requests per second, it won’t handle the load.
An event loop
Now, those of you who have ever done GUI programming will have interacted with an event loop before. This programming model is at the core of iOS and Android development, for example. It runs as follows:
- Your app sets up a bunch of widgets on screen (buttons, text areas, sliders, you name it).
- Then it enters an infinite loop. Inside this loop, it sleeps for a very tiny amount of time, pretty much like this:
event * queue[100];
int queue_length;

while (TRUE) {
    usleep(1000); // Sleep for 0.001 second
    if (queue_length > 0) { // Check if there is something to do. If there is, do it!
        process_event(queue[0]);
        // (Shift the remaining events to the front -- elided for brevity)
        queue_length--;
    }
}
The process_event function looks at the actual event structure and does the corresponding work. For example, an event could describe the click of a “compute” button, and the corresponding work could be to compute a hash.

On a mobile phone, the magic is in the OS: whenever the user taps the touchscreen, the OS creates an event and adds it to the queue array, to be processed by your application.
Notice how, most of the time, the application uses nearly 0% CPU: every 1/1’000th of a second it wakes up for an extremely quick check, sees that the queue is empty, and goes back to sleep.
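To make the queue side of this concrete, here is a hedged sketch of the plumbing the OS performs on a tap. The event fields and the `enqueue_event` routine are invented for illustration, not a real mobile API:

```c
// A hypothetical event, matching the "event * queue[100]" from the loop above
typedef struct {
    int type; // e.g. TOUCH_DOWN
    int x, y; // where the user tapped
} event;

#define TOUCH_DOWN 1

event * queue[100];
int queue_length = 0;

// What the OS conceptually does on a tap: append the event to the
// application's queue, where the event loop will find and process it
void enqueue_event(event * e) {
    if (queue_length < 100)
        queue[queue_length++] = e;
}
```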
An evented web server
Turns out, that metaphor is a very good fit for web servers! Let’s re-write our sample server using an evented approach:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/select.h>
#include <sys/time.h>
#include <netdb.h>

void processRequest(int socket) {
    int requestBufferSize = 8096;
    char * request = malloc(requestBufferSize);
    recv(socket, request, requestBufferSize, 0);
    free(request); // Actually we don't care about the client's request
    char * response = "HTTP/1.1 200 OK\nContent-Type: application/json\n\n{\"hello\": \"world\"}\n";
    send(socket, response, strlen(response), 0); // Let's send our static response
    close(socket);
}
int main(int argc, char * argv[]) {
    // Let's initialize our server. Just like before.
    struct addrinfo hints;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = PF_INET;
    hints.ai_protocol = IPPROTO_TCP;
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_PASSIVE;
    struct addrinfo * res;
    getaddrinfo("localhost", "12345", &hints, &res);
    int serverSocket = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    int reuse = 1;
    setsockopt(serverSocket, SOL_SOCKET, SO_REUSEADDR, &reuse, sizeof(reuse));
    bind(serverSocket, res->ai_addr, res->ai_addrlen);
    listen(serverSocket, 10000);

    // So far, no change. Here comes the evented action.
    fd_set readFdSet;

    // Let's define a "responseEvent" structure.
    // This will store an event describing an HTTP response to be sent.
    typedef struct {
        int socket;
        time_t fireTime;
    } responseEvent;

    // And let's create our event queue
    int eventPoolSize = 1024; // We won't queue more than 1024 events
    responseEvent * eventPool = malloc(sizeof(responseEvent) * eventPoolSize);
    memset(eventPool, 0, sizeof(responseEvent) * eventPoolSize);

    // We're all set. Let's enter our infinite event loop!
    while (1) {
        FD_ZERO(&readFdSet);
        FD_SET(serverSocket, &readFdSet);
        // "select" may modify the timeout, so we reset it on every iteration
        struct timeval timeout;
        timeout.tv_sec = 0;
        timeout.tv_usec = 1000; // 1ms

        // The following is equivalent to the "accept" call, except it times out:
        // "select" will wait for a new client for 1ms.
        // If none comes in, it'll skip the embedded code.
        if (select(serverSocket + 1, &readFdSet, NULL, NULL, &timeout) > 0) {
            // A new client knocked on our door!
            // We will *not* answer the client straight away! That's precisely the trick.
            // Instead, we're going to build a "responseEvent" structure, and process it
            // later on, when the time is right.
            struct sockaddr clientAddress;
            socklen_t clientAddressLength = sizeof(clientAddress);
            int currentClientSocket = accept(serverSocket, &clientAddress, &clientAddressLength);

            // Let's create a new event. Actually, we're not deleting processed events, we're simply
            // setting their "socket" field to zero. So let's just find the first event whose socket
            // is equal to zero, and re-use it!
            int eventIndex = 0;
            for (eventIndex = 0; eventIndex < eventPoolSize; eventIndex++) {
                if (eventPool[eventIndex].socket == 0) {
                    break;
                }
            }

            // Mmh, we didn't find any un-used event...
            if (eventIndex >= eventPoolSize) {
                // This may happen if we can't process events as fast as they are created:
                // unfortunately, going the evented way won't solve *all* your problems!
                printf("Overflowing queue size!\n");
                return 1;
            }

            // So, here's our event. We have to populate it with all the data we'll need
            // to send an answer to the client when we process it.
            responseEvent * newEvent = &eventPool[eventIndex];
            struct timeval currentTime;
            gettimeofday(&currentTime, NULL);
            newEvent->socket = currentClientSocket;        // We'll need that socket to know *where* to reply
            newEvent->fireTime = (currentTime.tv_sec + 5); // And we'll need to know *when* to reply
        }

        // Since "select" times out after 1ms, we'll run this at least once every 1ms
        for (int i = 0; i < eventPoolSize; i++) {
            // We're iterating over all events. We could have sorted them and only processed
            // the relevant ones, but we didn't, for clarity.
            struct timeval currentTime;
            gettimeofday(&currentTime, NULL);
            responseEvent * event = &(eventPool[i]);
            if (event->socket != 0) { // That event is live
                if (event->fireTime <= currentTime.tv_sec) {
                    // And it is now time to reply to that client!
                    processRequest(event->socket);
                    event->socket = 0; // "Delete" the event
                }
            }
        }
    }
}
This code will perform much, much better under heavy load. Indeed, the only resource we’re allocating per client is an event structure. Our routine that checks for events isn’t the smartest, but it wouldn’t be hard to improve.
Now there is a drawback: as you can see, we’ve significantly refactored our code. There’s no more call to the sleep function: we’re storing a fireTime value, and we regularly check whether it matches the current time. This works pretty well in our case, but what if we were doing some actual work?
Let’s elaborate on a more common scenario: a server serving static files. Pretty common, right? Turns out it is strangely similar to our example: most of the time, the CPU is waiting. It’s not sleeping as in our example: it’s waiting for the hard drive to complete its seek operation. Yet the CPU does pretty much nothing most of the time. So this would be a very nice candidate for an evented web server. But how?
Closures
Turns out the read function call is synchronous. It blocks your process until the data has been read. Exactly like the sleep function did. If you want to go the evented route, you’ll have to use something else: an evented I/O API.
While this is possible in C using callbacks, it is pretty painful to use. Using a language with closures makes it much easier. Think of it this way:
// We're using blocks, which is a proprietary extension made by Apple
// to add closures to the C language
int fileDescriptor = open("path/to/my/file.txt", O_RDONLY);

read_from_file_then(fileDescriptor, ^(char * data) {
    // This is a *block*. It represents a piece of code.
    // It is called whenever the "read" has finished,
    // with "data" being equal to the content of the file.
});

// Here is the trick: "read_from_file_then" returns *immediately*,
// and the block is called "later on", whenever appropriate.
With such an asynchronous API, it becomes much easier to create events that use blocks to process the response.
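To see why the plain-callback route is the painful one, here is what the same call could look like in C with a function pointer. Everything here is hypothetical: `read_from_file_then` is not a real API, and for the sake of a runnable sketch it "completes" synchronously, whereas a real implementation would fire the callback later, from the event loop.

```c
#include <stdio.h>

// The continuation's signature: any state it needs must be threaded
// through by hand via the "context" pointer
typedef void (*readCallback)(char * data, void * context);

// Hypothetical evented read: a real one would register the callback
// and return immediately, firing it once the disk read completes
void read_from_file_then(int fileDescriptor, readCallback callback, void * context) {
    (void)fileDescriptor; // (no actual I/O in this sketch)
    callback("file contents", context);
}

// The continuation lives in a separate, named function, far from the
// call site: this is exactly the pain that closures remove
void onFileRead(char * data, void * context) {
    int * done = (int *)context;
    printf("got: %s\n", data);
    *done = 1;
}
```

Usage looks like `read_from_file_then(fd, onFileRead, &done);` — with closures, the body of `onFileRead` and the `done` flag would simply be captured inline at the call site.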
So which one should I pick?
Latest doesn’t always mean greatest: evented web servers can be awesome, but they aren’t the answer to every question. Namely, they shine when your CPU sits idle most of the time. If your web server code is CPU-bound, on the other hand, they won’t make a big difference.
Here’s when you should try to use an evented web server:
- When writing any kind of “proxy” app: retrieving data from an external source, and (quickly) reformatting it.
- When building a standard DB-backed website, as long as the time taken to generate the view is significantly lower than the time taken to query the database.
Here’s when you should keep away from using an evented web server:
- When your language doesn’t have closures.
- When your app is computationally intensive: an image processing webservice for example.
If you have any question or comment, we would be very happy to hear from you: just ping us on Twitter!