==============================================================================
C-Scene Issue #2
Server Design under UNIX systems.
Nick: Shadows
Email: shadows@whitefang.com
==============================================================================
Intro
------
The following is the first part of what will hopefully be a series of
articles discussing server design and implementation. It's assumed the reader
programs under one of the UNIX flavors, though readers from other backgrounds
should still be able to follow along.
Considerations
--------------
The first consideration when developing a server is how busy the
server will be. Obviously, a telnet server is hit less often than a web
server, and that's mostly due to the nature of the telnet and HTTP protocols.
HTTP is not stateful; it's all one shot: you send a query and get a response.
Most HTML pages also include references to other resources, each of which
requires another query. The web server is therefore busier, since it handles
one query per connection and takes more hits. A telnet server, on the other
hand, accepts a connection and proceeds to carry out its protocol over the
course of that one connection. The telnet server is called once and remains
running for longer. The web server gets called many times, and lives in short
bursts.
The second consideration is whether the server needs to handle more
than one connecting entity at a time. A web server accepts one
connection, handles it, and then terminates the connection; the same goes
for the telnet server. A chatline server, such as ircd, is different. It
remains running and accepts numerous connections, since its goal is to
set up a forum in which those numerous clients can talk to each other. The
IRC server is probably an example of a very complex server; we'll see later
how it has to make sure it never blocks on any system call, or its clients
will not be served equally.
Some servers, such as rwhod, use UDP and can serve more than one client
at a time without really spending a lot of resources doing so. Only one
listening socket is set up to receive UDP datagrams, and the processing time
spent accepting one datagram per client is negligible.
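To make that concrete, here is a minimal sketch of a single-socket UDP
server. It is not rwhod's actual protocol, and the port number is made up for
the example; the point is that one socket receives every client's datagrams,
and each is handled as it arrives with no per-client state.
--- Example single-socket UDP server (illustrative sketch, not rwhod)
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>

#define PORT     5151   /* made-up port for the example */
#define BUFFSIZE 512

int main(void)
{
    int sock;
    struct sockaddr_in sin, from;
    socklen_t fromlen;
    char buff[BUFFSIZE];
    ssize_t len;

    if ((sock = socket(AF_INET, SOCK_DGRAM, 0)) < 0) {
        perror("socket");
        exit(1);
    }

    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_addr.s_addr = htonl(INADDR_ANY);
    sin.sin_port = htons(PORT);

    if (bind(sock, (struct sockaddr *) &sin, sizeof(sin)) < 0) {
        perror("bind");
        exit(1);
    }

    /* One listening socket serves every client. */
    for (;;) {
        fromlen = sizeof(from);
        len = recvfrom(sock, buff, sizeof(buff), 0,
                       (struct sockaddr *) &from, &fromlen);
        if (len < 0) {
            perror("recvfrom");
            continue;
        }
        /* Bounce the datagram back to whoever sent it. */
        sendto(sock, buff, len, 0, (struct sockaddr *) &from, fromlen);
    }
}
---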
Simple inetd servers
--------------------
Probably the first exercise for any network programmer is to write a very
simple server that runs out of inetd.
Inetd's job is to dup() (duplicate) the accepted socket onto stdin,
stdout, and stderr. To the enlightened, it doesn't make much difference
which descriptor you use, since sockets are fully duplex and allow you to
recv()/send() on a single descriptor. The duplication is merely meant to let
ordinary programs read from and write to the socket. It's important to
stress "simple" programs, since mixing stdio and the normal socket routines
on the same socket descriptor can be dangerous. Also, the stdio
implementation on some systems is broken enough that using it on sockets is
questionable.
What inetd provides is the management of accepting new connections and
spawning the servers to handle them. Inetd can be told to spawn many
instances of a server, or only one. It also supports both TCP and UDP.
--- Example inetd echo server
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BUFFSIZE 512

int main(int argc, char *argv[])
{
    char buff[BUFFSIZE];

    /* Under inetd the accepted socket is already on stdin/stdout;
       if stdin is a terminal we were started by hand instead. */
    if (isatty(0)) {
        fprintf(stderr, "This program was meant to run out of inetd\n");
        exit(-1);
    }

    /* Echo every line back to the client until it disconnects. */
    while (fgets(buff, sizeof(buff), stdin) != NULL) {
        fputs(buff, stdout);
        fflush(stdout);
    }

    exit(0);
}
---
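To actually run the example out of inetd, you'd add a one-line entry to
/etc/inetd.conf along the lines below. The service name "myecho", the user,
and the install path are made up for the illustration, and the service name
would also need a matching port entry in /etc/services before inetd will
listen for it.
--- Example /etc/inetd.conf entry (name, user, and path are assumptions)
myecho  stream  tcp  nowait  nobody  /usr/local/sbin/myecho  myecho
---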
There's an important lesson to be learned here about using stdio with
sockets. The naive might think that flushing immediately sends the data over
to the user on the other side of the network. The truth is it only flushes
the data out of the stdio buffers into the socket buffer, allowing it to be
transmitted over the network. TCP itself is buffered, and using stdio on top
of TCP sockets can leave you in a tangle where you have to deal with two
layers of buffering at once. An error check should also be made on fflush(),
since the socket buffers could fill up to the point of no longer being able
to hold any more data.
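As a small illustration, here is the echo loop from the example above
reworked to check fflush()'s return value; when the flush fails, the server
simply gives up instead of carrying on blindly.
--- Example echo loop with an error check on fflush()
#include <stdio.h>
#include <stdlib.h>

#define BUFFSIZE 512

int main(void)
{
    char buff[BUFFSIZE];

    while (fgets(buff, sizeof(buff), stdin) != NULL) {
        fputs(buff, stdout);
        if (fflush(stdout) == EOF) {
            /* The stdio buffer couldn't be pushed into the socket
               buffer; the peer has most likely gone away. */
            perror("fflush");
            exit(1);
        }
    }
    exit(0);
}
---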
Going back to our first consideration on server design, it's obvious that
servers spawned out of inetd can perform poorly when taking a lot of hits.
This is due to the fork() call inetd makes to spawn the server each time it's
hit. Most systems also place a limit on how many children a process can have,
and since inetd caters to a lot of different servers, the busiest server has
to share that parental resource with other servers that might not be busy at
all.
Inetd is meant for building and setting up servers quickly, without bothering
with all the details behind a network server. By the same token, all inetd
servers should be simple.
The Preforking server
----------------------
You could reimplement inetd: simply bind, accept() TCP connections, and fork
out a daemon to handle each one. But it makes more sense for the daemons you
fork out to remain alive. This is called preforking, and web servers like
Apache do it to handle the load they're under. fork() is slow and takes up
resources, since you're copying an image of the parent process (even if your
OS does copy-on-write, you're still wasting resources).
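For reference, here is a bare-bones sketch of that bind/accept/fork approach.
The port number and the handle_client() routine are invented for the
illustration, and a real server would want better error handling and more
portable child reaping.
--- Example fork-per-connection TCP server (illustrative sketch)
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <signal.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>

#define PORT 5150   /* made-up port for the example */

/* Hypothetical per-connection handler: echo until the client goes away. */
static void handle_client(int fd)
{
    char buff[512];
    ssize_t len;

    while ((len = read(fd, buff, sizeof(buff))) > 0)
        write(fd, buff, len);
}

int main(void)
{
    int sock, client;
    struct sockaddr_in sin;

    signal(SIGCHLD, SIG_IGN);   /* don't leave zombies behind */

    if ((sock = socket(AF_INET, SOCK_STREAM, 0)) < 0) {
        perror("socket");
        exit(1);
    }

    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_addr.s_addr = htonl(INADDR_ANY);
    sin.sin_port = htons(PORT);

    if (bind(sock, (struct sockaddr *) &sin, sizeof(sin)) < 0) {
        perror("bind");
        exit(1);
    }
    if (listen(sock, 5) < 0) {
        perror("listen");
        exit(1);
    }

    for (;;) {
        if ((client = accept(sock, NULL, NULL)) < 0)
            continue;
        switch (fork()) {
        case 0:                 /* child: serve this one connection */
            close(sock);
            handle_client(client);
            close(client);
            _exit(0);
        case -1:                /* fork failed: just drop the connection */
            perror("fork");
            /* fall through */
        default:                /* parent: go back to accepting */
            close(client);
        }
    }
}
---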
Preforking servers attempt to do the forking before actually getting hit,
and each forked instance keeps going, taking care of yet more connection
attempts. The server can also fork out as many children as required: for
example, if the parent notices all of its children are busy, it forks out
more, until it reaches its own maximum limit, which can be made configurable.
Of course, a child process can't go on forever. Eventually memory leaks in
the standard libraries may cause your children to bloat up and consume system
resources. It's a good idea to set a lifetime for each child, so that after X
many processed requests the child dies and a new one is forked.
The comparison between inetd-based servers and preforking servers is quite
interesting. Each time an inetd server accepts a new connection, it forks out
a new server; with a preforking server, a new child process is only forked
every 30 or 60 connections, depending on how you set it up. You can imagine
how much fork() time is saved here.
The way a preforking server works is by forking out X many children
initially. Each child then attempts to grab a lock, waiting in line for the
next connection; the one that gets the lock does an accept() on the
listening descriptor. After it accepts, it immediately releases the lock and
notifies the parent of its acceptance, so the parent may decide to fork out
more children if it feels overloaded. The child handles the request, then
notifies the parent that it's idle again and gets back on the lock queue.
This goes on throughout the lifetime of the server.
The following diagram will attempt to outline what a preforking server does.
Preforking Lifecycle:
   Parent  (polls on IPC back to the children, waiting for status info)
      |    (forks out children)
      +--------+--------+--------+--------+--------+--------+
Lock queue     |        |        |        |        |        |
             Child    Child    Child    Child    Child    Child
            (flock)  (flock)  (flock)  (flock)  (flock)  (accept)
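Here is a compressed sketch of that scheme (not the prefork library itself):
the parent binds the listening socket, preforks a fixed pool, and replaces
children as they retire, while each child serializes its accept() calls with
flock() on a shared lock file. The port, pool size, lifetime, and lock file
path are all made up for the example, and the IPC back to the parent for
status reporting is left out.
--- Example preforking skeleton (illustrative sketch)
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/file.h>
#include <sys/wait.h>
#include <netinet/in.h>

#define PORT      5150                  /* made-up port for the example        */
#define CHILDREN  6                     /* size of the preforked pool          */
#define LIFETIME  30                    /* requests served before a child dies */
#define LOCKFILE  "/tmp/prefork.lock"   /* shared lock file for the queue      */

/* Hypothetical per-connection handler: echo one buffer and hang up. */
static void handle_client(int fd)
{
    char buff[512];
    ssize_t len;

    if ((len = read(fd, buff, sizeof(buff))) > 0)
        write(fd, buff, len);
}

/* Each child opens its own descriptor on the lock file (flock() locks
   belong to the open file, not the process), then loops: wait for the
   lock, accept(), release the lock, serve the client.  After LIFETIME
   requests it exits and the parent forks a replacement. */
static void child(int sock)
{
    int lockfd, client, served;

    if ((lockfd = open(LOCKFILE, O_CREAT | O_RDWR, 0600)) < 0)
        _exit(1);

    for (served = 0; served < LIFETIME; served++) {
        flock(lockfd, LOCK_EX);         /* wait for our turn to accept   */
        client = accept(sock, NULL, NULL);
        flock(lockfd, LOCK_UN);         /* let a sibling accept the next */
        if (client < 0)
            continue;
        handle_client(client);
        close(client);
    }
    _exit(0);
}

int main(void)
{
    int sock, i;
    struct sockaddr_in sin;

    if ((sock = socket(AF_INET, SOCK_STREAM, 0)) < 0) {
        perror("socket");
        exit(1);
    }
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_addr.s_addr = htonl(INADDR_ANY);
    sin.sin_port = htons(PORT);
    if (bind(sock, (struct sockaddr *) &sin, sizeof(sin)) < 0 ||
        listen(sock, 5) < 0) {
        perror("bind/listen");
        exit(1);
    }

    for (i = 0; i < CHILDREN; i++)      /* prefork the initial pool */
        if (fork() == 0)
            child(sock);

    /* Here the parent only replaces retiring children; a real server
       would also watch how busy the pool is and grow it on demand. */
    for (;;) {
        if (wait(NULL) > 0 && fork() == 0)
            child(sock);
    }
}
---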
There are times when a preforking server does more harm than good. If you
know for a fact that your server will not be getting lots of hits, it's silly
to prefork, since you'd be reserving system resources for a service that
isn't used much.
I've written an example preforking library which can act as a skeleton for
anyone who wants to study writing preforking servers.
http://www.whitefang.com/prefork.html
In my next article, I'll cover writing servers that handle more than one
connection at a time, how to get around blocking calls, and even how to do
asynchronous DNS lookups!
C Scene Official Web Site :
http://cscene.oftheinter.net
C Scene Official Email :
cscene@mindless.com
This page is Copyright © 1997 By
C Scene. All Rights Reserved