

Limiting the number of user processes under Linux (or how I learned to stop worrying and love the fork bomb)

Filed under: Programming,Software — rg3 @ 10:51

Some weeks ago there was a controversial discussion at Kriptopolis (a Spanish site mainly dedicated to computer security) about a supposed Denial of Service (DoS) vulnerability present in many Linux distributions and some BSDs. In the end, the vulnerability was a mere shell-based fork bomb that a local user could trigger in most desktop Linux distributions, because limiting the number of user processes is not a common practice. This is the cryptic piece of code that will probably lock your system after a few seconds:

:(){ :|:& };:

Code explanation

Its use of special characters may make it difficult to understand for some people, and impossible for those unfamiliar with Bourne shell scripts. A shell function can be defined in two ways: either function function_name { code ; } or function_name () { code ; }. The code above uses the second form to define a function named : (a colon). The body of the function runs the function twice recursively in a pipe which is sent to the background and, after the function is defined, it is called by invoking its name as a command. If we call this function spawn_two we could write it this way:

spawn_two() { spawn_two | spawn_two & }; spawn_two

Why is this called a fork bomb? In POSIX systems, the fork system call is used to create new processes. A fork bomb is a program of some kind that creates processes rapidly, all of which remain in the system (that is, they don't finish immediately). If there is no system limit on the number of processes a user may run, this process creation routine will eventually consume all the system resources and lock the machine, usually forcing a hard reboot when it becomes unresponsive.

This special piece of code is very nasty, and its composition has been calculated precisely. In particular, you'll notice that the function calls itself twice, using a pipe, and sends the pipe to the background. Each of these steps has a purpose. If it simply called itself, the shell process would start eating all available CPU time while its memory usage grew, but you could kill the routine at any time by pressing Ctrl+C, and it wouldn't create any new process. If you add the ampersand at the end, you trigger the creation of a subshell to run the function, achieving a fork. But the parent function call would finish immediately after creating this subshell (the subshell would be sent to the background and the function would then finish). New processes would be created continuously, but processes would finish continuously too, and the process count in the system would barely increase. If you instead called the function twice, using the pipe, without sending it to the background, you'd create a fork bomb:

:(){ :|:; };:

Using & instead of a semicolon inside the function body is what makes it truly nasty, because the subshells are created as background processes while control returns to the original shell. You can't cancel the process creation routine with Ctrl+C, and if you exit the shell you used to launch it, process creation still continues. It's almost impossible to stop.

Fork bombs are sometimes created by mistake, especially when you are learning to use fork during a programming course. These fork bombs have the side effect of triggering a D'oh! exclamation that can be heard from miles away. The exact distance is proportional to the boot time and the number of users in the system. Fortunately, there are ways to limit the number of user processes in a system. These limits mainly protect you from your own mistakes. If a remote attacker is able to trigger a fork bomb in your system, you probably have a more serious problem than the lack of this limit.

System calls involving resource limits

In POSIX systems, programs can use setrlimit() to set resource limits and getrlimit() to read them. There are two limits, the soft limit and the hard limit. A process may adjust its soft limit anywhere up to the hard limit, but only privileged processes may raise the hard limit, so in the usual case both limits have the same value or the soft limit is the only one that matters. Use man setrlimit to get the gory details. Resource limits are preserved across fork and exec, so the key to limiting the whole system is to establish them in a process as close to the root of the process tree as possible. While we are interested in setting the maximum number of processes per user, there are more types of resource limits, including the size of core dumps, the number of open files, the number of pending signals and many more.

System commands and facilities to set limits

There are at least three common ways of establishing resource limits, depending on your system and how strict you want to be about who will have limits and what those limits will be. The Gentoo wiki has an entry on limiting the number of user processes which mentions two of those ways.

The configuration file /etc/security/limits.conf is read by PAM. Its syntax is very flexible and allows setting general limits as well as specific limits for users and groups. Any application and login system using PAM will benefit from this central configuration point. Unfortunately (in this case), Slackware does not ship PAM, so I can't report on how effective this configuration point is, or whether its settings are applied when logging in from virtual terminals as well as from graphical login managers. It probably works in both, and it's the mechanism you should try first if your system features PAM.
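For reference, a limits.conf fragment capping processes might look like this. The numbers are illustrative and @trusted is a hypothetical group; each line is domain, type, item, value, as described in the limits.conf man page:

```
# /etc/security/limits.conf -- illustrative values, not recommendations
*          soft    nproc    256
*          hard    nproc    512
@trusted   hard    nproc    1024
```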

The shadow package (the one that provides login, su, chsh, passwd, useradd, etc) uses the file /etc/limits. Its syntax differs from the previous configuration file and it’s not as flexible or powerful, but it should be more than enough for basic usage. This file is used, in my system, by login when you log in using a virtual terminal, because login is invoked by agetty, but it doesn’t seem to be used by my graphical login manager, which is KDM. For this reason, my X11 session wouldn’t be limited if I relied on /etc/limits.
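The /etc/limits format packs single-letter flags and values into one field; if I recall the shadow suite's format correctly, the U flag caps the number of processes, so a default line limiting everyone to 256 processes would look roughly like this (check man limits on your system, since this is from memory):

```
# /etc/limits -- U caps the number of processes (illustrative)
*       U256
```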

The third and most flexible way of setting resource limits is the shell built-in ulimit command, if it exists. Bash, for example, has this command. It's a built-in command and not an external program for a good reason: just like cd is a shell built-in because it needs to run the chdir system call inside the shell process (running it from a child process wouldn't make sense), ulimit is always a built-in when it exists, so that it sets the limits for the current shell and all its subprocesses. Most shells read /etc/profile when they are started normally, so you can call ulimit from it or from any file “sourced” by it. Under Bash, use help ulimit to get a brief description of the command. Calling ulimit from the shell is also flexible, if less convenient, in the sense that you can make the call conditional: you can selectively run ulimit depending on the username or group. It's as flexible as a shell script is.

Example: in my Slackware system I considered this the best way to set a limit on the number of processes, so I created a file called /etc/profile.d/_ulimit.sh that runs ulimit -u 256. It works in both virtual terminals and X11 sessions, setting a limit of 256 processes per user.

Note that when you manage a multiuser system you need to make sure that your limits are enforced whatever the login mechanism and shell are. You may also want to restrict the shells your users may choose via chsh by limiting the contents of /etc/shells to shells in which you know your mechanism works. In multiuser systems you should take this seriously, because a fork bomb (accidental or not) can potentially harm many users. Just as multiuser systems usually enforce disk quotas, other resource limits should also be in place.

Appropriate values

There is no universal value that will fit every situation. Some people probably won't want to establish a limit at all. Many Linux and BSD distributions don't set any limit because they're oriented to desktop usage (a handful of users, one at a time), in the same way that they don't set any disk quotas by default. But if you want to protect the system from your own mistakes, you should choose a number high enough for your typical needs but not much higher. In the Kriptopolis discussion people mentioned their systems crashing with the limit set to 1024 or 512 processes, but I don't trust those comments unless they were testing on a very old machine. Mine had absolutely no problem with 1024 or 512 processes, but I set the limit, as you saw, to 256. Under normal usage, check the number of processes you have running on your machine. Right now I checked and I have 32 processes, so 256 is a conservative yet safe number. The syntax of ps is awfully platform-specific, but ps --no-headers -U $(whoami) | wc -l gives me that number on my system.


At least in Linux 2.6 systems with NPTL, the limit does not really apply to the number of processes, but to the number of threads. Here is the code for my pthread_bomb.c:

#include <pthread.h>
#include <stdio.h>

void *create_and_join(void *unused)
{
    pthread_t self = pthread_self();
    pthread_t subthread;
    if (pthread_create(&subthread, NULL, create_and_join, NULL) != 0) {
        printf("Thread %lu: thread creation failed\n", (unsigned long)self);
        return NULL;
    }
    printf("Thread %lu: created thread %lu\n",
           (unsigned long)self, (unsigned long)subthread);
    pthread_join(subthread, NULL);
    return NULL;
}

int main()
{
    create_and_join(NULL);
    return 0;
}
It can be compiled with something like gcc -pthread -Wall -O2 -o pthread_bomb pthread_bomb.c but remember that, due to the multithreaded nature of the program, the message about the thread creation failure may not appear on the last line of the output.


You may have noticed how some shells, especially bash, implement a number of typical commands as shell built-ins, despite the fact that they exist as independent programs in your system. This goes against the old Unix philosophy of “one program for one task”. Sometimes the built-ins make the shell more efficient, but sometimes they're created for security reasons. If you're enforcing a limit on the number of processes and reach that limit by accident, the shell built-in kill can still help you send signals. If the shell relied on the external kill command, it would need to create a new process to run it, and that may not be possible.



The potential danger of .desktop files and more

Filed under: Programming,Software — rg3 @ 18:24

.desktop files

In many X11 desktop environments, links to applications are usually represented by files with the desktop extension in their names. Internally, these files have a format similar to INI files and specify information such as the command to execute and the icon used to represent it. They do not need execution permission and may run programs when they are clicked or double-clicked (depending on your setup). The security implications of this have already been discussed many times before, in places like Linux Weekly News, but I think the issue hasn't received enough exposure.

Traditionally, people consider Unix systems more secure for several reasons, one of them being that the ability to execute a program depends on the program having execution permissions, instead of depending on the file extension as in Windows systems. Usually I don't take these comments very seriously, because the weakest point in system security is, in my humble opinion, the user. Many people are simply so dumb that if they receive an email message from someone they may not even know, they might follow the instructions in it, including saving an attachment to disk and giving it execution permissions if necessary. Desktop files, however, make this easier, as there's no need to grant execution permissions at all. Moreover, the file can disguise itself as an image or audio file, easily deceiving a novice user.

I have created an example file with these contents, and I’ve named the file paris_hilton.jpg.desktop:

[Desktop Entry]
Comment=JPEG Image
Comment[en_US]=JPEG Image
Exec=sh -c '>~/paris_hilton.desktop.txt'
GenericName=Paris Hilton Nude
GenericName[en_US]=Paris Hilton Nude

I sent it to myself by email and tried to see what happens when you receive it as an attachment. KMail is nice enough not to let you run the file directly, and represents the attachment with the icon reserved for programs (a gear). It also displays the full name, including the desktop extension. On the other hand, if I click on Save As and save it to my desktop, the file is then represented by the image icon, with the displayed name paris_hilton.jpg. If I “activate” it, the program in the Exec line is run, and a file is created in my home directory.

And there lies the danger, because the Exec line can run literally anything, including any Perl or Python program, for example. It can read or write anything in your home directory and maybe other places. It can be as complex as its creator wants. It can create a file on disk with a Paris Hilton image, detect which desktop environment you're running and display the image with the default image viewer while, in the background, it continues to run and do all sorts of nasty things. It could write a script to your Autostart directory that will send spam or listen on a high port. This is potentially very dangerous because, I repeat, there's no need to give the file execution permissions at any point. Simply save it to your desktop and click on it. So, X11 desktop users, we should be careful. This is not as easy as running it directly from the mail client, but it's only a few more clicks away.

Other types of files

Thinking about my previous post, I wondered what happens when you try to do something similar with a Kommander script. In this case, the security implications only affect people with KDE installed and kmdr-executor associated to the kmdr file extension, which is the default under KDE, as far as I’ve seen. Kommander is also nice in the sense that it performs several security checks. The Kommander script is displayed in KMail with its own icon, and it’s not a gear in the Crystal icon set I use, which makes it harder to identify as a program. Furthermore, when you click on the attachment icon you are given the option of running it directly with kmdr-executor. After I clicked on this option, I got a warning dialog with the following text:

This dialog is running from your /tmp directory. This may mean that it was run from a KMail attachment or a webpage. Any script contained in this dialog will have write access to all of your home directory; running those dialogs may be dangerous: are you sure you want to continue?

And the options are Run Nevertheless and Cancel, the first one being the default option (I think that’s wrong). I can only conclude that Kommander scripts are also potentially dangerous. If you’ve seen novice users at a computer you may agree with me.

Other script files, like those with the pl extension for Perl or the py extension for Python, do not have these problems. They can't choose their own icon or name, like desktop files can, and they need execution permissions. At least on KDE, there's no way of running those programs by clicking on them. All the file associations they have are meant to open them with a text editor. If users want to run them, they need to save them to the hard drive, give them execution permissions and/or open a terminal window to run them from there.

Update: When I said that there's no way in KDE to launch a Perl or Python script by only clicking, I meant by default. Of course, you can (at your own risk) set up an association between those file extensions and the interpreters. Second, I forgot to mention that it's indeed possible to launch an attached Perl or Python script directly from the mail client in a default setup. In KMail, you have to right-click on the file, choose Open with…, and type perl or python in the program selection screen. However, to trick a user into doing this you'd have to include instructions for several mail programs, because the mechanism may differ. Kommander scripts and desktop files may trick a lot of users; instructions to save the file on disk and give it execution permissions would probably trick far fewer people, and this last idea would probably trick an intermediate number. I don't think Perl or Python scripts are as dangerous as Kommander files and, especially, desktop files.
