
2010-04-01

iptables rules for desktop computers

Filed under: Software — rg3 @ 13:30

Today I will show you the iptables rules I set on my main personal computer, with detailed comments about why I came to use these rules after several years of Linux desktop usage. The rules I use now are as simple as I could make them, and are based on common rules and advice that can be found on the Internet, as well as on input I got from experienced network administrators. I’ve been using them unmodified for a few years. They are designed for desktop users either directly connected to the Internet or behind a router. They are a bit restrictive in some aspects but, as we’ll see, you can easily open a few holes for specific purposes. So here they are:

# iptables -v -L
Chain INPUT (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
 663K  905M ACCEPT     all  --  any    any     anywhere             anywhere            state RELATED,ESTABLISHED
  105  6300 ACCEPT     all  --  lo     any     anywhere             anywhere
    0     0 ACCEPT     icmp --  any    any     anywhere             anywhere            icmp destination-unreachable
    0     0 ACCEPT     icmp --  any    any     anywhere             anywhere            icmp time-exceeded
    0     0 ACCEPT     icmp --  any    any     anywhere             anywhere            icmp source-quench
    0     0 ACCEPT     icmp --  any    any     anywhere             anywhere            icmp parameter-problem
    0     0 DROP       tcp  --  any    any     anywhere             anywhere            tcp flags:!FIN,SYN,RST,ACK/SYN state NEW

Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

We’ll start with the most obvious rules. The FORWARD chain has a policy of “DROP” and no specific rules. A desktop computer isn’t usually employed as a router or to share an Internet connection, so there’s no reason to allow forwarding.

The OUTPUT chain has a policy of “ACCEPT” and no rules. Basically, we are allowing everything going out of our computer. While this isn’t the most secure policy, it’s usually enough for a desktop computer. Many paranoid people would not let everything out. For example, to prevent their computers from being used to send spam due to a mistake somewhere else, some people block outgoing SMTP traffic (destination port 25), or in general traffic originating from source ports below 1024, where most common services live. We could do that, but I don’t think it’s really needed for a desktop computer. We’ll put more effort into blocking incoming traffic, and we can keep a relaxed policy on outgoing traffic.
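If you did want to be stricter with outgoing traffic, a minimal sketch of that approach could look like this (illustrative rules I don’t use myself; the OUTPUT policy stays ACCEPT and only these specific cases are dropped, and note that the second rule would also affect legitimate daemons that use privileged source ports, such as a local NTP daemon):

iptables -A OUTPUT -p tcp --dport 25 -m state --state NEW -j DROP
iptables -A OUTPUT -p tcp --sport 0:1023 -m state --state NEW -j DROP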

Finally, the guts of the rules. The INPUT chain has a policy of DROP. That is, everything not explicitly allowed will be forbidden. If a packet makes it past all the rules without matching any of them, it is silently discarded.

The rules in the INPUT chain are sorted by typical frequency of hits: “popular” and frequent traffic is accepted quickly, without having to walk through many rules first. That’s why the first rule allows RELATED and ESTABLISHED traffic for any protocol. The “any” part is important. This is the rule that, basically, lets us receive replies and normal traffic for connections we start ourselves. For example, when we open a web page with our web browser, we send a request out and, by the time the reply comes back, the connection is ESTABLISHED, so the reply is accepted. This first rule is the most important one because, thanks to it alone, we can use the computer “normally”.

The stateful packet firewall in Linux is quite clever and understands established connections even when the underlying protocol has no notion of connections. For example, that first rule lets us receive DNS replies to queries we made ourselves over UDP, or ICMP echo replies to our own echo requests. In other words, we can ping other computers thanks to that rule.

On to the second rule: it looks like it would accept any traffic from anywhere, but the key word here is lo:

  105  6300 ACCEPT     all  --  lo     any     anywhere             anywhere

This rule accepts all incoming traffic from interface “lo”, which is the loopback interface. It allows us to connect to services on our own machine by pointing to 127.0.0.1 (or ::1 in IPv6). It would allow connecting to the CUPS printing service, for example, if we had a printer connected to our computer. A variant of this rule that can frequently be found on the Internet adds a further check that the destination IP is 127.0.0.1, just to be more paranoid and block strange traffic. While this can increase security, I don’t think you generally need that extra check. To put it in perspective: browsing unsafe web pages with Javascript and/or Flash is more dangerous than not checking whether traffic coming through “lo” is really directed to 127.0.0.1, so it’s not a priority.
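For reference, that more paranoid variant would replace the plain loopback rule with something along these lines (shown only as an illustration; it is not part of my rules):

iptables -A INPUT -i lo -d 127.0.0.1 -j ACCEPT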

Then, you can see I allow some specific types of ICMP packets that usually signal network problems. None of them requires a reply to be sent, so we accept them and let the system interpret what they mean if they ever come in. I don’t think they expose us to anything worse than a DoS attack, but comments are welcome. And, of course, you can be DoS’ed just by someone saturating your link with incoming traffic. Again, this is a matter of getting your priorities sorted. If you feel paranoid, well, drop those rules.

Finally, at the end of the chain we have the famous rule to block incoming TCP traffic in state “NEW” that does not have the SYN flag set. This rule is quite specific and an explanation for it can be found in many iptables manuals, FAQs and tutorials. I put it at the end because none of the earlier rules are affected by it: not the first one, not the second one (we are allowing ALL traffic coming from “lo”, after all), and not the ICMP rules.

However, we keep it there even though the traffic would be dropped anyway by the chain policy, because when we want to open a hole in these rules, we do it by appending more rules at the end of the INPUT chain. For example, sometimes I want to allow incoming traffic to a specific port where I have configured a server that is supposed to be reached from other machines, to serve specific content at a specific point in time. For that, I have created a couple of scripts called “service-open” and “service-close”, which take a list of service names or port numbers. For example, when I start a web server so someone in my home network can get a file from my computer, I usually run “service-open 8080” (the server would be listening on that port). Once the file is served, I run “service-close 8080” and shut the server down. Those commands add and remove rules at the end of the INPUT chain, and that’s why I put the last DROP rule where it is: it’s already in place before any holes I punch through my firewall in those special cases. If you frequently run a P2P application on your computer, you may want to open a hole to some port permanently and save it as part of your usual rules. I don’t, so I keep everything closed.

The contents of my scripts are:

# cat /usr/local/sbin/service-open 
#!/bin/sh
# Open a hole: append ACCEPT rules (TCP and UDP) at the end of the INPUT
# chain for each service name or port number given as an argument.
if test $# -eq 0; then
        echo "usage: $( basename "$0" ) service ..." 1>&2
        exit 1
fi
while test $# -ne 0; do
        /usr/sbin/iptables -A INPUT -p tcp --dport "$1" -j ACCEPT
        /usr/sbin/iptables -A INPUT -p udp --dport "$1" -j ACCEPT
        shift
done
# cat /usr/local/sbin/service-close
#!/bin/sh
# Close the hole: delete the rules previously added by service-open for
# each given service name or port number.
if test $# -eq 0; then
        echo "usage: $( basename "$0" ) service ..." 1>&2
        exit 1
fi
while test $# -ne 0; do
        /usr/sbin/iptables -D INPUT -p tcp --dport "$1" -j ACCEPT
        /usr/sbin/iptables -D INPUT -p udp --dport "$1" -j ACCEPT
        shift
done

Those scripts play nicely with my set of rules because they are designed with my rules in mind. Also, you can see they are dead simple.

With the set of rules I have described, you can use your computer normally, you can easily let more traffic through in specific cases and, more importantly, you’ll be “invisible” on the network: nobody will know whether your computer is really there unless you send them traffic or they find out by other means. It’s also a very small set of rules, easy to remember and understand, and easy to modify with simple scripts.

Edit: The commands needed to create those rules:

iptables -P FORWARD DROP
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT 
iptables -A INPUT -i lo -j ACCEPT 
iptables -A INPUT -p icmp -m icmp --icmp-type 3 -j ACCEPT 
iptables -A INPUT -p icmp -m icmp --icmp-type 11 -j ACCEPT 
iptables -A INPUT -p icmp -m icmp --icmp-type 4 -j ACCEPT 
iptables -A INPUT -p icmp -m icmp --icmp-type 12 -j ACCEPT 
iptables -A INPUT -p tcp -m tcp ! --tcp-flags FIN,SYN,RST,ACK SYN -m state --state NEW -j DROP 
iptables -P INPUT DROP
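One practical note: the commands above only change the running configuration, so the rules are lost on reboot. How to make them persistent depends on your distribution; a generic approach is to dump them once with iptables-save and load them again from a boot script (the file name below is arbitrary):

iptables-save > /etc/iptables.rules
iptables-restore < /etc/iptables.rules    # run this one at boot time, e.g. from rc.local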

2010-03-20

Poor Dillinger

Filed under: Communication — rg3 @ 21:32

I’ve been enjoying the two Tron Legacy Official Trailers that have been released so far. My first contact with the original Tron movie was not long ago, when an uncle of mine gave me the Collector’s Edition DVD a few years back as my birthday present. Tron was released before I was born and it’s a very uncommon movie, at least in my country, so I didn’t have many opportunities to watch it until recently, with the 20th anniversary of the film release and, even more recently, the arrival of its sequel.

My first impression of the movie was so-so. It’s fun and, for a computer engineer, the references to mainframes, IO ports, programs and games are enjoyable. After watching the movie I went directly to disc 2 and watched the documentary on how it was made. It was then that I started enjoying the film much more. The documentary helps you appreciate TRON as the piece of art it really is, and all the attention paid to the different details in the movie.

By coincidence, I watched the film last Christmas with a friend of mine and we both share a fun interpretation of its script. TRON is a movie you can enjoy because, as in many other good fantasy and science-fiction films, the bad guys win in the end. Now, before you jump at me and wonder what I’ve been smoking to say that, just think about it. Do you really think Dillinger and the Master Control Program are the bad guys in the movie? The bad guy is Flynn! And that little program, TRON! It’s a tragic and realistic story.

A company, ENCOM, and two programmers: Ed Dillinger and Kevin Flynn. Ed is the good guy in the company, working hard in his cubicle until late hours to improve technology, motivated by the need to create something bigger, better and never seen before. He’s a shy guy with brilliant ideas, and he creates a program called the Master Control Program, originally based on a chess program, with several features that will be a breakthrough in computing history. First, the Master Control Program allows for real multitasking. Programmers don’t interfere with each other and no longer have direct access to the computer hardware. The modern operating system is born, complete with a built-in firewall to monitor and control connections to and from external systems. Second, this program is powered by an incredibly advanced AI system capable of developing primitive feelings, and it also features natural language parsing via audio input and replies in the same language, with a voice synthesizer.

The Master Control Program is amazing and could push ENCOM from being a medium-sized company into a big corporation in every field of technology. However, management is too short-sighted to pay attention to it and to the shy guy who created it, and is instead amused by the extroverted programmer Kevin Flynn. Much younger than Ed Dillinger, as we can see in the film, Flynn enjoys creating video games and breaking into different systems. With such a personality, the company board keeps waiting for their golden boy to do something spectacular that will never really arrive, because Flynn uses the company resources to create games he will keep for himself. He won’t let the company see the really good games, and he will jump ship as soon as he finds a good deal with a big game publisher.

Dillinger, untalented at creating popular games, feels envy grow at the core of his heart and one day decides to steal the good games from Flynn and present them to the company board as his own work. He shouldn’t have done that, but poor Dillinger thought it was the only way to get the board’s attention. From then on, they finally pay attention to him, he can push the Master Control Program forward as a way to manage the company’s computing resources, and he is promoted to the position he really deserves. They even start investigating teleportation. Of course, TRON (the program) is rejected by Dillinger and the MCP. After all, TRON is redundant and its tasks are already being performed by the MCP. No good engineer would tolerate such an evident duplication in functionality. Alan Bradley suffers from the NIH syndrome.

All that technology never reaches the market because, in the film, we see the bad guys preying on this good guy for his only mistake until he is defeated, his programs are deleted forever from the hard drive and he is, probably, fired from the company.

I’ll be watching Tron Legacy to follow the adventures of Flynn and the result of his evil and ego-driven plot to control the world with his videogames, unable to realize he lacks the talent Dillinger had. If you watch the trailers released so far you’ll see Flynn is really evil, as he has always been.

2010-01-20

Managing Linux kernel sources using Git

Filed under: Software — rg3 @ 21:53

This will be a short and easy tutorial on how to use Git to manage your kernel sources.

Before Git, the easiest way to manage your kernel sources was to download them using the tarballs provided at kernel.org and to update them by downloading the patches provided between releases. This was very important to keep the download size small, instead of downloading a complete tarball each time. Also, by applying patches, you only needed to rebuild the parts that changed between releases instead of the full kernel once more. This is a good method that can still be applied today and will probably never disappear. Simple HTTP and FTP downloads are very convenient in many situations.

However, with the arrival of kernel 2.6, its stable branches (e.g. the 2.6.32.y branch) and Git, there have been some changes. First of all, the process is now a bit more complicated. Stable patches are applied against the base release. If you have the kernel sources for version 2.6.32.1 and want to jump to version 2.6.32.2, you first have to revert the changes of release 2.6.32.1 (patch --reverse) and then apply the 2.6.32.2 patch. This is slightly less convenient and, furthermore, you’ll touch every file that changed in every patch up to that point, which affects the compilation process that follows. In other words, if patch 2.6.32.1 meant (hypothetically speaking) a long build because it changed things that affected a lot of subsystems, so will the build for any other subsequent release in the 2.6.32.y branch. It was this small glitch that prompted me to manage my kernel sources the way I’m going to describe. Also, using Git is fun. :)
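To make that concrete, the patch-based jump from 2.6.32.1 to 2.6.32.2 would look more or less like this (paths and file names are only illustrative):

cd /usr/src/linux-2.6.32
patch -p1 --reverse < ../patch-2.6.32.1   # go back to the plain 2.6.32 base
patch -p1 < ../patch-2.6.32.2             # then apply the new stable patch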

We will try to achieve the following:

-------------------------------------------------> Linus Torvalds' master branch
           \                   \
            \                   \
             A stable release    Another stable release

We will have a master branch that will follow Torvalds’ master branch and will be updated from time to time, or when he releases a new stable version of the Linux kernel (e.g. 2.6.32).

We will have other local branches that follow the stable releases by Greg K-H (e.g. 2.6.30.y, 2.6.31.y, 2.6.32.y, etc).

Git is very flexible and simple, and allows more than one way to do things. I will try to explain why I do things this way and why they make sense to me, and will try to avoid shortcuts, i.e. I will use one command for each action even if two actions could be compressed into a single command.

First, we will create a directory to hold the kernel sources. Let’s name it /path/to/kernel. In it we’ll have a directory named “src” that will hold the unmodified kernel sources and a second directory named “build” that we’ll use to build the kernel and keep the sources intact, for clarity. We start by cloning Torvalds’ repository:

cd /path/to/kernel
mkdir build
git clone 'git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git' src

This will create a directory named “src” with the sources. Take into account you’ll be downloading the full repository with a lot of revision history. It’s a relatively long download that requires a lot of patience or a good broadband connection. Whatever you have at hand. At the moment I’m writing this, it’s several hundred MBs but less than 1 GB, if I recall correctly.

If you issue a “git branch” command you’ll see you only have a local branch named “master”. This local branch follows Torvalds’ master branch. You can update your kernel sources, while on this branch, by issuing a simple “git pull” command.

Now, we will add a second local branch to follow the stable 2.6.32.y kernel. In other words, our master branch follows Torvalds’ master branch, and our “branch_2.6.32.y” (let’s call it that) will have to follow the master branch of the stable 2.6.32.y repository.

First, we create a shortcut to the 2.6.32.y repository for convenience:

git remote add remote_2.6.32.y \
    'git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-2.6.32.y.git'

The name “remote_2.6.32.y” is arbitrary. At this point, the name is little more than an alias for that long URL. The next step is what gives it real meaning, so that the following git commands understand what you mean when you use it: it downloads data from that repository and stores it locally under that name.

git fetch remote_2.6.32.y

After you run that, which will take considerably less time than the full repository clone we did previously, remote_2.6.32.y will have a meaning on your hard drive. You can then use the following command:

git branch --track branch_2.6.32.y remote_2.6.32.y/master

This will create a new branch in your local repository that tracks the master branch of the 2.6.32.y repository. If you issue a “git branch” command you’ll now see you have two branches. Being a “tracking branch” means several things. You can switch between the master branch and the new branch using “git checkout <branch name>” and, in each branch, you can run a simple “git pull” to retrieve the changes to that branch from its remote repository. From this point on you’re on your own using Git to manage the sources and perform more operations if you need them, but the many tutorials available on the web will cover the basics of Git, which is all you need if your only purpose is to ease downloading and building the kernel.
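For example, a typical update session with these two branches might look like this:

git checkout branch_2.6.32.y   # switch to the stable branch
git pull                       # bring in the latest 2.6.32.y changes
git checkout master            # back to Torvalds' branch
git pull                       # and update it too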

Note that, between release 2.6.32.1 and 2.6.32.2, for example, you will only download the changes between those releases and a painful build for 2.6.32.1 does not have to mean a painful build for 2.6.32.2 if you update your sources this way.

Finally, we had created a “build” directory earlier, next to the “src” directory, in order to keep the sources directory clean. Using it is easy: when we are in the “src” directory, any “make” command we would normally run has to be replaced by “make O=../build”. To avoid mistakes, I have created a global alias in my system called “kmake”, aliased precisely to “make O=../build”. It is defined both for the regular user account I use to compile the kernel sources and for the root account I use in the installation step, to perform the “modules_install”, “firmware_install” and “install” operations.
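The alias itself is a one-liner; as a minimal sketch, assuming a Bourne-style shell and a startup file read by both accounts (the exact file is up to you):

alias kmake='make O=../build'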

As a regular user account:

  • kmake menuconfig
  • kmake
  • kmake oldconfig
  • etc

As the root account:

  • kmake modules_install
  • kmake firmware_install
  • kmake install

This alias could be tuned further to install the kernel image, modules, firmware, etc. into a sandbox directory if you intend to create packages from them, for example. The README file in the kernel source directory has more information about this topic.
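As a rough illustration of the sandbox idea (check the README of your kernel version for the exact variable names, since the ones below are assumptions on my part, and the target directory is just an example), something like this installs everything under /tmp/kernel-pkg:

kmake INSTALL_MOD_PATH=/tmp/kernel-pkg modules_install
kmake INSTALL_PATH=/tmp/kernel-pkg/boot install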

2009-12-15

When your hobby becomes a job: reflections on the em28xx driver situation

Filed under: Programming,Software — rg3 @ 21:59

More than one year ago I bought a TV USB stick to be able to watch analog and digital TV on my computer running Linux. It was not an easy task. As you may know, it’s usually not hard to find hardware that is supported by Linux. Sometimes, however, while there are multiple supported devices that would serve your purposes, the trouble is locating a store or site that actually has one of those models available to buy. This was my case. I printed a list of supported digital and/or analog TV tuner USB devices, went to most computer stores and malls in my area trying to locate at least one of them and compare prices, and came back home empty-handed.

I had to change strategy: get the list of devices I could actually buy and then search for them on the Internet, trying to find out if any of them were supported by an out-of-tree driver or something similar. After a couple of returns, thanks to some manufacturers changing devices internally while keeping the product name unchanged, I finally arrived home with a working hybrid TV USB stick, the Pinnacle PCTV Hybrid Pro Stick, sold in some countries as model 330e. It cost just over 100 euros.

My main target being digital TV, I quickly got it working with an out-of-tree driver by Markus Rechberger. This out-of-tree driver was part of a project that tried to make user-space tuners for TV cards possible. While I’m in no position to judge whether that’s a good or bad idea, it was different enough not to make it into the main kernel tree. The author then appeared to change his approach and created a different out-of-tree driver called “em28xx-new”, based on the in-kernel “em28xx” driver he had already contributed. This driver used a more traditional approach, and worked like a charm too. Unfortunately, it never made it into the vanilla kernel either, for whatever reasons.

I contacted Markus Rechberger a couple of times, if I recall correctly. I thanked him for his efforts and time put into creating the driver and asked a couple of questions once, and also sent him a patch for the build scripts some time later. I don’t recall if the patch was applied or not. He was always very nice and polite.

However, one day I had just compiled a new kernel and was about to build the driver for it. Before doing that, I always downloaded the latest copy of the driver source code from its Mercurial repository. This time, Mercurial exited with a confusing error message saying the remote tree was not the same repository I had on my hard drive. I supposed the author had created a new repository for the driver, so I cloned it to a new directory. It turned out the repository contained only a README file. I opened it and… uh oh. A note saying the old driver had been pulled from the Internet, and a URL leading to the web site of a TV card manufacturer offering products that were supposedly supported by Linux. The equivalent USB stick cost about 100 euros, like the one I had. But, of course, it was too late for me to return the one I had bought. I had been using the device for months.

I searched the Internet again trying to find the reason the driver had been pulled from the web site, and all I found was the site of an Arch Linux user who uploaded the latest version he got from the repository and even offers some patches to make the code work with more recent kernels. However, as of the time I’m writing this, the latest patch is for kernel 2.6.30 and the driver does not compile for the recently released kernel 2.6.32. So the status of this device is that it works, but only if you have a specific kernel version. At the top of that page, you can see a huge banner that reads like this:

DISCLAIMER: Don’t bother me or the original author, Markus Rechberger, with any questions about problems with this driver, because Markus Rechberger deleted it because of these questions and because I just host these files.

I thought the driver might have been pulled from the Internet for some kind of legal reason, but the disclaimer suggests a different one. I’m not sure that reason is entirely credible, but there’s no point in assuming those words aren’t true. Markus Rechberger, for all we know, got burned out maintaining the driver and decided not to maintain it any longer.

A story published months ago at lwn.net explains this case in more detail and with further information. The situation for people owning this device and wanting to use it with a recent kernel is that you are supposed to use the in-kernel em28xx driver. However, as the linuxtv.org page for the device says, the difficulty in supporting digital TV for it comes from the Micronas DRX3975D DVB-T chipset it features. This chipset already has an in-kernel driver, which can be found at Device Drivers > Multimedia support > DVB/ATSC adapters > Customize the frontend modules to build > Customize DVB Frontends > Micronas DRX3975D/DRX3977D based. The location may change in the future (it is there in 2.6.32 as I’m writing this).

Unfortunately, the driver cannot be used for now. As its help text mentions, it needs external firmware which currently cannot be obtained. Marked as “TODO” in the help text, you are told to run “<kerneldir>/Documentation/dvb/get_dvb_firmware drx397xD”. But, if you try, you’ll get an error saying that drx397xD is not a known component.

This is an appropriate moment to thank and encourage the developers working on this, as it is the last missing piece. Devin Heitmueller has done a good job trying to keep people up to date with information on the progress and the difficulties encountered. The last comment on that blog post is from December 6 and says:

Unfortunately, at this point the answer is “not right now”. I’m waiting for the DVB generator to arrive, at which point I should be able to complete the work.

Again, thanks for working on this, keep up the good work and we’re eager to make our 330e USB cards work again with recent kernels, Devin!

While reflecting on the driver situation and putting together the different pieces of this soap opera, it all reminded me of the situation we professional programmers face from time to time while maintaining open source software. Many of us really love programming and we have tried to make it our job, successfully. There’s a difference, however, when you change from student to professional programmer.

When you are a student, you have a lot of time on your hands. Going to college is a wonderful experience: learning new things every day, buying books, reading about different languages and technologies, and the amount of spare time to learn and have fun programming is incredible. Later, however, you become a professional and start working for a company in a full-time job. You leave home before dawn every day and, at least in winter and in my case, you arrive home after sunset. It’s incredibly depressing if you think about it. You spend the day coding, fixing issues in programs, debugging, testing, etc. This kind of life doesn’t make it impossible to enjoy programming again, but if you arrive home and find that you have a popular open source program on your hands, with users reporting bugs and requesting new features, you may feel as if you were still at the office.

My advice here is obvious. Don’t stop coding in your spare time, but do it for fun. If you don’t feel like adding a new feature someone requested, don’t add it. It’s very important to say “no” often so your program will still be your program, the product you wanted. If a user or a group of users are still in the fortunate situation in which they are students and have a lot of spare time, they can always fork your code. That is the beauty of free and open source software.

I couldn’t care less if some people would like to use all this text above to attack FOSS and say bad things about it: non-working drivers, unresponsive maintainers, lack of documentation, user unfriendliness. The mental health of the people writing the code is more important. Don’t burn out. Produce something and let others make better things out of it if you don’t have the time. Start new projects all the time. Hand maintenance of old projects over to new people. Have fun. Enjoy. Code. Help others. Submit patches with bug reports if possible. Appreciate the effort of others and thank them for the work they provide you. Try to be kind and explain to your users the reasons behind your “noes”.

2009-11-01

New tiny project: lddsafe

Filed under: Software — rg3 @ 10:27

Some days ago we could all read that “ldd”, a tool which prints shared library dependencies, should not be run on untrusted binaries. I read it first on Hacker News and later it hit Slashdot’s front page. In some operating systems this is stated clearly in the program’s man page, while in others it’s not mentioned at all. I belonged to the camp that didn’t know about it, and I was a bit surprised. I had assumed ldd did its job by examining the binary file, not by running it with some special environment variables set.

Anyway, a Hacker News user pointed out something interesting: you can easily get the needed shared library dependencies of a program or library using “objdump” (see the small example after the list below). So I spent a few hours writing and tweaking a small script called lddsafe that prints almost the same information as “ldd” using “objdump”, avoiding the security problems because it doesn’t have to run the program. Two major caveats at this point in time:

  • It requires bash and, more specifically, bash version 4 or later. I needed to use associative arrays to make the program reasonably fast and they are only available in bash 4.
  • It’s only been tested under Slackware Linux. However, bug reports and patches are welcome if it doesn’t run properly in other distributions.
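
As mentioned above, the underlying trick is simply reading the dynamic section of the ELF file with objdump (part of binutils). lddsafe does more work on top of that, resolving each library to its full path and following indirect dependencies, but the starting point is as simple as this:

objdump -p /usr/bin/xcalc | grep NEEDED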

Future improvements may include rewriting it in Perl so as not to require bash 4, knowing that Perl is present in most Unix systems.

A picture is worth a thousand words:

$ lddsafe /usr/bin/xcalc 
        libXaw.so.7 => /usr/lib/libXaw.so.7
        libXmu.so.6 => /usr/lib/libXmu.so.6
        libXt.so.6 => /usr/lib/libXt.so.6
        libSM.so.6 => /usr/lib/libSM.so.6
        libICE.so.6 => /usr/lib/libICE.so.6
        libc.so.6 => /lib/libc.so.6
        ld-linux.so.2 => /lib/ld-linux.so.2
        libuuid.so.1 => /lib/libuuid.so.1
        libX11.so.6 => /usr/lib/libX11.so.6
        libxcb.so.1 => /usr/lib/libxcb.so.1
        libXau.so.6 => /usr/lib/libXau.so.6
        libXdmcp.so.6 => /usr/lib/libXdmcp.so.6
        libdl.so.2 => /lib/libdl.so.2
        libXext.so.6 => /usr/lib/libXext.so.6
        libXpm.so.4 => /usr/lib/libXpm.so.4
        libm.so.6 => /lib/libm.so.6
