Posts Tagged ‘debian’

danger at the docks

Friday, November 14th, 2014

docker.io
Docker.io is taking the world by storm, but a day at the docks is not without its perils. Here I hope to inspire you to try out docker by showing you how to avoid its pitfalls.

In the days of yore

As the FreeBSD jailers and Solaris zoners will attest, containerizing your services is a great boon: it saves space and resources, provides easy management akin to chroots, and offers potential security benefits, all without the overhead of full-blown virtual machines.

FreeBSD Jail
Linux has had containers for the longest time, in the ancient form of User Mode Linux, which actually ran a kernel in userland, and more recently OpenVZ, which was more like jails.

The former didn’t lend itself to production deployments and the latter never made it into the linux mainline, coming at a time when people were more interested in virtualization than containment. In recent years, a kernel facility named cgroups has made Linux Containers (LXC) possible, which has afforded the management, if not the security, of BSD jails.

what can be gained

The biggest potential benefit from containers is that CPU, memory and disk resources are 100% shared at native speeds, so no libraries and no data need ever be duplicated on disk nor in memory.

In FreeBSD jails, this was achieved by providing most of the system read-only like /usr, /lib and /bin, and sharing it amongst jails. This worked quite well, but was surprisingly tricky to update.

LXC
You can do similar stuff with LXC, just as long as you understand that if it breaks, you get to keep all the pieces. This gives you full control, and means that I for one have LXC instances in production with uptimes of 1200 days and counting.

minimalizing

Taking the approach of single-container-single-responsibility further, you could, instead of deploying whole-system containers, create image filesystems that contain only the bare necessities. For instance, your python application would have, apart from its code, just the python runtime, libc and other dependent libraries, and not much else.

Inspired by the “leaner is better” philosophy backed by the experience of running LXC in production, we built this minimal deployment framework complete with a tool to magically find all the required libraries.
leaner is better
Awesomely small images come from this approach, where the “contact surface” of the application has shrunk to nothing but the app itself. It was far from perfect though: it also made the images awesomely less debuggable and manageable, and it never made it into production proper.
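The dependency hunt that tool automated can be sketched with ldd alone (a rough approximation; the real tool resolved the closure recursively):

```shell
# print the unique shared libraries a binary pulls in at load time --
# roughly the set you would copy into a minimal image alongside the app
ldd /bin/ls | awk '/=>/ { print $3 }' | sort -u
```

Run that over every binary in the image and union the results to get the full library set.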

layer upon layer is two steps further

In comes Docker, and its concept of filesystem image layers based on AUFS. The approach isn’t novel in itself, having been used by live-CD distributions for the longest time, but Docker is the first to provide tools that manage the layers effortlessly for containers. So you can now have 100 servers with 100 application layers, and all your Ruby applications share one runtime layer and your Python applications share another, and they all run on the same base image of Ubuntu, and they do all that transparently, without you having to consciously think about which bit goes where.
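A hypothetical Dockerfile makes the layering concrete (the image tag and app.py are made-up for illustration): each instruction produces one layer, and images built from the same instruction prefix share those layers on disk.

```dockerfile
# base layer: shared by every image built FROM it
FROM ubuntu:14.04
# runtime layer: shared by all images that repeat this step on the same base
RUN apt-get update && apt-get install -y python
# app layer: unique to this application
COPY app.py /srv/app.py
CMD ["python", "/srv/app.py"]
```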

layers
Docker takes another step further, borrowing heavily from distributed social source control à la GitHub, allowing you to clone, build, push, pull, commit, share and remix images as easily as that.

This is the type of thing that blows disk-image-based virtualization straight out of the water.

Perils and rough starts

The Docker docs are well written and will get you spawning containers and dockerizing applications in no time at all. What they will not tell you is how to run containers in production for all values of production.

In particular, the following things require special attention:

  • changing ips
  • service discovery
  • dns issues
  • cache clash

.. and that is precisely what we will talk about next time.
see you later

systemd crash course, with remote logging

Saturday, September 20th, 2014

live a better life

The world is taking systemd by storm and there is no looking back now.

Still, there are some elements that you would expect to be there that are missing. One of them is remote logging!

Another thing missing is a decent crash course [*]. This is frustrating because making init scripts and checking logs is the staple diet of any old sysadmin.

Read on to readjust gently but quickly.
she wants it

Systemd crash course

Find your “unit” – that’s the new name for “init script name” to us old-timers:

systemctl list-units --type=service
# this one is way more verbose
systemctl list-units

Start, stop, restart, reload, status:

systemctl start sshd
systemctl stop sshd
systemctl restart sshd
systemctl reload sshd
# status, gives some log output too
systemctl status sshd

Check ALL the logs, follow the logs, get a log for a service:

journalctl -l
journalctl -f
journalctl -u sshd

Install a systemd service:
(This is what a systemd service description looks like)

    cat > ossec.service << EOF
[Unit]
Description=OSSEC Host-based Intrusion Detection System

[Service]
Type=forking
ExecStart=/var/ossec/bin/ossec-control start
ExecStop=/var/ossec/bin/ossec-control stop

[Install]
WantedBy=basic.target
EOF

# now copy that file into the magic place, /etc/init.d in the old days
install -Dm0644 ossec.service /usr/lib/systemd/system/ossec.service

# now make systemd pick up the changes
systemctl daemon-reload

Enable or disable a service:

systemctl enable ossec
systemctl disable ossec

systemd components

Remote logging

OK so you now know your way around this beast.
Now you want remote logging.

According to the Arch wiki [#], systemd doesn’t actually do remote logging (yet – what else doesn’t it do?) but it will helpfully spew its logs onto the socket /run/systemd/journal/syslog if you knock twice, gently.

To convince systemd to write to this socket, go to /etc/systemd/journald.conf and set

ForwardToSyslog=yes

then issue a journald restart

systemctl restart systemd-journald

You can install syslog-ng and it should pick up the logs. Test it now by making a log entry with

logger -t WARN zool

and check /var/log/syslog.log

If you have a distro running systemd, then hopefully syslog-ng will be recent enough to be aware of systemd, and things should just work at this point.

If it doesn’t, the source src { system(); }; in syslog-ng.conf isn’t picking up the socket file. Fix this by adding the socket explicitly in /etc/syslog-ng/syslog-ng.conf like so:

source src {
  unix-dgram("/run/systemd/journal/syslog");
  internal();
};

if you are working with a laptop or desktop then the console_all on tty12 is handy too:

log { source(src); destination(console_all); };

the systemd monster

[*] IMHO Fedora’s cheatsheet on systemd is a little too cluttered
[#] Arch has a decent intro to systemd

No sockpuppets were harmed in the making of this blog entry. Any and all images are © whomever made them, and I love you for not suing me.

Grsecurity on the desktop

Friday, August 8th, 2014

In my last post I presented Grsecurity kernel packages for Debian Wheezy. Now would be a good time to review the hows and whys of Grsecurity, so you can decide if it is something you need.

Today we will quickly look at Grsecurity’s viability and impact on a typical desktop or laptop.

Why do I need Grsecurity on the desktop?

  • Often run insecure code? Limit its impact on the system.
  • Employ chroots and containers? Enforce stricter containment.
  • Connect to hostile networks? Reduce and mitigate impact of exploitation attacks.
  • Allow others to use your system? Increase monitoring and control over your machine.

Or perhaps you choose to run Grsecurity on your laptop simply for the sheer paranoia-factor and to impress friends.

How does Grsecurity behave on the desktop?

In addition to the invisible yet significant hardening efforts against kernel exploitation, there are some changes that an experienced user will notice immediately:

  • you need root to dmesg
  • grsec reports denied resource oversteps in dmesg
  • top only shows this user’s running processes
  • mappings and other sensitive information in /proc are only available to the process owner

The very few programs that depend on a specific kernel version, or that read /proc/kcore or write directly to kernel constructs, will not work under Grsecurity.

And of course, there is the feeling of solidity and the sight of the reported kernel version:

~# uname -a
Linux amaeth 3.2.60-grsec-lied #1 SMP Wed Aug 6 17:40:27 CEST 2014 x86_64 GNU/Linux

kernel security for debian

Wednesday, August 6th, 2014

TL;DR – links to Grsecurity-enabled up-to-date debian wheezy-kernel packages at the bottom of this post.

Kernel security is becoming more important nowadays, especially since the Linux kernel has grown so big and the platform is rife with misbehaving programs.

Some enthusiasts have illusions that the Linux kernel is somehow magically very secure, but those in the know will tell you there are quicker ways to gain root than sudo.
you did what?
Grsecurity is by far the best patchset for increasing the security of a system, whether it be a virtual LAMP box, a login shell server or a desktop machine. I have been tracking it and using it for years and consider it superior to the SELinux and AppArmor approaches.

It has only one major drawback: most distributions of Linux do not provide tested and up-to-date grsec-patched kernel packages, making Grsec-level security features nearly unobtainium for the common mortal.

I have been rolling my own kernel patches since the millennium, and so I put in the work to put Grsecurity back into Debian.

So far I have built and tested kernels for Debian 7.5 and 7.6 Stable codenamed Wheezy. This is the standard, debian-patched kernel with added Grsecurity.

I have built separate packages which are suitable for servers, virtualized servers and desktops, and these have been tested on run-of-the-mill LAMP boxen as well as custom and well-trafficked shell servers, and of course my trusty desktops and laptops.

Download and Install

You can download and install the grsec desktop, server and virtual server debian packages today!

Note, to avoid running out of space in /boot, change MODULES=most to MODULES=dep in /etc/initramfs-tools/initramfs.conf
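That one-line change can be scripted; the sketch below runs against a scratch copy so it is safe to try (on a real system you would point sed at /etc/initramfs-tools/initramfs.conf and then run update-initramfs -u):

```shell
# work on a temporary copy of the config, not the real file
conf=$(mktemp)
printf 'MODULES=most\n' > "$conf"

# flip MODULES=most to MODULES=dep in place
sed -i 's/^MODULES=most$/MODULES=dep/' "$conf"

cat "$conf"    # MODULES=dep
rm -f "$conf"
```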

Install the lied apt repository

sudo -i
wget http://www.delta9.pl/lied/SIGNING-KEY.GPG -O- | apt-key add -
wget http://www.delta9.pl/lied/lied.list -O /etc/apt/sources.list.d/lied.list
apt-get update

Grsecurity Desktop/Laptop HOWTO

apt-get install linux-image-3.2.60-grsec-lied

Grsecurity Server HOWTO

wget http://www.delta9.pl/lied/linux-image-3.2.54-grsec-nose-server_3.2.54-grsec-nose-server-1_amd64.deb
dpkg -i linux-image-3.2.54-grsec-nose-server_3.2.54-grsec-nose-server-1_amd64.deb

Grsecurity Virtual Server HOWTO

wget http://www.delta9.pl/lied/linux-image-3.2.54-grsec-nose-virtual_3.2.54-grsec-nose-virtual-1_amd64.deb
dpkg -i linux-image-3.2.54-grsec-nose-virtual_3.2.54-grsec-nose-virtual-1_amd64.deb

Furthermore, I commit to merging the patchsets and making Grsecurity packages available for Debian 8/Jessie, all in a debian repo so that people can easily add it to their setup.
I also commit to keeping these packages up to date on all the platforms I care about.

Quick Howto build your own
To build your own Grsec-enabled debian kernel packages, execute the following commands:

wget http://www.delta9.pl/lied/grsec-201408032014-debian-3.2.60-1.patch.gpg
gpg grsec-201408032014-debian-3.2.60-1.patch.gpg
apt-get source linux
cd linux-3.2.60
patch -p1 < ../grsec-201408032014-debian-3.2.60-1.patch
wget http://www.delta9.pl/lied/grsec-config-server -O .config
make deb-pkg LOCALVERSION=-myversion

You can replace “grsec-config-server” with “grsec-config-desktop” or “-virtual” if you need one of the other configurations.

3g wwan pain revisited with gobi

Monday, June 30th, 2014

Hi all,
after a long hiatus I’ve finally found something annoying enough to share with you: namely, my 3g modem.
I have spoken at length about 3g on linux before.

I have a Thinkpad X201 laptop and it has a Qualcomm Gobi 2000 3g modem. This modem does some fancy mode switching, but not in the regular way of receiving some control bytes. Therefore, usb-modeswitch can’t help you.

Instead, the modem needs firmware loaded to switch from usb id 05c6:9204 to usb id 05c6:9205.
On linux, the firmware loading is achieved with gobi-loader.

All this is nicely documented at thinkwiki, unfortunately it doesn’t make it one bit easier for the regular joe.

The trouble is, the firmware is not redistributable, so the whole thing is quite tricky!

  1. download 7xwc48ww.exe from the Thinkpad support site,
  2. unpack the drivers with wine or cabextract. I used wine:
    cp 7xwc48ww.exe ~/.wine/drive_c
    wine 7xwc48ww.exe

    Make sure you run the driver installation after extraction, otherwise execute setup again: wine ~/.wine/drive_c/DRIVERS/WWANQL/setup.exe

  3. copy the firmware:
    cd ~/.wine/drive_c/Program Files/QUALCOMM/Images/Lenovo
    sudo mkdir /lib/firmware/gobi
    sudo cp 6/UQCN.mbn UMTS/* /lib/firmware/gobi/
    

    This was the tricky part, unpacking and selecting the correct firmware out of the 12 different sets of files provided in that directory.

  4. reload the driver: modprobe -r qcserial; modprobe qcserial
  5. dmesg should now show you have three USB serial devices /dev/ttyUSB0 (control interface), /dev/ttyUSB1 (the actual modem), and /dev/ttyUSB2 (the GPS, which you need windows to enable once).
    usb 2-1.4: Product: Qualcomm Gobi 2000
    usb 2-1.4: Manufacturer: Qualcomm Incorporated
    qcserial 2-1.4:1.1: Qualcomm USB modem converter detected
    usb 2-1.4: Qualcomm USB modem converter now attached to ttyUSB0
    qcserial 2-1.4:1.2: Qualcomm USB modem converter detected
    usb 2-1.4: Qualcomm USB modem converter now attached to ttyUSB1
    qcserial 2-1.4:1.3: Qualcomm USB modem converter detected
    usb 2-1.4: Qualcomm USB modem converter now attached to ttyUSB2
    
  6. If you have gotten this far, your 3g modem is basically working and you can set up wvdial as in my previous post, pointing at the /dev/ttyUSB1 modem.

    Note however you still need to enable the modem with echo enable > /proc/acpi/ibm/wan
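For completeness, a minimal wvdial.conf sketch for this modem (the APN, dial string and baud rate here are placeholders; check your carrier’s values):

```
[Dialer Defaults]
Modem = /dev/ttyUSB1
Baud = 460800
Init2 = ATZ
Init3 = AT+CGDCONT=1,"IP","your.apn.here"
Phone = *99#
Username = ""
Password = ""
Stupid Mode = 1
```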

live online root migration

Monday, August 19th, 2013

Hey all, yep before you ask, yes OHM was fantastic.
Others have eloquently described that experience, so I’m here to tell you a different story.
fuck cairo

I got an SSD this summer. They finally reached my cost-benefit pricepoint, and I’m not going to waste breath talking about how phun SSDs are. However, I will tell you about the little things I did that others would probably not do, most notably how I migrated a live running debian linux system from one disk to another.

I already RAID-1 my home partition, which has completely different data storage requirements, and besides it wouldn’t fit on 10 SSDs.

The SSD was to be my root drive, the OS disk. The OS is not so important in my setup, because the argument goes that if something dies, I can always reinstall it, I can always boot from USB, I can always configure it, and the heavier stuff is already in gone, my configuration system.

I decided to put NILFS2 on it, which I’ve successfully used before to save my girlfriend from inadvertently discovering how ext2undelete sucks; the argument being that log-structured filesystems are better suited to solid state drives. Then I researched the heck out of it and found out about F2FS, which seemed like a better fit – if nothing else then because it’s faster, I don’t need log-based snapshots on the root disk, and it’s not btrfs.

brain fuck schedule

I’m allowed the luxury to try out new things – I compile my very own kernels, complete with Con Kolivas patches as always, and I know how to keep the pieces if something breaks.

Let’s try out this new magic.

First, the simple stuff; cfdisk/gparted to create one small /boot partition, one large root partition (all primaries of course, because extended partitions are evil) and leave some space for “something unexpected”.. then mkfs.ext2 -L boot /dev/sda1; mkfs.f2fs -l root /dev/sda2.

Why the boot partition? Just in case, mainly because experience has taught me this makes booting a whole lot easier in case of troubles that may or may not be ancient and deprecated in 2013. Let’s get the files there: mount /dev/sda1 /mnt/target; rsync -a /boot /mnt/target/; umount /mnt/target.

What I am doing next is not difficult. It’s just not something anyone would consider safe. I want to sync the root filesystem over to the SSD. But I can’t be bothered rebooting into a USB stick; I want to continue using my system while I sync the 40 odd gigs (yes, forty, ask me why), and I want to do it all in one go, creating the “cleanest” new filesystem with no fragmentation and with directories ordered spatially together on disk.

clever trap, sir

Therefore, first I disable any and all services that might write to root or var:

for daemon in prads gpsd munin-node hddtemp mpd atop powernowd atd cron ddclient dictd libvirt-bin ntpd ssh timidity smartd uml-utilities postfix rsyslog arpwatch mcelog
do
  service $daemon stop
done

Afterwards, I give it a minute while watching pidstat -d 10 to see if anything else might write to disk. I see nothing and proceed:

mount -o bind / /mnt/src
mount /dev/sda2 /mnt/target
rsync -a /mnt/src/ /mnt/target/

Why did I bindmount? Because I don’t want rsync to cross filesystem boundaries, which it sometimes does regardless of whether I tell it not to.

There are only two more things to do: update fstab and the boot loader. I’ve already disabled swap beforehand, but the rest of this I forget to do. Instead, I am excited to try out the new SSD-enabled system and reboot a little too early.

nuked

Thankfully I’m running grub – grub 1 at that, so much more user-friendly than grub2 – and I tell it to boot into (hd0,0). This fails because initrd cannot mount /dev/sda6. Well duh, that was the old disk at the old location. I mount /dev/sda2 and rewrite fstab. I reboot. Again it fails: the root= argument to the kernel doesn’t match fstab. I reboot again, edit the grub boot again, and it fails on the third try. Because f2fs doesn’t have fsck, and I’ve told it to do a pass in fstab. This is a modern filesystem, it fsck’s on mount. I tell it not to pass with a magic zero in the fstab and hoppla, fourth time is the charm, my machine boots with no further ado, and somewhat faster.
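For reference, the corrected fstab entry ends up looking something like this (device and options from my setup; the final 0 is the pass field that tells init to skip the fsck):

```
# <device>   <mountpoint> <type> <options> <dump> <pass>
/dev/sda2    /            f2fs   defaults  0      0
```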

And before you ask, I am not fond of disk UUIDs either; they are terrible from a usability standpoint. Ever tried to type a UUID by hand, let alone remember one?

Furthermore,

Kernel command line: root=/dev/sda2 ro crashkernel=128M zcache vga=791
zcache: using lzo compressor
zcache: cleancache enabled using kernel transcendent memory and compression buddies
zcache: frontswap enabled using kernel transcendent memory and compression buddies
zcache: created ephemeral local tmem pool, id=0

hell yeah, zcache is finally in mainline, which means the ancient war between Time and Space has another battle and I get compressed memory pages which (hopefully) are both faster and more plentiful in physical RAM, now that I’ve foregone a swap partition.

I promptly recompile my kernel, timing the difference now that I use an SSD and zcache aand… it’s not much faster. Oh well, I guess it’s time to upgrade the system :-P

dark nite

pixie dust

Thursday, February 2nd, 2012

we’ve booted backtrack off usb before, now that’s kinda
boring, and installing backtrack onto the usb with unetbootin
is painfully slow and not the same as booting straight off the
usb, which is what we want in this case; not an install
but a fresh copy every boot

there is someone disagreeing in the back of the room, now
wouldn’t this be a lot more complicated? No sir, on the contrary:
booting fresh every time makes work a lot simpler; you gain a
direct relationship to what you store where, and where you
access your data from

but there is another one in the front; you sir, you feel that
one would have to sacrifice many of the comforts, such as having
all the tools of the trade at hand and permanent local storage –
but at best this is a lazy roadblock to salvation; by booting
off of local storage we have local storage at hand in a more
practical format, for even a microscopic carrier can be
removed and replaced with sufficient storage for everything
and then some

the medium can be embedded, destroyed or ingested, so
the impermanence of accidentally recorded data and the
robustness, accessibility and portability of removable storage
come in very handy in situations that require
inconspicuousness, anonymity, covertness, plausible deniability
or a high degree of reliability in day-to-day computing

the totality of the system given to remaining only in memory
causes it to be independent of other storage for operations, and when
operations cease from loss of any exterior preconditions, the
system simply ceases. when preconditions reoccur – by powering on
and executing the first block – the system can be relied upon to
simply start afresh, completely unperturbed by any previous history

should the need arise to patch the system; say some new app or
capability is called for where there is no time to rebuild,
a patch should always be scripted when there is certainty that
the capability will require a repeat performance. It is advised
to devise a patch which includes all dependencies.

thus the fresh system becomes more capable and more accessible
over time, just like an install. patches can then easily be
rolled into the system should they prove useful to others.

But how does one do it? Well, it’s easy but unfortunately
not as easy as overwriting the boot device; it’s just not
practical because partitioning is always an individual consideration:

  • there are often other files on the block device
  • choice of filesystem and memory technology has much bearing
  • the block device is larger or smaller than expected

instead, we allow any bootable partition scheme and any
filesystem and memory technology, as long as the storage
requirements of the system are met;

here’s how to clone:

cp -a boot/ apt/ casper/ gone/ preseed/ syslinux/
syslinux /dev/partition
mbr /dev/device


but that’s fine, it’s been done and all, but even the ability to
boot the system with precisely zilch local storage comes in
handy, and for that we have pixie dust.

pixie daemon and tiny ftp should be pointing at a path
exactly matching the dhcp-provided path.. otherwise
you will have worries!

/etc/pxe.conf:

interface=eth1
service=X86PC,0,0,local,Local boot
service=X86PC,0,0,pxelinux,PXELinux
tftpdbase=/var/lib/tftpboot
domain=truly.yours


/etc/default/tftpd-hpa:
TFTP_DIRECTORY="/var/lib/tftpboot/"

/etc/dnsmasq.conf:

dhcp-boot=/var/lib/tftpboot/pxelinux,vulcano,10.10.10.86


“high speed” tftp daemons and multicast can be found but it is
advised to stick to tftpd-hpa and dnsmasq with no esoterics due
to the sheer amount of variables introduced.

/var/lib/tftpboot/pxelinux.cfg/default:

# not strictly necessary but makes the menu pretty
menu hshift 13
menu width 49
menu margin 8

menu title BackTrackBoot
default vesamenu.c32
display f.txt
timeout 600

label local
menu label Local Harddisk
localboot 0

menu begin bt
menu title BackTrack 5
# ok here comes the real shit
label backtrack5
menu label BackTrack R1
kernel bt5/vmlinuz
append boot=casper netboot=nfs nfsroot=vulcano:/mnt/bt5 initrd=bt5/initrd.gz text splash vga=791 file=/cdrom/preseed/custom.seed --
menu end


you’ll need to copy to tftpboot/bt5 the initrd.gz and vmlinuz from the backtrack ISO /casper folder (which you can mount with mount -o loop -t iso9660 bt5.iso /mnt/bt5)

the rest of the files you provide to the bootee over NFS

/etc/exports:

/mnt/bt5 10.10.3.0/24(rw,sync,no_subtree_check) 10.10.10.0/24(rw,sync,no_subtree_check)

mount -t iso9660 -o loop BT5R1-GNOME-32.iso /mnt/bt5


add a http server with kickstart / preseed files for an ever more powerful setup,
in which case you replace the file= stanza in the append line with
url=http://host/path/to/preseed

more on preseeds… maybe later.

Now restart all dependent services:

/etc/init.d/nfs-kernel-server restart
/etc/init.d/tftpd-hpa restart
/etc/init.d/apache2 restart
/etc/init.d/pxe restart


debugging this setup usually requires tracing the process that is failing, so:
- dhcp options tracing (dnsmasq verbose and tcpdump / wireshark)
- verbose pxe
- verbose foreground tftpd-hpa: in.tftpd -v -v -L /var/lib/tftpboot

CPM 0.26 the Console Password Manager

Monday, December 5th, 2011

Some of you might have noticed that I’ve adopted this little program while its original author is MIA, and that my efforts have resulted in its inclusion into debian wheezy earlier this year.

This is great news and makes it a breeze to get up and running with CPM with a simple apt-get install cpm

However, it seems that most people are interested in running CPM on older distributions, realistically the stable distribution codenamed squeeze is a favorite, as well as the Ubuntu LTS release 10.04 codenamed lucid lynx.

So I have built some updated packages of CPM for these oldies but goodies:
* CPM for squeeze i386
* CPM for squeeze amd64
* CPM for lucid i386
* CPM for lucid amd64

Remember to install the dependencies though. On squeeze, they are:

me@mine:~# apt-get install \
    libcdk5 libcrack2 libdotconf1.0 libgpg-error0 \
    libgpgme11 libxml2 libxml2-utils libpth20

File us a ticket if you run into trouble with these packages or need cpm working on some other distribution.

CPM is a simple, paranoid password manager for the console with some cool features that make it stand out:

* data files can be encrypted for more than one person
* data files are signed by the last person who saved them, so forging data files is not possible
* data files are en- and decryptable directly by gpg and gzip
* the application memory is protected from paging, core dumps, ptrace attacks and runtime environment
* data is validated using an internal DTD
* several passwords per account can be stored
* it’s possible to handle several data files, each encrypted for different people
* cracklib checks of password strength and warnings about weak passwords
* user definable hierarchy with unlimited depth
* long comments for any node in the hierarchy
* password generator
* only one password visible at a time
* searchable database from the command line
* user definable search patterns (e.g. user@hostname)
* several hits can be displayed at once (e.g. several accounts per host)
* conversion scripts for Password Management System (pms), Password Safe and CSV files

consolekit is evil

Wednesday, December 1st, 2010

… and hates me

I should really tell you about the DLD seminar three weeks ago, or the PARANOIA security conference, or even just that Adobe should be considered harmful, but things have been crazy and between this and electromagnetism I haven’t had the mind space. After the 6th of December, I promise I’ll come back with pictures and relations and maybe even sounds (I have notes, don’t worry, I’ll remember).

On the other hand here’s a nasty hack to kill console-kit-daemon, which has a really nasty way of polluting the PID-space… and annoys me enough to warrant a public humiliation as well. What does it do, and why? Who cares what it does, it’s doing it poorly enough to catch attention to itself! So here’s how to kill it:

root@wasp:/usr/sbin# dpkg -S console-kit-daemon
consolekit: /usr/sbin/console-kit-daemon

DON’T try to purge the package because that’s just one end of a really big ugly yarn of unnecessary dependency pain that I’d like to spare you…

DON’T try to replace /usr/sbin/console-kit-daemon with your own stub… turns out dbus autostarts this “service”, and that approach will make dbus block your (ssh) session when you log in… not forever, but that’s even more annoying than the pid pollution.

Instead, debian bugs #544147 and #544483 clued me in to the following hack:

cp /usr/share/dbus-1/system-services/org.freedesktop.ConsoleKit.service \
  /usr/local/share/dbus-1/system-services/
echo Exec=/bin/false >> /usr/local/share/dbus-1/system-services/org.freedesktop.ConsoleKit.service

which is a two-liner, and would have been less ugly and easier to debug if it hadn’t been for the fine hubris of the freedesktop dudes…
OSSEC to the rescue

Wednesday, October 20th, 2010

I’m not going to brag about being online for 16 years without being hacked. It simply wouldn’t be truthful and, more to the point, even if I convinced myself, there is little you or I could do to verify the claim. Rather, I’d like to think that by being a little paranoid I’ve managed to avoid some badness. Actually even if you like to think so, it’s rather optimistic to believe in one’s own infallibility. The infallibility of computer systems? Don’t even get me started.

Computer security is about turning that trend around, about saying OK, where and how did the bad guy get in, let’s kick him out and make sure it doesn’t happen again. It’s about fixing the problems before they become really bad. Security is also about not putting all your balls in one basket, not trusting your single point of failure, and being very picky about the things you trust. Because automated systems fail automatically, security is about putting yourself in the loop too.

If you haven’t read this year’s Verizon data breach report [1], the gist is basically that 2/3 of hacks are from 3rd parties, that leakage usually occurs 3 minutes into the breach and that most attacks are still discovered by a third party. What’s more, almost all attacks were made on servers, most of the attacks aren’t even that difficult to do, they leave traces in the log files and the security holes are even easier to fix!

Now if you’ve been paying attention to the Stuxnet infestation [2], the Microsoft hack [3] or the recent Depnet failure [4], there is no guarantee that your skilled and educated IT staff is on top of that stuff… because they’re too busy delivering the features you demand.

The problem here is one of control. If you are an admin, you may know what you’ve done on any particular server and you might be on top of what your team has been doing, but the second someone gets in and starts changing things they shouldn’t have, all bets are off. Files get changed: logs get nuked, commands get replaced, databases get UPDATE’d.

Let me tell it to you straight: a virus, worm or piece of malware is basically a really cool software update.

What you need is an eventuality: something that leaves a central, verifiable audit log, checks a number of different sources, stores who logged in where and how, and monitors system integrity. You need something flexible, with 90% of the job already done for you, something that can deal with a bunch of computers on a bunch of different platforms at the same time, and while leaving you in the loop does this in a manageable way, so you don’t get overwhelmed by a zillion messages.

You need something that can respond to its environment, something that sits on every host, something that can take action on its own.

OSSEC [5] has a three-pronged approach that fits the bill: rootkit checks, file integrity and logfile watching.

It does these things according to a configurable ruleset. The rules can run on all your computers (yup, windows boxes too) and report to a central OSSEC server securely. OSSEC is also able to respond to alerts, for example by blocking an attacker that is trying to guess the password repeatedly (an everyday occurrence).

What’s more, GPL open source makes it possible to audit and patch the code of OSSEC, and gracefully sidesteps the problem of vendor lock-in.

Now that I’ve played with it and tuned it for sufficiently long, it’s started to complement my IDS nicely and beats old approaches like tripwire, fail2ban [6]/sshguard [7] and logwatch [8]. Don’t get me wrong, OSSEC is not the silver bullet [9], then again nothing is and thus we must stay vigilant.
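To give you a taste of the ruleset plumbing, here is a hypothetical ossec.conf fragment covering the three prongs (the paths and check frequency are illustrative examples, not my production values):

```xml
<ossec_config>
  <!-- file integrity: hash these trees every two hours -->
  <syscheck>
    <frequency>7200</frequency>
    <directories check_all="yes">/etc,/usr/bin,/usr/sbin</directories>
  </syscheck>

  <!-- rootkit checks against known signature lists -->
  <rootcheck>
    <rootkit_files>/var/ossec/etc/shared/rootkit_files.txt</rootkit_files>
  </rootcheck>

  <!-- logfile watching: feed auth logs to the analysis engine -->
  <localfile>
    <log_format>syslog</log_format>
    <location>/var/log/auth.log</location>
  </localfile>
</ossec_config>
```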

So, with the advent of Week of OSSEC year 2 I took the opportunity to tell you about this fine piece of software, and to show you the real nugget: my debian install and update script for ossec, which you can use standalone, or together with my budding and passably simple configuration system gone, which I will introduce another day in another post.

0K out.

References in all their undistractingly subscripted glory:
[1] Verizon data breach report
[2] Talk on stuxnet the SCADA worm by kwy
[3] Microsoft confirms Russian pill-pusher attack on its network
[4] Regjeringen utsatt for dataspionasje
[5] OSSEC
[6] Fail2ban
[7] SSHguard
[8] Logwatch
[9] Abusing OSSEC