Posts Tagged ‘disintegrate’

OSSEC to the rescue

Wednesday, October 20th, 2010

I’m not going to brag about being online for 16 years without being hacked. It simply wouldn’t be truthful, and more to the point, even if I convinced myself of it, there is little you or I could do to verify the claim. Rather, I’d like to think that by being a little paranoid I’ve managed to avoid some badness. Still, however much you like to think so, it’s rather optimistic to believe in one’s own infallibility. The infallibility of computer systems? Don’t even get me started.

Computer security is about turning that trend around, about saying: OK, where and how did the bad guy get in? Let’s kick him out and make sure it doesn’t happen again. It’s about fixing the problems before they become really bad. Security is also about not putting all your balls in one basket, not trusting your single point of failure, and being very picky about the things you trust. Because automated systems fail automatically, security is about putting yourself in the loop too.

If you haven’t read this year’s Verizon data breach report [1], the gist is that two out of three breaches came from third parties, that leakage usually occurs three minutes into the breach, and that most attacks are still discovered by an outside party. What’s more, almost all attacks were made on servers, most of them aren’t even that difficult to pull off, they leave traces in the log files, and the security holes are even easier to fix!

Now if you’ve been paying attention to the Stuxnet infestation [2], the Microsoft hack [3] or the recent Depnet failure [4], you know there is no guarantee that your skilled and educated IT staff is on top of that stuff… because they’re too busy delivering the features you demand.

The problem here is one of control. If you are an admin, you may know what you’ve done on any particular server, and you might be on top of what your team has been doing, but the second someone gets in and starts changing things they shouldn’t, all bets are off. Files get changed: logs get nuked, commands get replaced, databases get UPDATE’d.

Let me tell it to you straight: a virus, worm, piece of malware is basically a really cool software update.

What you need is to plan for that eventuality: something that leaves a central, verifiable audit log, checks a number of different sources, records who logged in, where and how, and monitors system integrity. You need something flexible, with 90% of the job already done for you, something that can deal with a bunch of computers on a bunch of different platforms at the same time, and that, while leaving you in the loop, does all this in a manageable way, so you don’t get buried under a zillion messages.

You need something that can respond to its environment, something that sits on every host, something that can take action on its own.

OSSEC [5] has a three-pronged approach that fits the bill: rootkit checks, file integrity checking and logfile watching.

It does these things according to a configurable ruleset. The agents can run on all your computers (yup, Windows boxes too) and report securely to a central OSSEC server. OSSEC is also able to respond to alerts, for example by blocking an attacker who is repeatedly trying to guess a password (an everyday occurrence).
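
To give you a taste, here is a minimal sketch of what tightening the file integrity checks might look like. The directories and the scan frequency here are my assumptions, not gospel; adjust them before trusting any of it:

# a minimal sketch, assuming a stock OSSEC install under /var/ossec
# (OSSEC reads additional <ossec_config> blocks appended to ossec.conf)
cat >> /var/ossec/etc/ossec.conf <<'EOF'
<ossec_config>
  <syscheck>
    <!-- rescan every two hours -->
    <frequency>7200</frequency>
    <directories check_all="yes">/etc,/usr/bin,/usr/sbin</directories>
  </syscheck>
</ossec_config>
EOF
/var/ossec/bin/ossec-control restart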

What’s more, being GPL open source makes it possible to audit and patch OSSEC’s code, and gracefully sidesteps the problem of vendor lock-in.

Now that I’ve played with it and tuned it for sufficiently long, it has started to complement my IDS nicely, and it beats old approaches like Tripwire, fail2ban [6]/sshguard [7] and logwatch [8]. Don’t get me wrong, OSSEC is not a silver bullet [9], but then again nothing is, and thus we must stay vigilant.

So, with the advent of Week of OSSEC year 2, I took the opportunity to tell you about this fine piece of software, and to show you the real nugget: my Debian install and update script for OSSEC, which you can use standalone or together with gone, my budding and passably simple configuration system, which I will introduce another day in another post.

0K out.

References in all their undistractingly subscripted glory:
[1] Verizon data breach report
[2] Talk on stuxnet the SCADA worm by kwy
[3] Microsoft confirms Russian pill-pusher attack on its network
[4] Regjeringen utsatt for dataspionasje (Norwegian: “Government targeted by data espionage”)
[5] OSSEC
[6] Fail2ban
[7] SSHguard
[8] Logwatch
[9] Abusing OSSEC

backtrack to install a backtrack

Thursday, September 9th, 2010

BackTrack is your daddy.
BackTrack accepts no compromises, yet it is all compromising.
Because really, when is the last time you *didn’t* need those auditing tools? That penetration suite? Total privacy to break other people’s privacy? All that and a packet of crisps, wrapped in razor-sharp menus – it’s the kind of stuff you can only dream of on core. And I hear Fedora Core is the shitzitz now, adopting new [1] and exciting [2] features. Oh hey, Debian doesn’t have binary deltas for packages *yet* [3], but we’ve been talking about it way longer than those dudes have.

Anecdotally, I spilled a glass of water on my laptop the other day. Naturally, the glass went half empty in an instant: my poor lovely x41, I screamed. As it turns out, the laptop casing made sure all the water was rather cleverly funneled into the x41’s one and only 1.8″ hard drive, which proceeded to go completely bananas (presumably due to rust, because clean water doesn’t conduct, right?). The data? I believe trusty old dd_rescue did rescue at least part of it, but I then misplaced the image file somewhere.

The system?
It was a thrifty, untrusted yet trusty Windows XP install that I’d been keeping on there, at the mercy of actually booting every time since I bought the machine, despite having been licked by more than its fair share of virii, malignant updates and accidental hard resets. Most of the programs I ran were portable [4] versions, so all I lost were some documents and lots of music [5].

The hardware?
I disassembled and meticulously dried every little component, and in the end only the disk drive was bust. The 1.8″ IDE drive that is impossibly, ridiculously expensive to replace ($5 per GB? What the foo? Shut up!). Still, I needed the laptop, so I explored booting from USB. Despite (misguided?) efforts I haven’t bloody well been able to boot Windows off USB, so I bootstrapped BackTrack 3 instead, and Bob’s your uncle.

I mean really, I think I had that thing running like that for three months before I started missing stuff like apt. I didn’t really mind starting fresh every boot; I even invented a whole little schpiel for getting online as fast as possible, none of that Network Manager madness (a sketch of it follows below).
Persistent settings are all right in BT3, but booting into RAM is a lot more fun. After the first 3 seconds of boot you can pull the USB plug, everything goes zippety fast and your footprint is nada. The only thing that can get your ass is a cold boot attack.
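
For the curious, the getting-online schpiel amounted to roughly this; the interface name and ESSID are assumptions, so substitute your own:

# bring the wireless up and grab a lease by hand, no Network Manager
ifconfig wlan0 up
iwconfig wlan0 essid "some-open-ap"
dhclient wlan0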

BT3 is real cool and still a good recommendation if you want to wardrive and do proper wifi phreaking, thanks to the included injection drivers, but in the end I wanted new libs, a decent compiler and window dressing, and so I rolled BackTrack 4.

Granted, kde sucks, but if I cared enough I’d switch to openbox or something awesome in like 4 minutes. These days all I need is a shell and a browser.

For those of you fortunate enough to have a hard drive, BT4 ships with an install script to turn your system into a permanent BackTrack fixture. It’s based on Ubiquity, but dd’ing from the USB stick onto your disk drive might be better if you’re interested in being able to boot your system into RAM – maybe because you want to do some advanced powersaving [6], or want to kill your system without worrying about unclean shutdowns, or want to maximise the life span of your solid-state device by almost never writing to it.
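
The dd route is brutal but simple; a minimal sketch, assuming the stick is /dev/sdb and the internal disk is /dev/sda – verify with fdisk -l first, because this clobbers the target disk:

# clone the bootable USB stick verbatim onto the internal disk
dd if=/dev/sdb of=/dev/sda bs=4M
sync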

For my own part there was a happy ending on DealExtreme, as they ship IDE44-to-CompactFlash adapters that fit in the x41’s 1.8″ bay… which leads to a whole slew of unexplored possibilities thaaat (drum roll) I will explore in the next installment of how to break your machine.

BackTrack 4 R1 has been released :-) [7]. Anyone know where I can score the BlackHat Edition?

[1] http://fedoraproject.org/wiki/Releases/FeaturePresto
[2] http://fedoraproject.org/wiki/Features/systemd
[3] http://samba.anu.edu.au/rsync/rsync-and-debian/rsync-and-debian.html
[4] http://portableapps.com/
[5] http://drownedinsound.com/community/boards/music/4179554
[6] http://kacper.blog.linpro.no/archives/13
[7] http://www.backtrack-linux.org/

pulse audio: an experiment in social engineering

Tuesday, September 7th, 2010

The title about sums up the sentiments on the topic in my department.
What’s the use case for pulse audio?

Apparently, pulse audio is:
* configurationless consumer audio
* something that sits between you and your sound
* a replacement for ESD
* necessary for normal printer(!) operation
* cross-platform and works on windows
* really really complex
* the shit when you want less usability
* the bomb when you want less security
* not really competing with jack
* really good at marketing itself
* on by default in most current distros.

The cure:
apt-get remove --purge pulseaudio\*
yum remove pulseaudio

what’s wrong with IT?

Wednesday, March 24th, 2010

Hold on a bit.
I am a monk of the old order, one of the illuminati of software stacks. By no means a high priest, but like many of my brethren I have been initiated into most of the dirty little secrets over the years since I joined the convent. I never specialized, so I am well read in ancient texts and new work, and I have meditated on subjects ranging from compiling compilers through 3D rendering and artificial intelligence to business processes and value chains. In the constant rush to deliver on the promises of technology I’ve seen projects that are set up for failure before they even start. I’ve seen enough code to write a book detailing, example by example, what you should not do during development.

The secrets are many, and they are complex and hard to grasp out of context, but to misuse an old adage: the truth is simple and it’s out there.

The reason applications fail is that they are complex, but the reason IT fails is that IT people expect the applications to be simple to manage, and that the business has a nasty tendency to promote the clueless.

It’s amazing how much money is thrown out the window (and into greedy hands) by large corporations and public departments on hairy overcomplicated blackbox solutions that are supposed to meet simple business needs.

Take databases, for example. It’s easy to argue that the database is part of core business (because all the critical application data ends up in the database) and thus the database must be highly available, highly performant and highly secure. Maybe that’s how the CTOs of the world justify spending millions on monstrous, arcane iron black boxes to serve their modest database needs. Annually!

The same needs, if properly understood, could be served at a fraction of the cost while being easier to manage and debug!

This is not just a schpiel on Postgres (who cares, it’s open source, it can do all that and more) but about a general protection fault in how technology is driven today.

Another nice example is DNS, which is beyond core business in importance: without domain resolution nearly all infrastructure fails. DNS problems can cause the most obscure failures, simply because applications have no provision for DNS failure. Quite a few IT departments all over the world operate DNS through point-and-click wizards with nothing but a rudimentary understanding of its inner workings. Should they have that understanding? Hell yes: sooner or later DNS must fail, as everything does, and when it does they will have none of the tools to fix it!
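
Those tools needn’t be exotic, either; a couple of dig one-liners go a long way when resolution goes sideways (example.com stands in for whatever zone is broken):

# follow the delegation chain from the roots down to the authoritative servers
dig +trace example.com
# ask a specific nameserver directly, bypassing the local resolver
dig @ns1.example.com example.com SOA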

Scarier still is that the rest of the world (or very nearly) has standardized on the most baroque and insecure DNS server in existence (BIND me in hell with seven furies burning marks in my skin), a precise analogy to what has happened in the world of e-mail (sendmail will do anything but!). We do this because we follow Best Business Practices, which are the IT analogue of what happens to you when you go through airport security: completely ineffective, but it feels safer.

Other examples of the same thing are the proliferation of security products that offer nothing but a smokescreen, the use of gigantic and poorly considered application frameworks, and the abstraction and layering of simple concepts into behemoth object-relational collections.
Humans have a distinctly object-oriented view of the world; all the same, the world keeps trying to tell us that objects don’t exist in their own right but depend on a whole slew of epiphenomena.

Software rots if it is not maintained.
None of the above are hard problems, regardless of what others might have tried to jam down your throat. Databases are a snooze to work on, DNS and mail should Just Work, and once we have a web application stack going for us we never stop to consider how it works or what could be better. The real difficulty in application infrastructure is a people problem.

We want to buy a shrink-wrapped product and feel value for money without risk.

There is some sort of mass-marketing effect at work, where decision makers are best influenced by the greediest hands. We tend to think that the most expensive car has the best value with the least risk, and we are seldom so clear-sighted as to go back on decisions we have already made.

So what’s the fix?

Decision makers should spend more time evaluating the options before launching headlong into projects based on best business practices, and they should identify and listen more to the few quiet people who have a clue. The people with clue usually only get to vent their frustrations by making crass jokes about management and the hairiness of the most recent, most painful or most embarrassing failure of technology. These things are not talked about openly, but they should be.

Ideally we should focus on long-term research into the difficult problems of technology: artificial intelligence, algorithms, how to feed the starving and save the world from imminent ecological disaster, quantum computing and so on, instead of spending so much time failing at the simple things.

brilliant fools – hackers update

Monday, March 1st, 2010

Cracks are on the rise. So are hacks, and I haven’t posted a thing since December. So what’s up with you?

Good news is that Con Kolivas might have managed to defeat his carpal tunnel and swallow his spite for kernel-dev elitism, and is again churning out solid kernel code to improve desktop usability – which the kernel devs aren’t too interested in, something he is quite right to point out!

Hopefully Ubuntu will pick up CK’s scheduling patches, because they are uber, and with Ubuntu’s momentum they might topple the stack. Too bad their kernel team can’t follow the churn. Wish I had time to compile it for you, or even post some interbench stats with pretty graphs, but there is a BFS PPA available already, so you can test it out on your Ubuntu or Debian machine. Bonus? Some random dudes wrote a simple IO scheduler which is included there. Ain’t that reasonable?

All that at least until we have time to write our own OS which gets rid of all suckage, is super flexible and of course incorporates all our favorite patches.

Valgrind is pretty neat, and with its help we now have working stray ACKs in PRADS, the stealthy host and service detection system. More features and even a new release might come soon, which is to say: when it’s ready. In the meantime I welcome you to try breaking it in any and every way possible.
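
If you feel like joining in, running it under valgrind is a decent start. A minimal sketch; the binary path and interface flag are assumptions from my own build, so check yours:

# hunt for leaks and invalid reads/writes while PRADS sniffs away
valgrind --leak-check=full --track-origins=yes ./prads -i eth0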

Also, if everything you do is motivated by monetary gain, then you, sir, are a shame to the human race. Go back to step 1 and have a good day now.

the right way to use disk space? virtually, of course!

Monday, December 14th, 2009

I might have mentioned agedu before: a nice tool for finding your least useful, ready-to-be-deleted files real quick.
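
In case you haven’t met it, usage is roughly this (the path is just an example):

# index a directory tree, then browse the big stale stuff in a local web UI
agedu -s /home/kwy
agedu -w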

Sucks when the only files you have are rather large ones that you can’t throw out, like virtual system images, which easily grow to more than a few gigs.

Disk is cheap, you say (again), and I will protest loudly: disk is not cheap for your laptop, it is not cheap for your high-performance platter server, it is not cheap for the environment, and it’s ridiculous what kind of wasteful behavior the “hey, it’s cheap” mentality promotes, not all of which relates to computers (think garbage, cars, food, wars, lives…)

Regardless, if you are using KVM there is a way to save disk space, speed up disk access and maybe even save the environment a little: KVM ships with a little tool called kvm-img (if you’re using QEMU, it’s qemu-img) and supports a copy-on-write storage format called QCOW2.

The qcow2 format is cool because it supports compression and encryption.

Compress your images

If you cared about disk before, you could untick “allocate all space now” and save a couple of gigs on a 10G disk image, but that wouldn’t last long, and you’d hear people grumble about disk corruption and such (corruption that I have never ever seen, I might interject). Now, though, you can compress and rebase your image. Here’s how I saved 20G on my disk:

To convert your raw image to qcow2 you would do:

kvm-img convert -c -f raw -O qcow2 $IN ${IN%.img}_base.qcow2

where $IN is your existing image and ${IN%.img}_base.qcow2 becomes the name of your new qcow2 image. If you have NADA space left, convert into tmpfs (make sure tmpfs is mounted with sufficient size), remove the raw image and copy the new image out of tmpfs. That’ll free up some space.
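
The tmpfs dance looks roughly like this; the mount point, size and image names are assumptions, and the compressed image must actually fit in RAM:

# mount a big enough tmpfs, convert into it, then swap the images around
mount -t tmpfs -o size=6G tmpfs /mnt/tmp
kvm-img convert -c -f raw -O qcow2 foo.img /mnt/tmp/foo_base.qcow2
rm foo.img    # this is what actually frees the space
cp /mnt/tmp/foo_base.qcow2 .
umount /mnt/tmp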

Rebasing

But why stop there? I mentioned rebasing, and rebase we shall.
The qcow2 format is a little less cool for introducing really sucky snapshotting support: applying and creating snapshots with kvm-img takes hours and is likely to fail! I don’t recommend trying kvm-img snapshot -c foo.qcow2.
However, the copy-on-write functionality of qcow2 lets us implement functional faux snapshotting with little effort.

Copy-on-write means we can create an image sliver that only stores the changes from some read-only base image. Even better, we can layer these slivers! So, with the script I’ll introduce in a second, we can:

  1. Create or convert into a compressed base image. Name it foo_base.qcow2, e.g. “debian_squeeze_base.qcow2”. This is the master base, ideally made right after installing the operating system or whatever.
  2. Create a usable sliver to store new data into: kvm-img create -f qcow2 -b debian_squeeze_base.qcow2 squeeze_today.qcow2
  3. If you are using libvirt, update the disk source in /etc/libvirt/qemu/<vm>.xml to point to the ‘today’ image, then restart the libvirt daemon and virt-manager so they catch on to the change
  4. To create a faux snapshot, just move the today image and rebase it like in step 2.
  5. To revert a faux snapshot, just replace today’s image with the snapshot.

And here is my rebase script:

kwy@amaeth:/var/lib/libvirt/images$ cat rebase_snap.sh 
#!/bin/sh
# rebase_snap.sh: move the current image aside and layer a fresh
# copy-on-write sliver on top of it.
# Usage: rebase_snap.sh BASE [NEWNAME]

BASE=$1
if [ ! -f "$BASE" ]
then
   BASE=$1.qcow2
fi
if [ ! -f "$BASE" ]
then
   echo "No base image $BASE"
   exit 1
fi
# default snapshot name: base name plus today's date
REBASE=${BASE%.qcow2}_$(date +%F).qcow2
if [ -n "$2" ]
then
   REBASE="$2"
fi
mv "$BASE" "$REBASE"
kvm-img create -f qcow2 -b "$REBASE" "$BASE"
kvm-img info "$BASE"
kvm-img info "$REBASE"

echo "$BASE -> $REBASE"

Advantages

  • It takes 2 seconds to rebase and restore, as opposed to 1 minute for a VMware snapshot or 4 hours for a qcow2 snapshot
  • you don’t need fancy RAID or LVM tricks
  • You save space as opposed to shitty qcow2 snapshots and raw image copies
  • you can keep several versions or patchlevels of an operating system, and several application groups on the same operating system without having to reinstall the system – you already have a base image you can use!

Caveats

The experience should be pretty stable, but there is always room to shoot yourself in the foot. Here are a couple of ways you can make it hard for yourself:

  • don’t run out of disk space – it will corrupt your open images, regardless of format
  • don’t modify a base image that another image depends upon (a quick check follows this list).
    Your base image knows nothing about its children (newer snapshots and ‘today’ images), so modifying the base image will corrupt all of its children into weirdness. That’s why the base image is “read only” and should be named appropriately.
  • don’t go down under the stairs!
  • don’t do stuff you don’t understand!
  • don’t tell me this ain’t new, cause I know!
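
As for that quick check: kvm-img can tell you what an image is layered on top of before you touch anything, since layered images carry a “backing file” entry in their metadata:

# show format, size and (for slivers) the backing file of an image
kvm-img info squeeze_today.qcow2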

Lame things that suck

Saturday, December 5th, 2009

The world is a difficult place, we know.
Here’s a list of things that suck unnecessarily much:

  • Fink for OSX needs Xcode dev tools.
    Why not provide a gcc/libc-dev package? No idea. General lameness from the Fink developers forces you to register with the Apple Developer Connection and download 700MB of Apple crap just to install and compile source-packaged software in Fink. LAME.
  • Fink is not Cydia on the iPhone.
    Both are based on apt and dpkg. Both run on OSX. Pooling of efforts, everyone.
  • The very fact that you have to jailbreak an iPhone is ridiculous. Goes double for Xbox and PlayStation chipping, Wii softmods and DS carding. This is vendor lockdown and belongs with the criminally insane – vendors don’t need to take responsibility for user-created apps, but vendors must not stand in the way of software evolution.
  • Tar sands. Corruption.
  • There ain’t enough time to read all the cool web comics. Games? Don’t get me started.
  • To quote a friend and colleague: “every operating system in the world. Pick one and I will tell you how much it sucks.”

HOWTO avoid pains with NetCom 3g USB on jaunty

Sunday, September 6th, 2009

Internet anywhere, ain’t it great?

If you have one of those NetCom 3G HSDPA USB dongles, you might have noticed how they don’t really work so well out of the box.

After I had spent 4 hours trying to get the thing working, Martin smugly told me these things should be plug’n’play and proceeded to… fail to get it working. oW hELL…

Cutting to the chase and sparing you the gritty details, I have a recipe for getting 3G working with the NetCom ZTE MF636 USB dongle. This recipe should work on Ubuntu Jaunty and similarly recent distros, and most of the instructions apply to other USB dongles too. Included are also all the tips you need to avoid spending 4 hours hammering your head against the wall…

LDAP and its many uses

Friday, June 19th, 2009

There is a nice article on Single Sign-On and LDAP in the Journal, and although it is not new, the man writing it has clearly spent some time finding novel (read: whack) uses for catalogue services.

Myself, on the other hand, I’ve been finding novel ways to break OpenLDAP. My 35-hour stint on Thursday set up more Active Directory-integrating workaround setups of the Slap Daemon than you can shake a bloody large stick at, including but not limited to the Inverted Translucent Reverse Meta Tree, where we run a slapo-translucent overlay in one slapd and a plain slapd database in a second slapd, then slapd-meta the sAMAccountName into uid and remap the suffixes in a third slapd process. Yep, that’s four separate catalogues to solve one application problem.

Don’t. Ask. Why.

The upshot is that you should stay the hell away from the slapd rewrite module, as it will dump core; that the translucent overlay is magnificent at making very plain ldapsearches (objectclass=*) return no objects or fail; that slapd-meta is a very cool backend for remapping suffixes, attributes and your mom; that your application should never have to write to a read-only Active Directory tree; and that simplicity is instrumental in not going mental.
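
For the morbidly curious, the meta stage of that contraption looked roughly like this slapd.conf sketch. Hosts, ports and suffixes are made up, and you should trust slapd-meta(5) over my memory:

# proxy the AD-fed slapd, massaging the suffix and mapping uid onto sAMAccountName
database        meta
suffix          "dc=example,dc=com"
uri             "ldap://127.0.0.1:3890/dc=example,dc=com"
suffixmassage   "dc=example,dc=com" "dc=ad,dc=example,dc=com"
map             attribute uid sAMAccountName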

Unfortunately, simple solutions to complicated problems are rather hard to come by.

PS. The problems I was trying to fix all came out of one single application bug and my attempts to work around it :-P

Xorg is a steaming pile of gonzales so turn on your UXA

Sunday, June 7th, 2009

Hey folks,

if you have a newer Intel or ATI gfx card, and you’re running a suitably dishy distribution (Ubuntu Jaunty maybe, Xorg 7.4, kernels 2.6.27 and up perhaps?), maybe you’ve noticed how sluggish and slow Xorg is. Sometimes it’ll be a dick and eat heaps of your RAM, and maybe hog 98% of your CPU time for no damn raisins.

Furthermore, that sleek graphics card seems to be causing loads of garbage rendering – what gives? I wish I could show you some screenshots, but honestly, I don’t really wanna see that shit again, so here’s to not reproducing it. Instead, why don’t you do like I do and enable UXA? Edit your /etc/X11/xorg.conf file and add
Option "AccelMethod" "uxa" to your active Device section like so:

Section "Device"
   Identifier    "integrated"
   Driver "intel"
      Option "AccelMethod" "uxa"
      Option "RenderAccel" "on"
      Option "PageFlip" "on"
EndSection

Now log out of X and back in again (restart GDM, KDM or whatever), and whoop-de-doo, your /var/log/Xorg.0.log should say something like:

(**) intel(0): Using UXA for acceleration
[... snip irrelevant shit ...]
(II) intel(0): [DRI2] Setup complete
[... more irrelevant crap ...]
(II) UXA(0): Driver registered support for the following operations:
(II)         solid
(II)         copy
(II)         composite (RENDER acceleration)
[... further noise ...]
(II) intel(0): direct rendering: DRI2 Enabled

Also, windows should draw/move/resize snappier and there should be much rejoicing. If everything went all right that is.

Note that my glxgears performance dropped from 1300fps (w/ xcompmgr) down to 660fps (still w/ xcompmgr) and 994fps (no composition manager), so there seems to be a tradeoff here, but I really, really prefer my gooey windows snappy, artifact-free and less crash-prone. Maybe.

K out