Posts Tagged ‘paranoia’

CPM 0.25 :: new packages

Wednesday, September 15th, 2010

sup peeps,
your favorite password management program, CPM, now has updated packages.
What’s new in these packages is a working create-cpmdb. A fix for setting the SUID bit is also included, which is what allows CPM to store passwords securely in memory as well as on disk.

Ubuntu and Debian users:
Get cpm_0.25~beta-2debian3_amd64.deb directly from github.

I have also taken the opportunity to update the documentation, which will allow you to roll your own CPM should you be running something other than debian.

Quick start:

me@mine:~# apt-get install libcdk5 libcrack2 libdotconf1.0 libgpg-error0 libgpgme11 libncursesw5 libxml2 libxml2-utils zlib1g
me@mine:~# dpkg -i cpm_0.25~beta-2debian3_amd64.deb

You need: a GPG key and 3 minutes of your time. Create the password database (only once):

me@mine:~$ create-cpmdb

Use your GPG key to encrypt the database. This puts a .cpmdb file in your home folder.

Run CPM and add your passwords! Exit by hitting ESC to save the keys.

me@mine:~$ cpm

and you have a working CPM install.
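If you want to check that the SUID fix took (CPM needs it to lock its memory), look for the ‘s’ bit on the binary; the install path here is an assumption:

me@mine:~$ ls -l $(which cpm)

You should see something like -rwsr-xr-x in the permission bits; without the ‘s’, the mlock() call that keeps your passwords out of swap will most likely fail for ordinary users.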

Furthermore, I have devised a way for many people to share the same password database through a revision control system. Take a look at CPM::revision control.
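The write-up has the details, but the gist is ordinary version control plumbing around the encrypted database; a rough sketch (the repository URL and file layout are mine, not from the CPM docs):

me@mine:~$ git clone git@example.org:team/passwords.git ~/passwords
me@mine:~$ ln -s ~/passwords/team.cpmdb ~/.cpmdb   # point CPM at the shared database
me@mine:~$ cd ~/passwords && git pull              # grab the latest before editing
me@mine:~$ cpm                                     # edit, ESC to save
me@mine:~$ git commit -am 'password update' && git push

Since every copy of the database is encrypted to all the recipients’ GPG keys, the repository itself never sees a cleartext password.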

backtrack to install a backtrack

Thursday, September 9th, 2010

BackTrack is your daddy.
BackTrack accepts no compromises, yet it is all compromising.
Because really, when is the last time you *didn’t* need those auditing tools? That penetration suite? Total privacy to break other people’s privacy? All that and a packet of crisps wrapped with razor sharp menus – it’s the kind of stuff you can only dream of on core. And I hear Fedora Core is the shitzitz now, adopting new [1] and exciting[2] features. Oh hey debian doesn’t have binary deltas for packages *yet* [3], but we’ve been talking about it way longer than those dudes have.

Anecdotally, I spilled a glass of water on my laptop the other day. Naturally, the glass went half empty in an instant: my poor lovely x41, I screamed. As it turns out, the laptop casing rather cleverly funneled all the water into the x41's only 1.8″ harddrive, which proceeded to go completely bananas (due presumably to rust, because clean water doesn’t conduct, right?). The data? I believe trusty old dd_rescue did rescue at least part of it, but I then misplaced the image file somewhere.

The system?
It was a thrifty, untrusted yet trusty Windows XP install that I’d been keeping on there since I bought the machine, at the mercy of it actually booting every time despite having been licked by more than its fair share of virii, malignant updates and accidental hard resets. Most of the programs I ran were portable[4] versions, so all I lost were some documents and lots of music[5].

The hardware?
I disassembled and meticulously dried every little component, and in the end only the disk drive was bust. The 1.8″ IDE drive that is impossibly, ridiculously expensive to replace ($5 per GB? What the foo? Shut up!). Still, I needed the laptop, so I explored booting from USB. Despite (misguided?) efforts I haven’t bloody well been able to boot Windows off USB, so I bootstrapped BackTrack 3 instead and Bob’s your uncle.

I mean really, I think I had that thing running like that for three months before I started missing stuff like apt. Didn’t really mind starting fresh every boot; I even invented a whole little schpiel for getting online as fast as possible (sketched below), none of that Network Manager madness.
Persistent settings are all right in BT3 but booting into RAM is a lot more fun. After the first 3 seconds of boot you can pull the USB plug, everything goes zippety fast and your footprint is nada. Only thing that can get your ass is a cold boot attack.
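For the curious, that getting-online schpiel boiled down to something like this (interface name and ESSID are placeholders, and a WPA network would need wpa_supplicant instead of the iwconfig line):

me@mine:~# ifconfig wlan0 up
me@mine:~# iwlist wlan0 scan | grep ESSID     # see what's in the air
me@mine:~# iwconfig wlan0 essid "somenet"     # open network; WEP would take key s:yourkey
me@mine:~# dhclient wlan0                     # grab a lease and you're online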

BT3 is real cool and still a good recommend if you want to wardrive and do proper wifi phreaking due to the embedded injection drivers, but in the end I wanted new libs, a decent compiler and window dressing, and so I rolled BackTrack 4.

Granted, kde sucks, but if I cared enough I’d switch to openbox or something awesome in like 4 minutes. These days all I need is a shell and a browser.

For those of you fortunate enough to have a harddrive, BT4 ships with an install script to turn your system into a permanent BackTrack fixture. It’s based off Ubiquity, but dd’ing the USB stick straight onto your disk drive might be better if you want to keep the ability to boot your system into RAM; well, I dunno, because you want to do some advanced powersaving[6], or want to be able to kill your system without worrying about unclean shutdowns, or want to maximise the life span of your solid-state device by nearly never writing to it.
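Something along these lines, with the usual caveat that the device names are examples and you should triple-check them with fdisk -l before copying anything anywhere:

me@mine:~# dd if=/dev/sdb of=/dev/sda bs=4M    # sdb: the BackTrack USB stick, sda: the internal drive
me@mine:~# sync                                # make sure it all hit the platters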

For my own part there was a happy ending on DealExtreme, as they ship IDE44 to CompactFlash interfaces that fit in the x41 1.8″ bay… which leads to a whole slew of unexplored possibilities thaaat (drum rolls) I will explore in the next installment of how to break your machine.

BackTrack 4 R1 has been released :-) [7]. Anyone know where I can score the BlackHat Edition?

[1] http://fedoraproject.org/wiki/Releases/FeaturePresto
[2] http://fedoraproject.org/wiki/Features/systemd
[3] http://samba.anu.edu.au/rsync/rsync-and-debian/rsync-and-debian.html
[4] http://portableapps.com/
[5] http://drownedinsound.com/community/boards/music/4179554
[6] http://kacper.blog.linpro.no/archives/13
[7] http://www.backtrack-linux.org/

pulse audio: an experiment in social engineering

Tuesday, September 7th, 2010

The title about sums up the sentiments on the topic in my department.
What’s the use case for pulse audio?

Apparently, pulse audio is:
* configurationless consumer audio
* something that sits between you and your sound
* a replacement for ESD
* necessary for normal printer(!) operation
* cross-platform and works on windows
* really really complex
* the shit when you want less usability
* the bomb when you want less security
* not really competing with jack
* really good at marketing itself
* on by default in most current distros.

The cure:
apt-get remove --purge pulseaudio\*
yum remove pulseaudio

CPM: Reliable multiuser password management

Monday, August 2nd, 2010

Sup all,
summer is drawing to a close and vacation is definitely over, but I for one welcome the chance to think and act again. Some time ago our managed services department started complaining about various shoddy password management solutions. Truth be told we already had a good solution, CPM (“Console Password Management”), but the software had fallen into disrepair due to infrequent and untidy updates from its author. A new maintainer was desired, a project to fix the software was decreed, and the result fell into my lap, so to speak.

What sets CPM apart from other password management solutions is that it supports multiple users and goes to great lengths to keep your passwords secure while at the same time being very simple in its design: CPM locks its XML-formatted hierarchical password database in non-swappable private memory (so your passwords don’t get written to disk in cleartext while swapping), and encrypts the database with an arbitrary number of GnuPG public keys.

All this makes CPM quite nice for storing and sharing secrets in a nice curses-based searchable console interface.
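The multi-recipient part is plain GnuPG underneath; conceptually it is the same trick as encrypting a file to several public keys at once (the key IDs below are made up), so that any one of the listed keys can decrypt the result:

me@mine:~$ gpg --encrypt -r alice@example.org -r bob@example.org secrets.xml

That is what makes the database shareable between users without ever sharing a passphrase.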

For the longest time I’ve been keeping the hundred-odd passwords I can’t remember on notepads and in random text files, thinking that surely I should start employing some sort of password management before I go crazy or my passwords leak. My wishes happened to be congruent with the scope of this project, so I picked up CPM and gave it a little love, and the result can be found at

GitHub CPM with CPM packages for debian in the downloads section.

CPM crash course

Requirements: GNU Privacy Guard and a GPG keypair.

First, install CPM:

dpkg -i cpm_0.25~beta-2debian2_amd64.deb

Then, create a password database, adding your key to the recipient list when prompted.

create-cpmdb

Then, use CPM from the console:

cpm

CPM should now ask you for your GPG key passphrase and display an empty database.

CPM is controlled with the arrow keys, Enter and some control keys.
Hitting Control-H will bring you to the Help screen which explains the control keys.

By default CPM organises your passwords in a structure of hosts that have several services which may have one or more users. Hosts, services, users and passwords are nodes in the tree and a node is added by hitting Control-A and given an appropriate name.

For instance, if I were to add a password ch1ckens0up to user lolarun on the wiki service of host fragglepop.info, I would create the following node structure:

  host:fragglepop.info
      \-->service:wiki
              \-->user:lolarun
                      \-->password:ch1ckens0up

Of course there is no need to follow this anal layout, and you may even change the node structure by editing the template names in CPM by hitting Control-N or modifying the /etc/cpmrc config file.

To have CPM generate a random password for you, hit Control-P.
Your changes are not saved unless you hit Control-W or quit the program by hitting ESC enough times. Quitting through Control-C will not save the database.

Future work includes pushing the package into Debian.

What you don’t get (yet) is a GTK-based GUI, a wrapper to pull the password database out of git and commit it again after modification, or integration with gpg-agent (the latter probably (?) due to a bug in gpgme).

Enjoy this lovely piece of software and leave a comment after testing it!

EDD DoS detection and DLD

Friday, April 16th, 2010

Hi all,
a short note about the Norwegian data surveillance directive that is up for passage into law these days. This directive, “Datalagringsdirektivet”, is the single most harmful threat to the general public’s privacy while being completely ineffective at stopping the bad guys it’s meant to target. Protests last Saturday in front of the parliamentary building – which yours truly attended – featured politicians and individuals from all ends of the political spectrum. Read more on Stopp DLD.

On to other things,

getting DDoSed sucks, as some of my colleagues found out recently. Wouldn’t it be great if we could detect DDoSes as they come in through the wire? I mean, besides when all of nagios goes code red on us?

Well, I’ve written the little program that could. It’s not quite there yet (too few hours in the day) but the basic principles are fleshed out, and they go a little something like this:

There is a mathematical and a physical notion of entropy. To put it bluntly, the entropy of a piece of information is the length of its shortest possible representation. The useful consequence is this: if you receive a lot of messages but, taken together, they don’t amount to much information, their entropy is suspiciously low and probably someone is fugging with you. We can use this to detect anomalies in network traffic, too.

This is the property that EDD, the Entropy Distributed Denial of Service Detector [tarball], uses to classify a packet stream as bollocks, or not.
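To get a feel for the idea, here is a back-of-the-napkin version in shell (assuming you have tshark around, and using destination ports as the “message”): a flood of near-identical SYNs aimed at one port pushes the number toward zero, while ordinary mixed traffic doesn’t.

me@mine:~$ tshark -r capture.pcap -T fields -e tcp.dstport | grep . | sort | uniq -c | \
    awk '{ n += $1; c[++i] = $1 } END { for (j = 1; j <= i; j++) { p = c[j]/n; H -= p*log(p)/log(2) } printf "%.2f bits of entropy\n", H }'

EDD does something along these lines over a configurable window of packets rather than a whole capture at once.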

EDD is still pre-alpha software, which means that it’s a little too simplistic to tell you anything beyond a mere “Something’s up”, but I’d like you to test it in your setups with the understanding that the program is very much a work in progress.

commit bc2f4df34745e4c422a17e70aac271bc930b9f1a
Author: Kacper Wysocki
Date: Fri Apr 16 18:18:37 2010 +0200

EDD now classifies simple SYN floods successfully.

* faster and simpler simple_entropy
* reads from pcaps (-r)
* configurable threshold (-e)
* configurable window size (-w)
* profile counting (-t)
* edd self-tests (-E)
* better TODO ideas
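So a test run might look something like this (the binary name is a guess, the flags are the ones above, and the threshold and window values are pulled out of thin air):

me@mine:~$ ./edd -r suspect.pcap -w 1000 -e 2.5   # replay a capture with a 1000-packet window and threshold 2.5
me@mine:~$ ./edd -E                               # run the built-in self-tests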

Try it out and let me know, and send me pcaps of your DDoSes and false positives.

what’s wrong with IT?

Wednesday, March 24th, 2010

Hold on a bit.
I am a monk of the old order, one of the illuminati of software stacks. By no means a high priest, but like many of my brethren I have been ordained with most of the dirty little secrets over the years since I joined the convent. I never specialized, so I am well read in ancient texts and new work alike, and I have meditated on subjects ranging from compiling compilers through 3D rendering and artificial intelligence to business processes and value chains. In the constant rush to deliver on the promises of technology I’ve seen projects that were set up for failure before they even started. I’ve seen enough code to write a book detailing, example by example, what you should not do during development.

The secrets are many, and they are complex and hard to grasp out of context, but to misuse an old adage: the truth is simple and it’s out there.

The reason applications fail is that they are complex, but the reason IT fails is that IT people expect the applications to be simple to manage, and the business has a nasty tendency to promote the clueless.

It’s amazing how much money is thrown out the window (and into greedy hands) by large corporations and public departments on hairy overcomplicated blackbox solutions that are supposed to meet simple business needs.

Take databases for example. It’s easy to argue that the database is part of core business (because all the critical application data ends up in the database) and thus the database must be highly available, highly performant and highly secure. Maybe that’s how the CTOs of the world justify spending millions on monstrous arcane iron black boxes to serve their modest database needs. Annually!

The same needs, if properly understood, could be served at a fraction of the cost while being easier to manage and debug!

This is not just a schpiel on Postgres (who cares it’s open source, it can do all that and more) but a general protection fault in how technology is driven today.

Another nice example is DNS, which is beyond core business in importance: without domain resolution nearly all infrastructure fails. DNS problems can cause the most obscure failures simply because applications have no provision for DNS failure. Quite a few IT departments all over the world operate DNS through point-and-click wizards with nothing but a rudimentary understanding of its inner workings. Should they have that understanding? Hell yes: sooner or later it must fail, as everything does, and when it does they’ll have none of the tools to fix it!

Scarier still is that the rest of the world (or very nearly) has standardized on the most baroque and insecure DNS server in existence (BIND me in hell with seven furies burning marks in my skin), a precise analogy to what has happened in the world of e-mail (sendmail will do anything but!). We do this because we follow Best Business Practices, which is the IT analogue of what happens to you when you go through airport security: it is completely ineffective but feels safer.

Other examples of the same thing are the proliferation of security products that offer nothing but a smokescreen, the use of gigantic and poorly considered application frameworks, and the abstraction and layering of simple concepts into behemoth object-relation collections.
Humans have a distinctly object-oriented view of the world, yet the world keeps trying to tell us that objects don’t exist in their own right but depend on a whole slew of epiphenomena.

Software rots if it is not maintained.
None of the above are hard problems, regardless of what others might have tried to jam down your throat. Databases are a snooze to work on, DNS and mail should Just Work, and once we have a web application stack going for us we’re not going to consider how it works or what could be better. The difficulty that lies in application infrastructure is a people problem.

We want to buy a shrink-wrapped product and feel value for money without risk.

There is some sort of mass marketing effect happening where decision makers are best influenced by the greediest hands. We tend to think that the most expensive car has the best value with the least risk, and we are seldom so clear-sighted as to go back on decisions we have already made.

So what’s the fix?

Decision makers should spend more time evaluating the options before launching headlong into projects based on best business practices, and they should identify and listen more to the few quiet people who have a clue. The people with clue usually only get to vent their frustrations by making crass jokes about management and the hairiness of the most recent or most painful and embarrassing failure of technology. These things are not talked about openly, but they should be.

Ideally we should focus on long-term research into the difficult problems of technology: artificial intelligence, algorithms, how to feed the starving and save the world from imminent ecological disaster, quantum computing etc, instead of spending so much time failing at the simple things.

Lame things that suck

Saturday, December 5th, 2009

The world is a difficult place, we know.
Here’s a list of things that suck unnecessarily much:

  • Fink for OSX needs Xcode dev tools.
    Why not provide a gcc/libc-dev package? No idea. General lameness from the fink developers forces you to register at the Apple Developer Connection and download 700MB of apple crap just to install and compile source packaged software in fink. LAME.
  • Fink is not Cydia on the iPhone.
    Both are based on apt and dpkg. Both run on OSX. Pooling of efforts, everyone.
  • The very fact that you have to jailbreak an iPhone is ridiculous. Goes doubletime for Xbox and PlayStation chipping, Wii softmods and DS carding. This is vendor lockdown and should be lumped in with the criminally insane – vendors don’t need to take responsibility for user-created apps, but vendors must not stand in the way of software evolution.
  • Tar sands. Corruption.
  • There ain’t enough time to read all the cool web comics. Games? Don’t get me started.
  • To quote a friend and colleague, “every operating system in the world. Pick one and I will tell you how much it sucks.”

never ask for root again

Monday, August 17th, 2009

just a short note to all of you:

linux is not secure. Passwordless root is here :-*

Yes, it has been published elsewhere, but I’ll do my bit to push this meme to you: there can be no “untrusted local users”, nor do I believe that your services aren’t exploitable.

Two seconds later I have root on your box.

Despite LSM. Despite SELinux. Despite jails and virtualization. Despite all your assumptions.

You will need some very fine security gents and a little of your own smarts to secure your nets. Call us :-)

The best link on this issue so far has been:

cr0: bypassing linux with null pointer
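If you can’t switch kernels right away, one small knob worth checking (not a fix, and the cr0 write-up shows ways around it, but it raises the bar for the NULL-pointer class) is whether your kernel refuses to let userspace map page zero:

me@mine:~$ sysctl vm.mmap_min_addr              # anything comfortably above 0 is what you want
me@mine:~# sysctl -w vm.mmap_min_addr=65536     # run as root to change it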

Do you want security? Go run carpal-tunnel-inducing OpenBSD, swell swell if only it smelled well FreeBSD, or, *drum rolls*

drop-in up-to-date secure and invulnerable grsec kernel for ubuntu and debian

Only disadvantage I can see is that they don’t provide amd64 and desktop builds.

Diligence and perseverance are the path to victory,
and although paranoia may not be the path to safety,
no one should leave their front door open.

In other news, and probably a little lame for those of you coming thru the planet feed, security.vcl is here – properly used, understood and abused it could save you some worries, making sure no “untrusted user” went “local” in the first place.

Also, tell your friends: there is a Facebook virus about. It sends links to you from your friends accounts. If you click on the link, you too will be sending your friends links.

Yeah, I know, that sounds like what I do on facebook all day. Except the difference is you don’t know you’re sending links.

So watch out.

And tell your less savvy friends.

how to break your head : try linux compilination

Wednesday, July 29th, 2009

I’ve recently started compiling my own kernels again. Some people ask me why I’d ever want to do this – a valid question, since anyone who’s done it knows it’s a time-consuming hassle best left to the distro packagers and really nerdy people with too much time on their hands. Other people will give a blank face and ask “What is a Conpiling?” To these other people: this article is not for you, it will only serve to confuse that pretty little head of yours. If you know what ‘a compiling’ is, you may proceed. I don’t provide references; I banter them. Google is your friend, pluckum.

Still, I am not here to discuss the reasons for compiling your own kernel – these are all too obvious to the initiated and completely uninteresting to anyone else. I’m more interested in the reasons why my friends, colleagues and I have *stopped* compiling our own kernels – despite some of us having enjoyed at least a compile a day (or ten!) for periods of time in the past. Only the gentoo rice boys remain, steadfastly compiling everything in sight despite snide comments about mean time between upgrades and ridicule about their USE_FLAGS selector GUIs.

Why don’t we compile anymore?
There is no stable upstream branch. In my own experience this has had direct consequences for the stability and quality of point releases.
Years after Linus’ bitkeeper schism, the SCO slimeballing and the death of the stable branch, we can look back and say that aye, we have a better audit trail and development has scaled through the roof. We have more kernel features than ever, and an astounding number of patches makes it into mainline every day.

These amazing developments are a far cry from the linux dev process back in the days of 2.2 and 2.4, but there is a dark side to them.
Regressions are no longer the domain of the bleeding edge, the -mm or -ac trees, the -alpha and -rc releases for the adventurous, masochistic or desperate. They are common things now. Getting bitten by that local sexploit and being too embarrassed to tell your friends about it. Software suspend used to work fine. The graphics card did not crap itself on the last point release, but at least my NIC doesn’t get bricked in this one. The wifi keeps screwing with you, but you don’t know if you should blame Ubuntu, Intel or Linus. On the internet no one can hear you scream.


Elitism is rife on the LKML, and more pointedly, in the mainline patch process. Who knew NIH would be such a big problem in an open source project? Admittedly, it is the largest and perhaps the most ambitious open source project of all, with all eyes on target, a million uses and powerful market forces pulling the project this way and that. Linux has long ago outgrown the boy’s room, the hacker dungeon and its academic roots. Most kernel patches that get into mainline are pushed there by large hardware and software vendors. Many kernel hackers hack the kernel on their day job, earning an engineer’s living.

Linux has reached the Enterprise in a big way. The system runs and is optimized for Big Iron. The desktop is “good enough”, say the kernel hackers. Latency is fine for our uses, and those squeaky audiophiles should shut up and fork. Indeed they did, as embedded, realtime and audio people have all collectively decided to jump off the wagon.
Out-of-tree kernel hackers already know the lay of the land. After years of pushing the same ingenious, useful patchsets they are sick of cleaning up, splitting out, documenting, backporting and forward porting, only to discover that no one read their patch. Maybe they will be lucky and their ideas will be bastardized overnight into someone else’s pet project, far more likely to succeed once it is Invented Here(tm).

It’s not all bad: we want and need to trust the people that push stuff into the kernel. Who are you to think that you can do it better than them? They are doing their job, they do it well, so what if they all meet for beer and virgin sacrifice after hours, so what if there is no free seating in their society? Fork your own.

Weighing in at 800MB uncompressed, the Linux source is a behemoth. Counting only source, headers and assembly, there are 35,000 files in the linux kernel, with 10,667,648 lines of source code. This code is meticulously organized, not only into systems, subsystems and modules, but into domains of responsibility. Hey, if you’ve ever managed a large software project you would know how annoying, how encroaching it is when someone starts fiddling with your private bits.

On the other hand, linux has lost a lot of great contributions and spurned a lot of marvelous people because of this elitism. OpenMosix Israeli clustering, reiser4 the murderous file system, software suspend 2 the ‘it just works’ approach, page-in-from-swap, CK’s desktop efforts, the two kernel monty carlo and process snapshotting are only a few of the projects that failed to sufficiently influence the core developers, some of them year after year.
It can be argued that despite the patches not making it to mainline, some of these ideas did find their way into the minds of the gitmasters and found other implementations on technical merit alone. To me this defeats the whole purpose of the open source model, which drives technology by sheer speed. We’ve had a working, cleaned up, documented version of the patch for two years – and the feature doesn’t make the cut. This is too little, too late.


Well, not everyone takes an interest in kernel politicking even if they follow the LKML or kerneltrap, and some people even like hitting bugs and fixing issues in their compiles, and trolling in epic flame wars. They too have left kernel compiling to other, more patient and masochistic people.
Maybe it’s because even grepping a single point release changelog is a major chore. The distro folks have gotten fairly good at kernel compiles; ubuntu ships a one-size-fits-all Just Works(tm) kernel, RedHat’s patchset has grown less offensive over the years and debian is and always was debian. Upgrades are relatively painless and usually somebody else already did the dirty work.
Linus Torvalds’ initial plan succeeded: by axing the stable/unstable tree he told the world that the responsibility for stability rests on the distributor. He also axed many hobbyists’ will to stay and play with new releases. I’d rather go play on milw0rm.


There are other compelling reasons not to roll one’s own: the number of configuration options has doubled over the past years, and most of these new options are not relevant to the hobbyist use case. Development not only in the kernel source but in the toolchain (gcc) has caused compile times to soar. I remember proudly compiling 2.4 kernels on my K7 within 10 minutes back in 2001. Today it might take longer to compile the tree on my Centrino dual-core.
And there it is: we’ve suffered feature creep and bloat. After a long download, an hour or more of configuring, and many failed initial make runs, a generic compiled bzImage weighs in at about 3412 kB. This is a modular kernel, mind you. What happened to lean and mean 800 kB kernels?

Memory is cheap you say.
But minds are not cheap!


I’m announcing a contest: what’s the smallest stable useful kernel you can make for your platform? Remember, it should run on other machines and be useful, and the compile reproducible. Choose your own definition of useful, but do find a concrete definition. Use any tree and patchsets that turn you on. Bonus points for packaging so others can plug your kernel into their system. I’ll make your package available.
As a side contest I’ll take compile times along with bogomips numbers and your .config file for reference.
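If you need a starting point, something along these lines gives you a baseline to shrink from (pick your own -j, and expect to spend the real time in menuconfig adding back your disk, filesystem and NIC drivers):

me@mine:~$ make mrproper
me@mine:~$ make allnoconfig               # start from nothing instead of from a distro config
me@mine:~$ make menuconfig                # switch on only what your box actually needs
me@mine:~$ make -j4 bzImage
me@mine:~$ ls -lh arch/x86/boot/bzImage   # your contest entry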


PS. Yahoo! internal IT sucks. Where’s the wifi? Running our own cables, canned XP images in a linux lab, packet loss. This ain’t no funky party. I guess they are too busy. Paranoia, maybe. Things aren’t wonderful.

LDAP and its many uses

Friday, June 19th, 2009

There is a nice article on Single-Sign-On and LDAP in the Journal, and although it is not new, the man writing it has clearly spent some time finding novel (read: whack) uses for catalogue services.

Myself, on the other hand, I’ve been finding novel ways to break OpenLDAP. My 35-hour stint on Thursday set up more Active Directory-integrating workaround setups of the Slap Daemon than you can shake a bloody large stick at, including but not limited to The Inverted Translucent Reverse Meta Tree, where we do a slapo-translucent overlay in one slapd and a plain slapd database in the second slapd, then slapd-meta the sAMAccountName into uid and remap the suffixes in a third slapd process. Yep, that’s four separate catalogues to solve one application problem.

Don’t. Ask. Why.

The upshot is that you should stay the hell away from the slapd rewrite module as it will core, that the translucent overlay is magnificent at making very plain ldapsearches (objectclass=*) return no objects or fail, that slapd-meta is a very cool backend for remapping suffixes, attributes and your mom, that your application should never have to write to a read-only Active Directory tree, and that simplicity is instrumental in not going mental.
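For the record, the attribute remapping is the one trick from that contraption worth keeping around. A bare sketch of a back-meta stanza doing the sAMAccountName-to-uid mapping, with placeholder suffixes and URI and slapd-meta(5) as the real authority:

database        meta
suffix          "dc=example,dc=org"
uri             "ldap://ad.example.org/dc=example,dc=org"
map             attribute uid sAMAccountName
map             attribute * *

The catch-all last line is the customary “keep everything else” idiom; check slapd-meta(5) for the exact semantics before trusting my memory.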

Unfortunately, simple solutions to complicated problems are rather hard to come by.

PS. The problems I was trying to fix all came out of one single application bug and my attempts to work around it :-P