Posts Tagged ‘breakage’

ip6 DNS wildcards considered harmful

Friday, September 23rd, 2011

I discovered something yesterday that might be of consequence:
If you have ip6 connectivity, the domain name resolver will prefer an ip6 wildcard record over an ip4 A or CNAME record. This breaks things like ssh. You’d expect the resolver to choose the most specific response, the same way ip4 wildcards work, and not to blindly prefer ip6 wildcards.

Consider the case of Mary, who’s been around and has lots of domains:

and she’s also wildcarding all the other subdomains to her vanity host… you get the idea, it’s fairly typical. Those hosts only have ip4 connectivity. Now she adds a new ip6 address and puts in a wildcard ip6 record, expecting that people accessing over ip6 get the wildcard address – and they do! But she gets a lot more than the doctor ordered: her ip6 clients will also get the ip6 wildcard address for all her other domains, and will all land on the wildcard host instead of the hosts behind the ip4 A records. What happened here?
Effectively, Mary’s ip6 wildcard broke all ip6-to-ip4 connectivity for her existing subdomains!
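A hedged sketch of what Mary’s zone might look like (all names and addresses here are invented, using the documentation ranges 192.0.2.0/24 and 2001:db8::/32):

```zone
$ORIGIN example.com.
; a couple of specific ip4 hosts
mail     IN  A
shell    IN  A
; everything else wildcarded to the vanity host, ip4 only
*        IN  A
; the new ip6 wildcard
*        IN  AAAA   2001:db8::1
```

An ip6-capable client resolving some wildcarded name now gets both the wildcard A and the wildcard AAAA, and a resolver that prefers ip6 hands the application 2001:db8::1 – even though the vanity host behind the wildcard A has no ip6 connectivity.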

Yep, you can fix it on your own machine, but this is a client problem and you can’t fix everybody else’s resolvers, so what you have to do is avoid ip6 wildcard records ENTIRELY. Thanks a bunch.

On a completely different note:

“debug This option is recognized by pam_ldap but is presently ignored.”

I mean wow. What did they do, write the whole module flawlessly on the first try? I wish.

firefox + geolocation = m0ar paranoia

Friday, August 26th, 2011

Just a quick note pertaining to a previous post, namely the new evil that is Firefox geolocation, new in Firefox 3.5. Yes, it is opt-in, and yes, Firefox does not track you – but the servers you opt in to will track you, and that, my friends, is one of the most serious misfeatures of our times, repeated again and again in things like Google Latitude, Android and Apple photo geo-tagging.
If you care about your personal security at all you do not want the internet tracking where you are, which is essentially what this amounts to.
Disable it now by going to about:config in your Firefox, typing geo. in the search field and double-clicking the geo.enabled line so that it reads

geo.enabled    user set  boolean   false
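If you’d rather pin this down in a file than click around, the same preference can go in a user.js in your Firefox profile directory (the profile location varies by OS):

```js
// user.js – Firefox applies these prefs on every startup
user_pref("geo.enabled", false);
```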

That’s it for now.

security or privacy? both please!

Monday, July 11th, 2011

Hello readers, fellow bloggers, fell trolls… it’s been a while.

Happy Wheel

If you didn’t know that just following a link could lead to loss of your identity, virus infection and the unknowing participation in denial of service sieges, distribution of child pornography and letting attackers break through your company’s firewalls (not to mention immanentizing the eschaton), you could be forgiven for not paying attention to the happy happy field of information security.

If you knew this rather distressing fact about the web of links, but thought you could defend yourself with an up-to-date antivirus, a current browser and the avoidance of “shady” websites, I would hate to burst your bubble by telling you that regular honest websites are being used against us, that browsers all have to run the all-singing-all-dancing-all-executing Flash and Java, and that anti-virus is not only ineffective but doesn’t even target this type of attack. Knowing these things might be a little distressing, so I won’t tell you.

At least my bank is secure, you say, it’s got SSL and everything! Well, you’d be twisting my arm into telling you, embarrassedly, that SSL as implemented in browsers is very neatly broken, that all we needed was for one of the Certified Trusted Authority Parties to go bad for us all to wake up with a butt-ache – and we now have not one but at least three such bad parties, not to mention all the MitM magic and DNS trickery that you don’t want to hear about anyway.

I will tell you however that the only defense is two-pronged: not allowing websites to script – which is a pain – and damage containment, which is not exactly perfect.

Let us say you already knew all of this, but no longer cared, because there was an even greater danger on the web: the total breach of privacy containment that is social media and tracking cookies, which all want to integrate and track your every move through the web so that usage and mood patterns can be profiled, tracked, bought and sold. Doesn’t matter, right? You’re logged in to Facebook, LinkedIn and Google+ and get all your links from there, so you have your own personal filter which only shows you stuff you care about, turning a blind eye towards anything you need to know that comes from left field, suckering you into giving away your privates every time you hit “like” or “add to friends list”.
pacman ghost

In a post-panopticlick world there is really only one lesson to be learned: care about your privacy, because no one else will give a damn. It’s not about whether you have anything to hide, it’s about the accumulation of your private info by crooks to use as they please!

Oh and crooks include the great people at Facebook, Microsoft and Google, that’s why I recommend disabling all tracking features that come in the guise of “better speed” or “increased security”. Pictures below show how to do it in chromium and firefox.

chromium dialog, how to disable google tracking

Ok, that was Google’s Chromium – note all the unchecked checkboxen… disabling prefetch, malware blocklists and suggestion services, all of which send far too much data to Google’s scaredy-ass all-seeing eye. Aaaand here’s Firefox:

fuckfox prefetch prevention

Mhm that’s pointing the browser at about:config, searching for prefetch and setting it to false. Yep.

Those pictures show you how to find and set better privacy features in your browser. Safari users are up shit creek, but unlike the Internet Explorer users, they at least have a paddle! Great stuff, what next?

Keep a “secure” browser that you don’t log into anything personal with… and make this your default browser!

What is a “secure” browser? Let’s talk a little about that without insulting anyone’s intelligence, shall we?
First off, I’m putting the word “secure” in, uhm, quotes, because, uhm, the browser will never be secure, be it ever so protected. Ok, moving on: you want to be running NoScript and/or Adblock and/or Disconnect and/or noflash, depending on whether you are a Mac, Windows (but then you are at game over already) or Linux user with Firefox or Chromium (NOT IExploder, that shit is scary!).

All of these tools make it so the sites you visit don’t get to run code on your machine. The end result is that sites are marginally less usable, but that popup popunder popver poopop ads and scary tracker/botnet/mal stuff doesn’t run automagically. Here are some links:
– Have you heard about sandboxing?

java and flash denied in chromium Chromium is getting the right idea by killing java and flash from the get-go. This should be the default, but you have to make it so!

You should definitely be cloaking your user-agent, as it is a useless yet very telling string your browser hoes out to all. Do this with the User-Agent Modifying Extension.

Also, you may find Torbutton and Foxyproxy interesting, the real privacy is in bouncing your traffic thru things that wash it for you. Putty or any other decent ssh client will create a proxy for you:

`ssh -D8080 me@myshellserver`

and you can point your browser’s SOCKS proxy settings to localhost:8080 putting all your traffic through your shell server!
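If your browser buries the SOCKS setting, you can also pass it on the command line. The flag below is Chromium’s (the binary name varies by distro); Firefox wants the same host and port set under its network preferences instead:

```shell
# throwaway chromium session that sends all traffic through the ssh tunnel
chromium --proxy-server="socks5://localhost:8080" --user-data-dir=/tmp/socks-profile
```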


The best has yet to come mua ha ha ha.

data retention in practice: the police raid on Autistici

Tuesday, January 18th, 2011

The story of the police raid on Autistici, a volunteer Italian organization, is a practical example of why the Data Retention Directive should not be introduced. Autistici is a non-profit organization offering free e-mail and blog services built to be resistant to censorship.

Here is a short summary of the case:

  • As part of its anti-censorship measures, Autistici’s services are spread across servers all over the world, and the data is stored encrypted on disk.
  • Autistici does not log connections and keeps no personal information about its users.
  • One of Autistici’s servers sits in the server park of a Norwegian organization for the promotion of free software.
  • November 5th, 2010: the police seize a hard drive from the server park based on a request from the Italian police.
  • The request from Italy names one e-mail account and asks for the contents of, logins to and changes made to that account.
  • The stated grounds for the seizure is a threats case.
  • The Italian request describes the criminal offence as insults against the reputation of two leaders of the neo-fascist organization Casa Pound.
  • The Norwegian police go beyond the request and take mirror copies of two hard drives, containing the e-mail of 1500 users and the account information of 7000 users.
  • The seized disks do not contain the e-mail account named in the foreign request.
  • Similar requests are believed to have been sent to Autistici’s server parks in the Netherlands and Switzerland.

The authorities already take too many liberties with other people’s data. In this case, the private e-mail of 1500 innocent people has been caught up in a search executed on behalf of a foreign nation’s interests, in a matter of dubious legality, where the accused is an unknown person.

The case rolls on, to examine the legality of the seizure and to ensure that the copies are not handed over in their entirety to the Italian authorities.

The case has also been covered in:

You can find amusing examples of future misuse of the DLD at #DLDloggen.

caching wikileaks with varnish

Friday, December 3rd, 2010

In times like these I like to remind everyone that truth is a virtue and that the best resistance to corruption is decentralization.

With that in mind I quickly threw together a cache for wikileaks at This is not a full mirror obviously but it will stay up even though the main sites go down.

The host in question isn’t really specced for high loads but that is beside the point. The point is that you can do this in your own varnish instance. Here is how you do the same, in VCL:

# help wikileaks stay online by providing caching
# helps more if you have persistent storage.
# 2010-12-03
backend wikileaks2 {
   .host = "";                  # fill in a mirror address here
   .probe = {
      .url = "/img/favicon.ico";
      .interval = 60s;

backend wikileaks3 {
   .host = "";                  # fill in a mirror address here
   .probe = {
      .url = "/img/favicon.ico";
      .interval = 60s;

# won't work while the DNS name is taken out
#backend wikileaks1 {
#   .host = "";
#   .probe = {
#      .url = "/img/favicon.ico";
#   }

director wikileaks round-robin {
   #{ .backend = wikileaks1; }
   { .backend = wikileaks2; }
   { .backend = wikileaks3; }

sub vcl_recv {
   if ( ~ "^(wiki)?leaks" ||
        req.url ~ "^/(wiki)?leaks" ||
        req.http.referer ~ "leaks") {
      set req.backend = wikileaks;
      #set req.grace = 7d;
      set req.grace = 365d;     # hand out stale objects for up to a year

sub vcl_miss {
   if (req.url ~ "^/(wiki)?leaks") {
      set bereq.url = regsub(req.url, "^/(wiki)?leaks", "/");

sub vcl_fetch {
   if (req.url ~ "^/(wiki)?leaks") {
      set beresp.grace = 365d;

You can save that to /etc/varnish/default.vcl and reload varnish.
Or, if your Varnish instance has other sites on it, you could save it to /etc/varnish/wikileaks.vcl and add the following near the top of your default.vcl:

include "/etc/varnish/wikileaks.vcl";

Isn’t it beautiful?
You may not be able to set up a full mirror, but now you can go set up your varnish to cache Wikileaks!
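As an aside, the regsub() in vcl_miss above strips the leaks prefix before the request is sent to the backend; approximated with sed, it behaves like this:

```shell
# the vcl_miss rewrite, approximated: replace a leading /leaks or /wikileaks with /
echo "/leaks" | sed -E 's|^/(wiki)?leaks|/|'                    # -> /
echo "/wikileaks/cablegate.html" | sed -E 's|^/(wiki)?leaks|/|' # -> //cablegate.html
```

Note the double slash when a path follows – most web servers tolerate it, but matching "^/(wiki)?leaks/?" instead would avoid it.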

The opinions expressed here are not necessarily those of my employer nor anyone else associated with me, Varnish or anything really.

consolekit is evil

Wednesday, December 1st, 2010

… and hates me

I should really tell you about the DLD seminar three weeks ago, or the PARANOIA security conference, or even just that Adobe should be considered harmful, but things have been crazy and between this and electromagnetism I haven’t had the mind space. After the 6th of December, I promise I’ll come back with pictures and relations and maybe even sounds (I have notes, don’t worry, I’ll remember).

On the other hand, here’s a nasty hack to kill console-kit-daemon, which has a really nasty way of polluting the PID space… and annoys me enough to warrant a public humiliation as well. What does it do, and why? Who cares what it does, it’s doing it poorly enough to draw attention to itself! So here’s how to kill it:

root@wasp:/usr/sbin# dpkg -S console-kit-daemon
consolekit: /usr/sbin/console-kit-daemon

DON’T try to purge the package, because that’s just one end of a really big ugly yarn of unnecessary dependency pain that I’d like to spare you…

DON’T try to replace /usr/sbin/console-kit-daemon with your own stub… turns out dbus autostarts this “service”, and that approach will make dbus block your (ssh) session when you log in… not forever, but that’s even more annoying than the pid pollution.

Instead, debian bugs #544147 and #544483 clued me in to the following hack:

cp /usr/share/dbus-1/system-services/org.freedesktop.ConsoleKit.service \
echo Exec=/bin/false >> /usr/local/share/dbus-1/system-services/org.freedesktop.ConsoleKit.service

which is a two-liner, and would have been less ugly and easier to debug if it hadn’t been for the fine hubris of the freedesktop dudes…
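If you want to see what the override file ends up containing before touching the real path, you can rehearse the same dance in a scratch directory (the /tmp paths and the service file contents below are illustrative; the hack relies on dbus honouring the appended Exec= line):

```shell
# rehearse the override in a scratch directory
mkdir -p /tmp/ck-test
cat > /tmp/ck-test/org.freedesktop.ConsoleKit.service <<'EOF'
[D-BUS Service]
echo Exec=/bin/false >> /tmp/ck-test/org.freedesktop.ConsoleKit.service
# the last line is now the neutered Exec
tail -n 1 /tmp/ck-test/org.freedesktop.ConsoleKit.service
```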

OSSEC to the rescue

Wednesday, October 20th, 2010

I’m not going to brag about being online for 16 years without being hacked. It simply wouldn’t be truthful, and more to the point, even if I convinced myself, there is little you or I can do to verify the claim. Rather, I’d like to think that by being a little paranoid I’ve managed to avoid some badness. Actually, even if you like to think so, it’s rather optimistic to believe in one’s own infallibility. The infallibility of computer systems? Don’t even get me started.

Computer security is about turning that trend around, about saying OK, where and how did the bad guy get in, let’s kick him out and make sure it doesn’t happen again. It’s about fixing the problems before they become really bad. Security is also about not putting all your balls in one basket, not trusting your single point of failure, and being very picky about the things you trust. Because automated systems fail automatically, security is about putting yourself in the loop too.

If you haven’t read this year’s Verizon data breach report [1], the gist is basically that 2/3 of breaches come from 3rd parties, that leakage usually occurs 3 minutes into the breach and that most attacks are still discovered by a third party. What’s more, almost all attacks were made on servers, most of the attacks aren’t even that difficult to pull off, they leave traces in the log files and the security holes are even easier to fix!

Now if you’ve been paying attention to the Stuxnet infestation [2], the Microsoft hack[3] or the recent Depnet failure[4], there is no guarantee that your skilled and educated IT staff is on top of that stuff… because they’re too busy delivering the features you demand.

The problem here is one of control. If you are an admin, you may know what you’ve done on any particular server and you might be on top of what your team has been doing, but the second someone gets in and starts changing things they shouldn’t have, the bets are off. Files get changed: logs get nuked, commands get replaced, databases get UPDATE’d.

Let me tell it to you straight: a virus, worm, piece of malware is basically a really cool software update.

What you need is an eventuality: something that leaves a central, verifiable audit log, checks a number of different sources, stores who logged in where and how, and monitors system integrity. You need something flexible, with 90% of the job already done for you, something that can deal with a bunch of computers on a bunch of different platforms at the same time, and that, while leaving you in the loop, does this in a manageable way, so you don’t get snowed under by a zillion messages.

You need something that can respond to its environment, something that sits on every host, something that can take action on its own.

OSSEC[5] has this three-pronged approach that fits the bill: rootkit checks, file integrity and logfile watching.

It does these things according to a configurable ruleset. The rules can run on all your computers (yup, windows boxes too) and report to a central OSSEC server securely. OSSEC is also able to respond to alerts, for example by blocking an attacker that is trying to guess the password repeatedly (an everyday occurrence).
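To give a feel for the configuration side, here is a minimal sketch of an agent-side ossec.conf covering the three prongs plus an active response – take the paths, frequencies, levels and timeouts as illustrative, not canonical:

```xml
<ossec_config>
  <!-- file integrity: hash these trees every 2 hours -->
  <syscheck>
    <frequency>7200</frequency>
    <directories check_all="yes">/etc,/usr/bin,/usr/sbin</directories>

  <!-- rootkit checks against the shipped signature lists -->
  <rootcheck>
    <rootkit_files>/var/ossec/etc/shared/rootkit_files.txt</rootkit_files>
    <rootkit_trojans>/var/ossec/etc/shared/rootkit_trojans.txt</rootkit_trojans>

  <!-- logfile watching -->
  <localfile>
    <location>/var/log/auth.log</location>

  <!-- active response: drop repeat password guessers for ten minutes -->
  <active-response>
</ossec_config>
```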

What’s more, GPL open source makes it possible to audit and patch the code of OSSEC, and gracefully sidesteps the problem of vendor lock-in.

Now that I’ve played with it and tuned it for sufficiently long, it’s started to complement my IDS nicely and beats old approaches like tripwire, fail2ban[6]/sshguard[7] and logwatch[8]. Don’t get me wrong, OSSEC is not a silver bullet[9], but then again nothing is, and thus we must stay vigilant.

So, with the advent of Week of OSSEC year 2 I took the opportunity to tell you about this fine piece of software, and to show you the real nugget: my debian install and update script for ossec which you can use standalone, or together with my budding and passably simple configuration system gone, which I will introduce another day in another post.

0K out.

References in all their undistractingly subscripted glory:
[1] Verizon data breach report
[2] Talk on stuxnet the SCADA worm by kwy
[3] Microsoft confirms Russian pill-pusher attack on its network
[4] Regjeringen utsatt for dataspionasje
[6] Fail2ban
[7] SSHguard
[8] Logwatch
[9] Abusing OSSEC

backtrack to install a backtrack

Thursday, September 9th, 2010

BackTrack is your daddy.
BackTrack accepts no compromises, yet it is all compromising.
Because really, when is the last time you *didn’t* need those auditing tools? That penetration suite? Total privacy to break other people’s privacy? All that and a packet of crisps wrapped with razor sharp menus – it’s the kind of stuff you can only dream of on core. And I hear Fedora Core is the shitzitz now, adopting new [1] and exciting[2] features. Oh hey debian doesn’t have binary deltas for packages *yet* [3], but we’ve been talking about it way longer than those dudes have.

Anecdotally, I spilled a glass of water on my laptop the other day. Naturally, the glass went half-empty in an instant: my poor lovely x41, I screamed. As it turns out the laptop casing made sure all the water was rather cleverly funneled into the x41’s only 1.8″ harddrive, which proceeded to go completely bananas (due presumably to rust, because clean water doesn’t conduct, right?). The data? I believe trusty old dd_rescue did rescue at least part of it, but I then misplaced the image file somewhere.

The system?
It was a thrifty, untrusted yet trusty Windows XP install that I’d kept on there at the mercy of it actually booting every time since I bought the machine, despite having been licked by more than its fair share of virii, malignant updates and accidental hard resets. Most of the programs I ran were portable[4] versions, so all I lost were some documents and lots of music[5].

The hardware?
I disassembled and meticulously dried every little component, and in the end only the disk drive was bust. The 1.8″ IDE drive that is impossibly, ridiculously expensive to replace ($5 per GB? What the foo? Shut up!). Still, I needed the laptop, so I explored booting from USB. Despite (misguided?) efforts I haven’t bloody well been able to boot Windows off USB, so I bootstrapped BackTrack 3 instead and Bob’s your uncle.

I mean really, I think I had that thing running like that for three months before I started missing stuff like apt. Didn’t really mind starting fresh every boot, I even invented a whole little schpiel for getting online as fast as possible, none of that Network Manager madness.
Persistent settings are all right in BT3 but booting into RAM is a lot more fun. After the first 3 seconds of boot you can pull the USB plug, everything goes zippety fast and your footprint is nada. Only thing that can get your ass is a cold boot attack.

BT3 is real cool and still a good recommend if you want to wardrive and do proper wifi phreaking due to the embedded injection drivers, but in the end I wanted new libs, a decent compiler and window dressing, and so I rolled BackTrack 4.

Granted, kde sucks, but if I cared enough I’d switch to openbox or something awesome in like 4 minutes. These days all I need is a shell and a browser.

For those of you fortunate enough to have a harddrive, BT4 ships with an install script to turn your system into a permanent BackTrack fixture. It’s based off Ubiquity, but dd’ing off the USB and onto your disk drive might be better if you’re interested in being able to boot your system into RAM, well I dunno because you want to do some advanced powersaving[6], or want to kill your system without worrying about unclean shutdowns, or want to maximise the life span of your solid-state device by nearly never writing to it.

For my own part there was a happy ending on DealExtreme, as they ship IDE44 to CompactFlash interfaces that fit in the x41 1.8″ bay… which leads to a whole slew of unexplored possibilities thaaat (drum rolls) I will explore in the next installment of how to break your machine.

BackTrack 4 R1 has been released :-) [6]. Anyone know where I can score the BlackHat Edition?


pulse audio: an experiment in social engineering

Tuesday, September 7th, 2010

The title about sums up the sentiments on the topic in my department.
What’s the use case for pulse audio?

Apparently, pulse audio is:
* configurationless consumer audio
* something that sits between you and your sound
* a replacement for ESD
* necessary for normal printer(!) operation
* cross-platform and works on windows
* really really complex
* the shit when you want less usability
* the bomb when you want less security
* not really competing with jack
* really good at marketing itself
* on by default in most current distros.

The cure:
apt-get remove --purge pulseaudio\*
yum remove pulseaudio

what’s wrong with IT?

Wednesday, March 24th, 2010

Hold on a bit.
I am a monk of the old order, one of the illuminati of software stacks. By no means a high priest, but like many of my brethren I have been ordained with most of the dirty little secrets over the years since I joined the convent. I never specialized, so I am well read in ancient texts and new work, and I have meditated on subjects ranging from compiling compilers through 3D rendering and artificial intelligence to business processes and value chains. In the constant rush to deliver on the promises of technology I’ve seen projects that are set up for failure even before they start. I’ve seen enough code to write a book detailing, example by example, what you should not do during development.

The secrets are many, and they are complex and hard to grasp out of context, but to misuse an old adage: the truth is simple and it’s out there.

The reason applications fail is because they are complex, but the reason IT fails is that IT people expect the applications to be simple to manage, and the business has a nasty tendency to promote the clueless.

It’s amazing how much money is thrown out the window (and into greedy hands) by large corporations and public departments on hairy overcomplicated blackbox solutions that are supposed to meet simple business needs.

Take databases, for example. It’s easy to argue that the database is part of core business (because all the critical application data ends up in the database) and thus the database must be highly available, highly performant and highly secure. Maybe that’s how the CTOs of the world justify spending millions on monstrous arcane iron black boxes to serve their modest database needs. Annually!

The same needs, if properly understood, could be served at a fraction of the cost while being easier to manage and debug!

This is not just a schpiel on Postgres (who cares it’s open source, it can do all that and more) but a general protection fault in how technology is driven today.

Another nice example is DNS, which is beyond core business in importance: without domain resolution nearly all infrastructure fails. DNS problems can cause the most obscure failures simply because applications have no provision for DNS failure. Quite a few IT departments all over the world operate DNS through point-and-click wizards without anything but the rudimentary understanding of its inner workings. Should they have that understanding? Hell yes, otherwise sooner or later it must fail as everything does, and when it does they have none of the tools to fix it!

Scarier still is that the rest of the world (or very nearly) has standardized on the most baroque and insecure DNS server in existence (BIND me in hell with seven furies burning marks in my skin), a precise analogy to what has happened in the world of e-mail (sendmail will do anything but!). We do this because we follow Best Business Practices, which is the IT analogue of what happens to you when you go through airport security: it is completely ineffective but feels safer.

Other examples of the same thing are the proliferation of security products that offer nothing but a smokescreen, the use of gigantic and poorly considered application frameworks, and the abstraction and layering of simple concepts into behemoth object-relation collections.
Humans have a distinctly object-oriented view of the world, all the same the world is trying to tell us that objects don’t exist in their own right but depend on a whole slew of epiphenomena.

Software rots if it is not maintained.
None of the above are hard problems, regardless of what others might have tried to jam down your throat. Databases are a snooze to work on, DNS and mail should Just Work, and once we have a web application stack going for us we’re not going to consider how it works or what could be better. The difficulty that lies in application infrastructure is a people problem.

We want to buy a shrink-wrapped product and feel value for money without risk.

There is some sort of mass-marketing effect happening where decision makers are best influenced by the greediest hands. We tend to think that the most expensive car has the best value with the least risk, and we are seldom so clear-sighted as to go back on decisions we have already made.

So what’s the fix?

Decision makers should spend more time evaluating the options before launching headlong into projects based on best business practices, and they should identify and listen more to the few quiet people who have a clue. The people with clue usually only get to vent their frustrations by making crass jokes about management and the hairiness of the most recent or most painful and embarrassing failure of technology. These things are not talked about openly, but they should be.

Ideally we should focus on long-term research into the difficult problems of technology: artificial intelligence, algorithms, how to feed the starving and save the world from imminent ecological disaster, quantum computing etc, instead of spending so much time failing at the simple things.