Archive for the ‘disintegrate’ Category

danger at the docks

Friday, November 14th, 2014

docker.io
Docker.io is taking the world by storm, but a day at the docks is not without its perils. Here I hope to inspire you to try out docker by showing you how to avoid its pitfalls.

In the days of yore

As the FreeBSD jailers and Solaris zoners will attest, containerizing your services is a great boon: it saves space and resources, gives you easy management akin to chroots along with potential security benefits, all without the overhead of full-blown virtual machines.

FreeBSD Jail
Linux has had containers for the longest time, in the ancient form of User Mode Linux, which actually ran a kernel in userland, and more recently OpenVZ, which was more like jails.

The former didn’t lend itself to production deployments and the latter never made it into the Linux mainline, coming as it did at a time when people were more interested in virtualization than containment. In recent years, a kernel facility named cgroups has made LinuX Containers (LXC) possible, which has afforded the manageability, if not the security, of BSD jails.

what can be gained

The biggest potential benefit from containers is that CPU, memory and disk resources are 100% shared at native speeds, so no libraries and no data need ever be duplicated on disk nor in memory.

In FreeBSD jails, this was achieved by providing most of the system read-only like /usr, /lib and /bin, and sharing it amongst jails. This worked quite well, but was surprisingly tricky to update.
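
In case you are wondering how: one common way of doing the sharing is plain read-only nullfs mounts from the host into each jail. A minimal sketch, where the /jails/web1 path is made up for illustration:

mount_nullfs -o ro /usr /jails/web1/usr
mount_nullfs -o ro /bin /jails/web1/bin
mount_nullfs -o ro /lib /jails/web1/lib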

LXC
You can do similar stuff with LXC, just as long as you understand that if it breaks, you get to keep all the pieces. This gives you full control, and means that I for one have LXC instances in production with uptimes of 1200 days and counting.

minimalizing

Taking the single-container-single-responsibility approach further, instead of deploying whole-system containers you could create image filesystems that contain only the bare necessities. For instance, apart from its code, your python application would have just the python runtime, libc and other dependent libraries, and not much else.

Inspired by the “leaner is better” philosophy, and backed by the experience of running LXC in production, we built a minimal deployment framework of our own, complete with a tool to magically find all the required libraries.
leaner is better
This approach produces awesomely small images, where the “contact surface” of the application has shrunk to nothing but the app itself. It was far from perfect, though: it also made the images awesomely less debuggable and manageable, and it never made it into production proper.
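
The library-finding magic can be approximated with ldd in a loop. A minimal sketch of the idea, with made-up binary and image paths:

BIN=/usr/bin/python2.7    # example binary to package
IMG=/srv/minimal-image    # example target image tree
mkdir -p $IMG$(dirname $BIN)
cp $BIN $IMG$BIN
# pull in every shared library the binary links against
for lib in $(ldd $BIN | awk '/\//{print $(NF-1)}'); do
    mkdir -p $IMG$(dirname $lib)
    cp $lib $IMG$lib
done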

layer upon layer is two steps further

In comes Docker, and its concept of filesystem image layers based on AUFS. The approach isn’t novel in itself, having been used by live-CD distributions for the longest time, but Docker is the first to provide tools that manage the layers effortlessly for containers. So you can now have 100 servers with 100 application layers, and all your Ruby applications share one runtime layer and your Python applications share another, and they all run on the same base image of Ubuntu, and they do all that transparently, without you having to consciously think about which bit goes where.
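
To make the layer idea concrete, here is a hypothetical Dockerfile sketch – the image tag, repository name and app.py entry point are made up for illustration. Each instruction becomes its own layer, and the base and runtime layers are shared by every image built the same way on top of them:

cat > Dockerfile <<'EOF'
# base layer, shared by everything built on ubuntu:14.04
FROM ubuntu:14.04
# runtime layer, shared by every python app built with this same step
RUN apt-get update && apt-get install -y python
# application layer, unique to this app
ADD . /app
CMD ["python", "/app/app.py"]
EOF
docker build -t myshop/myapp .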

layers
Docker takes this another step further, borrowing heavily from distributed social source control à la GitHub, allowing you to clone, build, push, pull, commit, share and remix images just as easily as that.
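
The day-to-day feel is much like working against a remote git repository. A rough sketch, with a made-up repository name:

docker pull ubuntu:14.04                      # fetch a base image from the registry
docker run -it ubuntu:14.04 bash              # hack around inside a throwaway container
docker commit <container-id> myshop/hacked    # snapshot your changes as a new image
docker push myshop/hacked                     # share it back to the registry
docker pull myshop/hacked                     # ...and pull it down on the other 99 servers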

This is the type of thing that blows disk-image-based virtualization straight out of the water.

Perils and rough starts

The Docker docs are well written and will get you spawning containers and dockerizing applications in no time at all. What they will not tell you is how to run containers in production for all values of production.

In particular, the following things require special attention:

  • changing ips
  • service discovery
  • dns issues
  • cache clash

.. and that is precisely what we will talk about next time.
see you later

the paranoid console viewer

Tuesday, June 19th, 2012

Hi all,
I know it’s been a while since my last post.
There is lots to talk about, but let’s start afresh with something short and sweet.

Let me paint a picture for you:
There is something broken in the most holy and secure part of your network. You need to call support to have them look at it. The support rep wants console access, but you can’t give them axx to your holiest cream pie.
They offer to take over your desktop with a java rootkit app like TeamViewer, GoToMeeting or WebEx.
You decline. You need to stay in control, but show them what they need to see, that and only that.
How?

Let me be clear on the problem statement:
Read-only shell access to the most secure host, which is not available over the wire, viewed by multiple parties at the same time.

Here’s how to do that with SSH, screen(1) and some foo,
with ssh->chroot->rbash->readonly multiuser screen->reverse ssh->openvpn:

You will need a linux server in an “unsafe” zone which is exposed to your support rep on the internet or thru VPN.

  1. Create the user to be contained on your unsafe box, with the restricted bash shell:
    unsafe# export user=rep; adduser $user; chsh -s /usr/bin/rbash $user
  2. (Bonus:) chroot/contain the user within sshd_config – see the sketch right after this list
  3. Set up multiuser screen on the unsafe box. There are lots of guides for it, but the short and sweet of it is:
    unsafe# chmod +s `which screen`; chmod 755 /var/run/screen
    Indeed, this increases the attack surface, which is why we call this box the unsafe one.
  4. ssh from secure zone to unsafe server:
    secure# ssh -t -R 2222:localhost:22 youraccount@unsafe screen
    (-t allocates a tty so screen can start; youraccount is your own account on the unsafe box)
  5. Run screen from YOUR account and issue these screen commands:
    :addacl $user
    :chacl $user -w "#"
    :chacl $user -x "?"
    Replace $user with whatever you chose in step 1. Then, still in your screen: :multiuser on
  6. Win! Now you can reverse ssh back to the secure zone and let $user on the unsafe box read the terminal without being able to access anything but what you show her.
  7. Bonus: Add `screen -r $youraccount` to $user's ~/.profile and $user will drop straight into the locked screen. Remember that a multiuser screen is read-write-execute for all accounts that are addacl’d,
    so you might want to chacl before enabling the $user account login.
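
As promised in step 2, here is a minimal sshd_config sketch for containing the rep user – the paths are assumptions, the chroot needs the usual bits (shell, screen, /var/run/screen) copied or mounted into it, and neither rbash nor a chroot is a hard security boundary:

Match User rep
    ChrootDirectory /home/rep
    AllowTcpForwarding no
    X11Forwarding no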

    And there you have it, a superparanoid reverse secure-unsecure remote shell viewer.

    0k

oh noes, o cert my *sniff* cert

Monday, September 26th, 2011

papieren bitte
I’m not going to tell you about DigiNotar, whose filing for bankruptcy this month held shock for no one, coming as it did right after they lost the keys to the grand vault in which the government held much stock. Though I have many comments on the sophistication of the player that so thoroughly owned the most trusted agencies of the digital age…

The cracker hardly needed them skillz, considering it has been a challenge to keep that whole corrupt industry accountable. The trouble with the central authority system is that even if only one of the keys is compromised, the system is broken and gives no assurances whatsoever. No warning bells either. Just a sweet silent man in the middle, passing along all the best parts to his lover.

It’s not a joke for the 300,000+ people who demonstrably had their emails and facepalms compromised. We thought it kind of him to give an interview, and we wait in awe for his next move.

I’m not going to mention the fatal flaws in certificate revocation that became embarrassingly apparent when the damage was done.
Which hardly matters, since this kind of thing is bound to crop up: that hole in TLS was deemed unexploitable – now there’s a Titanic if I ever saw one. Unsinkable. Too fat to die.
cert failure

SSL is an open book for those who dare to look, and it’s got more than a couple old bugs. It’s okay though, we can patch it, they will say. Dare to look the other way!
Not that you need those anyway, since there are some really nice sslsnarfing techniques out there that entirely forgo attacks on SSL as “too inefficient”.

But I say nay! Unacceptable. There is another way.. and we’re already doing it! We sign our own signatures and we back each other’s signatures.
Now that’s business, something that the companies on your CA trust list were painfully aware of when they laid down the law of the code and put themselves on the trust list. Yet CAcert is still not on your trust list, and warning bells fly off on some of the most trustworthy sites – the self-signed ones.

Just don’t ask them why or how, or anything that isn’t directly relevant. Do you even know what is on your trust list? You might just be surprised at what you can find.

# ls -al /etc/ssl/certs | wc -l
479

How many of these do you trust? How many of these should you trust? I’ll tell you: *none*.

We should not be adding static lists of central signing authorities to our systems. This is a brittle and dangerous system. We knew this, but hackers have now thankfully demonstrated it.
A better way is for every person (and by extension every browser) to keep their own list of signing certs, and to exchange these certs with their friends (automagically, if you like). Your friends lists can come out of a social network, any social network, and it will mean that any site that has been vetted by one or more of your friends will likely be safe for you to use as well. It’s even better than that, you can check certs from multiple friends and detect discrepancies.
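
You can already get a taste of that cross-checking with nothing fancier than openssl: compare the certificate fingerprint you see with the one a friend (or a far-away shell server) sees, and raise an eyebrow when they differ. A rough sketch with example hostnames:

echo | openssl s_client -connect mybank.example:443 2>/dev/null | openssl x509 -noout -fingerprint -sha1
ssh friend@shell.example 'echo | openssl s_client -connect mybank.example:443 2>/dev/null | openssl x509 -noout -fingerprint -sha1'
# differing fingerprints mean a man in the middle – or at least a question worth asking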

green padlock
That, my friends, is called the Web of Trust, and it is a design that is heading in the right direction. convergence.io is doing something similar already in a Firefox near you, while PGP and GPG have worked like this for two decades!

It has to be simple. It has to be very simple. And it has to be chemically free of one word: ‘central’.

One really easy way to do this on Linux would be to use git and signed manifests. I already do this in gone to ensure that only files on a manifest signed by a trusted key get installed.
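
The mechanics are simple enough to sketch in two commands – the file names are made up, and gone’s actual format may differ:

gpg --verify MANIFEST.asc MANIFEST   # is the manifest signed by a key I trust?
sha256sum -c MANIFEST                # do the files on disk match the manifest?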

firefox + geolocation = m0ar paranoia

Friday, August 26th, 2011

Just a quick note pertaining to a previous post, namely the new evil that is firefox geolocation, new as of firefox 3.5. Yes, it is opt-in, and yes, firefox does not track you, but yes, the servers you opt in to will track you, and that, my friends, is one of the most serious misfeatures of our times, repeated again and again in things like Google Latitude, Android and Apple photo geo-tagging.
If you care about your personal security at all you do not want the internet tracking where you are, which is essentially what this amounts to.
Disable it now by going to the about:config location in your firefox, typing geo. in the search field and double clicking the geo.enabled line so that it says

geo.enabled    user set  boolean   false
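
If you want the setting to stick, the same preference can also go into the user.js of your profile directory – the profile path below is a placeholder, adjust it to your own:

echo 'user_pref("geo.enabled", false);' >> ~/.mozilla/firefox/yourprofile.default/user.js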

That’s it for now.

failcode

Thursday, August 18th, 2011

In my time as an application programmer, developer and designer, a brief stint as team lead and project manager,
as well as my time as a systems consultant, I have witnessed first-hand, and also heard many credible tales of, systematic failure that rivals any of the stories on The Daily WTF. My colleagues and I have seen so many examples of bad design, bad code and systemic failure that we have considered writing a book titled How To Write Ugly Code.

I have also read the Texas Instruments Chainsaw massacre and personally met Gomez while debugging applications.

My speciality and my interest lie in diagnostics, the analysis of problems and system security, and my experience has shown that one can venture to say something about the qualitative differences between design methodologies and what they mean for the end result.

Firstly, however, it is worth noting that the software industry as a whole has one primary problem: the time pressure to deliver new features in the face of mounting expectations.

This pressure to deliver is seen as the driving force behind industry progress and ever leaner, more economical applications. Contrary to this belief, however, I have evidence that it creates incentives for sloppy work, overengineering and poor consideration of the problem domain. It seems the process itself rewards poor application design, regardless of development methodology.

Large corporate and government tenders, which affect many hundreds of thousands of people’s lives, get bid on by large software houses that believe they can deliver everything (at low risk: if they cannot deliver, it is very hard for the customer to contest this against a major software house).

What we get by and large out of this process are bloated top-down applications designed by people who do not understand the (whole) problem, leading to misguided decisions for such things as

  • choice of platform and language
  • choice of coding standards (check out Systems Hungarian if you don’t believe me)
  • programming methodology
  • communication tools: source control, ticket and forum tools for developers and system managers
  • Not Invented Here-practices
  • monkey-coding by people whose talents could be employed to solving the problem

What usually passes for “agile” development causes frequent, ineffective blame-game meetings.
Unit-test-driven development frequently causes micromanagement of program details and inflexible designs.
… all these methodologies were designed to improve programs, not bog them down! Why, then, do they cause so much breakage?

The pressure to deliver requires the application developer to prefer large swathes of ready-made library code and a high level of abstraction to allow her to meet deadline demands.

A high abstraction level causes low debuggability and poor performance.
Low debuggability, because bugs are by definition conditions caused by circumstances unforeseen by the application developer. Abstractions are employed by the developer to hide implementation details, aiding clarity and speed of application development at the cost of debuggability.

The very tools and abstractions employed by the application developer create the frame through which the developer can see the circumstances of her design and code. Bugs most frequently occur on the boundaries between abstractions, where the developer has no way to foresee these circumstances. Furthermore, in a system with a passably high level of abstraction there is a whole stack of hidden details which must be traced and unwound to discover the bug. Therefore, every additional layer of abstraction obscures the debugging process.

Debuggability and algorithmic simplicity are key to achieving optimal performance. In other words, if we have a clear problem statement, it is possible to achieve performance. If there is no clear problem statement, and the program is further muddled by abstractions and interactions, there is no effective path to performance.

Any artist will tell you that the most interesting, creative and innovative work comes out of a stress-free, playful environment. Since innovative coding is a creative activity, the same applies to developing applications. Game development companies and creative shops have known this for years, and behemoths like Google and Microsoft have picked up on it, reinvesting up to 15% of their revenue into research and development and getting that part right, as witnessed by their sheer output of innovation.

If there is a clear path to solving these fundamental problems of IT, it is putting the people who know what they are doing in the pilot seat: letting developers choose for themselves not only toolchains, methodology and communication tools, but also engaging the systems thinkers in creating the specifications and architecture of the systems they are going to implement. The good news is that as customers and managers get savvy to this method of achieving IT success, we are going to see more developer autonomy and fewer spectacular fails.

security or privacy? both please!

Monday, July 11th, 2011

Hello readers, fellow bloggers, fell trolls… it’s been a while.

Happy Wheel

If you didn’t know that just following a link could lead to loss of your identity, virus infection and the unknowing participation in denial of service sieges, distribution of child pornography and letting attackers break through your company’s firewalls (not to mention immanentizing the eschaton), you could be forgiven for not paying attention to the happy happy field of information security.

If you knew this rather distressing fact about the web of links, but thought you could defend yourself with an up-to-date antivirus, a current browser and the avoidance of “shady” websites, I would hate to prick your bubble by telling you that regular, honest websites are being used against us, that browsers all have to run the all-singing-all-dancing-all-executing flash and java, and that anti-virus is not only ineffective but doesn’t even target this type of attack. Knowing these things might be a little distressing, so I won’t tell you.

At least my bank is secure, you say, it’s got SSL and everything! Well, you’d be twisting my arm into telling you, embarrassedly, that SSL as implemented in browsers is very neatly broken, that all we needed was one of the Certified Trusted Authority Parties to go bad for us all to wake up with a butt-ache, and that we now have not one but at least three such bad parties, not to mention all the MiM magic and DNS trickery that you don’t want to hear about anyway.

I will tell you however that the only defense is two-pronged: not allowing websites to script – which is a pain – and damage containment, which is not exactly perfect.

Let us say you already knew all of this, but no longer cared, because there was an even greater danger on the web: the total breach of privacy containment that is social media and tracking cookies, which all want to integrate with and track your every move through the web so that usage and mood patterns can be profiled, tracked, bought and sold. Doesn’t matter, right? You’re logged in to Facebook, Linkedin and Google+ and get all your links from there, so you have your own personal filter which only shows you stuff you care about, turning a blind eye towards anything you need to know that comes from left field, suckering you into giving away your privates every time you hit “like” or “add to friends list”.
pacman ghost

In a post-panopticlick world there is really only one lesson to be learned: care about your privacy, because no one else will give a damn. It’s not about whether you have anything to hide, it’s about the accumulation of your private info by crooks to use as they please!

Oh, and crooks include the great people at Facebook, Microsoft and Google; that’s why I recommend disabling all tracking features that come in the guise of “better speed” or “increased security”. The pictures below show how to do it in chromium and firefox.

chromium dialog, how to disable google tracking

Ok, that was Goggle’s Chromium – note all the unchecked checkboxen… disabling prefetch, malware blocklists and suggestion services, all of which send far too much data to Google’s scaredy-ass all-seeing eye. Aaaand here’s Firefox:

fuckfox prefetch prevention

Mhm that’s pointing the browser at about:config, searching for prefetch and setting it to false. Yep.

Those pictures show you how to find and set better privacy features in your browser. Safari users are up shit creek, but unlike the Internet Explorer users, they at least have a paddle! Great stuff, what next?

Keep a “secure” browser to browse with, one that you don’t log into anything personal with… and make this your default browser!

What is a “secure” browser? Let’s talk a little about that without insulting anyone’s intelligence, shall we?
First off, I’m putting the word “secure” in, uhm, quotes, because, uhm, the browser will never be secure, be it ever so protected. Ok, moving on: you want to be running noscript and/or adblock and/or disconnect and/or noflash, depending on whether you are a Mac, Windows (but then you are at game over already) or Linux user with Firefox or Chromium (NOT IExploder, that shit is scary!).

All of these tools make it so the sites you visit don’t get to run code on your machine. The end result is that sites are marginally less usable, but that popup popunder popver poopop ads and scary tracker/botnet/mal stuff doesn’t run automagically. Here are some links:
noscript
adblock
disconnect
Flashblock
– Have you heard about sandboxing?

java and flash denied in chromium
Chromium is getting the right idea by killing java and flash from the get-go. This should be the default, but you have to make it so!

You should definitely be cloaking your user-agent, as it is a useless yet very telling string your browser hoes out to all. Do this with the User-Agent Modifying Extension.

Also, you may find Torbutton and Foxyproxy interesting; the real privacy is in bouncing your traffic through things that wash it for you. PuTTY or any other decent ssh client will create a proxy for you:

`ssh -D8080 me@myshellserver`

and you can point your browser’s SOCKS proxy settings to localhost:8080 putting all your traffic through your shell server!
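
To convince yourself the traffic really takes the detour, ask a what-is-my-address service with and without the proxy (the service name here is just an example):

curl http://ifconfig.me                                   # shows your own address
curl --socks5-hostname localhost:8080 http://ifconfig.me  # should show the shell server's address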

sshnuke

The best has yet to come mua ha ha ha.

consolekit is evil

Wednesday, December 1st, 2010

… and hates me

I should really tell you about the DLD seminar three weeks ago, or the PARANOIA security conference, or even just that Adobe should be considered harmful, but things have been crazy, and between this and electromagnetism I haven’t had the mind space. After the 6th of December, I promise I’ll come back with pictures and relations and maybe even sounds (I have notes, don’t worry, I’ll remember).

On the other hand, here’s a nasty hack to kill console-kit-daemon, which has a really nasty way of polluting the PID space… and annoys me enough to warrant a public humiliation as well. What does it do, and why? Who cares what it does, it’s doing it poorly enough to draw attention to itself! So here’s how to kill it:

root@wasp:/usr/sbin# dpkg -S console-kit-daemon
consolekit: /usr/sbin/console-kit-daemon

DON’T try to purge the package, because that’s just one end of a really big ugly yarn of unnecessary dependency pain that I’d like to spare you…

DON’T try to replace /usr/sbin/console-kit-daemon with your own stub… turns out dbus autostarts this “service”, and that approach will make dbus block your (ssh) session when you log in… not forever, but that’s even more annoying than the pid pollution.

Instead, Debian bugs #544147 and #544483 clued me in to the following hack:

cp /usr/share/dbus-1/system-services/org.freedesktop.ConsoleKit.service \
  /usr/local/share/dbus-1/system-services/
echo Exec=/bin/false >> /usr/local/share/dbus-1/system-services/org.freedesktop.ConsoleKit.service

which is a two-liner, and would have been less ugly and easier to debug if it hadn’t been for the fine hubris of the freedesktop dudes…
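
To check that the hack took, kill the daemon that is already running and make sure it stays gone after your next login – a quick sketch:

pkill -f console-kit-daemon     # get rid of the one that is already running
pgrep -fl console-kit-daemon    # after logging in again: no output means no more pid pollution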

Free society conference – my experience

Tuesday, November 9th, 2010

Until the very last minute I was unsure whether I’d make it to FSCONS, the Free Society Conference and Nordic Summit. I did not know what to think of it, despite gentle pushing from someone who was set to speak at the conference. Three days later, with the event somewhat filtered in my mind, there is no doubt that it was well worth the opportunity costs and then some.

I'm going to FSCONS 2010!

My first impression while browsing the event programme was that there was no way to attend all the interesting talks! An insurmountable problem, and I hadn’t even gotten there: my meat could not be in two to three places at the same time, while my mind could not rationally choose away interesting unknowns… so I opted to leave it to a vague notion of chance and intent.

What I hadn’t expected was the powerful impact that the people attending would have on me. Cyber heroes and heroines, freedom fighters, game changers, inventors, uncelebrated cryptoanarchists and everything makers were some of those that I got to know, that engaged me in dialogue, that dared discuss openly some (most?) of the hardest problems that we, the world, are facing today. With the full intent to do something about these problems.


backtrack to install a backtrack

Thursday, September 9th, 2010

BackTrack is your daddy.
BackTrack accepts no compromises, yet it is all compromising.
Because really, when was the last time you *didn’t* need those auditing tools? That penetration suite? Total privacy to break other people’s privacy? All that and a packet of crisps wrapped with razor-sharp menus – it’s the kind of stuff you can only dream of on core. And I hear Fedora Core is the shitzitz now, adopting new [1] and exciting [2] features. Oh hey, debian doesn’t have binary deltas for packages *yet* [3], but we’ve been talking about it way longer than those dudes have.

Anecdotally, I spilled a glass of water on my laptop the other day. Naturally, the glass went half empty in an instant: my poor lovely x41, I screamed. As it turns out, the laptop casing made sure all the water was rather cleverly funneled into the x41’s only 1.8″ hard drive, which proceeded to go completely bananas (presumably due to rust, because clean water doesn’t conduct, right?). The data? I believe trusty old dd_rescue did rescue at least part of it, but I then misplaced the image file somewhere.

The system?
It was a thrifty, untrusted yet trusty Windows XP install that I’d kept on there, and which had had the mercy of actually booting every time since I bought the machine, despite having been licked by more than its fair share of virii, malignant updates and accidental hard resets. Most of the programs I ran were portable [4] versions, so all I lost were some documents and lots of music [5].

The hardware?
I disassembled and meticulously dried every little component, and in the end only the disk drive was bust. The 1.8″ IDE drive that is impossibly, ridiculously expensive to replace ($5 per GB? What the foo? Shut up!). Still, I needed the laptop, so I explored booting from USB. Despite (misguided?) efforts I haven’t bloody well been able to boot windows off USB, so I bootstrapped BackTrack 3 instead and Bob’s your uncle.

I mean really, I think I had that thing running like that for three months before I started missing stuff like apt. I didn’t really mind starting fresh every boot; I even invented a whole little schpiel for getting online as fast as possible, none of that Network Manager madness.
Persistent settings are all right in BT3 but booting into RAM is a lot more fun. After the first 3 seconds of boot you can pull the USB plug, everything goes zippety fast and your footprint is nada. Only thing that can get your ass is a cold boot attack.

BT3 is real cool and still a good recommend if you want to wardrive and do proper wifi phreaking due to the embedded injection drivers, but in the end I wanted new libs, a decent compiler and window dressing, and so I rolled BackTrack 4.

Granted, kde sucks, but if I cared enough I’d switch to openbox or something awesome in like 4 minutes. These days all I need is a shell and a browser.

For those of you fortunate enough to have a hard drive, BT4 ships with an install script to turn your system into a permanent BackTrack fixture. It’s based on Ubiquity, but dd’ing off the USB and onto your disk drive might be better if you’re interested in being able to boot your system into RAM – because, I dunno, you want to do some advanced powersaving [6], or want to kill your system without worrying about unclean shutdowns, or want to maximise the life span of your solid-state device by nearly never writing to it.
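
The dd route is as blunt as it sounds. A sketch, assuming the USB stick shows up as /dev/sdb and the internal drive as /dev/sda – triple-check with fdisk -l first, because this flattens everything on the target:

dd if=/dev/sdb of=/dev/sda bs=4M
sync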

For my own part there was a happy ending on DealExtreme, as they ship IDE44 to CompactFlash interfaces that fit in the x41 1.8″ bay… which leads to a whole slew of unexplored possibilities thaaat (drum rolls) I will explore in the next installment of how to break your machine.

BackTrack 4 R1 has been released :-) [7]. Anyone know where I can score the BlackHat Edition?

[1] http://fedoraproject.org/wiki/Releases/FeaturePresto
[2] http://fedoraproject.org/wiki/Features/systemd
[3] http://samba.anu.edu.au/rsync/rsync-and-debian/rsync-and-debian.html
[4] http://portableapps.com/
[5] http://drownedinsound.com/community/boards/music/4179554
[6] http://kacper.blog.linpro.no/archives/13
[7] http://www.backtrack-linux.org/

EDD DoS detection and DLD

Friday, April 16th, 2010

Hi all,
a short note about the Norwegian data surveillance directive that is up for passage into law these days. This directive, “Datalagringsdirektivet” (the Data Retention Directive), is the single most harmful threat to the general public’s privacy, while being completely ineffective at stopping the bad guys it’s meant to target. Protests last Saturday in front of the parliamentary building – which yours truly attended – featured politicians and individuals from all ends of the political spectrum. Read more at Stopp DLD.

On to other things,

getting DDoSed sucks, as some of my colleagues found out recently. Wouldn’t it be great if we could detect DDoSes as they come in over the wire? I mean, besides when all of Nagios goes code red on us?

Well, I’ve written the little program that could. It’s not quite there yet (too few hours in the day) but the basic principles are fleshed out, and they go a little something like this:

There is a mathematical and a physical notion of entropy. To put it bluntly, the entropy of a piece of information is the length of its shortest possible representation. There’s a theorem to the effect that if you receive a great many messages but, put together, they don’t amount to much actual information, then probably someone is fugging with you. We can use this to detect anomalies in network traffic, too.

This notion of entropy is what EDD, the Entropy Distributed Denial of Service Detector [tarball], uses to classify a packet stream as bollocks, or not.
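
The idea is easy to play with even without EDD: estimate the Shannon entropy of a chunk of captured traffic and watch it sink when the stream turns repetitive, as in a naive flood of identical packets. A back-of-the-envelope sketch over a pcap file (the filename is an example, and the pcap headers skew the number a little):

# bits of entropy per byte: low for streams of near-identical packets, closer to 8 for mixed or encrypted traffic
od -An -tu1 -v capture.pcap | tr -s ' ' '\n' | grep -v '^$' \
  | sort -n | uniq -c \
  | awk '{ total += $1; count[NR] = $1 }
         END { for (i in count) { p = count[i] / total; H -= p * log(p) / log(2) }; print H, "bits per byte" }'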

EDD is still pre-alpha software, which means that it’s a little too simplistic to tell you anything beyond a mere “Something’s up”, but I’d like you to test it in your setups with the understanding that the program is still rough around the edges.

commit bc2f4df34745e4c422a17e70aac271bc930b9f1a
Author: Kacper Wysocki
Date: Fri Apr 16 18:18:37 2010 +0200

EDD now classifies simple SYN floods successfully.

* faster and simpler simple_entropy
* reads from pcaps (-r)
* configurable treshold (-e)
* configurable window size (-w)
* profile counting (-t)
* edd self-tests (-E)
* better TODO ideas

Try it out and let me know, and send me pcaps of your DDoS attacks and false positives.