Posts Tagged ‘read!’

My Varnish pet peeves

Sunday, August 23rd, 2015

I’ve been meaning to write a blog entry about Varnish for years now. The closest I’ve come is to write a blog about how to make Varnish cache your debian repos, make you a WikiLeaks cache, and to release Varnish Secure Firewall, though without a word on this blog. So? SO? Well, after years it turns out there is a thing or two to say about Varnish. Read on to find out what annoys me and the people I meet the most.

varnish on wood

Although you could definitely call me a “Varnish expert” and even a sometimes contributor, and I do develop programs, I cannot call myself a Varnish developer because I’ve shamefully never participated in a Monday evening bug wash. My role in the Varnish world is more… operative. I am often tasked with helping ops people use Varnish correctly, justify its use and cost to their bosses, defend it from expensive and inferior competitors, and sit up long nights with load tests just before launch days. I’m the guy who explains the low risk and high reward of putting Varnish in front of your critical site, the guy who makes it actually be low risk, and the first guy on the scene when the code has just taken a huge dump on the CEO’s new pet Jaguar. I am also sometimes the guy who tells these stories to the Varnish developers, although of course they also have other sources. The consequence of this .. lifestyle choice .. is that what code I do write is either short and to the point or .. incomplete.

bug wash

I know we all love Varnish, which is why after nearly 7 years of working with this software I’d like to share with you my pet peeves about the project. There aren’t many problems with this lovely and lean piece of software, but those that are there are sharp edges that pretty much everyone stubs a toe or snags their head on. Some of them are specific to a certain version, while others are “features” present in nearly all versions.

And for you Varnish devs who will surely read this: I love you all. I write this critique of the software you contribute to knowing full well that I haven’t filed bug reports on any of these issues, and therefore I too am guilty of contributing to the problem and not the solution. I aim to change that starting now :-) Also, I know that some of these issues are better lived with than fixed, the medicine being more hazardous than the disease, so take this as all good cooking; with a grain of salt.

Silent error messages in init scripts

Some genius keeps inserting 1>/dev/null 2>&1 into the startup scripts on most Linux distros. This might be in line with some wacko distro policy, but it makes conf errors, and in particular VCL errors, way harder to debug for the common man. Even worse, the `service varnish reload` script calls `varnish-vcl-reload -q`; that’s q for please-silence-my-fatal-conf-mistakes, and the best way to fix this is to *edit the init script and remove the offender*. Mind your p’s and q’s eh, it makes me sad every time, but where do I file this particular bug report?
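To find the offenders on your own rig, something like this will turn them up (paths vary by distro, so these are illustrative):

    # look for the silencers before removing them by hand
    grep -rn '/dev/null' /etc/init.d/varnish*
    grep -n -- '-q' "$(which varnish-vcl-reload)"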

silent but deadly

debug.health is still not adequately documented

People go YEARS using Varnish without discovering `watch varnishadm debug.health`. Not to mention that it’s anyone’s guess that this has to do with probes, and that there are no other debug.* parameters, except for the totally unrelated debug parameter. Perhaps this was decided to be dev-internal at some point, but the probe status is actually really useful in precisely this form. debug.health is still absent from the command list and the man pages, while in 4.0 some probe status and backend info has been put into varnishstat, which I am surely not the only one to be very thankful for indeed.
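For the impatient, this is the gem in question, with its Varnish 4.0 counterparts (a sketch from memory, so verify the command names against your version):

    # Varnish 3.x: live probe status, absent from the documented command list
    watch varnishadm debug.health

    # Varnish 4.0: probe and backend status have surfaced officially
    varnishadm backend.list
    varnishstat -1 | grep -i backend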

Bad naming

Designing a language is tricky.


  • Explaining why purge is now ban, and that what is now purge used to be something else, is mind-boggling. This issue will be fixed in 10 years when people are no longer running Varnish 2.1 anywhere. Explaining all the three-letter acronyms that start with V is just a gas.
  • Showing someone ban("req.url == " + req.url) for the first time is bound to make them go “oh” like a raccoon just caught sneaking through your garbage (see the sketch after this list).
  • Grace and Saint mode… that’s biblical, man. Understanding what it does and how to demonstrate the functionality is still for Advanced Users, explaining this to noobs is downright futile, and I am still unsure whether we wouldn’t all be better off just enabling it by default and forgetting about it.
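For the record, here is roughly how the two invalidation verbs look side by side in 4.0 VCL; a sketch, not a complete policy (put an ACL in front before copying this anywhere):

    sub vcl_recv {
        if (req.method == "PURGE") {
            # drops exactly the one object this request hashes to
            return (purge);
        }
        if (req.method == "BAN") {
            # lazily invalidates every cached object matching the expression
            ban("req.url == " + req.url);
            return (synth(200, "Banned"));
        }
    }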
I suppose if you’re going to be awesome at architecting and writing software, it’s going to get in the way of coming up with really awesome names for things, and I’m actually happy that’s still the way they prioritize what gets done first.

Only for people who grok regex

Sometimes you’ll meet Varnish users who do code but just don’t grok regex. It’s weak, I know, but this language isn’t for them.
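To illustrate, here’s the kind of one-liner VCL is full of, pure line noise unless you read PCRE (the utm_ parameters are just an example):

    sub vcl_recv {
        # strip tracking query parameters so they don't fragment the cache
        set req.url = regsuball(req.url, "([?&])utm_[a-z]+=[^&]*", "\1");
    }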

Uncertain current working directory

This is a problem on some rigs which have VCL code in stacked layers, or really anywhere where it’s more appropriate to call the VCL a Varnish program, as in “a program written for the Varnish runtime”, rather than simply a configuration for Varnish.

Uncertainty

You’ll typically want to organize your VCL in such a way that each VCL is standalone with if-wrapped rules, and they’re all included from one main VCL file, stacking all the vcl_recv’s and vcl_fetches.

Because distros don’t agree on where to put varnishd’s current working directory (which happens to be wherever it was launched from, instead of always chdir $(dirname $CURRENT_VCL_FILE)), you can’t reliably specify include statements with relative paths. This forces us to use hardcoded absolute paths in includes, which is neither pretty nor portable.
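The usual shape of such a layered setup, with the hardcoded absolute paths this peeve forces on us (file names made up for the example):

    # /etc/varnish/main.vcl
    include "/etc/varnish/sites/foo.example.com.vcl";
    include "/etc/varnish/sites/bar.example.com.vcl";

    # /etc/varnish/sites/foo.example.com.vcl - standalone and if-wrapped
    sub vcl_recv {
        if (req.http.host == "foo.example.com") {
            # rules for this site only; the stacked vcl_recv's fall through
        }
    }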

Missing default director in 4.0

When translating VCL to 4.0 there is no longer any language for director definitions, which means they are done in vcl_init(), which means your default backend is no longer the director you specified at the top, which means you’ll have to rewrite some logic lest it bite you in the ass.

Also, director.backend() as set in backend_hint has no string representation, so you cannot do old-style name comparisons; that is, backends are first-class objects but directors are another class of objects entirely.
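In 4.0 terms that means the directors vmod and an explicit backend_hint; a minimal sketch of the rewrite (host names made up):

    vcl 4.0;
    import directors;

    backend web1 { .host = "web1.example.com"; }
    backend web2 { .host = "web2.example.com"; }

    sub vcl_init {
        new cluster = directors.round_robin();
        cluster.add_backend(web1);
        cluster.add_backend(web2);
    }

    sub vcl_recv {
        # no implicit default director anymore: point at it yourself
        set req.backend_hint = cluster.backend();
    }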

the missing director

VCL doesn’t allow unused backends or probes

Adding and removing backends is a routine ordeal in Varnish.
Quite often you’ll find it useful to keep backup backends around that aren’t enabled, either as manual failover backups, because you’re testing something or just because you’re doing something funky. Unfortunately, the VCC is a strict and harsh mistress on this matter: you are forced to comment out or delete unused backends :-(

Workarounds include using the backends inside some dead code or constructs like

	set req.backend_hint = unused;
	set req.backend_hint = default;

It’s impossible to determine how many bugs this error message has avoided by letting you know that the backend you just added, er yes that one sir, isn’t in use; but you can definitely count the number of Varnish users inconvenienced by having to comment out that backend they just temporarily removed from the request flow.

I am sure it is wise to warn about this, but couldn’t it have been just that, a warning? Well, I guess maybe not, considering distro packaging is silencing error messages in init and reload scripts..

To be fair, this is now configurable in Varnish by setting vcc_err_unref to false, but couldn’t this be the default?
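Relaxing it looks like this; per the errata below, the parameter has existed since Varnish 3:

    # at startup
    varnishd -p vcc_err_unref=off -f /etc/varnish/default.vcl

    # or at runtime, before the next vcl.load
    varnishadm param.set vcc_err_unref off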

saintmode_threshold default considered harmful


If many different URLs keep returning bad data or error codes, you might conceivably want the whole backend to be declared sick instead of growing some huge list of sick URLs for this backend. What if I told you your developers just deployed an application which generates 50x error codes, triggering your saint mode for an infinite number of URLs? Well, then you have just DoSed yourself, because you hit this threshold. I usually enable saint mode only after giving my clients a big fat warning about this one, because quite frankly it comes straight out of left field every time. Either saint mode is off, or the threshold is Really Large™ or even ∞, and only in some special cases do you actually want this set to an actual number.
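For reference, the Varnish 3 incantation this section is about; note that every distinct URL tripping it counts toward saintmode_threshold:

    sub vcl_fetch {
        if (beresp.status >= 500) {
            # mark this object sick for this backend for 20s and try again;
            # enough distinct sick objects and the whole backend goes sick
            set beresp.saintmode = 20s;
            return (restart);
        }
    }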

Then again, maybe it is just my clients and the wacky applications they put behind Varnish.

What is graceful about the saint in V4?

While we are on the subject, grace mode being the most often misunderstood feature of Varnish: the thing has changed so radically in Varnish 4 that it is no longer recognizable to users, and they often make completely reasonable but devastating mistakes trying to predict its behavior.

To be clear on what has happened: saint mode is deprecated as a core feature in V4.0, while the new architecture now allows a type of “stale-while-revalidate” logic. A saintmode vmod is slated for Varnish 4.1.

But as of 4.0, say you have a bunch of requests hitting a slow backend. They’ll all queue up while we fetch a new object, right? Well yes, and then they all error out when that request times out, or if the backend fetch errors out. That sucks. So let’s turn on grace mode, and get “stale-while-revalidate” and even “stale-if-error” logic, right? And send If-Modified-Since headers too, sweet as.

Now that’s gonna work when the request times out, but you might be surprised that it does not when the request errors out with 50x errors. Since beresp.saintmode isn’t a thing anymore in V4, those error codes are actually going to knock the old object outta cache, and each request is going to break your precious stale-while-error until the backend probe declares the backend sick and your requests become grace candidates.

Ouch, you didn’t mean for it to do that, did you?

The Saint

And if, gods forbid, your apphost returns 404s when some backend app is not resolving, bam, you are in a cascading hell fan fantasy.

What did you want it to do, behave sanely? A backend response always replaces another backend response for the same URL – not counting vary headers. To get a poor man’s saint mode back in Varnish 4.0, you’ll have to return (abandon) those erroneous backend responses.
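In VCL that is about this much; a sketch of the poor man’s saint mode:

    sub vcl_backend_response {
        if (beresp.status >= 500) {
            # don't let the error replace the stale object in cache;
            # grace keeps serving the old copy instead
            return (abandon);
        }
    }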

Evil grace on unloved objects

For frequently accessed URLs grace is fantastic and will save you loads of grief, and those objects can have large grace times. However, rarely accessed URLs suffer a big penalty under grace, especially when they are dynamic and meant to be updated from the backend. If a URL is meant to be refreshed from the backend every hour, and Varnish sees many hours between each access, it’s going to serve up that many-hour-old stale object while it revalidates its cache.
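One way to keep grace from serving truly ancient objects is to cap how stale a hit may be in vcl_hit; the 10s here is arbitrary:

    import std;

    sub vcl_hit {
        if (obj.ttl >= 0s) {
            return (deliver);    # fresh: business as usual
        }
        if (std.healthy(req.backend_hint) && obj.ttl + 10s > 0s) {
            return (deliver);    # at most 10s stale, revalidate in background
        }
        return (fetch);          # older than that: the client waits for the backend
    }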

stale while revalidate
This diagram might help you understand what happens in the “200 OK” and “50x error” cases of graceful request flow through Varnish 4.0.

Language breaks on major versions

This is a funny one, because the first major language break I remember was the one that I caused myself. We were making security.vcl and I was translating rules from mod_security, and having trouble with it because Varnish used POSIX regexes at the time. I was writing this really godawful script to translate PCRE into POSIX when Kristian, who conceived of security.vcl, went to Tollef (both were working in the same department at the time) and asked in his classical brook-no-argument kind of way “why don’t we just support Perl regexes?”.
Needless to say, (?i) spent a full 12 months afterwards cursing myself while rewriting tons of nasty client VCL code from POSIX to PCRE and fixing occasional site-devastating bugs related to case-sensitivity.

Of course, Varnish is all the better for the change, and would get nowhere fast if the devs were to hang on to legacy, but there is a lesson in here somewhere.


So what's a couple of sed 's/req.method/req.request/'s every now and again?
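The mechanical part of a 3-to-4 conversion really is mostly renames, something like this (incomplete, and no substitute for the upgrade notes):

    # a few of the VCL 3 -> 4 renames
    sed -i -e 's/req\.request/req.method/g' \
           -e 's/req\.backend\b/req.backend_hint/g' \
           -e 's/vcl_fetch/vcl_backend_response/g' *.vcl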
This churn is actually the main reason I created the VCL.BNF. For one, it got the devs thinking about the grammar itself as an actual thing (which may or may not have resulted in the cleanups that make VCL a very regular and clean language today), but my intent was to write a parser that could parse any version of VCL and spit out any other version of VCL, optionally with pruning and pretty-printing of course. That is still really high on my todo list. Funny how my clients will book all my time to convert their code for days, but will not spend a dime on me writing code that would basically make the conversion free and painless for everyone forever.

Indeed, most of these issues are really hard to predict consequences of implementation decisions, and I am unsure whether it would be possible to predict these consequences without actually getting snagged by the issues in the first place. So again: varnish devs, I love you, what are your pet peeves? Varnish users, what are your pet peeves?

Errata: vcc_err_unref has existed since Varnish 3.

Grsecurity on the desktop

Friday, August 8th, 2014

In my last post I presented Grsecurity kernel packages for Debian Wheezy. Now would be a good time to review the hows and whys of Grsecurity, so you can decide if it is something you need.

Today we will quickly look at Grsecurity’s viability and impact on a typical desktop or laptop.

Why do I need Grsecurity on the desktop?

  • Often run insecure code? Limit its impact on the system.
  • Employ chroots and containers? Enforce stricter containment.
  • Connect to hostile networks? Reduce and mitigate impact of exploitation attacks.
  • Allow others to use your system? Increase monitoring and control over your machine.

Or perhaps you choose to run Grsecurity on your laptop simply for the sheer paranoia-factor and to impress friends.

How does Grsecurity behave on the desktop?

In addition to the invisible yet significant hardening efforts against kernel exploitation, there are some changes that an experienced user will notice immediately:

  • you need root to dmesg
  • grsec reports denied resource oversteps in dmesg
  • top only shows this user’s running processes
  • mappings and other sensitive information in /proc are only available to the process owner
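The first two are easy to demonstrate; the exact wording of the refusal varies with your dmesg version, and the sysctl knob assumes grsec was built with sysctl support:

    $ dmesg
    dmesg: read kernel buffer failed: Operation not permitted

    # root can relax the restriction at runtime, if the kernel allows it:
    # sysctl kernel.grsecurity.dmesg=0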

The very few programs that depend on a specific kernel version, or that read /proc/kcore or write directly to kernel constructs, will not work under Grsecurity.

And of course, there is the feeling of solidity and the sight of the reported kernel version:

~# uname -a
Linux amaeth 3.2.60-grsec-lied #1 SMP Wed Aug 6 17:40:27 CEST 2014 x86_64 GNU/Linux


Thursday, August 18th, 2011

In my time as an application programmer, developer and designer, a brief stint as team lead and project manager,
as well as my time as a systems consultant, I have witnessed first-hand, and also heard many credible tales of, systematic failure that rival any of the stories on The Daily WTF. My colleagues and I have seen so many examples of bad design, bad code and systemic failure that we have considered writing a book titled How To Write Ugly Code.

I have also read the Texas Instruments Chainsaw massacre and personally met Gomez while debugging applications.

My speciality and my interest lie in diagnostics and the analysis of problems, as well as system security, and my experience has shown that one can venture to say something about the qualitative difference between design methodologies and what they mean for the end result.

Firstly, however, it is worth noting that the software industry as a whole has one primary problem: the time pressure to deliver new features in the face of mounting expectations.

This pressure to deliver is seen as the driving force behind industry progress and ever leaner, more economical applications. Contrary to this belief, however, I have evidence that it creates incentives for sloppy work, overengineering and poor consideration of the problem domain. It seems the process itself rewards poor application design, regardless of development methodology.

Large corporate and government tenders, which affect many hundreds of thousands of people’s lives, get bid on by large software houses that believe they can deliver everything (at low risk: if they cannot deliver, it is very hard for the customer to contest this against a major software house).

What we get by and large out of this process are bloated top-down applications designed by people who do not understand the (whole) problem, leading to misguided decisions for such things as

  • choice of platform and language
  • choice of coding standards (check out Systems Hungarian if you don’t believe me)
  • programming methodology
  • communication tools: source control, ticket and forum tools for developers and system managers
  • Not Invented Here-practices
  • monkey-coding by people whose talents could be employed to solving the problem

What usually passes for “agile” development causes frequent, ineffective blame-game meetings.
Unit-test-driven development frequently causes micromanagement of program details and inflexible designs.
… all these methodologies were designed to improve programs, not bog them down! Why then do they cause so much breakage?

The pressure to deliver requires the application developer to prefer large swathes of ready-made library code and a high level of abstraction to allow her to meet deadline demands.

A high abstraction level causes low debuggability and poor performance.
Low debuggability, because bugs are by definition conditions caused by circumstances unforeseen by the application developer. Abstractions are employed by the developer to hide implementation details to aid clarity and speed of application development, at the cost of debuggability.

The very tools and abstractions employed by the application developer create the frame through which the developer can see the circumstances of her design and code. Bugs most frequently occur on the boundaries between abstractions, where the developer has no possibility to foresee these circumstances. Furthermore, in a system with a passably high level of abstraction there is a whole stack of hidden details which must be traced and unwound to discover the bug. Therefore, every additional layer of abstraction obscures the debugging process.

Debuggability and algorithmic simplicity are key to achieving optimal performance. In other words, if we have a clear problem statement it is possible to achieve performance. If there is no clear problem statement, and the program is further muddled by abstractions and interactions, there is no effective path to performance.

Any artist will tell you that the most interesting, creative and innovative work comes out of having a stress-free, playful environment. Since innovative coding is a creative activity, the same applies to developing applications, something that game developer companies and creative shops have known for years, and that behemoths like Google and Microsoft have picked up on, reinvesting up to 15% of their revenue into research and development and getting that part right, as witnessed by the sheer output of innovation.

If there is a clear path to solving these fundamental problems of IT, it is putting the people who know what they are doing in the pilot seat: enabling developers to choose for themselves not only toolchains, methodology and communication tools, but also engaging the systems thinkers in creating the specifications and architecture of the systems they are going to implement. The good news is that as customers and managers get savvy to this method of achieving IT success, we are going to see more developer autonomy and less spectacular fails.

security or privacy? both please!

Monday, July 11th, 2011

Hello readers, fellow bloggers, fell trolls… it’s been a while.

Happy Wheel

If you didn’t know that just following a link could lead to loss of your identity, virus infection and the unknowing participation in denial of service sieges, distribution of child pornography and letting attackers break through your company’s firewalls (not to mention immanentizing the eschaton), you could be forgiven for not paying attention to the happy happy field of information security.

If you knew this rather distressing fact about the web of links, but thought you could defend yourself with an up-to-date antivirus, a current browser and the avoidance of “shady” websites, I would hate to prick your bubble by telling you that regular honest websites are being used against us, that browsers all have to run the all-singing-all-dancing-all-executing Flash and Java, and that anti-virus is not only ineffective but doesn’t even target this type of attack. Knowing these things might be a little distressing, so I won’t tell you.

At least my bank is secure, you say, it’s got SSL and everything! Well, you’d be twisting my arm into telling you, embarrassedly, that SSL as implemented in browsers is very neatly broken: all we needed was one of the Certified Trusted Authority Parties to go bad for us all to wake up with a butt-ache, and we now have not one but at least three such bad parties, not to mention all the MiM magic and DNS trickery that you don’t want to hear about anyway.

I will tell you however that the only defense is two-pronged: not allowing websites to script – which is a pain – and damage containment, which is not exactly perfect.

Let us say you already knew all of this, but no longer cared because there was an even greater danger on the web: the total breach of privacy containment that is social media and tracking cookies, which all want to integrate and track your every move through the web so that usage and mood patterns can be profiled, tracked, bought and sold. Doesn’t matter, right? You’re logged in to Facebook, LinkedIn and Google+ and get all your links from there, so you have your own personal filter which only shows you stuff you care about, turning a blind eye towards anything you need to know that comes from left field, suckering you into giving away your privates every time you hit “like” or “add to friends list”.
pacman ghost

In a post-Panopticlick world there is really only one lesson to be learned: care about your privacy, because no one else will give a damn. It’s not about whether you have anything to hide, it’s about the accumulation of your private info by crooks to use as they please!

Oh, and crooks include the great people at Facebook, Microsoft and Google; that’s why I recommend disabling all tracking features that come in the guise of “better speed” or “increased security”. Pictures below show how to do it in Chromium and Firefox.

chromium dialog, how to disable google tracking

Ok, that was Goggle’s Chromium – note all the unchecked checkboxen… disabling prefetch, malware blocklists and suggestion services, all of which are sending far too much data to Google’s scaredy-ass all-seeing eye. Aaaand here’s Firefox:

fuckfox prefetch prevention

Mhm that’s pointing the browser at about:config, searching for prefetch and setting it to false. Yep.
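If you’d rather do it once in a user.js than click around about:config, the prefetch prefs look like this (pref names as of Firefox at the time of writing, so verify in your version):

    // user.js: disable link prefetching and DNS prefetching
    user_pref("network.prefetch-next", false);
    user_pref("network.dns.disablePrefetch", true);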

Those pictures show you how to find and set better privacy features in your browser. Safari users are up shit creek, but unlike the Internet Explorer users, they at least have a paddle! Great stuff, what next?

Keep a “secure” browser to browse with, one that you don’t log into anything personal with.. and make this your default browser!

What is a “secure” browser? Let’s talk a little about that without insulting anyone’s intelligence, shall we?
First off, I’m putting the word “secure” in uhm quotes, because uhm the browser will never be secure, be it ever so protected. Ok, moving on: you want to be running noscript and/or adblock and/or disconnect and/or noflash, depending on whether you are a Mac, Windows (but then you are at game over already) or Linux user, with Firefox or Chromium (NOT IExploder, that shit is scary!).

All of these tools make it so that the sites you visit don’t get to run code on your machine. The end result is that sites are marginally less usable, but that popup popunder popver poopop ads and scary tracker/botnet/mal stuff don’t run automagically. Here are some links:
  • Have you heard about sandboxing?

java and flash denied in chromium

Chromium is getting the right idea by killing java and flash from the get-go. This should be the default, but you have to make it so!

You should definitely be cloaking your user-agent, as it is a useless yet very telling string your browser hoes out to all and sundry. Do this with the User-Agent Modifying Extension.

Also, you may find Torbutton and FoxyProxy interesting; the real privacy is in bouncing your traffic through things that wash it for you. PuTTY or any other decent ssh client will create a proxy for you:

`ssh -D8080 me@myshellserver`

and you can point your browser’s SOCKS proxy settings to localhost:8080, putting all your traffic through your shell server!


The best has yet to come mua ha ha ha.

Free society conference – my experience

Tuesday, November 9th, 2010

Until the very last minutes I was unsure whether I’d make it to FSCONS, the Free Society Conference and Nordic Summit. I did not know what to think of it, despite gentle pushing from one of those set to speak at the conference. Three days later, and with the event somewhat filtered in my mind, there is no doubt that it was well worth the opportunity costs and then some.

I'm going to FSCONS 2010!

My first impression while browsing the event programme was that there was no way to attend all the interesting talks! An insurmountable problem, and I hadn’t even gotten there: my meat could not be in two to three places at the same time, while my mind could not rationally choose away interesting unknowns.. so I opted to leave it to a vague notion of chance and intent.

What I hadn’t expected was the powerful impact that the people attending would have on me. Cyber heroes and heroines, freedom fighters, game changers, inventors, uncelebrated cryptoanarchists and everything makers were some of those that I got to know, that engaged me in dialogue, that dared discuss openly some (most?) of the hardest problems that we, the world, are facing today. With the full intent to do something about these problems.


CPM: Reliable multiuser password management

Monday, August 2nd, 2010

Sup all,
summer is drawing to a close and vacation is definitely over, but I for one welcome the chance to think and act again. Some time ago our managed services department started complaining about various shoddy password management solutions. Truth be told, we already had a good solution, CPM (“Console Password Management”), but the software had fallen into disrepair due to sporadic and untidy updates from its author. A new maintainer was desired, a project to fix the software was decreed, and the result fell into my lap, so to speak.

What sets CPM apart from other password management solutions is that it supports multiple users and goes to great lengths to keep your passwords secure while at the same time being very simple in its design: CPM locks its XML-formatted hierarchical password database in non-swappable private memory (so your passwords don’t get written to disk in cleartext while swapping), and encrypts the database with an arbitrary number of GnuPG public keys.

All this makes CPM quite nice for storing and sharing secrets in a nice curses-based searchable console interface.

For the longest time I’ve been keeping the hundred-odd passwords I can’t remember on notepads and in random text files, thinking that surely I should start employing some sort of password management before I go crazy or my passwords leak. The congruence of my wishes with the scope of this project was too good to pass up, so I picked up CPM and gave it a little love, and the result can be found at

GitHub CPM with CPM packages for debian in the downloads section.

CPM crash course

Requirements: GNU Privacy Guard and a GPG keypair.

First, install CPM:

dpkg -i cpm_0.25~beta-2debian2_amd64.deb

Then, create a password database, adding your key to the recipient list when prompted.


Then, use CPM from the console:
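Assuming the packaged binary is plain cpm and that it creates its default database on first run when none exists yet (both assumptions on my part; check the man page):

    cpm    # first run: pick the GPG recipients for the new database, then unlock it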


CPM should now ask you for your GPG key password and display an empty database.

CPM is controlled with the arrow keys, Enter and some control keys.
Hitting Control-H will bring you to the Help screen which explains the control keys.

By default CPM organises your passwords in a structure of hosts that have several services which may have one or more users. Hosts, services, users and passwords are nodes in the tree and a node is added by hitting Control-A and given an appropriate name.

For instance, if I were to add a password ch1ckens0up to user lolarun on the wiki service of a host, I would create the following node structure:
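Something like this, with myhost standing in for whatever your box is called:

    myhost
    └── wiki
        └── lolarun
            └── ch1ckens0up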

Of course there is no need to follow this anal layout, and you may even change the node structure by editing the template names in CPM by hitting Control-N or modifying the /etc/cpmrc config file.

To have CPM generate a random password for you, hit Control-P.
Your changes are not saved unless you hit Control-W or quit the program by hitting ESC enough times. Quitting through Control-C will not save the database.

Future work includes pushing the package into Debian.

What you don’t get (yet) is a GTK-based GUI, a wrapper to pull the password database out of git and commit it again after modification, or integration with gpg-agent, probably (?) due to a bug in gpgme.

Enjoy this lovely piece of software and leave a comment after testing it!

what’s wrong with IT?

Wednesday, March 24th, 2010

Hold on a bit.
I am a monk of the old order, one of the illuminati of software stacks. By no means a high priest, but like many of my brethren I have been ordained with most of the dirty little secrets over the years since I joined the convent. I never specialized, so I am well read in ancient texts and new work, and I have meditated on subjects ranging from compiling compilers through 3D rendering and artificial intelligence to business processes and value chains. In the constant rush to deliver on the promises of technology I’ve seen projects that are set up for failure even before they start. I’ve seen enough code to write a book detailing, example by example, what you should not do during development.

The secrets are many, and they are complex and hard to grasp out of context, but to misuse an old adage: the truth is simple and it’s out there.

The reason applications fail is that they are complex, but the reason IT fails is that IT people expect the applications to be simple to manage, and the business has a nasty tendency to promote the clueless.

It’s amazing how much money is thrown out the window (and into greedy hands) by large corporations and public departments on hairy overcomplicated blackbox solutions that are supposed to meet simple business needs.

Take databases, for example. It’s easy to argue that the database is part of core business (because all the critical application data ends up in the database), and thus the database must be highly available, highly performant and highly secure. Maybe that’s how the CTOs of the world justify spending millions on monstrous arcane iron black boxes to serve their modest database needs. Annually!

The same needs, if properly understood, could be served at a fraction of the cost while being easier to manage and debug!

This is not just a spiel on Postgres (who cares, it’s open source, it can do all that and more) but about a general protection fault in how technology is driven today.

Another nice example is DNS, which is beyond core business in importance: without domain resolution nearly all infrastructure fails. DNS problems can cause the most obscure failures, simply because applications have no provision for DNS failure. Quite a few IT departments all over the world operate DNS through point-and-click wizards with nothing but a rudimentary understanding of its inner workings. Should they have that understanding? Hell yes: sooner or later it must fail, as everything does, and when it does they have none of the tools to fix it!

Scarier still is that the rest of the world (or very nearly) has standardized on the most baroque and insecure DNS server in existence (BIND me in hell with seven furies burning marks in my skin), a precise analogy to what has happened in the world of e-mail (sendmail will do anything but!). We do this because we follow Best Business Practices, which is the IT analogue of what happens to you when you go through airport security: it is completely ineffective, but it feels safer.

Other examples of the same thing are the proliferation of security products that offer nothing but a smokescreen, the use of gigantic and poorly considered application frameworks, and the abstraction and layering of simple concepts into behemoth object-relation collections.
Humans have a distinctly object-oriented view of the world; all the same, the world is trying to tell us that objects don’t exist in their own right but depend on a whole slew of epiphenomena.

Software rots if it is not maintained.
None of the above are hard problems, regardless of what others might have tried to jam down your throat. Databases are a snooze to work on, DNS and mail should Just Work, and once we have a web application stack going for us we’re not going to consider how it works or what could be better. The difficulty that lies in application infrastructure is a people problem.

We want to buy a shrink-wrapped product and feel value for money without risk.

There is some sort of mass-marketing effect happening where decision makers are best influenced by the greediest hands. We tend to think that the most expensive car has the best value with the least risk, and we are seldom so clear-sighted as to go back on decisions we have already made.

So what’s the fix?

Decision makers should spend more time evaluating options before launching headlong into projects based on best business practices, and they should identify and listen more to the few quiet people who have a clue. The people with clue usually only get to vent their frustrations by making crass jokes about management and the hairiness of the most recent, most painful or most embarrassing failure of technology. These things are not talked about openly, but they should be.

Ideally we should focus on long-term research into the difficult problems of technology: artificial intelligence, algorithms, how to feed the starving and save the world from imminent ecological disaster, quantum computing etc, instead of spending so much time failing at the simple things.

brilliant fools – hackers update

Monday, March 1st, 2010

Cracks are on the rise. So are hacks, and I haven’t posted a thing since December. So what’s up with you?

Good news is that Con Kolivas might have managed to defeat his carpal tunnel and swallow his spite for kernel dev elitism, and is again churning out solid kernel code to improve desktop usability – which the kernel devs aren’t too interested in, and which he is quite right to call out!

Hopefully ubuntu will pick up CK’s scheduling patches, because they are uber, and with ubuntu’s momentum they might topple the stack. Too bad their kernel team can’t follow the churn. Wish I had time to compile it for you or even post some interbench stats with pretty graphs, but there is a BFQ PPA available already, so you can test it out on your ubuntu or debian machine. Bonus? Some random dudes wrote a simple IO scheduler which is included there. Ain’t that reasonable?

All that at least until we have time to write our own OS which gets rid of all suckage, is super flexible and of course incorporates all our favorite patches.

Valgrind is pretty neat, and with that we have working stray ACKs in PRADS, the stealthy host and service detection system. More features and even a new release might come soon, which is to say: when it’s ready. In the meantime I welcome you to try breaking it in any and every way possible.

Also, if everything you do is motivated by monetary gains then you sir are a shame to the human race. Go back to step 1 and have a good day now.

Lame things that suck

Saturday, December 5th, 2009

The world is a difficult place, we know.
Here’s a list of things that suck unnecessarily much:

  • Fink for OSX needs Xcode dev tools.
    Why not provide a gcc/libc-dev package? No idea. General lameness from the fink developers forces you to register at the Apple Developer Connection and download 700MB of Apple crap just to install and compile source-packaged software in fink. LAME.
  • Fink is not Cydia on the iPhone.
    Both are based on apt and dpkg. Both run on OSX. Pooling of efforts, everyone.
  • The very fact that you have to jailbreak an iPhone is ridiculous. Goes double for Xbox and PlayStation chipping, Wii softmods and DS carding. This is vendor lockdown and should be lumped in with the criminally insane – vendors don’t need to take responsibility for user-created apps, but vendors must not stand in the way of software evolution.
  • Tar sands. Corruption.
  • There ain’t enough time to read all the cool web comics. Games? Don’t get me started.
  • To quote a friend and colleague: “every operating system in the world. Pick one and I will tell you how much it sucks.”

kernel coolness, finally!

Thursday, November 5th, 2009

Many things worth blogging about are happening lately! In fact, so many things that there is not enough time to blog about them. Ah, where to begin!

Quickly now:


Ebf0 and I gave a lecture about our fine host detection application at Dagen@IFI (Institute for Informatics, UiO). Presentation available here, at least until we upload it to the project website.

We now know that our Proof of Concept is k00l and Ebf has started the high-performance C implementation.

Kernel hacks

Did you know you’re missing out on cool kernel features? Well, yes you are. Here are some of them:

  • grsecurity : Better security in linux! Fixes thousands of attack vectors for desktops and servers alike
  • compcache: compressed memory swap might sound counter intuitive, but memory is lightning fast compared to disk, and you can cram more apps into compressed memory!
  • nilfs: Ever wished you hadn’t deleted that file 5 seconds ago? Or wasted an hour waiting for a fsck? Log-structured file systems scream write performance, and NILFS aids in data recovery too, as it takes automatic snapshots of your data on every synchronous write. Very sweet.
  • reiser4: Don’t get me started. It still hasn’t reached mainline. Hans be damned. However, reiser4 is still the fastest file system around.
  • ++++ low-latency, preemptible, tickless system, loads of hardware support and lots more!

The upshot?

The -lied patchset is back!

I now track Ubuntu karmic git and I provide i686 packages:

Add the following to your /etc/apt/sources.list :

deb ./

then install the package:

# add the archive key:

gpg --recv-key 089ac586 && gpg --armor --export 089ac586 | sudo apt-key add -

# update package database:

sudo apt-get update

# install the kernel package

sudo apt-get install linux-image-

# check grub or lilo and then reboot into the kernel!

amd64 binaries are coming as soon as I get a chance to compile them. For now grab the karmic git, the patchset and .config and roll your own :-)

What else?

Bifrost is coming along, and might be close to a release soon, and

Multiframe needs a new client release (which I am w0rking on)

oh and I’ve made an

auto-migrate from ISC dhcpd to dnsmasq by script

…with my quick and dirty perl f00 :

Be mindful that it is best for those with a lot of host definitions, and does not support all the ISC syntax out there.

The script is interesting because it concisely illustrates how to make a simple but powerful parser with a minimal amount of lines (and fuss) using the AND-OR Waterfall method.
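The pattern, sketched from memory rather than lifted from the script itself: each line of the parser either matches-and-handles its input line, or falls through to the next alternative.

    #!/usr/bin/perl -n
    # toy AND-OR waterfall over an ISC dhcpd.conf, emitting dnsmasq dhcp-host lines
    /^\s*host\s+(\S+)/                    and do { $host = $1 };
    /hardware\s+ethernet\s+([0-9a-f:]+)/i and do { $mac  = $1 };
    /fixed-address\s+([0-9.]+)/
        and do { print "dhcp-host=$mac,$host,$1\n" };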

Honk and Drop me a comment if you like / hate / fake it~!