Archive for the ‘disintegrate’ Category

what’s wrong with IT?

Wednesday, March 24th, 2010

Hold on a bit.
I am a monk of the old order, one of the illuminati of software stacks. By no means a high priest, but like many of my brethren I have been ordained with most of the dirty little secrets over the years since I joined the convent. I never specialized, so I am well read in ancient texts and new work, and I have meditated on subjects ranging from compiling compilers through 3D rendering and artificial intelligence to business processes and value chains. In the constant rush to deliver on the promises of technology I’ve seen projects that are set up for failure even before they start. I’ve seen enough code to write a book detailing, example by example, what you should not do during development.

The secrets are many, and they are complex and hard to grasp out of context, but to misuse an old adage: the truth is simple and it’s out there.

The reason applications fail is that they are complex, but the reason IT fails is that IT people expect applications to be simple to manage, and the business has a nasty tendency to promote the clueless.

It’s amazing how much money is thrown out the window (and into greedy hands) by large corporations and public departments on hairy overcomplicated blackbox solutions that are supposed to meet simple business needs.

Take databases, for example. It’s easy to argue that the database is part of core business (because all the critical application data ends up in the database) and thus the database must be highly available, highly performant and highly secure. Maybe that’s how the CTOs of the world justify spending millions on monstrous arcane iron black boxes to serve their modest database needs. Annually!

The same needs, if properly understood, could be served at a fraction of the cost while being easier to manage and debug!

This is not just a spiel about Postgres (who cares, it’s open source and it can do all that and more) but a general protection fault in how technology is driven today.

Another nice example is DNS, which is beyond core business in importance: without domain resolution nearly all infrastructure fails. DNS problems can cause the most obscure failures, simply because applications have no provision for DNS failure. Quite a few IT departments all over the world operate DNS through point-and-click wizards with nothing but a rudimentary understanding of its inner workings. Should they have that understanding? Hell yes, because sooner or later DNS will fail, as everything does, and when it does they have none of the tools to fix it!
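To make that concrete, here is a minimal sketch of the kind of provision most applications never bother with (in Python, with a made-up host name and a hypothetical last-known-good cache): resolve a name, and if the resolver is down, degrade gracefully instead of falling over.

    # A minimal sketch: survive DNS failure by falling back to a locally cached
    # address instead of taking the whole application down with the resolver.
    # The host name and the cache contents are placeholders.
    import socket

    LAST_KNOWN_GOOD = {"db.example.com": "192.0.2.10"}  # hypothetical cache

    def resolve(host):
        try:
            # the normal path; raises socket.gaierror when resolution fails
            infos = socket.getaddrinfo(host, None, proto=socket.IPPROTO_TCP)
            return infos[0][4][0]
        except socket.gaierror:
            # DNS is down: fall back if we have seen this host before
            if host in LAST_KNOWN_GOOD:
                return LAST_KNOWN_GOOD[host]
            raise

    print(resolve("db.example.com"))

A dozen lines, and a flaky resolver becomes an inconvenience instead of an outage.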

Scarier still is that the rest of the world (or very nearly) has standardized on the most baroque and insecure DNS server in existence (BIND me in hell with seven furies burning marks in my skin), a precise analogy to what has happened in the world of e-mail (sendmail will do anything but!). We do this because we follow Best Business Practices, which is the IT analogue of what happens to you when you go through airport security: it is completely ineffective but feels safer.

Other examples of the same pattern are the proliferation of security products that offer nothing but a smokescreen, the use of gigantic and poorly considered application frameworks, and the abstraction and layering of simple concepts into behemoth object-relational collections.
Humans have a distinctly object-oriented view of the world; all the same, the world keeps trying to tell us that objects don’t exist in their own right but depend on a whole slew of epiphenomena.

Software rots if it is not maintained.
None of the above are hard problems, regardless of what others might have tried to jam down your throat. Databases are a snooze to work on, DNS and mail should Just Work, and once we have a web application stack going for us we never stop to consider how it works or what could be better. The real difficulty in application infrastructure is a people problem.

We want to buy a shrink-wrapped product and feel value for money without risk.

There is some sort of mass-marketing effect at work where decision makers are most easily influenced by the greediest hands. We tend to think that the most expensive car has the best value with the least risk, and we are seldom so clear-sighted as to go back on decisions we have already made.

So what’s the fix?

Decision makers should spend more time evaluating the options before launching headlong into projects based on best business practices, and they should identify and listen more to the few quiet people who have a clue. The people with a clue usually only get to vent their frustrations by making crass jokes about management and about the hairiness of the most recent or most painful and embarrassing failure of technology. These things are not talked about openly, but they should be.

Ideally we should focus on long-term research into the difficult problems of technology: artificial intelligence, algorithms, how to feed the starving and save the world from imminent ecological disaster, quantum computing and so on, instead of spending so much time failing at the simple things.

how to break your head : try linux compilination

Wednesday, July 29th, 2009

I’ve recently started compiling my own kernels again. Some people ask me why I’d ever want to do this – a valid question, since anyone who’s done it knows it’s a time-consuming hassle best left to the distro packagers and really nerdy people with too much time on their hands. Other people will give me a blank look and ask “What is a Conpiling?” To these other people: this article is not for you, it will only serve to confuse that pretty little head of yours. If you know what ‘a compiling’ is, you may proceed. I don’t provide references; I banter them. Google is your friend, pluckum.

Still, I am not here to discuss the reasons for compiling your own kernel – these are all too obvious to the initiated and completely uninteresting to anyone else. I’m more interested in the reasons why my friends, colleagues and I have *stopped* compiling our own kernels – despite some of us enjoying at least a compile a day (or ten!) for periods of time in the past. Only the Gentoo rice boys remain, steadfastly compiling everything in sight despite snide comments about mean time between upgrades and ridicule about their USE_FLAGS selector GUIs.

Why don’t we compile anymore?
There is no stable upstream branch. In my own experience this has had direct consequences for the stability and quality of point releases.
Years after Linus’ BitKeeper schism, the SCO slimeballing and the death of the stable branch, we can look back and say that aye, we have a better audit trail and development has scaled through the roof. We have more kernel features than ever, and patches make it into mainline at an astounding rate every day.

These amazing developments are a far cry from the Linux dev process back in the days of 2.2 and 2.4, but there is a dark side to them.
Regressions are no longer the domain of the bleeding edge, the -mm or -ac trees, -alpha and -rc releases for the adventurous, masochistic or desperate. They are common things now. Getting bitten by that local sexploit and being too embarrassed to tell your friends about it. Software suspend used to work fine. The graphics card did not crap itself on the last point release, but at least my NIC doesn’t get bricked in this one. The wifi keeps screwing with you, but you don’t know if you should blame Ubuntu, Intel or Linus. On the internet no one can hear you scream.

Elitism is rife on the LKML, and more pointedly, in the mainline patch process. Who knew NIH would be such a big problem in an open source project? Admittedly, it is the largest and perhaps the most ambitious open source project of all, with all eyes on target, a million uses and powerful market forces pulling the project this way and that. Linux long ago outgrew the boys’ room, the hacker dungeon and its academic roots. Most kernel patches that get into mainline are pushed there by large hardware and software vendors. Many kernel hackers hack the kernel on their day job, earning an engineer’s living.

Linux has reached the Enterprise in a big way. The system runs and is optimized for Big Iron. The desktop is “good enough”, say the kernel hackers. Latency is fine for our uses, and those squeaky audiophiles should shut up and fork. Indeed they did, as embedded, realtime and audio people have all collectively decided to jump off the wagon.
Out-of-tree kernel hackers already know how the land lies. After years of pushing the same genuinely useful patchsets they are sick of cleaning up, splitting out, documenting, backporting and forward porting, only to discover that no one read their patch. Maybe they will be lucky and see their ideas bastardized overnight into someone else’s pet project, far more likely to succeed once it is Invented Here(tm).

It’s not all bad: we want and need to trust the people that push stuff into the kernel. Who are you to think that you can do it better than them? They are doing their job, they do it well, so what if they all meet for beer and virgin sacrifice after hours, so what if there is no free seating in their society? Fork your own.

Weighing in at 800 MB uncompressed, the Linux source is a behemoth. Counting only source, headers and assembly, there are 35,000 files in the Linux kernel, with 10,667,648 lines of source code. This code is meticulously organized, not only into systems, subsystems and modules, but into domains of responsibility. Hey, if you’ve ever managed a large software project you would know how annoying, how encroaching it is when someone starts fiddling with your private bits.
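If you want to check numbers like that against whatever tree you have lying around, a rough count takes a few lines of Python (the path is an assumption, and the totals will differ from tree to tree and version to version):

    # Rough sketch: tally .c/.h/.S files and their line counts in a kernel tree.
    import os

    ROOT = os.path.expanduser("~/src/linux")   # assumption: path to your tree
    files = lines = 0
    for dirpath, _, names in os.walk(ROOT):
        for name in names:
            if name.endswith((".c", ".h", ".S")):
                files += 1
                with open(os.path.join(dirpath, name), errors="replace") as f:
                    lines += sum(1 for _ in f)

    print(files, "files,", lines, "lines of source")

It is not sloccount, but it is enough to watch the behemoth grow from release to release.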
On the other hand, Linux has lost a lot of great contributions and spurned a lot of marvelous people because of this elitism. OpenMosix Israeli clustering, reiser4 the murderous file system, software suspend 2 the ‘it just works’ approach, page-in-from-swap, CK’s desktop efforts, the two kernel monty carlo and process snapshotting are only a few of the projects that failed to sufficiently influence the core developers, some of them year after year.
It can be argued that, despite the patches not making it to mainline, some of these ideas did find their way into the minds of the gitmasters and found other implementations on technical merit alone. To me this defeats the whole purpose of the open source model, which drives technology by sheer speed. We’ve had a working, cleaned-up, documented version of the patch for two years – and the feature doesn’t make the cut. This is too little, too late.

Well, not everyone takes an interest in kernel politicking even if they follow the LKML or kerneltrap, and some people even like hitting bugs and fixing issues in their compiles, and trolling in epic flame wars. They too have left kernel compiling to other, more patient and masochistic people.
Maybe it’s because even grepping a single point release changelog is a major chore. The distro folks have gotten fairly good at kernel compiles; Ubuntu ships a one-size-fits-all Just Works(tm) kernel, Red Hat’s patchset has grown less offensive over the years, and Debian is and always was Debian. Upgrades are relatively painless and usually somebody else has already done the dirty work.
Linus Torvalds’ initial plan succeeded: by axing the stable/unstable tree he told the world that the responsibility for stability rests on the distributor. He also axed many hobbyists’ will to stay and play with new releases. I’d rather go play on milw0rm.

There are other compelling reasons not to roll one’s own: the number of configuration options has doubled in recent years, and most of these new options are not relevant to the hobbyist use case. Development not only in the kernel source but in the toolchain (gcc) has caused compile times to soar. I remember proudly compiling 2.4 kernels on my K7 within 10 minutes back in 2001. Today it might take longer to compile the tree on my Centrino dual-core.
And there it is: we’ve suffered feature creep and bloat. After a long download, an hour or more of configuring, and many failed initial make runs, a generic compiled bzImage weighs in at about 3412 kB. This is a modular kernel, mind you. What happened to lean and mean 800 kB kernels?

Memory is cheap you say.
But minds are not cheap!

I’m announcing a contest: what’s the smallest stable useful kernel you can make for your platform? Remember, it should run on other machines and be useful, and the compile should be reproducible. Choose your own definition of useful, but do find a concrete definition. Use any tree and patchsets that turn you on. Bonus points for packaging so others can plug your kernel into their system. I’ll make your package available.
As a side contest I’ll take compile times along with bogomips numbers and your .config file for reference.
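For the terminally lazy, here is a rough sketch of a submission helper in Python; the tree path, the -j value and the image path are assumptions, so adjust them for your platform. It times the build, reads bogomips from /proc/cpuinfo, measures the bzImage and keeps your .config for reference.

    # Rough sketch of a contest helper: time the build, then collect the numbers
    # the side contest asks for. Paths and the -j value are assumptions.
    import os, re, shutil, subprocess, time

    TREE = os.path.expanduser("~/src/linux")              # your kernel tree
    IMAGE = os.path.join(TREE, "arch/x86/boot/bzImage")   # platform dependent

    start = time.time()
    subprocess.check_call(["make", "-j4", "bzImage"], cwd=TREE)  # the compile itself
    elapsed = time.time() - start

    # bogomips straight from /proc/cpuinfo; the first CPU is enough for a reference
    with open("/proc/cpuinfo") as f:
        bogomips = re.search(r"bogomips\s*:\s*([\d.]+)", f.read(), re.I).group(1)

    size_kb = os.path.getsize(IMAGE) // 1024
    shutil.copy(os.path.join(TREE, ".config"), "config-for-submission")

    print("bzImage: %d kB, compile: %.0f s, bogomips: %s" % (size_kb, elapsed, bogomips))

Run it after your menuconfig session and send me the last line along with the saved .config.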

PS. Yahoo! internal IT sucks. Where’s the wifi? Running our own cables, canned XP images in a Linux lab, packet loss. This ain’t no funky party. I guess they are too busy. Paranoia, maybe. Things aren’t wonderful.

LDAP and its many uses

Friday, June 19th, 2009

There is a nice article on single sign-on and LDAP in the Journal, and although it is not new, the man writing it has clearly spent some time finding novel (read: whack) uses for catalogue services.

Myself, on the other hand, I’ve been finding novel ways to break OpenLDAP. My 35-hour stint on Thursday produced more Active Directory-integrating workaround setups of the Slap Daemon than you can shake a bloody large stick at, including but not limited to The Inverted Translucent Reverse Meta Tree, where we do a slapo-translucent overlay in one slapd and a plain slapd database in a second slapd, then slapd-meta the sAMAccountName into uid and remap the suffixes in a third slapd process. Yep, that’s four separate catalogues to solve one application problem.

Don’t. Ask. Why.

The upshot is that you should stay the hell away from the slapd rewrite module as it will dump core, that the translucent overlay is magnificent at making very plain ldapsearches (objectclass=*) return no objects or fail outright, that slapd-meta is a very cool backend for remapping suffixes, attributes and your mom, that your application should never have to write to a read-only Active Directory tree, and that simplicity is instrumental in not going mental.
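For flavour, the one remapping that actually mattered boils down to something like this, sketched here in application code with the ldap3 library purely to illustrate the idea; the server, credentials, base DN and attributes are placeholders, and the real thing of course happened inside slapd-meta, not in the application.

    # Rough illustration only, not the actual four-catalogue setup: look a user
    # up in AD by sAMAccountName and hand it back under the attribute name (uid)
    # the application expects. Server, credentials and base DN are placeholders.
    from ldap3 import ALL, Connection, Server

    server = Server("ldap://ad.example.com", get_info=ALL)
    conn = Connection(server, "CN=svc,CN=Users,DC=example,DC=com", "secret",
                      auto_bind=True)

    def find_by_uid(uid):
        conn.search("DC=example,DC=com",
                    "(sAMAccountName=%s)" % uid,
                    attributes=["sAMAccountName", "cn", "mail"])
        entry = conn.entries[0]
        return {"uid": entry.sAMAccountName.value,  # the remapped attribute
                "cn": entry.cn.value,
                "mail": entry.mail.value}

    print(find_by_uid("jdoe"))

Doing that remapping once, inside the directory, instead of in every application is the entire appeal of slapd-meta; doing it across four catalogues is where the madness set in.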

Unfortunately, simple solutions to complicated problems are rather hard to come by.

PS. The problems I was trying to fix all came out of one single application bug and my attempts to work around it :-P