Posts Tagged ‘howto’

HOWTO avoid pains with NetCom 3g USB on jaunty

Sunday, September 6th, 2009

Internet anywhere, ain’t it great?

If you have one of those 3g NetCom HSDPA USB dongles you might have noticed that they don’t really work so well out of the box.

After I had spent 4 hours trying to get the thing working Martin smugly told me these things should be plug’n'play and proceeded to… fail to get it working. Oh well…

Cutting to the chase and sparing you the gritty details, I have a recipe for getting 3g working with the NetCom ZTE-MF636 USB dongle. This recipe should work in Ubuntu Jaunty and similar recent distros, and most of the instructions apply to other USB dongles too. Included are also all the tips you need to avoid spending 4 hours hammering your head against the wall…
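For the impatient, the shape of the recipe (the full details are in the post): the MF636 first shows up as a fake CD-ROM, so it needs to be mode-switched into modem mode (jaunty carries usb_modeswitch for this) before you point wvdial at it. A sketch of a /etc/wvdial.conf, where the device node, APN and credentials are placeholders of mine — substitute your carrier’s real values:

```
[Dialer Defaults]
; ttyUSB2 is a guess -- check dmesg after the mode switch
Modem = /dev/ttyUSB2
Baud = 460800
Init2 = ATZ
; the APN here is a placeholder, ask your carrier for the real one
Init3 = AT+CGDCONT=1,"IP","your.apn.here"
Phone = *99#
Username = user
Password = pass
Stupid Mode = 1
```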

how to break your head: try linux compilation

Wednesday, July 29th, 2009

I’ve recently started compiling my own kernels again. Some people ask me why I’d ever want to do this – a valid question, since anyone who’s done it knows it’s a time-consuming hassle best left to the distro packagers and really nerdy people with too much time on their hands. Other people will give me a blank face and ask “What is a Conpiling?” To these other people: this article is not for you; it will only serve to confuse that pretty little head of yours. If you know what ‘a compiling’ is, you may proceed. I don’t provide references; I banter them. Google is your friend, pluckum.
Still, I am not here to discuss the reasons for compiling your own kernel – these are all too obvious to the initiated and completely uninteresting to anyone else. I’m more interested in the reasons why my friends, colleagues and I have *stopped* compiling our own kernels – despite some of us enjoying at least a compile a day (or ten!) for periods of time in the past. Only the gentoo rice boys remain, steadfastly compiling everything in sight despite snide comments about mean time between upgrades and ridicule about their USE_FLAGS selector GUIs.
Why don’t we compile anymore?
There is no stable upstream branch. In my own experience this has had direct consequences for the stability and quality of point releases.
Years after Linus’ bitkeeper schism, the SCO slimeballing and the death of the stable branch, we can look back and say that aye, we have a better audit trail and development has scaled through the roof. We have more kernel features than ever, and an astounding rate of patches make it into mainline every day.
These amazing advances are a far cry from the Linux dev process back in the days of 2.2 and 2.4, but they have a dark side.
Regressions are no longer the domain of the bleeding edge, the -mm or -ac trees, the -alpha and -rc releases for the adventurous, masochistic or desperate. They are common things now. Getting bitten by that local sexploit and being too embarrassed to tell your friends about it. Software suspend used to work fine. The graphics card did not crap itself on the last point release, but at least my NIC doesn’t get bricked in this one. The wifi keeps screwing with you, but you don’t know if you should blame Ubuntu, Intel or Linus. On the internet no one can hear you scream.


Elitism is rife on the LKML, and more pointedly, in the mainline patch process. Who knew NIH would be such a big problem in an open source project? Admittedly, it is the largest and perhaps the most ambitious open source project of all, with all eyes on target, a million uses and powerful market forces pulling the project this way and that. Linux has long ago outgrown the boy’s room, the hacker dungeon and its academic roots. Most kernel patches that get into mainline are pushed there by large hardware and software vendors. Many kernel hackers hack the kernel on their day job, earning an engineer’s living.
Linux has reached the Enterprise in a big way. The system runs and is optimized for Big Iron. The desktop is “good enough”, say the kernel hackers. Latency is fine for our uses, and those squeaky audiophiles should shut up and fork. Indeed they did, as embedded, realtime and audio people have all collectively decided to jump off the wagon.
Out-of-tree kernel hackers already know the lay of the land. After years of pushing the same ingenious, useful patchsets they are sick of cleaning up, splitting out, documenting, backporting and forward porting, only to discover that no one read their patch. Maybe they will be lucky and see their ideas bastardized overnight into someone else’s pet project, far more likely to succeed once it is Invented Here(tm).
It’s not all bad: we want and need to trust the people that push stuff into the kernel. Who are you to think that you can do it better than them? They are doing their job, they do it well, so what if they all meet for beer and virgin sacrifice after hours, so what if there is no free seating in their society? Fork your own.
Weighing in at 800MB uncompressed, the Linux source is a behemoth. Counting only source, headers and assembly, there are 35,000 files in the linux kernel, with 10,667,648 lines of source code. This code is meticulously organized, not only into systems, subsystems and modules, but into domains of responsibility. Hey, if you’ve ever managed a large software project you would know how annoying, how encroaching it is when someone starts fiddling with your private bits.
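Numbers like those are easy enough to reproduce yourself. Here’s a rough helper (my own sketch; the exact file-matching rules behind the counts above are a guess):

```shell
#!/bin/sh
# Count C sources, headers and assembly files in a tree, then total their lines.
# The *.[chS] glob is my assumption about what got counted.
count_kernel_src() {
    find "$1" -name '*.[chS]' | wc -l
    # `cat /dev/null` keeps cat from reading stdin when the tree is empty
    find "$1" -name '*.[chS]' -print0 | xargs -0 cat /dev/null | wc -l
}
```

Run `count_kernel_src /usr/src/linux` and it prints the file count followed by the line count; your numbers will differ by release.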
On the other hand, linux has lost a lot of great contributions and spurned a lot of marvelous people because of this elitism. OpenMosix, the Israeli clustering project; reiser4, the murderous file system; software suspend 2, the ‘it just works’ approach; page-in-from-swap; CK’s desktop efforts; the two-kernel Monte Carlo and process snapshotting are only a few of the projects that failed to sufficiently influence the core developers, some of them trying year after year.
It can be argued that despite the patches not making it to mainline some of these ideas did find their way into the minds of the gitmasters and found other implementations on technical merit alone. To me this defeats the whole purpose of the open source model which drives technology by sheer speed. We’ve had a working, cleaned up, documented version of the patch for two years – and the feature doesn’t make the cut. This is too little too late.


Well, not everyone takes an interest in kernel politicking even if they follow the LKML or kerneltrap, and some people even like hitting bugs and fixing issues in their compiles, and trolling in epic flame wars. They too have left kernel compiling to other, more patient and masochistic people.
Maybe it’s because even grepping a single point release changelog is a major chore. The distro folks have gotten fairly good at kernel compiles; ubuntu ships a one-size-fits-all Just Works(tm) kernel, RedHat’s patchset has grown less offensive over the years and debian is and always was debian. Upgrades are relatively painless and usually somebody else already did the dirty work.
Linus Torvalds’ initial plan succeeded: by axing the stable/unstable tree he told the world that the responsibility for stability rests on the distributor. He also axed many hobbyists’ will to stay and play with new releases. I’d rather go play on milw0rm.


There are other compelling reasons not to roll one’s own: the number of configuration options has doubled over the past years, and most of these new options are not relevant to the hobbyist use case. Development not only in the kernel source but in the toolchain (gcc) has caused compile times to soar. I remember proudly compiling 2.4 kernels on my K7 within 10 minutes back in 2001. Today it might take longer to compile the tree on my Centrino dual-core.
And there it is: we’ve suffered feature creep and bloat. After a long download, an hour or more of configuring, and many failed initial make runs, a generic compiled bzImage weighs in at about 3412 kB. This is a modular kernel, mind you. What happened to lean and mean 800 kB kernels?

Memory is cheap you say.
But minds are not cheap!


I’m announcing a contest: what’s the smallest stable useful kernel you can make for your platform? Remember, it should run on other machines and be useful, and the compile reproducible. Choose your own definition of useful, but do find a concrete definition. Use any tree and patchsets that turn you on. Bonus points for packaging so others can plug your kernel into their system. I’ll make your package available.
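If you want to report sizes consistently across entries, here’s a trivial helper (my sketch, nothing official): point it at your bzImage after something like `make allnoconfig`, a careful `make menuconfig`, and a build.

```shell
#!/bin/sh
# Print an image's size in whole kB, rounded up, the way "3412 kB" is quoted above.
kernel_kb() {
    sz=$(wc -c < "$1")
    echo "$(( (sz + 1023) / 1024 )) kB"
}
```

E.g. `kernel_kb arch/x86/boot/bzImage` on an x86 tree.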
As a side contest I’ll take compile times along with bogomips numbers and your .config file for reference.


PS. Yahoo! internal IT sucks. Where’s the wifi? Running our own cables, canned XP images in a linux lab, packet loss. This ain’t no funky party. I guess they are too busy. Paranoia maybe. Things aren’t wonderful.

Xorg is a steaming pile of gonzales so turn on your UXA

Sunday, June 7th, 2009

Hei folks,

if you have a newer Intel or ATI gfx card, and you’re running a suitably dishy distribution (Ubuntu Jaunty maybe, xorg 7.4, kernels 2.6.27 and up perhaps?) maybe you’ve noticed how sluggish and slow Xorg is. Sometimes it’ll be a dick and eat heaps of your RAM, and maybe hog 98% of your CPU time for no damn raisins.

Furthermore, that sleek graphics card seems to be causing loads of garbage rendering – what gives? I wish I could show you some screenshots but honestly, I don’t really wanna see that shit again, so here’s to not reproducing it. Instead, why don’t you do like I do and enable UXA? Edit your /etc/X11/xorg.conf file and add
Option "AccelMethod" "uxa" to your active Device section like so:

Section "Device"
   Identifier    "integrated"
   Driver        "intel"
   Option        "AccelMethod" "uxa"
   Option        "RenderAccel" "on"
   Option        "PageFlip"    "on"
EndSection

Now log out of X and back in again (restart GDM, KDM or whatever) and whoop de doo, your /var/log/Xorg.0.log should say something like:

(**) intel(0): Using UXA for acceleration
[... snip irrelevant shit ...]
(II) intel(0): [DRI2] Setup complete
[... more irrelevant crap ...]
(II) UXA(0): Driver registered support for the following operations:
(II)         solid
(II)         copy
(II)         composite (RENDER acceleration)
[... further noise ...]
(II) intel(0): direct rendering: DRI2 Enabled

Also, windows should draw/move/resize snappier and there should be much rejoicing. If everything went all right that is.

Note that my glxgears performance dropped from 1300fps (w/xcompmgr) down to 660fps (w/xcompmgr) and 994fps (no composition manager), so there seems to be a tradeoff here, but I really really prefer my gooey windows snappy, artifact-free and less crash-prone. Maybe.

K out

useful scripts for digging through code

Monday, March 30th, 2009

After a little digging on my now seldom-used desktop machine I found some useful scripts I thought I’d share with you all. They’re real good if you’re in the CLI a lot, trying to make sense of large amounts of code.

Amongst others, a ~/.bashrc file to rule them all.

The bash goodness:

g foo: which lines of which files contain the word foo?

gi foo: the case-insensitive version, pipe-friendly: cat somefile | gi foo

.... : cd ../../..

f foo: case-insensitive search for file foo.

finn foo: find filenames containing ‘foo’. Case-sensitive, but won’t print svn garbage.

p foo: highlight processes matching ‘foo’

fsym foo: search for symbol foo in all objects in subfolders.

rsym foo: check whether a library links properly
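The actual .bashrc is linked above; for flavour, here is roughly what such helpers could look like (my reconstruction — only the names come from the list above, the bodies are guesses):

```shell
# g foo    -- recursive grep with file:line prefixes
g()    { grep -rn "$@" . ; }
# gi foo   -- case-insensitive grep on stdin, with line numbers
gi()   { grep -in "$@" ; }
# f foo    -- case-insensitive filename search
f()    { find . -iname "*$1*" ; }
# finn foo -- case-sensitive filename search, hiding .svn noise
finn() { find . -name "*$1*" ! -path '*/.svn/*' ; }
# p foo    -- highlight matching processes
p()    { ps aux | grep -i --color=auto "$1" | grep -v grep ; }
# ....     -- up three directories
alias ....='cd ../../..'
```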

Persistent vim windows.

Further: gvimws allows you to always open new files in the same editor on the current desktop. It also closes the file in any other editor on any other desktop automatically.

And the kicker: ve will allow you to quickly open the files you’ve been searching for (using gvimws)

Putting it together

kwy@amaeth ~$ dmesg | gi error
1404:[27214.044072] iwlagn: Error sending REPLY_ADD_STA: time out after 500ms.
kwy@amaeth ~$ cd /usr/src/linux
kwy@amaeth /usr/src/linux$ f iwl-core
# doubleclick-select midclick-paste the above line.
kwy@amaeth /usr/src/linux$ cdd  ./drivers/net/wireless/iwlwifi/iwl-core.c
kwy@amaeth  /usr/src/linux/drivers/net/wireless/iwlwifi$ g time out
iwl3945-base.c:767:			IWL_ERROR("Error sending %s: time out after %dms.\n",
iwl3945-base.c:1590:			IWL_ERROR("Time out reading EEPROM[%d]", addr);
iwl-eeprom.c:246:			IWL_ERROR("Time out reading EEPROM[%d]", addr);
iwl-hcmd.c:186:			IWL_ERROR("Error sending %s: time out after %dms.\n",
#select-paste filename:lineno pair from above
kwy@amaeth  /usr/src/linux/drivers/net/wireless/iwlwifi$ ve iwl-hcmd.c:186:
# BAM you have found the problem

The fsym and rsym functions are the kind of thing you’ll never need unless you’re maintaining a large build with libraries that often break. fsym will find the object file that contains the symbol you’re looking for. rsym does more than ldd and the compiler do to check that your lib is linking correctly.

The all singing, all dancing vimrc file

It’s swell. Switch buffers with F3/F4, do typeahead search and all kinds of crazy things you’ll never find out about until you happen to press the wrong button. Just like all of vim.

cross-reference like CSI

Friday, March 27th, 2009

I was watching The Mentalist the other night and during the course of a murder investigation they ask the cute geeky cop to “cross-reference” the list of people with brown cars with the list of buyers of a specific toothpaste brand.

She brings up a fancy display of two lists and punches the keyboard a few times, and suddenly a couple of entries in the list are highlighted – the murder suspects. So without further ado I bring you this ultimate crime-fighting tool:

(link updated)
Usage: download it, move it to bin/crossref, chmod +x bin/crossref, then run crossref list1 list2

If you’re on windows you’ll need ActivePerl or Strawberry Perl.

It may not have any fancy graphics, but it’ll get the job done and hey – this isn’t a TV series.
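The script itself is Perl, but the core idea fits in one line; if all you want is the intersection of two lists, something like this (a shell equivalent I sketched, not the linked tool) does the same CSI trick:

```shell
#!/bin/sh
# crossref list1 list2 -- print the lines the two files have in common
crossref() { grep -Fx -f "$1" "$2"; }
```

E.g. `crossref brown_car_owners toothpaste_buyers` prints the suspects.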