Posts Tagged ‘disintegrate’

software defined radios with alsa, jack and pulseaudio and a professional sound card

Monday, September 9th, 2013

Preramble

waves

All the people out there are neatly divided into two piles:
the “it works for me and does what I need”-camp, and
the “always always always gets in the way so killitwithfire”-camp,
and this fragmentation may be the best argument that pulseaudio should be up for a whammy.

For all of you tl;dr’s (too lazy, do you read?) here’s a short summary:

  • alsa: just works. confusion in the .asoundrc
  • pulseaudio: controls per process, less buffer fuckups, “just works”
  • jack: controls per process, realtime, firewire/usb, pro audio apps
  • firewire: fantastic, massive pain but getting there
  • software defined radios: so worth it!

But read on to learn the recipe for the secret magic sauce.

The reason I am writing this is not because pulseaudio is evil and sucks. However, it was the last straw on a long and winding road that broke the camel’s back. Pulseaudio assumes you are running systemd, and talks to console-kit-daemon, which is surely one of Satan’s most trusted advisers and a harbinger of the Apocalypse.

Pulseaudio

We know all this, and yet why do I bother?
I didn’t come here to rant about Pulseaudio though:
I’ve gathered you here today to tell a story about Software Defined Radios.

Introducing a large and characteristic cast of characters, and how to make them work together in the best possible way.

My way.

The Cook

Well: a friend of mine got a hold of a few Terratec DVB-dongles with the awesome rtl-chipset and Elonics tuner, which means I can play with radio!

terratec dongle

Except the first time I tried I got stuck in gnuradio dependency hell and never got anything working… which was very nearly a year ago.

Things weren’t easy back then: gqrx, the pretty waterfall app, wasn’t mature enough, and you were stuck using something far more fugly (.net code running under mono, shudder the thought).

You still have to build gnuradio from source (because the packaged versions aren’t new and shiny enough), but the piper’s playing a different tune now: with the advent of build-gnuradio it’s possible to sit back and relax while gnuradio and all its dependencies build before your very eyes.

Yes indeed, this takes longer than getting the cows back from pasture, but it’s worth it, because with a full gnuradio build you now have a hope of getting gqrx, the shiny waterfall, to compile!

gnuradio companion

The Thief

Except you didn’t realize that without the -m option to build-gnuradio it builds gnuradio 3.6, which is not the 3.7 that gqrx needs! Joke’s on you haha ha ha.
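
For reference, the whole dance boils down to something like this (the script URL is from memory, so verify it before trusting it):

wget http://www.sbrac.org/files/build-gnuradio
chmod a+x build-gnuradio
./build-gnuradio -m    # -m checks out and builds the newer (3.7) branch that gqrx wants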

Then you build gqrx and realize you can’t get it to talk to your Terratec, because why? Because it’s a DVB dongle and the kernel has helpfully inserted the DVB module to enable it! So run along now and add

# rtlsdr
blacklist dvb_usb_rtl28xxu

to your /etc/modprobe.d/blacklist.conf – now you are ready to fire up gqrx and gnuradio-companion.
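
If you’d rather not reboot for the blacklist to take effect, you can (assuming the rtl-sdr tools are installed) evict the driver and poke the dongle directly:

sudo rmmod dvb_usb_rtl28xxu
rtl_test -t    # should find the dongle and report its tuner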

His Wife

That’s when you might discover that, if you are lucky, there is no sound in gqrx. It’s working and showing you a waterfall, but you can’t hear the demodulated waves!

GQRX gnuradio waterfall

Why oh why, well let me tell you why: It absolutely needs, requires Pulseaudio to produce sound!

OK, fair enough, some of you out there are in the “works-for-me”-camp, and ipso facto you’re done here: gqrx works and IT ALL JUST WORKS. But the world is not so easy for the rest of us.

The rest of us bite the bullet and install pulseaudio to get this thing working. Which is as far as you need to go if you’re semi-sane and normal, or when you are running this thing on a Raspberry Pi, or you’re building a beagleboard spectrum analyzer.

Actually, you don’t even need Pulseaudio for that last project…

Her Lover

echo2

What I have neglected to tell you however is that I have an Echo Audiofire. I was impressed with these little firewire-driven sound cards back when my bro had the small and portable Audiofire2.

Sound quality and flexibility wise they are unbeatable, certainly if you want professional quality sound input and output.

Firewire sound also has the major advantage over USB sound in that firewire packets aren’t quantized in time, which means a lot if you’re doing midi or other realtime music stuff. Latency is a killer.

You might also be aware that the higher the sample rate of your sound card, the higher the bandwidth of your homebrew SDR: with I/Q input the usable bandwidth is roughly the sample rate, and half that for a single channel, per Nyquist…

Anyways, firewire soundcards “just work” with asio drivers in Windoze but are a small pain of their own to set up in Linux. ALSA has never heard of them. Pulseaudio doesn’t speak firewire. For anything resembling realtime professional audio under Linux you’ll have to go FFADO and JACK.

JACK audio kit

Also, never think that just any firewire card will work in Linux: a lot of vendors continue to ignore the platform (understandably, given the atrocious state of professional audio under linux) and there are some wondrous cards out there that have just pitiful support here.

The jackd brothers

You’re walking down a long path, you’re going to Mecca. You come upon a fork in the road where two brothers live. They are twins, and you know that one of them always speaks the truth, and the other always lies. You need to ask them the way to Mecca, but how?

As there are two problems with anything in this world, there are two problems with Jack. Firstly, jack forked into jack1 and jack2, and both versions are strangely alive to this day, and there is netjack1 and netjack2 and well, what the fuck.

FFADO

To complicate matters there are two competing linux driver subsystems for firewire (the old ieee1394/raw1394 stack and the newer “Juju” firewire-core stack), and both of them live to this day, with one supporting some firewire devices and one supporting other firewire devices, and one being supported in jack1 and the other in jack2. Furthermore, you need a recent FFADO to get it all working.

Thankfully in recent debians and ubuntus the right kind of jackd talks to the right kind of firewire device in the kernel and matches the right ffado to get things to work, but you still need to know your way around.

LMMS

The Answer, not The Question

Know what question to ask to get the right answer, which is this: at least for the Echo Audiofire, jackd2 works nicely with ffado and recent-ish kernels, as long as you run jackd as your X user with jackd -v -dfirewire, and then fire off qjackctl and ffado-mixer and then all your sweet sweet jack apps. For now, let’s assume you are jackd2’ing things, but let us just say that at this point it no longer matters.
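
Spelled out, the startup sequence looks like this:

jackd -v -dfirewire &      # as your X user, not root
qjackctl &
ffado-mixer &
# …then all your sweet sweet jack apps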

What you don’t know is that to get the Echo to work, you will likely have to upgrade your Echo firmware, either by hooking it up to a windoze box with the right drivers and letting them reflash the rom, or by playing with the scary commands ffado-diag and ffado-fireworks-downloader, insert magic (literally!), etc.

Having done all this voodoo I still had problems that required rebooting the sound card or booting into windoze to reset it to a known state that jackd could talk to, but with newer kernels/libffado/jackd versions the problem evaporated somewhere along the line.

Jack meters

Realtime patchset to the Linux kernel? Let’s not get into it… I am not a professional musician, nor am I a sound engineer, and they would probably use windows or mac anyway.

The Waitress

Confusion.

Synthesizer clutter

At that point you might be wondering where I’m going with all this. Let’s recap:
I’ve got a gqrx waterfall on a Terratec DVB RTL-SDR that only supports Pulseaudio, and I’ve got an Echo Audiofire soundcard on firewire that only listens to jack. I can hook pulseaudio to ALSA.

Indeed, installing pulseaudio I discovered it will do this automatically: /usr/share/alsa/alsa.conf.d/pulse.conf suddenly appears and fucks up your setup by putting everything in ALSA through Pulseaudio.

There is also some shit in /etc/pulseaudio/default.pa that is supposed to detect jackdbus and make pulseaudio use jack, but that stuff just never worked.

Of course, I have an .asoundrc file that takes everything from ALSA and puts it up JACK, so how do you think that’s gonna work?

Well, it doesn’t work.
So, it’s time to bring out the guns again.

The Heist

# convert the alsa API over to the jack API
# use it with
# % aplay foo.wav

# use this as default
pcm.!default {
    type plug
    slave {
        pcm "jack"
        #rate 96000
    }
}

ctl.mixer0 {
    type hw
    card 1
}

# pcm type jack
pcm.jack {
    type jack
    playback_ports {
        0 system:playback_1
        1 system:playback_2
    }
    capture_ports {
        0 system:capture_1
        1 system:capture_2
    }
}

(that was .asoundrc)
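
A quick way to test the plumbing, assuming jackd is already up and the ALSA jack plugin (libasound2-plugins on debian) is installed; the sample file ships with alsa-utils, so the path may differ:

aplay /usr/share/sounds/alsa/Front_Center.wav    # should come out via the alsa->jack plug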

load-module module-jack-sink
load-module module-jack-source 

in your /etc/pulseaudio/default.pa
but put it somewhere near the top, in place of load-module module-alsa-sink, before the ifexists module-jackdbus shit.

and rm /usr/share/alsa/alsa.conf.d/pulse.conf

Now remember that jack is running as you, so make sure that Pulseaudio is running as you as well:

sudo service pulseaudio stop
pulseaudio -v
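
To verify that pulse actually picked up the jack modules (pactl ships with pulseaudio; the sink name may differ on your box):

pactl list short modules | grep jack
pactl list short sinks    # expect a jack sink, typically "jack_out"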

The Payoff

Pulses playing through jack audio

At this point you can run your freshly compiled gqrx waterfall radio outputting to pulseaudio outputting to jackd and at the same time enjoy ALSA apps talking to jack directly and jack apps doing jack.

the paranoid console viewer

Tuesday, June 19th, 2012

Hi all,
I know it’s been a while since my last post.
There is lots to talk about, but let’s start afresh with something short and sweet.

Let me paint a picture for you:
There is something broken in the most holy and secure part of your network. You need to call support to have them look at it. The support rep wants console access, but you can’t give them axx to your holiest cream pie.
They offer to take over your desktop with a java rootkit app like TeamViewer, GoToMeeting or WebEx.
You decline. You need to stay in control, but show them what they need to see, that and only that.
How?

Let me be clear on the problem statement:
Read-only shell access to the most secure host, which is not available over the wire, viewed by multiple parties at the same time.

Here’s how to do that with SSH, screen(1) and some foo,
with ssh->chroot->rbash->readonly multiuser screen->reverse ssh->openvpn:

You will need a linux server in an “unsafe” zone which is exposed to your support rep on the internet or thru VPN.

  1. Create the user to be contained on your unsafe box, with the restricted bash shell:
    unsafe# export user=rep; adduser $user; chsh -s /usr/bin/rbash $user
  2. (Bonus:) chroot/contain the user within sshd_config
  3. Setup multiuser screen on the unsafe box. There are lots of guides for it, but the short and sweet of it is:
    unsafe# chmod +s `which screen`; chmod 755 /var/run/screen
    Indeed, this increases the attack surface, and therefore we call this box the unsafe one.
  4. ssh from secure zone to unsafe server:
    secure# ssh -t -R 2222:localhost:22 unsafe screen
  5. Run screen from YOUR account and do:
    :addacl $user
    :chacl $user -w "#"
    :chacl $user -x "?"
    Replace $user with whatever from step 1. Then, still in your screen: :multiuser on
  6. Win! Now you can reverse ssh back to the secure zone and let $user on the unsafe box read the terminal without being able to access anything but what you show her.
  7. Bonus: Add `screen -x $youraccount/` to $user’s .profile and $user will drop straight into the locked screen. Remember that a multiuser screen is read-write-execute for all accounts that are addacl’d,
    so you might want to chacl before enabling the $user account login.

    And there you have it, a superparanoid reverse secure-unsecure remote shell viewer.
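
    To put the steps together, one possible end-to-end run (a sketch; the hostnames and the “support” session name are made up):

    secure# ssh -t -R 2222:localhost:22 unsafe screen -S support
    # inside that screen, lock it down:
    #   :multiuser on
    #   :addacl rep
    #   :chacl rep -w "#"
    #   :chacl rep -x "?"
    # still inside, hop back through the tunnel so the session shows the secure console:
    unsafe$ ssh -p 2222 localhost
    # the rep then logs in to the unsafe box and attaches read-only:
    rep$ screen -x youraccount/support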

    0k

pixie dust

Thursday, February 2nd, 2012

we’ve booted backtrack off usb before, now that’s kinda
boring, and installing backtrack onto the usb with unetbootin
is painfully slow and not the same as booting straight off the
usb, which is what we want in this case; not an install
but a fresh copy every boot

there is someone disagreeing in the back of the room: now
wouldn’t this be a lot more complicated? No sir, on the contrary,
booting fresh every time makes work a lot simpler; you gain a
direct relationship to what you store where, and where you
access your data from

but there is another one in the front; you sir, you feel that
one would have to sacrifice many of the comforts, such as
all the tools of the trade at hand and permanent local storage -
but at best this is a lazy roadblock to salvation; by booting
off of local storage we have local storage at hand in a more
practical format, for even a microscopic carrier can be
removed and replaced with sufficient storage for everything
and then some

the medium can be embedded, destroyed or ingested, so
the impermanence of accidentally recorded data and the
robustness, accessibility and portability of removable storage
come very much in hand in situations that require
inconspicuousness, anonymity, covertness, plausible deniability
or a high degree of reliability in day-to-day computing

the totality of the system given to remaining only in memory
causes it to be independent of other storage for operations, and when
operations cease from loss of any exterior preconditions, the
system simply ceases. when preconditions reoccur – by powering on
and executing the first block – the system can be relied upon to
simply start afresh, completely unperturbed by any previous history

should the need arise to patch the system, say some new app or
capability is called for where there is no time to rebuild,
a patch should always be scripted when there is certainty that
the capability will require a repeat performance. It is advised
to devise a patch which includes all dependencies.

thus the fresh system becomes more capable and more accessible
over time, just like an install. patches can then easily be
rolled into the system should they prove useful to others.

But how does one do it? Well, it’s easy, but unfortunately
not as easy as overwriting the boot device; that’s just not
practical, because partitioning is always an individual consideration:

  • there are often other files on the block device
  • choice of filesystem and memory technology has much bearing
  • the block device is larger or smaller than expected

    instead, we allow any bootable partition scheme and any
    filesystem and memory technology, as long as the storage
    requirements of the system are met;

    here’s how to clone:

    cp -a boot/ apt/ casper/ gone/ preseed/ syslinux/ /mnt/usb/   # /mnt/usb = your mounted target partition (path assumed)
    syslinux /dev/partition
    install-mbr /dev/device   # install-mbr is in the debian “mbr” package
    

    and that’s fine, it’s been done and all, but even the ability to
    boot the system with precisely zilch local storage comes in
    handy, and for that we have pixie dust.

    pixie daemon and tiny ftp should be pointing at a path
    exactly matching the dhcp-provided path… otherwise
    you will have worries!

    /etc/pxe.conf:

    interface=eth1
    service=X86PC,0,0,local,Local boot
    service=X86PC,0,0,pxelinux,PXELinux
    tftpdbase=/var/lib/tftpboot
    domain=truly.yours
    

    /etc/default/tftpd-hpa:

    TFTP_DIRECTORY="/var/lib/tftpboot/"

    /etc/dnsmasq.conf:

    dhcp-boot=/var/lib/tftpboot/pxelinux,vulcano,10.10.10.86
    

    “high speed” tftp daemons and multicast can be found, but it is
    advised to stick to tftpd-hpa and dnsmasq with no esoterics, due
    to the sheer amount of variables introduced.

    /var/lib/tftpboot/pxelinux.cfg/default:

    # not strictly necessary but makes the menu pretty
    menu hshift 13
    menu width 49
    menu margin 8
    
    menu title BackTrackBoot
    default vesamenu.c32
    display f.txt
    timeout 600
    
    label local
    menu label Local Harddisk
    localboot 0
    
    menu begin bt
    menu title BackTrack 5
    # ok here comes the real shit
    label backtrack5
    menu label BackTrack R1
    kernel bt5/vmlinuz
    append boot=casper netboot=nfs nfsroot=vulcano:/mnt/bt5 initrd=bt5/initrd.gz text splash vga=791 file=/cdrom/preseed/custom.seed --
    menu end
    

    you’ll need to copy the initrd.gz and vmlinuz from the backtrack ISO’s /casper folder to tftpboot/bt5 (you can mount the ISO with mount -o loop -t iso9660 bt5.iso /mnt/bt5)

    the rest of the files you provide to the bootee over NFS

    /etc/exports:

    /mnt/bt5 10.10.3.0/24(rw,sync,no_subtree_check) 10.10.10.0/24(rw,sync,no_subtree_check)

    and mount the ISO where NFS can serve it:

    mount -t iso9660 -o loop BT5R1-GNOME-32.iso /mnt/bt5

    add an http server with kickstart / preseed files for an ever more powerful setup,
    in which case you replace the file= stanza in the append line with
    url=http://host/path/to/preseed

    more on preseeds… maybe later.

    Now restart all dependent services:

    /etc/init.d/nfs-kernel-server restart
    /etc/init.d/tftpd-hpa restart
    /etc/init.d/apache2 restart
    /etc/init.d/pxe restart
    

    debugging this setup usually requires tracing the process that is failing, so:
    - dhcp options tracing (dnsmasq verbose and tcpdump / wireshark)
    - verbose pxe
    - verbose foreground tftpd-hpa : in.tftpd -v -v -L /var/lib/tftpboot
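
    a sketch of the first of those, watching the dhcp and tftp chatter on the serving interface (eth1, as configured above):

    tcpdump -ni eth1 port 67 or port 68 or port 69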

    oh noes, o cert my *sniff* cert

    Monday, September 26th, 2011

    papieren bitte

    I’m not going to tell you about DigiNotar, whose filing for bankruptcy this month shocked no one, coming so soon after they lost the keys to the grand vault in which the government held much stock. Though I have many comments upon the sophistication of the player that so thoroughly owned the most trusted agencies of the digital age…

    The cracker hardly needed them skillz, considering it has been a challenge to keep that whole corrupt industry accountable. The trouble with the central authority system is that even if only one of the keys is compromised, the system is broken and gives no assurances whatsoever. No warning bells either. Just a sweet silent man in the middle, passing along all the best parts to his lover.

    It’s not a joke for the 300,000+ people whose emails and facepalms were demonstrably compromised. We thought him kind to give an interview, and we wait in awe for his next move.

    I’m not going to mention the fatal flaws in certificate revocation that became embarrassingly apparent when the damage was done.
    Which hardly matters, since this kind of thing is bound to crop up: that hole in TLS was deemed unexploitable – now there’s a Titanic if I ever saw one. Unsinkable. Too fat to die.
    cert failure

    SSL is an open book for those who dare to look, and it’s got more than a couple of old bugs. It’s okay though, we can patch it, they will say. Dare to look the other way!
    Not that you need those bugs anyway, since there are some really nice sslsnarfing techniques out there that entirely forgo attacks on SSL as “too inefficient”.

    But I say nay! Unacceptable. There is another way… and we’re already doing it! We sign our own signatures and we back each other’s signatures.
    Now that’s business, something that the companies on your CA trust list were painfully aware of when they laid down the law of the code and put themselves on that list. Yet still CAcert is not on your trust list, and warning bells fly off on some of the most trustworthy sites: self-signed ones.

    Just don’t ask them why or how, or anything that isn’t directly relevant. Do you even know what is on your trust list? You might just be surprised at what you can find.

    # ls -al /etc/ssl/certs | wc -l
    479

    How many of these do you trust? How many of these should you trust? I’ll tell you: *none*.
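
    If you want to see exactly who you’re trusting, something like this works (cert paths vary by distro):

    for c in /etc/ssl/certs/*.pem; do
       openssl x509 -in "$c" -noout -issuer
    done | sort -u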

    We should not be adding static lists of central signing authorities to our systems. This is a brittle and dangerous system. We knew this, but hackers have now thankfully demonstrated it.
    A better way is for every person (and by extension every browser) to keep their own list of signing certs, and to exchange these certs with their friends (automagically, if you like). Your friends lists can come out of a social network, any social network, and it will mean that any site that has been vetted by one or more of your friends will likely be safe for you to use as well. It’s even better than that, you can check certs from multiple friends and detect discrepancies.

    green padlock
    That, my friends, is called the Web of Trust, and is a design that is heading in the right direction. convergence.io is doing something similar already in a Firefox near you, while GPG (and PGP before it) has worked like this for two decades!

    It has to be simple. It has to be very simple. And it has to be chemically free of one word: ‘central’.

    One real easy way to do this on linux would be using git and signed manifests. I already do this in gone to assure that only files on a manifest signed by a trusted key get installed.
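
    A minimal sketch of the idea (not how gone actually does it):

    sha256sum files/* > MANIFEST
    gpg --clearsign MANIFEST       # writes a signed MANIFEST.asc
    # on the receiving end, verify the signature, then the contents:
    gpg --verify MANIFEST.asc
    sha256sum -c MANIFEST.asc      # warns about the signature lines, checks the rest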

    ip6 DNS wildcards considered harmful

    Friday, September 23rd, 2011

    I discovered something yesterday that might be of consequence:
    If you have ip6 connectivity, the domain name resolver will prefer an ip6 wildcard (AAAA) record over an ip4 A or CNAME record. This breaks things like ssh. You’d expect the resolver to choose the response that is most specific, the same way ip4 wildcards work, and not to blindly prefer ip6 wildcards.

    Consider the case of Mary, who’s been around and has lots of domains:

    hail.mary.com
    naked.mary.com
    see.mary.com
    

    and she’s also wildcarding all the other *.mary.com to her vanity host me.mary.com… you get the idea, it’s fairly typical. Those hosts only have ip4 connectivity. Now she adds a new address ip6.mary.com and puts in a wildcard ip6 record *.mary.com, expecting that people accessing foo.mary.com over ip6 get the wildcard address – and they do! But she gets a lot more than the doctor ordered: her ip6 clients will also get the ip6 wildcard address for all her other domains! hail.mary.com, naked.mary.com and see.mary.com will all land on ip6.mary.com instead of the ip4 A records. What happened here?
    Effectively, Mary’s ip6 wildcard broke all ip6-to-ip4 connectivity for Mary’s existing subdomains!
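
    In zone-file terms, the trap looks something like this (illustrative only, with documentation addresses):

    hail.mary.com.   IN A     192.0.2.1
    naked.mary.com.  IN A     192.0.2.2
    see.mary.com.    IN A     192.0.2.3
    *.mary.com.      IN A     192.0.2.4       ; the old ip4 catch-all (me.mary.com)
    *.mary.com.      IN AAAA  2001:db8::1     ; the new ip6 wildcard (ip6.mary.com)
    ; ip6-capable clients now prefer the AAAA answer they get for names
    ; under mary.com, landing on ip6.mary.com even where an ip4 A record exists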

    Yep, you can fix it on your machine, but this is a client problem and you can’t fix everybody else’s resolvers, so what you have to do is avoid ip6 wildcard domains ENTIRELY. Thanks a bunch.

    On a completely different note:

    “debug This option is recognized by pam_ldap but is presently ignored.”

    I mean wow. What did they do, write the whole module flawlessly on the first try? I wish.

    firefox + geolocation = m0ar paranoia

    Friday, August 26th, 2011

    Just a quick note pertaining to a previous post, namely the new evil that is firefox geolocation. This is new in firefox 3.5. Yes, it is opt-in, and yes, firefox does not track you, but yes, the servers you opt in to will track you, and that, my friends, is one of the most serious misfeatures of our times, repeated again and again in stuff like Google Latitude, Android and Apple photo geo-tagging.
    If you care about your personal security at all you do not want the internet tracking where you are, which is essentially what this amounts to.
    Disable it now by going to the about:config location in your firefox, typing geo. in the search field and double-clicking the geo.enabled line so that it says

    geo.enabled    user set  boolean   false
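
    if you’d rather make it permanent, the same pref can go in a user.js in your profile directory (the path varies; this is the standard pref syntax):

    // user.js – forces the pref back off on every startup
    user_pref("geo.enabled", false);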
    

    That’s it for now.

    failcode

    Thursday, August 18th, 2011

    In my time as an application programmer, developer and designer, a brief stint as team lead and project manager,
    as well as my time as a systems consultant, I have witnessed first-hand, and also heard many credible tales of, systematic failure that rivals any of the stories on The Daily WTF. My colleagues and I have seen so many examples of bad design, bad code and systemic failure that we have considered writing a book titled How To Write Ugly Code.

    I have also read the Texas Instruments Chainsaw massacre and personally met Gomez while debugging applications.

    My speciality and my interest lie in diagnostics and the analysis of problems, as well as system security, and my experience has shown that one can venture to say something about the qualitative difference between design methodologies and what they mean for the end result.

    Firstly however, it is worth noting that the software industry as a whole has one primary problem: the time pressure to deliver new features in the face of mounting expectations.

    This pressure to deliver is seen as the driving force behind industry progress and ever leaner, more economic applications; contrary to this belief, however, I have evidence that it creates incentives for sloppy work, overengineering and poor consideration of the problem domain. It seems the process itself rewards poor application design, regardless of development methodology.

    Large corporate and government tenders, which affect many hundreds of thousands of people’s lives, get bid on by large software houses that believe they can deliver everything (at low risk: if they cannot deliver, it is very hard for the customer to contest this with a major software house).

    What we get by and large out of this process are bloated top-down applications designed by people who do not understand the (whole) problem, leading to misguided decisions for such things as

    • choice of platform and language
    • choice of coding standards (check out Systems Hungarian if you don’t believe me)
    • programming methodology
    • communication tools: source control, ticket and forum tools for developers and system managers
    • Not Invented Here-practices
    • monkey-coding by people whose talents could be employed to solving the problem

    What usually passes for “agile” development causes frequent, ineffective blame-game meetings.
    Unit-test-driven development frequently causes micromanagement of program details and inflexible designs.
    All these methodologies were designed to improve programs, not bog them down! Why then do they cause so much breakage?

    The pressure to deliver requires the application developer to prefer large swathes of ready-made library code and a high level of abstraction to allow her to meet deadline demands.

    A high abstraction level causes low debuggability and poor performance.
    Low debuggability, because bugs are by definition conditions caused by circumstances unforeseen by the application developer. Abstractions are employed by the developer to hide implementation details, aiding clarity and speed of application development at the cost of debuggability.

    The very tools and abstractions employed by the application developer create the frame through which the developer can see the circumstances of her design and code. Bugs most frequently occur on the boundaries between abstractions, where the developer has no possibility of foreseeing these circumstances. Furthermore, in a system with a passably high level of abstraction there is a whole stack of hidden details which must be traced and unwound to discover a bug. Therefore, every additional layer of abstraction obscures the debugging process.

    Debuggability and algorithmic simplicity are key to achieving optimal performance. In other words, if we have a clear problem statement, performance is achievable. If there is no clear problem statement, and the program is further muddled by abstractions and interactions, there is no effective path to performance.

    Any artist will be able to tell you that the most interesting, creative and innovative work comes out of a stress-free, playful environment. Since innovative coding is a creative activity, the same applies to developing applications, something that game developers and creative shops have known for years, and that behemoths like Google and Microsoft have picked up on, reinvesting up to 15% of their revenue into research and development and getting that part right, as witnessed by their sheer output of innovation.

    If there is a clear path to solving these fundamental problems of IT, it is putting the people who know what they are doing in the pilot seat: letting developers choose for themselves not only toolchains, methodology and communication tools, but also engaging the systems thinkers in creating the specifications and architecture of the systems they are going to implement. The good news is that as customers and managers get savvy to this method of achieving IT success, we are going to see more developer autonomy and less spectacular fails.

    security or privacy? both please!

    Monday, July 11th, 2011

    Hello readers, fellow bloggers, fell trolls… it’s been a while.

    Happy Wheel

    If you didn’t know that just following a link could lead to loss of your identity, virus infection and the unknowing participation in denial of service sieges, distribution of child pornography and letting attackers break through your company’s firewalls (not to mention immanentizing the eschaton), you could be forgiven for not paying attention to the happy happy field of information security.

    If you knew this rather distressing fact about the web of links, but thought you could defend yourself with an up-to-date antivirus, a current browser and the avoidance of “shady” websites, I would hate to prick your bubble by telling you that regular honest websites are being used against us, that browsers all have to run the all-singing-all-dancing-all-executing flash and java, and that anti-virus is not only ineffective but doesn’t even target this type of attack. Knowing these things might be a little distressing, so I won’t tell you.

    At least my bank is secure, you say, it’s got SSL and everything! Well, you’d be twisting my arm into telling you, embarrassedly, that SSL as implemented in browsers is very neatly broken: all we needed was for one of the Certified Trusted Authority Parties to go bad for us all to wake up with a butt-ache, and we now have not one but at least three such bad parties, not to mention all the MiM magic and DNS trickery that you don’t want to hear about anyway.

    I will tell you however that the only defense is two-pronged: not allowing websites to script – which is a pain – and damage containment, which is not exactly perfect.

    Let us say you already knew all of this, but no longer cared, because there is an even greater danger on the web: the total breach of privacy containment that is social media and tracking cookies, which all want to integrate with and track your every move through the web so that usage and mood patterns can be profiled, tracked, bought and sold. Doesn’t matter, right? You’re logged in to Facebook, Linkedin and Google+ and get all your links from there, so you have your own personal filter which only shows you stuff you care about, turning your blind eye towards anything you need to know that comes from left field, suckering you into giving away your privates every time you hit “like” or “add to friends list”.
    pacman ghost

    In a post-panopticlick world there is really only one lesson to be learned: care about your privacy, because no one else will give a damn. It’s not about whether you have anything to hide, it’s about the accumulation of your private info by crooks to use as they please!

    Oh and crooks include the great people at Facebook, Microsoft and Google, that’s why I recommend disabling all tracking features that come in the guise of “better speed” or “increased security”. Pictures below show how to do it in chromium and firefox.

    chromium dialog, how to disable google tracking

    Ok that was Goggle’s Chromium – note all the unchecked checkboxen… disabling prefetch, malware blocklists and suggestion services, all of which send far too much data to Google’s scaredy-ass all-seeing eye. Aaaand here’s Firefox:

    fuckfox prefetch prevention

    Mhm that’s pointing the browser at about:config, searching for prefetch and setting it to false. Yep.

    Those pictures show you how to find and set better privacy features in your browser. Safari users are up shit creek, but unlike the Internet Explorer users, they at least have a paddle! Great stuff, what next?

    Keep a “secure” browser to browse with, one that you don’t log into anything personal with… and make this your default browser!

    What is a “secure” browser? Let’s talk a little about that without insulting anyone’s intelligence, shall we?
    First off, I’m putting the word “secure” in, uhm, quotes, because, uhm, the browser will never be secure, be it ever so protected. Ok, moving on: you want to be running noscript and/or adblock and/or disconnect and/or noflash, depending on whether you are a Mac, Windows (but then you are at game over already) or Linux user with Firefox or Chromium (NOT IExploder, that shit is scary!).

    All of these tools make it so the sites you visit don’t get to run code on your machine. The end result is that sites are marginally less usable, but that popup popunder popver poopop ads and scary tracker/botnet/mal stuff don’t run automagically. Here are some links:
    noscript
    adblock
    disconnect
    Flashblock
    – Have you heard about sandboxing?

    java and flash denied in chromium

    Chromium is getting the right idea by killing java and flash from the get-go. This should be the default, but you have to make it so!

    You should definitely be cloaking your user-agent, as it is a useless yet very telling string your browser hoes out to all. Do this with the User-Agent Modifying Extension.

    Also, you may find Torbutton and Foxyproxy interesting; the real privacy is in bouncing your traffic thru things that wash it for you. Putty or any other decent ssh client will create a proxy for you:

    `ssh -D8080 me@myshellserver`

    and you can point your browser’s SOCKS proxy settings to localhost:8080 putting all your traffic through your shell server!
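
    To check that the tunnel actually carries your traffic (curl’s --socks5-hostname pushes DNS through the proxy too; ifconfig.me is just one of many address-echo services):

    ssh -N -D 8080 me@myshellserver &    # -N: forward only, no remote shell
    curl --socks5-hostname localhost:8080 http://ifconfig.me    # should print the shell server’s address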

    sshnuke

    The best has yet to come mua ha ha ha.

    caching wikileaks with varnish

    Friday, December 3rd, 2010

    In times like these I like to remind everyone that truth is a virtue and that the best resistance to corruption is decentralization.

    With that in mind I quickly threw together a cache for wikileaks at wikileaks.uplink.li. This is not a full mirror, obviously, but it will stay up even when the main sites go down.

    The host in question isn’t really specced for high loads but that is beside the point. The point is that you can do this in your own varnish instance. Here is how you do the same, in VCL:

    # help wikileaks stay online by providing caching
    # helps more if you have persistent storage.                                                                                                
    #
    # comotion@krutt.org 2010-12-03
    # http://kacper.blog.linpro.no
     
    backend wikileaks2 {
       .host = "213.251.145.96";
       .probe = {
          .url = "/img/favicon.ico";
          .interval = 60s;
       }
    }
    backend wikileaks3 {
       .host = "wikileaks.ch";
       .probe = {
          .url = "/img/favicon.ico";
          .interval = 60s;
       }
    }
    
    # won't work while the DNS name is taken out
    #backend wikileaks1 {
    #   .host = "wikileaks.org";
    #   .probe = {
    #      .url = "/img/favicon.ico";
    #   }
    #}
    director wikileaks round-robin {
       #{ .backend = wikileaks1; }
       { .backend = wikileaks2; }
       { .backend = wikileaks3; }
    }
     
    sub vcl_recv {
       if (req.http.host ~ "^(wiki)?leaks" ||
       req.url ~ "^/(wiki)?leaks" ||
           req.http.referer ~ "leaks"){
          set req.backend = wikileaks;
          if(req.backend.healthy){
             set req.grace = 7d;
          }else{
             set req.grace = 365d;
          }
       }
    }
     
    sub vcl_miss {
       if(req.url ~ "^/(wiki)?leaks"){
          set bereq.url = regsub(req.url,"^/(wiki)?leaks","/");
       }
    }
    sub vcl_fetch {
       if(req.url ~ "^/(wiki)?leaks"){
          set beresp.grace = 365d;
       }
    }
     
    

    You can save that to /etc/varnish/default.vcl and reload varnish.
    Or, if your Varnish instance has other sites on it, you could save it to /etc/varnish/wikileaks.vcl and add the following near the top of your default.vcl:

    include "/etc/varnish/wikileaks.vcl";
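
    Either way, it pays to check that the VCL actually compiles before you reload (varnish 2.x era syntax assumed):

    varnishd -C -f /etc/varnish/default.vcl > /dev/null && /etc/init.d/varnish reload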
    

    Isn’t it beautiful?
    You may not be able to set up a full mirror, but now you can go set up your varnish to cache Wikileaks!

    PS.
    The opinions expressed here are not necessarily those of my employer nor anyone else associated with me, Varnish or anything really.

    consolekit is evil

    Wednesday, December 1st, 2010

    … and hates me

    I should really tell you about the DLD seminar three weeks ago, or the PARANOIA security conference, or even just that Adobe should be considered harmful, but things have been crazy, and between this and electromagnetism I haven’t had the mind space. After the 6th of december, I promise I’ll come back with pictures and relations and maybe even sounds (I have notes, don’t worry, I’ll remember).

    On the other hand, here’s a nasty hack to kill console-kit-daemon, which has a really nasty way of polluting the PID space… and annoys me enough to warrant a public humiliation as well. What does it do, and why? Who cares what it does, it’s doing it poorly enough to call attention to itself! So here’s how to kill it:

    root@wasp:/usr/sbin# dpkg -S console-kit-daemon
    consolekit: /usr/sbin/console-kit-daemon
    

    DON’T try to purge the package, because that’s just one end of a really big ugly yarn of unnecessary dependency pain that I’d like to spare you…

    DON’T try to replace /usr/sbin/console-kit-daemon with your own stub… it turns out dbus autostarts this “service”, and that approach will make dbus block your (ssh) session when you log in… not forever, but even that is more annoying than the pid pollution.

    Instead, debian bugs #544147 and #544483 clued me in to the following hack:

    # mkdir -p /usr/local/share/dbus-1/system-services first if it doesn’t exist
    cp /usr/share/dbus-1/system-services/org.freedesktop.ConsoleKit.service \
      /usr/local/share/dbus-1/system-services/
    echo Exec=/bin/false >> /usr/local/share/dbus-1/system-services/org.freedesktop.ConsoleKit.service
    

    which is a two-liner, and would have been less ugly and easier to debug if it hadn’t been for the fine hubris of the freedesktop dudes…