Posts Tagged ‘breakage’

My Varnish pet peeves

Sunday, August 23rd, 2015

I’ve been meaning to write a blog entry about Varnish for years now. The closest I’ve come is a post on how to make Varnish cache your Debian repos and make you a WikiLeaks cache, and I’ve released Varnish Secure Firewall, though without a word about it on this blog. So? SO? Well, after all these years it turns out there is a thing or two to say about Varnish. Read on to find out what annoys me, and the people I meet, the most.


Although you could definitely call me a “Varnish expert” and even a sometime contributor, and I do develop programs, I cannot call myself a Varnish developer because I’ve shamefully never participated in a Monday evening bug wash. My role in the Varnish world is more… operative. I am often tasked with helping ops people use Varnish correctly, justify its use and cost to their bosses, defend it from expensive and inferior competitors, and sit up long nights with load tests just before launch days. I’m the guy who explains the low risk and high reward of putting Varnish in front of your critical site, the guy who makes it actually be low risk, and the first guy on the scene when the code has just taken a huge dump on the CEO’s new pet Jaguar. I am also sometimes the guy who tells these stories to the Varnish developers, although of course they have other sources as well. The consequence of this .. lifestyle choice .. is that what code I do write is either short and to the point or .. incomplete.


I know we all love Varnish, which is why, after nearly 7 years of working with this software, I’d like to share with you my pet peeves about the project. There aren’t many problems with this lovely and lean piece of software, but those that are there are sharp edges that pretty much everyone stubs a toe or snags their head on. Some of them are specific to a certain version, while others are “features” present in nearly all versions.

And for you Varnish devs who will surely read this, I love you all. I write this critique of the software you contribute to, knowing full well that I haven’t filed bug reports on any of these issues and therefore I too am guilty in contributing to the problem and not the solution. I aim to change that starting now :-) Also, I know that some of these issues are better lived with than fixed, the medicine being more hazardous than the disease, so take this as all good cooking; with a grain of salt.

Silent error messages in init scripts

Some genius keeps inserting 1>/dev/null 2>&1 into the startup scripts on most Linux distros. This might be in line with some wacko distro policy, but it makes conf errors, and VCL errors in particular, way harder to debug for the common man. Even worse, the `service varnish reload` script calls `varnish-vcl-reload -q` – that’s q for please-silence-my-fatal-conf-mistakes – and the best way to fix this is to *edit the init script and remove the offender*. Mind your p’s and q’s, eh; it makes me sad every time, but where do I file this particular bug report?
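For the record, here is roughly how I go hunting for the offenders before editing them out by hand – the paths are Debian-ish assumptions and will differ between distros:

# find the silencing redirects and -q flags in the init/reload scripts (adjust paths to taste)
grep -rn -e '1>/dev/null 2>&1' -e ' -q' /etc/init.d/varnish* /usr/share/varnish/ 2>/dev/null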


debug.health still not adequately documented

People go YEARS using Varnish without discovering watch varnishadm debug.health. Not to mention that it’s anyone’s guess that this has to do with probes, and that there are no other debug.* commands, except for the totally unrelated debug parameter. Perhaps this was decided to be dev-internal at some point, but the probe status is actually really useful in precisely this form. debug.health is still absent from the param.show list and the man pages, while in 4.0 some probe status and backend info has been put into varnishstat, and I am surely not the only one who is very thankful for that.
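In case you have been living without it, this is the whole trick, straight from the prose above:

# one-shot probe status, undocumented but it works
varnishadm debug.health
# keep an eye on it while you poke at your backends
watch -n1 'varnishadm debug.health'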

Bad naming

Designing a language is tricky.


Explaining why purge is now ban, and that what is now purge is something else, is mind-boggling. This issue will be fixed in 10 years, when people are no longer running varnish 2.1 anywhere. Explaining all the three-letter acronyms that start with V is just a gas.
Showing someone ban("req.url = " + req.url) for the first time is bound to make them go “oh” like a raccoon just caught sneaking through your garbage.
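For the confused, here is a hedged VCL 4.0-flavoured sketch of the naming mess – no ACL checks shown, so please don’t ship it as-is:

sub vcl_recv {
    if (req.method == "BAN") {
        # the old v2 "purge": invalidate every cached object whose URL matches
        ban("req.url ~ " + req.url);
        return (synth(200, "Banned"));
    }
    if (req.method == "PURGE") {
        # what "purge" means nowadays: drop this one object (and its variants)
        return (purge);
    }
}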
Grace and Saint mode… that’s biblical, man. Understanding what it does and how to demonstrate the functionality is still for Advanced Users, explaining this to noobs is downright futile, and I am still unsure whether we wouldn’t all be better off for just enabling it by default and forgetting about it.
I suppose if you’re going to be awesome at architecting and writing software, it’s going to get in the way of coming up with really awesome names for things, and I’m actually happy that’s still the way they prioritize what gets done first.

Only for people who grok regex

Sometimes you’ll meet Varnish users who do code but just don’t grok regex. It’s weak, I know, but this language isn’t for them.

Uncertain current working directory

This is a problem on some rigs which have VCL code in stacked layers, or really anywhere where it’s more appropriate to call the VCL a Varnish program, as in “a program written for the Varnish runtime”, rather than simply a configuration for Varnish.

You’ll typically want to organize your VCL in such a way that each VCL file is standalone, with if-wrapped rules, and they’re all included from one main VCL file, stacking all the vcl_recv’s and vcl_fetch’es.

Because distros don’t agree on where to put varnishd’s current working directory – which happens to be wherever it was launched from, instead of always chdir $(dirname $CURRENT_VCL_FILE) – you can’t reliably specify include statements with relative paths. This forces us to use hardcoded absolute paths in includes, which is neither pretty nor portable.
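A tiny sketch of the problem, with hypothetical file names:

# main.vcl
include "/etc/varnish/conf.d/site-foo.vcl";  # absolute path: works no matter where varnishd was started
include "conf.d/site-foo.vcl";               # relative path: resolved from varnishd's cwd, i.e. distro roulette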

Missing default director in 4.0

When translating VCL to 4.0 there is no longer any language for director definitions, which means they are done in vcl_init(), which means your default backend is no longer the director you specified at the top, which means you’ll have to rewrite some logic lest it bite you in the ass.

director.backend() is without string representation, and what you set is backend_hint, so you cannot do old-style name comparisons; i.e. backends are first-class objects but directors are another class of objects altogether.
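For reference, a minimal VCL 4.0 sketch of the new way – the backend addresses are made up:

vcl 4.0;
import directors;

backend b1 { .host = "10.0.0.1"; }
backend b2 { .host = "10.0.0.2"; }

sub vcl_init {
    # directors are no longer part of the language; they live in a vmod and are built here
    new rr = directors.round_robin();
    rr.add_backend(b1);
    rr.add_backend(b2);
}

sub vcl_recv {
    # without this line your default backend is a plain backend, not the director you defined up top
    set req.backend_hint = rr.backend();
}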


VCL doesn’t allow unused backends or probes

Adding and removing backends is a routine ordeal in Varnish.
Quite often you’ll find it useful to keep backup backends around that aren’t enabled, either as manual failover backups, because you’re testing something or just because you’re doing something funky. Unfortunately, the VCC is a strict and harsh mistress on this matter: you are forced to comment out or delete unused backends :-(

Workarounds include using the backends inside some dead code or constructs like

sub vcl_recv {
	# touch the unused backend once so the compiler stops complaining, then set the real one
	set req.backend_hint = unused;
	set req.backend_hint = default;
	...
}

It’s impossible to determine how many bugs this error message has avoided by letting you know that backend you just added, er yes that one isn’t in use sir, but you can definitely count the number of Varnish users inconvenienced by having to “comment out that backend they just temporarily removed from the request flow”.

I am sure it is wise to warn about this, but couldn’t it have been just that, a warning? Well, I guess maybe not, considering distro packaging is silencing error messages in init and reload scripts..

To be fair, this is now configurable in Varnish by setting vcc_err_unref to false, but couldn’t this be the default?
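If you want to flip it, something along these lines should do – the listen address and VCL path are placeholders:

varnishd -a :6081 -f /etc/varnish/default.vcl -p vcc_err_unref=off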

saintmode_threshold default considered harmful


If many different URLs keep returning bad data or error codes, you might conceivably want the whole backend to be declared sick instead of growing some huge list of sick URLs for this backend. What if I told you your developers just deployed an application which generates 50x error codes, triggering your saint mode for an infinite number of URLs? Well, then you have just DoSed yourself, because you hit this threshold. I usually enable saint mode only after giving my clients a big fat warning about this one, because quite frankly it comes straight out of left field every time. Either saint mode is off, or the threshold is Really Large™ or even ∞, and only in some special cases do you actually want this set to an actual number.
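A sketch of the defensive setting I mean, for Varnish 3.x – the number is an arbitrary stand-in for Really Large™:

varnishadm param.set saintmode_threshold 5000000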

Then again, maybe it is just my clients and the wacky applications they put behind Varnish.

What is graceful about the saint in V4?

While we are on the subject – grace mode being the most often misunderstood feature of Varnish – the thing has changed so radically in Varnish 4 that it is no longer recognizable to users, and they often make completely reasonable but devastating mistakes trying to predict its behavior.

To be clear on what has happened: saint mode is deprecated as a core feature in V4.0, while the new architecture now allows a type of “stale-while-revalidate” logic. A saintmode vmod is slated for Varnish 4.1.

But as of 4.0, say you have a bunch of requests hitting a slow backend. They’ll all queue up while we fetch a new one, right? Well yes, and then they all error out when that request times out, or if the backend fetch errors out. That sucks. So let’s turn on grace mode, and get “stale-while-revalidate” and even “stale-if-error” logic, right? And send If-Modified-Since headers too, sweet as.

Now that’s gonna work when the request times out, but you might be surprised that it does not when the request errors out with 50x errors. Since beresp.saint_mode isn’t a thing anymore in V4, those error codes are actually going to knock the old object out of cache, and each request is going to break your precious stale-if-error until the backend probe declares the backend sick and your requests become grace candidates.

Ouch, you didn’t mean for it to do that, did you?


And if, gods forbid, your apphost returns 404s when some backend app is not resolving, bam – you are in a cascading hell fan fantasy.

What did you want it to do, behave sanely? A backend response always replaces another backend response for the same URL – not counting Vary headers. To get a poor man’s saint mode back in Varnish 4.0, you’ll have to return (abandon) those erroneous backend responses.
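A minimal sketch of that poor man’s saint mode – in a real setup you would want to restrict this further, for instance to background fetches only:

sub vcl_backend_response {
    # don't let a 50x response replace the stale object we could have served under grace
    if (beresp.status >= 500) {
        return (abandon);
    }
}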

Evil grace on unloved objects

For frequently accessed URLs grace is fantastic and will save you loads of grief, and those objects could have large grace times. However, rarely accessed URLs suffer a big penalty under grace, especially when they are dynamic and meant to be updated from the backend. If a URL is meant to be refreshed from the backend every hour, and Varnish sees many hours between each access, it’s going to serve up that many-hours-old stale object while it revalidates its cache.
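The knobs in question, with made-up numbers – generous grace is great for hot objects, painful for the rarely-hit dynamic ones:

sub vcl_backend_response {
    set beresp.ttl = 1h;
    set beresp.grace = 6h;   # how long past TTL we may serve stale while revalidating
}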

The stale-while-revalidate diagram might help you understand what happens in the “200 OK” and “50x error” cases of graceful request flow through Varnish 4.0.

Language breaks on major versions

This is a funny one, because the first major language break I remember was the one that I caused myself. We were making security.vcl, and I was translating rules from mod_security and having trouble with it because Varnish used POSIX regexes at the time. I was writing this really god-awful script to translate PCRE into POSIX when Kristian, who conceived of security.vcl, went to Tollef – both were working in the same department at the time – and asked, in his classical brook-no-argument kind of way, “why don’t we just support Perl regexes?”.
Needless to say, (?i) spent a full 12 months afterwards cursing myself while rewriting tons of nasty client VCL code from POSIX to PCRE and fixing occasional site-devastating bugs related to case sensitivity.

Of course, Varnish is all the better for the change, and would get nowhere fast if the devs were to hang on to legacy, but there is a lesson in here somewhere.


So what's a couple of sed 's/req.request/req.method/'s every now and again?
This is actually the main reason I created the VCL.BNF. For one, it got the devs thinking about the grammar itself as an actual thing (which may or may not have resulted in the cleanups that make VCL a very regular and clean language today), but my intent was to write a parser that could parse any version of VCL and spit out any other version of VCL, optionally pruning and pretty-printing of course. That is still really high on my todo list. Funny how my clients will book all my time to convert their code for days but will not spend a dime on me writing code that would basically make the conversion free and painless for everyone forever.

Indeed, most of these issues are really hard to predict consequences of implementation decisions, and I am unsure whether it would be possible to predict these consequences without actually getting snagged by the issues in the first place. So again: varnish devs, I love you, what are your pet peeves? Varnish users, what are your pet peeves?

Errata: vcc_err_unref has existed since Varnish 3.

sound sound

Monday, December 8th, 2014

Intermission..

Recently I’ve been doing some video editing.. less editing than tweaking my system, though.
If you want your jack output to speak with Kdenlive, a most excellent video editing suite,
and output audio in a nice way without choppiness and popping, which I promise you is not nice,
you’ll want to pipe it through pulseaudio, because the alsa-to-jack stuff doesn’t do well with Phonon, at least not on this convoluted setup.

Remember, to get that setup to work, ALSA pipes to jack with the pcm.jack { type jack .. } thing, and you remove the alsa-to-pulseaudio stupidity at /usr/share/alsa/alsa.conf.d/50-pulseaudio.conf.

So, once that’s in place, it won’t play even though Pulse found your Jack because your clients are defaulting out on some ALSA device… this is when you change /etc/pulse/client.conf and set default-sink = jack_out.

danger at the docks

Friday, November 14th, 2014

docker.io
Docker.io is taking the world by storm, but a day at the docks is not without its perils. Here I hope to inspire you to try out docker by showing you how to avoid its pitfalls.

In the days of yore

As the FreeBSD jailers and Solaris zoners will attest to, containerizing your services is a great boon: it saves space and resources, provides easy management akin to chroots, and offers potential security benefits, all without the overhead of full-blown virtual machines.

Linux has had containers for the longest time, in the ancient form of User Mode Linux, which actually ran a kernel in userland, and more recently OpenVZ, which was more like jails.

The former didn’t lend itself to production deployments and the latter never made it into the Linux mainline, coming at a time when people were more interested in virtualization than containment. In recent years, a kernel facility named cgroups has made LinuX Containers (LXC) possible, which has afforded the management, if not the security, of BSD jails.

what can be gained

The biggest potential benefit from containers is that CPU, memory and disk resources are 100% shared at native speeds, so no libraries and no data need ever be duplicated on disk nor in memory.

In FreeBSD jails, this was achieved by providing most of the system read-only like /usr, /lib and /bin, and sharing it amongst jails. This worked quite well, but was surprisingly tricky to update.

You can do similar stuff with LXC, just as long as you understand that if it breaks, you get to keep all the pieces. This gives you full control, and means that I for one have LXC instances in production with uptimes of 1200 days and counting.

minimalizing

Taking the approach of single-container-single-responsibility further, you could, instead of deploying whole system containers, create image filesystems that contain only the bare necessities. For instance, your Python application would have, apart from its code, just the Python runtime, libc and other dependent libraries, and naught much else.

Inspired by the “leaner is better” philosophy backed by the experience of running LXC in production, we built this minimal deployment framework complete with a tool to magically find all the required libraries.
Awesomely small images come from this approach, where the “contact surface” of the application has shrunk to nothing but the app itself. It was far from perfect, though, serving to make the images awesomely less debuggable and manageable, and it never made it into production proper.

layer upon layer is two steps further

In comes Docker, and its concept of filesystem image layers based on AUFS. The approach isn’t novel itself, having been used by live-CD distributions for the longest time, but it’s the first that provides tools to manage the layers effortlessly for containers. So you can now have 100 servers with 100 application layers, and all your Ruby applications share one runtime layer and your Python applications share another, and they all run on the same base image of Ubuntu, and they do all that transparently, without you having to consciously think about which bit goes where.

Docker takes another step further, borrowing heavily from distributed social source control à la GitHub, allowing you to clone, build, push, pull, commit, share and remix images as easy as that.

This is the type of thing that blows disk-image-based virtualization straight out of the water.
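To make the git-flavoured workflow concrete, a hedged sketch with made-up image names:

# pull a base layer, stack your app on top, share the result
docker pull ubuntu
docker build -t myorg/myapp .     # assumes a Dockerfile in the current directory
docker push myorg/myapp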

Perils and rough starts

The Docker docs are well written and will get you spawning containers and dockerizing applications in no time at all. What they will not tell you is how to run containers in production for all values of production.

In particular, the following things require special attention:

  • changing ips
  • service discovery
  • dns issues
  • cache clash

.. and that is precisely what we will talk about next time.

systemd crash course, with remote logging

Saturday, September 20th, 2014


The world is taking systemd by storm and there is no looking back now.

Still, there are some elements that you would expect to be there that are missing. One of them is remote logging!

Another thing missing is a decent crash course [*]. This is frustrating because making init scripts and checking logs is the staple diet of any old sysadmin.

Read on to readjust gently but quickly.

Systemd crash course

Find the “unit” – that’s the new name for “init script name” to us old-timers:

systemctl list-units --type=service
# this one is way more verbose
systemctl list-units

Start, stop, restart, reload, status:

systemctl start sshd
systemctl stop sshd
systemctl restart sshd
systemctl reload sshd
# status, gives some log output too
systemctl status sshd

Check ALL the logs, follow the logs, get a log for a service:

journalctl -l
journalctl -f
journalctl -u sshd

Install a systemd service:
(This is what a systemd service description looks like)

    cat > ossec.service << EOF
[Unit]
Description=OSSEC Host-based Intrusion Detection System

[Service]
Type=forking
ExecStart=/var/ossec/bin/ossec-control start
ExecStop=/var/ossec/bin/ossec-control stop

[Install]
WantedBy=basic.target
EOF

# now copy that file into the magic place, /etc/init.d in the old days
install -Dm0644 ossec.service /usr/lib/systemd/system/ossec.service

# now make systemd pick up the changes
systemctl daemon-reload

Enable or disable a service:

systemctl enable ossec
systemctl disable ossec


Remote logging

OK so you now know your way around this beast.
Now you want remote logging.

According to the Arch wiki [#], systemd doesn’t actually do remote logging (yet. what else doesn’t it do?) but it will helpfully spew its logs onto the socket /run/systemd/journal/syslog if you knock twice, gently.

To convince systemd to write to this socket, go to /etc/systemd/journald.conf and set

ForwardToSyslog=yes

then issue a journald restart

systemctl restart systemd-journald

You can install syslog-ng and it should pick up the logs. Test it now by making a log entry with

logger -t WARN zool

and check /var/log/syslog.log

If you have a distro running systemd, then hopefully syslog-ng will be recent enough to be aware enough of systemd that things should just work at this point.

If it doesn’t, the source src { system(); }; in syslog-ng.conf isn’t picking up the socket file. Fix this by adding the socket explicitly, changing the source in /etc/syslog-ng/syslog-ng.conf like so:

source src {
  unix-dgram("/run/systemd/journal/syslog");
  internal();
};

if you are working with a laptop or desktop then the console_all on tty12 is handy too:

log { source(src); destination(console_all); };
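If console_all isn’t already defined in your syslog-ng.conf, a minimal destination for it could look like this (tty12 as mentioned above):

destination console_all { file("/dev/tty12"); };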


[*] IMHO Fedora’s cheatsheet on systemd is a little too cluttered
[#] Arch has a decent intro to systemd

No sockpuppets were harmed in the making of this blog entry. Any and all images are © whomever made them, and I love you for not suing me.

3g wwan pain revisited with gobi

Monday, June 30th, 2014

Hi all,
after a long hiatus I’ve finally found something annoying enough to share with you: namely, my 3g modem.
I have spoken at length about 3g on linux before.

I have a Thinkpad X201 laptop and it has a Qualcomm Gobi 2000 3g modem. This modem does some fancy mode switching, but not in the regular way, by getting some control bytes. Therefore, usb-modeswitch can’t help you.

Instead, the modem needs firmware loaded to switch from usb id 05c6:9204 to usb id 05c6:9205.
On linux, the firmware loading is achieved with gobi-loader.

All this is nicely documented at thinkwiki; unfortunately that doesn’t make it one bit easier for the regular Joe.

The trouble is, the firmware is not redistributable, so the whole thing is quite tricky!

  1. download 7xwc48ww.exe from the Thinkpad support site,
  2. unpack the drivers with wine or cabextract. I used wine:
    cp 7xwc48ww.exe ~/.wine/drive_c
    wine 7xwc48ww.exe

    Make sure you run the driver installation after extraction, otherwise execute setup again: wine ~/.wine/drive_c/DRIVERS/WWANQL/setup.exe

  3. copy the firmware:
    cd "$HOME/.wine/drive_c/Program Files/QUALCOMM/Images/Lenovo"
    sudo mkdir /lib/firmware/gobi
    sudo cp 6/UQCN.mbn UMTS/* /lib/firmware/gobi/
    

    This was the tricky part, unpacking and selecting the correct firmware out of the 12 different sets of files provided in that directory.

  4. reload the driver: modprobe -r qcserial; modprobe qcserial
  5. dmesg should now show you have three USB serial devices /dev/ttyUSB0 (control interface), /dev/ttyUSB1 (the actual modem), and /dev/ttyUSB2 (the GPS, which you need windows to enable once).
    usb 2-1.4: Product: Qualcomm Gobi 2000
    usb 2-1.4: Manufacturer: Qualcomm Incorporated
    qcserial 2-1.4:1.1: Qualcomm USB modem converter detected
    usb 2-1.4: Qualcomm USB modem converter now attached to ttyUSB0
    qcserial 2-1.4:1.2: Qualcomm USB modem converter detected
    usb 2-1.4: Qualcomm USB modem converter now attached to ttyUSB1
    qcserial 2-1.4:1.3: Qualcomm USB modem converter detected
    usb 2-1.4: Qualcomm USB modem converter now attached to ttyUSB2
    
  6. If you have gotten this far, your 3g modem is basically working and you can set up wvdial as in my previous post, pointing at the /dev/ttyUSB1 modem (see the sketch below).

    Note however you still need to enable the modem with echo enable > /proc/acpi/ibm/wan
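    For completeness, a hedged wvdial sketch – the APN and numbers are placeholders, so check what your carrier actually wants:

    [Dialer Defaults]
    Modem = /dev/ttyUSB1
    Init2 = AT+CGDCONT=1,"IP","internet"
    Phone = *99#
    Username = x
    Password = x
    Stupid Mode = 1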

software defined radios with alsa, jack and pulseaudio and a professional sound card

Monday, September 9th, 2013

Preramble


All the people out there are neatly divided into two piles:
the “it works for me and does what I need” camp, and
the “always always always gets in the way so killitwithfire” camp,
and this fragmentation may be the best argument that pulseaudio should be up for a whammy.

For all of you tl;dr’s (too lazy, do you read?) here’s a short summary:

  • alsa: just works. confusion in the .asoundrc
  • pulseaudio: controls per process, less buffer fuckups, “just works”
  • jack: controls per process, realtime, firewire/usb, pro audio apps
  • firewire: fantastic, massive pain but getting there
  • software defined radios: so worth it!

But read on to learn the recipe for the secret magic sauce.

The reason I am writing this is not because pulseaudio is evil and sucks. However, it was the last straw on the long and winding road that broke the camel’s back. Pulseaudio assumes you are running systemd, and talks to console-kit-daemon, which is surely one of Satan’s most trusted advisers and a harbinger of the Apocalypse.

Pulseaudio

We know all this, and yet why do I bother?
I didn’t come here to rant about Pulseaudio though:
I’ve gathered you here today to tell a story about Software Defined Radios.

Introducing a large and characteristic cast of characters, and howto make them work together in the best possible way.

My way.

The Cook

Well: a friend of mine got a hold of a few Terratec DVB-dongles with the awesome rtl-chipset and Elonics tuner, which means I can play with radio!


Except the first time I tried I got stuck in gnuradio dependency hell and never got anything working… which was very nearly a year ago.

Things weren’t easy back then, gqrx, the pretty waterfall app wasn’t mature enough and you were stuck using something far more fugly (.net code running under mono, shudder the thought).

You still have to build gnuradio from source (because the packaged versions aren’t new and shiny enough), but the piper’s playing to a different tune now, with the advent of build-gnuradio it’s possible to sit back and relax while gnuradio and all its dependencies builds before your very eyes.

Yes indeed this takes longer than getting the cows back from pasture but it’s worth it, because with a full gnuradio build you can now have a hope of getting gqrx the shiny waterfall to compile!


The Thief

Except you didn’t realize that without the -m option to build-gnuradio it builds gnuradio 3.6, which is not the 3.7 that gqrx needs! Joke’s on you, haha ha ha.

Then you build gqrx and realize you can’t get it to talk to your Terratec, because why? Because it’s a DVB dongle and the kernel has helpfully inserted the DVB module to enable it! So run along now and add

# rtlsdr
blacklist dvb_usb_rtl28xxu

to your /etc/modprobe.d/blacklist.conf – now you are ready to fire up gqrx and gnuradio-companion.

His Wife

That’s when you might discover that, if you are unlucky, there is no sound in gqrx. It’s working and showing you a waterfall, but you can’t hear the demodulated waves!


Why oh why, well let me tell you why: It absolutely needs, requires Pulseaudio to produce sound!

OK, fair enough, some of you out there are in the “works-for-me”-camp and ipso facto you’re done here, gqrx works and IT ALL JUST WORKS but the world is not so easy for the rest of us.

The rest of us bite the bullet and install pulseaudio to get this thing working. Which is as far as you need to go if you’re semi-sane and normal or even when you are running this thing on a Raspberry PI or you’re building a beagleboard spectrum analyzer.

Actually you don’t even need Pulseaudio for that last project..

Her Lover


What I have neglected to tell you however is that I have an Echo Audiofire. I was impressed with these little firewire-driven sound cards back when my bro had the small and portable Audiofire2.

Sound quality and flexibility wise they are unbeatable, certainly if you want professional quality sound input and output.

Firewire sound also has the major advantage over USB sound in that firewire packets aren’t quantized in time, which means a lot if you’re doing midi or other realtime music stuff. Latency is a killer.

You might also be aware that the higher the sample rate of your sound card, the higher the bandwidth of your homebrew SDR radio..

Anyways, firewire soundcards “just work” with asio drivers in Windoze but are a small pain of their own to set up in Linux. ALSA never heard of them. Pulseaudio doesn’t speak firewire. For anything resembling realtime professional audio under Linux you’ll have to go FFADO and JACK.


Also, never think that just any firewire card will work in Linux: a lot of vendors continue to ignore the platform (understandably, because of the atrocious state of professional audio under Linux) and there are some wondrous cards out there that have just pitiful support here.

The jackd brothers

You’re walking down a long path, you’re going to Mecca. You come upon a fork in the road where two brothers live. They are twins, and you know that one of them always speaks the truth, and the other always lies. You need to ask them the way to Mecca, but how?

As there are two problems with anything in this world, there are two problems with Jack. Firstly, jack forked into jack1 and jack2, and both versions are strangely alive to this day, and there is netjack1 and netjack2 and well, what the fuck.

FFADO

To complicate matters there are two competing linux driver subsystems for firewire and both of them live to this day, with one supporting some firewire devices and one supporting other firewire devices, and one being supported in jack1 and the other in jack2. Furthermore you need a recent FFADO to get it all working.

Thankfully in recent debians and ubuntus the right kind of jackd talks to the right kind of firewire device in the kernel and matches the right ffado to get things to work, but you still need to know your way around.


The Answer, not The Question

Know what question to ask to get the right answer, which is this: at least for the Echo Audiofire, jackd2 works nicely with ffado and recent-ish kernels, as long as you run jackd as your X user
with jackd -v -dfirewire, and then fire off qjackctl and ffado-mixer and then all your sweet sweet jack apps. For now, let’s assume you are jackd2’ing things, but let us just say that at this point it no longer matters.
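Spelled out, the incantation is roughly this, run as your X session user rather than root:

jackd -v -dfirewire &
qjackctl &
ffado-mixer &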

What you don’t know is that to get the Echo to work, you will likely have to upgrade your Echo firmware, either by hooking it up to a windoze box with the right drivers and letting them reflash the rom, or by playing with the scary commands ffado-diag and ffado-fireworks-downloader, insert magic (literally!), etc.

Having done all this voodoo I still had problems that required rebooting the sound card or booting into windoze to reset it to a known state that jackd could talk to, but with newer kernels/libffado/jackd versions the problem evaporated somewhere along the line.


Realtime patchset to the Linux kernel? Let’s not get into it… I am not a professional musician nor am I a sound engineer, and they would probably use windows or mac.

The Waitress

Confusion.


At this point you might be wondering where I’m going with things. Let’s recap:
I’ve got a gqrx waterfall on Terratec DVB RTL-SDR that only supports Pulseaudio, and I’ve got an Echo Audiofire soundcard on firewire that only listens to jack. I can hook pulseaudio to Alsa.

Indeed, installing pulseaudio I discovered it will do this automatically, /usr/share/alsa/alsa.conf.d/pulse.conf suddenly appears and fucks your setup by putting everything in ALSA through Pulseaudio.

There is also some shit in /etc/pulseaudio/default.pa that is supposed to detect jackdbus and make pulseaudio use jack, but that stuff just never worked.

Of course, I have an .asoundrc file that takes everything from ALSA and puts it up JACK, so how do you think that’s gonna work?

Well, it doesn’t work.
So, it’s time to bring out the guns again.

The Heist

# convert alsa API over jack API
# use it with
# % aplay foo.wav

# use this as default
pcm.!default {
    type plug
    slave { 
       pcm "jack" 
       #rate 96000
       }
}

ctl.mixer0 {
    type hw
    card 1
}

# pcm type jack
pcm.jack {
    type jack
    playback_ports {
        0 system:playback_1
        1 system:playback_2
    }
    capture_ports {
        0 system:capture_1
        1 system:capture_2
    }
}

(that was .asoundrc)

load-module module-jack-sink
load-module module-jack-source 

in your /etc/pulseaudio/default.pa
but put it somewhere near the top, instead of load-module module-alsa-sink, before the ifexists module-jackdbus shit.

and rm /usr/share/alsa/alsa.conf.d/pulse.conf

Now remember that jack is running as you, so make sure that Pulseaudio is running as you as well:

sudo service pulseaudio stop
pulseaudio -v

The Payoff


At this point you can run your freshly compiled gqrx waterfall radio outputting to pulseaudio outputting to jackd and at the same time enjoy ALSA apps talking to jack directly and jack apps doing jack.

The Zombie Apexeclipse

Tuesday, December 18th, 2012

The end is nigh!!11!1!1


.. or so everyone will have you believe these days. I’m not one to throw myself on the bandwagon of the times, but waxing practical on the subject is not beneath me. Too bad the prophesied apocalypse is no more than a gross misinterpretation of the Mayan calendar and the myths surrounding this ancient time-keeping device.

TL;DR, If you want to cut to the chase I suggest skipping to The End.

Truth is, the calendar in question is a very accurate time-keeping – call it time-division – device, and far from coming to an end, it is merely turning from the Long Count date of x.12.19.19.19.19 to x.13.0.0.0.0. Keep your pants on.

What I’ve found interesting in all this is the basis for the Mayan Long Count, a count which has succeeded in taking account of great cosmic phenomena such as precession, phenomena that even our own “modern” calendar has failed to take into account.

The basis, or at least one of the bases upon which the Long Count is built, is the natural cycles of our sun. Just like the inhabitants of ancient villages in India can every year predict the flood and drought of their nearest river to within the exact meter, having measured the periods and passed on this information for generations, the Mayans and their ancestors knew the sun and the starry sky better than we know them today. They knew them so well as to be able to predict major solar events.

The sun, you see, has a certain internal cycle, which we’ve in modern times managed to figure out repeats about every 11 years, with high and low solar activity. The Mayans were smarter and figured out that there are even larger cycles. And one of these larger cycles is about to come around. Even NASA agrees: there is a solar storm coming up. If it’s properly powerful, oh, maybe like back in 1989, it could knock out telephones and power grids.

… which is not a real biggie now is it, but if it *does* happen, and lots of people have been “primed” to believe that this is indeed the end of days, it could result in a week-long panic the likes of which our society has yet to see. This is not Y2K, and it’s not the apocalypse, but it could be filled with panic-stricken people and that’s nearly as dangerous as anything zombie-like.


Kacper’s Zombie Apocalypse Bug Out List.

acquisition list

  • shelter
    .. bomb shelter is ideal, but barring that any easily defendable point is great. Sturdy walls and many vantage points make a structure wherein 3 people can defend against 30… or adverse elements and bad weather in general.
  • food
    .. for a week at least, you’ll have to forage for more. If you’ve built your bomb shelter to bug out in until the ashes fall, knock yourself out on canned food but be prepared to ration.
  • friends
    .. all the food and shelter in the world won’t watch your back or keep you from going crazy with loneliness. Know who they are and where they are should shit hit the fan. Find your friends and watch their backs and love will conquer all.
  • weapons
    .. for the haters, for self-defence, for catching prey.
  • transportation
    .. to reach shelter. will you travel The Road on foot? Will you get a horse? A bike? Engines depend on fuels that might quickly become scarce, so either stock up or plan for acquisitions.
  • warmth / fire
    .. it’s obvious, but you, your friends and your gun ain’t gonna survive in the cold and the blight, so you better bring some storm matches, flint or such ’cause your zippo ain’t gonna last you forever.
  • power
    .. by this I mean not only fuel and fire but also electricity, but I also mean power in the sociopolitical sense. In the coming days you need to be easy to like or hard to kill.
  • maps
    .. to where you can find more of the above.
  • comfort/convenience items
    .. alcohol, cigarettes, blankets, games, salt, trinkets –
    pretty much anything you can trade.
Until then and beyond, you can enjoy our very own Mayan Calendar.

the paranoid console viewer

Tuesday, June 19th, 2012

Hi all,
I know it’s been a while since my last post.
There is lots to talk about, but let’s start afresh with something short and sweet.

Let me paint a picture for you:
There is something broken in the most holy and secure part of your network. You need to call support to have them look at it. The support rep wants console access, but you can’t give them axx to your holiest cream pie.
They offer to take over your desktop with a java rootkit app like TeamViewer, GoToMeeting or WebEx.
You decline. You need to stay in control, but show them what they need to see, that and only that.
How?

Let me be clear on the problem statement:
Read-only shell access to the most secure host, which is not available over the wire, viewed by multiple parties at the same time.

Here’s how to do that with SSH, screen(1) and some foo,
with ssh->chroot->rbash->readonly multiuser screen->reverse ssh->openvpn:

You will need a linux server in an “unsafe” zone which is exposed to your support rep on the internet or thru VPN.

  1. Create the user to be contained on your unsafe box, with the restricted bash shell:
    unsafe# export user=rep; adduser $user; chsh -s /usr/bin/rbash $user
  2. (Bonus:) chroot/contain the user within sshd_config (see the sketch after this list)
  3. Setup multiuser screen on the unsafe box. There are lots of guides for it, but the short and sweet of it is: unsafe# chmod +s `which screen`; chmod 755 /var/run/screen Indeed, this increases the attack surface, and therefore we call this box the unsafe one.
  4. ssh from the secure zone to the unsafe server, with a reverse tunnel back:
    secure# ssh -R 2222:localhost:22 unsafe
  5. Run screen from YOUR account and do :addacl $user, then :chacl $user -w "#", then :chacl $user -x "?" – replace $user with whatever you used in step 1. Then, still in your screen: :multiuser on
  6. Win! Now you can reverse ssh back to the secure zone and let $user on the unsafe box read the terminal without being able to access anything but what you show her.
  7. Bonus: Add `screen -x $youraccount/` to $user’s .profile and $user will drop straight into the locked screen. Remember that a multiuser screen is read-write-execute for all accounts that are addacl’d,
    so you might want to chacl before enabling the $user account login.

    And there you have it, a superparanoid reverse secure-unsecure remote shell viewer.
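    As promised in step 2, a rough sshd_config sketch for containing the rep user – ChrootDirectory must be a root-owned path, and the rest is belt and braces:

    Match User rep
        ChrootDirectory /home/rep
        AllowTcpForwarding no
        X11Forwarding no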

    0k

pixie dust

Thursday, February 2nd, 2012

we’ve booted backtrack off usb before, now that’s kinda
boring and installing backtrack onto the usb with unetbootin
is painfully slow and not the same as booting straight off the
usb which is what we want in this case; not an install
but a fresh copy every boot

there is someone disagreeing in the back of the room, now
wouldn’t this be a lot more complicated? No sir. on the contrary
booting fresh every time makes work a lot simpler; you gain a
direct relationship to what you store where, and where you
access your data from

but there is another one in the front; you sir, you feel that
one would have to sacrifice many of the comforts such as all
the tools of the trade at hand and permanent local storage -
but at best this is a lazy roadblock to salvation; by booting
off of local storage we have local storage at hand in a more
practical format, be that even a microscopic carrier which can be
removed and replaced with sufficient storage for everything
and then some

the medium can be embedded, destroyed or ingested, so
the impermeability of accidentally recorded data and the
robustness, accessibility and portability of removable storage
come very much in hand in situations that require
inconspicuousness, anonymity, covertness, plausible deniability
or a high degree of reliability in day-to-day computing

the totality of the system given to remaining only in memory
causes it to be independent of other storage for operations, and when
operations cease from loss of any exterior preconditions, the
system simply ceases. when preconditions reoccur – by powering on
and executing the first block – the system can be relied upon to
simply start afresh, completely unperturbed by any previous history

should the need arise to patch the system; say some new app or
capability is called for where there is no time to rebuild,
a patch should be scripted whenever there is certainty that
the capability will require a repeat performance. it is advised
to devise a patch which includes all dependencies.

thus the fresh system becomes more capable and more accessible
over time, just like an install. patches can then easily be
rolled into the system should they prove useful to others.

But how does one do it? Well, it’s easy but unfortunately
not as easy as overwriting the boot device; it’s just not
practical because partitioning is always an individual consideration

  • . there are often other files on the block device
  • . choice of filesystem and memory technology has much bearing
  • . the block device is larger or smaller than expected
  • instead, we allow any bootable partition scheme and any
    filesystem and memory technology, as long as the storage
    requirements of the system are met;

    here’s how to clone:

    # /mnt/usb is a placeholder for wherever your target partition is mounted
    cp -a boot/ apt/ casper/ gone/ preseed/ syslinux/ /mnt/usb/
    syslinux /dev/partition
    mbr /dev/device
    

    but that’s fine, it’s been done and all, but even the ability to
    boot the system with precisely zilch local storage comes in
    handy, and for that we have pixie dust.

    pixie daemon and tiny ftp should be pointing at a path
    exactly matching the dhcp-provided path.. otherwise
    you will have worries!

    /etc/pxe.conf:

    interface=eth1
    service=X86PC,0,0,local,Local boot
    service=X86PC,0,0,pxelinux,PXELinux
    tftpdbase=/var/lib/tftpboot
    domain=truly.yours
    

    /etc/default/tftpd-hpa:
    TFTP_DIRECTORY="/var/lib/tftpboot/"

    /etc/dnsmasq.conf:

    dhcp-boot=/var/lib/tftpboot/pxelinux,vulcano,10.10.10.86
    

    “high speed” tftp daemons and multicast can be found but it is
    advised to stick to tftpd-hpa and dnsmasq with no esoterics due
    to the sheer amount of variables introduced.

    /var/lib/tftpboot/pxelinux.cfg/default:

    # not strictly necessary but makes the menu pretty
    menu hshift 13
    menu width 49
    menu margin 8
    
    menu title BackTrackBoot
    default vesamenu.c32
    display f.txt
    timeout 600
    
    label local
    menu label Local Harddisk
    localboot 0
    
    menu begin bt
    menu title BackTrack 5
    # ok here comes the real shit
    label backtrack5
    menu label BackTrack R1
    kernel bt5/vmlinuz
    append boot=casper netboot=nfs nfsroot=vulcano:/mnt/bt5 initrd=bt5/initrd.gz text splash vga=791 file=/cdrom/preseed/custom.seed --
    menu end
    

    you’ll need to copy into tftpboot/bt5 the initrd.gz and vmlinuz from the backtrack ISO’s /casper folder (which you can mount with mount -o loop -t iso9660 bt5.iso /mnt/bt5)

    the rest of the files you provide to the bootee over NFS

    /etc/exports:

    /mnt/bt5 10.10.3.0/24(rw,sync,no_subtree_check) 10.10.10.0/24(rw,sync,no_subtree_check)
    mount -t iso9660 -o loop BT5R1-GNOME-32.iso /mnt/bt5
    

    add an http server with kickstart / preseed files for an ever more powerful setup,
    in which case you replace the file= stanza in the append line with
    url=http://host/path/to/preseed

    more on preseeds… maybe later.

    Now restart all dependent services:

    /etc/init.d/nfs-kernel-server restart
    /etc/init.d/tftpd-hpa restart
    /etc/init.d/apache2 restart
    /etc/init.d/pxe restart
    

    debugging this setup usually requires tracing the process that is failing, so:
    - dhcp options tracing (dnsmasq verbose and tcpdump / wireshark)
    - verbose pxe
    - verbose foreground tftpd-hpa : in.tftpd -v -v -L /var/lib/tftpboot

    oh noes, o cert my *sniff* cert

    Monday, September 26th, 2011

    I’m not going to tell you about DigiNotar, whose bankruptcy filing this month held shock for no one, after they recently lost the keys to the grand vault in which the government held much stock. Though I have many comments upon the sophistication of the player that so thoroughly owned the most trusted agencies of the digital age….

    The cracker hardly needed them skillz, considering it has been a challenge to keep that whole corrupt industry accountable. The trouble with the central authority system is that even if only one of the keys is compromised, the system is broken and gives no assurances whatsoever. No warning bells either. Just a sweet silent man in the middle, passing along all the best parts to his lover.

    It’s not a joke for the 300,000+ people who documentedly had their emails and facepalms compromised. We thought he was kind to give an interview and we wait in awe for his next move.

    I’m not going to mention the fatal flaws in certificate revocation that became embarrassingly apparent when the damage was done.
    Which is hardly the matter, since this kind of thing is bound to crop up: that hole in TLS was deemed unexploitable – now there’s a Titanic if I ever saw one. Unsinkable. Too fat to die.

    SSL is an open book for those who dare to look, and it’s got more than a couple old bugs. It’s okay though, we can patch it, they will say. Dare to look the other way!
    Not that you need those anyway, since there are some really nice sslsnarfing techniques out there that entirely forgo attacks on SSL as “too inefficient”.

    But I say nay! Unacceptable. There is another way.. and we’re already doing it! We sign our own signatures and we back each other’s signatures.
    Now that’s business, something that the companies on your CA trusted list were painfully aware of when they laid down the law of the code and put themselves on the trust list. Yet still CAcert is not on your trust list, and warning bells fly off on some of the most trustworthy sites – self-signed ones.

    Just don’t ask them why or how, or anything that isn’t directly relevant. Do you even know what is on your trust list? You might just be surprised at what you can find.

    # ls -al /etc/ssl/certs | wc -l
    479

    How many of these do you trust? How many of these should you trust? I’ll tell you: *none*.

    We should not be adding static lists of central signing authorities to our systems. This is a brittle and dangerous system. We knew this, but hackers have now thankfully demonstrated it.
    A better way is for every person (and by extension every browser) to keep their own list of signing certs, and to exchange these certs with their friends (automagically, if you like). Your friends lists can come out of a social network, any social network, and it will mean that any site that has been vetted by one or more of your friends will likely be safe for you to use as well. It’s even better than that, you can check certs from multiple friends and detect discrepancies.

    That, my friends, is called the Web of Trust, and is a design that is heading in the right direction. convergence.io is doing something similar already in a Firefox near you, while GPG has worked like this for decades!

    It has to be simple. It has to be very simple. And it has to be chemically free of one word: ‘central’.

    One real easy way to do this on linux would be using git and signed manifests. I already do this in gone to ensure that only files on a manifest signed by a trusted key get installed.
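    Not the gone implementation itself, but the idea fits in a few lines:

    # sign a manifest of file hashes...
    sha256sum files/* > MANIFEST
    gpg --armor --detach-sign MANIFEST        # produces MANIFEST.asc
    # ...and refuse to install unless both the signature and the hashes check out
    gpg --verify MANIFEST.asc MANIFEST && sha256sum -c MANIFEST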