Archive for the ‘howto’ Category

NSA-proof SSH

Tuesday, January 6th, 2015

One of the biggest takeaways from 31C3 and the most recent Snowden-leaked NSA documents is that a lot of SSH stuff is .. broken.

I’m not surprised, but then again I never am when it comes to this paranoia stuff. However, I do run a ton of SSH in production and know a lot of people that do. Are we all fucked? Well, almost, but not really.

Unfortunately most of what Stribika writes in the “Secure Secure Shell” guide doesn’t work for old production versions of SSH. The cliff notes for us real-world people, who will realistically be running OpenSSH 5.9p1 for years, are hidden in the bettercrypto.org repo.

Edit your /etc/ssh/sshd_config:


Ciphers aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512,hmac-sha2-256,hmac-ripemd160
KexAlgorithms diffie-hellman-group-exchange-sha256
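
Before you restart sshd, it’s worth a quick sanity check so a typo doesn’t lock you out of a remote box. A minimal sketch (the restart command depends on your init system):

# test the config first; only restart if it parses cleanly
sshd -t && service ssh restart    # or: systemctl restart sshd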

Basically, the nice modern stuff is not available to those of us stuck in the early 2000s: the aes-*-gcm and chacha20-poly1305 AEAD ciphers, the curve25519-sha256 kex algorithm and the Encrypt-then-MAC message authentication modes. That’s right, the provably NSA-proof stuff isn’t supported. Upgrading at this point makes sense.

Still, we can harden SSH, so go into /etc/ssh/moduli and delete all the moduli whose 5th column is less than 2048, and disable the old SSHv1, DSA and ECDSA host keys:

cd /etc/ssh
mkdir -p broken
mv moduli ssh_host_dsa_key* ssh_host_ecdsa_key* ssh_host_key* broken
awk '{ if ($5 > 2048){ print } }' broken/moduli > moduli
# create broken links to force SSH not to regenerate broken keys
ln -s ssh_host_ecdsa_key ssh_host_ecdsa_key
ln -s ssh_host_dsa_key ssh_host_dsa_key
ln -s ssh_host_key ssh_host_key

Your clients, which hopefully run more recent versions of SSH, could have the following settings in /etc/ssh/ssh_config or ~/.ssh/config:

Host all-old-servers

    Ciphers aes256-gcm@openssh.com,aes128-gcm@openssh.com,chacha20-poly1305@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
    MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-ripemd160
    KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256

Note: Sadly, the -ctr ciphers lack the AEAD goodness of the -gcm and chacha20-poly1305 modes, and hmac-ripemd160 isn’t the strongest MAC. But if you disable these, there are plenty of places you won’t be able to connect to. Upgrade your servers to get rid of these weaker algorithms!
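
If you’re not sure what a given box actually supports, newer OpenSSH (roughly 6.3 and up) can list its algorithms, and a reasonably recent sshd can dump its effective config; a hedged sketch:

# on a newer client: list supported ciphers, MACs and kex algorithms
ssh -Q cipher; ssh -Q mac; ssh -Q kex
# on the server: dump what sshd actually ends up using after reading sshd_config
sshd -T | grep -iE 'ciphers|macs|kexalgorithms'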

Handily, I have made a little script to do all this and more, which you can find in my Gone distribution.

There, done.


Updated Jan 6th to highlight the problems of not upgrading SSH.
Updated Jan 22nd to note CTR mode isn’t any worse.
Go learn about COMSEC if you didn’t get trolled by the title.

sound sound

Monday, December 8th, 2014

Intermission..

Recently I’ve been doing some video editing.. less editing than tweaking my system, though.
If you want your JACK output to speak with Kdenlive, a most excellent video editing suite,
and output audio in a nice way without choppiness and popping, which I promise you is not nice,
you’ll want to pipe it through PulseAudio, because the ALSA-to-JACK route doesn’t play well with Phonon, at least not on this convoluted setup.

Remember, to get that setup to work, ALSA pipes to JACK via the pcm.jack { type jack .. } plugin, and you remove the ALSA-to-PulseAudio indirection at /usr/share/alsa/alsa.conf.d/50-pulseaudio.conf
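
For reference, that ALSA-to-JACK plumbing looks roughly like this in ~/.asoundrc (a sketch; it assumes the alsa-plugins JACK plugin is installed and that your JACK ports are the usual system:playback_1/2):

# ~/.asoundrc
pcm.jack {
    type jack
    playback_ports {
        0 system:playback_1
        1 system:playback_2
    }
    capture_ports {
        0 system:capture_1
        1 system:capture_2
    }
}
pcm.!default {
    type plug
    slave { pcm "jack" }
}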

So, once that’s in place, it won’t play even though Pulse found your JACK, because your clients are defaulting to some ALSA device… this is when you edit /etc/pulse/client.conf and set default-sink = jack_out.
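
In other words, a one-liner in the Pulse client config (the sink name matches what the JACK sink module registers as, typically jack_out):

# /etc/pulse/client.conf or ~/.pulse/client.conf
default-sink = jack_out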

danger at the docks

Friday, November 14th, 2014

Docker.io is taking the world by storm, but a day at the docks is not without its perils. Here I hope to inspire you to try out docker by showing you how to avoid its pitfalls.

In the days of yore

As the FreeBSD jailers and Solaris zoners will attest, containerizing your services is a great boon: it saves space and resources, provides easy management akin to chroots and potential security benefits, all without the overhead of full-blown virtual machines.

Linux has had containers for the longest time, in the ancient form of User Mode Linux, which actually ran a kernel in userland, and more recently OpenVZ, which was more like jails.

The former didn’t lend itself to production deployments and the latter never made it into the Linux mainline, coming as it did at a time when people were more interested in virtualization than containment. In recent years, a kernel facility named cgroups has made Linux Containers (LXC) possible, which has afforded the management, if not the security, of BSD jails.

what can be gained

The biggest potential benefit of containers is that CPU, memory and disk resources are 100% shared at native speeds, so no library and no piece of data ever needs to be duplicated on disk or in memory.

In FreeBSD jails, this was achieved by providing most of the system read-only like /usr, /lib and /bin, and sharing it amongst jails. This worked quite well, but was surprisingly tricky to update.

You can do similar stuff with LXC, just as long as you understand that if it breaks, you get to keep all the pieces. This gives you full control, and means that I for one have LXC instances in production with uptimes of 1200 days and counting.

minimalizing

Taking the single-container-single-responsibility approach further, instead of deploying whole system containers you could create image filesystems containing only the bare necessities. For instance, apart from its own code, your Python application would have just the Python runtime, libc and the other libraries it depends on, and naught else.

Inspired by the “leaner is better” philosophy, and backed by the experience of running LXC in production, we built this minimal deployment framework, complete with a tool to magically find all the required libraries.
Awesomely small images come out of this approach, where the “contact surface” of the application shrinks to nothing but the app itself. It was far from perfect though: it also made the images awesomely less debuggable and manageable, and it never made it into production proper.
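
For the curious, the library-finding magic boils down to little more than ldd in a loop; a rough sketch of the idea (the paths here are made up for illustration):

#!/bin/sh
# copy a binary plus every shared library it links against into a bare image root
app=/usr/bin/python          # the app you want to containerize
root=/tmp/minimal-root

install -D "$app" "$root$app"
for lib in $(ldd "$app" | awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^\//) print $i }'); do
    install -D "$lib" "$root$lib"
done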

layer upon layer is two steps further

In comes Docker, and its concept of filesystem image layers based on AUFS. The approach isn’t novel in itself, having been used by live-CD distributions for the longest time, but Docker is the first to provide tools that manage the layers effortlessly for containers. So you can now have 100 servers with 100 application layers, all your Ruby applications sharing one runtime layer and your Python applications sharing another, all of them running on the same Ubuntu base image, and all of it happens transparently, without you having to consciously think about which bit goes where.

Docker takes it another step further, borrowing heavily from distributed social source control à la GitHub, allowing you to clone, build, push, pull, commit, share and remix images as easily as that.

This is the type of thing that blows disk-image-based virtualization straight out of the water.
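
To make the layer story concrete, here’s roughly what that workflow looks like (the registry and image names are placeholders):

# pull a shared base layer, build your app layer on top of it, then share it
docker pull ubuntu:14.04
docker build -t registry.example.com/myapp:1.0 .    # Dockerfile starts with FROM ubuntu:14.04
docker push registry.example.com/myapp:1.0

# on any of your servers, only the layers they don't already have get pulled
docker pull registry.example.com/myapp:1.0
docker run -d registry.example.com/myapp:1.0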

Perils and rough starts

The Docker docs are well written and will get you spawning containers and dockerizing applications in no time at all. What they will not tell you is how to run containers in production for all values of production.

In particular, the following things require special attention:

  • changing ips
  • service discovery
  • dns issues
  • cache clash

.. and that is precisely what we will talk about next time.

systemd crash course, with remote logging

Saturday, September 20th, 2014


The world is taking systemd by storm and there is no looking back now.

Still, there are some elements that you would expect to be there that are missing. One of them is remote logging!

Another thing missing is a decent crash course [*]. This is frustrating because making init scripts and checking logs is the staple diet of any old sysadmin.

Read on to readjust gently but quickly.

Systemd crash course

Find “unit” – that’s the new name for “init script name” to us oldtimers:

systemctl list-units --type=service
# this one is way more verbose
systemctl list-units

Start, stop, restart, reload, status:

systemctl start sshd
systemctl stop sshd
systemctl restart sshd
systemctl reload sshd
# status, gives some log output too
systemctl status sshd

Check ALL the logs, follow the logs, get a log for a service:

journalctl -l        # all the logs, untruncated
journalctl -f        # follow the logs, like tail -f
journalctl -u sshd   # logs for a single unit

Install a systemd service:
(This is what a systemd service description looks like)

    cat > ossec.service << EOF
[Unit]
Description=OSSEC Host-based Intrusion Detection System

[Service]
Type=forking
ExecStart=/var/ossec/bin/ossec-control start
ExecStop=/var/ossec/bin/ossec-control stop

[Install]
WantedBy=basic.target
EOF

# now copy that file into the magic place, /etc/init.d in the old days
install -Dm0644 ossec.service /usr/lib/systemd/system/ossec.service

# now make systemd pick up the changes
systemctl daemon-reload

Enable or disable a service:

systemctl enable ossec
systemctl disable ossec


Remote logging

OK so you now know your way around this beast.
Now you want remote logging.

According to the Arch wiki [#], systemd doesn’t actually do remote logging (yet. what else doesn’t it do?) but it will helpfully spew its logs onto the socket /run/systemd/journal/syslog if you knock twice, gently.

To convince systemd to write to this socket, go to /etc/systemd/journald.conf and set

ForwardToSyslog=yes

then issue a journald restart

systemctl restart systemd-journald

You can install syslog-ng and it should pick up the logs. Test it now by making a log entry with

logger -t WARN zool

and check /var/log/syslog.log

If you have a distro running systemd, then hopefully your syslog-ng is recent enough to be aware of systemd, and things should just work at this point.

If it doesn’t, syslog-ng.conf’s source src { system(); }; isn’t picking up the socket file. Fix this by adding the socket explicitly, changing the source in /etc/syslog-ng/syslog-ng.conf like so:

source src {
  unix-dgram("/run/systemd/journal/syslog");
  internal();
};
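
And since the whole point was remote logging: once syslog-ng has the journal as a source, shipping logs off the box is plain syslog-ng configuration; a minimal sketch, assuming your log host sits at 192.0.2.10:

destination remote_loghost { tcp("192.0.2.10" port(514)); };
log { source(src); destination(remote_loghost); };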

If you are working on a laptop or desktop, the console_all destination on tty12 is handy too:

log { source(src); destination(console_all); };
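
If your syslog-ng.conf doesn’t define console_all already, something like this does the trick:

destination console_all { file("/dev/tty12"); };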


[*] IMHO Fedora’s cheatsheet on systemd is a little too cluttered
[#] Arch has a decent intro to systemd

No sockpuppets were harmed in the making of this blog entry. Any and all images are © whomever made them, and I love you for not suing me.

3g wwan pain revisited with gobi

Monday, June 30th, 2014

Hi all,
after a long hiatus I’ve finally found something annoying enough to share with you: namely, my 3g modem.
I have spoken at lengths about 3g on linux before.

I have a Thinkpad X201 laptop and it has a Qualcomm Gobi 2000 3g modem. This modem does some fancy mode switching, but not in the regular way of being sent some control bytes. Therefore, usb-modeswitch can’t help you.

Instead, the modem needs firmware loaded to switch from usb id 05c6:9204 to usb id 05c6:9205.
On Linux, the firmware loading is achieved with gobi-loader.

All this is nicely documented at thinkwiki; unfortunately that doesn’t make it one bit easier for the regular Joe.

The trouble is, the firmware is not redistributable, so the whole thing is quite tricky!

  1. download 7xwc48ww.exe from the Thinkpad support site,
  2. unpack the drivers with wine or cabextract. I used wine:
    cp 7xwc48ww.exe ~/.wine/drive_c
    wine 7xwc48ww.exe

    Make sure you run the driver installation after extraction; otherwise execute the setup manually: wine ~/.wine/drive_c/DRIVERS/WWANQL/setup.exe

  3. copy the firmware:
    cd "$HOME/.wine/drive_c/Program Files/QUALCOMM/Images/Lenovo"
    sudo mkdir /lib/firmware/gobi
    sudo cp 6/UQCN.mbn UMTS/* /lib/firmware/gobi/
    

    This was the tricky part, unpacking and selecting the correct firmware out of the 12 different sets of files provided in that directory.

  4. reload the driver: modprobe -r qcserial; modprobe qcserial
  5. dmesg should now show you have three USB serial devices: /dev/ttyUSB0 (control interface), /dev/ttyUSB1 (the actual modem), and /dev/ttyUSB2 (the GPS, which you need Windows to enable once).
    usb 2-1.4: Product: Qualcomm Gobi 2000
    usb 2-1.4: Manufacturer: Qualcomm Incorporated
    qcserial 2-1.4:1.1: Qualcomm USB modem converter detected
    usb 2-1.4: Qualcomm USB modem converter now attached to ttyUSB0
    qcserial 2-1.4:1.2: Qualcomm USB modem converter detected
    usb 2-1.4: Qualcomm USB modem converter now attached to ttyUSB1
    qcserial 2-1.4:1.3: Qualcomm USB modem converter detected
    usb 2-1.4: Qualcomm USB modem converter now attached to ttyUSB2
    
  6. If you have gotten this far, your 3g modem is basically working and you can set up wvdial as in my previous post, pointing at the /dev/ttyUSB1 modem (see the sketch below).

    Note however you still need to enable the modem with echo enable > /proc/acpi/ibm/wan
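
    For completeness, a hedged sketch of the wvdial side of things in /etc/wvdial.conf (the APN is carrier-specific, so treat "internet" as a placeholder):

    [Dialer Defaults]
    Modem = /dev/ttyUSB1
    Init1 = ATZ
    Init2 = AT+CGDCONT=1,"IP","internet"
    Stupid Mode = 1
    Phone = *99#
    Username = ' '
    Password = ' '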

bup quick reference

Thursday, April 25th, 2013

Git is nice and flexible. I wish my backups were that flexible. Thankfully, my wishes have been answered, as bup was created.
I used to look up the 28c3 bup slides for a quick reference, until I realized I was always looking for just one page of the slides. Best docs are short docs.

# Install
sudo apt-get install python2.6-dev python-fuse python-pyxattr python-pylibacl
git clone https://github.com/bup/bup.git
cd bup && make && make test && sudo make install
# index zz's home directory
bup index -ux /home/zz
# backup to default BUP_DIR and label the backup 'laptop'
bup save -n laptop /home/zz
# backup to remote myserver, naming the backup 'laptop'
bup save -r myserver -n laptop /home/zz
# index /home/zz on myserver
bup on myserver index -ux /home/zz
# backup myserver:/home/zz, naming the backup 'server'
bup on myserver save -n server /home/zz
# check the latest laptop backup
bup ls laptop/latest/home/zz
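# restore: get the latest laptop backup back out into ./restored
bup restore -C ./restored laptop/latest/home/zz
# or mount the whole repository as a filesystem (needs the python-fuse dependency above)
mkdir -p /mnt/bup && bup fuse /mnt/bup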

It’s hard to migrate from Tivoli, rsnapshot, tarsnap and friends when you don’t know how. So here we go, without further ado, all you ever needed to know about bup but never dared to ask.

Some reasons to use bup:

  • global deduplication
    • rsnapshot: 4.97G = 2.18G with bup
    • rsnapshot: 12.6G = 4.6G with bup
  • save transmission time
  • backups are oneliners
  • anytime snapshots
  • uid, gid, permissions, ACLs, SELinux attributes
  • par2 anti-bitrot and corruption resistance
  • runs on dd-wrt

This is awesome, but there are two caveats. One is that I am unaware of any Enterprise™ shops using bup yet; the other is a common question: no, bup doesn’t encrypt data.

You can either encrypt or deduplicate. Choose. If you want the other, you probably want duplicity or tarsnap.

The Zombie Apexeclipse

Tuesday, December 18th, 2012

The end is nigh!!11!1!1


.. or so everyone will have you believe these days. I’m not one to throw myself on the bandwagon of the times, but waxing practical on the subject is not beneath me. Too bad the prophesied apocalypse is no more than a gross misinterpretation of the Mayan calendar and the myths surrounding this ancient time-keeping device.

TL;DR, If you want to cut to the chase I suggest skipping to The End.

Truth is, the calendar in question is a very accurate time-keeping (call it time-division) device, and far from coming to an end, it is merely turning from the Long Count date of x.12.19.19.19.19 to x.13.0.0.0.0. Keep your pants on.

What I’ve found interesting in all this is the basis for the Mayan Long Count, a count which has succeeded in taking account of great cosmic phenomena such as precession, phenomena that even our own “modern” calendar has failed to take into account.

The basis, or at least one of the bases upon which the Long Count is built, are the natural cycles of our sun. Just like the inhabitants of ancient villages in India can every year predict the flood and draught of their nearest river to within the exact meter, having measured the periods and passed on this information for generations, the Mayans and their ancestors knew the sun and the starry sky better than we know it today. They knew it so well as to be able to predict major solar events.

The sun, you see, has a certain internal cycle, which we’ve in modern times managed to figure out repeats about every 11 years, with high and low solar activity. The Mayans were smarter and figured out that there are even larger cycles. And one of these larger cycles is about to come around. Even NASA agrees, there is a solar storm coming up. If it’s properly powerful, oh maybe like back in 1989, it could knock out telephones and power grids.

… which is not a real biggie now is it, but if it *does* happen, and lots of people have been “primed” to believe that this is indeed the end of days, it could result in a week-long panic the likes of which our society has yet to see. This is not Y2K, and it’s not the apocalypse, but it could be filled with panic-stricken people and that’s nearly as dangerous as anything zombie-like.


Kacper’s Zombie Apocalypse Bug Out List.

acquisition list

  • shelter
    .. bomb shelter is ideal, but barring that any easily defendable point is great. Sturdy walls and many vantage points make a structure wherein 3 people can defend against 30… or adverse elements and bad weather in general.
  • food
    .. for a week at least, you’ll have to forage for more. If you’ve built your bomb shelter to bug out in until the ashes fall, knock yourself out on canned food but be prepared to ration.
  • friends
    .. all the food and shelter in the world won’t watch your back or keep you from going crazy with loneliness. Know who they are and where they are should shit hit the fan. Find your friends and watch their backs and love will conquer all.
  • weapons
    .. for the haters, for self-defence, for catching prey.
  • transportation
    .. to reach shelter. Will you travel The Road on foot? Will you get a horse? A bike? Engines depend on fuels that might quickly become scarce, so either stock up or plan for acquisitions.
  • warmth / fire
    .. it’s obvious, but you, your friends and your gun ain’t gonna survive in the cold and the blight, so you better bring some storm matches, flint or such, ’cause your zippo ain’t gonna last you forever.
  • power
    .. by this I mean not only fuel and fire but also electricity, and I also mean power in the sociopolitical sense. In the coming days you need to be easy to like or hard to kill.
  • maps
    .. to where you can find more of the above.
  • comfort/convenience items
    .. alcohol, cigarettes, blankets, games, salt, trinkets. Pretty much anything you can trade.

Until then and beyond, you can enjoy our very own Mayan Calendar.

the paranoid console viewer

Tuesday, June 19th, 2012

Hi all,
I know it’s been a while since my last post.
There is lots to talk about, but let’s start afresh with something short and sweet.

Let me paint a picture for you:
There is something broken in the most holy and secure part of your network. You need to call support to have them look at it. The support rep wants console access, but you can’t give them axx to your holiest cream pie.
They offer to take over your desktop with a java rootkit app like TeamViewer, GoToMeeting or WebEx.
You decline. You need to stay in control, but show them what they need to see, that and only that.
How?

Let me be clear on the problem statement:
Read-only shell access to the most secure host, which is not available over the wire, viewed by multiple parties at the same time.

Here’s how to do that with SSH, screen(1) and some foo,
with ssh->chroot->rbash->readonly multiuser screen->reverse ssh->openvpn:

You will need a linux server in an “unsafe” zone which is exposed to your support rep on the internet or thru VPN.

  1. Create the user to be contained on your unsafe box, with the restricted bash shell:
    unsafe# export user=rep; adduser $user; chsh -s /usr/bin/rbash $user
  2. (Bonus:) chroot/contain the user within sshd_config (see the sketch after this list)
  3. Set up multiuser screen on the unsafe box. There are lots of guides for it, but the short and sweet of it is: unsafe# chmod +s `which screen`; chmod 755 /var/run/screen. Indeed, this increases the attack surface, which is why we call this box the unsafe one.
  4. ssh from secure zone to unsafe server:
    secure# ssh -R 2222:localhost:22 screen
  5. Run screen from YOUR account and do :addacl $user, then :chacl $user -w "#", then :chacl $user -x "?" (replace $user with whatever from step 1). Then, still in your screen: :multiuser on
  6. Win! Now you can reverse ssh back to the secure zone and let $user on the unsafe box read the terminal without being able to access anything but what you show her.
  7. Bonus: Add `screen -r $youraccount` in $user/.profile and $user will drop straight into locked screen, and remember that multiuser screen is read-write-execute for all accounts that are addacl’d
    so you might want to chacl before enabling the $user account login.

    And there you have it, a superparanoid reverse secure-unsecure remote shell viewer.
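
    As for the bonus chroot in step 2, a minimal sketch of what that could look like in sshd_config on the unsafe box (the jail path is an assumption; the chroot target must be root-owned and not writable by the user):

    Match User rep
        ChrootDirectory /home/jail
        X11Forwarding no
        AllowTcpForwarding no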

    0k

CPM 0.26 the Console Password Manager

Monday, December 5th, 2011

Some of you might have noticed that I’ve adopted this little program while its original author is MIA, and that my efforts have resulted in its inclusion in Debian wheezy earlier this year.

This is great news and makes it a breeze to get up and running with CPM with a simple apt-get install cpm

However, it seems that most people are interested in running CPM on older distributions; realistically, the stable Debian release codenamed squeeze is a favorite, as well as the Ubuntu LTS release 10.04, codenamed Lucid Lynx.

So I have built some updated packages of CPM for these oldies but goodies:
* CPM for squeeze i386
* CPM for squeeze amd64
* CPM for lucid i386
* CPM for lucid amd64

Remember to install the dependencies though. On squeeze, they are:

me@mine:~# apt-get install \
    libcdk5 libcrack2 libdotconf1.0 libgpg-error0 \
    libgpgme11 libxml2 libxml2-utils libpth20

File us a ticket if you run into trouble with these packages or need cpm working on some other distribution.

CPM is a simple, paranoid password manager for the console with some cool features that make it stand out:

* data files can be encrypted for more than one person
* data files are signed by the last person who saved it so forging data files is not possible
* data files are en- and decryptable directly by gpg and gzip
* the application memory is protected from paging, core dumps, ptrace attacks and runtime environment
* data is validated using an internal DTD
* several passwords per account are possible to store
* it’s possible to handle several data files, each encrypted for different people
* cracklib checks of password strength and warnings about weak passwords
* user definable hierarchy with unlimited depth
* long comments for any node in the hierarchy
* password generator
* only one password visible at a time
* searchable database from the command line
* user definable search patterns (e.g. user@hostname)
* several hits can be displayed at once (e.g. several accounts per host)
* conversion scripts for Password Management System (pms), Password Safe and CSV files

oh noes, o cert my *sniff* cert

Monday, September 26th, 2011

I’m not going to tell you about DigiNotar, whose filing for bankruptcy this month held shock for no one, after they recently lost the keys to the grand vault in which the government held much stock. Though I have many comments upon the sophistication of the player that so thoroughly owned the most trusted agencies of the digital age….

The cracker hardly needed them skillz, considering it has been a challenge to keep that whole corrupt industry accountable. The trouble with the central authority system is that even if only one of the keys is compromised, the system is broken and gives no assurances whatsoever. No warning bells either. Just a sweet silent man in the middle, passing along all the best parts to his lover.

It’s not a joke for the 300,000+ people who documentedly had their emails and facepalms compromised. We thought he was kind to give an interview and we wait in awe for his next move.

I’m not going to mention the fatal flaws in certificate revocation that became embarrassingly apparent when the damage was done.
Which hardly matters, since this kind of thing is bound to crop up; that hole in TLS was deemed unexploitable – now there’s a Titanic if I ever saw one. Unsinkable. Too fat to die.

SSL is an open book for those who dare to look, and it’s got more than a couple of old bugs. It’s okay though, we can patch it, they will say. Dare to look the other way!
Not that you need those bugs anyway, since there are some really nice sslsnarfing techniques out there that entirely forgo attacks on SSL as “too inefficient”.

But I say nay! Unacceptable. There is another way.. and we’re already doing it! We sign our own certificates and we back each other’s signatures.
Now that’s business, something that the companies on your CA trust list were painfully aware of when they laid down the law of the code and put themselves on the trust list. Yet still CAcert is not on your trust list, and warning bells go off on some of the most trustworthy sites: the self-signed ones.

Just don’t ask them why or how, or anything that isn’t directly relevant. Do you even know what is on your trust list? You might just be surprised at what you can find.

# ls -al /etc/ssl/certs | wc -l
479

How many of these do you trust? How many of these should you trust? I’ll tell you: *none*.

We should not be adding static lists of central signing authorities to our systems. This is a brittle and dangerous system. We knew this, but hackers have now thankfully demonstrated it.
A better way is for every person (and by extension every browser) to keep their own list of signing certs, and to exchange these certs with their friends (automagically, if you like). Your friends lists can come out of a social network, any social network, and it will mean that any site that has been vetted by one or more of your friends will likely be safe for you to use as well. It’s even better than that, you can check certs from multiple friends and detect discrepancies.
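
A quick way to see what a given vantage point sees, so you and a friend can compare notes on the same site (example.com is, of course, a placeholder):

# print the certificate fingerprint as seen from here
echo | openssl s_client -connect example.com:443 2>/dev/null | openssl x509 -noout -fingerprint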

That, my friends, is called the Web of Trust, and it is a design that is heading in the right direction. Convergence.io is doing something similar already in a Firefox near you, while GPG has worked like this for two decades!

It has to be simple. It has to be very simple. And it has to be chemically free of one word: ‘central’.

One real easy way to do this on Linux would be using git and signed manifests. I already do this in gone to ensure that only files on a manifest signed by a trusted key get installed.
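
A hedged sketch of the signed-manifest idea (the file names are placeholders, and gone does the real thing more carefully):

# publisher: write a manifest of file hashes and sign it
sha256sum etc/myapp.conf bin/deploy.sh > MANIFEST
gpg --armor --detach-sign MANIFEST

# consumer: refuse to install unless the signature and the hashes check out
gpg --verify MANIFEST.asc MANIFEST && sha256sum -c MANIFEST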