Archive for the ‘breakage’ Category

The Zombie Apexeclipse

Tuesday, December 18th, 2012

The end is nigh!!11!1!1

please stand by

.. or so everyone will have you believe these days. I’m not one to throw myself on the bandwagon of the times, but waxing practical on the subject is not beneath me. Too bad the prophesied apocalypse is no more than a gross misinterpretation of the Mayan calendar and the myths surrounding this ancient time-keeping device.

TL;DR, If you want to cut to the chase I suggest skipping to The End.

Truth is, the calendar in question is a very accurate time-keeping – call it time-division – device, and far from coming to an end, it is merely turning from the Long Count date of 12.19.19.17.19 to 13.0.0.0.0. Keep your pants on.

What I’ve found interesting in all this is the basis for the Mayan Long Count, a count which has succeeded in taking account of great cosmic phenomena such as precession, phenomena that even our own “modern” calendar has failed to take into account.

The basis, or at least one of the bases upon which the Long Count is built, is the natural cycles of our sun. Just like the inhabitants of ancient villages in India can every year predict the flood and drought of their nearest river to within the exact meter, having measured the periods and passed on this information for generations, the Mayans and their ancestors knew the sun and the starry sky better than we know them today. They knew them so well as to be able to predict major solar events.

The sun, you see, has a certain internal cycle, which we’ve in modern times managed to figure out repeats about every 11 years, alternating between high and low solar activity. The Mayans were smarter and figured out that there are even larger cycles. And one of these larger cycles is about to come around. Even NASA agrees: there is a solar storm coming up. If it’s properly powerful – oh, maybe like back in 1989 – it could knock out telephones and power grids.

… which is not a real biggie now, is it? But if it *does* happen, and lots of people have been “primed” to believe that this is indeed the end of days, it could result in a week-long panic the likes of which our society has yet to see. This is not Y2K, and it’s not the apocalypse, but that week could be filled with panic-stricken people, and that’s nearly as dangerous as anything zombie-like.


Kacper’s Zombie Apocalypse Bug Out List.

acquisition list

  • shelter
    .. bomb shelter is ideal, but barring that any easily defendable point is great. Sturdy walls and many vantage points make a structure wherein 3 people can defend against 30… or adverse elements and bad weather in general.
  • food
    .. for a week at least, you’ll have to forage for more. If you’ve built your bomb shelter to bug out in until the ashes fall, knock yourself out on canned food but be prepared to ration.
  • friends
    .. all the food and shelter in the world won’t watch your back or keep you from going crazy with loneliness. Know who they are and where they are should shit hit the fan. Find your friends and watch their backs and love will conquer all.
  • weapons
    .. for the haters, for self-defence, for catching prey.
  • transportation
    .. to reach shelter. Will you travel The Road on foot? Will you get a horse? A bike? Engines depend on fuels that might quickly become scarce, so either stock up or plan for acquisitions.
  • warmth / fire
    .. it’s obvious, but you, your friends and your gun ain’t gonna survive in the cold and the blight, so you better bring some storm matches, flint or such ’cause your zippo ain’t gonna last you forever.
  • power
    .. by this I mean not only fuel, fire and electricity, but also power in the sociopolitical sense. In the coming days you need to be easy to like or hard to kill.
  • maps
    .. to where you can find more of the above.
  • comfort/convenience items
    .. alcohol, cigarettes, blankets, games, salt, trinkets – pretty much anything you can trade.
Until then and beyond, you can enjoy our very own Mayan Calendar.

prads-0.3.2: ya skipped that one

Monday, November 5th, 2012

Ever since HACK.LU (where we spoke about VSF), Ebf0 and I have had quite some activity on PRADS, wonder why?

We really enjoyed the design of POM-NG, we find this little program quite inspiring and will keep in touch with GMsoft.

This might be the right time to announce PRADS-me! at prads.delta9.pl, a service to actively fingerprint your own self. Real useful even just for an IP check, geolocation or to see what types of fingerprints you are matching at any given time.

Some of you might recall that PRADS was the subject of a Masters thesis in 2011: “Investigating Passive Operating System Detection” by Petter Bjerke Falch from UiO. Well, it’s happened again.

Jostein Haukeli at the University of Oslo Department of Informatics has written a paper on “False positive reduction through IDS network awareness”. We are excited about the prospect that our work is being used in data correlation, and we would like to see more event correlation stuff done in a scalable context.

Last year PRADS was a featured ip6-ready tool at the ISC. Furthermore, in July this year PRADS was included in OSSIM, the Open Source SIEM.

In other news, PRADS is about to be baked into the next release of the Security Onion network monitoring linux distro. Version 12.04 beta already comes with PRADS included (replacing old-timers sancp and pads) but it did require some bug-squashing from our end. You know what that means? 0.3.2-rc1 was tagged in the tree recently. That’s right: a new PRADS release is coming up Real Soon Now.

the paranoid console viewer

Tuesday, June 19th, 2012

Hi all,
I know it’s been a while since my last post.
There is lots to talk about, but let’s start afresh with something short and sweet.

Let me paint a picture for you:
There is something broken in the most holy and secure part of your network. You need to call support to have them look at it. The support rep wants console access, but you can’t give them axx to your holiest cream pie.
They offer to take over your desktop with a java rootkit app like TeamViewer, GoToMeeting or WebEx.
You decline. You need to stay in control, but show them what they need to see, that and only that.
How?

Let me be clear on the problem statement:
Read-only shell access to the most secure host, which is not available over the wire, viewed by multiple parties at the same time.

Here’s how to do that with SSH, screen(1) and some foo,
with ssh->chroot->rbash->readonly multiuser screen->reverse ssh->openvpn:

You will need a linux server in an “unsafe” zone which is exposed to your support rep on the internet or thru VPN.

  1. Create the user to be contained on your unsafe box, with the restricted bash shell:
    unsafe# export user=rep; adduser $user; chsh -s /usr/bin/rbash $user
  2. (Bonus:) chroot/contain the user within sshd_config
  3. Setup multiuser screen on the unsafe box. There are lots of guides for it, but the short and sweet of it is:
    unsafe# chmod +s `which screen`; chmod 755 /var/run/screen
    Indeed, this increases the attack surface, and therefore we call this box the unsafe one.
  4. ssh from the secure zone to the unsafe server, opening a reverse tunnel:
    secure# ssh -R 2222:localhost:22 unsafe
  5. Run screen from YOUR account and do `:addacl $user`, then `:chacl $user -w "#"`, then `:chacl $user -x "?"`. Replace $user with whatever you chose in step 1. Then, still in your screen: `:multiuser on`
  6. Win! Now you can reverse ssh back to the secure zone and let $user on the unsafe box read the terminal without being able to access anything but what you show her.
  7. Bonus: Add `screen -r $youraccount` in $user/.profile and $user will drop straight into locked screen, and remember that multiuser screen is read-write-execute for all accounts that are addacl’d
    so you might want to chacl before enabling the $user account login.
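
Collected into one place, the whole chain looks roughly like this – a sketch, where `unsafe`, `secure`, `rep` and `you` are just the example names from the steps above:

    # steps 1-3, on the unsafe box: restricted user + multiuser screen
    unsafe# export user=rep; adduser $user; chsh -s /usr/bin/rbash $user
    unsafe# chmod +s `which screen`; chmod 755 /var/run/screen

    # step 4, from the secure host: open the reverse tunnel
    secure# ssh -R 2222:localhost:22 unsafe

    # step 5, on the unsafe box under YOUR account: start screen and run
    #   :addacl rep
    #   :chacl rep -w "#"
    #   :chacl rep -x "?"
    #   :multiuser on
    you@unsafe$ screen

    # step 6, inside that screen: ssh back home through the tunnel
    you@unsafe$ ssh -p 2222 you@localhost

    # step 7, the support rep attaches to your session read-only
    rep@unsafe$ screen -r you/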

    And there you have it, a superparanoid reverse secure-unsecure remote shell viewer.

    0k

hackeriet festival

Friday, February 10th, 2012

If you haven’t heard already, Hackeriet – the Oslo Hackerspace – is hosting a full day of talks, workshops and a party tomorrow, Saturday. Come on by from 11:00am!
Check out the program at
http://events.hackeriet.no,
keywords to look out for are: arduinos, mesh networks, crypto, cyberwar, datalove and chipmusic :-))

pixie dust

Thursday, February 2nd, 2012

we’ve booted backtrack off usb before, now that’s kinda
boring and installing backtrack onto the usb with unetbootin
is painfully slow and not the same as booting straight off the
usb which is what we want in this case; not an install
but a fresh copy every boot

there is someone disagreeing in the back of the room, now
wouldn’t this be a lot more complicated? No sir. on the contrary
booting fresh every time makes work a lot simpler; you gain a
direct relationship to what you store where, and where you
access your data from

but there is another one in the front; you sir, you feel that
one would have to sacrifice many of the comforts such as
all the tools of the trade at hand and permanent local storage -
but at best this is a lazy roadblock to salvation; by booting
off of local storage we have local storage at hand in a more
practical format; even a microscopic carrier can be
removed and replaced with sufficient storage for everything
and then some

the medium can be embedded, destroyed or ingested, so
the impermanence of accidentally recorded data and the
robustness, accessibility and portability of removable storage
come in very handy in situations that either require
inconspicuousness, anonymity, covertness, plausible deniability
or a high degree of reliability in day-to-day computing

the totality of the system remaining only in memory
makes it independent of other storage for operations, and when
operations cease from loss of any exterior preconditions, the
system simply ceases. when preconditions reoccur – by powering on
and executing the first block – the system can be relied upon to
simply start afresh, completely unperturbed by any previous history

should the need arise to patch the system; say some new app or
capability is called for where there is no time to rebuild,
a patch should always be scripted when there is certainty that
the capability will require a repeat performance. It is advised
to devise a patch which includes all dependencies.

thus the fresh system becomes more capable and more accessible
over time, just like an install. patches can then easily be
rolled into the system should they prove useful to others.
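
a patch in this sense is just a self-contained script; a minimal
sketch, assuming an apt-based live system (the package is purely
illustrative):

    #!/bin/sh
    # hypothetical patch script: re-runnable on every boot,
    # carries its own dependencies (the package is just an example)
    set -e
    apt-get update
    apt-get install -y --no-install-recommends kismet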

But how does one do it? Well, it’s easy but unfortunately
not as easy as overwriting the boot device; it’s just not
practical because partitioning is always an individual consideration

  • there are often other files on the block device
  • choice of filesystem and memory technology has much bearing
  • the block device is larger or smaller than expected

instead, we allow any bootable partition scheme and any
filesystem and memory technology, as long as the storage
requirements of the system are met;

    here’s how to clone:

    # copy the live system onto the (already mounted) target partition;
    # /mnt/usb is only an example mount point
    cp -a boot/ apt/ casper/ gone/ preseed/ syslinux/ /mnt/usb/
    syslinux /dev/partition
    mbr /dev/device

    but that’s fine, it’s been done and all, but even the ability to
    boot the system with precisely zilch local storage comes in
    handy, and for that we have pixie dust.

    pixie daemon and tiny ftp should be pointing to a path
    exactly matching the dhcp-provided path.. otherwise
    you will have worries!

    /etc/pxe.conf:

    interface=eth1
    service=X86PC,0,0,local,Local boot
    service=X86PC,0,0,pxelinux,PXELinux
    tftpdbase=/var/lib/tftpboot
    domain=truly.yours
    

    /etc/default/tftpd-hpa:
    TFTP_DIRECTORY=”/var/lib/tftpboot/”

    /etc/dnsmasq.conf:

    dhcp-boot=/var/lib/tftpboot/pxelinux,vulcano,10.10.10.86
    

    “high speed” tftp daemons and multicast can be found but it is
    advised to stick to tftpd-hpa and dnsmasq with no esoterics due
    to the sheer number of variables introduced.

    /var/lib/tftpboot/pxelinux.cfg/default:

    # not strictly necessary but makes the menu pretty
    menu hshift 13
    menu width 49
    menu margin 8
    
    menu title BackTrackBoot
    default vesamenu.c32
    display f.txt
    timeout 600
    
    label local
    menu label Local Harddisk
    localboot 0
    
    menu begin bt
    menu title BackTrack 5
    # ok here comes the real shit
    label backtrack5
    menu label BackTrack R1
    kernel bt5/vmlinuz
    append boot=casper netboot=nfs nfsroot=vulcano:/mnt/bt5 initrd=bt5/initrd.gz text splash vga=791 file=/cdrom/preseed/custom.seed --
    menu end
    

    you’ll need to copy to tftpboot/bt5 the initrd.gz and vmlinuz from the backtrack ISO /casper folder (which you can mount with `mount -o loop -t iso9660 bt5.iso /mnt/bt5`)
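
    in shell terms that’s roughly (paths as in the configs above):

    mkdir -p /mnt/bt5 /var/lib/tftpboot/bt5
    mount -o loop -t iso9660 BT5R1-GNOME-32.iso /mnt/bt5
    cp /mnt/bt5/casper/vmlinuz /mnt/bt5/casper/initrd.gz /var/lib/tftpboot/bt5/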

    the rest of the files you provide to the bootee over NFS

    /etc/exports:

    /mnt/bt5 10.10.3.0/24(rw,sync,no_subtree_check) 10.10.10.0/24(rw,sync,no_subtree_check)

    and the mount behind that export:

    mount -t iso9660 -o loop BT5R1-GNOME-32.iso /mnt/bt5
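
    after editing the exports, re-export and check that the share is
    actually visible (hostname as in the config above):

    exportfs -ra
    showmount -e vulcano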
    

    add an http server with kickstart / preseed files for an ever more powerful setup,
    in which case you replace the file= stanza in the append line with
    url=http://host/path/to/preseed
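
    so the append line from the menu above would end up looking
    something like this (host and path are placeholders):

    append boot=casper netboot=nfs nfsroot=vulcano:/mnt/bt5 initrd=bt5/initrd.gz text splash vga=791 url=http://host/path/to/preseed --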

    more on preseeds… maybe later.

    Now restart all dependent services:

    /etc/init.d/nfs-kernel-server restart
    /etc/init.d/tftpd-hpa restart
    /etc/init.d/apache2 restart
    /etc/init.d/pxe restart
    

    debugging this setup usually requires tracing the process that is failing, so:
    - dhcp options tracing (dnsmasq verbose and tcpdump / wireshark)
    - verbose pxe
    - verbose foreground tftpd-hpa : in.tftpd -v -v -L /var/lib/tftpboot
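
    for the first two points, something like this on the boot server
    usually tells the story (interface name as in the pxe config above):

    tcpdump -ni eth1 port 67 or port 68 or port 69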

    CPM 0.26 the Console Password Manager

    Monday, December 5th, 2011

    Some of you might have noticed that I’ve adopted this little program while its original author is MIA, and that my efforts have resulted in its inclusion into debian wheezy earlier this year.

    This is great news and makes it a breeze to get up and running with CPM with a simple apt-get install cpm

    However, it seems that most people are interested in running CPM on older distributions; realistically, the stable distribution codenamed squeeze is a favorite, as well as the Ubuntu LTS release 10.04, codenamed lucid lynx.

    So I have built some updated packages of CPM for these oldies but goodies:
    * CPM for squeeze i386
    * CPM for squeeze amd64
    * CPM for lucid i386
    * CPM for lucid amd64

    Remember to install the dependencies though. On squeeze, they are:

    me@mine:~# apt-get install \
        libcdk5 libcrack2 libdotconf1.0 libgpg-error0 \
        libgpgme11 libxml2 libxml2-utils libpth20
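
    with the dependencies in place, the downloaded package itself goes in
    with plain dpkg (the exact filename depends on which build you grabbed):

    me@mine:~# dpkg -i cpm_0.26*.deb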
    

    File us a ticket if you run into trouble with these packages or need cpm working on some other distribution.

    CPM is a simple, paranoid password manager for the console with some cool features that make it stand out:

    * data files can be encrypted for more than one person
    * data files are signed by the last person who saved them so forging data files is not possible
    * data files are en- and decryptable directly by gpg and gzip
    * the application memory is protected from paging, core dumps, ptrace attacks and runtime environment
    * data is validated using an internal DTD
    * it’s possible to store several passwords per account
    * it’s possible to handle several data files, each encrypted for different people
    * cracklib checks of password strength and warnings about weak passwords
    * user definable hierarchy with unlimited depth
    * long comments for any node in the hierarchy
    * password generator
    * only one password visible at a time
    * searchable database from the command line
    * user definable search patterns (e.g. user@hostname)
    * several hits can be displayed at once (e.g. several accounts per host)
    * conversion scripts for Password Management System (pms), Password Safe and CSV files

    oh noes, o cert my *sniff* cert

    Monday, September 26th, 2011

    I’m not going to tell you about DigiNotar, whose filing for bankruptcy this month came as a shock to no one, after recently having lost the keys to the grand vault in which the government held much stock. Though I have many comments upon the sophistication of the player that so thoroughly owned the most trusted agencies of the digital age….

    The cracker hardly needed them skillz, considering it has been a challenge to keep that whole corrupt industry accountable. The trouble with the central authority system is that even if only one of the keys is compromised, the system is broken and gives no assurances whatsoever. No warning bells either. Just a sweet silent man in the middle, passing along all the best parts to his lover.

    It’s not a joke for the 300,000+ people who, according to the logs, had their emails and facepalms compromised. We thought he was kind to give an interview and we wait in awe for his next move.

    I’m not going to mention the fatal flaws in certificate revocation that became embarrassingly apparent when the damage was done.
    Which hardly matters, since this kind of thing is bound to crop up; that hole in TLS was deemed unexploitable – now there’s a Titanic if I ever saw one. Unsinkable. Too fat to die.

    SSL is an open book for those who dare to look, and it’s got more than a couple old bugs. It’s okay though, we can patch it, they will say. Dare to look the other way!
    Not that you need those anyway, since there are some really nice sslsnarfing techniques out there that entirely forgo attacks on SSL as “too inefficient”.

    But I say nay! Unacceptable. There is another way.. and we’re already doing it! We sign our own certificates and we back each other’s signatures.
    Now that’s business, something that the companies on your CA trusted list were painfully aware of when they laid down the law of the code and put themselves on the trust list. Yet still CAcert is not on your trust list, and warning bells fly off on some of the most trustworthy sites – self-signed ones.

    Just don’t ask them why or how, or anything that isn’t directly relevant. Do you even know what is on your trust list? You might just be surprised at what you can find.

    # ls -al /etc/ssl/certs | wc -l
    479

    How many of these do you trust? How many of these should you trust? I’ll tell you: *none*.

    We should not be adding static lists of central signing authorities to our systems. This is a brittle and dangerous system. We knew this, but hackers have now thankfully demonstrated it.
    A better way is for every person (and by extension every browser) to keep their own list of signing certs, and to exchange these certs with their friends (automagically, if you like). Your friends lists can come out of a social network, any social network, and it will mean that any site that has been vetted by one or more of your friends will likely be safe for you to use as well. It’s even better than that, you can check certs from multiple friends and detect discrepancies.
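
    Until your browser does this for you, comparing notes by hand is a
    one-liner; if a friend on another network sees a different fingerprint
    for the same site, someone is in the middle (the hostname is just an
    example):

    echo | openssl s_client -connect example.org:443 2>/dev/null \
        | openssl x509 -noout -fingerprint -sha1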

    That, my friends, is called the Web of Trust, and is a design that is heading in the right direction. convergence.io is doing something similar already to a Firefox near you, while PGP has worked like this for two decades!

    It has to be simple. It has to be very simple. And it has to be chemically free of one word: ‘central’.

    One real easy way to do this on linux would be using git and signed manifests. I already do this in gone to ensure that only files on a manifest signed by a trusted key get installed.
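
    The gist of it, as a rough sketch (file names made up, and assuming the
    manifest is a plain sha256sum listing; gone does the real thing):

    # refuse to proceed unless the manifest verifies against a trusted key
    gpg --verify MANIFEST.sig MANIFEST || exit 1
    # then refuse to install anything whose checksum doesn't match it
    sha256sum -c MANIFEST || exit 1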

    ip6 DNS wildcards considered harmful

    Friday, September 23rd, 2011

    I discovered something yesterday that might be of consequence:
    If you have ip6 connectivity, the domain name resolver will prefer an ip6 wildcard domain over an ip4 A or CNAME record. This breaks things like ssh. You’d expect the resolver to choose the response that is most specific, the same way ip4 wildcards work, and not to blindly prefer ip6 wildcards.

    Consider the case of Mary, who’s been around and has lots of domains:

    hail.mary.com
    naked.mary.com
    see.mary.com
    

    and she’s also wildcarding all the other *.mary.com to her vanity host me.mary.com… you get the idea, it’s fairly typical. Those hosts only have ip4 connectivity. Now she adds a new address ip6.mary.com and puts in a wildcard ip6 record for *.mary.com, expecting that people accessing foo.mary.com on ip6 get the wildcard address – and they do! But she gets a lot more than the doctor ordered: her ip6 clients will also get the ip6 wildcard address for all her other domains! hail.mary.com, naked.mary.com and see.mary.com will all land on ip6.mary.com instead of the ip4 A records. What happened here?
    Effectively, Mary’s ip6 wildcard broke all ip6-to-ip4 connectivity for Mary’s existing subdomains!
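
    In zone-file terms, Mary’s setup looks roughly like this (addresses are
    placeholders):

    ; the existing ip4-only hosts, plus the vanity wildcard
    hail   IN A     192.0.2.10
    naked  IN A     192.0.2.11
    see    IN A     192.0.2.12
    *      IN A     192.0.2.1       ; me.mary.com
    ; the new ip6 host and its wildcard
    ip6    IN AAAA  2001:db8::1
    *      IN AAAA  2001:db8::1     ; per the behaviour described above,
                                    ; ip6 clients now land here for the
                                    ; other names as well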

    Yep, you can fix it on your machine, but this is a client problem and you can’t fix everybody else’s resolvers, so what you have to do is avoid ip6 wildcard domains ENTIRELY. Thanks a bunch.

    On a completely different node:

    “debug This option is recognized by pam_ldap but is presently ignored.”

    I mean wow. What did they do, write the whole module flawlessly on the first try? I wish.

    firefox + geolocation = m0ar paranoia

    Friday, August 26th, 2011

    Just a quick note pertaining to a previous post, namely the new evil that is firefox geolocation. This is new in firefox 3.5. Yes, it is opt-in, and yes, firefox does not track you, but yes, the servers you opt in to will track you, and that, my friends, is one of the most serious misfeatures of our times, repeated again and again in stuff like Google Latitude, Android and Apple photo geo-tagging.
    If you care about your personal security at all you do not want the internet tracking where you are, which is essentially what this amounts to.
    Disable it now by going to the about:config location in your firefox, typing geo. in the search field and double clicking the geo.enabled line so that it says

    geo.enabled    user set  boolean   false
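
    If you’d rather pin the setting in a file, the same preference can live
    in user.js in your profile directory (the profile path varies per OS):

    // user.js – read by Firefox at every start
    user_pref("geo.enabled", false);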
    

    That’s it for now.

    failcode

    Thursday, August 18th, 2011

    In my time as an application programmer, developer and designer, a brief stint as team lead and project manager,
    as well as my time as a systems consultant, I have witnessed first-hand and also heard many credible tales of systematic failure that rival any of the stories on The Daily WTF. My colleagues and I have seen so many examples of bad design, bad code and systemic failure that we have considered writing a book titled How To Write Ugly Code.

    I have also read the Texas Instruments Chainsaw massacre and personally met Gomez while debugging applications.

    My speciality and my interest lie in diagnostics and the analysis of problems, as well as system security, and my experience has shown that one can venture to say something about the qualitative differences between design methodologies and what they mean for the end result.

    Firstly, however, it is worth noting that the software industry as a whole has one primary problem: the time pressure to deliver new features in the face of mounting expectations.

    This pressure to deliver is seen as the driving force behind industry progress and ever leaner, more economic applications. Contrary to this belief, however, I have evidence that it creates incentives for sloppy work, overengineering and poor consideration of the problem domain. It seems the process itself rewards poor application design, regardless of development methodology.

    Large corporate and government tenders, which affect many hundreds of thousands of people’s lives, get bid on by large software houses that believe they can deliver everything (at low risk: if they cannot deliver, it is very hard for the customer to contest this against a major software house).

    What we get by and large out of this process are bloated top-down applications designed by people who do not understand the (whole) problem, leading to misguided decisions for such things as

    • choice of platform and language
    • choice of coding standards (check out Systems Hungarian if you don’t believe me)
    • programming methodology
    • communication tools: source control, ticket and forum tools for developers and system managers
    • Not Invented Here-practices
    • monkey-coding by people whose talents could be employed in solving the problem

    What usually passes for “agile” development causes frequent, ineffective blame-game meetings.
    Unit-test-driven development frequently causes micromanagement of program details and inflexible designs.
    … all these methodologies were designed to improve programs, not bog them down! Why then do they cause so much breakage?

    The pressure to deliver requires the application developer to prefer large swathes of ready-made library code and a high level of abstraction to allow her to meet deadline demands.

    A high abstraction level causes low debuggability and poor performance.
    Low debuggability, because bugs are by definition conditions caused by circumstances unforeseen by the application developer. Abstractions are employed by the developer to hide implementation details to aid clarity and speed of application development, at the cost of debuggability.

    The very tools and abstractions employed by the application developer create the frame through which the developer can see the circumstances of her design and code. Bugs most frequently occur on the boundaries between abstractions, where the developer has no possibility to foresee these circumstances. Furthermore, in a system that has a passably high level of abstraction there is a whole stack of hidden details which must be traced and unwound to discover the bug. Therefore, every additional layer of abstraction obscures the debugging process.

    Debuggability and algorithmic simplicity are key to achieving optimal performance. In other words, if we have a clear problem statement it is possible to achieve performance. If there is no clear problem statement, and the program is further muddled by abstractions and interactions, there is no effective path to performance.

    Any artist will be able to tell you that the most interesting, creative and innovative work comes out of having a stress-free, playful environment. Since innovative coding is a creative activity, the same applies to developing applications – something that game developer companies and creative shops have known for years, and behemoths like Google and Microsoft have picked up on, reinvesting up to 15% of their revenue into research and development and getting that part right, as witnessed by their sheer output of innovation.

    If there is a clear path to solving these fundamental problems of IT, it is putting the people who know what they are doing in the pilot seat: not only letting developers choose for themselves toolchains, methodology and communication tools, but also engaging the systems thinkers in creating the specifications and architecture of the systems they are going to implement. The good news is that as customers and managers get savvy to this method of achieving IT success, we are going to see more developer autonomy and less spectacular fails.