Posts Tagged ‘freedom’

pixie dust

Thursday, February 2nd, 2012

we’ve booted backtrack off usb before, but that’s kinda
boring, and installing backtrack onto the usb with unetbootin
is painfully slow and not the same as booting straight off the
usb, which is what we want in this case: not an install
but a fresh copy every boot

there is someone disagreeing in the back of the room: now
wouldn’t this be a lot more complicated? no sir, on the contrary:
booting fresh every time makes work a lot simpler; you gain a
direct relationship to what you store where, and to where you
access your data from

but there is another one in the front; you sir, you feel that
one would have to sacrifice many comforts, such as having all
the tools of the trade at hand and permanent local storage -
but at best this is a lazy roadblock to salvation; by booting
off removable storage we still have local storage at hand, in a
more practical format: even a microscopic carrier can be
removed and replaced with sufficient storage for everything
and then some

the medium can be embedded, destroyed or ingested, so the
impermanence of accidentally recorded data and the robustness,
accessibility and portability of removable storage come very
much in handy in situations that require inconspicuousness,
anonymity, covertness, plausible deniability
or a high degree of reliability in day-to-day computing

with the totality of the system given to remaining only in memory,
it is independent of other storage for its operations, and when
operations cease from the loss of any exterior precondition, the
system simply ceases. when the preconditions recur – by powering on
and executing the first block – the system can be relied upon to
simply start afresh, completely unperturbed by any previous history

should the need arise to patch the system, say some new app or
capability is called for and there is no time to rebuild, the
patch should always be scripted when there is certainty that
the capability will require a repeat performance. it is advised
to devise the patch so that it includes all of its dependencies.

thus the fresh system becomes more capable and more accessible
over time, just like an install. patches can then easily be
rolled into the system should they prove useful to others.
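such a patch can be sketched as a small script; the package name and cache path below are illustrative assumptions, not part of the original setup:

```shell
#!/bin/sh
# sketch of a repeatable patch for a freshly booted live system;
# the cache directory and package name are illustrative
apply_patch() {
    cache="$1"    # directory of pre-downloaded .debs, dependencies included
    pkg="$2"      # fallback package to fetch online

    if [ -d "$cache" ]; then
        # offline: everything the patch needs travels with it
        echo "installing from $cache"
        dpkg -i "$cache"/*.deb
    else
        # online: pull the package and its dependency chain
        echo "fetching $pkg"
        apt-get update -qq && apt-get install -y "$pkg"
    fi
}
```

because every boot starts from the identical state, running the same script gives a repeat performance every time, e.g. `apply_patch /media/usb/patches/wifi aircrack-ng`.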

But how does one do it? Well, it’s easy, but unfortunately
not as easy as overwriting the boot device; that’s just not
practical, because partitioning is always an individual consideration:

  • there are often other files on the block device
  • the choice of filesystem and memory technology has much bearing
  • the block device may be larger or smaller than expected

instead, we allow any bootable partition scheme and any
filesystem and memory technology, as long as the storage
requirements of the system are met;

    here’s how to clone:

    # destination is your mounted usb partition (/mnt/usb is an example)
    cp -a boot/ apt/ casper/ gone/ preseed/ syslinux/ /mnt/usb/
    syslinux /dev/partition
    install-mbr /dev/device    # install-mbr comes from the Debian “mbr” package
    

    but that’s fine, it’s been done and all, but even the ability to
    boot the system with precisely zilch local storage comes in
    handy, and for that we have pixie dust.

    pixie daemon and tiny ftp should be pointing at a path
    exactly matching the dhcp-provided path... otherwise
    you will have worries!

    /etc/pxe.conf:

    interface=eth1
    service=X86PC,0,0,local,Local boot
    service=X86PC,0,0,pxelinux,PXELinux
    tftpdbase=/var/lib/tftpboot
    domain=truly.yours
    

    /etc/default/tftpd-hpa:
    TFTP_DIRECTORY="/var/lib/tftpboot/"

    /etc/dnsmasq.conf:

    dhcp-boot=/var/lib/tftpboot/pxelinux,vulcano,10.10.10.86
    

    “high speed” tftp daemons and multicast can be found but it is
    advised to stick to tftpd-hpa and dnsmasq with no esoterics due
    to the sheer amount of variables introduced.

    /var/lib/tftpboot/pxelinux.cfg/default:

    # not strictly necessary but makes the menu pretty
    menu hshift 13
    menu width 49
    menu margin 8
    
    menu title BackTrackBoot
    default vesamenu.c32
    display f.txt
    timeout 600
    
    label local
    menu label Local Harddisk
    localboot 0
    
    menu begin bt
    menu title BackTrack 5
    # ok here comes the real shit
    label backtrack5
    menu label BackTrack R1
    kernel bt5/vmlinuz
    append boot=casper netboot=nfs nfsroot=vulcano:/mnt/bt5 initrd=bt5/initrd.gz text splash vga=791 file=/cdrom/preseed/custom.seed --
    menu end
    

    you’ll need to copy initrd.gz and vmlinuz from the backtrack ISO’s /casper folder into tftpboot/bt5 (you can loop-mount the ISO: mount -o loop -t iso9660 bt5.iso /mnt/bt5)
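concretely, using the paths from the config above:

```shell
# loop-mount the ISO and copy the kernel bits into the tftp tree
mkdir -p /mnt/bt5 /var/lib/tftpboot/bt5
mount -o loop -t iso9660 bt5.iso /mnt/bt5
cp /mnt/bt5/casper/vmlinuz /mnt/bt5/casper/initrd.gz /var/lib/tftpboot/bt5/
```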

    the rest of the files you provide to the bootee over NFS

    /etc/exports:

    /mnt/bt5 10.10.3.0/24(rw,sync,no_subtree_check) 10.10.10.0/24(rw,sync,no_subtree_check)

    and mount the ISO where the export expects it:

    mount -t iso9660 -o loop BT5R1-GNOME-32.iso /mnt/bt5
    
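assuming the stock nfs-kernel-server tools, the export can be applied and checked without bouncing the whole daemon:

```shell
exportfs -ra             # re-read /etc/exports and apply any changes
exportfs -v              # show what is exported, with which options
showmount -e localhost   # sanity-check the view a client will get
```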

    add a http server with kickstart / preseed files for an ever more powerful setup,
    in which case you replace the file= stanza in the append line with
    url=http://host/path/to/preseed

    more on preseeds… maybe later.

    Now restart all dependent services:

    /etc/init.d/nfs-kernel-server restart
    /etc/init.d/tftpd-hpa restart
    /etc/init.d/apache2 restart
    /etc/init.d/pxe restart
    

    debugging this setup usually requires tracing the process that is failing, so:
    - dhcp options tracing (dnsmasq verbose and tcpdump / wireshark)
    - verbose pxe
    - verbose foreground tftpd-hpa : in.tftpd -v -v -L /var/lib/tftpboot
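concretely, the tracing can look like this (interface and addresses are the ones used above):

```shell
# watch the DHCP handshake and the TFTP transfer on the boot segment
tcpdump -i eth1 -n -v port 67 or port 68 or port 69

# and fetch a file by hand to rule out path problems
tftp 10.10.10.86 -c get pxelinux.0
```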

    oh noes, o cert my *sniff* cert

    Monday, September 26th, 2011

    papieren bitte

    I’m not going to tell you about DigiNotar, whose bankruptcy filing this month shocked no one, coming so soon after they lost the keys to the grand vault in which the government held much stock. Though I have many comments on the sophistication of the player who so thoroughly owned the most trusted agencies of the digital age…

    The cracker hardly needed them skillz, considering it has been a challenge to keep that whole corrupt industry accountable. The trouble with the central authority system is that even if only one of the keys is compromised, the system is broken and gives no assurances whatsoever. No warning bells either. Just a sweet silent man in the middle, passing along all the best parts to his lover.

    It’s no joke for the 300,000+ people whose emails and facepalms were demonstrably compromised. We thought it kind of him to give an interview, and we wait in awe for his next move.

    I’m not going to mention the fatal flaws in certificate revocation that became embarrassingly apparent once the damage was done.
    Not that it much matters, since this kind of thing is bound to crop up: that hole in TLS was deemed unexploitable – now there’s a Titanic if I ever saw one. Unsinkable. Too fat to die.
    cert failure

    SSL is an open book for those who dare to look, and it’s got more than a couple old bugs. It’s okay though, we can patch it, they will say. Dare to look the other way!
    Not that you need those anyway, since there are some really nice sslsnarfing techniques out there that entirely forgo attacks on SSL as “too inefficient”.

    But I say nay! Unacceptable. There is another way.. and we’re already doing it! We sign our own signatures and we back each other’s signatures.
    Now that’s business, something that the companies on your CA trust list were painfully aware of when they laid down the law of the code and put themselves on the list. Yet CAcert is still not on your trust list, and warning bells fly off on some of the most trustworthy sites – the self-signed ones.

    Just don’t ask them why or how, or anything that isn’t directly relevant. Do you even know what is on your trust list? You might just be surprised at what you can find.

    # ls -al /etc/ssl/certs | wc -l
    479

    How many of these do you trust? How many of these should you trust? I’ll tell you: *none*.

    We should not be adding static lists of central signing authorities to our systems. This is a brittle and dangerous system. We knew this, but hackers have now thankfully demonstrated it.
    A better way is for every person (and by extension every browser) to keep their own list of signing certs, and to exchange these certs with their friends (automagically, if you like). Your friends lists can come out of a social network, any social network, and it will mean that any site that has been vetted by one or more of your friends will likely be safe for you to use as well. It’s even better than that, you can check certs from multiple friends and detect discrepancies.
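the discrepancy check is a one-liner away; a minimal sketch with stock openssl (the hostname is an example, and how you exchange fingerprints with friends is up to you):

```shell
#!/bin/sh
# print the SHA-1 fingerprint of the certificate a host serves us
fingerprint() {
    echo | openssl s_client -connect "$1:443" 2>/dev/null \
         | openssl x509 -noout -fingerprint -sha1
}

# mine=$(fingerprint example.org)
# compare $mine with what your friends see; a mismatch across
# vantage points smells like a man in the middle
```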

    green padlock
    That, my friends, is called the Web of Trust, and it is a design heading in the right direction. convergence.io is already doing something similar in a Firefox near you, while GPG has worked like this for two decades!

    It has to be simple. It has to be very simple. And it has to be chemically free of one word: ‘central’.

    One real easy way to do this on Linux would be using git and signed manifests. I already do this in gone to ensure that only files on a manifest signed by a trusted key get installed.
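the idea can be sketched with stock gpg and coreutils; the file layout here is illustrative, not gone’s actual one:

```shell
#!/bin/sh
# sketch of a signed-manifest install, in the spirit of gone
make_manifest() {
    # publisher side: hash the payload, then sign the manifest
    sha256sum "$1"/* > MANIFEST
    gpg --detach-sign --armor MANIFEST       # writes MANIFEST.asc
}

verify_manifest() {
    # installer side: refuse anything without a good signature
    gpg --verify MANIFEST.asc MANIFEST || return 1   # signed by a trusted key?
    sha256sum -c MANIFEST || return 1                # payload matches manifest?
}

# make_manifest files && verify_manifest && echo "safe to install"
```

the trust decision is the point: a manifest, a detached signature, and a refusal to proceed when either check fails.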

    data retention in practice: the police raid on autistici

    Tuesday, January 18th, 2011

    The story of the police raid on the volunteer-run Italian organization Autistici is a practical example of why the Data Retention Directive should not be introduced.

    Autistici.org is a non-profit organization offering free-of-charge e-mail and blog services designed to be resistant to censorship.

    Here is a short summary of the case:

    • As part of the measures against censorship, Autistici’s services are spread across servers all over the world, and the data is stored encrypted on disk.
    • Autistici does not log connections and keeps no personal information about its users.
    • One of Autistici’s servers is hosted in the server park of a Norwegian organization for the advancement of free software.
    • On November 5th, 2010, the police seized a hard drive from the server park on the basis of a request from the Italian police.
    • The request from Italy names a single e-mail account and asks for the contents of, logins to and changes to that account.
    • The stated grounds for the seizure is a threat case.
    • The Italian request describes the punishable offence as insults against the reputation of two leaders of the neo-fascist organization Casa Pound.
    • The Norwegian police went beyond the request and made mirror copies of two hard drives, containing the e-mail of 1500 users and the account information of 7000 users.
    • The seized disks do not contain the e-mail account named in the foreign request.
    • Similar requests are believed to have been sent to Autistici’s server parks in the Netherlands and Switzerland.

    Big Brother already takes too many liberties with other people’s data. In this case the private e-mail of 1500 innocent people was swept up in a search carried out on behalf of a foreign nation’s interests, entangled in a matter of dubious legality, where the accused is an unknown person.

    The case rolls on, to examine the legality of the seizure and to ensure that the copies are not handed over in their entirety to the Italian authorities.

    The case is also covered in:

    Amusing examples of future abuse of the Data Retention Directive can be found at #DLDloggen.

    Free society conference – my experience

    Tuesday, November 9th, 2010

    Until the very last minutes I was unsure whether I’d make it to FSCONS, the Free Society Conference and Nordic Summit. I did not know what to think of it, despite gentle pushing from one of those set to speak at the conference. Three days later, with the event somewhat filtered in my mind, there is no doubt that it was well worth the opportunity costs and then some.

    I'm going to FSCONS 2010!

    My first impressions while browsing the event programme were that there was no way to attend all the interesting talks! An insurmountable problem, and I hadn’t even gotten there: my meat could not be in two to three places at the same time, while my mind could not rationally choose away interesting unknowns.. so I opted to leave it to a vague notion of chance and intent.

    What I hadn’t expected was the powerful impact that the people attending would have on me. Cyber heroes and heroines, freedom fighters, game changers, inventors, uncelebrated cryptoanarchists and everything makers were some of those that I got to know, that engaged me in dialogue, that dared discuss openly some (most?) of the hardest problems that we, the world, are facing today. With the full intent to do something about these problems.


    what’s wrong with IT?

    Wednesday, March 24th, 2010

    Hold on a bit.
    I am a monk of the old order, one of the illuminati of software stacks. By no means a high priest, but like many of my brethren I have been ordained with most of the dirty little secrets over the years since I joined the convent. I never specialized, so I am well read in ancient texts and new work alike, and I have meditated on subjects ranging from compiling compilers through 3D rendering and artificial intelligence to business processes and value chains. In the constant rush to deliver on the promises of technology I have seen projects set up for failure before they even started. I have seen enough code to write a book detailing, example by example, what you should not do during development.

    The secrets are many, and they are complex and hard to grasp out of context, but to misuse an old adage: the truth is simple and it’s out there.

    The reason applications fail is that they are complex, but the reason IT fails is that IT people expect the applications to be simple to manage, and the business has a nasty tendency to promote the clueless.

    It’s amazing how much money is thrown out the window (and into greedy hands) by large corporations and public departments on hairy overcomplicated blackbox solutions that are supposed to meet simple business needs.

    Take databases, for example. It’s easy to argue that the database is part of core business (because all the critical application data ends up in the database), and thus the database must be highly available, highly performant and highly secure. Maybe that’s how the CTOs of the world justify spending millions on monstrous, arcane iron black boxes to serve their modest database needs. Annually!

    The same needs, if properly understood, could be served at a fraction of the cost while being easier to manage and debug!

    This is not just a spiel about Postgres (which, for the record, is open source and can do all that and more) but about a general protection fault in how technology is driven today.

    Another nice example is DNS, which is beyond core business in importance: without domain resolution nearly all infrastructure fails. DNS problems can cause the most obscure failures, simply because applications make no provision for DNS failure. Quite a few IT departments all over the world operate DNS through point-and-click wizards with nothing but a rudimentary understanding of its inner workings. Should they have that understanding? Hell yes: otherwise sooner or later it must fail, as everything does, and when it does they will have none of the tools to fix it!

    Scarier still is that the rest of the world (or very nearly) has standardized on the most baroque and insecure DNS server in existence (BIND me in hell with seven furies burning marks in my skin), a precise analogy to what has happened in the world of e-mail (sendmail will do anything but!). We do this because we follow Best Business Practices, which is the IT analogue of what happens to you when you go through airport security: it is completely ineffective but feels safer.

    Other examples of the same thing are the proliferation of security products that offer nothing but a smokescreen, the use of gigantic and poorly considered application frameworks, and the abstraction and layering of simple concepts into behemoth object-relation collections.
    Humans have a distinctly object-oriented view of the world; all the same, the world keeps trying to tell us that objects don’t exist in their own right but depend on a whole slew of epiphenomena.

    Software rots if it is not maintained.
    None of the above are hard problems, regardless of what others might have tried to jam down your throat. Databases are a snooze to work on, DNS and mail should Just Work, and once we have a web application stack going for us we’re not going to consider how it works or what could be better. The difficulty that lies in application infrastructure is a people problem.

    We want to buy a shrink-wrapped product and feel value for money without risk.

    There is some sort of mass-marketing effect at work whereby decision makers are best influenced by the greediest hands. We tend to think that the most expensive car has the best value with the least risk, and we are seldom so clear-sighted as to go back on decisions we have already made.

    So what’s the fix?

    Decision makers should spend more time evaluating the options before launching headlong into projects based on best business practices, and they should identify and listen more to the few quiet people who have a clue. The people with clue usually only get to vent their frustrations by making crass jokes about management and the hairiness of the most recent or most painful and embarrassing failure of technology. These things are not talked about openly, but they should be.

    Ideally we should focus on long-term research into the difficult problems of technology: artificial intelligence, algorithms, how to feed the starving and save the world from imminent ecological disaster, quantum computing etc, instead of spending so much time failing at the simple things.