When your Empire has no Clothes

How many data points does it take to call something a trend?  With the hack and subsequent data dump of the internal files of Hacking Team, a company most of us never even knew existed until this week, the world is getting to see a very public examination of the naked inner workings of an organization. This is the second time I can think of this kind of hack occurring.  The first was, of course, Sony Pictures.

Some number of hackers have turned two different organizations inside out from a digital perspective, exposing even the mundane stuff for public ridicule.  And some of the most harshly ridiculed practices of all in both cases involved passwords and credentials.

In the case of Sony Pictures, the effect was acutely embarrassing: scores of Excel spreadsheets detailing personal, business, and IT system passwords, with filenames like “website passwords” and “usernames & passwords”.   When Gawker writes an article detailing what morons you are,  you know it’s bad:  http://gawker.com/sonys-top-secret-password-lists-have-names-like-master_-1666775151

[screenshot: Sony Pictures password files]

In the case of Hacking Team, enough data was dumped not only for the obvious stupidity to come to light, but also for hashed passwords to be brute-forced and gleefully revealed in horrific detail on Twitter.  The examples below are (a) a dump of the admin’s Firefox password manager, and (b) an Excel spreadsheet containing VPS credentials.

[screenshots: Hacking Team Firefox password manager dump and VPS credentials spreadsheet]

So, let’s assume that this ‘dump and roast’ trend really is a trend, and will continue.  Perhaps it puts a little more personal skin in the game.  We all get lazy. We all take shortcuts.  But perhaps we’ll take fewer of them now that there is a risk that all those shortcuts get dissected at a later date, with a very sharp scalpel.

Trying to look competent during examination by your Future Hacker Overlords.  It’s an odd thing to imagine as a security influence.  But right now, it feels like it might become a thing….

Certificate Impossible

I’m writing an iOS app.  Loving it too, learning a lot.  More on that in a bit.

Today when I tried to update my github repository, I received a certificate error that said “Xcode can’t verify the identity of the server github.com”.  Because I’m a paranoid idiot, I decided to get to the bottom of it.   A search on Stack Overflow scared the crap out of me — the “accepted” answer is to just “make the prompt go away” by blindly choosing to trust the certificate.  That is theoretically the worst, laziest, most insecure answer in the world and we as an industry should be castigating such a brutal security recommendation, right?  But before casting stones, what *should* be done?

Here’s what I found about the intermediate certificate presented by github:

  • The intermediate certificate that shows up in the certificate chain given by github.com is called “DigiCert High Assurance EV CA-1”.
  • It was issued Nov 9 2006, expiring Nov 9 2021.
  • It has a SHA-1 fingerprint of 4A 35 8B 25 35 28 61 42 F6 0F 4E 9B 57 E2 AE 11 6D AB F0 F5.
  • It was issued by a CA certificate called “DigiCert High Assurance EV Root CA” with a serial number of “08 BB B0 25 47 13 4B C9 B1 10 D7 C1 A2 12 59 C5”.
  • The certificate gets a little green checkmark to say that it is valid.  I assume this means that the certificate passed CRL and OCSP checks.

To try to clear this up, I went to the DigiCert website, to their root certificates page at https://www.digicert.com/digicert-root-certificates.htm, to validate this intermediate certificate.  I downloaded the certificate called “DigiCert High Assurance EV CA-1” and confirmed that the downloaded cert matched what was shown on the website:

  • There is an intermediate cert on the website called “DigiCert High Assurance EV CA-1”.
  • It has a SHA-1 fingerprint of DB C7 E9 0B 0D A5 D8 8A 55 35 43 0E EB 66 5D 07 78 59 E8 E8.
  • It was issued Nov 9, 2007, expiring Nov 9 2021.
  • It was issued by a CA certificate called “DigiCert High Assurance EV Root CA” with a serial number of “03 37 B9 28 34 7C 60 A6 AE C5 AD B1 21 7F 38 60”.
  • The certificate gets a little green checkmark to say that it is valid.  I assume this means that the certificate passed CRL and OCSP checks.

So,  where does this leave us? Let’s just recap.

  • I get a warning about a certificate when I try to use Xcode to go to github.
  • When I view the certificate, the operating system pronounces the cert as “valid”.
  • Neither the thumbprint nor the issuer serial number matches the values advertised by DigiCert as the correct values for that intermediate CA certificate.

So what is an honest but paranoid person supposed to do now?   The chain presented by github fails both when Xcode looks at it programmatically (not that I can tell you exactly why the programmatic failure occurs) and when I attempt to validate it manually.
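If you want to reproduce the fingerprint comparison yourself, here is a minimal sketch in Python (purely illustrative, and note the caveat in the comments: the standard library only hands back the leaf certificate easily, not the intermediates, so checking an intermediate still means exporting it from your keychain or browser first):

```python
# Minimal sketch: compute an OpenSSL-style SHA-1 fingerprint so a certificate
# can be compared against the value published on the CA's website.
import hashlib
import socket
import ssl

def sha1_fingerprint(der_bytes):
    """Return a colon-separated SHA-1 fingerprint of DER-encoded cert bytes."""
    digest = hashlib.sha1(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def leaf_fingerprint(host, port=443):
    """Fetch the leaf certificate a server presents and fingerprint it.
    Caveat: Python's ssl module exposes only the leaf certificate this way,
    not the intermediates in the chain."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return sha1_fingerprint(der)
```

Running `sha1_fingerprint` over the DER bytes of the downloaded DigiCert file is the same computation the OS does when it shows you the fingerprint field.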

It is very possible that DigiCert has issued two intermediate CA certificates.  For example, companies define rollover certificates all the time, so that there is always one valid certificate for business continuity.  But given that both of these certificates expire on the same date, these particular certificates kinda suck as rollover certificates.   If DigiCert had reissued the CA certificate due to fraud or misadventure, I would *hope* that one of these two certs would fail CRL and OCSP checks.  But that hasn’t happened either.

Conclusion: based on the resources available to me, I have to conclude that the intermediate certificate offered by github is evil.  Either that, or DigiCert has wasted a bunch of my time by not simply documenting the second thumbprint for the second valid instantiation of the intermediate certificate.

If the former is true, I have no idea what to do.  If the latter is true, I still have no idea what to do.  Color me completely unable to move forward.  Yay security.

For the 2 people who actually bothered to read this to the end, here is a screenshot of the three certificate detail screens for the intermediate certificate: the leftmost cert is the intermediate certificate from the github error, the middle cert details are from the intermediate cert downloaded from DigiCert directly, and the rightmost window is the DigiCert details window.   Fill your boots. Any recommendation on how I could actually move forward here, short of emailing DigiCert support, would be gratefully accepted.  I’ll let you know what I find out from my email to support@digicert.com.

Axel’s Challenge

Axel says he’ll fetch you a beer at IIW if you can decrypt the token he has made publicly available on his blog: crypto doubters in the crowd,  this is your big chance!   As someone who was recently burned while copying and pasting encrypted tokens off of a web page and trying to decrypt them, I would be careful of the white space though; I bet if you ask really nicely he’d even send you a file version.
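For what it’s worth, the copy-and-paste burn is avoidable. Here is a minimal sketch of scrubbing pasted whitespace before decoding (it assumes a base64-encoded token, which may or may not match whatever encoding Axel actually used):

```python
# Minimal sketch: strip the whitespace a web page smuggles into a pasted
# token (newlines, indentation, non-breaking spaces) before decoding.
import base64

def decode_pasted_token(pasted):
    # str.split() with no arguments splits on any Unicode whitespace,
    # including non-breaking spaces, so joining the pieces removes it all.
    cleaned = "".join(pasted.split())
    return base64.b64decode(cleaned)
```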

My Case is MADE

I wrote not long ago about the HSBC Canada banking site, and its odd and frightening ways of dealing with access control.  Their fanciful notions of authentication proved to me that passwords were being stored in a retrievable format rather than in a format where the password can be verified as matching but not retrieved and examined.

This exact same issue has come up on the OSIS list with respect to private personal identifiers (PPIDs) – some have argued that it is perfectly safe to store raw PPID and modulus information at the RP,  and I cannot tell you how STRONGLY I disagree with that idea.

Luckily, Gunnar has pointed me to the perfect example:  apparently the HSBC France banking site has been hacked,  and guess what?  They are storing their customers’ passwords in clear text too (surprise surprise).  And a handy little SQL injection attack gives the hacker everything he needs to log in as anyone he can think to query for.

Had HSBC stored their passwords in a hashed (or at least encrypted) format,  the same attack would have netted the hacker a fraction of the value,  because there would still be a significant and likely cost-ineffective amount of time and work necessary to turn the data into a set of credentials that could be used for actual authentication.  This is why one-way hashing of passwords is an industry best practice, and why you will and should be laughed out of this community if you can’t get such a simple mitigation right.
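To make the point concrete, here is a minimal sketch of verify-but-not-retrieve storage: a random salt plus a deliberately slow one-way hash. The iteration count and salt size are illustrative choices of mine, not anyone’s production values.

```python
# Minimal sketch of storing a password so it can be verified but not
# retrieved: a random per-user salt plus a slow one-way hash (PBKDF2).
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; tune to your hardware

def hash_password(password, salt=None):
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest  # store both; neither reveals the password

def verify_password(candidate, salt, stored_digest):
    digest = hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt, ITERATIONS)
    return hmac.compare_digest(digest, stored_digest)  # constant-time compare
```

A SQL injection against a table built this way yields salts and digests, not credentials; the attacker still has to brute-force every single entry.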

If an RP stores the ppid and modulus of a self-issued information card in clear text, and that RP becomes the victim of a SQL injection attack,  a hacker has everything they need to get in the front door too.   The data must be stored in a way that mitigates this danger,  period.  I consider this to be identity 101 for information cards, and anyone who writes an RP should consider this to be a best practice.
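One way an RP might apply that mitigation, sketched below, is to persist only a one-way digest of the PPID and modulus and use the digest as the account lookup key. The function and field names here are my own invention, not anything from an information card spec.

```python
# Minimal sketch: derive the RP's lookup key from a one-way digest of the
# PPID and modulus, so a dumped table contains nothing directly replayable.
import hashlib

def card_lookup_key(ppid, modulus_hex):
    material = ppid.encode() + b"|" + modulus_hex.encode()
    return hashlib.sha256(material).hexdigest()

# At registration: store card_lookup_key(ppid, modulus) against the account.
# At login: recompute the digest from the presented token and look it up;
# the raw PPID and modulus never need to be persisted at all.
```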

Guidance

In researching a few products for a client, I came across an e-book on Managing Linux & UNIX Servers by Dustin Puryear.  I managed to get access to a chapter without registering, and I liked what I saw so much that I had to have the whole book.

The thing that is remarkable about this book to me is that it is NOT a book about technology, commands, program execution, or coding.  It is a book about what to get done and why.   There are so few of these kinds of books – the ones that assume that once you have a comprehensive plan for getting things done,  finding out how is the easy part.  The books that get that the mapping works better from the top down than the bottom up: all the man pages in the world will not help you if you don’t have the context to know which of them you should be reading, and what the end result should be when you apply that knowledge.  It is the guidance that makes the difference.

I very badly want a book like this for information card Relying Parties, specifically the PKI functionality of an RP.   I have work to do on my RP:  right now I know I’m missing several critical checks to ensure integrity and non-repudiation for the messages I’m accepting and trusting.   But how do I know that I have covered all the bases?   I have this list of interoperability issues.  I have a set of API calls into security libraries like xmlseclib and openssl that could possibly solve my issues.    What I do not have is guidance.   I feel like I’m assembling an entertainment unit from IKEA, and I have detailed engineering information on every screw and every panel in the entire IKEA inventory:  thousands of weights, heights, screw thread pitches, you name it.  While I technically have access to everything I could possibly need to assemble my entertainment unit,  it is left up to me to figure out which and how many of the inventory items I need, how they fit together, and in what order they must be assembled.

I suppose what I’m saying is we need to step above RTFM (Read the Fsking Manual) to KWFMTR (Know Which Fsking Manuals to Read).

(photo credit: http://www.flickr.com/photos/jonk/33283987/)

Security as Upsell

Here is a philosophical question for you:

SSL only for the privileged

Many SaaS vendors currently offer SSL support only to clients who pay a premium.  Users who don’t pay can’t have the “extra” benefit of using SSL.   What happens to the small companies and/or single users who wish to be secure, but do not need unlimited users, or 2GB of file storage, or 10 project templates?  Who in their right mind would pay $20 extra a month just to get SSL?   And what possible justification is there for denying transport-level security to smaller customers?

Today we have this perception that only the largest corporations need to pursue security:  the ones with CIOs and Enterprise Architects,  the ones trading publicly or in a vertical where audits are mandatory.  If you ask me, I think we could go a very long way if we stopped thinking like this and began to enable any person or organization, of any size, to care about, understand, and pursue secure internet operation.

I know it isn’t as lofty a goal as Bob has put forth;  but this issue, to me,  represents a small part of the underlying systemic problem that Bob is trying to shed light on.

Am I an Accessory?

I am stuck in an interesting dilemma.

I love my web hosting company, I think the people are great, the tools and support are top notch, and the price is right.  I was, in fact, thinking of upgrading my service with them.

But I cannot ignore the fact that on my particular server, they haven’t updated their openssl libraries in 5 YEARS.   Five.  By my count, they are 20 revisions behind the current library version.

This means that there is a list of exploits as long as my arm that can probably be used against the various sites that I sponsor, including the Pamela Project and OSIS. The big problem is that I’m using a shared Apache infrastructure, and while I can recompile PHP and Curl to use up-to-date libraries, Apache does a lot of the heavy lifting when it comes to transport-level security, and as far as I know, I cannot affect the modules that load into that shared webserver environment.

Obviously there are actions that I can take to ensure my own security.  I can change web hosters.  I can upgrade to a server where I have access to my own apache configuration.  The standard answer here is that if I don’t like what I’ve got I should leave.

But – what about the folks on my server who don’t even know they are at risk?  Who are running shopping carts and who just assume that they are on a well-patched, secure server?  By silently walking away from an insecure environment, am I in fact aiding and abetting the web hosting company in their terrible security practices?

I have in the past contacted support about updating SSL libraries when various remote holes were found.  I won’t quote their answer because I don’t have the emails to back up my recollection (I didn’t think to save them), but the version number speaks for itself:  0.9.7e.  The current version is 0.9.8k.  I’m not a big customer, and I know that by paying more I could become more secure — but must it follow that by paying less I have to be at risk?   I already pay extra per month for SSL support, and also for the unique IP address that is a prerequisite.  Do I have to get hacked before I have a case?

So I ask you all — for the small number of us who know better — what’s the right way to proceed?  Do I silently act to ensure that I myself am secure, leaving all those other poor uneducated suckers to continue in ignorance and risk?  Do I make a stink?  I doubt I have the clout to cause this very large company to do anything.  Yet, if I don’t, who will?

Update: I received a response to my separately sent inquiry to the security team just now (told you they are responsive):

We run Debian Linux. Debian does not put new upstream releases, even point releases, into a stable distribution. What happens is that only the security fixes are backported into a package in stable. This minimizes the possibility for the stable release to be de-stabilized by new code introduced upstream.
So while the version of libssl is 0.9.7-sarge5, it should nevertheless incorporate all the security fixes present in 0.9.8k.

So, the good news is that I’m probably safe, and the team is on top of it.  The bad news is that I simply have to trust that it is so; I don’t see a way to easily and independently verify it.
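One small check that is possible even from a shared host, for what little it proves, is asking a runtime which OpenSSL build it is actually linked against. This is a minimal sketch, and it illustrates exactly why the question is hard to settle: on a backporting distribution like Debian stable, the string still shows the old upstream version number no matter which fixes have been applied.

```python
# Minimal sketch: report the OpenSSL build the local Python runtime is
# linked against. On a distribution that backports security fixes (like
# Debian stable), this will still show the old upstream version number,
# so it confirms what you are running but not which fixes are present.
import ssl

print(ssl.OPENSSL_VERSION)        # version string reported by the library
print(ssl.OPENSSL_VERSION_INFO)   # the same version as a tuple of ints
```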