Reflections of an Identity Geek on the JLAW Fail

I’m sitting here, in the dark, when I should be sleeping. Thinking about how 100 different iCloud accounts were manipulated to give up their secrets.  We should all be taking a hard look at what constitutes account recovery in this day and age of the internet. Disclaimer – I haven’t had a coffee yet this morning.  If I sound like a raving lunatic, this may be why.

As the dust settles, it appears that the attackers walked in the front door.  Well, the side door, actually.  Details are sketchy, but it looks like account recovery processes at Apple were manipulated to give attackers access.  Why can this even happen?

1.  We design only for the Lowest Common Denominator

When an account recovery loop is assembled by a service, it is the same loop regardless of who you are. Or how savvy you are.  Or how likely you are to be targeted for a given threat.  Why is this?  Why not keep the base recovery experience as the one you get if you can barely spell computer and these password things are scary, but let people with stronger needs self-identify?  Allow people to ask to jump through more hoops, to supply more, and better, information in order to receive more, and better, protection from targeted attacks?

I know exactly why this kind of “better security” doesn’t happen.  Because for every JLaw attack, where the security could have helped, there are 10,000 regular people who would turn on a feature like this and then get locked out of their account.  There, I said it. The lowest common denominator is that the public expects that even if they do everything wrong, even if they cannot in any reasonable or provable way identify themselves as the actual owner of the account, they should still get their data back.  And the cost of dealing with those 10,000 upset locked-out people, both in PR and support terms, is very real.  More real and more common than the cost associated with the relatively few who get hacked.

2. We have purposely created a Stateless Machine

When you choose to try to recover an account today, you generally do so in a vacuum.  You are asked to identify yourself, and the information you give is often considered in isolation.  Do these two strings representing your dog’s name and your first school match the hashed strings stored in our database?  Yes?  Great!  Keys to the kingdom!  Doesn’t matter that somebody has been trying and failing to do the same thing three times a day for the last week.  No suspicion is attached to this success as a possible culmination of all those failures.  This is part of why an attacker can keep calling help desks over and over until they succeed, and why they can keep using online forms over and over until they succeed.  Also, see #1: it isn’t that unusual for people to really fail at knowing their recovery information and to still expect success.

The whole reason these systems were built to be stateless is that they were built to scale.  But those requirements need to be examined.  It should also be a requirement to at least try to recognize when an attacker could be systematically probing recovery systems, ranging from digital forms to help desks, maybe even in-person resources or direct emails to IT staff.
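
To make that concrete, here is a minimal sketch (all names and thresholds are hypothetical, nothing Apple-specific) of what stateful recovery could look like: every attempt is remembered per account and per channel, and a “success” that caps a string of recent failures is escalated instead of rewarded.

    from collections import defaultdict
    from datetime import datetime, timedelta

    WINDOW = timedelta(days=7)       # how long failed attempts stay "interesting"
    FAILURE_THRESHOLD = 5            # failures that make a later success suspicious

    attempts = defaultdict(list)     # account_id -> [(timestamp, channel, succeeded)]

    def record_recovery_attempt(account_id, channel, succeeded):
        now = datetime.utcnow()
        history = attempts[account_id]
        history[:] = [a for a in history if now - a[0] < WINDOW]   # slide the window
        history.append((now, channel, succeeded))
        recent_failures = sum(1 for _, _, ok in history if not ok)
        if succeeded and recent_failures >= FAILURE_THRESHOLD:
            # A "correct" answer arriving after a week of wrong ones is not a clean win:
            # step up verification and notify the account owner instead of handing over keys.
            return "ESCALATE"
        return "OK" if succeeded else "FAIL"

    # Six bad guesses through the web form, then a "correct" answer over the phone.
    for _ in range(6):
        record_recovery_attempt("user@example.com", "web_form", succeeded=False)
    print(record_recovery_attempt("user@example.com", "help_desk", succeeded=True))  # ESCALATE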

3. We keep the User in the dark

If somebody is systematically probing a given user’s account, don’t you think it would be valuable to tell them, so that they can form their own understanding of their safety?  If you’ve locked yourself out of your account, I’m sure you won’t mind the notifications.  And if you haven’t locked yourself out of your account, those notifications may be very important. For example, receiving a notification from every one of your email accounts and your bank in a 24-hour period may not be so significant to each individual system, but it should ring serious bells for the individual.  There are programs like Shared Signals that are evolving to help with cascading identity attacks, but for now, the only person who might see the pattern is the user.  And they are not involved in the process.
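
As a toy illustration (the provider names and thresholds are invented), the correlation only the user is positioned to make is trivial once the notifications exist: several unrelated providers reporting recovery activity inside one 24-hour window is a pattern, even if each event looks routine on its own.

    from datetime import datetime, timedelta

    # Hypothetical recovery-attempt notifications received by one person.
    notifications = [
        ("mail-provider-1", datetime(2014, 9, 1, 8, 10)),
        ("mail-provider-2", datetime(2014, 9, 1, 11, 45)),
        ("bank",            datetime(2014, 9, 1, 20, 30)),
    ]

    def looks_like_campaign(events, window=timedelta(hours=24), min_providers=3):
        """True if recovery attempts hit several distinct providers within one window."""
        events = sorted(events, key=lambda e: e[1])
        for i, (_, start) in enumerate(events):
            providers = {name for name, when in events[i:] if when - start <= window}
            if len(providers) >= min_providers:
                return True
        return False

    print(looks_like_campaign(notifications))   # True: three providers in under 24 hours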

4. Users don’t care until it’s too late

It’s true.  There are lots of optional things people could do to be safe that they never bother with.  But perhaps, if there were a way to make users aware of recovery-question guessing attempts against their account, users might get scared a little sooner, and carefully contemplate their options.

The WORST THING about this breach

I understand the prosaic duh moment going on where people note that the best way to not have naked pictures stolen is to not have naked pictures taken.  But this should in no way mask the failure that has taken place from an implementation standpoint.  We need to safely store and share sensitive things. As a society. We need to trust that the accounts we create and populate with our most treasured data are not just Swiss cheese for anyone willing to stalk a specific target.  The old canard of “Doctor, it hurts when I do this” / “Then don’t do that” doesn’t help if the underlying problem is disease rather than a boo-boo.  This issue is not a boo-boo, and turning the iPhone camera off will not prevent the spread of the disease; it just prevents one symptom from showing.

Recommendations

If the identity fairy came to visit and granted me three wishes, here is what I would wish for.  These aren’t qualified recommendations in any sense — just a place to start.

  1. Provide options for users to customize their own recovery ritual.
    1. Include things like
      1. Turning on notifications for events like calls to the help desk or for use of the password reset form
      2. Adding additional or alternate recovery steps
        1. Additional identity proofing steps before help desk support will engage  – like requiring a 2FA authentication before the call continues
        2. Requiring that KBA answers be retired (or at least flagged for review) after a certain number of incorrect guesses
        3. Turning on additional 2-factor authentication for services that may not normally be protected (see above for an example)
  2. Architect for recognition of accounts that self-identify (or are verified) as likely targets
    1. Help Desks should be able to recognize high-fraud-risk accounts
    2. Audit and accountability should be elevated
    3. Work towards a point where the system figures out who the high-risk accounts are in real time
  3. Track the use of recovery mechanisms, and make the history available to the user (see the sketch after this list).
    1. How many times has a recovery question been used
    2. How many times has the form been submitted with the user’s user name
    3. How many times and when has the help desk been notified
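
Here is the sketch promised in the third wish, with every name invented purely for illustration: each use of a recovery mechanism is logged per account, and the same log answers the user’s questions about how often, through what channel, and when.

    from collections import Counter
    from datetime import datetime

    recovery_log = []   # hypothetical per-account log of recovery mechanism use

    def log_recovery_event(account, mechanism, outcome):
        recovery_log.append({
            "account": account,
            "mechanism": mechanism,            # "kba_question", "reset_form", "help_desk"
            "timestamp": datetime.utcnow(),
            "outcome": outcome,                # "success" or "failure"
        })

    def recovery_history(account):
        """The user-facing summary: how many times each mechanism was used, and when last."""
        events = [e for e in recovery_log if e["account"] == account]
        counts = Counter(e["mechanism"] for e in events)
        last_used = {m: max(e["timestamp"] for e in events if e["mechanism"] == m)
                     for m in counts}
        return {"counts": dict(counts), "last_used": last_used}

    log_recovery_event("user@example.com", "kba_question", "failure")
    log_recovery_event("user@example.com", "reset_form", "failure")
    log_recovery_event("user@example.com", "help_desk", "success")
    print(recovery_history("user@example.com"))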

The sun is long up now. Time for reflection to end, and reality to intrude again…

The next conversation to be had

OK, now that the CIS and Catalyst conferences are (almost) out of the way, we need to rally the identity geeks and start talking about OAuth and OpenID Connect design patterns.  We need to get some public discourse going about token architectures for various real-world business access scenarios.

The value proposition needs to be made more concrete.  So let’s try to push on that rope in the next few months.


Certificate Impossible

I’m writing an iOS app.  Loving it too, learning a lot.  More on that in a bit.

Today when I tried to update my github repository, I received a certificate error that said “Xcode can’t verify the identity of the server github.com”.  Because I’m a paranoid idiot, I decided to get to the bottom of it.  A search on Stack Overflow scared the crap out of me — the “accepted” answer is to just “make the prompt go away” by blindly choosing to trust the certificate.  That is theoretically the worst, laziest, most insecure answer in the world, and we as an industry should be castigating such a brutal security recommendation, right?  But before casting stones, what *should* be done?

Here’s what I found about the intermediate certificate presented by github:

  • The intermediate certificate that shows up in the certificate chain given by github.com is called “DigiCert High Assurance EV CA-1”.
  • It was issued Nov 9 2006, expiring Nov 9 2021.
  •  It has a SHA-1 fingerprint of 4A 35 8B 25 35 28 61 42 F6 0F 4E 9B 57 E2 AE 11 6D AB F0 F5.
  • It was issued by a CA certificate called “DigiCert High Assurance EV Root CA” with a serial number of “08 BB B0 25 47 13 4B C9 B1 10 D7 C1 A2 12 59 C5”.
  • The certificate gets a little green checkmark to say that the certificate is valid.  I assume this means that the certificate passed CRL and OCSP checks.


To try to clear this up, I went to the DigiCert website, to their root certificates page at https://www.digicert.com/digicert-root-certificates.htm, to validate this intermediate certificate.  I downloaded the certificate called “DigiCert High Assurance EV CA-1” and confirmed that the downloaded cert matched what was shown on the website:

  • There is an intermediate cert on the website called “DigiCert High Assurance EV CA-1”.
  • It has a SHA-1 fingerprint of DB C7 E9 0B 0D A5 D8 8A 55 35 43 0E EB 66 5D 07 78 59 E8 E8.
  • It was issued Nov 9, 2007, expiring Nov 9 2021.
  • It was issued by a CA certificate called “DigiCert High Assurance EV Root CA” with a serial number of “03 37 B9 28 34 7C 60 A6 AE C5 AD B1 21 7F 38 60”
  • The certificate gets a little green checkmark to say that the certificate is valid.  I assume this means that the certificate passed CRL and OCSP checks.

So,  where does this leave us? Let’s just recap.

  • I get a warning about a certificate when I try to use Xcode to go to github.
  • When I view the certificate, the operating system pronounces the cert as “valid”.
  • Neither the thumbprint nor the issuer serial number matches the values advertised by DigiCert as the correct values for that intermediate CA certificate.

So what is an honest but paranoid person supposed to do now?  The chain presented by github fails both when Xcode looks at it programmatically (not that I can tell you exactly why the programmatic failure occurs) and when I attempt to manually validate it.
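
For anyone who wants to repeat the manual check, this is roughly all that “manually validate” means here: a short Python sketch (the file name is a placeholder for whichever certificate you exported or downloaded) that computes the SHA-1 fingerprint of a PEM certificate so it can be compared against the value DigiCert publishes.

    import hashlib
    import ssl

    # Placeholder: the intermediate certificate saved locally in PEM format.
    PEM_PATH = "DigiCertHighAssuranceEVCA-1.pem"

    def sha1_fingerprint(pem_path):
        """SHA-1 fingerprint of a PEM certificate, formatted the way cert viewers show it."""
        with open(pem_path) as f:
            der = ssl.PEM_cert_to_DER_cert(f.read())
        digest = hashlib.sha1(der).hexdigest().upper()
        return " ".join(digest[i:i + 2] for i in range(0, len(digest), 2))

    # Compare this output against the fingerprint on digicert.com and the one in the viewer.
    print(sha1_fingerprint(PEM_PATH))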

It is very possible that DigiCert has issued two intermediate CA certificates.  For example, companies define rollover certificates all the time, so that there is always one valid certificate for business continuity.  But given that both of these certificates expire on the same date, these particular certificates kinda suck as rollover certificates.  If DigiCert had reissued the CA certificate due to fraud or misadventure, I would *hope* that one of these two certs would fail CRL and OCSP checks.  But that hasn’t happened either.

Conclusion: Based on the resources available to me, I have to conclude that the intermediate certificate offered by github is evil.  Either that, or DigiCert has wasted a bunch of my time by not simply documenting the second thumbprint for the second valid instantiation of the intermediate certificate.

If the former is true, I have no idea what to do.  If the latter is true, I still have no idea what to do.  Color me completely unable to move forward.  Yay security.

For the 2 people who actually bothered to read this to the end, here is a screenshot of the three certificate detail screens for the intermediate certificate — the leftmost cert is the intermediate certificate from the github error, the middle cert details are from the intermediate cert downloaded from DigiCert directly, and the rightmost window is the DigiCert details window.  Fill your boots. Any recommendation on how I could actually move forward here, short of emailing DigiCert support, would be gratefully accepted.  I’ll let you know what I find out from my email to support@digicert.com.


Time to Act!

Did you know that a vote is on at the OpenID Foundation to approve an initial implementer’s draft of OpenID Connect?

Your action is required.

If you haven’t looked at these specs yet,  go to http://openid.net/connect.    If you have only limited time, check out the Basic Client Profile to get an idea of what we’re talking about, or look at Nat Sakimura’s OpenID Connect in a Nutshell.

If you don’t even know what I’m talking about,  you need to go find out.  OpenID Connect is an identity layer on top of OAuth 2.0.  It abandons the message structure of OpenID 2.0 completely, and instead embraces the API security layer.  While OAuth 2.0 takes care of the mechanism of asking for a token and using that token, OpenID Connect defines a scope that protects a standardized set of identity services: these services provide roughly the same set of attributes, authentication context, and session expiry information that you would get in a SAML assertion.
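
For the curious, here is a rough Python sketch of the shape of the Basic Client Profile.  Every endpoint, client ID, and secret below is a placeholder, and real code would also validate the ID Token it gets back; the point is only that OpenID Connect is ordinary OAuth 2.0 plus a standardized “openid” scope and a standardized UserInfo service.

    import json
    import urllib.request
    from urllib.parse import urlencode

    # Placeholder provider endpoints and client registration.
    AUTHORIZATION_ENDPOINT = "https://op.example.com/authorize"
    TOKEN_ENDPOINT = "https://op.example.com/token"
    USERINFO_ENDPOINT = "https://op.example.com/userinfo"
    CLIENT_ID, CLIENT_SECRET = "my_client_id", "my_client_secret"
    REDIRECT_URI = "https://rp.example.com/callback"

    def authorization_url(state):
        """Step 1: send the browser here; scope=openid is what makes it OpenID Connect."""
        params = {"response_type": "code", "client_id": CLIENT_ID,
                  "redirect_uri": REDIRECT_URI, "scope": "openid profile", "state": state}
        return AUTHORIZATION_ENDPOINT + "?" + urlencode(params)

    def redeem_code(code):
        """Step 2: back-channel exchange of the code for an access token and an ID Token."""
        body = urlencode({"grant_type": "authorization_code", "code": code,
                          "redirect_uri": REDIRECT_URI, "client_id": CLIENT_ID,
                          "client_secret": CLIENT_SECRET}).encode()
        with urllib.request.urlopen(TOKEN_ENDPOINT, data=body) as resp:
            return json.load(resp)                    # access_token, id_token, ...

    def fetch_userinfo(access_token):
        """Step 3: the standardized identity service protected by the openid scope."""
        req = urllib.request.Request(USERINFO_ENDPOINT,
                                     headers={"Authorization": "Bearer " + access_token})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)                    # claims: sub, name, email, ...

    print(authorization_url(state="abc123"))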

SAML, OAuth 2.0, and OpenID Connect, when taken together, allow identity and issuer/session information to become a known common quantity, traded either on the front channel or the back channel, consumable by the largest enterprises and the simplest mobile applications, and secured at any level of assurance.

If you are already an OpenID Foundation member, you simply need to visit a website, log in with your OpenID, and cast your vote: go to https://openid.net/foundation/members/polls/62.

If you aren’t an OpenID Foundation member, becoming a member is simple and affordable: you can join as an individual for USD $25.  Visit https://openid.net/foundation/members/registration to join, and then you too can cast a vote.

You only have 5 more days: voting closes on February 15th, so do not wait until the last minute!

Google Plus: Minus 1

Google Plus started out so well!  It was pleasant, easy, and there was a lovely gratification in adding people to circles and being added in return.  It felt like finally, perhaps somebody at Google had figured out how to be social!  People were sharing, and communicating, and suddenly it seemed like maybe there might be an alternative to Facebook that had a chance.

And then…. Google started enforcing their real names policy.  Obsessively.  The fly in the ointment?  While Google can state that they require real names, they have no definitive way to determine which names are real.  The result is an offensively discriminatory process of identifying names that don’t appear to conform and requiring proof of identity only from those people.

My question to Google:  what exactly are you trying to accomplish?  Because I thought you were trying to create a welcoming place where insights and observations were shared with fellow end users who have formed a relationship with each other.   A place where users invite each other to talk and manage relationships themselves.

Instead, the real names policy is a gating factor, at a time when the service is just struggling to gain critical mass.  You have real people with odd names being banned from using Plus and required to prove their identity.  You have people with excellent internet reputations banned because they publish under a nickname.  The result is a ridiculously easy-to-game entrance requirement that punishes those who genuinely want to express their individuality, while offering a loophole the size of a planet for anyone else to slide through.

And for what?   In the identity industry we often talk about trading value for value.  In theory, users are willing to take extra steps in order to get extra value.  Is Google Plus about high assurance transactions for which the friction, pain and invasiveness associated with identity vetting is a worthwhile trade?  No. Completely outside of any question of whether a real names policy is right or wrong, enforcement of this policy is bad for business, if the business is supposed to be that of creating a popular and well-used platform that keeps users inside their Google bubble all the time.

The people I want in my social circles prove themselves over time. They say smart things and engage in positive ways. Requiring government identification before engaging in casual conversation would be considered horribly antisocial in real life – why does Google think it’s ok in the social networking world?   They are choking the life and personality out of their own service, before it has even had the chance to flourish.

Is it worth trying to communicate the facepalm that is Google Plus’s real names enforcement to Google in some quantifiable sense?  Perhaps the numbers-oriented folks at Google might look at a huge number of accounts that have been banned from the Plus service and say “hey, maybe that’s bad”?  If so, it might be worth adding that nickname in parentheses to your profile.  If Google is going to force identity vetting, they should be prepared to do it for all 20 million accounts.  And they should be prepared to monitor, and maintain, and police, and arbitrate.  And what will be the result?  An accountable digital space.  Sounds like a blast, right?  Party at Google Plus, bring your flights of fancy along for the ride…


Why Identity Management Matters

A few weeks ago, a convicted sex offender abducted a 10-year-old girl from a shopping mall in my hometown (link).  The attacker posed as a police officer, but when the girl asked too many questions, she was simply picked up and carried to a van, which then sped away from the mall. Even as the girl’s father searched the mall for his daughter, police pulled over the same van for speeding.  The officer at the traffic stop ran the attacker’s driver’s license, gave him a ticket, and let him go.

At the time police pulled over the van, there was no public bulletin, no specific information about a kidnapping, nothing to make the officer suspicious about the man and the girl.  That officer had only one source at his disposal that could have caused him to question the situation in front of him: data delivered from a search on the attacker’s driver’s license.  The officer should have been able to see that the man was a convicted sexual predator with a violent history towards children.  Did the officer actually see this data?  I don’t know.  If the officer did have access to that information, he did not make a correlation between the van’s passenger and the impropriety of a sexual predator being alone with a child. It is always easy to connect the dots after the fact, of course, but the real question is:  how could that police officer have been assisted in connecting those critical dots in real time?  What needed to change in that situation to cause an inkling of suspicion, just enough for a few extra questions, a slightly more in-depth interview?

The ultimate goal of the identity management industry is to have correct information available in the moment when it is needed, presented in a fashion that changes decisions for the better.   We have fancy names for this: corporate agility, visibility, business intelligence and so on; but those fancy terms go away when I think of that night, that scared little girl, and the police officer who didn’t understand the context of the situation in time to help.

In this case the story had a happy ending:  the attacker, likely realizing that the police would eventually put two and two together, dropped the girl off unharmed at a fast food restaurant.  What remains are questions from the community – the next time an officer happens to be in the right place at the right time, will the availability, accuracy, timeliness and relevance of the data he has access to save a life or cost it?

Audio Visual Nirvana

I admit it – I’m currently obsessed by two things: sound and style.  In sailing, rule #1 is: look good.  It turns out that you have to be able to sail well to obey that rule.  I’ve decided that the same rule should apply to my audio visual life at home.  In my mind, this means four things:

  • Excellent sound and picture quality everywhere.
  • Control of the sound and picture from anywhere.
  • No computers next to AV equipment.
  • No visible wires.  Anywhere.

To truly obey rule #1, I’ve discovered a few things.  Placing devices into iPod docks does not work.  It’s inconvenient, and you have to walk into the room to control the sound.   I want my phone with me, not connected to my home stereo.  Also — connecting speakers directly to a computer sucks.  I don’t want “computer speaker” sound.  I want stereo HiFi.  I want to be able to shake the room while I’m working, but pause the sound in all parts of my house instantly if my phone rings, without taking a step.

Here is the solution I’ve come up with.  To the tech-religious nuts who read this – I’m sure it can be done equally well with some other product suite.  I’m not trying to sell you on my choice of platform, only the end result. Do please use whatever you want to build the same thing if it makes you happy.

Here is my architecture: if you want details, read below.  With the architecture below, I get fantastic-quality stereo from any of the bottom three devices to either of the top two high-quality zones.  My office stereo is completely physically separate from my desk, nicely tucked away in a bookcase. And I can control either of these zones from my iPhone without getting up from wherever I’m sitting.

The Architecture

The Details

My system works through the use of the following bits:

  • Home Sharing:  this is an iTunes feature that lets you broadcast music and video from an iTunes Library.
  • Airplay:  this is a feature of iPod/iPhone apps for music, photos, and movies that lets you choose a remote output source.
  • Remote:  this is a free app from the app store (made by Apple) that lets you connect to and control both iTunes Libraries and devices like AppleTV.
  • AppleTV:  this is a device from Apple that streams audio and video from Home Sharing, Airplay, and other internet sources and outputs to HDMI and/or digital audio.
  • Airport Express:  this is a device from Apple that streams audio only from Home Sharing and Airplay sources using digital audio or RCA.

All of the named entities in the above diagram are network entities with unique IP addresses on my home network.  Gemini and Soyuz are computers running iTunes with Home Sharing enabled.  Soyuz is an old 12″ PowerBook that is now acting as a server, both containing my music and video library and acting as a Time Machine backup server.  Sputnik and Apollo are audio sources that show up when the Airplay icon is selected from any of Atlantis, Soyuz, or Gemini.

Setup is pretty well plug & play – you will need the Airport Utility to configure the Airport Express from your computer, because it has no video interface.  The AppleTV can be configured from your TV.  All of the devices connected to the AV equipment are, for all intents and purposes, invisible. There are a few notes, though:

  • All of the devices must be on the same network, and Home Sharing must be enabled with the same Apple ID.  Apple sees all; would you expect anything else? Note that while Home Sharing Apple IDs must match, the Library itself can sync to the iTunes Store with a different Apple ID, so this architecture does allow everyone to keep their own Apple IDs for apps, etc.
  • This solution only works with iTunes.  If I watch a video on YouTube in my office browser, there is no way to get that sound to my office stereo (I can go to my living room and play it on the AppleTV, though, because the AppleTV can stream directly from YouTube).
  • As far as I know, there is no need for any Apple computers in this setup.  A PC running iTunes can replace either Gemini or Soyuz.
  • You’ll pay as much for the Airport Express as for your AppleTV even though it is lesser tech from an A/V perspective, because the Airport Express can also be configured as a wireless router.
  • No proprietary cables are needed for this solution, not that this saves you any money: the standard stuff costs a fortune.  The cost of assorted speaker wire for 5.1 audio, HDMI cables, digital audio cables, and an RCA-to-mini-jack cable surpassed the cost of the AppleTV and Airport Express combined.
  • You can stream photos to the AppleTV, both as a screensaver and for slideshows.  I have my screensaver set up to stream photos from my Favorites list in Flickr, meaning every time I add to that list, I’m enriching the photography shown on my wall while music is being played, or while the AppleTV is not busy with other things.
  • If I stream video from an iTunes Library, it will be from Soyuz, which is hard-wired to my wifi router.  Currently my plan is to rip my DVD collection to iTunes – at that point, I won’t even need a DVD player.
  • The weakest link in this whole setup is iTunes itself.  Maybe one day Apple will wake up to the fact that iTunes should be a personal DJ system – allowing you to classify, organize, and moderate your media content with the most sensitive of nuances — as you’re listening, not in advance.  In my opinion, they’ve put a Pinto at the center of their media empire, instead of the Lotus Esprit that they should be capable of.

Life is good :)

What a Team

In case you hadn’t heard, Ping Identity just hired Travis Spencer to work with Paul Madsen, Patrick Harding and myself in the Office of the CTO.

Score!!!

We’ll all be meeting up in January for our first in-person strategy session, and I can’t wait!