Archive for the ‘Identity Theory’ Category
Friday, February 10th, 2012
Did you know that a vote is on at the OpenID Foundation to approve an initial implementer’s draft of OpenID Connect?
Your action is required.
If you haven’t looked at these specs yet, go to http://openid.net/connect. If you have only limited time, check out the Basic Client Profile to get an idea of what we’re talking about, or look at Nat Sakimura’s OpenID Connect in a Nutshell.
If you don’t even know what I’m talking about, you need to go find out. OpenID Connect is an identity layer on top of OAuth 2.0. It abandons the redirect-based structure of OpenID 2.0 completely, and instead embraces the API security layer. While OAuth 2.0 takes care of the mechanism of asking for a token and using that token, OpenID Connect creates a scope that protects a standardized set of identity services: these services provide roughly the same set of attributes, authentication context, and session expiry information that you would get in a SAML assertion.
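For the protocol-curious, here is a rough sketch (in Python, standard library only) of what that layering looks like from a client's point of view. The endpoint, client ID, and redirect URI below are hypothetical placeholders; the point is that an OpenID Connect authorization request is just an ordinary OAuth 2.0 request whose scope includes the standardized `openid` value, which is what entitles the client to an ID Token and the standardized identity services.

```python
from urllib.parse import urlencode

# Hypothetical endpoint and client registration values, for illustration only.
AUTHORIZATION_ENDPOINT = "https://idp.example.com/authorize"
CLIENT_ID = "my-client-id"
REDIRECT_URI = "https://app.example.com/callback"

def build_oidc_auth_request(state: str, nonce: str) -> str:
    """Build a plain OAuth 2.0 authorization request; the 'openid'
    scope is what turns it into an OpenID Connect request."""
    params = {
        "response_type": "code",          # authorization code flow
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "openid profile email",  # 'openid' is the identity layer
        "state": state,                   # CSRF protection
        "nonce": nonce,                   # binds the ID Token to this request
    }
    return AUTHORIZATION_ENDPOINT + "?" + urlencode(params)

print(build_oidc_auth_request("abc123", "n-0S6_WzA2Mj"))
```

Everything after the question mark is vanilla OAuth 2.0; the identity semantics ride along in the scope.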
SAML, OAuth 2.0, and OpenID Connect, when taken together, allow identity and issuer/session information to become a known common quantity, traded either on the front channel or the back channel, consumable by the largest enterprises and the simplest mobile applications, and secured at any level of assurance.
If you are already an OpenID Foundation member, you simply need to log in with your OpenID and cast your vote at https://openid.net/foundation/members/polls/62.
If you aren’t an OpenID Foundation member, becoming one is simple and affordable: you can join as an individual for US $25. Visit https://openid.net/foundation/members/registration to join, and then you too can cast a vote.
You only have five more days; voting closes on February 15th. Do not wait until the last minute!
Monday, March 7th, 2011
A few weeks ago, a convicted sex offender abducted a 10-year-old girl from a shopping mall in my hometown (link). The attacker posed as a police officer, but when the girl asked too many questions, he simply picked her up and carried her to a van, which then sped away from the mall. Even as the girl’s father searched the mall for his daughter, police pulled over the same van for speeding. The officer at the traffic stop ran the attacker’s driver’s license, gave him a ticket, and let him go.
At the time police pulled over the van, there was no public bulletin, no specific information about a kidnapping, nothing to make the officer suspicious about the man and the girl. That officer had only one source at his disposal that could have caused him to question the situation in front of him: data delivered from a search on the attacker’s driver’s license. The officer should have been able to see that the man was a convicted sexual predator with a violent history towards children. Did the officer actually see this data? I don’t know. If the officer did have access to that information, he did not make a correlation between the van’s passenger and the impropriety of a sexual predator being alone with a child. It is always easy to connect the dots after the fact, of course, but the real question is: how could that police officer have been assisted in connecting those critical dots in real time? What needed to change in that situation to cause an inkling of suspicion, just enough for a few extra questions, a slightly more in-depth interview?
The ultimate goal of the identity management industry is to have correct information available in the moment when it is needed, presented in a fashion that changes decisions for the better. We have fancy names for this: corporate agility, visibility, business intelligence and so on; but those fancy terms go away when I think of that night, that scared little girl, and the police officer who didn’t understand the context of the situation in time to help.
In this case the story had a happy ending: the attacker, likely realizing that the police would eventually put two and two together, dropped the girl off unharmed at a fast food restaurant. What remains are questions from the community – the next time an officer happens to be in the right place at the right time, will the availability, accuracy, timeliness and relevance of the data he has access to save a life or cost it?
Monday, May 3rd, 2010
Microsoft announced last Tuesday that CardSpace 2.0 beta would not be releasing at the same time as ADFS 2.0. That fact may not have immediate significance to you, but it certainly does to me. Microsoft, you’ve blown it.
On one hand, I’m immensely relieved. A premature release of CardSpace 2.0 would have removed personal card support from the desktop, meaning that CardSpace would have been relegated to nothing more than Home Realm discovery.
On the other hand… We won’t know for sure until ADFS 2.0 ships, but from what I and other people have seen from the beta and release candidate versions, Microsoft has broken backward compatibility with CardSpace 1.0. This means that unless Microsoft has taken recent steps to regress their information card issuance code, ADFS 2.0 will ship in information card limbo.
I am trying not to care and failing miserably. Let’s face it, Microsoft can release their software in whatever shape they see fit. If they want to, they can release an initial version of a client with no server, and then release a version of the server *years* later that can’t work with the initial client, and can’t be deployed with the later client because that later client “isn’t done yet”. I’m sure that the collateral damage is the least of their problems, and I actually know and understand better than most what internal and external pressures may have been brought to bear. Resources are precious, and both FIM and ADFS have slipped themselves, so somebody had to draw a line.
But see, people were waiting. Big companies, waiting to run information card pilots. Governments, excited to use ADFS 2.0 to implement higher-assurance consumer identity projects. There weren’t a huge number of interested parties, but dammit, they were BIG interested parties. Those interested parties need a sustainable closed circle — a production server and a production client. Not a production server that can only work with a client that “isn’t done yet”.
In the meantime, there is a very hardy little information card community that can at least now stop the horrible waiting and wondering game with respect to ADFS 2.0 and CardSpace 2.0. The choice for the immediate future is becoming clear: CardSpace 1.0 remains the de facto standard for information cards. The rest is moot. Regardless of the hole that Microsoft may have dug for itself, the quality and uniqueness of the interactions that the IMI spec makes possible are undeniable, and I hope inevitable in some variant. I continue to believe that this protocol represents our best hope to regain rational control over our own digital relationships.
It is entirely possible that companies like Azigo and Avoco Secure will see the silver lining here and do the extra work to shim up the ADFS server to work again with the rest of our ecosystem. We’re not out for the count, and at least now we finally know what the biggest player in our space plans, even if it is a big fat WTF…
Friday, February 5th, 2010
Twitter broke a very interesting story this week about a hacker who bulk-harvested account details by installing backdoors in a popular torrent hosting solution. Users registered for a valid service, and received value in return, but all the while, their details were being stolen.
This would be a pretty boring phish, except for the part where users re-use passwords and account names ALL THE TIME. The current trend is upsell — harvest a low-value throwaway password at an insecure site and then see what high value matches can be made with the same username and password.
Identity theft via phishing used to be a consumer identity problem, but cloud services and extranets have changed that. There is now a new game in town: commercial phishing. If your enterprise users are uninformed enough to use their work email and a standard muscle-memory password at a site like a torrent site, attackers now have a growing list of possible commercial candidates for that account. Of course, there is always the chance that the worst-case scenario will happen and an attacker will harvest your entire Enterprise Directory. You may say: my company is obscure, what use would hacking my company be? Well, if you use Outlook Web Access, and your AD password is phished, and your accountant uses his or her work email address for password recovery on your corporate banking site, there is a path for an attacker to get at your organization’s money from the internet.
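If you want to see how mechanical the attacker’s “upsell” really is, here is a minimal sketch. The breach dump and corporate accounts below are entirely made up, but the matching step is exactly the correlation an attacker would run against a harvested credential list.

```python
def find_reused_credentials(breach_dump, enterprise_accounts):
    """Flag enterprise accounts whose (email, password) pair also
    appears in a harvested breach dump -- the 'upsell' an attacker
    performs after phishing a low-value site."""
    breached = {(email.lower(), pw) for email, pw in breach_dump}
    return sorted(email for email, pw in enterprise_accounts
                  if (email.lower(), pw) in breached)

# Illustrative, made-up data:
dump = [("alice@corp.example", "hunter2"), ("bob@mail.example", "qwerty")]
corp = [("alice@corp.example", "hunter2"), ("carol@corp.example", "S3cure!pw")]
print(find_reused_credentials(dump, corp))  # alice reused her work credentials
```

A few lines of code, a few million harvested rows, and every reused work password becomes a door into an enterprise.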
I think it’s hysterical that a company will spend all sorts of money educating its workforce about physical safety and nothing on account safety. Why is there not a brightly colored data safety reminder on every floor, something to idly inspect while you’re waiting for the elevator? As much as you scoff at the idea, the very prosaic advice that this fire poster offers DOES help in muscle-memory situations. The strategy of setting out simple rules and making them highly visible does work.
Not only does a sign like this not exist for account safety, I don’t even think that there is agreed-upon text to go on it. No wonder we’re in the state we’re in.
Sunday, January 24th, 2010
I believe that what Apple releases next week will herald the end of broad adoption of general computing devices. The introduction of their tablet will begin in earnest a trend towards tightly integrated, tightly controlled sealed-hardware computer devices that allow the majority of the population to accomplish the most popular computing tasks without doing anything more than visiting the app store. Not as your “mobile” computing solution by the way — as your only computing solution.
Why wouldn’t the world move in this direction? Why shouldn’t your computer be as easy to use as your smartphone? Why fiddle with drivers and desktops and operating systems if all you ever do is surf the web and send email to your grandchildren? Even if you want more than the basics, why go through long and complicated application installs when you can just click a button?
This is the future, and those of us in industries like identity management had better stop and pause right now, because per-application passwords have no place in the world of the app store. They are difficult to type on a touchscreen, and inconvenient in exactly the way that the new push-button paradigm seeks to overcome. This could be the best thing — or the worst thing to happen to those of us working on protocols which replace password storage.
There is no doubt that passwords *will* be hidden from the user from now on. In the same way that nobody types a telephone number into their phone anymore (they just use Contacts), nobody will type a username or a password. Heck, they won’t even type the URL of the service. Details will be hidden, the pain taken away. We have a small window in time to affect the way in which that happens, before users forget what it was like to have to figure out which user name went with which password and which site.
Don’t believe me? If you have an iPhone, you should try PageOnce’s Personal Assistant app. I reviewed PageOnce ages ago: it aggregates accounts of all kinds, giving you a consolidated dashboard and allowing you to log in without typing your password. I panned the service: not only do you have to give your passwords away, but you have to go out of your way to PageOnce for that very first account login – why do that when you can go directly to the website and log in? On a general-purpose computing device, the service has no use to me. On the iPhone, however? Pure solid gold. Clicking that little “Personal Assistant” icon is always easier than typing in a URL for the original website. Not only do I never have to remember credentials, I am essentially given a menu of my accounts, and I’m one click away from transacting.
But, you say – it’s just mobile. What really matters is the desktop. I say you’re wrong. I say that the ubiquity of the smartphone is coming to a desktop near you, courtesy of Apple Inc. I say that we had better *start* our strategic thinking about what happens when a user expects authentication to be no more complicated than making a phone call on a smartphone.
If we don’t make it that easy, somebody else will do it. Of that you can rest assured.
Wednesday, August 26th, 2009
I talked to several people who were somewhat disturbed about my last blog post. Surely it can’t be that easy?
The potential exists – and I think it is worthwhile to ask why. Most people have been taught to guard their passwords, but have been carefully instructed to feel no responsibility for the other ways in which an attacker could access their account. Why is it we can educate about password complexity and reuse, but don’t want to explain under what circumstances a “personal identification” answer might be used? Why is it we will force a user to change their password every three months, but the email address that would be used in a password recovery effort is never tested, and security questions are never refreshed or reinforced? Why is it that we as a culture have embraced the “fire drill” in the real world, advising people to learn alternate exit routes in case the elevators are out of order, yet in the online world we treat it as crazy and unthinkable to advise our more concerned users to familiarize themselves with, and verify the operation of, the page behind their “forgot my password” links?
If you are someone who worries about being hacked, and if you are willing to take a little bit of time and energy to at least understand the risk you might be facing, my advice to you is: Go forth and recover.
Go ahead. Recover all of your accounts. You probably needed to rotate those passwords anyway. Find those “forgot password” links and click ‘em. Chances are, you will be able to reset your password in an automated fashion, either by answering a pre-specified question, or by getting a link sent to an email account (sometimes, both approaches are combined). If you are asked a question, is the answer guessable? Is it searchable? Is it short? Is it a single dictionary word? Can you control the guessability of the answer, or is it a hard-coded format such as a postal code or a birthdate? If you are emailed a link, follow the chain to your email provider and recover your password there too. Is it more pre-specified questions? Are they the same questions? Were you required to click on a link sent to yet another email address? If so, follow the chain again. Rinse and repeat. This is the same trail that a hacker would follow – often they find something you’ve forgotten, something out of date, an expired account or a typo that you never would guess could end up in a compromise of your identity. Password recovery mechanisms were used to compromise Sarah Palin’s email account, and also used to steal corporate data from Twitter. If you can satisfy yourself that the password recovery loop is closed, that your answers are not guessable, that you haven’t specified incorrect, out-of-date, or non-existent email addresses, and that the services you use don’t use unsafe mechanisms, you will be safer.
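For the technically inclined, the chain-following exercise above can be modeled as a tiny program. This is purely illustrative: the accounts and the recovery map below are hypothetical, but the logic is the same “rinse and repeat” walk described above, stopping when it reaches a clean end, a dead address, or a loop.

```python
def audit_recovery_chain(recovery_map, start, dead_accounts):
    """Follow the 'forgot password' chain from a starting account.
    recovery_map maps each account to the account its recovery email
    points at (None means recovery is self-contained, e.g. questions
    only). Returns the chain walked plus any problem found."""
    chain, seen = [start], {start}
    account = start
    while True:
        nxt = recovery_map.get(account)
        if nxt is None:
            return chain, "ok"                      # chain terminates cleanly
        if nxt in dead_accounts:
            return chain + [nxt], "dead recovery address"
        if nxt in seen:
            return chain + [nxt], "recovery loop"   # a hacker's dream? no: yours
        chain.append(nxt)
        seen.add(nxt)
        account = nxt

# Illustrative, made-up accounts: the bank recovers to Gmail,
# which recovers to an ISP mailbox you closed years ago.
recovery = {"bank": "gmail", "gmail": "old-isp-mail", "old-isp-mail": None}
print(audit_recovery_chain(recovery, "bank", dead_accounts={"old-isp-mail"}))
```

An expired account two hops down the chain is exactly the kind of forgotten detail that ends in a compromise.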
Don’t believe me? Check out the techniques this guy used to compromise the identity of a mere acquaintance. He gained access to supposedly “secure” accounts whose password recovery mechanisms depended on grossly guessable data.
Should you have to do this? No. Not according to almost anyone in this business. Are you expected to do this? Of course not. How many people actually memorize an alternate exit route from every hotel room they ever stay in? Only the ultra-paranoid, I am sure. Still, if you care, if you are motivated, and if you want to know what to do, perhaps this can be a starting point.
Thursday, August 13th, 2009
Catalyst North America 2009 was a fascinating conference – but maybe fascinating to me for different reasons than it might have been fascinating to you.
The logistics summary is short: Burton Group has just plain gotten it right. Good food, free, reliable internet access even in the room, power for laptops, nice hotel. They even arranged an airport shuttle discount. They paid a lot of attention to the cost incurred by their attendees, and it was appreciated.
I’ll tell you the truth. I’m not going to particularly talk about the content of any given presentation. After 8 years, a large portion of the content is pretty well ingrained in my head, and while I learn new things every time, each little twist and turn has really become a single data point contributing to an overall set of trends. I think of the following points as indicators – but you be the judge of the truth of that statement.
1. Presentations fit to take home to Mom
This is the first year of all the years I have been attending Catalyst that I have downloaded presentations and recommended them to those who could not attend; that’s how good some of these presentations were. The speaker notes were critical to being able to pass these presentations on, so thank you to the speakers who took the time to make sure their presentations were consumable after the fact.
2. Cloud Track
The cloud track presentations I saw this year were fantastic, but I hope that this is the first and last time that Burton focuses primarily on “Cloud”. Why? Because I hope that after this year, everyone will be savvy enough and discerning enough to get past such a broad topic rollup. A lot of attendees I talked to had been sent to Catalyst with the mission of “understanding this cloud thing”, and I think that the Burton Group very astutely served the needs of those attendees – but while general education is important, there were people there who were frustrated because they wanted to talk about actual concrete things that Enterprises might want to do in the cloud. You can only sit through the layered diagram of SaaS, PaaS, IaaS, and SIaaS (Software Infrastructure as a Service, newly defined by the Burton Group) so many times. Unless you were interested in virtualization, which seemed to be covered very thoroughly, I don’t believe that many of the cloud sessions put a targeted group of people with a common business goal in the same room; however, I also don’t believe that this would have been a realistic goal for this year anyway.
This track is going to be very popular and profitable for Burton Group – it is a great team, producing great content. I look forward to seeing how it evolves & matures in the next year.
3. Lightning Rounds
(Lightning rounds are a series of extremely short on-stage spots given to vendors who have product announcements to make: 4 minutes & 4 slides, if I recall correctly)
The lightning rounds started in 2008 and were expanded this year. I believe they were very well received, in fact I heard people say that they were the best content of the day. I hope Burton Group thinks long and hard about what that means. For a very long time, ‘vendor’ has been a dirty word at Catalyst – with the result that attendees can only find out about products through the sanitized views of the analysts or the drunken haze of the hospitality suites. Granted, the analysts are smart and make great points, but – the danger is that the whole experience becomes homogenized, and no matter how great the quality is, homogeneity is boring. Looking at the neat pastel-colored items on the agenda this year, that’s all I could think. Oh, yet another customer use case. Oh, a panel. All fitting into a certain template.
The lightning rounds were refreshingly template-free, but more importantly, they let the attendees make a direct connection with the vendors. Some vendors did not use their time wisely, some did, but no matter what, the attendee could be the direct judge, and in the worst case the suffering was short. I’d like to see more of that, and I think it benefits everyone, assuming the goal is to create a thriving identity ecosystem.
4. Where Are “The Regulars”?
My recollection of the early part of the 2000s was that there was a set of non-Burton people who could always be counted on to further the discussion. Burton analysts provided the meal, but ‘the regulars’ provided the spice, both in the blogosphere and on stage. I haven’t seen very many recurring spots given to regular non-Burton speakers anymore, and I think that’s a shame. I’m not sure if it is because these people have different jobs and focuses, because the space is simply more commoditized and the characters have moved on to more interesting new problems, or because Burton has abandoned the policy – but I think the conference is the poorer for it. I’d like to see Burton take a chance and try to cultivate a new breed of thought leaders, agitators, and characters in this space, who can grow with the technology and help attendees gain multiple and growing perspectives over time, rather than only hearing from yet another different customer who took on and solved one task one time, in one context, and who you will never hear from again.
Why are the regulars important? Because they represent a growing trusted relationship that engages people. We need those trusted standouts who can transcend vendor allegiances, who can tell the truth not only from a neutral standpoint but also sometimes from a decidedly non-neutral standpoint. We need people who can bridge gaps and serve as public touchstones for the topics of the day.
I have a list of people I think would excel at this, but it would be much more interesting to see who Burton Attendees would nominate for the job.
By the way, Frank (shown here) really enjoyed the conference. Especially the hospitality suites with the icy martini bars… if you were at Catalyst you have probably already met Frank, otherwise you’ll be seeing more of him as I travel around.
Tuesday, July 28th, 2009
As of very recently, I have had the pleasure of working on contract for Ping Identity – and I have been dying for today, because I can finally talk about what the combination of PingConnect and Google can accomplish.
Traditionally, the ability to integrate a disparate set of cloud applications for a userbase was predicated on the non-trivial task of first creating a Start of Authority. As a bare minimum, you had to (a) create an authoritative user repository and (b) enable some kind of service to perform an initial authentication and leverage the resulting session to facilitate federation to various parts of the cloud. After that, you still had to figure out who could consume what you had worked so hard to be able to establish.
Now, you can make Google your Start of Authority, and instantly get to a laundry list of 60 applications with PingConnect. All without a Windows domain, a WAM server, or a federation server, and best of all, by utilizing an existing repository that is likely to be maintained regularly. AND, there is actually useful stuff to get to. This may not sound like a big deal to the companies who all have Windows domains anyway, but I believe that this could push back the need for a growing small business to get a Windows domain quite significantly. To me, the start of authority problem was a massive barrier to adoption for federation, and that barrier has been obliterated, not just on the cost front but on the effort front too. They say it takes a village? Well now we actually have one worth hanging out in.
Monday, July 20th, 2009
In researching a few products for a client, I came across an e-book on Managing Linux & UNIX Servers by Dustin Puryear. I managed to get access to a chapter without registering, and I liked what I saw so much that I had to have the whole book.
The thing that is remarkable about this book to me, is that it is NOT a book about technology, commands, program execution or coding. It is a book about what to get done and why. There are so few of these kinds of books – the ones that assume that once you have a comprehensive plan for getting things done, finding out how is the easy part. The books that get that the mapping works better from the top down than the bottom up: all the man pages in the world will not help you if you don’t have the context to know which of them you should be reading, and what the end result should be when you apply that knowledge. It is the guidance that makes the difference.
I very badly want a book like this for information card Relying Parties, specifically the PKI functionality of an RP. I have work to do on my RP: right now I know I’m missing several critical checks to ensure integrity and non-repudiation for the messages I’m accepting and trusting. But how do I know that I have covered all the bases? I have this list of interoperability issues. I have a set of API calls into security libraries like xmlseclib and openssl that could possibly solve my issues. What I do not have is guidance. I feel like I’m assembling an entertainment unit from IKEA, and I have detailed engineering information on every screw and every panel in the entire IKEA inventory: thousands of weights, heights, screw thread pitches, you name it. While I technically have access to everything I could possibly need to assemble my entertainment unit, it is left up to me to figure out which and how many of the inventory items I need, how they fit together, and in what order they must be assembled.
I suppose what I’m saying is we need to step above RTFM (Read the Fsking Manual) to KWFMTR (Know Which Fsking Manuals to Read).
(photo credit: http://www.flickr.com/photos/jonk/33283987/)
Monday, May 4th, 2009
I’ve been listening and watching lately, and there are some interesting independent things happening that I expect could knit into a very entertaining next 3 quarters. Something is telling me to swing away; so here goes.
Identity Management Tool Hiatus
The Sun/Oracle takeover has everyone aflutter over which tools will stay and which will go, and what the resulting stuff will look like. I think the interesting thing is that no matter what happens, you can pretty well guarantee that while Oracle sorts out what to keep and what to shelve, both Oracle Identity Manager and Sun Identity Manager will come to a developmental standstill. Coincidentally, this matches the Microsoft delay of release to manufacture of MIIS/ILM/FIM.
Even before the announcements above, all was very quiet on the home front for IdM. It seems obvious to me that all the big stack vendors have scurried off into their war rooms and are frantically trying to figure out how to set up their stacks to transparently support the rollout of cloud offerings. This means there is probably an architectural pause going on, as everyone tries to get from theoretical to concrete with their sanity and business plans intact.
Immediate Status Quo Interruption
Meanwhile, back in the real world, cloud mania is causing every Tom, Dick, and Harry who runs a software shop to ask themselves whether they could offer their product as a service. While the think tanks are pondering the cloud as a big fat integrated platform offering, a whole new generation of application vendors are simply putting their software online as services, any which way they can.
Short Cuts and Regretful Choices
The services out there now have not had the benefit of any kind of cloud philosophy. Applications are offering the usual set of poor choices for access and user management, doing the bare minimum so that they can focus on their “core” service. Lured by attractive cost and immediate gratification, Enterprises won’t see the risk, and won’t think to do two critical things: track beyond the departmental level what services are engaged, and set policies around minimum security requirements.
Stir it all together and…
So where do all these little tidbits take me when I connect the dots? I see a big issue looming on the horizon: a proliferation of untracked administrative web interfaces on the open internet, protected by unencrypted and buggy login forms which are open for anyone to probe. Even in cases where the login process itself is reasonable, Enterprise assets are at the mercy of the quality of an admin password. Ask Twitter: it’s a big problem. Crack one admin password in a poorly-secured application, and you may gain instant access to many other better-secured services – unless of course you really believe administrators will use a different password for each of their multiple services.
With the advent of these kinds of issues, provisioning could transition from being a back-room necessity with minimal business impact and no real SLA requirements, to being an activity that incurs serious risk for the organization. Enterprises will realize that they need to do one of two things: add an extra physical layer of security to each and every administration console, or pull those consoles off of the internet altogether, opting instead for an automated API call that can be locked down six ways from Sunday. You had better believe that application vendors will go along for the ride; submitting to one of these choices is a lot better than having Enterprises simply abandon services and return to intranet solutions.
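To make the API alternative concrete, here is a minimal sketch of what “locked down” could mean in practice: a provisioning call protected by an HMAC signature and a timestamp instead of a human password typed into a web form. This is my own illustration, not any vendor’s actual protocol; the secret, paths, and parameters are all hypothetical.

```python
import hashlib
import hmac
import time

# Hypothetical shared secret, provisioned out of band -- never a human password.
API_SECRET = b"k3y-material-from-secure-provisioning"

def sign_request(method: str, path: str, body: str, timestamp: int) -> str:
    """Sign a provisioning API call with HMAC-SHA256. Unlike an admin
    password submitted through a login form, the secret never crosses
    the wire, and the timestamp limits replay."""
    message = f"{method}\n{path}\n{body}\n{timestamp}".encode()
    return hmac.new(API_SECRET, message, hashlib.sha256).hexdigest()

def verify_request(method, path, body, timestamp, signature, max_skew=300):
    """Server side: reject stale requests, then recompute the signature
    and compare in constant time."""
    if abs(time.time() - timestamp) > max_skew:
        return False
    expected = sign_request(method, path, body, timestamp)
    return hmac.compare_digest(expected, signature)

ts = int(time.time())
sig = sign_request("POST", "/api/users", '{"user": "jdoe"}', ts)
print(verify_request("POST", "/api/users", '{"user": "jdoe"}', ts, sig))  # True
```

Any tampering with the method, path, body, or timestamp invalidates the signature, which is exactly the property a browser login form cannot give you.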
The big Identity players do not have the agility to respond properly to these kinds of pain points; but the little guys do. I think that a few small agile companies are going to swoop in and provide consolidation services for administration console interfaces in the cloud. Others will create Identity Provider services and products that allow the Enterprise to distribute 2-factor authentication tokens for use at multiple sites on the internet.
Somebody is about to steal home. Who will it be? Come to my Glue Talk and we can debate in person…