Real Life Personal Privacy Policy

I’m sitting in the Data Sharing Summit after a conversation about what can go wrong with data portability, all full of wonderment and questions — I figure I’ll blog my heart out while I can still embrace my current simplistic view of this area :)

I feel a huge sense of dissatisfaction when I listen to application developers talking about privacy. They talk about how a given person can create a view of themselves that can be consumed by an application – but the vocabulary they use reminds me of assembly programming. Of course, the folks who write the specs and the folks who implement those specs must understand this level of granularity – but can’t there be something more palatable put in front of the users?

Every person who interacts with another makes a personal risk assessment about the action they are about to take. At the very beginning, all you can really do is look at the superficial things people advertise about themselves and interpret them within the context of the current community. In real life, this means that striking up a conversation about heavy metal with a person wearing a Metallica t-shirt is probably not risky in most contexts. In the same way, you might confidently drop a literary reference in a conversation with a person who has a copy of ‘The Master & Margarita’ in his hand.

This is roughly analogous to online constructs like interest groups within social networks: they give fellow users a chance to make initial guesses about the type of person they are dealing with. But I have to ask — why is it that we have nice, warm, fuzzy interfaces for users to express their preferences, affiliations, personal views and all sorts of context, so that other people can synthesize a gestalt of a person and make a risk assessment, yet the application can do no such thing?

What about allowing a user to choose a set of simple, private parameters that represent a very coarse-grained view of how that user might wish to be treated by the application? If I tell LinkedIn that I want to be treated like a quiet, conservative, privacy-concerned person who keeps to themselves, I think LinkedIn can guess how I would feel about my data being exported. If I wanted LinkedIn to treat me slightly less stereotypically in some circumstances, I should be able to dive into the assembly language and tweak things – but I’ll bet most people would be fine with broad strokes as a starting point.
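
To make that concrete, here is a rough sketch (the persona names, setting names, and defaults are all invented, not anything LinkedIn actually exposes) of how a coarse persona might expand into fine-grained defaults that the user can still override one by one:

```python
# Hypothetical sketch: a coarse persona expands into fine-grained defaults,
# which the user can still override individually ("the assembly language").
# Persona names, setting names, and defaults are all invented here.
PERSONA_DEFAULTS = {
    "quiet_and_private": {
        "export_contacts": False,
        "public_profile": False,
        "share_activity_with_partners": False,
    },
    "outgoing_connector": {
        "export_contacts": True,
        "public_profile": True,
        "share_activity_with_partners": False,
    },
}

def effective_setting(persona, setting, overrides=None):
    """Start from the persona's broad-stroke default, then apply any
    explicit per-setting override the user has made."""
    value = PERSONA_DEFAULTS[persona][setting]
    if overrides and setting in overrides:
        value = overrides[setting]
    return value

# A quiet, privacy-concerned persona gets conservative defaults...
print(effective_setting("quiet_and_private", "export_contacts"))  # False
# ...but one setting can be tweaked without touching the rest.
print(effective_setting("quiet_and_private", "export_contacts",
                        overrides={"export_contacts": True}))     # True
```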

Alternatively, perhaps I tell Facebook that I’m an extrovert with a great sense of humor who loves to connect but who is concerned about how photos with me as the subject are published to the world. Again, I think there are interpretations that can be made with respect to the boundaries that this user wishes to set.

Would this work perfectly every time? Certainly not. But neither does the real-world model. At the very least, it might mitigate the fact that the social graph, with respect to data portability and privacy, is really an interconnected set of multi-dimensional matrices – the mother of all provisioning problems: every person dealing with every attribute of every relationship within every community they are part of, and now also across many of the communities they are part of.

Here is what I envision. Imagine a very small number of attributes describing a person’s privacy tolerance, displayed as part of your account settings. My guess is that if you see a descriptive word in your account that is the default but doesn’t describe you, you will go and change it (rather than just ignoring a wall of possible privacy settings that gives you no interpretation of the implications). To be more visual, you could set up an equalizer at the bottom of the page representing different ranges of tolerance for various uses of data – something users could set with one button click via a preset and then fine-tune if needed, as in the sketch below.
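
A minimal sketch of that equalizer idea – the dimension names, presets and 0–10 scale are all invented for illustration:

```python
# Hypothetical "privacy equalizer": a handful of tolerance sliders on a
# 0-10 scale, one-click presets, then per-slider fine-tuning. Dimension
# names, presets, and values are all invented for illustration.
DIMENSIONS = ["profile_visibility", "photo_tagging",
              "data_export", "activity_broadcast"]

PRESETS = {
    "keep_to_myself":   {"profile_visibility": 2, "photo_tagging": 1,
                         "data_export": 0, "activity_broadcast": 1},
    "social_butterfly": {"profile_visibility": 9, "photo_tagging": 7,
                         "data_export": 5, "activity_broadcast": 8},
}

class PrivacyEqualizer:
    def __init__(self):
        # Everything starts at a middling level until a preset is chosen.
        self.levels = {dim: 5 for dim in DIMENSIONS}

    def apply_preset(self, name):
        """One button click: set every slider from the preset."""
        self.levels.update(PRESETS[name])

    def fine_tune(self, dimension, level):
        """Nudge a single slider without disturbing the rest."""
        self.levels[dimension] = max(0, min(10, level))

eq = PrivacyEqualizer()
eq.apply_preset("keep_to_myself")
eq.fine_tune("photo_tagging", 0)   # the photos-of-me concern from above
print(eq.levels)
```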

Hm, I wonder what the privacy version of the “stadium” preset would be? :)

4 thoughts on “Real Life Personal Privacy Policy”

  1. I think you’re on to something here, P. In fact, Facebook offers a similar kind of interface that lets users imply aggregation preferences for their news feed. What that interface lacks is a ‘human view’ of what those equalization settings actually mean, and it takes a fair bit of back-and-forth tweaking to get it right.

    It’s one thing — and rather without consequence — to realize after a couple of weeks that you _didn’t_ actually like seeing status updates from users, but it could be harmful IRL if, for instance, you realized that the public at large was being made aware of your vacation plans, thus priming your property for a hostile intrusion.

    I’m not sure where the balance comes in. In fact, if you put on your black hat for a while you could probably come up with a way to abuse almost any kind of aggregate data. A competent programmer with ill intent could devise a way to milk a community for this information.

    What frustrates me in all this is that it can actually be exciting to look at some of the possibilities of aggregated personal data, even identity, but the systems and controls aren’t in place to handle the current adoption rate.

    It’s not that they don’t exist, at least in some form, but they certainly aren’t implied, controlled or regulated in any way. Don’t get me wrong…I am no fan of regulation, but my excitement for the possibilities dies quickly when it’s hard to understand the implications of sharing the personal data to begin with.

    Stadium, btw, would likely be an aggressive propagation setting that kept your mom up to date via text messages and RSS feeds on your level of intimacy with your spouse.

    Eww.

  2. Ok…here I go again.

    I cannot really comment on the technical side of data portability, or even much about identity management, but that won’t stop me from bringing up some of the issues that I feel are not being adequately addressed in the field.

    The issue I see with these systems, and their underlying basis, is that they never move beyond “first impressions”; there is no means in the system for a relationship to strengthen or weaken based on the interactions that have taken place, or to personalize the relationship that people or entities have. For example, on LinkedIn I have contacts whom I have known for virtually my entire professional life and whom I consider my closest friends, and there are also people I have met just once and added so that I don’t have to keep looking for their business card. If LinkedIn let me differentiate the “strength and/or quality” of a connection, that would be one step along an interesting path – for example, if LinkedIn kept track of my interactions with these people, perhaps via something like OpenID as I understand it, it could quantify and qualify the waxing, waning or transformation of a relationship, digital or otherwise.

    The fundamental thing here is that relationships must transform in order to grow; if they do not, the relationship withers, as there is no more information to be passed. The system you propose is one-way – I only get to show what I want to show – and it does not reflect the organic and personalized nature of a relationship. I may wear a Metallica shirt, but others may know I cry in the rain… (Actually, I do not own a Metallica shirt and it is always sunny in California.)
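
    As a rough illustration of that waxing and waning (the names and the decay half-life below are invented, not anything LinkedIn actually does), something like this sketch could let a connection’s strength rise with each interaction and fade with neglect:

    ```python
    import time

    # Hypothetical sketch of a connection strength that waxes and wanes:
    # each interaction bumps the score, and the score decays exponentially
    # while the connection sits idle. The half-life is an arbitrary guess.
    HALF_LIFE_DAYS = 90.0

    class Connection:
        def __init__(self, name):
            self.name = name
            self.strength = 0.0
            self.last_seen = time.time()

        def _decay(self, now):
            # Halve the strength for every HALF_LIFE_DAYS of silence.
            idle_days = (now - self.last_seen) / 86400.0
            self.strength *= 0.5 ** (idle_days / HALF_LIFE_DAYS)
            self.last_seen = now

        def record_interaction(self, weight=1.0, now=None):
            # A message, endorsement, or shared group bumps the score.
            now = time.time() if now is None else now
            self._decay(now)
            self.strength += weight

        def current_strength(self, now=None):
            now = time.time() if now is None else now
            self._decay(now)
            return self.strength

    # An old friend with steady contact stays strong; a business-card
    # contact with no follow-up quietly fades toward zero.
    ```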

    The most satisfying relationships I have had have always been ones that changed over time. I believe that reflects the human need to discover more about the world and ourselves.

    I don’t think I would spend much time “tweaking” an online identity, whether to show to a specific group of people or in general. I am too lazy – or maybe just not self-centered enough – to keep changing the face I want to show. Conversely, I am not sure that I want an algorithm to make those “judgments” for me. I do not know what the right answer is; I’m just saying I’m not sure I want a computer to decide who my friends are, if I have any, and I’m probably too lazy to do it myself.

    I always leave the EQ setting on “Rock Concert” myself… turned up to 11.

    D.

  3. I think this is good thinking, in generally the right direction. I am wondering… what is the relationship between the privacy ‘language’ and the ‘trust’ language? Are they two different things, or one and the same? I suspect it would work best as one thing, with anything or anyone I don’t trust sufficiently in a given context being directly denied access to my information in that context.

    Sorry for the late comment – I am just finally getting back into this space after 2 years off doing other stuff…
