Real Life Personal Privacy Policy

I’m sitting at the Data Sharing Summit after a conversation about what can go wrong with data portability, full of wonderment and questions — I figure I’ll blog my heart out while I can still embrace my current simplistic view of this area :)

I feel a huge sense of dissatisfaction when I listen to application developers talking about privacy. They talk about how a given person can create a view of themselves that can be consumed by an application – but the vocabulary they use reminds me of assembly programming. Of course, the folks who write the specs and the folks who implement those specs must understand this level of granularity – but can’t there be something more palatable put in front of the users?

Every person who interacts with another makes a personal risk assessment about the action they are about to take. At the very beginning, all you can really do is look at the very superficial things that people advertise about themselves, and interpret those things within the context of the current community. In real life, this means that initiating a conversation on heavy metal with a person wearing a Metallica t-shirt is probably not risky within most contexts. In the same way, you might choose to confidently drop a literary reference in a conversation with a person who has a copy of ‘The Master & Margarita’ in his hand.

This is theoretically analogous to online signals like interest groups within social networks: it gives fellow users a chance to make initial guesses about the kind of person they are dealing with. But I have to ask — why is it that we have nice warm fuzzy interfaces for users to express their preferences, affiliations, personal views and all sorts of context, such that other people can synthesize a gestalt of a person and make a risk assessment, but the application can do no such thing?

What about allowing a user to choose a set of simple, private parameters that represent a very coarse-grained view of how that user might wish to be treated by the application? If I tell LinkedIn that I want to be treated like a quiet, conservative, privacy-conscious person who keeps to themselves, I think that LinkedIn can guess how I would feel about my data being exported. If I wanted LinkedIn to treat me slightly less stereotypically in some circumstances, I should be able to dive into the assembly language and tweak things – but I’ll bet most people would be fine with broad strokes as a starting point.
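To make the idea concrete, here is a minimal sketch of what "broad strokes plus tweaks" might look like to a developer. The persona names and individual settings are entirely made up for illustration; the point is just that a coarse label expands into fine-grained defaults, and the "assembly language" overrides win:

```python
# Hypothetical persona labels mapped to fine-grained defaults.
# None of these names come from a real service; they are illustrative only.
PERSONAS = {
    "quiet-conservative": {
        "export_profile_data": False,
        "show_in_search": False,
        "share_with_partners": False,
    },
    "open-networker": {
        "export_profile_data": True,
        "show_in_search": True,
        "share_with_partners": True,
    },
}

def settings_for(persona, overrides=None):
    """Start from the persona's broad strokes, then apply any
    fine-grained tweaks the user dove in and made by hand."""
    settings = dict(PERSONAS[persona])   # copy the coarse defaults
    settings.update(overrides or {})     # individual tweaks override the persona
    return settings
```

So a quiet, conservative user who nevertheless wants to appear in search would get sensible defaults everywhere else and only change the one slider that matters to them.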

Alternatively, perhaps I tell Facebook that I’m an extrovert with a great sense of humor who loves to connect but who is concerned about how photos with me as the subject are published to the world. Again, I think there are interpretations that can be made with respect to the boundaries that this user wishes to set.

Would this work perfectly every time? Certainly not. But neither does the real-world model. At least this might be a way to mitigate the fact that the social graph, with respect to data portability and privacy, is in fact an interconnected set of multi-dimensional matrices representing the mother of all provisioning problems – every person dealing with every attribute of every relationship within every community they are part of, and now also between many of the communities they belong to.

Here is what I envision. Imagine a very small number of possible attributes describing a person’s privacy tolerance, displayed as part of your account settings. My guess is that if you see a descriptive word in your account that is the default but doesn’t describe you, you will go and change it (rather than ignoring a wall of possible privacy settings that gives you no interpretation of their implications). To be more visual, perhaps you could set up an equalizer at the bottom of the page, its bands representing different ranges of tolerance for various uses of data, that users could set with one click by choosing a preset and then fine-tune if needed.
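The equalizer metaphor can be sketched too. Assume (hypothetically) that each band is a data-use category with a 0–10 tolerance level; a preset sets every band in one click, and the user can then nudge individual sliders. The band names and preset values below are invented for illustration:

```python
# Invented data-use categories ("bands") and presets for the sketch.
BANDS = ("profile_visibility", "photo_tagging", "data_export", "ad_targeting")

PRESETS = {
    "private": {band: 1 for band in BANDS},
    "social": {"profile_visibility": 8, "photo_tagging": 6,
               "data_export": 4, "ad_targeting": 3},
}

class PrivacyEqualizer:
    def __init__(self, preset="private"):
        # One click: the preset fills in every slider at once.
        self.levels = dict(PRESETS[preset])

    def tune(self, band, level):
        """Fine-tune a single slider after choosing a preset."""
        if band not in self.levels:
            raise KeyError("unknown band: %s" % band)
        self.levels[band] = max(0, min(10, level))  # clamp to the 0-10 range
```

An extrovert who worries about photos could pick the "social" preset and then drag just the photo-tagging slider down, rather than auditing every setting individually.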

Hm, I wonder what the privacy version of the “stadium” preset would be? :)