Online Social Networks (OSNs) have revolutionised the way we communicate with each other. Nevertheless, it has become harder to keep our information private. Privacy policies in current OSNs are typically determined by the service provider: Facebook and Google decide what privacy means in their respective social networks.
Furthermore, their privacy models are flawed by design and thus do not adequately protect users’ privacy [1]: there is an inherent mismatch between the available privacy settings and the actual sharing intent of the user, and there is no effort to bridge this gap. So often, what seems intuitively private to us is not viewed that way by the system. This is particularly evident in the many cases where supposedly private information was leaked or otherwise used against the will of the person who shared it.
In modern OSNs, users are provided with tools that are (supposedly) designed to help them put “locks” on their information. Although this is indeed a step forward, and a much better situation than just a couple of years ago, the current approach to privacy in social networks is primarily list-based (e.g., you can access my photos because you are in my Family group). It is very restrictive and does not represent the complex reality that we as humans deal with on a day-to-day basis.
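To make the restriction concrete, here is a minimal sketch of a purely list-based check (the group names and function are hypothetical, not any actual OSN API): access depends only on group membership, and context plays no role.

```python
# Minimal sketch of a list-based privacy model (hypothetical names, not a real OSN API).
# Access is decided purely by group membership -- context plays no role.

groups = {
    "Family": {"mum", "dad", "sister"},
    "Work":   {"boss", "colleague"},
}

# Each resource is guarded by the list of groups allowed to see it.
photo_acl = {"holiday_photos": ["Family"]}

def can_view(viewer: str, resource: str) -> bool:
    """Return True if the viewer belongs to any group on the resource's list."""
    return any(viewer in groups[g] for g in photo_acl.get(resource, []))

print(can_view("sister", "holiday_photos"))  # True  -- she is in Family
print(can_view("boss", "holiday_photos"))    # False -- regardless of the situation
```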
Privacy is contextual: what is private in one setting can be completely fine to share in another context. This notion is at the heart of the Contextual Integrity (CI) theoretical framework [2]. CI defines privacy policies (norms) that govern information flows between users (actors) in a system. Norms define who gets to share what, with whom, under which conditions, and in what role capacity.
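As a rough illustration (the field names and structure below are my own simplification, not an implementation from the CI paper), a norm can be modelled as a context, a sender role, a receiver role, an information type, and a transmission principle that conditions the flow:

```python
# Illustrative sketch of a Contextual Integrity norm (simplified; names are my own).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Flow:
    context: str        # e.g. "healthcare", "friendship"
    sender_role: str    # role of the actor sharing the information
    receiver_role: str  # role of the actor receiving it
    info_type: str      # the attribute being shared, e.g. "medical record"

@dataclass
class Norm:
    context: str
    sender_role: str
    receiver_role: str
    info_type: str
    transmission_principle: Callable[[Flow], bool]  # extra condition on the flow

    def permits(self, flow: Flow) -> bool:
        return (flow.context == self.context
                and flow.sender_role == self.sender_role
                and flow.receiver_role == self.receiver_role
                and flow.info_type == self.info_type
                and self.transmission_principle(flow))

# "A patient may share a medical record with a doctor in a healthcare context."
norm = Norm("healthcare", "patient", "doctor", "medical record",
            transmission_principle=lambda f: True)  # e.g. consent assumed given

flow_ok  = Flow("healthcare", "patient", "doctor", "medical record")
flow_bad = Flow("friendship", "patient", "friend", "medical record")
print(norm.permits(flow_ok))   # True  -- matches the norm
print(norm.permits(flow_bad))  # False -- same information, different context
```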
Societies have different norms in different contexts. Moreover, the notion of privacy is rooted in a common consensus within the community on privacy norms, which are established over time. Real-life norms do not automatically translate to a digital environment: it took our society decades to establish a set of norms that we collectively follow, and we have not had nearly the same amount of time in the online world.
Capturing norms is not an easy process; our own perception of things can differ from what society collectively agrees on. Nevertheless, learning from the lessons of the real world, it is reasonable to believe that we can reach such a consensus in the cyber world as well. This can potentially happen faster as we develop techniques that rapidly evolve and adjust themselves to the collective (common) notion of privacy of those using them.
However, it would also be naive to think that all users can agree on a single set of norms. We can speculate that a minimal shared set of norms can be achieved, while additional subtleties will form clusters of norms that capture a particular view of the world matching that of some users. As the system evolves, users can be grouped into different factions of a “digital society” based on the norms they choose, as sketched below.
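As a toy illustration of that idea (the norm set, the user data, and the use of k-means are all assumptions of mine, not a proposal from the references), each user could be represented by which candidate norms they accept and then grouped into factions:

```python
# Toy sketch: cluster users into "factions" by the norms they accept.
# The norms, the data, and the choice of k-means are illustrative assumptions only.
import numpy as np
from sklearn.cluster import KMeans

norms = ["share photos with family", "share location with friends",
         "share posts with colleagues", "share health data with doctor"]

# One row per user; 1 means the user accepts that norm.
user_choices = np.array([
    [1, 1, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 1, 1],
    [0, 1, 1, 0],
    [1, 0, 0, 1],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(user_choices)
print(kmeans.labels_)  # faction assignment per user, e.g. [0 0 1 1 0]
```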
This is somewhat similar to the plot of the Divergent science-fiction trilogy, which describes a futuristic society where people have to choose their life faction; hence the name of the post. In that society, once you join a faction you embrace its norms: e.g., the Candor faction believes in always telling the truth.
References
[1]: Tierney, Matt, and Lakshminarayanan Subramanian. “Realizing privacy by definition in social networks.” Proceedings of the 5th Asia-Pacific Workshop on Systems (APSys). ACM, 2014.
[2]: Nissenbaum, Helen. “Privacy as contextual integrity.” Washington Law Review 79.1 (2004).