Much of my research strives to extend my broader social, political, and ethical explorations of information and communication technologies beyond the halls of academia and into “real-world” design contexts, pragmatically engaging with designers and advocating for the value-conscious design of new media and information technologies. The choices engineers and designers make in shaping technical systems are guided by their understandings of the particular values at play. A goal of my research, then, is to increase awareness within technical design communities of the ethical and value implications of technologies, and to provide designers with the conceptual tools necessary to foster critical reflection on the hidden assumptions, ideologies, and values underlying their design decisions.
Novel pragmatic frameworks have recently emerged to ensure that particular attention to values becomes an integral part of the conception, design, and development of technological artifacts and systems. These value-conscious design frameworks — known variously as Design for Values (as conceived by L. Jean Camp), Values at Play (Helen Nissenbaum & Mary Flanagan), and Value Sensitive Design (Batya Friedman) — seek to broaden the criteria for judging the quality of technological systems to include the advancement of ethical and human values, and to proactively influence the design of technologies to account for such values during the conception and design process.
I strive to contribute to these attempts to bring value considerations systematically to bear on the design and development of new media and communication technologies. Examples include my dissertation research on the value implications of the quest for the perfect search engine, my work on the privacy and surveillance aspects of vehicle safety communication systems, and more recent explorations into Web 2.0 information infrastructures.
Clearly, there is no shortage of potential applications of value-conscious design, as there continues to be a glaring need among major ICT providers to consider the value and ethical implications of their products before rolling them out to the masses. Consider recent examples from two of the current darling companies — Facebook and Google:
In September 2006, Facebook launched the Mini-Feed, ignoring how the new “feature” impacted the norms of personal information flows: the contextual integrity of information flows within Facebook was disrupted. When Facebook tried to address the concern, the expanded privacy settings were defaulted to the maximum sharing of personal information, hardly a stance revealing a commitment to protecting user privacy. Nor should we forget that the Mini-Feed was automatically activated for every user. This was an obvious case where user values should have been considered in the initial design stages of the feature: a value-conscious design approach would have recognized that many users would view such a feature as a privacy violation, and its default settings and other design elements would have been adjusted accordingly.
Now, a year later, Facebook has repeated the same mistakes, again failing to engage in the value-conscious design of new “features” for its popular service. In pursuit of advertising revenue, Facebook released Facebook Ads, including Beacon, a complex system whereby Facebook cookies are retrieved at third-party e-commerce sites, users are given 20 seconds to opt out (the default is to participate, and the notice disappears with the option still checked if no action is taken), and users’ likenesses are appropriated to shill for products. With all this, Facebook has again disrupted the contextual integrity of personal information flows and made it difficult for users to opt out of a potentially privacy-threatening situation. Only after significant complaints, including a campaign by MoveOn and threats of an FTC investigation, did Facebook modify how Beacon works in order to better protect user privacy. Again, a value-conscious design approach would likely have prompted the release of this more privacy-respecting variation in the first place. Instead, Facebook is left trying to protect values on the back end.
About six months after Microsoft launched its Windows Live Local Virtual Earth service, providing street-level images of San Francisco and Seattle, Google jumped into the fray, offering its own “Street View” enhancement for Google Maps. You can drive or walk around a map and view the streets and storefronts…and the people. This detailed level of mapping raises significant concerns about one’s privacy in public, given the prevalence of visible and identifiable faces and license plates captured by Google’s fleet of camera-toting cars trolling our streets, concerns I pointed out at the time.
To remove yourself from the database of images, Google initially required submission of your legal name, e-mail address, a copy of your driver’s license or other government ID, and proof of your association with the address in question (letterhead, a utility bill, etc.). This, of course, created even more privacy concerns, and Google eventually backed down from these requirements, asking instead for only your name and the image location. Later, Google loosened the requirements further, allowing anyone to request the blurring of a face or license plate, even if the identifiable image isn’t of you or your vehicle. Google then announced that certain versions of Street View will have all faces and license plates automatically blurred (where local laws require it, as in Canada). Now, six months later, reports indicate that Google might be considering taking the same steps with the U.S. version of Street View.
These are all positive moves by Google, but they are all reactionary. They reveal Google’s adeptness at responding to criticism over user privacy, and little initiative in proactively protecting that privacy in products of this kind. The very fact that Google is only “considering” blurring identifiable information in the U.S. version reveals a lack of awareness of the value-laden implications of its design decisions.
Dear Facebook, dear Google:
Please engage in value-conscious design. Please recognize that you can make value-based decisions during the design phase of your products, rather than relying on your legal, PR, and engineering teams to clean up the mess after they are deployed. We in academia can help you, and we want to help you. We will guide you through the theory of privacy as contextual integrity and the pragmatics of designing for values, and help reveal the links between design, corporate social responsibility, and action ethics.
We like your products; we use your products; we want your products to succeed. We also want them to respect and protect human values. Together, we can accomplish this.