If I were still in New York City, I’d be excited to attend a debate tomorrow on the proposition that “Google violates its ‘don’t be evil’ motto.” Debaters include Jeff Jarvis, Esther Dyson and Jim Harper (against the motion), and Harry Lewis, Randall Picker, and Siva Vaidhyanathan (for the motion).
I think most would agree that Google set itself up for such criticism and debate by selecting a simultaneously provocative and nebulous motto. And I suspect any such debate isn’t about a binary “evil” or “not-evil” distinction, but more about placing the search giant on a continuum of corporate social responsibility, with “complete altruism” on one side, and “utterly evil” on the other.
When one considers its complicity with Chinese censorship, its reluctance to include a direct link to its privacy policy on its homepage, its resistance to limiting the duration of its data retention or even to using a cookie with an expiration date, its continued opposition to shareholder anti-censorship and human rights proposals, its lack of foresight on how to protect privacy in public with Street View, and its general disregard for the need for its computer scientists and engineers to place values at the forefront of their design decisions, I’m forced to take the side of Lewis/Picker/Vaidhyanathan, arguing that Google leans toward the evil side of the continuum.
Has Google done good for the world? Certainly. Can it — no, should it — do more? Absolutely.
UPDATE: The Times covers the debate here, and a transcript is here. Podcast is coming soon.
One of the concerns I’ve always had about the “don’t be evil” motto is that it reflects a very naive conception of human behavior and the various forces that influence it. As I understand it, the motto was a reaction to the perception (especially among the technorati) of Microsoft at the time as an “evil empire.” But there was no recognition of any underlying reasons why Microsoft might have become “evil,” just aspersions cast on the characters of its executives. Microsoft was evil because the people who ran it were evil, so Google could be good if the people who ran it were good.
If the computer scientists who founded Google had ever been exposed to any social science, they might have considered that what led to Microsoft’s “evil” behavior was not character flaws, but the logic of late capitalism. Then they might have thought about how to design an organization that could be “good” not by virtue of its executives’ virtue, but in spite of its executives’ greed. That is what the founders of the United States did: they assumed the worst of their leaders and tried to design a system that would continue to function well in spite of their potential for evil.
The naivete of Google’s founders continues to haunt us all, as Google continues to put itself in positions where it has the potential to do great harm and asks us to believe that because its leaders are virtuous we have nothing to fear. If they had a better understanding of human behavior they would understand that the best way to prevent “evil” is to avoid creating situations where doing evil is possible.
Well said, Ryan.
My side won!
The transcript and video will be up on Friday. I will let you know how to post.
NYU STILL has my blog down. Bastards.