Facebook Recants (Again)

Amid increasing criticism regarding modifications to its terms of service, Facebook has withdrawn the changes it implemented on February 4, 2009. Mark Zuckerberg posted a message explaining this move, and FB also released a set of “Facebook Bill of Rights and Responsibilities” to try to calm people’s nerves.

A couple of comments:

:: Facebook’s short memory ::

Facebook continues to amaze me in its utter failure to learn from its past mistakes and do a better job of analyzing the social/ethical implications of its decisions, as well as communicating with its users. In 2006, Facebook launched the mini-feed, ignoring how the new “feature” impacted the norms of personal information flows — that the contextual integrity of the flow of information within Facebook was disrupted. When they tried to address the concern, the expanded privacy settings were defaulted to the maximum sharing of personal information — hardly a stance revealing a commitment to letting users control their information. We also must not forget that the mini-feed was automatically activated for every user without prior notice.

A year later, Facebook repeated the same mistakes. In pursuit of advertising revenue, Facebook released Facebook Ads/Beacon, a complex system whereby Facebook cookies are retrieved at third-party e-commerce sites, users are given 20 seconds to opt out (the default is to participate, and the screen disappears with the option still checked if no action is taken), and users’ likenesses are appropriated to shill for products. Here, Facebook again disrupted the contextual integrity of personal information flows, and made it difficult for users to opt out of a potentially privacy-threatening situation. Only after significant complaints, including a campaign by MoveOn and threats of an FTC investigation, did Facebook modify how Beacon worked in order to better protect user privacy.

And now this month, Facebook made a change to its Terms of Service that (if only in perception) impacted the control users have over the information they post to the website. The blog post announcing the changes made no mention of the specific change extending FB’s license beyond the point when a user removes content or closes her account. Amid pressure, Facebook tried to defend the change, but ultimately backtracked.

Perhaps, someday, Facebook will put together a team of policy/privacy analysts (like others are trying to do) who might be able to identify these concerns before they are thrust on its 175 million users.

:: Is Facebook’s “Bill of Rights” the next “Don’t be Evil”? ::

Facebook is laying out a new “governing document” for its social networking ecosystem: the “Facebook Bill of Rights and Responsibilities”. At this point (it seems to be an evolving document), the declaration states the following:

1. You own your information. Facebook does not. This includes your photos and all other content.

2. Facebook doesn’t claim rights to any of your photos or other content. We need a license in order to help you share information with your friends, but we don’t claim to own your information.

3. We won’t use the information you share on Facebook for anything you haven’t asked us to. We realize our current terms are too broad here and they make it seem like we might share information in ways you don’t want, but this isn’t what we’re doing.

4. We will not share your information with anyone if you deactivate your account. If you’ve already sent a friend a message, they’ll still have that message. However, when you deactivate your account, all of your photos and other content are removed.

5. We apologize for the confusion around these issues. We never intended to claim ownership over people’s content even though that’s what it seems like to many people. This was a mistake and we apologize for the confusion.

Google has taken a lot of hits over privacy, treatment of intellectual property, censorship, and so on. In many ways, Google opened itself up for attack given its informal corporate motto, “Don’t be evil”. Setting such a high benchmark invariably leads to criticism. Now that Facebook has laid out its promises in the form of a “bill of rights”, will it come back to haunt them?

Consider their declaration that “We won’t use the information you share on Facebook for anything you haven’t asked us to.” Well, I never asked to be opted into an automatic News Feed, nor did I ask to be a part of Beacon, yet Facebook used my data for these purposes without my informed consent. Will they do it again? Will a more robust behavioral targeting system be implemented? Will I have asked Facebook to use my profile data for that purpose?

It will be interesting to see how having an explicit “Bill of Rights” for Facebook will impact both the trust and criticism of the social networking giant.

1 comment

  1. I never tried, but… isn’t it possible to remove most information from the News Feed?
    What type of interaction, not already listed here:
    https://register.facebook.com/privacy/?view=feeds
    would you want to have, but exclude from the Feed?

    Although I agree with you (well, I prefer Fred Stutzman’s take) that Facebook cannot claim to need a license merely for sharing info, since they do far more analysis than that, I disagree with both the assumption that Facebook has been repeatedly dishonest in taking that road and the idea that a moral charter will be turned into a mockery against them.

    Zuckerberg has always recognized his users’ concerns: if anything, he is the first guy to become a billionaire both by accident and by always acknowledging his mistakes the day after (and begging for a week of mercy and intense redevelopment). When his fellow billionaires unapologetically drive the economy into the ground, that doesn’t sound too bad. I’m not the biggest fan of flip-flops in the boardroom, but his relatively low-key approach (yes, he is an obnoxious Ivy frat boy, but I mean by CEO standards) allows him to suggest counter-intuitive change. In spite of the massive uproar, he promised the News Feed would end up being an important feature, and he was right both in saying that the issues came from bad privacy settings (on Facebook’s side) and in promising to kill the program quickly if it failed to please. That promise to reverse course isn’t empty: he effectively killed the (opt-out) threat from Beacon, he did reverse the ToS, and Facebook did join OpenID. They still have to implement OAuth, but so far they are trying hard enough to be given the week they are asking for.

    Employees at Google are not victims of the moral burden of their firm’s motto: it’s an objective, and a reason to hold their heads high. When they had difficult decisions to make, that goal helped them sort out the issues.* With critics as loud as Mark Canter, I can’t imagine someone sneaking in too much actual crap; nor do I think the praise from David Recordon and Chris Messina can be undeserved.

    * [I know you are going to mention censorship in China, so let me paste a disclaimer that too many critics never seem to consider: I believe they did the right thing. Annoying users by forcing them to see something politically painful would have been counter-productive. The closest comparison I can think of would be featuring a copy of Mein Kampf on every request from German users containing words from that book; although teaching that tragedy is necessary, making it an offensive burden is not helpful. It would do nothing but encourage hatred from a dangerous fringe, and the Chinese nationalists are no less threatening than most hatemongers. On the other side, Google knows how many users needed their presence in China to track the temporary proxies needed to actually circumvent the Great Firewall. They were in the unique position to measure how many requests they had for political subjects and for technical help. They made an informed decision. De facto, they enabled both by making the latter easier, and made it possible to circumvent ISPs’ control. So I believe that they did the right thing, and that the critics are just the usual ethnocentrics.]
