Google on Wi-Fi Privacy Invasions: “No Harm, No Foul”

Recently we learned that Google’s Street View vehicles gathered people’s private communications on their home WiFi networks as they drove by snapping photos. Initially, Google denied it was collecting or storing any payload data, but later admitted that it had, in fact, collected private information that it should not have: information clearly beyond what any reasonable person would expect a street mapping service to collect.

Google’s explanation was that this privacy invasion was a mistake: some code inadvertently made its way into the Street View vehicles’ software. While I trust Google that this was a mistake, and that the data wasn’t used for anything, the incident reveals a significant lack of control over what its fleet of vehicles is doing — and what those vehicles are capable of doing without Google apparently knowing. It is also yet another example of how Google failed to recognize and address the privacy issues raised by deploying an army of vehicles to harvest information about the physical (and now wireless) terrain.

But not to be left out of the recent spate of dotcom executives making ignorant statements about online privacy, Eric Schmidt, Google’s CEO, had this response when asked about possible EU charges against Google for the WiFi privacy invasion:

“No harm, no foul.”

“Who was harmed? Name the person.”

And, he said, it was “highly unlikely” that any of the collected information was “useful” and there appeared to “have been no use of that data.”

So, once again, we have the person in charge of a dominant Web company, a company in control of huge amounts of personal data about millions of users, defining privacy concerns solely in terms of the potential (or real) harm that could occur.

Schmidt’s harm-based conception of privacy supposes that so long as the data can be kept from being used to cause harm, the privacy of the subjects is maintained. Since no one was hurt, Schmidt seems to say, what’s the big fuss?

Such a position ignores the broader dignity-based theory of privacy. On this view, no tangible harm needs to occur for there to be legitimate concerns over the privacy of one’s personal information. Rather, merely having one’s information stripped from its intended sphere (a personal WiFi network) and amassed by passing vehicles operated by the world’s largest search engine is an affront to the subjects’ human dignity and their ability to control the flow of their personal information.

But Schmidt doesn’t see things this way. Why? My theory is that it’s because he’s an engineer, not an ethicist. To an engineer, if the data has no obvious use-value, and no one was hurt, then all is good with the world. To an ethicist, harvesting personal information from the sphere of one’s personal WiFi network (whether the network was open or closed) is a privacy violation.

Until the computer scientists and engineers running the companies that possess so much of our personal information start to understand online privacy from contextual and dignity-based frameworks, our privacy remains in peril.



  1. I am not trying to defend Google, but I have to ask, how much privacy should be expected for people running an unsecured wireless network? You describe this information as being “stripped from the intended sphere,” but wouldn’t it be fair to suggest this data was being broadcast for anyone willing to tune in? Doesn’t a personal Wi-Fi network become a public one if its administrator decides to not enable security?

  2. @Stan: While technically, data in an unprotected network is left exposed to potential capture, my point with the notion of an “intended sphere” is that for the vast majority of people, even while their network is open, they have the expectation that (a) only people within their home will use the network, (b) the average person who happens to notice the open network won’t have the tools (or the interest) to capture any of the data, and (c) they certainly don’t expect a large company like Google to drive by and systematically capture and store any data from their open network.

    It is not as if these people posted their Web history on a sign outside their house in plain view for any passerby. They had a practical expectation that the data would only be available as intended: for personal use in their home. Google should respect that.

  3. @Stan: Put another way, this, again, is the difference between how an engineer and an ethicist would view the scenario. The engineer sees unprotected data that is de facto public and for the taking. An ethicist sees unprotected data that either (a) the user might not know is unprotected, and/or (b) the user doesn’t intend to make it available for wholesale collection and storage.

  4. “To an engineer, if the data has no obvious use-value, and no one was hurt, then all is good with the world. To an ethicist, harvesting personal information from the spheres of one’s personal WiFi network (whether the network was open or closed) is a privacy violation.”

    The engineer’s use-valuation has to bear on the ethicist’s question of whether the harvested signals amounted to personal information in the first instance. Agre’s representative model of information from “Surveillance and Capture” tells it best. Information is not a raw demonstration of its own signal-carrying capacity: information is about things. If fifth-of-a-second drags of WiFi channel throughput lack enough structure to be “strung together to form actual sensible stretches of activity”, Agre would argue, as would I, that nothing meaningful was compromised.

    I put no stock in Schmidt’s assurances, though, since his vantage point would be that of a CEO doing damage control.

  5. @Jason: Thanks for the thoughtful comment. While I understand Agre’s point regarding the nature of information, I’d like to make my own determination of whether the data is “useful”, rather than rely on Schmidt’s opinion. Further, as recent cases have shown how seemingly benign pieces of data can be used to de-anonymize other datasets, it’s hard to maintain with certainty that data — any data — is completely useless.
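    To make the de-anonymization point concrete, here is a toy sketch of a linkage attack. All names, ZIP codes, and records below are invented for illustration; the idea follows the well-known technique of joining an “anonymized” dataset to a public one on shared quasi-identifiers.

    ```python
    # Two datasets that each look harmless on their own can
    # re-identify a person when joined on shared quasi-identifiers
    # (here: ZIP code, birth year, and sex). All data is made up.

    anonymized_health = [
        {"zip": "97201", "birth_year": 1974, "sex": "F", "diagnosis": "asthma"},
        {"zip": "97210", "birth_year": 1980, "sex": "M", "diagnosis": "diabetes"},
    ]

    public_voter_roll = [
        {"name": "A. Jones", "zip": "97201", "birth_year": 1974, "sex": "F"},
        {"name": "B. Smith", "zip": "97215", "birth_year": 1962, "sex": "M"},
    ]

    def link(records, roll):
        """Join on (zip, birth_year, sex) to attach names to 'anonymous' rows."""
        matches = []
        for r in records:
            for v in roll:
                if (r["zip"], r["birth_year"], r["sex"]) == (
                    v["zip"], v["birth_year"], v["sex"]
                ):
                    matches.append((v["name"], r["diagnosis"]))
        return matches

    print(link(anonymized_health, public_voter_roll))  # → [('A. Jones', 'asthma')]
    ```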

  6. Eric Schmidt does have a long history of making boneheaded comments about user privacy, but in this case, I think he has a point.

    While some folks who transmit data in plaintext over unsecured Wifi networks may well think their information will remain private, should we view such an expectation as reasonable? I doubt it. When I’m sitting on my couch at home, all it takes is a few clicks of the mouse to launch Wireshark and capture every single Wifi packet that’s being broadcast in the clear. What Google did with Streetview is really no different than what I can do within the confines of my own home, albeit on a larger scale.

    With regards to harm vs. dignity, is there any reason to believe that an actual human being viewed and/or interpreted the Wifi data in question? I suppose one could argue that having an individual’s sensitive personal data stored on a server is inherently antithetical to dignity, but if it’s true that nobody at Google even knew that this Wifi data was being collected in the first place, how much dignity was really lost?

  7. @Ryan: Thanks for the comments.
    My concern is more about the fact that (a) the use of Wireshark isn’t widespread and most people don’t realize how easy it might be to snag packets that are broadcast in the clear, and (b) the larger scale you mention is a meaningful difference.
    Even if (a) doesn’t hold true, having Google — a company that already warehouses a large amount of user data — drive around and systematically capture that data is quite different than you sitting on your couch and trying to sniff data from your neighbors.
    On your second point, a dignity-based approach doesn’t care if any human viewed or used the data. The very fact that it was collected surreptitiously is an affront to one’s dignity. And, to me, the fact that the home office didn’t know this was happening is cause for even more alarm: What else might be happening that they don’t know about? What kind of audit and control do they have over their (contract) employees? Given their ignorance of what took place, how can they be so certain about whether the data was used, copied, shared, etc.? (While I trust Google’s statements, this is a point of concern.)

  8. Ryan, I don’t think ease of abuse tells much about the reasonableness of social norms. How much expertise would it take to listen in on cordless phone frequencies, or to point a shotgun mic at a bedroom window? But you and Michael both hit on something larger here: the present gap between how WiFi (and best-effort routing, for that matter) actually works and how most of the public perceives it.

  9. Interesting points.

    It’s worth noting that at least one federal court has ruled that files accessible to the public via an open Wifi network are not protected by the 4th amendment. While the Google matter obviously does not raise any Constitutional questions, the court’s ruling nevertheless relates to the Google discussion in an important sense. The reason the court declined to extend 4th amendment protection to publicly shared files was because the court found that individuals have no reasonable expectation of privacy over information they’ve made available to the public in the clear.

    Of course, the reason Google’s in hot water isn’t because it accessed Wifi file shares, but because it captured 802.11 frames transmitted in the clear via unsecured Wifi networks. There’s a difference between the two, but I’m not sure that it matters — shared Limewire files and 802.11 frames are both accessible to the public in plaintext via readily available free software. In fact, one could make the case that 802.11 frames are less private than publicly available file shares — accessing file shares requires establishing a connection with a private computer, while 802.11 frames can be passively captured by a Wifi device in promiscuous mode without initiating contact with any remote computer or wireless router.

    I should note that I’m not totally sure the Ahrndt court got it right, but I do think that there is fairly broad societal awareness (at least among Internet users) that unsecured Wifi networks are, well, insecure. Anecdotally, when I scan for Wifi networks in my apartment (a dense high-rise) I find over 20 networks, each password-protected with either WEP or WPA. A very small and unrepresentative sample, to be sure, but it’s something.
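    To illustrate the passive-capture point, here is a minimal sketch of why frames broadcast in the clear are trivially readable: pulling the network name (SSID) out of a hand-built 802.11 beacon frame. The network name “HomeNet” and the MAC addresses are made up, and a real capture would of course require a Wifi adapter in monitor mode; this only shows how little structure stands between a raw frame and its contents.

    ```python
    def parse_beacon_ssid(frame: bytes) -> str:
        """Extract the SSID from a raw 802.11 beacon frame (no radiotap header)."""
        # Skip the 24-byte MAC header and the 12 bytes of fixed beacon fields
        # (8-byte timestamp, 2-byte beacon interval, 2-byte capability info).
        body = frame[24 + 12:]
        # What remains is a series of information elements:
        # 1-byte tag, 1-byte length, then the value.
        offset = 0
        while offset + 2 <= len(body):
            tag, length = body[offset], body[offset + 1]
            if tag == 0:  # tag 0 is the SSID element
                return body[offset + 2 : offset + 2 + length].decode(
                    "utf-8", errors="replace"
                )
            offset += 2 + length
        return ""

    # A hand-built beacon frame for a hypothetical network "HomeNet".
    frame = (
        b"\x80\x00"                          # frame control: management / beacon
        + b"\x00\x00"                        # duration
        + b"\xff" * 6                        # addr1: broadcast destination
        + b"\x02\x00\x00\x00\x00\x01" * 2    # addr2/addr3: (fake) AP MAC
        + b"\x00\x00"                        # sequence control
        + b"\x00" * 8                        # timestamp
        + b"\x64\x00"                        # beacon interval
        + b"\x01\x04"                        # capability info
        + b"\x00\x07HomeNet"                 # SSID element: tag 0, length 7
    )
    print(parse_beacon_ssid(frame))  # → HomeNet
    ```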

  10. I would like to see how far the “no harm no foul” defense gets me when a traffic officer pulls me over for speeding.
