Last month I noted that Google’s Street View service was being challenged by German data privacy authorities, who insisted that Google must permanently remove personally-identifying images from its databases (not just blur them in the user interface). Google argued that the original images are necessary to help the system “learn” how to blur better automatically in the future, but Germany feels (and I agree) that privacy must trump engineering in this case.
Google has conceded, and will now erase identifiable raw data depicting people, property, or cars upon request.
This is a first, and it is significant, but the concession applies only to Germany.
Rather than taking a broader value-centered approach to designing its systems, Google continues to base its decisions (primarily) on local laws. The U.S. lacks laws guaranteeing individuals “privacy in public,” so Google launched Street View with minimal (and poorly-executed) ability to protect one’s privacy. Canada, however, does have such laws, so Google decided to blur faces there (but applies that engineering solution only to Canada). Now Germany wants the source data purged, so Google will provide this privacy-protecting measure only within that jurisdiction.
A broader value-centered approach would (learning from the Canadian and EU legal environments) recognize that protecting one’s privacy in public might indeed be a fundamental right, and perhaps is something that must be designed into such a potentially privacy-invasive tool as Street View.
I’ve informally chatted with Google folks about these issues, and I applaud that they do have law/policy folks on every product team. But too often, when asked something like “why didn’t you blur the faces in the U.S. version?”, the answer is “the law doesn’t require it.” Such a strictly legal approach to deciding whether to design ethics into products is extremely shortsighted.
Do we need to start calling for Chief Ethical Officers in our corporations?