To document global surveillance measures in response to the coronavirus pandemic, OneZero compiled press reports from more than 25 countries where potential privacy issues are occurring. The most common form of surveillance deployed to battle the pandemic is the use of smartphone location data, which ranges from tracking population-level movement to enforcing individual quarantines. Some governments are releasing apps that offer coronavirus health information while also sharing users' location data with authorities for a period of time.
Christopher James Riederer
Most of the modern online economy is based on websites offering free services and content in exchange for advertising access and user data. Web companies collect vast troves of data about their users in order to better target their advertisements. An important subset of this harvested data is the set of locations users visit. Location data is valuable because it is a "real world" signal, unlike purely online behavior: a visit to a store is a stronger signal than a visit to a website, and location data can reveal user attributes that interest advertisers. The collection of this data, however, raises many concerns. Location data can reveal attributes that users may not wish to disclose: ZIP codes can reveal income and race, visits to places of worship may enable discrimination, and insurers may want to know about trips to hospitals. The risks exist both at an individual level, where location is tied to physical safety, and at a collective level, where inference about group membership is a necessary step toward discrimination. In this thesis, I examine issues of privacy and fairness in the use of location data. In the first portion, I empirically demonstrate new attacks on the anonymity and privacy of users, including a theoretical basis for user identification. In the second portion, I propose and analyze new solutions for dealing with privacy, anonymity, and fairness in the collection and use of location data. In contrast to previous work, which presents privacy in abstract terms or ignores the power of data aggregators, the work presented here focuses on concretely informing users and incorporates the economic incentives driving privacy and fairness concerns.
A number of protocol designs for co-location tracking have already been developed, most of which claim to function in a privacy-preserving manner. However, despite claims such as "GDPR compliance", "anonymity", "pseudonymity", or other forms of "privacy", the authors of these designs usually neglect to precisely define what they (aim to) protect. We make a first step towards formally defining the privacy notions of proximity tracing services, especially with regard to the health, (co-)location, and social interactions of their users. We also give a high-level intuition of which protections the most prominent proposals can and cannot achieve. This initial overview indicates that all proposals include some centralized services, and none perfectly protects the identity and (co-)locations of infected users from both other users and the service provider.
The idea is elegant in its simplicity: Google and Apple phones would quietly build, in the background, a database of other phones that have been in Bluetooth range—about 100 to 200 feet—over a rolling two-week period. When users find out that they are infected, they can send an alert to all the phones that were recently in their proximity. […] But building a data set of people who have been in the same room together—data that would likely be extremely valuable to both marketers and law enforcement—is not without risk of exploitation, even when stored on people's phones, security and privacy experts said in interviews.
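The decentralized matching described above can be illustrated with a minimal sketch. All names and the token handling here are illustrative assumptions, not the actual Apple/Google design: the real proposal exchanges rotating, cryptographically derived identifiers over Bluetooth, whereas this toy model uses fixed string tokens to show only the data flow (local encounter log, rolling two-week retention, on-device matching against published tokens of infected users).

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=14)  # rolling two-week window from the excerpt


class Phone:
    """Toy model of one device in a decentralized exposure-notification scheme."""

    def __init__(self, token):
        self.token = token      # stand-in for a rotating random identifier
        self.encounters = []    # (other_token, timestamp) pairs seen over Bluetooth

    def record_encounter(self, other_token, when):
        self.encounters.append((other_token, when))

    def prune(self, now):
        # Drop encounters older than the rolling two-week window.
        self.encounters = [(t, w) for t, w in self.encounters
                           if now - w <= RETENTION]

    def check_exposure(self, infected_tokens, now):
        # Matching happens locally: the phone compares the published tokens
        # of infected users against its own encounter log. No central server
        # ever learns who met whom.
        self.prune(now)
        return any(t in infected_tokens for t, _ in self.encounters)


# Usage: alice and bob were in Bluetooth range; bob later tests positive
# and publishes his tokens, so alice's phone flags the exposure locally.
now = datetime(2020, 4, 15)
alice = Phone("alice-token")
bob = Phone("bob-token")
alice.record_encounter(bob.token, now - timedelta(days=3))

published = {bob.token}  # tokens uploaded after a positive test
print(alice.check_exposure(published, now))  # True
```

The privacy-relevant design choice is that the encounter log never leaves the device; the only data published is the infected user's own tokens, which is also exactly the linkage risk the experts quoted above are pointing at.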