Eur J Secur Res (2020).
Hans-Jörg Albrecht
The article discusses different examples of data-driven policing, its legal provisions, and its effects on a society’s understanding of public security. It distinguishes between (a) the collection of classical data such as fingerprints or DNA, which serve to identify suspects and to collect evidence, (b) the processes and the impetus of big data, and (c) the networking of files from different security authorities. Discussing systematic forecasting tools, the article identifies a significant difference between the prediction of incidents such as home burglary in the case of predictive policing, and the identification of individuals deemed to be at risk of involvement in various forms of crime in the case of risk control programs. Data protection and the protection of personality rights are interrelated issues.
John Zhuang Liu | Michael Sockin | Wei Xiong
This paper analyzes how different data-sharing schemes of a digital platform may affect consumer surplus and social surplus when a fraction of the consumers have weak self-control and suffer from targeted advertising of temptation goods, such as gambling and video games. While sharing consumer data with firms improves the efficiency of matching consumers with normal consumption goods, it also exposes weak-willed consumers to temptation goods. Despite the seeming appeal of the opt-in policy of allowing each consumer to opt in or out of data sharing, our analysis shows that this policy may not be effective in protecting severely tempted consumers. When other consumers, motivated by the improved access to normal goods, choose to share their data, their opt-in reduces the anonymity of the weak-willed consumers who choose to opt out. To alleviate this externality, privacy protection regulation needs to limit the bundling of consumer authorization to share data with normal-good sellers and temptation-good sellers.
GigaScience, Volume 9, Issue 2, February 2020.
Data reuse is often controlled to protect the privacy of subjects and patients. Data discovery tools need ways to inform researchers about restrictions on data access and reuse. We present elements in the Data Tags Suite (DATS) metadata schema describing data access, data use conditions, and consent information. DATS metadata are explained in terms of the administrative, legal, and technical systems used to protect confidential data. The access and use metadata items in DATS are designed from the perspective of a researcher who wants to find and reuse existing data. We call for standard ways of describing informed consent and data use agreements that will enable automated systems for managing research data.
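The kind of machine-actionable access and use metadata described above can be pictured as a small structured record that a discovery tool inspects before pointing a researcher at a dataset. The sketch below is a minimal illustration in that spirit; the field names (`access`, `authorizations`, `dataUseConditions`, `consentInformation`) are assumptions for illustration, not the published DATS schema.

```python
# Illustrative sketch of access/use metadata in the spirit of DATS.
# All field names here are hypothetical simplifications, not the real schema.
dataset_metadata = {
    "title": "Example cohort study",
    "access": {
        "landingPage": "https://example.org/dataset/123",
        "authorizations": [{"value": "controlled"}],  # vs. "public"
    },
    "dataUseConditions": [
        {"name": "disease-specific research only"},
        {"name": "no commercial use"},
    ],
    "consentInformation": {"consentType": "broad", "recontactAllowed": False},
}


def requires_access_request(metadata):
    """Return True if any authorization marks the dataset as controlled-access."""
    auths = metadata.get("access", {}).get("authorizations", [])
    return any(a.get("value") == "controlled" for a in auths)


print(requires_access_request(dataset_metadata))  # True for this example
```

A discovery tool could use a predicate like `requires_access_request` to tell researchers up front whether a data access agreement must be negotiated before reuse.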
Trade used to be about goods crossing borders, and the instrument of protection was mostly tariffs. Then came greater recognition of trade in services, which now exceeds the share of goods in global trade. Because of services, the focus of trade protection shifted towards ‘behind-the-border barriers’, that is, domestic regulations that can obstruct services trade. More recently, the flows of goods and services have been eclipsed yet again by data flows, whose contribution to the economy is projected to reach $11 trillion by 2025. In the digital era, a new set of non-tariff measures, mostly related to data, has thus emerged. The paper seeks to understand the role of data in business and trade, the nature of some data flow restrictions and other digital trade barriers, and the potential impact of data regulations.
There is a pervasive assumption that elections can be won and lost on the basis of which candidate or party has the better data on the preferences and behaviour of the electorate. But there are myths and realities about data-driven elections. It is time to assess the actual implications of data-driven elections in the light of the Facebook/Cambridge Analytica scandal, and to reconsider the broader terms of the international debate. Political micro-targeting, and the voter analytics upon which it is based, are essentially forms of surveillance. We know a lot about how surveillance harms democratic values. We know a lot less, however, about how surveillance spreads as a result of democratic practices – by the agents and organisations that encourage us to vote (or not vote). The articles in this collection, developed out of a workshop hosted by the Office of the Information and Privacy Commissioner for British Columbia in April 2019, address the most central issues about data-driven elections, and particularly the impact of US social media platforms on local political institutions and cultures. The balance between rights to privacy, and the rights of political actors to communicate with the electorate, is struck in different ways in different jurisdictions depending on a complex interplay of various legal, political, and cultural factors. Collectively, the articles in this collection signal the necessary questions for academics and regulators in the years ahead.
To foster responsible data sharing in health research, ethical governance complementary to the EU General Data Protection Regulation is necessary. A governance framework for Big Data-driven research platforms will at least need to consider the conditions as specified a priori for individual datasets. We aim to identify and analyze these conditions for the Innovative Medicines Initiative’s (IMI) BigData@Heart platform.
We performed a descriptive case study into the conditions for data sharing as specified for datasets participating in BigData@Heart. Principal investigators of 56 participating databases were contacted via e-mail with a request to send any documentation that specified the conditions for data sharing. Documents were qualitatively reviewed for conditions pertaining to data sharing and data access.
In this paper, we propose Mapping Distortion Protection (MDP) and its augmentation-based extension (AugMDP) to protect data privacy by modifying the original dataset. In MDP, the labels of the modified images deviate from the ground-truth mapping, yet DNNs trained on the distorted dataset can still learn the ground-truth relation. As such, this method protects privacy when the dataset is leaked.
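The label-distortion idea can be pictured as publishing the dataset under a fixed secret remapping of the label space, which authorized parties can invert. The sketch below uses a simple secret permutation of class labels as the distorted mapping; this is our illustrative simplification, not necessarily the paper's exact construction.

```python
import random

# Sketch of label distortion as a secret permutation of the label set
# (an illustrative simplification of the MDP idea, not the paper's method).
NUM_CLASSES = 10


def make_secret_mapping(num_classes, seed=42):
    """Build a fixed secret permutation of the label set."""
    rng = random.Random(seed)
    perm = list(range(num_classes))
    rng.shuffle(perm)
    return perm


def distort_labels(labels, mapping):
    """Publish the dataset with permuted (distorted) labels."""
    return [mapping[y] for y in labels]


def restore_labels(distorted, mapping):
    """Authorized users invert the permutation to recover ground truth."""
    inverse = {v: k for k, v in enumerate(mapping)}
    return [inverse[y] for y in distorted]


mapping = make_secret_mapping(NUM_CLASSES)
original = [0, 3, 7, 7, 1]
leaked = distort_labels(original, mapping)
recovered = restore_labels(leaked, mapping)
assert recovered == original
```

An unauthorized party who obtains the leaked labels sees a consistent but distorted mapping; without the secret permutation, the published labels do not reveal the ground-truth assignment.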
A prototype Privacy Preferences software tool for citizens’ health and social care data was developed and evaluated with focus groups comprising a wide range of users. The primary purpose of the focus groups was to evaluate the acceptability and ease of use of the software tool for sharing data for direct care. Fictitious data, based on real scenarios, were used in the evaluations. A possible future commercial development of the tool might be its use in a Health Information Exchange system supporting access to records held in provider systems. The outcomes of the evaluation were that younger adults with significant computing experience could understand and use the tool, but people with less computer experience and confidence needed support. One conclusion was that the tool is appropriate for the citizen/patient to explore their data and to prototype sharing preferences; however, the preferences should only be turned into permissions that actually control access to data by a care professional during a consultation. This recommendation is motivated by several potential problems, including: adverse effects on treatment and care; difficulties with authentication; and, on the part of the citizen/patient user, lack of medical knowledge, lack of capacity (maybe unrecognised), insufficient experience with computing devices, and deliberate misuse.
This note attempts a systematisation of different strands of literature that underpin the recent policy and academic debate on the value of data. It poses foundational questions around the definition, economic nature, and measurement of data value, and discusses the opportunity to redistribute it. It then articulates a framework to compare ways of implementing redistribution, distinguishing between data as capital, data as labour, and data as intellectual property. Each of these raises challenges, revolving around the notions of data property and data rights, that are also briefly discussed. The note concludes by indicating areas for policy consideration and a research agenda to shape the future structure of data governance at large.
An Avast antivirus subsidiary sells browsing data marketed as ‘Every search. Every click. Every buy. On every site.’ Its clients have included Home Depot, Google, Microsoft, Pepsi, and McKinsey.