Guo Bing, a law professor in the Chinese city of Hangzhou, liked the zoo enough to purchase an annual pass. But he didn’t like it nearly enough to let the zoo take a high-resolution scan of his face.
In what judges called the first case of its kind in China, Guo sued the zoo — and won. He argued there was no legal basis for the Hangzhou Safari Park to collect visitors’ biometric data, and that it had not taken precautions to protect the information. In April, a Chinese appeals court ruled in favor of Guo, ordering the zoo to refund him and delete his face scan and fingerprints.
Now, China is putting its freewheeling facial recognition industry on notice. Citing Guo’s case, China’s top court announced this week that consumers’ privacy must be protected from unwarranted face tracking.
In July 2020, the Secretariat of the National Information Security Standardisation Technical Committee (NISSIT) released the Practical Guide to Cybersecurity Standards – Self-Assessment Guidelines for Apps to Collect and Use Personal Information, which guides app operators in carrying out self-assessments.(1)
In total, the guidelines provide 28 self-assessment items.
This research report offers a comparative analysis of privacy and data protection in Germany and India. It compares the two regimes on four counts. First, it examines how the right to privacy and/or its allied rights developed historically in the two countries, exploring the political factors that shaped the understanding and acceptability of privacy principles in the decades after the Second World War. Second, it delves into the instruments and forms of state surveillance employed by both countries and analyses how parliamentary and judicial oversight of intelligence agencies affects individual privacy. Third, it compares how biometric identity systems have been deployed in the two countries, the safeguards designed around them, and the legal challenges they have raised. Lastly, it evaluates data subject rights as defined under the General Data Protection Regulation (GDPR) together with the Bundesdatenschutzgesetz-Neu (BDSG-Neu) and how they compare with those defined under the Draft Personal Data Protection Bill, 2018 in the Indian context.
Ethical considerations and governance approaches to AI are at a crossroads. Either one tries to convey the impression that a status quo ante of our given “onlife” era [1,2] can be restored, or one accepts responsible involvement in a digital world in which informational self-determination can no longer be safeguarded and fostered through the old-fashioned data protection principles of informed consent, purpose limitation and data economy [3,4,6]. The main focus of the talk is on how, under the given conditions of AI and machine learning, data sovereignty (interpreted as controllability [not control (!)] of the data subject over the use of her data throughout the entire data processing cycle) can be strengthened without hindering the innovation dynamics of the digital economy and the social cohesion of fully digitized societies. To put this approach into practice, the talk combines a presentation of the concept of data sovereignty put forward by the German Ethics Council with recent research trends in effectively applying the AI ethics principles of explainability and enforceability [4-9].
Analysing personal data is a privilege requiring researchers to safeguard data and to use data wisely. Safeguarding data means protecting the identity of individuals. Using data wisely means using, reusing, and sharing data to their maximum potential. More researchers should be given safe access to previously collected data from expensive clinical trials and laboratory or epidemiological studies. Journal editors therefore increasingly require a data sharing statement in published articles.
In the settlement, the company agreed to pay $150,000 in civil penalties and to delete the information it had collected without appropriate parental consent. The $150,000 payment was in lieu of the $4,000,000 judgment entered against both the company and its CEO. Even so, in an unusual move, one of the commissioners dissented, arguing that the penalty was too high given the little harm that resulted from the violation. The company and its officers also agreed to provide appropriate notice (as required by COPPA) and to obtain parental consent in the future.
Between everyday use and the tests I always end up running on the apps installed on my smartphone, I stumbled upon a very interesting Nubank feature that lets a user create a “billing link” and send it to one or more people to request payment. I found the functionality very useful, but when I looked closely at how it worked, I became uneasy about the number of attack scenarios that implementation could be exposed to. With that nagging dissatisfaction, I decided to build some proofs of concept and share them with the company so that they could re-evaluate the design of the feature. In this quick and simple demonstration, I was able to enumerate personal data (CPF, full name, account number and branch) of more than 100 customers.
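The enumeration risk described above comes down to link identifiers being guessable. As a rough illustration (the URL shape, ID format, and request rate below are hypothetical assumptions, not details of Nubank's actual implementation), a short sequential ID space can be exhausted in hours, while a high-entropy random token cannot:

```python
import math
import secrets

# Hypothetical 6-digit sequential link IDs vs. a 128-bit random token.
numeric_space = 10 ** 6
token_space = 2 ** 128

requests_per_second = 100  # a modest, unthrottled scraper (assumption)
hours_to_exhaust = numeric_space / requests_per_second / 3600
print(f"6-digit ID space exhausted in ~{hours_to_exhaust:.1f} hours")
print(f"128-bit token space is ~10^{math.log10(token_space):.0f} IDs: infeasible to enumerate")

# Mitigation sketch: derive each billing link from a high-entropy token
# instead of a sequential identifier (the base URL is illustrative).
def make_billing_link(base="https://example.com/pay/"):
    return base + secrets.token_urlsafe(16)  # 128 bits of randomness
```

Rate limiting and access logging help, but they only slow an attacker down; making the identifier itself unguessable removes the enumeration vector.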
The Law separately outlines certain provisions relating to the storage and processing of personal data that will be obtained during the experiment.
As a result, the Law makes it possible to use the previously anonymised personal data of individuals participating in the experiment to improve the effectiveness of state or municipal administration. However, the Law specifically provides that such personal data may be transferred only to participants in the experiment and must be stored in Moscow.
In this paper, we identify a lack of a homogeneous interpretation of the data minimization principle and explore two operational definitions applicable in the context of personalization. The focus of our empirical study in the domain of recommender systems is on providing foundational insights about the (i) feasibility of different data minimization definitions, (ii) robustness of different recommendation algorithms to minimization, and (iii) performance of different minimization strategies. We find that the performance decrease incurred by data minimization might not be substantial, but that it might disparately impact different users, a finding which has implications for the viability of different formal minimization definitions. Overall, our analysis uncovers the complexities of the data minimization problem in the context of personalization and maps the remaining computational and regulatory challenges.
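One natural minimization strategy in this setting, retaining only each user's most recent interactions, can be sketched on toy data (the dataset and the popularity-based stand-in recommender below are illustrative assumptions, not the paper's experimental setup):

```python
from collections import Counter

# Toy interaction logs: user -> items, ordered oldest to newest (assumption).
interactions = {
    "u1": ["a", "b", "c"],
    "u2": ["b", "c", "d"],
    "u3": ["c", "d", "e"],
}

def top_items(data, n=3):
    """Rank items by global popularity: a simple stand-in for a recommender."""
    counts = Counter(item for items in data.values() for item in items)
    return [item for item, _ in counts.most_common(n)]

def minimize(data, k=2):
    """Minimization strategy: keep only each user's k most recent interactions."""
    return {user: items[-k:] for user, items in data.items()}

full = top_items(interactions)
reduced = top_items(minimize(interactions, k=2))
overlap = len(set(full) & set(reduced)) / len(full)
print(full, reduced, overlap)
```

On this toy data the top-3 item set survives minimization intact, which mirrors the aggregate finding that performance loss can be small; detecting disparate impact on individual users would require comparing per-user recommendation quality before and after minimization.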