NISSIT issues self-assessment guidelines for apps to collect and use personal information

In July 2020, the Secretariat of the National Information Security Standardisation Technical Committee (NISSIT) released the Practical Guide to Cybersecurity Standards – Self-Assessment Guidelines for Apps to Collect and Use Personal Information to guide app operators in carrying out self-assessments.(1)

The guidelines provide a total of 28 self-assessment items.

[article] Privacy and data protection in India and Germany: A comparative analysis

This research report offers a comparative analysis of privacy and data protection in Germany and India, comparing the two regimes on four counts. First, it examines how the right to privacy and its allied rights have developed historically in the two countries, exploring the political factors that shaped the understanding and acceptability of privacy principles in the decades after the Second World War. Second, it delves into the instruments and forms of state surveillance employed by both countries and analyses how parliamentary and judicial oversight of intelligence agencies affects individual privacy. Third, it compares how biometric identity systems have been deployed in the two countries, the safeguards designed around them, and the legal challenges they have given rise to. Lastly, it evaluates data subject rights as defined under the General Data Protection Regulation (GDPR) together with the Bundesdatenschutzgesetz-Neu (BDSG-Neu), and how they compare with those defined under India's Draft Personal Data Protection Bill, 2018.

How to Put the Data Subject’s Sovereignty into Practice. Ethical Considerations and Governance Perspectives

Peter Dabrock

AIES ’20: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, February 2020, Pages 1–2.

Ethical considerations and governance approaches to AI are at a crossroads. Either one tries to convey the impression that one can bring back a status quo ante of our given “onlife” era [1,2], or one accepts responsible involvement in a digital world in which informational self-determination can no longer be safeguarded and fostered through the old-fashioned data protection principles of informed consent, purpose limitation and data economy [3,4,6]. The main focus of the talk is on how, under the given conditions of AI and machine learning, data sovereignty (interpreted as controllability [not control (!)] of the data subject over the use of her data throughout the entire data processing cycle [5]) can be strengthened without hindering the innovation dynamics of the digital economy and the social cohesion of fully digitized societies. To put this approach into practice, the talk combines a presentation of the concept of data sovereignty put forward by the German Ethics Council [3] with recent research trends in effectively applying the AI ethics principles of explainability and enforceability [4-9].

Sharing data safely while preserving privacy

Analysing personal data is a privilege requiring researchers to safeguard data and to use data wisely. Safeguarding data means protecting the identity of individuals. Using data wisely means using, reusing, and sharing data to their maximum potential. More researchers should be given safe access to previously collected data from expensive clinical trials and laboratory or epidemiological studies. Journal editors therefore increasingly require a data sharing statement in published articles.
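Protecting the identity of individuals in shared research data is often assessed with measures such as k-anonymity. As an illustrative sketch (not a method named in the article above), the following checks the k-anonymity level of a toy dataset, where the attribute names are hypothetical:

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the k-anonymity level of a dataset: the size of the
    smallest group of records sharing the same quasi-identifier
    values. Higher k means individuals are harder to single out."""
    groups = Counter(
        tuple(rec[attr] for attr in quasi_identifiers) for rec in records
    )
    return min(groups.values())

# Toy clinical-style dataset: age band and postcode prefix are the
# quasi-identifiers that could re-identify a participant.
records = [
    {"age_band": "30-39", "postcode": "SW1", "outcome": "A"},
    {"age_band": "30-39", "postcode": "SW1", "outcome": "B"},
    {"age_band": "40-49", "postcode": "N1",  "outcome": "A"},
    {"age_band": "40-49", "postcode": "N1",  "outcome": "B"},
]

print(k_anonymity(records, ["age_band", "postcode"]))  # → 2
```

A dataset with k = 1 contains at least one record whose quasi-identifier combination is unique, so that individual could be singled out; generalising attributes (wider age bands, shorter postcode prefixes) raises k at the cost of precision.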

[USA] KleptoCats Maker Settles With FTC Over Failure To Get Parental Consent

In the settlement, the company agreed to pay $150,000 in civil penalties and to delete the information it had collected without obtaining appropriate parental consent. The $150,000 payment was in lieu of a $4,000,000 judgment entered against both the company and its CEO. Even so, in an unusual move, one of the commissioners dissented, arguing that the penalty was too high in view of the little harm that resulted from the violation. The company and its officers also agreed to provide appropriate notice (as required by COPPA) and to obtain parental consent in the future.

A small case of personal data exposed to scraping on the web

Between everyday use and the tests I always end up running on the applications installed on my smartphone, I stumbled upon a very interesting Nubank feature that lets a user create a “billing link” and send it to one or more people to request payment. I found the functionality very useful, but when I looked closely at how it worked, I grew uncomfortable with the number of scenarios to which that implementation could be exposed. With this nagging dissatisfaction, I decided to build some proofs of concept and share them with the company so that they could re-evaluate the design of the feature. In this quick and simple demonstration I was able to map personal data (CPF, full name, account number and branch) of more than 100 customers.

Experimental AI regime to be introduced in Moscow

The Law separately outlines certain provisions relating to the storage and processing of personal data that will be obtained during the experiment.

As a result, the Law makes it possible to use the previously anonymised personal data of individuals participating in the experiment to increase the effectiveness of the state or municipal government. However, the Law specifically establishes that such personal data can only be transferred to participants in the experiment and must be stored in Moscow.

[article] Operationalizing the Legal Principle of Data Minimization for Personalization

In this paper, we identify a lack of a homogeneous interpretation of the data minimization principle and explore two operational definitions applicable in the context of personalization. The focus of our empirical study in the domain of recommender systems is on providing foundational insights about the (i) feasibility of different data minimization definitions, (ii) robustness of different recommendation algorithms to minimization, and (iii) performance of different minimization strategies. We find that the performance decrease incurred by data minimization might not be substantial, but that it might disparately impact different users, a finding which has implications for the viability of different formal minimization definitions. Overall, our analysis uncovers the complexities of the data minimization problem in the context of personalization and maps the remaining computational and regulatory challenges.
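To make the idea of a "minimization strategy" concrete: one simple, hypothetical strategy (a rough illustration, not the paper's exact operational definitions) is to retain only each user's most recent interactions before training a recommender, discarding the rest:

```python
def minimize_histories(histories, budget):
    """Hypothetical recency-based minimization: for each user, keep
    only the `budget` most recent interactions (lists are assumed
    ordered oldest-to-newest) and discard everything else."""
    return {user: items[-budget:] for user, items in histories.items()}

# Toy interaction logs keyed by user ID.
histories = {
    "u1": ["i1", "i2", "i3", "i4", "i5"],
    "u2": ["i6", "i7"],
}

minimized = minimize_histories(histories, budget=3)
print(minimized["u1"])  # → ['i3', 'i4', 'i5']
print(minimized["u2"])  # → ['i6', 'i7']
```

The disparate-impact finding above can be seen even in this sketch: u1 loses most of their history while u2 keeps all of it, so recommendation quality may degrade unevenly across users under the same global budget.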


How personal data could help contribute to a COVID-19 solution

The entire state of California just joined San Francisco and Silicon Valley in a “shelter at home” order from the state’s governor. Restaurants, movie theatres and schools are just some of the parts of our everyday lives that must be temporarily interrupted to curb the spread of COVID-19. While many Silicon Valley companies had previously institutionalised work-from-home policies, this shutdown will particularly affect gig-economy and part-time workers. Companies across the Bay Area are working together to see how technology can be used to combat the crisis and make sure Americans are ready to go back to work.

[USA] Google forced to reveal anonymous reviewer’s details

Google could possibly surface the offending poster’s subscriber information and related IP addresses and phone numbers, along with location metadata, the judge said. It could also probably provide any other Google accounts, including their full name, email address and identifying details, originating from the same IP address around the same time that CBsm 23 posted their negative review.