Translation: Personal Information Protection Law of the People’s Republic of China (Effective Nov. 1, 2021)

This translation was produced by Rogier Creemers and Graham Webster on the basis of DigiChina’s earlier translation of the second review draft of the law, which in turn was based on our translation of the first draft, produced by Rogier Creemers, Mingli Shi, Lauren Dudley, and Graham Webster.

China passes data protection law

China has passed a personal data protection law, state media Xinhua reports (via Reuters).

The law, called the Personal Information Protection Law (PIPL), is set to take effect on November 1.

It was proposed last year — signalling an intent by China’s communist leaders to crack down on unscrupulous data collection in the commercial sphere by putting legal restrictions on user data collection.

Facebook hides friends lists on accounts in Afghanistan as a safety measure

Facebook’s head of security policy said the company is putting into place security measures for users in Afghanistan, including hiding “friends” lists and adding a tool to quickly lock down accounts. Nathaniel Gleicher said in a thread on Twitter that Facebook made the changes based on feedback from activists, journalists, and civil society groups.

Apple defends iPhone photo scanning, calls it an “advancement” in privacy

Apple’s decision to have iPhones and other Apple devices scan photos for child sexual abuse material (CSAM) has sparked criticism from security experts and privacy advocates—and from some Apple employees. But Apple believes its new system is an advancement in privacy that will “enabl[e] a more private world,” according to Craig Federighi, the company’s senior VP of software engineering.

Civil society to U.S. FTC: fight for civil rights and privacy

Source: Access Now

On July 29, Access Now, the Lawyers’ Committee for Civil Rights Under Law, and other civil society organizations called on the U.S. Federal Trade Commission (FTC) to protect civil rights and privacy in online commerce. The letter urges the FTC to regulate unfair and deceptive practices, create an Office of Civil Rights, and increase enforcement against tech companies.

The algorithms and models that are being used in commercial data practices are reinforcing the structural racism and systemic bias that pervades our society, most notably in the employment, finance, housing, and education sectors. These practices are denying communities of color equal opportunity, amplifying disinformation and white supremacy, and exploiting people.

“We’ve known for years that, behind closed doors, online companies collect and process data in ways that harm and discriminate against people,” said Eric Null, U.S. Policy Manager at Access Now. “It’s time for the FTC to take bold action against these companies and practices, and show the world that the U.S. in fact does care about protecting people’s privacy and civil rights.”

The organizations ask the FTC to take immediate steps to stop unfair and deceptive practices through comprehensive federal regulations to address abuse of commercial data by tech companies. Left unattended, these companies will negatively affect equal opportunity, data protection, due process, transparency, data security, and corporate accountability.

“Discrimination is the quintessential unfair and deceptive practice. Online or offline, no business should be allowed to engage in discrimination and deny opportunities just because someone’s race, ethnicity, or language does not meet a certain criterion,” said David Brody, who leads the Digital Justice Initiative at the Lawyers’ Committee for Civil Rights Under Law. “The FTC has a responsibility to proactively defend the rights of Black Americans, other communities of color, and all consumers against exploitative data practices.”

“Exploitative data practices, like discriminatory AI systems or dark patterns designed to deceive people, disproportionately harm marginalized communities who already face discrimination on multiple fronts,” said Sara Collins, Policy Counsel at Public Knowledge. “Fortunately, the FTC has both the authority and the opportunity to support the people who need them most. We urge the FTC to seize this moment to create new rules to protect the privacy of all Americans, but especially of marginalized people.”

“The proliferation of online services has created an unprecedented need for proactive consumer protection. Nowhere is that need more urgent than the protection of civil rights and equal opportunity online,” said Erin Simpson, Associate Director of Technology Policy at the Center for American Progress. “The FTC must fully embrace its consumer protection mission by using all tools available to curb abusive commercial data practices—and center the communities whose rights are most at risk.”

Why Amazon’s £636m GDPR fine really matters

We were promised huge fines and GDPR has finally delivered. Last week, Amazon’s financial records revealed that officials in Luxembourg are fining the retailer €746 million (£636m) for breaching the European regulation.

The fine is unprecedented: it’s the biggest GDPR fine issued to date and more than double the amount of every other GDPR fine combined. The financial penalty, which Amazon is appealing, comes at a time when GDPR is feeling the strain of lax enforcement and measly fines. Experts say slow, ineffective GDPR investigations are letting companies get away with abusing people’s privacy. Some people even want GDPR to be ripped up entirely.

But Luxembourg’s action against Amazon stands out for two reasons: first, it shows the potential power of GDPR; second, it exposes how inconsistently the regulation is applied across the EU. For both of these reasons, it is arguably the most important GDPR decision issued to date.

Utilities Governed Like Empires

By Cory Doctorow | August 3, 2021 | Source: EFF

Believe the Hype

After decades of hype, it’s only natural for your eyes to skate over corporate mission statements without stopping to take note of them, but when it comes to ending your relationship with these companies, tech giants’ stated goals take on a sinister cast.

Whether it’s “bringing the world closer together” (Facebook), “organizing the world’s information” (Google), to be a market “where customers can find and discover anything they might want to buy online” (Amazon) or “to make personal computing accessible to each and every individual” (Apple), the founding missions of tech giants reveal a desire to become indispensable to our digital lives.

They’ve succeeded. We’ve entrusted these companies with our sensitive data, from family photos to finances to correspondence. We’ve let them take over our communities, from medical and bereavement support groups to little league and service organization forums. We’ve bought trillions of dollars’ worth of media from them, locked in proprietary formats that can’t be played back without their ongoing cooperation.

These services often work great…but they fail very, very badly. Tech giants can run servers to support hundreds of millions or billions of users – but they either can’t or won’t create equally user-centric procedures for suspending or terminating those users.

But as bad as tech giants’ content removal and account termination policies are, they’re paragons of sense and transparency when compared to their appeals processes. Many who try to appeal a tech company’s judgment quickly find themselves mired in a Kafkaesque maze of automated emails (to which you often can’t reply), requests for documents that either don’t exist or have already been furnished on multiple occasions, and high-handed, terse “final judgments” with no explanations or appeal.

The tech giants argue that they are entitled to run their businesses largely as they see fit: if you don’t like the house rules, just take your business elsewhere. These house rules are pretty arbitrary: platforms’ public-facing moderation policies are vaguely worded and subject to arbitrary interpretation, and their account termination policies are even more opaque. 

Kafka Was An Optimist

All of that would be bad enough, but when it is combined with the tech companies’ desire to dominate your digital life and become indispensable to your daily existence, it gets much worse.

Losing your cloud account can cost you decades of your family photos. Losing access to your media account can cost you access to thousands of dollars’ worth of music, movies, audiobooks and ebooks. Losing your IoT account can render your whole home uninhabitable, freezing the door locks while bricking your thermostat, burglar alarm and security cameras. 

But really, it’s worse than that: you will incur multiple losses if you get kicked off just one service. Losing your account with Amazon, Google or Apple can cost you access to your home automation and security, your mobile devices, your purchased ebooks/audiobooks/movies/music, and your photos. Losing your Apple or Google account can cost you decades’ worth of personal correspondence – from the last email sent by a long-dead friend to that file-attachment from your bookkeeper that you need for your tax audit. These services are designed to act as your backup – your offsite cloud, your central repository – and few people understand or know how to make a local copy of all the data that is so seamlessly whisked from their devices onto big companies’ servers.

In other words, the tech companies set out to make us dependent on them for every aspect of our online lives, and they succeeded – but when it comes to kicking you off their platforms, they still act like you’re just a bar patron at last call, not someone whose life would be shattered if they cut you off.

YouTubers Warned Us

This has been brewing for a long time. YouTubers and other creative laborers have long suffered under a system where the accounts on which they rely to make their livings could be demonetized, suspended or deleted without warning or appeal. But today, we’re all one bad moderation call away from having our lives turned upside-down.

The tech giants’ conquest of our digital lives is just getting started. Tech companies want to manage our health, dispense our medication, take us to the polls on election day, televise our political debates and teach our kids. Each of these product offerings comes with grandiose pretensions to total dominance – it’s not enough for Amazon Pharmacy to be popular, it will be the most popular, leveraging Amazon’s existing business to cut off your corner druggist’s market oxygen (Uber’s IPO included a plan to replace all the world’s public transit and taxi vehicles with rideshares). 

If the tech companies deliver on their promises to their shareholders, then being locked out of your account might mean being locked out of whole swathes of essential services, from buying medicine to getting to work.

Well, How Did We Get Here?

How did the vibrant electronic frontier become a monoculture of “five websites, each consisting of screenshots of text from the other four?” 

It wasn’t an accident. Tech, copyright, contract and competition policy helped engineer this outcome, as did VCs and entrepreneurs who decided that online businesses were only worth backing if they could grow to world-dominating scale.

Take laws like Section 1201 of the Digital Millennium Copyright Act, a broadly worded prohibition on tampering with or removing DRM, even for lawful purposes. When Congress passed the DMCA in 1998, they were warned that protecting DRM – even when no copyright infringement took place – would leave technology users at the mercy of corporations. You may have bought your textbooks or the music you practice piano to, but if it’s got DRM and the company that sold it to you cuts you off, the DMCA does not let you remove that DRM (say goodbye to your media). 

Companies immediately capitalized upon this dangerously broad law: they sold you media that would only play back on the devices they authorized. That locked you into their platform and kept you from defecting to a rival, because you couldn’t take your media with you. 

But even as DRM formats proliferated, the companies that relied on them continued to act like kicking you off their platforms was like the corner store telling you to buy your magazines somewhere else – not like a vast corporate empire of corner stores sending goons to your house to take back every newspaper, magazine and paperback you ever bought there, with no appeal.

It’s easy to see how the DMCA and DRM give big companies far-reaching control over your purchases, but other laws have had a similar effect. The Computer Fraud and Abuse Act (CFAA), another broadly worded mess of a law, is so badly drafted that tech companies were able to claim for decades that simply violating their terms of service could be a crime – a chilling claim that was only put to rest by the Supreme Court this summer.

From the start, tech lawyers and the companies they worked for set things up so that most of the time, our digital activities are bound by contractual arrangements, not ownership. These are usually mass contracts, with one-sided terms of service. They’re end user license agreements that ensure that the company has a simple process for termination without any actual due process, much less strong remedies if you lose your data or the use of your devices.  

CFAA, DMCA, and other rules allowing easy termination and limiting how users and competitors could reconfigure existing technology created a world where doing things that displeased a company’s shareholders could literally be turned into a crime – a kind of “felony contempt of business-model.” 

These kinds of shady business practices wouldn’t have been quite so bad if there were a wide variety of small firms that allowed us to shop around for a better deal. 

Unfortunately, the modern tech industry was born at the same moment as American antitrust law was being dismantled – literally. The Apple ][+ appeared on shelves the same year Ronald Reagan hit the campaign trail. After winning office, Reagan inaugurated a 40-year, bipartisan project to neuter antitrust law: allowing incumbents to buy and crush small companies before they could grow to be threats, letting giant companies merge with their direct competitors, and looking the other way while companies established “vertical monopolies” that controlled their whole supply chains.

Without any brakes, the runaway merger train went barrelling along, picking up speed. Today’s tech giants buy companies more often than you buy groceries, and it has turned the tech industry into a “kill-zone” where innovative ideas go to die.

How is it that you can wake up one day and discover you’ve lost your Amazon account, and get no explanation? How is it that this can cost you the server you run your small business on, a decade of family photos, the use of your ebook reader and mobile phone, and access to your entire library of ebooks, movies and audiobooks?


Amazon is in so many parts of your life because it was allowed to merge with small competitors, create vertical monopolies, wrap its media with DRM – and never take on any obligations to be fair or decent to customers it suspected of some unspecified wrongdoing. 

Not just Amazon, either – every tech giant has an arc that looks like Amazon’s, from the concerted effort to make you dependent on its products, to the indifferent, opaque system of corporate “justice” governing account termination and content removal.

Fix the Tech Companies

Companies should be better. Moderation decisions should be transparent, rules-based, and follow basic due process principles. All of this – and more – has been articulated in detail by an international group of experts from industry, the academy, and human rights activism, in an extraordinary document called The Santa Clara Principles. Tech companies should follow these rules when moderating content, because even if they are free to set their own house rules, the public has the right to tell them when those rules suck and to suggest better ones.

If a company does kick you off its platform – or if you decide to leave – they shouldn’t be allowed to hang onto your data (or just delete it). It’s your data, not theirs. The concept of a “fiduciary” – someone with a duty to “act in good faith” towards you – is well-established. If you fire your lawyer (or if they fire you as a client), they have to give you your files. Ditto your doctor or your mental health professional. 

Many legal scholars have proposed creating “information fiduciary” rules that create similar duties for firms that hold your data. This would impose a “duty of loyalty” (to act in the best interests of their customers, without regard to the interests of the business), and a “duty of care” (to act in the manner expected by a reasonable customer under the circumstances). 

Not only would this go a long way to resolving the privacy abuses that plague our online interactions – it would also guarantee you the right to take your data with you when you left a service, whether that departure was your idea or not. 

Information fiduciary rules aren’t the only way to get companies to be responsible. Direct consumer protection laws – such as requiring companies to make your content readily available to you in the event of termination – could work too, and there are other approaches as well. How these rules would apply would depend on the content a service hosts as well as the size of the business you’re dealing with – small companies would struggle to meet the standards we’d expect of giant companies. But every online service should have some duties to you – if the company that just kicked you off its servers and took your wedding photos hostage is a two-person operation, you still want your pictures back!

Fix the Internet

Improving corporate behavior is always a laudable goal, but the real problem with giant companies that are entwined in your life in ways you can’t avoid isn’t that those companies wield their incredible power unwisely. It’s that they have that power in the first place.

To give power to internet users, we have to take it away from giant internet companies. The FTC – under new leadership – has pledged that it will end decades of waving through anticompetitive mergers. That’s just for openers, though. Competition scholars and activists have made the case for the harder task of breaking up the giants, literally cutting them down to size.

But there’s more. Congress is considering the ACCESS Act, landmark legislation that would force the largest companies to interoperate with privacy-respecting new rivals, who’d be banned from exploiting user data. If the ACCESS Act passes, it will dramatically lower the high switching costs that keep us locked into big platforms even though we don’t like the way they operate. It also protects folks who want to develop tools to make it easier for you to take your data when you leave, whether voluntarily or because your account is terminated.

That’s how we’ll turn the internet back into an ecosystem of companies, co-ops and nonprofits of every size that can take receipt of your data, and offer you an online base of operations from which you can communicate with friends, communities and customers regardless of whether they’re on the indieweb or inside a Big Tech silo.

That still won’t be enough, though. The fact that terms of service, DRM, and other technologies and laws can prevent third parties from supplying software for your phone, playing back the media you’ve bought, and running the games you own still gives big companies too much leverage over your digital life.

That’s why we need to restore the right to interoperate, in all its guises: competitive compatibility (the right to plug new products and services into existing ones, with or without permission from their manufacturers), bypassing DRM (we’re suing to make this happen!), the right to repair (a fight we’re winning!) and an end to abusive terms of service (the Supreme Court got this one right).

Digital Rights are Human Rights

When we joined this fight 30 long years ago, very few people got it. Our critics jeered at the very idea of “digital rights” – as if the nerdfights over Star Trek forums could somehow be compared to history’s great struggles for self-determination and justice! Even a decade ago, the idea of digital rights was still met with skepticism.

But we didn’t get into this to fight for “digital rights” – we’re here to defend human rights. The merger of the “real world” and the “virtual world” could be argued over in the 1990s, but not today, not after a lockdown where the internet became the nervous system for the planet, a single wire we depended on for free speech, a free press, freedom of assembly, romance, family, parenting, faith, education, employment, civics and politics.

Today, everything we do involves the internet. Tomorrow, everything will require it. We can’t afford to let our digital citizenship be reduced to a heavy-handed mess of unreadable terms of service and broken appeals processes.

We have the right to a better digital future – a future where the ambitions of would-be monopolists and their shareholders take a back-seat to fairness, equity, and your right to self-determination.

Online reputation rating: it is lawful if the operating mechanism of the algorithm is disclosed, says the Italian Supreme Court

By Order No. 14382/2021, the Italian Supreme Court ruled on the lawfulness of personal data processing carried out through an online reputation-rating platform.

The Order was issued following the appeal brought by the Italian Data Protection Authority (DPA) against a decision of the Court of Rome of 4 April 2018, which had accepted the appeal brought by the association Mevaluate Onlus. The latter had challenged the decision of the DPA (commented upon here in our blog), which had prohibited any processing operation carried out by Mevaluate in connection with the services offered through the “Mevaluate Immaterial Infrastructure for Professional Qualification”.

Brazil: Cookies under the LGPD

Law No. 13.709 of 14 August 2018, General Personal Data Protection Law (as amended by Law No. 13.853 of 8 July 2019) (‘LGPD’) is still in its early phases, meaning that it may be difficult to understand how it applies to technologies such as cookies that have been a focus of many European supervisory authorities. Fabricio da Mota Alves and Gregório Paulo Rampche de Almeida, Partner and Lawyer respectively at Serur Advogados, discuss cookies under the LGPD with reference to approaches taken in the EU and how these may be relevant to Brazilian organisations.

[Austria] Supreme Court refers questions on Facebook’s data processing to the EU Court of Justice

Via Semanário InternetLab

On July 20, the Austrian Supreme Court referred four questions to the Court of Justice of the European Union concerning the compatibility of Facebook’s data processing activities with the GDPR (General Data Protection Regulation). The court granted the request of Max Schrems in a lawsuit against Facebook, in which he alleges that the platform deprives users of the rights and protections guaranteed by European data protection law. The first question referred to the CJEU concerns the legal bases for Facebook’s data processing: the Austrian Supreme Court asks whether the aggregation and analysis of data for personalized advertising should be assessed under the legal basis of consent or of performance of a contract. The second question concerns data minimization: the Court asks whether the GDPR’s data minimization rules allow all personal data collected by a platform to be aggregated, analyzed, and processed for targeted advertising without any restriction as to the time or nature of the data. It further asks whether the prohibition on processing sensitive data should apply to targeted advertising based on the analysis of interests from which, for example, sexual or political orientation can be inferred, even if the controller does not distinguish those interests as sensitive or non-sensitive. Finally, the Court asks whether a statement made on the platform about sensitive data, such as sexual orientation, with a view to public discussion, may be used for personalized advertising. The Court also ruled that Schrems should receive €500 in symbolic emotional damages because Facebook did not give him full access to his data. This is Schrems’s third action against Facebook to come before the CJEU.
In 2015, the Court struck down a data privacy arrangement between the EU and the US known as the Safe Harbor Principles and, in 2020, it invalidated the Privacy Shield, a data-sharing framework that allowed US companies to transfer personal information about EU citizens to the US for processing.