Categories
free speech International WikiLeaks

EFF Statement on British Court’s Rejection of Trump Administration’s Extradition Request for Wikileaks’ Julian Assange

Today, a British judge denied the Trump Administration’s extradition request for Wikileaks Editor Julian Assange, who is facing charges in the United States under the Espionage Act and the Computer Fraud and Abuse Act. The judge largely confirmed the charges against him, but ultimately determined that the United States’ extreme procedures for confinement that would be applied to Mr. Assange would create a serious risk of suicide.

EFF’s Executive Director Cindy Cohn said in a statement today: 

“We are relieved that District Judge Vanessa Baraitser made the right decision to reject extradition of Mr. Assange, and, despite the U.S. government’s initial statement, we hope that the U.S. chooses not to appeal it. The decision means that Assange will not face charges in the United States, which could have set a dangerous precedent in two ways. First, it could call into question many of the journalistic practices that writers at the New York Times, the Washington Post, Fox News, and other publications engage in every day to ensure that the public stays informed about the operations of its government. Investigative journalism—including seeking, analyzing and publishing leaked government documents, especially those revealing abuses—has a vital role in holding the U.S. government to account. It is, and must remain, strongly protected by the First Amendment. Second, the prosecution, and the judge’s decision, embraces a theory of computer crime that is overly broad, essentially criminalizing a journalist for discussing and offering help with basic computer activities, like the use of rainbow tables and scripts based on wget, that are regularly used in computer security and elsewhere.

While we applaud this decision, it does not erase the many years Assange has been dogged by prosecution, detainment, and intimidation for his journalistic work. Nor does it erase the government’s argument, which, as in so many other cases, attempts to cast a criminal pall over basic actions because they were done with a computer. We are still reviewing the judge’s opinion and expect to have additional thoughts once we’ve completed our analysis.”

Read the judge’s full statement. 

Categories
Commentary Creativity & Innovation free speech ICANN International

How We Saved .ORG: 2020 in Review

If you come at the nonprofit sector, you’d best not miss.

Nonprofits and NGOs around the world were stunned last November when the Internet Society (ISOC) announced that it had agreed to sell the Public Interest Registry—the organization that manages the .ORG top-level domain (TLD)—to private equity firm Ethos Capital. EFF and other leaders in the NGO community sprang into action, writing a letter to ISOC urging it to stop the sale. What followed was possibly the most dramatic show of solidarity from the nonprofit sector of all time. And we won.


Prior to the announcement, EFF had spent six months voicing our concerns to the Internet Corporation for Assigned Names and Numbers (ICANN) about the 2019 .ORG Registry Agreement, which gave the owner of .ORG new powers to censor nonprofits’ websites (the agreement also lifted a longstanding price cap on .ORG registrations and renewals).

The Registry Agreement gave the owner of .ORG the power to implement processes to suspend domain names based on accusations of “activity contrary to applicable law.” It effectively created a new pressure point that repressive governments, corporations, and other bad actors can use to silence their critics without going through a court. That should alarm any nonprofit or NGO, especially those that work under repressive regimes or frequently criticize powerful corporations.

Throughout that six-month process of navigating ICANN’s labyrinthine decision-making structure, none of us knew that ISOC would soon be selling PIR. With .ORG in the hands of a private equity firm, those fears of censorship and price gouging became a lot more tangible for nonprofits and NGOs. The power to take advantage of .ORG users was being handed to a for-profit company whose primary obligation was to make money for its investors.

Oversight by a nonprofit was always part of the plan for .ORG. When ISOC competed in 2002 for the contract to manage the TLD, it used its nonprofit status as a major selling point. As ISOC’s then president Lynn St. Amour put it, PIR would “draw upon the resources of ISOC’s extended global network to drive policy and management.”

More NGOs began to take notice of the .ORG sale and the danger it posed to nonprofits’ freedom of expression online. Over 500 organizations and 18,000 individuals had signed our letter by the end of 2019, including big-name organizations like Greenpeace, Consumer Reports, Oxfam, and the YMCA of the USA. At the same time, questions began to emerge (PDF) about whether Ethos Capital could possibly make a profit without some drastic changes in policy for .ORG.

By the beginning of 2020, the financial picture had become a lot clearer: Ethos Capital was paying $1.135 billion for .ORG, nearly a third of which was financed by a loan. No matter how well-meaning Ethos was, the pressure to sell “censorship as a service” would align with Ethos’ obligation to produce returns for its investors. The sector’s concerns were well-founded: the registry Donuts entered a private deal with the Motion Picture Association in 2016 to fast-track suspensions of domains that MPA claims infringe on its members’ copyrights. It’s fair to ask whether PIR would engage in similar practices under the leadership of Donuts co-founder Jonathon Nevett. Six members of Congress wrote a letter to ICANN in January urging it to scrutinize the sale more carefully.

A few days later, EFF, nonprofit advocacy group NTEN, and digital rights groups Fight for the Future and Demand Progress participated in a rally outside of the ICANN headquarters in Los Angeles. Our message was simple: stop the sale and create protections for nonprofits. Before the protest, ICANN staff reached out to the organizers offering to meet with us in person, but on the day of the protest, ICANN canceled on us. That same week, Amnesty International, Access Now, the Sierra Club, and other global NGOs held a press conference at the World Economic Forum to tell world leaders that selling .ORG threatens civil society. All of the noise caught the attention of California Attorney General Xavier Becerra, who wrote to ICANN (PDF) asking it for key information about its review of the sale.


Recognizing that the heat was on, Ethos Capital and PIR hastily tried to build bridges with the nonprofit sector. Ethos attempted to convene a secret meeting with NGO sector leaders in February, and then abruptly canceled it. Ethos then announced that it would voluntarily limit price increases on .ORG registrations and renewals and establish a “stewardship council.” Like many details of the .ORG sale, the level of influence the stewardship council would have over PIR’s decisions was unclear. EFF executive director Cindy Cohn and NTEN CEO Amy Sample Ward responded in the Nonprofit Times:

The proposed “Stewardship Council” would fail to protect the interests of the NGO community. First, the council is not independent. The Public Interest Registry (PIR) board’s ability to veto nominated members would ensure that the council will not include members willing to challenge Ethos’ decisions. PIR’s handpicked members are likely to retain their seats indefinitely. The NGO community must have a real say in the direction of the .ORG registry, not a nominal rubber stamp exercised by people who owe their position to PIR.

Even Ethos’ promise to limit fee increases was rather hollow: if Ethos raised fees as allowed by the proposed rules, the price of .ORG registrations would more than double over eight years. After those eight years, there would be no limits on fee increases whatsoever.

All the while, Ethos and PIR kept touting that the new ownership would bring new “products and services” for .ORG users, but they failed to give any information about what those offerings might entail. Cohn and Ward responded:

The product NGOs need from our registry operator is domain registration at a fair price that doesn’t increase arbitrarily. The service that operator must provide is to stand up to governments and other powerful actors when they demand that it silence us. It is more clear than ever that you cannot offer us either.

It’s almost poetic that the debate over .ORG reached a climax just as COVID-19 was becoming a worldwide crisis. Emergencies like this one are when the world most relies on nonprofits and NGOs; therefore, they’re also pressure tests for the sector. The crisis demonstrated that the NGO community doesn’t need fancy “products and services” from a domain registry: it needs simple, reliable, boring service. Those same members of Congress who’d scrutinized the .ORG sale wrote a more pointed letter to ICANN in March (PDF), plainly noting that there was no way that Ethos Capital could make a profit on its investment without making major changes at the expense of .ORG users.

Finally, in April, the ICANN board rejected the transfer of ownership of .ORG. “ICANN entrusted to PIR the responsibility to serve the public interest in its operation of the .ORG registry,” they wrote, “and now ICANN is being asked to transfer that trust to a new entity without a public interest mandate.”

While .ORG is safe for now, the bigger trend of registries becoming chokepoints for free speech online is as big a problem as ever. That’s why EFF is urging ICANN to reconsider its policies regarding public interest commitments—or as the Internet governance community has recently started calling them, registry voluntary commitments. Those are the additional rules that ICANN allows registries to set for specific top-level domains, like the new provisions in the .ORG Registry Agreement that allow the owner of .ORG to set policies to fast-track censoring speech online.

The story of the attempted .ORG sale is really the story of the power and resilience of the nonprofit sector. Every time Ethos and PIR tried to quell the backlash with empty promises, the sector responded even more loudly, gaining the voices of government officials, members of Congress, two UN Special Rapporteurs, and U.S. state charities regulators. As I said to that crowd of activists in front of ICANN’s offices, I’ve worked in the nonprofit sector for most of my adult life, and I’ve never seen the sector respond this unanimously to anything.

Thank you to everyone who stood up for .ORG, especially NTEN for its partnership on this campaign as a trusted leader in the nonprofit sector. If you were one of the 27,183 people who signed our open letter, or if you work for or support one of the 871 organizations that participated, then you were a part of this victory.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.

Categories
Coders' Rights Project Commentary International

The Slow-Motion Tragedy of Ola Bini’s Trial

EFF has been tracking the arrest, detention, and subsequent investigation of Ola Bini since its beginnings over 18 months ago. Bini, a Swedish-born open-source developer, was arrested in Ecuador’s Quito Airport in a flurry of media attention in April 2019. He was held without trial for ten weeks while prosecutors seized and pored over his technology, his business, and his private communications, looking for evidence linking him to an alleged conspiracy to destabilize the Ecuadorean government.

Now, after months of delay, an Ecuadorean pre-trial judge has declined to dismiss the case – despite Bini’s defense documenting over a hundred procedural and civil liberties violations made in the course of the investigation. EFF was one of the many human rights organizations, including Amnesty International, that were refused permission by the judge to act as observers at Wednesday’s hearing.

Bini was seized by police at Quito Airport shortly after Ecuador’s Interior Minister, Maria Paula Romo, held a press conference warning the country of an imminent cyber-attack. Romo spoke hours after the government had ejected Julian Assange from Ecuador’s London Embassy, and claimed that a group of Russians and Wikileaks-connected hackers were in the country, planning an attack in retaliation for the eviction. No further details of this sabotage plot were ever revealed, nor has it been explained how the Minister knew of the group’s plans in advance. Instead, only Bini was detained and imprisoned, held for 71 days without charge until a provincial court, facing a habeas corpus order, declared his imprisonment unlawful and released him to his friends and family. (Romo was dismissed as minister last month for ordering the use of tear gas against anti-government protestors.)

EFF visited Ecuador in August 2019 to investigate complaints of injustice in the case. We concluded that the Bini affair had the sadly familiar hallmarks of a politicized “hacker panic,” where media depictions of hacking super-criminals and overbroad cyber-crime laws together encourage unjust prosecutions when the political and social atmosphere demands it. (EFF’s founding in 1990 was in part due to a notorious, and similar, case pursued in the United States by the Secret Service, documented in Bruce Sterling’s The Hacker Crackdown.)

While the Ecuadorian government continues to portray him to journalists as a Wikileaks-employed malicious cybercriminal, his reputation outside the prosecution is very different. An advocate for a secure and open Internet and an expert in computer languages, Bini is primarily known for his non-profit work on the secure messaging protocol OTR and his contributions to the Java implementation of the Ruby programming language. He has also contributed to EFF’s Certbot project, which provides easy-to-use security for millions of websites. He moved to Ecuador during his employment at the global consultancy ThoughtWorks, which has an office in the country’s capital.

After several months of poring over his devices, prosecutors have been able to provide only one piece of supposedly incriminating data: a copy of a screenshot, taken by Bini himself and sent to a colleague, that shows the telnet login screen of a router. From the context, it’s clear that Bini was expressing surprise that the telco router was not firewalled, and was seeking to draw attention to this potential security issue. Bini did not go further than the login prompt in his investigation of the open machine.

Defense and prosecution will now make arguments on the admissibility of this and other non-technical evidence, and the judge will determine if and when Bini’s case will progress to a full trial in the New Year.

We, once again, urge Ecuador’s judiciary to impartially consider the shaky grounds for this case, and divorce their deliberations from the politicized framing that has surrounded this prosecution from the start.

Categories
International Necessary and Proportionate privacy ¿Quién defiende tus datos?

IPANDETEC Releases First Report Rating Nicaraguan Telecom Providers’ Privacy Policies

IPANDETEC, a digital rights organization in Central America, today released its first “Who Defends Your Data” (¿Quién Defiende Tus Datos?) report for Nicaragua, assessing how well the country’s mobile phone and Internet service providers (ISPs) are protecting users’ personal data and communications. The report follows the series of assessments IPANDETEC has conducted to evaluate the consumer privacy practices of Internet companies in Panama, joining a larger initiative across Latin America and Spain holding ISPs accountable for their privacy commitments.

The organization reviewed six companies: Claro Nicaragua, a subsidiary of the Mexican company América Móvil; Tigo Nicaragua, a subsidiary of Millicom International, headquartered in Luxembourg; Cootel Nicaragua, part of the Chinese Xinwei Group; Yota Nicaragua, a subsidiary of the Russian company Rostejnologuii; IBW, part of IBW Holding S.A., which provides telecom services across Central America; and Ideay, a local Nicaraguan company.

The ¿Quién Defiende Tus Datos? report looks at whether the companies post data protection policies on their website, disclose how much personal data they collect from users, and whether, and how often, they share it with third parties. Companies are awarded stars for transparency in each of five categories, detailed below. Shining a light on these practices allows consumers to make informed choices about what companies they should entrust their data to.

Main Findings

IPANDETEC’s review shows that, with a few exceptions, Nicaragua’s leading ISPs have a long way to go in providing transparency about how they protect user data. Only three of the six companies surveyed—Claro, Tigo, and Cootel—publish privacy policies on their websites and, with the exception of Tigo, the information provided is limited to policies for collecting data from users visiting their websites. Tigo’s policy provides partial information about data collected beyond the company’s website, earning the company a full star. Cootel comes close—its policy refers to its app Mi Cootel (My Cootel), which allows customers to manage and change services under their contracts with the company. Claro and Cootel received half stars in this category.

Claro and Tigo’s parent companies publish more comprehensive data protection policies, but they are not available on the websites of their Nicaragua subsidiaries and don’t take into account how their practices in Nicaragua comport with the country’s data protection regulations and other laws. Tigo reported it’s working to improve the information available on the local website in its reply to IPANDETEC’s request for additional information.

Tigo received a half star for its partial commitment to a policy of requiring court authorization before providing the content of users’ communications to authorities. Claro received a quarter star in this category: the ISP’s local policy on requiring court authorization is not clear enough, although the global policy of its parent company América Móvil is explicit on this requirement. Both companies have fallen short in showing a similarly explicit commitment when handing users’ metadata—such as names, subject line information, and the creation date of messages—to authorities.

Tigo earned a quarter star for making public guidelines for how law enforcement can access users’ information, primarily on account of Millicom’s global law enforcement assistance policy. The guidelines establish steps that must be taken locally when the company is responding to law enforcement requests, but they are general and not specific to Nicaragua’s legal framework. Moreover, the policy is available only in English on Millicom’s website. Claro’s subsidiaries in Chile and Peru publish such guidelines; unfortunately the company’s Nicaraguan unit does not.

The IPANDETEC report shows that all the top Internet companies in Nicaragua need to step up their transparency game—none publish transparency reports disclosing how many law enforcement requests for user data they receive. Tigo’s parent company Millicom publishes annual transparency reports, but the most recent one didn’t include information about its operations in Nicaragua. Millicom said it plans to include Nicaragua in future reports.

The companies were evaluated on specific criteria listed below. For more information on each company, you can find the full report on IPANDETEC’s website.

Data Protection Policy: Does the company post a data protection policy on its website? Is the policy written in clear and easily accessible language? Does the policy establish the retention period for user data?

Transparency Report: Does the company publish a transparency report? Is the report easily accessible? Does the report list the number of government requests received, accepted, and rejected?   

User Notification: Does the company publicly commit to notifying users, as soon as the law allows, when their information is requested by law enforcement authorities?

Judicial Authorization: Does the company publicly commit to requesting judicial authorization before handing users’ communications content and metadata to authorities?

Law Enforcement Guidelines: Does the company outline public guidelines on how law enforcement can access users’ information?

Conclusion

Digital devices and Internet access allow people to stay connected with family and friends and to access information and entertainment. But technology users around the world are concerned about privacy. It’s imperative for ISPs in Nicaragua to be transparent about whether, and how, they are safeguarding users’ private information. We hope to see big strides from them in future reports.

Categories
Call to Action free speech International Offline: Imprisoned Bloggers and Technologists

Action for Egyptian Human Rights Defenders

The undersigned organisations strongly condemn the persecution of employees of the Egyptian Initiative for Personal Rights (EIPR) and Egyptian civil society by the Egyptian government. We urge the global community and their respective governments to do the same and join us in calling for the release of detained human rights defenders and a stop to the demonisation of civil society organisations and human rights defenders by government-owned or pro-government media.

Since November 15, Egyptian authorities have escalated their crackdown on human rights defenders and civil society organizations. On November 19, Gasser Abdel-Razek, Executive Director of the Egyptian Initiative for Personal Rights (EIPR)—one of the few remaining human rights organisations in Egypt—was arrested at his home in Cairo by security forces. One day prior, EIPR’s Criminal Justice Unit Director, Karim Ennarah, was arrested while on vacation in Dahab. The organization’s Administrative Manager, Mohamed Basheer, was also taken from his home in Cairo in the early morning hours of November 15.

All three appeared in front of the Supreme State Security Prosecution where they were charged with joining a terrorist group, spreading false news, and misusing social media, and were remanded into custody and given 15 days of pre-trial detention.

The security services’ interrogations, and the prosecution’s subsequent questioning, of EIPR’s leaders focused on the organisation’s activities, the reports it has issued, and its human rights advocacy, especially a meeting held in early November by EIPR and attended by a number of ambassadors and diplomats accredited to Egypt from several European countries and Canada, as well as the representative of the European Union.

The detention of EIPR staff means one thing: Egyptian authorities are continuing to commit human rights violations with full impunity. This crackdown comes amidst a number of other cases in which the prosecution and investigation judges have used pre-trial detention as a method of punishment. Egypt’s counterterrorism law was amended in 2015 under President Abdel-Fattah al-Sisi so that pre-trial detention can be extended for two years and, in terrorism cases, indefinitely. A number of other human rights defenders—including Mahienour el-Masry, Mohamed el-Baqer, Solafa Magdy, Alaa Abd El Fattah, Sanaa Seif, and Esraa Abdelfattah — are currently held in prolonged pre-trial detention. EIPR researcher Patrick George Zaki remains detained pending investigations by the Supreme State Security Prosecution (SSSP) over unfounded “terrorism”-related charges since his arrest in February 2020. Amnesty International has extensively documented how Egypt’s SSSP uses extended pre-trial detention to imprison opponents, critics, and human rights defenders over unfounded charges related to terrorism for months or even years without trial. 

In addition to these violations, Gasser Abdel-Razek told his lawyer that he received inhumane and degrading treatment in his cell that puts his health and safety in danger. He further elaborated that he was never allowed out of the cell; had only a metal bed to sleep on, with neither mattress nor covers save for a light blanket; was deprived of all his possessions and money; was given only two light summer garments; and was denied the right to use his own money to purchase food and essentials from the prison’s canteen. His head was shaved completely.

The manner in which Egypt treats members of its civil society cannot continue, and we, an international coalition of human rights and civil society actors, denounce in the strongest terms the arbitrary use of pre-trial detention as a form of punishment. The detention of EIPR staff is the latest example of how Egyptian authorities crack down on civil society with full impunity. It’s time to hold the Egyptian government accountable for its human rights abuses and crimes. Join us in calling for the immediate release of EIPR staff and an end to the persecution of Egyptian civil society.

Signed,

Access Now
Africa Freedom of Information Centre (AFIC)
Americans for Democracy & Human Rights in Bahrain (ADHRB)
Arabic Network for Human Rights Information (ANHRI)
ARTICLE 19 
Association of Caribbean Media Workers (ACM)
Association for Freedom of Thought and Expression (AFTE)
Association for Progressive Communications (APC)
Cairo Institute for Human Rights Studies (CIHRS)
Center for Democracy & Technology
Committee for Justice (CFJ)
Digital Africa Research Lab
Digital Rights Foundation
Egyptian Front for Human Rights
Electronic Frontier Foundation (EFF)
Elektronisk Forpost Norge (EFN)
epicenter.works – for digital rights
Fight for the Future
Free Media Movement (FMM)
Fundación Andina para la Observación y el Estudio de Medios (Fundamedios)
The Freedom Initiative
Fundación Ciudadanía Inteligente
Globe International Center
Gulf Centre for Human Rights (GCHR)
Homo Digitalis 
Human Rights Watch
Hungarian Civil Liberties Union (HCLU)
Index on Censorship
Independent Journalism Center Moldova (IJC-Moldova)
International Press Centre (IPC) Lagos-Nigeria
International Press Institute (IPI)
Initiative for Freedom of Expression – Turkey (IFoX)
International Free Expression Project
Masaar – Technology and Law Community
Mediacentar Sarajevo
Media Foundation for West Africa (MFWA)
Media Institute of Southern Africa (MISA) – Zimbabwe
MENA Rights Group
Mnemonic
Myanmar ICT for Development Organization (MIDO)
Open Observatory of Network Interference (OONI)
Pacific Islands News Association (PINA)
Pakistan Press Foundation (PPF)
PEN Canada
PEN Norway
Privacy International (PI)
Public Foundation for Protection of Freedom of Speech (Adil Soz)
R3D: Red en Defensa de los Derechos Digitales 
Reporters Sans Frontières (RSF)
Scholars at Risk (SAR)
Skyline International Foundation
Social Media Exchange (SMEX)
South East Europe Media Organisation (SEEMO)
Statewatch (UK)
Vigilance for Democracy and the Civic State

Categories
Commentary free speech International Offline: Imprisoned Bloggers and Technologists

EFF Condemns Egypt’s Latest Crackdown

We are quickly approaching the tenth anniversary of the Egyptian revolution, a powerfully hopeful time in history when—despite all odds—Egyptians rose up against an entrenched dictatorship and shook it from power, with the assistance of new technologies. Though the role of social media has been hotly debated and often overplayed, technology most certainly played a part: Egyptian activists demonstrated the potential of social media for organizing and for disseminating key information globally.

2011 was a hopeful time, but hope quickly gave way to repression—repression that has increased significantly this year, especially in recent months as the Egyptian government, under President Abdel Fattah Al-Sisi, has systematically persecuted human rights defenders and other members of civil society. In the hands of the state, technology was and still is used to censor and surveil citizens.

In 2013, Sisi’s government passed a law criminalizing unlicensed street demonstrations; that law has since been frequently used to criminalize online speech by activists. Two years later, the government adopted a sweeping counterterrorism law that has since been amended to allow for even greater repression. The new provisions of the law were criticized in April by the UN Special Rapporteur on human rights and counterterrorism, Fionnuala D. Ní Aoláin, who stated that they would “profoundly impinge on a range of fundamental human rights”.

But it is the government’s enactment of Law 180 of 2018 Regulating the Press and Media that has had perhaps the most widespread recent impact on free expression online. The law stipulates that press institutions, media outlets, and news websites must not broadcast or publish any information that violates constitutional principles, and it grants authorities the power to ban or suspend the distribution or operations of any publication, media outlet, or even social media account (with more than 5,000 followers) that is deemed to threaten national security, disturb the public peace, or promote discrimination, violence, racism, hatred, or intolerance. Additionally, Law No. 175 of 2018 on Anti-Cybercrime grants authorities the power to block or suspend websites deemed threatening to national security or the national economy.

A new escalation

In the past two weeks, Egyptian authorities have escalated their crackdown on human rights defenders and civil society organisations. On November 15, Mohammed Basheer, a staffer at the Egyptian Initiative for Personal Rights (EIPR) was arrested at his Cairo home in the early morning hours. Three days later, the organization’s criminal justice unit director, Karim Ennarah, was arrested while on vacation in Dahab. Most recently, Executive Director Gasser Abdel-Razek was arrested at his home by security forces.

All three appeared in front of the Supreme State Security Prosecution and were charged with “joining a terrorist group,” “spreading false news,” and “misusing social media.” They were remanded into custody and sent to fifteen days of pre-trial detention—a tactic commonly used by the Egyptian state as a form of punishment.

In the same week, Egyptian authorities placed 30 individuals on a terrorism watch list, accusing them of joining the Muslim Brotherhood. Among them is blogger, technologist, activist, and friend of EFF, Alaa Abd El Fattah.

A blogger and free software developer, Alaa has the distinction of having been detained under every head of state during his lifetime. In March 2019, he was released after serving a five-year sentence for his role in the peaceful demonstrations of 2011. As part of his parole, he was meant to spend every night at a police station for five years.

But in September of last year, he was re-arrested over allegations of publishing false news and inciting people to protest. He has been held without trial ever since, and as of this week is marked as a terrorist by the Egyptian state.

This designation lays bare the dangers of entrusting individual states with the ability to define “terrorism” for the global internet. While Egypt has used this designation to attack human rights defenders, the country is not alone in politicizing the definition. And at a time when governments are banding together to “eliminate terrorist and extremist content online” through efforts like the Christchurch Call (EFF is a member of its advisory network), it is imperative that social media companies, civil society, and states alike exercise great care in defining what qualifies as “terrorism.” We must not simply trust individual governments’ definitions.

A call for solidarity

EFF condemns the recent actions of the Egyptian government and stands in solidarity with our colleagues at EIPR and the many activists and human rights defenders imprisoned by the Sisi government. And we urge other governments and the incoming Biden administration to stand against repression and hold Egypt’s government accountable for its actions.

As the great Martin Luther King Jr. once wrote: “Injustice anywhere is a threat to justice everywhere.”


Categories
Intelwars International Necessary and Proportionate privacy ¿Quién defiende tus datos?

Peru’s Third Who Defends Your Data? Report: Stronger Commitments from ISPs, But Imbalances and Gaps to Bridge

Hiperderecho, Peru’s leading digital rights organization, today launched its third ¿Quién Defiende Tus Datos? (Who Defends Your Data?)–a report that seeks to hold telecom companies accountable for their users’ privacy. The new Peruvian edition shows improvements compared to 2019’s evaluation.

Movistar and Claro now commit to requiring a warrant before handing over both users’ communications content and metadata to the government. The two companies also earned credit for defending users’ privacy before Congress or for challenging government requests; neither scored any stars in this category last year. Claro stands out with detailed law enforcement guidelines, including an explanatory chart of the procedures the company follows when it receives law enforcement requests for communications data, though it should be more specific about the type of communications data the guidelines cover. All companies received full stars for their privacy policies, while only three did so in the previous report. Overall, Movistar and Claro are tied in the lead. Entel and Bitel lag behind, with Entel holding a slight advantage.

¿Quién Defiende Tus Datos? is part of a series across Latin America and Spain carried out in collaboration with EFF and inspired by our Who Has Your Back? project. This year’s edition evaluates the four largest Internet Service Providers (ISPs) in Peru: Telefónica-Movistar, Claro, Entel, and Bitel.

Hiperderecho assessed Peruvian ISPs on seven criteria concerning privacy policies, transparency, user notification, judicial authorization, defense of human rights, digital security, and law enforcement guidelines. Compared to last year, the report adds two new categories: whether ISPs publish law enforcement guidelines and whether companies commit to users’ digital security. The full report is available in Spanish, and here we outline the main results:

Regarding transparency reports, Movistar leads the way, earning a full star, while Claro receives a partial star. To earn credit, a report had to provide useful data about how many requests were received and how many times the company complied. It should also include details about the government agencies that made the requests and the authorities’ justifications. For the first time, Claro has provided statistical figures on government demands that require the “lifting of the secrecy of communication” (LST). However, Claro has failed to clarify which types of data (IP addresses and other technical identifiers) are protected under this legal regime. Since Peru’s Telecommunications Law and its regulation protect both content and personal information obtained through the provision of telecom services under communications secrecy, we assume Claro might include both. Yet, as a best practice, the ISP should be more explicit about the types of data, including technical identifiers, protected under communications secrecy. Like Movistar, Claro should also break down its statistics on government requests into content interception and metadata.

Movistar and Claro have published their law enforcement guidelines. While Movistar only released a general global policy applicable to its subsidiaries, Claro stands out with detailed guidelines for Peru, including an explanatory chart of the procedures the company follows when it receives law enforcement requests for communications data. On the downside, the document refers broadly to “lifting the secrecy of communication” requests without defining what that entails. It should give users greater insight into which kinds of data are included in the outlined procedures and whether they mostly concern authorities’ access to communications content or refer to specific metadata requests.

Entel, Bitel, Claro, and Movistar have published easy-to-understand privacy policies applicable to their services. All of the ISPs’ policies provide information about the data collected (such as name, address, and records related to the provision of service) and the cases in which the company shares personal data with third parties. Claro and Movistar receive full credit in the judicial authorization category for having policies or other documents indicating their commitment to request a judicial order before handing over communications data unless the law mandates otherwise. Similarly, Entel states that it shares users’ data with the government in compliance with the law. Peruvian law grants the specialized police investigation unit the power to request access to metadata from telecom operators in specific emergencies set out by Legislative Decree 1182, with subsequent judicial review.

Latin American countries still have a long way to go in shedding enough light on government surveillance practices. Publishing meaningful transparency reports and law enforcement guidelines are two critical measures that companies should commit to. User notification is the third. In Peru, none of the ISPs have committed to notifying users of a government request at the earliest moment allowed by law. Yet Movistar and Claro have provided further information on their reasons for this refusal and their interpretation of the law.

In the digital security category, all companies received credit for using HTTPS on their websites and for providing secure methods, such as two-step authentication, in their online channels. All companies but Bitel scored for the promotion of human rights. While Entel receives a partial score for joining local multi-stakeholder forums, Movistar and Claro earn full stars in this category. Among other actions, Movistar has sent comments to Congress in favor of users’ privacy, and Claro has challenged a disproportionate request issued by the country’s tax administration agency (SUNAT) before Peru’s data protection authority.

We are glad to see that Peru’s third report shows significant progress, but much remains to be done to protect users’ privacy. Entel and Bitel have to catch up with the larger regional providers, and Movistar and Claro can go further to complete their chart of stars. Hiperderecho will remain vigilant through its ¿Quién Defiende Tus Datos? reports.

Categories
Big tech Intelwars International Surveillance and Human Rights transparency

EFF to Supreme Court: American Companies Complicit in Human Rights Abuses Abroad Should Be Held Accountable

For years EFF has been calling for U.S. companies that act as “repression’s little helpers” to be held accountable, and now we’re telling the U.S. Supreme Court. Despite all the ways that technology has been used as a force for good—connecting people around the world, giving voice to the less powerful, and facilitating knowledge sharing—technology has also been used as a force multiplier for repression and human rights violations, a dark side that cannot be denied.

Today EFF filed a brief urging the Supreme Court to preserve one of the few tools of legal accountability that exist for companies that intentionally aid and abet foreign repression: the Alien Tort Statute (ATS). We told the court what we and others have been seeing over the past decade or so: surveillance, communications, and database systems, just to name a few, have been used by foreign governments—with the full knowledge and assistance of the U.S. companies selling those technologies—to spy on and track down activists, journalists, and religious minorities who have been imprisoned, tortured, and even killed.

Specifically, we asked the Supreme Court today to rule that U.S. corporations can be sued by foreigners under the ATS and taken to court for aiding and abetting gross human rights abuses. The court is reviewing an ATS lawsuit brought by former child slaves from Côte d’Ivoire, who claim two American companies, Nestlé USA and Cargill, aided in the abuse they suffered by providing financial support to the cocoa farms they were forced to work on. The ATS allows noncitizens to bring a civil claim in U.S. federal court against a defendant that violated human rights laws. The companies are asking the court to rule that companies cannot be held accountable under the law, and that only individuals can.

We were joined in the brief by the leading organizations tracking the sale of surveillance technology: Access Now, Article 19, Privacy International, the Center for Long-Term Cybersecurity, and Ronald Deibert, director of Citizen Lab at the University of Toronto. We told the court that the Nestlé case does not just concern chocolate and children. The outcome will have profound implications for millions of Internet users and other citizens of countries around the world. Why? Because providing sophisticated surveillance and censorship products and services to foreign governments is big business for some American tech companies. The fact that their products are clearly being used as tools of oppression seems not to matter. Here are a few examples we cite in our brief:

Cisco custom-built the so-called “Great Firewall” in China, also known as the “Golden Shield,” which enables the government to conduct Internet surveillance and censorship against its citizens. Company documents have revealed that, as part of its marketing pitch to China, Cisco built a specific “Falun Gong module” into the Golden Shield that helped Chinese authorities efficiently identify and locate members of the Falun Gong religious minority, who were then apprehended and subjected to torture, forced conversion, and other human rights abuses. Falun Gong practitioners sued Cisco under the ATS in a case currently pending in the U.S. Court of Appeals for the Ninth Circuit. EFF has filed briefs siding with the plaintiffs throughout the case.

Ning Xinhua, a pro-democracy activist from China, just last month sued the successor companies, founder, and former CEO of Yahoo! under the ATS for sharing his private emails with the Chinese government, which led to his arrest, imprisonment, and torture.

Recently, the government of Belarus used technology from Sandvine, a U.S. network equipment company, to block much of the Internet during the disputed presidential election in August (the company canceled its contract with Belarus because of the censorship). The company’s technology is also used by Turkey, Syria, and Egypt against Internet users to redirect them to websites that contain spyware or block their access to political, human rights, and news content.

We also cited a case against IBM in which we filed a brief in support of the plaintiffs, victims of apartheid, who sued under the ATS on claims that the tech giant aided and abetted the human rights abuses they suffered at the hands of the South African government. IBM created a customized computer-based national identification system that facilitated the “denationalization” of the country’s Black population. Its customized technology enabled efficient identification, racial categorization, and forced segregation, furthering the systemic oppression of South Africa’s native population. Unfortunately, the case was dismissed by the U.S. Court of Appeals for the Second Circuit.

The Supreme Court has severely limited the scope of the ATS in several rulings over the years. The court is now being asked to essentially grant immunity from the ATS to U.S. corporations. That would be a huge mistake. Companies that provide products and services to customers that clearly intend to, and do, use them to commit gross human rights abuses must be held accountable for their actions. We don’t think companies should be held liable just because their technologies ended up in the hands of governments that use them to hurt people. But when technology corporations custom-make products for governments that are plainly using them to commit human rights abuses, they cross a moral, ethical, and legal line.

We urge the Supreme Court to hold that U.S. courts are open when a U.S. tech company decides to put profits over basic human rights, and people in foreign countries are seriously harmed or killed by those choices.


Categories
EU Policy Intelwars International Policy Analysis

EU Parliament Paves the Way for an Ambitious Internet Bill

The European Union has taken the first step toward a significant overhaul of its core platform regulation, the e-Commerce Directive.

In order to inspire the European Commission, which is currently preparing a proposal for a Digital Services Act Package, the EU Parliament has voted on three related Reports (IMCO, JURI, and LIBE reports), which address the legal responsibilities of platforms regarding user content, include measures to keep users safe online, and set out special rules for very large platforms that dominate users’ lives.

EFF’s Clear Footprint

Ahead of the votes, together with our allies, we argued for preserving what works for a free Internet and innovation, such as retaining the E-Commerce Directive’s approach of limiting platforms’ liability for user content and banning Member States from imposing obligations to track and monitor users’ content. We also stressed that it is time to fix what is broken: to imagine a version of the Internet where users have a right to remain anonymous, enjoy substantial procedural rights in the context of content moderation, can have more control over how they interact with content, and have a true choice over the services they use through interoperability obligations.

It’s a great first step in the right direction that all three EU Parliament reports have considered EFF’s suggestions. There is an overall agreement that platform intermediaries have a pivotal role to play in ensuring the availability of content and the development of the Internet. Platforms should not be held responsible for ideas, images, videos, or speech that users post or share online, and they should not be forced to monitor and censor users’ content and communication–for example, using upload filters. The reports also make a strong call to preserve users’ privacy online and to address the problem of targeted advertising. Another important aspect of what made the E-Commerce Directive a success is the “country of origin” principle, which states that within the European Union, companies must adhere to the law of their domicile rather than that of the recipient of the service. There is no appetite on the Parliament’s side to change this principle.

Even better, the reports echo EFF’s call to stop ignoring the walled gardens big platforms have become. Large Internet companies should no longer nudge users to stay on a platform that disregards their privacy or jeopardizes their security, but instead enable users to communicate with friends across platform boundaries. Unfair trading, preferential display of platforms’ own downstream services, and transparency of how users’ data are collected and shared: the EU Parliament seeks to tackle these and other issues that have become the new “normal” for users when browsing the Internet and communicating with their friends. The reports also echo EFF’s concerns about automated content moderation, which is incapable of understanding context. In the future, users should receive meaningful information about algorithmic decision-making and learn when terms of service change. The EU Parliament also supports procedural justice for users who see their content removed or their accounts disabled.

Concerns Remain 

The focus on fundamental rights protection and user control is a good starting point for the ongoing reform of Internet legislation in Europe. However, there are also a number of pitfalls and risks. There is a suggestion that platforms should report illegal content to enforcement authorities, and there are open questions about public electronic identity systems. Also, the general focus on consumer shopping issues, such as the liability provision for online marketplaces, may clash with digital rights principles: the Commission itself acknowledged in a recent internal document that “speech can also be reflected in goods, such as books, clothing items or symbols, and restrictive measures on the sale of such artefacts can affect freedom of expression.” Then, the general idea of also including digital services providers established outside the EU could turn out to be a problem to the extent that platforms are held responsible for removing illegal content. Recent cases (Glawischnig-Piesczek v Facebook) have demonstrated the perils of worldwide content takedown orders.

It’s Your Turn Now @EU_Commission

The EU Commission is expected to present a legislative package on 2 December. During the public consultation process, we urged the Commission to protect freedom of expression and to give control to users rather than the big platforms. We are hopeful that the EU will work toward a free and interoperable Internet and not follow in the footsteps of harmful Internet bills such as the German NetzDG law or the French Avia Bill, which EFF helped to strike down. It’s time to make it right: to preserve what works and to fix what is broken.

Categories
Artificial Intelligence & Machine Learning Commentary face surveillance free speech Intelwars International

Pioneer Award Ceremony 2020: A Celebration of Communities

Last week, we celebrated the 29th Annual—and first ever online—Pioneer Award Ceremony, which EFF convenes for our digital heroes and the folks who help make the online world a better, safer, stronger, and more fun place. Like the many Pioneer Award Ceremonies before it, the all-online event was both an intimate party with friends and a reminder of the critical digital rights work being done by so many groups and individuals, some of whom are not as well-known as they should be.

Perhaps it was a feature of the pandemic — not a bug — that anyone could attend this year’s celebration, and anyone can now watch it online. You can also read the full transcript. More than ever before, this year’s Pioneer Award Ceremony was a celebration of online communities— specifically, the Open Technology Fund community working to create better tech globally; the community of Black activists pushing for racial justice in how technology works and is used; and the sex worker community that’s building digital tools to protect one another, both online and offline. 

But it was, after all, a celebration. So we kicked off the night by just vibing to DJ Redstickman, who brought his characteristic mix of fun, funky music, as well as some virtual visuals. 

DJ Redstickman

EFF’s Executive Director, Cindy Cohn, began her opening remarks with a reminder that this is EFF’s 30th year, and though we’ve been at it a long time, we’ve never been busier: 

EFF Executive Director Cindy Cohn

We’re busy in the courts — including a new lawsuit last week against the City of San Francisco for allowing the cops to spy on Black Lives Matter protesters and the Pride Parade in violation of an ordinance that we helped pass. We’re busy building technologies – including continuing our role in encrypting the web. We’re busy in the California legislature — continuing to push for Broadband for All, which is so desperately needed for the millions of Californians now required to work and go to school from home. We’re busy across the nation and around the world standing up for your right to have a private conversation using encryption and for your right to build interoperable tools. And we’re blogging, tweeting and posting on all sorts of social media to keep you aware of what’s going on and, hopefully, occasionally amused.

Cindy was followed by our keynote speaker, longtime friend of EFF, author, and one of the top reporters researching all things tech, Cyrus Farivar. Cyrus’s recent book, Habeas Data, covers 50 years of surveillance law in America, and his previous book, The Internet of Elsewhere, focuses on the history and effects of the Internet in different countries around the world.

Keynote speaker, Cyrus Farivar

Cyrus detailed his journey to becoming a tech reporter, from his time on IRC chats in his teenage years to his realization, in Germany in 2010, about “what it means to be private and what it means to have surveillance.” At the time, German politicians were concerned with the privacy implications of Google Streetview. In Germany, Cyrus explained, every German state has its own data protection agency: “In a way, I kind of think about EFF as one of the best next things. We don’t really have a data protection agency or authority in this country. Sure, we have the FCC. We have other government agencies that are responsible for taking care of us, but we don’t have something like that. I feel like one of the things the EFF does probably better than most other organizations is really try to figure out what makes sense in this new reality.”

Cyrus, of course, is one of the many people helping us all make sense of this new reality, through his reporting—and we’re glad that he’s been fighting the good fight ever since encountering EFF during the Blue Ribbon Campaign. 

Following Cyrus was EFF Staff Technologist Daly Barnett, who introduced the winner of the first Barlow—Ms. Danielle Blunt, aka Mistress Blunt. Danielle Blunt is a sex worker activist and tech policy researcher, and one of the co-founders of Hacking//Hustling, a collective of sex workers and accomplices working at the intersection of tech and social justice. Her research into sex work and equitable access to technology from a public health perspective has made her one of the primary experts on the impacts of the censorship law FOSTA-SESTA, and on how content moderation affects the movement work of sex workers and activists. As Daly said during her introduction, “there are few people on this planet that are as well equipped to subvert the toxic power dynamic that big tech imposes on many of us. Mistress Blunt can look at a system like that, pinpoint the weak spots, and leverage the right tools to exploit them.”

Pioneer Award Winner, Danielle Blunt

Mistress Blunt showcased and highlighted specifically how Hacking//Hustling bridges the gaps between sex worker rights, tech policy, and academia, and pointed out the ways in which sex workers, who are often early adopters, are also exploited by tech companies:

Sex workers were some of the earliest adopters of the web. Sex workers were some of the first to use ecommerce platforms and the first to have personal websites. The rapid growth of countless tech platforms was reliant on the early adoption of sex workers… [but] not all sex workers have equitable access to technologies or the Internet. This means that digital freedom for sex workers means equitable access to technologies. It means cultivating a deeper understanding of how technology is deployed to surveil and restrict movement of sex workers and how this impacts all of us, because it does impact all of us.

After Mistress Blunt’s speech, EFF Director of International Freedom of Expression Jillian York joined us from Germany to introduce the next honoree, Laura Cunningham. Laura accepted the award for the Open Technology Fund community, a group which has fostered a global community and provided support—both monetary and in-kind—to more than 400 projects that seek to combat censorship and repressive surveillance. This has resulted in over 2 billion people in over 60 countries being able to access the open Internet more safely.

Unfortunately, new leadership has recently been appointed by the Trump administration to run OTF’s funder, the U.S. Agency for Global Media (USAGM). As a result, there is a chance that the organization’s funds could be frozen—threatening to leave many well-established global freedom tools, their users, and their developers in the lurch. For that reason, this award was offered to the entire OTF community for its hard work and dedication to global Internet freedom—and because EFF recognizes the need to protect this community and ensure its survival despite the current political attacks. As Laura said in accepting it, the award “recognizes the impact and success of the entire OTF community,” and is “a poignant reminder of what a committed group of passionate individuals can accomplish when they unite around a common goal.”

Laura Cunningham accepted the Barlow on behalf of the Open Technology Fund Community

But because OTF is a community, Laura didn’t accept the award alone. A pre-recorded video montage of OTF community members gave voice to this principle as they described what the community means to them: 

For me, the OTF community is resourceful.  I’ve never met a community that does so much with so little considering how important their work is for activists and journalists across the world to fight surveillance and censorship.

I love OTF because apart from providing open-source technology to marginalized communities, I have found my sisters in struggle and solidarity in this place for a woman of color and find my community within the OTF community.  Being part of the OTF community means I’m not alone in the fight against injustice, inequality, against surveillance and censorship.  

Members of the OTF Community spoke about what it means to them

For me, the OTF community plays an important role in the work that I do because it allows me to be in a space where I see people from different countries around the world working towards a common goal of Internet freedom.  

I’m going to tell you a story about villagers in Vietnam. The year is 2020. An 84-year-old elder of a village shot dead by police while defending his and the villagers’ land. His two sons sentenced to death. His grandson sentenced to life in prison. That was the story of three generations of a family in a rural area of Vietnam. It was the online world that brought their stories to tens of millions of Vietnamese and prompted a series of online actions. Thanks to our fight against internet censorship, Vietnamese have access to information.

We are one community fighting together for Internet freedom, a precondition today to enjoy fundamental rights.  

These are just a few highlights. We hope you’ll watch the video to see exactly why OTF is so important, and so appreciated globally. 

Following this stunning video, EFF’s Director of Community Organizing, Nathan Sheard, introduced the final award winners—Joy Buolamwini, Dr. Timnit Gebru, and Deborah Raji.

Pioneer Award Winners Joy Buolamwini, Dr. Timnit Gebru, and Deborah Raji

The trio have done groundbreaking research on race and gender bias in facial analysis technology, which laid the groundwork for the national movement to ban law enforcement’s use of face surveillance in American cities. In accepting the award, each honoree spoke, beginning with Deborah Raji, who detailed some of the dangers of face recognition that their work together on the Gender Shades Project helped uncover: “Technology requiring the privacy violation of numerous individuals doesn’t work. Technology hijacked to be weaponized and target and harass vulnerable communities doesn’t work. Technology that fails to live up to its claims to some subgroups over other subgroups certainly doesn’t work at all.”

Following Deborah, Dr. Timnit Gebru described how this group came together, beginning with Joy founding the Algorithmic Justice League, Deb founding Project Include, and her own co-founding of Black in AI. Importantly, Timnit noted how these three—and their organizations—look out for each other: “Joy actually got this award and she wanted to share it with us. All of us want to see each other rise, and all of the organizations we’ve founded—we try to have all these organizations support each other.” 

Lastly, Joy Buolamwini closed out the acceptance speeches with a performance—a “switch from performance metrics to performance art.” Joy worked with Brooklyn tenants who were organizing against the compelled use of face recognition in their building, and her poem was an ode to them—and to everyone “resisting and revealing the lie that we must accept the surrender of our faces.” The poem is here in full:

To the Brooklyn tenants resisting and revealing the lie that we must accept the surrender of our faces, the harvesting of our data, the plunder of our traces, we celebrate your courage. No silence.  No consent. You show the path to algorithmic justice requires a league, a sisterhood, a neighborhood, hallway gathering, Sharpies and posters, coalitions, petitions, testimonies, letters, research, and potlucks, livestreams and twitches, dancing and music. Everyone playing a role to orchestrate change. To the Brooklyn tenants and freedom fighters around the world and the EFF family going strong, persisting and prevailing against algorithms of oppression, automating inequality through weapons of math destruction, we stand with you in gratitude. You demonstrate the people have a voice and a choice.  When defiant melodies harmonize to elevate human life, dignity, and rights, the victory is ours.  

Joy Buolamwini, aka Poet of Code

There is really no easy way to summarize such a celebration as the Pioneer Award Ceremony—especially one that brought together this diverse set of communities to show, again and again, how connected we all are, and must be, to fight back against oppression. As Cindy said, in closing: 

We all know no big change happens because of a single person and how important the bonds of community can be when we’re up against such great odds…The Internet can give us the tools, and it can help us create others to allow us to connect and fight for a better world, but really what it takes is us.  It takes us joining together and exerting our will and our intelligence and our grit to make it happen. But when we get it right, EFF, I hope, can help lead those fights, but also we can help support others who are leading them and always, always help light the way to a better future.  

EFF would like to thank the members around the world who make the Pioneer Award Ceremony and all of EFF’s work possible. You can help us work toward a digital world that supports freedom, justice, and innovation for all people by donating to EFF. We know that these are deeply dangerous times, and with your support, we will stand together no matter how dark it gets and we will still be here to usher in a brighter day. 

Thanks again to Dropbox, No Starch Press, Ridder Costa & Johnstone LLP, and Ron Reed for supporting this year’s ceremony! If you or your company are interested in learning more about sponsorship, please contact Nicole Puller.

Categories
Commentary EUROPEAN UNION Intelwars International

Orders from the Top: The EU’s Timetable for Dismantling End-to-End Encryption

The last few months have seen a steady stream of proposals, encouraged by the advocacy of the FBI and Department of Justice, to provide “lawful access” to end-to-end encrypted services in the United States. Now lobbying has moved from the U.S., where Congress has been largely paralyzed by the nation’s polarization problems, to the European Union—where advocates for anti-encryption laws hope to have a smoother ride. A series of leaked documents from the EU’s highest institutions show a blueprint for how they intend to make that happen, with the apparent intention of presenting anti-encryption law to the European Parliament within the next year.

The public signs of this shift in the EU—which until now has been largely supportive toward privacy-protecting technologies like end-to-end encryption—began in June with a speech by Ylva Johansson, the EU’s Commissioner for Home Affairs.

Speaking at a webinar on “Preventing and combating child sexual abuse [and] exploitation”, Johansson called for a “technical solution” to what she described as the “problem” of encryption, and announced that her office had initiated “a special group of experts from academia, government, civil society and business to find ways of detecting and reporting encrypted child sexual abuse material.”

The resulting report was subsequently leaked to Politico. It includes a laundry list of tortuous ways to attempt the impossible: allowing government access to encrypted data without breaking encryption.

At the top of that precarious stack was, as with similar proposals in the United States, client-side scanning. We’ve explained previously why client-side scanning is a backdoor by any other name. Unalterable computer code that runs on your own device, comparing in real time the contents of your messages to an unauditable ban-list, stands directly opposed to the privacy assurances that the term “end-to-end encryption” is understood to convey. It’s the same approach used by China to keep track of political conversations on services like WeChat, and has no place in a tool that claims to keep conversations private.
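To make the objection concrete, here is a minimal, hypothetical sketch (not any vendor’s or government’s actual implementation) of what client-side scanning amounts to: code on the user’s own device checks each message against an opaque ban-list before the message is ever encrypted.

```python
import hashlib

# Hypothetical ban-list of content hashes; in real proposals the list
# would be opaque and unauditable by the user.
BANNED_HASHES = {
    hashlib.sha256(b"example banned content").hexdigest(),
}

def client_side_scan(plaintext: bytes) -> bool:
    """Return True if the message matches the ban-list.

    This check runs on the user's own device, *before* end-to-end
    encryption is applied.
    """
    return hashlib.sha256(plaintext).hexdigest() in BANNED_HASHES

def send_message(plaintext: bytes) -> str:
    if client_side_scan(plaintext):
        # A match is reported rather than sent, defeating the
        # end-to-end guarantee.
        return "reported"
    # Only now would the client encrypt and transmit the message.
    return "encrypted-and-sent"
```

Because the check runs before encryption, whoever controls the ban-list controls what can be said privately, and the user has no way to audit what the list contains.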

It’s also a drastically invasive step by any government that wishes to mandate it. For the first time outside authoritarian regimes, Europe would be declaring which Internet communication programs are lawful, and which are not. While the proposals are the best that academics faced with squaring the circle could come up with, they may still be too aggressive to succeed politically as enforceable regulation—even if tied, as Johansson ensured it was in a subsequent Commission communication, to the fight against child abuse.

But while it would require a concerted political push, the EU’s higher powers are gearing up for such a battle. In late September, Statewatch published a note, now being circulated by the current German Presidency of the EU, called “Security through encryption and security despite encryption”, encouraging the EU’s member states to agree to a new EU position on encryption in the final weeks of 2020.

While conceding that “the weakening of encryption by any means (including backdoors) is not a desirable option”, the Presidency’s note also positively quoted an EU Counter-Terrorism Coordinator (CTC) paper from May (obtained and made available by German digital rights news site NetzPolitik.org), which calls for a “front-door”—a “legal framework that would allow lawful access to encrypted data for law enforcement without dictating technical solutions for providers and technology companies”.

The CTC highlighted what would be needed in order to legislate this framework:

The EU and its Member States should seek to be increasingly present in the public debate on encryption, in order to inform the public narrative on encryption by sharing the law enforcement and judicial perspective…

This avoids a one-sided debate mainly driven by the private sector and other nongovernmental voices. This may involve engaging with relevant advocacy groups, including victims associations that can relate to government efforts in that area. Engagement with the [European Parliament] will also be key to prepare the ground for possible legislation.

A speech by Commissioner Johansson tying the defeat of secure messaging to protecting children; a paper spelling out “technical solutions” to attempt to fracture the currently unified (or “one-sided”) opposition; and, presumably in the very near future, once the EU has published its new position on encryption, a concerted attempt to lobby members of the European Parliament for this new legal framework: these all fit the Counter-Terrorism Coordinator’s original plan.

We are in the first stages of a long anti-encryption march by the upper echelons of the EU, headed directly toward Europeans’ digital front-doors. It’s the same direction as the United Kingdom, Australia, and the United States have been moving for some time. If Europe wants to keep its status as a jurisdiction that treasures privacy, it will need to fight for it.


A Look-Back and Ahead on Data Protection in Latin America and Spain

We’re proud to announce a newly updated version of The State of Communications Privacy Laws in eight Latin American countries and Spain. For over a year, EFF has worked with partner organizations to develop detailed questions and answers (FAQs) about communications privacy laws. Our work builds upon previous and ongoing research of such developments in Argentina, Brazil, Chile, Colombia, Mexico, Paraguay, Panama, Peru, and Spain. We aim to understand each country’s legal challenges in order to help us spot trends, identify the best and worst standards, and provide recommendations for the road ahead. This post about data protection developments in the region is one of a series of posts on the current State of Communications Privacy Laws in Latin America and Spain. 

As we look back at the past ten years of data protection, we have seen considerable legal progress in granting users control over their personal data. Since 2010, sixty-two more countries have enacted data protection laws, for a total of 142 countries with such laws worldwide. In Latin America, Chile was the first country to adopt such a law in 1999, followed by Argentina in 2000. Several countries have since followed suit: Uruguay (2008), Mexico (2010), Peru (2011), Colombia (2012), Brazil (2018), Barbados (2019), and Panama (2019). While privacy approaches still differ, data protection laws are no longer a purely European phenomenon.

Yet contemporary developments in European data protection law continue to have an enormous influence in the region—in particular, the EU’s 2018 General Data Protection Regulation (GDPR). Since 2018, several countries, including Barbados and Panama, have led the way in adopting GDPR-inspired laws in the region, promising the beginning of a new generation of data protection legislation. In fact, the privacy protections of Brazil’s new GDPR-inspired law took effect this month, on September 18, after the Senate pushed back on a delaying order from President Jair Bolsonaro.

But when it comes to data protection in the law enforcement context, few countries have followed the European Union’s latest steps. The EU Police Directive, a law on the processing of personal data by police forces, has not yet become a Latin American phenomenon. Mexico is the only country with a specific data protection regulation for the public sector. By not adopting such rules, countries in the Americas are missing a crucial opportunity to strengthen their communications privacy safeguards with rights and principles common to the global data protection toolkit.

New GDPR-Inspired Data Protection Laws
Brazil, Barbados, and Panama have been the first countries in the region to adopt GDPR-inspired data protection laws. Panama’s law, approved in 2019, will enter into force in March 2021. 

Brazil’s law has faced an uphill battle. The provisions creating the oversight authority came into force in December 2018, but it took the government a year and a half to introduce a decree implementing its structure. The decree, however, will only have legal force when the President of the Board is officially appointed and approved by the Senate. No appointment has been made as of the publication of this post. For the rest of the law, February 2020 was the original deadline to enter into force. This was later changed to August 2020. The law was then further delayed to May 2021 through an Executive Act issued by President Bolsonaro. Yet, in a surprising positive twist, Brazil’s Senate stopped President Bolsonaro’s deferral in August. That means the law is now in effect, except for the penalties section, which has been deferred again, to August 2021. 

Definition of Personal Data 
Like the GDPR, Brazil’s and Panama’s laws include a comprehensive definition of personal data: any information concerning an identified or identifiable person. The definition of personal data in Barbados’s law has certain limitations. It only protects data which relates to an individual who can be identified “from that data; or from that data together with other information which is in the possession of or is likely to come into the possession of the provider.” Anonymized data in Brazil, Panama, and Barbados falls outside the scope of the law. There are also variations in how these countries define anonymized data. Panama defines it as data that cannot be re-identified by reasonable means, though the law doesn’t set explicit parameters to guide this assessment. Brazil’s law makes it clear that anonymized data will be considered personal data if the anonymization process is reversed using exclusively the provider’s own means, or if it can be reversed with reasonable efforts. The Brazilian law defines objective factors to determine what’s reasonable, such as the cost and time necessary to reverse the anonymization process given the technologies available, and the exclusive use of the provider’s own means. These parameters affect big tech companies with extensive computational power and large collections of data, which will need to determine whether their own resources could be used to re-identify anonymized data. This provision should not be interpreted in a way that ignores scenarios where the sharing or linking of anonymized data with other data sets, or with publicly available information, leads to re-identification.

Right to Portability 
The three countries grant users the right to portability—a right to take their data from a service provider and transfer it elsewhere. Portability adds to the so-called ARCO (Access, Rectification, Cancellation, and Opposition) rights—a set of users’ rights that allow them to exercise control over their own personal data.

Enforcers of portability laws will need to make careful decisions about what happens when one person wants to port away data that relates both to them and another person, such as their social graph of contacts and contact information like phone numbers. This implicates the privacy and other rights of more than one person. Also, while portability helps users leave a platform, it doesn’t help them communicate with others who still use the previous one. Network effects can prevent upstart competitors from taking off. This is why we also need interoperability to enable users to interact with one another across the boundaries of large platforms. 

Again, different countries have different approaches. The Brazilian law tries to solve the multi-person data and interoperability issues by not limiting the “ported data” to data the user has given to a provider. It also doesn’t specify the format to be adopted. Instead, the data protection authority can set the standards for, among other things, interoperability, security, retention periods, and transparency. In Panama, portability is both a right and a principle: it is one of the general data protection principles that guide the interpretation and implementation of the country’s overarching data protection law. As a right, it resembles the GDPR model. The user has the right to receive a copy of their personal data in a structured, commonly used, and machine-readable format. The right applies only when the user has provided their data directly to the service provider, and has given their consent or the data is needed for the execution of a contract. Panama’s law expressly states that portability is “an irrevocable right” that can be requested at any moment. 

Portability rights in Barbados are similar to those in Panama. But, like the GDPR, there are some limitations. Users can only exercise their rights to directly port their data from one provider to another when technically feasible. Like Panama, users can port data that they have provided themselves to the providers, and not data about themselves that other users have shared.  
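As an illustration of what a “structured, commonly used, and machine-readable format” can mean in practice, here is a minimal, hypothetical sketch of a JSON export answering a portability request (every field name here is invented for the example):

```python
import json

# Hypothetical user record held by a provider.
user_record = {
    "user_id": "u-12345",
    "email": "user@example.com",
    # Multi-person data: porting a contact list also implicates the
    # privacy of the other users it names.
    "contacts": ["u-67890"],
}

def export_portable(record: dict) -> str:
    """Serialize a user's data for a portability request as JSON:
    structured, commonly used, and machine-readable."""
    return json.dumps(record, indent=2, sort_keys=True)
```

A stable, documented format like this lets a competing service import the data, but, as noted above, fields like `contacts` show why porting one person’s data can implicate the rights of others.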

Automated Decision-Making About Individuals
Automated decision-making systems are making continuous decisions about our lives to aid or replace human decision making. So there is an emerging GDPR-inspired right not to be subjected to solely automated decision-making processes that can produce legal or similarly significant effects on the individual. This new right would apply, for example, to automated decision-making systems that use “profiles” to predict aspects of our personality, behavior, interests, locations, movements, and habits. With this new right, the user can contest the decisions made about them, and/or obtain an explanation about the logic of the decision. Here, too, there are a few variations among countries. 

Brazilian law establishes that the user has a right to review decisions affecting them that are based solely on the automated processing of personal data. These include decisions intended to define personal, professional, consumer, or credit profiles, or other traits of someone’s personality. Unfortunately, President Bolsonaro vetoed a provision requiring human review in such automated decision-making. On the upside, the user has a right to request that the provider disclose the criteria and procedures adopted for automated decision-making, though unfortunately there is an exception for trade and industrial secrets.

In Barbados, the user has the right to know, upon request to the provider, about the existence of decisions based on automated processing of personal data, including profiling. As in other countries, this includes access to information about the logic involved and the envisaged consequences for them. Barbados users also have the right not to be subject to automated decision-making processes without human involvement, or to automated decisions that will produce legal or similarly significant effects on the individual, including profiling. There are exceptions for when automated decisions are: necessary for entering into or performing a contract between the user and the provider; authorized by law; or based on user consent. Barbados defines consent similarly to the GDPR: there must be a freely given, specific, informed, and unambiguous indication of the user’s wishes regarding the processing of their personal data, and the user retains the ability to change their mind.

Panama’s law also grants users the right not to be subject to a decision based solely on automated processing of their personal data, without human involvement, but this right only applies when the process produces negative legal effects concerning the user or detrimentally affects the user’s rights. As in Barbados, Panama allows automated decisions that are necessary for entering into or performing a contract, based on the user’s consent, or permitted by law. But Panama defines “consent” in a less user-protective manner: a person need only provide a “manifestation” of their will.

Legal Basis for the Processing of Personal Data
It is important for data privacy laws to require service providers to have a valid lawful basis in order to process personal data, and to document that basis before starting the processing; otherwise, the processing is unlawful. Data protection regimes, including all principles and users’ rights, must apply regardless of whether consent is required.

Panama’s new law allows three legal bases other than consent: compliance with a contractual obligation, compliance with a legal obligation, or authorization by a particular law. Brazil and Barbados set out ten legal bases for personal data processing—four more than the GDPR, with consent as only one of them. Brazilian and Barbadian law seek to balance this approach by providing users with clear and concise information about what providers do with their personal data. They also grant users the right to object to the processing of their data, which allows users to stop or prevent processing. 

Data Protection in the Law Enforcement Context
Latin America lags in adopting a comprehensive data protection regime that applies not just to corporations, but also to public authorities processing personal data for law enforcement purposes. The EU, on the other hand, has adopted not just the GDPR but also the EU Police Directive, a law that regulates the processing of personal data by police forces. Most Latam data protection laws exempt law enforcement and intelligence activities from the application of the law. However, in Colombia, some data protection rules apply to the public sector. That nation’s general data protection law applies to the public sector, with exceptions for national security, defense, anti-money-laundering regulations, and intelligence. The Constitutional Court has stated that these exceptions are not absolute exclusions from the law’s application, but exemptions from just some provisions. Complementary statutory law should regulate them, subject to the proportionality principle. 

Spain has not implemented the EU’s Police Directive yet. As a result, personal data processing for law enforcement activities remains held to the standards of the country’s previous data protection law. Argentina’s and Chile’s laws do apply to law enforcement agencies, and Mexico has a specific data protection regulation for the public sector. But Peru and Panama exclude law enforcement agencies from the scope of their data protection laws. Brazil’s law carves out personal data processing done solely for public safety, national security, and criminal investigations, but it requires that specific legislation be approved to regulate these activities. 

Recommendations and Looking Ahead
Communications privacy has much to gain from the intersection of its traditional inviolability safeguards and the data protection toolkit. That intersection helps entrench international human rights standards applicable to law enforcement access to communications data. The principles of data minimization and purpose limitation in the data protection world correlate with the necessity, adequacy, and proportionality principles under international human rights law. They are necessary to curb massive data retention and dragnet government access to data. The idea that any personal data processing requires a legitimate basis upholds the basic tenets of legality and legitimate aim for placing limitations on fundamental rights. Law enforcement access to communications data must be clearly and precisely prescribed by law. No legitimate basis other than compliance with a legal obligation is acceptable in this context. 

Data protection transparency and information safeguards reinforce a user’s right to a notification when government authorities have requested their data. European courts have asserted this right stems from privacy and data protection safeguards. In the Tele2 Sverige AB and Watson cases, the EU Court of Justice (CJEU) held that “national authorities to whom access to the retained data has been granted must notify the persons affected . . . as soon as that notification is no longer liable to jeopardize the investigations being undertaken by those authorities.” Before that, in Szabó and Vissy v. Hungary, the European Court of Human Rights (ECHR) had declared that notifying users of surveillance measures is also inextricably linked to the right to an effective remedy against the abuse of monitoring powers.

Data protection transparency and information safeguards can also play a key role in fostering greater insight into companies’ and governments’ practices when it comes to requesting and handing over users’ communications data. In collaboration with EFF, many Latin American NGOs have been pushing Internet Service Providers to publish their law enforcement guidelines and aggregate information on government data requests. We’ve made progress over the years, but there’s still plenty of room for improvement. When it comes to public oversight, data protection authorities should have the legal mandate to supervise personal data processing by public entities, including law enforcement agencies. They should be impartial and independent authorities, conversant in data protection and technology, with adequate resources to exercise the functions assigned to them.

There are already many essential safeguards in the Latam region. Most countries’ constitutions explicitly recognize privacy as a fundamental right, and most countries have adopted data protection laws. Each constitution recognizes a general right to private life or intimacy, or a set of multiple, specific rights: a right to the inviolability of communications; an explicit data protection right (Chile, Mexico, Spain); or “habeas data” (Argentina, Peru, Brazil) as either a right or a legal remedy. (In general, habeas data protects the right of any person to find out what data is held about them.) And, most recently, a landmark ruling of Brazil’s Supreme Court recognized data protection as a fundamental right drawn from the country’s Constitution.

Across our work in the region, our FAQs help to spot loopholes, flag concerning standards, and highlight pivotal safeguards (or their absence). It’s clear that the rise of data protection laws has helped secure user privacy across the region, but more needs to be done. Strong data protection rules that apply to law enforcement activities would enhance communications privacy protections in the region. More transparency is urgently needed, both in how the regulations will be implemented and in what additional steps private companies and the public sector are taking to proactively protect user data.

We invite everyone to read these reports and reflect on what work we should champion and defend in the days ahead, and what still needs to be done.


Spain’s New Who Defends Your Data Report Shows Robust Privacy Policies But Crucial Gaps to Fill

ETICAS Foundation’s second ¿Quién Defiende Tus Datos? (Who Defends Your Data?) report on data privacy practices in Spain shows how Spain’s leading Internet and mobile app providers are making progress in being clear about how users’ personal data is protected. Providers are disclosing what information is collected, how long it’s kept, and who it’s shared with. Compared to Eticas’ first report on Spain in 2018, significantly more companies now inform users about how long they store data and notify users about privacy policy changes.

The report, which evaluates policies at 13 Spanish Internet companies, also indicates that a handful are taking seriously their obligations under the General Data Protection Regulation (GDPR), the European Union’s data privacy law that sets tough standards for protecting customers’ private information and gives users more information about, and control over, their private data. The law has applied since May 2018.

But the good news for most of the companies pretty much stops there. All but the largest Internet providers in Spain are seriously lagging when it comes to transparency around government demands for user data, according to the Eticas report released today.

While Orange commits to notify users about government requests and both Vodafone and Telefónica clearly state the need for a court order before handing users’ communications to authorities, other featured companies have much to improve. They are failing to provide information about how they handle law enforcement requests for user data, whether they require judicial authorization before giving personal information to police, or if they notify users as soon as legally possible that their data was released to law enforcement. The lack of disclosure about their practices leaves an open question about whether they have users’ backs when the government wants personal data.

The format of the Eticas report is based on EFF’s Who Has Your Back project, which was launched nine years ago to shine a light on how well U.S. companies protect user data, especially when the government wants it. Since then the project has expanded internationally, with leading digital rights groups in Europe and the Americas evaluating data privacy practices of Internet companies so that users can make informed choices about to whom they should trust their data. Eticas Foundation first evaluated Spain’s leading providers in 2018 as part of a region-wide initiative focusing on Internet privacy policies and practices in Iberoamerica. 

In today’s report, Eticas evaluated 13 companies, including six telecom providers (Orange, Ono-Vodafone, Telefónica-Movistar, MásMóvil, Euskatel, and Somos Conexión), five home sales and rental apps (Fotocasa, Idealista, Habitaclia, Pisos.com, and YaEncontré), and two apps for selling second-hand goods (Vibbo and Wallapop). The companies were assessed against a set of criteria covering policies for data collection, handing data over to law enforcement agencies, notifying customers about government data requests, publishing transparency reports, and promoting user privacy. Companies were awarded stars based on their practices and conduct. In light of the adoption of the GDPR, this year’s report assessed companies against several new criteria, including providing information on how to contact a company data protection officer, using private data to automate decision-making without human involvement and build user profiles, and practices regarding international data transfers. Eticas also looked at whether they provide guidelines, tailored to local law, for law enforcement seeking user data.

The full study is available in Spanish, and we outline the main findings below. 

An Overview of Companies’ Commitments and Shortcomings

Telefonica-Movistar, Spain’s largest mobile phone company, was the most highly rated, earning stars in 10 out of 13 categories. Vodafone was a close second, with nine stars. There was a big improvement overall in companies providing information about how long they keep user data—all 13 companies reported doing so this year, compared to only three companies earning partial credit in 2018. The implementation of the GDPR has had a positive effect on privacy policies at only some companies, the report shows. While most companies provide contact information for data protection officers, only four—Movistar, Fotocasa, Habitaclia, and Vibbo—provide information about their practices for automated (nonhuman) decision-making and profiling, and six—Vodafone, MásMóvil, Pisos.com, Idealista, Yaencontré, and Wallapop—provide information only about profiling. 

Only Telefónica-Movistar and Vodafone disclose information to users about their policies for giving personal data to law enforcement agencies. Telefonica-Movistar is vague in its data protection policy, stating only that it will hand user data to police in accordance with the law. However, the company’s transparency report shows that it lets police intercept communications only with a court order or in emergency situations. For metadata, the information provided is generic: it only mentions the legal framework and the authorities entitled to request it (judges, prosecutors, and the police).

Vodafone’s privacy policy says data will be handed over “according to the law and according to an exhaustive assessment of all legal requirements”. While its data protection policy does not provide information in a clear way, there’s an applicable legal framework report that describes both the framework and how the company interprets it, and states that a court order is needed to provide content and metadata to law enforcement.

Orange Spain is the only company that says it’s committed to telling users when their data is released to law enforcement unless there’s a legal prohibition against it. Because the company didn’t make clear that it will do so as soon as there’s no legal barrier, it received partial credit. Euskatel and Somos Conexión, smaller ISPs, have stood out in promoting user privacy through campaigns or defending users in court. On the latter, Euskatel challenged a judicial order demanding the company reveal IP addresses in a commercial claim. After finally handing them over once the ruling was confirmed by a higher court, Euskatel filed a complaint with the Spanish data protection authority over a possible violation of purpose limitation safeguards, given how the claimant used the data.

The report shows that, in general, the five home apps (Fotocasa, Idealista, Habitaclia, Pisos.com, and YaEncontré) and two second-hand goods sales apps (Vibbo and Wallapop) have to step up their privacy information game considerably. They received no stars in fully nine out of the 13 categories evaluated. This should give users pause and, in turn, motivate these companies to increase transparency about their data privacy practices so that the next time they are asked if they protect customers’ personal data, they have more to show.

Through ¿Quien Defiende Tus Datos? reports, local organizations in collaboration with EFF have been comparing companies’ commitments to transparency and users’ privacy in different Latin American countries and Spain. Earlier this year, Fundación Karisma in Colombia, ADC in Argentina, and TEDIC in Paraguay published new reports. New editions in Panamá, Peru, and Brazil are also on their way to spot which companies stand with their users and those that fall short of doing so. 


Human Rights and TPMs: Lessons from 22 Years of the U.S. DMCA

Introduction

In 1998, Bill Clinton signed the Digital Millennium Copyright Act (DMCA), a sweeping overhaul of U.S. copyright law notionally designed to update the system for the digital era. Though the DMCA contains many controversial sections, one of the most pernicious and problematic elements of the law is Section 1201, the “anti-circumvention” rule which prohibits bypassing, removing, or revealing defects in “technical protection measures” (TPMs) that control not just use but also access to copyrighted works.

In drafting this provision, Congress apparently believed it was preserving fair use and free expression, but it failed to understand how the new law would interact with technology in the real world and how some courts could interpret the law to drastically expand the power of copyright owners. Appellate courts disagree about the scope of the law, and the uncertainty and the threat of lawsuits have meant that rightsholders have been able to effectively exert control over legitimate activities that have nothing to do with infringement, to the detriment of basic human rights. Manufacturers who designed their products with TPMs that protect business models rather than copyrights can claim that using those products in ways that benefit their customers (rather than their shareholders) is illegal.

Twenty-two years later, TPMs, sometimes called “DRM” (“digital rights management”), are everywhere. TPMs control who can fix cars and tractors, who can audit the security of medical implants, who can refill a printer cartridge, whether you can store a cable broadcast, and what you can do with it.

Last month, the Mexican Congress passed amendments to the Federal Copyright Law and the Federal Criminal Code, notionally to comply with the country’s treaty obligations under Donald Trump’s USMCA, the successor to NAFTA. This law included many provisions that interfered with human rights, so much so that the Mexican National Commission for Human Rights has filed a constitutional challenge before the Supreme Court seeking to annul these amendments.

Among the gravest defects in the new amendments to the Mexican copyright law and the Federal Criminal Code are the rules regarding TPMs, which replicate the defects in DMCA 1201. Notably, the new law does not address the flawed language of the DMCA that has allowed rightsholders to block legitimate, noninfringing uses of copyrighted works that depend on circumvention, and it creates harsh, disproportionate criminal penalties with unintended consequences for privacy and freedom of expression. These criminal provisions are so broad and vague that they can be applied to any person, even the owner of the device, and even if that person had no malicious intent to commit a wrongful act that would harm another. To make things worse, the Mexican law does not provide even the inadequate protections the U.S. version offers, such as an explicit, regular regulatory proceeding that creates exemptions in areas where the law is provably creating harms.

As with DMCA 1201, the new amendments to the Mexican copyright law contain language that superficially appears to address these concerns; however, as with DMCA 1201, the Mexican law’s safeguard provisions are entirely cosmetic, so burdened with narrow definitions and onerous conditions that they are unusable. That is why, in 22 years of DMCA 1201, no one has ever successfully invoked the exemptions written into the statute.

EFF has had 22 years of experience with the fallout from DMCA 1201. In this article, we offer our hard-won expertise to our colleagues in Mexican civil society, industry, lawmaking and to the Mexican public.

Below, we have set out examples of how DMCA 1201 and its Mexican equivalent are incompatible with human rights, including free expression, self-determination, the rights of people with disabilities, cybersecurity, education, and archiving, as well as the law’s consequences for Mexico’s national resiliency, economic competitiveness, and food and health security.

Free Expression

Copyright and free expression are in obvious tension with one another: the former grants creators exclusive rights to reproduce and build upon expressive materials; the latter demands the least-possible restrictions on who can express themselves and how.

Balancing these two priorities is a delicate act, and while different countries manage their limitations and exceptions to copyright differently — fair use, fair dealing, derecho de autor, and more — these systems typically require a subjective, qualitative judgment in order to evaluate whether a use falls into one of the exempted categories: for example, the widespread exemptions for parody or commentary, or rules that give broad latitude to uses that are “transformative” or “critical.” These are rules that are designed to be interpreted by humans — ultimately by judges.

TPM rules that have no nexus with copyright infringement vaporize the vital qualitative considerations in copyright’s free expression exemptions, leaving behind a quantitative residue that is easy for computers to act upon, but which does not correspond closely to the policy objectives of limitations in copyright.

For example, a computer can tell if a video includes more than 25 frames of another video, or if the other works included in its composition do not exceed 10 percent of its total running time. But the computer cannot tell if the material that has been incorporated is there for parody, or commentary, or education — or if the video-editor absentmindedly dragged a video-clip from another project into the file before publishing it.
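The contrast above can be made concrete with a short sketch. This is purely illustrative (the thresholds of 25 frames and 10 percent come from the example in the text, not from any real statute or filter): an automated check can evaluate the numeric tests, but the qualitative question of parody or commentary has no computable equivalent.

```python
# Hypothetical sketch of the kind of quantitative test an automated
# filter CAN apply, using the thresholds from the example in the text
# (25 borrowed frames, 10 percent of total running time).
# The qualitative question -- "is this parody or commentary?" --
# has no computable equivalent.

def exceeds_quantitative_limits(borrowed_frames: int,
                                total_frames: int,
                                max_frames: int = 25,
                                max_fraction: float = 0.10) -> bool:
    """Return True if borrowed material crosses either numeric threshold."""
    if total_frames <= 0:
        raise ValueError("total_frames must be positive")
    fraction = borrowed_frames / total_frames
    return borrowed_frames > max_frames or fraction > max_fraction

# A 24-frame quotation in a 1,000-frame video passes both tests...
print(exceeds_quantitative_limits(24, 1000))   # False
# ...while a 26-frame quotation fails, whether it is parody or plagiarism.
print(exceeds_quantitative_limits(26, 1000))   # True
```

Both calls return the same answer regardless of whether the borrowed frames are commentary, education, or an absent-minded editing mistake: the test sees only numbers.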

And in truth, when TPMs collide with copyright exemptions, they are rarely even this nuanced.

Take the TPMs that prevent recording or duplication of videos, beginning with CSS, the system used in the first generation of DVD players, and continuing through the current suite of video TPMs, including AACS (Blu-Ray) and HDCP (display devices). These systems can’t tell whether you are making a recording in order to produce a critical or parodic video commentary. In 2018, the US Copyright Office recognized that these TPMs interfere with the legitimate free expression rights of the public and granted an exemption to DMCA 1201 permitting the public to bypass these TPMs in order to make otherwise lawful recordings. The Mexican version of the DMCA does not include a formal procedure for granting comparable exemptions.

Other times, TPMs collide with free expression by allowing third parties to interpose themselves between rightsholders and their audiences, preventing the former from selling their expressive works to the latter.

The most prominent example of this interference is to be found in Apple’s App Store, the official monopoly retailer for apps that can run on Apple’s iOS devices, such as iPhones, iPads, Apple Watches, and iPods. Apple’s devices use TPMs that prevent owners of these devices from choosing to acquire software from rivals of the App Store. As a result, Apple’s editorial choices about which apps it includes in the App Store have the force of law. For an Apple customer to acquire an app from someone other than Apple, they must bypass the TPM on their device. Though we have won the right for customers to “jailbreak” their devices, anyone who sells them a tool to effect this commits a felony under DMCA 1201 and risks both a five-year prison sentence and a $500,000 fine (for a first offense).

While the recent dispute with Epic Games has highlighted the economic dimension of this system (Epic objects to paying a 30 percent commission to Apple for transactions related to its game Fortnite), there are many historical examples of purely content-based restrictions on Apple’s part.

In cases like these, Apple’s TPM interferes with speech in ways that are far more grave than merely blocking recordings to advantage rightsholders. Rather, Apple is using TPMs backed by DMCA 1201 to interfere with rightsholders as well. Thanks to DMCA 1201, the creator of an app and a person who wants to use that app on a device that they own cannot transact without Apple’s approval.

If Apple withholds that approval, the owner of the device and the creator of the copyrighted work are not allowed to consummate their arrangement, unless they bypass a TPM. Recall that commercial trafficking in TPM-circumvention tools is a serious crime under DMCA 1201, carrying a penalty of a five-year prison sentence and a $500,000 fine for a first criminal offense, even if those tools are used to allow rightsholders to share works with their audiences.

In the years since Apple perfected the App Store model, many manufacturers have replicated it, for categories of devices as diverse as games consoles, cars and tractors, thermostats and toys. In each of these domains — as with Apple’s App Store — DMCA 1201 interferes with free expression in arbitrary and anticompetitive ways.

Self Determination

What is a “family”?

Human social arrangements don’t map well to rigid categories. Digital systems can take account of the indeterminacy of these social connections by allowing their users to articulate the ambiguous and complex nature of their lives within a database. For example, a system could allow users to enter several names of arbitrary length to accommodate the common experience of being called different things by different people, or it could allow them to define their own familial relationships, declaring the people they live with as siblings to be their “brothers” or “sisters” — or declaring an estranged parent to be a stranger, or a re-married parent’s spouse to be a “mother.”
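A minimal sketch can show what such a flexible design looks like in practice. This model is invented for illustration (it does not describe any real system): names and relationships are open-ended, user-declared values rather than a fixed enumeration.

```python
# Illustrative sketch (not from any real system): a data model that
# lets users define their own names and familial relationships,
# instead of forcing them into a closed set of categories.

from dataclasses import dataclass, field

@dataclass
class Person:
    # A person may be called different things by different people,
    # so we store any number of names of arbitrary length.
    names: list[str] = field(default_factory=list)
    # Relationships are free-form, user-declared labels:
    # {"Carlos": "brother", "estranged parent": "stranger"}
    # is as valid as any other mapping.
    relationships: dict[str, str] = field(default_factory=dict)

    def declare(self, other: str, label: str) -> None:
        """Record whatever relationship the user says this is."""
        self.relationships[other] = label

p = Person(names=["María", "Abuelita"])
p.declare("Carlos", "brother")              # a housemate declared a sibling
p.declare("estranged parent", "stranger")
print(p.relationships["Carlos"])            # brother
```

The point of the sketch is the absence of validation: the system records the user’s declaration instead of second-guessing it against a designer’s idea of what a family must look like.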

But when TPMs enter the picture, these necessary and beneficial social complexities are collapsed down into a set of binary conditions, fenced in by the biases and experiences of their designers. These systems are suspicious of their users, designed to prevent “cheating,” and they treat attempts to straddle their rigid categorical lines as evidence of dishonesty — not as evidence that the system is too narrow to accommodate its users’ lived experience.

One such example is CPCM, the “Content Protection and Copy Management” component of DVB, a standard for digital television broadcasts used all over the world.

CPCM relies on the concept of an “authorized domain” that serves as a proxy for a single family. Devices designated as belonging to an “authorized domain” can share video recordings freely with one another, but may not share videos with people from outside the domain — that is, with people who are not part of their family.

The committee that designed the authorized domain was composed almost exclusively of European and US technology, broadcast, and media executives, and they took pains to design a system that was flexible enough to accommodate their lived experience.

If you have a private boat, or a luxury car with its own internal entertainment system, or a summer house in another country, the Authorized Domain is smart enough to understand that all these are part of a single family and will permit content to move seamlessly between them.

But the Authorized Domain is far less forgiving to families that have members who live abroad as migrant workers, or who are part of the informal economy in another state or country, or nomads who travel through the year with a harvest. These “families” are not recognized as such by DVB-CPCM, even though there are far more families in their situation than there are families with summer homes in the Riviera.

All of this would add up to little more than a bad technology design, except for DMCA 1201 and other anti-circumvention laws.

Because of these laws — including Mexico’s new copyright law — defeating CPCM in order to allow a family member to share content with you is itself a potential offense, and selling a tool to enable this is a potential criminal offense, carrying a five-year sentence and a $500,000 fine for a first offense.

Mexico’s familial relations should be defined by Mexican lawmakers and Mexican courts and the Mexican people — not by wealthy executives from the global north meeting in board-rooms half a world away.

The Rights of People With Disabilities

Though disabilities are lumped into broad categories — “motor disabilities,” “blindness,” “deafness,” and so on — the capabilities and challenges of each person with a disability are as unique as the capabilities and challenges faced by each able-bodied person.

That is why the core of accessibility isn’t one-size-fits-all “accommodations” for people with disabilities; rather, it is “universal design”: the design of systems so that they can be accessed, understood, and used to the greatest extent possible by all people, regardless of their age, size, ability, or disability.

The more a system can be altered by its user, the more accessible it is. Designers can and should build in controls and adaptations, from closed captions to the ability to magnify text or increase its contrast, but just as important is leaving the system open-ended, so that people whose needs were not anticipated during the design phase can adapt it to their needs, or recruit others to do so for them.

This is incompatible with TPMs. TPMs are designed to prevent their users from modifying them. After all, if users could modify TPMs, they could subvert their controls.

Accessibility is important for people with disabilities, but it is also a great boon to able-bodied people: first, because many of us are merely “temporarily able-bodied” and will have to contend with some disability during our lives; and second, because flexible systems can accommodate use-cases that designers have not anticipated that able-bodied people also value: from the TV set with captions turned on in a noisy bar (or for language-learners) to the screen magnifiers used by people who have mislaid their glasses.

Like able-bodied people, many people with disabilities are able to effect modifications and improvements in their own tools. However, most people, whether or not they have disabilities, rely on third parties to modify the systems they depend on, because they lack the skill or time to make these modifications themselves.

That is why DMCA 1201’s prohibition on “trafficking in circumvention devices” is so punitive: it not only deprives programmers of the right to improve their tools, but it also deprives the rest of us of the right to benefit from those programmers’ creations, and programmers who dare defy this stricture face lengthy prison sentences and giant fines if they are prosecuted.

Recent examples of TPMs interfering with accessibility reveal how confining DMCA 1201 is for people with disabilities.

In 2017, the World Wide Web Consortium (W3C) approved a controversial TPM for videos on the Web called Encrypted Media Extensions (EME). EME makes some affordances for people with disabilities, but it lacks other important features. For example, people with photosensitive epilepsy cannot use automated tools to identify and skip past strobing effects in videos that could trigger dangerous seizures, while color-blind people can’t alter the color-palette of the videos to correct for their deficit.

A more recent example comes from the med-tech giant Abbott Labs, which used DMCA 1201 to suppress a tool that allowed people with diabetes to link their glucose monitors to their insulin pumps, in order to automatically calculate and administer doses of insulin in an “artificial pancreas.”

Note that there is no copyright infringement in any of these examples: monitoring your blood sugar, skipping past seizure-inducing video effects, or changing colors to a range you can perceive do not violate anyone’s rights under US copyright law. These are merely activities that are dispreferred by manufacturers.

Normally, a manufacturer’s preference is subsidiary to the interests of the owner of a product, but not in this case. Once a product is designed so that you must bypass a TPM to use it in ways the manufacturer doesn’t like, DMCA 1201 gives the manufacturer’s preferences the force of law.

Archiving

In 1991, the science fiction writer Bruce Sterling gave a keynote address to the Game Developer’s Conference in which he described the assembled game creators as practitioners without a history, whose work crumbled under their feet as fast as they could create it: “Every time a [game] platform vanishes it’s like a little cultural apocalypse. And I can imagine a time when all the current platforms might vanish, and then what the hell becomes of your entire mode of expression?”

Sterling contrasted the creative context of software developers with authors: authors straddle a vast midden of historical material that they — and everyone else — can access. But in 1991, as computers and consoles were appearing and disappearing at bewildering speed, the software author had no history to refer to: the works of their forebears were lost to the ages, no longer accessible thanks to the disappearance of the hardware needed to run them.

Today, Sterling’s characterization rings hollow. Software authors, particularly games developers, have access to the entire corpus of their industry, playable on modern computers, thanks to the rise and rise of “emulators” — programs that simulate primitive, obsolete hardware on modern equipment that is orders of magnitude more powerful.
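At their core, emulators work by interpretation: a fetch-decode-execute loop that reads the old machine’s instructions and carries them out on the new one. The sketch below uses a made-up accumulator machine invented for this example (it is not the instruction set of any real console), but the loop structure is the essence of how an emulator makes obsolete software run on modern hardware.

```python
# Toy illustration of an emulator's core: a fetch-decode-execute loop
# interpreting instructions for an invented accumulator machine.
# (This instruction set is made up for the example, not a real console.)

def run(program: list[tuple], steps: int = 100) -> int:
    """Interpret the program and return the final accumulator value."""
    acc, pc = 0, 0                     # accumulator and program counter
    for _ in range(steps):             # cap steps so bad programs halt
        if pc >= len(program):
            break                      # fell off the end of the program
        op, arg = program[pc]          # fetch and decode
        if op == "LOAD":               # execute
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "JNZ":              # jump to arg if accumulator nonzero
            if acc != 0:
                pc = arg
                continue
        elif op == "HALT":
            break
        pc += 1
    return acc

# Count down from 3 to 0: the host machine "pretends" to be the old one.
prog = [("LOAD", 3), ("ADD", -1), ("JNZ", 1), ("HALT", 0)]
print(run(prog))   # 0
```

Real emulators add timing, memory maps, and graphics and sound hardware on top of this loop, which is why they demand modern machines that are orders of magnitude faster than the hardware they simulate.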

However, preserving the history of an otherwise ephemeral medium is not for the faint of heart. From the earliest days of commercial software, companies have deployed TPMs to prevent their customers from duplicating their products or running them without authorization. Preserving the history of software is impossible without bypassing TPMs, and bypassing TPMs is a potential felony that can send you to prison for five years and/or cost you half a million dollars if you supply a tool to do so.

That is why the US Copyright Office has repeatedly granted exemptions to DMCA 1201, permitting archivists in the United States to bypass software TPMs for preservation purposes.

Of course, it’s not merely software that is routinely restricted with TPMs, frustrating the efforts of archivists: from music to movies, books to sound recordings, TPMs are routine. Needless to say, these TPMs interfere with routine, vital archiving activities just as much as they interfere with the archiving and preservation of software.

Education

Copyright systems around the world create exemptions for educational activities; U.S. copyright law specifically mentions education in the criteria for exempted use.

But educators frequently run up against the blunt, indiscriminate restrictions imposed by TPMs, whose code cannot distinguish between someone engaged in educational activities and someone engaged in noneducational activities.

Educators’ conflicts with TPMs are many and varied: a teacher may build a lesson plan around an online video but be unable to act on it if the video is removed; in the absence of a TPM, the teacher could make a local copy of the video as a fallback.

For a decade, the U.S. Copyright Office has affirmed the need for educators to bypass TPMs in order to engage in normal pedagogical activities, most notably the need of film professors to bypass TPMs so that their students can analyze and edit commercial films as part of their studies.

National Resiliency

Thus far, this article has focused on TPMs’ impact on individual human rights, but human rights are dependent on the health and resiliency of the national territory in which they are exercised. Nutrition, health, and security are human rights just as surely as free speech, privacy, and accessibility.

The pandemic has revealed the brittleness and transience of seemingly robust supply chains and firms. Access to replacement parts and skilled technicians has been disrupted and firms have failed, taking down their servers and leaving digital tools in unusable or partially unusable states.

But TPMs don’t understand pandemics or other emergencies: they enforce restrictions irrespective of the circumstances on the ground. And where laws like DMCA 1201 prevent the development of tools and knowledge for bypassing TPMs, these indiscriminate restrictions take on the force of law and acquire a terrible durability, as few firms or even individuals are willing to risk prison and fines to supply the tools to make repairs to devices that are locked with TPMs.

Nowhere is this more visible than in agriculture, where the markets for key inputs like heavy machinery, seeds and fertilizer have grown dangerously concentrated, depriving farmers of meaningful choice from competitors with distinctive offers.

Farmers work under severe constraints: they work in rural, inaccessible territories, far from authorized service depots, and the imperatives of the living organisms they cultivate cannot be argued with. When your crop is ripe, it must be harvested — and that goes double if there’s a storm on the horizon.

That’s why TPMs in tractors constitute a severe threat to national resiliency, threatening the food supply itself. Ag-tech giant John Deere has repeatedly asserted that farmers may not effect their own tractor repairs, insisting that these repairs are illegal unless they are finalized by an authorized technician, who can take days to arrive (even when there isn’t a pandemic) and who charges hundreds of dollars to inspect the farmer’s own repairs and type an unlock code into the tractor’s keyboard.

John Deere’s position is that farmers are not qualified and should not be permitted to repair their own property. However, farmers have been fixing their own equipment for as long as agriculture has existed — every farm has a workshop and sometimes even a forge. Indeed, John Deere’s current designs are descended from modifications that farmers themselves made to earlier models: Deere used to dispatch field engineers to visit farms and copy farmers’ innovations for future models.

This points to another key feature for national resiliency: adaptation. Just as every person has unique needs that cannot be fully predicted and accounted for by product designers, so too does every agricultural context. Every plot of land has its own biodynamics, from soil composition to climate to labor conditions, and farmers have always adapted their tools to suit their needs. Multinational ag-tech companies can profitably target the conditions of the wealthiest farmers, but if you fall too far outside the median use-case, the parameters of your tractor are unlikely to fully suit your needs. That is why farmers are so accustomed to adapting their equipment.

To be clear, John Deere’s restrictions do not prevent farmers from modifying their tractors; they merely put those farmers in legal peril. Farmers have instead turned to black-market Ukrainian replacement software for their tractors: no one knows who made this software, it comes with no guarantees, and if it contained malicious or defective code, there would be no one to sue.

And John Deere’s abuse of TPMs doesn’t stop at repairs. Tractors contain sophisticated sensors that can map out soil conditions to a high degree of accuracy, measuring humidity, density and other factors and plotting them on a centimeter-accurate grid. This data is automatically generated by farmers driving tractors around their own fields, but the data does not go to the farmer. Rather, John Deere harvests the data that farmers generate while harvesting their crops and builds up detailed pictures of regional soil conditions that the company sells as market intelligence to the financial markets for bets in crop futures.

That data is useful to the farmers who generated it: accurate soil data is needed for “precision agriculture,” which improves crop yields by matching planting, fertilizing and watering to soil conditions. Farmers can access a small slice of that data, but only through an app that comes bundled with seed from Bayer-Monsanto. Competing seed companies, including domestic seed providers, cannot make comparable offers.

Again, this is bad enough under normal conditions, but when supply chains fail, the TPMs that enforce these restrictions prevent local suppliers from filling in the gaps.

Right to Repair

TPMs don’t just interfere with ag-tech repairs: dominant firms in every sector have come to realize that repairs are a doubly lucrative nexus of control. First, companies that control repairs can extract money from their customers by charging high prices to fix their property and by forcing them to use high-priced manufacturer-approved replacement parts in those repairs; and second, companies can unilaterally declare some consumer equipment to be beyond repair and demand that customers pay to replace it.

Apple spent lavishly in 2018 on a campaign that stalled 20 state-level Right to Repair bills in the U.S.A., and, in his first shareholder address of 2019, Apple CEO Tim Cook warned that a major risk to Apple’s profitability came from consumers who chose to repair, rather than replace, their old phones, tablets and laptops.

The Right to Repair is key to economic self-determination at any time, but in times of global or local crisis, when supply chains shatter, repair becomes a necessity. Alas, the sectors most committed to thwarting independent repair are also sectors whose products are most critical to weathering crises.

Take the automotive sector: manufacturers in this increasingly concentrated sector have used TPMs to prevent independent repair, from scrambling the diagnostic codes used on cars’ internal communications networks to adding “security chips” to engine parts that prevent technicians from using functionally equivalent replacement parts from competing manufacturers.

The issue has simmered for a long time: in 2012, voters in the Commonwealth of Massachusetts overwhelmingly backed a ballot initiative that safeguarded the rights of drivers to choose their own mechanics, prompting the legislature to enact a right-to-repair law. However, manufacturers responded to this legal constraint by deploying TPMs that allow them to comply with the letter of the 2012 law while still preventing independent repair. The situation is so dire that Massachusetts voters have put another ballot initiative on this year’s ballot, which would force automotive companies to disable TPMs in order to enable independent repair.

It’s bad enough to lose your car while a pandemic has shut down public transit, but it’s not just drivers who need the Right to Repair: it’s also hospitals.

Medtronic is the world’s largest manufacturer of ventilators. For 20 years, it has manufactured the workhorse Puritan Bennett 840 ventilator, but recently the company added a TPM to its ventilator design. The TPM prevents technicians from repairing a ventilator with a broken screen by swapping in a screen from another broken ventilator; this kind of parts-reuse is common, and authorized Medtronic technicians can refurbish a broken ventilator this way because they have the code to unlock the ventilator.

There is a thriving secondary market for broken ventilators, but refurbishers who need to transplant a monitor from one ventilator to another must bypass Medtronic’s TPM. To do this, they rely on a single Polish technician who manufactures a circumvention device and ships it to medical technicians around the world to help them with their repairs.

Medtronic strenuously objects to this practice and warns technicians that unauthorized repairs could expose patients to risk — we assume that the patients whose lives were saved by refurbished ventilators are unimpressed by this argument. In a cruel twist of irony, the anti-repair Medtronic was founded in 1949 as a medical equipment repair business that effected unauthorized repairs.

Cybersecurity

In the security field, it’s a truism that “there is no security in obscurity” — or, as cryptographer Bruce Schneier puts it, “anyone can design a system that they can’t think of a way around. That doesn’t mean it’s secure, it just means it’s secure against people stupider than you.”

Another truism in security is that “security is a process, not a product.” You can never know if a system is secure; all you can know is whether any defects have been discovered in it. Grave defects have been discovered even in very mature, widely used systems that have been in service for decades.

The corollary of these two rules is that security requires that systems be open to auditing by as many third parties as possible, because the people who designed those systems are blind to their own mistakes, and because each auditor brings their own blind spots to the exercise.

But when a system has TPMs, they often interfere with security auditing, and, more importantly, security disclosures. TPMs are widely used in embedded systems to prevent competitors from creating interoperable products — think of inkjet printers using TPMs to detect and reject third-party ink cartridges — and when security researchers bypass these to investigate products, their reports can run afoul of DMCA 1201. Revealing a defect in a TPM, after all, can help attackers disable that TPM, and thus constitutes “circumvention” information. Recall that supplying “circumvention devices” to the public is a criminal offense under DMCA 1201.

This problem is so pronounced that in 2018, the US Copyright Office granted an exemption to DMCA 1201 for security researchers.

However, that exemption is not broad enough to encompass all security research. A coalition of security researchers is returning to the Copyright Office in this rulemaking cycle to explain again why regulators have been wrong to impose restrictions on legitimate research.

Competition

Firms use TPMs in three socially harmful ways:

  1. Controlling customers: From limiting repairs to forcing the purchase of expensive spares and consumables to arbitrarily blocking apps, firms can use TPMs to compel their customers to behave in ways that put corporate interests above the interests of their customers;
  2. Controlling critics: DMCA 1201 means that when a security researcher discovers a defect in a product, the manufacturer can exercise a veto over the disclosure of the defect by threatening legal action;
  3. Controlling competitors: DMCA 1201 allows firms to unilaterally decide whether a competitor’s parts, apps, features and services are available to its customers.

This concluding section delves into three key examples of TPMs’ interference with competitive markets.

App Stores

In principle, there is nothing wrong with a manufacturer “curating” a collection of software for its products that are tested and certified to be of high quality. However, when devices are designed so that using a rival’s app store requires bypassing a TPM, manufacturers can exercise a curator’s veto, blocking rival apps on the basis that they compete with the manufacturer’s own services.

The most familiar example of this is Apple’s repeated decision to block rivals on the grounds that they offer alternative payment mechanisms that bypass Apple’s own payment system and thus evade paying a commission to Apple. Recent high-profile examples include the HEY! email app, and the bestselling Fortnite app.

Streaming media

This plays out in other device categories as well, notably streaming video: AT&T’s HBO Max is deliberately incompatible with leading video-to-TV bridges such as Amazon Fire and Roku TV, which command 70% of the market. Fire and Roku software is often integrated directly into televisions, meaning that HBO Max customers must purchase additional hardware to watch the TV they’re already paying for on their own television sets. To make matters worse, HBO has cancelled its HBO Go service, which enabled people who paid for HBO over satellite and cable to watch programming on Roku and Amazon devices.

Browsers

TPMs also allow for the formation of cartels that can collude to exclude entire development methodologies from a market and to deliver control over the market to a single company. For example, the W3C’s Encrypted Media Extensions (see “The Rights of People With Disabilities,” above) is a standard for streaming video to web browsers.

However, EME is designed so that it does not constitute a complete technical solution: every browser vendor that implements EME must also separately license a proprietary descrambling component called a “content decryption module” (CDM).

In practice, only one company makes a licensable CDM: Google, whose “Widevine” technology must be licensed in order to display commercial videos from companies like Netflix, Amazon Prime and other market leaders in a browser.

However, Google will not license this technology to free/open source browsers except for those based on its own Chrome/Chromium browser. In standardizing a TPM for browsers, the W3C and Section 1201 of the DMCA have delivered gatekeeper status to Google, which now gets to decide who may enter the browser market it dominates; rivals that attempt to implement a CDM without Google’s permission risk prison sentences and large fines.

Conclusion

The U.S.A. has had 22 years of experience with legal protections for TPMs under Section 1201 in the DMCA. In that time, the U.S. government has repeatedly documented multiple ways in which TPMs interfere with basic human rights and the systems that permit their exercise. The Mexican Supreme Court has now taken up the question of whether Mexico can follow the U.S.’s example and establish a comparable regime in accordance with the rights recognized by the Mexican Constitution and international human rights law. In this document, we provide evidence that TPM regimes are incompatible with this goal.

The Mexican Congress — and the U.S. Congress — could do much to improve this situation by tying offenses under TPM law to actual acts of copyright violation. As the above has demonstrated, the most grave abuses of TPMs stem from their use to interfere with activities that do not infringe copyright.

However, rightsholders already have a remedy for copyright infringements: copyright law. A separate liability regime for TPM circumvention serves no legitimate purpose. Rather, its burden falls squarely on people who want to stay on the right side of the law and find that their important, legitimate activities and expression are put in legal peril.


An Open Letter to the Government of South Africa on the Need to Protect Human Rights in Copyright

Five years ago, South Africa embarked upon a long-overdue overhaul of its copyright system, and, as part of that process, the country incorporated some of the best elements of both U.S. and European copyright.

From the U.S.A., South Africa imported the flexible idea of fair use — a set of tests for when it’s okay to use others’ copyrighted work without permission. From the E.U., South Africa imported the idea of specific, enumerated exemptions for libraries, galleries, archives, museums, and researchers.

Both systems are important for preserving core human rights, including free expression, privacy, education, and access to knowledge; as well as important cultural and economic priorities such as the ability to build U.S.- and European-style industries that rely on flexibilities in copyright.

Taken together, the two systems are even better: the European system of enumerated exemptions gives a bedrock of certainty on which South Africans can stand, knowing for sure that they are legally permitted to make those uses. The U.S. system, meanwhile, future-proofs these exemptions by giving courts a framework with which to evaluate new uses involving technologies and practices that do not yet exist.

But as important as these systems are, and as effective as they’d be in combination, powerful rightsholder lobbies insisted that they should not be incorporated in South African law. Incredibly, the U.S. Trade Representative objected to elements of the South African law that were nearly identical to U.S. copyright, arguing that the freedoms Americans take for granted should not be enjoyed by South Africans.

Last week, South African President Cyril Ramaphosa alarmed human rights N.G.O.s and the digital rights community when he returned the draft copyright law to Parliament, striking out both the E.U.- and U.S.-style limitations and exceptions, arguing that they violated South Africa’s international obligations under the Berne Convention, which is incorporated into other agreements such as the WTO’s TRIPS Agreement and the WIPO Copyright Treaty.

President Ramaphosa has been misinformed. The copyright limitations and exceptions under consideration in South Africa are both lawful under international treaties and important to the human rights, cultural freedom, economic development, national sovereignty and self-determination of the South African nation, the South African people, and South African industry.

Today, EFF sent an open letter to The Honourable Ms. Thandi Modise, Speaker of South Africa’s National Assembly; His Excellency Mr. Cyril Ramaphosa, President of South Africa; Ms. Inze Neethling, Personal Assistant to Minister E. Patel, South African Department of Trade, Industry and Competition; and The Honourable Mr. Andre Hermans, Secretary of the Portfolio Committee on Trade and Industry of the Parliament of South Africa.

In our letter, we set out the legal basis for the U.S. fair use system’s compliance with international law, and the urgency of balancing South African copyright with limitations and exceptions that preserve the public interest.

This is an urgent matter. EFF is proud to partner with NGOs in South Africa and around the world in advocating for the public’s rights in copyright.


On the Road to Victory for Human Rights in Mexico!

Mexico’s National Commission for Human Rights has taken a crucial step towards averting a human rights catastrophe, asking Mexico’s Supreme Court to assess the constitutionality of the Mexican copyright law: The Commission stated that the law contains “possible violations of the rights to freedom of expression, property, freedom of commerce or work and cultural rights, among others.”

Last month, Mexico enacted a terrible new copyright law, one that duplicated the worst aspects of the US copyright system without even including its (largely inadequate) protections. The new Mexican law–passed as part of Donald Trump’s US-Mexico-Canada Agreement (USMCA)–is dangerous to the human rights of Mexican people and puts Mexican businesses at a permanent, structural disadvantage relative to companies in the USA and Canada.

As we explained in detail, the law has wide-ranging impacts on Mexicans’ human rights: from free expression to cybersecurity; from disability, education, health and repair to national sovereignty (you can download all our analysis here).

EFF was proud to stand with the many Mexican civil society organizations, including R3D and Derechos Digitales, and with the thousands of Mexican people who demanded that the National Commission for Human Rights bring the law before the Supreme Court of Justice on the basis of its blatant unconstitutionality. After all, Mexico’s free speech protections are among the strongest in the world, and these protections were roundly ignored during the drafting of the new law.

We are greatly cheered to learn that the Commission has petitioned the Supreme Court of Justice to overturn this law!

However, this process of court review can take years, and every day that this law is in force, it creates real damage for the Mexican people and Mexican industry: pressuring companies to apply automated speech filters, exposing Internet users to the danger of being “doxxed” for speaking out, interfering with the repair of medical and agricultural equipment and preventing people with disabilities from adapting the technology they rely on.

Because of these real, ongoing harms, we call upon the Supreme Court of Justice to suspend this law pending its judgment. The Mexican people can’t afford to wait years for their digital human rights to be recognized.

(Our thanks to Luis Fernando García Muñoz from R3D for his translation of the quotation from the Commission’s press release, above)


Mexico’s New Copyright Law Undermines Mexico’s National Sovereignty, Continuing Generations of Unfair “Fair Trade Deals” Between the USA and Latin America

Earlier this month, Mexico’s Congress hastily imported most of the US copyright system into Mexican law, in a dangerous and ill-considered act. But neither this action nor its consequences occurred in a vacuum: rather, it was a consequence of Donald Trump’s US-Mexico-Canada Agreement (USMCA), the successor to NAFTA.

Trade agreements are billed as creating level playing fields between nations to their mutual benefit. But decades of careful scholarship show that poorer nations typically come off worse through these agreements, even when they are subjected to the same rules, because the same rules don’t have the same effect on different countries. Besides that, Mexico has now adopted worse rules than its trade partners.

To understand how this works, we need only look to the history of the USA’s relationship with the copyrights and patents of foreign persons and firms. When the USA was a new, poor, developing nation that imported more copyrights and patents than it exported, it did not honor foreigners’ copyrights or patents, but rather allowed its people and its businesses to use them without paying, to develop the nation. Once the USA became an industrial and cultural powerhouse, it entered into agreements with other countries for mutual recognition of one another’s copyrights and patents in order to extract wealth based on rights to its technology and culture.

But the USA has a short memory for what it once considered just; it has made the foreign enforcement of US copyrights a trade priority for decades, often demanding that its trading partners extend more legal privileges to US copyright holders than they (or anyone else) receive at home in the United States; and preventing local users from benefiting from fair use or other balancing rights available in the United States. The poorer the trading partner, the more the US government and US industry expect it to surrender.

Mexico’s copyright law is a sad and enervating example of this principle in action. The law imposes restrictions that do not — and could not — exist under US law, because they violate US Constitutional principles (these laws also violate Mexican Constitutional principles).

For example, Mexico’s copyright law effectively mandates copyright filters, which automatically screen Mexican Internet users’ expressive speech and arbitrarily censor some of it based on an algorithm’s decision to treat it as a copyright infringement.

Neither the US nor Canada has such a requirement, which puts Mexican online firms at a significant trade disadvantage relative to their “equal partners” under USMCA. These filters can be very costly to develop and maintain. For example, YouTube has invested over $100,000,000 to develop its content filtering systems. Those are costs that Mexican online services will have to shoulder if they compete with Canadian and US firms, while their counterparts in the USA and Canada face no such requirement.
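To make the over-blocking dynamic concrete, here is a minimal sketch of fingerprint-based matching; the database contents, function names, and matching strategy are illustrative assumptions, not any real filter’s design:

```python
import hashlib

# Hypothetical rightsholder-supplied reference works (illustrative bytes only).
REFERENCE_WORKS = [b"<bytes of a registered song>"]

def fingerprint(data: bytes) -> str:
    """Exact fingerprint of an upload."""
    return hashlib.sha256(data).hexdigest()

EXACT_DB = {fingerprint(work) for work in REFERENCE_WORKS}

def is_blocked(upload: bytes) -> bool:
    # Exact-match pass: catches verbatim re-uploads.
    if fingerprint(upload) in EXACT_DB:
        return True
    # Fuzzy pass: deployed filters must also catch excerpts, so they match
    # contained segments -- which is precisely what sweeps up quotation,
    # criticism, and parody as collateral damage.
    return any(work in upload for work in REFERENCE_WORKS)

# A critic's review quoting the registered work is blocked anyway:
review = b"My review quotes <bytes of a registered song> and pans it."
assert is_blocked(review)
assert not is_blocked(b"wholly original content")
```

The point of the sketch is that the matcher sees only bytes: nothing in the algorithm can distinguish piracy from lawful quotation or criticism.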

Just as dangerous to Mexico’s prosperity are its new rules on TPMs (including “Digital Rights Management” or DRM). The US version of these rules, Section 1201 of the Digital Millennium Copyright Act (DMCA 1201), sets out a procedure for granting exemptions to the ban on bypassing digital locks. The Mexican version holds out the possibility of creating such a process but does not describe it.

Even if the Mexican government eventually develops an equivalent procedure, people and businesses in the USA will still enjoy more flexibility than their Mexican counterparts: that’s because the US system has produced a long, extensive list of exemptions that Mexico will have to develop on its own, through whatever process it eventually creates (if it ever does).

These rules interfere with many key activities, including accessibility adaptations for people with disabilities, education, and repair, including repair of agricultural and medical equipment, most of which come from US firms, who can charge Mexican consumers and the Mexican health-care system arbitrarily high prices for repairs, without having to fear competition from Mexican repair shops. They can also unilaterally declare equipment to be “beyond repair” and insist that it be replaced at full cost.

All of this is happening even as the US government faces a legal challenge to its ban on circumventing access controls, one that might see the rule struck down in the USA while it remains in force in Mexico.

Mexico’s new copyright law also includes a much narrower set of limitations and exceptions than either the US (“fair use”) or Canadian (“fair dealing”) systems provide for. That means that Mexican consumers must pay US and Canadian firms for activities that people in the USA and Canada can undertake for free.

This is especially dangerous when coupled with Mexico’s new Notice and Takedown system, which allows anyone to have content removed from the Internet simply by claiming to be the victim of copyright infringement. Under the US system, companies that do not act on these notices are only penalized if they actually commit indirect copyright infringement. But Mexico’s version of these rules (Article 232 Quinquies (II)) forces compliance with a copyright owner’s takedown demands even if the platform believes the content is a noninfringing use.

That means that US firms and individuals can remove material — for example, negative reviews quoting a book or warnings about defective software — from Mexican online services, while such a tactic could be ignored by US online services.

This asymmetry is not new. It is a recurring feature of US-Mexico trade relations, something that was already present under NAFTA, but which USMCA expands to the digital realm through this outrageous copyright law.

Under NAFTA, US exports of corn syrup to Mexico surged, and Mexican anti-obesity campaigners who tried to stem the tide were rebuffed by the rules of the trade agreement.

As a result, Mexico’s obesity epidemic is among the worst in the region, as is Mexican consumption of processed food. Julio Berdegué, a regional representative of the Food and Agriculture Organization of the United Nations, said “Unfortunately, Mexico is one of the leading countries in obesity, both in men and women and children. It is a very serious problem.” Mexico’s export sector has also shifted, with much of the fresh fruits and vegetables that once made up the country’s dietary staples now being exported to the USA.

Mexico’s new copyright law only exacerbates this problem. Mexico’s TPM rules hamper the security research that is the country’s best hope to secure its people’s digital devices. During Mexico’s “sugar wars,” activists were hacked with weapons sold by the cyber-arms dealer NSO Group, as part of an illegal campaign to neutralize their opposition to the powerful US sugar industry. That attack exploited a vulnerability in the activists’ mobile apps, and Mexico’s new copyright law impedes the work of those who would reveal those vulnerabilities.

The history of Latin America is filled with shameful instances of US interference to improve its prosperity at the expense of its southern neighbors.

The passage of the Mexican copyright law, rushed through in the middle of the pandemic without adequate consultation or debate, continues this denial of dignity and sovereignty. Lobbyists for just laws don’t fear public scrutiny, after all. The only reason to undertake a lawmaking exercise like this under the shroud of haste and obscurity is to sneak it through before the public knows what’s going on and can organize in opposition to it.

If you are based in Mexico, we urge you to participate in R3D and its allies’ campaign “Ni Censura ni Candados” and send a letter to Mexico’s National Commission for Human Rights asking it to invalidate this flawed new copyright law. R3D will ask for your name, email address, and your comment, which will be subject to R3D’s privacy policy.


Disability, Education, Repair and Health: How Mexico’s Copyright Law Hurts Self-Determination in the Internet Age

Mexico’s new copyright law was rushed through Congress without adequate debate or consultation, and that’s a problem, because the law — a wholesale copy of the US copyright system — creates unique risks to the human rights of the Mexican people, and the commercial fortunes of Mexican businesses and workers.

The Mexican law contains three troubling provisions:

I. Copyright filters: these automated censorship systems remove content from the Internet without human review and are a form of “prior restraint” (“prior censorship” in the Mexican legal parlance), which is illegal under Article 13 of the American Convention on Human Rights, which Mexico’s Supreme Court has affirmed is part of Mexican free speech law (Mexico has an outstanding set of constitutional protections for free expression).

II. Technical Protection Measures: “TPMs” (including “digital rights management” or “DRM”) are the digital locks that manufacturers use to constrain how owners of their products may use them, and to create legal barriers to competing products and embarrassing disclosures of security defects in their products. As with the US copyright system, Mexico’s system does not create reliable exemptions for lawful conduct.

III. Notice and Takedown: A system allowing anyone purporting to be a copyright holder to have material swiftly removed from the Internet, without any judicial oversight or even presentation of evidence. The new Mexican law can easily be abused by criminals and corrupt officials who can use copyright to force online service providers to turn over the sensitive personal details of their critics, simply by pretending to be the victims of copyright infringement.

This system has grave implications for Mexicans’ human rights, beyond free expression and cybersecurity.

Implicated in this new system are Mexicans’ rights to education, repair, and adaptation for persons with disability.

Unfit for purpose

The new law does contain language that seems to protect these activities, but that language is deceptive, as the law demands that Mexicans satisfy unattainable conditions and subject themselves to vague promises, with dire consequences for getting it wrong. There are four ways in which these exemptions are unfit for purpose:

  1. Sole Purpose. The exemptions specify that one must act for the “sole purpose” of the exempted activity: a security researcher must be investigating a device for the sole purpose of fixing its defects, but arguably not to advance the state of security research in general, or to protect the privacy and autonomy of users of a computer, system, or network in ways that conflict with what the manufacturer would view as the security of the device.
  2. Noncommercial. The exemptions also frequently cover only “noncommercial” actors, implying that you can only modify a system if you can do so yourself, or if you can find someone else to do it for free. If you are blind and want to convert an ebook so that you can read it with your screenreader, you have to write the code yourself or find a volunteer who’ll do it for you; you can’t pay someone else to do the work.
  3. Good faith. The exemptions frequently require that anyone who uses them must be acting in “good faith,” an imprecise term that can be a matter of opinion when corporate interests conflict with those of researchers. If a judge doesn’t believe you were acting in good faith, you could face both fines and criminal sanctions.
  4. No tools. Even if you are confident that you are acting for the sole purpose of exercising an exemption, and doing so noncommercially and in good faith, you are still stuck. Because while the statute recognizes in general terms that there could be a process to create further exemptions for people who bypass digital locks, it does not provide a similar process for those who make tools for those purposes.

The defects in the Mexican law are largely present in the US law from which they were copied. It’s telling that no US defendant has ever successfully used any of the statutory exemptions, not in 22 years. Indeed, the US Copyright Office has repeatedly affirmed that these exemptions do not adequately protect legitimate conduct with the clarity that would be required for them to be effective.

Education

The US experience reveals the ways that badly drafted copyright law can interfere with education:

  • Educational materials are removed from the Internet due to incorrect or fraudulent copyright claims, without warning, leaving teachers who relied on those materials with holes in their curriculum;
  • Educational materials are automatically removed from the Internet due to copyright filter errors, also stranding teachers with missing curricular materials; and
  • Educators cannot make lawful use of the materials purchased for their students because they are blocked by TPMs that they are legally prohibited from bypassing.

Right to Repair

Increasingly, dominant firms have used control over repairs as sources of undeserved, monopoly profits. By controlling repair, firms can not only force customers to pay higher prices for repairs and to use more expensive, more profitable original parts — they can also force customers to discard their devices and buy new ones, by declaring them to be beyond repair.

Enacting legal penalties for bypassing TPMs is a gift to any company seeking to control repairs. Companies use TPMs so that even after the correct part is installed, the device refuses to work unless a company technician inputs an unlock code.
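This kind of “part pairing” can be sketched in a few lines (a hypothetical illustration with made-up names, not any real vendor’s scheme); circumventing such a check, even to install a genuine, working part, is what TPM liability rules punish:

```python
import hashlib
import hmac

# Hypothetical vendor secret; no real manufacturer's protocol is shown here.
VENDOR_SECRET = b"known-only-to-the-manufacturer"

def vendor_unlock_code(part_serial: str) -> str:
    """Only the vendor, who holds the secret, can compute this code."""
    return hmac.new(VENDOR_SECRET, part_serial.encode(), hashlib.sha256).hexdigest()

def device_accepts(part_serial: str, unlock_code: str) -> bool:
    """The device refuses a replacement part until the vendor's code is entered."""
    return hmac.compare_digest(vendor_unlock_code(part_serial), unlock_code)

# An identical third-party part, installed by an independent repair shop:
assert not device_accepts("PART-123", "")  # refused, despite working perfectly
# The same part, after a company technician enters the unlock code:
assert device_accepts("PART-123", vendor_unlock_code("PART-123"))
```

Because the secret never leaves the vendor, no independent repairer can generate a valid code; the lock is about market control, not security.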

Disturbingly, this conduct has spread to the world of medical devices, where multinational corporations use TPMs to prevent repairs to ventilators.

At the forefront of the Right to Repair movement are farmers, who must contend with both a remote location (far from the authorized technicians) and urgent timescales (you need to get your crop in before the storm hits, even if the authorized technician can’t make it out before then).

During the global pandemic, many of us are living under conditions familiar to farmers, dangling at the end of a long, slow, unreliable supply chain and confronted by urgent needs.

Technology is primarily designed in the global north by engineers and product specialists whose lives are very different from people in the global south. Mexican people have long relied on their own ingenuity and technical mastery to modify, repair and adapt systems built by distant people in foreign lands to suit their own lived experience in their own land.

Mexican law does not provide any clear protection for repairs that require access to or use of copyrighted works.

Repair is a vital part of self-determination, and the Mexican copyright law puts the interests of monopolistic, rent-seeking foreign companies ahead of the rights of Mexican people to decide how they will use their own property.

Adaptation and Disability

Nowhere is the need for technological self-determination more keenly felt than when it involves people with disabilities.

A rallying cry of the disability movement is “nothing about us without us” — meaning, among other things, that each person with a disability should have the final say about how their technology works.

The creation of assistive adaptations by and with people with disabilities has been a boon for everyone: the principle of “universal design” (design that enables every body and every mind to participate fully in life) means that all of us benefit, whether that’s using closed captions to watch a video in a noisy environment or to learn a foreign language; or using screen magnifiers to read small or low-contrast text.

Digital technology holds the promise of incredible advances in universal design: automated caption-generation and scene description, adaptive systems that anticipate a user’s intention based on statistical analysis of their historic usage, predictive text input, and more. Some of these adaptations will come from original manufacturers, but many will come from the community of those using the technology.

People with disabilities should face no conditions as to how they adapt their technology or whom they choose to work with to make adaptations on their behalf. None. Period.

People with disabilities do not each necessarily have the technical knowledge to modify their own devices, by themselves, to suit their needs. This is why the exemption for people with disabilities in the Mexican law is wholly inadequate. It precludes hiring someone else to effect a modification (that would be “commercial activity”) and it forecloses on general-purpose research that helps with adaptation because no one is allowed to provide technology or services to aid in bypassing TPMs to adapt technology.

Under the Mexican law, the way that, say, a blind person is permitted to make a work accessible is to:

  1. become a cybersecurity expert;
  2. discover a defect in the e-reader software;
  3. write a piece of software to liberate the ebook they want to read.

No one is allowed to offer them technical assistance, and they may not share their accomplishment to help others. It would be a joke, if it weren’t so grimly unfunny.

There can be no question that all of this is the product of either intent or extreme negligence. Not only did Mexico’s Congress have the benefit of 22 years’ worth of documented problems with the US version of this law, they also had an easy remedy to these problems. All they had to do was say, “You are allowed to bypass a TPM provided that you are not violating someone’s copyright.” That’s it. Rather than larding their exemptions with unattainable and vague conditions, Mexico’s lawmakers could have articulated a crisp, bright-line rule that anyone could follow: don’t bypass TPMs in a way that’s connected to copyright infringement, and you’re fine.

They didn’t.

If you are based in Mexico, we urge you to participate in R3D’s campaign “Ni Censura ni Candados” and send a letter to Mexico’s National Commission for Human Rights asking it to invalidate this flawed new copyright law. R3D will ask for your name, email address, and your comment, which will be subject to R3D’s privacy policy.


Turkey’s New Internet Law Is the Worst Version of Germany’s NetzDG Yet

For years, free speech and press freedoms have been under attack in Turkey. The country has the distinction of being the world’s largest jailer of journalists and has in recent years been cracking down on online speech. Now, a new law, passed by the Turkish Parliament on the 29th of July, introduces sweeping new powers and takes the country another giant step towards further censoring speech online. The law was ushered through parliament quickly, without allowing for opposition or stakeholder input, and aims for complete control over social media platforms and the speech they host. The bill was introduced after a series of allegedly insulting tweets aimed at President Erdogan’s daughter and son-in-law, and ostensibly aims to eradicate hate speech and harassment online. Turkish lawyer and Vice President of the Ankara Bar Association IT, Technology & Law Council Gülşah Deniz-Atalar called the law “an attempt to initiate censorship to erase social memory on digital spaces.”

Once ratified by President Erdogan, the law would require social media platforms with more than a million daily users to appoint a local representative in Turkey, which activists are concerned will enable the government to conduct even more censorship and surveillance. Failure to do so could result in advertisement bans, steep penalty fees, and, most troublingly, bandwidth reductions. Shockingly, the legislation introduces new powers for courts to order Internet providers to throttle social media platforms’ bandwidth by up to 90%, practically blocking access to those sites. Local representatives would be tasked with responding to government requests to block or take down content. The law foresees that companies would be required to remove content that allegedly violates “personal rights” and the “privacy of personal life” within 48 hours of receiving a court order or face heavy fines. It also includes provisions that would require social media platforms to store users’ data locally, prompting fears that providers would be obliged to transmit those data to the authorities, which experts expect to aggravate the already rampant self-censorship of Turkish social media users.

While Turkey has a long history of Internet censorship, with several hundred thousand websites currently blocked, this new law would establish unprecedented control of speech online by the Turkish government. When introducing the new law, Turkish lawmakers explicitly referred to the controversial German NetzDG law and a similar initiative in France as positive examples.

Germany’s Network Enforcement Act, or NetzDG for short, claims to tackle “hate speech” and illegal content on social networks and passed into law in 2017 (and has been tightened twice since). Hastily passed amidst vocal criticism from lawmakers, academics, and civil society experts, the law requires social media platforms with one million users to name a local representative authorized to act as a focal point for law enforcement and to receive content takedown requests from public authorities. It also requires social media companies with more than two million German users to remove or disable content that appears to be “manifestly illegal” within 24 hours of having been alerted to it. The law has been heavily criticized in Germany and abroad, and experts have suggested that it interferes with the EU’s central Internet regulation, the e-Commerce Directive. Critics have also pointed out that the strict time window to remove content does not allow for a balanced legal analysis. Evidence is indeed mounting that NetzDG’s conferral of policing powers on private companies continuously leads to takedowns of innocuous posts, thereby undermining freedom of expression.

A successful German export

Since its introduction, NetzDG has been a true Exportschlager, or export success, as it has inspired a number of similarly harmful laws in jurisdictions around the globe. A recent study reports that at least thirteen countries, including Venezuela, Australia, Russia, India, Kenya, the Philippines, and Malaysia have proposed or enacted laws based on the regulatory structure of NetzDG since it entered into force. 

In Russia, a 2017 law encourages users to report allegedly “unlawful” content and requires social media platforms with more than two million users to take down the content in question as well as possible re-posts, which closely resembles the German law. Russia’s copy-pasting of Germany’s NetzDG confirmed critics’ worst fears: that the law would serve as a model and legitimization for autocratic governments to censor online speech. 

Recent Malaysian and Philippine laws aimed at tackling “fake news” and misinformation also explicitly refer to NetzDG, although NetzDG’s scope does not even extend to cover misinformation. Both countries applied NetzDG’s model of imposing steep fines (and, in the case of the Philippines, up to 20 years of imprisonment) on social media platforms for failing to remove content swiftly.

In Venezuela, another 2017 law that expressly refers to NetzDG takes its logic one step further by imposing a six-hour window for removing content considered to be “hate speech”. The Venezuelan law—which relies on weak definitions and a very broad scope and was also legitimized by invoking the German initiative—is a potent and flexible tool for the country’s government to oppress dissidents.

Singapore is yet another country that took inspiration from Germany’s NetzDG: in May 2019, it adopted the Protection from Online Falsehoods and Manipulation Bill, which empowers the government to order platforms to correct or disable content, accompanied by significant fines if the platform fails to comply. A government report preceding the introduction of the law explicitly references the German law.

Similarly to these examples, the recently adopted Turkish law shows clear parallels with the German approach: targeting platforms of a certain size, it incentivizes platforms to implement takedown requests by stipulating significant fines, thereby turning platforms into the ultimate gatekeepers tasked with deciding on the legality of online speech. In important ways, the Turkish law goes well beyond NetzDG, as its scope includes not only social media platforms but also news sites. In combination with its exorbitant fines and the threat to block access to websites, the law enables the Turkish government to erase any dissent, criticism or resistance.

Even worse than NetzDG

But the fact that the Turkish law goes even beyond NetzDG highlights the danger of exporting Germany’s flawed law internationally. When Germany passed the law in 2017, states around the world were getting increasingly interested in regulating alleged and real online threats, ranging from hate speech to illegal content and cyberbullying. Already problematic in Germany, where it is embedded in a functioning legal system with appropriate checks and balances and equipped with safeguards absent from the laws it inspired, NetzDG has served to legitimize draconian censorship legislation across the globe. While it’s always bad if flawed laws are being copied elsewhere, this is particularly problematic in authoritarian states that have already pushed for and implemented severe censorship and restrictions on free speech and the freedom of the press. While the anti-free speech tendencies of countries like Turkey, Russia, Venezuela, Singapore and the Philippines long predate NetzDG, the German law surely provides legitimacy for them to further erode fundamental rights online.

Mexico’s New Copyright Law: Cybersecurity and Human Rights

This month, Mexico rushed through a new, expansive copyright law without adequate debate or consultation, and as a result, it adopted a national rule that is absolutely unfit for purpose, with grave implications for human rights and cybersecurity.

The new law was passed as part of the country’s obligations under Donald Trump’s United States-Mexico-Canada Agreement (USMCA), and it imports the US copyright system wholesale, and then erases the USA’s own weak safeguards for fundamental rights.

Central to the cybersecurity issue is Article 114 Bis, which establishes a new kind of protection for “Technical Protection Measures” (TPMs). These include rightsholder technologies commonly known as Digital Rights Management (DRM), but also basic encryption and other security measures that prevent access to copyrighted software. These are the familiar, dreaded locks that stop you from refilling your printer’s ink cartridge, using an unofficial App Store with your phone or game console, or watching a DVD from overseas in your home DVD player. Sometimes there is a legitimate security purpose in restricting the ability to modify the software in a device, but when you, the owner of the device, aren’t allowed to do so, serious problems arise and you become less able to ensure your own device’s security.

Under the US system, it is an offense to bypass these TPMs when they control access to a copyrighted work, even when no copyright infringement takes place. If you have to remove a TPM to modify your printer to accept third-party ink or your car to accept a new engine part, you do not violate copyright but you still violate this extension of copyright law.

Unsurprisingly, manufacturers have aggressively adopted TPMs because these allow them to control both their customers and their competitors. A company whose phone or game console is locked to a single, official App Store can monopolize the market for software for their products, skimming a percentage from every app sold to every owner of that device.

Customers cannot lawfully remove the TPM to use a third-party app-store, and competitors can’t offer them the tools to unlock their devices. “Trafficking” in these tools is a crime in the USA, punishable by a five-year prison sentence and a $500,000 fine.

But the temptation to use a TPM isn’t limited to controlling customers and competitors: companies that use TPMs also get to decide who can reveal the defects in their products.

Computer programs inevitably have bugs, and some of these bugs present terrible cybersecurity risks. Security defects allow hackers to remotely take over your car and drive it off the road, alter the ballot counts in elections, wirelessly direct your medical implants to kill you, or stalk and terrorize people.

The only reliable way to discover these defects before they can be weaponized is to subject products and systems to independent scrutiny. As the renowned security expert Bruce Schneier says, “Anyone can design a security system that works so well they can’t think of a way to defeat it. That doesn’t mean it works, that just means it works against people stupider than them.”

Independent security research is incompatible with laws protecting TPMs. In order to investigate systems and report on their defects, security researchers must be free to bypass TPMs, extract the software from the device, and subject it to testing and analysis.

When security researchers do discover defects, it’s common for companies to deny that they exist, or that they are important, painting the matter as a “he said/she said” dispute.

But these disputes have a simple resolution: security researchers routinely publish “proof of concept” code that allows anyone to independently verify their findings. This is simple scientific best practice: since the Enlightenment, scientists have published their findings and invited others to replicate them, a process that is at the core of the Scientific Method.

Section 1201 of the US Digital Millennium Copyright Act (DMCA 1201) defines a process for resolving disputes between TPMs and fundamental human rights. Every three years, the US Copyright Office hears petitions from people whose fundamental rights have been compromised by the TPM law, and grants exemptions to it.

The US government has repeatedly acknowledged that TPMs interfere with security research and has granted explicit exemptions to the TPM rule for it. These exemptions are weak (the US statute does not give the Copyright Office authority to authorize security researchers to publish proof-of-concept code), but they still provide much-needed surety for researchers attempting to warn us that we are in danger from our devices. When powerful corporations threaten security researchers in attempts to silence them, the Copyright Office’s exemptions can give them the courage to publish anyway, protecting all of us.

The US exemptions process is weak and inadequate. The Mexican version of this process is even weaker, and even more inadequate (the law doesn’t even bother to define how it will work, and merely suggests that some process will be created in the future).

Article 114 Quater (I) of Mexico’s law does contain a vague offer of protection for security research, similar to an equally vague assurance in the DMCA. The DMCA has been US law for 22 years, and in all that time, no one has ever used this clause to defend themselves.

To understand why, it is useful to examine the text of the Mexican law. Under the Mexican law, security researchers are only protected if their “sole purpose” is “testing, investigating or correcting the security of that computer, computer system or network.” It is rare for a security researcher to have only one purpose: they want to provide the knowledge they glean to the necessary parties so that security flaws do not harm any of the users of similar technology. They may also want to protect the privacy and autonomy of users of a computer, system, or network in ways that conflict with what the manufacturer would view as the security of the device.

Likewise, the Mexican law requires that security researchers be operating in “good faith,” creating unquantifiable risk. Researchers often disagree with manufacturers about the appropriate way to investigate and disclose security vulnerabilities. The vague statutory provision for security testing in the United States was far too unreliable to successfully foster essential security research, something that even the US Copyright Office has now repeatedly acknowledged.

The bottom line: our devices cannot be made more secure if independent researchers are prohibited from auditing them. The Mexican law will deter this activity. It will make Mexicans less secure.

Cybersecurity is intimately bound up with human rights. Insecure voting machines can compromise elections, and even when they are not hacked, the presence of insecurities robs elections of legitimacy, leading to civic chaos.

Civil society groups engaged in democratic political activity around the world have been attacked by commercial malware that uses security defects to invade their devices, subjecting them to illegal surveillance, kidnapping, torture, and even murder.

One such product, the NSO Group’s Pegasus malware, was implicated in the murder of Jamal Khashoggi. That same tool was used to target Mexican investigative journalists, human rights defenders, Mexican anti-sugar campaigners, and even Mexican children whose parents were investigative journalists.

Defects in our devices expose us to politically motivated surveillance, but they also expose us to risk from organized criminals: for example, “stalkerware” can enable human traffickers to monitor their victims.

Digital rights are human rights. Without the ability to secure our devices, we cannot fully enjoy our familial, civic, political, or social lives.

If you are based in Mexico, we urge you to participate in R3D’s campaign “Ni Censura ni Candados” and send a letter to Mexico’s National Commission for Human Rights asking it to invalidate this flawed new copyright law. R3D will ask for your name, email address, and your comment, which will be subject to R3D’s privacy policy.

How Mexico’s New Copyright Law Crushes Free Expression

When Mexico’s Congress rushed through a new copyright law as part of its adoption of Donald Trump’s United States-Mexico-Canada Agreement (USMCA), it largely copy-pasted the US copyright statute, with some modifications that made the law even worse for human rights.

The result is a legal regime that has all the deficits of the US system, and some new defects that are strictly hecho en Mexico, to the great detriment of the free expression rights of the Mexican people.

Mexico’s Constitution has admirable, far-reaching protections for the free expression rights of its people. Mexico’s Congress is not merely prohibited from censoring the people’s speech; it is also banned from making laws that would cause others to censor Mexicans’ speech.

Mexico’s Supreme Court has ruled that Mexican authorities and laws must recognize both Mexican constitutional rights law and international human rights law as the law of the land. This means that the human rights recognized in the Constitution and international human rights treaties such as the American Convention on Human Rights, including their interpretation by the authorized bodies, make up a “parameter of constitutional consistency,” except that where they clash, the most speech-protecting rule wins. Article 13 of the American Convention bans prior restraint (censorship prior to publication) and indirect restrictions on expression.

As we will see, Mexico’s new copyright law falls very far from this mark, exposing Mexicans to grave risks to their fundamental human right to free expression.

Filters

While the largest tech companies in America have voluntarily adopted algorithmic copyright filters, Article 114 Octies of the new Mexican law says that “measures must be taken to prevent the same content that is claimed to be infringing from being uploaded to the system or network controlled and operated by the Internet Service Provider after the removal notice.” This makes it clear that any online service in Mexico will have to run algorithms that intercept everything posted by a user, compare it to a database of forbidden sounds, words, pictures, and moving images, and, if it finds a match, it will have to block this material from public view or face potential fines.
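A hypothetical sketch can make the mechanism described above concrete. This toy filter compares each upload against a fingerprint database and blocks matches; it is only an illustration under simplifying assumptions (real systems such as Content ID use perceptual fingerprints that survive re-encoding, not the exact hashes shown here, and the database entry is invented):

```python
import hashlib

# Hypothetical database of fingerprints of works claimed by rightsholders.
# Real filters use perceptual fingerprints; an exact hash is shown here
# only to illustrate the match-and-block mechanism.
BLOCKED_FINGERPRINTS = {
    hashlib.sha256(b"label-recording-of-moonlight-sonata").hexdigest(),
}

def fingerprint(upload: bytes) -> str:
    """Reduce an upload to a comparable fingerprint (here, a SHA-256 hash)."""
    return hashlib.sha256(upload).hexdigest()

def moderate_upload(upload: bytes) -> str:
    """Return 'blocked' if the upload matches the database, else 'published'.

    Note what the filter cannot do: it has no way to tell whether a match
    is infringement, a licensed use, a quotation, or the uploader's own
    performance of a public-domain work.
    """
    if fingerprint(upload) in BLOCKED_FINGERPRINTS:
        return "blocked"
    return "published"
```

The limitation the surrounding text describes is visible in the code: the decision turns entirely on whether a match exists, with no input representing context, license, or exception.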

Requiring these filters is an unlawful restriction on freedom of expression: “At no time can an ex ante measure be put in place to block the circulation of any content that can be assumed to be protected. Content filtering systems put in place by governments or commercial service providers that are not controlled by the end-user constitute a form of prior censorship and do not represent a justifiable restriction on freedom of expression.” Moreover, filters are routinely wrong. They often mistake users’ own creative works for copyrighted works controlled by large corporations and block them at the source. For example, classical pianists who post their own performances of public-domain music by Beethoven, Bach, and Mozart find their work removed in an eyeblink by an algorithm that accuses them of stealing from Sony Music, which has registered its own performances of the same works.

To make matters worse, these filters amplify absurd copyright claims. For example, the company Rumblefish has claimed copyright in many recordings of ambient birdsong, with the effect that videos of people walking around outdoors get taken down by filters because a bird was singing in the background. More recently, humanitarian efforts to document war crimes fell afoul of automated filtering.

Filters can’t tell when a copyrighted work is incidental to a user’s material or central to it. For example, if your seven-hour scholarly conference’s livestream captures some background music playing during the lunch break, YouTube’s filters will wipe out all seven hours’ worth of audio, destroying the only record of the scientific discussions during the rest of the day.

For many years, people have toyed with the idea of preventing their ideological opponents’ demonstrations and rallies from showing up online by playing copyrighted music in the background, causing all video-clips from the event to be filtered away before the message could spread.

This isn’t a fanciful strategy: footage from US Black Lives Matter demonstrations is vanishing from the Internet because the demonstrators played amplified music during their protests.

No one is safe from filters: last week, CBS’s own livestreamed San Diego Comic-Con presentation was shut down by an erroneous copyright claim made on CBS’s own behalf.

Filters can only tell you whether a work matches something in their database; they can’t tell whether that match constitutes a copyright violation. Mexican copyright law contains “limitations and exceptions” for a variety of purposes. While these are narrower than US fair use, they nevertheless serve as a vital escape valve for Mexicans’ free expression. A filter can’t tell whether a match means you are a critic quoting a work for a legitimate purpose or an infringer breaking the law.

As if all this weren’t bad enough, the Mexican filter rule does not allow firms to ignore those with a history of making false copyright claims. If a fraudster sent Twitter, Facebook, or a Made-In-Mexico alternative claims asserting ownership of the works of Shakespeare, Cervantes, or Juana Inés de la Cruz, the companies could ignore those particular claims once their lawyers determined that the sender did not own the copyrights, but they would have to keep evaluating each new claim from this known bad actor. If the fraudster slipped just one real copyright claim in amidst the torrent of fraud, the online service provider would be required to detect that single valid claim and honor it.

This isn’t a hypothetical risk: “copyfraud” is a growing form of extortion, in which scammers claim to own artists’ copyrights, then coerce the artists with threats of copyright complaints.

Algorithms work at the speed of data, but their mistakes are corrected in human time (if at all). If an algorithm is right an incredible, unrealistic 99 percent of the time, it is still wrong one percent of the time. Platforms like YouTube, Facebook, and TikTok receive hundreds of millions of videos, pictures, and comments every day, and one percent of one hundred million is one million: one million judgments that have to be reviewed by the company’s employees to decide whether the content should be reinstated.
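The scale argument above can be checked with a back-of-the-envelope calculation (the 100-million figure is the article's illustrative number, not a measured platform statistic):

```python
# Even a filter that is right 99% of the time produces an enormous
# backlog of wrongful decisions at platform scale.
daily_items = 100_000_000   # illustrative upload volume from the text
accuracy = 0.99             # generously assumed error rate of 1%
errors_per_day = daily_items * (1 - accuracy)
print(f"{errors_per_day:,.0f} wrongful decisions per day awaiting human review")
```

Each of those wrongful decisions then joins a review queue that, as the Zawinski example shows, can take years to clear.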

The line to have your case heard is long. How long? Jamie Zawinski, a nightclub owner in San Francisco, posted an announcement of an upcoming performance by a band at his club in 2018, only to have it erroneously removed by Instagram. Zawinski appealed. Twenty-eight months later, Instagram reversed its algorithm’s determination and reinstated his announcement, more than two years after the event had taken place.

This kind of automated censorship is not limited to nightclubs. Your contribution to your community’s online discussion of an upcoming election is just as likely to be caught in a filter as Zawinski’s talking about a band. When (and if) the platform decides to let your work out of content jail, the vote will have passed, and with it, your chance to be part of your community’s political deliberations.

As terrible as filters are, they are also very expensive. YouTube’s “Content ID” filter has cost the company more than $100,000,000, and this flawed and limited filter accomplishes only a narrow slice of the filtering required under the new Mexican law. Few companies have an extra $100,000,000 to spend on filtering technology, and while the law says these measures “should not impose substantial burdens” on implementers, it also requires them to find a way to achieve permanent removal of material following a notification of copyright infringement. Filter laws mean even fewer competitors in the already monopolized online world, giving the Mexican people fewer places where they may communicate with one another.

TPMs

Section 1201 of America’s Digital Millennium Copyright Act (DMCA) is one of the most catastrophic copyright laws in history. It provides harsh penalties for anyone who tampers with or disables a “technical protection measure” (TPM): massive fines or, in some cases, prison sentences. These TPMs, including what is commonly known as “Digital Rights Management” (DRM), are the familiar, dreaded locks that stop you from refilling your printer’s ink cartridge, using an unofficial App Store with your phone or game console, or watching a DVD from overseas in your home DVD player.

You may have noticed that none of these things violates copyright, and yet, because you must remove a digital lock in order to do them, you could be sued in the name of copyright law. DMCA 1201 does not provide the clear, unambiguous protection that would be needed to safeguard free expression. One appellate court in the United States has explicitly held that you can be liable for a violation of Section 1201 even if you are making a fair use, and that is the position adopted by the U.S. Copyright Office. Other courts disagree, but the net effect is that you engage in these non-infringing uses and expressions at your peril. The US Congress has failed to clarify the law by tying liability for bypassing a TPM to an actual act of copyright infringement, under which removing the TPM from a Netflix video to put it on the public Internet (a copyright infringement) would remain forbidden, but removing it to make a copy for personal use (not a copyright infringement) would be fine.

The failure to clearly tie DMCA 1201 liability to infringement has had wide-ranging effects for repair, cybersecurity and competition that we will explore in later installments of this series. Today, we want to focus on how TPMs undermine free expression.

TPMs give unlimited power to manufacturers. An ever-widening constellation of devices is designed so that any modification requires bypassing a TPM and incurring liability. This allows companies to sell you a product but dictate how you must use it, preventing you from installing your own apps or other code to make it work the way you want it to.

The first speech casualty of TPM rules is the software author. This person can write code, a form of speech, but cannot run it on their own devices without permission from the manufacturer, nor give the code to others to run on theirs.

Why might a software author want to change how their device works? Perhaps because it is interfering with their ability to read literature, watch films, hear music, or see images. TPMs such as the global DVB CPCM standard enforce a policy called the “Authorized Domain” that defines what is and is not a family. Authorized Domain devices owned by a standards-compliant family can all share creative works among themselves, allowing parents and children to share with one another.

But an “Authorized Domain family” is not the same as an actual family. The Authorized Domain was designed by rich people from the global north working for multinational corporations, whose families are far from typical. The Authorized Domain will let you share videos between your boat, your summer home, and your SUV but it won’t let you share videos between a family whose daughter works as a domestic worker in another country, whose son is a laborer in another state, and whose parents are migrant workers who are often separated (there are far more families in this situation than there are families with yachts and second homes!).

Even if your family meets with the approval of an algorithm designed in a distant board-room by strangers who have never lived a life like yours, you still may find yourself unable to partake in culture that you are entitled to. TPMs typically require a remote server to function, and when your Internet goes down, your books or movies can be rendered unviewable.

It’s not just Internet problems that can cause the art and culture you own to vanish: last year, Microsoft became the latest in a long list of companies to switch off their DRM servers, because it decided it no longer wanted to be a bookstore. Everyone who ever bought a book from Microsoft lost their books.

Forever.

Mexico’s Congress did nothing to rebalance its version of America’s TPM rules. Indeed, Mexico’s rules are worse than America’s. Under DMCA 1201, the US Copyright Office holds hearings every three years to grant exemptions to the TPM rule, granting people the right to remove or bypass TPMs for legitimate purposes. America’s copyright regulator has granted a very long list of these exemptions, having found that TPMs were interfering with Americans in unfair, unjust, and even unsafe ways. Of course, that process is far from perfect: it’s slow, skewed heavily in favor of rightsholders, and illegally restricts free expression by forcing would-be speakers to ask the government in advance for permission through an arbitrary process.

Mexico’s new copyright law mentions a possible equivalent proceeding, but leaves it maddeningly undefined and certainly does nothing to remedy the defects in the US process. Recall that the USMCA is a trade agreement, supposedly designed to put all three countries on an equal footing; yet Americans have the benefit of more than two decades’ worth of exemptions to this terrible rule, while Mexicans will have to labor under its full weight until (and unless) they can use this undefined process to secure a comparable list of exemptions. And even then, they won’t have the flexibility offered by fair use under US law.

Notice and Takedown

Section 512 of the US DMCA created a “notice and takedown” rule that allows rightsholders or their representatives to demand the removal of works without any showing of evidence or finding of fact that their copyrights were infringed. This has been a catastrophe for free expression, allowing the removal of material without due care or even through malicious, fraudulent acts (the author of this article had his New York Times bestselling novel improperly removed from the Internet by careless lawyers for Fox Entertainment, who mistook it for an episode of a TV show of the same name).

As bad as America’s notice and takedown system is, Mexico’s is now worse.

In America, online services that honor notice-and-takedown requests get a “safe harbor,” meaning they are not considered liable for their users’ copyright infringements. However, a US online service that believes a user’s content is non-infringing may ignore a takedown notice; the service is liable at all only if it meets the tests for “secondary liability” for copyright infringement, something that is far from automatic. If the rightsholder sues, the service may end up in court alongside its user, but it can still rely on the safe harbor for other works published by other users, provided it removes them upon notice of infringement.

The Mexican law, by contrast, makes removal a strict requirement. Under Article 232 Quinquies (II), providers must honor all takedown demands by copyright owners, even obviously overreaching ones, or face fines of 1,000 to 20,000 UMA.

Further, Article 232 Quinquies (III) of the Mexican law allows anyone claiming to be an infringed-upon rightsholder to obtain the personal information of the alleged infringer. This means that gangsters, thin-skinned public officials, stalkers, and others can use fraudulent copyright claims to unmask their critics. Who will complain about corrupt police, abusive employers, or local crime-lords when their personal information can be retrieved with such ease? We recently defended the anonymity of a person who questioned their religious community, when the religious organization tried to use the corresponding part of the DMCA to identify them. In the name of copyright, the law gives new tools to anyone with power to stifle dissent and criticism.

This isn’t the only “chilling effect” in the Mexican law. Under Article 114 Octies (II), a platform must comply with takedown requests for mere links to a Web-page that is allegedly infringing. Linking, by itself, is not an infringement in the United States or Canada, and its legal status is contested in Mexico. There are good reasons why linking is not infringement: It’s important to be able to talk about speech elsewhere on the Internet and to share facts, which may include the availability of copyrighted works whose license or infringement status is unknown. Besides that, Web-pages change all the time: if you link to a page that is outside of your control and it is later updated in a way that infringes copyright, you could be the target of a takedown request.

Act now!

If you are based in Mexico, we urge you to participate in R3D’s campaign “Ni Censura ni Candados” and send a letter to Mexico’s National Commission for Human Rights asking it to invalidate this flawed new copyright law. R3D will ask for your name, email address, and your comment, which will be subject to R3D’s privacy policy.

Mexico’s new copyright law puts human rights in jeopardy

Today, the Electronic Frontier Foundation joins a coalition of international organizations in publishing an open letter of opposition to Mexico’s new copyright law; the letter lays out the threats that Mexico’s new law poses to fundamental human rights and calls upon Mexico’s National Human Rights Commission to take action to invalidate this flawed and unsalvageable law.

In a rushed process without meaningful consultation or debate, Mexico’s Congress has adopted a new copyright law modeled on the U.S. system, without taking any account of the well-publicized, widely acknowledged problems with American copyright law. The new law was passed as part of a package of legal reforms accompanying the United States-Mexico-Canada Agreement (USMCA), Donald Trump’s 2020 successor to 1994’s North American Free Trade Agreement (NAFTA).

However, Mexico’s implementation of this Made-in-America copyright system imposes far more restrictions than either the USMCA demands or that Canada or the USA have imposed on themselves. This new copyright regime places undue burdens on Mexican firms and the Mexican people, conferring a permanent trade advantage on the richer, more developed nations of the USA and Canada, while undermining the fundamental rights of Mexicans guaranteed by the Mexican Constitution and the American Convention on Human Rights.

The opposition that sprang up after the swift passage of the new Mexican copyright law faces many barriers, but among the most serious is a disinformation campaign that (predictably) characterizes the claims about U.S. copyright law as “fake news”. EFF has more experience with the defects of U.S. copyright law than anyone, and in the coming days we will use it to explain in detail how Mexico’s copyright law repeats and magnifies the errors that American lawmakers committed in 1998.

In 1998, the U.S. adopted the Digital Millennium Copyright Act (DMCA), a law whose problems the US government has documented in exquisite detail in the decades since. By the U.S. government’s own account, the DMCA presents serious barriers to:

  • free expression;
  • national resiliency;
  • economic self-determination;
  • the rights of people with disabilities;
  • cybersecurity;
  • independent repair;
  • education;
  • archiving;
  • access to knowledge; and
  • competition.

Despite these manifest defects, the U.S. government successfully pressured Canada into adopting substantially similar legislation in 2011 with the passage of Canada’s Bill C-11.

Both the U.S. and Canada have taken substantial steps to mitigate the defects in their copyright laws. Canada, in particular, used the USMCA as an occasion to rebalance its copyright law, removing some of the onerous terms that Mexico has now adopted.

In a series of posts over the coming days, we will elucidate the ways in which the Mexican copyright bill imposes undue and unique burdens on Mexico, Mexican people, and Mexican industry, and what lessons Mexico should have learned from the U.S. and Canadian experience with this one-sided, overreaching version of copyright for the digital world.

In 1998, the US tragically failed to see the import of getting the rules for the Internet right, passing a copyright law that treated the Internet as a glorified entertainment medium. When Canada adopted its law in 2011, it had no excuse for missing the fact that the Internet had become the world’s digital nervous system, a medium where we transact our civics and politics; our personal, familial and romantic lives; our commerce and employment; our health and our education.

But these failings pale in comparison to the dereliction of Mexican lawmakers in importing this system to Mexico. The pandemic and its lockdown made it clear that everything we do not only involves the Internet: it requires the Internet. In today’s world, it is absolutely inexcusable for a lawmaker to regulate the net as though it were nothing more than a glorified video-on-demand service.

Mexico’s prosperity depends on getting this right. Even more: the human rights of the Mexican people require that the Congress of Mexico or the Mexican Court get this right.

Read the letter from EFF, Derechos Digitales and NGOs around the world to Mexico’s National Human Rights Commission here.

If you are based in Mexico, we urge you to participate in R3D’s campaign “Ni Censura ni Candados” and send a letter to Mexico’s National Commission for Human Rights asking it to invalidate this flawed new copyright law. R3D will ask for your name, email address, and your comment, which will be subject to R3D’s privacy policy.

EU Court Again Rules That NSA Spying Makes U.S. Companies Inadequate for Privacy

The European Union’s highest court today made clear—once again—that the US government’s mass surveillance programs are incompatible with the privacy rights of EU citizens. The judgment was made in the latest case involving Austrian privacy advocate and EFF Pioneer Award winner Max Schrems. It invalidated the “Privacy Shield,” the data protection deal that secured the transatlantic data flow, and narrowed the ability of companies to transfer data using individual agreements (Standard Contractual Clauses, or SCCs).

Despite the many “we are disappointed” statements by the EU Commission, U.S. government officials, and businesses, the ruling should come as no surprise: it follows the reasoning the court laid out in Schrems’ previous case, in 2015.

Back then, the EU Court of Justice (CJEU) noted that European citizens had no real recourse in US law if their data was swept up in the U.S. government’s surveillance schemes. Such a violation of their basic privacy rights meant that U.S. companies could not provide an “adequate level of [data] protection,” as required by EU law and promised by the EU/U.S. “Safe Harbor” self-regulation regime. Accordingly, the Safe Harbor was deemed inadequate, and data transfers by companies between the EU and the U.S. under it were forbidden.

Since that original decision, multinational companies, the U.S. government, and the European Commission have sought to paper over the giant gaps between U.S. spying practices and the EU’s fundamental values. The U.S. government made clear that it did not intend to change its surveillance practices, nor to push for legislative fixes in Congress. All parties instead agreed to merely fiddle around the edges of transatlantic data practices, reinventing the previous Safe Harbor agreement, which weakly governed corporate handling of EU citizens’ personal data, under a new name: the EU-U.S. Privacy Shield.

EFF, along with the rest of civil society on both sides of the Atlantic, pointed out that this was just shuffling chairs on the Titanic. The Court cited government programs like PRISM and Upstream as its primary reason for ending data flows between Europe and the United States, not the (admittedly woeful) privacy practices of the companies themselves. That meant it was entirely in the hands of the U.S. government and Congress to decide whether U.S. tech companies are allowed to handle European personal data. The message to the U.S. government is simple: Fix U.S. mass surveillance, or undermine one of the United States’ major industries.

Five years after the original iceberg of Schrems 1, Schrems 2 has pushed the Titanic fully beneath the waves. The new judgment explicitly calls out the weaknesses of U.S. law in protecting non-U.S. persons from arbitrary surveillance, highlighting that:

Section 702 of the FISA does not indicate any limitations on the power it confers to implement surveillance programmes for the purposes of foreign intelligence or the existence of guarantees for non-US persons potentially targeted by those programmes.

and

… neither Section 702 of the FISA, nor E.O. 12333, read in conjunction with PPD-28, correlates to the minimum safeguards resulting, under EU law, from the principle of proportionality, with the consequence that the surveillance programmes based on those provisions cannot be regarded as limited to what is strictly necessary.

The CJEU could not be more blunt in its pronouncements, but it remains unclear how the various actors that could fix this problem will react. Will EU data protection authorities step up their enforcement activities and invalidate SCCs that authorize data flows to the U.S. for failing to protect EU citizens from U.S. mass surveillance programs? And if U.S. corporations cannot confidently rely on either SCCs or the defunct Privacy Shield, will they lobby harder for real U.S. legislative change to protect the privacy rights of Europeans in the U.S.—or just find another temporary stopgap to force yet another CJEU decision? And will the European Commission move from defending the status quo and current corporate practices, to truly acting on behalf of its citizens?

Whatever the initial reaction by EU regulators, companies and the Commission, the real solution lies, as it always has, with the United States Congress. Today’s decision is yet another significant indicator that the U.S. government’s foreign intelligence surveillance practices need a massive overhaul. Congress half-heartedly began the process of improving some parts of FISA earlier this year—a process which now appears to have been abandoned. But this decision shows, yet again, that the U.S. needs much broader, privacy-protective reform, and that Congress’ inaction makes us all less safe, wherever we are.


Brazil’s Fake News Bill Would Dismantle Crucial Rights Online and is on a Fast Track to Become Law

Despite widespread complaints about its effects on free expression and privacy, the Brazilian Congress is moving forward in its attempts to hastily approve a “Fake News” bill. We’ve already reported on some of the most concerning issues in previous proposals, but the draft text released this week is even worse. It would impede users’ access to social networks and applications, require the construction of massive databases of users’ real identities, and oblige companies to keep track of our private communications online. It makes demands that disregard key characteristics of the Internet, like end-to-end encryption and decentralised tool-building, undermining innovation, and it could criminalize the online expression of political opinions. Although the initial bill arose as an attempt to address legitimate concerns about the spread of online disinformation, it has opened the door to arbitrary and unnecessary measures that strike at settled privacy and freedom of expression safeguards.

You can join the hundreds of other protestors and organizations telling Brazil’s lawmakers why not to approve this Fake News bill right now.

Here’s how the latest proposals measure up:

Providers Are Required to Retain the Chain of Forwarded Communications

Social networks and any other Internet application that allows social interaction would be obliged to keep the chain of all communications that have been forwarded, whether distribution of the content was done maliciously or not. This is a massive data retention obligation which would affect millions of innocent users instead of only those investigated for an illegal act. Although Brazil already has obligations for retaining specific communications metadata, the proposed rule goes much further. Piecing together a communication chain may reveal highly sensitive aspects of individuals, groups, and their interactions — even when none are actually involved in illegitimate activities. The data will end up as a constantly-updated map of connections and relations between nearly every Brazilian Internet user: it will be ripe for abuse.

Furthermore, this obligation disregards the way more decentralized communication architectures work. It assumes that application providers are always able to identify and distinguish forwarded and non-forwarded content, and to identify the origin of a forwarded message. In practice, this depends on the design of the service and on the relationship between applications and services. When the two are independent, it is common that the service provider cannot differentiate between forwarded and non-forwarded content, and that the application does not store the forwarding history except on the user’s device. This architectural separation is traditional in Internet communications, including web browsers, FTP clients, email, XMPP, file sharing, etc. All of them allow actions equivalent to forwarding content, or copying and pasting it, where the client application and its functions are technically and legally independent from the service to which it connects. The obligation would also negatively impact open source applications, which are designed to let end-users not only understand but also modify and adapt how their local applications work.
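To make the architectural point concrete, here is a minimal, purely illustrative Python sketch (all names and record fields are hypothetical, not drawn from any real messaging protocol) of why a service provider may simply have no forwarding metadata to retain: when a client “forwards” a message by copying its text into a new message, the record the server receives is indistinguishable from an original composition.

```python
# Hypothetical sketch: a service that relays messages between users.
# The server sees only sender, recipient, and an opaque body -- there
# is no flag telling it whether the body was typed or copied.

def send(server_log, sender, recipient, body):
    """Relay a message; the server records only what it can observe."""
    server_log.append({"from": sender, "to": recipient, "body": body})

server_log = []

# Alice composes an original message to Bob.
send(server_log, "alice", "bob", "meet at noon")

# Bob "forwards" it to Carol by copying the text into a new message.
# From the server's perspective, this record looks exactly like an
# original message Bob composed himself.
send(server_log, "bob", "carol", "meet at noon")

# Both records have the same shape: no forwarding chain exists
# server-side for the provider to retain.
assert server_log[0].keys() == server_log[1].keys()
```

Under end-to-end encryption the gap is even wider: the body itself is ciphertext, so even content-matching across messages is unavailable to the provider.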

It Compels Applications to Collect All Users’ IDs and Cell Phone Numbers

The bill creates a general identification obligation, compelling Internet applications to require all users to give proof of identity through a national ID or passport, as well as their phone number. This requirement goes in the opposite direction to the principles and safeguards set out in the country’s data protection law, which has yet to enter into force. A vast database of identity cards, held by private actors, is in no way aligned with the standards of data minimization, purpose limitation, and risk prevention in processing and storing personal data that Brazil’s data protection law represents. Current versions of the “Fake News” bill do not even ensure the use of pseudonyms for Internet users. As we’ve said many times before, there are myriad reasons why individuals may wish to use a name other than the one on their IDs: women rebuilding their lives despite the harassment of domestic violence abusers, activists and community leaders facing threats, investigative journalists carrying out sensitive research in online groups, and transgender users affirming their identities are just a few examples of the need for pseudonymity in a modern society. Under the new bill, users’ accounts would be linked to their cell phone numbers, allowing — and in some cases requiring — telecom service providers and Internet companies to track users even more closely. Anyone without a mobile number would be prevented from using any social network — if users’ numbers are disabled for any reason, their social media accounts would be suspended. In addition to privacy harms, the rule creates serious hurdles to speaking, learning, and sharing online.

Censorship, Data Localization, and Blocking

These proposals seriously curb the online expression of political opinions and could quickly lead to political persecution. The bill sets high fines for online sponsored content that mocks electoral candidates or questions election reliability. Although the trustworthiness of elections is crucial for democracy, and disinformation attempts to disrupt it should be properly tackled, a broad interpretation of the bill would severely endanger the vital work of e-voting security researchers in preserving that trustworthiness. Electoral security researchers already face serious harassment in the region. Other new and vague criminal offenses set out in the bill are prone to silence legitimate critical speech and could criminalize users’ routine actions without proper consideration of malicious intent.

The bill revives the disastrous idea of data localization. One of its provisions would force social networks to store user data in a special database hosted in Brazil. Data localization rules such as this can make data especially vulnerable to security threats and surveillance, while also imposing serious barriers to international trade and e-commerce.

Finally, as the icing on the cake of a raft of provisions that disregard the Internet’s global nature, providers that fail to comply with the rules would be subject to a suspension penalty. Such suspensions are unjustifiable and disproportionate, curtailing the communications of millions of Brazilians and incentivizing applications to overcomply to the detriment of users’ privacy, security, and free expression.

EFF has joined many other organizations across the world calling on the Brazilian parliament to reject the latest version of the bill and stop the fast-track mode that has been adopted. You can also take action against the “Fake News” bill now, with our Twitter campaign aimed at senators of the National Congress.
