
The GDPR, Privacy and Monopoly

In Privacy Without Monopoly: Data Protection and Interoperability, we took a thorough look at the privacy implications of various kinds of interoperability. We examined the potential privacy risks of interoperability mandates, such as those contemplated by 2020’s ACCESS Act (USA), the Digital Services Act and Digital Markets Act (EU), and the recommendations presented in the Competition and Markets Authority report on online markets and digital advertising (UK). 

We also looked at the privacy implications of “competitive compatibility” (comcom, AKA adversarial interoperability), where new services are able to interoperate with existing incumbents without their permission, by using reverse-engineering, bots, scraping, and other improvised techniques common to unsanctioned innovation.

Our analysis concluded that while interoperability creates new privacy risks (for example, that a new firm might misappropriate user data under cover of helping users move from a dominant service to a new rival), these risks can largely be mitigated with thoughtful regulation and strong enforcement. More importantly, interoperability also has new privacy benefits, both because it makes it easier to leave a service with unsuitable privacy policies, and because this creates real costs for dominant firms that do not respect their users’ privacy: namely, an easy way for those users to make their displeasure known by leaving the service.

Critics of interoperability (including the dominant firms targeted by interoperability proposals) emphasize the fact that weakening a tech platform’s ability to control its users weakens its power to defend its users.

They’re not wrong, but they’re not telling the whole story, either. It’s fine for companies to defend their users’ privacy—we should accept nothing less—but the standards for defending user privacy shouldn’t be set by corporate fiat in a remote boardroom; they should come from democratically accountable law and regulation.

The United States lags in this regard: Americans whose privacy is violated have to rely on patchy (and often absent) state privacy laws. The country needs—and deserves—a strong federal privacy law with a private right of action.

That’s something Europeans actually have. The General Data Protection Regulation (GDPR), a powerful, far-reaching, and comprehensive (if flawed and sometimes frustrating) privacy law, came into effect in 2018.

The European Commission’s pending Digital Services Act (DSA) and Digital Markets Act (DMA) both contemplate some degree of interoperability, prompting two questions:

  1. Does the GDPR mean that the EU doesn’t need interoperability in order to protect Europeans’ privacy? And
  2. Does the GDPR mean that interoperability is impossible, because there is no way to satisfy data protection requirements while permitting third-party access to an online service?

We think the answers are “no” and “no,” respectively. Below, we explain why.

Does the GDPR mean that the EU doesn’t need interoperability in order to protect Europeans’ privacy?

Increased interoperability can help to address user lock-in and ultimately create opportunities for services to offer better data protection.

The European Data Protection Supervisor has weighed in on the relationship between the GDPR and the Digital Markets Act (DMA), affirming that interoperability can advance the GDPR’s goals.

Note that the GDPR doesn’t directly mandate interoperability, but rather “data portability,” the ability to take your data from one online service to another. In this regard, the GDPR represents the first two steps of a three-step process for full technological self-determination: 

  1. The right to access your data, and
  2. The right to take your data somewhere else.

The GDPR’s data portability framework is an important start! Lawmakers correctly identified the potential of data portability to promote competition among platform services and to reduce the risk of user lock-in by lowering switching costs for users.

The law is clear on the duty of platforms to provide data in a structured, commonly used, and machine-readable format, and on users’ right to transmit that data without hindrance from one data controller to another. Where technically feasible, users also have the right to ask the data controller to transmit the data directly to another controller.

Recital 68 of the GDPR explains that data controllers should be encouraged to develop interoperable formats that enable data portability. The WP29, a former official European data protection advisory body, explained that this could be implemented by making application programming interfaces (APIs) available.
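To make that concrete, here is a minimal sketch of what a structured, commonly used, machine-readable export might look like when served from such an API. All field and endpoint details here are our own invention for illustration; neither the GDPR nor any real service specifies this format.

    import json

    # A hypothetical Article 20-style export: structured, commonly used
    # (JSON), and machine-readable, so a rival service can ingest it.
    export = {
        "user_id": "alice@example.com",
        "generated_at": "2021-05-01T12:00:00Z",
        "posts": [
            {"id": 1, "text": "Hello, world", "created": "2020-11-03T09:30:00Z"},
        ],
        "contacts": [
            {"name": "Bob", "handle": "bob@example.net"},
        ],
    }

    # "Transmit ... to another controller" can then be as simple as serving
    # this document from an authenticated endpoint that the user points the
    # new controller at.
    print(json.dumps(export, indent=2))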

However, the GDPR’s data portability limits and interoperability shortcomings have become more obvious since it came into effect. These shortcomings are exacerbated by lax enforcement. Data portability rights are insufficient to get Europeans the technological self-determination the GDPR seeks to achieve.

The limits the GDPR places on which data you have the right to export, and when you can demand that export, have not served their purpose. They have left users with a right to data portability, but few options about where to port that data to.

Missing from the GDPR is step three:

  3. The right to interoperate with the service you just left.

The DMA proposal is a legislative way of filling in that missing third step, creating a “real time data portability” obligation, which is a step toward real interop, of the sort that will allow you to leave a service, but remain in contact with the users who stayed behind. An interop mandate breathes life into the moribund idea of data portability.

Does the GDPR mean that interoperability is impossible, because there is no way to satisfy data protection requirements while permitting third-party access to an online service?

The GDPR is very far-reaching, and European officials are still coming to grips with its implications. It’s conceivable that the Commission could propose a regulation that cannot be reconciled with EU data protection rules. We saw that happen in 2019, when the EU Parliament adopted the Copyright Directive without removing the controversial and ill-conceived Article 13 (now Article 17). Article 17’s proponents confidently asserted that it would result in mandatory copyright filters for all major online platforms, not realizing that those filters cannot be reconciled with the GDPR.

But we don’t think that’s what’s going on here. Interoperability—both the narrow interop contemplated in the DMA, and more ambitious forms of interop beyond the conservative approach the Commission is taking—is fully compatible with European data protection, both in terms of what Europeans legitimately expect and what the GDPR guarantees.

Indeed, the existence of the GDPR solves the thorniest problem involved in interop and privacy. By establishing the rules for how providers must treat different types of data and when and how consent must be obtained and from whom during the construction and operation of an interoperable service, the GDPR moves hard calls out of the corporate boardroom and into a democratic and accountable realm.

Facebook often asserts that its duty to other users means that it has to block you from bringing some of “your” data with you if you want to leave for a rival service. There is definitely some material on Facebook that is not yours, like private conversations between two or more other people. Even if you could figure out how to access those conversations, we want Facebook to take steps to block your access and prevent you from taking that data elsewhere.

But what about when Facebook asserts that its privacy duties mean it can’t let you bring the replies to your private messages, or the comments on your public posts, or the entries in your address book with you to a rival service? These cases are less clear-cut than other people’s private conversations, but blocking you from accessing this data also helps Facebook lock you onto its platform—which is also one of the most surveilled environments in the history of data collection.

There’s something genuinely perverse about deferring these decisions to the reigning world champions of digital surveillance, especially because an unfavorable ruling about which data you can legitimately take with you when you leave Facebook might leave you stuck on Facebook, without a ready means to address any privacy concerns you have about Facebook’s policies.

This is where the GDPR comes in. Rather than asking whether Facebook thinks you have the right to take certain data with you or to continue accessing that data from a rival platform, the GDPR lets us ask the law which kinds of data connections are legitimate, and when consent from other implicated users is warranted. Regulation can make good, accountable decisions about whether a survey app deserves access to all of the “likes” by all of its users’ friends (Facebook decided it did, and the data ended up in the hands of Cambridge Analytica), or whether a user should be able to download a portable list of their friends to help switch to another service (which Facebook continues to prevent).

The point of an interoperability mandate—either the modest version in the DMA or a more robust version that allows full interop—is to allow alternatives to high-surveillance environments like Facebook to thrive by reducing switching costs. There’s a hard collective action problem of getting all your friends to leave Facebook at the same time as you. If people can leave Facebook but stay in touch with their Facebook friends, they don’t need to wait for everyone else in their social circle to feel the same way. They can leave today.

In a world where platforms—giants, startups, co-ops, nonprofits, tinkerers’ hobbies—all treat the GDPR as the baseline for data-processing, services can differentiate themselves by going beyond the GDPR, sparking a race to the top for user privacy.

Consent, Minimization and Security

We can divide all the data that can be passed from a dominant platform to a new, interoperable rival into several categories. There is data that should not be passed. For example, a private conversation between two or more parties who do not want to leave the service and who have no connection to the new service. There is data that should be passed after a simple request from the user. For example, your own photos that you uploaded, with your own annotations; your own private and public messages, etc. Then there is data generated by others about you, such as ratings. Finally, there is someone else’s personal information contained in a reply to a message you posted.

The last category is tricky, and it turns on the GDPR’s very fulcrum: consent. The GDPR’s rules on data portability clarify that exporting data must respect the rights and freedoms of others. Thus, although there is no outright ban on porting data that does not belong to the requesting user, data from other users shouldn’t be passed on without their explicit consent (or another GDPR legal basis) and further safeguards.
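As a rough sketch of how an interoperable service might sort records into the categories above (the category names and rules are our own shorthand for this analysis, not GDPR text), consider:

    from enum import Enum, auto

    class Decision(Enum):
        NEVER = auto()           # do not pass at all
        ON_REQUEST = auto()      # pass after a simple request from the user
        NEEDS_CONSENT = auto()   # pass only with the other party's consent
                                 # (or another GDPR legal basis)

    def portability_decision(record: dict, user: str) -> Decision:
        """Toy classifier mirroring the categories described above."""
        authors = set(record.get("authors", []))
        if authors == {user}:
            # Your own photos, annotations, and messages.
            return Decision.ON_REQUEST
        if record.get("about") == user:
            # Data generated by others about you, such as ratings.
            return Decision.ON_REQUEST
        if user in record.get("thread_participants", []):
            # Someone else's words in a reply to your message: blended
            # data, so explicit consent (or another basis) is needed.
            return Decision.NEEDS_CONSENT
        # A private conversation among people unconnected to the request.
        return Decision.NEVER

    # A friend's reply in a thread the departing user started:
    reply = {"authors": ["bob"], "thread_participants": ["alice", "bob"]}
    assert portability_decision(reply, "alice") is Decision.NEEDS_CONSENT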

That poses a unique challenge for allowing users to take their data with them to other platforms, when that data implicates other users—but it also promises a unique benefit to those other users.

If the data you take with you to another platform implicates other users, the GDPR requires that they consent to it. The GDPR’s rules for this are complex, but also flexible.

For example, say, in the future, that Facebook obtains consent from users to allow their friends to take the comments, annotations, and messages they send to those friends with them to new services. If you quit Facebook and take your data (including your friends’ contributions to it) to a new service, the service doesn’t have to bother all your friends to get their consent again—under the WP29 Guidelines, so long as the new service uses the data in a way that is consistent with the uses Facebook obtained consent for in the first place, that consent carries over.

But even though the new service doesn’t have to obtain consent from your friends, it does have to notify them within one month, so your friends will always know where their data ended up.

And the new platform has all the same GDPR obligations that Facebook has: they must only process data when they have a “lawful basis” to do so; they must practice data minimization; they must maintain the confidentiality and security of the data; and they must be accountable for its use.

None of that prevents a new service from asking your friends for consent when you bring their data along with you from Facebook. A new service might decide to do this just to be sure that they are satisfying the “lawfulness” obligations under the GDPR.

One way to obtain that consent is to incorporate it into Facebook’s own consent “onboarding”—the consent Facebook obtains when each user creates their account. To comply with the GDPR, Facebook already has to obtain consent for a broad range of data-processing activities. If Facebook were legally required to permit interoperability, it could amend its onboarding process to include consent for the additional uses involved in interop.

Of course, the GDPR does not permit far-reaching, speculative consent. There will be cases where no amount of onboarding consent can satisfy either the GDPR or the legitimate privacy expectations of users. In these cases, Facebook can serve as a “consent conduit,” through which departing users can ask their friends for consent to take blended data with them to a rival platform, and through which those friends can grant or decline that consent.
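Mechanically, a consent conduit is just a relay: the incumbent forwards a consent question to the user who stayed behind and reports the answer back, without deciding anything itself. A minimal sketch, with invented names throughout (nothing here reflects an actual Facebook API or anything the GDPR specifies):

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class ConsentRequest:
        requester: str                  # the user who is leaving
        subject: str                    # the friend whose data is implicated
        items: List[str]                # e.g. ["replies in thread 42"]
        granted: Optional[bool] = None  # None until the friend answers

    class ConsentConduit:
        """The incumbent relays consent requests; it does not decide them."""

        def __init__(self) -> None:
            self.pending: List[ConsentRequest] = []

        def ask(self, req: ConsentRequest) -> None:
            # In practice: an in-app notice to the friend who stayed behind.
            self.pending.append(req)

        def answer(self, req: ConsentRequest, granted: bool) -> None:
            req.granted = granted

    conduit = ConsentConduit()
    req = ConsentRequest("alice", "bob", ["replies in thread 42"])
    conduit.ask(req)
    conduit.answer(req, granted=False)  # Bob declines; Alice leaves without it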

Such a system would mean that some people who leave Facebook would have to abandon some of the data they’d hope to take with them—their friends’ contact details, say, or the replies to a thread they started—and it would also mean that users who stayed behind would face a certain amount of administrative burden when their friends tried to leave the service. Facebook might dislike this on the grounds that it “degraded the user experience,” but on the other hand, a flurry of notices from friends and family who are leaving Facebook behind might spur the users who stayed to reconsider that decision and leave as well.

For users pondering whether to allow their friends to take their blended data with them onto a new platform, the GDPR presents a vital assurance: because the GDPR does not permit companies to seek speculative, blanket consent for new purposes you haven’t already consented to, and because the companies your friends take your data to have no way of contacting you, they generally cannot lawfully make any further use of that data (except under one of the other narrow bases permitted by the GDPR, for example, to fulfil a “legitimate interest”). Your friends can still access it, but neither they, nor the services they’ve fled to, can process your data beyond the scope of the initial consent to move it to the new context. Once you and the data are separated, there is no way for third parties to obtain the consent they’d need to lawfully repurpose it for new products or services.

Beyond consent, the GDPR binds online services to two other vital obligations: “data minimization” and “data security.” These two requirements act as a further backstop to users whose data travels with their friends to a new platform.

Data minimization means that any user data that lands on a new platform has to be strictly necessary for its users’ purposes (whether or not there might be some commercial reason to retain it). That means that if a Facebook rival imports your comments on its new user’s posts, any irrelevant data that Facebook transmits along with them (say, your location when you left the comment, or which link brought you to the post) must be discarded. This provides a second layer of protection for users whose friends migrate to new services: not only is their consent required before their blended data travels to the new service, but that service must not retain or process any extraneous information that seeps in along the way.
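In code, minimization amounts to an allowlist applied at the border, at import time. A minimal sketch (the field names are invented for illustration):

    # Only the fields the new service actually needs for its purpose survive
    # the import; everything else is dropped, not kept "just in case".
    NECESSARY_FIELDS = {"author", "text", "timestamp", "thread_id"}

    def minimize(imported_record: dict) -> dict:
        """Discard extraneous data (location, referrer, ...) on arrival."""
        return {k: v for k, v in imported_record.items() if k in NECESSARY_FIELDS}

    raw = {
        "author": "alice",
        "text": "Nice photo!",
        "timestamp": "2021-02-01T10:00:00Z",
        "thread_id": 42,
        "geo": (52.52, 13.40),                   # where the comment was written
        "referrer": "https://example.com/link",  # which link led to the post
    }
    assert "geo" not in minimize(raw) and "referrer" not in minimize(raw)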

The GDPR’s security guarantee, meanwhile, guards against improper handling of the data you consent to let your friends take with them to new services. That means the data has to be encrypted in transit, and likewise at rest on the rival service’s servers. And even if the new service is a startup, it has a regulated, affirmative duty to practice good security across the board, with real liability if it commits a material omission that leads to a breach.
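Here is one sketch of what “encrypted at rest” can mean in practice, using the widely used Python cryptography package. Key management is deliberately elided; a real service would keep keys in a KMS or HSM rather than generating them ad hoc.

    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()  # in production: fetched from a KMS, never hard-coded
    box = Fernet(key)

    record = b'{"author": "alice", "text": "Nice photo!"}'
    stored = box.encrypt(record)          # what actually lands on disk
    assert box.decrypt(stored) == record  # readable only with the key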

Without interoperability, the monopolistic high-surveillance platforms are likely to enjoy long term, sturdy dominance. The collective action problem represented by getting all the people on Facebook whose company you enjoy to leave at the same time you do means that anyone who leaves Facebook incurs a high switching cost.

Interoperability allows users to depart Facebook for rival platforms, including those that both honor the GDPR and go beyond its requirements. These smaller firms will have less political and economic influence than the monopolists whose dominance they erode, and when they do go wrong, their errors will be less consequential because they impact fewer users.

Without interoperability, privacy’s best hope is to gentle Facebook, rendering it biddable and forcing it to abandon its deeply held beliefs in enrichment through nonconsensual surveillance—and to do all of this without the threat of an effective competitor that Facebook users can flee to no matter how badly it treats them.

Interoperability without privacy safeguards is a potential disaster, provoking a competition to see who can extract the most data from users while offering the least benefit in return. Every legislative and regulatory interoperability proposal in the US, the UK, and the EU contains some kind of privacy consideration, but the EU alone has a region-wide, strong privacy regulation that creates a consistent standard for data protection no matter what measure is being contemplated. Having both components, an interoperability requirement and a comprehensive privacy regulation, is the best way to ensure interoperability leads to competition in desirable activities, not privacy invasions.


Why Indian Courts Should Reject Traceability Obligations

End-to-end encryption is under attack in India. The Indian government’s dangerous new online intermediary rules force messaging applications to track—and be able to identify—the originator of any message, a requirement that is fundamentally incompatible with the privacy and security protections of strong encryption. Three petitions have been filed (by Facebook, WhatsApp, and Arimbrathodiyil) asking the Indian High Courts (in Delhi and Kerala) to strike down these rules.

The traceability provision—Rule 4(2) in the “Intermediary Guidelines and Digital Media Ethics Code” rules (English version starts at page 19)—was adopted by the Ministry of Electronics and Information Technology earlier this year. The rules require that any large social media intermediary that provides messaging services “shall enable the identification of the first originator of the information on its computer resource” in response to a court order or a decryption request issued under the 2009 Decryption Rules. (The Decryption Rules allow authorities to request the interception, monitoring, or decryption of any information generated, transmitted, received, or stored in any computer resource.)

The minister has claimed that the rules will “[not] impact the normal functioning of WhatsApp” and said that “the entire debate on whether encryption would be maintained or not is misplaced” because technology companies can still decide to use encryption—so long as they accept the “responsibility to find a technical solution, whether through encryption or otherwise” that permits traceability. WhatsApp strongly disagrees, writing that “traceability breaks end-to-end encryption and would severely undermine the privacy of billions of people who communicate digitally.” 

The Indian government’s assertion is bizarre because the rules compel intermediaries to know information about the content of users’ messages that they currently don’t know, and that encryption currently protects. This legal mandate seeks to change WhatsApp’s security model and technology, and the government’s assurances imply that such a change shouldn’t matter to users and shouldn’t trouble companies.

That’s wrong. WhatsApp uses a privacy-by-design implementation that protects users’ secure communication by making a forwarded message indistinguishable, to the server, from any other kind of message. When a WhatsApp user forwards a message using the arrow, the forward is marked on the client side; the fact that the message has been forwarded is not visible to the WhatsApp server. The traceability mandate would make WhatsApp change the application so that this information, previously invisible to the server, becomes visible.
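Schematically (this is our own simplification, not WhatsApp’s actual wire format), the point is that the forward marker lives inside the end-to-end encrypted payload, so only the recipients’ clients can read it:

    import json
    from cryptography.fernet import Fernet  # stand-in for real E2EE (e.g. the Signal protocol)

    session = Fernet(Fernet.generate_key())

    def build_message(to: str, text: str, forwarded: bool) -> dict:
        # The "forwarded" marker is sealed inside the encrypted body; the
        # server only ever sees the routing envelope.
        body = json.dumps({"text": text, "forwarded": forwarded}).encode()
        return {
            "envelope": {"to": to},               # visible to the server
            "ciphertext": session.encrypt(body),  # opaque to the server
        }

    msg = build_message("bob", "have you seen this?", forwarded=True)
    # Traceability would force the marker (plus an originator identity) out
    # of the ciphertext and into the envelope, where the server must log it.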

The Indian government also defended the rules by noting that legal safeguards restrict the process of gaining access to the identity of a person who originated a message, that such orders can only be issued for national security and serious crime investigations, and on the basis that “it is not any individual who can trace the first originator of the information.” However, messaging services do not know ahead of time which messages will or will not be subject to such orders; as WhatsApp has noted,

there is no way to predict which message a government would want to investigate in the future. In doing so, a government that chooses to mandate traceability is effectively mandating a new form of mass surveillance. To comply, messaging services would have to keep giant databases of every message you send, or add a permanent identity stamp—like a fingerprint—to private messages with friends, family, colleagues, doctors, and businesses. Companies would be collecting more information about their users at a time when people want companies to have less information about them.  

India’s legal safeguards will not solve the core problem:

The rules represent a technical mandate for companies to re-engineer or re-design their systems for every user, not just for criminal suspects.

The overall design of messaging services must change to comply with the government’s demand to identify the originator of a message.

Such changes move companies away from privacy-focused engineering and data minimization principles that should characterize secure private messaging apps.

This provision is one of many features of the new rules that pose a threat to expression and privacy online, but it’s drawn particular attention because of the way it comes into collision with end-to-end encryption. WhatsApp previously wrote:

“Traceability” is intended to do the opposite by requiring private messaging services like WhatsApp to keep track of who-said-what and who-shared-what for billions of messages sent every day. Traceability requires messaging services to store information that can be used to ascertain the content of people’s messages, thereby breaking the very guarantees that end-to-end encryption provides. In order to trace even one message, services would have to trace every message.
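One way to see why: under one commonly discussed implementation, the service attaches a fingerprint (a hash) of every message to a registry so that a future order can be answered. Because the service cannot know in advance which message a court will name, it must fingerprint everything. A toy sketch (our illustration, not a description of any deployed system):

    import hashlib

    registry = {}  # message fingerprint -> first sender seen

    def record(sender: str, plaintext: str) -> None:
        # To answer *any* future originator order, the service must
        # fingerprint *every* message at send time: mass data collection.
        digest = hashlib.sha256(plaintext.encode()).hexdigest()
        registry.setdefault(digest, sender)

    def first_originator(plaintext: str) -> str:
        return registry[hashlib.sha256(plaintext.encode()).hexdigest()]

    record("alice", "meet at the park")
    record("bob", "meet at the park")  # a forward of the same text
    assert first_originator("meet at the park") == "alice"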

Rule 4(2) applies to WhatsApp, Telegram, Signal, iMessage, or any “significant social media intermediaries” with more than 5 million registered users in India. It can also apply to federated social networks such as Mastodon or Matrix if the government decides these pose a “material risk of harm” to national security (rule 6). Free and open-source software developers are also afraid that they’ll be targeted next by this rule (and other parts of the intermediary rules), including for developing or operating more decentralized services. So Facebook and WhatsApp aren’t the only ones seeking to have the rules struck down; a free software developer named Praveen Arimbrathodiyil, who helps run community social networking services in India, has also sued, citing the burdens and risks of the rules for free and open-source software and not-for-profit communications tools and platforms.

This fight is playing out across the world. EFF has long said that end-to-end encryption, where intermediaries do not know the content of users’ messages, is a vitally important feature for private communications, and has criticized tech companies that don’t offer it, or that offer it in a watered-down or confusing way. End-to-end encryption of messages is something WhatsApp is doing right—following industry best practices on how to protect users—and the government should not try to take it away.


Chile’s New “Who Defends Your Data?” Report Shows ISPs’ Race to Champion User Privacy

Derechos Digitales’ fourth ¿Quién Defiende Tus Datos? (Who Defends Your Data?) report on Chilean ISPs’ data privacy practices launched today, showing that companies must keep improving their commitments to user rights if they want to hold their leading positions. Although Claro (América Móvil) remains at the forefront, as in the 2019 report, Movistar (Telefónica) and GTD have made progress in all the evaluated categories. WOM lost points and ended in a tie with Entel for second position, while VTR lagged behind.

Over the last four years, certain transparency practices that once seemed unusual in Latin America have become increasingly common. In Chile, they have even become a default. This year, all companies evaluated except for VTR received credit for adopting three important industry-accepted best practices: publishing law enforcement guidelines, which help provide a glimpse into the process and standards companies use for analyzing government requests for user data; disclosing personal data processing practices in contracts and policies; and releasing transparency reports.

Overall, the publishing of transparency reports has also become more common. These are critical for understanding a company’s practices in managing user data and its handling of government data requests. VTR is the only company that has not updated its transparency report recently; its most recent edition dates from May 2019. After the last edition, GTD published its first transparency report and law enforcement guidelines. Similarly, Movistar has for the first time released specific guidelines for authorities requesting access to users’ data in Chile, and received credit for denying legally controversial government requests for users’ data.

Most of the companies also have policies stating their right to provide user notification when there is no secrecy obligation in place or when its term has expired. But as in the previous edition, earning a full star in this category requires more than that. Companies have to clearly set up a notification procedure or make concrete efforts to put one in place. Derechos Digitales also urged providers to engage in legislative discussions regarding Chile’s cybercrime bill, in favor of stronger safeguards for user notification. Claro has upheld the right to notification within the country’s data protection law reform and has raised concerns against attempts to increase the data retention period for communications metadata in the cybercrime bill.

Responding to concerns over the government’s use of location data in the context of the COVID pandemic, the new report also sheds light on whether ISPs have made public commitments not to disclose user location data without a prior judicial order, unless the data is anonymized and aggregated. While the pandemic has changed society in many ways, it has not reduced the need for privacy when it comes to sensitive personal data. Companies’ policies should also push back against requests for sensitive personal data that target groups rather than individuals. In addition, the study aimed to spot which providers went public about their anonymized and aggregated location data-sharing agreements with private and public institutions. Movistar is the only company that has disclosed such agreements.

Together, the six researched companies account for 88.3% of fixed Internet users and 99.2% of mobile connections in Chile.

This year’s report rates providers in five criteria overall: data protection policies, law enforcement guidelines, defending users in courts or Congress, transparency reports, and user notification. The full report is available in Spanish, and here we highlight the main findings.

Main results

Data Protection Policies and ARCO Rights

Compared to the 2019 edition, Movistar and GTD improved their marks on data protection policies. Companies should not only publish those policies, but commit to supporting user-centric data protection principles inspired by the bill reforming the data protection law, under discussion in the Chilean Congress. GTD has overcome its poor score from 2019 and earned a full star in this category this year. Movistar received a partial score for failing to commit to the complete set of principles. On the upside, the ISP has set up a dedicated page to inform users about their ARCO rights (access, rectification, cancellation, and opposition). The report also gives positive remarks to WOM, Claro, and Entel for providing a specific point of contact for users to exercise these rights. WOM went above and beyond, making it easier for users to unsubscribe from the provider’s targeted-ads database.

Transparency Reports and Law Enforcement Guidelines

Both transparency reports and law enforcement guidelines have become an industry norm among Chile’s main ISPs. All featured companies have published them, although VTR has failed to disclose an updated transparency report since the 2019 study. Among many advances since the last edition, GTD disclosed its first transparency report, covering government data requests during 2019. The company earned a partial score in this category for not releasing new statistical data about 2020’s requests.

As for law enforcement guidelines, not all companies clearly state the need for a judicial order before handing over different kinds of communication metadata to authorities. Claro, Entel, and GTD have more explicit commitments in this sense. VTR requires a judicial order before carrying out interception measures or handing over call records to authorities. However, the ISP does not mention this requirement for other metadata, such as IP addresses. Movistar’s guidelines are detailed about the types of user data the government can ask for, but they refer to judicial authorization only when addressing the interception of communications.

Finally, WOM’s 2021 guidelines explicitly require a warrant before handing over phone and tower traffic data, as well as geolocation data. As the report points out, in early 2020 WOM was featured in the news as the only ISP to comply with a direct and massive location data request made by prosecutors, a report the company denied. We’ve written about this case as an example of worrisome reverse searches, which target all users in a particular area instead of specific individuals. Directly related to this concern, this year’s report underscores Claro’s and Entel’s commitment to comply only with individualized personal data requests.

Pushing for User Notification about Data Requests

Claro remains in the lead when it comes to user notification. Beyond stating in company policy that it has a right to notify users when notice is not prohibited by law (as the other companies do, except for Movistar), Claro’s policies also describe the user notice procedure for data requests in civil, labor, and family judicial cases. Derechos Digitales points out the ISP has also explored, with the Public Prosecutor’s Office, ways to implement such notification in criminal cases once the secrecy obligation has expired. WOM’s transparency report mentions similar efforts, urging authorities to collaborate in providing information to ISPs about the status of investigations and legal cases, so that ISPs know when a secrecy obligation is no longer in effect. As the company says:

“Achieving advances in this area would allow the various stakeholders to continue to comply with their legal duties and at the same time make progress in terms of transparency and safeguarding users’ rights.”

Having Users’ Backs in the Face of Disproportionate Data Requests and Legislative Proposals

Companies can also stand with their users by challenging disproportionate data requests or defending users’ privacy in Congress. WOM and Claro have specific sections on their websites listing some of their work on this front (see, respectively, the tabs “protocolo de entrega de información a la autoridad” and “relación con la autoridad”). Such reports include Claro’s meetings with Chilean senators who take part in the commission discussing the cybercrime bill. The ISP reports having emphasized concerns about the expansion of the mandatory retention period for metadata, as well as suggesting that the reform of the country’s data protection law should explicitly authorize telecom operators to notify users about surveillance measures.

Entel and Movistar have received equally high scores in this category. Entel, in particular, has kept up its fight against a disproportionate request made by Chile’s telecommunications regulator (Subtel) for subscriber data. In 2018, the regulator asked for personal information pertaining to the totality of Entel’s customer base, in order to share it with private research companies carrying out satisfaction surveys. Other Chilean ISPs received the same request, but only Entel challenged the legal grounds of Subtel’s authority to make such a demand. The case, first reported in this category in the last edition, had a new development in late 2019, when the Supreme Court confirmed the sanctions against Entel for not delivering the data, but reduced the company’s fine. The civil society groups Derechos Digitales, Fundación Datos Protegidos, and Fundación Abriendo Datos have recently released a statement stressing how Subtel’s request conflicts with data protection principles, particularly purpose limitation, proportionality, and data security.

Movistar’s credit in this category also relates to a Subtel request for subscriber data, this one in 2019. The ISP denied the demand, pointing out a legal tension between the agency’s oversight authority to request customer personal data without user consent and the privacy safeguards provided by Chile’s Constitution and data protection law, which set limits on personal data sharing.

***

Since its first edition in 2017, Chile’s report has shown solid and continuous progress, fostering ISP competition toward stronger standards and commitments in favor of users’ privacy and transparency. Derechos Digitales’ work is part of a series of reports across Latin America and Spain adapted from EFF’s Who Has Your Back? report, which for nearly a decade has evaluated the practices of major global tech companies.


#ParoNacionalColombia and Digital Security Considerations for Police Brutality Protests

In the wake of Colombia’s tax reform proposal, which came as more Colombians fell into poverty as a result of the pandemic, demonstrations spread over the country in late April, reviving the social unrest and socio-economic demands that led people to the streets in 2019. The government’s attempt to reduce public outcry by withdrawing the tax proposal and drafting a new text did not work. Protests continue online and offline. Violent repression on the ground by police, and the military presence in Colombian cities, have raised concerns among national and international groups—from civil organizations across the globe to human rights bodies that are calling on the government to respect people’s constitutional rights to assemble and allow free expression on the Internet and in the streets. Media outlets have reported on government crackdowns against the protesters, including physical violence, missing persons, and deaths; the seizure of phones and other equipment used to document protests and police action; and internet disruptions and content restrictions or takedowns by online platforms.

As the turmoil and demonstrations continue, we’ve put together some useful resources from EFF and allies we hope can help those attending protests and using technology and the Internet to speak up, report, and organize. Please note that the authors of this post come from primarily U.S.- and Brazil-based experiences. The post is by no means comprehensive. We urge readers to be aware that protest circumstances change quickly; digital security risks, and their mitigation, can vary depending on your location and other contexts. 

This post has two sections covering resources for navigating protests and resources for navigating networks.


Resources for Navigating Protests

To attend protests safely, demonstrators need to consider many factors and threats: these range from protecting themselves from harassment and their own devices’ location tracking capabilities, to balancing the need to use technologies for documenting law enforcement brutality and disseminating information. Another consideration is using encryption to protect data and messages from unintended readers. Some resources that may be helpful are:

For Protestors (Colombia)

For Bringing Devices to Protests

For Using Videos and Photos to Document Police Brutality, Protect Protesters’ Faces, and Scrub Metadata

Resources for Navigating Network Issues

What happens if the Internet is really slow, down altogether, or there’s some other problem keeping people from connecting online? What if social media networks remove or block content from being widely seen, and each platform has a different policy for addressing content issues? We’ve included some resources for understanding hindrances to sending messages and posts or connecting online. 

For Network and Platform Blockages (Colombia) 

For Network Censorship 

For Selecting a Circumvention Tool

If circumvention (not anonymity) is your primary goal for accessing and sending material online, the following resources might be helpful. Keep in mind that Internet Service Providers (ISPs) are still able to see that you are using one of these tools (e.g. that you’re on a Virtual Private Network (VPN) or that you’re using Tor), but not where you’re browsing, nor the content of what you are accessing. 

VPNs

A few diagrams are included below showing the differences between connecting to a site directly, connecting through a VPN, and connecting through Tor (from the Understanding and Circumventing Network Censorship SSD guide).

[Diagram: the request for eff.org passes through a router and an ISP server on the way to eff.org’s server.]

Your computer tries to connect to https://eff.org, which is at a listed IP address (the numbered sequence beside the server associated with EFF’s website). The request for that website is made and passed along to various devices, such as your home network router and your ISP, before reaching the intended IP address of https://eff.org. The website successfully loads for your computer.

[Diagram: the request passes, encrypted, through the router, the ISP’s server, and the VPN’s server before finally landing at eff.org’s server.]

In this diagram, the computer uses a VPN, which encrypts its traffic and connects to eff.org. The network router and ISP might see that the computer is using a VPN, but the data is encrypted. The ISP routes the connection to the VPN server in another country. This VPN then connects to the eff.org website.

Tor 

Digital security guide on using Tor Browser, which uses the volunteer-run Tor network, from Surveillance Self-Defense (EFF): How to: Use Tor on macOS (English), How to: Use Tor for Windows (English), How to: Use Tor for Linux (English), Cómo utilizar Tor en macOS (Español), Cómo Usar Tor en Windows (Español), Como usar Tor en Linux (Español)

[Diagram: the request is encrypted and passes through the router, the ISP server, and three Tor servers before landing at the intended eff.org server.]

The computer uses Tor to connect to eff.org. Tor routes the connection through several “relays,” which can be run by different individuals or organizations all over the world. The final “exit relay” connects to eff.org. The ISP can see that you’re using Tor, but cannot easily see what site you are visiting. The owner of eff.org, similarly, can tell that someone using Tor has connected to its site, but does not know where that user is coming from.

For Peer-to-Peer Resources

Peer-to-Peer alternatives can be helpful during a shutdown or during network disruptions and include tools like the Briar App, as well as other creative uses such as Hong Kong protesters’ use of AirDrop on iOS devices.

For Platform Censorship and Content Takedowns

If your content is taken down from services like social media platforms, this guide may be helpful for understanding what might have happened, and making an appeal (Silenced Online): How to Appeal (English)

For Identifying Disinformation

Verifying the authenticity of information (like determining if the poster is part of a bot campaign, or if the information itself is part of a propaganda campaign) is tremendously difficult. Data & Society’s reports on the topic (English), and Derechos Digitales’ thread (Español) on what to pay attention to and how to check information might be helpful as a starting point. 

Need More Help?

For those on the ground who need digital security assistance, Access Now has a 24/7 Helpline for human rights defenders and folks at risk, which is available in English, Spanish, French, German, Portuguese, Russian, Tagalog, Arabic, and Italian. You can contact their helpline at https://www.accessnow.org/help/

Thanks to former EFF fellow Ana Maria Acosta for her contributions to this piece.


Proposed New Internet Law in Mauritius Raises Serious Human Rights Concerns

As debate continues in the U.S. and Europe over how to regulate social media, a number of countries—such as India and Turkey—have imposed stringent rules that threaten free speech, while others, such as Indonesia, are considering them. Now, a new proposal to amend Mauritius’ Information and Communications Technologies Act (ICTA) with provisions to install a proxy server to intercept otherwise secure communications raises serious concerns about freedom of expression in the country.

Mauritius, a democratic parliamentary republic with a population of just over 1.2 million, has an Internet penetration rate of roughly 68% and a high rate of social media use. The country’s Constitution guarantees the right to freedom of expression but, in recent years, advocates have observed a backslide in online freedoms.

In 2018, the government amended the ICTA, imposing heavy sentences—as high as ten years in prison—for online messages that “inconvenience” the receiver or reader. The amendment was in turn utilized to file complaints against journalists and media outlets in 2019.

In 2020, as COVID-19 hit the country, the government levied a tax on digital services operating in the country, defined as any service supplied by “a foreign supplier over the internet or an electronic network which is reliant on the internet; or by a foreign supplier and is dependent on information technology for its supply.”

The latest proposal to amend the ICTA has raised alarm bells amongst local and international free expression advocates, as it would enable government officials who have established instances of “abuse and misuse” to block social media accounts and track down users using their IP addresses.

The amendments are reminiscent of those in India and Turkey in that they seek to regulate foreign social media, but differ in that Mauritius—a far smaller country—lacks the ability to force foreign companies to maintain a local presence. In a consultation paper on the amendments, proponents argue:

Legal provisions prove to be relatively effective only in countries where social media platforms have regional offices. Such is not the case for Mauritius. The only practical solution in the local context would be the implementation of a regulatory and operational framework which not only provides for a legal solution to the problem of harmful and illegal online content but also provides for the necessary technical enforcement measures required to handle this issue effectively in a fair, expeditious, autonomous and independent manner.

While some of the concerns raised in the paper—such as the fact that social media companies do not sufficiently moderate content in the country’s local language—are valid, the solutions proposed are disproportionate. 

A Change.org petition calling on local and international supporters to oppose the amendments notes that “Whether human … or AI, the system that will monitor, flag and remove information shared by users will necessarily suffer from conscious or unconscious bias. These biases will either be built into the algorithm itself, or will afflict those who operate the system.” 

Most concerning, however, is that the authorities wish to install a local proxy server that impersonates social media networks in order to fool devices and web browsers into sending secure information to that server instead of to the social media networks themselves, effectively creating an archive of the social media information of all users in Mauritius before resending it to the networks’ servers. The plan fails to mention how long the information will be archived, or how user data will be protected from data breaches.
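The technical reason this requires more than flipping a switch is TLS certificate validation: a proxy that impersonates a social network cannot present a certificate chaining to a trusted authority, so browsers and apps will refuse the connection unless users are made to trust a government-controlled certificate. A minimal sketch of the client-side check such a proxy would have to defeat (Python standard library only):

    import socket
    import ssl

    ctx = ssl.create_default_context()  # trusts only well-known root CAs

    def fetch_cert(host: str) -> dict:
        """Handshake with `host` and return its validated certificate."""
        with socket.create_connection((host, 443), timeout=10) as sock:
            # wrap_socket verifies the certificate chain and hostname; an
            # impersonating proxy raises SSLCertVerificationError here,
            # unless its CA has been forced into the client's trust store.
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.getpeercert()

    print(fetch_cert("www.eff.org")["subject"])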

Local free expression advocates are calling on the ICTA authorities to “concentrate their efforts in ethically addressing concerns made by citizens on posts that already exist and which have been deemed harmful.” Supporters are encouraged to sign the Change.org petition or to submit comments to the open consultation by emailing socialmediaconsultation@icta.mu before May 5, 2021.


Brazil’s Bill Repealing National Security Law Has its Own Threats to Free Expression

The Brazilian Chamber of Deputies is on track to approve a law that threatens freedom of expression and the right to assemble and protest, with the stated aim of defending the democratic constitutional state. Bill 6764/02 repeals the Brazilian National Security Law (Lei de Segurança Nacional), one of the ominous legacies of the country’s dictatorship, which lasted until 1985. Although there is broad consensus over the harm the National Security Law represents, Brazilian civil groups have been stressing that replacing it with a new act, without careful discussion of its grounds, principles, and specific rules, risks rebuilding a framework that serves repressive rather than democratic ends.

The Brazilian National Security Law has a track record of abuses in persecuting and silencing dissent, with vague criminal offenses and provisions targeting speech. After a relatively dormant period, it gained new prominence during President Bolsonaro’s administration. It has served as a legal basis for accusations against opposition leaders, critics, journalists, and even a congressman aligned with Bolsonaro in the country’s current turbulent political landscape.

However, its proposed replacement, Bill 6764/02, raises various concerns, some particularly unsettling for digital rights. Even with alternative drafts trying to untangle them, problems remain.

First, the espionage offense in the bill defines the handover of secret documents to foreign governments as a crime. It’s crucial that this and related offenses not be applied in ways that would raise serious human rights concerns: to whistleblowers revealing facts or acts that could imply the violation of human rights, crimes committed by government officials, or other serious wrongdoing affecting public administration; or to journalistic and investigative reporting and the work of civil groups and activists that bring governments’ unlawful practices and abuses to light. These acts should be clearly exempted from the offense. Amendments under discussion seek to address these concerns, but there is no assurance they will prevail in the final text if the new law is approved.

The IACHR’s Freedom of Expression Rapporteur has highlighted how often governments in Latin America classify information on national security grounds without proper assessment and substantiation. The report provides a number of examples from the region of the hurdles this creates for accessing information related to human rights violations and government surveillance. The IACHR Rapporteur stresses the key role of investigative journalists, the protection of their sources, and the need to grant legal backing against reprisal to whistleblowers who expose human rights violations and other wrongdoing. This aligns with the UN Freedom of Expression Rapporteur’s previous recommendations and reinforces the close relationship between democracy and strong safeguards for those who take a stand by unveiling sensitive public interest information. As the UN High Commissioner for Human Rights has already pointed out:

The right to privacy, the right to access to information and freedom of expression are closely linked. The public has the democratic right to take part in the public affairs and this right cannot be effectively exercised by solely relying on authorized information.

Second, the proposal also aims to tackle “fake news” by making “mass misleading communication” a crime against democratic institutions. Although the bill should be strictly tailored to counter exceptionally serious threats, bringing disinformation into its scope, on the contrary, potentially targets millions of Internet users. Disseminating “facts the person knows to be untrue” that could put at risk “the health of the electoral process” or “the free exercise of constitutional powers,” using “means not provided by the private messaging application,” could lead to up to five years in jail.

We agree with the digital rights groups on the ground that have stressed the provision’s harmful implications for users’ freedom of expression. Criminalizing the spread of disinformation is full of traps. It criminalizes speech by relying on vague terms (as in this bill) that are easily twisted to stifle critical voices and those challenging entrenched political power. Joint declarations of the Freedom of Expression Rapporteurs have repeatedly urged States not to take that road.

Moreover, the provision applies when such messages were distributed using “means not provided by the application.” Presuming that the use of such means is inherently malicious poses a major threat to interoperability. The technical ability to plug one product or service into another product or service, even when one service provider hasn’t authorized that use, has been a key driver to competition and innovation. And dominant companies repeatedly abuse legal protections to ward off and try to punish competitors. 

This is not to say we do not care about the malicious spread of disinformation at scale. But it should not be part of this bill, given the bill’s specific scope, nor should it be addressed without careful attention to unintended consequences. There is an ongoing debate, and other avenues to pursue that are aligned with fundamental rights and rely on joint efforts from the public and private sectors.

Political pressure has hastened the bill’s vote. Bill 6764/02 may pass in a few days in the Chamber of Deputies, pending the Senate’s approval. We join civil and digital rights groups in warning that a rushed approach actually creates greater risks for the very things the bill is supposed to protect. These and other troubling provisions put freedom of expression on the spot, and would also serve to spur government surveillance and repression. These are the risks the defense of democracy should fend off, not reiterate.


Indian Government’s Plans to Ban Cryptocurrency Outright Are A Bad Idea

While Turkey hit the headlines last week with a ban on paying for items with cryptocurrency, the government of India appears to be moving towards outlawing cryptocurrency completely. An unnamed senior government official told Reuters last month that a forthcoming bill this parliamentary session would include the prohibition of the “possession, issuance, mining, trading and transferring [of] crypto-assets.” Officials have subsequently done little to dispel the concern that they are seeking a full cryptocurrency ban: in response to questions by Indian MPs about the timing and the content of a potential Cryptocurrency Act, the Finance Ministry was non-committal, beyond stating that the bill would follow “due process.” 


If rumors of a complete ban accurately describe the bill, it would be a drastic and over-reaching prohibition that would require draconian oversight and control to enforce. But it would also be in keeping with previous overreactions to cryptocurrency by regulators and politicians in India.

Indian regulators’ involvement with cryptocurrency began four years ago with concerns about consumer safety in the face of scams, Ponzi schemes, and the unclear future of many blockchain projects. The central bank issued a circular prohibiting all regulated entities, including banks, from servicing businesses dealing in virtual currencies. Nearly two years later, the ban was overturned by the Indian Supreme Court on the ground that it amounted to disproportionate regulatory action in the absence of evidence of harm to the regulated entities. A subsequent 2019 report by the Finance Ministry proposed a draft bill that would have led to a broad ban on the use of cryptocurrency. It’s this bill that commentators suspect will form the core of the new legislation.

The Indian government is worried about the use of cryptocurrency to facilitate illegal activity, but this ignores the many entirely legal uses for cryptocurrencies that already exist and that will continue to develop in the future. Cryptocurrency is naturally more censorship-resistant than many other forms of financial instruments currently available. It provides a powerful market alternative to the existing financial behemoths that exercise control over much of our online transactions today, so that websites engaged in legal (but controversial) speech have a way to receive funds when existing financial institutions refuse to serve them. Cryptocurrency innovation also holds the promise of righting other power imbalances: it can expand financial inclusion by lowering the cost of credit, offering instant transaction resolution, and enhancing customer verification processes. Cryptocurrency can help unbanked individuals get access to financial services.

If the proposed cryptocurrency bill does impose a full prohibition, as rumors suggest, the Indian government should consider, too, the enforcement regime it would have to create. Many cryptocurrencies, including Bitcoin, offer some privacy-enhancing features which make it relatively easy for the geographical location of a cryptocurrency transaction to be concealed, so while India’s cryptocurrency users would be prohibited from using local, regulated cryptocurrency services, they could still covertly join the rest of the world’s cryptocurrency markets. As the Internet and Mobile Association of India has warned, the result would be that Indian cryptocurrency transactions would move to “illicit” sites that would be far worse at protecting consumers.

Moreover, if the Indian government plans to effectively police its own draconian rules, it would need to block, disrupt, and spy on Internet traffic to detect or prevent cryptocurrency transactions. Those are certainly powers that past and present Indian administrations have sought; but unless they are truly necessary and proportionate to a legitimate aim, such interference will violate international law and, if India’s Supreme Court decides they are unreasonable, will fail once again to pass judicial muster.

The Indian government has claimed that it does want to support blockchain technology in general. In particular, the current government has promoted the idea of a “Digital Rupee”, which it expects to be placed on a statutory footing in the same bill that bans private cryptocurrencies. It’s unclear what the two measures have in common: a centrally-run digital currency has no need to be implemented on a blockchain, a technology primarily useful for distributed trust consensus and of little applicability when the government itself provides the centralized backstop for trust. Meanwhile, legitimate companies and individuals exploring the blockchain for purposes for which it is well-suited will always fear falling afoul of the country’s criminal sanctions—which will, Reuters’ source claims, include ten-year prison sentences in its list of punishments. Such liability would be a severe disincentive to any independent investor or innovator, whether commercial or working in the public interest.

Addressing potential concerns around cryptocurrency by banning the entire technology would be excessive and unjust. It denies Indians access to the innovations that may come from this sector and, if enforced at all, would require prying into Indians’ digital communications to an unnecessary and disproportionate degree.

Categories
Content Blocking Intelwars International

India’s Strict Rules For Online Intermediaries Undermine Freedom of Expression

India has introduced draconian changes to its rules for online intermediaries, tightening government control over the information ecosystem and what can be said online. The rules seek to restrict social media companies and other content hosts from setting their own moderation policies, including policies framed to comply with international human rights obligations. The new “Intermediary Guidelines and Digital Media Ethics Code” (2021 Rules) have already been used in an attempt to censor speech about the government. Within days of publication, a state governed by the ruling Bharatiya Janata Party used the rules to issue a legal notice to an online news platform that has been critical of the government. The legal notice was withdrawn almost immediately after public outcry, but it served as a warning of how the rules can be used.

The 2021 Rules, ostensibly created to combat misinformation and illegal content, substantially revise India’s intermediary liability scheme. They were notified as rules under the Information Technology Act 2000, replacing the 2011 Intermediary Rules.

New Categories of Intermediaries

The 2021 Rules create two new subsets of intermediaries: “social media intermediaries” and “significant social media intermediaries,” the latter of which are subject to more onerous regulations. The due diligence requirements for these companies include having proactive speech monitoring, compliance personnel who reside in India, and the ability to trace and identify the originator of a post or message.

“Social media intermediaries” are defined broadly, as entities which primarily or solely “enable online interaction between two or more users and allow them to create, upload, share, disseminate, modify or access information using its services.” Obvious examples include Facebook, Twitter, and YouTube, but the definition could also include search engines and cloud service providers, which are not social media in a strict sense.

“Significant social media intermediaries” are those with registered users in India above a 5 million threshold. But the 2021 Rules also allow the government to deem any “intermediary” – including telecom and internet service providers, web-hosting services, and payment gateways – a ‘significant’ social media intermediary if it creates a “material risk of harm” to the sovereignty, integrity, and security of the state, friendly relations with Foreign States, or public order. For example, a private messaging app can be deemed “significant” if the government decides that the app allows the “transmission of information” in a way that could create a “material risk of harm.” The power to deem ordinary intermediaries as significant also encompasses ‘parts’ of services, which are “in the nature of an intermediary” – like Microsoft Teams and other messaging applications.

New  ‘Due Diligence’ Obligations

The 2021 Rules, like their predecessor 2011 Rules, enact a conditional immunity standard. They lay out an expanded list of due diligence obligations that intermediaries must comply with in order to avoid being held liable for content hosted on their platforms.

Intermediaries are required to incorporate content rules—designed by the Indian government itself—into their policies, terms of service, and user agreements. The 2011 Rules contained eight categories of speech that intermediaries had to notify their users not to “host, display, upload, modify, publish, transmit, store, update or share.” These include content that violates Indian law, but also many vague categories that could lead to censorship of legitimate user speech. Complying with these government-imposed restrictions prevents companies from living up to their responsibility to respect international human rights, in particular freedom of expression, in their daily business conduct.

Strict Turnaround for Content Removal

The 2021 Rules require all intermediaries to remove restricted content within 36 hours of obtaining actual knowledge of its existence, taken to mean a court order or notification from a government agency. The law gives non-judicial government bodies great authority to compel intermediaries to take down restricted content. Platforms that disagree with or challenge government orders face penal consequences under the Information Technology Act and criminal law if they fail to comply.

The Rules impose strict turnaround timelines for responding to government orders and requests for data. Intermediaries must provide information within their control or possession, or ‘assistance,’ within 72 hours to government agencies for a broad range of purposes: verification of identity, or the prevention, detection, investigation, or prosecution of offenses or for cybersecurity incidents. In addition, intermediaries are required to remove or disable, within 24 hours of receiving a complaint, non-consensual sexually explicit material or material in the “nature of impersonation in an electronic form, including artificially morphed images of such individuals.” The deadlines do not provide sufficient time to assess complaints or government orders. To meet them, platforms will be compelled to use automated filter systems to identify and remove content. These error-prone systems can filter out legitimate speech and are a threat to users’ rights to free speech and expression.

Failure to comply with these rules could lead to severe penalties, such as a jail term of up to seven years. In the past, the Indian government has threatened company executives with prosecution – as, for instance, when they served a legal notice on Twitter, asking the company to explain why recent territorial changes in the state of Kashmir were not reflected accurately on the platform’s services. The notice threatened to block Twitter or imprison its executives if a “satisfactory” explanation was not furnished. Similarly, the government threatened Twitter executives with imprisonment when they reinstated content about farmer protests that the government had ordered them to take down.

Additional Obligations for Significant Social Media Intermediaries

On a positive note, the Rules require significant social media intermediaries to have transparency and due process rules in place for content takedowns. Companies must notify users when their content is removed, explain why it was taken down, and provide an appeals process.

On the other hand, the 2021 Rules compel providers to appoint an Indian resident “Chief Compliance Officer,” who will be held personally liable in any proceedings relating to non-compliance with the rules, and a “Resident Grievance Officer” responsible for responding to users’ complaints and government and court orders. Companies must also appoint a resident employee to serve as a contact person for coordination with law enforcement agencies. With more executives residing in India, where they could face prosecution, intermediaries may find it difficult to challenge or resist arbitrary and disproportionate government orders.

Proactive Monitoring

Significant social media intermediaries are called on to “endeavour to deploy technology-based measures,” including automated tools or other mechanisms, to proactively identify certain types of content. This includes information depicting rape or child sexual abuse and content that has previously been removed for violating rules. The stringent provisions of the 2021 Rules already encourage over-removal of content; requiring intermediaries to deploy automated filters will likely result in even more takedowns.

Encryption and Traceability Requirements

The Indian government has been wrangling with messaging app companies—most famously WhatsApp—for several years now, demanding “traceability” of the originators of forwarded messages. The demand first emerged in the context of a series of mob lynchings in India, triggered by rumors that went viral on WhatsApp. Subsequently, petitions were filed in Indian courts seeking to link social networking accounts with users’ biometric identity (Aadhaar) numbers. Although the court ruled against the proposal, expert opinions supplied by a member of the Prime Minister’s scientific advisory committee suggested technical measures to enable traceability on end-to-end encrypted platforms.

Because of their privacy and security features, some messaging systems don’t learn or record the history of who first created particular content that was then forwarded by others, a state of affairs that the Indian government and others have found objectionable. The 2021 Rules represent a further escalation of this conflict, requiring private messaging intermediaries to “enable the identification of the first originator of the information” upon a court order or a decryption request issued under the 2009 Decryption Rules. (The Decryption Rules allow authorities to request the interception, monitoring, or decryption of any information generated, transmitted, received, or stored in any computer resource.) If the first originator of a message is located outside the territory of India, the private messaging app will be compelled to identify the first originator of that information within India.

The 2021 Rules place some limitations on these court orders: namely, they can only be issued for serious crimes. But limitations will not solve the core problem with this proposal: a technical mandate that companies re-engineer or redesign messaging services to comply with the government’s demand to identify the originator of a message.
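To see why, consider a deliberately simplified sketch, in Python, of the kind of server-side “originator registry” that traceability proposals contemplate. Nothing here reflects any company’s actual design; the class, data structure, and matching rule are all illustrative assumptions. The point is that the service must be able to fingerprint the plaintext of every message, which an end-to-end encrypted messenger, by design, cannot do without being rebuilt:

    import hashlib

    # Hypothetical illustration only: a registry mapping a fingerprint of
    # message content to the first account that sent it. An end-to-end
    # encrypted service never sees plaintext, so it would have to be
    # re-engineered before it could run anything like this.
    class OriginatorRegistry:
        def __init__(self):
            self.first_sender = {}  # content fingerprint -> first known sender

        def record(self, sender, plaintext):
            fingerprint = hashlib.sha256(plaintext).hexdigest()
            # Only the first sender of identical content is kept as "originator".
            self.first_sender.setdefault(fingerprint, sender)

        def trace(self, plaintext):
            # A court order would supply the content; the service looks it up.
            return self.first_sender.get(hashlib.sha256(plaintext).hexdigest())

    registry = OriginatorRegistry()
    registry.record("alice", b"viral rumour")
    registry.record("bob", b"viral rumour")        # a forward: not recorded as origin
    print(registry.trace(b"viral rumour"))          # -> alice
    print(registry.trace(b"viral rumour, edited"))  # -> None: any edit defeats the match

Even this toy version makes the trade-off visible: the service needs the plaintext (or a stable fingerprint of it) for every message, and a one-character edit produces a different fingerprint, so the scheme is simultaneously privacy-invasive and easy to evade.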

Conclusion

The 2021 Rules were fast-tracked without public or pre-legislative consultation, the process by which the government seeks recommendations from stakeholders transparently. They will have profound implications for the privacy and freedom of expression of Indian users. They restrict companies’ discretion in moderating their own platforms and create new possibilities for government surveillance of citizens. These rules threaten the idea of a free and open internet built on a bedrock of international human rights standards.

Categories
Content Blocking EU Policy Intelwars International

The EU Online Terrorism Regulation: a Bad Deal

On 12 September 2018, the European Commission presented a proposal for a regulation on preventing the dissemination of terrorist content online—dubbed the Terrorism Regulation, or TERREG for short—that contained some alarming ideas. In particular, the proposal included an obligation for platforms to remove potentially terrorist content within one hour, following an order from national competent authorities.

Ideas such as this one have been around for some time already. In 2016, we first wrote about the European Commission’s attempt to create a voluntary agreement for companies to remove certain content (including terrorist expression) within 24 hours, and Germany’s Network Enforcement Act (NetzDG) requires the same. NetzDG has spawned dozens of copycats throughout the world, including in countries like Turkey with far fewer protections for speech, and human rights more generally.

Beyond the one-hour removal requirement, the TERREG also defined terrorist content broadly, as “material that incites or advocates committing terrorist offences, promotes the activities of a terrorist group or provides instructions and techniques for committing terrorist offences”.

Furthermore, it introduced a duty of care for all platforms to avoid being misused for the dissemination of terrorist content. This includes the requirement of taking proactive measures to prevent the dissemination of such content. These rules were accompanied by a framework of cooperation and enforcement. 

These aspects of the TERREG are particularly concerning, as research we’ve conducted in collaboration with other groups demonstrates that companies routinely make content moderation errors that remove speech that parodies or pushes back against terrorism, or documents human rights violations in countries like Syria that are experiencing war.

TERREG and human rights

TERREG was created without real consultation with free expression and human rights groups, and it has serious repercussions for online expression. Even worse, the proposal was adopted based on political spin rather than evidence.

Notably, in 2019, the EU Fundamental Rights Agency (FRA)—asked by the European Parliament for an opinion—expressed concern about the regulation. In particular, the FRA noted that the definition of terrorist content had to be modified, as it was too wide and would interfere with freedom of expression rights. The FRA also found that the proposal did not guarantee the involvement of the judiciary, and that Member States’ obligation to protect fundamental rights online had to be strengthened.

Together with many other civil society groups, we voiced our deep concern over the proposed legislation and stressed that the new rules would pose serious threats to the fundamental rights of privacy and freedom of expression.

The message to EU policymakers was clear:

  • Abolish the one-hour time frame for content removal, which is too tight for platforms and will lead to over-removal of content;
  • Respect the principles of territoriality and ensure access to justice in cases of cross-border takedowns by ensuring that only the Member State in which the hosting service provider has its legal establishment can issue removal orders;
  • Ensure due process and clarify that the legality of content be determined by a court or independent administrative authority;
  • Don’t impose the use of upload or re-upload filters (automated content recognition technologies) to services under the scope of the Regulation;
  • Exempt certain protected forms of expression, such as educational, artistic, journalistic, and research materials.

However, while the responsible committees of the EU Parliament showed willingness to take the concerns of civil society groups into account, things looked grimmer in the Council, where government ministers from each EU country meet to discuss and adopt laws. During the closed-door negotiations between the EU institutions to strike a deal, different versions of TERREG were discussed, culminating in further letters from civil society groups urging lawmakers to ensure key safeguards for freedom of expression and the rule of law.

Fortunately, civil society groups and fundamental rights-friendly MEPs in the Parliament were able to achieve some of their goals. For example, the agreement reached by the EU institutions includes exceptions for journalistic, artistic, and educational purposes. Another major improvement concerns the definition of terrorist content (now matching the narrower definition of the EU Directive on combating terrorism) and the option for hosting providers to invoke technical and operational reasons for not complying with the strict one-hour removal obligation. And most importantly, the deal states that authorities cannot impose upload filters on platforms.

The Deal Is Still Not Good Enough

While civil society intervention has resulted in a series of significant improvements to the law, there is more work to be done. The proposed regulation still gives broad powers to national authorities, without judicial oversight, to censor online content that they deem to be “terrorism” anywhere in the EU, within a one-hour timeframe, and to incentivize companies to delete more content of their own volition. It further encourages the use of automated tools, without any guarantee of human oversight.

Now, a broad coalition of civil society organizations is voicing its concerns to the Parliament, which must agree to the deal for it to become law. EFF and others suggest that Members of the European Parliament should vote against the adoption of the proposal. We encourage our followers to raise awareness about the implications of TERREG and to reach out to their national Members of the European Parliament.

Categories
free speech Intelwars International WikiLeaks

EFF Statement on British Court’s Rejection of Trump Administration’s Extradition Request for Wikileaks’ Julian Assange

Today, a British judge denied the Trump Administration’s extradition request for Wikileaks Editor Julian Assange, who is facing charges in the United States under the Espionage Act and the Computer Fraud and Abuse Act. The judge largely accepted the U.S. government’s arguments in support of the charges, but ultimately determined that the extreme confinement procedures the United States would apply to Mr. Assange would create a serious risk of suicide.

EFF’s Executive Director Cindy Cohn said in a statement today: 

“We are relieved that District Judge Vanessa Baraitser made the right decision: to reject extradition of Mr. Assange. Despite the U.S. government’s initial statement, we hope that the U.S. chooses not to appeal that decision. The UK court decision means that Assange will not face charges in the United States, which could have set dangerous precedent in two ways. First, it could call into question many of the journalistic practices that writers at the New York Times, the Washington Post, Fox News, and other publications engage in every day to ensure that the public stays informed about the operations of its government. Investigative journalism—including seeking, analyzing and publishing leaked government documents, especially those revealing abuses—has a vital role in holding the U.S. government to account. It is, and must remain, strongly protected by the First Amendment. Second, the prosecution, and the judge’s decision, embraces a theory of computer crime that is overly broad—essentially criminalizing a journalist for discussing and offering help with basic computer activities, like the use of rainbow tables and scripts based on wget, that are regularly used in computer security and elsewhere.

While we applaud this decision, it does not erase the many years Assange has been dogged by prosecution, detainment, and intimidation for his journalistic work. Nor does it erase the government’s arguments, which, as in so many other cases, attempt to cast a criminal pall over routine actions because they were done with a computer. We are still reviewing the judge’s opinion and expect to have additional thoughts once we’ve completed our analysis.”

Read the judge’s full statement. 

Categories
Commentary Creativity & Innovation free speech ICANN Intelwars International

How We Saved .ORG: 2020 in Review

If you come at the nonprofit sector, you’d best not miss.

Nonprofits and NGOs around the world were stunned last November when the Internet Society (ISOC) announced that it had agreed to sell the Public Interest Registry—the organization that manages the .ORG top-level domain (TLD)—to private equity firm Ethos Capital. EFF and other leaders in the NGO community sprung to action, writing a letter to ISOC urging it to stop the sale. What followed was possibly the most dramatic show of solidarity from the nonprofit sector of all time. And we won.


Prior to the announcement, EFF had spent six months voicing our concerns to the Internet Corporation for Assigned Names and Numbers (ICANN) about the 2019 .ORG Registry Agreement, which gave the owner of .ORG new powers to censor nonprofits’ websites (the agreement also lifted a longstanding price cap on .ORG registrations and renewals).

The Registry Agreement gave the owner of .ORG the power to implement processes to suspend domain names based on accusations of “activity contrary to applicable law.” It effectively created a new pressure point that repressive governments, corporations, and other bad actors can use to silence their critics without going through a court. That should alarm any nonprofit or NGO, especially those that work under repressive regimes or frequently criticize powerful corporations.

Throughout that six-month process of navigating ICANN’s labyrinthine decision-making structure, none of us knew that ISOC would soon be selling PIR. With .ORG in the hands of a private equity firm, those fears of censorship and price gouging became a lot more tangible for nonprofits and NGOs. The power to take advantage of .ORG users was being handed to a for-profit company whose primary obligation was to make money for its investors.

Oversight by a nonprofit was always part of the plan for .ORG. When ISOC competed in 2002 for the contract to manage the TLD, it used its nonprofit status as a major selling point. As ISOC’s then president Lynn St. Amour put it, PIR would “draw upon the resources of ISOC’s extended global network to drive policy and management.”

More NGOs began to take notice of the .ORG sale and the danger it posed to nonprofits’ freedom of expression online. Over 500 organizations and 18,000 individuals had signed our letter by the end of 2019, including big-name organizations like Greenpeace, Consumer Reports, Oxfam, and the YMCA of the USA. At the same time, questions began to emerge (PDF) about whether Ethos Capital could possibly make a profit without some drastic changes in policy for .ORG.

By the beginning of 2020, the financial picture had become a lot clearer: Ethos Capital was paying $1.135 billion for .ORG, nearly a third of which was financed by a loan. No matter how well-meaning Ethos was, the pressure to sell “censorship as a service” would align with Ethos’ obligation to produce returns for its investors. The sector’s concerns were well-founded: the registry Donuts entered a private deal with the Motion Picture Association in 2016 to fast-track suspensions of domains that MPA claims infringe on its members’ copyrights. It’s fair to ask whether PIR would engage in similar practices under the leadership of Donuts co-founder Jonathon Nevett. Six members of Congress wrote a letter to ICANN in January urging it to scrutinize the sale more carefully.

A few days later, EFF, nonprofit advocacy group NTEN, and digital rights groups Fight for the Future and Demand Progress participated in a rally outside of the ICANN headquarters in Los Angeles. Our message was simple: stop the sale and create protections for nonprofits. Before the protest, ICANN staff reached out to the organizers offering to meet with us in person, but on the day of the protest, ICANN canceled on us. That same week, Amnesty International, Access Now, the Sierra Club, and other global NGOs held a press conference at the World Economic Forum to tell world leaders that selling .ORG threatens civil society. All of the noise caught the attention of California Attorney General Xavier Becerra, who wrote to ICANN (PDF) asking it for key information about its review of the sale.


Recognizing that the heat was on, Ethos Capital and PIR hastily tried to build bridges with the nonprofit sector. Ethos attempted to convene a secret meeting with NGO sector leaders in February, and then abruptly canceled it. Ethos then announced that it would voluntarily limit price increases on .ORG registrations and renewals and establish a “stewardship council.” Like many details of the .ORG sale, what level of influence the stewardship council would have over PIR’s decisions was unclear. EFF executive director Cindy Cohn and NTEN CEO Amy Sample Ward responded in the Nonprofit Times:

The proposed “Stewardship Council” would fail to protect the interests of the NGO community. First, the council is not independent. The Public Interest Registry (PIR) board’s ability to veto nominated members would ensure that the council will not include members willing to challenge Ethos’ decisions. PIR’s handpicked members are likely to retain their seats indefinitely. The NGO community must have a real say in the direction of the .ORG registry, not a nominal rubber stamp exercised by people who owe their position to PIR.

Even Ethos’ promise to limit fee increases was rather hollow: if Ethos raised fees as allowed by the proposed rules, the price of .ORG registrations would more than double over eight years. After those eight years, there would be no limits on fee increases whatsoever.

All the while, Ethos and PIR kept touting that with the new ownership would come new “products and services” for .ORG users, but it failed to give any information about what those offerings might entail. Cohn and Ward responded:

The product NGOs need from our registry operator is domain registration at a fair price that doesn’t increase arbitrarily. The service that operator must provide is to stand up to governments and other powerful actors when they demand that it silence us. It is more clear than ever that you cannot offer us either.

It’s almost poetic that the debate over .ORG reached a climax just as COVID-19 was becoming a worldwide crisis. Emergencies like this one are when the world most relies on nonprofits and NGOs; therefore, they’re also pressure tests for the sector. The crisis demonstrated that the NGO community doesn’t need fancy “products and services” from a domain registry: it needs simple, reliable, boring service. Those same members of Congress who’d scrutinized the .ORG sale wrote a more pointed letter to ICANN in March (PDF), plainly noting that there was no way that Ethos Capital could make a profit on its investment without making major changes at the expense of .ORG users.

Finally, in April, the ICANN board rejected the transfer of ownership of .ORG. “ICANN entrusted to PIR the responsibility to serve the public interest in its operation of the .ORG registry,” they wrote, “and now ICANN is being asked to transfer that trust to a new entity without a public interest mandate.”

While .ORG is safe for now, the bigger trend of registries becoming chokepoints for free speech online is as big a problem as ever. That’s why EFF is urging ICANN to reconsider its policies regarding public interest commitments—or as the Internet governance community has recently started calling them, registry voluntary commitments. Those are the additional rules that ICANN allows registries to set for specific top-level domains, like the new provisions in the .ORG Registry Agreement that allow the owner of .ORG to set policies to fast-track censoring speech online.

The story of the attempted .ORG sale is really the story of the power and resilience of the nonprofit sector. Every time Ethos and PIR tried to quell the backlash with empty promises, the sector responded even more loudly, gaining the voices of government officials, members of Congress, two UN Special Rapporteurs, and U.S. state charities regulators. As I said to that crowd of activists in front of ICANN’s offices, I’ve worked in the nonprofit sector for most of my adult life, and I’ve never seen the sector respond this unanimously to anything.

Thank you to everyone who stood up for .ORG, especially NTEN for its partnership on this campaign as a trusted leader in the nonprofit sector. If you were one of the 27,183 people who signed our open letter, or if you work for or support one of the 871 organizations that participated, then you were a part of this victory.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.

Categories
Coders' Rights Project Commentary Intelwars International

The Slow-Motion Tragedy of Ola Bini’s Trial

EFF has been tracking the arrest, detention, and subsequent investigation of Ola Bini since it began over 18 months ago. Bini, a Swedish-born open-source developer, was arrested in Ecuador’s Quito Airport in a flurry of media attention in April 2019. He was held without trial for ten weeks while prosecutors seized and pored over his technology, his business, and his private communications, looking for evidence linking him to an alleged conspiracy to destabilize the Ecuadorean government.

Now, after months of delay, an Ecuadorean pre-trial judge has declined to dismiss the case – despite Bini’s defense documenting over a hundred procedural and civil liberties violations made in the course of the investigation. EFF was one of many human rights organizations, including Amnesty International, that were refused permission by the judge to act as observers at Wednesday’s hearing.

Bini was seized by police at Quito Airport shortly after Ecuador’s Interior Minister, Maria Paula Romo, held a press conference warning the country of an imminent cyber-attack. Romo spoke hours after the government had ejected Julian Assange from Ecuador’s London Embassy, and claimed that a group of Russians and Wikileaks-connected hackers were in the country, planning an attack in retaliation for the eviction. No further details of this sabotage plot were ever revealed, nor has it been explained how the Minister knew of the group’s plans in advance. Instead, only Bini was detained, imprisoned, and held for 71 days without charge until a provincial court, facing a habeas corpus order, declared his imprisonment unlawful and released him to his friends and family. (Romo was dismissed as minister last month for ordering the use of tear gas against anti-government protestors.)

EFF visited Ecuador to investigate complaints of injustice in the case in August 2019. We concluded that the Bini affair had the sadly familiar hallmark of a politicized “hacker panic” where media depictions of hacking super-criminals and overbroad cyber-crime laws together encourage unjust prosecutions when the political and social atmosphere demands it. (EFF’s founding in 1990 was in part due to a notorious, and similar, case pursued in the United States by the Secret Service, documented in Bruce Sterling’s Hacker Crackdown.)

While the Ecuadorian government continues to portray him to journalists as a Wikileaks-employed malicious cybercriminal, his reputation outside the prosecution is very different. An advocate for a secure and open Internet and a computer language expert, Bini is primarily known for his non-profit work on the OTR (Off-the-Record) secure messaging protocol and his contributions to the Java implementation of the Ruby programming language. He has also contributed to EFF’s Certbot project, which provides easy-to-use security for millions of websites. He moved to Ecuador during his employment at the global consultancy ThoughtWorks, which has an office in the country’s capital.

After several months of poring over his devices, prosecutors have been able to provide only one piece of supposedly incriminating data: a copy of a screenshot, taken by Bini himself and sent to a colleague, that shows the telnet login screen of a router. From the context, it’s clear that Bini was expressing surprise that the telco router was not firewalled, and was seeking to draw attention to this potential security issue. Bini did not go further than the login prompt in his investigation of the open machine.
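For readers unfamiliar with what such a screenshot shows: a telnet service typically presents its login prompt to anyone who connects, before any credentials are entered, so a photograph of that prompt demonstrates nothing beyond the fact that the port was reachable. A minimal Python sketch of observing such a banner (hypothetical code; 192.0.2.1 is a reserved documentation address, not any real router):

    import socket

    # Hypothetical illustration only: connect to a telnet port and read whatever
    # the service volunteers before authentication (usually option-negotiation
    # bytes followed by a login prompt). No credentials are ever sent.
    def read_banner(host, port=23, timeout=5.0):
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            try:
                return sock.recv(256)
            except socket.timeout:
                return b""

    # 192.0.2.1 is from the TEST-NET-1 documentation range (RFC 5737).
    print(read_banner("192.0.2.1"))  # e.g. b"...Username: " on an unprotected router

Seeing a login prompt this way requires no authentication at all, which is consistent with the account that Bini noticed the unprotected service and went no further than the prompt.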

Defense and prosecution will now make arguments on the admissibility of this and other non-technical evidence, and the judge will determine if and when Bini’s case will progress to a full trial in the New Year.

We once again urge Ecuador’s judiciary to impartially consider the shaky grounds for this case, and to divorce its deliberations from the politicized framing that has surrounded this prosecution from the start.

Categories
Intelwars International Necessary and Proportionate privacy ¿Quién defiende tus datos?

IPANDETEC Releases First Report Rating Nicaraguan Telecom Providers’ Privacy Policies

IPANDETEC, a digital rights organization in Central America, today released its first “Who Defends Your Data” (¿Quién Defiende Tus Datos?) report for Nicaragua, assessing how well the country’s mobile phone and Internet service providers (ISPs) are protecting users’ personal data and communications. The report follows the series of assessments IPANDETEC has conducted to evaluate the consumer privacy practices of Internet companies in Panama, joining a larger initiative across Latin America and Spain holding ISPs accountable for their privacy commitments.

The organization reviewed six companies: Claro Nicaragua, a subsidiary of the Mexican company America Móvil; Tigo Nicaragua, a subsidiary of Millicom International, headquartered in Luxembourg; Cootel Nicaragua, part of the Chinese Xinwei Group; Yota Nicaragua, a subsidiary of the Russian company Rostejnologuii; IBW, part of IBW Holding S.A., which provides telecom services across Central America; and Ideay, a local Nicaraguan company.

The ¿Quién Defiende Tus Datos? report looks at whether the companies post data protection policies on their website, disclose how much personal data they collect from users, and whether, and how often, they share it with third parties. Companies are awarded stars for transparency in each of five categories, detailed below. Shining a light on these practices allows consumers to make informed choices about what companies they should entrust their data to.

Main Findings

IPANDETEC’s review shows that, with a few exceptions, Nicaragua’s leading ISPs have a long way to go in providing transparency about how they protect user data. Only three of the six companies surveyed—Claro, Tigo, and Cootel—publish privacy policies on their websites and, with the exception of Tigo, the information provided is limited to policies for collecting data from users visiting their websites. Tigo’s policy provides partial information about data collected beyond the company’s website, earning the company a full star. Cootel comes close—its policy refers to its app Mi Cootel (My Cootel), which allows customers to manage and change services under their contracts with the company. Claro and Cootel received half stars in this category.

Claro and Tigo’s parent companies publish more comprehensive data protection policies, but these are not available on the websites of their Nicaraguan subsidiaries and don’t address how the companies’ practices in Nicaragua comport with the country’s data protection regulations and other laws. In its reply to IPANDETEC’s request for additional information, Tigo reported that it is working to improve the information available on its local website.

Tigo received a half star for its partial commitment to a policy of requiring court authorization before providing the content of users’ communications to authorities. Claro received a quarter star in this category: the ISP’s local policy on requiring court authorization is not clear enough, although the global policy of its parent company America Movil is explicit on this requirement. Both companies have fallen short in showing a similarly explicit commitment when handing users’ metadata—such as names, subject line information, and creation dates of messages—to authorities.

Tigo earned a quarter star for making public guidelines for how law enforcement can access users’ information, primarily on account of Millicom’s global law enforcement assistance policy. The guidelines establish steps that must be taken locally when the company is responding to law enforcement requests, but they are general and not specific to Nicaragua’s legal framework. Moreover, the policy is available only in English on Millicom’s website. Claro’s subsidiaries in Chile and Peru publish such guidelines; unfortunately the company’s Nicaraguan unit does not.

The IPANDETEC report shows that all the top Internet companies in Nicaragua need to step up their transparency game—none publish transparency reports disclosing how many law enforcement requests for user data they receive. Tigo’s parent company Millicom publishes annual transparency reports, but the most recent one didn’t include information about its operations in Nicaragua. Millicom said it plans to include Nicaragua in future reports.

The companies were evaluated on specific criteria listed below. For more information on each company, you can find the full report on IPANDETEC’s website.

Data Protection Policy: Does the company post a data protection policy on its website? Is the policy written in clear and easily accessible language? Does the policy establish the retention period for user data?

Transparency Report: Does the company publish a transparency report? Is the report easily accessible? Does the report list the number of government requests received, accepted, and rejected?   

User Notification: Does the company publicly commit to notifying users, as soon as the law allows, when their information is requested by law enforcement authorities?

Judicial Authorization: Does the company publicly commit to requesting judicial authorization before handing users’ communications content and metadata to authorities?

Law Enforcement Guidelines: Does the company outline public guidelines on how law enforcement can access users’ information?

Conclusion

Digital devices and Internet access allow people to stay connected with family and friends and to access information and entertainment. But technology users around the world are concerned about privacy. It’s imperative for ISPs in Nicaragua to be transparent about whether, and how, they are safeguarding users’ private information. We hope to see big strides from them in future reports.

Categories
Call to Action free speech Intelwars International Offline : Imprisoned Bloggers and Technologists

Action for Egyptian Human Rights Defenders

The undersigned organisations strongly condemn the persecution of employees of the Egyptian Initiative for Personal Rights (EIPR) and Egyptian civil society by the Egyptian government. We urge the global community and their respective governments to do the same and join us in calling for the release of detained human rights defenders and a stop to the demonisation of civil society organisations and human rights defenders by government-owned or pro-government media.

Since November 15, Egyptian authorities have escalated their crackdown on human rights defenders and civil society organisations. On November 19, Gasser Abdel-Razek, Executive Director of the Egyptian Initiative for Personal Rights (EIPR)—one of the few remaining human rights organisations in Egypt—was arrested at his home in Cairo by security forces. One day prior, EIPR’s Criminal Justice Unit Director, Karim Ennarah, was arrested while on vacation in Dahab. The organisation’s Administrative Manager, Mohamed Basheer, was also taken from his home in Cairo in the early morning hours of 15 November.

All three appeared in front of the Supreme State Security Prosecution where they were charged with joining a terrorist group, spreading false news, and misusing social media, and were remanded into custody and given 15 days of pre-trial detention.

The security services’ interrogations, and the subsequent prosecution, of EIPR’s leaders focused on the organisation’s activities, the reports it has issued, and its human rights advocacy, especially a meeting held in early November by EIPR and attended by a number of ambassadors and diplomats accredited to Egypt from several European countries and Canada, as well as the representative of the European Union.

The detention of EIPR staff means one thing: Egyptian authorities are continuing to commit human rights violations with full impunity. This crackdown comes amidst a number of other cases in which the prosecution and investigation judges have used pre-trial detention as a method of punishment. Egypt’s counterterrorism law was amended in 2015 under President Abdel-Fattah al-Sisi so that pre-trial detention can be extended for two years and, in terrorism cases, indefinitely. A number of other human rights defenders—including Mahienour el-Masry, Mohamed el-Baqer, Solafa Magdy, Alaa Abd El Fattah, Sanaa Seif, and Esraa Abdelfattah — are currently held in prolonged pre-trial detention. EIPR researcher Patrick George Zaki remains detained pending investigations by the Supreme State Security Prosecution (SSSP) over unfounded “terrorism”-related charges since his arrest in February 2020. Amnesty International has extensively documented how Egypt’s SSSP uses extended pre-trial detention to imprison opponents, critics, and human rights defenders over unfounded charges related to terrorism for months or even years without trial. 

In addition to these violations, Gasser Abdel-Razek told his lawyer that he received inhumane and degrading treatment in his cell that puts his health and safety in danger. He further elaborated that he was never allowed out of the cell, had only a metal bed to sleep on with neither mattress nor covers, save for a light blanket, was deprived of all his possessions and money, was given only two light pieces of summer garments, and was denied the right to use his own money to purchase food and essentials from the prison’s canteen. His head was shaved completely. 

The manner in which Egypt treats its members of civil society cannot continue, and we, an international coalition of human rights and civil society actors, denounce in the strongest of terms the arbitrary use of pre-trial detention as a form of punishment. The detention of EIPR staff is the latest example of how Egyptian authorities crack down on civil society with full impunity. It’s time to hold the Egyptian government accountable for its human rights abuses and crimes. Join us in calling for the immediate release of EIPR staff, and an end to the persecution of Egyptian civil society.

Signed,

Access Now
Africa Freedom of Information Centre (AFIC)
Americans for Democracy & Human Rights in Bahrain (ADHRB)
Arabic Network for Human Rights Information (ANHRI)
ARTICLE 19 
Association of Caribbean Media Workers (ACM)
Association for Freedom of Thought and Expression (AFTE)
Association for Progressive Communications (APC)
Cairo Institute for Human Rights Studies (CIHRS)
Center for Democracy & Technology
Committee for Justice (CFJ)
Digital Africa Research Lab
Digital Rights Foundation
Egyptian Front for Human Rights
Electronic Frontier Foundation (EFF)
Elektronisk Forpost Norge (EFN)
epicenter.works – for digital rights
Fight for the Future
Free Media Movement (FMM)
Fundación Andina para la Observación y el Estudio de Medios (Fundamedios)
The Freedom Initiative
Fundación Ciudadanía Inteligente
Globe International Center
Gulf Centre for Human Rights (GCHR)
Homo Digitalis 
Human Rights Watch
Hungarian Civil Liberties Union (HCLU)
Index on Censorship
Independent Journalism Center Moldova (IJC-Moldova)
International Press Centre (IPC) Lagos-Nigeria
International Press Institute (IPI)
Initiative for Freedom of Expression – Turkey (IFoX)
International Free Expression Project
Masaar – Technology and Law Community
Mediacentar Sarajevo
Media Foundation for West Africa (MFWA)
Media Institute of Southern Africa (MISA) – Zimbabwe
MENA Rights Group
Mnemonic
Myanmar ICT for Development Organization (MIDO)
Open Observatory of Network Interference (OONI)
Pacific Islands News Association (PINA)
Pakistan Press Foundation (PPF)
PEN Canada
PEN Norway
Privacy International (PI)
Public Foundation for Protection of Freedom of Speech (Adil Soz)
R3D: Red en Defensa de los Derechos Digitales 
Reporters Sans Frontières (RSF)
Scholars at Risk (SAR)
Skyline International Foundation
Social Media Exchange (SMEX)
South East Europe Media Organisation (SEEMO)
Statewatch (UK)
Vigilance for Democracy and the Civic State

Categories
Commentary free speech Intelwars International Offline : Imprisoned Bloggers and Technologists

EFF Condemns Egypt’s Latest Crackdown

We are quickly approaching the tenth anniversary of the Egyptian revolution, a powerfully hopeful time in history when—despite all odds—Egyptians rose up against an entrenched dictatorship and shook it from power, with the assistance of new technologies. Though the role of social media has been hotly debated and often overplayed, technology most certainly played a role, and Egyptian activists demonstrated the potential of social media for organizing and disseminating key information globally.

2011 was a hopeful time, but hope quickly gave way to repression—repression that has increased significantly this year, especially in recent months as the Egyptian government, under President Abdel Fattah Al-Sisi, has systematically persecuted human rights defenders and other members of civil society. In the hands of the state, technology was and still is used to censor and surveil citizens.

In 2013, Sisi’s government passed a law criminalizing unlicensed street demonstrations; that law has since been frequently used to criminalize online speech by activists. Two years later, the government adopted a sweeping counterterrorism law that has since been updated to allow for even greater repression. The new provisions of the law were criticized in April by the UN Special Rapporteur on human rights and counter-terrorism, Fionnuala D. Ní Aoláin, who stated that they would “profoundly impinge on a range of fundamental human rights”.

But it is the government’s enactment of Law 180 of 2018 Regulating the Press and Media that has had perhaps the most widespread recent impact on free expression online. The law stipulates that press institutions, media outlets, and news websites must not broadcast or publish any information that violates Constitutional principles, and grants authorities the power to ban or suspend the distribution or operations of any publications, media outlets, or even social media accounts (with more than 5,000 followers) that are deemed to threaten national security, disturb the public peace, or promote discrimination, violence, racism, hatred, or intolerance. Additionally, Law No. 175 of 2018 on Anti-Cybercrime grants authorities the power to block or suspend websites deemed threatening to national security or the national economy.

A new escalation

In the past two weeks, Egyptian authorities have escalated their crackdown on human rights defenders and civil society organisations. On November 15, Mohammed Basheer, a staffer at the Egyptian Initiative for Personal Rights (EIPR) was arrested at his Cairo home in the early morning hours. Three days later, the organization’s criminal justice unit director, Karim Ennarah, was arrested while on vacation in Dahab. Most recently, Executive Director Gasser Abdel-Razek was arrested at his home by security forces.

All three appeared in front of the Supreme State Security Prosecution and were charged with “joining a terrorist group,” “spreading false news,” and “misusing social media.” They were remanded into custody and sent to fifteen days of pre-trial detention—a tactic commonly used by the Egyptian state as a form of punishment.

In the same week, Egyptian authorities placed 30 individuals on a terrorism watch list, accusing them of joining the Muslim Brotherhood. Among them is blogger, technologist, activist, and friend of EFF, Alaa Abd El Fattah.

A blogger and free software developer, Alaa has the distinction of having been detained under every head of state during his lifetime. In March 2019, he was released after serving a five-year sentence for his role in the peaceful demonstrations of 2011. As part of his parole, he was meant to spend every night at a police station for five years.

But in September of last year, he was re-arrested over allegations of publishing false news and inciting people to protest. He has been held without trial ever since, and as of this week is marked as a terrorist by the Egyptian state.

This designation lays bare the dangers of entrusting individual states with the ability to define “terrorism” for the global internet. While Egypt has used the designation to attack human rights defenders, the country is not alone in politicizing the definition. And at a time when governments are banding together to “eliminate terrorist and extremist content online” through efforts like the Christchurch Call (we are a member of its advisory network), it is imperative that social media companies, civil society, and states alike exercise great care in defining what qualifies as “terrorism.” We must not simply trust individual governments’ definitions.

A call for solidarity

EFF condemns the recent actions by the Egyptian government and stands in solidarity with our colleagues at EIPR and the many activists and human rights defenders imprisoned by the Sisi government. And we urge other governments and the incoming Biden administration to stand against repression and hold Egypt’s government accountable for its actions.

As the great Martin Luther King Jr. once wrote: “Injustice anywhere is a threat to justice everywhere.”

 

Categories
Intelwars International Necessary and Proportionate privacy ¿Quién defiende tus datos?

Peru’s Third Who Defends Your Data? Report: Stronger Commitments from ISPs, But Imbalances and Gaps to Bridge

Hiperderecho, Peru’s leading digital rights organization, today launched its third ¿Quién Defiende Tus Datos? (Who Defends Your Data?) report, which seeks to hold telecom companies accountable for their users’ privacy. The new Peruvian edition shows improvements compared to 2019’s evaluation.

Movistar and Claro commit to requiring a warrant before handing either users’ communications content or metadata to the government. The two companies also earned credit for defending users’ privacy in Congress or for challenging government requests; neither scored any star in this category last year. Claro stands out with detailed law enforcement guidelines, including an explanatory chart of the procedures the company follows when responding to law enforcement requests for communications data, though it should be more specific about the type of communications data the guidelines cover. All companies received full stars for their privacy policies, while only three did so in the previous report. Overall, Movistar and Claro are tied in the lead. Entel and Bitel lag behind, with the former holding a slight advantage.

¿Quién Defiende Tus Datos? is part of a series across Latin America and Spain carried out in collaboration with EFF and inspired by our Who Has Your Back? project. This year’s edition evaluates the four largest Internet Service Providers (ISPs) in Peru: Telefónica-Movistar, Claro, Entel, and Bitel.

Hiperderecho assessed Peruvian ISPs on seven criteria concerning privacy policies, transparency, user notification, judicial authorization, defense of human rights, digital security, and law enforcement guidelines. In contrast to last year, the report adds two new categories: whether ISPs publish law enforcement guidelines, and companies’ commitments to users’ digital security. The full report is available in Spanish, and here we outline the main results:

Regarding transparency reports, Movistar leads the way, earning a full star, while Claro receives a partial star. To earn full credit, a report had to provide useful data about how many requests the company received and how many times it complied, along with details about the government agencies that made the requests and the authorities’ justifications. For the first time, Claro has provided statistical figures on government demands requiring the “lifting of the secrecy of communication (LST).” However, Claro has failed to clarify which types of data (IP addresses and other technical identifiers) are protected under this legal regime. Since Peru’s Telecommunications Law and its regulations protect both content and personal information obtained through the provision of telecom services under communications secrecy, we assume Claro’s figures might include both. Still, as a best practice, the ISP should be explicit about the types of data, including technical identifiers, protected under communications secrecy. As Movistar does, Claro should also break its statistics on government requests down into content interception and metadata requests.

Movistar and Claro have published their law enforcement guidelines. While Movistar has only released a general global policy applicable to its subsidiaries, Claro stands out with detailed guidelines for Peru, including an explanatory chart of the company’s procedures when responding to law enforcement requests for communications data. On the downside, the document refers broadly to “lifting the secrecy of communication” requests without defining what that entails. It should give users greater insight into which kinds of data the outlined procedures cover, and whether they focus mostly on authorities’ access to communications content or also address specific metadata requests.

Entel, Bitel, Claro, and Movistar have published privacy policies applicable to their services that are easy to understand. All of the ISPs’ policies provide information about the data collected (such as names, addresses, and records related to the provision of service) and the cases in which the company shares personal data with third parties. Claro and Movistar receive full credit in the judicial authorization category for having policies or other documents indicating their commitment to request a judicial order before handing over communications data unless the law mandates otherwise. Similarly, Entel states that it shares users’ data with the government in compliance with the law. Peruvian law grants the specialized police investigation unit the power to request access to metadata from telecom operators in the specific emergencies set by Legislative Decree 1182, subject to subsequent judicial review.

Latin American countries still have a long way to go in shedding enough light on government surveillance practices. Publishing meaningful transparency reports and law enforcement guidelines are two critical measures companies should commit to. User notification is the third. In Peru, none of the ISPs have committed to notifying users of a government request at the earliest moment allowed by law, though Movistar and Claro have provided further information on their reasons for this refusal and their interpretation of the law.

In the digital security category, all companies received credit for using HTTPS on their websites and for providing secure methods, such as two-step authentication, in their online channels. All companies but Bitel scored for the promotion of human rights. While Entel receives a partial score for joining local multi-stakeholder forums, Movistar and Claro earn full stars in this category. Among other actions, Movistar has sent comments to Congress in favor of users’ privacy, and Claro has challenged a disproportionate request from the country’s tax administration agency (SUNAT) before Peru’s data protection authority.

We are glad to see that Peru’s third report shows significant progress, but much remains to be done to protect users’ privacy. Entel and Bitel have to catch up with the larger regional providers, and Movistar and Claro can go further to complete their chart of stars. Hiperderecho will remain vigilant through its ¿Quién Defiende Tus Datos? reports.

Categories
Big tech Intelwars International Surveillance and Human Rights transparency

EFF to Supreme Court: American Companies Complicit in Human Rights Abuses Abroad Should Be Held Accountable

For years EFF has been calling for U.S. companies that act as “repression’s little helpers” to be held accountable, and now we’re telling the U.S. Supreme Court. Despite all the ways that technology has been used as a force for good—connecting people around the world, giving voice to the less powerful, and facilitating knowledge sharing—technology has also been used as a force multiplier for repression and human rights violations, a dark side that cannot be denied.

Today EFF filed a brief urging the Supreme Court to preserve one of the few tools of legal accountability that exist for companies that intentionally aid and abet foreign repression: the Alien Tort Statute (ATS). We told the court what we and others have been seeing over the past decade or so: surveillance, communications, and database systems, to name just a few, have been used by foreign governments—with the full knowledge and assistance of the U.S. companies selling those technologies—to spy on and track down activists, journalists, and religious minorities who have been imprisoned, tortured, and even killed.

Specifically, we asked the Supreme Court today to rule that U.S. corporations can be sued by foreigners under the ATS and taken to court for aiding and abetting gross human rights abuses. The court is reviewing an ATS lawsuit brought by former child slaves from Côte d’Ivoire, who claim that two American companies, Nestlé USA and Cargill, aided in the abuse they suffered by providing financial support to the cocoa farms where they were forced to work. The ATS allows noncitizens to bring a civil claim in U.S. federal court against a defendant that violated human rights laws. The companies are asking the court to rule that companies cannot be held accountable under the law, and that only individuals can.

We were joined in the brief by leading organizations tracking the sale of surveillance technology: Access Now, Article 19, Privacy International, the Center for Long-Term Cybersecurity, and Ronald Deibert, director of the Citizen Lab at the University of Toronto. We told the court that the Nestlé case does not just concern chocolate and children. The outcome will have profound implications for millions of Internet users and other citizens of countries around the world. Why? Because providing sophisticated surveillance and censorship products and services to foreign governments is big business for some American tech companies. The fact that their products are clearly being used as tools of oppression seems not to matter. Here are a few examples we cite in our brief:

Cisco custom-built the so-called “Great Firewall” in China, also known as the “Golden Shield,” which enables the government to conduct Internet surveillance and censorship against its citizens. Company documents have revealed that, as part of its marketing pitch to China, Cisco built a specific “Falun Gong module” into the Golden Shield that helped Chinese authorities efficiently identify and locate members of the Falun Gong religious minority, who were then apprehended and subjected to torture, forced conversion, and other human rights abuses. Falun Gong practitioners sued Cisco under the ATS in a case currently pending in the U.S. Court of Appeals for the Ninth Circuit. EFF has filed briefs siding with the plaintiffs throughout the case.

Ning Xinhua, a pro-democracy activist from China, just last month sued the successor companies, founder, and former CEO of Yahoo! under the ATS for sharing his private emails with the Chinese government, which led to his arrest, imprisonment, and torture.

Recently, the government of Belarus used technology from Sandvine, a U.S. network equipment company, to block much of the Internet during the disputed presidential election in August (the company canceled its contract with Belarus because of the censorship). The company’s technology is also used by Turkey, Syria, and Egypt against Internet users to redirect them to websites that contain spyware or block their access to political, human rights, and news content.

We also cited a case against IBM in which we filed a brief in support of the plaintiffs, victims of apartheid, who sued under the ATS on claims that the tech giant aided and abetted the human rights abuses they suffered at the hands of the South African government. IBM created a customized computer-based national identification system that facilitated the “denationalization” of the country’s Black population. Its customized technology enabled efficient identification, racial categorization, and forced segregation, furthering the systemic oppression of South Africa’s native population. Unfortunately, the case was dismissed by the U.S. Court of Appeals for the Second Circuit.

The Supreme Court has severely limited the scope of the ATS in several rulings over the years. The court is now being asked to essentially grant immunity from the ATS to U.S. corporations. That would be a huge mistake. Companies that provide products and services to customers that clearly intend to, and do, use them to commit gross human rights abuses must be held accountable for their actions. We don’t think companies should be held liable just because their technologies ended up in the hands of governments that use them to hurt people. But when technology corporations custom-make products for governments that are plainly using them to commit human rights abuses, they cross a moral, ethical, and legal line.

We urge the Supreme Court to hold that U.S. courts are open when a U.S. tech company decides to put profits over basic human rights, and people in foreign countries are seriously harmed or killed by those choices.

 

Categories
EU Policy Intelwars International Policy Analysis

EU Parliament Paves the Way for an Ambitious Internet Bill

The European Union has taken the first step toward a significant overhaul of its core platform regulation, the e-Commerce Directive.

In order to inspire the European Commission, which is currently preparing a proposal for a Digital Services Act package, the EU Parliament has voted on three related reports (IMCO, JURI, and LIBE), which address the legal responsibilities of platforms regarding user content, include measures to keep users safe online, and set out special rules for the very large platforms that dominate users’ lives.

EFF’s Clear Footprint

Ahead of the votes, together with our allies, we argued to preserve what works for a free Internet and innovation, such as to retain the E-Commerce directive’s approach of limiting platforms’ liability over user content and banning Member States from imposing obligations to track and monitor users’ content. We also stressed that it is time to fix what is broken: to imagine a version of the Internet where users have a right to remain anonymous, enjoy substantial procedural rights in the context of content moderation, can have more control over how they interact with content, and have a true choice over the services they use through interoperability obligations.

It’s a great first step in the right direction that all three EU Parliament reports have considered EFF’s suggestions. There is overall agreement that platform intermediaries have a pivotal role to play in ensuring the availability of content and the development of the Internet. Platforms should not be held responsible for ideas, images, videos, or speech that users post or share online, and they should not be forced to monitor and censor users’ content and communication, for example by using upload filters. The reports also make a strong call to preserve users’ privacy online and to address the problem of targeted advertising. Another important aspect of what made the e-Commerce Directive a success is the “country of origin” principle: within the European Union, companies must adhere to the law of their domicile rather than that of the recipient of the service. There is no appetite on the Parliament’s side to change this principle.

Even better, the reports echo EFF’s call to stop ignoring the walled gardens big platforms have become. Large Internet companies should no longer nudge users to stay on a platform that disregards their privacy or jeopardizes their security, but should instead enable users to communicate with friends across platform boundaries. Unfair trading, preferential display of platforms’ own downstream services, and the transparency of how users’ data are collected and shared: the EU Parliament seeks to tackle these and other issues that have become the new “normal” for users browsing the Internet and communicating with their friends. The reports also echo EFF’s concerns about automated content moderation, which is incapable of understanding context. In the future, users should receive meaningful information about algorithmic decision-making and learn when terms of service change. The EU Parliament also supports procedural justice for users who see their content removed or their accounts disabled.

Concerns Remain 

The focus on fundamental rights protection and user control is a good starting point for the ongoing reform of Internet legislation in Europe. However, there are also a number of pitfalls and risks. There is a suggestion that platforms should report illegal content to enforcement authorities, and there are open questions about public electronic identity systems. The general focus on consumer shopping issues, such as the liability provisions for online marketplaces, may also clash with digital rights principles: the Commission itself acknowledged in a recent internal document that “speech can also be reflected in goods, such as books, clothing items or symbols, and restrictive measures on the sale of such artefacts can affect freedom of expression.” Finally, the general idea of also covering digital services providers established outside the EU could turn out to be a problem to the extent that platforms are held responsible for removing illegal content. Recent cases (Glawischnig-Piesczek v Facebook) have demonstrated the perils of worldwide content takedown orders.

It’s Your Turn Now @EU_Commission

The EU Commission is expected to present a legislative package on 2 December. During the public consultation process, we urged the Commission to protect freedom of expression and to give control to users rather than to the big platforms. We are hopeful that the EU will work toward a free and interoperable Internet and not follow in the footsteps of harmful Internet bills such as the German NetzDG or the French Avia Bill, which EFF helped to strike down. It’s time to make it right: to preserve what works and to fix what is broken.

Categories
Artificial Intelligence & Machine Learning Commentary face surveillance free speech Intelwars International

Pioneer Award Ceremony 2020: A Celebration of Communities

Last week, we celebrated the 29th Annual—and first ever online—Pioneer Award Ceremony, which EFF convenes for our digital heroes and the folks that help make the online world a better, safer, stronger, and more fun place. Like the many Pioneer Award Ceremonies before it, the all-online event was both an intimate party with friends, and a reminder of the critical digital rights work that’s being done by so many groups and individuals, some of whom are not as well-known as they should be.    

Perhaps it was a feature of the pandemic — not a bug — that anyone could attend this year’s celebration, and anyone can now watch it online. You can also read the full transcript. More than ever before, this year’s Pioneer Award Ceremony was a celebration of online communities— specifically, the Open Technology Fund community working to create better tech globally; the community of Black activists pushing for racial justice in how technology works and is used; and the sex worker community that’s building digital tools to protect one another, both online and offline. 

But it was, after all, a celebration. So we kicked off the night by just vibing to DJ Redstickman, who brought his characteristic mix of fun, funky music, as well as some virtual visuals. 

DJ Redstickman

EFF’s Executive Director, Cindy Cohn, began her opening remarks with a reminder that this is EFF’s 30th year, and though we’ve been at it a long time, we’ve never been busier: 

EFF Executive Director Cindy Cohn

We’re busy in the courts — including a new lawsuit last week against the City of San Francisco for allowing the cops to spy on Black Lives Matter protesters and the Pride Parade in violation of an ordinance that we helped pass. We’re busy building technologies — including continuing our role in encrypting the web. We’re busy in the California legislature — continuing to push for Broadband for All, which is so desperately needed for the millions of Californians now required to work and go to school from home. We’re busy across the nation and around the world standing up for your right to have a private conversation using encryption and for your right to build interoperable tools. And we’re blogging, tweeting and posting on all sorts of social media to keep you aware of what’s going on and, hopefully, occasionally amused.

Cindy was followed by our keynote speaker, longtime friend of EFF, author, and one of the top reporters researching all things tech: Cyrus Farivar. Cyrus’s recent book, Habeas Data, covers 50 years of surveillance law in America, and his previous book, The Internet of Elsewhere, focuses on the history and effects of the Internet in different countries around the world.

Keynote speaker, Cyrus Farivar

Cyrus detailed his journey to becoming a tech reporter, from his time on IRC chats in his teenage years to his realization, in Germany in 2010, about “what it means to be private and what it means to have surveillance.” At the time, German politicians were concerned with the privacy implications of Google Street View. In Germany, Cyrus explained, every German state has a data protection agency: “In a way, I kind of think about EFF as one of the best next things. We don’t really have a data protection agency or authority in this country. Sure, we have the FCC. We have other government agencies that are responsible for taking care of us, but we don’t have something like that. I feel like one of the things the EFF does probably better than most other organizations is really try to figure out what makes sense in this new reality.”

Cyrus, of course, is one of the many people helping us all make sense of this new reality, through his reporting—and we’re glad that he’s been fighting the good fight ever since encountering EFF during the Blue Ribbon Campaign. 

Following Cyrus was EFF Staff Technologist Daly Barnett, who introduced the winner of the first Barlow: Ms. Danielle Blunt, aka Mistress Blunt. Danielle Blunt is a sex worker activist and tech policy researcher, and one of the co-founders of Hacking//Hustling, a collective of sex workers and accomplices working at the intersection of tech and social justice. Her research into sex work and equitable access to technology from a public health perspective has led her to become one of the primary experts on the impacts of the censorship law FOSTA-SESTA and on how content moderation affects the movement work of sex workers and activists. As Daly said during her introduction, “there are few people on this planet that are as well equipped to subvert the toxic power dynamic that big tech imposes on many of us. Mistress Blunt can look at a system like that, pinpoint the weak spots, and leverage the right tools to exploit them.”

Pioneer Award Winner, Danielle Blunt

Mistress Blunt showcased and highlighted specifically how Hacking//Hustling bridges the gaps between sex worker rights, tech policy, and academia, and pointed out the ways in which sex workers, who are often early adopters, are also exploited by tech companies:

Sex workers were some of the earliest adopters of the web. Sex workers were some of the first to use ecommerce platforms and the first to have personal websites. The rapid growth of countless tech platforms was reliant on the early adoption of sex workers… [but] not all sex workers have equitable access to technologies or the Internet. This means that digital freedom for sex workers means equitable access to technologies. It means cultivating a deeper understanding of how technology is deployed to surveil and restrict movement of sex workers and how this impacts all of us, because it does impact all of us.

After Mistress Blunt’s speech, EFF Director of International Freedom of Expression Jillian York joined us from Germany to introduce the next honoree, Laura Cunningham. Laura accepted the award for the Open Technology Fund community, a group which has fostered a global community and provided support—both monetary and in-kind—to more than 400 projects that seek to combat censorship and repressive surveillance. This has resulted in over 2 billion people in over 60 countries being able to access the open Internet more safely.

Unfortunately, new leadership has recently been appointed by the Trump administration to run OTF’s funder, the U.S. Agency for Global Media (USAGM). As a result, there is a chance that the organization’s funds could be frozen—threatening to leave many well-established global freedom tools, their users, and their developers in the lurch. For that reason, this award was offered to the entire OTF community for its hard work and dedication to global Internet freedom—and because EFF recognizes the need to protect this community and ensure its survival despite the current political attacks. As Laura said in accepting it, the award “recognizes the impact and success of the entire OTF community,” and is “a poignant reminder of what a committed group of passionate individuals can accomplish when they unite around a common goal.”

Laura Cunningham accepted the Barlow on behalf of the Open Technology Fund Community

But because OTF is a community, Laura didn’t accept the award alone. A pre-recorded video montage of OTF community members gave voice to this principle as they described what the community means to them: 

For me, the OTF community is resourceful.  I’ve never met a community that does so much with so little considering how important their work is for activists and journalists across the world to fight surveillance and censorship.

I love OTF because apart from providing open-source technology to marginalized communities, I have found my sisters in struggle and solidarity in this place for a woman of color and find my community within the OTF community.  Being part of the OTF community means I’m not alone in the fight against injustice, inequality, against surveillance and censorship.  

Members of the OTF Community spoke about what it means to them

For me, the OTF community plays an important role in the work that I do because it allows me to be in a space where I see people from different countries around the world working towards a common goal of Internet freedom.  

I’m going to tell you a story about villagers in Vietnam. The year is 2020. An 84-year-old elder of a village shot dead by police while defending his and the villagers’ land. His two sons sentenced to death. His grandson sentenced to life in prison. That was the story of three generations of a family in a rural area of Vietnam. It was the online world that brought the stories to tens of millions of Vietnamese and prompted a series of online actions. Thanks to our fight against internet censorship, Vietnamese have access to information.

We are one community fighting together for Internet freedom, a precondition today to enjoy fundamental rights.  

These are just a few highlights. We hope you’ll watch the video to see exactly why OTF is so important, and so appreciated globally. 

Following this stunning video, EFF’s Director of Community Organizing, Nathan Sheard, introduced the final award winners—Joy Buolamwini, Dr. Timnit Gebru, and Deborah Raji.

Pioneer Award Winners Joy Buolamwini, Dr. Timnit Gebru, and Deborah Raji

The trio have done groundbreaking research on race and gender bias in facial analysis technology, which laid the groundwork for the national movement to ban law enforcement’s use of face surveillance in American cities. In accepting the award, each honoree spoke, beginning with Deborah Raji, who detailed some of the dangers of face recognition that their work together on the Gender Shades Project helped uncover: “Technology requiring the privacy violation of numerous individuals doesn’t work. Technology hijacked to be weaponized and target and harass vulnerable communities doesn’t work. Technology that fails to live up to its claims to some subgroups over other subgroups certainly doesn’t work at all.”

Following Deborah, Dr. Timnit Gebru described how this group came together, beginning with Joy founding the Algorithmic Justice League, Deb founding Project Include, and her own co-founding of Black in AI. Importantly, Timnit noted how these three—and their organizations—look out for each other: “Joy actually got this award and she wanted to share it with us. All of us want to see each other rise, and all of the organizations we’ve founded—we try to have all these organizations support each other.” 

Lastly, Joy Buolamwini closed off the acceptance with a performance—a “switch from performance metrics to performance art.” Joy worked with Brooklyn tenants who were organizing against compelled use of face recognition in their building, and her poem was an ode to them—and to everyone “resisting and revealing the lie that we must accept the surrender of our faces.” The poem is here in full: 

To the Brooklyn tenants resisting and revealing the lie that we must accept the surrender of our faces, the harvesting of our data, the plunder of our traces, we celebrate your courage. No silence.  No consent. You show the path to algorithmic justice requires a league, a sisterhood, a neighborhood, hallway gathering, Sharpies and posters, coalitions, petitions, testimonies, letters, research, and potlucks, livestreams and twitches, dancing and music. Everyone playing a role to orchestrate change. To the Brooklyn tenants and freedom fighters around the world and the EFF family going strong, persisting and prevailing against algorithms of oppression, automating inequality through weapons of math destruction, we stand with you in gratitude. You demonstrate the people have a voice and a choice.  When defiant melodies harmonize to elevate human life, dignity, and rights, the victory is ours.  

Joy Buolamwini, aka Poet of Code

There is really no easy way to summarize such a celebration as the Pioneer Award Ceremony—especially one that brought together this diverse set of communities to show, again and again, how connected we all are, and must be, to fight back against oppression. As Cindy said, in closing: 

We all know no big change happens because of a single person and how important the bonds of community can be when we’re up against such great odds…The Internet can give us the tools, and it can help us create others to allow us to connect and fight for a better world, but really what it takes is us.  It takes us joining together and exerting our will and our intelligence and our grit to make it happen. But when we get it right, EFF, I hope, can help lead those fights, but also we can help support others who are leading them and always, always help light the way to a better future.  

EFF would like to thank the members around the world who make the Pioneer Award Ceremony and all of EFF’s work possible. You can help us work toward a digital world that supports freedom, justice, and innovation for all people by donating to EFF. We know that these are deeply dangerous times, and with your support, we will stand together no matter how dark it gets and we will still be here to usher in a brighter day. 

Thanks again to Dropbox, No Starch Press, Ridder Costa & Johnstone LLP, and Ron Reed for supporting this year’s ceremony! If you or your company are interested in learning more about sponsorship, please contact Nicole Puller.

Categories
Commentary EUROPEAN UNION Intelwars International

Orders from the Top: The EU’s Timetable for Dismantling End-to-End Encryption

The last few months have seen a steady stream of proposals, encouraged by the advocacy of the FBI and Department of Justice, to provide “lawful access” to end-to-end encrypted services in the United States. Now lobbying has moved from the U.S., where Congress has been largely paralyzed by the nation’s polarization problems, to the European Union—where advocates for anti-encryption laws hope to have a smoother ride. A series of leaked documents from the EU’s highest institutions show a blueprint for how they intend to make that happen, with the apparent intention of presenting anti-encryption law to the European Parliament within the next year.

The public signs of this shift in the EU—which until now has been largely supportive toward privacy-protecting technologies like end-to-end encryption—began in June with a speech by Ylva Johansson, the EU’s Commissioner for Home Affairs.

Speaking at a webinar on “Preventing and combating child sexual abuse [and] exploitation”, Johansson called for a “technical solution” to what she described as the “problem” of encryption, and announced that her office had initiated “a special group of experts from academia, government, civil society and business to find ways of detecting and reporting encrypted child sexual abuse material.”

That report was subsequently leaked to Politico. It includes a laundry list of tortuous ways to attempt the impossible: allowing government access to encrypted data without breaking encryption.

At the top of that precarious stack was, as with similar proposals in the United States, client-side scanning. We’ve explained previously why client-side scanning is a backdoor by any other name. Unalterable computer code that runs on your own device, comparing in real-time the contents of your messages to an unauditable ban-list, stands directly opposed to the privacy assurances that the term “end-to-end encryption” is understood to convey. It’s the same approach used by China to keep track of political conversations on services like WeChat, and has no place in a tool that claims to keep conversations private.
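To make the mechanics concrete, here is a minimal sketch in Python of the scanning step described above. Everything in it is hypothetical: the function names, the ban-list contents, and the use of exact SHA-256 hashes (real deployments typically use perceptual hashes that also match near-duplicates). The structure is the point: the check runs on the user’s own device, against a list the user cannot audit, before encryption ever happens.

```python
import hashlib

# Hypothetical ban-list: opaque digests pushed to the device by the vendor.
# The user has no way to learn what content these digests correspond to.
BANNED_DIGESTS = {
    hashlib.sha256(b"example banned content").hexdigest(),
}

def scan_before_encrypt(plaintext: bytes) -> bool:
    """Runs on the user's own device, *before* end-to-end encryption.

    Returns True if the message may be sent; a False result would
    typically block the message and/or silently report it to a server.
    """
    return hashlib.sha256(plaintext).hexdigest() not in BANNED_DIGESTS

message = b"meet at the usual place"
if scan_before_encrypt(message):
    pass  # hand the plaintext to the E2E encryption layer as usual
else:
    pass  # blocked and/or reported; the user may never be told
```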

It’s also a drastically invasive step for any government that wishes to mandate it. For the first time outside authoritarian regimes, Europe would be declaring which Internet communication programs are lawful and which are not. While the proposals are the best that academics faced with squaring a circle could come up with, they may still be too aggressive to succeed politically as enforceable regulation—even if tied, as Johansson ensured it was in a subsequent Commission communication, to the fight against child abuse.

But while it would require a concerted political push, the EU’s higher powers are gearing up for such a battle. In late September, Statewatch published a note, now being circulated by Germany’s current Presidency of the EU Council, called “Security through encryption and security despite encryption,” encouraging the EU’s member states to agree on a new EU position on encryption in the final weeks of 2020.

While conceding that “the weakening of encryption by any means (including backdoors) is not a desirable option”, the Presidency’s note also positively quoted an EU Counter-Terrorism Coordinator (CTC) paper from May (obtained and made available by German digital rights news site NetzPolitik.org), which calls for what it calls a “front-door”—a “legal framework that would allow lawful access to encrypted data for law enforcement without dictating technical solutions for providers and technology companies”.

The CTC highlighted what would be needed in order to legislate this framework:

The EU and its Member States should seek to be increasingly present in the public debate on encryption, in order to inform the public narrative on encryption by sharing the law enforcement and judicial perspective…

This avoids a one-sided debate mainly driven by the private sector and other nongovernmental voices. This may involve engaging with relevant advocacy groups, including victims associations that can relate to government efforts in that area. Engagement with the [European Parliament] will also be key to prepare the ground for possible legislation.

A speech by Commissioner Johansson tying the defeat of secure messaging to protecting children; a paper spelling out “technical solutions” to attempt to fracture the currently unified (or “one-sided”) opposition; and, presumably in the very near future, once the EU has published its new position on encryption, a concerted attempt to lobby members of the European Parliament for this new legal framework: these all fit the Counter-Terrorism Coordinator’s original plans.

We are in the first stages of a long anti-encryption march by the upper echelons of the EU, headed directly toward Europeans’ digital front-doors. It’s the same direction as the United Kingdom, Australia, and the United States have been moving for some time. If Europe wants to keep its status as a jurisdiction that treasures privacy, it will need to fight for it.

Categories
EUROPEAN UNION Intelwars International International Privacy Standards Legislative Analysis Necessary and Proportionate privacy

A Look-Back and Ahead on Data Protection in Latin America and Spain

We’re proud to announce a new updated version of The State of Communications Privacy Laws in eight Latin American countries and Spain. For over a year, EFF has worked with partner organizations to develop detailed questions and answers (FAQs) around communications privacy laws. Our work builds upon previous and ongoing research of such developments in Argentina, Brazil, Chile, Colombia, Mexico, Paraguay, Panama, Peru, and Spain. We aim to understand each country’s legal challenges, in order to help us spot trends, identify the best and worst standards, and provide recommendations to look ahead. This post about data protection developments in the region is one of a series of posts on the current State of Communications Privacy Laws in Latin America and Spain. 

As we look back at the past ten years in data protection, we have seen considerable legal progress in granting users control over their personal lives. Since 2010, sixty-two new countries have enacted data protection laws, for a total of 142 countries with data protection laws worldwide. In Latin America, Chile was the first country to adopt such a law, in 1999, followed by Argentina in 2000. Several countries have since followed suit: Uruguay (2008), Mexico (2010), Peru (2011), Colombia (2012), Brazil (2018), Barbados (2019), and Panama (2019). While privacy approaches still differ, data protection laws are no longer a purely European phenomenon.

Yet contemporary developments in European data protection law continue to have an enormous influence in the region—in particular, the EU’s 2018 General Data Protection Regulation (GDPR). Since 2018, several countries, including Barbados and Panama, have led the way in adopting GDPR-inspired laws in the region, promising the beginning of a new generation of data protection legislation. In fact, the privacy protections of Brazil’s new GDPR-inspired law took effect this month, on September 18, after the Senate pushed back on a delaying order from President Jair Bolsonaro.

But when it comes to data protection in the law enforcement context, few countries have followed the European Union’s latest steps. The EU Police Directive, a law on the processing of personal data by police forces, has not yet become a Latin American phenomenon; Mexico is the only country with a specific data protection regulation for the public sector. By failing to adopt similar rules, countries in the Americas are missing a crucial opportunity to strengthen their communications privacy safeguards with rights and principles common to the global data protection toolkit.

New GDPR-Inspired Data Protection Laws
Brazil, Barbados, and Panama have been the first countries in the region to adopt GDPR-inspired data protection laws. Panama’s law, approved in 2019, will enter into force in March 2021.

Brazil’s law has faced an uphill battle. The provisions creating the oversight authority came into force in December 2018, but it took the government a year and a half to introduce a decree implementing its structure. The decree, however, will only have legal force when the President of the Board is officially appointed and approved by the Senate; no appointment has been made as of the publication of this post. The rest of the law was originally set to enter into force in February 2020, a deadline later changed to August 2020 and then further delayed to May 2021 through an executive act issued by President Bolsonaro. Yet, in a surprisingly positive twist, Brazil’s Senate stopped President Bolsonaro’s deferral in August. That means the law is now in effect, except for the penalties section, which has been deferred again, to August 2021.

Definition of Personal Data 
Like the GDPR, Brazil’s and Panama’s laws include a comprehensive definition of personal data: any information concerning an identified or identifiable person. The definition of personal data in Barbados’s law has certain limitations. It only protects data which relates to an individual who can be identified “from that data; or from that data together with other information which is in the possession of or is likely to come into the possession of the provider.” Anonymized data in Brazil, Panama, and Barbados falls outside the scope of the law. There are also variations in how these countries define anonymized data. Panama defines it as data that cannot be re-identified by reasonable means, but the law doesn’t set explicit parameters to guide this assessment. Brazil’s law makes it clear that anonymized data will be considered personal data if the anonymization process is reversed using exclusively the provider’s own means, or if it can be reversed with reasonable efforts. The Brazilian law defines objective factors for determining what is reasonable, such as the cost and time necessary to reverse the anonymization process given the technologies available, and the exclusive use of the provider’s own means. These parameters affect big tech companies with extensive computational power and large collections of data, which will need to determine whether their own resources could be used to re-identify anonymized data. This provision should not be interpreted in a way that ignores scenarios where the sharing or linking of anonymized data with other data sets, or with publicly available information, leads to the re-identification of the data.
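That last point is worth illustrating. The toy sketch below (all names and records invented) shows the classic linkage attack: an “anonymized” dataset is joined to a public auxiliary dataset on quasi-identifiers, re-attaching a name to a supposedly anonymous record.

```python
# Hypothetical toy datasets: an "anonymized" release and public auxiliary data.
anonymized = [
    {"zip": "15074", "birth": "1988-03-02", "sex": "F", "diagnosis": "asthma"},
    {"zip": "15074", "birth": "1990-11-17", "sex": "M", "diagnosis": "diabetes"},
]

voter_roll = [  # publicly available in many jurisdictions
    {"name": "Ana Diaz", "zip": "15074", "birth": "1988-03-02", "sex": "F"},
]

QUASI_IDENTIFIERS = ("zip", "birth", "sex")

def reidentify(anon_rows, aux_rows):
    """Join the two datasets on quasi-identifiers.

    A unique match re-attaches a name to a supposedly anonymous record.
    """
    for anon in anon_rows:
        matches = [
            aux for aux in aux_rows
            if all(anon[k] == aux[k] for k in QUASI_IDENTIFIERS)
        ]
        if len(matches) == 1:
            yield matches[0]["name"], anon["diagnosis"]

print(list(reidentify(anonymized, voter_roll)))
# [('Ana Diaz', 'asthma')] -- the "anonymized" diagnosis is re-identified.
```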

Right to Portability 
The three countries grant users the right to portability—a right to take their data from a service provider and transfer it elsewhere. Portability adds to the so-called ARCO (Access, Rectification, Cancellation, and Opposition) rights—a set of users’ rights that allow them to exercise control over their own personal data.

Enforcers of portability laws will need to make careful decisions about what happens when one person wants to port away data that relates both to them and another person, such as their social graph of contacts and contact information like phone numbers. This implicates the privacy and other rights of more than one person. Also, while portability helps users leave a platform, it doesn’t help them communicate with others who still use the previous one. Network effects can prevent upstart competitors from taking off. This is why we also need interoperability to enable users to interact with one another across the boundaries of large platforms. 

Again, different countries have different approaches. The Brazilian law tries to solve the multi-person data and interoperability issues by not limiting the “ported data” to data the user has given to a provider. It also doesn’t detail the format to be adopted. Instead, the data protection authority can set the standards among others for interoperability, security, retention periods, and transparency. In Panama, portability is a right and a principle. It is one of the general data protection principles that guide the interpretation and implementation of their overarching data protection law. As a right, it resembles the GDPR model. The user has the right to receive a copy of their personal data in a structured, commonly used, and machine-readable format. The right applies only when the user has provided their data directly to the service provider  and has given their consent or when the data is needed for the execution of a contract. Panama’s law expressly states that portability is “an irrevocable right” that can be requested at any moment. 

Portability rights in Barbados are similar to those in Panama, but, like the GDPR, there are some limitations. Users can only exercise the right to have their data ported directly from one provider to another when it is technically feasible. As in Panama, users can port data they themselves have provided, not data about them that other users have shared.
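In practice, “structured, commonly used, and machine-readable” usually means an export format like JSON or CSV. Here is a minimal sketch of what such an export might look like (all field names and values hypothetical); note how the contacts entry raises the multi-person problem discussed above.

```python
import json
from datetime import date

# A minimal sketch of a portability export (all field names hypothetical).
# Only data the user provided directly is included, mirroring the
# Panama/Barbados rules described above; other users' data is excluded.
export = {
    "exported_on": date.today().isoformat(),
    "profile": {"display_name": "example_user", "email": "user@example.com"},
    "posts": [
        {"created": "2020-09-01", "body": "First post"},
    ],
    # Contacts raise the multi-person problem discussed above: this record
    # is also *someone else's* personal data.
    "contacts": [
        {"display_name": "a_friend", "phone": "+51 1 555 0100"},
    ],
}

with open("takeout.json", "w", encoding="utf-8") as fh:
    json.dump(export, fh, ensure_ascii=False, indent=2)
```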

Automated Decision-Making About Individuals
Automated decision-making systems continuously make decisions about our lives, aiding or replacing human decision-making. In response, there is an emerging GDPR-inspired right not to be subjected to solely automated decision-making processes that produce legal or similarly significant effects on the individual. This right would apply, for example, to automated decision-making systems that use “profiles” to predict aspects of our personality, behavior, interests, locations, movements, and habits. With this right, the user can contest the decisions made about them and/or obtain an explanation of the logic behind the decision. Here, too, there are a few variations among countries.

Brazilian law establishes that the user has a right to the review of decisions affecting them that are based solely on the automated processing of personal data. These include decisions intended to define personal, professional, consumer, or credit profiles, or other traits of someone’s personality. Unfortunately, President Bolsonaro vetoed a provision requiring human review in such automated decision-making. On the upside, the user has a right to ask the provider to disclose the criteria and procedures adopted for automated decision-making, though unfortunately there is an exception for trade and industrial secrets.

In Barbados, the user has the right to know, upon request to the provider, about the existence of decisions based on the automated processing of personal data, including profiling. As in other countries, this includes access to information about the logic involved and the envisaged consequences for them. Barbados users also have the right not to be subject to automated decision-making without human involvement when the decision would produce legal or similarly significant effects on them, including profiling. There are exceptions for automated decisions that are necessary for entering into or performing a contract between the user and the provider, authorized by law, or based on user consent. Barbados defines consent similarly to the GDPR: there must be a freely given, specific, informed, and unambiguous indication of the user’s wishes regarding the processing of their personal data, and the user retains the ability to change their mind.

Panama’s law also grants users the right not to be subject to a decision based solely on the automated processing of their personal data, without human involvement, but this right applies only when the process produces negative legal effects for the user or detrimentally affects their rights. As in Barbados, Panama allows automated decisions that are necessary for entering into or performing a contract, based on the user’s consent, or permitted by law. But Panama defines “consent” in a less user-protective manner: as a person’s “manifestation” of their will.
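A small sketch may help fix ideas. The scoring rule below is entirely invented, but it shows the two hooks these laws attach to a solely automated decision: an explanation of the logic used, and a route to contest the decision and obtain human review.

```python
# A minimal sketch (hypothetical scoring rule) of a "solely automated"
# decision, with the explanation and human-review hooks described above.

def automated_credit_decision(income: float, debts: float) -> dict:
    """Decide without human involvement; record the logic used."""
    ratio = debts / income if income else float("inf")
    approved = ratio < 0.4  # hypothetical threshold
    return {
        "approved": approved,
        "criteria": "debt-to-income ratio must be below 0.4",
        "inputs": {"income": income, "debts": debts, "ratio": round(ratio, 2)},
    }

def contest(decision: dict) -> dict:
    """User exercises the right to contest: route to a human reviewer."""
    decision["status"] = "queued_for_human_review"
    return decision

result = automated_credit_decision(income=2000.0, debts=900.0)
if not result["approved"]:
    result = contest(result)  # right to review / explanation of the logic
```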

Legal Basis for the Processing of Personal Data
It is important for data privacy laws to require service providers to have a valid lawful basis to process personal data, and to document that basis before the processing starts; otherwise, the processing is unlawful. Data protection regimes, including all principles and users’ rights, must apply regardless of whether consent is required.

Panama’s new law allows three legal bases other than consent: compliance with a contractual obligation, compliance with a legal obligation, or authorization by a specific law. Brazil and Barbados set out ten legal bases for personal data processing—four more than the GDPR—with consent as only one of them. Brazilian and Barbadian law seek to balance this approach by providing users with clear and concise information about what providers do with their personal data, and by granting users the right to object to the processing of their data, which allows them to stop or prevent processing.

Data Protection in the Law Enforcement Context
Latin America lags in adopting comprehensive data protection regimes that apply not just to corporations, but also to public authorities processing personal data for law enforcement purposes. The EU, on the other hand, has adopted not just the GDPR but also the EU Police Directive, a law that regulates the processing of personal data by police forces. Most Latin American data protection laws exempt law enforcement and intelligence activities from their application. In Colombia, however, some data protection rules apply to the public sector: the nation’s general data protection law covers it, with exceptions for national security, defense, anti-money-laundering regulations, and intelligence. The Constitutional Court has stated that these exceptions are not absolute exclusions from the law’s application, but exemptions from just some of its provisions; complementary statutory law should regulate them, subject to the proportionality principle.

Spain has not yet implemented the EU’s Police Directive. As a result, personal data processing for law enforcement activities remains subject to the standards of the country’s previous data protection law. Argentina’s and Chile’s laws do apply to law enforcement agencies, and Mexico has a specific data protection regulation for the public sector, but Peru and Panama exclude law enforcement agencies from the scope of their data protection laws. Brazil’s law carves out an exception for personal data processing done solely for public safety, national security, and criminal investigation purposes, though it provides that specific legislation must be approved to regulate these activities.

Recommendations and Looking Ahead
Communications privacy has much to gain from the intersection of its traditional inviolability safeguards and the data protection toolkit. That intersection helps entrench international human rights standards applicable to law enforcement access to communications data. The principles of data minimization and purpose limitation in the data protection world correlate with the necessity, adequacy, and proportionality principles under international human rights law; they are necessary to curb massive data retention and dragnet government access to data. The idea that any personal data processing requires a legitimate basis upholds the basic tenets of legality and legitimate aim for placing limitations on fundamental rights. Law enforcement access to communications data must be clearly and precisely prescribed by law, and no legitimate basis other than compliance with a legal obligation is acceptable in this context.

Data protection transparency and information safeguards reinforce a user’s right to be notified when government authorities have requested their data. European courts have asserted that this right stems from privacy and data protection safeguards. In the Tele2 Sverige AB and Watson cases, the EU Court of Justice (CJEU) held that “national authorities to whom access to the retained data has been granted must notify the persons affected . . . as soon as that notification is no longer liable to jeopardize the investigations being undertaken by those authorities.” Before that, in Szabó and Vissy v. Hungary, the European Court of Human Rights had declared that notifying users of surveillance measures is inextricably linked to the right to an effective remedy against the abuse of monitoring powers.

Data protection transparency and information safeguards can also play a key role in fostering greater insight into companies’ and governments’ practices when it comes to requesting and handing over users’ communications data. In collaboration with EFF, many Latin American NGOs have been pushing Internet service providers to publish their law enforcement guidelines and aggregate information on government data requests. We’ve made progress over the years, but there’s still plenty of room for improvement. When it comes to public oversight, data protection authorities should have the legal mandate to supervise personal data processing by public entities, including law enforcement agencies. They should be impartial and independent authorities, conversant in data protection and technology, with adequate resources to exercise the functions assigned to them.

There are already many essential safeguards in the region. Most countries’ constitutions explicitly recognize privacy as a fundamental right, and most have adopted data protection laws. Each constitution recognizes a general right to private life or intimacy, or a set of multiple, specific rights: a right to the inviolability of communications; an explicit data protection right (Chile, Mexico, Spain); or “habeas data” (Argentina, Peru, Brazil) as either a right or a legal remedy. (In general, habeas data protects the right of any person to find out what data is held about them.) Most recently, a landmark ruling of Brazil’s Supreme Court recognized data protection as a fundamental right drawn from the country’s Constitution.

Across our work in the region, our FAQs help spot loopholes, flag concerning standards, and highlight pivotal safeguards (or their absence). It’s clear that the rise of data protection laws has helped secure user privacy across the region, but more needs to be done. Strong data protection rules that apply to law enforcement activities would enhance communications privacy protections in the region. More transparency is urgently needed, both about how the regulations will be implemented and about what additional steps private companies and the public sector are taking to proactively protect user data.

We invite everyone to read these reports and reflect on what work we should champion and defend in the days ahead, and what still needs to be done.

Categories
Intelwars International Necessary and Proportionate privacy Surveillance and Human Rights ¿Quién defiende tus datos?

Spain’s New Who Defends Your Data Report Shows Robust Privacy Policies But Crucial Gaps to Fill

ETICAS Foundation’s second ¿Quién Defiende Tus Datos? (Who Defends Your Data?) report on data privacy practices in Spain shows how Spain’s leading Internet and mobile app providers are making progress in being clear about how users’ personal data is protected. Providers are disclosing what information is collected, how long it’s kept, and who it’s shared with. Compared to Eticas’ first report on Spain in 2018, there was significant improvement in the number of companies informing users about how long they store data, as well as in notifying users about privacy policy changes.

The report, which evaluates the policies of 13 Spanish Internet companies, also indicates that a handful are taking seriously their obligations under the General Data Protection Regulation (GDPR), the European Union’s data privacy law that sets tough standards for protecting customers’ private information and gives users more information about, and control over, their private data. The law went into effect in May 2018.

But the good news for most of the companies pretty much stops there. All but the largest Internet providers in Spain are seriously lagging when it comes to transparency around government demands for user data, according to the Eticas report released today.

While Orange commits to notifying users about government requests, and both Vodafone and Telefónica clearly state the need for a court order before handing over users’ communications to authorities, the other featured companies have much to improve. They fail to provide information about how they handle law enforcement requests for user data, whether they require judicial authorization before giving personal information to police, or whether they notify users as soon as legally possible that their data was released to law enforcement. The lack of disclosure about their practices leaves open the question of whether they have users’ backs when the government wants personal data.

The format of the Eticas report is based on EFF’s Who Has Your Back project, which was launched nine years ago to shine a light on how well U.S. companies protect user data, especially when the government wants it. Since then the project has expanded internationally, with leading digital rights groups in Europe and the Americas evaluating the data privacy practices of Internet companies so that users can make informed choices about whom to trust with their data. Eticas Foundation first evaluated Spain’s leading providers in 2018 as part of a region-wide initiative focusing on Internet privacy policies and practices in Iberoamerica.

In today’s report, Eticas evaluated 13 companies: six telecom providers (Orange, Ono-Vodafone, Telefónica-Movistar, MásMóvil, Euskaltel, and Somos Conexión), five home sales and rental apps (Fotocasa, Idealista, Habitaclia, Pisos.com, and YaEncontré), and two apps for selling second-hand goods (Vibbo and Wallapop). The companies were assessed against a set of criteria covering policies for data collection, handing data over to law enforcement agencies, notifying customers about government data requests, publishing transparency reports, and promoting user privacy. Companies were awarded stars based on their practices and conduct. In light of the adoption of the GDPR, this year’s report assessed companies against several new criteria, including providing contact information for a company data protection officer, using private data for automated decision-making without human involvement and for building user profiles, and practices regarding international data transfers. Eticas also looked at whether they provide guidelines, tailored to local law, for law enforcement seeking user data.

The full study is available in Spanish, and we outline the main findings below. 

An Overview of Companies’ Commitments and Shortcomings

Telefónica-Movistar, Spain’s largest mobile phone company, was the most highly rated, earning stars in 10 out of 13 categories. Vodafone was a close second, with nine stars. There was a big improvement overall in companies providing information about how long they keep user data: all 13 companies reported doing so this year, compared to only three companies earning partial credit in 2018. The implementation of the GDPR has had a positive effect on privacy policies at only some companies, the report shows. While most companies provide contact information for data protection officers, only four—Movistar, Fotocasa, Habitaclia, and Vibbo—provide information about their practices for automated, nonhuman decision-making and profiling, and six—Vodafone, MásMóvil, Pisos.com, Idealista, Yaencontré, and Wallapop—provide information only about profiling.

Only Telefónica-Movistar and Vodafone disclose information to users about their policies for giving personal data to law enforcement agencies. Telefónica-Movistar’s data protection policy is vague, stating only that it will hand over user data to police in accordance with the law. However, the company’s transparency report shows that it lets police intercept communications only with a court order or in emergency situations. For metadata, the information provided is generic: it mentions only the legal framework and the authorities entitled to request it (judges, prosecutors, and the police).

Vodafone’s privacy policy says data will be handed over “according to the law and according to an exhaustive assessment of all legal requirements.” While its data protection policy does not present this information clearly, an accompanying report on the applicable legal framework describes both the framework and how the company interprets it, and states that a court order is needed to provide content and metadata to law enforcement.

Orange Spain is the only company that says it’s committed to telling users when their data is released to law enforcement unless there’s a legal prohibition against it. Because the company didn’t make clear that it will do so as soon as the legal barrier lapses, it received partial credit. Euskaltel and Somos Conexión, smaller ISPs, have stood out in promoting user privacy through campaigns or by defending users in court. On the latter front, Euskaltel challenged a judicial order demanding that the company reveal IP addresses in a commercial claim. After finally handing them over once the ruling was confirmed by a higher court, Euskaltel filed a complaint with the Spanish data protection authority over a possible violation of purpose limitation safeguards, given how the claimant used the data.

The report shows that, in general, the five home apps (Fotocasa, Idealista, Habitaclia, Pisos.com, and YaEncontré) and the two second-hand goods sales apps (Vibbo and Wallapop) have to step up their privacy information game considerably: they received no stars in nine of the 13 categories evaluated. This should give users pause and, in turn, motivate these companies to increase transparency about their data privacy practices, so that the next time they are asked whether they protect customers’ personal data, they have more to show.

Through ¿Quién Defiende Tus Datos? reports, local organizations, in collaboration with EFF, have been comparing companies’ commitments to transparency and users’ privacy in different Latin American countries and Spain. Earlier this year, Fundación Karisma in Colombia, ADC in Argentina, and TEDIC in Paraguay published new reports. New editions in Panama, Peru, and Brazil are also on their way, spotting which companies stand with their users and which fall short.

Categories
Commentary DMCA DMCA Rulemaking DRM Intelwars International Trade Agreements and Digital Rights

Human Rights and TPMs: Lessons from 22 Years of the U.S. DMCA

Introduction

In 1998, Bill Clinton signed the Digital Millennium Copyright Act (DMCA), a sweeping overhaul of U.S. copyright law notionally designed to update the system for the digital era. Though the DMCA contains many controversial sections, one of the most pernicious and problematic elements of the law is Section 1201, the “anti-circumvention” rule which prohibits bypassing, removing, or revealing defects in “technical protection measures” (TPMs) that control not just use but also access to copyrighted works.

In drafting this provision, Congress ostensibly believed it was preserving fair use and free expression, but it failed to understand how the new law would interact with technology in the real world and how some courts could interpret the law to drastically expand the power of copyright owners. Appellate courts disagree about the scope of the law, and the uncertainty and the threat of lawsuits have meant that rightsholders can effectively exert control over legitimate activities that have nothing to do with infringement, to the detriment of basic human rights. Manufacturers who design their products with TPMs that protect business models, rather than copyrighted works, can claim that using those products in ways that benefit their customers (rather than their shareholders) is illegal.

Twenty-two years later, TPMs, sometimes called “DRM” (“digital rights management”), are everywhere. They control who can fix cars and tractors, who can audit the security of medical implants, who can refill a printer cartridge, whether you can store a cable broadcast, and what you can do with it.

Last month, the Mexican Congress passed amendments to the Federal Copyright Law and the Federal Criminal Code, notionally to comply with the country’s treaty obligations under Donald Trump’s USMCA, the successor to NAFTA. This law included many provisions that interfered with human rights, so much so that the Mexican National Commission for Human Rights has filed a constitutional challenge before the Supreme Court seeking to annul these amendments.

Among the gravest of the defects in the new amendments to the Mexican copyright law and the Federal Criminal Code are the rules regarding TPMs, which replicate the defects of DMCA 1201. Notably, the new law does not address the flawed language of the DMCA that has allowed rightsholders to block legitimate and noninfringing uses of copyrighted works that depend on circumvention, and it creates harsh and disproportionate criminal penalties with unintended consequences for privacy and freedom of expression. These criminal provisions are so broad and vague that they can be applied to any person, even the owner of the device, and even if that person had no malicious intent to commit a wrongful act that would harm another. To make things worse, the Mexican law does not provide even the inadequate protections the US version offers, such as an explicit, regular regulatory proceeding that creates exemptions for areas where the law is provably creating harms.

As with DMCA 1201, the new amendments to the Mexican copyright law contain language that superficially appears to address these concerns; however, as with DMCA 1201, the Mexican law’s safeguard provisions are entirely cosmetic, so burdened with narrow definitions and onerous conditions that they are unusable. That is why, in 22 years of DMCA 1201, no one has ever successfully invoked the exemptions written into the statute.

EFF has had 22 years of experience with the fallout from DMCA 1201. In this article, we offer our hard-won expertise to our colleagues in Mexican civil society, industry, and lawmaking, and to the Mexican public.

Below, we have set out examples of how DMCA 1201 and its Mexican equivalent are incompatible with human rights, including free expression, self-determination, the rights of people with disabilities, cybersecurity, education, and archiving, as well as the law’s consequences for Mexico’s national resiliency, economic competitiveness, and food and health security.

Free Expression

Copyright and free expression are in obvious tension with one another: the former grants creators exclusive rights to reproduce and build upon expressive materials; the latter demands the least-possible restrictions on who can express themselves and how.

Balancing these two priorities is a delicate act, and while different countries manage their limitations and exceptions to copyright differently — fair use, fair dealing, derecho de autor, and more — these systems typically require a subjective, qualitative judgment in order to evaluate whether a use falls into one of the exempted categories: for example, the widespread exemptions for parody or commentary, or rules that give broad latitude to uses that are “transformative” or “critical.” These are rules that are designed to be interpreted by humans — ultimately by judges.

TPM rules that have no nexus with copyright infringement vaporize the vital qualitative considerations in copyright’s free expression exemptions, leaving behind a quantitative residue that is easy for computers to act upon, but which does not correspond closely to the policy objectives of limitations in copyright.

For example, a computer can tell if a video includes more than 25 frames of another video, or if the other works included in its composition do not exceed 10 percent of its total running time. But the computer cannot tell if the material that has been incorporated is there for parody, or commentary, or education — or if the video-editor absentmindedly dragged a video-clip from another project into the file before publishing it.
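To make the contrast concrete, here is a minimal sketch of the only kind of test a TPM can actually run. It is illustrative only: the names are invented, and the 25-frame and 10-percent thresholds are simply the hypothetical limits from the example above.

```typescript
// Hypothetical quantitative checks of the kind a TPM can enforce.
// All names and thresholds are illustrative, not any real system.
interface BorrowedClip {
  sourceId: string;    // which other work the material came from
  frames: number;      // how many frames were incorporated
  durationSec: number; // running time of the borrowed material
}

function violatesQuantitativeRule(
  clips: BorrowedClip[],
  totalDurationSec: number
): boolean {
  const borrowedSec = clips.reduce((sum, c) => sum + c.durationSec, 0);
  return (
    clips.some((c) => c.frames > 25) ||   // over 25 frames of one work
    borrowedSec / totalDurationSec > 0.1  // borrowed material over 10%
  );
}

// What no such function can compute: whether the borrowed material is
// parody, commentary, or education, or an absentminded drag-and-drop.
```

Everything a human judge would actually weigh lives outside the function’s inputs.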

And in truth, when TPMs collide with copyright exemptions, they are rarely even this nuanced.

Take the TPMs that prevent recording or duplication of videos, beginning with CSS, the system used in the first generation of DVD players, and continuing through the suite of video TPMs that followed, including AACS (Blu-Ray) and HDCP (display devices). These systems can’t tell if you are making a recording in order to produce a critical or parodic video commentary. In 2018, the US Copyright Office recognized that these TPMs interfere with the legitimate free expression rights of the public and granted an exemption to DMCA 1201 permitting the public to bypass them in order to make otherwise lawful recordings. The Mexican version of the DMCA does not include a formal procedure for granting comparable exemptions.

Other times, TPMs collide with free expression by allowing third parties to interpose themselves between rightsholders and their audiences, preventing the former from selling their expressive works to the latter.

The most prominent example of this interference is to be found in Apple’s App Store, the official monopoly retailer for apps that can run on Apple’s iOS devices, such as iPhones, iPads, Apple Watches, and iPods. Apple’s devices use TPMs that prevent owners of these devices from choosing to acquire software from rivals of the App Store. As a result, Apple’s editorial choices about which apps it includes in the App Store have the force of law. For an Apple customer to acquire an app from someone other than Apple, they must bypass the TPM on their device. Though we have won the right for customers to “jailbreak” their devices, anyone who sells them a tool to effect this commits a felony under DMCA 1201 and risks both a five-year prison sentence and a $500,000 fine (for a first offense).

While the recent dispute with Epic Games has highlighted the economic dimension of this system (Epic objects to paying a 30 percent commission to Apple for transactions related to its game Fortnite), there are many historic examples of pure content-based restrictions on Apple’s part, such as its rejection of apps over their political content.

In these cases, Apple’s TPM interferes with speech in ways that are far more grave than merely blocking recording to advantage rightsholders. Rather, Apple is using TPMs backed by DMCA 1201 to interfere with rightsholders as well. Thanks to DMCA 1201, the creator of an app and a person who wants to use that app on a device that they own cannot transact without Apple’s approval.

If Apple withholds that approval, the owner of the device and the creator of the copyrighted work are not allowed to consummate their arrangement, unless they bypass a TPM. Recall that commercial trafficking in TPM-circumvention tools is a serious crime under DMCA 1201, carrying a penalty of a five-year prison sentence and a $500,000 fine for a first criminal offense, even if those tools are used to allow rightsholders to share works with their audiences.

In the years since Apple perfected the App Store model, many manufacturers have replicated it, for categories of devices as diverse as games consoles, cars and tractors, thermostats and toys. In each of these domains — as with Apple’s App Store — DMCA 1201 interferes with free expression in arbitrary and anticompetitive ways.

Self-Determination

What is a “family?”

Human social arrangements don’t map well to rigid categories. Digital systems can take account of the indeterminacy of these social connections by allowing their users to articulate the ambiguous and complex nature of their lives within a database. For example, a system could allow users to enter several names of arbitrary length to accommodate the common experience of being called different things by different people, or it could allow them to define their own familial relationships, declaring the people they live with as siblings to be their “brothers” or “sisters” — or declaring an estranged parent to be a stranger, or a re-married parent’s spouse to be a “mother.”
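As a sketch of what that flexibility looks like in a database, consider a schema in which the user, rather than the vendor, supplies the categories. The names here are invented for illustration and do not describe any real system:

```typescript
// Illustrative schema: the user defines the categories.
interface Relationship {
  personId: string;
  label: string; // free text: "brother", "mother", "stranger", ...
}

interface Person {
  names: string[];               // as many names, of any length, as apply
  relationships: Relationship[]; // user-declared, not vendor-enumerated
}

// A user can record the complexity of their own life:
const me: Person = {
  names: ["María", "Mamá", "Dr. Ortiz"],
  relationships: [{ personId: "p42", label: "hermano de crianza" }],
};
```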

But when TPMs enter the picture, these necessary and beneficial social complexities are collapsed down into a set of binary conditions, fenced in by the biases and experiences of their designers. These systems are suspicious of their users, designed to prevent “cheating,” and they treat attempts to straddle their rigid categorical lines as evidence of dishonesty — not as evidence that the system is too narrow to accommodate its users’ lived experience.

One such example is CPCM, the “Content Protection and Copy Management” component of DVB, a standard for digital television broadcasts used all over the world.

CPCM relies on the concept of an “authorized domain” that serves as a proxy for a single family. Devices designated as belonging to an “authorized domain” can share video recordings freely with one another, but may not share videos with people from outside the domain — that is, with people who are not part of their family.
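Contrast that with the CPCM model, which reduces “family” to a single equality test. The following sketch uses invented names, but it captures the logic of an authorized-domain check:

```typescript
// The rigid model: one domain ID per device is the only fact the
// system can consult. Invented names, for illustration only.
interface Device {
  id: string;
  authorizedDomainId: string; // exactly one "family" per device
}

function maySharedRecordingFlow(from: Device, to: Device): boolean {
  // No notion of migrant relatives, informal households, or nomadic
  // families: either the domain IDs match, or sharing is denied.
  return from.authorizedDomainId === to.authorizedDomainId;
}
```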

The committee that designed the authorized domain was composed almost exclusively of European and US technology, broadcast, and media executives, and they took pains to design a system that was flexible enough to accommodate their lived experience.

If you have a private boat, or a luxury car with its own internal entertainment system, or a summer house in another country, the Authorized Domain is smart enough to understand that all these are part of a single family and will permit content to move seamlessly between them.

But the Authorized Domain is far less forgiving of families that have members who live abroad as migrant workers, or who are part of the informal economy in another state or country, or who travel through the year following the harvest. These “families” are not recognized as such by DVB-CPCM, even though there are far more families in their situation than there are families with summer homes on the Riviera.

All of this would add up to little more than a bad technology design, except for DMCA 1201 and other anti-circumvention laws.

Because of these laws — including Mexico’s new copyright law — defeating CPCM in order to allow a family member to share content with you is itself a potential offense, and selling a tool to enable this is a potential criminal offense, carrying a five-year sentence and a $500,000 fine for a first offense.

Mexico’s familial relations should be defined by Mexican lawmakers and Mexican courts and the Mexican people — not by wealthy executives from the global north meeting in board-rooms half a world away.

The Rights of People With Disabilities

Though disabilities are lumped into broad categories — “motor disabilities,” “blindness,” “deafness,” and so on — the capabilities and challenges of each person with a disability are as unique as the capabilities and challenges faced by each able-bodied person.

That is why the core of accessibility isn’t one-size-fits-all “accommodations” for people with disabilities; rather, it is “universal design”: designing systems so that they can be accessed, understood, and used to the greatest extent possible by all people, regardless of their age, size, ability, or disability.

The more a system can be altered by its user, the more accessible it is. Designers can and should build in controls and adaptations, from closed captions to the ability to magnify text or increase its contrast, but just as important is leaving the system open-ended, so that people whose needs were not anticipated during the design phase can adapt it themselves, or recruit others to do so for them.

This is incompatible with TPMs. TPMs are designed to prevent their users from modifying them. After all, if users could modify TPMs, they could subvert their controls.

Accessibility is important for people with disabilities, but it is also a great boon to able-bodied people: first, because many of us are merely “temporarily able-bodied” and will have to contend with some disability during our lives; and second, because flexible systems can accommodate use-cases that designers have not anticipated that able-bodied people also value: from the TV set with captions turned on in a noisy bar (or for language-learners) to the screen magnifiers used by people who have mislaid their glasses.

Like able-bodied people, many people with disabilities are able to make modifications and improvements to their own tools. However, most people, whether able-bodied or disabled, rely on third parties to modify the systems they depend on, because they lack the skill or time to make these modifications themselves.

That is why DMCA 1201’s prohibition on “trafficking in circumvention devices” is so punitive: it not only deprives programmers of the right to improve their tools, but it also deprives the rest of us of the right to benefit from those programmers’ creations, and programmers who dare defy this stricture face lengthy prison sentences and giant fines if they are prosecuted.

Recent examples of TPMs interfering with disabilities reveal how confining DMCA 1201 is for people with disabilities.

In 2017, the World Wide Web Consortium (W3C) approved a controversial TPM for videos on the Web called Encrypted Media Extensions (EME). EME makes some affordances for people with disabilities, but it lacks other important features. For example, people with photosensitive epilepsy cannot use automated tools to identify and skip past strobing effects in videos that could trigger dangerous seizures, while color-blind people can’t alter the color-palette of the videos to correct for their deficit.
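The kind of tool EME forecloses here is not exotic. A seizure-protection filter is, at bottom, a luminance analysis, roughly like the hypothetical sketch below. It assumes it is handed decoded per-frame brightness values, which is precisely the access a TPM withholds, and its thresholds are simplified; real guidance, such as the WCAG flash rules, is more detailed.

```typescript
// Hypothetical strobe detector: flags one-second spans containing
// rapid, large luminance swings that could trigger photosensitive
// seizures. Thresholds are illustrative, not clinical guidance.
function findStrobingSpans(
  frameLuminance: number[], // mean luminance per decoded frame, 0..1
  fps: number,
  flashDelta = 0.1,         // frame-to-frame swing counted as a flash
  maxFlashesPerSec = 3
): Array<[number, number]> {
  const spans: Array<[number, number]> = [];
  const window = Math.round(fps); // frames in a one-second window
  for (let start = 0; start + window < frameLuminance.length; start++) {
    let flashes = 0;
    for (let i = start + 1; i <= start + window; i++) {
      if (Math.abs(frameLuminance[i] - frameLuminance[i - 1]) > flashDelta) {
        flashes++;
      }
    }
    if (flashes > maxFlashesPerSec) {
      spans.push([start / fps, (start + window) / fps]); // in seconds
    }
  }
  return spans;
}
```

A player that could run this analysis could skip the flagged spans automatically; under EME, no user tool can see the frames in the first place.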

A more recent example comes from the med-tech giant Abbott Labs, which used DMCA 1201 to suppress a tool that allowed people with diabetes to link their glucose monitors to their insulin pumps, in order to automatically calculate and administer doses of insulin in an “artificial pancreas.”

Note that there is no copyright infringement in any of these examples: monitoring your blood sugar, skipping past seizure-inducing video effects, or changing colors to a range you can perceive do not violate anyone’s rights under US copyright law. These are merely activities that are dispreferred by manufacturers.

Normally, a manufacturer’s preference is subsidiary to the interests of the owner of a product, but not in this case. Once a product is designed so that you must bypass a TPM to use it in ways the manufacturer doesn’t like, DMCA 1201 gives the manufacturer’s preferences the force of law.

Archiving

In 1991, the science fiction writer Bruce Sterling gave a keynote address to the Game Developer’s Conference in which he described the assembled game creators as practitioners without a history, whose work crumbled under their feet as fast as they could create it: “Every time a [game] platform vanishes it’s like a little cultural apocalypse. And I can imagine a time when all the current platforms might vanish, and then what the hell becomes of your entire mode of expression?”

Sterling contrasted the creative context of software developers with authors: authors straddle a vast midden of historical material that they — and everyone else — can access. But in 1991, as computers and consoles were appearing and disappearing at bewildering speed, the software author had no history to refer to: the works of their forebears were lost to the ages, no longer accessible thanks to the disappearance of the hardware needed to run them.

Today, Sterling’s characterization rings hollow. Software authors, particularly games developers, have access to the entire corpus of their industry, playable on modern computers, thanks to the rise and rise of “emulators” — programs that simulate primitive, obsolete hardware on modern equipment that is orders of magnitude more powerful.

However, preserving the history of this otherwise ephemeral medium has not been for the faint of heart. From the earliest days of commercial software, companies have deployed TPMs to prevent their customers from duplicating their products or running them without authorization. Preserving the history of software is impossible without bypassing TPMs, and bypassing TPMs is a potential felony: supplying a tool to do so can send you to prison for five years and cost you half a million dollars.

That is why the US Copyright Office has repeatedly granted exemptions to DMCA 1201, permitting archivists in the United States to bypass software TPMs for preservation purposes.

Of course, it’s not merely software that is routinely restricted with TPMs, frustrating the efforts of archivists: from music to movies, books to sound recordings, TPMs are routine. Needless to say, these TPMs interfere with routine, vital archiving activities just as much as they interfere with the archiving and preservation of software.

Education

Copyright systems around the world create exemptions for educational activities; U.S. copyright law specifically mentions education in the criteria for exempted use.

But educators frequently run up against the blunt, indiscriminate restrictions imposed by TPMs, whose code cannot distinguish between someone engaged in educational activities and someone engaged in noneducational activities.

Educators’ conflicts with TPMs are many and varied: a teacher may build a lesson plan around an online video but be unable to act on it if the video is removed; in the absence of a TPM, the teacher could make a local copy of the video as a fallback.

For a decade, the U.S. Copyright Office has affirmed the need for educators to bypass TPMs in order to engage in normal pedagogical activities, most notably the need for film professors to bypass TPMs so that they and their students can analyze and edit commercial films as part of their studies.

National Resiliency

Thus far, this article has focused on TPMs’ impact on individual human rights, but human rights depend on the health and resiliency of the national territory in which they are exercised. Nutrition, health, and security are human rights just as surely as free speech, privacy, and accessibility.

The pandemic has revealed the brittleness and transience of seemingly robust supply chains and firms. Access to replacement parts and skilled technicians has been disrupted and firms have failed, taking down their servers and leaving digital tools in unusable or partially unusable states.

But TPMs don’t understand pandemics or other emergencies: they enforce restrictions irrespective of the circumstances on the ground. And where laws like DMCA 1201 prevent the development of tools and knowledge for bypassing TPMs, these indiscriminate restrictions take on the force of law and acquire a terrible durability, as few firms or even individuals are willing to risk prison and fines to supply the tools to make repairs to devices that are locked with TPMs.

Nowhere is this more visible than in agriculture, where the markets for key inputs like heavy machinery, seeds and fertilizer have grown dangerously concentrated, depriving farmers of meaningful choice from competitors with distinctive offers.

Farmers work under severe constraints: they work in rural, inaccessible territories, far from authorized service depots, and the imperatives of the living organisms they cultivate cannot be argued with. When your crop is ripe, it must be harvested — and that goes double if there’s a storm on the horizon.

That’s why TPMs in tractors constitute a severe threat to national resiliency, threatening the food supply itself. Ag-tech giant John Deere has repeatedly asserted that farmers may not effect their own tractor repairs, insisting that these repairs are illegal unless they are finalized by an authorized technician, who can take days to arrive (even when there isn’t a pandemic) and who charges hundreds of dollars to inspect the farmer’s own repairs and type an unlock code into the tractor’s keyboard.

John Deere’s position is that farmers are not qualified and should not be permitted to repair their own property. However, farmers have been fixing their own equipment for as long as agriculture has existed — every farm has a workshop and sometimes even a forge. Indeed, John Deere’s current designs are descended from modifications that farmers themselves made to earlier models: Deere used to dispatch field engineers to visit farms and copy farmers’ innovations for future models.

This points to another key feature for national resiliency: adaptation. Just as every person has unique needs that cannot be fully predicted and accounted for by product designers, so too does every agricultural context. Every plot of land has its own biodynamics, from soil composition to climate to labor conditions, and farmers have always adapted their tools to suit their needs. Multinational ag-tech companies can profitably target the conditions of the wealthiest farmers, but if you fall too far outside the median use-case, the parameters of your tractor are unlikely to fully suit your needs. That is why farmers are so accustomed to adapting their equipment.

To be clear, John Deere’s restrictions do not prevent farmers from modifying their tractors; they merely put those farmers in legal peril. As a result, farmers have turned to black-market Ukrainian replacement software for their tractors: no one knows who made this software, it comes with no guarantees, and if it contained malicious or defective code, there would be no one to sue.

And John Deere’s abuse of TPMs doesn’t stop at repairs. Tractors contain sophisticated sensors that can map out soil conditions to a high degree of accuracy, measuring humidity, density, and other factors and plotting them on a centimeter-accurate grid. This data is automatically generated by farmers driving tractors around their own fields, but the data does not go to the farmer. Rather, John Deere harvests the data that farmers generate while harvesting their crops, builds up detailed pictures of regional soil conditions, and sells those pictures as market intelligence to financial firms placing bets in the crop futures markets.

That data is useful to the farmers who generated it: accurate soil data is needed for “precision agriculture,” which improves crop yields by matching planting, fertilizing and watering to soil conditions. Farmers can access a small slice of that data, but only through an app that comes bundled with seed from Bayer-Monsanto. Competing seed companies, including domestic seed providers, cannot make comparable offers.

Again, this is bad enough under normal conditions, but when supply chains fail, the TPMs that enforce these restrictions prevent local suppliers from filling in the gaps.

Right to Repair

TPMs don’t just interfere with ag-tech repairs: dominant firms in every sector have come to realize that repairs are a doubly lucrative nexus of control. First, companies that control repairs can extract money from their customers by charging high prices to fix their property and by forcing customers to use high-priced manufacturer-approved replacement parts in those repairs; and second, companies can unilaterally declare some consumer equipment to be beyond repair and demand that customers pay to replace it.

Apple spent lavishly in 2018 on a campaign that stalled 20 state-level Right to Repair bills in the U.S.A., and, in his first shareholder address of 2019, Apple CEO Tim Cook warned that a major risk to Apple’s profitability came from consumers who chose to repair, rather than replace, their old phones, tablets and laptops.

The Right to Repair is key to economic self-determination at any time, but in times of global or local crisis, when supply chains shatter, repair becomes a necessity. Alas, the sectors most committed to thwarting independent repair are also sectors whose products are most critical to weathering crises.

Take the automotive sector: manufacturers in this increasingly concentrated sector have used TPMs to prevent independent repair, from scrambling the diagnostic codes used on cars’ internal communications networks to adding “security chips” to engine parts that prevent technicians from using functionally equivalent replacement parts from competing manufacturers.

The issue has simmered for a long time: in 2012, voters in the Commonwealth of Massachusetts overwhelmingly backed a ballot initiative that safeguarded the rights of drivers to choose their own mechanics, prompting the legislature to enact a right-to-repair law. However, manufacturers responded to this legal constraint by deploying TPMs that allow them to comply with the letter of the 2012 law while still preventing independent repair. The situation is so dire that Massachusetts voters have placed another initiative on this year’s ballot, one that would force automotive companies to disable TPMs in order to enable independent repair.

It’s bad enough to lose your car while a pandemic has shut down public transit, but it’s not just drivers who need the Right to Repair: it’s also hospitals.

Medtronic is the world’s largest manufacturer of ventilators. For 20 years, it has manufactured the workhorse Puritan Bennett 840 ventilator, but recently the company added a TPM to its ventilator design. The TPM prevents technicians from repairing a ventilator with a broken screen by swapping in a screen from another broken ventilator; this kind of parts-reuse is common, and authorized Medtronic technicians can refurbish a broken ventilator this way because they have the code to unlock the ventilator.

There is a thriving secondary market for broken ventilators, but refurbishers who need to transplant a monitor from one ventilator to another must bypass Medtronic’s TPM. To do this, they rely on a single Polish technician who manufactures a circumvention device and ships it to medical technicians around the world to help them with their repairs.

Medtronic strenuously objects to this practice and warns technicians that unauthorized repairs could expose patients to risk — we assume that the patients whose lives were saved by refurbished ventilators are unimpressed by this argument. In a cruel twist of irony, the anti-repair Medtronic was founded in 1949 as a medical equipment repair business that effected unauthorized repairs.

Cybersecurity

In the security field, it’s a truism that “there is no security in obscurity” — or, as cryptographer Bruce Schneier puts it, “anyone can design a system that they can’t think of a way around. That doesn’t mean it’s secure, it just means it’s secure against people stupider than you.”

Another truism in security is that “security is a process, not a product.” You can never know if a system is secure; all you can know is whether any defects have been discovered in it so far. Grave defects have been discovered even in very mature, widely used systems that have been in service for decades.

The corollary of these two rules is that security requires that systems be open to auditing by as many third parties as possible, because the people who designed those systems are blind to their own mistakes, and because each auditor brings their own blind spots to the exercise.

But when a system has TPMs, they often interfere with security auditing, and, more importantly, security disclosures. TPMs are widely used in embedded systems to prevent competitors from creating interoperable products — think of inkjet printers using TPMs to detect and reject third-party ink cartridges — and when security researchers bypass these to investigate products, their reports can run afoul of DMCA 1201. Revealing a defect in a TPM, after all, can help attackers disable that TPM, and thus constitutes “circumvention” information. Recall that supplying “circumvention devices” to the public is a criminal offense under DMCA 1201.

This problem is so pronounced that in 2018, the US Copyright Office granted an exemption to DMCA 1201 for security researchers.

However, that exemption is not broad enough to encompass all security research. A coalition of security researchers is returning to the Copyright Office in this rulemaking cycle to explain again why regulators have been wrong to impose restrictions on legitimate research.

Competition

Firms use TPMs in three socially harmful ways:

  1. Controlling customers: From limiting repairs to forcing the purchase of expensive spares and consumables to arbitrarily blocking apps, firms can use TPMs to compel their customers to behave in ways that put corporate interests above the interests of their customers;
  2. Controlling critics: DMCA 1201 means that when a security researcher discovers a defect in a product, the manufacturer can exercise a veto over the disclosure of the defect by threatening legal action;
  3. Controlling competitors: DMCA 1201 allows firms to unilaterally decide whether a competitor’s parts, apps, features and services are available to its customers.

Below, we delve into three key examples of TPMs’ interference with competitive markets.

App Stores

In principle, there is nothing wrong with a manufacturer “curating” a collection of software for its products that are tested and certified to be of high quality. However, when devices are designed so that using a rival’s app store requires bypassing a TPM, manufacturers can exercise a curator’s veto, blocking rival apps on the basis that they compete with the manufacturer’s own services.

The most familiar example of this is Apple’s repeated decision to block rivals on the grounds that they offer alternative payment mechanisms that bypass Apple’s own payment system and thus evade paying a commission to Apple. Recent high-profile examples include the HEY email app and the bestselling Fortnite app.

Streaming media

This plays out in other device categories as well, notably streaming video: AT&T’s HBO Max is deliberately incompatible with the leading video-to-TV bridges, Amazon Fire TV and Roku, which together command 70 percent of the market. Fire TV and Roku are often integrated directly into televisions, meaning that HBO Max customers must purchase additional hardware to watch the programming they’re already paying for on their own television sets. To make matters worse, HBO has cancelled its HBO Go service, which enabled people who paid for HBO through satellite and cable subscriptions to watch programming on Roku and Amazon devices.

Browsers

TPMs also allow for the formation of cartels that can collude to exclude entire development methodologies from a market and to deliver control over the market to a single company. For example, the W3C’s Encrypted Media Extensions (see “The Rights of People With Disabilities,” above) is a standard for streaming video to web browsers.

However, EME is designed so that it does not constitute a complete technical solution: every browser vendor that implements EME must also separately license a proprietary descrambling component called a “content decryption module” (CDM).

In practice, only one company makes a licensable CDM: Google, whose “Widevine” technology must be licensed in order to display commercial videos from companies like Netflix, Amazon Prime and other market leaders in a browser.
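The dependency is visible in the standard EME entry point itself: a web player must ask the browser for a named, proprietary key system, and a browser that has not licensed the corresponding CDM cannot answer. Here is a minimal sketch using the real requestMediaKeySystemAccess API; “com.widevine.alpha” is Widevine’s published key-system identifier, and the configuration shown is an illustrative minimum rather than what any particular service requires.

```typescript
// Probe for the Widevine CDM through the standard EME API. If the
// browser ships no licensed CDM for this key system, the promise
// rejects, and streams that require it cannot play, no matter how
// faithfully the browser implements the EME standard itself.
async function hasWidevine(): Promise<boolean> {
  const config: MediaKeySystemConfiguration[] = [{
    initDataTypes: ["cenc"],
    videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
  }];
  try {
    await navigator.requestMediaKeySystemAccess("com.widevine.alpha", config);
    return true;
  } catch {
    return false;
  }
}
```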

However, Google will not license this technology to free/open-source browsers other than those based on its own Chrome/Chromium browser. In standardizing a TPM for browsers, the W3C, together with Section 1201 of the DMCA, has delivered gatekeeper status to Google, which now gets to decide who may enter the browser market it dominates; rivals that attempt to implement a CDM without Google’s permission risk prison sentences and large fines.

Conclusion

The U.S.A. has had 22 years of experience with legal protections for TPMs under Section 1201 in the DMCA. In that time, the U.S. government has repeatedly documented multiple ways in which TPMs interfere with basic human rights and the systems that permit their exercise. The Mexican Supreme Court has now taken up the question of whether Mexico can follow the U.S.’s example and establish a comparable regime in accordance with the rights recognized by the Mexican Constitution and international human rights law. In this document, we provide evidence that TPM regimes are incompatible with this goal.

The Mexican Congress — and the U.S. Congress — could do much to improve this situation by tying offenses under TPM law to actual acts of copyright violation. As the above has demonstrated, the most grave abuses of TPMs stem from their use to interfere with activities that do not infringe copyright.

However, rightsholders already have a remedy for copyright infringements: copyright law. A separate liability regime for TPM circumvention serves no legitimate purpose. Rather, its burden falls squarely on people who want to stay on the right side of the law and find that their important, legitimate activities and expression are put in legal peril.

Categories
Commentary Fair Use Intelwars International

An Open Letter to the Government of South Africa on the Need to Protect Human Rights in Copyright

Five years ago, South Africa embarked upon a long-overdue overhaul of its copyright system, and, as part of that process, the country incorporated some of the best elements of both U.S. and European copyright.

From the U.S.A., South Africa imported the flexible idea of fair use — a set of tests for when it’s okay to use others’ copyrighted work without permission. From the E.U., South Africa imported the idea of specific, enumerated exemptions for libraries, galleries, archives, museums, and researchers.

Both systems are important for preserving core human rights, including free expression, privacy, education, and access to knowledge; as well as important cultural and economic priorities such as the ability to build U.S.- and European-style industries that rely on flexibilities in copyright.

Taken together, the two systems are even better: the European system of enumerated exemptions gives a bedrock of certainty on which South Africans can stand, knowing for sure that they are legally permitted to make those uses. The U.S. system, meanwhile, future-proofs these exemptions by giving courts a framework with which to evaluate new uses involving technologies and practices that do not yet exist.

But as important as these systems are, and as effective as they’d be in combination, powerful rightsholder lobbies insisted that they should not be incorporated in South African law. Incredibly, the U.S. Trade Representative objected to elements of the South African law that were nearly identical to U.S. copyright, arguing that the freedoms Americans take for granted should not be enjoyed by South Africans.

Last week, South African President Cyril Ramaphosa alarmed human rights N.G.O.s and the digital rights community when he returned the draft copyright law to Parliament, striking out both the E.U.- and U.S.-style limitations and exceptions, arguing that they violated South Africa’s international obligations under the Berne Convention, which is incorporated into other agreements such as the WTO’s TRIPS Agreement and the WIPO Copyright Treaty.

President Ramaphosa has been misinformed. The copyright limitations and exceptions under consideration in South Africa are both lawful under international treaties and important to the human rights, cultural freedom, economic development, national sovereignty and self-determination of the South African nation, the South African people, and South African industry.

Today, EFF sent an open letter to The Honourable Ms. Thandi Modise, Speaker of South Africa’s National Assembly; His Excellency Mr. Cyril Ramaphosa, President of South Africa; Ms. Inze Neethling, Personal Assistant to Minister E. Patel, South African Department of Trade, Industry and Competition; and The Honourable Mr. Andre Hermans, Secretary of the Portfolio Committee on Trade and Industry of the Parliament of South Africa.

In our letter, we set out the legal basis for the U.S. fair use system’s compliance with international law, and the urgency of balancing South African copyright with limitations and exceptions that preserve the public interest.

This is an urgent matter. EFF is proud to partner with NGOs in South Africa and around the world in advocating for the public’s rights in copyright.
