Standing With Security Researchers Against Misuse of the DMCA

Security research is vital to protecting the computers upon which we all depend, and protecting the people who have integrated electronic devices into their daily lives. To conduct security research, we need to protect the researchers, and allow them the tools to find and fix vulnerabilities. The Digital Millennium Copyright Act’s anti-circumvention provisions, Section 1201, can cast a shadow over security research, and unfortunately the progress we’ve made through the DMCA rule-making process has not been sufficient to remove this shadow.

DMCA reform has long been part of EFF’s agenda, to protect security researchers and others from its often troublesome consequences. We’ve sued to overturn the onerous provisions of Section 1201 that violate the First Amendment, we’ve advocated for exemptions in every triennial rule-making process, and our Coders’ Rights Project helps advise security researchers about the legal risks they face in conducting and disclosing research.

Today, we are honored to stand with a group of security companies and organizations that are showing their public support for good faith cybersecurity research, standing up against use of Section 1201 of the DMCA to suppress the software and tools necessary for that research. In the statement below, the signers have united to urge policymakers and legislators to reform Section 1201 to allow security research tools to be provided and used for good faith security research, and to urge companies and prosecutors to refrain from using Section 1201 to unnecessarily target tools used for security research.

The statement in full:

We the undersigned write to caution against use of Section 1201 of the Digital Millennium Copyright Act (DMCA) to suppress software and tools used for good faith cybersecurity research. Security and encryption researchers help build a safer future for all of us by identifying vulnerabilities in digital technologies and raising awareness so those vulnerabilities can be mitigated. Indeed, some of the most critical cybersecurity flaws of the last decade, like Heartbleed, Shellshock, and DROWN, have been discovered by independent security researchers.

However, too many legitimate researchers face serious legal challenges that prevent or inhibit their work. One of these critical legal challenges comes from provisions of the DMCA that prohibit providing technologies, tools, or services to the public that circumvent technological protection measures (such as bypassing shared default credentials, weak encryption, etc.) to access copyrighted software without the permission of the software owner. 17 USC 1201(a)(2), (b). This creates a risk of private lawsuits and criminal penalties for independent organizations that provide technologies to researchers that can help strengthen software security and protect users. Security research on devices, which is vital to increasing the safety and security of people around the world, often requires these technologies to be effective.

Good faith security researchers depend on these tools to test security flaws and vulnerabilities in software, not to infringe on copyright. While Sec. 1201(j) purports to provide an exemption for good faith security testing, including using technological means, the exemption is both too narrow and too vague. Most critically, 1201(j)’s accommodation for using, developing or sharing security testing tools is similarly confined; the tool must be for the “sole purpose” of security testing, and not otherwise violate the DMCA’s prohibition against providing circumvention tools.

If security researchers must obtain permission from the software vendor to use third-party security tools, this significantly hinders the independence and ability of researchers to test the security of software without any conflict of interest. In addition, it would be unrealistic, burdensome, and risky to require each security researcher to create their own bespoke security testing technologies.

We, the undersigned, believe that legal threats against the creation of tools that let people conduct security research actively harm our cybersecurity. DMCA Section 1201 should be used in such circumstances with great caution and in consideration of broader security concerns, not just for competitive economic advantage. We urge policymakers and legislators to reform Section 1201 to allow security research tools to be provided and used for good faith security research. In addition, we urge companies and prosecutors to refrain from using Section 1201 to unnecessarily target tools used for security research.

Bishop Fox
Black Hills Information Security
Cybersecurity Coalition
Digital Ocean
Electronic Frontier Foundation
Grand Idea Studio
Luta Security
NCC Group
Red Siege
SANS Technology Institute
Social Exploits LLC


If Not Overturned, a Bad Copyright Decision Will Lead Many Americans to Lose Internet Access

This post was co-written by EFF Legal Intern Lara Ellenberg

In going after internet service providers (ISPs) for the actions of just a few of their users, Sony Music, other major record labels, and music publishing companies have found a way to cut people off of the internet based on mere accusations of copyright infringement. When these music companies sued Cox Communications, an ISP, the court got the law wrong. It effectively decided that the only way for an ISP to avoid being liable for infringement by its users is to terminate a household or business’s account after a small number of accusations—perhaps only two. The court also allowed a damages formula that can lead to nearly unlimited damages, with no relationship to any actual harm suffered. If not overturned, this decision will lead to an untold number of people losing vital internet access as ISPs start to cut off more and more customers to avoid massive damages.

EFF, together with the Center for Democracy & Technology, the American Library Association, the Association of College and Research Libraries, the Association of Research Libraries, and Public Knowledge, filed an amicus brief this week urging the U.S. Court of Appeals for the Fourth Circuit to protect internet subscribers’ access to essential internet services by overturning the district court’s decision.

The district court agreed with Sony that Cox is responsible when its subscribers—home and business internet users—infringe the copyright in music recordings by sharing them on peer-to-peer networks. It effectively found that Cox didn’t terminate accounts of supposedly infringing subscribers aggressively enough. An earlier lawsuit found that Cox wasn’t protected by the Digital Millennium Copyright Act’s (DMCA) safe harbor provisions that protect certain internet intermediaries, including ISPs, if they comply with the DMCA’s requirements. One of those requirements is implementing a policy of terminating “subscribers and account holders … who are repeat infringers” in “appropriate circumstances.” The court ruled in that earlier case that Cox didn’t terminate enough customers who had been accused of infringement by the music companies.

In this case, the same court found that Cox was on the hook for the copyright infringement of its customers and upheld the jury verdict of $1 billion in damages—by far the largest amount ever awarded in a copyright case.

The District Court Got the Law Wrong

When an ISP isn’t protected by the DMCA’s safe harbor provision, it can sometimes be held responsible for copyright infringement by its users under “secondary liability” doctrines. The district court found Cox liable under both varieties of secondary liability—contributory infringement and vicarious liability—but misapplied both of them, with potentially disastrous consequences.

An ISP can be contributorily liable if it knew that a customer infringed on someone else’s copyright but didn’t take “simple measures” available to it to stop further infringement. Judge O’Grady’s jury instructions wrongly implied that because Cox didn’t terminate infringing users’ accounts, it failed to take “simple measures.” But the law doesn’t require ISPs to terminate accounts to avoid liability. The district court improperly imported a termination requirement from the DMCA’s safe harbor provision (which was already knocked out earlier in the case). In fact, the steps Cox took short of termination actually stopped most copyright infringement—a fact the district court simply ignored.

The district court also got it wrong on vicarious liability. Vicarious liability comes from the common law of agency. It holds that people who are a step removed from copyright infringement (the “principal,” for example, a flea market operator) can be held liable for the copyright infringement of its “agent” (for example, someone who sells bootleg DVDs at that flea market), when the principal had the “right and ability to supervise” the agent. In this case, the court decided that because Cox could terminate accounts accused of copyright infringement, it had the ability to supervise those accounts. But that’s not how other courts have ruled. For example, the Ninth Circuit decided in 2019 that Zillow was not responsible when some of its users uploaded copyrighted photos to real estate listings, even though Zillow could have terminated those users’ accounts. In reality, ISPs don’t supervise the Internet activity of their users. That would require a level of surveillance and control that users won’t tolerate, and that EFF fights against every day.

The consequence of getting the law wrong on secondary liability here, combined with the $1 billion damage award, is that ISPs will terminate accounts more frequently to avoid massive damages, and cut many more people off from the internet than is necessary to actually address copyright infringement.

The District Court’s Decision Violates Due Process and Harms All Internet Users

Not only did the decision get the law on secondary liability wrong, it also offends basic ideas of due process. In a different context, the Supreme Court decided that civil damages can violate the Constitution’s due process requirement when the amount is excessive, especially when it fails to consider the public interests at stake. In the case against Cox, the district court ignored both the fact that a $1 billion damages award is excessive, and that its decision will cause ISPs to terminate accounts more readily and, in the process, cut off many more people from the internet than necessary.

Having robust internet access is an important public interest, but when ISPs start over-enforcing to avoid having to pay billion-dollar damages awards, that access is threatened. Millions of internet users rely on shared accounts, for example at home, in libraries, or at work. If ISPs begin to terminate accounts more aggressively, the impact will be felt disproportionately by the many users who have done nothing wrong but only happen to be using the same internet connection as someone who was flagged for copyright infringement.

More than a year after the start of the COVID-19 pandemic, it’s more obvious than ever that internet access is essential for work, education, social activities, healthcare, and much more. If the district court’s decision isn’t overturned, many more people will lose access in a time when no one can afford not to use the internet. That harm will be especially felt by people of color, poorer people, women, and those living in rural areas—all of whom rely disproportionately on shared or public internet accounts. And since millions of Americans have access to just a single broadband provider, losing access to a (shared) internet account essentially means losing internet access altogether. This loss of broadband access because of stepped-up termination will also worsen the racial and economic digital divide. This is not just unfair to internet users who have done nothing wrong, but also overly harsh in the case of most copyright infringers. Being effectively cut off from society when an ISP terminates your account is excessive, given the actual costs of non-commercial copyright infringement to large corporations like Sony Music.

It’s clear that Judge O’Grady misunderstood the impact of losing Internet access. In a hearing on Cox’s earlier infringement case in 2015, he called concerns about losing access “completely hysterical,” and compared them to “my son complaining when I took his electronics away when he watched YouTube videos instead of doing homework.” Of course, this wasn’t a valid comparison in 2015 and it rightly sounds absurd today. That’s why, as the case comes before the Fourth Circuit, we’re asking the court to get the law right and center the importance of preserving internet access in its decision.


Free as in Climbing: Rock Climber’s Open Data Project Threatened by Bogus Copyright Claims

Rock climbers have a tradition of sharing “beta”—helpful information about a route—with other climbers. Giving beta is both useful and a form of community-building within this popular sport. Given that strong tradition of sharing, we were disappointed to learn that the owners of an important community website, Mountain Project, were abusing copyright to try to shut down another site, OpenBeta. The good news is that OpenBeta’s creator is not backing down—and EFF is standing with him.

Viet Nguyen, a climber and coder, created OpenBeta to bring open source software tools to the climbing community. He used Mountain Project, a website where climbers can post information about climbing routes, as a source of user-posted data about climbs, including their location, ratings, route descriptions, and the names of first ascensionists. Using this data, Nguyen created free, publicly available interfaces (APIs) that others can use to discover new insights about climbing—anything from mapping favorite crags to analyzing the relative difficulty of routes in different regions—using software of their own.

The Mountain Project website is built on users’ contributions of information about climbs. Building on users’ contributions, Mountain Project offers search tools, “classic climbs” lists, climbing news links, and other content. But although the site runs on the contributions of its users, Mountain Project’s owners apparently want to control who can use those contributions, and how. They sent a cease-and-desist letter to Mr. Nguyen, claiming to “own[] all rights and interests in the user-generated work” posted to the site, and demanding that he stop using it in OpenBeta. They also sent a DMCA request to GitHub to take down the OpenBeta code repository.

As we explain in our response, these copyright claims are absurd. First, climbers who posted their own beta and other information to Mountain Project may be surprised to learn that the website is claiming to “own” their posts, especially since the site’s Terms of Use say just the opposite: “you own Your Content.”

As is typical for sites that host user-generated content, Mountain Project doesn’t ask its users to hand over copyright in their posts, but rather to give the site a “non-exclusive” license to use what they posted. Mountain Project’s owners are effectively usurping their users’ rights in order to threaten a community member.

And even if Mountain Project had a legal interest in the content, OpenBeta didn’t infringe on it. Facts, like the names and locations of climbing routes, can’t be copyrighted in the first place. And although copyright might apply to climbers’ own route descriptions, OpenBeta’s use is a fair use. As we explained in our letter:

The original purpose of the material was to contribute to the general knowledge of the climbing community. The OpenBeta data files do something more: Mr. Nguyen uses it to help others to learn about Machine Learning, making climbing maps, and otherwise using software to generate new insights about rock climbing.

In other words, a fair use.

Rock climbers get a lot of practice at falling hard, taking a moment to recover, and continuing to climb. Mountain Project blew it here by making legally bogus threats against OpenBeta. We hope they take a lesson from their community: dust off, change your approach, and keep climbing.


Rewriting Intermediary Liability Laws: What EFF Asks – and You Should Too

Rewriting the legal pillars of the Internet is a popular sport these days. Frustration at Big Tech, among other things, has led to a flurry of proposals to change long-standing laws, like Section 230, Section 512 of the DMCA, and the E-Commerce Directive, that help shield online intermediaries from potential liability for what their users say or do, or for their content moderation decisions.

If anyone tells you revising these laws will be easy, they are gravely mistaken. For decades, Internet users – companies, news organizations, creators of all stripes, political activists, nonprofits, libraries, educators, governments and regular humans looking to connect – have relied on these protections. At the same time, some of the platforms and services that help make that all possible have hosted and amplified a great deal of harmful content and activity. Dealing with the latter without harming the former is an incredibly hard challenge. As a general matter, the best starting point is to ask: “Are intermediary protections the problem? Is my solution going to fix that problem? Can I mitigate the inevitable collateral effects?” The answer to all three should be a firm “Yes.” If so, the idea might be worth pursuing. If not, back to the drawing board.

That’s the short version. Here’s a little more detail about what EFF asks when policymakers come knocking.

What’s it trying to accomplish?

This may seem obvious, but it’s important to understand the goal of the proposal and then match that goal to its likely actual impacts. For example, if the stated goal of the proposal is to “rein in Big Tech,” then you must consider whether the plan might actually impede competition from smaller tech companies. If the stated goal is to prevent harassment, then we want to be sure the proposal won’t discourage platforms from moderating their content to cut down on harassment, and to consider whether the proposal will encourage overbroad censorship of non-harassing speech. In addition, we pay attention to whether the goal is consistent with EFF’s mission: to ensure that technology supports freedom, justice, and innovation for everyone.

Is it constitutional?

Too many policymakers seem to care too little about this detail – they’ll leave it for others to fight it out in the courts. Since EFF is likely to be doing the fighting, we want to plan ahead – and help others do the same. Call us crazy, but we also think voters care about staying within the boundaries set by the Constitution and also care about whether their representatives are wasting time (and public money) on initiatives that won’t survive judicial review.

Is it necessary – meaning, are intermediary protections the problem?

It’s popular right now to blame social media platforms for a host of ills. Sometimes that blame is deserved. And sometimes it’s not. Critics of intermediary liability protections too often forget that the law already affords rights and remedies to victims of harmful speech when it causes injury, and that the problem may stem from the failure to apply or enforce existing laws against users who violate those laws. State criminal penalties apply to both stalking and harassment, and a panoply of civil and criminal statutes address conduct that causes physical harm to an individual. Moreover, if an Internet company discovers that people are using its platforms to distribute child sexual abuse material, it must provide that information to the National Center for Missing and Exploited Children and cooperate with law enforcement investigations. Finally, law enforcement sometimes prefers to keep certain intermediaries active so that they can better investigate and track people who are using the platform to engage in illegal conduct.

If law enforcement lacks the resources it needs to follow up on reports of harassment and abuse, or lacks a clear understanding of, or commitment to, addressing those problems when they come up in the digital space, that’s a problem that needs fixing, immediately. But the solution probably doesn’t start or end with a person screening content in a cubicle, much less an algorithm attempting to do the same.

In addition to criminal charges, victims can use defamation, false light, intentional infliction of emotional distress, common law privacy, interference with economic advantage, fraud, anti-discrimination laws, and other civil causes of action to seek redress against the original author of the offending speech. They can also sue a platform if the platform owner is itself authoring the illegal content.

As for the platforms themselves, intermediary protections often contain important exceptions. To take just a few examples: Section 512 does not limit liability for service providers’ own infringing activities, and requires them to take action when they have knowledge of infringement by their users. Section 230 does not provide immunity against prosecutions under federal criminal law, or liability based on copyright law or certain sex trafficking laws. For example, backers of SESTA/FOSTA, the last Section 230 “reform,” pointed to Backpage.com as a primary target, but the FBI shut down the site without any help from that law. Nor does Section 230 provide immunity against civil or state criminal liability where the company is responsible, in whole or in part, for the creation or development of information. Nor does Section 230 immunize certain intermediary involvement with advertising, e.g., if a platform requires advertisers to choose ad recipients based on their protected status.

Against this backdrop, we ask: are intermediaries at fault here and, if so, are they beyond the reach of existing law? Will the proposed change help alleviate the problem in a practical way? Might targeting intermediaries impede enforcement under existing laws, such as by making it hard for law enforcement to locate and gather evidence about criminal wrongdoers?

Will it cause collateral damage? If so, can that damage be adequately mitigated?

As a civil liberties organization, one of the main reasons EFF defends limits on intermediary liability is because we know the crucial role intermediaries play in empowering Internet users who rely on those services to communicate. Attempts to change platform behavior by undermining Section 230 or Section 512, for example, may actually harm lawful users who rely on those platforms to connect, organize, and learn. This is a special risk to the historically marginalized communities that often lack a voice in traditional media and who often find themselves improperly targeted by content moderation systems. The ultimate beneficiaries of limits on intermediary liability are all of us who want those intermediaries to exist so that we can post things without having to code and host it ourselves, and so that we can read, watch, and re-use content that others create.

Further, we are always mindful that intermediary liability protections are not limited to brand name “tech companies,” of any size. Section 230, by its language, provides immunity to any “provider or user of an interactive computer service” when that “provider or user” republishes content created by someone or something else, protecting both decisions to moderate it and those to transmit it without moderation. “User,” in particular, has been interpreted broadly to apply “simply to anyone using an interactive computer service.” This includes anyone who maintains a website that hosts other people’s comments, posts another person’s op-ed to message boards or newsgroups, or forwards email written by someone else. A user can be an individual, a nonprofit organization, a university, a small brick-and-mortar business, or, yes, a “tech company.” And Section 512 protects a wide range of services, from your ISP to Twitter to the Internet Archive to a hobby site like Ravelry.

Against this backdrop, we ask: Who will be affected by the law? How will they respond?

For example, will intermediaries seek to limit their liability by censoring or curtailing lawful speech and activity? Will the proposal require intermediaries to screen or filter content before it is published? Will the proposal directly or indirectly compel intermediaries to remove or block user content, accounts, whole sections of websites, entire features or services? Will intermediaries shut down altogether? Will the cost of compliance become a barrier to entry for new competitors, further entrenching existing gate-keepers? Will the proposal empower a heckler’s veto, where a single notice or flag that an intermediary is being used for illegal purposes results in third-party liability if the intermediary doesn’t take action?

If the answer is yes to any of these, does the proposal include adequate remediation measures? For example (focusing just on competition) if compliance could make it difficult for smaller companies to compete or alternatives to emerge, does the proposal include mitigation measures? Will those mitigation measures be effective?

What expertise is needed to evaluate this proposal? Do we have it? Can we get it?

One of the many lessons of SESTA/FOSTA was that it’s hard to assess collateral effects if you don’t ask the right people. We asked sex workers and child safety experts what they thought about SESTA/FOSTA. They told us it was dangerous. They were right.

We take the same approach with the proposals coming in now. Do we understand the technological implications? For example, some proposed changes to Section 230 protections in the context of online advertising might effectively force systemic changes that will be both expensive and obsolete in a few years. Some might make a lot of sense and not be too burdensome for some services. Others might simply be difficult to assess without deeper knowledge of how the advertising system works now and will likely work in the future. Some implications for users might not be clear to us, so it’s especially important to seek out potentially affected communities and ensure they have a meaningful opportunity to consult and be heard on the impacts of any proposal. We try to make sure we know what questions to ask, but also to know what we don’t know.

Online Intermediary Reform Is Hard

Many intermediary liability reform proposals are little more than vaporware from policymakers who seem bent on willfully misunderstanding how intermediary protections and even the Internet work. But some are more serious, and deserve consideration and review. The above questions should help guide that process.




Cops Using Music to Try to Stop Being Filmed Is Just the Tip of the Iceberg

Someone tries to livestream their encounters with the police, only to find that the police start playing music. In the case of a February 5 meeting between an activist and the Beverly Hills Police Department, the song of choice was Sublime’s “Santeria.” The police may not got no crystal ball, but they do seem to have an unusually strong knowledge of copyright filters.

The timing of music being played when a cop saw he was being filmed was not lost on people. It seemed likely that the goal was to trigger Instagram’s over-zealous copyright filter, which would shut down the stream based on the background music and not the actual content. It’s not an unfamiliar tactic, and it’s unfortunately one based on the reality of how copyright filters work.

Copyright filters are generally more sensitive to audio content than audiovisual content. That sensitivity causes real problems for people performing, discussing, or reviewing music online. It’s a problem of mechanics: it is easier for a filter to match a piece of audio than a full audiovisual clip. And then there is the likelihood that a filter is merely checking to see whether a few seconds of a video file seem to contain a few seconds of an audio file.
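To make the mechanics concrete, here is a deliberately crude toy sketch of that kind of audio matching—not how Content ID or any real filter actually works, and all of the function names are invented for illustration. It reduces audio to a per-window feature (here, a zero-crossing count, a rough proxy for pitch) and then checks whether a short clip’s feature sequence appears anywhere inside a catalog track’s sequence:

```python
import math

def make_tone(freq, seconds, rate=8000):
    """Generate a sine wave as a list of float samples (stand-in for real audio)."""
    return [math.sin(2 * math.pi * freq * t / rate) for t in range(int(seconds * rate))]

def fingerprint(samples, window=800):
    """Crude per-window feature: count of upward zero crossings (rough pitch proxy)."""
    fp = []
    for start in range(0, len(samples) - window + 1, window):
        win = samples[start:start + window]
        crossings = sum(1 for a, b in zip(win, win[1:]) if a < 0 <= b)
        fp.append(crossings)
    return fp

def contains_clip(track_fp, clip_fp):
    """Does the clip's fingerprint sequence appear anywhere in the track's?"""
    n = len(clip_fp)
    return any(track_fp[i:i + n] == clip_fp for i in range(len(track_fp) - n + 1))

# A catalog "song": two seconds of a 440 Hz tone, then two seconds of 660 Hz.
song = make_tone(440, 2) + make_tone(660, 2)

# A user's video audio that contains one second of the song's 660 Hz section.
video_audio = make_tone(300, 1) + make_tone(660, 1) + make_tone(300, 1)

# Match just on the audio portion that overlaps the catalog track.
print(contains_clip(fingerprint(song), fingerprint(video_audio[8000:16000])))  # → True
```

The point of the sketch is that the matcher never looks at the video at all—a few seconds of audio overlap is enough to flag the whole stream, which is exactly why background music is such an effective way to trip a filter.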

It’s part of why playing music is a better way of getting a video stream you don’t want seen shut down. (The other part is that playing music is easier than walking around with a screen playing a Disney film in its entirety. Much fun as that would be.)

The other side of the coin is how difficult filters make it for musicians to perform music that no one owns. For example, classical musicians filming themselves playing public domain music—compositions that they have every right to play, as they are not copyrighted—attract many matches. This is because the major rightsholders or tech companies have put many examples of copyrighted performances of these songs into the system. It does not seem to matter whether the video shows a different performer playing the song—the match is made on audio alone. This drives lawful use of material offline.

Another problem is that people may have licensed the right to use a piece of music or are using a piece of free music that another work also used. And if that other work is in the filter’s database, it’ll make a match between the two. This results in someone who has all the rights to a piece of music being blocked or losing income. It’s a big enough problem that, in the process of writing our whitepaper on YouTube’s copyright filter, Content ID, we were told that people who had experienced this problem had asked for it to be included specifically.

Filters are so sensitive to music that it is very difficult to make a living discussing music online. The difficulty of getting music clips past Content ID explains the dearth of music commentators on YouTube. It is common knowledge among YouTube creators, with one saying “this is why you don’t make content about music.”

Criticism, commentary, and education of music are all areas that are legally protected by fair use. Using parts of a thing you are discussing to show what you mean is part of effective communication. And while the law does not make fair use of music more difficult to prove than any other kind of work, filters do.

YouTube’s filter does something even more insidious than simply taking down videos, though. When it detects a match, it allows the label claiming ownership to take part or all of the money that the original creator would have made. So a video criticizing a piece of music ends up enriching the party being critiqued. As one music critic explained:

Every single one of my videos will get flagged for something and I choose not to do anything about it, because all they’re taking is the ad money. And I am okay with that, I’d rather make my videos the way they are and lose the ad money rather than try to edit around the Content ID because I have no idea how to edit around the Content ID. Even if I did know, they’d change it tomorrow. So I just made a decision not to worry about it.

This setup is also how a ten-hour white noise video ended up with five copyright claims against it. This taking-from-the-poor-and-giving-to-the-rich is a blatantly absurd result, but it’s the status quo on much of YouTube.

A group, like the police, who is particularly tech-savvy could easily figure out which songs result in videos being removed rather than have the money stolen. Internet creators talk on social media about the issues they run into and from whom. Some rightsholders are infamously controlling and litigious.

Copyright should not be a fast-track to getting speech removed that you do not like. The law is meant to encourage creativity by giving artists a limited period of exclusive rights to their creations. It is not a way to make money off of criticism or a loophole to be exploited by authorities.


Section 1201’s Harm to Security Research Shown by Mixed Decision in Corellium Case

Under traditional copyright law, security research is a well-established fair use, meaning it does not infringe copyright. When it was passed in 1998, Section 1201 of the Digital Millennium Copyright Act upset the balance of copyright law. Since then, the balance has been further upset as it has been interpreted so broadly by some courts that it effectively eliminates fair use if you have to bypass an access control like encryption to make that fair use.

The District Court’s ruling in Apple v. Corellium makes this shift crystal-clear. Corellium is a company that enables security researchers to run smartphone software in a controlled, virtual environment, giving them greater insights into how the software functions and where it may be vulnerable. Apple sued the company, alleging that its interactions with Apple code infringed copyright and that it offered unlawful circumvention technology under Section 1201 of the Digital Millennium Copyright Act.

Corellium asked for “summary judgment” that it had not violated the law. Summary judgment is decided as a matter of law when the relevant facts are not in dispute. (Summary judgment is far less expensive and time-consuming for the parties and the courts, while having to go to trial can be prohibitive for individual researchers and small businesses.) Corellium won on fair use, but the court said that there were disputed facts that prevented it from ruling on the Section 1201 claims at this stage of the litigation. It also rejected Corellium’s argument that fair use is a defense to a claim under Section 1201.

Fair use is part of what makes copyright law consistent with both the First Amendment and the Constitution’s requirement that intellectual monopoly rights like copyright – if created at all – must promote the progress of “science and useful arts.”

We’re disappointed that the District Court failed to uphold the traditional limitations on copyright law that protect speech, research, and innovation. Applying fair use to Section 1201 would reduce the harm it does to fundamental rights.

It’s also disappointing that the provisions of Section 1201 that were enacted to protect security testing are so much less protective than traditional fair use has been. If those provisions were doing their job, the 1201 claim would be thrown out on summary judgment just as readily as the infringement claim, saving defendants and the courts from unnecessary time and expense.

We’ll continue to litigate Section 1201 to protect security researchers and the many other technologists and creators who rely on fair use in order to share their knowledge and creativity.


Let’s Stand Up for Home Hacking and Repair

Let’s tell the Copyright Office that it’s not a crime to modify or repair your own devices.

Every three years, the Copyright Office holds a rulemaking process where it grants the public permission to bypass digital locks for lawful purposes. In 2018, the Office expanded existing protections for jailbreaking and modifying your own devices to include voice-activated home assistants like Amazon Echo and Google Home, but fell far short of the broad allowance for all computerized devices that we’d asked for. So we’re asking for a similar exemption, but we need your input to make the best case possible: if you use a device with onboard software and DRM keeps you from repairing that device or modifying the software to suit your purposes, see below for information about how to tell us your story.

DMCA 1201: The Law That Launched a Thousand Crappy Products

Why is it illegal to modify or repair your own devices in the first place? It’s a long story. Congress passed the Digital Millennium Copyright Act in 1998. That’s the law that created the infamous “notice-and-takedown” process for allegations of copyright infringement on websites and social media platforms. The DMCA also included the less-known Section 1201, which created a new legal protection for DRM—in short, any technical mechanism that makes it harder for people to access or modify a copyrighted work. The DMCA makes it unlawful to bypass certain types of DRM unless you’re working within one of the exceptions granted by the Copyright Office.

Suddenly manufacturers had a powerful tool for restricting how their customers used their products: build your product with DRM, and you can argue that it’s illegal for others to modify or repair it.

The technology landscape was very different in 1998. At the time, when most people thought of DRM, they were thinking of things like copy protection on DVDs or other traditional media. Some of the most dangerous abuses of DRM today come in manufacturers’ use of it to limit how customers use their products—farmers being unable to repair their own tractors, or printer manufacturers trying to restrict users from buying third-party ink.


Section 1201 caught headlines recently when the RIAA attempted to use it to stop the distribution of youtube-dl, a tool that lets people download videos from YouTube and other user-uploaded video platforms. Fortunately, GitHub put the youtube-dl repository back up after EFF explained on behalf of youtube-dl’s developers that the tool doesn’t circumvent DRM.



Abuse of legal protections for DRM isn’t just a United States problem, either. Thanks to the way in which copyright law has been globalized through a series of trade agreements, much of the world has laws similar to DMCA 1201 on the books. That creates a worst-of-both-worlds scenario for countries that have neither the safety valve of fair use to protect people’s free expression rights nor a process like the Copyright Office rulemaking to remove the legal doubt around bypassing DRM for lawful purposes. The rulemaking process is deeply flawed, but it’s better than nothing.

Let’s Tell the Copyright Office: Home Hacking Is Not a Crime

Which brings us back to this year’s Copyright Office rulemaking. We’re asking the Copyright Office to grant a broad exemption allowing people to modify and repair all software-enabled devices for their own use.

If you have a story about how:

  • someone in the United States;
  • attempted or planned to modify, repair, or diagnose a product with a software component; and
  • encountered a technological protection measure (including DRM or digital rights management—any form of software security measure that restricts access to the underlying software code, such as encryption, password protection, or authentication requirements) that prevented completing the modification, repair, or diagnosis (or had to be circumvented to do so)

—we want to hear from you! Please email us at with the information listed below, and we’ll curate the stories we receive so we can present the most relevant ones alongside our arguments to the Copyright Office. The comments we submit to the Copyright Office will become a matter of public record, but we will not include your name if you do not wish to be identified by us. Submissions should include the following information:

  1. The product you (or someone else) wanted to modify, repair, or diagnose, including brand and model name/number if available.
  2. What you wanted to do and why.
  3. How a TPM interfered with your project, including a description of the TPM.
    • What did the TPM restrict access to?
    • What did the TPM block you from doing? How?
    • If you know, what would be required to get around the TPM? Is there another way you could accomplish your goal without doing this?
  4. Optional: Links to relevant articles, blog posts, etc.
  5. Whether we may identify you in our public comments, and your name and town of residence if so. We will treat all submissions as anonymous unless you expressly give us this permission to identify you.

RIAA Abuses DMCA to Take Down Popular Tool for Downloading Online Videos

“youtube-dl” is a popular free software tool for downloading videos from YouTube and other user-uploaded video platforms. GitHub recently took down youtube-dl’s code repository at the behest of the Recording Industry Association of America, potentially cutting off the many thousands of users, programs, and services that rely on it.

On its face, this might seem like an ordinary copyright takedown of the type that happens every day. Under the Digital Millennium Copyright Act (DMCA), a copyright holder can ask a platform to take down an allegedly infringing post, and the platform must comply in order to keep its legal safe harbor. (The platform must also allow the alleged infringer to file a counter-notice, requiring the copyright holder to file a lawsuit if she wants the allegedly infringing work kept offline.) But there’s a huge difference here with some frightening ramifications: youtube-dl doesn’t infringe on any RIAA copyrights.

youtube-dl doesn’t use RIAA-member labels’ music in any way. The makers of youtube-dl simply shared information with the public about how to perform a certain task—one with many completely lawful applications.

RIAA’s argument relies on a different section of the DMCA, Section 1201. DMCA 1201 says that it’s illegal to bypass a digital lock in order to access or modify a copyrighted work. Copyright holders have argued that it’s a violation of DMCA 1201 to bypass DRM even if you’re doing it for completely lawful purposes; for example, if you’re downloading a video on YouTube for the purpose of using it in a way that’s protected by fair use. (And thanks to the way that copyright law has been globalized via trade agreements, similar laws exist in many other jurisdictions too.) RIAA argues that since youtube-dl could be used to download music owned by RIAA-member labels, no one should be able to use the tool, even for completely lawful purposes.

This is an egregious abuse of the notice-and-takedown system, which is intended to resolve disputes over allegedly infringing material online. Again, youtube-dl doesn’t use RIAA-member labels’ music in any way. The makers of youtube-dl simply shared information with the public about how to perform a certain task—one with many completely lawful applications.

We’ve put together an explainer video on this takedown, and its implications for free speech online:



Please share this video with others who use YouTube and other video uploading services. And if you use youtube-dl for lawful purposes, we want to hear from you. Email us at and include “youtube-dl” in the subject line.


Ink-Stained Wretches: The Battle for the Soul of Digital Freedom Taking Place Inside Your Printer

Since its founding in the 1930s, Hewlett-Packard has been synonymous with innovation, and many’s the engineer who had cause to praise its workhorse oscillators, minicomputers, servers, and PCs. But since the turn of this century, the company’s changed its name to HP and its focus to sleazy ways to part unhappy printer owners from their money. Printer companies have long excelled at this dishonorable practice, but HP is truly an innovator, the industry-leading Darth Vader of sleaze, always ready to strong-arm you into a “deal” and then alter it later to tilt things even further to its advantage.

The company’s just beat its own record, converting its “Free ink for life” plan into a “Pay us $0.99 every month for the rest of your life or your printer stops working” plan.

Plenty of businesses offer some of their products on the cheap in the hopes of stimulating sales of their higher-margin items: you’ve probably heard of the “razors and blades” model (falsely) attributed to Gillette, but the same goes for cheap Vegas hotel rooms and buffets that you can only reach by running a gauntlet of casino “games,” and cheap cell phones that come locked into a punishing, eternally recurring monthly plan.

Printers are grifter magnets, and the whole industry has been fighting a cold war with its customers since the first clever entrepreneur got the idea of refilling a cartridge and settling for mere astronomical profits, thus undercutting the manufacturers’ truly galactic margins. This prompted an arms race in which the printer manufacturers devote ever more ingenuity to locking third-party refills, chips, and cartridges out of printers, despite the fact that no customer has ever asked for this.

Lexmark: First-Mover Advantage

But for all the dishonorable achievements of the printer industry’s anti-user engineers, we mustn’t forget the innovations their legal departments have pioneered in the field of ink- and toner-based bullying. First-mover advantage here goes to Lexmark, an IBM spin-off, whose lawyers ginned up an (unsuccessful) bid to use copyright law to prevent a competitor, Static Control, from modifying used Lexmark toner cartridges so they’d work after they were refilled.

A little more than a decade after its failure to get the courts to snuff out Static Control, Lexmark was actually sold off to Static Control’s parent company. Sadly, Lexmark’s aggressive legal culture came along with its other assets, and within a year of the acquisition, Lexmark’s lawyers were advancing a radical theory of patent law to fight companies that refilled its toner cartridges.

HP: A Challenger Appears

Lexmark’s fights were over laser-printer cartridges, filled with fine carbon powder that retailed at prices rivaling diamonds and other exotic forms of that element. But laser printers are a relatively niche part of the printer market: the real volume action is in inkjet printers—dirt-cheap, semi-disposable, and sporting cartridges (half-) full of ink priced to rival vintage Veuve Clicquot.

For the inkjet industry, ink was liquid gold, and the manufacturers innovated endlessly in finding ways to wring every drop of profit from it. Companies manufactured special cartridges that were only half-full for inclusion with new printers, so you’d have to replace them quickly. They designed calibration tests that used vast quantities of ink, and, despite all this calibration, never could quite seem to get a printer to register that there was still lots of ink left in the cartridge that it was inexplicably calling “empty” and refusing to draw from.

But all this ingenuity was at the mercy of printer owners, who simply did not respect the printer companies’ shareholders enough to voluntarily empty their bank accounts to refill their printers. Every time the printer companies found a way to charge more for less ink, their faithless customers stubbornly patronized rival companies that would refill or remanufacture their cartridges, or offer compatible ones.

Security Is Job One

Shutting out these rivals became job one. When your customers reject your products, you can always win their business back by depriving them of the choice to patronize a competitor. Printer cartridges soon bristled with “security chips” that use cryptographic protocols to identify and lock out refilled, third-party, and remanufactured cartridges. These chips were usually swiftly reverse-engineered or sourced out of discarded cartridges, but then the printer companies used dubious patent claims to have them confiscated by customs authorities as they entered the USA. (We’ve endorsed legislation that would end this practice.)
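The lockout these chips enforce can be pictured as a simple challenge-response handshake. The sketch below is hypothetical, since real cartridge chips and their protocols are proprietary, but it shows the basic idea: without the vendor's secret key, a refilled or third-party cartridge cannot produce a valid response, no matter how much ink it holds.

```python
# Hypothetical sketch of a cartridge-authentication handshake; real
# lockout chips are proprietary and their exact protocols are not public.
import hashlib
import hmac
import os

VENDOR_KEY = b"vendor-secret"  # baked into the printer and official chips

class Cartridge:
    def __init__(self, key):
        self.key = key
    def respond(self, challenge):
        # The chip proves key possession by keying a MAC of the challenge.
        return hmac.new(self.key, challenge, hashlib.sha256).digest()

def printer_accepts(cartridge):
    challenge = os.urandom(16)  # fresh nonce: replaying old answers fails
    expected = hmac.new(VENDOR_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(cartridge.respond(challenge), expected)

official = Cartridge(VENDOR_KEY)       # has the vendor secret
refilled = Cartridge(b"third-party")   # refilled/remanufactured: no secret

print(printer_accepts(official))  # True
print(printer_accepts(refilled))  # False -- locked out, regardless of ink
```

Nothing about this check concerns print quality or safety; it exists only to distinguish who made (or refilled) the cartridge.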

Here again, we see the beautiful synergy of anti-user engineering and anti-competition lawyering. It’s really heartwarming to see these two traditional rival camps in large companies cease hostilities and join forces.

Alas, the effort that went into securing HP from its customers left precious few resources to protect HP customers from the rest of the world. In 2011, the security researcher Ang Cui presented his research on HP printer vulnerabilities, “Print Me If You Dare.”

Cui found that simply by hiding code inside a malicious document, he could silently update the operating system of HP printers when the document was printed. His proof-of-concept code was able to seek out and harvest Social Security and credit-card numbers; probe the local area network; and penetrate the network’s firewall and allow him to freely roam it using the compromised printer as a gateway. He didn’t even have to trick people into printing his gimmicked documents to take over their printers: thanks to bad defaults, he was able to find millions of HP printers exposed on the public Internet, any one of which he could have hijacked with unremovable malware merely by sending it a print-job.

The security risks posed by defects in HP’s engineering are serious. Criminals who hack embedded systems like printers and routers and CCTV cameras aren’t content with attacking the devices’ owners—they also use these devices as botnets for devastating denial of service and ransomware attacks.

For HP, though, the “security update” mechanism built into its printers was a means for securing HP against its customers, not securing those customers against joining botnets or having the credit card numbers they printed stolen and sent off to criminals.

In March 2016, HP inkjet owners received a “security update available” message on their printers’ screens. When they tapped the button to install this update, their printers exhibited the normal security update behavior: a progress bar, a reboot, and then nothing. But this “security update” was actually a ticking bomb: a countdown timer that waited for five months before it went off in September 2016, activating a hidden feature that could detect and reject all third-party ink cartridges.

HP had designed this malicious update so that infected printers would be asymptomatic for months, until after parents had bought their back-to-school supplies. The delay ensured that warnings about the “security update” came too late for HP printer owners, who had by then installed the update themselves.
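The time-bomb logic described above amounts to a date-gated check. This sketch is illustrative only (the activation date is an assumption, and HP's actual firmware is closed), but it shows how an update can stay asymptomatic for months before the lockout switches on.

```python
# Illustrative sketch of a delayed-activation lockout: the "security
# update" behaves normally until a hard-coded date, then starts
# rejecting third-party cartridges. The date is a made-up stand-in,
# not HP's actual code or schedule.
from datetime import date

ACTIVATION_DATE = date(2016, 9, 13)  # hypothetical: months after March

def cartridge_accepted(is_official, today):
    if today < ACTIVATION_DATE:
        return True      # asymptomatic phase: all cartridges still work
    return is_official   # after the deadline: vendor ink only

# Third-party ink right after installing the update: fine.
print(cartridge_accepted(False, date(2016, 4, 1)))   # True
# The very same cartridge months later: rejected.
print(cartridge_accepted(False, date(2016, 10, 1)))  # False
```

The delay is the whole trick: by the time the symptom appears, the owner installed the "update" so long ago that the connection is invisible.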

HP printer owners were outraged and told the company so. The company tried to weather the storm, first by telling customers that they’d never been promised their printers would work with third-party ink, then by insisting that the lockouts were to ensure printer owners didn’t get “tricked” with “counterfeit” cartridges, and finally by promising that future fake security updates would be clearly labeled.

HP never did disclose which printer models it attacked with its update, and a year later it did it again, once more waiting until after the back-to-school season to stage its sneak attack, stranding cash-strapped parents with a year’s worth of useless ink cartridges for their kids’ school assignments.

You Don’t Own Anything

Other printer companies have imitated HP’s tactics, but HP never lost its edge, finding new ways to transfer money from printer owners to its tax-free offshore accounts.

HP’s latest gambit challenges the basis of private property itself: a bold scheme! With the HP Instant Ink program, printer owners no longer own their ink cartridges or the ink in them. Instead, HP’s customers have to pay a recurring monthly fee based on the number of pages they anticipate printing from month to month; HP mails subscribers cartridges with enough ink to cover their anticipated needs. If you exceed your estimated page-count, HP bills you for every page (if you choose not to pay, your printer refuses to print, even if there’s ink in the cartridges).

If you don’t print all your pages, you can “roll over” a few of those pages to the next month, but you can’t bank a year’s worth of pages to, say, print out your novel or tax paperwork. Once you hit your maximum number of “banked” pages, HP annihilates any other pages you’ve paid for (but continues to bill you every month).
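The arithmetic of the plan is worth spelling out. Here is a toy model of the mechanics described above: a monthly page allowance, a capped rollover bank, and per-page overage billing. The plan size, cap, and fee are made-up numbers for illustration, not HP's actual tiers.

```python
# Toy model of the subscription mechanics described above. The numbers
# (50-page plan, 100-page cap, $0.10/page overage) are illustrative.

def month(plan_pages, rollover, pages_printed,
          rollover_cap, overage_fee_per_page):
    """Return (overage_charge, new_rollover) for one billing month."""
    allowance = plan_pages + rollover
    if pages_printed > allowance:
        overage = pages_printed - allowance
        return overage * overage_fee_per_page, 0
    unused = allowance - pages_printed
    # Unused pages carry over only up to the cap; everything beyond it
    # vanishes, even though the subscriber already paid for those pages.
    return 0.0, min(unused, rollover_cap)

_, roll = month(50, 0, 10, 100, 0.10)     # print 10 -> bank 40
_, roll = month(50, roll, 20, 100, 0.10)  # bank grows to 70
_, roll = month(50, roll, 0, 100, 0.10)   # 120 unused, capped at 100
print(roll)  # 100 -- pages beyond the cap are simply annihilated
```

Note that the monthly fee is charged regardless; the cap only limits what the subscriber can ever get back for it.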

Now, you may be thinking, “All right, but at least HP’s customers know what they’re getting into when they take out one of these subscriptions,” but you’ve underestimated HP’s ingenuity.

HP takes the position that its offers can be retracted at any time. For example, HP’s “Free Ink for Life” subscription plan offered printer owners 15 pages per month as a means of tempting users to try out its ink subscription plan and of picking up some extra revenue in those months when these customers exceeded their 15-page limit.

But Free Ink for Life customers got a nasty shock at the end of last month: HP had unilaterally canceled their “free ink for life” plan and replaced it with a “$0.99/month for all eternity or your printer stops working” plan.

Ink in the Time of Pandemic

During the pandemic, home printers have become far more important to our lives. Our kids’ teachers want them to print out assignments, fill them in, and upload pictures of the completed work to Google Classroom. Government forms and contracts have to be printed, signed, and photographed. With schools and offices mostly closed, these documents are being printed from our homes.

The lockdown has also thrown millions out of work and subjected millions more to financial hardship. It’s hard to imagine a worse time for HP to shove its hands deeper into its customers’ pockets.

Industry Leaders

The printer industry leads the world when it comes to using technology to confiscate value from the public, and HP leads the printer industry.

But these are infectious grifts. For would-be robber-barons, “smart” gadgets are a moral hazard, an irresistible temptation to use those smarts to reconfigure the very nature of private property, such that only companies can truly own things, and the rest of us are mere licensors, whose use of the devices we purchase is bound by the ever-shifting terms and conditions set in distant boardrooms.

From Apple to John Deere to GM to Tesla to Medtronic, the legal fiction that you don’t own anything is used to force you to arrange your affairs to benefit corporate shareholders at your own expense.

And when it comes to the “razors and blades” business model, embedded systems offer techno-dystopian possibilities that no shaving company ever dreamed of: the ability to use law and technology to prevent competitors from offering their own consumables. From coffee pods to juice packets, from kitty litter to light bulbs, the printer-ink cartridge business model has inspired many imitators.

HP has come a long way since the 1930s, reinventing itself several times, pioneering personal computers and servers. But the company’s latest reinvention as a wallet-siphoning ink grifter is a sad turn indeed, and the only thing worse than HP’s decline is the many imitators it has inspired.


The Github youtube-dl Takedown Isn’t Just a Problem of American Law

The video downloading utility youtube-dl, like other large open source projects, accepts contributions from all around the globe. It is used practically wherever there’s an Internet connection. It’s especially shocking, therefore, when what looks like a domestic legal spat–involving a take-down demand written by lawyers representing the Recording Industry Association of America (RIAA), a U.S. industry group, to Github, a U.S. code hosting service, citing the Digital Millennium Copyright Act (DMCA), a U.S. law–can rip a hole in that global development process and disrupt access for youtube-dl users around the world.

Those outside the United States, long accustomed to arbitrary take-downs with “DMCA” in their subject line, might reasonably assume that the removal of youtube-dl from Github is yet another example of the American rightsholders’ grip on U.S. copyright law. Tragically for Internet users everywhere, the RIAA was not citing DMCA Section 512, the usual takedown route, but DMCA Section 1201, the ban on breaking digital locks. And the failures of that part of American law that can allow a rightsholder to intimidate an American company into an act of global censorship are coded into more than just the U.S. legal system.

The RIAA’s letter against youtube-dl cites DMCA 1201’s ban on distributing technology that can bypass DRM: what the law calls circumventing “technological protection measures.” It also mentions German law, which contains similar language. Here’s the core of the relevant U.S. statute, in 1201(b):

1201 (b) Additional Violations.—

  1. No person shall manufacture, import, offer to the public, provide, or otherwise traffic in any technology, product, service, device, component, or part thereof, that—
    1. is primarily designed or produced for the purpose of circumventing protection afforded by a technological measure that effectively protects a right of a copyright owner under this title in a work or a portion thereof;
    2. has only limited commercially significant purpose or use other than to circumvent protection afforded by a technological measure that effectively protects a right of a copyright owner under this title in a work or a portion thereof; or 
    3. is marketed by that person or another acting in concert with that person with that person’s knowledge for use in circumventing protection afforded by a technological measure that effectively protects a right of a copyright owner under this title in a work or a portion thereof. 

(While the law also has some important and hard-fought exceptions, they mostly apply only to using a circumvention tool, not to creating or distributing one.)

DMCA 1201 is incredibly broad, apparently allowing rightsholders to legally harass any “trafficker” in code that lets users re-take control of their devices from DRM locks.

EFF has been warning against the consequences of this approach since even before the DMCA was passed in 1998. That’s because DMCA 1201 was not the first time the U.S. considered adopting such language. DMCA 1201 enacts the provisions of an earlier global treaty: the World Intellectual Property Organization (WIPO)’s Copyright Treaty of 1996. That treaty’s existence is itself largely due to American rightsholders’ abortive attempt to pass a similar anti-circumvention proposal devised in the Clinton administration’s notoriously pro-industry 1995 White Paper on Intellectual Property and the National Information Infrastructure.

Stymied at the time by campaigns by a coalition of early Internet users, librarians, technologists, and civil libertarians in the United States, supporters of U.S. rightsholders laundered their proposal through the WIPO, an international treaty organization controlled by enthusiastic intellectual property maximalists with little understanding of the fledgling Net. The Clinton White Paper proposals failed, but the WIPO Copyright Treaty passed, and was later implemented in U.S. law as the DMCA, smuggling back the provisions that had been rejected years before.

Since 1996, over 100 countries have signed onto the WIPO Copyright Treaty. The Treaty itself uses notably less harsh language in what it requires from its signatories than the DMCA. It says, more simply:

Contracting Parties shall provide adequate legal protection and effective legal remedies against the circumvention of effective technological measures that are used by authors in connection with the exercise of their rights under this Treaty or the Berne Convention and that restrict acts, in respect of their works, which are not authorized by the authors concerned or permitted by law.

But rightsholders ratcheted up the punishments and scope of the Treaty when it was incorporated in U.S. law. 

Most countries adopted the far stronger DMCA 1201 language in their own implementations. That was partly because the U.S. was one of the earliest adopters, and it’s much easier to simply copy-and-paste another nation’s implementation than craft your own. But it’s also because it has been the continuing policy of the United States Trade Representative to pressure other countries to mirror the DMCA 1201 language, either through diplomatic lobbying, or by requiring it as a condition of signing trade agreements with the U.S.

DMCA 1201 has been loaded with terrible implications for innovation and free expression since the day it was passed. For many years, EFF documented these issues in our “Unintended Consequences” series; we continue to organize and lobby for temporary exemptions to its provisions for the purposes of cellphone unlocking, restoring vintage videogames and similar fair uses, as well as file and defend lawsuits in the United States to try and mitigate its damage. We look forward to the day when it is no longer part of U.S. law.

But due to the WIPO Copyright Treaty, the DMCA’s anti-circumvention provisions infest much of the world’s jurisdictions too, including the European Union via the Information Society Directive 2001/29/EC, which stipulates:

Member States shall provide adequate legal protection against the manufacture, import, distribution, sale, rental, advertisement for sale or rental, or possession for commercial purposes of devices, products or components or the provision of services which:

(a) are promoted, advertised or marketed for the purpose of circumvention of, or

(b) have only a limited commercially significant purpose or use other than to circumvent, or

(c) are primarily designed, produced, adapted or performed for the purpose of enabling or facilitating the circumvention of, any effective technological measures.

The EU directive already mirrors the worst of U.S. law in that it apparently prohibits the possession and distribution of anti-circumvention components (the language that led to the ridiculous spectacle, in the 2000s, of legal threats against anyone who posted the DeCSS algorithm online). Transpositions into domestic European law, and their domestic interpretations, have had the opportunity to make it even worse.

Fortunately, this time, hosts and developers in Germany were confident enough in their rights under German law to reject the RIAA’s take-down demands. But if rightsholders’ organizations wish to continue to misuse the provisions of the Copyright Treaty to go after tools like youtube-dl in yet more countries, they will have to be fought in every country, under the terms of each country’s version of the WIPO anti-circumvention provisions.

EFF has a long-term plan to beat the anti-circumvention laws, wherever they are, which we call Apollo 1201. But we need help from a global movement to finally revoke this ongoing attack on the world’s creators, innovators, and consumers. You can do your part by examining and understanding your own country’s anti-circumvention provisions–and preparing and organizing for the moment when your local RIAA comes knocking on your door.


Tell Us How You Want to Modify and Repair the Devices in Your Life

Have you tried modifying, repairing, or diagnosing a product but bumped into encryption, a password requirement, or some other technological roadblock that got in the way? EFF wants your stories to help us fight for your right to get around those obstacles.

Section 1201 of the Digital Millennium Copyright Act (DMCA) makes it illegal to circumvent certain digital access controls (also called “technological protection measures” or “TPMs”). Because software code can be copyrightable, this gives product manufacturers a legal tool to control the way you interact with the increasingly powerful devices in your life. While Section 1201’s stated goal was to prevent copyright infringement, the law has been used against artists, researchers, technicians, and other product owners, even when their reasons for circumventing manufacturers’ digital locks were completely lawful.

Every three years, there is a window of opportunity to get exemptions to this law to protect legitimate uses of copyrighted works. Last time around, we were able to preserve your right to repair, maintain, and diagnose your smartphones, home appliances, and home systems. For 2021, we’re asking the Copyright Office to expand that exemption to cover all software-enabled devices and to cover the right to modify or customize those products, not just repair them. As more and more products have computerized components, TPM-encumbered software runs on more of the devices we use daily—from toys to microwaves—putting you at risk of violating the statute if you access the code of something you own to lawfully customize it.

How You Can Help

To help make our case for this new exemption, we want to hear about your experiences with any product that has a software component protected by a TPM that prevents you from making full use of it. From the Internet of Things and medical devices, to smart TVs and game consoles, to appliances and computer peripherals, to any other items you can think of: that’s what we want to hear about! As an owner, you should have the right to repair, modify, and diagnose the products you rely on by being able to access all of the software code they contain. These may include products you don’t necessarily associate with software, such as insulin pumps, or smart products like toys and refrigerators.

If you have a story about how:

  • someone in the United States;
  • attempted or planned to repair, modify, or diagnose a product with a software component; and
  • encountered a technological protection measure (including DRM or digital rights management—any form of software security measure that restricts access to the underlying software code, such as encryption, password protection, or authentication requirements) that prevented completing the modification, repair, or diagnosis (or had to be circumvented to do so)

—we want to hear from you! Please email us with the information listed below, and we’ll curate the stories we receive so we can present the most relevant ones alongside our arguments to the Copyright Office. The comments we submit to the Copyright Office will become a matter of public record, but we will not include your name if you do not wish to be identified by us. Submissions should include the following information:

  1. The product you (or someone else) wanted to modify, repair, or diagnose, including brand and model name/number if available.
  2. What you wanted to do and why.
  3. How a TPM interfered with your project, including a description of the TPM.
    • What did the TPM restrict access to?
    • What did the TPM block you from doing? How?
    • If you know, what would be required to get around the TPM? Is there another way you could accomplish your goal without doing this?
  4. Optional: Links to relevant articles, blog posts, etc.
  5. Whether we may identify you in our public comments, and your name and town of residence if so. We will treat all submissions as anonymous unless you expressly give us permission to identify you.

This is a team effort. In seeking repair and tinkering exemptions over the years, we’ve used some great stories from you about your repair problems and projects involving your cars and other devices. In past cycles, your stories helped the Copyright Office understand the human impact of this law. Help us fight for your rights once more!


Defending Fair Use in the Omegaverse

Copyright law is supposed to promote creativity, not stamp out criticism. Too often, copyright owners forget that – especially when they have a convenient takedown tool like the Digital Millennium Copyright Act (DMCA).

EFF is happy to remind them – as we did this month on behalf of Internet creator Lindsay Ellis. Ellis had posted a video about a copyright dispute between authors in a very particular fandom niche: the Omegaverse realm of wolf-kink erotica. The video tells the story of that dispute in gory and hilarious detail, while breaking down the legal issues and proceedings along the way. Techdirt called it “truly amazing.” We agree. But feel free to watch “Into the Omegaverse: How a Fanfic Trope Landed in Federal Court,” and decide for yourself.

The dispute described in the video began with a series of takedown notices to online platforms making highly dubious allegations of copyright infringement. According to these notices, one Omegaverse author, Zoey Ellis (no relation), had infringed the copyright of another, Addison Cain, by copying common thematic aspects of characters in the Omegaverse genre, i.e., tropes. As Ellis’ video explains, these themes not only predate Cain’s works, but are uncopyrightable as a matter of law. Further litigation ensued, and Ellis’ video explains what happened and the opinions she formed based on the publicly available records of those proceedings. Some of those opinions are scathingly critical of Ms. Cain. But the First Amendment protects scathing criticism. So does copyright law: criticism and parody are classic examples of fair use that are authorized by law. Still, as we have written many times, DMCA abuse targeting such fair uses remains a pervasive and persistent problem.

Nevertheless, it didn’t take long for Cain to send (through counsel) outlandish allegations of copyright infringement and defamation. Soon after, Patreon and YouTube received DMCA notices from email addresses associated with Cain that raised the same allegations.

That’s when EFF stepped in. The video is a classic fair use. It uses a relatively small amount of a copyrighted work for purposes of criticism and parody in an hour-long video that consists overwhelmingly of Ellis’ original content. In short, the copyright claims were deficient as a matter of law. 

The defamation claims were also deficient, but their presence alone was cause for concern: defamation claims have no place in a DMCA notice.  If you want defamatory content taken down, you should seek a court order and satisfy the First Amendment’s rigorous requirements. Platform providers should be extremely skeptical of DMCA notices that include such claims and examine them carefully.

We explained these points in a letter to Cain’s counsel. We hoped the reminder would be well-taken. We were wrong. In response, Cain’s counsel accused EFF of colluding with the Organization for Transformative Works to undermine her client and demanded apologies from both EFF and Ellis.

As we explain in today’s response, that’s not going to happen. EFF has fought for years to protect the rights of content creators like Lindsay Ellis and we will not apologize for our commitment to this work. Nor will Ellis apologize for exercising her right to speak critically about public figures and their work. It’s past time to put an end to this entire matter.



Human Rights and TPMs: Lessons from 22 Years of the U.S. DMCA


In 1998, Bill Clinton signed the Digital Millennium Copyright Act (DMCA), a sweeping overhaul of U.S. copyright law notionally designed to update the system for the digital era. Though the DMCA contains many controversial sections, one of the most pernicious and problematic elements of the law is Section 1201, the “anti-circumvention” rule which prohibits bypassing, removing, or revealing defects in “technical protection measures” (TPMs) that control not just use but also access to copyrighted works.

In drafting this provision, Congress ostensibly believed it was preserving fair use and free expression, but it failed to understand how the new law would interact with technology in the real world, and how some courts could interpret the law to drastically expand the power of copyright owners. Appellate courts disagree about the scope of the law, and that uncertainty, together with the threat of lawsuits, has let rightsholders effectively exert control over legitimate activities that have nothing to do with infringement, to the detriment of basic human rights. Manufacturers who design their products with TPMs that protect business models, rather than copyrights, can claim that using those products in ways that benefit their customers (rather than their shareholders) is illegal.

22 years later, TPMs are everywhere, often under the name “DRM” (“digital rights management”). TPMs control who can fix cars and tractors, who can audit the security of medical implants, and who can refill a printer cartridge, as well as whether you can store a cable broadcast and what you can do with it.

Last month, the Mexican Congress passed amendments to the Federal Copyright Law and the Federal Criminal Code, notionally to comply with the country’s treaty obligations under Donald Trump’s USMCA, the successor to NAFTA. This law included many provisions that interfered with human rights, so much so that the Mexican National Commission for Human Rights has filed a constitutional challenge before the Supreme Court seeking to annul these amendments.

Among the gravest defects in the new amendments to the Mexican copyright law and the Federal Criminal Code are the rules regarding TPMs, which replicate the flaws of DMCA 1201. Notably, the new law does not address the flawed language of the DMCA that has allowed rightsholders to block legitimate, noninfringing uses of copyrighted works that depend on circumvention, and it creates harsh, disproportionate criminal penalties with unintended consequences for privacy and freedom of expression. The criminal provisions are so broad and vague that they can be applied to any person, even the owner of the device, and even absent any malicious intent to commit a wrongful act that would harm another. To make things worse, the Mexican law does not provide even the inadequate protections the US version offers, such as an explicit, regular regulatory proceeding that creates exemptions in areas where the law is provably causing harm.

As with DMCA 1201, the new amendments to the Mexican copyright law contain language that superficially appears to address these concerns; however, as with DMCA 1201, the Mexican law’s safeguard provisions are entirely cosmetic, so burdened with narrow definitions and onerous conditions that they are unusable. That is why, in 22 years of DMCA 1201, no one has ever successfully invoked the exemptions written into the statute.

EFF has had 22 years of experience with the fallout from DMCA 1201. In this article, we offer our hard-won expertise to our colleagues in Mexican civil society, industry, and lawmaking, and to the Mexican public.

Below, we have set out examples of how DMCA 1201 — and its Mexican equivalent — is incompatible with human rights, including free expression, self-determination, the rights of people with disabilities, cybersecurity, education, and archiving, as well as the law’s consequences for Mexico’s national resiliency, economic competitiveness, and food and health security.

Free Expression

Copyright and free expression are in obvious tension with one another: the former grants creators exclusive rights to reproduce and build upon expressive materials; the latter demands the least-possible restrictions on who can express themselves and how.

Balancing these two priorities is a delicate act, and while different countries manage their limitations and exceptions to copyright differently — fair use, fair dealing, derecho de autor, and more — these systems typically require a subjective, qualitative judgment in order to evaluate whether a use falls into one of the exempted categories: for example, the widespread exemptions for parody or commentary, or rules that give broad latitude to uses that are “transformative” or “critical.” These are rules that are designed to be interpreted by humans — ultimately by judges.

TPM rules that have no nexus with copyright infringement vaporize the vital qualitative considerations in copyright’s free expression exemptions, leaving behind a quantitative residue that is easy for computers to act upon, but which does not correspond closely to the policy objectives of limitations in copyright.

For example, a computer can tell if a video includes more than 25 frames of another video, or if the other works included in its composition do not exceed 10 percent of its total running time. But the computer cannot tell if the material that has been incorporated is there for parody, or commentary, or education — or if the video-editor absentmindedly dragged a video-clip from another project into the file before publishing it.
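The gap between what a filter can and cannot evaluate can be sketched in a few lines of hypothetical Python. The 25-frame and 10 percent thresholds are the illustrative figures from the paragraph above, not any real system’s rules, and the function name is invented:

```python
# Hypothetical sketch of the purely quantitative test an automated
# filter or TPM can apply. Thresholds (25 frames, 10 percent of running
# time) are the illustrative figures from the text, not a real rule.

def flags_as_infringing(borrowed_frames: int,
                        borrowed_seconds: float,
                        total_seconds: float) -> bool:
    """Return True if the video trips either quantitative threshold.

    Note what is *absent*: nothing here can ask whether the borrowed
    material is parody, commentary, or education -- the qualitative
    questions a human judge would weigh.
    """
    exceeds_frame_limit = borrowed_frames > 25
    exceeds_runtime_share = (borrowed_seconds / total_seconds) > 0.10
    return exceeds_frame_limit or exceeds_runtime_share

# A one-hour critical video quoting 3 minutes of a film (only 5 percent
# of its runtime, but far more than 25 frames) gets flagged regardless
# of its obvious fair-use purpose.
print(flags_as_infringing(borrowed_frames=3 * 60 * 24,
                          borrowed_seconds=180.0,
                          total_seconds=3600.0))  # True
```

The point of the sketch is the asymmetry: both thresholds are trivially machine-checkable, while every factor that actually matters to a limitations-and-exceptions analysis is missing from the function signature entirely.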

And in truth, when TPMs collide with copyright exemptions, they are rarely even this nuanced.

Take the TPMs that prevent recording or duplication of videos, beginning with CSS, the system used in the first generation of DVD players, and continuing through the suite of video TPMs that includes AACS (Blu-ray) and HDCP (display devices). These systems can’t tell whether you are making a recording in order to produce a critical or parodic video commentary. In 2018, the US Copyright Office recognized that these TPMs interfere with the legitimate free expression rights of the public and granted an exemption to DMCA 1201 permitting the public to bypass these TPMs in order to make otherwise lawful recordings. The Mexican version of the DMCA does not include a formal procedure for granting comparable exemptions.

Other times, TPMs collide with free expression by allowing third parties to interpose themselves between rightsholders and their audiences, preventing the former from selling their expressive works to the latter.

The most prominent example of this interference is found in Apple’s App Store, the official monopoly retailer for apps that can run on Apple’s iOS devices, such as iPhones, iPads, Apple Watches, and iPods. Apple’s devices use TPMs that prevent their owners from acquiring software from rivals of the App Store. As a result, Apple’s editorial choices about which apps it includes in the App Store have the force of law. For an Apple customer to acquire an app from someone other than Apple, they must bypass the TPM on their device. Though we have won the right for customers to “jailbreak” their devices, anyone who sells them a tool to do so commits a felony under DMCA 1201 and risks both a five-year prison sentence and a $500,000 fine (for a first offense).

While the recent dispute with Epic Games has highlighted the economic dimension of this system (Epic objects to paying a 30 percent commission to Apple for transactions related to its game Fortnite), there have been many historical examples of pure content-based restrictions on Apple’s part.

In these cases, Apple’s TPM interferes with speech in ways that are far more grave than merely blocking recording to advantage rightsholders. Rather, Apple is using TPMs backed by DMCA 1201 to interfere with rightsholders as well. Thanks to DMCA 1201, the creator of an app and a person who wants to use that app on a device that they own cannot transact without Apple’s approval.

If Apple withholds that approval, the owner of the device and the creator of the copyrighted work are not allowed to consummate their arrangement, unless they bypass a TPM. Recall that commercial trafficking in TPM-circumvention tools is a serious crime under DMCA 1201, carrying a penalty of a five year prison sentence and a $500,000 fine for a first criminal offense, even if those tools are used to allow rightsholders to share works with their audiences.

In the years since Apple perfected the App Store model, many manufacturers have replicated it, for categories of devices as diverse as games consoles, cars and tractors, thermostats and toys. In each of these domains — as with Apple’s App Store — DMCA 1201 interferes with free expression in arbitrary and anticompetitive ways.

Self Determination

What is a “family”?

Human social arrangements don’t map well to rigid categories. Digital systems can take account of the indeterminacy of these social connections by allowing their users to articulate the ambiguous and complex nature of their lives within a database. For example, a system could allow users to enter several names of arbitrary length to accommodate the common experience of being called different things by different people, or it could allow them to define their own familial relationships, declaring the people they live with as siblings to be their “brothers” or “sisters” — or declaring an estranged parent to be a stranger, or a re-married parent’s spouse to be a “mother.”
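A minimal, hypothetical sketch of the flexible data model this paragraph describes might look like the following; the names and labels are invented for illustration:

```python
# Hypothetical sketch of a flexible relationship model: users declare
# their own names and relationship labels, rather than choosing from a
# fixed, designer-imposed list. All names below are invented examples.
from dataclasses import dataclass, field

@dataclass
class Person:
    # Several names of arbitrary length, because people are called
    # different things by different people.
    names: list[str]
    # Relationships are user-defined labels, not a closed enum, so a
    # housemate can be a "brother" and a re-married parent's spouse a
    # "mother".
    relationships: dict[str, "Person"] = field(default_factory=dict)

    def declare(self, label: str, other: "Person") -> None:
        # No validation is imposed: the user, not the designer,
        # decides what counts as family.
        self.relationships[label] = other

me = Person(names=["Alejandra", "Ale", "La Jefa"])
roommate = Person(names=["Marco"])
me.declare("brother", roommate)  # chosen family, accepted as-is
```

The design choice that matters here is that the label is an open string chosen by the user, so the schema never has to anticipate every family arrangement in advance.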

But when TPMs enter the picture, these necessary and beneficial social complexities are collapsed down into a set of binary conditions, fenced in by the biases and experiences of their designers. These systems are suspicious of their users, designed to prevent “cheating,” and they treat attempts to straddle their rigid categorical lines as evidence of dishonesty — not as evidence that the system is too narrow to accommodate its users’ lived experience.

One such example is CPCM, the “Content Protection and Copy Management” component of DVB, a standard for digital television broadcasts used all over the world.

CPCM relies on the concept of an “authorized domain” that serves as a proxy for a single family. Devices designated as belonging to an “authorized domain” can share video recordings freely with one another, but may not share videos with people from outside the domain — that is, with people who are not part of their family.

The committee that designed the authorized domain was composed almost exclusively of European and US technology, broadcast, and media executives, and they took pains to design a system that was flexible enough to accommodate their lived experience.

If you have a private boat, or a luxury car with its own internal entertainment system, or a summer house in another country, the Authorized Domain is smart enough to understand that all these are part of a single family and will permit content to move seamlessly between them.

But the Authorized Domain is far less forgiving of families with members who live abroad as migrant workers, or who are part of the informal economy in another state or country, or nomads who follow the harvest through the year. These “families” are not recognized as such by DVB-CPCM, even though there are far more families in their situation than there are families with summer homes on the Riviera.
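Stripped of its protocol details, the rigidity described above reduces to a single membership test. A minimal, hypothetical sketch (the domain names are invented, and this is not the actual CPCM algorithm):

```python
# Hypothetical sketch of an "authorized domain" check like the one the
# text attributes to DVB-CPCM: sharing is a binary membership test,
# with no way to express that the recipient is, in fact, family.

def may_share(recording_domain: str, device_domain: str) -> bool:
    # The only question the system can ask. "Is this person my sibling
    # working abroad?" is simply not expressible in this model.
    return recording_domain == device_domain

# The summer house's set-top box is enrolled in the same domain: allowed.
print(may_share("smith-family", "smith-family"))  # True
# A relative working abroad, on a device enrolled elsewhere: refused.
print(may_share("smith-family", "perez-family"))  # False
```

Everything qualitative about kinship has been collapsed into one string comparison, which is exactly why the system fails the families whose arrangements its designers did not share.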

All of this would add up to little more than a bad technology design, except for DMCA 1201 and other anti-circumvention laws.

Because of these laws — including Mexico’s new copyright law — defeating CPCM in order to allow a family member to share content with you is itself a potential offense, and selling a tool to enable this is a potential criminal offense, carrying a five-year sentence and a $500,000 fine for a first offense.

Mexico’s familial relations should be defined by Mexican lawmakers and Mexican courts and the Mexican people — not by wealthy executives from the global north meeting in board-rooms half a world away.

The Rights of People With Disabilities

Though disabilities are lumped into broad categories — “motor disabilities,” “blindness,” “deafness,” and so on — the capabilities and challenges of each person with a disability are as unique as the capabilities and challenges faced by each able-bodied person.

That is why the core of accessibility isn’t one-size-fits-all “accommodations” for people with disabilities; rather, it is “universal design”: the design of systems so that they can be accessed, understood, and used to the greatest extent possible by all people, regardless of their age, size, ability, or disability.

The more a system can be altered by its user, the more accessible it is. Designers can and should build in controls and adaptations, from closed captions to the ability to magnify text or increase its contrast, but just as important is to leave the system open-ended, so that people whose needs were not anticipated during the design phase can suit them to their needs, or recruit others to do so for them.

This is incompatible with TPMs. TPMs are designed to prevent their users from modifying them. After all, if users could modify TPMs, they could subvert their controls.

Accessibility is important for people with disabilities, but it is also a great boon to able-bodied people: first, because many of us are merely “temporarily able-bodied” and will have to contend with some disability during our lives; and second, because flexible systems can accommodate use-cases that designers have not anticipated that able-bodied people also value: from the TV set with captions turned on in a noisy bar (or for language-learners) to the screen magnifiers used by people who have mislaid their glasses.

Like able-bodied people, many people with disabilities are able to effect modifications and improvements in their own tools. However, most people, whether able-bodied or living with disabilities, rely on third parties to modify the systems they depend on, because they lack the skill or time to make these modifications themselves.

That is why DMCA 1201’s prohibition on “trafficking in circumvention devices” is so punitive: it not only deprives programmers of the right to improve their tools, but it also deprives the rest of us of the right to benefit from those programmers’ creations, and programmers who dare defy this stricture face lengthy prison sentences and giant fines if they are prosecuted.

Recent examples of TPMs interfering with disabilities reveal how confining DMCA 1201 is for people with disabilities.

In 2017, the World Wide Web Consortium (W3C) approved a controversial TPM for videos on the Web called Encrypted Media Extensions (EME). EME makes some affordances for people with disabilities, but it lacks other important features. For example, people with photosensitive epilepsy cannot use automated tools to identify and skip past strobing effects in videos that could trigger dangerous seizures, while color-blind people can’t alter the color-palette of the videos to correct for their deficit.

A more recent example comes from the med-tech giant Abbott Labs, which used DMCA 1201 to suppress a tool that allowed people with diabetes to link their glucose monitors to their insulin pumps, in order to automatically calculate and administer doses of insulin in an “artificial pancreas.”

Note that there is no copyright infringement in any of these examples: monitoring your blood sugar, skipping past seizure-inducing video effects, or changing colors to a range you can perceive do not violate anyone’s rights under US copyright law. These are merely activities that are dispreferred by manufacturers.

Normally, a manufacturer’s preference is subsidiary to the interests of the owner of a product, but not in this case. Once a product is designed so that you must bypass a TPM to use it in ways the manufacturer doesn’t like, DMCA 1201 gives the manufacturer’s preferences the force of law.

Archiving


In 1991, the science fiction writer Bruce Sterling gave a keynote address to the Game Developer’s Conference in which he described the assembled game creators as practitioners without a history, whose work crumbled under their feet as fast as they could create it: “Every time a [game] platform vanishes it’s like a little cultural apocalypse. And I can imagine a time when all the current platforms might vanish, and then what the hell becomes of your entire mode of expression?”

Sterling contrasted the creative context of software developers with authors: authors straddle a vast midden of historical material that they — and everyone else — can access. But in 1991, as computers and consoles were appearing and disappearing at bewildering speed, the software author had no history to refer to: the works of their forebears were lost to the ages, no longer accessible thanks to the disappearance of the hardware needed to run them.

Today, Sterling’s characterization rings hollow. Software authors, particularly games developers, have access to the entire corpus of their industry, playable on modern computers, thanks to the rise and rise of “emulators” — programs that simulate primitive, obsolete hardware on modern equipment that is orders of magnitude more powerful.

However, preserving the history of an otherwise ephemeral medium is not for the faint of heart. From the earliest days of commercial software, companies have deployed TPMs to prevent their customers from duplicating their products or running them without authorization. Preserving the history of software is impossible without bypassing TPMs, and supplying a tool to bypass them is a potential felony that can send you to prison for five years and/or cost you half a million dollars.

That is why the US Copyright Office has repeatedly granted exemptions to DMCA 1201, permitting archivists in the United States to bypass software TPMs for preservation purposes.

Of course, it’s not merely software that is routinely restricted with TPMs, frustrating the efforts of archivists: from music to movies, books to sound recordings, TPMs are routine. Needless to say, these TPMs interfere with routine, vital archiving activities just as much as they interfere with the archiving and preservation of software.


Copyright systems around the world create exemptions for educational activities; U.S. copyright law specifically mentions education in the criteria for exempted use.

But educators frequently run up against the blunt, indiscriminate restrictions imposed by TPMs, whose code cannot distinguish between someone engaged in educational activities and someone engaged in noneducational activities.

Educators’ conflicts with TPMs are many and varied: a teacher may build a lesson plan around an online video but be unable to act on it if the video is removed; in the absence of a TPM, the teacher could make a local copy of the video as a fallback.

For a decade, the U.S. Copyright Office has affirmed the need for educators to bypass TPMs in order to engage in normal pedagogical activities, most notably the need for film professors to bypass TPMs in order to teach their students and so that their students can analyze and edit commercial films as part of their studies.

National Resiliency

Thus far, this article has focused on TPMs’ impact on individual human rights, but human rights depend on the health and resiliency of the national territory in which they are exercised. Nutrition, health, and security are human rights just as surely as free speech, privacy, and accessibility.

The pandemic has revealed the brittleness and transience of seemingly robust supply chains and firms. Access to replacement parts and skilled technicians has been disrupted and firms have failed, taking down their servers and leaving digital tools in unusable or partially unusable states.

But TPMs don’t understand pandemics or other emergencies: they enforce restrictions irrespective of the circumstances on the ground. And where laws like DMCA 1201 prevent the development of tools and knowledge for bypassing TPMs, these indiscriminate restrictions take on the force of law and acquire a terrible durability, as few firms or even individuals are willing to risk prison and fines to supply the tools to make repairs to devices that are locked with TPMs.

Nowhere is this more visible than in agriculture, where the markets for key inputs like heavy machinery, seeds and fertilizer have grown dangerously concentrated, depriving farmers of meaningful choice from competitors with distinctive offers.

Farmers work under severe constraints: they work in rural, inaccessible territories, far from authorized service depots, and the imperatives of the living organisms they cultivate cannot be argued with. When your crop is ripe, it must be harvested — and that goes double if there’s a storm on the horizon.

That’s why TPMs in tractors constitute a severe threat to national resiliency, threatening the food supply itself. Ag-tech giant John Deere has repeatedly asserted that farmers may not effect their own tractor repairs, insisting that these repairs are illegal unless they are finalized by an authorized technician, who can take days to arrive (even when there isn’t a pandemic) and who charges hundreds of dollars to inspect the farmer’s own repairs and type an unlock code into the tractor’s keyboard.

John Deere’s position is that farmers are not qualified and should not be permitted to repair their own property. However, farmers have been fixing their own equipment for as long as agriculture has existed — every farm has a workshop and sometimes even a forge. Indeed, John Deere’s current designs are descended from modifications that farmers themselves made to earlier models: Deere used to dispatch field engineers to visit farms and copy farmers’ innovations for future models.

This points to another key feature for national resiliency: adaptation. Just as every person has unique needs that cannot be fully predicted and accounted for by product designers, so too does every agricultural context. Every plot of land has its own biodynamics, from soil composition to climate to labor conditions, and farmers have always adapted their tools to suit their needs. Multinational ag-tech companies can profitably target the conditions of the wealthiest farmers, but if you fall too far outside the median use-case, the parameters of your tractor are unlikely to fully suit your needs. That is why farmers are so accustomed to adapting their equipment.

To be clear, John Deere’s restrictions do not prevent farmers from modifying their tractors; they merely put those farmers in legal peril. As a result, farmers have turned to black-market Ukrainian replacement software for their tractors: no one knows who made this software, it comes with no guarantees, and if it contained malicious or defective code, there would be no one to sue.

And John Deere’s abuse of TPMs doesn’t stop at repairs. Tractors contain sophisticated sensors that can map out soil conditions to a high degree of accuracy, measuring humidity, density and other factors and plotting them on a centimeter-accurate grid. This data is automatically generated by farmers driving tractors around their own fields, but the data does not go to the farmer. Rather, John Deere harvests the data that farmers generate while harvesting their crops and builds up detailed pictures of regional soil conditions, which the company sells as market intelligence to the financial markets for bets on crop futures.

That data is useful to the farmers who generated it: accurate soil data is needed for “precision agriculture,” which improves crop yields by matching planting, fertilizing and watering to soil conditions. Farmers can access a small slice of that data, but only through an app that comes bundled with seed from Bayer-Monsanto. Competing seed companies, including domestic seed providers, cannot make comparable offers.

Again, this is bad enough under normal conditions, but when supply chains fail, the TPMs that enforce these restrictions prevent local suppliers from filling in the gaps.

Right to Repair

TPMs don’t just interfere with ag-tech repairs: dominant firms in every sector have come to realize that repairs are a doubly lucrative nexus of control. First, companies that control repairs can extract money from their customers by charging high prices to fix their property and by forcing customers to use high-priced manufacturer-approved replacement parts in those repairs; and second, companies can unilaterally declare some consumer equipment to be beyond repair and demand that customers pay to replace it.

Apple spent lavishly in 2018 on a campaign that stalled 20 state-level Right to Repair bills in the U.S.A., and, in his first shareholder address of 2019, Apple CEO Tim Cook warned that a major risk to Apple’s profitability came from consumers who chose to repair, rather than replace, their old phones, tablets and laptops.

The Right to Repair is key to economic self-determination at any time, but in times of global or local crisis, when supply chains shatter, repair becomes a necessity. Alas, the sectors most committed to thwarting independent repair are also sectors whose products are most critical to weathering crises.

Take the automotive sector: manufacturers in this increasingly concentrated sector have used TPMs to prevent independent repair, from scrambling the diagnostic codes used on cars’ internal communications networks to adding “security chips” to engine parts that prevent technicians from using functionally equivalent replacement parts from competing manufacturers.

The issue has simmered for a long time: in 2012, voters in the Commonwealth of Massachusetts overwhelmingly backed a ballot initiative that safeguarded the rights of drivers to choose their own mechanics, prompting the legislature to enact a right-to-repair law. However, manufacturers responded to this legal constraint by deploying TPMs that allow them to comply with the letter of the 2012 law while still preventing independent repair. The situation is so dire that Massachusetts voters have placed another initiative on this year’s ballot, one that would force automotive companies to disable TPMs in order to enable independent repair.

It’s bad enough to lose your car while a pandemic has shut down public transit, but it’s not just drivers who need the Right to Repair: it’s also hospitals.

Medtronic is the world’s largest manufacturer of ventilators. For 20 years, it has manufactured the workhorse Puritan Bennett 840 ventilator, but recently the company added a TPM to its ventilator design. The TPM prevents technicians from repairing a ventilator with a broken screen by swapping in a screen from another broken ventilator; this kind of parts-reuse is common, and authorized Medtronic technicians can refurbish a broken ventilator this way because they have the code to unlock the ventilator.

There is a thriving secondary market for broken ventilators, but refurbishers who need to transplant a monitor from one ventilator to another must bypass Medtronic’s TPM. To do this, they rely on a single Polish technician who manufactures a circumvention device and ships it to medical technicians around the world to help them with their repairs.

Medtronic strenuously objects to this practice and warns technicians that unauthorized repairs could expose patients to risk — we assume that the patients whose lives were saved by refurbished ventilators are unimpressed by this argument. In a cruel twist of irony, the anti-repair Medtronic was founded in 1949 as a medical equipment repair business that effected unauthorized repairs.


In the security field, it’s a truism that “there is no security in obscurity” — or, as cryptographer Bruce Schneier puts it, “anyone can design a system that they can’t think of a way around. That doesn’t mean it’s secure, it just means it’s secure against people stupider than you.”

Another truism in security is that “security is a process, not a product.” You can never know if a system is secure — all you can know is whether any defects have been discovered in it. Grave defects have been discovered in even very mature systems that have been in wide use for decades.

The corollary of these two rules is that security requires that systems be open to auditing by as many third parties as possible, because the people who designed those systems are blind to their own mistakes, and because each auditor brings their own blind spots to the exercise.

But when a system has TPMs, they often interfere with security auditing, and, more importantly, security disclosures. TPMs are widely used in embedded systems to prevent competitors from creating interoperable products — think of inkjet printers using TPMs to detect and reject third-party ink cartridges — and when security researchers bypass these to investigate products, their reports can run afoul of DMCA 1201. Revealing a defect in a TPM, after all, can help attackers disable that TPM, and thus constitutes “circumvention” information. Recall that supplying “circumvention devices” to the public is a criminal offense under DMCA 1201.

This problem is so pronounced that in 2018, the US Copyright Office granted an exemption to DMCA 1201 for security researchers.

However, that exemption is not broad enough to encompass all security research. A coalition of security researchers is returning to the Copyright Office in this rulemaking to explain again why regulators have been wrong to impose restrictions on legitimate research.


Firms use TPMs in three socially harmful ways:

  1. Controlling customers: From limiting repairs to forcing the purchase of expensive spares and consumables to arbitrarily blocking apps, firms can use TPMs to compel their customers to behave in ways that put corporate interests above the interests of their customers;
  2. Controlling critics: DMCA 1201 means that when a security researcher discovers a defect in a product, the manufacturer can exercise a veto over the disclosure of the defect by threatening legal action;
  3. Controlling competitors: DMCA 1201 allows firms to unilaterally decide whether a competitor’s parts, apps, features and services are available to its customers.

This concluding section delves into three key examples of TPMs’ interference with competitive markets.

App Stores

In principle, there is nothing wrong with a manufacturer “curating” a collection of software for its products that are tested and certified to be of high quality. However, when devices are designed so that using a rival’s app store requires bypassing a TPM, manufacturers can exercise a curator’s veto, blocking rival apps on the basis that they compete with the manufacturer’s own services.

The most familiar example of this is Apple’s repeated decision to block rivals on the grounds that they offer alternative payment mechanisms that bypass Apple’s own payment system and thus evade paying a commission to Apple. Recent high-profile examples include the HEY! email app, and the bestselling Fortnite app.

Streaming media

This plays out in other device categories as well, notably streaming video: AT&T’s HBO Max is deliberately incompatible with leading video-to-TV bridges such as Amazon Fire and Roku, which together command 70% of the market. Fire and Roku software is often integrated directly into televisions, meaning that HBO Max customers must purchase additional hardware to watch the TV they’re already paying for on their own television sets. To make matters worse, HBO has cancelled its HBO Go service, which enabled people who paid for HBO over satellite and cable to watch programming on Roku and Amazon devices.


TPMs also allow for the formation of cartels that can collude to exclude entire development methodologies from a market and to deliver control over the market to a single company. For example, the W3C’s Encrypted Media Extensions (see “The Rights of People With Disabilities,” above) is a standard for streaming video to web browsers.

However, EME is designed so that it does not constitute a complete technical solution: every browser vendor that implements EME must also separately license a proprietary descrambling component called a “content decryption module” (CDM).

In practice, only one company makes a licensable CDM: Google, whose “Widevine” technology must be licensed in order to display commercial videos from companies like Netflix, Amazon Prime and other market leaders in a browser.

However, Google will not license this technology to free/open source browsers except for those based on its own Chrome/Chromium browser. In standardizing a TPM for browsers, the W3C — and Section 1201 of the DMCA — has delivered gatekeeper status to Google, which now gets to decide who may enter the browser market it dominates; rivals that attempt to implement a CDM without Google’s permission risk prison sentences and large fines.


The U.S.A. has had 22 years of experience with legal protections for TPMs under Section 1201 in the DMCA. In that time, the U.S. government has repeatedly documented multiple ways in which TPMs interfere with basic human rights and the systems that permit their exercise. The Mexican Supreme Court has now taken up the question of whether Mexico can follow the U.S.’s example and establish a comparable regime in accordance with the rights recognized by the Mexican Constitution and international human rights law. In this document, we provide evidence that TPM regimes are incompatible with this goal.

The Mexican Congress — and the U.S. Congress — could do much to improve this situation by tying offenses under TPM law to actual acts of copyright violation. As the above has demonstrated, the most grave abuses of TPMs stem from their use to interfere with activities that do not infringe copyright.

However, rightsholders already have a remedy for copyright infringements: copyright law. A separate liability regime for TPM circumvention serves no legitimate purpose. Rather, its burden falls squarely on people who want to stay on the right side of the law and find that their important, legitimate activities and expression are put in legal peril.


A Legislative Path to an Interoperable Internet

It’s not enough to say that the Internet is built on interoperability. The Internet is interoperability. Billions of machines around the world use the same set of open protocols—like TCP/IP, HTTP, and TLS—to talk to one another. The first Internet-connected devices were only possible because phone lines provided interoperable communication ports, and scientists found a way to send data, rather than voice, over those phone lines.
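The interoperability of those open protocols comes from nothing more than a shared, publicly documented wire format: any independently written client can talk to any independently written server so long as both follow the spec. The sketch below illustrates this with a stripped-down HTTP/1.1 request (it is not a full HTTP implementation, just a demonstration that the format is open text anyone can produce and parse):

```python
# A minimal sketch showing that HTTP is an open, documented text format:
# one party serializes a request, and an independently written party can
# parse it back, with no shared code or permission required.

def build_request(host: str, path: str) -> bytes:
    """Serialize a bare-bones HTTP/1.1 GET request."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")

def parse_request(raw: bytes) -> dict:
    """Parse the request line and headers back out, as any server would."""
    head = raw.decode("ascii").split("\r\n\r\n", 1)[0]
    request_line, *header_lines = head.split("\r\n")
    method, path, version = request_line.split(" ")
    headers = dict(line.split(": ", 1) for line in header_lines)
    return {"method": method, "path": path, "version": version, "headers": headers}

req = build_request("example.org", "/index.html")
parsed = parse_request(req)
```

Because the format, not any one vendor, defines correctness, a client written today can interoperate with a server written decades ago.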

In the early days of the Internet, protocols dictated the rules of the road. Because the Internet was a fundamentally decentralized, open system, services on the Internet defaulted to acting the same way. Companies may have tried to build their own proprietary networking protocols or maintain unilateral control over the content on the network, but they ultimately failed. The ecosystem was fast-moving, chaotic, and welcoming to new ideas.

Today, big platforms are ecosystems unto themselves. Companies create accounts on Twitter, Facebook, and YouTube in order to interact with consumers. Platforms maintain suites of business-facing APIs that let other companies build apps to work within the boundaries of those platforms. And since they control the infrastructure that others rely on, the platforms have unilateral authority to decide who gets to use it.

This is a problem for competition. It means that users of one platform have no easy way of interacting with friends on other services unless the platform’s owners decide to allow it. It means that network effects create enormous barriers to entry for upstart communications and social networking companies. And it means that the next generation of apps that would work on top of the new ecosystems can only exist at big tech’s pleasure.

That’s where interoperability can help. In this post, we’ll discuss how to bring about a more interoperable ecosystem in two ways: first, by creating minimum standards for interoperability that the tech giants must support; and second, by removing the legal moat that incumbents use to stave off innovative, competitive interoperators.

Interoperability is corporate entropy. It opens up space for chaotic, exciting new innovations, and erodes the high walls that monopolies build to protect themselves.

If Facebook and Twitter allowed anyone to fully and meaningfully interoperate with them, their size would not protect them from competition nearly as much as it does. But platforms have shown that they won’t choose to do so on their own. That’s where governments can step in: regulations could require that large platforms offer a baseline of interoperable interfaces that anyone, including competitors, can use. This would set a “floor” for how interoperable very large platforms must be. It would mean that once a walled garden becomes big enough, its owner needs to open up the gates and let others in.

Requiring big companies to open up specific interfaces would only win half the battle. There are always going to be upstarts who find new, unexpected, and innovative ways to interact with platforms—often against the platforms’ will. This is called “adversarial interoperability” or “competitive compatibility.” Currently, U.S. law gives incumbents legal tools to shut down those who would interoperate without the big companies’ consent. This limits the agency that users have within the services that are supposed to serve them, and it creates an artificial “ceiling” on innovation in markets dominated by monopolists. 

It’s not enough to create new legal duties for monopolists without dismantling the legal tools they themselves use to stave off competition. Likewise, it’s not enough to legalize competitive compatibility, since the platforms have such an advantage in technical resources that serious competitors’ attempts to interoperate face enormous engineering challenges. To break out of the big platforms’ suffocating hold on the market, we need both approaches. 

Mandating Access to Monopolist Platforms: Building a Floor

This post will look at one possible set of regulations, proposed in the bipartisan ACCESS Act, that would require platforms to interoperate with everyone else. At a high level, the ACCESS Act provides a good template for ensuring upstart competitors are able to interoperate and compete with monopolists. It won’t level the playing field, but it will ensure smaller companies have the right to play at all.

We’ll present three specific kinds of interoperability mandate, borrowed from the ACCESS Act’s framing. These are data portability, back-end interoperability, and delegability. Each one provides a piece of the puzzle: portability allows users to take their data and move to another platform; back-end interoperability lets users of upstart competitors interact with users of large platforms; and delegability allows users to interact with content from the big platforms through an interface of their choosing. All three address different ways that large platforms consolidate and guard their power. We’ll break these concepts down one at a time.

Data Portability

Data portability is the idea that users can take their data from one service and do what they want with it elsewhere. Portability is the “low-hanging fruit” of interoperability policy. Many services, Facebook and Google included, already offer relatively robust data portability tools. Furthermore, data portability mandates have been included in several recent data privacy laws, including the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).
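In practice, portability means exporting a user's records in an open, machine-readable format that any other service can ingest. A minimal sketch, using JSON and a purely hypothetical record schema (no real platform's export format is shown here):

```python
# A hedged sketch of data portability: a user's records serialized to an
# open format (JSON) and re-imported by a hypothetical competing service.
import json

def export_user_data(user: dict) -> str:
    """Serialize a user's records to portable, human-readable JSON."""
    return json.dumps(user, indent=2, sort_keys=True)

def import_user_data(blob: str) -> dict:
    """A competing service ingests the same open format."""
    return json.loads(blob)

# Hypothetical example records, not any real platform's schema.
profile = {"handle": "alice", "posts": [{"text": "hello", "ts": 1}]}
exported = export_user_data(profile)
restored = import_user_data(exported)
```

The hard policy questions discussed below are about *which* records belong in `profile`, not about the mechanics of moving them.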

Portability is relatively uncontroversial, even for the companies subject to regulation. In 2019, Facebook published a whitepaper supporting some legal portability mandates. For its part, Google has repeatedly led the way with user-friendly portability tools. And Google, Facebook, Microsoft, Twitter, and Apple have all poured resources into the Data Transfer Project, a set of technical standards to make data portability easier to implement.

The devil is in the details. Portability is hard at the edges, because assigning “ownership” to data is often hard. Who should have access to a photo that one person takes of another’s face, then uploads to a company’s server? Who should be able to download a person’s phone number: just the owner, or everyone they’re friends with on Facebook? It is extremely difficult for a single law to draw a bright line between what data a user is entitled to and what constitutes an invasion of another’s privacy. While creating portability mandates, regulators should avoid overly prescriptive orders that could end up hurting privacy. 

Users should have a right to data portability, but that alone won’t be enough to loosen the tech giants’ grip. That’s because portability helps users leave a platform but doesn’t help them communicate with others who still use it.

Back-end Interoperability

The second, more impactful concept is back-end interoperability. Specifically, this means enabling users to interact with one another across the boundaries of large platforms. Right now, you can create an account on any number of small social networks, like Diaspora or Mastodon. But until your friends also move off of Facebook or Twitter, it’s extremely difficult to interact with them. Network effects prevent upstart competitors from taking off. Mandatory interoperability would force Facebook to maintain APIs that let users on other platforms exchange messages and content with Facebook users. For example, Facebook would have to let users of other networks post, like, comment, and send messages to users on Facebook without a Facebook account. This would enable true federation in the social media space.
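On the wire, federation of this kind is just structured messages passed between servers, loosely in the style of the ActivityPub standard (the field names and server URLs below are simplified and hypothetical). The key point is that the receiving platform still applies its own moderation policy; it simply cannot refuse to listen:

```python
# A sketch of a cross-server "like": a user on a small server interacts
# with a post hosted on a large platform. All names are hypothetical.

def make_like(actor: str, target_post: str) -> dict:
    """Build a federated activity addressed to another server."""
    return {
        "type": "Like",
        "actor": actor,          # user on the sending server
        "object": target_post,   # post hosted on the receiving server
    }

def receive(activity: dict, blocked_actors: set) -> str:
    """The receiving server enforces its own policy, then applies the activity."""
    if activity["actor"] in blocked_actors:
        return "rejected"
    return "accepted"

like = make_like("https://small.example/users/alice",
                 "https://bigplatform.example/posts/42")
status = receive(like, blocked_actors=set())
```

A mandate would govern the interface, not the policy: the platform keeps its right to block bad actors, but loses its veto over interoperation as such.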

Imagine a world where social media isn’t controlled by a monopoly. There are dozens of smaller services that look kind of like Facebook, but each has its own policies and priorities. Some services maintain tight control over what kind of content is posted. Others allow pseudonymous members to post freely with minimal interference. Some are designed for, and moderated by, specific cultural or political communities. Some are designed to share and comment on images; others lend themselves better to microblogs; others still to long textual exchanges.

Now imagine that a user on one platform can interact with any of the other platforms through a single interface. Users on one service can engage freely with content hosted on other services, subject to the moderation policies of the hosting servers. They don’t need to sign up for accounts with each service if they don’t want to (though they are more than free to do so). Facebook doesn’t have an obligation to host or promote content that violates its rules, but it does have a duty to connect its users to people and pages of their choosing on other networks. If users don’t like how the moderators of one community run things, they can move somewhere else. That’s the promise of federation.

Open technical standards to federate social networking already exist, and Facebook already maintains interfaces that do most of what the bill would require. But Facebook controls who can access its interfaces, and reserves the right to restrict or revoke access for any reason. Furthermore, Facebook requires that all of its APIs be accessed on behalf of a Facebook user, not a user of another service. It offers “interoperability” in one direction—flowing into Facebook—and it has no incentive to respect users who host their data elsewhere. An interoperability mandate, and appropriate enforcement, could solve both of these problems.


Delegability

The third and final piece of the legislative framework is delegability. This is the idea that a user can delegate a third-party company, or a piece of third-party software, to interact with a platform on their behalf. Imagine if you could read your Facebook feed as curated by a third party that you trust. You could see things in raw chronological order, or see your friends’ posts in a separate feed from the news and content companies that you follow. You could calibrate your own filters for hate speech and misinformation, if you chose. And you could assign a trusted third party to navigate Facebook’s twisted labyrinth of privacy settings for you, making sure you got the most privacy-protective experience by default.

Many of the problems caused by monopolistic platforms are due to their interfaces. Ad-driven tech companies use dark patterns and the power of defaults to get users’ “consent” for much of their rampant data collection. In addition, ad-driven platforms often curate information in ways that benefit advertisers, not users. The ways Facebook, Twitter, and YouTube present content are designed to maximize engagement and drive up quarterly earnings. This frequently comes at the expense of user well-being.

A legal mandate for delegability would require platforms to allow third-party software to interface with their systems in the same way users do. In other words, they would have to expose interfaces for common user interactions—sending messages, liking and commenting on posts, reading content, and changing settings—so that users could delegate a piece of software to do those things for them. At a minimum, it would mean that platforms can leave their tech the way it is—after all, these functions are already exposed through a user interface, and so may be automated—and stop suing companies that try to build on top of it. A more interventionist regulation could require platforms to maintain stable, usable APIs to serve this purpose.
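The idea can be sketched in a few lines (every name here is hypothetical): a delegate calls the same read interface a user would, but re-presents the content under the user's rules, for example in raw chronological order instead of the platform's engagement ranking:

```python
# A minimal sketch of delegability: a user-chosen "delegate" reads the
# feed through the same interface a user would, then reorders it. The
# platform API and feed schema here are invented for illustration.

PLATFORM_FEED = [  # as the platform would rank it: by engagement
    {"id": 3, "ts": 300, "engagement": 9},
    {"id": 1, "ts": 100, "engagement": 7},
    {"id": 2, "ts": 200, "engagement": 2},
]

def platform_read_feed() -> list:
    """Stand-in for the platform's user-facing read API."""
    return list(PLATFORM_FEED)

def chronological_delegate() -> list:
    """The delegate re-presents the same content, oldest first."""
    return sorted(platform_read_feed(), key=lambda post: post["ts"])

feed = chronological_delegate()
```

Nothing about the platform's data changes; only who controls the presentation does.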

This is probably the most interventionist of the three avenues of regulation. It also has the most potential to cause harm. If platforms are forced to create new interfaces and given only limited authority to moderate their use, Facebook and Twitter could become even more overrun with bots. In addition, any company that is able to act on a user’s behalf will have access to all of that person’s information. Safeguards need to be created to ensure that user privacy is not harmed by this kind of mandate.

Security, Privacy, Interoperability: Choose All Three

Interoperability mandates are a heavy-duty regulatory tool. They need to be implemented carefully to avoid creating new problems with data privacy and security. 

Mandates for interoperability and delegability have the potential to exacerbate the privacy problems of existing platforms. Cambridge Analytica acquired its hoard of user data through Facebook’s existing APIs. If we require Facebook to open those APIs to everyone, we need to make sure that the new data flows don’t lead to new data abuses. This will be difficult, but not impossible. The key is to make sure users have control. Under a new mandate, Facebook would have to open up APIs for competing companies to use, but no data should flow across company boundaries until users give explicit, informed consent. Users must be able to withdraw that consent easily, at any time. The data shared should be minimized to what is actually necessary to achieve interoperability. And companies that collect data through these new interoperable interfaces should not be allowed to monetize that data in any way, including using it to profile users for ads.
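The consent gate described above can be made concrete with a small sketch (the registry and API are hypothetical, not a proposal for a specific implementation): no data crosses a company boundary unless a live, revocable grant is on file:

```python
# A sketch of consent-gated interoperability: the interop API refuses to
# share anything without an explicit grant, and grants can be revoked at
# any time. All names are illustrative.

class ConsentRegistry:
    def __init__(self):
        self._grants = set()  # (user, recipient) pairs

    def grant(self, user: str, recipient: str):
        self._grants.add((user, recipient))

    def revoke(self, user: str, recipient: str):
        self._grants.discard((user, recipient))

    def allowed(self, user: str, recipient: str) -> bool:
        return (user, recipient) in self._grants

def export_to(registry: ConsentRegistry, user: str, recipient: str, data: dict) -> dict:
    """Share data across a company boundary only with consent on file."""
    if not registry.allowed(user, recipient):
        raise PermissionError("no consent on file")
    return {"user": user, "payload": data}  # minimized payload only

reg = ConsentRegistry()
reg.grant("alice", "rival.example")
shared = export_to(reg, "alice", "rival.example", {"posts": []})
```

Revocation works the same way in reverse: once the grant is removed, the next export attempt fails rather than leaking data.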

Interoperability may also clash with security. Back-end interoperability will mean that big platforms need to keep their public-facing APIs stable, because changing them frequently or without notice could break the connections to other services. However, once a service becomes federated, it can be extremely difficult to change the way it works at all. Consider email, the archetypal federated messaging service. While end-to-end encryption has taken off on centralized messaging services like iMessage and WhatsApp, email servers have been slow to adopt even basic, point-to-point encryption with STARTTLS. It’s proven frustratingly difficult to get stakeholders on the same page, so inertia wins, and many messages are sent using the same technology we used in the ‘90s. Some encryption experts have stated, credibly, that they believe federation makes it too “slow” to build a competitive encrypted messaging service.  
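The STARTTLS negotiation mentioned above is a good example of how slowly federated protocols evolve: a client only learns whether a mail server supports opportunistic encryption by reading the capability list in the server's EHLO reply, per RFC 3207. An offline sketch (the reply text is a canned example, not a live server's output):

```python
# Detecting STARTTLS support from an SMTP server's EHLO reply (RFC 3207).
# Capability lines look like "250-STARTTLS" or "250 SIZE 35882577".

def supports_starttls(ehlo_reply: str) -> bool:
    """Check the EHLO capability lines for the STARTTLS keyword."""
    capabilities = {
        line[4:].split(" ")[0].upper()
        for line in ehlo_reply.splitlines()
        if line.startswith("250")
    }
    return "STARTTLS" in capabilities

# Canned example replies: a legacy server and a server offering STARTTLS.
legacy_reply = "250-mail.example greets you\r\n250 SIZE 35882577"
modern_reply = "250-mail.example greets you\r\n250-STARTTLS\r\n250 SIZE 35882577"
```

Because the sender has no guarantee the receiving server advertises STARTTLS, mail between federated servers still falls back to plaintext, which is exactly the inertia problem described above.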

But security doesn’t have to take a backseat to interoperability. In a world with interoperability mandates, standards don’t have to be decided by committee: the large platform that is subject to regulation can dictate how its services evolve, as long as it continues to grant fair access to everyone. If Facebook is to make its encrypted chat services interoperable with third parties, it must reserve the right to aggressively fix bugs and patch vulnerabilities. Sometimes, this will make it difficult for competitors to keep up, but protocol security is not something we can afford to sacrifice. Anyone who wants to be in the business of providing secure communications must be ready to tackle vulnerabilities quickly and according to industry best practices.

Interoperability mandates will present new challenges that we must take seriously. That doesn’t mean interoperability has to destroy privacy or undermine security. Lawmakers must be careful when writing new mandates, but they should diligently pursue a path that gives us interoperability without creating new risks for users.

Unlocking Competitive Compatibility: Removing the Ceiling

Interoperability mandates could make a great floor for interoperability. But by their nature, mandates are backward-looking, seeking to restore competitive ecosystems where incumbent monopolies now stand. No matter how the designers of these systems strain their imaginations, they can never plan for the interoperability needs of all the future use-cases, technologies, and circumstances.

Enter “competitive compatibility,” or ComCom, which would remove the artificial ceiling on innovation imposed by the big platforms. A glance through the origin stories of technologies as diverse as cable TV, modems, the Web, operating systems, social media services, networks, printers, and even cigarette-lighter chargers for cellphones reveals that the technologies we rely on today were not established as full-blown, standalone products. Rather, they started as adjuncts to the incumbent technologies that they eventually grew to eclipse. When these giants were mere upstarts, they shouldered their way rudely into the market by adding features to existing, widely used products, without permission from the companies whose products they were piggybacking on.

Today, this kind of bold action is hard to find, though when it’s tried, it’s a source of tremendous value for users and a real challenge to the very biggest of the Big Tech giants. 

Competitive compatibility was never rendered obsolete. Rather, the companies that climbed up the ComCom ladder kicked that ladder away once they had comfortably situated themselves at the peak of their markets. 

They have accomplished this by distorting existing laws into anti-competitive doomsday devices. Whether it’s turning terms of service violations into felonies, making independent repair into a criminal copyright violation, banning compatibility altogether, or turning troll with a portfolio of low-grade patents, it seems dominant firms are never more innovative than when they’re finding ways to abuse the law to make it illegal to compete with them.

Big Tech’s largely successful war on competitive compatibility reveals one of the greatest dangers presented by market concentration: its monopoly rents produce so much surplus that firms can afford to pursue the maintenance of their monopolies through the legal system, rather than by making the best products at the best prices.

EFF has long advocated for reforms to software patents, anti-circumvention rules, cybersecurity law, and other laws and policies that harm users and undermine fundamental liberties. But the legal innovations on display in the war on competitive compatibility show that fixing every defective tech law is not guaranteed to restore a level playing field. The lesson of legal wars like Oracle v. Google is that any ambiguity in any statute can be pressed into service to block competitors. 

After all, patents, copyrights, cybersecurity laws, and other weapons in the monopolist’s arsenal were never intended to establish and maintain industrial monopolies. Their use as anti-competitive weapons is a warning that a myriad of laws can be used in this way.

The barriers to competitive compatibility are many and various: there are explicitly enumerated laws, like section 1201 of the DMCA; then there are interpretations of those laws, like the claims that software patents cover very obvious “inventions” if the words “with a computer” are added to them; and then there are lawsuits to expand existing laws, like Oracle’s bid to stretch copyright to cover APIs and other functional, non-copyrightable works. 

There are several ways to clear the path for would-be interoperators. These bad laws can be worked around or struck down, one at a time, through legislation or litigation. Legislators could also enshrine an affirmative right to interoperate in law that would future-proof against new legal threats. Furthermore, regulators could require that entities receiving government contracts, settling claims of anticompetitive conduct, or receiving permission to undertake mergers make binding covenants not to attack interoperators under any legal theory.

Comprehensively addressing threats to competitive compatibility will be a long and arduous process, but the issue is urgent. It’s time we got started.


When the Senate Talks About the Internet, Note Who They Leave Out

In the midst of pandemic and protest, the Senate Judiciary Committee continued on with the third of many planned hearings about copyright. It is an odd moment to be considering whether the notice-and-takedown system laid out by Section 512 of the Digital Millennium Copyright Act is working, but since Section 512 is a cornerstone of the Internet, and because protestors and those at home trying to avoid disease depend on the Internet, we watched it.

There was not a lot said at the hearing that we have not heard before. Major media and entertainment companies want Big Tech companies to implement copyright filters. Notice and takedown is burdensome to them, and they believe that technologists surely have a magic solution to the complicated problem of reconciling free expression with copyright that they simply have not implemented because Section 512 doesn’t require them to.

Artists have real problems and real concerns. In many sectors, including publishing and music, profits are high, but after the oligopolists of media and technology have taken their cut, there’s little left for artists. But the emphasis on Section 512 as the problem is misplaced and doesn’t ultimately serve artists. Before the DMCA created a way to take down material by sending a notice to platforms, the only remedy was to go to court. DMCA takedowns, by comparison, are as simple as sending email—or hiring an outside company to send emails on an artist’s behalf. The call for more Internet speech to be taken down automatically, on the algorithmic decision of some highly mistrusted tech monopolists, and without even an unproven allegation of infringement, is calling for a remedy without a process. It is calling for legal, protected expression to be in danger.

Artists are angry, as so many are, at Big Tech. But Big Tech can already afford to do the things that rightsholders want. And large rightsholders—like Hollywood studios and major music labels—likewise have an interest in taking down as much as they can, be it protected fair uses of their works or true infringement. That places Internet users in between the rock of Big Tech and the hard place of major entertainment companies. Artists and Internet users deserve alternatives to both Big Tech and major entertainment companies. Requiring tech companies to have filters, to search out infringement on their own, or any proposals requiring tech companies to do more will only solidify the positions of companies like Google and Facebook, which can afford to implement these measures, and create more barriers for new competitors.

As Meredith Rose, Policy Counsel at Public Knowledge, said during the hearing:

This is not about content versus tech. I am here to speak about how Section 512 impacts the more than 229 million American adults who use the Internet as more than just a delivery mechanism for copyrighted content. They use it to pay bills, to learn, to work, to socialize, to receive healthcare. And yet they are missing from the Copyright Office’s Section 512 report, they are missing from the systems and procedures that govern their rights, and too often they are missing from the debate on Capitol Hill.

We likewise note the absence of Internet users—a group that keeps growing and that, whether its members identify themselves as such or not, now includes 90% of Americans.

During the hearing, a witness wondered if there was a generation of artists who will be lost because it is just too difficult to police their copyrights online. This ignores the generation of artists who already share their work online, and who run into so many problems asserting their fair use rights. We note their absence as well.

We have already gone into depth about how the Copyright Office’s report on Section 512—mentioned quite a bit in the hearing—fails to take users and the importance of Internet access into account. Changing the foundation of the Internet, throwing up roadblocks to people expressing themselves online, creating new quasi-courts for copyright, or forcing the creation and adoption of faulty and easily abused automated filters will hurt users. And we are, almost all of us, Internet users.


Internet Users of All Kinds Should Be Concerned by a New Copyright Office Report

Outside of the beltway, people all over the United States are taking to the streets to demand fundamental change. In the halls of Congress and the White House, however, many people seem to think the biggest thing that needs to be restructured is the Internet. Last week, the president issued an order taking on one legal foundation for online expression: Section 230. This week, the Senate is focusing on another: Section 512 of the Digital Millennium Copyright Act (DMCA).

The stage for this week’s hearing was set by a massive report from the Copyright Office that’s been five years in the making. We read it, so you don’t have to.

Since the DMCA passed in 1998, the Internet has grown into something vital that we all use. We are the biggest constituency of the Internet—not Big Tech or major media companies—and when we go online we depend on an Internet that depends on Section 512.

Section 512 of the DMCA is one of the most important provisions of U.S. Internet law. Congress designed the DMCA to give rightsholders, service providers and users relatively precise “rules of the road” for policing online copyright infringement. The center of that scheme is the “notice and takedown” process. In exchange for substantial protection from liability for the actions of their users, service providers must promptly take down any content on their platforms that has been identified as infringing, and take several other prescribed steps. Copyright owners, for their part, are given a fast, extra-judicial procedure for obtaining redress against alleged infringement, paired with explicit statutory guidance regarding the process for doing so, and provisions designed to deter and remedy abuses of that process.

Without Section 512, the risk of crippling liability for the acts of users would have prevented the emergence of most social media outlets and online forums we use today. With the protection of that section, the Internet has become the most revolutionary platform for the creation and dissemination of speech that the world has ever known. Thousands of companies and organizations, big and small, rely on it every day. Interactive platforms like video hosting services and social networking sites that are vital to democratic participation, and also to the ability of ordinary users to forge communities, access information, and discuss issues of public and private concern, rely on Section 512 every day.

But large copyright holders, led by major media and entertainment companies, have complained for years that Section 512 doesn’t put enough of a burden on service providers to actively police online infringement. Bowing to their pressure, in December of 2015, Congress asked the Copyright Office to report on how Section 512 is working. Five years later, we have its answer—and overall it’s pretty disappointing.

Just Because One Party Is Unhappy Doesn’t Mean the Law Is Broken

The Office believes that because rightsholders are dissatisfied with the DMCA, the law’s objectives aren’t being met. There are at least two problems with this theory. First, major rightsholders are never satisfied with the state of copyright law (or how the Internet works today in general)—they constantly seek broader restrictions, higher penalties, and more control over users of creative work. Their displeasure with Section 512 may in fact be a sign that the balance is working just fine.

Second, Congress’s goal was to ensure that the Internet would be an engine for innovation and expression, not to ensure perfect infringement policing. By that measure, Section 512, though far from perfect, is doing reasonably well when we consider the ease with which we can distribute knowledge and culture.

Misreading the Balance, Discounting Abuse

Part of the problem may be that the Office fundamentally misconstrues the bargain that Congress struck when it passed the DMCA. The report repeatedly refers to Section 512 as a balance between rightsholders and service providers. But Section 512 is supposed to benefit a third group: the public.

We know this because Congress built in protections for free speech, knowing that the DMCA could be abused. Congress knew that Section 512’s quick and easy takedown process could result in lawful material being censored from the Internet, without any court supervision, much less advance notice to the person who posted the material, or any opportunity to contest the removal. To inhibit abuse, Congress made sure that the DMCA included a series of checks and balances. First, it created a counter-notice process that allows for putting content back online after a two-week waiting period. Second, Congress set out clear rules for asserting infringement under the DMCA. Third, it gave users the ability to hold rightsholders accountable if they send a DMCA notice in bad faith.

With these provisions, Section 512 creates a carefully crafted system. When properly deployed, it gives service providers protection from liability, copyright owners tools to police infringement, and users the ability to challenge the improper use of those tools.

The Copyright Office’s report speaks of the views of online service providers and rightsholders, while paying only lip service to the millions of Internet users who don’t identify with either group. That may be what led the Office to give short shrift to the problem of DMCA abuse, complaining that there wasn’t enough empirical evidence. In fact, a great deal of evidence was submitted into the record, including a detailed study by Jennifer Urban, Joe Karaganis, and Brianna Schofield. Coming on the heels of a lengthy Wall Street Journal report describing how people use fake DMCA claims to get Google to take news reports offline, the Office’s dismissive treatment of DMCA abuse is profoundly disappointing.

Second-Guessing the Courts

An overall theme of the report is that courts all over the country have been misinterpreting the DMCA ever since its passage in 1998.

One of the DMCA’s four safe harbors covers “storage at the direction of a user.” The report suggests that appellate courts “expanded” the DMCA when they concluded, one court after another, that services such as transcoding, playback, and automatically identifying related videos qualify as part of that storage because they are so closely related to it. The report questions another appellate court ruling that peer-to-peer services qualify for protection.

And the report is even more critical of court rulings regarding when a service provider is on notice of infringement, triggering a duty to police that infringement. The report challenges one appellate ruling which requires awareness of facts and circumstances from which a reasonable person would know a specific infringement had occurred. Echoing an argument frequently raised by rightsholders and rejected by courts, the report contends that general knowledge that infringement is happening on a platform should be enough to mandate more active intervention.

What about the subsection of the DMCA that says plainly that service providers do not have a duty to monitor for infringement? The Office concludes that this provision is merely intended to protect user privacy.

The Office also suggests the Ninth Circuit’s decision in Lenz v. Universal Music was mistaken. In that case, the appeals court ruled that entities who send takedown notices must consider whether the use they are targeting is a lawful fair use, because failure to do so would necessarily mean they could not have formed a good faith belief that the material was infringing, as the DMCA requires. The Office worries that, if the Ninth Circuit is correct, rightsholders might be held liable for not doing the work even if the material is actually infringing.

This is nonsensical—in real life, no one would sue under Section 512(f) to defend unlawful material, even if the provision had real teeth, because doing so would risk being slapped with massive and unpredictable statutory damages for infringement. And the Office’s worry is overblown. It is not too much to ask a person wielding a censorship tool as powerful as Section 512, which lets a person take others’ speech offline based on nothing more than an allegation, to take the time to figure out if they are wielding that tool appropriately. Given that one Ninth Circuit judge concluded that the Lenz decision actually “eviscerates § 512(f) and leaves it toothless against frivolous takedown notices,” it is hard to take rightsholders’ complaints seriously—but the Office did.

In short, the Office has taken it upon itself to second-guess the many judges actually tasked with interpreting the law because it does not like their conclusions. Rather than describe the state of the law today and advise Congress as an information resource, it argues for what the law should be from the viewpoint of a discrete special interest. Advocacy for changing the law belongs to the public and their elected officials. It is not the Copyright Office’s job, and it sharply undermines any claim the Report might make to a neutral approach.

Mere Allegations Can Mean Losing Internet Access for Everyone on the Account

In order to take advantage of the safe harbor included in Section 512 of the DMCA, companies have to have a “repeat infringer” policy. It’s fairly flexible, since different companies have different uses, but the basic idea is that a company must terminate the account of a user who has repeatedly infringed. Perhaps the most famous iteration of this requirement is YouTube’s “Three Strikes” policy: if you get three copyright strikes in 90 days on YouTube, your whole account is deleted, all your videos are removed, and you can’t create new channels.

Fear of getting to three strikes has not only made YouTubers very cautious, it has created a landscape where extortion can flourish. One troll, for instance, would make bogus copyright claims and then send messages to users demanding money in exchange for withdrawing the claims. When one user responded with a counter-notification—which is what users are supposed to do to get bogus claims dismissed—the troll allegedly “swatted” the user with the information in the counter-notice.

And that’s just the landscape for YouTube. The Copyright Office’s report suggests that the real problem of repeat infringer policies is that courts aren’t requiring service providers to create and enforce stricter ones, kicking more people off the Internet.

The Office does suggest that a different approach might be needed for students and universities, because students need the Internet for “academic work, career searching and networking, and personal purposes, such as watching television and listening to music,” and students living in campus housing would have no other choice for Internet access if they were kicked off the school’s network.

But all of us, not just students, use the Internet for work, career building, education, communication, and personal purposes. And few of us could go to another provider if an allegation of infringement kicked us off the ISP we have. Most Americans have only one or two high-speed broadband providers, with a majority of us stuck with a cable monopoly for high-speed access.

The Internet is vital to people’s everyday lives. To lose access entirely because of an unproven accusation of copyright infringement would be, as the Copyright Office briefly acknowledges, “excessively punitive.”

The Copyright Office to the Rescue?

Having identified a host of problems, the Office concludes by offering to help fix some of them. Its offer to provide educational materials seems appropriate enough, though given the skewed nature of the Report itself, we worry that those materials will be far from neutral.

Far more worrisome, however, is the offer to help manufacture an industry consensus on standard technical measures (STMs) to police copyright infringement. According to Section 512, service providers must accommodate STMs in order to receive the safe harbor protections. To qualify as an STM, a measure must (1) have been developed pursuant to a broad consensus in an “open, fair, voluntary, multi-industry standards process”; (2) be available on reasonable and nondiscriminatory terms; and (3) cannot impose substantial costs on service providers. Nothing has ever met all three requirements, not least because no “open, fair, voluntary, multi-industry standards process” exists.

The Office would apparently like to change that, and has even asked Congress for regulatory authority to help make it happen. Trouble is, any such process is far too likely to result in the adoption of filtering mandates. And filtering has many, many issues, such that the Office itself says filtering mandates should not be adopted, at least not now.

The Good News

Which brings us to the good news. The Copyright Office stopped short of recommending that Congress require all online services to filter for infringing content—a dangerous and drastic step they describe with the bland-sounding term “notice and staydown”—or require a system of website blocking. The Office wisely noted that these proposals could have a truly awful impact on freedom of speech. It also noted that filtering mandates could raise barriers to competition for new online services, and entrench today’s tech giants in their outsized control over online speech—an outcome that harms both creators and users. And the Office also recognized the limits of its expertise, noting that filtering and site-blocking mandates would require “an extensive evaluation of . . . the non-copyright implications of these proposals, such as economic, antitrust, [and] speech. . . .”

The Can of Worms Is Open

Looking ahead, the most dangerous thing about the Report may be that some Senators are treating its recommendations for “clarification” as an invitation to rewrite Section 512, inviting the exact legal uncertainty the law was intended to eliminate. Senators Thom Tillis and Patrick Leahy have asked the Office to provide detailed recommendations for how to rewrite the statute – including asking what it would do if it were starting from scratch.

Based on the report, we suspect the answer won’t include strong protections for user rights.


Reevaluating the DMCA 22 Years Later: Let’s Think of the Users

The Digital Millennium Copyright Act (DMCA) is one of the most important laws affecting the Internet and technology. Without the DMCA’s safe harbors from crippling copyright liability, many of the services on which we rely, big and small, commercial and noncommercial, would not exist. That means YouTube, but also Wikipedia, Etsy, and your neighborhood blog. At the same time, the DMCA has encouraged private censorship and hampered privacy, security, and competition.

The DMCA is 22 years old this year and the Senate Subcommittee on Intellectual Property is marking that occasion with a series of hearings reviewing the law and inviting ideas for “reform.” It launched this week with a hearing on “The Digital Millennium Copyright Act at 22: What is it, why was it enacted, and where are we now,” which laid out the broad strokes of the DMCA’s history and current status. In EFF’s letter to the Committee, we explained that Section 1201 of the DMCA has no redeeming value. It has caused a lot of damage to speech, competition, innovation, and fair use. However, the safe harbors of Section 512 of the DMCA have allowed the Internet to be an open and free platform for lawful speech. 

This hearing had two panels. The first featured four panelists who were involved in the creation of the DMCA 22 years ago. The second panel featured four law professors talking about the current state of the law. A theme emerged early in the first panel and continued in the second: the conversation about the DMCA should not focus on whether it is or is not working for companies, be they Internet platforms, major labels and studios, or even, say, car manufacturers. Users—be they artists, musicians, satirists, parents who want to share videos of their kids, nonprofits trying to make change, repair shops, or researchers—need a place and a voice.

The intent of the DMCA 22 years ago was to discourage copyright infringement but create space for innovation and expression, for individuals as well as Hollywood and service providers. Over the course of the last two decades, however, many have forgotten who is supposed to occupy that space. As we revisit this law over the course of many hearings this year, we need to remember that this is not “Big Content v Big Tech” and ensure that users take center stage. Thankfully, at least at this hearing, there were people reminding Congress of this fact.

Section 512: Enabling Online Creativity and Expression

The DMCA has two main sections. The first is Section 512, which lays out the “safe harbor” provisions that protect service providers who meet certain conditions from monetary damages for the infringing activities of their users and other third parties on the net. Those conditions include a notice and takedown process that gives copyright holders an easy way to get content taken offline and, in theory, gives users redress if their content is wrongfully targeted. Without the safe harbor, the risk of potential copyright liability would prevent many services from doing things like hosting and transmitting user-generated content. Thus the safe harbors, while imperfect, have been essential to the growth of the Internet as an engine for innovation and free expression.

In the second part of the hearing, Professor Rebecca Tushnet, a Harvard law professor and former board member of the Organization for Transformative Works (OTW), stressed how much the safe harbor has done for creativity online. Tushnet pointed out that OTW runs the Archive of Our Own, home to over four million works from over one million users, and that the site receives over one billion page views a month. And yet the number of DMCA notices it receives averages less than one per month. Most of those notices are invalid, but OTW’s volunteer lawyers must still spend time and expense to investigate and respond. This process—small numbers of notices handled individually—is how most services experience the DMCA. A few players—the biggest players—however, rely on automated systems to parse large numbers of complaints. “It’s important,” said Tushnet, “not to treat YouTube like it was the Internet. If we do that, the only service to survive will be YouTube.”

We agree. Almost everything you use online relies in some way on the safe harbor provided by Section 512 of the DMCA. Restructuring the DMCA around the experiences of the largest players like YouTube and Facebook will hurt users, many of whom would like more options rather than fewer.

“The system is by no means perfect, there remain persistent problems with invalid takedown notices used to extort real creators or suppress political speech, but like democracy, it’s better than most of the alternatives that have been tried,” said Tushnet. “The numbers of independent creators and the amount of money spent on content is growing every year. Changes to 512 are likely to make things even worse.”

Section 1201: Copyright Protection Gone Horribly Wrong

On the other hand, the DMCA also includes Section 1201, the “anti-circumvention” provisions that bar circumvention of access controls and technical protection measures, i.e. digital locks on software. It was supposed to prevent copyright “pirates” from defeating things like digital rights management (DRM is a form of access control) or building devices that would allow others to do so. In practice, the DMCA anti-circumvention provisions have done little to stop “Internet piracy.” Instead, they’ve been a major roadblock to security research, fair use, and repair and tinkering.

Users don’t experience Section 1201 as a copyright protection. They experience it as the reason they can’t fix their tractor, repair their car, or even buy cheaper printer ink. And attempts to get exemptions to this law for these purposes—which, again, are unrelated to copyright infringement, and which leave users in absurd positions when trying to use things they own—are always met with resistance.

Professor Jessica Litman, of the University of Michigan Law School, laid out the problem of 1201 clearly:

The businesses that make products with embedded software have used the anti-circumvention provisions to discourage the marketing of compatible after-market parts or to hobble independent repair and maintenance businesses. Customers who would prefer to repair their broken products rather than discard and replace them face legal obstacles they should not. It’s unreasonable to tell the owner of a tractor that if her tractor needs repairs, she ought to petition the Librarian of Congress for permission to make those repairs.

1201 covers pretty much anything that has a computer in it. In 1998, that meant a DVD; in 2020, it means the Internet of Things, from TVs to refrigerators to e-books to tractors. Which is why the people trying to repair, modify, or test those things—farmers, independent mechanics, security researchers, people making ebooks accessible to those with print disabilities, and so on—either have to abandon their work, risk violating the law (including criminal liability), or ask the Library of Congress for an exemption to the law every three years.

Put simply, the existing scheme doesn’t discourage piracy. Instead, it prevents people from truly owning their own devices; and, as Litman put it, “prevents licensed users from making licensed uses.”

The DMCA is a mixed bag. Section 512’s safe harbor makes expression online possible, but the particulars of the system have real failures. And Section 1201 has strayed far from whatever its original purpose was, and it hurts users far more than it helps rightsholders. That is why the DMCA hearings must include testimony focused on the needs and experiences of all kinds of users and services.