Categories
Fair Use Intelwars Legal Analysis

Victory for Fair Use: The Supreme Court Reverses the Federal Circuit in Oracle v. Google

In a win for innovation, the U.S. Supreme Court has held that Google’s use of certain Java Application Programming Interfaces (APIs) is a lawful fair use. In doing so, the Court reversed the previous rulings by the Federal Circuit and recognized that copyright only promotes innovation and creativity when it provides breathing room for those who are building on what has come before.

This decision gives more legal certainty to software developers’ common practice of using, re-using, and re-implementing software interfaces written by others, a custom that underlies most of the internet and personal computing technologies we use every day.

To briefly summarize over ten years of litigation: Oracle claims a copyright on the Java APIs—essentially names and formats for calling computer functions—and claims that Google infringed that copyright by using (reimplementing) certain Java APIs in the Android OS. When it created Android, Google wrote its own set of basic functions similar to Java (its own implementing code). But in order to allow developers to write their own programs for Android, Google used certain specifications of the Java APIs (sometimes called the “declaring code”).
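The difference between declaring and implementing code is easier to see in code than in prose. Here is a minimal, hypothetical Java sketch; the `MathUtils` class is invented for illustration (echoing the shape of methods like `java.lang.Math.max`) and is not Sun's or Google's actual code:

```java
// Hypothetical illustration of the distinction at issue in Oracle v. Google.
public class MathUtils {
    // "Declaring code": the method's name, parameters, and return type.
    // This is the interface programmers memorize and call; reusing it is
    // what let existing Java skills carry over to Android.
    public static int max(int a, int b) {
        // "Implementing code": the body that actually performs the task.
        // Google wrote its own implementations rather than copying Sun's.
        return (a >= b) ? a : b;
    }
}
```

A program written against the declaration (`MathUtils.max(3, 5)`) keeps working no matter how the body is rewritten, which is why reimplementing an interface lets programmers reuse their acquired skills.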

APIs provide a common language that lets programs talk to each other. They also let programmers operate with a familiar interface, even on a competitive platform. It would strike at the heart of innovation and collaboration to declare them copyrightable.

EFF filed numerous amicus briefs in this case explaining why the APIs should not be copyrightable and why, in any event, it is not infringement to use them in the way Google did. As we’ve explained before, the two Federal Circuit opinions are a disaster for innovation in computer software. Its first decision—that APIs are entitled to copyright protection—ran contrary to the views of most other courts and the long-held expectations of computer scientists. Indeed, excluding APIs from copyright protection was essential to the development of modern computers and the internet.

Then the second decision made things worse. The Federal Circuit’s first opinion had at least held that a jury should decide whether Google’s use of the Java APIs was fair, and in fact a jury did just that. But Oracle appealed again, and in 2018 the same three Federal Circuit judges reversed the jury’s verdict and held that Google had not engaged in fair use as a matter of law.

Fortunately, the Supreme Court agreed to review the case. In a 6-2 decision, Justice Breyer explained why Google’s use of the Java APIs was a fair use as a matter of law. First, the Court discussed some basic principles of the fair use doctrine, writing that fair use “permits courts to avoid rigid application of the copyright statute when, on occasion, it would stifle the very creativity which that law is designed to foster.”

Furthermore, the court stated:

Fair use “can play an important role in determining the lawful scope of a computer program copyright . . . It can help to distinguish among technologies. It can distinguish between expressive and functional features of computer code where those features are mixed. It can focus on the legitimate need to provide incentives to produce copyrighted material while examining the extent to which yet further protection creates unrelated or illegitimate harms in other markets or to the development of other products.”

In doing so, the decision underlined the real purpose of copyright: to incentivize innovation and creativity. When copyright does the opposite, fair use provides an important safety valve.

Justice Breyer then turned to the specific fair use statutory factors. Appropriately for a functional software copyright case, he first discussed the nature of the copyrighted work. The Java APIs are a “user interface” that allow users (here the developers of Android applications) to “manipulate and control” task-performing computer programs. The Court observed that the declaring code of the Java APIs differs from other kinds of copyrightable computer code—it’s “inextricably bound together” with uncopyrightable features, such as a system of computer tasks and their organization and the use of specific programming commands (the Java “method calls”). As the Court noted:

Unlike many other programs, its value in significant part derives from the value that those who do not hold copyrights, namely, computer programmers, invest of their own time and effort to learn the API’s system. And unlike many other programs, its value lies in its efforts to encourage programmers to learn and to use that system so that they will use (and continue to use) Sun-related implementing programs that Google did not copy.

Thus, since the declaring code is “further than are most computer programs (such as the implementing code) from the core of copyright,” this factor favored fair use.

Justice Breyer then discussed the purpose and character of the use. Here, the opinion shed some important light on when a use is “transformative” in the context of functional aspects of computer software, creating something new rather than simply taking the place of the original. Although Google copied parts of the Java API “precisely,” Google did so to create products fulfilling new purposes and to offer programmers “a highly creative and innovative tool” for smartphone development. Such use “was consistent with that creative ‘progress’ that is the basic constitutional objective of copyright itself.”

The Court discussed “the numerous ways in which reimplementing an interface can further the development of computer programs,” such as allowing different programs to speak to each other and letting programmers continue to use their acquired skills. The jury also heard that reuse of APIs is common industry practice. Thus, the opinion concluded that the “purpose and character” of Google’s copying was transformative, so the first factor favored fair use.

Next, the Court considered the third fair use factor, the amount and substantiality of the portion used. As a factual matter in this case, the 11,500 lines of declaring code that Google used were less than one percent of the total Java SE program. And Google used that declaring code only to permit programmers to apply their knowledge and experience with the Java APIs to writing new programs for Android smartphones. Since the amount of copying was “tethered” to a valid and transformative purpose, the “substantiality” factor favored fair use.

Finally, several reasons led Justice Breyer to conclude that the fourth factor, market effects, favored Google. Independent of Android’s introduction in the marketplace, Sun didn’t have the ability to build a viable smartphone. And any sources of Sun’s lost revenue were a result of the investment by third parties (programmers) in learning and using Java. Thus, “given programmers’ investment in learning the Sun Java API, to allow enforcement of Oracle’s copyright here would risk harm to the public. Given the costs and difficulties of producing alternative APIs with similar appeal to programmers, allowing enforcement here would make of the Sun Java API’s declaring code a lock limiting the future creativity of new programs.” This “lock” would interfere with copyright’s basic objectives.

The Court concluded that “where Google reimplemented a user interface, taking only what was needed to allow users to put their accrued talents to work in a new and transformative program, Google’s copying of the Sun Java API was a fair use of that material as a matter of law.”

The Supreme Court left for another day the issue of whether functional aspects of computer software are copyrightable in the first place. Nevertheless, we are pleased that the Court recognized the overall importance of fair use in software cases, and the public interest in allowing programmers, developers, and other users to continue to use their acquired knowledge and experience with software interfaces in subsequent platforms.

Free as in Climbing: Rock Climber’s Open Data Project Threatened by Bogus Copyright Claims

Rock climbers have a tradition of sharing “beta”—helpful information about a route—with other climbers. Giving beta is both useful and a form of community-building within this popular sport. Given that strong tradition of sharing, we were disappointed to learn that the owners of an important community website, MountainProject.com, were abusing copyright to try to shut down another site, OpenBeta.io. The good news is that OpenBeta’s creator is not backing down—and EFF is standing with him.

Viet Nguyen, a climber and coder, created OpenBeta to bring open source software tools to the climbing community. He used Mountain Project, a website where climbers can post information about climbing routes, as a source of user-posted data about climbs, including their location, ratings, route descriptions, and the names of first ascensionists. Using this data, Nguyen created free, publicly available interfaces (APIs) that others can use to discover new insights about climbing—anything from mapping favorite crags to analyzing the relative difficulty of routes in different regions—using software of their own.

The Mountain Project website is built on users’ contributions of information about climbs. Building on users’ contributions, Mountain Project offers search tools, “classic climbs” lists, climbing news links, and other content. But although the site runs on the contributions of its users, Mountain Project’s owners apparently want to control who can use those contributions, and how. They sent a cease-and-desist letter to Mr. Nguyen, claiming to “own[] all rights and interests in the user-generated work” posted to the site, and demanding that he stop using it in OpenBeta. They also sent a DMCA request to GitHub to take down the OpenBeta code repository.

As we explain in our response, these copyright claims are absurd. First, climbers who posted their own beta and other information to Mountain Project may be surprised to learn that the website is claiming to “own” their posts, especially since the site’s Terms of Use say just the opposite: “you own Your Content.”

As is typical for sites that host user-generated content, Mountain Project doesn’t ask its users to hand over copyright in their posts, but rather to give the site a “non-exclusive” license to use what they posted. Mountain Project’s owners are effectively usurping their users’ rights in order to threaten a community member.

And even if Mountain Project had a legal interest in the content, OpenBeta didn’t infringe on it. Facts, like the names and locations of climbing routes, can’t be copyrighted in the first place. And although copyright might apply to climbers’ own route descriptions, OpenBeta’s use is a fair use. As we explained in our letter:

The original purpose of the material was to contribute to the general knowledge of the climbing community. The OpenBeta data files do something more: Mr. Nguyen uses it to help others to learn about Machine Learning, making climbing maps, and otherwise using software to generate new insights about rock climbing.

In other words, a fair use.

Rock climbers get a lot of practice at falling hard, taking a moment to recover, and continuing to climb. Mountain Project blew it here by making legally bogus threats against OpenBeta. We hope they take a lesson from their community: dust off, change your approach, and keep climbing.

Thank You for Speaking Against a Terrible Copyright Proposal

Last week was the deadline for comments on the draft of the so-called “Digital Copyright Act,” a proposal which would fundamentally change how creativity functions online. We asked for creators to add their voices to the many groups opposing this draft, and you did it. Ultimately, over 900 of you signed a letter expressing your concern.

The “Digital Copyright Act” was the result of a year of hearings in the U.S. Senate’s Subcommittee on Intellectual Property. Many of the hearings dismissed or marginalized the voices of civil society, Internet users, and Internet creators. Often, it was assumed that the majority of copyrighted work worth protecting is the content made by major media conglomerates or controlled by traditional gatekeepers. We know better.

We know there is a whole new generation of creators whose work is shared online. Some of that work makes fair use of other copyrighted material. Some work is entirely original or based on a work in the public domain. All of it can run afoul of ranking and promotion algorithms, terms of service, and takedowns. The “Digital Copyright Act” would put all of that creativity at risk, entrenching the power of major studios, big tech companies, and major labels.

Along with your signatures and letter, EFF submitted our own comments on the DCA. We urged Congress to set aside the proposal entirely, as many of the policies it contained would cause deep and lasting damage to online speech, creativity, and innovation. Not only do we want this particular draft put in the bin where it belongs; we also want to be clear that even watered-down versions of the policies it contains would further tip the balance away from individual and small creators and toward large, well-resourced corporations.

One of our concerns remains the call from many large corporate rightsholders for Internet services to take down more speech, prevent more from being uploaded, and monitor everything on their services for copyrighted material. The “Digital Copyright Act” proposal does just that, in many places and in many ways. Any one of those provisions would result in a requirement for services to use filters or copyright bots.

Filters alone do not work. They simply cannot do the necessary contextual analysis to determine if something is copyright infringement or not. But many of them are used this way, resulting in legal speech being blocked or demonetized. As bad as the current filter use is, it would be much worse if it became legally mandated. Imagine YouTube’s Content ID being the best case scenario for uploading video to the Internet.

So we want to thank you for speaking up and letting Congress know this issue is not simply academic. And letting them know this is not simply Big Tech versus Big Content. For our part, EFF will continue keeping an eye out and helping you be heard.

From Creativity to Exclusivity: The German Government’s Bad Deal for Article 17

The implementation process of Article 17 (formerly Article 13) of the controversial Copyright Directive into national laws is in full swing, and it does not look good for users’ rights and freedoms. Several EU states have failed to present balanced copyright implementation proposals, ignoring the concerns of EFF, other civil society organizations, and experts that only strong user safeguards can prevent Article 17 from turning tech companies and online service operators into copyright police.

A glimpse of hope was presented by the German government in a recent discussion paper. While the draft proposal fails to prevent the use of upload filters to monitor all user uploads and assess them against the information provided by rightsholders, it showed creativity by giving users the option of pre-flagging uploads as “authorized” (online by default) and by setting out exceptions for everyday uses. Remedies against abusive removal requests by self-proclaimed rightsholders were another positive feature of the discussion draft.

Inflexible Rules in Favor of Press Publishers

However, the recently adopted copyright implementation proposal by the German Federal Cabinet has abandoned the focus on user rights in favor of inflexible rules that only benefit press publishers. Instead of opting for broad and fair statutory authorization for non-commercial minor uses, the German government suggests trivial carve-outs for “uses presumably authorized by law,” which are not supposed to be blocked automatically by online platforms. However, the criteria for such uses are narrow and out of touch with reality. For example, the limit for minor use of text is 160 characters.

By comparison, the maximum length of a tweet is 280 characters, which is barely enough substance for a proper quote. As those uses are only presumably authorized, they can still be disputed by rightsholders and blocked at a later stage if they infringe copyright. However, this did not prevent the German government from putting a price tag on such communication as service providers will have to pay the author an “appropriate remuneration.” There are other problematic elements in the proposal, such as the plan to limit the use of parodies to uses that are “justified by the specific purpose”—so better be careful about being too playful.

The German Parliament Can Improve the Bill

It’s now up to the German Parliament to decide whether to be more interested in the concerns of press publishers or in the erosion of user rights and freedoms. EFF will continue to reach out to Members of Parliament to help them make the right decision.

Cops Using Music to Try to Stop Being Filmed Is Just the Tip of the Iceberg

Someone tries to livestream their encounter with the police, only to find that the police start playing music. In the case of a February 5 meeting between an activist and the Beverly Hills Police Department, the song of choice was Sublime’s “Santeria.” The police may not got no crystal ball, but they do seem to have an unusually strong knowledge of copyright filters.

The timing of music being played when a cop saw he was being filmed was not lost on people. It seemed likely that the goal was to trigger Instagram’s over-zealous copyright filter, which would shut down the stream based on the background music and not the actual content. It’s not an unfamiliar tactic, and it’s unfortunately one based on the reality of how copyright filters work.

Copyright filters are generally more sensitive to audio content than audiovisual content. That sensitivity causes real problems for people performing, discussing, or reviewing music online. It’s a problem of mechanics. It is easier for filters to find a match just on a piece of audio material compared to a full audiovisual clip. And then there is the likelihood that a filter is merely checking to see if a few seconds of a video file seems to contain a few seconds of an audio file.
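To see why audio-only matching is mechanically simple (and context-blind), here is a deliberately naive Java sketch of the windowed comparison described above. Real filters like Content ID match robust acoustic fingerprints rather than raw values; the `NaiveFilter` class and its integer “fingerprints” are invented for illustration:

```java
import java.util.Arrays;

// Toy model of a copyright filter: does any short window of the
// reference track's fingerprint appear anywhere in the upload?
public class NaiveFilter {
    public static boolean matches(int[] reference, int[] upload, int windowLen) {
        for (int r = 0; r + windowLen <= reference.length; r++) {
            int[] refWindow = Arrays.copyOfRange(reference, r, r + windowLen);
            for (int u = 0; u + windowLen <= upload.length; u++) {
                // A match on a few seconds of audio is enough to flag the
                // whole upload -- with no notion of fair use, context, or
                // who is actually performing.
                if (Arrays.equals(refWindow,
                        Arrays.copyOfRange(upload, u, u + windowLen))) {
                    return true;
                }
            }
        }
        return false;
    }
}
```

Note what the sketch cannot do: it flags a stream the moment a few seconds of background music drift in, exactly as it would flag wholesale piracy.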

It’s part of why playing music is a better way of getting a video stream you don’t want seen shut down. (The other part is that playing music is easier than walking around with a screen playing a Disney film in its entirety. Much fun as that would be.)

The other side of the coin is how difficult filters make it for musicians to perform music that no one owns. For example, classical musicians filming themselves playing public domain music—compositions that they have every right to play, as they are not copyrighted—attract many matches. This is because the major rightsholders or tech companies have put many examples of copyrighted performances of these songs into the system. It does not seem to matter whether the video shows a different performer playing the song—the match is made on audio alone. This drives lawful use of material offline.

Another problem is that people may have licensed the right to use a piece of music or are using a piece of free music that another work also used. And if that other work is in the filter’s database, it’ll make a match between the two. This results in someone who has all the rights to a piece of music being blocked or losing income. It’s a big enough problem that, in the process of writing our whitepaper on YouTube’s copyright filter, Content ID, we were told that people who had experienced this problem had asked for it to be included specifically.

Filters are so sensitive to music that it is very difficult to make a living discussing music online. The difficulty of getting music clips past Content ID explains the dearth of music commentators on YouTube. It is common knowledge among YouTube creators, with one saying “this is why you don’t make content about music.”

Criticism, commentary, and education of music are all areas that are legally protected by fair use. Using parts of a thing you are discussing to show what you mean is part of effective communication. And while the law does not make fair use of music more difficult to prove than any other kind of work, filters do.

YouTube’s filter does something even more insidious than simply taking down videos, though. When it detects a match, it allows the label claiming ownership to take part or all of the money that the original creator would have made. So a video criticizing a piece of music ends up enriching the party being critiqued. As one music critic explained:

Every single one of my videos will get flagged for something and I choose not to do anything about it, because all they’re taking is the ad money. And I am okay with that, I’d rather make my videos the way they are and lose the ad money rather than try to edit around the Content ID because I have no idea how to edit around the Content ID. Even if I did know, they’d change it tomorrow. So I just made a decision not to worry about it.

This setup is also how a ten-hour white noise video ended up with five copyright claims against it. This taking-from-the-poor-and-giving-to-the-rich is a blatantly absurd result, but it’s the status quo on much of YouTube.

A particularly tech-savvy group, like the police, could easily figure out which songs result in videos being removed rather than just having the money taken. Internet creators talk on social media about the issues they run into and from whom. Some rightsholders are infamously controlling and litigious.

Copyright should not be a fast-track to getting speech removed that you do not like. The law is meant to encourage creativity by giving artists a limited period of exclusive rights to their creations. It is not a way to make money off of criticism or a loophole to be exploited by authorities.

Some Answers to Questions About the State of Copyright in 2021

In all the madness that made up the last month of 2020, a number of copyright bills and proposals popped up—some even became law before most people had any chance to review them. So now that the dust has settled a little and we have some better idea what the landscape is going to look like, it is time to answer a few frequently asked questions.

What Happened?

In December 2020, Congress was rushing to pass a massive spending bill and coronavirus relief package. This was “must-pass” legislation, in the sense that if it didn’t pass there would be no money to do things like fund the government. Passing the package was further complicated by a couple of threats from President Trump to veto the bill unless certain things were in it.

In all this, two copyright bills were added to the spending package, despite them not having any place there—not least because there hadn’t been robust hearings where the issues with them could be pointed out. One of the bills didn’t even have text available to the public until the very last second. And they are now law.

The omnibus bill is 5,593 pages long. These new copyright laws are pretty close to smack dab in the middle, starting on page 2,539.

What Are These Laws?

They are the Protecting Lawful Streaming Act of 2020 and the Copyright Alternative in Small-Claims Enforcement Act (CASE Act). The former makes operating certain kinds of commercial streaming services a felony. The latter creates a weird “Copyright Claims Board” within the Copyright Office that can hand out awards of up to $30,000 for claims of copyright infringement. The first is not going to impact the average Internet user that much; the second is more dangerous.

What Is the Felony Streaming Law?

The Protecting Lawful Streaming Act of 2020 only had its text publicly released about two weeks before it became law, and interest in it was high. This was partly because people heard there was a felony streaming law but had no details whatsoever.

It isn’t a great law—we simply do not need more penalties for copyright infringement and definitely not ones that make it a felony—but the good news is it won’t affect most people.

Since most people don’t run commercial streaming services, and the law does not affect the safe harbor provisions of the Digital Millennium Copyright Act, most of us won’t be running afoul of this law.

What Is the Copyright Alternative in Small-Claims Enforcement (CASE) Act?

The CASE Act is a different story altogether. It is, at best, a huge waste of time and money. At worst, it will hover unconstitutionally like a dark cloud over everyone attempting to share anything online.

The CASE Act creates a “Copyright Claims Board” in the Copyright Office that can hear infringement claims by rightsholders seeking redress of no more than $30,000 per proceeding.

CASE Act’s proponents claim this process is voluntary, but rather than both parties agreeing to this process—aka an “opt in” system—everyone is presumptively bound by the board’s decisions unless they “opt out.” That is, you must affirmatively, in whatever manner the Copyright Office decides, say you do not want to participate in this system. You must do this every time you get a notice from this board if you don’t want to be subject to its decisions. If you ignore a notice in any way, you are on the hook for whatever the board decides. That decision can be made without you defending yourself, and it has very limited appeal options.

For many people, opting out will be the best option as this process does not have the protections and limitations that a court case has. For example, a bad decision on fair use in court is subject to multiple levels of appeal. Under the CASE Act, decisions made by claims officers are extremely difficult to appeal. Making matters worse, the penalties the Copyright Claims Board is authorized to impose are high and will be, especially at first, unpredictable.

Okay, How Do I Opt Out?

Sadly, we cannot tell you that yet. A lot of this is left up to the Copyright Office to determine. The Copyright Office has until December of 2021 to get this thing up and running (with an option to extend that deadline by 180 days). In that time, they have to establish regulations about opting out. We hope that the regulations and system will be simple, clear, and easy to use.

That also means that the Copyright Claims Board does not exist yet. It could come into existence at any point this year. At the latest, it will start hearing cases in mid-2022.

What Should I Do If I Get Anything Related to the CASE Act?

If you get a letter from someone threatening to take you to the Copyright Claims Board unless you pay them and you don’t know what to do, get in contact with us by emailing info@eff.org.

One of the bigger problems with the CASE Act—and there are many—is that anyone with money or access to other resources like lawyers will know how to opt out and will be able to decide if that is the right decision for them. Such individuals or companies are unlikely to miss a notice or forget to opt out. Regular people, however, will be vulnerable to copyright trolls, who will profit from people unintentionally forfeiting their rights or caving to threats like we describe above.

Is That All?

Sadly not! In addition to these laws, there is also a proposed wholesale change to the online copyright ecosystem called the “Digital Copyright Act” or DCA. A draft of it was released in late December 2020, and it is very bad for anyone who uses the Internet. Worse, in many ways, than any other copyright proposal we’ve seen. We will continue to fight to keep these bad ideas out of the law, and we will need your help to do so.

A Smorgasbord of Bad Takedowns: 2020 Year in Review

Here at EFF, we take particular notice of the way that intellectual property law leads to expression being removed from the Internet. We document the worst examples in our Takedown Hall of Shame. Some, we use to explain more complex ideas. And in other cases, we offer our help.

In terms of takedowns, 2020 prefaced the year to come with a January story from New York University School of Law. The law school posted a video of a panel titled “Proving Similarity,” where experts explained how song similarity is analyzed in copyright cases. Unsurprisingly, that involved playing parts of songs during the panel. And so, the video meant to explain how copyright infringement is determined was flagged by Content ID, YouTube’s automated copyright filter.

While the legal experts at, let’s check our notes, NYU Law were confident this was fair use, they were less confident that they understood how YouTube’s private appeals system worked. And, more specifically, whether challenging Content ID would lead to NYU losing its YouTube channel. They reached out privately to ask questions about the system, but got no answers. Instead, YouTube just quietly restored the video.

And with that, a year of takedowns was off. There was Dr. Drew Pinsky’s incorrect assessment that copyright law let him remove a video showing him downplaying COVID-19. A self-described Twitter troll using the DMCA to remove from Twitter an interview he did about his tactics and then using the DMCA to remove a photo of his previous takedown. And, when San Diego Comic Con went virtual, CBS ended up taking down its own Star Trek panel.

On our end, we helped Internet users push back on attempts to use IP claims as a tool to silence critics. In one case, EFF helped a Redditor win a fight to stay anonymous when Watchtower Bible and Tract Society, a group that publishes doctrines for Jehovah’s Witnesses, tried to learn their identity using copyright infringement allegations.

We also called out some truly ridiculous copyright takedowns. One culprit, the ironically named No Evil Foods, went after journalists and podcasters who reported on accusations of union-busting, claiming copyright in a union organizer’s recordings of anti-union presentations by management. We sent a letter telling them to knock it off: if the recorded speeches were even copyrightable, which is doubtful, this was an obvious fair use, and they were setting themselves up for a lawsuit under DMCA section 512(f), the provision that provides penalties for bad-faith takedowns. The takedowns stopped after that.

Another case saw a university jumping on the DMCA abuse train. Nebraska’s Doane University used a DMCA notice to take down a faculty-built website created to protest deep academic program cuts, claiming copyright in a photo of the university. One problem: that photo was actually taken by an opponent of the cuts, specifically for the website. The professor who made the website submitted a counternotice, but the university’s board was scheduled to vote on the cuts before the DMCA’s putback waiting period would expire. EFF stepped in and demanded that Doane withdraw its claim, and it worked—the website was back up before the board vote.

Copyright takedowns aren’t the only legal tool we see weaponized against online speech—brands are just as happy to use trademarks this way. Sometimes that can take the form of a DMCA-like takedown request, like the NFL used to shut down sales of “Same Old Jets” parody merchandise for long-suffering New York Jets fans. In other cases, a company might use a tool called the Uniform Domain-Name Dispute-Resolution Policy (UDRP) to take over an entire website. The UDRP lets a trademark holder take control of a domain name if it can convince a private arbitrator that Internet users would think it belonged to the brand and that the website owner registered the name in “bad faith,” without a legitimate interest in using it.

This year, we helped the owner of instacartshoppersunited.com stand up to a UDRP action and hold on to her domain name. Daryl Bentillo was frustrated by her experience as an Instacart shopper and registered that domain name intending to build a site that would help organize shoppers to advocate for better pay practices. But before she even had a chance to get started, Ms. Bentillo got an email saying that Instacart was trying to take her domain name away using this process she’d never heard of. That didn’t sit right with us, so we offered our help. We talked to Instacart’s attorneys about how Ms. Bentillo had every right to use the company’s name to refer to it (a practice called nominative fair use in trademark-speak)—and about how it sure looked like they were just using the UDRP process to shut down organizing efforts. Instacart was ultimately persuaded to withdraw its complaint.

Back in copyright land, we also dissected the problem of the RIAA’s takedown of youtube-dl, a popular tool for downloading videos from Internet platforms. Youtube-dl didn’t infringe any RIAA copyright, but the RIAA demanded its removal anyway, arguing that because DMCA 1201 makes it illegal to bypass a digital lock in order to access or modify a copyrighted work, and because youtube-dl could be used to download RIAA-member music, the tool should be removed.

RIAA and other copyright holders have argued that it’s a violation of DMCA 1201 to bypass DRM even if you’re doing it for completely lawful purposes; for example, if you’re downloading a video on YouTube for the purpose of using it in a way that’s protected by fair use.

Trying to use the notice-and-takedown process against a tool that does not infringe on any music label’s copyright and has lawful uses was an egregious abuse of the system, and we said so.

And to bring us full circle: we end with a case where discussing copyright infringement brought a takedown. Lindsay Ellis, a video creator, author, and critic, created a video called “Into the Omegaverse: How a Fanfic Trope Landed in Federal Court,” dissecting a saga in which one author, Addison Cain, had sent numerous takedowns to platforms based on dubious copyright claims. Eventually, one of the targets sued, and the question of who owns what in a genre that developed entirely online ended up in court. It did not take long for Cain to send a series of takedowns against this video about her history of takedowns.

That’s when EFF stepped in. The video is a classic fair use. It uses a relatively small amount of a copyrighted work for purposes of criticism and parody in an hour-long video that consists overwhelmingly of Ellis’ original content. In short, the copyright claims (and the other, non-copyright claims) were deficient. We were happy to explain this to Cain and her lawyer.

It’s been an interesting year for takedowns. Some of these takedowns involved automated filters, a problem we dived deep into with our whitepaper Unfiltered: How YouTube’s Content ID Discourages Fair Use and Dictates What We See Online. Filters like Content ID not only remove lots of lawful expression; they also sharply restrict what we do see. Remember: if you encounter problems with bogus legal threats, DMCA takedowns, or filters, you can contact EFF at info@eff.org.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.


The CASE Act, Hidden in the Coronavirus Relief Bill, Is Just the Beginning of the Next Copyright Battle

As we feared, the “Copyright Alternative in Small-Claims Enforcement Act”—the CASE Act—that we’ve been fighting in various forms for two years has been included in Congress’ $900 billion Coronavirus relief package. This new legislation means Internet users could face up to $30,000 in penalties for sharing a meme or making a video, with liability determined not by neutral judges but by biased bureaucrats.

The CASE Act is supposed to be a solution to the complicated problem of online copyright infringement. In reality, it creates a system that will harm everyday users who, unlike the big players, won’t have the time and capacity to navigate this new bureaucracy. In essence, it creates a new “Copyright Claims Board” in the Copyright Office that will be empowered to adjudicate copyright infringement claims, unless the accused receives a notice, recognizes what it means, and opts out—in a very specific manner, within a limited time period. The Board will be staffed by “claims officers,” not judges or juries. You can appeal their rulings, but only on a limited basis, so you may be stuck with whatever amount the “claims board” decides you owe. Large, well-resourced players will not be affected, as they will have the resources to track notices and simply refuse to participate. The rest of us? We’ll be on the hook.

The relief bill also included an altered version of a felony streaming bill that is, thankfully, not as destructive as it could have been. While the legislation as written is troubling, an earlier version would have been even more dangerous, targeting not only large-scale, for-profit streaming services, but everyday users as well. 

We’re continuing the fight against the CASE Act, but today brings even bigger problems. Senator Thom Tillis, who authored the felony streaming legislation, launched a “discussion draft” of the so-called Digital Copyright Act. Put simply, it is a hot mess of a bill that would rewrite decades of copyright law, give the Copyright Office (hardly a neutral player) the keys to the Internet, and drastically undermine speech and innovation in the name of policing copyright infringement. Read more analysis of this catastrophic bill here.

Internet users and innovators, as well as the basic legal norms that have supported online expression for decades, are under attack. With your help, we will be continuing to fight back, as we have for thirty years, into 2021 and beyond. Fair use has a posse, and we hope you’ll join it.


This Disastrous Copyright Proposal Goes Straight to Our Naughty List

Just yesterday we saw two wretched copyright bills, the CASE Act and a felony streaming bill, slipped into law via a must-pass spending bill. But it seems some people in Congress were just getting started. Today, Senator Thom Tillis launched a “discussion draft” of the so-called Digital Copyright Act. But there’s nothing to discuss: the bill, if passed, would absolutely devastate the Internet.

We’ll have a more in-depth analysis of this draft bill later, but we want to be clear about our opposition from the start: this bill is bad for Internet users, creators, and start-ups. The ones with the most to gain? Big Tech and Big Content.

This draft bill contains so many hoops and new regulations that the only Internet companies able to keep up and stay on the “right” side of the law will be the Big Tech companies, which already have the resources and, frankly, the money to do so. It also creates a pile of new ways to punish users and creators in the service of Hollywood and the big record labels. Unless we stop this proposal, this version of DMCA “reform” will crush huge swaths of online expression and innovation, not to mention the competition we need to develop alternatives to the largest platforms.

Some especially important things to note:

Filters, Filters Everywhere, Nor Any Drop to Drink

In several places in this bill—the requirements for “notice-and-staydown,” a duty for providers to monitor uploads, and development of “standard technical measures”—there are hidden filter requirements. The words “filter” or “copyright bots” may not appear in the text, but make no mistake: these new requirements will essentially mandate filters.

Filters not only fail to work; they actively cause harm to legal expression. They operate on a black-and-white test of whether part of one thing matches part of another, without taking context into account. So criticism, commentary, education—all of it goes out the window when a filter is in place. The only route left is not fair use but, as our whitepaper demonstrated, editing around the filter’s requirements (or refraining from speaking altogether).

Once again, major studios, labels, and media companies will be entrenched as gatekeepers for art and expression. This is not the Internet we want or need. It’s a return to the days of Big Content domination, at the expense of small, independent creators.

“Repeat” Infringer Policies That Cut Off Internet Access

Under the Digital Millennium Copyright Act (DMCA), service providers get immunity from copyright liability for their users’ infringement if they meet certain requirements. One of those is having a “repeat infringer policy” under which the provider can terminate the account of someone who has committed multiple acts of copyright infringement. The details are left to the provider.

This draft changes that. It gives power to the Copyright Office, in consultation with the National Telecommunications and Information Administration, to develop a model repeat infringer policy to act as the minimum requirement for the policies of service providers.

That’s incredibly concerning. Earlier this year, the Copyright Office gave us a preview of what it thinks a reasonable repeat infringer policy looks like in its report on section 512 of the DMCA. The Copyright Office concluded that not enough people are being punished by these policies and that a single, unsubstantiated claim of infringement should be enough not only to terminate a YouTube account but to terminate Internet access. Basically, according to the Copyright Office, your ISP—likely the only one in your area, if you’re among the majority of Americans who have only one choice for high-speed Internet—should terminate an entire household’s Internet access based on a claim of copyright infringement.

Internet access is vital for participating in today’s world. An office that thinks it makes sense to make cutting off that access easier should not be in charge of determining a model repeat infringer policy.

And Speaking of the Copyright Office…

This draft gives the Copyright Office a whole suite of new regulatory powers over the Internet, basically making it the Internet Cops. Given the power and influence of U.S.-based platforms, this means that the governing law of the Internet would be based not on human rights norms but on copyright restrictions.

An office that sees its constituency as copyright holders and not Internet users or the public interest should not be in charge of the Internet.

Whatever good things are in this draft—and there are a few modest improvements proposed—are vastly outweighed by how catastrophically bad the rest of it is. Do not worry too much, though; as ever, EFF will be fighting for the Internet every step of the way, just as we did during the SOPA/PIPA fight with the help of countless Internet users and a broad coalition committed to the free and open Internet. This proposal is far worse than SOPA/PIPA, so our coalition will have to be stronger and more united than ever before. But we can meet that challenge. We—the Internet—must stop this terrible legislation in its tracks.


Filters Do More Than Just Block Content, They Dictate It

Today, EFF is publishing a new white paper, “How YouTube’s Content ID Discourages Fair Use and Dictates What We See Online.” The paper analyzes the effects of YouTube’s automated copyright filter, Content ID, on the creative economy that has sprung up around the platform. Major platforms like YouTube have used copyright filters that prevent their users from expressing themselves even in ways that are allowed under copyright law. As lobbyists for Big Content—major record labels, big Hollywood studios, and the like—push for the use of broader and stricter filtering technology, it’s important to understand the harms these filters cause and how they lead to unfair shakedowns of online creators.

YouTube is the largest streaming video service and the one that hosts the most user-generated content. As a result, Content ID has an outsized effect on the online video creator ecosystem. There is a terrible, circular logic that traps creators on YouTube. They cannot afford to dispute Content ID matches because that could lead to Digital Millennium Copyright Act (DMCA) notices. They cannot afford DMCA notices because those can lead to copyright strikes. They cannot afford copyright strikes because those could lead to the loss of their account. They cannot afford to lose their account because they cannot afford to lose access to YouTube’s giant audience. And they cannot afford to lose access to that audience because they cannot count on making money from YouTube’s ads alone—they need as many eyes as possible in order to make money from sponsorships and direct fan contributions, partly because Content ID often diverts advertising money to rightsholders when there is a Content ID match. Which they cannot afford to dispute.

Within the paper is a diagram of the full maze of Content ID, capturing just how difficult it can be to navigate. Here it is, reproduced in full:

[Chart: the full maze of the Content ID dispute process]

In addition to a broad overview of the issues with Content ID, this paper also includes case studies of three YouTube-based creators, who explain how they experience Content ID. All conclude that (a) they have no choice but to be on YouTube, (b) they make substantial creative decisions based on Content ID, and (c) they give up or lose money to the system. We hope these interviews make clear the very real harm filters do to expression. When lawmakers, companies, and others call for more filtering, they do so at the expense of a whole new generation of creators.

Read the paper

How YouTube’s Content ID Discourages Fair Use and Dictates What We See Online


Podcast Episode: You Bought It, But Do You Own It?

Episode 006 of EFF’s How to Fix the Internet

Chris Lewis joins EFF hosts Cindy Cohn and Danny O’Brien as they discuss how our access to knowledge is increasingly governed by click-wrap agreements that prevent users from ever owning things like books and music, and how this undermines the legal doctrine of “first sale” – which states that once you buy a copyrighted work, it’s yours to resell or give away as you choose. They talk through the ramifications of this shift for society, and also start to paint a brighter future for how the digital world could thrive if we safeguard digital first sale.

In this episode you’ll learn about:

  • The legal doctrine of first sale—in which owners of a copyrighted work can resell it or give it away as they choose—and why copyright maximalists have fought it for so long;
  • The ReDigi case, in which a federal court held that the ReDigi music service, which allows music fans to store and resell music they buy from iTunes, violated copyright law—and why that set us down the wrong path;
  • The need for a movement that can help champion digital first sale and access to knowledge more generally;
  • How digital first sale connects to issues of access to knowledge, and how this provides a nexus to issues of societal equity;
  • Why the shift to using terms of service to govern access to content such as music and books has meant that our access to knowledge is intermediated by contract law, which is often impenetrable to average users;
  • How not having a strong right of digital first sale undermines libraries, which have long benefited from bequests and donations;
  • How getting first sale right in the digital world will help to promote equitable access to knowledge and create a more accessible digital world.

Christopher Lewis is President and CEO at Public Knowledge. Prior to being elevated to President and CEO, Chris served as PK’s Vice President from 2012 to 2019, leading the organization’s day-to-day advocacy and political strategy on Capitol Hill and at government agencies. During that time he was also a local elected official, serving two terms on the Alexandria City Public School Board. Chris serves on the Board of Directors for the Institute for Local Self-Reliance and represents Public Knowledge on the Board of the Broadband Internet Technical Advisory Group (BITAG).

Before joining Public Knowledge, Chris worked in the Federal Communications Commission’s Office of Legislative Affairs, including as its Deputy Director. He is a former U.S. Senate staffer for the late Sen. Edward M. Kennedy and has over 18 years of political organizing and advocacy experience, including serving as Virginia State Director at GenerationEngage and as North Carolina Field Director for Barack Obama’s 2008 presidential campaign, among other roles on that campaign. Chris graduated from Harvard University with a Bachelor’s degree in Government and lives in Alexandria, VA, where he continues to volunteer and advocate on local civic issues. You can find Chris on Twitter at @ChrisJ_Lewis.

Please subscribe to How to Fix the Internet via RSS, Stitcher, TuneIn, Apple Podcasts, Google Podcasts, Spotify, or your podcast player of choice. You can also find the MP3 of this episode on the Internet Archive. If you have any feedback on this episode, please email podcast@eff.org.

Below, you’ll find legal resources – including links to important cases, books, and briefs discussed in the podcast – as well as a full transcript of the audio.

Resources

Legal Cases

Digital First Sale

Abuses and Failures of Digital Rights Management (DRM) and End User Licensing Agreements (EULAs)

Other Resources

Transcript of Episode 006: You Bought It, But Do You Own It?

Danny O’Brien:

Welcome to How to Fix the Internet with the Electronic Frontier Foundation, the podcast that explores some of the biggest problems we face online right now, problems whose source and solution is often buried in the obscure twists of technological development, societal change, and the subtle details of internet law.

Cindy Cohn:

Hi, everyone. I’m Cindy Cohn, and I am the executive director of the Electronic Frontier Foundation. For these purposes, I’m a lawyer and now I’m apparently also a podcast host.

Danny O’Brien:

I feel podcast hosts should really be higher up on any of those lists. Hi, I’m Danny O’Brien, and I also work at EFF. Welcome to How To Fix the Internet, where in a world full of problems, we take on a few of the digital kind and point the way forward.

Cindy Cohn:

Though this week’s episode is really about how the online world and the offline world cross. And Danny, if it’s okay for me to rant a little here?

Danny O’Brien:

I can think of no better place.

Cindy Cohn:

A lot of what’s wrong with debating how to preserve civil liberties in the online world comes from approaching it as if it’s somewhere magically apart from the offline world, but we live in both of those worlds, and they both overlay our lives. So, it doesn’t make so much sense to say that we have rights in one part of our life, but not in the other when we’re essentially doing the same thing.

Cindy Cohn:

As things move, it’s even difficult to figure out where one ends and the other begins, yet too often, powerful forces try to use this shift to get things in the online world that they would never even ask for, much less get in the offline world.

Danny O’Brien:

And I think the topic of this podcast, first sale is a great example of that. First sale is the American legal doctrine, which is I’m sure we’ll hear from our guest, has its roots firmly in the analog world. It was created to protect the intuitive idea that if you buy a book or another creative work, it’s yours. You can lend or resell it. You can read it to your friends.

Danny O’Brien:

Without first sale, rights holders or patent owners could claim that they should control some of the use of those works, including banning you from giving them away, selling them at a garage sale, or even just showing them to someone else.

Cindy Cohn:

So, like the best legal principles, the first-sale doctrine is an embodiment of common sense, when you buy something, you own it. But in the digital space, physical sales have been replaced secretly with these click-through licenses and other tiny little words at the bottom of a website that no one reads. But what those words are doing is they’re changing what’s happening from a sale to a license that’s limited in many ways. And what that means is that you’re limited in what you can do with the software or other media that you’ve bought.

Danny O’Brien:

Worse, corporations have even tried to twist the law to take that right away, so part of the work here is making sure that we don’t lose a right that we’ve always had before the net came along.

Cindy Cohn:

Exactly. And to help us understand that we’ve asked Chris Lewis to join us. Chris is the president and CEO of Public Knowledge, the D.C. based public interest group that works at the intersection of copyright, telecommunications, and internet law. We love working with PK at EFF. We really consider you some of our chief partners. Back to Chris though, before that Chris was deputy director of the FCC’s Office of Legislative Affairs, and he’s a graduate of Harvard University. Welcome, Chris.

Chris Lewis:

Thanks Cindy. It’s great to be with you guys.

Cindy Cohn:

Chris, I think of you as someone who’s able to look deeply at complex problems, nerdy stuff like copyright online, net neutrality and our topic today, digital first sale. And what I love about working with you is that you get the nuance of the issues, but also the political practicalities and the storytelling about how we fix it. And also, it’s just really always fun to work together. I’m also delighted that you joined us on the EFF advisory board.

Chris Lewis:

It’s an honor to be invited to be on the advisory board, though. We love working with you guys as well, and it’s a great bi-coastal partnership for the public interest.

Cindy Cohn:

So, how did you get interested in digital first sale? I think folks like us are such a rare breed, people passionate about the side of intellectual property that actually stands with the users and not about making sure rock stars can buy their second island. How did you get passionate about this?

Chris Lewis:

Great question, and it alludes to what you were talking about earlier about common sense. I can recall the questions I had in the era that I grew up in. I was a teenager in the ’90s. I was in college in the late ’90s and early 2000s. And during that time, my favorite uncle, my uncle Joey, who’s a talented musician has this awesome record collection, music collection. And he could have given that to me, but I’d probably have to wait for him to pass away in order for him to relinquish it.

Chris Lewis:

Instead, he digitized his own library of music and chose to share it with me. And that was one of the first moments where I was uncertain whether digital rights laws had been updated for what is a normal handing off, or downstream transfer, of goods. When I got to Public Knowledge, as you said, I came from the world of telecom policy. I was at the Federal Communications Commission. And unlike you, Cindy, I’m not a lawyer. I’m an organizer and an advocate.

Chris Lewis:

And for me, it’s stories like that that are important for people to think about in their own lives when they think about how the law needs to be updated for the digital age. I have to admit, when I got to Public Knowledge, I was introduced to copyright law by some expert lawyers. It’s why I love working there. It’s why we love working with EFF. We have that in common: lawyers who get into the details of how the law can be adjusted to meet the common sense expectations of average consumers, like the expectation that you should be able to bequeath your fantastic record collection to your niece or nephew.

Chris Lewis:

That you should be able to sell a book that you got, secondhand. And yes, there are challenges that the technology poses, but it doesn’t mean they can’t be overcome. And how we overcome that I think is an exciting topic.

Danny O’Brien:

Let me just pick apart exactly what the theory is around what rights holders want to be able to do in these situations. Because I think for anything that didn’t have one foot in the intellectual property space, it would seem crazy that if somebody sold you an artifact, that they will be able to require you to not resell it on or limit what you do with it. So what’s the legal theory against having something like first sale?

Chris Lewis:

I think the legal theory is that … it’s sometimes hard for me to think about it from the opposite side, but to be fair, there is a valid argument for the importance of protecting the ability of creative artists to be compensated and to recoup some value for the work they create, and that’s important. But what folks who don’t like the idea of a digital first sale often miss is that there’s a balance to copyright law. It has dual purposes: not just the protection of those artists and their ability to make a living and to have value in their creativity, but also the continued promotion of the useful arts and sciences. That’s probably a direct quote.

Chris Lewis:

So when first sale was created in the early 1900s, it came out of court cases that recognized that when you buy something, there’s a common sense expectation that you can do certain things with it that are common sense. And that balance was struck back then. The first court case that I’m aware of was the Bobbs-Merrill case against Macy’s. Macy’s had a book that Bobbs-Merrill published. And Macy’s wanted to sell it for 11 cents less than the Bobbs-Merrill people wanted.

Chris Lewis:

And they actually printed on the cover of the book that because of the rights owner’s wishes, they did not want that book sold for less than a dollar. Macy’s was sued for trying to sell it for 89 cents. And the court ruled that once Macy’s bought that product, they had the right to resell it. They had the right beyond that first sale to give it away if they want to. And that is common sense, and that does not infringe on the ability for the creator to make a profit off of that first sale. And so that’s an important balance.

Danny O’Brien:

So in a way what this is, is it’s a way of preventing riders on artifacts. So the idea is that if you have physically bought something, you can’t put ifs and buts and controls on that. And I guess the reason why it gets caught up with intellectual property is a lot of intellectual property law, like patents and copyright, is about controlling various kinds of use, even of copies of your material.

Chris Lewis:

Yes, that’s right. There are protections for the original creator of the work as well, but this right, the right of first sale, in non-lawyer speak, I just like to say: if you bought it, you own it. This right is critically important for basic commerce, for goods and services to keep moving.

Cindy Cohn:

Let me see if I can say, the thing about the digital space … Let me try Chris, and correct me if I’m not reading your mind correctly. But in the digital space, one of the things we’ve been freed from is the artificial constraints of the physical. So this is the Jeffersonian idea. This is an old idea in copyright law that when I give you something, I still have it in the digital world. And Jefferson likened it to a candle. Like when I take a light off of your candle, I still have my candle lit. And he said that that was one of the great things about ethereal works, copyrighted works.

Cindy Cohn:

What happens now, the things that used to be ethereal works like music or software, they used to be locked in a physical thing. And so we would treat them like a physical thing where once you had one CD, that was the one CD you had. But digital works, you can copy multiple times, and we don’t have the constraints of the physical world on that. And that should free us up. If you’re passionate about something, you want to share it with the world, whether that’s the next band or the cool new tool you found, or those other things.

Cindy Cohn:

And the first-sale doctrine protects us in the real world so that we can share the things we love, like your uncle sharing his love of music with you. We just want that same thing to be able to happen in the digital world.

Chris Lewis:

Certainly there’s a difference between that physical good and a digital good and how easy it is to copy it. To me, that comes with the power of technology. Technology is more powerful. Digital technology is more powerful in certain ways. Copying is one. But what we also miss when we move from the physical to the digital world is that we’re also dealing with information, and we’re dealing with knowledge, for lack of a better word. And that good is no longer just a hard good. It is access to information. It is access to knowledge.

Chris Lewis:

And so if we take the values of digital first sale and transfer them into the digital age, you have to account for the power of the technology, you have to account for the knowledge that it transmits. And the power to transmit so much knowledge combined I think increases the importance of protecting the principles around digital first sale, but it also makes it more challenging.

Cindy Cohn:

I think that’s right. I think that what’s going on here is that the folks in the content industry who didn’t like first sale in the first place are trying to use this shift to get rid of it. And what we’re trying to say is no, those values are really important, whether something’s physical or digital. And if the digital world presents some more challenges, then we should just address those challenges and not just throw away the values of your uncle sharing his music with you, or someone being able to find some software at a garage sale and use it.

Cindy Cohn:

So those are the kinds of things that they’re just as important in the digital world as they were in the analog world. And it’s one of those areas where we shouldn’t be drawing a distinction between the two, at least in terms of the values we’re trying to protect. We may protect them in different ways, but the values are still strong.

Danny O’Brien:

The limits of first sale seem pretty clear and intuitive in the physical world. And really, they reflect what we might imagine we could do with an object that we own. But isn’t it true that in the digital environment, those limits have faded away? If I get a copy of something from somebody, I can do a lot of things with it, including digitizing it and sharing it not just with my favorite nephew but with hundreds of thousands of people. That makes it a little bit harder to understand what the limits are on what we can do with a product that we own in the digital space.

Chris Lewis:

Yeah, that’s right. And it also reminds us of the responsibilities that come with the power of new technology. So when the photocopier was created, yes, a photocopier or a Xerox machine could be used to pirate copyrighted works. But we didn’t outlaw photocopiers. We didn’t outlaw Xerox machines just because there is power in the technology. What we did was we made sure that we had quality enforcement and also expectations in society of the responsibility that comes with the power of that technology.

Chris Lewis:

And I think that’s critically important as we move into a digital space. There are multiple examples of technologies, before and after our move into the world of the internet, that were challenged this way. And the understanding was that there is a balance of responsibility when you have a tool that allows you to do things that should be legal but could also be used for things that could be infringing. And I think that’s important for us to highlight.

Danny O’Brien:

So what are some examples of attempts by rights holders to place controls on digital artifacts that violate that idea of digital first sale, and are a step too far in the power of those rights holders?

Chris Lewis:

Well, I think probably the best known recent case, from 2013, was the ReDigi case. ReDigi was a service for reselling digital music files. And that case, I think, got this balance of power wrong. The argument was that if you’re selling a file, maybe a piece of music, a recording, then as long as you ensure that you sell it to one person and you eliminate ownership on your end, just as you would do with a physical good, that should be legal. That argument was not upheld in the ReDigi case. ReDigi lost. And a service that could have provided for sales and purchases of used digital goods, if that’s a term we could use, was basically outlawed.

Chris Lewis:

This is dangerous. It’s dangerous for the ability of a society to share ideas. It’s not just an academic exercise that we’re talking about here. More and more of the world’s goods are being moved into a digital-only space. And it’s really popular right now, in the middle of this pandemic, to talk about the digital divide with broadband, how low-income communities or rural communities may be left out of having access to broadband. But once they get access to broadband, once we all get access to broadband, that’s not the end of the digital divide.

Chris Lewis:

Folks could still be excluded from knowledge. That’s another digital divide. If it is cost prohibitive because of the new structures around licensing, or if they are limited from sharing information with each other in commonly accepted ways, like selling a good or bequeathing a good. So I think that ReDigi case really set us off on the wrong path, and it highlights some of the harms that we can have if that idea is duplicated with other technologies.

Danny O’Brien:

To put a point on this, I grew up reading very worn secondhand paper books, mostly science fiction that I bought at secondhand bookstores. And right now as a grownup, I have one of those little libraries outside, and there’s a huge traffic in my neighborhood of people leaving books and taking books. And I think what you’re describing here is that both of those acts become, if not actively illegal, very difficult to do because there’s no way that I can give someone a digital artifact and demonstrate that I destroyed the other copy that I might theoretically have when I gave it to the other person. Is that right?

Chris Lewis:

Exactly. I know Cindy knows, and Danny, you may not know: I spent six years on my local school board here in Alexandria, Virginia, and lived two blocks from the public library. And in our school division, which had high rates of poverty, the public library was a place for students to go, not only to get access to the internet if they did not have it, but also to get access to books and to knowledge. And all of those books, whether you’re talking about secondhand stores or the public library, which really provides equity in access to knowledge, if those books are not available in a digital space where our students in Alexandria or around the country can get access to them, we’re really going to have an increasing digital divide.

Cindy Cohn:

I think that that is to me the most dangerous piece about this. Of course, I care about people being able to get their brother’s record collections or buy a piece of software at a garage sale and still be able to use it. But really this is about knowledge. And it brings up one of the big fights that both EFF and Public Knowledge are working on today: the attack on the Internet Archive, which is trying to make access to all the world’s knowledge available to all the world’s people.

Cindy Cohn:

The implications of that battle, there are specific things about that battle that are unique to the Internet Archive, but what’s really going on in a broader scope is an attempt to really limit digital lending in a way that will make it very difficult for people without access to power and complicated systems to be able to have knowledge. And we’re already seeing this with school kids who can’t afford the books or the license on the books. And I grew up in a pretty remote rural area, and people shared textbooks and other books just because it was just hard to get them from the publishers.

Cindy Cohn:

And I think we’re headed back to that time when the internet should have freed us from that time. It’s exactly backwards.

Chris Lewis:

That’s right. Who’s going to get to use the power of the technology? Is it going to be limited to a few stakeholders who are going to control that knowledge and information through far more expensive licensing schemes and setups than we had in the old book and public library markets, or are we going to use that technology to benefit everyone?

Cindy Cohn:

And the practical impact of turning everything into a service is that everybody’s got to have access to credit cards or licensing schemes, or ongoing things that are pretty hard for people who are already at the margins. They already have to go to the library to get access, or hang out outside of McDonald’s to get access. If you need a service and an account, and probably a credit card, to be able to continue to have access to stuff that you already bought, it goes away if you don’t keep paying monthly.

Cindy Cohn:

That’s a huge implication of this shift. And again, the first-sale doctrine in the offline world means that you get your hands on that book and you bought it, whether you bought it first sale, you bought it from the first person who bought it, or the third person who bought it, or you checked it out of the library. That knowledge stays where you have it. Whereas with these services, things can go away. And as we’ve learned, a lot of the services that are offering these kinds of things are not operations that exist forever. So the business goes out of business, and the next thing you know, you’ve lost access to all the information that you have been paying monthly to get access to.

Cindy Cohn:

I think that leaves people really at the mercy of hoping that the entity that they signed up for to get access to knowledge survives bankruptcy. So there’s a lot of implications in a bunch of different directions about trying to remove the first sale rights in the digital world.

Chris Lewis:

Right. I think we’ve seen a retrenching of how people understand ownership, whether you’ve bought something or whether you’re renting it or leasing it. And I think unfortunately on a lot of digital platforms we’re seeing that twisted, where someone tells you that you’re buying a book or you’re buying access to a TV show. And you’re really not buying that good. You’re leasing, or you’re renting access, is what you’re doing.

Danny O’Brien:

I think we’ve talked a bit about intuitions here. And it’s definitely the point at which people realize that they’ve lost something: when their intuition about ownership hits the hard wall of what the business thinks or claims that they’ve done. Just a couple of examples from both the far ancient past of the digital world and more recently. I remember people’s shock when, due to a licensing disagreement, Amazon started removing copies of George Orwell books that people had purchased and believed were legitimate copies.

Danny O’Brien:

Amazon decided it wasn’t a legitimate copy, so they just deleted it from their library collection. It’s really odd to have 1984 disappear in that way. And then more recently, a little more trivially, people playing ‘Fortnite’ suddenly found that, because there was an argument between Apple and Fortnite’s publisher, they were stopped from being able to play going forward. We’ve talked a little bit about how these ideas of ownership have evaporated, but not too much about the click-through contracts and the licenses that are used to enforce this.

Danny O’Brien:

It is crazy that we click through these multi-page documents that nobody ever reads. But how did that get embedded in the law? How can it be that I can sign something that I didn’t read and it can still be enforced against me?

Chris Lewis:

Right. Contract law is a powerful thing, right? And these pop-up terms of service or EULAs, the end user licensing agreements, are long. And in a world where everything is digital, you’re having to see one, read one, and understand the legal implications of one every day, often multiple times a day. So I understand the importance of making sure that there are notices and best practices on online services, but at the same time, we need to find ways to be very clear with the consumer about what they’re actually purchasing.

Chris Lewis:

And even more so, we need to try to find and promote ways to rebalance, or to find that balance of, the power of the consumer to actually have control over the things that they’ve bought. And if we don’t, it’s going to have a negative impact on access to knowledge. It’s going to have a negative impact on the marketplace for consumers as well. We saw this recently in the digital marketplace around tickets and ticketing. Have you guys been following the BOSS Act that was introduced in Congress?

Cindy Cohn:

No. I suspect my colleagues have been, but I have not.

Chris Lewis:

The quick breakdown is it is called BOSS, yes, after someone’s love of Bruce Springsteen. Congressman Bill Pascrell of New Jersey and Senator Blumenthal introduced it because of Ticketmaster’s dominance over consumers in the ticketing marketplace. They not only control the venues, and therefore control the services that are offered, but they control the goods, the ticketing itself. And there are secondhand ticketing sites, but they’re often squeezed out because of these intellectual property and other agreements around what you can do with a ticket once you purchase it.

Chris Lewis:

And so that’s creeping into the digital marketplace for tickets, and it reduces the competition that would lower prices for customers. It also eliminates the ability to have that secondhand marketplace, just like we talked about with books, where you could sell a ticket if you all of a sudden can’t go to a show or a sporting event, or hand it off to someone else. If you can’t go, sometimes you give it to your kids, right? So it comes up again and again as everything gets digitized: these agreements, these terms and conditions, need to be clear, but they also need to be fair. And if we need to enact some reforms and laws to achieve that, I think that’s a good thing.

Cindy Cohn:

It happened first with some airline tickets. I’m old enough to remember when you could sell or give your airline ticket to someone else. Those days are long gone, and it loses consumers millions and millions of dollars every year, because they end up having to pay fees if they want to change a ticket, or they just let the ticket lie. So there’s this love of contract as a way to organize society, and we need the recognition that that really empowers the powerful and disempowers the less powerful. And in this scenario, the less powerful is you, dear consumer, and the powerful are the companies that are selling you things.

Cindy Cohn:

We had a period in time when everything was contracts, and we moved away from that. And I think it’s time to think about that again: putting in a baseline of consumer protection that prevents contracts that really are not fair, and recognizing the power imbalance between you as … These are called contracts of adhesion. A contract where you don’t really get to sit down and negotiate it. You just get handed it, and you either do the thing or you don’t do the thing, but you don’t really have any bargaining power.

Cindy Cohn:

They’re different from the kinds of contracts where you sit down with somebody and you negotiate, “I’ll give you this if you give me that.” That might happen business to business, or even person to person. Recognizing those differences, and empowering people to really have a baseline of fairness and be able to challenge unfairness, is something that is just desperately needed. We’re getting a little away from … What the first-sale doctrine was in the analog world was a way to not let contract law always be the only thing that mattered in the sale of an analog good.

Cindy Cohn:

That’s why when we think about digital first sale, we need something as well that protects us against a world where the only thing that matters is the so-called contract, which isn’t really a contract in the way that I think of it, like two people having a meeting of the minds about what’s going to happen between them. We need to have some baseline. And there are a couple of ways to do that in the law. One is what California does: it has a bunch of baseline consumer protection laws that you can’t go below on certain things.

Cindy Cohn:

The other way to do it is to create these independent doctrines, like the first-sale doctrine, that just say you can’t contract away somebody’s ability to own something when they bought it. And the third way is to really empower consumers to be able to say that something is unfair, at a level that they can’t right now; right now those claims don’t really succeed. So those are three things. I’m the lawyer who litigates, so I think of, well, what could you do in court?

Chris Lewis:

In the same way you’re thinking from a lawyer’s perspective, I think from an advocate’s perspective. How do we help the public demand that the power remains in their hands? And I think we do that through stories. That’s why I tell the story of my uncle and his music. That’s why we tell the story of what consumers are facing with tickets, and whether, having bought them, they have the right to do what they want with them. These stories are important because that’s when it makes sense to an average consumer.

Chris Lewis:

And we have to continue to share these. And even if we get some piecemeal fixes, Public Knowledge is advocating, you talked about the Internet Archive situation and public libraries, for a very simple law in Congress to just say that it is not illegal for a library to do what libraries have always done: they bought a book, they own it, they can scan it, and they can lend it out to one customer at a time. Everyone knows their public library. Everyone understands it as a concept. And we need to take these easy-to-understand examples to promote how a digital first sale can work across the entire marketplace.

Danny O’Brien:

Yes. Listening to all of these examples, I do feel that we’re slipping into losing these rights on an hour-by-hour, minute-by-minute basis, and I can see how we get there. For instance, when you’re talking about ticket sales, the biggest concern people have on a day-to-day basis when they buy a ticket is that bots come and buy these tickets first and then scalp people, essentially. So you can understand why the companies might tie those tickets to a particular individual, but I’ve never thought about the fact that that actually prevents me from being able to give a ticket to a friend, and it gives the company itself a huge monopoly power over that.

Danny O’Brien:

And similarly, even in this conversation, I’ve been sitting and thinking, well, these days the books I buy, I often buy as eBooks, but I want to be able to leave my library, my collection to either my family when I die, or how the great libraries were always constructed, which is by people leaving their private library to the public system. And we have to have that as a possibility in the future. And with the current situation with eBooks, what happens at the end of it? Do they just evaporate? Do they have to be buried with me in my grave like an Egyptian pharaoh? It does feel like we have to create some rules about this before we find ourselves in a much more greatly impoverished situation.

Chris Lewis:

And I think there are three ways to attack this that we should keep in mind, because they can work together. Cindy, you noted legal changes, changes in the law, changes in the understanding of how the law is applied. That’s one very important way just to set clear expectations and clear protections. And I’m sure there’s other specific ones that we can come up with. I mentioned the one about public libraries. But the public library law that we’re suggesting wouldn’t even be possible if we also weren’t attacking it from the technological perspective. The good old let’s nerd harder.

Chris Lewis:

So if you law harder and you nerd harder, we can come up with a lot of great solutions here. Controlled digital lending, the ability to lend out a book and then retrieve it, and make sure that the person you lend it to does not keep it. That’s an innovation in technology. And it allows for the power of technology to work for everyone in the way public libraries have always worked. So if we law harder and we nerd harder, and then we set better norms around our society, I think all of those have to work together as a solution.
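The controlled digital lending model Chris describes, lend a digital copy out, then "retrieve" it so only as many copies circulate as the library owns, can be sketched as a simple loan ledger. This is a hypothetical illustration of the owned-to-loaned rule, not any library's actual system; the class and method names are invented for the example:

```python
# Hypothetical sketch of controlled digital lending: a library lends out
# exactly as many digital copies of a title as it owns, one borrower per copy.
class LendingLibrary:
    def __init__(self):
        self.owned = {}   # title -> number of copies the library owns
        self.loaned = {}  # title -> number of copies currently on loan

    def acquire(self, title, copies=1):
        # The library buys (and owns) physical or digital copies.
        self.owned[title] = self.owned.get(title, 0) + copies

    def checkout(self, title):
        # A loan succeeds only while an owned copy is not already lent out.
        if self.loaned.get(title, 0) >= self.owned.get(title, 0):
            return False
        self.loaned[title] = self.loaned.get(title, 0) + 1
        return True

    def checkin(self, title):
        # Retrieving the copy frees it for the next borrower.
        if self.loaned.get(title, 0) > 0:
            self.loaned[title] -= 1


lib = LendingLibrary()
lib.acquire("1984")
assert lib.checkout("1984") is True   # first borrower gets the copy
assert lib.checkout("1984") is False  # a second borrower must wait
lib.checkin("1984")
assert lib.checkout("1984") is True   # the copy is available again
```

In a real deployment the "retrieve" step is enforced technically (the lent file expires or stops opening), but the core invariant is just this one check: loans never exceed owned copies.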

Chris Lewis:

People are already doing the norm stuff and they don’t even know it when instead of buying access to go to the movies, people are buying access to Netflix, or they’re buying access to some other streaming service. When they do that and then they share their password with their family members, that’s a change in norms. And a change in norms that, by the way, many of the streaming companies are okay with. They have not actively stopped it. They haven’t tried to stop it through licensing. They probably could, the way that the law is interpreted right now, but they realized that they’re adding viewers.

Chris Lewis:

And I think that’s a good thing. It shifts the norm from buying to one-time viewing or buying a DVD to buying access to a library of information, a library of digital goods.

Cindy Cohn:

No, I think you’re right that a lot of the services are quietly cool with password sharing. One of the things that we have long pointed out, and we’re sliding into other EFF territory here, is that the Computer Fraud and Abuse Act and some of the other state anti-hacking laws actually do make that illegal, and not just illegal, but criminal. And copyright law itself, of course, has criminal penalties. So I do think that this is a long-time issue in copyright law: there are lots of things that violate copyright law, or at least the content owners’ view of copyright law, that are actually good for them.

Cindy Cohn:

The EFF was founded by John Perry Barlow who wrote songs for the Grateful Dead. And the big insight that the Grateful Dead had is that if you let people record your live concerts, which is not legal, they’ll share those tapes around and you end up with more people coming to your concerts. Similarly, if you let people share their passwords, more people get turned on to your service and it’ll end up growing your pie. So I think that I would like to see all the things that people do that are technically illegal, but really useful to society become legal so they can come out of the shadows.

Danny O’Brien:

I often find that one of the challenges here is that because the responsibility for how these systems are built is so diffused, there are a thousand lawyers, present company excepted, and thousands of vested interests trying to decide how to do this, that no one is able to stand up, either for the user, or just for common sense. So just to give an example, we had a huge fight with Netflix when they were trying to force DRM into web standards. But in the conversations with the people who were advocating for this at venues like the W3C, the main web standards organization, these were technologists.

Danny O’Brien:

And we were sitting down going, “You do know that first of all, this doesn’t work to stop mass piracy. And also it stops your customers from doing perfectly reasonable things with the products that they’ve purchased from you like pulling out a little clip to show their friends or sharing it on a big screen for their friends and family.” And the off the record conversations I had is they would go, “We would love to do this without DRM. We think it would be far more successful, far more popular. But we’re tied to these contracts with the original rights holders, and we can’t get out of those, so we have to enforce the impossible.”

Danny O’Brien:

I guess this is my question to you, Chris as an activist. When you have this debate over the limits of copyright and the limits of ownership, a lot of the time this ends up being a debate between big tech and the big rights holders, Hollywood. How do we get the voice of the user and the public interest into that debate?

Chris Lewis:

Right. This is something that we just have to continue to grow. I don’t know how many times a month I say to folks in Washington, “Please stop saying this is a fight between big tech and big content. It is not. There is a third and far more important perspective, and it’s the public interest perspective: what is important for our society?” Again, it’s that balance of values that has to be included in the policymaking, which, when balanced properly, takes care of the needs, or can take care of the needs, of big tech and big content in the long run. And especially if we combine it with smart technological innovations.

Chris Lewis:

So we have a movement to build. The raw politics of it right now is that I don’t think we have enough allies and champions in Washington who understand this balance. And we need to continue to organize and bring these stories to them. And the crazy thing is, when they hear the common sense stories, people in Washington are easily converted. Cindy, remember the small but fierce fight we had a few years ago on unlocking cell phones. The reason that worked is that everyone owns a cell phone: every member of Congress, the president.

Chris Lewis:

And so as soon as the public started to say, “It’s insane that I can’t unlock this digital lock on something that I bought and I own in order to switch to another service or to get it fixed properly, or to sell it to someone else,” that was an easy story to tell because everyone understood it. So we just got to continue to do that in order to build more champions and more understanding among policymakers.

Cindy Cohn:

I’m just sitting here nodding my head, because we really do need a movement. The public interest side of this debate is just so tiny compared to the size of the problems that we’re trying to address and the size of the people on the other side. And I think that you’re completely right. PK, you guys do heroic work making voices heard in Congress, and we do our best to help you, but we’re so tiny compared to the size of these fights and the size that we need to be. We really do need to turn this into a movement, and a movement that doesn’t go away in between the crises, that’s there all the time, because that’s what happens on the other side.

Danny O’Brien:

To end on a somewhat positive note, I do remember the amazing shift that happened when members of Congress first started using iPods. We had all of these debates about the terrors of piracy online. And then when folks got iPods, I just remember congresspeople standing up and saying, “You mean it’s criminal? These people want to stop me from transferring my CD collection to my iPod? That’s insane.” And I think that we have a good … I think time is on our side in many ways, because as a new generation of politicians comes up who recognize these intuitions that I think technology users have, there’s a better chance of encoding those intuitions in law.

Cindy Cohn:

Well, that’s a great segue, I think, Danny. So this podcast asks, “How are we fixing the internet?” Chris, what does the world look like if we get it right?

Chris Lewis:

Oh, wow. The possibilities are fantastic if we get it right. A world where average users of technology are empowered to have information at their fingertips no matter where they’re born as long as they have access to the internet. That’s a powerful world. Gosh, I remember coming back from school. I’m not a lawyer. I just went and got an undergraduate degree at Harvard University and I came back home to Virginia. And I remember meeting with other African-American high school students, kids who look like me, who grew up in the same county I grew up in. And they literally said things to me like, “I didn’t know that black people went to Harvard.”

Chris Lewis:

I’m not joking. And I said, “Kid, you haven’t even been across the county, let alone to a place like Boston.” The power of the internet to open up the worldview of those kids, or other kids, if they have access to it, is limitless. I can’t emphasize that enough.

Cindy Cohn:

Getting digital first sale right, to me, is exactly that. It’s the ability for us to really let people who don’t come from privileged backgrounds or privileged places have access to the world’s knowledge. And this to me goes back to why copyright is in the Constitution. It’s in the Constitution to promote the progress of science and the useful arts. If we get this right, we’re going to unleash the power of so many people around the world who haven’t been able to contribute to promoting the progress of science and the useful arts, because they haven’t had access to the information they need to get to the place where they can help us.

Cindy Cohn:

So I feel like we’re just unleashing the potential of so many people, especially kids who really haven’t been able to participate in growing the sciences and the useful arts.

Chris Lewis:

Right. And the things that they’re going to produce are going to be amazing. Being creative requires some spark or inspiration. It’s why if you look at the history of American music, the blues built off of Negro spirituals and rock and roll built off of the blues and country. Creativity builds on itself. And if young people don’t have access to the world’s knowledge, the world’s information, they’re not going to have that foundation to build off of and to improve and innovate.

Danny O’Brien:

I think that we always think of knowledge as a gift, but it’s not going to be much of a gift if we can’t give it on, if we can’t pass it to either the next generation or to people who don’t currently have it. So I think at the heart of first sale and digital first sale is really that ability to share what you have and pass it on, and for it not to be a dead end.

Cindy Cohn:

Thank you so much, Chris.

Danny O’Brien:

Cindy, that was a great discussion. Really, the thing that stuck with me, I think is once again this idea that debates over copyright aren’t really industry deals between big tech and big content. They’re actually about what a future society is going to be like. And we have to build them with the public interest and long-term goals in mind. And particularly, the thing I hadn’t really got about digital first sale is a lot of it’s about posterity and legacy. A lot of it’s about being able to have the right to give somebody what you’ve received.

Cindy Cohn:

I think that that’s tremendously important. And of course, the place where that has traditionally happened so much in our society is in libraries. So they’re central to the conversation.

Danny O’Brien:

I hadn’t really wrapped my head around this: if you don’t have first sale, if you don’t have this idea of “I bought it, so I own it,” then you don’t get the opportunity to create a system around lending, and lending libraries. But how do we change something like that, a norm that’s developing that we want to shift? Chris talked a bit about legal reform, lawmaking reform. And one of the ways I know that you can achieve this is by having very specific, targeted copyright reform, rather than these big omnibus copyright bills, so that you can just nudge things along. And I liked that Chris talked about very small bills that you can drop in if we can get support from politicians.

Cindy Cohn:

I think that it may be that we have to do this in tiny little bites, like the BOSS Act he talked about and the very narrow protection for libraries. I find that a little frustrating. The other thing, in talking about the court cases, and especially the ReDigi case that Chris mentioned, is that the courts have gotten some of this wrong. And so there’s an opportunity there for them to just get it right. Copyright is in the constitution. It’s a constitutional analysis about whether we’re promoting science and the useful arts. There’s some undoing that we’re going to need to do for digital first sale, but we certainly could get there.

Danny O’Brien:

And this is largely a court-created doctrine. Again, I’m not the lawyer, but if you have disagreements between the circuits in the United States, you can bring this to the Supreme Court. You can actually, I guess, create a more systemic view of the whole problem, the whole solution. But again, that needs people to advocate for it, or at least to point out the problems, which comes back to this idea about building a movement. And I know that in the ’90s and the 2000s, we did have that free culture movement that recognized that copyright and issues around copyright were important.

Danny O’Brien:

And I now feel like we’re facing the hard end of that fight. We’re in a situation where it’s not just theoretical anymore. It’s really about how people are living their lives. And that requires people to stand up and actually consult with other people to work out what a good strategy might be going forward.

Cindy Cohn:

This is one of the real geniuses of Chris: he is an activist, and he thinks about this from the context of building a movement, collecting the stories, and setting the groundwork to get the lawmakers to act, or the courts to recognize the issues at stake beyond big tech and big content, as you started this out with. And that’s the place where we’re in complete agreement. This is a movement. And the good thing about everybody relying on digital first sale is that we have a lot of people who are impacted by it. So there’s an opportunity there. I think the trick for us in building this movement is to make sure people see what they’re losing, so that we don’t just slide into a world in which it seems normal to just license knowledge, as opposed to actually owning it and being able to hand it on to people.

Cindy Cohn:

I also really appreciated Chris’s recognition that this movement is one that will be grounded in part in standing up for marginalized voices. The people who are really going to lose out if they don’t have the money to continue to pay for access to knowledge. And I appreciate that he really brings that lens to the conversation, because I think it’s exactly right.

Danny O’Brien:

Well, Cindy, I don’t know whether you’ve realized this, but actually this was our last episode in this mini-series, which I think technically means we fixed the internet. Although in practice, I think this is only a smattering of the many challenges you and I and the rest of our colleagues have in putting the world to rights.

Cindy Cohn:

I can’t believe we reached the end of this. This has been really great fun. And I really want to give a special shout-out to our colleague, Rainey Reitman who carried this along and really made it happen. I appreciate your view that we fixed the internet, but really this podcast is how to fix the internet. And hopefully we’ve pointed the way to how to fix it, but I think that what’s come up over and over again in all of these conversations is how we need you.

Cindy Cohn:

We need the engagement of a broader community, whether we call it a movement or we call it the users, or we call it the bigger community. So there’s lots of ways that you can get involved to help us get to that place where we don’t need this podcast anymore because the internet’s fixed. Of course, I’m the head of the EFF, so we work for tips. And if you think that what we’re doing to try to move things forward is good, please donate and join us at EFF, join our part of the movement. We’ve had lots of friends on this podcast who also are other pieces of the movement. And if any of those called to you, please support them as well.

Cindy Cohn:

Again, the way a movement works is that you don’t have to just pick one horse and go for it. You can really support a range of things that interest you. And I certainly hope that you will.

Danny O’Brien:

And if you have any feedback on this particular branch of the digital revolution, let us know at the eff.org/podcast. And we hope to see you in the very near future.

Danny O’Brien:

Thanks again for joining us. If you’d like to support the Electronic Frontier Foundation, here are three things you can do today. One, you can hit subscribe in your podcast player of choice. And if you have time, please leave a review. It helps more people find us. Two, please share on social media and with your friends and family. Three, please visit eff.org/podcasts, where you will find more episodes, learn about these issues, donate to become a member, and lots more.

Danny O’Brien:

Members are the only reason we can do this work, plus you can get cool stuff like an EFF hat or an EFF hoodie, or even a camera cover for your laptop. Thanks once again for joining us. And if you have any feedback on this episode, please email podcast@eff.org. We do read every email. This podcast was produced by the Electronic Frontier Foundation with the help from Stuga Studios. Music by Nat Keefe of BeatMower.


Categories
Call to Action Creativity & Innovation Fair Use Intelwars

Today: Tell Congress Not To Bankrupt Internet Users

We are at a critical juncture in the world of copyright claims. The “Copyright Alternative in Small-Claims Enforcement Act”—the CASE Act—is apparently being considered for inclusion in next week’s spending bill. That is “must pass” legislation—in other words, legislation that is vital to the function of the government and so anything attached to it, related to spending or not, has a good chance of becoming law. The CASE Act could mean Internet users facing $30,000 penalties for sharing a meme or making a video. It has no place in must pass legislation.

PREVENT COPYRIGHT TROLLING

TELL CONGRESS NOT TO TREAT FREE EXPRESSION LIKE A TRAFFIC TICKET

The CASE Act purports to be a simple fix to the complicated problem of online copyright infringement. In reality, it creates an obscure, labyrinthine system that will be easy for the big players to find their way out of. The new “Copyright Claims Board” in the Copyright Office would be empowered to levy large penalties against anyone accused of copyright infringement. The only way out would be to respond to the Copyright Office—in a very specific manner, within a limited time period. Regular Internet users, those who can’t afford the $30,000 this “small claims” board can force you to pay, will be the ones most likely to get lost in the shuffle.

The CASE Act doesn’t create a small-claims court, which might at least have some hard-fought protections for free expression built in. Instead, claims under the CASE Act would be heard by neither judges nor juries, just “claims officers.” And CASE limits appeals, so you may be stuck with whatever penalty the “claims board” decides you owe.

Previous versions of the CASE Act all failed. This version is not an improvement, and Congress has not heard enough from those of us who would be most affected by CASE: regular, everyday Internet users who could end up owing thousands of dollars. Large, well-resourced players will not be affected, as they will have the resources to track notices and simply opt out.

How do we know that the effect of this bill on people who do not have those resources has not been understood? For one thing, Representative Doug Collins of Georgia said in an open hearing that any claim with a $30,000 cap on damages was “truly small.” Of course, for many, many people – often the same people who don’t have lawyers to help them opt out in time – paying those damages would be ruinous.

That’s why we’re asking you to take some time today to call your representatives and tell them how dangerous this bill really is, and that it has no place being snuck through via a completely unrelated “must pass” spending bill.

Tell Congress that a bill like this has no place being added to any spending bill. That it must rise or fall on its own merits and that there are people who will be harmed and who are speaking out against this bill.

PREVENT COPYRIGHT TROLLING

TELL CONGRESS NOT TO TREAT FREE EXPRESSION LIKE A TRAFFIC TICKET

Categories
announcement Creativity & Innovation DRM Fair Use Intelwars

GitHub Reinstates youtube-dl After RIAA’s Abuse of the DMCA

GitHub recently reinstated the repository for youtube-dl, a popular free software tool for downloading videos from YouTube and other user-uploaded video platforms. GitHub had taken down the repository last month after the Recording Industry Association of America (RIAA) abused the Digital Millennium Copyright Act’s notice-and-takedown procedure to pressure GitHub to remove it.

By shoehorning DMCA 1201 into the notice-and-takedown process, RIAA potentially sets a very dangerous precedent.

The removal of youtube-dl’s source code caused an outcry. The tool is used by journalists and activists to save eyewitness videos, by YouTubers to save backup copies of their own uploaded videos, and by people with slow or unreliable network connections to download videos in high resolution and watch them without buffering interruptions, to name just a few of the uses we’ve heard about. youtube-dl is a lot like the videocassette recorders of decades past: a flexible tool for saving personal copies of video that’s already accessible to the public.

Under the DMCA, an online platform like GitHub is not responsible for the allegedly infringing activities of its users so long as that platform follows certain rules, including complying when a copyright holder asks it to take down infringing material. But unlike most DMCA takedowns, youtube-dl contained no material belonging to the RIAA or its member companies. RIAA’s argument hinges on a separate section of the DMCA, Section 1201, which says that it’s illegal to bypass a digital lock in order to access or modify a copyrighted work—or to provide tools to others that bypass digital locks. The RIAA argued that since youtube-dl could be used to infringe on copyrighted music, GitHub must remove it. By shoehorning DMCA 1201 into the notice-and-takedown process, RIAA potentially sets a very dangerous precedent, making it extremely easy for copyright holders to remove software tools from the Internet based only on the argument that those tools could be used for copyright infringement.

[Embedded video: https://www.youtube.com/embed/ck7utXYcZng]

youtube-dl’s code did contain the titles and URLs of certain commercial music videos as a part of a list of videos to use to test the tool’s functionality. Of course, simply mentioning a video’s URL is not an infringement, nor is streaming a few seconds of that video to test a tool’s functionality. As EFF explained in a letter to GitHub on behalf of youtube-dl’s team of maintainers:

First, youtube-dl does not infringe or encourage the infringement of any copyrighted works, and its references to copyrighted songs in its unit tests are a fair use. Nevertheless, youtube-dl’s maintainers are replacing these references. Second, youtube-dl does not violate Section 1201 of the DMCA because it does not “circumvent” any technical protection measures on YouTube videos.

Fortunately, after receiving EFF’s letter, GitHub has reversed course. From GitHub’s announcement:

Although we did initially take the project down, we understand that just because code can be used to access copyrighted works doesn’t mean it can’t also be used to access works in non-infringing ways. We also understood that this project’s code has many legitimate purposes, including changing playback speeds for accessibility, preserving evidence in the fight for human rights, aiding journalists in fact-checking, and downloading Creative Commons-licensed or public domain videos. When we see it is possible to modify a project to remove allegedly infringing content, we give the owners a chance to fix problems before we take content down. If not, they can always respond to the notification disabling the repository and offer to make changes, or file a counter notice.

Again, although our clients chose to remove the references to the specific videos that GitHub requested, including them did not constitute copyright infringement.

RIAA’s letter accused youtube-dl of being a “circumvention device” that bypasses a digital lock protected by Section 1201 of the DMCA. Responding on behalf of the developers, EFF explained that the “signature” code used by YouTube (what RIAA calls a “rolling cipher”) isn’t a protected digital lock—and if it were, youtube-dl doesn’t “circumvent” it but simply uses it as intended. For some videos, YouTube embeds a block of JavaScript code in its player pages. That code calculates a number called “sig” and sends it back to YouTube’s video servers as part of requesting the actual video stream. Any client software that can interpret JavaScript can run YouTube’s “signature” code and produce the right response, whether it’s a standard web browser or youtube-dl. The actual video stream isn’t encrypted with any DRM scheme like the ones used by subscription video sites.
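To make the mechanics concrete, here is a minimal, purely hypothetical sketch in Python of the kind of openly published transformation a player script might perform. The specific steps (reverse, slice, swap) and all function names here are invented for illustration; they are not YouTube’s actual code, which changes frequently. The point is that any client able to execute the published steps gets the right answer: nothing is decrypted.

```python
# Hypothetical illustration only: a toy "sig" transformation of the kind
# a player script might publish. NOT YouTube's real algorithm.

def reverse(sig: list) -> list:
    # Reverse the character list.
    return sig[::-1]

def splice(sig: list, n: int) -> list:
    # Drop the first n characters.
    return sig[n:]

def swap(sig: list, n: int) -> list:
    # Swap the first character with the one at position n (mod length).
    sig = sig.copy()
    sig[0], sig[n % len(sig)] = sig[n % len(sig)], sig[0]
    return sig

def compute_sig(scrambled: str) -> str:
    # A client (browser or downloader alike) simply runs the same
    # plainly published steps; no key is recovered, no lock is broken.
    s = list(scrambled)
    s = reverse(s)
    s = splice(s, 2)
    s = swap(s, 5)
    return "".join(s)
```

Because the recipe is delivered in the clear to every client, running it in a downloader is no different in kind from running it in a browser.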

It’s no secret that EFF doesn’t like Section 1201 and the ways it’s used to shut down innovation and competition. In fact, we’re challenging the law in court. But in the case of youtube-dl, Section 1201 doesn’t apply at all. GitHub agreed, and put the repository back online with its full functionality intact.

GitHub recognized that accusations about software projects violating DMCA 1201 don’t make valid takedowns under section 512. GitHub committed to have technical experts review Section 1201-related accusations and allow code repository owners to dispute those accusations before taking down their code. This is a strong commitment to the rights of GitHub’s developer community, and we hope it sets an example.

EFF is proud to have helped the free software community keep this important tool online. Please consider celebrating this victory with us by making a donation to EFF.

Donate to EFF

Defend Innovation and Free Speech

Categories
DRM Fair Use Intelwars open access

Open Education and Artificial Scarcity in Hard Times

The sudden move to remote education by universities this year has forced the inevitable: the move to online education. While most universities won’t be fully remote, having course materials online was already becoming the norm before the COVID-19 pandemic, and this year it has become mandatory for millions of educators and students. As academia recovers from this crisis, and hopefully prepares for the next one, the choices we make will send us down one of two paths. We can move towards a future of online education which replicates the artificial scarcity of traditional publishing, or take a path which fosters an abundance of free materials by embracing the principles of open access and open education.

The well-worn, hefty, out-of-date textbook you may have bought some years ago was likely obsolete the moment you had a reliable computer and an Internet connection. Traditional textbook publishers already know this, and tout that they have embraced the digital era and have ebooks and e-rentals available—sometimes even at a discount. Despite some state laws discouraging the practice, publishers try to bundle their digital textbooks into “online learning systems,” often at the expense of the student. However, the cost and time needed to copy and distribute thousands of digital textbooks are trivial compared to their physical equivalents.

To make matters worse, these online materials are often locked down with DRM that prevents buyers from sharing or reselling books, in turn devastating the secondhand textbook market. This creates the absurd situation of ebooks, which are almost free to reproduce, being effectively more expensive than a physical book you plan to resell. Fortunately for all of us, this scarcity is constructed, and there exists a more equitable and intuitive alternative.

Right now there is a global collaborative effort among the world’s educators and librarians to provide high-quality, free, and up-to-date education materials to all with little restriction. This of course is the global movement towards open education resources (OER). While this tireless effort of thousands of academics may seem complicated, it revolves around a simple idea: Education is a fundamental human right, so if technology enables us to share, reproduce, and update educational materials so effectively that we can give them away for free—it’s our moral duty to do so.

This cornucopia of syllabuses, exams, textbooks, video lectures, and much more is already available and awaiting eager educators and students. This is thanks to the power of open licensing, typically the Creative Commons Attribution license (CC BY), which is the standard for open educational resources. Open licensing preserves your freedom to retain, reuse, revise, remix, and redistribute educational materials. Much like free and open source licensing for code, these licenses help foster a collaborative ecosystem where people can freely use, improve, and recreate useful tools.

Yet, most college students are still stuck on the path of prohibitively expensive and often outdated books from traditional publishers. While this situation is bad enough on its own, the COVID-19 pandemic has heightened the absurd and contradictory nature of this status quo. The structural equity offered by supporting OER is as clear and urgent as ever. Open Education, like all of open access more broadly, is a human rights issue.

The Squeeze on Students and Instructors

How do college students cope with being assigned highly priced textbooks? Some are fortunate enough to buy them outright, or can at least scrape together enough to rent everything they need. When physical books are available, students can share copies, resell them, and buy used ones. Artificial book scarcity has fortunately already been addressed in many communities with well-funded libraries. Unfortunately, the necessity of reducing social contact during the pandemic has made these physical options more difficult, if not impossible, to orchestrate. That leaves the most vulnerable students with the easiest and by far most common solution for navigating the predicament: hope that you don’t actually need the book and avoid the purchase altogether.

Unsurprisingly, a student’s performance is highly impacted by access to educational materials, and trying to catch up late in the semester is rarely viable. In short, these wholly artificial barriers to accessing necessary educational materials are setting up the most vulnerable students to choose between risking their grades, their health, or their wallet. Fortunately there is a growing number of institutions embracing OER, saving their students millions of dollars while making it possible for every student to succeed without any undue costs.

Instructors at universities have been feeling the pressure, too. With little support at most institutions, they were asked to prepare a fully online course for the fall, sometimes in addition to an in-person course plan. Studying this sudden pivot, Bay View Analytics estimates 97% of institutions had faculty teaching online for the first time, with 56% of instructors needing to adopt teaching methods they have never tried before. 

Adapting a course to work online is not a trivial amount of work. Integrating technology into education often requires special training in pedagogy, the digital divide, and emerging privacy concerns. Even without that training, having a selection of pre-made courses to freely adapt is an age-old academic practice that can relieve instructors of this burden. This informal system of sharing may have given instructors some confidence, provided they knew others who had taught similar courses online, but this is where the power of OER can really take hold.

Instead of being limited to who you know, the global community around OER offers a much broader variety of syllabuses and assignments for different teaching styles. As OER continues to grow, instructors will be more resilient and able to choose between the best materials the global OER community has to offer.   

Building Towards Education Equity

Despite the many benefits of open access and open education, most instructors have still never heard of OER. This means a simple first step away from an expensive and locked-down system of education is to make the benefits of OER more widely known. While pushing for broader adoption of OER, we must also advocate for systemic changes to make sure OER is supported on every campus.

For this task, supporting public and private libraries is essential. Despite years of austerity cuts, many academic libraries have established hubs of well-curated OER, tailored to the needs of their institutions. As just one example, OASIS is a hub of OER at the State University of New York at Geneseo, where librarians maintain materials from over 500 institutions. As a greater number of educational materials adopt open licenses, it will be essential for librarians to help instructors navigate the growing number of options.

State legislatures are also increasingly introducing bills to address this issue, and we should all push our legislatures to do what’s right. Public funding should save students money and save teachers time, not deepen the divide between those who can and those who can’t access resources.

This is a lesson we cannot forget as we recover from the current crisis. Structural inequity and a system of artificial scarcity is nothing new, and it will still be there on the other side of the COVID-19 pandemic. Traditional publishers have restrained education too much for too long. We already have a future where education can be adaptable, collaborative, and free. Now is the time to reclaim it.

EFF is proud to celebrate Open Access Week 2020

Categories
announcement Fair Use Intelwars open access

Education Groups Drop Their Lawsuit Against Public.Resource.Org, Give Up Their Quest to Paywall the Law

This week, open and equitable access to the law came a step closer. For many years, EFF has defended Public.Resource.Org in its quest to improve public access to the law — including standards, like the National Electrical Code, that legislators and agencies have made into binding regulations. In two companion lawsuits, six standards development organizations sued Public Resource in 2013 for posting standards online. They accused Public Resource of copyright infringement and demanded the right to keep the law behind paywalls.

Yesterday, three of those organizations dropped their suit. The American Educational Research Association (AERA), the National Council on Measurement in Education (NCME), and the American Psychological Association (APA) publish a standard for writing and administering tests. The standard is widely used in education and employment contexts, and several U.S. federal and state government agencies have incorporated it into their laws.

A federal district court initially ruled that laws like the testing standard could be copyrighted, and that Public Resource could not post them without permission. But in 2018, the Court of Appeals for the D.C. Circuit threw out that ruling and sent the case back to the trial court with instructions to consider whether posting standards that are incorporated into law is a non-infringing fair use. As one member of the three-judge panel wrote [pdf], the court “put[] a heavy thumb on the scale in favor of an unrestrained ability to say what the law is.”

Also this year, in a related case, the Supreme Court ruled that Public Resource could post the state of Georgia’s annotated code of laws, ruling that the state could not assert copyright in its official annotations.

Yesterday, AERA, NCME, and APA asked the court to dismiss their suit with prejudice, indicating that they are no longer trying to stop Public Resource from posting the testing standards. “I’m pleased that AERA, NCME, and APA have withdrawn their claims and hope they will embrace open access to their admirable work,” said Public Resource founder Carl Malamud. “We have vigorously objected to what we believed was a baseless suit, but we are also very happy to move forward and thank them for taking this important though overdue step. It has been seven long years, let’s think about the future.”

Three other standards development groups (the American Society for Testing and Materials, the National Fire Protection Association, and the American Society for Heating, Refrigeration, and Air Conditioning Engineers) continue to pursue their suit against Public Resource. We’re confident that the court will rule that laws are free for all to read, speak, and share with the world.

Categories
Creativity & Innovation Fair Use free speech Intelwars Legislative Analysis Section 230 of the Communications Decency Act

The Online Content Policy Modernization Act Is an Unconstitutional Mess

EFF is standing with a huge coalition of organizations to urge Congress to oppose the Online Content Policy Modernization Act (OCPMA, S. 4632). Introduced by Sen. Lindsey Graham (R-SC), the OCPMA is yet another of this year’s flood of misguided attacks on Internet speech (read bill [pdf]). The bill would make it harder for online platforms to take common-sense moderation measures like removing spam or correcting disinformation, including disinformation about the upcoming election. But it doesn’t stop there: the bill would also upend longstanding balances in copyright law, subjecting ordinary Internet users to up to $30,000 in fines for everyday activities like sharing photos and writing online, without even the benefit of a judge and jury.

The OCPMA combines two previous bills. The first—the Online Freedom and Viewpoint Diversity Act (S. 4534)—undermines Section 230, the most important law protecting free speech online. Section 230 enshrines the common-sense principle that if you say something unlawful online, you should be the one held responsible, not the website or platform where you said it. Section 230 also makes it clear that platforms have liability protections for the decisions they make to moderate or remove online speech: platforms are free to decide their own moderation policies however they see fit. The OCPMA would flip that second protection on its head, shielding only platforms that agree to confine their moderation policies to a narrowly tailored set of rules. As EFF and a coalition of legal experts explained to the Senate Judiciary Committee:

This narrowing would create a strong disincentive for companies to take action against a whole host of disinformation, including inaccurate information about where and how to vote, content that aims to intimidate or discourage people from casting a ballot, or misleading information about the integrity of our election systems. S.4632 would also create a new risk of liability for services that “editorialize” alongside user-generated content. In other words, sites that direct users to voter-registration pages, that label false information with fact-checks, or that provide accurate information about mail-in voting, would face lawsuits over the user-generated content they were intending to correct.

It’s easy to see the motivations behind the Section 230 provisions in this bill, but they simply don’t hold up to scrutiny. This bill is based on the flawed premise that social media platforms’ moderation practices are rampant with bias against conservative views; while this is a popular meme in some right-wing circles, it doesn’t hold water. There are serious problems with platforms’ moderation practices, but the problem isn’t the liberal silencing the conservative; the problem is the powerful silencing the powerless. Besides, it’s absurd to suggest that the situation would somehow be improved by putting such severe limits on how platforms moderate; the Internet is a better place when multiple moderation philosophies can coexist, some more restrictive and some more freeform.

The government forcing platforms to adopt a specific approach to moderation is not just a bad idea; it’s unconstitutional. As EFF explained in its own letter to the Judiciary Committee:

The First Amendment prohibits Congress from directly interfering with intermediaries’ decisions regarding what user-generated content they host and how they moderate that content. The OCPM Act seeks to coerce the same result by punishing services that exercise their rights. This is an unconstitutional condition. The government cannot condition Section 230’s immunity on interfering with intermediaries’ First Amendment rights.

Sen. Graham has also used the OCPMA as his vehicle to bring back the CASE Act, a 2019 bill that would have created a new tribunal for hearing “small” ($30,000!) copyright disputes, putting everyday Internet users at risk of losing everything simply for sharing copyrighted images or text online. This tribunal would exist within the Copyright Office, not the judicial branch, and it would lack important protections like the right to a jury trial and registration requirements. As we explained last year, the CASE Act would usher in a new era of copyright trolling, with copyright owners or their agents sending notices en masse to users for sharing memes and transformative works. When Congress was debating the CASE Act last year, its proponents laughed off concerns that the bill would put everyday Internet users at risk, clearly not understanding what a $30,000 fee would mean to the average family. As EFF and a host of other copyright experts explained to the Judiciary Committee:

The copyright small claims dispute provisions in S. 4632 are based upon S. 1273, the Copyright Alternative in Small-Claims Enforcement Act of 2019 (“CASE Act”), which could potentially bankrupt millions of Americans, and be used to target schools, libraries and religious institutions at a time when more of our lives are taking place online than ever before due to the COVID-19 pandemic. Laws that would subject any American organization or individual — from small businesses to religious institutions to nonprofits to our grandparents and children — to up to $30,000 in damages for something as simple as posting a photo on social media, reposting a meme, or using a photo to promote their nonprofit online are not based on sound policy.

The Senate Judiciary Committee plans to consider the OCPMA soon. This bill is far too much of a mess to be saved by amendments. We urge the Committee to reject it.

Categories
Commentary Fair Use Intelwars International

An Open Letter to the Government of South Africa on the Need to Protect Human Rights in Copyright

Five years ago, South Africa embarked upon a long-overdue overhaul of its copyright system, and, as part of that process, the country incorporated some of the best elements of both U.S. and European copyright.

From the U.S.A., South Africa imported the flexible idea of fair use — a set of tests for when it’s okay to use others’ copyrighted work without permission. From the E.U., South Africa imported the idea of specific, enumerated exemptions for libraries, galleries, archives, museums, and researchers.

Both systems are important for preserving core human rights, including free expression, privacy, education, and access to knowledge; as well as important cultural and economic priorities such as the ability to build U.S.- and European-style industries that rely on flexibilities in copyright.

Taken together, the two systems are even better: the European system of enumerated exemptions gives a bedrock of certainty on which South Africans can stand, knowing for sure that they are legally permitted to make those uses. The U.S. system, meanwhile, future-proofs these exemptions by giving courts a framework with which to evaluate new uses involving technologies and practices that do not yet exist.

But as important as these systems are, and as effective as they’d be in combination, powerful rightsholder lobbies insisted that they should not be incorporated in South African law. Incredibly, the U.S. Trade Representative objected to elements of the South African law that were nearly identical to U.S. copyright, arguing that the freedoms Americans take for granted should not be enjoyed by South Africans.

Last week, South African President Cyril Ramaphosa alarmed human rights N.G.O.s and the digital rights community when he returned the draft copyright law to Parliament, striking out both the E.U.- and U.S.-style limitations and exceptions, arguing that they violated South Africa’s international obligations under the Berne Convention, which is incorporated into other agreements such as the WTO’s TRIPS Agreement and the WIPO Copyright Treaty.

President Ramaphosa has been misinformed. The copyright limitations and exceptions under consideration in South Africa are both lawful under international treaties and important to the human rights, cultural freedom, economic development, national sovereignty and self-determination of the South African nation, the South African people, and South African industry.

Today, EFF sent an open letter to The Honourable Ms. Thandi Modise, Speaker of South Africa’s National Assembly; His Excellency Mr. Cyril Ramaphosa, President of South Africa; Ms. Inze Neethling, Personal Assistant to Minister E. Patel, South African Department of Trade, Industry and Competition; and The Honourable Mr. Andre Hermans, Secretary of the Portfolio Committee on Trade and Industry of the Parliament of South Africa.

In our letter, we set out the legal basis for the U.S. fair use system’s compliance with international law, and the urgency of balancing South African copyright with limitations and exceptions that preserve the public interest.

This is an urgent matter. EFF is proud to partner with NGOs in South Africa and around the world in advocating for the public’s rights in copyright.

Categories
Creativity & Innovation Fair Use Intelwars

What Really Does and Doesn’t Work for Fair Use in the DMCA

On July 28, the Senate Committee on the Judiciary held another in its year-long series of hearings on the Digital Millennium Copyright Act (DMCA). The topic of this hearing was “How Does the DMCA Contemplate Limitations and Exceptions Like Fair Use?”

We’re glad Congress is asking the question. Without fair use, much of our common culture would be inaccessible, cordoned off by copyright. Fair use creates breathing space for innovation and new creativity by allowing us to re-use and comment on existing works. As Sherwin Siy, lead public policy manager for the Wikimedia Foundation, said in his testimony: “That fair uses aren’t rare exceptions to the exclusive rights of copyright law but a pervasive, constantly operating aspect of the law. Fair use not only promotes journalism, criticism, and education, it also ensures that our everyday activities aren’t constantly infringing copyrights. Especially now that so much of our lives are conducted on camera and online.”

Unfortunately, the answer to Congress’s question is: not enough. The DMCA, both by design and as interpreted, doesn’t do enough to protect online fair uses. This is true of both Section 1201 of the DMCA—the “anti-circumvention” provision, which bans circumventing technological restrictions on copyrighted works—and Section 512—the provision that immunizes platforms from liability for copyright infringement by their users so long as certain conditions are met.

Fair Use and Notice and Takedown

The DMCA was meant to be a grand bargain, balancing the needs of tech companies, rightsholders, and users. Section 512 embodies a carefully crafted system that, when properly deployed, gives service providers protection from liability, copyright owners tools to police infringement, and users the ability to challenge the improper use of those tools. Without Section 512, the risk of crippling liability for the acts of users would have prevented the emergence of most of the social media outlets we use today.

But Congress knew that Section 512’s powerful incentives could result in lawful material being censored from the Internet, without prior judicial scrutiny, much less advance notice to the person who posted the material or an opportunity to contest the removal. For example, users often make fair use of copyrighted works in all kinds of online expression. That use is authorized by law, as part of copyright’s “built-in First Amendment accommodations.” Nonetheless, it is often targeted for takedown under the DMCA.

In Section 512, user protections are supposed to be located in subsections 512(g) and 512(f). In practice, neither of these sections has worked quite as intended.

Section 512(g) lays out the requirements for counternotices. In theory, if a takedown targets a fair use, the person whose work was removed can send a counternotice to get it restored. The counternotice must include the sender’s personal information and an agreement to be subject to a lawsuit. If the copyright claimant doesn’t respond to the counternotice by filing a legal action within two weeks, the work goes back up. In practice, very few counternotices are sent, even when the original takedown was flawed.

Section 512(f) was supposed to deter takedowns targeting lawful uses by giving those harmed the ability to hold takedown senders accountable. Once again, in practice, this has done little to actually prevent abusive and false takedowns.

Columbia Law Professor Jane C. Ginsburg agreed, saying that these parts of Section 512 “may not always have worked out as intended.” She highlighted that automated takedown notice systems don’t take fair use into account and that relatively few counternotices are sent. She allowed that “fear or ignorance” may cause users not to take advantage of counternotices, a point backed up by instances of trolling and by the intimidating nature of the counternotice process.

Evidence of how users avoid the process was given by Rick Beato, a musician who also has a popular YouTube channel that teaches music theory. He noted that he has made 750 YouTube videos, of which 254 have been demonetized and 43 have been blocked or taken down. Beato noted that he’s never disputed anything – it’s too much trouble.

Several witnesses urged the creation of some sort of “alternative dispute resolution” system to make taking down and restoring content easier. We disagree. Section 512 already makes takedowns far too easy. The experience of the last 22 years shows just how much the fundamental right to freedom of expression is harmed by extrajudicial systems like the DMCA. The answer to the DMCA’s failures cannot be yet another extrajudicial system.

As for the European model, there is no way to square the Copyright Directive with fair use. The European Union’s Copyright Directive effectively requires companies to ensure that nothing that might be infringing is ever posted on their platforms. That incentivizes them to over-remove rather than take fair use into account, and it makes filters all but mandatory: the Directive effectively requires online service providers to send everything we post to the Internet through black-box machine-learning filters that block anything classified as “copyright infringement.” And, as Beato testified at the hearing, filters routinely fail to distinguish even obvious fair uses. His educational videos, for example, have been taken down and demonetized because of a filter. And he is not alone.

Witnesses also suggested that fair use has expanded too far. This is a reassertion of the old bogeyman of “fair use creep,” and it assumes that fair use is set in stone. In fact, fair use, which is flexible by design, is merely keeping up with the changes in the online landscape and protecting users’ rights.

As witness Joseph Gratz put it:

Nobody likes to have their word, or their work, or their music used in ways that they can’t control. But that is exactly what fair use protects. And that is exactly what the First Amendment protects. Whether or not the copyright holder likes the use, and indeed, even more so where the copyright holder does not like the use, fair use is needed to make sure that free expression can thrive.

Fair Use, Copyright Protection Measures, and Right to Repair

On balance, Section 512 supports a great deal of online expression despite its flaws. The same cannot be said for Section 1201. Section 1201 makes it illegal to circumvent a technological protection on a copyrighted work, even if you are doing so for an otherwise lawful reason.

Sound confusing? It is. Thanks to fair use, you have a legal right to use copyrighted material without permission or payment. But thanks to Section 1201, you do not have the right to break any digital locks that might prevent you from engaging in that fair use. And this, in turn, has had a host of unintended consequences, such as impeding the right to repair.

The only way to be safe under the law is to get an exemption from the Copyright Office, which grants exemptions to classes of uses every three years. And even if your use is covered by an exemption, that exemption must be continually renewed. In other words, you have to participate in an unconstitutional speech licensing regime, seeking permission from the Copyright Office to exercise your speech rights.

Nevertheless, Christopher Mohr, the Vice President for Intellectual Property and General Counsel of the Software and Information Industry Association, called Section 1201 a success because it supposedly prevented the “proliferation of piracy tools in big box stores.” And Ginsburg pointed to the triennial exemption process as a success. She said it “responds effectively to the essential challenge” of balancing the need for controls with the rights of users.

That’s one way of looking at it. Another is that even if you have an exemption allowing you to access material you have a constitutional right to use, you can’t ask someone with the technological know-how to circumvent for you, and no one is supposed to provide you a tool to do it yourself, either. You have to do it all on your own.

So if you are, for example, one of the farmers trying to repair your own tractor, you now have an exemption allowing you to do that. But you still can’t go to an independent repair shop to have an expert let you in, and you can’t use a premade tool to help you get in. This is a manifestly absurd result.

We’re glad Congress is asking questions about fair use under the DMCA. We wish there were better answers.


A Legal Deep Dive on Mexico’s Disastrous New Copyright Law

Mexico has just adopted a terrible new copyright law, thanks to pressure from the United States (and specifically from the copyright maximalists that hold outsized influence on US foreign policy).

This law closely resembles the Digital Millennium Copyright Act, enacted in the US in 1998, with a few differences that make it much, much worse.

We’ll start with a quick overview, and then dig deeper.

“Anti-Circumvention” Provision

The Digital Millennium Copyright Act included two very significant provisions. One is DMCA 1201, the ban on circumventing technology that restricts access to or use of copyrighted works (and on sharing such circumvention technology). Congress was thinking about people ripping DVDs to make infringing copies of movies or descrambling cable channels without paying, but the law it passed goes much, much farther. In fact, some US courts have interpreted it to effectively eliminate fair use whenever a technological restriction must be bypassed.

In the past 22 years, we’ve seen DMCA 1201 interfere with media education, remix videos, security research, privacy auditing, archival efforts, innovation, access to books for people with print disabilities, unlocking phones to work on a new carrier or to install software, and even the repair and reverse engineering of cars and tractors. It turns out that there are a lot of legitimate and important things that people do with culture and with software. Giving copyright owners the power to control those things is a disaster for human rights and for innovation.

The law is sneaky. It includes exemptions that sound good on casual reading, but are far narrower than you would imagine if you look at them carefully or in the context of 22 years of history. For instance, for the first 16 years under DMCA 1201, we tracked dozens of instances where it was abused to suppress security research, interoperability, free expression, and other noninfringing uses of copyrighted works.

It’s a terrible, unconstitutional law, which is why EFF is challenging it in court.

Unfortunately, Mexico’s version is even worse. Important cultural and practical activities are blocked by the law entirely. In the US, we and our allies have used Section 1201’s exemption process to obtain accommodations for documentary filmmaking, for teachers using video clips in the classroom, for fans making noncommercial remix videos, for unlocking or jailbreaking phones, for repairing and modifying cars and tractors, for using competing cartridges in 3D printers, and for archival preservation of certain works. Beyond those, we and our allies have been fighting for decades to protect the full scope of noninfringing activities that require circumvention, so that journalism, dissent, innovation, and free expression do not take a back seat to an overbroad copyright law. Mexico’s version has an exemption process as well, but it is far more limited, in part because Mexico doesn’t have our robust fair use doctrine as a backstop.

This is not a niche issue. The U.S. Copyright Office received nearly 40,000 comments in the 2015 rulemaking. In response to a petition signed by 114,000 people, the U.S. Congress stepped in to correct the rulemaking authorities when they allowed the protection for unlocking phones to lapse in 2012.

“Notice-and-Takedown” Provision

In order to avoid the uncertainty and cost of litigation (which would have bankrupted every online platform and deprived the public of important opportunities to speak and connect), Congress enacted Section 512, which provides a “safe harbor” for various Internet-related activities. To stay in the safe harbor, service providers must comply with several conditions, including “notice and takedown” procedures that give copyright holders a quick and easy way to disable access to allegedly infringing content. Section 512 also contains provisions allowing users to challenge improper takedowns. Without these protections, the risk of potential copyright liability would prevent many online intermediaries from providing services such as hosting and transmitting user-generated content. Thus the safe harbors have been essential to the growth of the Internet as an engine for innovation and free expression.

But Section 512 is far from perfect, and again, the Mexican version is worse.

First of all, a platform can be fined simply for failing to abide by takedown requests, even if the takedown is spurious and the targeted material does not infringe. In the US, a platform that opted out of the safe harbor would still only be liable if someone sued it and proved secondary liability. Platforms are already incentivized to take down content on a hair trigger to avoid potential liability, and the Mexican law adds new penalties if they don’t.

Second, we have long catalogued the many problems that arise when you provide the public a way to get material removed from the public sphere without any judicial involvement. It is sometimes deployed maliciously, to suppress dissent or criticism, while other times it is deployed with lazy indifference about whether it is suppressing speech that isn’t actually infringing.

Third, by requiring that platforms prevent material from reappearing after it is taken down, the Mexican law goes far beyond DMCA 512 by essentially mandating automatic filters. We have repeatedly written about the disastrous consequences of this kind of automated censorship.

So that’s the short version. For more detail, read on. But if you are in Mexico, consider first exercising your power to fight back against this law.

Take Action

If you are based in Mexico, we urge you to participate in R3D’s campaign “Ni Censura ni Candados” and send a letter to Mexico’s National Commission for Human Rights asking it to challenge this flawed new copyright law. R3D will ask for your name, email address, and your comment, which will be subject to R3D’s privacy policy.

We are grateful to Luis Fernando García Muñoz of R3D (Red en Defensa de los Derechos Digitales) for his translation of the new law and for his advocacy on this issue.

In-depth legislative analysis and commentary

The text of the law is presented in full in blockquotes. EFF’s analysis has been inserted following the relevant provisions.

Provisions on Technical Protection Measures

Article 114 Bis.- In the protection of copyright and related neighboring rights, effective technological protection measures and rights management information may be implemented. For these purposes:

I. An effective technological protection measure is any technology, device or component that, in the normal course of its operation, protects copyright, the right of the performer or the right of the producer of the phonogram, or that controls access to a work, to a performance, or to a phonogram. Nothing in this section shall be compulsory for persons engaged in the production of devices or components, including their parts and their selection, for electronic, telecommunication or computer products, provided that said products are not destined to carry out unlawful conduct, and

This provision adopts a broad definition of ‘technological protection measure’ or TPM, so that a wide range of encryption and authentication technologies will trigger this provision. The reference to copyright is almost atmospheric, since the law is not substantively restricted to penalizing those who bypass TPMs for infringing purposes.

II. The information on rights management are the data, notice or codes and, in general, the information that identifies the work, its author, the interpretation, the performer, the phonogram, the producer of the phonogram, and to the holder of any right over them, or information about the terms and conditions of use of the work, interpretation or execution, and phonogram, and any number or code that represents such information, when any of these information elements is attached to a copy or appear in relation to the communication to the public of the same.

In the event of controversies related to both fractions, the authors, performers or producers of the phonogram, or holders of respective rights, may exercise civil actions and repair the damage, in accordance with the provisions of articles 213 and 216 bis. of this Law, independently to the penal and administrative actions that proceed.

Article 114 Ter.- It does not constitute a violation of effective technological protection measures when the evasion or circumvention is about works, performances or executions, or phonograms whose term of protection granted by this Law has expired.

In other words, the law doesn’t prohibit circumvention to access works that have entered the public domain. This is small comfort: Mexico has one of the longest copyright terms in the world.

Article 114 Quater.- Actions of circumvention or evasion of an effective technological protection measure that controls access to a work, performance or execution, or phonogram protected by this Law, shall not be considered a violation of this Law, when:

This provision lays out some limited exceptions to the general rule of liability. But those exceptions won’t work. After more than two decades of experience with the DMCA in the United States, it is clear that regulators can’t protect fundamental rights by attempting to imagine in advance and authorize particular forms of cultural and technological innovation. Furthermore, several of these exemptions are modeled on stale US exemptions that have proven completely inadequate in practice. The US Congress could plead ignorance in the 90s; legislators have no excuse today.

It gets worse: because Mexico does not have a general fair use rule, innovators would be entirely dependent on these limited exemptions.

I. Non-infringing reverse engineering processes carried out in good faith with respect to the copy that has been legally obtained of a computer program that effectively controls access in relation to the particular elements of said computer programs that have not been readily available to the person involved in that activity, with the sole purpose of achieving the interoperability of an independently created computer program with other programs;

If your eyes glazed over at “reverse engineering” and you assumed this covered reverse engineering generally, you would be in good company. This exemption is sharply limited, however. The reverse engineering is only authorized for the “computer program that effectively controls access” and is limited to “elements of said computer programs that have not been readily available.” It does not mention reverse engineering of computer programs that are subject to access controls – in part because the US Congress was thinking about DVD encryption and cable TV channel scrambling, not about software.

If you circumvent to confirm that the software is the software claimed, do you lose access to this exemption because the program was already readily available to you? Even if you had no way to verify that claim without circumvention? Likewise, your “sole purpose” has to be achieving interoperability of an independently created computer program with other programs. It’s not clear what “independently” means, and this is not a translation error – the US law is similarly vague. Finally, the “good faith” limitation is a trap for the unwary or unpopular. It does not give adequate notice to a researcher whether their work will be considered to be done in “good faith.” Is reverse engineering for competitive advantage a permitted activity or not? Why should any non-infringing activity be a violation of copyright-related law, regardless of intent?

If you approach this provision as if it authorizes “reverse engineering” or “interoperability” generally you are imagining an exemption that is far more reasonable than what the text provides.

In the US, for example, companies have pursued litigation over interoperable garage door openers and printer cartridges all the way to appellate courts. It has never been this provision that protected interoperators. The Copyright Office has recognized this in granting exemptions to 1201 for activities like jailbreaking your phone to work with other software.

II. The inclusion of a component or part thereof, with the sole purpose of preventing minors from accessing inappropriate content, online, in a technology, product, service or device that itself is not prohibited;

It’s difficult to imagine a technology having this as its ‘sole purpose.’ In any event, the exemption is far too vague to be useful.

III. Activities carried out by a person in good faith with the authorization of the owner of a computer, computer system or network, performed for the sole purpose of testing, investigating or correcting the security of that computer, computer system or network;

Again, if you skim this provision and believe it protects “computer security,” you are giving it too much credit. Most security researchers do not have the “sole purpose” of fixing the particular device they are investigating; they want to provide that knowledge to the necessary parties so that security flaws do not harm any of the users of similar technology. They want to advance the state of understanding of secure technology. They may also want to protect the privacy and autonomy of users of a computer, system, or network in ways that conflict with what the manufacturer would view as the security of the device. The “good faith” exemption again creates legal risk for any security researcher trying to stay on the right side of the law. Researchers often disagree with manufacturers about the appropriate way to investigate and disclose security vulnerabilities. The vague statutory provision for security testing in the United States was far too unreliable to successfully foster essential security research, something that even the US Copyright Office has now acknowledged. Restrictions on engaging in and sharing security research are also part of our active lawsuit seeking to invalidate Section 1201 as a violation of free expression.

IV. Access by the staff of a library, archive, or an educational or research institution, whose activities are non-profit, to a work, performance, or phonogram to which they would not otherwise have access, for the sole purpose to decide if copies of the work, interpretation or execution, or phonogram are acquired;

This exemption too must be read carefully. It is not a general exemption for noninfringing archival or educational uses. It is instead an extremely narrow exemption for deciding whether to purchase a work. When archivists want to break TPMs to archive an obsolete format, when educators want to take excerpts from films to discuss in class, when researchers want to run analytical algorithms on video data to measure bias or enhance accessibility, this exemption does nothing to help them. Several of these uses have been acknowledged as legitimate and impaired by the US Copyright Office.

V. Non-infringing activities whose sole purpose is to identify and disable the ability to compile or disseminate undisclosed personal identification data information, reflecting the online activities of a natural person, in a way that does not affect the ability of any person to gain access to a work, performance, or phonogram;

This section provides a vanishingly narrow exception, one that can be rendered null if manufacturers use TPMs in such a way that you cannot protect your privacy without bypassing the same TPM that prevents access to a copyrighted work. And rightsholders have repeatedly taken this very position in the United States. Besides that, the wording is tremendously outdated; you may want to modify the software in your child’s doll so that it doesn’t record their voice and send it back to the manufacturer; that is not clearly “online activities” – they’re simply playing with a doll at home. In the US, “personally identifiable information” also has a meaning that is narrower than you might expect.

VI. The activities carried out by persons legally authorized in terms of the applicable legislation, for the purposes of law enforcement and to safeguard national security;

This would be a good model for a general exemption: you can circumvent to do noninfringing things. Lawmakers have recognized, with this provision, that the ban on circumventing TPMs could interfere with legitimate activities that have nothing to do with copyright law, and provided a broad and general assurance that these noninfringing activities will not give rise to liability under the new regime.

VII. Non-infringing activities carried out by an investigator who has legally obtained a copy or sample of a work, performance or performance not fixed or sample of a work, performance or execution, or phonogram with the sole purpose of identifying and analyzing flaws in technologies for encoding and decoding information;

This exemption again is limited to identifying flaws in the TPM itself, as opposed to analyzing the software subject to the TPM.

VIII. Non-profit activities carried out by a person for the purpose of making accessible a work, performance, or phonogram, in languages, systems, and other special means and formats, for persons with disabilities, in terms of the provisions in articles 148, section VIII and 209, section VI of this Law, as long as it is made from a legally obtained copy, and

Why does accessibility have to be nonprofit? This means that companies trying to serve the needs of the disabled will be unable to interoperate with works encumbered by TPMs.

IX. Any other exception or limitation for a particular class of works, performances, or phonograms, when so determined by the Institute at the request of the interested party based on evidence.

It is improper to create a licensing regime that presumptively bans speech and the exercise of fundamental rights, and then requires the proponents of those rights to prove their rights to the government before exercising them. We have sued the US government over its regime, and the case is pending.

Article 114 Quinquies.- The conduct sanctioned in article 232 bis shall not be considered as a violation of this Law:

These are the exemptions to the ban on providing technology capable of circumvention, as opposed to the act of circumventing oneself. They have the same flaws as the corresponding exemptions above, and they don’t even include the option to establish new, necessary exemptions over time. This limitation is present in the US regime, as well, and has sharply curtailed the practical utility of the exemptions obtained via subsequent rulemaking. They also do not include the very narrow privacy and library/archive exemptions, meaning that it is unlawful to give people the tools to take advantage of those rights.

I. When it is carried out in relation to effective technological protection measures that control access to a work, interpretation or execution, or phonogram and by virtue of the following functions:

a) The activities carried out by a non-profit person, in order to make an accessible format of a work, performance or execution, or a phonogram, in languages, systems and other modes, means and special formats for a person with a disability, in terms of the provisions of articles 148, section VIII and 209, section VI of this Law, as long as it is made from a copy legally obtained;

b) Non-infringing reverse engineering processes carried out in good faith with respect to the copy that has been legally obtained of a computer program that effectively controls access in relation to the particular elements of said computer programs that have not been readily available to the person involved in that activity, with the sole purpose of achieving the interoperability of an independently created computer program with other programs;

c) Non-infringing activities carried out by an investigator who has legally obtained a copy or sample of a work, performance or performance not fixed or sample of a work, performance or execution, or phonogram with the sole purpose of identifying and analyzing flaws in technologies for encoding and decoding information;

d) The inclusion of a component or part thereof, with the sole purpose of preventing minors from accessing inappropriate content, online, in a technology, product, service or device that itself is not prohibited;

e) Non-infringing activities carried out in good faith with the authorization of the owner of a computer, computer system or network, carried out for the sole purpose of testing, investigating or correcting the security of that computer, computer system or network, and

f ) The activities carried out by persons legally authorized in terms of the applicable legislation, for the purposes of law enforcement and to safeguard national security.

II. When it is carried out in relation to effective technological measures that protect any copyright or related right protected in this Law and by virtue of the following functions:

a) Non-infringing reverse engineering processes carried out in good faith with respect to the copy that has been legally obtained of a computer program that effectively controls access in relation to the particular elements of said computer programs that have not been readily available to the person involved in that activity, with the sole purpose of achieving the interoperability of an independently created computer program with other programs, and

b) The activities carried out by persons legally authorized in terms of the applicable legislation, for the purposes of law enforcement and to safeguard national security.

Article 114 Sexies.- It is not a violation of rights management information, the suspension, alteration, modification or omission of said information, when it is carried out in the performance of their functions by persons legally authorized in terms of the applicable legislation, for the effects of law enforcement and safeguarding national security.

Article 232 Bis.- A fine of 1,000 UMA to 20,000 UMA will be imposed on whoever produces, reproduces, manufactures, distributes, imports, markets, leases, stores, transports, offers or makes available to the public, offer to the public or provide services or carry out any other act that allows having devices, mechanisms, products, components or systems that:

Again, it’s damaging to culture and innovation to ban non-infringing activities and technologies simply because they circumvent access controls.

I. Are promoted, published or marketed with the purpose of circumventing effective technological protection measure;

II. Are used predominantly to circumvent any effective technological protection measure, or

This seems to suggest that a technologist who makes a technology with noninfringing uses can be liable because others, independently, have used it unlawfully.

III. Are designed, produced or executed with the purpose of avoiding any effective technological protection measure.

Article 232 Ter.- A fine of 1,000 UMA to 10,000 UMA will be imposed, to those who circumvent an effective technological protection measure that controls access to a work, performance, or phonogram protected by this Law.

Article 232 Quáter.- A fine of 1,000 UMA to 20,000 UMA will be imposed on those who, without the respective authorization:

I. Delete or alter rights management information;

This kind of vague prohibition invites nuisance litigation. There are many harmless ways to ‘alter’ rights management information – for accessibility, convenience, or even clarity. In addition, when modern cameras take pictures, they often automatically apply information that identifies the author. This creates privacy concerns, and it is a common social media practice to strip that identifying information in order to protect users. While large platforms can obtain a form of authorization via their terms of service, it should not be unlawful to remove identifying information in order to protect the privacy of persons involved in the creation of a photograph (for instance, those attending a protest or religious event).
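To make concrete how routine and mechanical this kind of “alteration” is, here is a minimal sketch of metadata stripping (our illustration only, not any particular platform’s implementation; the `strip_exif` helper is hypothetical). A JPEG file stores EXIF data, including author and device tags, in APP1 marker segments, so removing it can be as simple as copying the file while dropping those segments:

```python
import struct

def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Return a copy of a JPEG with its EXIF (APP1) segments removed.

    Walks the marker segments that precede the image scan and drops any
    APP1 segment whose payload begins with the EXIF header; everything
    else is copied through unchanged. (A sketch: it assumes the usual
    layout of length-bearing segments before the start-of-scan marker.)
    """
    if jpeg_bytes[:2] != b"\xff\xd8":           # SOI marker opens every JPEG
        raise ValueError("not a JPEG")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break                               # no longer at a marker
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:                      # SOS: scan data follows,
            out += jpeg_bytes[i:]               # copy the rest verbatim
            return bytes(out)
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        segment = jpeg_bytes[i:i + 2 + length]
        # APP1 (0xE1) segments carrying the "Exif\0\0" header are dropped.
        is_exif = marker == 0xE1 and segment[4:10] == b"Exif\x00\x00"
        if not is_exif:
            out += segment
        i += 2 + length
    out += jpeg_bytes[i:]
    return bytes(out)
```

The image itself is untouched; only the identifying segments disappear. Under a prohibition on deleting or altering rights management information without authorization, even this kind of privacy-protective re-encoding is put at legal risk.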

II. Distribute or import for distribution, rights management information knowing that this information has been deleted, altered, modified or omitted without authorization, or

III. Produce, reproduce, publish, edit, fix, communicate, transmit, distribute, import, market, lease, store, transport, disclose or make available to the public copies of works, performances, or phonograms, knowing that the rights management information has been deleted, altered, modified or omitted without authorization.

Federal Criminal Code

Article 424 bis.- A prison sentence of three to ten years and two thousand to twenty thousand days fine will be imposed:

I. Whoever produces, reproduces, enters the country, stores, transports, distributes, sells or leases copies of works, phonograms, videograms or books, protected by the Federal Law on Copyright, intentionally, for the purpose of commercial speculation and without the authorization that must be granted by the copyright or related rightsholder according to said law.

The same penalty shall be imposed on those who knowingly contribute or provide in any way raw materials or supplies intended for the production or reproduction of works, phonograms, videograms or books referred to in the preceding paragraph;

This is ridiculously harsh and broad, even in the most generous reading. And the chilling effect of this criminal prohibition will go even further. If one “knows” they are providing paper to someone but do not know that person is using it to print illicit copies, there should be complete legal clarity that they are not liable, let alone criminally liable.

II. Whoever manufactures, for profit, a device or system whose purpose is to deactivate the electronic protection devices of a computer program, or

As discussed, there are many legitimate and essential reasons for deactivating TPMs.

III. Whoever records, transmits or makes a total or partial copy of a protected cinematographic work, exhibited in a movie theater or places that substitute for it, without the authorization of the copyright or related rightsholder.

Jail time for filming any part of a movie in a theater is absurdly draconian and disproportionate.

Article 424 ter.- A prison sentence of six months to six years and a fine of five thousand to thirty thousand days will be imposed on whoever sells to any final consumer, on the roads or in public places, intentionally and for the purpose of commercial speculation, copies of the works, phonograms, videograms or books referred to in section I of the previous article.

If the sale is made in commercial establishments, or in an organized or permanent manner, the provisions of article 424 Bis of this Code will be applied.

Again, jail for such a violation is extremely disproportionate. The same comment applies to many of the following provisions.

Article 425.- A prison sentence of six months to two years or three hundred to three thousand days fine will be imposed on anyone who knowingly and without right exploits an interpretation or an execution for profit.

Article 426.- A prison term of six months to four years and a fine of three hundred to three thousand days will be imposed in the following cases:

I. Whoever manufactures, modifies, imports, distributes, sells or leases a device or system to decipher an encrypted satellite signal, carrier of programs, without authorization of the legitimate distributor of said signal;

II. Whoever performs, for profit, any act with the purpose of deciphering an encrypted satellite signal, carrier of programs, without authorization from the legitimate distributor of said signal;

III. Whoever manufactures or distributes equipment intended to receive an encrypted cable signal carrying programs, without authorization from the legitimate distributor of said signal, or

IV. Whoever receives or assists another to receive an encrypted cable signal carrying programs without the authorization of the legitimate distributor of said signal.

Article 427 Bis.- Whoever, knowingly and for profit, circumvents without authorization any effective technological protection measure used by producers of phonograms, artists, performers, or authors of any work protected by copyright or related rights will be punished with a prison sentence of six months to six years and a fine of five hundred to one thousand days.

Article 427 Ter.- Whoever, for profit, manufactures, imports, distributes, rents or in any way markets devices, products or components intended to circumvent an effective technological protection measure used by phonogram producers, artists or performers, as well as the authors of any work protected by copyright or related rights, will be punished with six months to six years of prison and a fine of five hundred to one thousand days.

Article 427 Quater.- Whoever, for profit, provides or offers services to the public intended mainly to avoid an effective technological protection measure used by phonogram producers, artists or performers, as well as the authors of any work protected by copyright or related rights, will be punished with six months to six years in prison and a fine of five hundred to one thousand days.

Article 427 Quinquies.- Anyone who knowingly, without authorization and for profit, deletes or alters, by himself or through another person, any rights management information will be punished with six months to six years in prison and a fine of five hundred to one thousand days.

The same penalty will be imposed on who for profit:

I. Distribute or import for its distribution rights management information, knowing that it has been deleted or altered without authorization, or

II. Distribute, import for distribution, transmit, communicate, or make available to the public copies of works, performances, or phonograms, knowing that rights management information has been removed or altered without authorization.

Notice and takedown provisions

Article 114 Septies.- The following are considered Internet Service Providers:

I. Internet Access Provider is the person who transmits, routes or provides connections for digital online communications without modification of their content, between or among points specified by a user, of material of the user’s choosing, or that makes the intermediate and transient storage of that material done automatically in the course of a transmission, routing or provision of connections for digital online communications.

II. Online Service Provider is a person who performs any of the following functions:

a) Caching carried out through an automated process;

b) Storage, at the request of a user, of material that is hosted in a system or network controlled or operated by or for an Internet Service Provider, or

c) Referring or linking users to an online location by using information location tools, including hyperlinks and directories.

Article 114 Octies.- The Internet Service Providers will not be responsible for the damages caused to copyright holders, related rights and other holders of any intellectual property right protected by this Law, for the copyright or related rights infringements that occur in their networks or online systems, as long as they do not control, initiate or direct the infringing behavior, even if it takes place through systems or networks controlled or operated by them or on their behalf, in accordance with the following:

I. The Internet Access Providers will not be responsible for the infringement, as well as the data, information, materials and contents that are transmitted or stored in their systems or networks controlled or operated by them or on their behalf when:

For clarity: this is the section that applies to those who provide your Internet subscription, as opposed to the websites and services you reach over the Internet.

a) Do not initiate the chain of transmission of the materials or content, nor select the materials or content of the transmission or its recipients, and

b) Include and do not interfere with effective standard technological measures, which protect or identify material protected by this Law, which are developed through an open and voluntary process by a broad consensus of copyright holders and service providers, which are available in a reasonable and non-discriminatory manner, and which do not impose substantial costs on service providers or substantial burdens on their network systems.

There is no such thing today as a standard technological measure, so this is dormant poison. US law contains a similar provision, and no technology has ever been adopted through the broad consensus it requires.

II. The Online Service Providers will not be responsible for the infringements, as well as the data, information, materials and content that are stored or transmitted or communicated through their systems or networks controlled or operated by them or on their behalf, and in cases that direct or link users to an online site, when:

First, for clarity, this is the provision that applies to the services and websites you interact with online, including sites like YouTube, Dropbox, Cloudflare, and search engines, but also sites of any size like a bulletin-board system or a server you run to host materials for friends and family or for your activist group.

The consequences for linking are alarming. Linking isn’t infringing in the US or Canada, and this is an important protection for public discourse. In addition, a linked resource can change from a non-infringing page to an infringing one.

a) In an expeditious and effective way, they remove, withdraw, eliminate or disable access to materials or content made available, enabled or transmitted without the consent of the copyright or related rights holder, and that are hosted in their systems or networks, once they have certain knowledge of the existence of an alleged infringement in any of the following cases:

1. When it receives a notice from the copyright or related rights holder or by any person authorized to act on behalf of the owner, in terms of section III of this article, or

It’s extremely dangerous to take a mere allegation as “certain knowledge” given how many bad faith or mistaken copyright takedowns are sent.

2. When it receives a resolution issued by the competent authority ordering the removal, elimination or disabling of the infringing material or content.

In both cases, reasonable measures must be taken to prevent the same content that is claimed to be infringing from being uploaded to the system or network controlled and operated by the Internet Service Provider after the removal notice or the resolution issued by the competent authority.

This provision effectively mandates filtering of all subsequent uploads, comparing them to a database of everything that has been requested to be taken down. Filtering technologies are overly broad and unreliable, and cannot make infringement determinations. This would be a disaster for speech, and the expense would also be harmful to small competitors or nonprofit online service providers.
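To make concrete why a "staydown" obligation amounts to mandatory filtering, here is a minimal, hypothetical sketch (not anything the law specifies) of the kind of system the provision implies: every new upload is checked against fingerprints of previously removed material, and a match is blocked regardless of whether the new use is licensed, fair, or public domain.

```python
import hashlib


class StaydownFilter:
    """Hypothetical sketch of a fingerprint-based 'staydown' filter.

    The point of the sketch: the filter only knows that content matches
    a prior takedown. It has no way to judge whether a later upload is
    licensed, a fair use, or public domain material that should never
    have been taken down in the first place.
    """

    def __init__(self):
        # Fingerprints of everything that has ever been taken down.
        self.blocked_fingerprints = set()

    def fingerprint(self, content: bytes) -> str:
        # Real systems use perceptual hashes; an exact hash keeps the
        # sketch simple and makes the limitation even clearer.
        return hashlib.sha256(content).hexdigest()

    def record_takedown(self, content: bytes) -> None:
        self.blocked_fingerprints.add(self.fingerprint(content))

    def allows_upload(self, content: bytes) -> bool:
        return self.fingerprint(content) not in self.blocked_fingerprints


f = StaydownFilter()
# Stand-in for an uncopyrightable work (e.g., a US government report).
public_domain_report = b"Report on the Investigation..."
f.record_takedown(public_domain_report)   # one mistaken takedown...
print(f.allows_upload(public_domain_report))  # ...blocks every later copy: False
```

A single bad-faith or mistaken notice poisons the database permanently, which is why the expense and overbreadth fall hardest on lawful speech and on small providers.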

b) If they remove, disable or suspend unilaterally and in good faith, access to a publication, dissemination, public communication and/or exhibition of the material or content, to prevent the violation of applicable legal provisions or to comply with the obligations derived of a contractual or legal relationship, provided they take reasonable steps to notify the person whose material is removed or disabled.

c) They have a policy that provides for the termination of accounts of repeat offenders, which is publicly known by their subscribers;

This vague provision is also often a sword wielded by rightsholders. When the service provider is essential, such as access to the Internet, termination is an extreme measure and should not be routine.

d) Include and do not interfere with effective standard technological measures that protect or identify material protected by this Law, which are developed through an open and voluntary process by a broad consensus of copyright holders and service providers, which are available in a reasonable and non-discriminatory manner, and that do not impose substantial costs on service providers or substantial burdens on their systems or networks,

Again, there’s not yet any technology considered a standard technological measure.

e) Online Service Providers referred to in subsections b) and c) of section II of article 114 Septies must, in addition to complying with the immediately preceding paragraph, not receive a financial benefit attributable to the infringing conduct when the provider has the right and ability to control the infringing conduct.

This is a bit sneaky and could seriously undermine the safe harbor. Platforms do profit from user activity, and do technically have the ability to remove content – if that’s enough to trigger liability or to defeat a safe harbor, then the safe harbor is essentially null for any commercial platform.

III. The notice referred to in subsection a), numeral 1, of the previous section, must be submitted through the forms and systems as indicated in the regulations of the Law, which will establish sufficient information to identify and locate the infringing material or content.

Said notice shall contain as a minimum:

1. Indicate the name of the rightsholder or legal representative and the means of contact for receiving notifications;

2. Identify the content of the claimed infringement;

3. Express the interest or right regarding the copyright, and

4. Specify the details of the electronic location to which the claimed infringement refers.

The user whose content is removed, deleted or disabled due to probable infringing behavior and who considers that the Online Service Provider is in error, may request the content be restored through a counter-notice, in which he/she must demonstrate the ownership or authorization he/she has for that specific use of the content removed, deleted or disabled, or justify its use according to the limitations or exceptions to the rights protected by this Law.

The Online Service Provider who receives a counter-notice in accordance with the provisions of the preceding paragraph, must report the counter-notice to the person who submitted the original notice, and enable the content subject of the counter-notice, unless the person who submitted the original notice initiates a judicial or administrative procedure, a criminal complaint or an alternative dispute resolution mechanism within a period not exceeding 15 business days since the date the Online Service Provider reported the counter-notice to the person who submitted the original notice.
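As a concrete illustration of the 15-business-day window described above, a short sketch for computing the restoration deadline. This is purely illustrative: it skips weekends only, while the actual regulations would presumably also exclude Mexican public holidays.

```python
from datetime import date, timedelta


def counter_notice_deadline(reported: date, business_days: int = 15) -> date:
    """Last day for the original notice sender to initiate a proceeding
    before the provider must restore the content.

    Counts forward `business_days` weekdays from the date the provider
    reported the counter-notice. Holidays are deliberately omitted from
    this sketch.
    """
    day = reported
    remaining = business_days
    while remaining > 0:
        day += timedelta(days=1)
        if day.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return day


# If the provider reports the counter-notice on Friday, 2020-07-03,
# 15 business days later falls on Friday, 2020-07-24.
print(counter_notice_deadline(date(2020, 7, 3)))  # 2020-07-24
```

Note that during those roughly three calendar weeks the content stays down on nothing more than the original allegation.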

It should be made clear that the rightsholder is obligated to consider exceptions and limitations before sending a takedown.

IV. Internet Service Providers will not be obliged to supervise or monitor their systems or networks controlled or operated by them or on their behalf, to actively search for possible violations of copyright or related rights protected by this Law and that occur online.

In accordance with the provisions of the Federal Law on Telecommunications and Broadcasting, Internet Service Providers may carry out proactive monitoring to identify content that violates human dignity, is intended to nullify or impair rights and freedoms, as well as those that stimulate or advocate violence or a crime.

This provision is sneaky. It says “you don’t have to filter, but you’re allowed to look for content that impairs rights (like copyright) or a crime (like the new crimes in this law).” Given that the law also requires the platform to make sure that users cannot re-upload content that is taken down, it’s cold comfort to say here that they don’t have to filter proactively. At best, this means that a platform does not need to include works in its filters until it has received a takedown request for the works in question.

V. An Internet Service Provider’s inability to meet the requirements set forth in this article does not, by itself, generate liability for damages for violations of copyright and related rights protected by this Law.

This provision is unclear. Other provisions seem to indicate liability for failure to enact these procedures. Likely this means that a platform would suffer the fines below, but not liability for copyright infringement, if it is impossible to comply.

Article 232 Quinquies.- A fine of 1,000 UMA to 20,000 UMA will be imposed when:

I. Anyone who makes a false statement in a notice or counter-notice, affecting any interested party, when the Online Service Provider has relied on that notice to remove, delete or disable access to content protected by this Law, or has restored access to content on the basis of said counter-notice;

This is double-edged: it potentially deters both notices and counternotices. It also does not provide a mechanism to prevent censorship; a platform continues to be obligated to act on notices that include falsities.

II. To the Online Service Provider that does not remove, delete or disable access in an expedited way to the content that has been the subject of a notice by the owner of the copyright or related right or by someone authorized to act on behalf of the holder, or competent authority, without prejudice to the provisions of article 114 Octies of this Law, or

This is a shocking expansion of liability. In the US, the safe harbor provides important clarity, but even without the safe harbor, a platform is only liable if they have actually committed secondary copyright infringement. Under this provision, even a spurious takedown must be complied with to avoid a fine. This will create even worse chilling effects than what we’ve seen in the US.

III. To the Internet Service Provider that does not provide expeditiously to the judicial or administrative authority, upon request, the information that is in their possession and that identifies the alleged infringer, in the cases in which said information is required in order to protect or enforce copyright or related rights within a judicial or administrative proceeding.

We have repeatedly seen these kinds of information requests used alongside a pointless copyright claim in order to unmask critics or target people for harassment. Handing out personal information should not be automatic simply because of an allegation of copyright infringement. In the US, we have fought for and won protections for anonymous speakers when copyright owners seek to unmask them because of their expression of their views. For instance, we recently defended the anonymity of a member of a religious community who questioned a religious organization, when the organization sought to abuse copyright law to learn their identity.

Share
Categories
Commentary Creativity & Innovation DMCA Fair Use Intelwars

When the Senate Talks About the Internet, Note Who They Leave Out

In the midst of pandemic and protest, the Senate Judiciary Committee continued on with the third of many planned hearings about copyright. It is an odd moment to be considering whether or not the notice-and-takedown system laid out by Section 512 of the Digital Millennium Copyright Act is working or not, but since Section 512 is a cornerstone of the Internet and because protestors and those at home trying to avoid disease depend on the Internet, we watched it.

There was not a lot said at the hearing that we have not heard before. Major media and entertainment companies want Big Tech companies to implement copyright filters. Notice and takedown is burdensome to them, and they believe that technologists surely have a magic solution to the complicated problem of reconciling free expression with copyright that they simply have not implemented because Section 512 doesn’t require them to.

Artists have real problems and real concerns. In many sectors, including publishing and music, profits are high, but after the oligopolists of media and technology have taken their cut, there’s little left for artists. But the emphasis on Section 512 as the problem is misplaced and doesn’t ultimately serve artists. Before the DMCA created a way to take down material by sending a notice to platforms, the only remedy was to go to court. DMCA takedowns, by comparison, are as simple as sending email—or hiring an outside company to send emails on an artist’s behalf. The call for more Internet speech to be taken down automatically, on the algorithmic decision of some highly mistrusted tech monopolists, and without even an unproven allegation of infringement, is calling for a remedy without a process. It is calling for legal, protected expression to be in danger.

Artists are angry, as so many are, at Big Tech. But Big Tech can already afford to do the things that rightsholders want. And large rightsholders—like Hollywood studios and major music labels—likewise have an interest in taking down as much as they can, be it protected fair uses of their works or true infringement. That places Internet users in between the rock of Big Tech and the hard place of major entertainment companies. Artists and Internet users deserve alternatives to both Big Tech and major entertainment companies. Requiring tech companies to have filters, to search out infringement on their own, or any proposals requiring tech companies to do more will only solidify the positions of companies like Google and Facebook, which can afford to do these measures, and create more barriers for new competitors.

As Meredith Rose, Policy Counsel at Public Knowledge, said during the hearing:

This is not about content versus tech. I am here to speak about how Section 512 impacts the more than 229 million American adults who use the Internet as more than just a delivery mechanism for copyrighted content. They use it to pay bills, to learn, to work, to socialize, to receive healthcare. And yet they are missing from the Copyright Office’s Section 512 report, they are missing from the systems and procedures that govern their rights, and too often they are missing from the debate on Capitol Hill.

We likewise note the absence of Internet users—a group that grows and grows and, whether they identify themselves as such or not, now includes 90% of Americans.

During the hearing, a witness wondered if there was a generation of artists who will be lost because it is just too difficult to police their copyrights online. This ignores the generation of artists who already share their work online, and who run into so many problems asserting their fair use rights. We note their absence as well.

We have already gone into depth about how the Copyright Office’s report on Section 512—mentioned quite a bit in the hearing—fails to take users and the importance of Internet access into account. Changing the foundation of the Internet, throwing up roadblocks to people expressing themselves online, creating new quasi-courts for copyright, or forcing the creation and adoption of faulty and easily abused automated filters will hurt users. And we are, almost all of us, Internet users.

Share
Categories
Article 13 Commentary COVID-19 and Digital Rights Creativity & Innovation Fair Use Intelwars

Copyright and Crisis: Filters Are Not the Answer

It’s been a joke for years now, from the days when Facebook was just a website where you said you were eating a sandwich and Instagram was just where you posted photos of said sandwich, but, right now, we really are living our everyday lives online. Teachers are trying to teach classes online, librarians are trying to host digital readings, and trainers are trying to offer home classes.

With more people entering the online world, more people are encountering the barriers created by copyright. Now is no time to make those barriers higher, but a new petition directed at tech companies does exactly that, and in the process tries to do for the US what Article 17 of last year’s European Copyright Directive is doing for Europe—create a rule requiring online service providers to send everything we post to the Internet to black-box machine learning filters that will block anything that the filters classify as “copyright infringement.”

The petition from musical artists calls on companies to “reduce copyright infringement by establishing ‘standard technical measures.’” The argument is that, because of COVID-19, music labels and musicians cannot tour and, therefore, are having a harder time making up for losses due to online copyright infringement. So the platforms must do more to prevent that infringement.

Musical artists are certainly facing grave financial harms due to COVID-19, so it’s understandable that they’d like to increase their revenue wherever they can. But there are at least three problems with this approach, and each creates a situation which would cause harm for Internet users and wouldn’t achieve the ends musicians are seeking.

First, the Big Tech companies targeted by the petition already employ a wide variety of technical measures in the name of blocking infringement, and long experience with these systems has proven them to be utterly toxic to lawful, online speech. YouTube even warned that this current crisis would prompt even more mistakes, since human review and appeals were going to be reduced or delayed. It has, at least, decided not to issue strikes except where it has “high confidence” that there was some violation of YouTube policy. In a situation where more people than before are relying on these platforms to share their legally protected expression, we should, if anything, be looking to lessen the burden on users, not increase it. We should be looking to make these systems fairer and more transparent, and their appeals more accessible, not adding more technological barriers.

YouTube’s Content ID tool has flagged everything from someone speaking into a mic to check the audio to a synthesizer test. Scribd’s filter caught and removed a duplicate upload of the Mueller report, despite the fact that anything created by a federal government employee as part of their work can’t even be copyrighted. Facebook’s Rights Manager keeps flagging its users’ performances of classical music composed hundreds of years ago. Filters can’t distinguish lawful from unlawful content. Human beings need to review these matches.

But they don’t. Or if they do, they aren’t trained to distinguish lawful uses. Five rightsholders were happy to monetize ten hours of static because Content ID matched it. Sony refused the dispute by one Bach performer, who only got his video unblocked after leveraging public outrage. A video explaining how musicologists determine whether one song infringes on another was taken down by Content ID, and the system was so confusing that law professors who are experts in intellectual property couldn’t figure out the effect of the claim in their account if they disputed it. They only got the video restored because they were able to get in touch with YouTube via their connections. Private connections, public outrage, and press coverage often get these bad matches undone, but they are not a substitute for a fair system.

Second, adding more restrictions will make creating and sharing our common culture harder at a time when, if anything, it needs to be easier. We should not require everyone online to become an expert in law and in the labyrinthine policies of a particular company or industry just as whole new groups of people are transferring their lives, livelihoods, and communities to the Internet.

If there’s one lesson recent history has taught us, it’s that “temporary, emergency measures” have a way of sticking around after the crisis passes, becoming a new normal. For the same reason that we should be worried about contact tracing apps becoming a permanent means for governments to track and control whole populations, we should be alarmed at the thought that all our online lives (which, during the quarantine, are almost our whole lives) will be subjected to automated surveillance, judgment and censorship by a system of unaccountable algorithms operated by massive corporations where it’s impossible to get anyone to answer an email.

Third, this petition appears to be a dangerous step toward the content industry’s Holy Grail: manufacturing an industry consensus on standard technical measures (STMs) to police copyright infringement. According to Section 512 of the Digital Millennium Copyright Act (DMCA), service providers must accommodate STMs in order to receive the safe harbor protections from the risk of crippling copyright liability. To qualify as an STM, a measure must (1) have been developed pursuant to a broad consensus in an “open, fair, voluntary, multi-industry standards process”; (2) be available on reasonable and nondiscriminatory terms; and (3) cannot impose substantial costs on service providers. Nothing has ever met all three requirements, not least because no “open, fair, voluntary, multi-industry standards process” exists.

Many in the content industries would like to change that, and some would like to see the U.S. follow the EU in adopting mandatory copyright filtering. The EU’s Copyright Directive (whose most controversial provision is Article 17) passed a year ago, but only one country has made progress toward implementing it. Even before the current crisis, countries were having trouble reconciling the rights of users, the rights of copyright holders, and the obligations of platforms into workable law. The United Kingdom took Brexit as a chance not to implement it. And requiring automated filters in the EU runs into the problem that the EU has recognized the danger of algorithms by giving users the right not to be subject to decisions made by automated tools.

Put simply, the regime envisioned by Article 17 would end up being too complicated and expensive for most platforms to build and operate. YouTube’s Content ID alone has cost $100,000,000 to date, and it just filters videos for one service. Musicians are 100 percent right to complain about the size and influence of YouTube and Facebook, but mandatory filtering creates a world in which only YouTube and Facebook can afford to operate. Cementing Big Tech’s dominance is not in the interests of musicians or users. Mandatory copyright filters aren’t a way to control Big Tech: they’re a way for Big Tech to buy Perpetual Internet Domination licenses that guarantee that they need never fear a competitor.

Musicians are under real financial stress due to COVID-19, and they are not incorrect to see something wrong with just how much of the world is in the hands of Big Tech. But things will not get better for them or for users by entrenching its position or making it harder to share work online.

Share
Categories
Commentary COVID-19 and Digital Rights Fair Use Intelwars

Sharing Our Common Culture in Uncommon Times

We are in an unprecedented time. People are being told to stay home as much as possible. Some of us are lucky enough to have jobs that can be done remotely, schools are closed and kids are home, and healthcare, grocery, or other essential workers are looking for respite where they can safely find it. All of which means that, for now, for many of us, the Internet is not only our town square, but also our school, art gallery, museum, and library.

Users around the U.S.—from individual creators to libraries to educators to community organizers—are rising to the challenge this presents by going online to share information, music, books, and art. High-profile examples include LeVar Burton telling stories to children and adults alike and the cast of Hamilton reuniting via a YouTube show. Musicians of all levels of fame are performing through a wide range of services and apps. Teachers are taking on the tremendous task of educating online, rapidly finding, developing and sharing resources so that their students don’t have to lose their place while they shelter in place. In response to the temporary closure of libraries around the US and abroad, the Internet Archive has created a National Emergency Library (NEL) that gives members of the public digital access to 1.5 million books, without charge. Universities, private companies, and nonprofits are pledging to make their intellectual property available free of charge as needed to fight the COVID pandemic and minimize its impacts.

But with more and more people relying on the Internet to form and maintain community, more and more people are also finding themselves on the pointy end of a number of legal swords. Copyright, in particular, has become a potential threat. For example, celebrity doctor Dr. Drew tried to use bogus copyright claims to shut down a video compilation of his incorrect advice about the coronavirus. Burton said copyright worries had limited his reading choices. Others are getting caught in automated filtering machines, like the man who used Facebook Live to stream video of himself playing the violin, or the teacher trying to do a webcast for her Pre-K class while her husband was watching Wrestlemania in the background. The Authors Guild and others are up in arms about the NEL, insisting it will destroy authors’ livelihoods.


Those challenges are predictable, but it’s heartening to see so many rallying in favor of sharing our common culture. When LeVar Burton expressed concern about copyright liability, authors such as Neil Gaiman and our own Cory Doctorow stepped up to grant permission. The International Federation of Library Associations has drafted an open letter to the World Intellectual Property Organization calling on WIPO and its members to do their part to facilitate public interest uses of work, and for rightsholders to do the same, and more than 312 organizations have signed on in less than a week.

As for the NEL, the New Yorker called it a “gift to readers,” and more than 300 individuals and institutions have endorsed the project. The Internet Archive has put out a detailed response to its critics, explaining how the project works, why it is legally protected, and how authors can opt out if they wish. It’s important to emphasize that the NEL only lends books for two weeks, after which the copy automatically disappears from the user’s device. Moreover, the NEL does not offer new releases, as the collection excludes titles published within the last five years, and authors or publishers can request that a title be removed. And as Jonathan Band notes, there is no mechanism for the Internet Archive, or any other library, to license emergency access to many of the books in the collection. Finally, while the Archive does not closely log reader habits (showing a laudable and necessary respect for reader privacy), the information the Archive can share suggests that the NEL is not significantly affecting ebook licensing. Most books are “opened” for less than 30 minutes, which suggests that readers are using the NEL to browse, or that they are not interested in the PDF copy the Archive lends, which is significantly less reader-friendly than what you might see on a Kindle. The Archive also notes that 90% of the books borrowed were published a decade or more ago.

Fair Use Has a Posse

Many of these disputes over sharing culture are being played out in the court of public opinion for now, but users should know that there are strong legal protections as well. For example, our friends at American University have put together an outstanding primer for teachers on copyright and online learning. As they explain, the fair use doctrine protects many online learning practices, including reading aloud. Library adviser Kyle Courtney also has a great explainer on libraries, fair use, and exigent circumstances.

In a nutshell, the fair use doctrine allows you to use a copyrighted work without permission in a variety of circumstances. It is how we safeguard creativity and free speech in a world where copyright gives exclusive control of some kinds of expression to the copyright holder. To decide whether a use is fair, courts consider the second user’s purpose (Is it new and different from that of the original creator? Is it commercial or for-profit?); the nature of the original work (Was it factual or fictional? Published or unpublished?); how much of the original work was used (Was it more than necessary to accomplish the second user’s purpose?); and market harm (Would the second use harm a likely or actual licensing market for the original work?). A court will weigh these four factors in light of copyright’s fundamental purpose of fostering creativity and innovation, and, in many cases, the public interest. This last bit is particularly important now. COVID-19 has created, almost by definition, a new and powerful public interest purpose that must be considered in any fair use analysis.

People are doing things right now that they instinctively know are right, are helping people, are giving light, or are simply things they would be able to do in the physical world. In many cases, their instinct is correct, but they don’t know they have legal protections. And even when those people have rights and defenses, they may not know how to use them, or have the resources to do it. Fortunately, fair use has a posse at EFF. If you are the target of an unfair infringement allegation, contact info@eff.org and we’ll see if the posse can help.


Show Support for Digital Rights During Video Calls with EFF Virtual Backgrounds

Want to show your support for EFF while you spend more and more time in video conferences and chats? Here’s one fun way: virtual backgrounds! 

We’ve collected some of our favorite EFF designs promoting issues like transparency, creativity, innovation, and privacy, so you can protect your own privacy (and add some joy to your conference calls) by replacing your usual background. These images are available under the Creative Commons CC BY license.

You can find even more virtual background ideas, and other CC BY photos and images, at EFF’s Flickr page. Enjoy!

Cat riding a unicorn with lightning bolts in his hands.

To call attention to the NSA’s collect-it-all approach to surveillance and to demand an end to mass spying, EFF joined forces with environmental campaigning group Greenpeace and the Tenth Amendment Center (TAC) to fly a blimp over the NSA’s data center in Utah.

Airship flying above NSA hq.

Even if you aren’t on a call with Congress, you can show the importance of telling them where you stand on important issues of digital liberties.

Two figures with bullhorns spotlighting the US capitol.

Our robot mecha girl fights to protect users. This banner shows that you do, too.

a girl operating a mecha holding aloft a flame thrower/liberty light.

SLAPPs are lawsuits that are filed to bully or bankrupt activists, protesters, journalists, bloggers, or even online reviewers. The point of a SLAPP isn’t to resolve a legitimate legal dispute—instead, it seeks to leverage the financial and psychological pain of litigation against someone who has spoken out, and silence or diminish that person’s speech. Unfortunately, SLAPPs have been on the rise. Show your support for the ability to speak out with this background.

protestors under a protective dome, being struck by a judge's gavel.

Show your support for strong passwords—or other dice-related activities—with this banner. 

A pattern of multi-colored dice against a turquoise background.

The Digital Millennium Copyright Act (DMCA) “safe harbor” provisions protect service providers who meet certain conditions from monetary damages for the infringing activities of their users and other third parties on the net. Without these protections, the risk of potential copyright liability would prevent many online intermediaries from providing services such as hosting and transmitting user-generated content. Thus the safe harbors, while imperfect, have been essential to the growth of the Internet as an engine for innovation and free expression.

Safe harbor image, with giant DMCA letters in the sky.

Even if you’re not rocking the EFF member hoodie that this design is pulled from, you can show off your support for EFF.

A fist holding two electric bolts, with circular text reading 'ELECTRONIC FRONTIER FOUNDATION'

Computer security and the lack of computer security is a fundamental issue that underpins much of how the Internet does (and doesn’t) function. Many of the policy issues that EFF works on are linked to security in deep ways including privacy and anonymity, DRM, censorship, and network neutrality.

Crossed keys icon, with a pink and grey starburst behind.

Whether it’s fighting copyright infringement or fighting criminal behavior online, it may be tempting to believe that more reliance on automation will solve problems. In reality, when we let computers make the final decision about what types of speech are allowed online, we build a wall around our freedom of expression. We can’t put robots in charge of the Internet.

Robots gathering with the intent of banning content based on copyright claims.

Imagine being able to walk around any street in any city and never worrying about checking an email, downloading a map, making a video call, or streaming a song. EFF believes that open wireless networks greatly contribute to that goal, and to the public good.

A city skyline, with overlapping wifi signals in the night sky.

EFF’s Privacy Badger is a browser add-on that stops advertisers and other third-party trackers from secretly tracking where you go and what pages you look at on the web. If an advertiser seems to be tracking you across multiple websites without your permission, Privacy Badger automatically blocks that advertiser from loading any more content in your browser. To the advertiser, it’s like you suddenly disappeared. Show your support for badgers everywhere (and push back against third-party tracking) with this banner.

EFF Privacy Badger logo
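The tracker-blocking behavior described above can be illustrated with a minimal sketch: a counter that blocks a third-party domain once it has been observed setting tracking cookies across several different first-party sites. This is only an illustration of the general idea, not Privacy Badger’s actual implementation (which is a browser extension with far more nuanced detection logic); the class, function names, and example domains here are all invented.

```python
# Minimal sketch of a "block after N sites" tracker heuristic.
# Hypothetical names; not Privacy Badger's real code or API.
from collections import defaultdict

TRACKING_THRESHOLD = 3  # block once a third party tracks you on 3 sites


class TrackerHeuristic:
    def __init__(self):
        # third-party domain -> set of first-party sites where it tracked us
        self.sites_seen = defaultdict(set)

    def observe(self, first_party, third_party, has_tracking_cookie):
        """Record that third_party set a tracking cookie while first_party loaded."""
        if has_tracking_cookie:
            self.sites_seen[third_party].add(first_party)

    def should_block(self, third_party):
        return len(self.sites_seen[third_party]) >= TRACKING_THRESHOLD


h = TrackerHeuristic()
for site in ["news.example", "shop.example", "blog.example"]:
    h.observe(site, "ads.tracker.example", has_tracking_cookie=True)
print(h.should_block("ads.tracker.example"))  # True: seen on 3 distinct sites
```

Counting distinct first-party sites (rather than raw requests) is what distinguishes cross-site tracking from a third party that merely appears once on a single page.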

Show that you’re part of the fight to protect mobile location data from government surveillance; to prevent Google, Apple, and other mobile OS makers from seizing inappropriate and anti-competitive control over mobile software development; to push back against DRM technologies that prevent users from modifying the software on phones and tablets they’ve purchased; and to hold telcos accountable for their invasions of user privacy and violations of network neutrality.

Overlapping mobile devices with spying eyes, and pastel colors.

Keys make the encrypted Internet more reliable and secure. 

Old fashioned keys embedded in a network of nodes with a bright turquoise background.

The Snowden leaks changed how the world sees NSA surveillance and caused a sea change in the policy landscape related to surveillance. After the leaks, EFF worked with dozens of coalition partners across the political spectrum to pass the USA Freedom Act, the first piece of legislation to rein in NSA spying in over thirty years—a bill that would have been unthinkable without the Snowden leaks. Check out 65 things we know thanks to the leaks, and support keeping an eye on surveillance with this banner.

A creepy opticon eye surrounded by other eyes.

We introduced this flickering EFF wallpaper when we updated our logo, and added special icons representing issues that we fight for, like privacy, security, and free speech. Grab the .mp4 file here to use it as a background.

eff logo flicker gif

Whistleblower Mark Klein revealed AT&T’s complicity in NSA mass spying. In AT&T’s Folsom Street facility, a device called a “fiberoptic splitter” made a complete copy of the email, web browsing requests, and other electronic communications sent to or from the customers of AT&T’s Internet service, as well as traffic from people who used other Internet service providers. The copied traffic was diverted into a room, 641A, pictured below, which was controlled by the NSA. Mark Klein’s picture of room 641A (CC BY 3.0) reminds users to be careful what they say over the Internet.

room 641a

Though EFF cannot guarantee that the use of this background screen will prevent a recording, it can be a good way to demand your privacy. This image is also available, slightly modified, as a wallpaper! 

(Use of this mark does not imply an endorsement and should not be used as such.) 

Bold white text on a black background reads "I DO NOT CONSENT TO THE RECORDING OF THIS VIDEO CALL" with an EFF logo below.


Empty Promises Won’t Save the .ORG Takeover

The Internet Society’s (ISOC) November announcement that it intended to sell the Public Interest Registry (PIR, the organization that oversees the .ORG domain name registry) to a private equity firm sent shockwaves through the global NGO sector. The announcement came just after a change to the .ORG registry agreement—the agreement that outlines how the registry operator must run the domain—that gives PIR significantly more power to raise registration fees and implement new measures to censor organizations’ speech.

It didn’t take long for the global NGO sector to put two and two together: take a new agreement that gives the registry owner power to hurt NGOs; combine it with a new owner whose primary obligation is to its investors, not its users; and you have a recipe for danger for nonprofits and NGOs all over the world that rely on .ORG. Since November, over 800 organizations and 24,000 individuals from all over the world have signed an open letter urging ISOC to stop the sale of PIR. Members of Congress, UN Special Rapporteurs, and US state charity regulators [pdf] have raised warning flags about the sale.

Take Action

Stand up for .ORG


Ethos Capital—the mysterious private equity firm trying to buy PIR—has heard the outcry. Ethos and PIR attempted last week to convene a secret meeting with NGO sector stakeholders to build support for the sale, and then abruptly canceled it. Ethos finally responded last Friday with an announcement that its promises to limit price increases and establish a “stewardship council” would be written into the registry agreement. But Ethos’s weak commitments don’t really address the NGO community’s concerns. Rather, they appear to be window-dressing: PIR would choose the initial council members itself and be able to veto future nominations, ensuring that the council remains in line with PIR’s management. And the limit on price increases still allows Ethos to more than double the fee over the next eight years, with no restrictions at all after that.

In a letter published today in the Nonprofit Times, EFF Executive Director Cindy Cohn and NTEN CEO Amy Sample Ward explain why the proposed measures wouldn’t solve the many problems with the sale of .ORG:

The proposed “Stewardship Council” would fail to protect the interests of the NGO community. First, the council is not independent. The Public Interest Registry (PIR) board’s ability to veto nominated members would ensure that the council will not include members willing to challenge Ethos’ decisions. PIR’s handpicked members are likely to retain their seats indefinitely. The NGO community must have a real say in the direction of the .ORG registry, not a nominal rubber stamp exercised by people who owe their position to PIR.

Moreover, your proposal gives the Stewardship Council authority only to veto changes to existing PIR policies concerning 1) use or disclosure of registration data or other personal data of .ORG domain name registrants and users, and, 2) appropriate limitations and safeguards regarding censorship of free expression in the .ORG domain name space. While these areas are important, they do not represent the universe of issues where PIR’s interest might diverge from the interest of .ORG domain registrants.

And even within these areas, PIR would “reserve the right at all times in [its] sole judgment to take actions consistent with PIR’s Anti-Abuse Policy and to ensure compliance with applicable laws, policies and regulations.” In other words, the Stewardship Council would be powerless to intervene so long as PIR maintained that a given act of censorship was consistent with its Anti-Abuse Policy or required by some government.

Cohn and Ward go on to explain that the limits on fee increases also fail to provide comfort to NGOs worried about price hikes: if Ethos raises prices as allowed by the proposed rules, the price of .ORG registrations would more than double over eight years. After those eight years, there would be no limits whatsoever, and the stewardship council would be expressly barred from trying to limit future increases. “The relevant question is not what registrants can afford; it’s why a for-profit company should siphon ever more value from the NGO sector year after year. The cost of operating a registry has gone down since 2002, not up.”

The sale of the .ORG registry affects every nonprofit and NGO you care about. If you believe that .ORG should listen to its users, not to investors, then please join us in demanding a stop to the sale.

Take Action

Stand up for .ORG
