Categories
Cancel culture Chinese virus COVID-19 First Amendment free speech Intelwars Left-wing college students professor University of Cincinnati

College instructor outed by student for calling COVID-19 ‘the Chinese virus’ is placed on leave

A University of Cincinnati adjunct instructor who was outed by one of his students for referring to COVID-19 as “the Chinese virus” has been placed on administrative leave with pay, the Cincinnati Enquirer reported, citing documents.

What’s the background?

A school investigation was launched after third-year engineering student Evan Sotzing, 20, posted on Twitter an email he received from adjunct instructor John Ucker, the paper said. The email came after Sotzing had to miss a lab session due to being quarantined for possible exposure to the novel coronavirus, the Enquirer noted.

“For students testing positive for the chinese [sic] virus, I will give no grade,” the email says, according to the paper. “You can read the info I sent to the class re: the torsion test.”

Sotzing’s post went viral.

“I think that the school should take disciplinary actions against the professor because [his] actions completely violate the school’s values,” Sotzing told the Enquirer last week, adding that he’s offended by Ucker’s “racist language” and that he’s concerned his instructor might punish students for adhering to national, state, and local health guidelines.

What happened next?

The paper reported that UC’s College of Engineering and Applied Sciences Dean John Weidner sent an email to Ucker Friday morning saying his courses would be handled by another faculty member for the time being.

“As you are aware, a student in one of your courses has raised a concern regarding one of your emails. This matter has been referred to UC’s Office of Equal Opportunity and Access (“OEOA”) for review,” the email reads, according to the Enquirer. “As such, effective immediately you are being placed on an administrative leave with pay pending the outcome of that review.”

Ucker’s “full cooperation” with the OEOA review is “both expected and appreciated,” Weidner added in the email, the paper noted.

The Enquirer noted that the following morning, Weidner confirmed the matter was referred for review to the OEOA, which handles matters concerning discrimination, harassment, or retaliation based on disability, race, color, religion, national origin, and other identities.

“These types of xenophobic comments and stigmatizations around location or ethnicity are more than troubling,” Weidner wrote to the Enquirer. “We can better protect and care for all when we speak about COVID-19 with both accuracy and empathy — something we should all strive for.”

Anything else?

Ucker’s personnel file indicates he’s taught at the university since 1996, the paper reported, and that an August letter states he was offered an adjunct position in the College of Engineering and Applied Science effective Aug. 24 through Dec. 12 for $3,600.

“Your appointment is contingent upon student enrollment, program need and student evaluation, and the University reserves the right to change or withdraw course offerings, instructors or schedules as these factors are evaluated and assessed,” the letter reads, according to the Enquirer.

The letter also states three requirements expected of Ucker, the paper said: To hold two office hours per week; to not miss any classes; and to “do a good job teaching and taking care of the students.”

As of Tuesday, Ucker had not replied to an email the Enquirer sent him last week.


[Embedded video: “UC student: Adjunct professor gave student 0 on lab because he was in quarantine” (youtu.be)]

Categories
free speech Intelwars Section 230 of the Communications Decency Act

Plaintiffs Continue Effort to Overturn FOSTA, One of the Broadest Internet Censorship Laws

Special thanks to legal intern Ross Ufberg, who was lead author of this post.

A group of organizations and individuals are continuing their fight to overturn the Allow States and Victims to Fight Online Sex Trafficking Act, known as FOSTA, arguing that the law violates the Constitution in multiple respects.

In legal briefs filed in federal court recently, plaintiffs Woodhull Freedom Foundation, Human Rights Watch, the Internet Archive, Alex Andrews, and Eric Koszyk argued that the law violates the First and Fifth Amendments, and the Constitution’s prohibition against ex post facto laws. EFF, together with Daphne Keller at the Stanford Cyber Law Center, as well as lawyers from Davis Wright Tremaine and Walters Law Group, represent the plaintiffs.

How FOSTA Censored the Internet

FOSTA led to widespread Internet censorship, as websites and other online services either prohibited users from speaking or shut down entirely. FOSTA accomplished this comprehensive censorship by making three major changes in law:

First, FOSTA creates a new federal crime for any website owner to “promote” or “facilitate” prostitution, without defining what those words mean. Organizations doing educational, health, and safety-related work, such as The Woodhull Foundation, and one of the leaders of the Sex Workers Outreach Project USA (SWOP USA), fear that prosecutors may interpret advocacy on behalf of sex workers as the “promotion” of prostitution. Prosecutors may view creation of an app that makes it safer for sex workers out in the field the same way. Now, these organizations and individuals—the plaintiffs in the lawsuit—are reluctant to exercise their First Amendment rights for fear of being prosecuted or sued.

Second, FOSTA expands potential liability for federal sex trafficking offenses by adding vague definitions and expanding the pool of enforcers. In addition to federal prosecution, website operators and nonprofits now must fear prosecution from thousands of state and local prosecutors, as well as private parties. The cost of litigation is so high that many nonprofits will simply cease exercising their free speech, rather than risk a lawsuit where costs can run into the millions, even if they win.

Third, FOSTA limits the federal immunity provided to online intermediaries that host third-party speech under 47 U.S.C. § 230 (“Section 230”). This immunity has allowed for the proliferation of online services that host user-generated content, such as Craigslist, Reddit, YouTube, and Facebook. Section 230 helps ensure that the Internet supports diverse and divergent viewpoints, voices, and robust debate, without every website owner needing to worry about being sued for their users’ speech. The removal of Section 230 protections resulted in intermediaries shutting down entire sections or discussion boards for fear of being subject to criminal prosecution or civil suits under FOSTA.

How FOSTA Impacted the Plaintiffs

In their filings asking a federal district court in Washington, D.C. to rule that FOSTA is unconstitutional, the plaintiffs describe how FOSTA has impacted them and a broad swath of other Internet users. Some of those impacts have been small and subtle, while others have been devastating.

Eric Koszyk is a licensed massage therapist who heavily relied on Craigslist’s advertising platform to find new clients and schedule appointments. Since April 2018, it’s been hard for Koszyk to supplement his family’s income with his massage business. After Congress passed FOSTA, Craigslist shut down the Therapeutic Services section of its website, where Koszyk had been most successful at advertising his services. Craigslist further prohibited him from posting his ads anywhere else on its site, despite the fact that his massage business is entirely legal. In a post about FOSTA, Craigslist said that it shut down portions of its site because the new law created too much risk. In the two years since Craigslist removed its Therapeutic Services section, Koszyk still hasn’t found a way to reach the same customer base through other outlets. His income is less than half of what it was before FOSTA.

Alex Andrews, a national leader in fighting for sex worker rights and safety, has had her activism curtailed by FOSTA. As a board member of SWOP USA, Andrews helped lead its efforts to develop a mobile app and website that would have allowed sex workers to report violence and harassment. The app would have included a database of reported clients that workers could query before engaging with a potential client, and would notify others nearby when a sex worker reported being in trouble. When Congress passed FOSTA, Alex and SWOP USA abandoned their plans to build this app. SWOP USA, a nonprofit, simply couldn’t risk facing prosecution under the new law.

FOSTA has also impacted a website that Andrews helped to create. The website Rate That Rescue is “a sex worker-led, public, free, community effort to help everyone share information” about organizations which aim to help sex workers leave their field or otherwise assist them. The website hosts ratings and reviews. But without the protections of Section 230, in Andrews’ words, the website “would not be able to function” because of the “incredible liability for the content of users’ speech.” It’s also likely that Rate That Rescue’s creators face criminal liability under FOSTA’s new criminal provisions because the website aims to make sex workers’ lives and work safer and easier. This could be considered to violate FOSTA’s provisions that make it a crime to promote or facilitate prostitution.

Woodhull Freedom Foundation advocates for sexual freedom as a human right, which includes supporting the health, safety, and protection of sex workers. Each year, Woodhull organizes a Sexual Freedom Summit in Washington, DC, with the purpose of bringing together educators, therapists, legal and medical professionals, and advocacy leaders to strategize on ways to protect sexual freedom and health. There are workshops devoted to issues affecting sex workers, including harm reduction, disability, age, health, and personal safety. This year, COVID-19 has made an in-person meeting impossible, so Woodhull is livestreaming some of the events. Woodhull has had to censor its ads on Facebook, and modify its programming on YouTube, just to get past those companies’ heightened moderation policies in the wake of FOSTA.

The Internet Archive, a nonprofit library that seeks to preserve digital materials, faces increased risk because FOSTA has dramatically increased the possibility that a prosecutor or private citizen might sue it simply for archiving newly illegal web pages. Such a lawsuit would be a real threat for the Archive, which is the Internet’s largest digital library.

FOSTA puts Human Rights Watch in danger as well. Because the organization advocates for the decriminalization of sex work, it could easily face prosecution for “promoting” prostitution.

Where the Legal Fight Against FOSTA Stands Now

With the case now back in district court after the D.C. Circuit Court of Appeals reversed the lower court’s decision to dismiss the suit, both sides have filed motions for summary judgment. In their filings, the plaintiffs make several arguments for why FOSTA is unconstitutional.

First, they argue that FOSTA is vague and overbroad. The Supreme Court has said that if a law “fails to give ordinary people fair notice of the conduct it prohibits,” it is unconstitutional. That is especially true when the vagueness of the law raises special First Amendment concerns.

FOSTA does just that. The law makes it illegal to “facilitate” or “promote” prostitution without defining what that means. This has led to, and will continue to lead to, the censorship of speech that is protected by the First Amendment. Organizations like Woodhull, and individuals like Andrews, are already curbing their own speech. They fear their advocacy on behalf of sex workers may constitute “promotion” or “facilitation” of prostitution.

The government argues that the likelihood of anyone misconstruing these words is remote. But some courts interpret “facilitate” to simply mean make something easier. By this logic, anything that plaintiffs like Andrews or Woodhull do to make sex work safer, or make sex workers’ lives easier, could be considered illegal under FOSTA.

Second, the plaintiffs argue that FOSTA’s Section 230 carveouts violate the First Amendment. A provision of FOSTA eliminates some Section 230 immunity for intermediaries on the Web, which means anybody who hosts a blog where third parties can comment, or any company like Craigslist or Reddit, can be held liable for what other people say.

As the plaintiffs show, all the removal of Section 230 immunity really does is squelch free speech. Without the assurance that a host won’t be sued for what a commentator or poster says, those hosts simply won’t allow others to express their opinions. As discussed above, this is precisely what happened once FOSTA passed.

Third, the plaintiffs argued that FOSTA is not narrowly tailored to the government’s interest in stopping sex trafficking. Government lawyers say that Congress passed FOSTA because it was concerned about sex trafficking. The intent was to roll back Section 230 in order to make it easier for victims of trafficking to sue certain websites, such as Backpage.com. The plaintiffs agree with Congress that there is a strong public interest in stopping sex trafficking. But FOSTA doesn’t accomplish those goals—and instead, it sweeps up a host of speech and advocacy protected by the First Amendment.

There’s no evidence the law has reduced sex trafficking. The effect of FOSTA is that traffickers who once posted to legitimate online platforms will go even deeper underground—and law enforcement will have to look harder to find them and combat their illegal activity.

Finally, FOSTA violates the Constitution’s prohibition on criminalizing past conduct that was not previously illegal. It’s what is known as an “ex post facto” law. FOSTA creates new retroactive liability for conduct that occurred before Congress passed the law. During the debate over the bill, the U.S. Department of Justice even admitted this problem to Congress—but the DOJ later promised to “pursu[e] only newly prosecutable criminal conduct that takes place after the bill is enacted.” The government, in essence, is saying to the courts, “We promise to do what we say the law means, not what the law clearly says.” But the Department of Justice cannot control the actions of thousands of local and state prosecutors—much less private citizens who sue under FOSTA based on conduct that occurred long before it became law.

* * *

FOSTA sets out to tackle the genuine problem of sex trafficking. Unfortunately, the way the law is written achieves the opposite effect: it makes it harder for law enforcement to actually locate victims, and it punishes organizations and individuals doing important work. In the process, it does irreparable harm to the freedom of speech guaranteed by the First Amendment. FOSTA silences diverse viewpoints, makes the Internet less open, and makes critics and advocates more circumspect. The Internet should remain a place where robust debate occurs, without the fear of lawsuits or jail time.

Categories
Content Blocking free speech Intelwars Video Games

What the *, Nintendo? This in-game censorship is * terrible.

While many are staying at home and escaping into virtual worlds, it’s natural to discuss what’s going on in the physical world. But Nintendo is shutting down those conversations with its latest Switch system update (Sep. 14, 2020) by adding new terms like COVID, coronavirus and ACAB to its censorship list for usernames, in-game messages, and search terms for in-game custom designs (but not the designs themselves).

A screenshot in-game of a postcard sent from a friend in Animal Crossing. The message says "testing censorship of," followed by three asterisks in place of the expected words.

While we understand the urge to prevent abuse and misinformation about COVID-19, censoring certain strings of characters is a blunderbuss approach unlikely to substantially improve the conversation. As an initial matter, it is easily circumvented: our testing, shown above, confirmed that while Nintendo censors coronavirus, COVID, and ACAB, it does not restrict substitutes like c0vid or a.c.a.b., nor corona and virus when written individually.
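To make the circumvention concrete, here is a minimal sketch in Python of the kind of naive term-blocklist filtering described above. It is purely illustrative and assumes a simple case-insensitive substring match; Nintendo’s actual filter is not public, and the term list and function name here are hypothetical.

```python
import re

# Minimal, hypothetical sketch of naive blocklist filtering (not Nintendo's
# actual code). It shows why exact term matching is trivially evaded.
BLOCKLIST = ["coronavirus", "covid", "acab"]  # assumed terms; longest first so "coronavirus" wins over "covid"

def censor(message: str) -> str:
    """Replace each blocklisted term with asterisks of the same length."""
    for term in BLOCKLIST:
        message = re.sub(re.escape(term),
                         lambda m: "*" * len(m.group(0)),
                         message,
                         flags=re.IGNORECASE)
    return message

print(censor("testing censorship of COVID"))  # "testing censorship of *****"
print(censor("testing censorship of c0vid"))  # unchanged: the zero defeats the match
print(censor("a.c.a.b. or corona virus"))     # unchanged: punctuation and spacing defeat it
```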

More importantly, it’s a bad idea, because these terms can be part of important conversations about politics or public health. Video games are not just for gaming and escapism, but are part of the fabric of our lives as a platform for political speech and expression.  As the world went into pandemic lockdown, Hong Kong democracy activists took to Nintendo’s hit Animal Crossing to keep their pro-democracy protest going online (and Animal Crossing was banned in China shortly after). Just as many Black Lives Matter protests took to the streets, other protesters voiced their support in-game.  Earlier this month, the Biden campaign introduced Animal Crossing yard signs which other players can download and place in front of their in-game home. EFF is part of this too—you can show your support for EFF with in-game hoodies and hats. 

A screenshot in-game showing Chow, an Animal Crossing panda villager, asking whether the player is an “Internet freedom fighter.” The player has highlighted “Yup.”

Nevertheless, Nintendo seems uncomfortable with political speech on its platform. The Japanese Terms of Use prohibit in-game “political advocacy” (政治的な主張, or seijitekina shuchou), which led to a candidate for Japan’s Prime Minister canceling an in-game campaign event. But it has not expanded this blanket ban to the Terms for Nintendo of America or Nintendo of Europe.

Nintendo has the right to host the platform as it sees fit. But just because they can do this, doesn’t mean they should. Nintendo needs to also recognize that it has provided a platform for political and social expression, and allow people to use words that are part of important conversations about our world, whether about the pandemic, protests against police violence, or democracy in Hong Kong.

Categories
free speech Intelwars Social Networks

Trump’s Ban on TikTok Violates First Amendment by Eliminating Unique Platform for Political Speech, Activism of Millions of Users, EFF Tells Court

We filed a friend-of-the-court brief—primarily written by the First Amendment Clinic at the Sandra Day O’Connor College of Law—in support of a TikTok employee who is challenging President Donald Trump’s ban on TikTok and was seeking a temporary restraining order (TRO). The employee contends that Trump’s executive order infringes the Fifth Amendment rights of TikTok’s U.S.-based employees. Our brief, which is joined by two prominent TikTok users, urges the court to consider the First Amendment rights of millions of TikTok users when it evaluates the plaintiff’s claims.

Notwithstanding its simple premise, TikTok has grown to have an important influence in American political discourse and organizing. Unlike other platforms, users on TikTok do not need to “follow” other users to see what they post. TikTok thus uniquely allows its users to reach wide and diverse audiences. That’s why the two TikTok users who joined our brief use the platform. Lillith Ashworth, whose critiques of Democratic presidential candidates went viral last year, uses TikTok to talk about U.S. politics and geopolitics. The other user, Jynx, maintains an 18+ adult-only account, where they post content that centers on radical leftist liberation, feminism, and decolonial politics, as well as the labor rights of strippers.

Our brief argues that in evaluating the plaintiff’s claims, the court must consider the ban’s First Amendment implications. The Supreme Court has established that rights set forth in the Bill of Rights work together; as a result, the plaintiff’s Fifth Amendment claims are enhanced by First Amendment considerations. We say in our brief:

A ban on TikTok violates fundamental First Amendment principles by eliminating a specific type of speaking, the unique expression of a TikTok user communicating with others through that platform, without sufficient considerations for the users’ speech. Even though the order facially targets the platform, its censorial effects are felt most directly by the users, and thus their First Amendment rights must be considered in analyzing its legality.

EFF, the First Amendment Clinic, and the individual amici urge the court to adopt a higher standard of scrutiny when reviewing the plaintiff’s claims against the president. Not only are the plaintiff’s Fifth Amendment liberties at stake, but millions of TikTok users have First Amendment freedoms at stake. The Fifth Amendment and the First Amendment are each critical in securing life, liberty, and due process of law. When these amendments are examined separately, they each deserve careful analysis; but when the interests protected by these amendments come together, a court should apply an even higher standard of scrutiny.

The hearing on the TRO scheduled for tomorrow was canceled after the government promised the court that it did not intend to include the payment of wages and salaries within the executive order’s definition of prohibited transactions, thus addressing the plaintiff’s most urgent claims.

Categories
Commentary Corporate Speech Controls free speech Intelwars

One database to rule them all: The invisible content cartel that undermines the freedom of expression online

Every year, millions of images, videos and posts that allegedly contain terrorist or violent extremist content are removed from social media platforms like YouTube, Facebook, or Twitter. A key force behind these takedowns is the Global Internet Forum to Counter Terrorism (GIFCT), an industry-led initiative that seeks to “prevent terrorists and violent extremists from exploiting digital platforms.” And unfortunately, GIFCT has the potential to have a massive (and disproportionate) negative impact on the freedom of expression of certain communities.

Social media platforms have long struggled with the problem of extremist or violent content on their platforms. Platforms may have an intrinsic interest in offering their users an online environment free from unpleasant content, which is why most social media platforms’ terms of service contain a variety of speech provisions. During the past decade, however, social media platforms have also come under increasing pressure from governments around the globe to respond to violent and extremist content on their platforms. Spurred by the terrorist attacks in Paris and Brussels in 2015 and 2016, respectively, and guided by the shortsighted belief that censorship is an effective tool against extremism, governments have been turning to content moderation as a means to combat international terrorism.

Commercial content moderation is the process through which platforms—more specifically, human reviewers or, very often, machines—make decisions about what content can and cannot be on their sites, based on their own Terms of Service, “community standards,” or other rules. 

During the coronavirus pandemic, social media companies have been less able to use human content reviewers, and are instead increasingly relying on machine learning algorithms to moderate content as well as flag it. Those algorithms, which are really just sets of instructions for doing something, are fed an initial set of rules and lots of training data in the hopes that they will learn to identify similar content. But human speech is a complex social phenomenon and highly context-dependent; inevitably, content moderation algorithms make mistakes. What is worse, because machine-learning algorithms usually operate as black boxes that do not explain how they arrived at a decision, and because companies generally do not share either the basic assumptions underpinning their technology or their training data sets, third parties can do little to prevent those mistakes.

This problem has become more acute with the introduction of hashing databases for tracking and removing extremist content. Hashes are digital “fingerprints” of content that companies use to identify and remove content from their platforms. They are essentially unique, and allow for easy identification of specific content. When an image is identified as “terrorist content,” it is tagged with a hash and entered into a database, allowing any future uploads of the same image to be easily identified.
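To illustrate the mechanism, here is a minimal sketch in Python of a hash-matching workflow. It is not GIFCT’s implementation: shared industry databases typically rely on perceptual hashes so that visually similar or re-encoded copies still match, whereas the ordinary SHA-256 used here for simplicity matches only byte-identical files. All names and data below are hypothetical.

```python
import hashlib

# Hypothetical sketch of a shared hash database (illustrative only).
# Real systems use perceptual hashes; SHA-256 matches only identical bytes.

def fingerprint(data: bytes) -> str:
    """Return a hex digest serving as the content's 'fingerprint'."""
    return hashlib.sha256(data).hexdigest()

# Fingerprints of content one platform has flagged, shared among all members.
shared_database: set[str] = set()

def flag_content(data: bytes) -> None:
    """Add a flagged item's fingerprint to the shared database."""
    shared_database.add(fingerprint(data))

def check_upload(data: bytes) -> bool:
    """Return True if an upload matches something already in the database."""
    return fingerprint(data) in shared_database

flagged = b"<bytes of an image one platform labeled 'terrorist content'>"
flag_content(flagged)

print(check_upload(flagged))                      # True: an identical re-upload is caught everywhere
print(check_upload(b"<a slightly edited copy>"))  # False: exact hashing misses near-duplicates
```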

This is exactly what the GIFCT initiative aims to do: Share a massive database of alleged ‘terrorist’ content, contributed voluntarily by companies, amongst members of its coalition. The database collects ‘hashes’, or unique fingerprints, of alleged ‘terrorist’, or extremist and violent content, rather than the content itself. GIFCT members can then use the database to check in real time whether content that users want to upload matches material in the database. While that sounds like an efficient approach to the challenging task of correctly identifying and taking down terrorist content, it also means that one single database might be used to determine what is permissible speech, and what is taken down—across the entire Internet. 

Countless examples have proven that it is very difficult for human reviewers—and impossible for algorithms—to consistently get the nuances of activism, counter-speech, and extremist content itself right. The result is that many instances of legitimate speech are falsely categorized as terrorist content and removed from social media platforms. Due to the proliferation of the GIFCT database, any mistaken classification of a video, picture or post as ‘terrorist’ content echoes across social media platforms, undermining users’ right to free expression on several platforms at once. And that, in turn, can have catastrophic effects on the Internet as a space for memory and documentation. Blunt content moderation systems can lead to the deletion of vital information not available elsewhere, such as evidence of human rights violations or war crimes. For example, the Syrian Archive, an NGO dedicated to collecting, sharing, and archiving evidence of atrocities committed during the Syrian war, reports that hundreds of thousands of videos of war atrocities are removed by YouTube annually. The Archive estimates that the takedown rate for videos documenting Syrian human rights violations is around 13%, a figure that has nearly doubled to 20% in the wake of the coronavirus crisis. As noted, many social media platforms, including YouTube, have been using algorithmic tools for content moderation more heavily than usual, resulting in increased takedowns. If, or when, YouTube contributes hashes of content that depicts Syrian human rights violations but has been tagged as ‘terrorist’ content by YouTube’s algorithms to the GIFCT database, that content could be deleted forever across multiple platforms.

The GIFCT content cartel not only risks losing valuable human rights documentation, but also has a disproportionately negative effect on some communities. Defining ‘terrorism’ is an inherently political undertaking, and the definition is rarely stable across time and space. Absent international agreement on what exactly constitutes terrorist, or even violent and extremist, content, companies look to the United Nations’ list of designated terrorist organizations or the US State Department’s list of Foreign Terrorist Organizations. But those lists consist mainly of Islamist organizations and are largely blind to, for example, right-wing extremist groups. That means that the burden of GIFCT’s misclassifications falls disproportionately on Muslim and Arab communities, and it highlights the fine line between an effective initiative to tackle the worst content online and sweeping censorship.

Ever since the attacks on two Mosques in Christchurch in March 2019, GIFCT has been more prominent than ever. In response to the shooting, during which 51 people were killed, French President Emmanuel Macron and New Zealand Prime Minister Jacinda Ardern launched the Christchurch Call. That initiative, which aims to eliminate violent and extremist content online, foresees a prominent role for GIFCT. In the wake of this renewed focus on GIFCT, the initiative announced that it would evolve to an independent organization, including a new Independent Advisory Committee (IAC) to represent the voices of civil society, government, and inter-governmental entities. 

However, the Operating Board, where real power resides, remains in the hands of industry. And the Independent Advisory Committee is already seriously flawed, as a coalition of civil liberties organizations has repeatedly noted. 

For example, governments participating in the IAC are likely to leverage their position to influence companies’ content moderation policies and to shape definitions of terrorist content that fit their interests, away from the public eye and therefore without accountability. Including governments in the IAC could also undermine the meaningful participation of civil society organizations, as many are financially dependent on governments or might face threats of reprisal for criticizing government officials in that forum. As long as civil society is treated as an afterthought, GIFCT will never be an effective multi-stakeholder forum. GIFCT’s flaws and their devastating effects on the freedom of expression, human rights, and the preservation of evidence of war crimes have been known for years. Civil society organizations have tried to help reform the organization, but GIFCT and its new Executive Director have remained unresponsive. That leads to the final problem with the IAC: leading NGOs are choosing not to participate at all.

Where does this leave GIFCT and the millions of Internet users its policies impact? Not in a good place. Without meaningful civil society representation and involvement, full transparency and effective accountability mechanisms, GIFCT risks becoming yet another industry-led forum that promises multi-stakeholderism but delivers little more than government-sanctioned window-dressing. 

Categories
Bloggers' Rights free speech Intelwars patent trolls Patents

Courts Shouldn’t Stifle Patent Troll Victims’ Speech

In the U.S., we don’t expect or allow government officials, including judges, to be speech police. Courts are allowed to restrain speech only in the rarest circumstances, subject to strict limitations. So we were troubled to learn that a judge in Missouri has issued an order stifling the speech of a small company that’s chosen to speak out about a patent troll lawsuit that was filed against it.

Mycroft AI, a company with nine employees that makes open-source voice technology, published a blog post on February 5 describing how it had been threatened by a patent troll called Voice Tech Corporation. Like all patent trolls, Voice Tech doesn’t offer any services or products. It simply owns patents, which it acquired through more than a decade of one-party argumentation with the U.S. Patent Office.  

Voice Tech’s two patents describe nothing more than using voice commands, together with a mobile device, to perform computer commands. It’s the bare statement of an idea, without any executable instructions, one that science fiction has explored for more than 50 years. (In fact, Mycroft is named after a supercomputer in the Robert Heinlein novel The Moon Is a Harsh Mistress.) When Voice Tech used these patents to threaten and then sue [PDF] Mycroft AI, the company’s leadership decided not to pay the $30,000 that was demanded for these ridiculous patents. Instead, they fought back—and they asked their community for help.

“Math isn’t patentable and software shouldn’t be either,” wrote Mycroft First Officer Joshua Montgomery in the blog.  “I don’t often ask this, but I’d like for everyone in our community who believes that patent trolls are bad for open source to re-post, link, tweet and share this post.  Please help us get the word out by sharing this post on Facebook, LinkedIn, Twitter, or email.” 

Montgomery also said that he’d “always wanted to be a troll hunter,” and that in his opinion, when confronted with matters like this, “it’s better to be aggressive and ‘stab, shoot and hang’ them, then dissolve them in acid.” He included a link to a piece of state legislation he opposed last year, where he’d used the same quote.  

That tough language got attention, and the story went viral on forums like reddit and Hacker News. The lawsuit, and the post, were also covered by tech publications like The Register and Techdirt. According to Mycroft, it led to an outpouring of much-needed support. 

The Court Steps In

According to Voice Tech, however, it led to harassment.  The company responded by asking the judge overseeing the case, U.S. District Judge Roseann Ketchmark of the Western District of Missouri, to intervene. Voice Tech suggested the post had led to both harassment of its counsel and a hacking attempt. Mycroft strenuously denied any harassment or hacking, and said it would “admonish and deny” any personal attacks.

Unfortunately, Judge Ketchmark not only accepted Voice Tech’s argument about harassment, but ordered Mycroft to delete portions of the blog post. What is worse, she ordered Mycroft to stop reaching out to its own open source community for support. Mycroft was specifically told to delete the request that “everyone in our community who believes that patent trolls are bad for open source” re-post and spread the news.

To be clear, if the allegations are true, Voice Tech’s counsel has a right to respond to those who are actually harassing him. This ruling, however, is deeply troubling. It does not appear as though there was sufficient evidence for the court to find that Mycroft’s colorful post led directly to the harassment—an essential (though not sufficient) requirement before prohibiting a party from sharing their opinions about a case.

But the public has a right to know what is happening in this case, and Mycroft has a right to share that information – even couched in colorful language. The fact that some members of the public may have responded negatively to the post, or even attempted to hack Voice Tech, doesn’t justify overriding that right without strong evidence showing a direct connection between Mycroft’s post and the harassment of Voice Tech’s counsel.

Patent Trolls and Speech Police 

It gets worse. Apparently emboldened by its initial success, Voice Tech continues to press for more censorship.

In June, Mycroft published an update on its Mark II product. While the company anticipates delivery in 2021, Montgomery wrote that “progress is dependent on staffing and distractions like patent trolls,” and linked to a recent Techdirt article. Voice Tech quickly kicked into overdrive and wrote a note to Mycroft demanding the removal of a link, and a redaction:  

Voice Tech demands that Mycroft remove the link to the TECHDIRT article and redact the original article on the Mycroft Community Forum by no later than the close of business on Wednesday, July 22, 2020. If Mycroft fails to comply, Voice Tech will have no option but to file a motion for contempt with the Court.

Mycroft has removed the link. Voice Tech has also sought to censor third-party journalism about the case, like that published in Techdirt. 

It’s bad enough when small companies like Mycroft AI are subject to threats and litigation over patents that seem to be little more than science-fiction documents issued by a broken bureaucracy. But it’s even more outrageous when they can’t talk about it freely. No company should have to suffer in silence about the damage that patent trolls do to their businesses, to their communities, and to the public at large. We hope Judge Ketchmark quickly and unequivocally reconsiders and rescinds her troubling gag order. And we’re glad to see that Mycroft AI has been willing to put up a legal fight against these clearly invalid patents.

Categories
announcement free speech Intelwars

EFF Sues Texas A&M University Once Again to End Censorship Against PETA on Facebook and YouTube

This week, EFF filed suit to stop Texas A&M University from censoring comments by PETA on the university’s Facebook and YouTube pages.

In light of the COVID-19 pandemic, Texas A&M held its spring commencement ceremonies online, with broadcasts over Facebook and YouTube. Both the Facebook and YouTube pages had comment sections open to any member of the public—but administrators deleted comments that were associated with PETA’s high-profile campaign against the university’s muscular dystrophy experiments on golden retrievers and other dogs.

Where government entities such as Texas A&M open online forums to the public, the First Amendment prohibits them from censoring comments merely because they don’t like the content of the message or the viewpoint expressed. On top of that, censoring comments based on their message or viewpoint also violates the public’s First Amendment right to petition the government for redress of grievances.

Texas A&M knows this well, because this is not the first time we’ve sued them for censoring comments online. Back in 2018, EFF brought another First Amendment lawsuit against Texas A&M for deleting comments by PETA and its supporters about the university’s dog labs from the Texas A&M Facebook page. This year, in a big win for free speech, the school settled with PETA and agreed to stop deleting comments from its social media pages based on the comments’ messages.   

We are disappointed that Texas A&M has continued to censor comments by PETA’s employees and supporters without regard for the legally binding settlement agreement that it signed just six months ago, and hope that the federal court will make clear to the university once and for all that its censorship cannot stand.  

EFF is joined by co-counsel PETA Foundation and Rothfelder Falick LLP of Houston.

Categories
free speech Intelwars

EFF Joins SPLC Letter to Georgia High School Expressing Concern Over Restriction to Students’ Free Speech

The First Amendment includes the right to use technology to create and preserve images, and otherwise collect information, of newsworthy events. This issue has arisen in numerous contexts, including the right to record the police performing police-work, and we have filed several amicus briefs that have helped firmly establish that right in the law.

Consistent with this, we joined a letter protesting the actions of officials at a high school in Georgia who suspended a 15-year-old student for posting a photograph of the school’s crowded hallways to Twitter. The photograph, of the school’s second day back in operation, illustrated what the student perceived to be the serious public health danger in the school’s reopening. The student tweeted a photograph showing crowded hallways on the first day of school, which was widely circulated on social media. The student was initially suspended for five days, but the suspension was revoked after two days and purportedly removed from her record. According to news reports, at least one other student was also suspended and then reinstated. The school also reportedly warned students over the intercom system that “‘there will be consequences for anyone who sends things out’ that shows the school in a negative light.”

As the letter acknowledges, students have robust First Amendment rights to both make and distribute photographs of their school:

In its landmark Tinker v. Des Moines decision, the U.S. Supreme Court made clear that students in school have important First Amendment rights that protect their ability to talk about and share information with others, particularly about matters of public concern. “Students in school, as well as out of school,” the Court said, “are ‘persons’ under our Constitution. They are possessed of fundamental rights which the State must respect….” Tinker v. Des Moines Indep. Cmty. Sch. Dist., 393 U.S. 503, 511 (1969).

….

While we understand that emotions around school reopening decisions are charged and that you have faced significant criticism for decisions outside of your control, students, teachers and staff nevertheless have the right to speak accurately and lawfully about their school day, even when that speech may be unflattering to the school. Instead of addressing these concerns, NPHS has sought to impose harsh penalties against those who speak out and chill the speech of others who may have similar concerns. Unconstitutionally prohibiting students from speaking about the conditions of the school does not change the conditions of the school or the concerns they have; it only fosters mistrust and fear.

We acknowledge that in some situations, there may be countervailing privacy concerns that might justify restrictions on both creating and distributing images of students in school. This privacy concern is acknowledged in the school’s student rules, though the particular rule, requiring the permission of an administrator before using any visual recording device, is insufficiently tailored to that interest.

In this situation, however, the student’s right to create and publish the image prevails. The image itself is mostly of students’ backs, with only a few faces visible. The privacy invasiveness appears minimal in light of the high newsworthiness.

Categories
Austin tong Campus Conservative students first amendment rights Fordham university free speech Instagram Intelwars Lawsuits

Immigrant student sues college over ‘Soviet-style interrogation and punishment,’ charging him with hate crime over his social media posts on Tiananmen Square, BLM

Former Fordham University student Austin Tong is suing the school for banning him from the campus and from participating in any extracurricular activities affiliated with the school.

In short …

The school banned Tong, a Chinese immigrant, from the campus and its associated activities after he shared a photo of himself on Instagram holding a legally owned gun to commemorate the 1989 Tiananmen Square Massacre.

The school also took issue with a post that was apparently critical of the Black Lives Matter movement.

Now he’s filing a lawsuit against the school for violating his First Amendment rights.

You can read more about Tong’s Instagram posts here.

What are the details?

Tong’s lawsuit insists that the school discriminated against him and attempted to suppress his First Amendment rights even though a “significant motivation for Tong’s social media posts was his desire to recognize a historically significant event for Chinese-Americans.”

Dean of Students Keith Eldredge informed Tong that the school would be conducting an investigation into his social media posts because they reportedly made “members of the Fordham community” feel “threatened.”

Following the investigation, Eldredge announced that Tong would not be permitted on campus unless he received express permission from Eldredge. Tong was also directed to finish his 2020-21 academic year remotely and could not participate in in-person instruction.

The suit insists that the school permit Tong to exercise free expression, a protected right under the school’s policies and rules.

“Tong will not and should not have to comply with either of these requirements because he plainly did not violate any Fordham policies or rules and will not and should not have to submit to punishment for exercising his constitutional rights, and will not and should not have to compromise his good faith beliefs, principals, and virtues,” a portion of the suit reads, according to Campus Reform.

In addition to banning him from campus and forcing him into distance learning, Tong said that the school imposed “Soviet-style interrogation” tactics to question him about the innocuous social media posts.

The suit points out that the school “violated its own policies and rules, which unequivocally commit the University to the protection and encouragement of free speech and expression.”

“[Fordham University] breached their end of the bargain with respect to the implied contract by imposing irrational discipline against Tong as set forth herein,” the suit points out, which insists that the student is “entitled to damages incidental to the primary relief requested herein.”

Tong and his legal team filed the suit in the Supreme Court of the State of New York on July 23.

Campus Reform attempted to obtain a statement from a Fordham spokesperson on the matter but had received none at the time of its reporting.

Blaze Media reached out to Fordham University for a comment on the pending litigation, but did not receive a response in time for publication.


[Embedded video: “Student Banned From Campus For Posting Picture With A Gun” (www.youtube.com)]

A ‘reflection of the constitutional crisis we are facing today’ as Americans

Following his initial removal from the school, Tong told Campus Reform that his status as an immigrant places him in a unique position to appreciate all that America’s constitutionally protected rights have to offer.

“As an immigrant, a big beauty of America to me is the right it gives its citizens to bear arms, not only to protect themselves, but also to keep the government in check,” he explained at the time.

“I hope to use my punishment as a milestone and reflection of the constitutional crisis we are facing today as a society,” Tong added.

In late July, Tong told the Washington Free Beacon that he refuses to apologize for the social media posts.

Tong’s lawyer, Brett Josphe, added, “For a mere $50,000 a year in tuition, Fordham has smeared our client’s reputation and permanently damaged his career prospects. This behavior by the school and its officials shocks the conscience, and there should be a heavy price to pay.”

Anything else?

Categories
free speech Government Social Media Blocking Intelwars

TikTok Ban: A Seed of Genuine Security Concern Wrapped in a Thick Layer of Censorship

It is ironic that, while purporting to protect America from China’s authoritarian government, President Trump is threatening to ban the TikTok app. Censorship of both speech and social media applications, after all, is one of the hallmarks of the Chinese Internet strategy.  While there is significant cause for concern with TikTok’s security, privacy, and its relationship with the Chinese government, we should resist a governmental power to ban a popular means of communication and expression.  

As is too often the case with government pronouncements, the Trump administration has proposed a ban without specifying what the ban would actually be or what authority allows for it. Rather, the President has said broadly, “we’re banning them from the United States,” or most recently, “it’s going to be out of business in the United States.” This could mean a ban on using the app, or perhaps a ban on distributing TikTok in app stores, or maybe something else. Any way you slice it, an effective ban of the scope suggested cannot be squared with the Constitution. 

Banning Americans From Using TikTok Would Violate the First Amendment

Banning Americans from using the TikTok app would infringe the First Amendment rights of those users to express themselves online. Millions of users post protected speech to TikTok every day, choosing the app over other options for its features or for its audience. Courts will generally not uphold a categorical ban on speech. As the Supreme Court has recognized, to “foreclose access to social media altogether is to prevent the user from engaging in the legitimate exercise of First Amendment rights.”  Noting that the Court had previously struck down a law prohibiting protected speech in just one venue (the Los Angeles International Airport), the Court explained: “the State may not enact this complete bar to the exercise of First Amendment rights on websites integral to the fabric of our modern society and culture.” While some may not consider TikTok integral to their own lives, these good-bye videos show how much TikTok means to its users.

Moreover, if the Trump Administration’s true motives are based on perceived anti-Trump content on TikTok, as some have contended, the ban would be an impermissible restriction based on content and viewpoint, subjecting it to heightened constitutional scrutiny, which it could not survive.

Even if the courts reviewed the ban as just a content-neutral restriction on the manner of speech, a complete TikTok ban is overly broad and not narrowly tailored to achieve the government’s national security purpose.  The vast majority of TikTok videos are not in any way related to national security, nor are their posters in substantially more danger of Chinese government spying than the users of other Chinese-owned technologies.

Banning App Stores From Distributing TikTok Also Raises Serious First Amendment Concerns 

Banning app stores from distributing TikTok would implicate the First Amendment rights of the app stores to distribute software. As courts have held, code is speech, and the Supreme Court has recognized that software is a protected means of expression (in a case addressing restrictions on video games). Just as bookstores have a right to sell books protected by the First Amendment, so too do app stores have a right to distribute protected software. Of course, it would be up to Apple and Google to challenge a purported distribution ban on their app stores.

As a practical matter, an app store ban would not be particularly effective, as close to 100 million people in the U.S. already have the app. However, an inability to get updates—as a result of a ban—would create a security nightmare. Major vulnerabilities left unpatched would leave TikTok users susceptible to a variety of attackers, up to and including the Chinese government.

Unclear Legal Authority to Ban TikTok

It is also unclear what statutory authority would support any type of TikTok ban. Lawfare’s primer is a good starting point, looking at potential actions against TikTok through requiring its parent company, ByteDance, to divest its acquisition of Musical.ly, via the Committee on Foreign Investment in the United States (CFIUS), under the Defense Production Act, as well as a ban through the International Emergency Economic Powers Act (IEEPA).

While CFIUS may be able to require the Musical.ly divestment, it is unclear whether that would be effective beyond its use as a punitive measure against ByteDance. In 2018, Musical.ly was merged into ByteDance’s prior app to make today’s TikTok, while consolidating the user accounts. TikTok has had quite a bit of growth and software development since then, so it’s unclear whether undoing that acquisition (potentially unwinding the merged user accounts and giving back rights in some technologies) would amount to an effective ban.

An IEEPA-based ban would run into more trouble. In 1994, Congress amended IEEPA to create an exception for information and communications. The President does not have the authority “to regulate or prohibit, directly or indirectly—(1) any … personal communication, which does not involve a transfer of anything of value; [or the import or export of] any information or informational materials.” The word “indirectly” here is important, because many possible bans would target not the TikTok messages themselves but the app or the company. Jarred Taylor’s 2012 law review article Information Wants to be Free (of Sanctions) cogently explains why this amendment means the President cannot prohibit foreign access to social media under U.S. export regulations. Likewise, the President cannot prohibit American access to foreign social media.

While it remains unclear which legal authorities the Administration would rely upon, ByteDance may well have grounds to challenge the President’s statutory authority to invoke a ban.

Ban Aside, Security Concerns About TikTok Persist

Just because the President does not have the power to ban TikTok does not mean there are not important security concerns with the app. Any time we talk about security, the first question is “security from what?” and “security for whom?”  For some users, installing TikTok on their phone is a potentially dangerous move.

There are people who may have concerns about China having access to their data who have not had the same concerns about the US or EU countries: student protesters in Hong Kong, Uighurs, COVID-19 researchers, executives at Fortune 500 companies concerned about theft of IP, journalists with sources in China that they want to protect, US government employees, and military personnel stationed abroad. Citing security concerns, both the RNC and DNC have warned their campaigns not to use TikTok, and Wells Fargo has banned the app internally. But you can acknowledge that there are genuine security concerns for certain populations while opposing efforts to unilaterally ban an app used by millions of Americans. It’s possible, even in this day and age, to have multiple thoughts about a complex issue.

TikTok is not notably less secure than equivalent social media apps, though it has had its share of vulnerabilities, privacy violations, and dubious practices. But it is different from apps such as Facebook or Twitter in that its data is stored in China and it has employees in China.  Your data is vulnerable to pressure by the government of the country where it is physically located or where employees are located. Governments have a disturbing history of arresting employees to add pressure to their data demands.

TikTok has said that it hasn’t handed over any data to the Chinese government, but it’s reasonable to be skeptical of that claim. TikTok may be under a gag order that prevents the company from being honest about the data demands it receives. More recently, TikTok withdrew the app from Hong Kong after Hong Kong enacted new powers to punish Internet companies that failed to comply with data demands. That may, at least for now, keep the Chinese government from obtaining data about communications within Hong Kong, but it’s not complete protection against pressure from the Chinese government.

It’s of no import that China blocks some U.S.-based companies from operating in China. Nor should we be swayed that India has blocked TikTok, along with 58 other Chinese apps. The United States should not be taking its human rights cues from the Chinese government or the authoritarian Modi administration in India, which has banned apps as part of a broader effort to respond to a border conflict and stoke nationalist sentiment against China.

Moving Forward, Any TikTok Buyer Must Adopt Best Practices

Of course, we may never get to a formal ban or divestment order.  ByteDance is considering selling TikTok, and sale to a U.S. company would help alleviate the stated concern for data leaking to a foreign power. Right now, the likely purchaser appears to be Microsoft.

But even if TikTok is acquired by a U.S. company, there would remain legitimate security and privacy concerns, which need to be addressed regardless of whether ByteDance is the owner. 

As Microsoft already does, any new owner of TikTok must commit to publishing a transparency report and law enforcement guidelines. It must require a warrant before giving user content to law enforcement, provide advance notice to users about government data demands whenever possible, and promise delayed notice after a gag order expires. To stop workarounds for access to user data, it should adopt a policy prohibiting third parties from using TikTok user data for surveillance purposes. It will also need to address the concerns of TikTok users outside the U.S. that American law provides too little protection for their data. Beyond these privacy policies, the new TikTok should be sure to follow best practices for transparency and accountability in content moderation.

The new owner must also conduct a thorough code review, to give users confidence that there are no backdoors in the app and to find bugs that may compromise security. TikTok’s direct messages would be more private and secure with end-to-end encryption. That TikTok is disturbingly far from alone in its need to address these shortcomings is no excuse for inaction. User privacy and security, however, will not come through a bill of sale alone.

Categories
free speech Intelwars Legal Analysis social media surveillance

In Historic Opinion, Third Circuit Protects Public School Students’ Off-Campus Social Media Speech

The U.S. Court of Appeals for the Third Circuit issued an historic opinion in B.L. v. Mahanoy Area School District, upholding the free speech rights of public school students. The court adopted the position EFF urged in our amicus brief that the First Amendment prohibits disciplining public school students for off-campus social media speech.

B.L. was a high school student who had failed to make the varsity cheerleading squad and was placed on junior varsity instead. Out of frustration, she posted—over the weekend and off school grounds—a Snapchat selfie with text that said, among other things, “fuck cheer.” One of her Snapchat connections took a screen shot of the “snap” and shared it with the cheerleading coaches, who suspended B.L. from the J.V. squad for one year. She and her parents sought administrative relief to no avail, and eventually sued the school district with the help of the ACLU of Pennsylvania.

In its opinion protecting B.L.’s social media speech under the First Amendment, the Third Circuit issued three key holdings.

Social Media Post Was “Off-Campus” Speech

First, the Third Circuit held that B.L.’s post was indeed “off-campus” speech. The court recognized that the question of whether student speech is “on-campus” or “off-campus” is a “tricky” one whose “difficulty has only increased after the digital revolution.” Nevertheless, the court concluded that “a student’s online speech is not rendered ‘on campus’ simply because it involves the school, mentions teachers or administrators, is shared with or accessible to students, or reaches the school environment.”

Therefore, B.L.’s Snapchat post was “off-campus” speech because she “created the snap away from campus, over the weekend, and without school resources, and she shared it on a social media platform unaffiliated with the school.”

The court quoted EFF’s amicus brief to highlight why protecting off-campus social media speech is so critical:

Students use social media and other forms of communication with remarkable frequency. Sometimes the conversation online is a high-minded one, with students “participating in issue- or cause-focused groups, encouraging other people to take action on issues they care about, and finding information on protests or rallies.”

Vulgar Off-Campus Social Media Speech is Not Punishable

Second, the Third Circuit reaffirmed its prior holding that the ability of public school officials to punish students for vulgar, lewd, profane, or otherwise offensive speech, per the Supreme Court’s opinion in Bethel School District No. 403 v. Fraser (1986), does not apply to off-campus speech.

The court held that the fact that B.L.’s punishment related to an extracurricular activity (cheerleading) was immaterial. The school district had argued that students have “no constitutionally protected property right to participate in extracurricular activities.” The court expressed concern when any form of punishment is “used to control students’ free expression in an area traditionally beyond regulation.”

Off-Campus Social Media Speech That “Substantially Disrupts” the On-Campus Environment is Not Punishable

Third, the Third Circuit finally answered the question that had been left open by its prior decisions: whether public school officials may punish students for off-campus speech that is likely to “substantially disrupt” the on-campus environment. School administrators often make this argument based on a misinterpretation of the U.S. Supreme Court’s opinion in Tinker v. Des Moines Independent Community School District (1969).

Tinker involved only on-campus speech: students wearing black armbands on school grounds, during school hours, to protest the Vietnam War. The Supreme Court held that the school violated the student protestors’ First Amendment rights by suspending them for refusing to remove the armbands because the students’ speech did not “materially and substantially disrupt the work and discipline of the school,” and school officials did not reasonably forecast such disruption.

Tinker was a resounding free speech victory when it was decided, reversing the previously widespread assumption that school administrators had wide latitude to punish student speech on campus. Nevertheless, lower courts have more recently read Tinker as a sword against student speech rather than a shield protecting it, allowing schools to punish student off-campus speech they deem “disruptive.”

The Third Circuit unequivocally rejected reading Tinker as creating a pathway to punish student off-campus speech, such as B.L.’s Snapchat post. The court concisely defined “off-campus” speech as “speech that is outside school-owned, -operated, or -supervised channels and that is not reasonably interpreted as bearing the school’s imprimatur.”

The Third Circuit noted that EFF was the only party to argue that the court should reach this holding (p. 22 n.8). The court reasoned that “social media has continued its expansion into every corner of modern life,” and that it was time to end the “legal uncertainty” that “in this context creates unique problems.” The court stated, “Obscure lines between permissible and impermissible speech have an independent chilling effect on speech.”

Possible Limits on Student Social Media Speech

The Third Circuit clarified that schools may still punish a “student who, on campus, shares or reacts to controversial off-campus speech in a disruptive manner.” That is, a “school can punish any disruptive speech or expressive conduct within the school context that meets” the Supreme Court’s demanding standards for actual and serious disruption of the school day.

Thus, “a student who opens his cellphone and shows a classmate a Facebook post from the night before” may be punished if that post, by virtue of being affirmatively shared on campus by the original poster, “substantially disrupts” the on-campus environment. Similarly, if other students act disruptively on campus in response to that Facebook post, they may be punished—but not the original poster if he himself did not share the post on campus.

Additionally, the Third Circuit “reserv[ed] for another day the First Amendment implications of off-campus student speech that threatens violence or harasses others,” an issue that was not presented in this case.

Supreme Court Review Possible

The Third Circuit’s opinion is historic because it is the first from a federal appellate court to hold that the substantial disruption exception from Tinker does not apply to off-campus speech.

Other circuits, citing Tinker, have upheld regulation of off-campus speech in various contexts and under different rules, such as when it is “reasonably foreseeable” that off-campus speech will reach the school environment, or when off-campus speech has a sufficient “nexus” to the school’s “pedagogical interests.”

The Third Circuit rejected all these approaches. The court argued that its “sister circuits have adopted tests that sweep far too much speech into the realm of schools’ authority.” The court was critical of these approaches because they “subvert[] the longstanding principle that heightened authority over student speech is the exception rather than the rule.”

Because there is a circuit split on this important First Amendment student speech issue, it is possible that the school district will seek certiorari and that the Supreme Court will grant review. Until then, we can celebrate this historic win for public school students’ free speech rights.

Categories
Call to Action Fair Use free speech Intelwars Legal Analysis Trade Agreements and Digital Rights

A Legal Deep Dive on Mexico’s Disastrous New Copyright Law

Mexico has just adopted a terrible new copyright law, thanks to pressure from the United States (and specifically from the copyright maximalists that hold outsized influence on US foreign policy).

This law closely resembles the Digital Millennium Copyright Act enacted in the US in 1998, with a few differences that make it much, much worse.

We’ll start with a quick overview, and then dig deeper.

“Anti-Circumvention” Provision

The Digital Millennium Copyright Act included two very significant provisions. One is DMCA 1201, the ban on circumventing technology that restricts access to or use of copyrighted works (or sharing such technology). Congress was thinking about people ripping DVDs to pirate movies or descrambling cable channels without paying, but the law it passed goes much, much farther. In fact, some US courts have interpreted it to effectively eliminate fair use if a technological restriction must be bypassed.

In the past 22 years, we’ve seen DMCA 1201 interfere with media education, remix videos, security research, privacy auditing, archival efforts, innovation, access to books for people with print disabilities, unlocking phones to work on a new carrier or to install software, and even the repair and reverse engineering of cars and tractors. It turns out that there are a lot of legitimate and important things that people do with culture and with software. Giving copyright owners the power to control those things is a disaster for human rights and for innovation.

The law is sneaky. It includes exemptions that sound good on casual reading, but are far narrower than you would imagine if you look at them carefully or in the context of 22 years of history. For instance, for the first 16 years under DMCA 1201, we tracked dozens of instances where it was abused to suppress security research, interoperability, free expression, and other noninfringing uses of copyrighted works.

It’s a terrible, unconstitutional law, which is why EFF is challenging it in court.

Unfortunately, Mexico’s version is even worse. Important cultural and practical activities are blocked by the law entirely. In the US, we and our allies have used Section 1201’s exemption process to obtain accommodations for documentary filmmaking, for teachers to use video clips in the classroom, for fans to make noncommercial remix videos, for unlocking or jailbreaking phones, for repairing and modifying cars and tractors, for using competing cartridges in 3D printers, and for archival preservation of certain works. Beyond those, we and our allies have been fighting for decades now to protect the full scope of noninfringing activities that require circumvention, so that journalism, dissent, innovation, and free expression do not take a back seat to an overbroad copyright law. Mexico’s version has an exemption process as well, but it is far more limited, in part because Mexico doesn’t have our robust fair use doctrine as a backstop.

This is not a niche issue. The U.S. Copyright Office received nearly 40,000 comments in the 2015 rulemaking. In response to a petition signed by 114,000 people, the U.S. Congress stepped in to correct the rulemaking authorities when they allowed the protection for unlocking phones to lapse in 2012.

“Notice-and-Takedown” Provision

In order to avoid the uncertainty and cost of litigation (which would have bankrupted every online platform and deprived the public of important opportunities to speak and connect), Congress enacted Section 512, which provides a “safe harbor” for various Internet-related activities. To stay in the safe harbor, service providers must comply with several conditions, including “notice and takedown” procedures that give copyright holders a quick and easy way to disable access to allegedly infringing content. Section 512 also contains provisions allowing users to challenge improper takedowns. Without these protections, the risk of potential copyright liability would prevent many online intermediaries from providing services such as hosting and transmitting user-generated content. Thus the safe harbors have been essential to the growth of the Internet as an engine for innovation and free expression.

But Section 512 is far from perfect, and again, the Mexican version is worse.

First of all, a platform can be fined simply for failing to abide by takedown requests, even if the takedown is spurious and the targeted material does not infringe. In the US, a platform that opts out of the safe harbor is still liable only if someone sues it and proves secondary liability. Platforms are already incentivized to take down content on a hair trigger to avoid potential liability, and the Mexican law adds new penalties if they don’t.

Second, we have long catalogued the many problems that arise when you provide the public a way to get material removed from the public sphere without any judicial involvement. It is sometimes deployed maliciously, to suppress dissent or criticism, while other times it is deployed with lazy indifference about whether it is suppressing speech that isn’t actually infringing.

Third, by requiring that platforms prevent material from reappearing after it is taken down, the Mexican law goes far beyond DMCA 512 by essentially mandating automatic filters. We have repeatedly written about the disastrous consequences of this kind of automated censorship.

So that’s the short version. For more detail, read on. But if you are in Mexico, consider first exercising your power to fight back against this law.

Take Action

If you are based in Mexico, we urge you to participate in R3D’s campaign “Ni Censura ni Candados” and send a letter to Mexico’s National Commission for Human Rights asking it to invalidate this flawed new copyright law. R3D will ask for your name, email address, and your comment, which will be subject to R3D’s privacy policy.

We are grateful to Luis Fernando García Muñoz of R3D (Red en Defensa de los Derechos Digitales) for his translation of the new law and for his advocacy on this issue.

In-depth legislative analysis and commentary

The text of the law is presented in full in blockquotes. EFF’s analysis has been inserted following the relevant provisions.

Provisions on Technical Protection Measures

Article 114 Bis.- In the protection of copyright and related neighboring rights, effective technological protection measures may be implemented and information on rights management. For these purposes:

I. An effective technological protection measure is any technology, device or component that, in the normal course of its operation, protects copyright, the right of the performer or the right of the producer of the phonogram, or that controls access to a work, to a performance, or to a phonogram. Nothing in this section shall be compulsory for persons engaged in the production of devices or components, including their parts and their selection, for electronic, telecommunication or computer products, provided that said products are not destined to carry Unlawful conduct, and

This provision adopts a broad definition of ‘technological protection measure’ or TPM, so that a wide range of encryption and authentication technologies will trigger this provision. The reference to copyright is almost atmospheric, since the law is not substantively restricted to penalizing those who bypass TPMs for infringing purposes.

II. The information on rights management are the data, notice or codes and, in general, the information that identifies the work, its author, the interpretation, the performer, the phonogram, the producer of the phonogram, and to the holder of any right over them, or information about the terms and conditions of use of the work, interpretation or execution, and phonogram, and any number or code that represents such information, when any of these information elements is attached to a copy or appear in relation to the communication to the public of the same.

In the event of controversies related to both fractions, the authors, performers or producers of the phonogram, or holders of respective rights, may exercise civil actions and repair the damage, in accordance with the provisions of articles 213 and 216 bis. of this Law, independently to the penal and administrative actions that proceed.

Article 114 Ter.- It does not constitute a violation of effective technological protection measures when the evasion or circumvention is about works, performances or executions, or phonograms whose term of protection granted by this Law has expired.

In other words, the law doesn’t prohibit circumvention to access works that have entered the public domain. This is small comfort: Mexico has one of the longest copyright terms in the world.

Article 114 Quater.- Actions of circumvention or evasion of an effective technological protection measure protection that controls access to a work, performance or execution, or phonogram protected by this Law, shall not be considered a violation of this Law, when:

This provision lays out some limited exceptions to the general rule of liability. But those exceptions won’t work. After more than two decades of experience with the DMCA in the United States, it is clear that regulators can’t protect fundamental rights by attempting to imagine in advance and authorize particular forms of cultural and technological innovation. Furthermore, several of these exemptions are modeled off of stale US exemptions that have proven completely inadequate in practice. The US Congress could plead ignorance in the 90s; legislators have no excuse today.

It gets worse: because Mexico does not have a general fair use rule, innovators would be entirely dependent on these limited exemptions.

I. Non-infringing reverse engineering processes carried out in good faith with respect to the copy that has been legally obtained of a computer program that effectively controls access in relation to the particular elements of said computer programs that have not been readily available to the person involved in that activity, with the sole purpose of achieving the interoperability of an independently created computer program with other programs;

If your eyes glazed over at “reverse engineering” and you assumed this covered reverse engineering generally, you would be in good company. This exemption is sharply limited, however. The reverse engineering is only authorized for the “computer program that effectively controls access” and is limited to “elements of said computer programs that have not been readily available.” It does not mention reverse engineering of computer programs that are subject to access controls – in part because the US Congress was thinking about DVD encryption and cable TV channel scrambling, not about software. If you circumvent to confirm that the software is the software claimed, do you lose access to this exemption because the program was already readily available to you? Even if you had no way to verify that claim without circumvention? Likewise, your “sole purpose” has to be achieving interoperability of an independently created computer program with other programs. It’s not clear what “independently” means, and this is not a translation error – the US law is similarly vague. Finally, the “good faith” limitation is a trap for the unwary or unpopular. It does not give adequate notice to a researcher whether their work will be considered to be done in “good faith.” Is reverse engineering for competitive advantage a permitted activity or not? Why should any non-infringing activity be a violation of copyright-related law, regardless of intent?

If you approach this provision as if it authorizes “reverse engineering” or “interoperability” generally you are imagining an exemption that is far more reasonable than what the text provides.

In the US, for example, companies have pursued litigation over interoperable garage door openers and printer cartridges all the way to appellate courts. It has never been this provision that protected interoperators. The Copyright Office has recognized this in granting exemptions to 1201 for activities like jailbreaking your phone to work with other software.

II. The inclusion of a component or part thereof, with the sole purpose of preventing minors from accessing inappropriate content, online, in a technology, product, service or device that itself is not prohibited;

It’s difficult to imagine something having this as the ‘sole purpose.’ In any event, this is far too vague to be useful for many.

III. Activities carried out by a person in good faith with the authorization of the owner of a computer, computer system or network, performed for the sole purpose of testing, investigating or correcting the security of that computer, computer system or network;

Again, if you skim this provision and believe it protects “computer security,” you are giving it too much credit. Most security researchers do not have the “sole purpose” of fixing the particular device they are investigating; they want to provide that knowledge to the necessary parties so that security flaws do not harm any of the users of similar technology. They want to advance the state of understanding of secure technology. They may also want to protect the privacy and autonomy of users of a computer, system, or network in ways that conflict with what the manufacturer would view as the security of the device. The “good faith” exemption again creates legal risk for any security researcher trying to stay on the right side of the law. Researchers often disagree with manufacturers about the appropriate way to investigate and disclose security vulnerabilities. The vague statutory provision for security testing in the United States was far too unreliable to successfully foster essential security research, something that even the US Copyright Office has now acknowledged. Restrictions on engaging in and sharing security research are also part of our active lawsuit seeking to invalidate Section 1201 as a violation of free expression.

IV. Access by the staff of a library, archive, or an educational or research institution, whose activities are non-profit, to a work, performance, or phonogram to which they would not otherwise have access, for the sole purpose to decide if copies of the work, interpretation or execution, or phonogram are acquired;

This exemption too must be read carefully. It is not a general exemption for noninfringing archival or educational uses. It is instead an extremely narrow exemption for deciding whether to purchase a work. When archivists want to break TPMs to archive an obsolete format, when educators want to take excerpts from films to discuss in class, when researchers want to run analytical algorithms on video data to measure bias or enhance accessibility, this exemption does nothing to help them. Several of these uses have been acknowledged as legitimate and impaired by the US Copyright Office.

V. Non-infringing activities whose sole purpose is to identify and disable the ability to compile or disseminate undisclosed personal identification data information, reflecting the online activities of a natural person, in a way that it does not to affect the ability of any person to gain access to a work, performance, or phonogram;

This section provides a vanishingly narrow exception, one that can be rendered null if manufacturers use TPMs in such a way that you cannot protect your privacy without bypassing the same TPM that prevents access to a copyrighted work. And rightsholders have repeatedly taken this very position in the United States. Besides that, the wording is tremendously outdated; you may want to modify the software in your child’s doll so that it doesn’t record their voice and send it back to the manufacturer; that is not clearly “online activities” – they’re simply playing with a doll at home. In the US, “personally identifiable information” also has a meaning that is narrower than you might expect.

VI. The activities carried out by persons legally authorized in terms of the applicable legislation, for the purposes of law enforcement and to safeguard national security;

This would be a good model for a general exemption: you can circumvent to do noninfringing things. Lawmakers have recognized, with this provision, that the ban on circumventing TPMs could interfere with legitimate activities that have nothing to do with copyright law, and provided a broad and general assurance that these noninfringing activities will not give rise to liability under the new regime.

VII. Non-infringing activities carried out by an investigator who has legally obtained a copy or sample of a work, performance or performance not fixed or sample of a work, performance or execution, or phonogram with the sole purpose of identifying and analyzing flaws in technologies for encoding and decoding information;

This exemption again is limited to identifying flaws in the TPM itself, as opposed to analyzing the software subject to the TPM.

VIII. Non-profit activities carried out by a person for the purpose of making accessible a work, performance, or phonogram, in languages, systems, and other special means and formats, for persons with disabilities, in terms of the provisions in articles 148, section VIII and 209, section VI of this Law, as long as it is made from a legally obtained copy, and

Why does accessibility have to be nonprofit? This means that companies trying to serve the needs of the disabled will be unable to interoperate with works encumbered by TPMs.

IX. Any other exception or limitation for a particular class of works, performances, or phonograms, when so determined by the Institute at the request of the interested party based on evidence.

It is improper to create a licensing regime that presumptively bans speech and the exercise of fundamental rights, and then requires the proponents of those rights to prove their rights to the government in advance of exercising them.  We have sued the US government over its regime and the case is pending.

Article 114 Quinquies.- The conduct sanctioned in article 232 bis shall not be considered as a violation of this Law:

These are the exemptions to the ban on providing technology capable of circumvention, as opposed to the act of circumventing oneself. They have the same flaws as the corresponding exemptions above, and they don’t even include the option to establish new, necessary exemptions over time. This limitation is present in the US regime, as well, and has sharply curtailed the practical utility of the exemptions obtained via subsequent rulemaking. They also do not include the very narrow privacy and library/archive exemptions, meaning that it is unlawful to give people the tools to take advantage of those rights.

I. When it is carried out in relation to effective technological protection measures that control access to a work, interpretation or execution, or phonogram and by virtue of the following functions:

a) The activities carried out by a non-profit person, in order to make an accessible format of a work, performance or execution, or a phonogram, in languages, systems and other modes , means and special formats for a person with a disability, in terms of the provisions of articles 148, section VIII and 209, section VI of this Law, as long as it is made from a copy legally obtained;

b) Non-infringing reverse engineering processes carried out in good faith with respect to the copy that has been legally obtained of a computer program that effectively controls access in relation to the particular elements of said computer programs that have not been readily available to the person involved in that activity, with the sole purpose of achieving the interoperability of an independently created computer program with other programs;

c) Non-infringing activities carried out by an investigator who has legally obtained a copy or sample of a work, performance or performance not fixed or sample of a work, performance or execution, or phonogram with the sole purpose of identifying and analyzing flaws in technologies for encoding and decoding information;

d) The inclusion of a component or part thereof, with the sole purpose of preventing minors from accessing inappropriate content, online, in a technology, product, service or device that itself is not prohibited;

e) Non-infringing activities carried out in good faith with the authorization of the owner of a computer, computer system or network, carried out for the sole purpose of testing, investigating or correcting the security of that computer, computer system or network, and

f ) The activities carried out by persons legally authorized in terms of the applicable legislation, for the purposes of law enforcement and to safeguard national security.

II. When it is carried out in relation to effective technological measures that protect any copyright or related right protected in this Law and by virtue of the following functions:

a) Non-infringing reverse engineering processes carried out in good faith with respect to the copy that has been legally obtained of a computer program that effectively controls access in relation to the particular elements of said computer programs that have not been readily available to the person involved in that activity, with the sole purpose of achieving the interoperability of an independently created computer program with other programs, and

b) The activities carried out by persons legally authorized in terms of the applicable legislation, for the purposes of law enforcement and to safeguard national security.

Article 114 Sexies.- It is not violation of rights management information, the suspension, alteration, modification or omission of said information, when it is carried out in the performance of their functions by persons legally authorized in terms of the applicable legislation, for the effects of law enforcement and safeguarding national security.

Article 232 Bis.- A fine of 1,000 UMA to 20,000 UMA will be imposed on whoever produces, reproduces, manufactures, distributes, imports, markets, leases, stores, transports, offers or makes available to the public, offer to the public or provide services or carry out any other act that allows having devices, mechanisms, products, components or systems that:

Again, it’s damaging to culture and innovation to ban non-infringing activities and technologies simply because they circumvent access controls.

I. Are promoted, published or marketed with the purpose of circumventing effective technological protection measure;

II. Are used predominantly to circumvent any effective technological protection measure, or

This seems to suggest that a technologist who makes a technology with noninfringing uses can be liable because others, independently, have used it unlawfully.

III. Are designed, produced or executed with the purpose of avoiding any effective technological protection measure.

Article 232 Ter.- A fine of 1,000 UMA to 10,000 UMA will be imposed, to those who circumvent an effective technological protection measure that controls access to a work, performance, or phonogram protected by this Law.

Article 232 Quáter.- A fine of 1,000 UMA to 20,000 UMA will be imposed on those who, without the respective authorization:

I. Delete or alter rights management information;

This kind of vague prohibition invites nuisance litigation. There are many harmless ways to ‘alter’ rights management information – for accessibility, convenience, or even clarity. In addition, when modern cameras take pictures, they often automatically apply information that identifies the author. This creates privacy concerns, and it is a common social media practice to strip that identifying information in order to protect users. While large platforms can obtain a form of authorization via their terms of service, it should not be unlawful to remove identifying information in order to protect the privacy of persons involved in the creation of a photograph (for instance, those attending a protest or religious event).
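To see how ordinary this practice is, here is a minimal sketch (assuming the widely used Pillow imaging library; the function name is hypothetical) of how a platform or an individual might strip identifying metadata from a photo before sharing it:

    from PIL import Image

    def strip_metadata(src_path: str, dst_path: str) -> None:
        """Re-save an image with pixel data only, dropping EXIF fields
        such as author name, device model, and GPS coordinates."""
        with Image.open(src_path) as img:
            clean = Image.new(img.mode, img.size)   # new image, no metadata attached
            clean.putdata(list(img.getdata()))      # copy the pixels only
            clean.save(dst_path)

Under a literal reading of the prohibition above, re-saving a photo this way to protect the privacy of a protest attendee could be framed as “deleting or altering” rights management information.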

II. Distribute or import for distribution, rights management information knowing that this information has been deleted, altered, modified or omitted without authorization, or

III. Produce, reproduce, publish, edit, fix, communicate, transmit, distribute, import, market, lease, store, transport, disclose or make available to the public copies of works, performances, or phonograms, knowing that the rights management information has been deleted, altered, modified or omitted without authorization.

Federal Criminal Code

Article 424 bis.- A prison sentence of three to ten years and two thousand to twenty thousand days fine will be imposed:

I. Whoever produces, reproduces, enters the country, stores, transports, distributes, sells or leases copies of works, phonograms, videograms or books, protected by the Federal Law on Copyright, intentionally, for the purpose of commercial speculation and without the authorization that must be granted by the copyright or related rightsholder according to said law.

The same penalty shall be imposed on those who knowingly contribute or provide in any way raw materials or supplies intended for the production or reproduction of works, phonograms, videograms or books referred to in the preceding paragraph;

This is ridiculously harsh and broad, even in the most generous reading. And the chilling effect of this criminal prohibition will go even further. If one “knows” they are providing paper to someone but do not know that person is using it to print illicit copies, there should be complete legal clarity that they are not liable, let alone criminally liable.

II. Whoever manufactures, for profit, a device or system whose purpose is to deactivate the electronic protection devices of a computer program, or

As discussed, there are many legitimate and essential reasons for deactivating TPMs.

III. Whoever records, transmits or makes a total or partial copy of a protected cinematographic work, exhibited in a movie theater or places that substitute for it, without the authorization of the copyright or related rightsholder.

Jail time for filming any part of a movie in a theater is absurdly draconian and disproportionate.

Article 424 ter.- A prison sentence of six months to six years and five thousand to thirty thousand days fine will be imposed on whoever that sells to any final consumer on the roads or in public places, intentionally, for the purpose of commercial speculation, copies of works, phonograms, videograms or books, referred to in section I of the previous article.

If the sale is made in commercial establishments, or in an organized or permanent manner, the provisions of article 424 Bis of this Code will be applied.

Again, jail for such a violation is extremely disproportionate. The same comment applies to many of the following provisions.

Article 425.- A prison sentence of six months to two years or three hundred to three thousand days fine will be imposed on anyone who knowingly and without right exploits an interpretation or an execution for profit.

Article 426.- A prison term of six months to four years and a fine of three to three thousand days will be imposed, in the following cases:

I. Whoever manufactures, modifies, imports, distributes, sells or leases a device or system to decipher an encrypted satellite signal, carrier of programs, without authorization of the legitimate distributor of said signal;

II. Whoever performs, for profit, any act with the purpose of deciphering an encrypted satellite signal, carrier of programs, without authorization from the legitimate distributor of said signal;

III. Whoever manufactures or distributes equipment intended to receive an encrypted cable signal carrying programs, without authorization from the legitimate distributor of said signal, or

IV. Whoever receives or assists another to receive an encrypted cable signal carrying programs without the authorization of the legitimate distributor of said signal.

Article 427 Bis.- Who, knowingly and for profit, circumvents without authorization any effective technological protection measure used by producers of phonograms, artists, performers, or authors of any work protected by copyright or related rights, it will be punished with a prison sentence of six months to six years and a fine of five hundred to one thousand days.

Article 427 Ter.- To who, for profit, manufactures, imports, distributes, rents or in any way markets devices, products or components intended to circumvent an effective technological protection measure used by phonogram producers, artists or performers, as well as the authors of any work protected by copyright or related rights, will be imposed from six months to six years of prison and from five hundred to one thousand days fine.

Article 427 Quater.- To those who, for profit, provide or offer services to the public intended mainly to avoid an effective technological protection measure used by phonogram producers, artists, performers, or performers, as well as the authors of any protected work. by copyright or related right, it will be imposed from six months to six years in prison and from five hundred to a thousand days fine.

Article 427 Quinquies.- Anyone who knowingly, without authorization and for profit, deletes or alters, by himself or through another person, any rights management information, will be imposed from six months to six years in prison and five hundred to one thousand days fine.

The same penalty will be imposed on who for profit:

I. Distribute or import for its distribution rights management information, knowing that it has been deleted or altered without authorization, or

II. Distribute, import for distribution, transmit, communicate, or make available to the public copies of works, performances, or phonograms, knowing that rights management information has been removed or altered without authorization.

Notice and takedown provisions

Article 114 Septies.- The following are considered Internet Service Providers:

I. Internet Access Provider is the person who transmits, routes or provides connections for digital online communications without modification of their content, between or among points specified by a user, of material of the user’s choosing, or that makes the intermediate and transient storage of that material done automatically in the course of a transmission, routing or provision of connections for digital online communications.

II. Online Service Provider is a person who performs any of the following functions:

a) Caching carried out through an automated process;

b) Storage, at the request of a user, of material that is hosted in a system or network controlled or operated by or for an Internet Service Provider, or

c) Referring or linking users to an online location by using information location tools, including hyperlinks and directories.

Article 114 Octies.- The Internet Service Providers will not be responsible for the damages caused to copyright holders, related rights and other holders of any intellectual property right protected by this Law, for the copyright or related rights infringements that occur in their networks or online systems, as long as they do not control, initiate or direct the infringing behavior, even if it takes place through systems or networks controlled or operated by them or on their behalf, in accordance with the following:

I. The Internet Access Providers will not be responsible for the infringement, as well as the data, information, materials and contents that are transmitted or stored in their systems or networks controlled or operated by them or on their behalf when:

For clarity: this is the section that applies to those who provide your Internet subscription, as opposed to the websites and services you reach over the Internet.

a ) Does not initiate the chain of transmission of the materials or content nor select the materials or content of the transmission or its recipients, and

b) Include and do not interfere with effective standard technological measures, which protect or identify material protected by this law, which are developed through an open and voluntary process by a broad consensus of copyright holders and service providers, which are available from in a reasonable and non-discriminatory manner, and that do not impose substantial costs on service providers or substantial burdens on their network systems.

There is no such thing as a standard technological measure, so this is just dormant poison. A provision like this is in the US law and there has never been a technology adopted according to such a broad consensus.

II. The Online Service Providers will not be responsible for the infringements, as well as the data, information, materials and content that are stored or transmitted or communicated through their systems or networks controlled or operated by them or on their behalf, and in cases that direct or link users to an online site, when:

First, for clarity, this is the provision that applies to the services and websites you interact with online, including sites like YouTube, Dropbox, Cloudflare, and search engines, but also sites of any size like a bulletin-board system or a server you run to host materials for friends and family or for your activist group.

The consequences for linking are alarming. Linking isn’t infringing in the US or Canada, and this is an important protection for public discourse. In addition, a linked resource can change from a non-infringing page to an infringing one.

a) In an expeditious and effective way, they remove, withdraw, eliminate or disable access to materials or content made available, enabled or transmitted without the consent of the copyright or related rights holder, and that are hosted in their systems or networks, once you have certain knowledge of the existence of an alleged infringement in any of the following cases:

1. When it receives a notice from the copyright or related rights holder or by any person authorized to act on behalf of the owner, in terms of section III of this article, or

It’s extremely dangerous to take a mere allegation as “certain knowledge” given how many bad faith or mistaken copyright takedowns are sent.

2. When it receives a resolution issued by the competent authority ordering the removal, elimination or disabling of the infringing material or content.

In both cases, reasonable measures must be taken to prevent the same content that is claimed to be infringing from being uploaded to the system or network controlled and operated by the Internet Service Provider after the removal notice or the resolution issued by the competent authority.

This provision effectively mandates filtering of all subsequent uploads, comparing them to a database of everything that has been requested to be taken down. Filtering technologies are overly broad and unreliable, and cannot make infringement determinations. This would be a disaster for speech, and the expense would also be harmful to small competitors or nonprofit online service providers.
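To make the point concrete, here is a minimal sketch (with hypothetical helper names) of the crudest “staydown” filter a platform could build to comply: fingerprint everything removed after a notice and refuse exact re-uploads.

    import hashlib

    # Fingerprints of content previously removed in response to notices.
    removed_hashes: set[str] = set()

    def record_takedown(content: bytes) -> None:
        """Remember a hash of material taken down after a notice."""
        removed_hashes.add(hashlib.sha256(content).hexdigest())

    def upload_is_blocked(content: bytes) -> bool:
        """Reject uploads that byte-for-byte match removed material.
        The check is purely mechanical: it cannot weigh licensing,
        quotation, criticism, or any other lawful context."""
        return hashlib.sha256(content).hexdigest() in removed_hashes

Real-world matching systems are fuzzier than this exact-hash comparison, but they share the same defect: they detect similarity, not infringement, so lawful uses of the same material are swept up along with unlawful ones.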

b) If they remove, disable or suspend unilaterally and in good faith, access to a publication, dissemination, public communication and/or exhibition of the material or content, to prevent the violation of applicable legal provisions or to comply with the obligations derived of a contractual or legal relationship, provided they take reasonable steps to notify the person whose material is removed or disabled.

c) They have a policy that provides for the termination of accounts of repeat offenders, which is publicly known by their subscribers;

This vague provision is also often a sword wielded by rightsholders. When the service provider is essential, such as access to the Internet, termination is an extreme measure and should not be routine.

d) Include and do not interfere with effective standard technological measures that protect or identify material protected by this Law, which are developed through an open and voluntary process by a broad consensus of copyright holders and service providers, which are available in a reasonable and non-discriminatory manner, and that do not impose substantial costs on service providers or substantial burdens on their systems or networks,

Again, there’s not yet any technology considered a standard technological measure.

e) In the case of Online Service Providers referred to in subsections b) and c) of the section II of article 114 Septies, in addition to the provisions of the immediately preceding paragraph, must not receive a financial benefit attributable to the infringing conduct, when the provider has the right and ability to control the infringing conduct.

This is a bit sneaky and could seriously undermine the safe harbor. Platforms do profit from user activity, and do technically have the ability to remove content – if that’s enough to trigger liability or to defeat a safe harbor, then the safe harbor is essentially null for any commercial platform.

III. The notice referred to in subsection a), numeral 1, of the previous section, must be submitted through the forms and systems as indicated in the regulations of the Law, which will establish sufficient information to identify and locate the infringing material or content.

Said notice shall contain as a minimum:

1. Indicate of the name of the rightsholder or legal representative and the means of contact to receive notifications;

2. Identify the content of the claimed infringement;

3. Express the interest or right regarding the copyright, and

4. Specify the details of the electronic location to which the claimed infringement refers.

The user whose content is removed, deleted or disabled due to probable infringing behavior and who considers that the Online Service Provider is in error, may request the content be restored through a counter-notice, in which he/she must demonstrate the ownership or authorization he/she has for that specific use of the content removed, deleted or disabled, or justify its use according to the limitations or exceptions to the rights protected by this Law.

The Online Service Provider who receives a counter-notice in accordance with the provisions of the preceding paragraph, must report the counter-notice to the person who submitted the original notice, and enable the content subject of the counter-notice, unless the person who submitted the original notice initiates a judicial or administrative procedure, a criminal complaint or an alternative dispute resolution mechanism within a period not exceeding 15 business days since the date the Online Service Provider reported the counter-notice to the person who submitted the original notice.

It should be made clear that the rightsholder is obligated to consider exceptions and limitations before sending a takedown.

IV. Internet Service Providers will not be obliged to supervise or monitor their systems or networks controlled or operated by them or on their behalf, to actively search for possible violations of copyright or related rights protected by this Law and that occur online.

In accordance with the provisions of the Federal Law on Telecommunications and Broadcasting, Internet Service Providers may carry out proactive monitoring to identify content that violates human dignity, is intended to nullify or impair rights and freedoms, as well as those that stimulate or advocate violence or a crime.

This provision is sneaky. It says “you don’t have to filter, but you’re allowed to look for content that impairs rights (like copyright) or that advocates a crime (like the new crimes in this law).” Given that the law also requires the platform to make sure that users cannot re-upload content that is taken down, it’s cold comfort to say here that they don’t have to filter proactively. At best, this means that a platform does not need to include works in its filters until it has received a takedown request for the works in question.

V. The impossibility of an Internet Service Provider to meet the requirements set forth in this article by itself does not generate liability for damages for violations of copyright and related rights protected by this Law.

This provision is unclear. Other provisions seem to indicate liability for failure to enact these procedures. Likely this means that a platform would suffer the fines below, but not liability for copyright infringement, if it is impossible to comply.

Article 232 Quinquies.- A fine of 1,000 UMA to 20,000 UMA will be imposed when:

I. Anyone who makes a false statement in a notice or counter-notice, affecting any interested party when the Online Service Provider has relied on that notice to remove, delete or disable access to the content protected by this Law or has rehabilitated access to the content derived from said counter-notice;

This is double-edged: it potentially deters both notices and counter-notices. It also does not provide a mechanism to prevent censorship; a platform remains obligated to act on notices that contain falsehoods.

II. To the Online Service Provider that does not remove, delete or disable access in an expedited way to the content that has been the subject of a notice by the owner of the copyright or related right or by someone authorized to act on behalf of the holder, or competent authority, without prejudice to the provisions of article 114 Octies of this Law, or

This is a shocking expansion of liability. In the US, the safe harbor provides important clarity, but even without the safe harbor, a platform is only liable if they have actually committed secondary copyright infringement. Under this provision, even a spurious takedown must be complied with to avoid a fine. This will create even worse chilling effects than what we’ve seen in the US.

III. To the Internet Service Provider that does not provide expeditiously to the judicial or administrative authority, upon request, the information that is in their possession and that identifies the alleged infringer, in the cases in which said information is required in order to protect or enforce copyright or related rights within a judicial or administrative proceeding.

We have repeatedly seen these kinds of information requests used alongside a pointless copyright claim in order to unmask critics or target people for harassment. Handing out personal information should not be automatic simply because of an allegation of copyright infringement. In the US, we have fought for and won protections for anonymous speakers when copyright owners seek to unmask them because of their expression of their views. For instance, we recently defended the anonymity of a member of a religious community who questioned a religious organization, when the organization sought to abuse copyright law to learn their identity.

Share
Categories
Commentary Corporate Speech Controls free speech Intelwars International

Turkey’s New Internet Law Is the Worst Version of Germany’s NetzDG Yet

For years, free speech and press freedoms have been under attack in Turkey. The country has the distinction of being the world’s largest jailer of journalists and has in recent years been cracking down on online speech. Now, a new law, passed by the Turkish Parliament on the 29th of July, introduces sweeping new powers and takes the country another giant step towards further censoring speech online. The law was ushered through parliament quickly and without allowing for opposition or stakeholder inputs and aims for complete control over social media platforms and the speech they host. The bill was introduced after a series of allegedly insulting tweets aimed at President Erdogan’s daughter and son-in-law and ostensibly aims to eradicate hate speech and harassment online. Turkish lawyer and Vice President of Ankara Bar Association IT, Technology & Law Council Gülşah Deniz-Atalar called the law “an attempt to initiate censorship to erase social memory on digital spaces.”

Once ratified by President Erdogan, the law would mandate social media platforms with more than a million daily users to appoint a local representative in Turkey, which activists are concerned will enable the government to conduct even more censorship and surveillance. Failure to do so could result in advertisement bans, steep penalty fees, and, most troublingly, bandwidth reductions. Shockingly, the legislation introduces new powers for Courts to order Internet providers to throttle social media platforms’ bandwidth by up to 90%, practically blocking access to those sites. Local representatives would be tasked with responding to government requests to block or take down content. The law foresees that companies would be required to remove content that allegedly violates “personal rights” and the “privacy of personal life” within 48 hours of receiving a court order or face heavy fines. It also includes provisions that would require social media platforms to store users’ data locally, prompting fears that providers would be obliged to transmit those data to the authorities, which experts expect to aggravate the already rampant self-censorship of Turkish social media users. 

While Turkey has a long history of Internet censorship, with several hundred thousand websites currently blocked, this new law would establish unprecedented control of speech online by the Turkish government. When introducing the new law, Turkish lawmakers explicitly referred to the controversial German NetzDG law and a similar initiative in France as a positive example. 

Germany’s Network Enforcement Act, or NetzDG for short, claims to tackle “hate speech” and illegal content on social networks and passed into law in 2017 (and has been tightened twice since). Hastily passed amid vocal criticism from lawmakers, academics, and civil society experts, the law mandates social media platforms with one million users to name a local representative authorized to act as a focal point for law enforcement and receive content takedown requests from public authorities. The law mandates social media companies with more than two million German users to remove or disable content that appears to be “manifestly illegal” within 24 hours of having been alerted to the content. The law has been heavily criticized in Germany and abroad, and experts have suggested that it interferes with the EU’s central Internet regulation, the e-Commerce Directive. Critics have also pointed out that the strict time window to remove content does not allow for a balanced legal analysis. Evidence is indeed mounting that NetzDG’s conferral of policing powers to private companies continuously leads to takedowns of innocuous posts, thereby undermining the freedom of expression.

A successful German export

Since its introduction, NetzDG has been a true Exportschlager, or export success, as it has inspired a number of similarly harmful laws in jurisdictions around the globe. A recent study reports that at least thirteen countries, including Venezuela, Australia, Russia, India, Kenya, the Philippines, and Malaysia, have proposed or enacted laws based on the regulatory structure of NetzDG since it entered into force.

In Russia, a 2017 law encourages users to report allegedly “unlawful” content and requires social media platforms with more than two million users to take down the content in question as well as possible re-posts, which closely resembles the German law. Russia’s copy-pasting of Germany’s NetzDG confirmed critics’ worst fears: that the law would serve as a model and legitimization for autocratic governments to censor online speech. 

Recent Malaysian and Philippine laws aimed at tackling “fake news” and misinformation also explicitly refer to NetzDG, although NetzDG’s scope does not even extend to cover misinformation. Both countries applied NetzDG’s model of imposing steep fines (and, in the case of the Philippines, up to 20 years of imprisonment) on social media platforms that fail to remove content swiftly.

In Venezuela, another 2017 law that expressly refers to NetzDG takes its logic one step further by imposing a six-hour window for removing content considered to be “hate speech”. The Venezuelan law—which includes weak definitions and a very broad scope and was also legitimized by invoking the German initiative—is a potent and flexible tool for the country’s government to oppress dissidents.

Singapore is yet another country that got inspired by Germany’s NetzDG: In May 2019, the Protection from Online Falsehoods and Manipulation Bill was adopted, which empowers the government to order platforms to correct or disable content, accompanied with significant fines if the platform fails to comply. A government report preceding the introduction of the law explicitly references the German law. 

Like these examples, the recently adopted Turkish law shows clear parallels with the German approach: targeting platforms of a certain size, the law incentivizes platforms to implement takedown requests by stipulating significant fines, thereby turning platforms into the ultimate gatekeepers tasked with deciding on the legality of online speech. In important ways, the Turkish law goes well beyond NetzDG, as its scope includes not only social media platforms but also news sites. In combination with its exorbitant fines and the threat to block access to websites, the law enables the Turkish government to erase any dissent, criticism or resistance.

Even worse than NetzDG

But the fact that the Turkish law goes even beyond NetzDG highlights the danger of exporting Germany’s flawed law internationally. When Germany passed the law in 2017, states around the world were getting increasingly interested in regulating alleged and real online threats, ranging from hate speech to illegal content and cyberbullying. Already problematic in Germany, where it is embedded in a functioning legal system with appropriate checks and balances and equipped with safeguards absent from the laws it inspired, NetzDG has served to legitimize draconian censorship legislation across the globe. While it’s always bad if flawed laws are being copied elsewhere, this is particularly problematic in authoritarian states that have already pushed for and implemented severe censorship and restrictions on free speech and the freedom of the press. While the anti-free speech tendencies of countries like Turkey, Russia, Venezuela, Singapore and the Philippines long predate NetzDG, the German law surely provides legitimacy for them to further erode fundamental rights online.

Share
Categories
Commentary Digital Rights and the Black-led Movement Against Police Violence free speech Intelwars privacy

San Francisco Police Accessed Business District Camera Network to Spy on Protestors

The San Francisco Police Department (SFPD) conducted mass surveillance of protesters at the end of May and in early June using a downtown business district’s camera network, according to new records obtained by EFF. The records show that SFPD received real-time live access to hundreds of cameras as well as a “data dump” of camera footage amid the ongoing demonstrations against police violence.

The camera network is operated by the Union Square Business Improvement District (BID), a special taxation district created by the City and County of San Francisco, but operated by a private non-profit organization. These networked cameras, manufactured by Motorola Solutions’ brand Avigilon, are high definition, can zoom in on a person’s face to capture face-recognition ready images, and are linked to a software system that can automatically analyze content, including distinguishing between when a car or a person passes within the frame. Motorola Solutions recently unveiled plans to expand its portfolio of tools for aiding public-private  partnerships with law enforcement by making it easier for police to gain access to private cameras and video analytic tools like license plate readers. 

These unregulated camera networks pose huge threats to civil liberties

Union Square BID is only one of several special assessment districts in San Francisco that have begun deploying these cameras. These organizations are quasi-government agencies that act with state authority to collect taxes and provide services such as street cleaning. While they are run by private non-profits, they are funded with public money and carry out public services. However, in this case, the camera deployments were driven by one particular private citizen working with these districts.

In 2012, cryptocurrency mogul Chris Larsen started providing money for what would eventually be $4 million worth of cameras deployed by businesses within the special assessment districts. These camera networks are managed by staff within the neighborhood and streamed to a local control room, but footage can be shared with other entities, including individuals and law enforcement, with little oversight. At least six special districts have installed these camera networks, the largest of which belongs to the Union Square BID. The camera networks now blanket a handful of neighborhoods and cover 135 blocks, according to a recent New York Times report.

The Union Square Business Improvement District’s surveillance camera map

According to logs obtained by EFF, SFPD has regularly sought footage related to alleged looting and assault in the area associated with the ongoing protests against police violence. However, SFPD has gone beyond simply investigating particular incident reports and instead engaged in indiscriminate surveillance of protesters. 

Review the documents on DocumentCloud or scroll to the bottom of the page for direct downloads.

SFPD requested and received a “data dump” of 12 hours of footage from every camera in the Union Square BID from 5:00 pm on May 30, 2020, to 5:00 am on May 31, 2020. While this may have coincided with an uptick in property destruction in the protests’ vicinity, the fact that SFPD requested all footage without any kind of specificity means that anyone who attended the protests, or indeed was simply passing by, could have been caught in the surveillance dragnet.

Also on May 31, SFPD’s Homeland Security Unit requested real-time access to the Union Square BID camera network “to monitor potential violence,” claiming they needed it for “situational awareness and enhanced response.” At 9:38am, the BID received SFPD’s initial request via email, and by 11:47am, the BID’s Director of Services emailed a technical specialist saying, “We have approved this request to provide access to all of our cameras for tonight and tomorrow night. Can you grant 48 hour remote access to [the officer]?” 

This email exchange shows that SFPD was given an initial two days of monitoring of live feeds, as well as technical assistance from the BID,  to get their remote access up and running. An email dated June 2 shows that SFPD requested access to live feeds of the camera network for another five days. The email reads:

“I…have been tasked by our Captain to reach out to see if we can extend our request for you [sic] BID cameras. We greatly appreciate you guys allowing us access for the past 2 days, but we are hoping to extend our access through the weekend. We have several planned demos all week and we anticipate several more over the weekend which are the ones we worry will turn violent again.”

SFPD confirmed to EFF that the Union Square BID granted extended access for live feeds.  

Prior to these revelations, Chris Larsen, the funder of the special assessment district cameras, was on record as describing live access to the camera networks as illegal. “The police can’t monitor it live,” said Larsen in a recent interview, “That’s actually against the law in San Francisco.”

An example of an Avigilon camera in San Francisco’s Japantown.

Last year, San Francisco passed a law restricting how and when government agencies may acquire, borrow, and use surveillance technology. Under these new rules, police cannot use any surveillance technology without first going through a public process and obtaining approval for a usage policy by the San Francisco Board of Supervisors. The same restrictions apply to police obtaining information or data based on an external entity’s use of surveillance technology. These records demonstrate a violation of San Francisco’s Surveillance Technology Ordinance. SFPD’s unfettered and indiscriminate live access for over a week to a third-party camera network to monitor protests was exactly the type of harm the ordinance was intended to protect against. 

These unregulated camera networks pose huge threats to civil liberties, even outside the context of the largest protest in U.S. history. In addition to cameras mounted outside of or facing private businesses, many of the special assessment district cameras also provide full view of public parks, pedestrian walkways, and other plazas where people might congregate, socialize, or protest. These documents prove that constant surveillance of these locations might capture, deliberately or accidentally, gatherings protected under the First Amendment. When those gatherings involve protesting the police or other existing power structures, law enforcement access to these cameras could open people up to retribution, harassment, or increased surveillance and ultimately chill participation in civic society. SFPD must immediately stop accessing special assessment district camera networks to indiscriminately spy on protestors.

Share
Categories
Commentary free speech Intelwars

EFF and 45 Human Rights and Civil Liberties Groups Condemn Federal Law Enforcement Actions Against Protesters in Portland

EFF joined dozens of other groups in a letter condemning the behavior of federal law enforcement agencies in Portland, Oregon. Despite the wishes of local government officials, the federal government deployed law enforcement, including U.S. Marshals and Customs and Border Protection officers, to Portland. The federal government officially explained these actions as an effort to protect federal buildings, but it appears to be a militarized counter-insurgent effort to suppress protesters and the residents of Portland.

The coalition—which includes Fight for the Future, Media Justice, and PDX Privacy—called for unwanted federal forces to be removed from Portland and urged Congress to investigate the pattern and practice of abuses against protesters. 

You can read the letter in full here, and learn more from EFF’s coverage and analysis of police response to the 2020 protests. 

Share
Categories
free speech Intelwars Section 230 of the Communications Decency Act

The PACT Act’s Attempt to Help Internet Users Hold Platforms Accountable Will End Up Hurting Online Speakers

Recently, nearly every week brings a new effort to undercut or overhaul a key U.S. law—47 U.S.C. § 230 (“Section 230”)—that protects online services and allows Internet users to express themselves. Many of these proposals jeopardize users’ free speech and privacy, while others are thinly-veiled attacks against online services that the President and other officials do not like.

These attacks on user expression and the platforms we all rely on are serious. But they do not represent serious solutions to a real problem: that a handful of large online platforms dominate users’ ability to organize, connect, and speak online.

The Platform Accountability and Consumer Transparency (PACT) Act, introduced last month by Senators Brian Schatz (D-HI) and John Thune (R-SD), is a serious effort to tackle that problem. The bill builds on good ideas, such as requiring greater transparency around platforms’ decisions to moderate their users’ content—something EFF has championed as a voluntary effort as part of the Santa Clara Principles.

The PACT Act’s implementation of these good ideas, however, is problematic. The bill’s modifications of Section 230 will lead to greater, legally required online censorship, likely harming disempowered communities and disfavored speakers. It will also imperil the existence of small platforms and new entrants trying to compete with Facebook, Twitter, and YouTube by saddling them with burdensome content moderation practices and increasing the likelihood that they will be dragged through expensive litigation. The bill also has several First Amendment problems because it compels online services to speak and interferes with their rights to decide for themselves when and how to moderate the content their users post.

The PACT Act has noble intentions, but EFF cannot support it. This post details what the PACT Act does, and the problems it will create for users online.

The PACT Act’s Amendments to Section 230 Will Lead to Greater Online Censorship

Before diving into the PACT ACT’s proposed changes to Section 230, let’s review what Section 230 does: it generally protects online platforms from liability for hosting user-generated content that others claim is unlawful. For example, if Alice has a blog on WordPress, and Bob accuses Clyde of having said something terrible in the blog’s comments, Section 230(c)(1) ensures that neither Alice nor WordPress are liable for Bob’s statements about Clyde.

The PACT Act ends Section 230(c)(1)’s immunity for user-generated content if services fail to remove the material upon receiving a notice claiming that a court declared it illegal. Platforms with more than a million monthly users or that have more than $25 million in annual revenue have to respond to these takedown requests and remove the material within 24 hours. Smaller platforms must remove the materials “within a reasonable period of time based on size and capacity of provider.” If the PACT Act passes, Clyde could sue Alice over something she didn’t even say—Bob’s statement in the comments section that Clyde claims is unlawful.
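
To make the bill’s two-tier deadline concrete, here is a small illustrative sketch (in Python) of the logic as described above. The user and revenue thresholds come from the bill as summarized here; the function name and the placeholder for the undefined “reasonable period” are our own assumptions, not language from the legislation.

    # Illustrative sketch of the PACT Act's two-tier takedown deadline as
    # described above. The "reasonable period" for smaller providers is left
    # open-ended in the bill, so the string below is only a placeholder.

    def takedown_deadline(monthly_users: int, annual_revenue_usd: int) -> str:
        is_large = monthly_users > 1_000_000 or annual_revenue_usd > 25_000_000
        if is_large:
            return "remove within 24 hours of receiving the notice"
        return "remove within a 'reasonable period' based on size and capacity"

    # Example: a small forum with 50,000 monthly users and $200,000 in revenue
    # would fall under the open-ended standard rather than the 24-hour rule.
    print(takedown_deadline(50_000, 200_000))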

At first blush, this seems uncontroversial—after all, why should services enjoy special immunity for continuing to host content that is deemed unlawful or is otherwise unprotected by the First Amendment? The problem is that the PACT Act poorly defines what qualifies as a court order, fails to provide strong protections, and sets smaller platforms up for even greater litigation risks than their larger competitors.

EFF has seen far too often that notice-and-takedown regimes result in greater censorship of users’ speech. The reality is that people abuse these systems to remove speech they do not like, rather than content that is in fact illegal. And when platforms face liability risks for continuing to host speech that has been flagged, they are likely to act cautiously and remove the speech.

The PACT Act’s thumb on the scale of removing speech means that disempowered communities will be most severely impacted, as they historically have been targeted by claims that their speech is illegal or offensive. Moreover, the PACT Act fails to include any penalties for abusing its new takedown regime, which should be a bare minimum to ensure protection for users’ free expression.

Further, the PACT Act’s loose definition of what would qualify as a judicial order creates its own headaches. The First Amendment generally protects speech until it is declared unprotected at the end of a case, usually after a trial. And those decisions are regularly appealed and higher courts often reverse them, as most famously seen in New York Times v. Sullivan. The Act contains no requirement that the court’s order be a final judgment or order, nor does it carve out preliminary or other non-final orders or default judgments. We know that people often use default judgments—in which one party gets a court order that is not meaningfully contested by anyone—to attempt to remove speech they do not like.

The PACT Act does not account for these concerns. Instead, it puts platforms in the position of having to decide—on risk of losing Section 230 immunity—which judicial orders are legitimate, a recipe that will lead to abuse and the removal of lawful speech.

Additionally, the PACT Act’s different deadlines for removing material based on platforms’ size will create a torrent of litigation for smaller services. For larger platforms, the PACT Act is clear: takedowns of materials claimed to be illegal must occur within 24 hours. For smaller platforms, the deadline is open-ended: a reasonable time period based on the size and capacity of the provider.

In theory, the bill’s flexible standard for smaller services is a good thing, since smaller services have fewer resources to moderate and remove their users’ content. But in practice, it will expose them to greater, more expensive litigation. This is because determining what is “reasonable” based on the specific size and capacity of a provider will require more factual investigation, including costly discovery, before a court and/or jury decides whether the service acted reasonably. For larger platforms, the legal question is far simpler: did they remove the material within 24 hours?

These are not abstract concerns. As one report found, litigating a case through discovery based on claims that user-generated material was illegal can cost platforms more than $500,000. Faced with those potentially crippling legal bills, smaller platforms will immediately remove content that is claimed to be illegal, resulting in greater censorship.

Requiring Content Moderation and Takedowns Will Hamper Much-Needed Competition for Online Platforms

The PACT Act’s goal of making online platforms more accountable to their users is laudable, but its implementation will likely reinforce the dominance of the largest online services.

In addition to requiring that nearly all services hosting user-generated content institute a notice-and-takedown scheme, the PACT Act also requires them to create comprehensive processes for responding to complaints about offensive content and the services’ moderation decisions. Specifically, the bill requires services to (1) create policies that detail what is permitted on their services, (2) respond to complaints regarding content on their platform that others do not like and user complaints about improper moderation, (3) quickly act on and report back on their reasons for decisions in response to a complaint, while also permitting users to appeal those decisions, and (4) publish transparency reports regarding their takedowns, appeals, and other moderation decisions. A failure to implement any of these policies is grounds for investigation and potential enforcement by the Federal Trade Commission.

The PACT Act further requires large platforms to respond to user complaints within 14 days, while smaller platforms have an open-ended deadline that depends on their size and capacity.

The burdens associated with complying with all of these requirements cannot be overlooked. Responding to each and every user complaint about content they believe is offensive, or a takedown they believe was in error, requires all platforms to employ content moderators and institute systems to ensure that each request is resolved in a short time period.

Unfortunately, the only platforms that could easily comply with the PACT Act’s moderation requirements are the same dominant platforms that already employ teams of content moderators. The PACT Act’s requirements would further cement the power of Facebook, YouTube, and Twitter, making it incredibly difficult for any new competitor to unseat them. The bill’s reach means that even medium-sized news websites with comment boards would likely have to comply with the PACT Act’s moderation requirements, too. 

What’s missing from the PACT Act’s approach is that Section 230 already levels the playing field by providing all online services the same legal protections for user-generated content that the dominant players enjoy. So instead of increasing legal and practical burdens on entities trying to unseat Facebook, YouTube, and Twitter, Congress should be more focused on using antitrust law and other mechanisms to reverse the outsized market power those platforms have. 

Legally Mandating Content Moderation And Transparency Reporting Would Violate the First Amendment

The PACT Act’s aim, increasing online services’ accountability for their speech moderation decisions, is commendable. EFF has been part of a broader coalition calling for this very same type of accountability, including efforts to push services to have clear policies, to allow users to appeal moderation decisions they think are wrong, and to publish reports about those removal decisions.

But we don’t believe these principles should be legally mandated, which is precisely what the PACT Act does. As described above, the bill requires online services to publish policies describing what content is permitted, respond to complaints about content and their moderation decisions, explain their moderation decisions to affected users, and publish reports about their moderation decisions. If they don’t do all those things to the government’s satisfaction, online services could face FTC enforcement.

If legally mandated, the requirements would be a broad and unconstitutional intrusion on the First Amendment rights of platforms to decide for themselves whether and how to manage their users’ content, including how to respond to complaints about violations of their policies or mistakes they’ve made in moderating content. The First Amendment protects services’ decisions on whether to have such policies in the first place, when to change them, or whether to enforce them in specific situations. The First Amendment also gives platforms the right to make inconsistent decisions.

More than 40 years ago, the Supreme Court struck down a Florida law that required newspapers to include responses from political candidates because it interfered with their First Amendment rights to make decisions about the content they published. Courts have repeatedly ruled that the same principle applies to online platforms hosting user-generated content. This includes the 9th Circuit’s recent decision in Prager U v. Google, and a May decision by the U.S. Court of Appeals for the District of Columbia in a case brought by Freedom Watch and Laura Loomer against Google. In another case, a court ruled that when online platforms “select and arrange others’ materials, and add the all-important ordering that causes some materials to be displayed first and others last, they are engaging in fully protected First Amendment expression—the presentation of an edited compilation of speech generated by other persons.” 

The PACT Act creates other First Amendment problems by compelling platforms to speak. For instance, the bill requires that platforms publish policies setting out what type of content is acceptable on their service. The bill also requires platforms to publish transparency reports detailing their decisions. To be clear, having clear policies and publishing transparency reports are a clear benefit to users. But the First Amendment generally “prohibits the government from telling people what they must say.” And to the extent the PACT Act dictates that online services must speak in certain ways, it violates the Constitution.

The PACT Act’s sponsors are not wrong in wanting online services to be more accountable to their users. Nor are they incorrect in their assessment that a handful of services serve as the gatekeepers for much of our free expression online. But the PACT Act’s attempts to reach those concerns creates more problems and would end up harming users’ speech. The way to address these concerns is to encourage competition, and make users less reliant on a handful of services—not to institute legal requirements that further entrench the major platforms.

Share
Categories
Colleges First Amendment free speech Hate speech Intelwars privacy schools Social Media watch

VIDEO: Some students say they’d give up free speech, privacy so others can ‘feel comfortable’ and to avoid offensiveness

Amid public schools and colleges punishing students and staff members for offensive social media content, Campus Reform digital reporter Eduardo Neret asked students if they’d feel comfortable giving up their free speech and privacy to further the cause of not offending others.

Most students Neret interviewed not only said they’d be willing to give up some of their First Amendment rights, but they also said they’d be willing to turn over their social media accounts to their schools for offensive speech inspection.

What else did they say?

“I definitely think that they should be monitoring the hate speech because that shouldn’t be allowed,” one person said, adding that she’d be willing to let her social media accounts be examined “if it has to do with helping the school in … creating a sense of more safety and security and erasing the hate speech.”

The individual added that she would “be interested in doing anything I can to help others feel more comfortable.”

Others also said they’d be willing to give up their free speech rights in the name of non-offensiveness:

  • “If that’s something I can be helpful for, I’d be happy to.”
  • “I would do that, ’cause I mean if I’m just giving up a little of what I care about just to make others feel better, I’d do that. I’d make that exchange.”
  • “I’d like for everybody to feel comfortable.”

But not every student agreed:

  • “People should have their own privacy; there’s no reason to be monitoring everybody.”
  • “I don’t think that they should be taking like your privilege of being able to speak what you think.”
  • “Free speech is so important … giving it up is like giving up one of your rights.”


Video: “Students Support Giving Up Free Speech To Avoid Offending Others” (youtu.be)

Share
Categories
Bailouts Capitalism Censorship Civil Liberties control Corporations Donald Trump elitists Emotions Federal feelings free market free speech freedom google Government Headline News Intelwars private property Public Service Racists Reddit silenced Socialism state Twitch tyranny victims Woke Youtube

The Purge: The Natural Progression Of “Woke” Censorship Is Tyranny

This article was originally published by Brandon Smith at Alt-Market. 

As I have noted in the past, in order to be a conservative one has to stick to certain principles. For example, you have to stand against big government and state intrusions into individual lives, you have to support our constitutional framework and defend civil liberties, and you also have to uphold the rights of private property. Websites are indeed private property, as much as a person’s home is private property. There is no such thing as free speech rights in another person’s home, and there is no such thing as free speech rights on a website.

That said, there are some exceptions. When a corporation or a collective of corporations holds a monopoly over a certain form of communication, then legal questions come into play when they try to censor the viewpoints of an entire group of people. Corporations exist due to government-sponsored charters; they are creations of government and enjoy certain legal protections through government, such as limited liability and corporate personhood. Corporations are a product of socialism, not free-market capitalism; and when they become monopolies, they are subject to regulation and possible demarcation.

Many corporations have also received extensive government bailouts (taxpayer money) and corporate welfare. Google and Facebook, for example, rake in billions in state and federal subsidies over the course of a few years. Google doesn’t even pay for the massive bandwidth it uses. So, it is not outlandish to suggest that if a company receives the full protection of government, from the legal realm to the financial realm, then it falls under the category of a public service. If such companies are allowed to continue to monopolize communication while also being coddled by the government as “too big to fail”, then they become a public menace instead.

This is not to say that I support the idea of nationalization. On the contrary, the disasters of socialism cannot be cured with even more socialism. However, monopolies are a poison to free markets and to free speech and must be deconstructed or abolished.

Beyond corporate monopolies, there is also the danger of ideological monopolies. Consider this: the vast majority of Silicon Valley companies that control the lion’s share of social media platforms are run by extreme political leftists and globalists who are openly hostile to conservative and moderate values.

Case in point: three of the largest platforms on the internet – Reddit, Twitch, and YouTube – just acted simultaneously in a single day to shut down tens of thousands of forums, streamers and video channels, the majority of which espouse conservative arguments that the media refers to as “hate speech”.

To be sure, at least a few of the outlets shut down probably argue from a position of racial superiority. However, I keep seeing the mainstream media making accusations that all the people being silenced right now deserve it due to “racism” and “calls for violence”, and I have yet to see them offer a single piece of evidence supporting any of these claims.

A recent article from the hyper-leftist Salon is a perfect example of the hypocrisy and madness of the social justice left in action. It’s titled ‘Twitch, YouTube And Reddit Punished Trump And Other Racists – And That’s A Great Thing For Freedom’. Here are a few excerpts with my commentary:

Salon: “Freedom is impossible for everyone when viewpoints prevail that dehumanize anyone. And it appears that several big social media platforms agree, judging from recent bans or suspensions of racist accounts across YouTube, Twitch, and Reddit.”

My Response

Freedom cannot be taken away by another person’s viewpoint. Every individual has complete control over whether or not they “feel” marginalized and no amount of disapproval can silence a person unless they allow it to. If you are weak-minded or weak-willed, then grow a backbone instead of expecting the rest of the world to stay quiet and keep you comfortable.

Remember when the political left was the bastion of the free speech debate against the censorship of the religious right? Well, now the leftists have a religion (or cult) of their own and they have changed their minds on the importance of open dialogue.

Salon: “For those who are dehumanized — whether by racism, sexism, classism, ableism, anti-LGBTQ sentiment or any other prejudices — their voices are diminished or outright silenced, and in the process, they lose their ability to fully participate in our democracy. We all need to live in a society where hate is discouraged, discredited, and whenever possible scrubbed out completely from our discourse. This doesn’t mean we should label all ideas as hateful simply because we disagree with them; to do that runs afoul of President Dwight Eisenhower’s famous statement, “In a democracy debate is the breath of life”. When actual hate enters the dialogue, however, it acts as a toxic smoke in the air of debate, suffocating some voices and weakening the rest.”

My Response

Where do I begin with this steaming pile of woke nonsense? First, it’s impossible to be “dehumanized” by another person’s opinion of you. If they are wrong, or an idiot, then their opinion carries no weight and should be ignored. Your value is not determined by their opinion. No one can be “silenced” by another person’s viewpoint unless they allow themselves to be silenced. If they are right about you and are telling you something you don’t want to hear, then that is your problem, not theirs. No one in this world is entitled to protection from other people’s opinions. Period.

It should not surprise anyone though that leftists are actively attempting to silence all dissent while accusing conservatives of stifling free speech. This is what they do; they play the victim while they seek to victimize. They have no principles. They do not care about being right, they only care about “winning”.

Under the 1st Amendment, ALL speech is protected, including what leftists arbitrarily label “hate speech”. Unless you are knowingly defaming a specific person or threatening specific violence against a specific person, your rights are protected. Interpreting broad speech as a “threat” because of how it might make certain people feel simply will not hold up in a court of law. Or at least, it should not hold up…

Political leftists have declared themselves the arbiters of what constitutes “hate speech”; the problem is they see EVERYTHING that is conservative as racist, sexist, misogynistic, etc. No human being or group of human beings is pure enough or objective enough to sit in judgment of what encompasses fair or acceptable speech. Therefore, all speech must be allowed in order to avoid tyranny.

If an idea is unjust, then, by all means, the political left has every right to counter it with their own ideas and arguments. “Scrubbing” all opposing ideas from the public discourse is unacceptable, and this is exactly what the social justice movement is attempting to do. If you want to erase these ideas from your own home or your personal website, then you are perfectly within your rights to do so, but you DO NOT have the right to assert a monopoly on speech and the political narrative.

Generally, when a group of zealots is trying to erase opposing ideals from the discussion, it usually means their own ideals don’t hold up to scrutiny. If your ideology is so pure and correct in its form, there should be no need to trick the masses into accepting it by scrubbing the internet.

Finally, America was not founded as a democracy, we are a republic, and with good reason. A democracy is a tyranny by the majority; a collectivist hell where power is centralized into the hands of whoever can con 51% of the population to their side. Marxists and communists love the idea of “democracy” and speak about it often because they think they are keenly equipped to manipulate the masses and form a majority. But, in a republic, individual rights are protected REGARDLESS of what the majority happens to believe at any given time, and this includes the right to free speech.

In the same breath, Salon pretends to value free discussion, then calls for the destruction of free speech and opposing ideas in the name of protecting people’s thin-skinned sensitivities. In other words, free speech is good, unless it’s a viewpoint they don’t like, in which case it becomes hate speech and must be suppressed.

Salon: “Reddit referred Salon to a statement explaining, ”We committed to closing the gap between our values and our policies to explicitly address hate” and that “ultimately, it’s our responsibility to support our communities by taking stronger action against those who try to weaponize parts of Reddit against other people.””

My Response

In other words, they don’t like conservatives using their platforms against them, and since the political left is unable to present any valid arguments to defend their beliefs and they are losing the culture war, they are going for broke and seeking to erase all conservatives from their platforms instead. The “hate speech” excuse is merely a false rationale.  Social justice warriors stand on top of a dung heap and pretend it’s the moral high ground.

Salon: “No one who understands Constitutional law can argue that these corporate decisions violate the First Amendment, which only protects speech from government repression. Professor Rick Hasen at the University of California, Irvine Law School told Salon by email that “private companies running websites are not subject to being sued for violating the First Amendment. The companies are private actors who can include whatever content they want unless there is a law preventing them from doing so.””

My Response

Again, this is not entirely true. Corporations are constructs of government and receive special privileges from government. If corporations form a monopoly over a certain form of communication and they attempt to censor all opposing views from that platform then they can be broken up by government to prevent destruction of the marketplace. Also, government can rescind the limited liability and corporate personhood of these companies as punishment for violating the public trust. And finally, any company that relies on taxpayer dollars or special tax break incentives to survive can and should have those dollars taken away when attempting to assert a monopoly.

Yes, there are alternative platforms for people to go to, but what is to stop leftist/globalist monopolies from buying up every other social media and standard media platform (as they have been doing for the past decade)? What is to stop leftist/globalist interests from using the “hate speech” argument to put pressure on ALL other web platforms including service and domain providers to cancel conservatives?

Finally, just because something is technically legal does not necessarily make it right. Corporations exploit government protection, yet claim they are not subject to government regulation? The left hates corporate America, yet they happily defend corporations when they are censoring conservatives? This is insane.

The Salon author then goes on a blathering diatribe about how he was once a victim of racism (all SJWs measure personal value according to how much more victimized someone is compared to others). His claims are irrelevant to the argument at hand. He then continues…

Salon: “Trump threatening to use the government power to retaliate against those companies, on the other hand, is a threat to both the letter and the spirit of the First Amendment. He and his supporters are not being stopped from disseminating their views on other platforms…”

My Response

Here is the only area where I partially agree with Salon. All of my readers know I do not put any faith in Donald Trump to do the right thing, mostly because of the elitists he surrounds himself with in his cabinet. When it comes down to it, Trump will act in THEIR best interests, not in the public’s best interests. Giving him (or the FCC) the power to dictate speech rules on the internet is a bad idea. Also, for those who think the election process still matters, what if we gift such powers to the government today and the political left enters the White House tomorrow? Yikes! Then we’ll have no room to complain, as they will most certainly flip-flop and use government power to silence their opposition.

Of course, if the roles were reversed and corporations were deplatforming thousands of social justice forums and videos, the leftists would be screaming bloody murder about “corporate censorship” and “discrimination”. For now, in their minds, racial discrimination = bad. Political discrimination = good.

The monopoly issue still stands, though, and an ideological monopoly coupled with a unified corporate monopoly is a monstrosity that cannot be tolerated.  Government can and should break up such monopolies without going down the rabbit hole of nationalization.

Yes, we can go to small startup platforms and leave Twitter, Facebook, Reddit, YouTube, etc. behind. I have been saying for years that conservatives with the capital should start their own alternative social media. In fact, that is exactly what is finally happening. There has been a mass exodus of users from mainstream websites lately. I say, let the SJWs have their echo chambers and maybe these companies will collapse. Get Woke Go Broke still applies.

But government can no longer be permitted to protect these corporations, either. With the government raining down bailout cash and corporate welfare on media companies, voting with your feet and your wallet does not have the same effect or send the same message.

The future of this situation is bleak. I have no doubt that leftists and globalists will attempt to purge ALL conservative discussion from the internet, to the point of trying to shut down private conservative websites through their service providers. The final outcome of the purge is predictable: civil war, an issue I will be discussing in my next article.

Leftists accuse conservatives of hate, but social justice adherents seem to hate almost everything. I don’t think I’ve ever witnessed a group of people more obsessed with visiting misery on others, and they will never be satisfied or satiated. That which is normal speech today will be labeled hate speech tomorrow. The cult must continue to justify its own existence. I, for one, am not going to live my life walking on eggshells around a clique of narcissistic sociopaths. Cancel culture is mob rule, and mob rule is, at its core, the true evil here, far more evil than any mere words spoken by any “white supremacist” on any forum or video.

Categories
free speech Intelwars Legal Analysis privacy social media surveillance Social Networks

EFF to Court: Social Media Users Have Privacy and Free Speech Interests in Their Public Information

Special thanks to legal intern Rachel Sommers, who was the lead author of this post.

Visa applicants to the United States are required to disclose personal information including their work, travel, and family histories. And as of May 2019, they are required to register their social media accounts with the U.S. government. According to the State Department, approximately 14.7 million people will be affected by this new policy each year.

EFF recently filed an amicus brief in Doc Society v. Pompeo, a case challenging this “Registration Requirement” under the First Amendment. The plaintiffs in the case, two U.S.-based documentary film organizations that regularly collaborate with non-U.S. filmmakers and other international partners, argue that the Registration Requirement violates the expressive and associational rights of both their non-U.S.-based and U.S.-based members and partners. After the government filed a motion to dismiss the lawsuit, we filed our brief in district court in support of the plaintiffs’ opposition to dismissal. 

In our brief, we argue that the Registration Requirement invades privacy and chills free speech and association of both visa applicants and those in their social networks, including U.S. persons, despite the fact that the policy targets only publicly available information. This is amplified by the staggering number of social media users affected and the vast amounts of personal information they publicly share—both intentionally and unintentionally—on their social media accounts.

Social media profiles paint alarmingly detailed pictures of their users’ personal lives. By monitoring applicants’ social media profiles, the government can obtain information that it otherwise would not have access to through the visa application process. For example, visa applicants are not required to disclose their political views. However, applicants might choose to post their beliefs on their social media profiles. Those seeking to conceal such information might still be exposed by comments and tags made by other users. And due to the complex interactions of social media networks, studies have shown that personal information about users such as sexual orientation can reliably be inferred even when the user doesn’t expressly share that information. Although consular officers might be instructed to ignore this information, it is not unreasonable to fear that it might influence their decisions anyway.

Just as other users’ online activity can reveal information about visa applicants, so too can visa applicants’ online activity reveal information about other users, including U.S. persons. For example, if a visa applicant tags another user in a political rant or posts photographs of themselves and the other user at a political rally, government officials might correctly infer that the other user shares the applicant’s political beliefs. In fact, one study demonstrated that it is possible to accurately predict personal information about those who do not use any form of social media based solely on personal information and contact lists shared by those who do. The government’s surveillance of visa applicants’ social media profiles thus facilitates the surveillance of millions—if not billions—more people.
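To make this concrete, here is a minimal, purely illustrative sketch (the names, contact lists, and attribute values are hypothetical and not drawn from the studies mentioned above) of how a simple majority vote over what a person’s contacts publicly share can yield a guess about someone who has shared nothing at all:

```python
# Illustrative only: guess an undisclosed attribute for a "silent" user from
# what the people who publicly list them as a contact share about themselves.
from collections import Counter

# Hypothetical public data: each user shares a contact list and, optionally,
# a political leaning on their own profile.
public_profiles = {
    "ana":   {"contacts": ["dan", "eve"], "leaning": "party_x"},
    "bruno": {"contacts": ["dan"],        "leaning": "party_x"},
    "carla": {"contacts": ["dan", "eve"], "leaning": "party_y"},
}

def infer_leaning(target):
    """Majority vote among users who publicly list `target` as a contact.
    `target` themselves has posted nothing."""
    votes = Counter(
        profile["leaning"]
        for profile in public_profiles.values()
        if target in profile["contacts"] and profile.get("leaning")
    )
    return votes.most_common(1)[0][0] if votes else None

print(infer_leaning("dan"))  # -> "party_x", inferred although dan shared nothing
```

Real inference systems are far more sophisticated, but the underlying point is the same: the information other people share publicly is enough to build a picture of someone who never opted in.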

Because social media users have privacy interests in their public social media profiles, government surveillance of digital content risks chilling free speech. If visa applicants know that the government can glean vast amounts of personal information about them from their profiles—or that their anonymous or pseudonymous accounts can be linked to their real-world identities—they will be inclined to engage in self-censorship. Many will likely curtail or alter their behavior online—or even disengage from social media altogether. Importantly, because of the interconnected nature of social media, these chilling effects extend to those in visa applicants’ social networks, including U.S. persons.

Studies confirm these chilling effects. Citizen Lab found that 62 percent of survey respondents would be less likely to “speak or write about certain topics online” if they knew that the government was engaged in online surveillance. A Pew Research Center survey found that 34 percent of its survey respondents who were aware of the online surveillance programs revealed by Edward Snowden had taken at least one step to shield their information from the government, including using social media less often, uninstalling certain apps, and avoiding the use of certain terms in their digital communications.

One might be tempted to argue that concerned applicants can simply set their accounts to private. Some users choose to share their personal information—including their names, locations, photographs, relationships, interests, and opinions—with the public writ large. But others do so unintentionally. Given the difficulties associated with navigating privacy settings within and across platforms and the fact that privacy settings often change without warning, there is good reason to believe that many users publicly share more personal information than they think they do. Moreover, some applicants might fear that setting their accounts to private will negatively impact their applications. Others—especially those using social media anonymously or pseudonymously—might be loath to maximize their privacy settings because they use their platforms with the specific intention of reaching large audiences.

These chilling effects are further strengthened by the broad scope of the Registration Requirement, which allows the government to continue surveilling applicants’ social media profiles once the application process is over. Personal information obtained from those profiles can also be collected and stored in government databases for decades. And that information can be shared with other domestic and foreign governmental entities, as well as current and prospective employers and other third parties. It is no wonder, then, that social media users might severely limit or change the way they use social media.

Secrecy should not be a prerequisite for privacy—and the review and collection by the government of personal information that is clearly outside the scope of the visa application process creates unwarranted chilling effects on both visa applicants and their social media associates, including U.S. persons. We hope that the D.C. district court denies the government’s motion to dismiss the case and ultimately strikes down the Registration Requirement as unconstitutional under the First Amendment.

Categories
Bloggers' Rights free speech Government Social Media Blocking Intelwars Offline : Imprisoned Bloggers and Technologists social media surveillance

Egypt’s Crackdown on Free Expression Will Cost Lives

For years, EFF has been monitoring a dangerous situation in Egypt: journalists, bloggers, and activists have been harassed, detained, arrested, and jailed, sometimes without trial, in increasing numbers by the Sisi regime. Since the COVID-19 pandemic began, these incidents have skyrocketed, affecting free expression both online and offline. 

As we’ve said before, this crisis means it is more important than ever for individuals to be able to speak out and share information with one another online. Free expression and access to information are particularly critical under authoritarian rulers and governments that dismiss or distort scientific data. But at a time when true information about the pandemic may save lives, the Egyptian government has instead expelled journalists from the country for their reporting on the pandemic and arrested others on spurious charges for seeking information about prison conditions. Shortly after the coronavirus crisis began, a reporter for The Guardian was deported, while a reporter for The New York Times was issued a warning. Just last week, the editor of Al Manassa, Nora Younis, was arrested on cybercrime charges (and later released). And the Committee to Protect Journalists reported today that at least four journalists arrested during the pandemic remain imprisoned.

Social media is also being monitored more closely than ever, with disastrous results: the Supreme Council for Media Regulation has banned the publication of any data that contradicts the Ministry of Health’s official figures. It has sent warning letters to news websites and social network accounts it claims are sharing false news, and individuals have been arrested for posting about the virus. The far-reaching ban, justified on national security grounds, also limits the use of pseudonyms by journalists and criminalizes discussion of other “sensitive” topics, such as Libya, and is rightly being seen as censorship across the country. At a moment when obtaining accurate information is extremely important, the fact that Egypt’s government is escalating its attack on free expression is especially dangerous.

The government’s attacks on expression aren’t only damaging free speech online: rather than limiting the number of individuals in prison who are potentially exposed to the virus, Egyptian police have made matters worse by harassing, beating, and even arresting protestors who are demanding the release of prisoners held in dangerously overcrowded cells or simply asking for information about their arrested loved ones. Just last week, the family of Alaa Abd El Fattah, a leading Egyptian coder, blogger, and activist whom we’ve profiled in our Offline campaign, was attacked by police while protesting in front of Tora Prison. The next day, Alaa’s sister, Sanaa Seif, was forced into an unmarked car in front of the Prosecutor-General’s office as she arrived to submit a complaint regarding the assault and Alaa’s detention. She is now being held in pre-trial detention on charges that include “broadcast[ing] fake news and rumors about the country’s deteriorating health conditions and the spread of the coronavirus in prisons” on Facebook. According to police, the detention will last fifteen days, though there is no way to know for sure that it will end then.

All of these actions put the health and safety of the Egyptian population at risk. We join the international coalition of human rights and civil liberties organizations demanding both Alaa and Sanaa be released, and asking Egypt’s government to immediately halt its assault on free speech and free expression. We must lift up the voices of those who are being silenced to ensure the safety of everyone throughout the country. 

Banner image CC-BY, by Molly Crabapple.

Categories
Ad boycott Censorship Facebook boycott free speech Intelwars Social media censorship

Facebook BOYCOTT: Major corporations demand MORE social media censorship

An anti-hate campaign targeting Facebook called “Stop Hate for Profit” is gaining momentum, leading large companies like Patagonia, North Face, REI, and Ben & Jerry’s to pull their advertising if Facebook doesn’t do more to “stop the spread of hateful lies and dangerous propaganda on its platform.”

But what is considered “hateful”? It’s anything that goes against the mob, and they won’t stop with Facebook. Any voices that speak out against the progressive narrative will soon be censored and silenced. The more this campaign grows, the more our freedom of speech diminishes.

Watch the video below to hear Glenn Beck break down the details:

Use promo code FIGHTTHEMOB to get $20 off your BlazeTV subscription or start your 30-day free trial today.

Want more from Glenn Beck?

To enjoy more of Glenn’s masterful storytelling, thought-provoking analysis and uncanny ability to make sense of the chaos, subscribe to BlazeTV — the largest multiplatform network of voices who love America, defend the Constitution and live the American dream.

Categories
free speech Intelwars International privacy

Brazil’s Fake News Bill Would Dismantle Crucial Rights Online and is on a Fast Track to Become Law

Despite widespread complaints about its effects on free expression and privacy, Brazil’s Congress is moving forward in its attempt to hastily approve a “Fake News” bill. We’ve already reported on some of the most concerning issues in previous proposals, but the draft text released this week is even worse. It would impede users’ access to social networks and applications, require the construction of massive databases of users’ real identities, and oblige companies to keep track of our private communications online. It makes demands that disregard key characteristics of the Internet, such as end-to-end encryption and decentralized tool-building, runs afoul of innovation, and could criminalize the online expression of political opinions. Although the initial bill arose as an attempt to address legitimate concerns about the spread of online disinformation, it has opened the door to arbitrary and unnecessary measures that strike at settled privacy and freedom of expression safeguards.

You can join the hundreds of other protestors and organizations telling Brazil’s lawmakers why not to approve this Fake News bill right now.

Here’s how the latest proposals measure up:

Providers Are Required to Retain the Chain of Forwarded Communications

Social networks and any other Internet application that allows social interaction would be obliged to keep the chain of all communications that have been forwarded, whether distribution of the content was done maliciously or not. This is a massive data retention obligation which would affect millions of innocent users instead of only those investigated for an illegal act. Although Brazil already has obligations for retaining specific communications metadata, the proposed rule goes much further. Piecing together a communication chain may reveal highly sensitive aspects of individuals, groups, and their interactions — even when none are actually involved in illegitimate activities. The data will end up as a constantly-updated map of connections and relations between nearly every Brazilian Internet user: it will be ripe for abuse.

Furthermore, this obligation disregards the way more decentralized communication architectures work. It assumes that application providers are always able to identify and distinguish forwarded from non-forwarded content, and are also able to identify the origin of a forwarded message. In practice, this depends on the design of the service and on the relationship between applications and services. When the two are independent, it is common that the service provider cannot differentiate between forwarded and non-forwarded content, and that the application does not store the forwarding history except on the user’s device. This architectural separation is traditional in Internet communications, including web browsers, FTP clients, email, XMPP, and file sharing. All of them allow actions equivalent to forwarding content, or to copying and pasting it, where the client application and its functions are technically and legally independent of the service to which it connects. The obligation would also negatively impact open source applications, which are designed to let end users not only understand but also modify and adapt how their local applications work.
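A minimal sketch may help illustrate the architectural point (the data structures and function names below are hypothetical, not taken from any real messaging service): when forwarding is a client-side copy, the service receives what looks like an ordinary new message and has no forwarding chain to retain.

```python
# Illustrative only: in many architectures, "forwarding" is just a client-side
# copy. The service sees another opaque message; any record that it was a
# forward lives only on the user's device.

server_log = []       # what the service operator actually stores
device_history = {}   # per-device notes, kept locally on the phone or laptop

def send(sender, recipient, body):
    """The only operation the service ever sees: a new message."""
    server_log.append({"from": sender, "to": recipient, "body": body})

def forward(device_owner, received_body, new_recipient):
    """Forwarding = copying received text into a brand-new message.
    The forwarding note exists only in the local device history."""
    device_history.setdefault(device_owner, []).append(
        {"forwarded": received_body, "to": new_recipient}
    )
    send(device_owner, new_recipient, received_body)

send("alice", "bob", "meet at noon")
forward("bob", "meet at noon", "carol")

# From the server's perspective, the second entry is indistinguishable from an
# original message, so there is no chain for the provider to hand over.
for entry in server_log:
    print(entry)
```

Imposing a chain-retention duty on providers built this way would require re-architecting the client, the service, or both.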

It Compels Applications to Collect All Users’ IDs and Cell Phone Numbers

The bill creates a general monitoring obligation regarding users’ identities, compelling Internet applications to require all users to provide proof of identity through a national ID or passport, as well as their phone number. This requirement runs counter to the principles and safeguards set out in the country’s data protection law, which has yet to enter into force. A vast database of identity cards held by private actors is in no way aligned with the standards of data minimization, purpose limitation, and risk prevention in the processing and storage of personal data that Brazil’s data protection law represents. Current versions of the “Fake News” bill do not even preserve the use of pseudonyms for Internet users. As we’ve said many times before, there are myriad reasons why individuals may wish to use a name other than the one on their ID and the one they were born with. Women rebuilding their lives despite harassment by domestic violence abusers, activists and community leaders facing threats, investigative journalists carrying out sensitive research in online groups, and transgender users affirming their identities are just a few examples of the need for pseudonymity in a modern society. Under the new bill, users’ accounts would be linked to their cell phone numbers, allowing, and in some cases requiring, telecom service providers and Internet companies to track users even more closely. Anyone without a mobile number would be prevented from using any social network, and if a user’s number is disabled for any reason, their social media accounts would be suspended. In addition to the privacy harms, the rule creates serious hurdles to speaking, learning, and sharing online.

Censorship, Data Localization, and Blocking

These proposals seriously curb the online expression of political opinions and could quickly lead to political persecution. The bill sets high fines for online sponsored content that mocks electoral candidates or questions election reliability. Although the trustworthiness of elections is crucial for democracy, and disinformation attempts to disrupt it should be properly tackled, a broad interpretation of the bill would severely endanger the vital work of e-voting security researchers in preserving that trustworthiness and reliability. Electoral security researchers already face serious harassment in the region. Other new and vague criminal offenses set out in the bill are prone to silencing legitimate critical speech and could criminalize users’ routine actions without proper consideration of malicious intent.

The bill revives the disastrous idea of data localization. One of its provisions would force social networks to store user data in a special database that would have to be hosted in Brazil. Data localization rules such as this can make data especially vulnerable to security threats and surveillance, while also imposing serious barriers to international trade and e-commerce.

Finally, as the icing on the cake of a raft of provisions that disregard the Internet’s global nature, providers that fail to comply with the rules would be subject to a suspension penalty. Such suspensions are unjustifiable and disproportionate, curtailing the communications of millions of Brazilians and incentivizing applications to overcomply to the detriment of users’ privacy, security, and free expression.

EFF has joined many other organizations across the world calling on the Brazilian parliament to reject the latest version of the bill and stop the fast-track mode that has been adopted. You can also take action against the “Fake News” bill now, with our Twitter campaign aimed at senators of the National Congress.
