Categories
announcement free speech Intelwars

Tracking Global Online Censorship: What EFF Is Doing Next

As the world stays home to slow the spread of COVID-19, communities are rapidly transitioning to digital meeting spaces. This highlights a trend EFF has tracked for years: discussions in virtual spaces shape and reflect societal freedoms, and censorship online replicates repression offline. As most of us spend increasing amounts of time in digital spaces, the impact of censorship on individuals around the world is acute.

Tracking Global Online Censorship is a new project to record and combat international speech restrictions, especially where censorship policies are exported from Europe and the United States to the rest of the world. Headed by EFF Director for International Freedom of Expression Jillian York, the project will seek accountability for powerful online censors—in particular, social media platforms such as Facebook and Google—and hold them to just, inclusive standards of expressive discourse, transparency, and due process in a way that protects marginalized voices, dissent, and disparate communities.

 “Social media companies make mistakes at scale that catch a range of vital expression in their content moderation net. And as companies grapple with moderating new types of content during a pandemic, these error rates will have new, dangerous consequences,” said Jillian York. “Misapplication of content moderation systems results in the systemic silencing of marginalized communities. It is vital that we protect the free flow of information online and ensure that platforms provide users with transparency and a path to remedy.”

Support for Tracking Global Online Censorship is provided by the Swedish Postcode Foundation (Svenska Postkodstiftelsen). Established in 2003, the Swedish Postcode Foundation receives part of the Swedish Postcode Lottery’s surplus, which it then uses to provide financial support to non-governmental organizations creating positive changes through concrete efforts. The Foundation’s goal is to create a better world through projects that challenge, inspire, and promote change.

“Social media is a huge part of our daily life and a primary source of information. Social media companies enjoy an unprecedented power and control and the lack of transparency that these companies exercise does not run parallel to the vision that these same companies were established for. It is time to question, create awareness, and change this. We are therefore proud to support the Electronic Frontier Foundation in their work to do so,” says Marie Dahllöf, Secretary General of the Swedish Postcode Foundation.

We are at a pivotal moment for free expression. A dizzying array of actors have recognized the challenges posed by intermediary corporations in an increasingly global world, but a large number of proposed solutions seek to restrict—rather than promote—the free exchange of ideas. At the same time, as COVID-19 results in greater isolation, online expression has become more important than ever, and the impact of censorship greater. The Tracking Global Online Censorship project will draw attention to the myriad issues surrounding online speech, develop new and existing coalitions to strengthen the effort, and offer policy solutions that protect freedom of expression. In the long term, our hope is for corporations to stop chilling expression, to promote free access to time-sensitive news, to foster engagement, and to usher in a new era of online expression in which marginalized communities will be more strongly represented within democratic society.

Share
Categories
Commentary Corporate Speech Controls free speech Intelwars

Facebook’s Policy Shift on Politicians Is a Welcome Step

We are happy to see the news that Facebook is putting an end to a policy that has long privileged the speech of politicians over that of ordinary users. The policy change, first reported on Friday by The Verge, is something EFF has been pushing for since as early as 2019.

Back then, Facebook executive Nick Clegg, a former politician himself, famously pondered: “Would it be acceptable to society at large to have a private company in effect become a self-appointed referee for everything that politicians say? I don’t believe it would be.” 

Perhaps Clegg had a point—we’ve long said that companies are ineffective arbiters of what the world says—but that hardly justifies holding politicians to a lower standard than the average person. International standards will consider the speaker, but only as one of many factors. For example, the United Nations’ Rabat Plan of Action outlines a six-part threshold test that takes into account “(1) the social and political context, (2) status of the speaker, (3) intent to incite the audience against a target group, (4) content and form of the speech, (5) extent of its dissemination and (6) likelihood of harm, including imminence.” Facebook’s Oversight Board recently endorsed the Plan as a framework for assessing the removal of posts that may incite hostility or violence.

Facebook has deviated very far from the Rabat standard thanks, in part, to the policy it is finally repudiating. For example, it has banned elected officials from parties disfavored by the U.S. government, such as Hezbollah, Hamas, and the Kurdistan Workers Party (PKK), all of which appear on the government’s list of designated terrorist organizations—despite Facebook having no legal obligation to do so. And in 2018, the company deleted the account of Chechen leader Ramzan Kadyrov, claiming that it was legally obligated to do so after the leader was placed on a U.S. sanctions list. Legal experts familiar with the law of international sanctions have disagreed, on the grounds that the sanctions are economic in nature and do not apply to speech.

So this decision is a good step in the right direction. But Facebook has many steps to go, including finally—and publicly—endorsing and implementing the Santa Clara Principles.

But ultimately, the real problem is that Facebook’s policy choices have so much power in the first place. It’s worth noting that this move coincides with a massive effort to persuade the U.S. Congress to impose new regulations that are likely to entrench Facebook’s power over free expression in the U.S. and around the world. If users, activists and, yes, politicians want real progress in defending free expression, we must fight for a world where changes in Facebook’s community standards don’t merit headlines at all—because they just don’t matter that much.

 

Share
Categories
anonymity Commentary Encrypting the Web free speech Intelwars Locational Privacy privacy Street-Level Surveillance

EFF at 30: Surveillance Is Not Obligatory, with Edward Snowden

To commemorate the Electronic Frontier Foundation’s 30th anniversary, we present EFF30 Fireside Chats. This limited series of livestreamed conversations looks back at some of the biggest issues in internet history and their effects on the modern web.

To celebrate 30 years of defending online freedom, EFF was proud to welcome NSA whistleblower Edward Snowden for a chat about surveillance, privacy, and the concrete ways we can improve our digital world, as part of our EFF30 Fireside Chat series. EFF Executive Director Cindy Cohn, EFF Director of Engineering for Certbot Alexis Hancock, and EFF Policy Analyst Matthew Guariglia weighed in on the way the internet (and surveillance) actually function, the impact that has on modern culture and activism, and how we’re grappling with the cracks this pandemic has revealed—and widened—in our digital world. 

You can watch the full conversation here or read the transcript.

On June 3, we’ll be holding our fourth EFF30 Fireside Chat, on how to free the internet, with net neutrality pioneer Gigi Sohn. EFF co-founder John Perry Barlow once wrote, “We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before.” This year marked the 25th anniversary of this audacious essay denouncing centralized authority on the blossoming internet. But modern tech has strayed far from the utopia of individual freedom that 90s netizens envisioned. We’ll be discussing corporatization, activism, and the fate of the internet, framed by Barlow’s “Declaration of the Independence of Cyberspace,” with Gigi, along with EFF Senior Legislative Counsel Ernesto Falcon and EFF Associate Director of Policy and Activism Katharine Trendacosta.

RSVP to the next EFF30 Fireside Chat

The Internet is Not Made of Magic

Snowden opened the discussion by explaining the reality that all of our internet usage is made up of a giant mesh of companies and providers. The internet is not magic—it’s other people’s computers: “All of our communications—structurally—are intermediated by other people’s computers and infrastructure…[in the past] all of these lines that you were riding across—the people who ran them were taking notes.” We’ve come a long way from that time when our communications were largely unencrypted, and everything you typed into the Google search box “was visible to everybody else who was on that Starbucks network with you, and your Internet Service Provider, who knew this person who paid for this account searched for this thing on Google….anybody who was between your communications could take notes.”
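To make the “taking notes” point concrete, here is a minimal, illustrative Python sketch (standard library only; the hostname and query are hypothetical examples, not anything from the chat) of what a plain, unencrypted HTTP request looks like on the wire. Every byte below, including the search query, is readable by anyone positioned between you and the server; with HTTPS, an observer would still learn the hostname but not the path, query, or response.

```python
# Illustrative sketch: what an on-path observer sees for unencrypted HTTP.
# The full request below -- host, path, and search query -- travels as
# readable plaintext past your router, the coffee-shop network, and your ISP.
import socket

HOST = "example.com"  # hypothetical site reached over plain HTTP (port 80)
request = (
    "GET /search?q=private+medical+question HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(request.encode("ascii"))  # sent in the clear, byte for byte
    reply = sock.recv(4096)                # the response is equally visible

print(reply[:200])  # an observer on the path could read all of this too
```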

[Embedded video: https://www.youtube.com/embed/PYRaSOIbiOA (content served from youtube.com)]

How Can Tech Protect Us from Surveillance?

In 2013, Snowden came forward with details about the PRISM program, through which the NSA and FBI worked directly with large companies to see what was in individuals’ internet communications and activity, making much more public the notion that our digital lives were not safe from spying. This has led to a change in people’s awareness of this exploitation, Snowden said, and myriad solutions have come about to solve parts of what is essentially an ecosystem problem: some technical, some legal, some political, some individual. “Maybe you install a different app. Maybe you stop using Facebook. Maybe you don’t take your phone with you, or start using an encrypted messenger like Signal instead of something like SMS.” 

Nobody sells you a car without brakes—nobody should sell you a browser without security.

When it comes to the legal cases, like EFF’s case against the NSA, the courts are finally starting to respond. Technical solutions, like the expansion of encryption in everyday online usage, are also playing a part, Alexis Hancock, EFF’s Director of Engineering for Certbot, explained. “Just yesterday, I checked on a benchmark that said that 95% of web traffic is encrypted—leaps and bounds since 2013.” In 2015, web browsers started displaying “this site is not secure” messages on unencrypted sites, and that’s where EFF’s Certbot tool steps in. Certbot is a “free, open source software that we work on to automatically supply free SSL, or secure, certificates for traffic in transit, automating it for websites everywhere.” This keeps data private in transit—adding a layer of protection over what is traveling between your request and a website’s server. This is one of those things that doesn’t get talked about much, partly because these are pieces that you don’t see and shouldn’t have to see, but they give people security. “Nobody sells you a car without brakes—nobody should sell you a browser without security.”
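As a rough sketch of the protection such certificates enable (standard library only; www.eff.org is simply an example host), the snippet below performs the verification step that browsers do automatically: the client checks the server’s certificate chain against trusted roots before any application data flows, which is what stands between your traffic and a network eavesdropper.

```python
# A minimal sketch of TLS in transit: verify the server's certificate,
# then inspect it. If verification fails, the connection is refused.
import socket
import ssl

HOSTNAME = "www.eff.org"  # example host for illustration

context = ssl.create_default_context()  # loads system CA roots, verification on

with socket.create_connection((HOSTNAME, 443)) as tcp:
    with context.wrap_socket(tcp, server_hostname=HOSTNAME) as tls:
        cert = tls.getpeercert()
        print("TLS version:", tls.version())
        print("Issued to:  ", dict(pair[0] for pair in cert["subject"]))
        print("Issuer:     ", dict(pair[0] for pair in cert["issuer"]))
        print("Expires:    ", cert["notAfter"])
```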

[Embedded video: https://www.youtube.com/embed/cJWq6ub0CQs (content served from youtube.com)]

Balancing the Needs of the Pandemic and the Dangers of Surveillance

We’ve moved the privacy needle forward in many ways since 2013, but in 2020, a global catastrophe could have set us back: the COVID-19 pandemic. As Hancock described it, EFF’s focus for protecting privacy during the pandemic was to track “where technology can and can’t help, and when is technology being presented as a silver bullet for certain issues around the pandemic when people are the center for being able to bring us out of this.”

There is a looming backlash of people who have had quite enough.

Our fear was primarily scope creep, she explained: from contact tracing to digital credentials, many of these systems already exist, but we must ask, “what are we actually trying to solve here? Are we actually creating more barriers to healthcare?” Contact tracing, for example, must put privacy first and foremost—because making it trustworthy is key to making it effective. 

[Embedded video: https://www.youtube.com/embed/R9CIDUhGOgU (content served from youtube.com)]

The Melting Borders Between Corporate, Government, Local, and Federal Surveillance 

But the pandemic, unfortunately, isn’t the only nascent danger to our privacy. EFF’s Matthew Guariglia described the merging of both government and corporate surveillance, and federal and local surveillance, that’s happening around the country today: “Police make very effective marketers, and a lot of the manufacturers of technology are counting on it….If you are living in the United States today you are likely walking past or carrying around street level surveillance everywhere you go, and this goes double if you live in a concentrated urban setting or you live in an overpoliced community.”

Police make very effective marketers, and a lot of the manufacturers of technology are counting on it

From automated license plate readers to private and public security cameras to Shotspotter devices that listen for gunshots but also record cars backfiring and fireworks, this matters now more than ever, as the country reckons with a history of dangerous and inequitable overpolicing: “If a Shotspotter misfires, and sends armed police to the site of what they think is a shooting, there is likely to be a higher chance for a more violent encounter with police who think they’re going to a shooting.” This is equally true for a variety of these technologies, from automated license plate readers to facial recognition, which police claim are used for leads, but are too often accepted as fact. 

“Should we compile records that are so comprehensive?” asked Snowden about the way these records aren’t only collected, but queried, allowing government and companies to ask for the firehose of data. “We don’t even care what it is, we interrelate it with something else. We saw this license plate show up outside our store at a strip mall and we want to know how much money they have.” This is why the need for legal protections is so important, added Executive Director Cindy Cohn: “The technical tools are not going to get to the place where the phone company doesn’t know where your phone is. But the legal protections can make sure that the company is very limited in what they can do with that information—especially when the government comes knocking.”

[Embedded video: https://www.youtube.com/embed/cLlVb_W8OmA (content served from youtube.com)]

After All This, Is Privacy Dead?

All these privacy-invasive regimes may lead some to wonder if privacy and anonymity are, to put it bluntly, dying. That’s exactly what one audience member asked during the question and answer section of the chat. “I don’t think it’s inevitable,” said Guariglia. “There is a looming backlash of people who have had quite enough.” Hancock added that optimism is both realistic and required: “No technology makes you a ghost online—none of it, even the most secure, anonymous-driven tools out there. And I don’t think that it comes down to your own personal burden…There is actually a more collective unit now that are noticing that this burden is not yours to bear…It’s going to take firing on all cylinders, with activism, technology, and legislation. But there are people fighting for you out there. Once you start looking, you’ll find them.”

If you look for darkness, that’s all you’ll ever see. But if you look for lightness, you will find it.

“So many people care,” Snowden said. “But they feel like they can’t do anything….Does it have to be that way?…Governments live in a permissionless world, but we don’t. Does it have to be that way?” If you’re looking for a lever to pull—look at the presumptions these mass data collection systems make, and what happens if they fail: “They do it because mass surveillance is cheap…could we make these systems unlawful for corporations, and costly [for others]? I think in all cases, the answer is yes.”

[Embedded video: https://www.youtube.com/embed/EaeKVAbMO6s (content served from youtube.com)]

Democracy, social movements, our relationships, and your own well-being all require private space to thrive. If you missed this chat, please take an hour to watch it—whether you’re a privacy activist or an ordinary person, it’s critical for the safety of our society that we push back on all forms of surveillance, and protect our ability to communicate, congregate, and coordinate without fear of reprisal. We deeply appreciate Edward Snowden joining us for this EFF30 Fireside Chat and discussing how we can fight back against surveillance, as difficult as it may seem. As Hancock said (yes, quoting the animated series Avatar: The Last Airbender): “If you look for darkness, that’s all you’ll ever see. But if you look for lightness, you will find it.”

___________________________

Check out additional recaps of EFF’s 30th anniversary conversation series, and don’t miss our next program where we’ll tackle digital access and the open web with Gigi Sohn on June 3, 2021—EFF30 Fireside Chat: Free the Internet.

Share
Categories
Change my mind COMEDY Crowder Crowder confronts CURRENT EVENTS Dave landau fake news free speech Funny conservative H3h3 H3h3 podcast H3h3productions Hasan piker Intelwars June liberal libertarian Louder with crowder Lwc Mug club news Politics pride progressive news Stephen Crowder Steven Crowder Steven crowder banned Steven crowder billie eilish Steven crowder change my mind Steven crowder interview Steven crowder suspended The Young Turks Video Youtube.com

CROWDER: I’m back…and HELL’S coming with me

Steven Crowder will return to YouTube this week after the platform issued Crowder his second hard strike, which resulted in a suspension. So what did Crowder do that made YouTube restrict his ability to upload content on his channel? Crowder explains here. This is Crowder’s last chance. If he gets one more strike, his channel will be removed permanently. Watch the video, mark your calendars, and don’t miss Crowder’s return on Thursday, June 3rd at 10 AM ET.

Catch up on missed episodes here.

Want more from Steven Crowder?

To enjoy more of Steven’s uncensored late-night comedy that’s actually funny, join Mug Club — the only place for all of Crowder uncensored and on demand.

Share
Categories
Commentary free speech Intelwars

Amid Systemic Censorship of Palestinian Voices, Facebook Owes Users Transparency

Over the past few weeks, as protests in—and in solidarity with—Palestine have grown, so too have violations of the freedom of expression of Palestinians and their allies by major social media companies. From posts incorrectly flagged by Facebook as incitement to violence, to financial censorship of relief payments made on Venmo, to the removal of Instagram Stories (which also heavily affected activists in Colombia, Canada, and Brazil), Palestinians are experiencing an unprecedented level of censorship at a time when digital communications are absolutely critical.

The vitality of social media during a time like this cannot be overstated. Journalistic coverage from the ground is minimal—owing to a number of factors, including restrictions on movement by Israeli authorities—while, as the New York Times reported, misinformation is rife and has been repeated by otherwise reliable media sources. Israeli officials have even been caught spreading misinformation on social media.

Palestinian digital rights organization 7amleh has spent the past few weeks documenting content removals, and a coalition of more than twenty organizations, including EFF, has reached out to social media companies, including Facebook and Twitter. The demands include that the companies immediately stop censoring—and reinstate—the accounts and content of Palestinian voices, open an investigation into the takedowns, and transparently and publicly share the results of those investigations.

A brief history

Palestinians face a number of obstacles when it comes to online expression. Depending on where they reside, they may be subject to differing legal regimes, and face censorship from both Israeli and Palestinian authorities. Most Silicon Valley tech companies have offices in Israel (but not Palestine), while some—such as Facebook—have struck specific agreements with the Israeli government to address incitement. While incitement to violence is indeed against the company’s community standards, groups like 7amleh say that this agreement results in inconsistent application of the rules, with incitement against Palestinians often allowed to remain on the platform.

Additionally, the presence of Hamas—which is the democratically-elected government of Gaza, but is also listed as a terrorist organization by the United States and the European Union—complicates things for Palestinians, as any mention of the group (including, at times, something as simple as the group’s flag flying in the background of an image) can result in content removals.

And it isn’t just Hamas—last week, Buzzfeed documented an instance where references to Jerusalem’s Al Aqsa mosque, one of the holiest sites in Islam, were removed because “Al Aqsa” is also contained in the name of another designated group, the Al Aqsa Martyrs’ Brigade. Although Facebook apologized for the error, this kind of mistake has become all too common, particularly as reliance on automated moderation has increased amid the pandemic.

“Dangerous Individuals and Organizations”

Facebook’s Community Standard on Dangerous Individuals and Organizations gained a fair bit of attention a few weeks back when the Facebook Oversight Board affirmed that President Trump violated the standard with several of his January 6 posts. But the standard is also regularly used as justification for the widespread removal of content by Facebook pertaining to Palestine, as well as other countries like Lebanon. And it isn’t just Facebook—last fall, Zoom came under scrutiny for banning an academic event at San Francisco State University (SFSU) at which Palestinian figure Leila Khaled, alleged to belong to another US-listed terrorist organization, was to speak.

SFSU fell victim to censorship again in April of this year when its Arab and Muslim Ethnicities and Diasporas (AMED) Studies Program discovered that its Facebook event “Whose Narratives? What Free Speech for Palestine?,” scheduled for April 23, had been taken down for violating Facebook Community Standards. Shortly thereafter, the program’s entire page, “AMED STUDIES at SFSU,” was deleted, along with its years of archival material on classes, syllabi, webinars, and vital discussions not only on Palestine but on Black, Indigenous, Asian, and Latinx liberation, gender and sexual justice, and a variety of Jewish voices and perspectives, including opposition to Zionism. Although no specific violation was noted, Facebook has since confirmed that the post and the page were removed for violating the Dangerous Individuals and Organizations standard. This was in addition to cancellations by other platforms including Google, Zoom, and Eventbrite.

SFSU's AMED Studies Program gets censored by Facebook

Given the frequency and the high-profile contexts in which Facebook’s Dangerous Individuals and Organizations Standard is applied, the company should take extra care to make sure the standard reflects freedom of expression and other human rights values. But to the contrary, the standard is a mess of vagueness and overall lack of clarity—a point that the Oversight Board has emphasized.

Facebook has said that the purpose of this community standard is to “prevent and disrupt real-world harm.” In the Trump ruling, the Oversight Board found that President Trump’s January 6 posts readily violated the Standard. “The user praised and supported people involved in a continuing riot where people died, lawmakers were put at serious risk of harm, and a key democratic process was disrupted. Moreover, at the time when these restrictions were extended on January 7, the situation was fluid and serious safety concerns remained.”

But in two previous decisions, the Oversight Board criticized the standard. In a decision overturning Facebook’s removal of a post featuring a quotation misattributed to Joseph Goebbels, the Oversight Board admonished Facebook for not including all aspects of its policy on dangerous individuals and organizations in the community standard.

Facebook apparently has self-designated lists of individuals and organizations subject to the policy that it does not share with users, and treats any quoting of such persons as an “expression of support” unless the user provides additional context to make their benign intent explicit, a condition also not disclosed to users. Facebook’s lists evidently include US-designated foreign terrorist organizations, but also seem to go beyond that list.

As the Oversight Board concluded, “this results in speech being suppressed which poses no risk of harm” and found that the standard fell short of international human rights standards: “the policy lacks clear examples that explain the application of ‘support,’ ‘praise’ and ‘representation,’ making it difficult for users to understand this Community Standard. This adds to concerns around legality and may create a perception of arbitrary enforcement among users.” Moreover, “the policy fails to explain how it ascertains a user’s intent, making it hard for users to foresee how and when the policy will apply and conduct themselves accordingly.”

The Oversight Board recommended that Facebook explain and provide examples of the application of key terms used in the policy, including the meanings of “praise,” “support,” and “representation.” The Board also recommended that the community standard provide clearer guidance to users on making their intent apparent when discussing such groups, and that a public list of “dangerous” organizations and individuals be provided to users.

The United Nations Special Rapporteur on Freedom of Expression also expressed concern that the standard, and specifically the language of “praise” and “support,” was “excessively vague.”

Recommendations

Policies such as Facebook’s that restrict references to designated terrorist organizations may be well-intentioned, but in their blunt application, they can have serious consequences for documentation of crimes—including war crimes—as well as vital expression, including counterspeech, satire, and artistic expression, as we’ve previously documented. While companies, including Facebook, have regularly claimed that they are required to remove such content by law, it is unclear to what extent this is true. The legal obligations are murky at best. Regardless, Facebook should be transparent about the composition of its “Dangerous Individuals and Organizations” list so that users can make informed decisions about what they post.

But while some content may require removal under certain jurisdictions, it is clear that other decisions are made on the basis of internal policies and external pressure—and are often not in the best interest of the individuals that they claim to serve. This is why it is vital that companies include vulnerable communities—in this case, Palestinians—in policy conversations.

Finally, transparency and appropriate notice to users would go a long way toward mitigating the harm of such takedowns—as would ensuring that every user has the opportunity to appeal content decisions in every circumstance. The Santa Clara Principles on Transparency and Accountability in Content Moderation offer a baseline for companies.

Share
Categories
free speech Intelwars International Security Education

#ParoNacionalColombia and Digital Security Considerations for Police Brutality Protests

In the wake of Colombia’s tax reform proposal, which came as more Colombians fell into poverty as a result of the pandemic, demonstrations spread over the country in late April, reviving the social unrest and socio-economic demands that led people to the streets in 2019. The government’s attempt to reduce public outcry by withdrawing the tax proposal to draft a new text did not work. Protests continue online and offline. Violent repression on the ground by police, and the military presence in Colombian cities, have raised concerns among national and international groups—from civil organizations across the globe to human rights bodies that are calling on the government to respect people’s constitutional rights to assemble and to allow free expression on the Internet and in the streets. Media outlets have reported on government crackdowns against protesters, including physical violence, missing persons, and deaths; the seizure of phones and other equipment used to document protests; internet disruptions; and content restrictions or takedowns by online platforms.

As the turmoil and demonstrations continue, we’ve put together some useful resources from EFF and allies we hope can help those attending protests and using technology and the Internet to speak up, report, and organize. Please note that the authors of this post come from primarily U.S.- and Brazil-based experiences. The post is by no means comprehensive. We urge readers to be aware that protest circumstances change quickly; digital security risks, and their mitigation, can vary depending on your location and other contexts. 

This post has two sections covering resources for navigating protests and resources for navigating networks.

Resources for Navigating Protests

Resources for Navigating Network Issues

Resources for Navigating Protests

To attend protests safely, demonstrators need to consider many factors and threats: these range from protecting themselves from harassment and their own devices’ location tracking capabilities, to balancing the need to use technologies for documenting law enforcement brutality and disseminating information. Another consideration is using encryption to protect data and messages from unintended readers. Some resources that may be helpful are:

For Protestors (Colombia)

 For Bringing Devices to Protests

For Using Videos and Photos to Document Police Brutality, Protect Protesters’ Faces, and Scrub Metadata

Resources for Navigating Network Issues

What happens if the Internet is really slow, down altogether, or there’s some other problem keeping people from connecting online? What if social media networks remove or block content from being widely seen, and each platform has a different policy for addressing content issues? We’ve included some resources for understanding hindrances to sending messages and posts or connecting online. 

For Network and Platform Blockages (Colombia) 

For Network Censorship 

For Selecting a Circumvention Tool

If circumvention (not anonymity) is your primary goal for accessing and sending material online, the following resources might be helpful. Keep in mind that Internet Service Providers (ISPs) are still able to see that you are using one of these tools (e.g. that you’re on a Virtual Private Network (VPN) or that you’re using Tor), but not where you’re browsing, nor the content of what you are accessing. 

VPNs

A few diagrams showing the difference between default connectivity to an ISP using a VPN and using Tor are included below (from the Understanding and Circumventing Network Censorship SSD guide).

[Diagram: the request for eff.org passes through a router and an ISP server on the way to eff.org’s server.]

Your computer tries to connect to https://eff.org, which is at a listed IP address (the numbered sequence beside the server associated with EFF’s website). The request for that website is made and passed along to various devices, such as your home network router and your ISP, before reaching the intended IP address of https://eff.org. The website successfully loads for your computer.
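As a small aside, the first step in the diagram above, turning the name eff.org into its listed IP address, can be observed directly. Here is a short illustrative sketch using Python’s standard library (eff.org is used only because it is the example in the diagram):

```python
# Resolve the listed IP address(es) for eff.org, the first step of the
# plain connection shown above. This DNS lookup is itself normally
# unencrypted, so on-path observers can see which names you look up.
import socket

for family, _, _, _, sockaddr in socket.getaddrinfo(
    "eff.org", 443, proto=socket.IPPROTO_TCP
):
    print(family.name, sockaddr[0])  # e.g. AF_INET <IPv4 address>
```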

[Diagram: the request passes, encrypted, through the router, the ISP’s server, and the VPN’s server before finally landing at eff.org’s server.]

In this diagram, the computer uses a VPN, which encrypts its traffic and connects to eff.org. The network router and ISP might see that the computer is using a VPN, but the data is encrypted. The ISP routes the connection to the VPN server in another country. This VPN then connects to the eff.org website.

Tor 

Digital security guides on using Tor Browser, which uses the volunteer-run Tor network, from Surveillance Self-Defense (EFF):

  • How to: Use Tor on macOS (English)
  • How to: Use Tor for Windows (English)
  • How to: Use Tor for Linux (English)
  • Cómo utilizar Tor en macOS (Español)
  • Cómo Usar Tor en Windows (Español)
  • Como usar Tor en Linux (Español)

[Diagram: the request is encrypted and passes through the router, the ISP’s server, and three Tor relays before landing at eff.org’s server.]

The computer uses Tor to connect to eff.org. Tor routes the connection through several “relays,” which can be run by different individuals or organizations all over the world. The final “exit relay” connects to eff.org. The ISP can see that you’re using Tor, but cannot easily see what site you are visiting. The owner of eff.org, similarly, can tell that someone using Tor has connected to its site, but does not know where that user is coming from.
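For readers comfortable with code, here is a hedged sketch of routing a request through a locally running Tor client. It assumes Tor is listening on its default SOCKS port (9050) and that the requests library is installed with SOCKS support (pip install requests[socks]); the check.torproject.org service simply reports whether a request arrived through Tor.

```python
# Sketch: send an HTTPS request through a local Tor client's SOCKS proxy.
# Assumes Tor is running locally with its default SOCKS listener on 9050.
import requests

proxies = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h: Tor resolves DNS as well
    "https": "socks5h://127.0.0.1:9050",
}

resp = requests.get("https://check.torproject.org/api/ip", proxies=proxies)
print(resp.json())  # e.g. {"IsTor": true, "IP": "<exit relay address>"}
```

If the proxy settings are removed, the same request leaves your network directly and the service reports IsTor as false; the only change is the path the traffic takes.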

For Peer-to-Peer Resources

Peer-to-Peer alternatives can be helpful during a shutdown or during network disruptions and include tools like the Briar App, as well as other creative uses such as Hong Kong protesters’ use of AirDrop on iOS devices.

For Platform Censorship and Content Takedowns

If your content is taken down from services like social media platforms, this guide may be helpful for understanding what might have happened, and making an appeal (Silenced Online): How to Appeal (English)

For Identifying Disinformation

Verifying the authenticity of information (like determining if the poster is part of a bot campaign, or if the information itself is part of a propaganda campaign) is tremendously difficult. Data & Society’s reports on the topic (English), and Derechos Digitales’ thread (Español) on what to pay attention to and how to check information might be helpful as a starting point. 

Need More Help?

For those on the ground who need digital security assistance, Access Now has a 24/7 Helpline for human rights defenders and folks at risk, which is available in English, Spanish, French, German, Portuguese, Russian, Tagalog, Arabic, and Italian. You can contact their helpline at https://www.accessnow.org/help/

Thanks to former EFF fellow Ana Maria Acosta for her contributions to this piece.

Share
Categories
free speech Gender Intelwars Sexes Transgender Transgender sports

Law student faces possible expulsion for daring to say women have vaginas and that men are physically stronger

A Scottish law student is being investigated by her school for making biologically factual statements about women and men, the U.K.’s Times reported. And now she faces possible expulsion.

What happened?

Lisa Keogh, a 29-year-old student at Abertay University in Scotland, is reportedly facing discipline from the school for telling her classmates that a woman must have a vagina — citing generally understood biology that females are born with female genitals — and that men are stronger than women and “the difference in physical strength of men versus women is a fact.”

Keogh told the Times that she made the comments during a video seminar, which prompted the class lecturer to mute her. Her comments came as the class discussed the topic of men who identify as female taking part in mixed martial arts matches. She said she pointed out to the class not only that women have female body parts but also that in such a scenario the trans athlete “had testosterone in her body for 32 years and, as such, would be genetically stronger than your average woman.”

She is also accused of calling women the “weaker sex” and having the gall to stand up for men after one of her classmates indicated that all men are rapists and pose a danger to women. Keogh admits to responding by calling her anti-men classmates “man-hating feminists.”

“I didn’t intend to be offensive but I did take part in a debate and outlined my sincerely held views,” she said, the Times reported. “I was abused and called names by the other students, who told me I was a ‘typical white, cis girl.’ You have got to be able to freely exchange differing opinions otherwise it’s not a debate.”

Those statements did not sit well with her younger classmates, who anonymously reported her to the school’s higher-ups.

Her lawyers have described the entire thing as “farcical,” the paper said.

And Keogh apparently agrees, telling the Times about her reaction after she received an email accusing her of uttering transphobic and offensive comments during a class on gender feminism.

“I thought it was a joke,” she said. “I thought there was no way that the university would pursue me for utilising my legal right to freedom of speech.

“I wasn’t being mean, transphobic or offensive,” Keogh said. “I was stating a basic biological fact. I previously worked as a mechanic and when I was in the workshop there were some heavy things that I just couldn’t lift but male colleagues could.”

What’s next?

Now Keogh, a mother of two who aspires to be a human rights attorney, faces possible expulsion and fears she could lose her chance to “make a positive contribution” all because a handful of anonymous younger classmates had their feelings hurt.

“I’m worried that my chance of becoming a lawyer, and making a positive contribution, could be ended just because some people were offended,” she said, according to the Times. “Those girls fresh out of high school who accused me are training to be lawyers. There are no trigger warnings in a courtroom. The judge isn’t going to whisper softly or excuse them from listening to things they might not want to hear.”

Abertay University told the Times that it would not comment on disciplinary matters.

Share
Categories
free speech Intelwars Section 230 of the Communications Decency Act

Lawsuit Against Snapchat Rightfully Goes Forward Based on “Speed Filter,” Not User Speech

The U.S. Court of Appeals for the Ninth Circuit has allowed a civil lawsuit to move forward against Snapchat, a smartphone social media app, brought by the parents of three teenage boys who died tragically in a car accident after reaching a maximum speed of 123 miles per hour. We agree with the court’s ruling, which confirmed that internet intermediaries are not immune from liability when the harm does not flow from the speech of other users.

The parents argue that Snapchat was negligently designed because it incentivized users to drive at dangerous speeds by offering a “speed filter” that could be used during the taking of photos and videos. The parents allege that many users believed that the app would reward them if they drove 100 miles per hour or faster. One of the boys had posted a “snap” with the “speed filter” minutes before the crash.

The Ninth Circuit rightly held in Lemmon v. Snap, Inc. that Section 230 does not protect Snapchat from the parents’ lawsuit. Section 230 is a critical federal law that protects user speech by providing internet intermediaries with partial immunity against civil claims for hosting user-generated content (see 47 U.S.C. § 230(c)(1)). Thus, for example, if a review site publishes a review that contains a statement that defames someone else, the reviewer may be properly sued for writing and uploading the defamatory content, but not the review site for hosting it.

EFF has been a staunch supporter of Section 230 since it was enacted in 1996, recognizing that the law has facilitated free speech and innovation online for 25 years. By partially shielding internet intermediaries from potential liability for what their users say and do on their platforms, Section 230 creates the legal breathing room for entrepreneurs to create a multitude of diverse spaces and services online. By contrast, with greater legal exposure, companies are incentivized in the opposite direction—to take down more user speech or to cease operations altogether.

However, this case against Snapchat shows that Section 230 does not—and was never meant to—shield internet intermediaries (such as social media platforms) from liability in all cases. Section 230 already has several exceptions, including for when online platforms host user speech that violates federal criminal law or intellectual property law.

In this case, the court explained that Section 230 does not protect companies when a claim is premised on harm that flows from the company’s own speech or actions, independent from the speech of other users. As the Ninth Circuit explained, the parents are aiming to hold Snapchat liable for creating a defective product with a feature that inspired users, including their children, to drive too fast. Nothing in the claim tries to hold Snapchat liable for publishing the “speed filter” post by one of the boys before they died in the crash. Nor would the parents “be permitted under § 230(c)(1) to fault Snap for publishing other Snapchat-user content (e.g., snaps of friends speeding dangerously) that may have incentivized the boys to engage in dangerous behavior.”

Thus, the court repeatedly emphasizes in the opinion that the parents’ claim “stand[s] independently of the content that Snapchat’s users create with the Speed Filter,” and internet intermediaries may lose Section 230 immunity for offering defective tools, “so long as plaintiffs’ claims do not blame them for the content that third parties generate with those tools.”

The Ninth Circuit also noted that the Lemmon case is distinguishable from other cases where the plaintiffs tried to creatively plead around Section 230 by arguing that the design of the website or app was the problem, when in fact the plaintiffs’ harm flowed from other users’ content—such as online content related to sex trafficking, illegal drug sales, and harassment. In these cases, the courts rightly granted the companies immunity under Section 230.

By emphasizing this distinction, we believe the decision does not create a troublesome incentive to censor user speech in order to avoid potential liability. 

One thing to keep in mind is that the Ninth Circuit’s decision not to apply Section 230 here does not automatically mean that Snapchat will be held liable for negligent product design. As we saw in a seminal Section 230 case, the website Roommates.com was denied Section 230 immunity by the Ninth Circuit, but later defeated a housing discrimination claim. The Lemmon case now goes back down to the district court, which will allow the case to proceed to a consideration of the merits.

Share
Categories
free speech Game informer Gaza violence Ign Intelwars israeli palestinian conflict journalism Media Bias

Video game websites IGN and Game Informer post and then retract support for Palestinian civilians

Video game and entertainment media outlets IGN and Game Informer published and then retracted articles encouraging readers to donate to charities for the Palestinian civilians caught in the crossfire of the long-standing Israeli-Palestinian conflict.

On May 14, IGN posted an article titled, “How to Help Palestine,” which provided links to five charities and organizations that provide humanitarian relief to Palestinian civilians living in the West Bank and Gaza. The article featured a graphic of the Palestinian flag in IGN’s masthead.

“Palestinian civilians are currently suffering in great numbers in Jerusalem, Gaza, and West Bank, due to the active Israel-Palestine conflict,” IGN staff wrote. “The NYTimes reported that most of the deaths so far have occurred in Gaza. Below are charities and organizations on the ground in those areas where you can donate funds to help those most in need. We will continue to update this article with other ways you can help.”

Game Informer staff published a similar article on May 15, citing the IGN post and linking to the same charities. Several top gaming outlets posted their own pro-Palestine stories, following IGN’s lead.

But on Monday, the IGN and Game Informer articles were taken down. IGN released a statement about the removal, apologizing for appearing to take sides in the Israeli-Palestinian conflict.

“We have a track record of supporting humanitarian efforts and charities across the globe. In the instance of our recent post regarding how to help civilians in the Israel-Palestinian Conflict, our philanthropic instincts to help those in need was not in-line with our intent of trying to show support for all people impacted by tragic events,” IGN said.

“By highlighting only one population, the post mistakenly left the impression that we were politically aligned with one side. That was not our intention and we sincerely regret the error.”

Game Informer has not yet released a public statement on why its article was taken down.

Kotaku reported that after the IGN article went live, IGN Israel shared a statement on its social media accounts condemning the U.S. IGN article and social media posts supporting Palestinian charities as “misleading.”

“We at IGN Israel support the State of Israel (obviously) and support IDF soldiers who do everything to keep us all in these tough days,” IGN Israel stated. “We work in every way possible to remove this misleading and offensive content from the American edition which does not represent our views.”

That post by IGN Israel has since been deleted.

Reacting on social media, various game industry journalists and media figures were critical of the decision and speculated that IGN’s Palestinian-sympathetic editorial staff was under orders from the website’s corporate owners to take down the post.

Israeli-Palestinian fighting began again after weeks of tensions boiled over into violence when Israel’s Supreme Court approved the evictions of six Palestinian families from a neighborhood in East Jerusalem to make way for Israeli settlers. On May 7, Israeli police were deployed to the Al-Aqsa Mosque in Jerusalem, where clashes with Palestinian worshippers led to rocket strikes on Israel by Hamas and retaliation by the Israel Defense Forces.

President Joe Biden on Sunday spoke with Israeli Prime Minister Benjamin Netanyahu and Palestinian President Mahmoud Abbas, raising concerns about civilian casualties in Gaza amid the ongoing violence. Palestinian health officials claim at least 140 civilians, including dozens of children, have been killed by Israeli airstrikes, while nine people, including two children, were killed by Hamas rockets in Israel.

More than 25 Democratic senators, led by Sen. Jon Ossoff (D-Ga.), have called for an immediate ceasefire between Israel and Palestine to “prevent further loss of life and further escalation of violence.”

Share
Categories
America Constitution First Amendment free speech Intelwars Outrage Prince Harry royal family UK

Prince Harry rips 1st Amendment as ‘bonkers’ — and more than a few Americans get plenty annoyed: ‘Show some utter respect’

Fresh on the heels of boldly commanding top podcaster — and UFC commentator — Joe Rogan to “just stay out of it” in regard to Rogan suggesting that young people should not get the COVID-19 vaccine, Prince Harry’s outspoken mouth is getting him in trouble again.

What did he say this time?

The Duke of Sussex — who recently left his royal family behind in England for the sunnier environs of southern California, where he has since nabbed a considerable windfall through deals with American companies like Netflix and Spotify, the Daily Mail reported — was conversing on Dax Shepard’s podcast Thursday when the subject of paparazzi taking photos of celebrities’ children came up.

“I don’t want to start, sort of, going down the First Amendment route because that’s a huge subject and one in which I don’t understand because I’ve only been here for a short period of time, but you can find a loophole in anything,” Harry said, adding that “laws were created to protect people.”

The prince added, “I believe we live in an age now where you’ve got certain elements of the media redefining to us what privacy means. There’s a massive conflict of interest. And then you’ve got social media platforms, trying to redefine what free speech means. … And we’re living in this world where we’ve almost, like, the laws have been completely flipped by the very people that need them flipped so they can make more money, and they can capitalize off our pain, grief, and this sort of general self-destructive mode that is happening in the moment.”

And soon Harry delivered the shot heard ’round the world: “I’ve got so much I want to say about the First Amendment. I still don’t understand it, but it is bonkers.” (In fairness, Shepard agreed: “It is bonkers.”)

The prince’s comments can be heard in context after the 39-minute mark of the podcast.

How did folks react?

To put it mildly, a number of Americans didn’t much care for Harry’s commentary on the Constitution’s First Amendment, which guarantees citizens the right to freely express themselves, practice whatever religions they choose — or none at all — and to assemble and petition the government.

To say nothing of the fact that the prince is a guest here, making a load of cash here, and enjoying a lifestyle here protected by U.S. laws — and on top of that admitting that “I still don’t understand” the First Amendment, yet summoning the arrogance to dismiss it as “bonkers” — a number of notable U.S. citizens fired back hard.

“We fought a war in 1776 so we don’t have to care what you say or think,” Meghan McCain of “The View” said in reaction to Harry’s comments. “That being said, you have chosen to seek refuge from your homeland here and thrive because all of what our country has to offer and one of the biggest things is the 1st Amendment — show some utter respect.”

Others shared McCain’s sentiments:

  • Republican U.S. Sen. Ted Cruz of Texas quipped in rather understated fashion that it’s “nice” that Harry “can say that.”
  • GOP U.S. Rep. Dan Crenshaw, also of the Lone Star State, noted that Harry “just doubled the size of my Independence Day party.”
  • Megyn Kelly reminded him that it’s “‘better to remain silent and be thought a fool than to speak and remove all doubt.’ (Lincoln or Twain or someone smarter than Prince Harry.)”
  • “Don’t let the door knob hit you, Windsor,” Fox News’ host Laura Ingraham tweeted.

Author Nick Adams — who’s from Australia and says in his Twitter bio that he’s “American by choice” — declared that “Prince Harry should go back to the UK!”

Even fellow Brits got into the act. Former Brexit leader Nigel Farage observed that “for Prince Harry to condemn the USA’s First Amendment shows he has lost the plot. Soon he will not be wanted on either side of the pond.”

Dan Wootton of the UK’s GBNews tweeted that “the First Amendment is one of the biggest reasons why the USA is a bastion of free speech and freedom of expression. The fact Prince Harry doesn’t like it because he thinks rich privileged folk deserve more rights than everyone else says a lot!”

Speaking of which, Harry and Meghan just added more to their already considerable coffers with a new partnership with Procter & Gamble, Yahoo Finance reported Sunday.

Anything else?

According to the Daily Mail, Harry also criticized Prince Charles, Prince Philip, and the queen during the podcast and complained he had suffered “genetic pain,” which led to royal aides demanding that he give up his royal titles.

During the 2020 election cycle, Harry and Meghan issued a video widely interpreted as a campaign ad for then-candidate Joe Biden.

When asked for his reaction to the couple weighing in on the race at the time, then-President Donald Trump said, “I’m not a fan of hers, and I would say this — and she probably has heard that — but I wish a lot of luck to Harry, ’cause he’s gonna need it.”

Harry and Meghan also were famously interviewed by Oprah Winfrey recently, during which they alleged racism within the Royal Family.

Share
Categories
free speech Intelwars Section 230 of the Communications Decency Act

President Biden Revokes Unconstitutional Executive Order Retaliating Against Online Platforms

President Joe Biden on Friday rescinded a dangerous and unconstitutional Executive Order issued by President Trump that threatened internet users’ ability to obtain truthful information online and retaliated against services that fact-checked the former president. The Executive Order called on multiple federal agencies to punish private online social media services for content moderation decisions that President Trump did not like.

Biden’s rescission of the Executive Order comes after a coalition of organizations challenging the order in court called on the president to abandon the order last month. In a letter from Rock The Vote, Voto Latino, Common Cause, Free Press, Decoding Democracy, and the Center for Democracy & Technology, the organizations demanded the Executive Order’s rescission because “it is a drastic assault on free speech designed to punish online platforms that fact-checked President Trump.”

The organizations filed lawsuits to strike down the Executive Order last year, with Rock The Vote, Voto Latino, Common Cause, Free Press, and Decoding Democracy’s challenge currently on appeal in the U.S. Court of Appeals for the Ninth Circuit. The Center for Democracy & Technology’s appeal is currently pending in the U.S. Court of Appeals for the D.C. Circuit.

Cooley LLP, Protect Democracy, and EFF represent the plaintiffs in Rock The Vote v. Biden. We applaud Biden’s revocation of the “Executive Order on Preventing Online Censorship,” and are reviewing his rescission of the order and conferring with our clients to determine what impact it has on the pending legal challenge in the Ninth Circuit.

Trump issued the unconstitutional Executive Order in retaliation for Twitter fact-checking May 2020 tweets spreading false information about mail-in voting. The Executive Order, issued two days later, sought to undermine a key law protecting internet users’ speech, 47 U.S.C. § 230 (“Section 230”), and to punish online platforms, including by directing federal agencies to review and potentially stop advertising on social media and by kickstarting a federal rulemaking to re-interpret Section 230.

Share
Categories
Campus reform reports free speech Intelwars Iowa state university professor Racism Rita bannerjee Social Media Young americans for freedom

Professor says she limits ‘interactions’ with white people ‘as much as possible’

An Iowa State University professor raised eyebrows after revealing that she limits “interactions” with white people “as much as possible.”

She also complained that white men “with dirty hair and wrinkled clothes” will always be “liked and higher ranked.”

The professor’s tweets are protected at the time of this reporting.

What are the details?

According to a Tuesday report from Campus Reform, Professor Rita Mookerjee, who spoke on a student government “Diversity and Inclusion Panel” in March, reportedly tweeted that she tries to “limit my interactions with yt people as much as possible.”

“I can’t with the self-importance and performance esp during Black History Month,” she said in a since-deleted tweet.

Iowa State’s Young Americans for Freedom tweeted about Mookerjee’s remarks and wrote, “ISUStuGov, do you still plan on hosting this professor for Women’s Week after she has made repeated racist claims against women of a different race?”

What else?

The outlet also reported that in 2020, she complained about how “whyte men with dirty hair and wrinkled clothes will always be liked and higher ranked.”

She also reportedly complained that a person mistook her for a white person on social media, so she was forced to change her profile picture.

“Someone called me white the other day so #NewProfilePic because I think the f*** not,” she tweeted, according to the outlet.

In a statement, the university’s student government group said that while Mookerjee’s remarks did not “reflect the views of the student government,” it would not remove her from the upcoming Women’s Week panel.

A portion of the statement said, “Student government does not agree with the content of the comments that were made — no one should be reduced to the color of their skin.”

It later added, “The tweets from the panelist do not reflect the views of Student Government: we believe that prejudice based on race or the color of someone’s skin is wrong in all accounts. We have decided that the professor will still be invited to participate in the panel; it is important to allow ideas to be shared, even if we find the comments to be wrong. In order to have a true free marketplace of ideas, we must not rescind an offer to speak based on our objection to someone’s personal speech.”

Charles Klapatauskas, president of Young Americans for Freedom at the school, told the outlet, “YAF was not supportive of the decision to not remove Dr. Mookerjee from the women’s panel.”

“YAF might disagree with the content of the tweets, [but] we firmly stand behind her ability to tweet out such things,” Klapatauskas added.

Categories
free speech Intelwars International Surveillance and Human Rights

Proposed New Internet Law in Mauritius Raises Serious Human Rights Concerns

As debate continues in the U.S. and Europe over how to regulate social media, a number of countries—such as India and Turkey—have imposed stringent rules that threaten free speech, while others, such as Indonesia, are considering them. Now, a new proposal to amend Mauritius’ Information and Communications Technologies Act (ICTA) with provisions to install a proxy server to intercept otherwise secure communications raises serious concerns about freedom of expression in the country.

Mauritius, a democratic parliamentary republic with a population just over 1.2 million, has an Internet penetration rate of roughly 68% and a high rate of social media use. The country’s Constitution guarantees the right to freedom of expression but, in recent years, advocates have observed a backslide in online freedoms.

In 2018, the government amended the ICTA, imposing heavy sentences—as high as ten years in prison—for online messages that “inconvenience” the receiver or reader. The amendment was in turn utilized to file complaints against journalists and media outlets in 2019.

In 2020, as COVID-19 hit the country, the government levied a tax on digital services operating in the country, defined as any service supplied by “a foreign supplier over the internet or an electronic network which is reliant on the internet; or by a foreign supplier and is dependent on information technology for its supply.”

The latest proposal to amend the ICTA has raised alarm bells amongst local and international free expression advocates, as it would enable government officials who have established instances of “abuse and misuse” to block social media accounts and track down users using their IP addresses.

The amendments are reminiscent of those in India and Turkey in that they seek to regulate foreign social media, but differ in that Mauritius—a far smaller country—lacks the ability to force foreign companies to maintain a local presence. In a consultation paper on the amendments, proponents argue:

Legal provisions prove to be relatively effective only in countries where social media platforms have regional offices. Such is not the case for Mauritius. The only practical solution in the local context would be the implementation of a regulatory and operational framework which not only provides for a legal solution to the problem of harmful and illegal online content but also provides for the necessary technical enforcement measures required to handle this issue effectively in a fair, expeditious, autonomous and independent manner.

While some of the concerns raised in the paper—such as the fact that social media companies do not sufficiently moderate content in the country’s local language—are valid, the solutions proposed are disproportionate. 

A Change.org petition calling on local and international supporters to oppose the amendments notes that “Whether human … or AI, the system that will monitor, flag and remove information shared by users will necessarily suffer from conscious or unconscious bias. These biases will either be built into the algorithm itself, or will afflict those who operate the system.” 

Most concerning, however, is that authorities wish to install a local proxy server that impersonates social media networks, fooling devices and web browsers into sending secure information to the local server instead of to the social media networks. This would effectively create an archive of the social media activity of all users in Mauritius before the traffic is relayed on to the networks’ servers. The plan fails to mention how long the information would be archived, or how user data would be protected from breaches.
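Such interception is detectable in principle: a proxy that impersonates a social network has to present its own TLS certificate, so the certificate chain a device sees differs from the one the real site serves. The sketch below is our own minimal illustration of that check, not anything from the ICTA proposal; the hostname and the issuer-inspection heuristic are assumptions for demonstration purposes.

```python
# Minimal sketch: inspect the TLS certificate a server presents and report
# who issued it. Under interception, either validation fails outright
# (untrusted proxy certificate) or, if users were made to install a local
# root CA, the issuer shows up as that local authority instead of a
# well-known public CA.
import socket
import ssl

def presented_issuer(hostname: str, port: int = 443) -> str:
    """Return the organizationName of the issuer of the server's certificate."""
    context = ssl.create_default_context()
    try:
        with socket.create_connection((hostname, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
    except ssl.SSLCertVerificationError:
        return "validation failed (possible interception proxy)"
    # 'issuer' is a tuple of relative distinguished names, each a tuple of
    # (key, value) pairs, e.g. ((('organizationName', 'DigiCert Inc'),), ...)
    for rdn in cert["issuer"]:
        for key, value in rdn:
            if key == "organizationName":
                return value
    return "unknown"

if __name__ == "__main__":
    print("Certificate issued by:", presented_issuer("www.facebook.com"))
```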

Local free expression advocates are calling on the ICTA authorities to “concentrate their efforts in ethically addressing concerns made by citizens on posts that already exist and which have been deemed harmful.” Supporters are encouraged to sign the Change.org petition or submit comment to the open consultation by emailing socialmediaconsultation@icta.mu before May 5, 2021.

Categories
free speech Intelwars International

Brazil’s Bill Repealing National Security Law Has its Own Threats to Free Expression

The Brazilian Chamber of Deputies is on track to approve a law that threatens freedom of expression and the right to assemble and protest, with the stated aim of defending the democratic constitutional state. Bill 6764/02 repeals the Brazilian National Security Law (Lei de Segurança Nacional), one of the ominous legacies of the dictatorship that ruled the country until 1985. Although there’s broad consensus over the harm the National Security Law represents, Brazilian civil groups have stressed that replacing it with a new act, without careful discussion of its grounds, principles, and specific rules, risks rebuilding a framework that serves repressive rather than democratic ends.

The Brazilian National Security Law has a track record of abuses in persecuting and silencing dissent, with vague criminal offenses and provisions targeting speech. After a relatively dormant period, it gained new prominence during President Bolsonaro’s administration. It has served as a legal basis for accusations against opposition leaders, critics, journalists, and even a congressman aligned to Bolsonaro in the country’s current turbulent political landscape.

However, its proposed replacement, Bill 6764/02, raises various concerns, some particularly unsettling for digital rights. Even with alternative drafts trying to untangle them, problems remain.

First, the espionage offense in the bill makes it a crime to hand over secret documents to foreign governments. It’s crucial that this and related offenses not reach acts that raise serious human rights concerns: whistleblowers revealing facts that could imply violations of human rights, crimes committed by government officials, or other serious wrongdoing affecting public administration; and journalistic and investigative reporting, and the work of civil groups and activists, that bring governments’ unlawful practices and abuses to light. These acts should be clearly exempted from the offense. Amendments under discussion seek to address these concerns, but there’s no assurance they will prevail in the final text if the new law is approved.

The IACHR’s Freedom of Expression Rapporteur has highlighted how often governments in Latin America classify information on national security grounds without proper assessment and substantiation. The report provides a number of examples from the region of the hurdles this creates for accessing information related to human rights violations and government surveillance. The IACHR Rapporteur stresses the key role of investigative journalists, the protection of their sources, and the need to grant legal backing against reprisal to whistleblowers who expose human rights violations and other wrongdoing. This aligns with the UN Freedom of Expression Rapporteur’s previous recommendations and reinforces the close relationship between democracy and strong safeguards for those who take the risk of unveiling sensitive public-interest information. As the UN High Commissioner for Human Rights has already pointed out:

The right to privacy, the right to access to information and freedom of expression are closely linked. The public has the democratic right to take part in the public affairs and this right cannot be effectively exercised by solely relying on authorized information.

Second, the proposal also aims to tackle “fake news” by making “mass misleading communication” a crime against democratic institutions. Although the bill should be strictly tailored to counter exceptionally serious threats, bringing disinformation into its scope instead potentially targets millions of Internet users. Disseminating “facts the person know is untrue” that could put at risk “the health of the electoral process” or “the free exercise of constitutional powers,” using “means not provided by the private messaging application,” could lead to up to five years’ jail time.

We agree with the digital rights groups on the ground that have stressed the provision’s harmful implications for users’ freedom of expression. Criminalizing the spread of disinformation is full of traps. It criminalizes speech by relying on vague terms (as in this bill) that are easily twisted to stifle critical voices and those challenging entrenched political power. Joint declarations of the Freedom of Expression Rapporteurs have repeatedly urged States not to take that road.

Moreover, the provision applies when such messages are distributed using “means not provided by the application.” Presuming that the use of such means is inherently malicious poses a major threat to interoperability. The technical ability to plug one product or service into another, even when one service provider hasn’t authorized that use, has been a key driver of competition and innovation. And dominant companies repeatedly abuse legal protections to ward off and try to punish competitors.
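To see what is at stake, consider how interoperability already works in an older, open ecosystem. The sketch below is our own analogy, not anything drawn from the bill: any email client, including ones a provider never approved, can plug into any provider over the standard IMAP protocol. The host and credentials are placeholders.

```python
# Minimal sketch: a client "not provided by the application," built from the
# standard library alone, reading a mailbox over the open IMAP protocol.
# No provider-specific permission or SDK is required -- that permissionless
# plugging-in is what interoperability means in practice.
import imaplib

HOST = "imap.example.com"   # placeholder provider
USER = "user@example.com"   # placeholder account
PASSWORD = "app-password"   # placeholder credential

conn = imaplib.IMAP4_SSL(HOST)
try:
    conn.login(USER, PASSWORD)
    conn.select("INBOX", readonly=True)      # read-only: nothing is modified
    status, data = conn.search(None, "ALL")  # IDs of all messages
    print("Messages in inbox:", len(data[0].split()))
finally:
    conn.logout()
```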

This is not to say we do not care about the malicious spread of disinformation at scale. But it should not be part of this bill, given the bill’s specific scope, nor should it be addressed without careful attention to unintended consequences. There’s an ongoing debate, and there are other avenues to pursue that align with fundamental rights and rely on joint efforts from the public and private sectors.

Political pressure has hastened the bill’s vote. Bill 6764/02 may pass the Chamber of Deputies within days, after which it would still require the Senate’s approval. We join civil and digital rights groups in warning that a rushed approach actually creates greater risks for what the bill is supposed to protect. These and other troubling provisions put freedom of expression on the line, and would also serve to spur government surveillance and repression. These are the risks the defense of democracy should fend off, not reiterate.

Categories
Commentary free speech Intelwars Section 230 of the Communications Decency Act

EFF at 30: Protecting Free Speech, with Senator Ron Wyden

To commemorate the Electronic Frontier Foundation’s 30th anniversary, we present EFF30 Fireside Chats. This limited series of livestreamed conversations looks back at some of the biggest issues in internet history and their effects on the modern web.

To celebrate 30 years of defending online freedom, EFF was proud to welcome Senator Ron Wyden as our second special guest in EFF’s yearlong Fireside Chat series. Senator Wyden is a longtime supporter of digital rights, and as co-author of Section 230, one of the key pieces of legislation protecting speech online, he’s a well-recognized champion of free speech. EFF’s Legal Director, Dr. Corynne McSherry, spoke with the senator about the fight to protect free expression and how Section 230, despite recent attacks, is still the “single best law for small businesses and single best law for free speech.” He also answered questions from the audience about some of the hot topics that have swirled around the legislation for the last few years. 

You can watch the full conversation here or read the transcript.

On May 5, we’ll be holding our third EFF30 Fireside Chat, on surveillance, with special guest Edward Snowden. He will be joined by EFF Executive Director Cindy Cohn, EFF Director of Engineering for Certbot Alexis Hancock, and EFF Policy Analyst Matthew Guariglia as they weigh in on surveillance in modern culture, activism, and the future of privacy. 

RSVP NOW

Section 230 and Social Movements

Senator Wyden began the fireside chat with a reminder that some of the most important, and often divisive, social issues of the last few years, from #BlackLivesMatter to the #MeToo movement, would likely be censored much more heavily on platforms without Section 230. That’s because the law gives platforms both the power to moderate as they see fit, and partial immunity from liability for what’s posted on those sites, making the speech the legal responsibility of the original speaker.

Section 230…has always been for the person who doesn’t have deep pockets

The First Amendment protects most speech online, but without Section 230, many platforms would be unable to host much of this important, but controversial speech because they would be stuck in litigation far more often. Section 230 has been essential for those who “don’t own their own TV stations” and others “without deep pockets” for getting their messages online, Wyden explained. 

[Embedded video: https://www.youtube.com/watch?v=ELSJofIhnRM]

Wyden also discussed the history of Section 230, which was passed in 1996. ”[Senator Chris Cox] and I wanted to make sure that innovators and creators and people who had promising ideas and wanted to know how they were going to get them out – we wanted to make sure that this new concept known as the internet could facilitate that.” 

[Embedded video: https://www.youtube.com/watch?v=F916aJbM96Q]

Misconceptions Around Section 230

Wyden took aim at several of the misconceptions around 230, like the fact that the law is a benefit only for Big Tech. “One of the things that makes me angry…the one [idea] that really infuriates me, is that Section 230 is some kind of windfall for Big Tech. The fact of the matter is Big Tech’s got so much money that they can buy themselves out of any kind of legal scrape. We sure learned that when the first bill to start unraveling Section 230 passed, called SESTA/FOSTA.”

We need that fact-finding so that we make smart technology policy

[Embedded video: https://www.youtube.com/watch?v=twOpQY2htzs]

Another common misunderstanding around the law is that it mandates platforms to be “neutral.” This couldn’t be further from the truth, Wyden explained: “There’s not a single word in Section 230 that requires neutrality….The point was essentially to let ‘lots of flowers bloom.’ If you want to have a conservative platform, more power to you…If you want to have a progressive platform, more power to you.“ 

[Embedded video: https://www.youtube.com/watch?v=EM_gj6ZqCpA]

How to Think About Changes to Intermediary Liability Laws

All the positive benefit for online speech that Section 230 enables doesn’t mean the law is perfect, however. But before making changes to it, Wyden suggested, “There ought to be some basic fact finding before the Congress just jumps in to making sweeping changes to speech online.” EFF Legal Director Corynne McSherry agreed wholeheartedly: “We need that fact-finding so that we make smart technology policy,” adding that we need look no further than our experience with SESTA/FOSTA and its collateral damage to prove this point.

The first thing we ought to do is tackle the incredible abuses in the privacy area

There are other ways to improve the online ecosystem as well. Asked for his thoughts on better ways to address problems, Senator Wyden was blunt: “The first thing we ought to do is tackle the incredible abuses in the privacy area. Every other week in this country Americans learn about what amounts to yet another privacy disaster.”

[Embedded video: https://www.youtube.com/watch?v=hDT4J224EB4]

Another area where we can improve the online ecosystem is data collection and sales. Wyden recently introduced a bill, “The Fourth Amendment is Not For Sale,” that would help rein in the problem of apps and commercial data brokers selling things like user location data.

[Embedded video: https://www.youtube.com/watch?v=usMYK5rKCpA]

To wrap up the discussion, Senator Wyden took some questions about potential changes to Section 230. He lambasted SESTA/FOSTA, which EFF is challenging in court on behalf of two human rights organizations, a digital library, an activist for sex workers, and a certified massage therapist, as an example of a poorly guided amendment. 

[Embedded video: https://www.youtube.com/watch?v=cl48SEXjliI]

Senator Wyden pointed out that every time a proposal to amend the law comes up, there should be a rubric of several questions asked about how the change would work, and what impact it would have on users. (EFF has its own rubric for laws that would affect intermediary liability for just these purposes.)

[Embedded video: https://www.youtube.com/watch?v=AWJ6o6jOKgA]

We thank Senator Wyden for joining us to discuss free speech, Section 230, and the battle for digital rights. Please join us in the continuation of this fireside chat series on May 5 as we discuss surveillance with whistleblower Edward Snowden.

Categories
Crushing free speech Free expression awards free speech Intelwars Susan Wojcicki watch youtube censorship

YouTube CEO receives ‘Free Expression’ award — and mockery ensues over platform’s track record of crushing free expression

YouTube CEO Susan Wojcicki received a “Free Expression” award from the Freedom Forum Institute in a virtual ceremony last week — sponsored by YouTube, incidentally — and observers mercilessly mocked the gesture given YouTube’s notorious track record of crushing free speech.

What are the details?

As part of the ceremony posted to YouTube, Wojcicki sat for an interview in which she lauded the concept of free expression.

“I’ve just seen the real benefits that freedom of speech has, as well as representing people of all different backgrounds and all different perspectives, and that the freedoms we have, we really can’t take for granted,” she said. “We really have to make sure that we’re protecting them in every way possible.”

But Wojcicki also emphasized YouTube’s community guidelines and that there are “limits” to what the platform can allow.

Yeah, you might say that:

  • In 2019, Wojcicki admitted that YouTube took down hundreds of ads for then-President Donald Trump yet claimed the purge was done without political bias.
  • Last year, YouTube removed a viral video featuring frontline doctors calling for an end to the quarantine and comparing COVID-19 to the flu.
  • Earlier this month, YouTube removed a video of a public health roundtable hosted by Florida Republican Gov. Ron DeSantis featuring a panel of scientists and researchers who challenged the effectiveness of COVID-19 lockdowns and masks.

How did folks react?

As you might guess, YouTube and Wojcicki were savaged, both for their history of squelching content and for sponsoring the award ceremony itself. Heck, the video itself received over 21,000 down votes compared to only 88 up votes as of Wednesday afternoon:

  • “Free Expression Award?” one Twitter user asked. “Are they having a laugh[?] Didn’t she just censor the governor of Florida speaking with medical experts[?]”
  • “LOL, YouTube receiving an award for free expression/pro first amendment is Orwellian s**t,” another Twitter commenter declared. “What’s next[?] Facebook getting an award for respecting privacy?”

Well, at least YouTube didn’t turn off comments (yet) under the video of the virtual award ceremony, where a bevy of gems can be found:

  • “The Ministry of Truth has awarded The Ministry of Truth an award for being the most truthful ministry,” one commenter quipped.
  • “Well, at least this is a great example of free expression,” another user noted. “Just make up an award and give it to yourself.”
  • “Censor-Tube giving itself a Free Expression Award,” another commenter observed. “I think we need to invent a new word for the level of irony this has reached.”


[Embedded video: "2021 Free Expression Awards Highlight: Susan Wojcicki" (youtu.be)]

Categories
acceptance speech award censor dissent Censorship CEO Facade free expression free speech free speech activist Headline News Intelwars joke liars limits free speech Molly Burke propaganda removes content ruling class Susan Wojcicki Youtube

YouTube Gives Itself “Free Expression” Award Then Brags About Censoring Dissent

YouTube’s CEO Susan Wojcicki won the Freedom Forum Institute’s “Free Expression Award” on Friday in a ceremony sponsored by her own company. So basically, YouTube awarded itself a “free expression” award, and then its CEO bragged about how much the company censors people on the platform.

You couldn’t make this up if you tried. We live in crazy times.

In the digital awards ceremony, YouTube video creator Molly Burke praised Wojcicki as a “free speech leader” before the YouTube CEO detailed in her acceptance speech how much the platform censors its users, according to a report by RT. 

“The freedoms we have, we really can’t take for granted,” Wojcicki declared, adding that “we also need to make sure there are limits.” Literally, right after giving herself an award for being a free speech activist, she admits that she’s not a free speech activist and censors people, putting a limit on free speech.

If there are limits, it isn’t free speech. But Wojcicki doesn’t seem to care: she gave herself this award, so it can be as hypocritical as she wants. She added that YouTube removed nine million videos in the last quarter, 90% of which were taken down by machines. She also said there is “a lot of content that technically meets the spirit of what we’re trying to do, but it is borderline, and so for that content, we will just reduce – meaning we’re not going to recommend it to our users.”

Thankfully, the irony was not lost on the public. YouTube and Wojcicki were thrashed for this:

“This is the worst form of gaslighting I’ve ever seen,” commented one user, while another questioned, “What’s next, Facebook getting an award for respecting privacy?”

We live in the strangest of times. YouTube gave itself an award for protecting free speech when they literally admitted they censor those who go against the official narrative and hide videos of those with whom they disagree.  Welcome to 2021: the dystopian reality is being constructed for you.

Luckily, most have already seen through this particular facade.

 


Categories
Commentary free speech Intelwars Student Privacy

Proctoring Tools and Dragnet Investigations Rob Students of Due Process

Like many schools, Dartmouth College has increasingly turned to technology to monitor students taking exams at home. And while many universities have used proctoring tools that purport to help educators prevent cheating, Dartmouth’s Geisel School of Medicine has gone dangerously further. Apparently working under an assumption of guilt, the university is in the midst of a dragnet investigation of complicated system logs, searching for data that might reveal student misconduct, without a clear understanding of how those logs can be littered with false positives. Worse still, those attempting to assert their rights have been met with a university administration more willing to trust opaque investigations of inconclusive data sets rather than their own students.

The Boston Globe explains that the medical school administration’s attempts to detect supposed cheating have become a flashpoint on campus, exemplifying a worrying trend of schools prioritizing misleading data over the word of their students. The misguided dragnet investigation has cast a shadow over the career aspirations of over twenty medical students.

Dartmouth medical school has cast suspicion on students by relying on access logs that are far from concrete evidence of cheating

What’s Wrong With Dartmouth’s Investigation

In March, Dartmouth’s Committee on Student Performance and Conduct (CSPC) accused several students of accessing restricted materials online during exams. These accusations were based on a flawed review of an entire year’s worth of the students’ log data from Canvas, the online learning platform that contains class lectures and information. This broad search was instigated by a single incident of confirmed misconduct, according to a contentious town hall between administrators and students (we’ve re-uploaded this town hall, as it is now behind a Dartmouth login screen). These logs show traffic between students’ devices and specific files on Canvas, some of which contain class materials, such as lecture slides. At first glance, the logs showing that a student’s device connected to class files would appear incriminating: timestamps indicate the files were retrieved while students were taking exams. 

But after reviewing the logs that were sent to EFF by a student advocate, it is clear to us that there is no way to determine whether this traffic happened intentionally, or instead automatically, as background requests from student devices, such as cell phones, that were logged into Canvas but not in use. In other words, rather than the files being deliberately accessed during exams, the logs could have easily been generated by the automated syncing of course material to devices logged into Canvas but not used during an exam. It’s simply impossible to know from the logs alone if a student intentionally accessed any of the files, or if the pings exist due to automatic refresh processes that are commonplace in most websites and online services. Most of us don’t log out of every app, service, or webpage on our smartphones when we’re not using them.

Much like a cell phone pinging a tower, the logs show files being pinged in short time periods and sometimes being accessed at the exact second that students are also entering information into the exam, suggesting a non-deliberate process. The logs also reveal that the files accessed are largely irrelevant to the tests in question, again indicating an automated, random process. A UCLA statistician wrote a letter explaining that even an automated process can result in multiple false-positive outcomes. Canvas’ own documentation explicitly states that the data in these logs “is meant to be used for rollups and analysis in the aggregate, not in isolation for auditing or other high-stakes analysis involving examining single users or small samples.” Given the technical realities of how these background refreshes take place, the log data alone should be nowhere near sufficient to convict a student of academic dishonesty.
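To make the false-positive problem concrete, here is a rough sketch of the kind of timing analysis at issue. It is our own illustration with made-up records; the record format, two-second window, and burst threshold are assumptions, not Canvas’s actual schema or anyone’s forensic tooling. Note that even this analysis cannot prove intent either way, which is precisely the point: logs alone are not evidence.

```python
# Minimal sketch: group Canvas-style access records into bursts. Several
# unrelated files fetched within a second or two of each other looks like
# automated background syncing, not a student deliberately opening material.
from datetime import datetime, timedelta

records = [                                         # (timestamp, file) -- made up
    ("2021-03-02T10:15:01", "lecture_04_slides.pdf"),
    ("2021-03-02T10:15:01", "syllabus.pdf"),
    ("2021-03-02T10:15:02", "lab_policies.pdf"),    # three files in ~1 second
    ("2021-03-02T10:42:37", "exam_reference.pdf"),  # isolated single request
]

def automated_looking_bursts(records, window=timedelta(seconds=2), min_burst=3):
    """Return groups of accesses where several files were hit in a short window."""
    entries = sorted((datetime.fromisoformat(ts), f) for ts, f in records)
    bursts, current = [], [entries[0]]
    for entry in entries[1:]:
        if entry[0] - current[-1][0] <= window:
            current.append(entry)      # continues the current burst
        else:
            bursts.append(current)     # gap too large: close the burst
            current = [entry]
    bursts.append(current)
    return [b for b in bursts if len(b) >= min_burst]

for burst in automated_looking_bursts(records):
    print("Looks like background sync:", [f for _, f in burst])
```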

Along with The Foundation for Individual Rights in Education (FIRE), EFF sent a letter to the Dean of the Medical School on March 30th, explaining how these background connections work and pointing out that the university has likely turned random correlations into accusations of misconduct. The Dean’s reply was that the cases are being reviewed fairly. We disagree.

For the last year, we’ve seen far too many schools ignore legitimate student concerns about inadequate, or overbroad, anti-cheating software

It appears that the administration is the victim of confirmation bias, turning fallacious evidence of misconduct into accusations of cheating. The school has admitted in some cases that the log data appeared to have been created automatically, acquitting some students who pushed back. But other students have been sanctioned, apparently entirely based on this spurious interpretation of the log data. Many others are anxiously waiting to hear whether they will be convicted so they can begin the appeal process, potentially with legal counsel. 

These convictions carry heavy weight, leaving permanent marks on student transcripts that could make it harder for them to enter residencies and complete their medical training. At this level of education, this is not just about being accused of cheating on a specific exam. Being convicted of academic dishonesty could derail an entire career. 

University Stifles Speech After Students Express Concerns Online

Worse still, following posts from an anonymous Instagram account apparently run by students concerned about the cheating accusations and how they were being handled, the Office of Student Affairs introduced a new social media policy.

An anonymous Instagram account detailed some concerns students have with how these cheating allegations were being handled (accessed April 7). As of April 15, the account was offline.

The policy was emailed to students on April 7 but backdated to April 5—the day the Instagram posts appeared. The new policy states that, “Disparaging other members of the Geisel UME community will trigger disciplinary review.” It also prohibits social media speech that is not “courteous, respectful, and considerate of others” or speech that is “inappropriate.” Finally, the policy warns, “Students who do not follow these expectations may face disciplinary actions including dismissal from the School of Medicine.” 

One might wonder whether such a policy is legal. Unfortunately, Dartmouth is a private institution and so not prohibited by the First Amendment from regulating student speech.

If it were a public university with a narrower ability to regulate student speech, the school would be stepping outside the bounds of its authority if it enforced the social media policy against medical school students speaking out about the cheating scandal. On the one hand, courts have upheld the regulation of speech by students in professional programs at public universities under codes of ethics and other established guidance on professional conduct. For example, in a case about a mortuary student’s posts on Facebook, the Minnesota Supreme Court held that a university may regulate students’ social media speech if the rules are “narrowly tailored and directly related to established professional conduct standards.” Similarly, in a case about a nursing student’s posts on Facebook, the Eleventh Circuit held that “professional school[s] have discretion to require compliance with recognized standards of the profession, both on and off campus, so long as their actions are reasonably related to legitimate pedagogical concerns.” On the other hand, the Sixth Circuit has held that a university can’t invoke a professional code of ethics to discipline a student when doing so is clearly a “pretext” for punishing the student for her constitutionally protected speech.

Although the Dartmouth medical school is immune from a claim that its social media policy violates the First Amendment, it seems that the policy might unfortunately be a pretext to punish students for legitimate speech. Although the policy states that the school is concerned about social media posts that are “lapses in the standards of professionalism,” the timing of the policy suggests that the administrators are sending a message to students who dare speak out against the school’s dubious allegations of cheating. This will surely have a chilling effect on the community, to the extent that students will refrain from expressing their opinions about events that occur on campus and affect their future careers. The Instagram account was later taken down, indicating that the chilling effect on speech may have already occurred. (Several days later, a person not affiliated with Dartmouth, and therefore protected from reprisal, reposted many of the original account’s posts.)

Students are at the mercy of private universities when it comes to whether their freedom of speech will be respected. Students select private schools based on their academic reputation and history, and don’t necessarily think about a school’s speech policies. Private schools shouldn’t take advantage of this, and should instead seek to sincerely uphold free speech principles.

Investigations of Students Must Start With Concrete Evidence

Though this investigation wasn’t the result of proctoring software, it is part and parcel of a larger problem: educators using the pandemic as an excuse to comb for evidence of cheating in places that are far outside their technical expertise. Proctoring tools and investigations like this one flag students based on flawed metrics and misunderstandings of technical processes, rather than concrete evidence of misconduct. 

Simply put: these logs should not be used as the sole evidence for potentially ruining a student’s career. 

Proctoring software that assumes all students take tests the same way—for example, in rooms that they can control, their eyes straight ahead, fingers typing at a routine pace—puts a black mark on the record of students who operate outside the norm. One problem that has been widely documented with proctoring software is that students with disabilities (especially those with motor impairments) are consistently flagged as exhibiting suspicious behavior by software suites intended to detect cheating. Other proctoring software has flagged students for technical snafus such as device crashes and Internet outages, as well as completely normal behavior that could indicate misconduct only if you squint hard enough.
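As a toy illustration of why such fixed thresholds misfire (entirely our own construction, not any vendor’s actual logic), consider a rule that flags anyone whose “eyes off screen” fraction exceeds a cutoff: every student whose normal test-taking differs from the assumed baseline gets flagged, regardless of intent.

```python
# Toy rule: flag any student whose "eyes off screen" fraction exceeds a
# fixed threshold. The metric and the numbers are invented for illustration.
def flag_for_review(eyes_off_screen_fraction: float, threshold: float = 0.15) -> bool:
    return eyes_off_screen_fraction > threshold

students = {
    "typical test-taker": 0.08,
    "student who looks away to think": 0.22,
    "student with motor impairment": 0.35,
}

for name, fraction in students.items():
    verdict = "FLAGGED" if flag_for_review(fraction) else "ok"
    # The rule cannot distinguish cheating from ordinary human variation.
    print(f"{name}: {verdict}")
```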

For the last year, we’ve seen far too many schools ignore legitimate student concerns about inadequate, or overbroad, anti-cheating software. Across the country, thousands of students, and some parents, have created petitions against the use of proctoring tools, most of which (though not all) have been ignored. Students taking the California and New York bar exams—as well as several advocacy organizations and a group of deans—advocated against the use of proctoring tools for those exams. As expected, many of those students then experienced “significant software problems” with the Examsoft proctoring software specifically, causing some students to fail.

Many proctoring companies have defended their dangerous, inequitable, privacy-invasive, and often flawed software tools by pointing out that humans—meaning teachers or administrators—usually have the ability to review flagged exams to determine whether or not a student was actually cheating. That defense rings hollow when those reviewing the results don’t have the technical expertise—or in some cases, the time or inclination—to properly examine them.

Similar to schools that rely heavily on flawed proctoring software, Dartmouth medical school has cast suspicion on students by relying on access logs that are far from concrete evidence of cheating. Simply put: these logs should not be used as the sole evidence for potentially ruining a student’s career. 

The Dartmouth faculty has stated that they will not continue to look at Canvas logs in the future for violations (51:45 into the video of the town hall). That’s a good step forward. We insist that the school also look beyond these logs for the students currently being investigated, and end this dragnet investigation entirely, unless additional evidence is presented.

Categories
1st Amendment Alexandria ocasio-cortez AOC Big tech Capitol Police free speech Intelwars privacy Twitter

Capitol Police send cops to podcaster’s home because someone else replied to his tweet with a threat toward AOC

Two police officers allegedly paid a visit to the home of a podcaster in California because of a threat that was posted on Twitter against Rep. Alexandria Ocasio-Cortez (D-N.Y.). However, the podcaster didn’t make the threat toward AOC. Instead, it was reportedly another Twitter user who replied to the podcaster’s original tweet, in which he had benignly said the Democratic representative was “incredibly underwhelming” in an interview.

The podcaster, who goes by the name of @queeralamode on Twitter, shared a video of Ocasio-Cortez being interviewed by Michael Miller, the head of the Jewish Community Relations Council of New York. AOC was asked about Middle East peace, especially between Israelis and Palestinians.

“Her response was incredibly underwhelming, to say the very least,” @queeralamode tweeted on April 7. The Twitter user, who is allegedly a progressive “anti-war activist,” added, “Words AOC used: ‘What’ ‘How’ Words AOC didn’t use: ‘Occupation’ ‘Apartheid’ ‘Colonization’ ‘Genocide.'”

A day after he posted the tweet, two California Highway Patrol officers knocked on his door.

The podcaster told The Grayzone, “The officers said, ‘We got a warning about a sitting member of Congress. And it was because of your tweet, which tagged them in it.’ And then they just wouldn’t back down from this accusation that I threatened to kill her.”

Apparently, the U.S. Capitol Police in Washington, D.C., instructed law enforcement in California to investigate @queeralamode, whose real name is Ryan Wentz.

“I’m really shaken up right now. I was just visited by two plainclothes police officers from California Highway Patrol at my home,” Wentz tweeted on April 8. “They said they came here on behalf of the Capitol Police and accused me of threatening @AOC on Twitter yesterday. This is provably false.”

“This is completely outrageous. I was visited by two police at my home over a harmless tweet about @AOC. I felt scared, intimidated, and violated,” Wentz, who doesn’t provide his real name or home state on Twitter, said. “They knew my name and where I live. It was done on behalf of a congresswoman who advocates against police state tactics. I’d really appreciate it if @AOC could look into this. I recognize she probably receives a lot of threats, but I shouldn’t be harassed by police for critiquing her politics. I frankly feel very unsafe in my home right now.”

Ryan Grim, a reporter for The Intercept, said, “A spokesperson for @AOC says they did not report this post to police, and have asked for answers from Capitol Police: ‘No, not at all. But when we saw his tweets last night about being visited we asked Capitol Police to look into what happened here.'”

The official Twitter account for the California Highway Patrol wrote, “The CHP often assists in investigations at the request of allied agencies. Please contact the U.S. Capitol Police for additional information.”

“USCP investigates all threats that are reported by Congressional offices. The Department also monitors open and classified sources to identify and investigate threats,” the Capitol Police told Fox News. “This is standard operating procedure for the department. As it pertains to this incident, the congresswomen did not request that USCP initiate an investigation.”

A Capitol Police official informed Fox News that Wentz didn’t make a threat toward Ocasio-Cortez, but someone who replied to his original tweet did threaten the congresswoman.

“They were tagged in a tweet that was perceived as threatening that prompted us to look into this,” Capitol Police said. “Obviously as you can imagine, anytime there’s anything that could be a perceived threat, we’re going to talk to everybody involved, whether they’re directly involved or indirectly involved.”

The original tweet by Wentz, who has over 15,000 followers, received nearly 4,000 Likes and over 1,300 Quote Tweets.

That reply tweet with the threat to AOC has since been taken down from Twitter.

In February, AOC’s office sent a mass email to supporters asking them to “scan your social media to find posts with misleading information” about the New York representative and “use the built-in report feature to flag” threats or harassment of Ocasio-Cortez.

Categories
Bitcoin Blockchain Censorship Cryptocurrency free speech Intelwars Podcasts Social Media

Episode-2852- Nathan Senn on Social Media Freedom with DBuzz

Nathan Senn is 29 years old and grew up in El Dorado, Arkansas, United States. From the time he was six years old, he was already learning about computers and programming. In his early teen years, he dedicated his free … Continue reading →

Categories
fired free speech Intelwars Meghan markle Piers Morgan Tucker Carlson watch Woke culture

‘I was gonna be damned if I was going to apologize for something that I believe’: Piers Morgan defends free speech, says never bow to the ‘woke brigade’

Piers Morgan — until recently a co-host of “Good Morning Britain” — told Fox News’ Tucker Carlson on Monday that he was more or less forced out of his job after refusing to apologize to Duchess of Sussex Meghan Markle for not believing things she said during an interview with Oprah Winfrey last month.

But Morgan — a fierce free speech advocate — told Carlson his gut told him to stay strong in the face of pressure from Markle, his bosses, and the government of the United Kingdom, which regulates television there.

What are the details?

“I was basically corralled into a position where I was told, ‘You either gotta apologize for effectively disbelieving Meghan Markle’s version of events here, or your position is untenable, and you have to leave,'” he told Carlson. “And my gut was, ‘I was gonna be damned if I was going to apologize for something that I believe.’ I just wasn’t gonna go down that road. I’ve seen too many people when they’re bullied by the woke brigade into apologizing.”

Morgan noted how celebrity Sharon Osbourne “was bullied into apologizing for defending me against a disgusting slur that I’m a racist … and then she lost her job anyway.” He also noted the plight of Alexi McCammond, who was forced to give up her Teen Vogue editor position before she even started when offensive tweets she had written as a teenager resurfaced.

Morgan noted that even though McCammond said she was sorry, “that wasn’t enough, so apologies don’t ever get you anywhere.”

As for his own plight, Morgan told Carlson that he was “under attack from Miss Markle … to basically conform to her version of events, and I had to believe her, and if I didn’t, I was a callous racist, and I should be condemned, and ultimately — as it turned out later that day — lose my job. And I think that’s a pretty perilous slope. A journalist’s job … is to express skepticism.”

Morgan added: “Frankly I should be allowed in a democracy that values freedom of speech … to say, ‘I’m sorry, I don’t believe you,’ but I wasn’t. … It was Meghan’s way and Meghan’s narrative and Meghan’s truth. That phrase was actually used by Oprah Winfrey: ‘This is your truth.’ What does that mean? When did we get to your truth? This is the kind of defense … we would hear liberals attack [former President] Donald Trump for: for reinventing facts, for creating his own truth. But when Meghan Markle does it, the same liberals that attacked Donald Trump cheer and applaud and say, ‘This is her truth, and it must be believed, and if you don’t believe it, you’re a racist.’ Well, I’m sorry; I’m not a racist. I just don’t believe her.”

What free speech truly is

Carlson told Morgan he found it curious that Markle and Winfrey behaved like victims when Morgan doesn’t hold their kind of power. “They got you fired,” Carlson continued, “but you’re oppressing them. The strong pretending to be weak in order to crush people below them.”

Morgan added that the cancel culture he fell victim to will end only when others stand up and say enough’s enough — and when we remember what free speech truly is. To illustrate, he told Carlson that Winston Churchill once said, “Some people think free speech is absolutely fine, right to the point they hear an opinion that they don’t like, and then it’s an outrage.”

“That’s not what free speech is,” Morgan continued. “Free speech is about listening to an outrageous opinion you don’t agree with and being able to accept that somebody else doesn’t feel like you. That’s what free speech is.”


[Embedded video: "Piers Morgan joins ‘Tucker Carlson Today’ for first interview since ‘cancellation’ | Preview" (youtu.be)]

Categories
free speech ICANN Intelwars Shadow Regulation

Ethos Capital Is Grabbing Power Over Domain Names Again, Risking Censorship-For-Profit. Will ICANN Intervene?

Ethos Capital is at it again. In 2019, this secretive private equity firm that includes insiders from the domain name industry tried to buy the nonprofit that runs the .ORG domain. A huge coalition of nonprofits and users spoke out. Governments expressed alarm, and ICANN (the entity in charge of the internet’s domain name system) scuttled the sale. Now Ethos is buying a controlling stake in Donuts, the largest operator of “new generic top-level domains.” Donuts controls a large swathe of the domain name space. And through a recent acquisition, it also runs the technical operations of the .ORG domain. This acquisition raises the threat of increased censorship-for-profit: suspending or transferring domain names against the wishes of the user at the request of powerful corporations or governments. That’s why we’re asking the ICANN Board to demand changes to Donuts’ registry contracts to protect its users’ speech rights.

Donuts is big. It operates about 240 top-level domains, including .charity, .community, .fund, .healthcare, .news, .republican, and .university. And last year it bought Afilias, another registry company that also runs the technical operations of the .ORG domain. Donuts already has questionable practices when it comes to safeguarding its users’ speech rights. Its contracts with ICANN contain unusual provisions that give Donuts an unreviewable and effectively unlimited right to suspend domain names—causing websites and other internet services to disappear.

Relying on those contracts, Donuts has cozied up to powerful corporate interests at the expense of its users. In 2016, Donuts made an agreement with the Motion Picture Association to suspend domain names of websites that MPA accused of copyright infringement, without any court process or right of appeal. These suspensions happen without transparency: Donuts and MPA haven’t even disclosed the number of domains that have been suspended through their agreement since 2017.

Donuts also gives trademark holders the ability to pay to block the registration of domain names across all of Donuts’ top-level domains. In effect, this lets trademark holders “own” words and prevent others from using them as domain names, even in top-level domains that have nothing to do with the products or services for which a trademark is used. It’s a legal entitlement that isn’t part of any country’s trademark law, and it was considered and rejected by ICANN’s multistakeholder policy-making community.

These practices could accelerate and expand with Ethos Capital at the helm. As we learned last year during the fight for .ORG, Ethos expects to deliver high returns to its investors while preserving its ability to change the rules for domain name registrants, potentially in harmful ways. Ethos refused meaningful dialogue with domain name users, instead proposing an illusion of public oversight and promoting it with a slick public relations campaign. And private equity investors have a sordid record of buying up vital institutions like hospitals, burdening them with debt, and leaving them financially shaky or even insolvent.

Although Ethos’s purchase of Donuts appears to have been approved by regulators, ICANN should still intervene. Like all registry operators, Donuts has contracts with ICANN that allow it to run the registry databases for its domains. ICANN should give this acquisition as much scrutiny as it gave Ethos’s attempt to buy .ORG. And to prevent Ethos and Donuts from selling censorship as a service at the expense of domain name users, ICANN should insist on removing the broad grants of censorship power from Donuts’ registry contracts. ICANN did the right thing last year when confronted with the takeover of .ORG. We hope it does the right thing again by reining in Ethos and Donuts.

 

Categories
free speech Intelwars

Content Moderation Is A Losing Battle. Infrastructure Companies Should Refuse to Join the Fight

It seems like every week there’s another Big Tech hearing accompanied by a flurry of mostly bad ideas for reform. Two events set last week’s hubbub apart, both involving Facebook. First, Mark Zuckerberg took a new step in his blatant effort to use 230 reform to entrench Facebook’s dominance. Second, new reports are demonstrating, if further demonstration were needed, how badly Facebook is failing at policing the content on its platform with any consistency whatsoever. The overall message is clear: if content moderation doesn’t work even with the kind of resources Facebook has, then it won’t work anywhere.

Inconsistent Policies Harm Speech in Ways That Are Exacerbated the Further Along the Stack You Go

Facebook has been swearing for many months that it will do a better job of rooting out “dangerous content.” But a new report from the Tech Transparency Project demonstrates that it is failing miserably. Last August, Facebook banned some militant groups and other extremist movements tied to violence in the U.S. Now, Facebook is still helping expand the groups’ reach by automatically creating new pages for them and directing people who “like” certain militia pages to check out others, effectively helping these movements recruit and radicalize new members. 

These groups often share images of guns and violence, misinformation about the pandemic, and racist memes targeting Black Lives Matter activists. QAnon pages also remain live despite Facebook’s claim to have taken them down last fall. Meanwhile, a new leak of Facebook’s internal guidelines shows how much it struggles to come up with consistent rules for users living under repressive governments. For example, the company forbids “dangerous organizations”—including, but not limited to, designated terrorist organizations—but allows users in certain countries to praise mass murderers and “violent non-state actors” (designated militant groups that do not target civilians) unless their posts contain an explicit reference to violence.

A Facebook spokesperson told the Guardian: “We recognise that in conflict zones some violent non-state actors provide key services and negotiate with governments – so we enable praise around those non-violent activities but do not allow praise for violence by these groups.”

The problem is not that Facebook is trying to create space for some speech – they should probably do more of that. But the current approach is just incoherent. Like other platforms, Facebook does not base its guidelines on international human rights frameworks, nor do the guidelines necessarily adhere to local laws and regulations. Instead, they seem to be based upon what Facebook policymakers think is best.

The capricious nature of the guidelines is especially clear with respect to LGBTQ+ content. For example, Facebook has limited use of the rainbow “like” button in certain regions, including the Middle East, ostensibly to keep users there safe. But in reality, this denies members of the LGBTQ+ community there the same range of expression as other users and is hypocritical given the fact that Facebook refuses to bend its “authentic names” policy to protect the same users.

Whatever Facebook’s intent, in practice, it is taking sides in a region that it doesn’t seem to understand. Or as Lebanese researcher Azza El Masri put it on Twitter: “The directive to let pro-violent/terrorist content up in Myanmar, MENA, and other regions while critical content gets routinely taken down shows the extent to which [Facebook] is willing to go to appease our oppressors.”

This is not the only example of a social media company making inconsistent decisions about what expression to allow. Twitter, for instance, bans alcohol advertising from every Arab country, including several (such as Lebanon and Egypt) where the practice is perfectly legal. Microsoft Bing once limited sexual search terms from the entire region, despite not being asked by governments to do so.

Now imagine the same kinds of policies being applied to internet access. Or website hosting. Or cloud storage.

All the Resources in the World Can’t Make Content Moderation Work at Scale

Facebook’s lopsided policies are deserving of critique, and they point to a larger problem that too much focus on specific policies misses: if Facebook, with the money to hire thousands of moderators, implement filters, and fund an Oversight Board, can’t manage to develop and implement a consistent, coherent, and transparent moderation policy, maybe we should finally admit that we can’t look to social media platforms to solve deep-seated political problems – and we should stop trying.

Even more importantly, we should call a halt to any effort to extend this mess beyond platforms. If two decades of experience with social media have taught us anything, it is that these companies are bad at creating and implementing consistent, coherent policies. At least when a social media company makes an error in judgment, its impact is relatively limited. At the infrastructure level, those decisions necessarily hit harder and wider. If an internet service provider (ISP) shut down access for LGBTQ+ individuals on the same capricious whims as Facebook, it would be a disaster.

What Infrastructure Companies Can Learn

The full infrastructure of the internet, or the “full stack,” is made up of a range of companies and intermediaries, from consumer-facing platforms like Facebook or Pinterest to ISPs like Comcast or AT&T. Somewhere in the middle sit a wide array of intermediaries, such as upstream hosts like Amazon Web Services (AWS), domain name registrars, certificate authorities (such as Let’s Encrypt), content delivery networks (CDNs), payment processors, and email services.

For most of us, most of the stack is invisible. We send email, tweet, post, upload photos, and read blog posts without thinking about all the services that have to function to get content from its original creator onto the internet and in front of users’ eyeballs all over the world. We may think about our ISP when it gets slow or breaks, but day-to-day, most of us don’t think about AWS at all. We are far more aware of the content moderation decisions – and mistakes – made by the consumer-facing platforms.
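
To make those invisible layers concrete, here is a minimal Python sketch – not any platform’s actual tooling, and the hostname is purely illustrative – that surfaces three of them for a single page load: the DNS lookup, the certificate authority vouching for the TLS connection, and the HTTP response headers that often name the server or CDN actually delivering the bytes.

import socket
import ssl

# The hostname is purely illustrative; any public HTTPS site works.
host = "www.eff.org"

# 1. DNS: registrars and name servers map the name to an address.
address = socket.gethostbyname(host)
print(f"DNS: {host} -> {address}")

# 2. TLS: a certificate authority vouches for the site's identity.
context = ssl.create_default_context()
with socket.create_connection((host, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        issuer = dict(attr[0] for attr in tls.getpeercert()["issuer"])
        print(f"Certificate authority: {issuer.get('organizationName')}")

        # 3. Hosting/CDN: response headers often name the host or
        # content delivery network standing in front of the site.
        tls.sendall(f"HEAD / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode())
        print(tls.recv(4096).decode(errors="replace"))

Each of those steps depends on a different company, and any one of them could, in principle, refuse to carry a site at all.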

We have detailed many times the chilling effects and other problems that flow from opaque, bad, or inconsistent content moderation decisions by companies like Facebook. But when ISPs or other intermediaries decide to wade into the content moderation game and start blocking certain users and sites, it’s far worse. For one thing, many of these services have few, if any, competitors. Too many people in the United States and overseas have only one choice for an ISP. If the only broadband provider in your area cuts you off because it (or your government) didn’t like what you said online – or what someone else whose name is on the account said – how can you get back online? Further, at the infrastructure level, services usually cannot target their response narrowly. Twitter can shut down individual accounts; when those users migrate to Parler and continue to engage in offensive speech, AWS can only deny service to the entire site, including speech that is entirely unobjectionable. That is exactly why ISPs and intermediaries need to stay out of this fight entirely. The risks of getting it wrong at the infrastructure level are far too great.

It is easy to understand why repressive governments (and some advocates) want to pressure ISPs and other intermediaries in the stack to moderate content: it is a broad, blunt, and effective way to silence certain voices. Some intermediaries might also feel compelled to moderate aggressively in the hope of staving off criticism down the line. As last week’s hearing showed, that tactic will not work. The only way to avoid the pressure is to stake out an entirely different approach.

To be clear, in the United States, businesses have a constitutional right to decide what content they want to host. That’s why laws punishing intermediaries deeper in the stack for their content moderation decisions would face the same kind of First Amendment problems as any other bill attempting to meddle with speech rights.

But just because something is legally permissible does not mean it is the right thing to do, especially when implementation will vary depending on who is asking for it, and when. Content moderation is empirically impossible to do well at scale; given the impact of the inevitable mistakes, ISPs and infrastructure intermediaries should not try. Instead, they should reject pressure to moderate like platforms, and make clear that they are much more like the local power company. If you wouldn’t want the power company shutting off service to a house just because someone doesn’t like what’s going on inside, you shouldn’t want a domain name registrar freezing a domain name because someone doesn’t like a site, or an ISP shutting down an account. And if you wouldn’t hold the power company responsible for behavior you don’t like just because that behavior relied on electricity, you shouldn’t hold an ISP, a domain name registrar, or a CDN responsible for behavior or speech that relies on their services either.

If more than two decades of social media content moderation have taught us anything, it is that we cannot tech our way out of a fundamentally political problem. Social media companies have tried and failed to do so; beyond the platform, companies should refuse to replicate those failures.

Categories
free speech Intelwars

Facebook Treats Punk Rockers Like Crazy Conspiracy Theorists, Kicks Them Offline

Facebook announced last year that it would be banning followers of QAnon, the conspiracy theory that alleges a cabal of satanic pedophiles is plotting against former U.S. president Donald Trump. It seemed like a case of good riddance to bad rubbish.

Members of an Oakland-based punk rock band called Adrenochrome were taken completely by surprise when Facebook disabled their band page, along with all three of their personal accounts, as well as a page for a booking business run by the band’s singer, Gina Marie, and drummer Brianne.

Marie had no reason to think that Facebook’s content moderation battle with QAnon would affect her. The strange word (which refers to oxidized adrenaline) was popularized by Hunter S. Thompson in two books from the 1970s. Marie and her bandmates, who didn’t even know about QAnon when they named their band years ago, picked the name as a shout-out to a song by the Sisters of Mercy, a British band from the ’80s. They were as surprised as anyone when, in the past few years, QAnon followers borrowed Hunter Thompson’s (fictional) idea that adrenochrome is an intoxicating substance and gave this obscure chemical a central place in their ideology.

The four Adrenochrome band members had nothing to do with the QAnon conspiracy theory and didn’t discuss it online, other than receiving occasional (unsolicited and unwanted) Facebook messages from QAnon followers confused about their band name.

But on Jan. 29, without warning, Facebook shut down not just the Adrenochrome band page, but the personal pages of the three band members who had Facebook accounts, including Marie, and the page for the booking business.  

“I had 2,300 friends on Facebook, a lot of people I’d met on tour,” Marie said. “Some of these people I don’t know how to reach anymore. I had wedding photos, and baby photos, that I didn’t have copies of anywhere else.”

False Positives

The QAnon conspiracy theory became bafflingly widespread. Any website host—whether it runs comments on the tiniest blog or a big social media site—is within its rights to moderate QAnon-related content and the users who spread it. Can Facebook really be blamed for catching a few innocent users in the net it aimed at QAnon?

Yes, actually, it can. We know that content moderation at scale is impossible to do perfectly. That’s why we advocate that companies follow the Santa Clara Principles: a short list of best practices covering numbers (publish them), notice (provide it to users in a meaningful way), and appeal (offer a fair path to human review).

Facebook didn’t give Marie and her bandmates any reason why their pages went down, leaving them to assume it was related to their band’s name. It also provided no mechanism at all for appeal. All Marie got was a notice (screenshot below) telling her that her account was disabled and that it would be deleted permanently within 30 days. The notice said that “if you think your account was disabled by mistake, you can submit more information via the Help Center.” But Marie wasn’t able even to log in to the Help Center to provide this information.

Ultimately, Marie reached out to EFF and Facebook restored her account on February 16, after we appealed to them directly. But then, within hours, Facebook disabled it again. On February 28, after we again asked Facebook to restore her account, it was restored.

We asked Facebook why the account went down, and they said only that “these users were impacted by a false positive for harmful conspiracy theories.” That was the first time Marie had been given any reason for losing access to her friends and photos.

That should have been the end of it, but on March 5 Marie’s account was disabled for a third time. She was sent the exact same message, with no option to appeal. Once more we intervened, and got her account back—we hope, this time, for good.

This isn’t a happy ending. First, users shouldn’t have to reach out to an advocacy group to get help challenging a basic account disabling. One hand wasn’t talking to the other, and Facebook couldn’t seem to stop this wrongful account termination from recurring.

Second, Facebook still hasn’t provided any meaningful ability to appeal—or even any real notice, something they explicitly promised to provide in our 2019 “Who Has Your Back?” report.

Facebook is the largest social network online. They have the resources to set the standard for content moderation, but they’re not even doing the basics. Following the Santa Clara Principles—Numbers, Notice, and Appeal—would be a good start. 
