
VICTORY: You Can Now Make Your Venmo Friends List Private. Here’s How.

It took two and a half years and one national security incident, but Venmo did it, folks: users now have privacy settings to hide their friends lists.

EFF first pointed out the problem with Venmo friends lists in early 2019 with our “Fix It Already” campaign. While Venmo offered a setting to make your payments and transactions private, there was no option to hide your friends list. No matter how many settings you tinkered with, Venmo would show your full friends list to anyone else with a Venmo account. That meant an effectively public record of the people you exchange money with regularly, along with whoever the app might have automatically imported from your phone contact list or even your Facebook friends list. The only way to make a friends list “private” was to manually delete friends one at a time; turn off auto-syncing; and, when the app wouldn’t even let users do that, monitor for auto-populated friends and remove them one by one, too.

This public-no-matter-what friends list design was a privacy disaster waiting to happen, and it happened to the President of the United States. Using the app’s search tool and all those public friends lists, BuzzFeed News found President Biden’s account in less than 10 minutes, as well as those of members of the Biden family, senior staffers, and members of Congress. This appears to have been the last straw for Venmo: after more than two years of effectively ignoring calls from EFF, Mozilla, and others, the company has finally started to roll out privacy settings for friends lists.

As we’ve noted before, this is the bare minimum. Providing privacy settings that let users opt out of publishing their friends list is a step in the right direction. But what Venmo—and any other payment app—must do next is make privacy the default for transactions and friends lists, not just an option buried in the settings.

In the meantime, follow these steps to lock down your Venmo account:

  1. Tap the three lines in the top right corner of your home screen and select Settings near the bottom. From the settings screen, select Privacy and then Friends List. (If the Friends List option does not appear, try updating your app, restarting it, or restarting your phone.)

  2. By default, your friends list is set to Public.

  3. Change the privacy setting to Private. If you do not wish to appear in your friends’ own friends lists—after all, they may not set theirs to private—click the toggle off at the bottom.

  4. Back on the Privacy settings page, under Default privacy settings, set your default privacy option for all future payments to Private.

  5. Now select Past Transactions.

  6. Select Change All to Private.

  7. Confirm the change and click Change to Private.

  8. Now go all the way back to the main settings page, and select Friends & social.

  9. From here, you may see options to unlink your Venmo account from your Facebook account, Facebook friends list, and phone contact list. (Venmo may not give you all of these options if, for example, you originally signed up for Venmo with your Facebook account.) Click all the toggles off if possible.

Obviously your specific privacy preferences are up to you, but following the steps above should protect you from the most egregious snafus that the company has caused over the years with its public-by-default—or entirely missing—privacy settings. Although it shouldn’t take a national security risk to force a company to focus on privacy, we’re glad that Venmo has finally provided friends list privacy options.


For Many, the Arab Spring Isn’t Over

Ten years ago today, Egyptians took to the streets to topple a dictator who had clung to power for nearly three decades. January 25th remains one of the most important dates of the Arab Spring, a series of massive, civilian-led protests and uprisings that spread across the Middle East and North Africa a decade ago. Using social media and other digital technologies to spread the word and amplify their organizing, people across the region demanded an end to the corruption and authoritarian rule that had plagued their societies.

A decade later, the fallout from this upheaval has taken countries in different directions. While Tunisia immediately abolished its entrenched Internet censorship regime and took steps toward democracy, other countries in the region—such as Egypt, Saudi Arabia, and Bahrain—have implemented more and more tools for censorship and surveillance. From the use of Western-made spyware to target dissidents to collusion with US social media companies to censor journalism, the hope once expressed by tech firms has been overlaid with cynical amoral profiteering. 

As we consider the role that social media and online platforms have played in the U.S. in recent months, it’s both instructive and essential to remember the events that took place a decade ago, and how policies and decisions made at the time helped to strengthen (or, in some cases, handicap) those democratic movements. There are also worthwhile parallels to be drawn between calls in the U.S. for stronger anti-terrorism laws and the shortsighted cybercrime and counterterrorism laws passed by other countries after the upheaval. And as governments today wield new, dangerous technologies, such as face surveillance, to identify Black Lives Matter protestors as well as those responsible for the attempted insurrection at the U.S. Capitol, we must be reminded of the expansive surveillance regimes that developed in many Middle Eastern and North African countries over the last ten years as well.

But most importantly, we must remember that a decade later, despite setbacks, much of the work that was started in 2011 is still ongoing.

EFF’s Work In the Region

Just a few weeks prior to the January 25th protest in Egypt, a street vendor named Mohamed Bouazizi had set himself on fire in Tunisia in protest of the government’s corruption and brutality. Following his death, others in the country began to protest. These protests in Tunisia and Egypt inspired others throughout the region—and indeed, throughout the world—to rise up and fight for their rights, but they were often met with violent pushback. EFF focused heavily at the time on lifting up those voices who fought against censorship, for free expression, and for expanding digital rights. A detailed history of the issues EFF tracked at the time would be too lengthy to cover here, but a few instances stand out and help shine a light on what the region is experiencing now.

A Social Media Revolution?

Many governments have long seen social media as a potential threat. In the early part of the new millennium, countries across the globe engaged in a wide variety of censorship, from blocking individual platforms to restricting Internet access as a whole. As the use of social media accelerated, governments took action: Thailand blocked YouTube in 2006; Facebook was blocked in Syria in 2007; countries from Turkey to Tunisia to Iran followed suit. Within a few years, tech companies were inundated with governmental requests to block access for some users or to take down specific content. For the most part, they agreed, though some companies, like Twitter, didn’t do so until much later.

In 2010, a picture of the body of Khaled Saeed, who had been brutally murdered by Egyptian police, began to spread across Facebook. Pseudonymous organizers created a page to memorialize Saeed that quickly gathered followers and became the place where the now-famous January 25th protests were first called for.

But although Facebook was later happy to take credit for the role it played in the uprising, the company had actually taken the page down just two months before the protests because its administrators were violating the platform’s “real name” policy—only restoring it after allies stepped in to help.

Another emblematic incident from that era occurred when photo-sharing platform Flickr removed images of Mubarak’s state security officers that activists had liberated from Cairo’s state security offices in the days following the revolution. Journalist Hossam Hamalawy protested the company’s decision, which was allegedly made on the basis of a bogus copyright claim, prompting debate among civil society as to whether Flickr’s decision had been appropriate.

Overcoming Censorship, Fighting for Free Expression in Egypt

As governments in the region tried to stop the impact of the protests and to minimize the spread of dissent on social media, critics—including bloggers—were often jailed, many of them charged merely for expressing themselves online. EFF took particular note of the cases of Maikel Nabil Sanad and Ayman Youssef Mansour, the first and second bloggers in post-Mubarak Egypt to be sentenced to jail for their online expression.

While the Mubarak regime had utilized emergency law to silence critics, the military silenced bloggers at whim. Sanad was sentenced, by a military court, to three years in prison for accusing the military of having conducted virginity tests on female protesters (a charge later found to be true). EFF highlighted his case numerous times, and reported on his eventual release, granted alongside 1,959 other prisoners to mark the first anniversary of the revolution.

Mansour was tried by a civilian court and found to be “in contempt of religion,” a crime under article 98(f) of the Penal Code. His crime was joking about Islam on Facebook.  

Today, EFF and other human rights organizations continue to advocate for the release of jailed Egyptian activists who have been arrested by the current regime in an attempt to silence their voices. Alaa Abd El Fattah—who has been arrested (and eventually released) under every Egyptian head of state, including during the revolution—and Amal Fathy, an activist for women’s rights, are just two of the prominent critics of the government whose use of the Internet has cost them their freedom. EFF’s “Offline” campaign showcases key cases from across the globe of individuals who have been silenced—cases that may not be receiving wide coverage, but that we believe speak to a wider audience concerned with online freedom.

Graffiti in Cairo depicting Alaa Abd El Fattah

A Tunisian Internet Agency Becomes A Hacker Space 

The lasting effects of the Arab Spring in Tunisia have been notable. Tunisians under the Zine El Abidine Ben Ali regime experienced curtailed digital rights, including the blocking of websites and the surveillance of citizens. Tunisian free expression advocates worked for years to raise awareness of the country’s pervasive Internet controls, and in 2011, amidst the “Jasmine Revolution,” Ben Ali promised to end the filtering in his final speech on January 13, before fleeing to Saudi Arabia. 

This created a dilemma for the Agence Tunisienne d’Internet (ATI), or Tunisian Internet Agency, which was caught between implementing censorship orders and arguing in support of free expression. EFF followed the battle and the online protests that ensued as Tunisians fought for freedom from censorship. The country’s highest court ruled against the filtering in 2012, and ATI’s headquarters—formerly a private home of Ben Ali and the site of his regime’s censorship and surveillance technologies—became a hackerspace for Tunisians to innovate, #404labs. EFF was there to celebrate, and the site has since hosted events such as the Freedom Online Coalition and the Arab Bloggers Meeting.

Moez Chakchouk, current Transport Minister of Tunisia and former CEO of the Tunisian Internet Agency

Syria and Tunisia Spy On Their Own

As protests spread throughout Syria in 2011, bloggers and programmers were often the targets of threats, attacks, and detention by the Bashar al-Assad regime. While this harassment was public, a secret danger also lurked in cyberspace: the Syrian government, after having blocked various sites such as Facebook for years, was covertly surveilling online activity and communications throughout the country via both malware and “Man-in-the-middle” attacks on Facebook. The extent of the surveillance was not well known before the Arab Spring. 

In 2011, EFF helped raise the alarm about the malware. Later, we were among the first to identify the “Man-in-the-middle” attacks, as well as phishing attacks against activists in the area. 

Around this same time in Tunisia, reports of hacked Facebook pages tipped off that company’s security team that the nation’s ISPs were essentially recording and stealing the Facebook passwords of anyone in the country who logged into the site. Presumably, this information was then being fed to the Ben Ali regime in an effort to remove protest pages from the site. The company responded by implementing the encrypted HTTPS protocol for the entire country, a move that helped spur it to do the same elsewhere later. Awareness of the details of this spying, both during and after the Arab Spring, has not only helped protect activists from their governments, it has also spurred the adoption of safer, more secure online communications methods—a benefit to all who desire private communications.

In 2011, EFF awarded the Pioneer Award to Tunisian bloggers Nawaat (pictured)

Today’s Issues in the Region Reflect What Changed, and What Was Lost

The Arab Spring was a turning point for free expression online. Across the region, people fought for digital rights—in some cases for the first time. From Pakistan and Syria to Iran and Egypt, residents defended and improved their human rights thanks to secure communications and censorship-free social media platforms.

But even as new technology assisted the revolution, tech companies continued (and continue) to assist governments in the silencing, and surveillance, of their critics. Earlier this year, a group of activists, journalists, and human rights organizations sent an open letter to Facebook, Twitter, and YouTube, demanding that the companies stop silencing critical voices from the Middle East and North Africa. Noting that key activists and journalists throughout the Middle East and North Africa continue to be censored on the platforms, sometimes at the behest of other governments, the letter urges the companies to “end their complicity in the censorship and erasure of the oppressed communities’ narratives and histories,” and makes several important demands. 

In particular, the letter urges the companies to:

  • Engage with local users, activists, human rights experts, academics, and civil society;
  • Invest in the local and regional expertise to develop and implement context-based content moderation decisions;
  • Pay special attention to cases arising from war and conflict zones to ensure content moderation decisions do not unfairly target marginalized communities;
  • Preserve restricted content related to cases arising from war and conflict zones that is made unavailable;
  • Provide greater transparency and notice when it comes to deletions and account takedowns, and offer meaningful and timely appeals for users, in accordance with the Santa Clara Principles.

The Arab Spring may seem to present us with a conundrum. As more governments around the world have chosen authoritarianism in the days since, platforms have often contributed to repression. But the legacy of activists and citizens using social media to push for political change and social justice lives on. We now know that the Internet, and technology as a whole, can enable human rights. Today, more people than ever understand that digital rights are themselves human rights. But like all rights, they must be defended, fought for, and protected against those who would rather hold onto power than share it—whether they lead a government or a tech company.

Years ago, Bassel Safadi Khartabil, the Syrian open source developer, blogger, entrepreneur, hackerspace founder, and free culture advocate, wrote to EFF that “code is much more than tools. It’s an education that opens youthful minds, and moves nations forward. Who can stop that? No-one.” In 2015, after several years of imprisonment without trial, Bassel was executed by the Syrian government. He is missed dearly, along with the many, many others who were killed in their fight for basic rights and freedoms. While the Arab Spring, as it is commonly referred to, may be over, for so many it has never ended. Some day, we hope, it will.

All photos credit Jillian C. York.


It’s Business As Usual At WhatsApp

WhatsApp users have recently started seeing a new pop-up screen requiring them to agree to its new terms and privacy policy by February 8th in order to keep using the app. 

The good news is that, overall, this update does not make any extreme changes to how WhatsApp shares data with its parent company Facebook. The bad news is that those extreme changes actually happened over four years ago, when WhatsApp updated its privacy policy in 2016 to allow for significantly more data sharing and ad targeting with Facebook. What’s clear from the reaction to this most recent change is that WhatsApp shares much more information with Facebook than many users were aware, and has been doing so since 2016. And that’s not users’ fault: WhatsApp’s obfuscation and misdirection around what its various policies allow has put its users in a losing battle to understand what, exactly, is happening to their data.

These new terms of service and privacy policy are one more step in Facebook’s long-standing effort to monetize its messaging properties, and are also in line with its plans to make WhatsApp, Facebook Messenger, and Instagram Direct less separate. This raises serious privacy and competition concerns, including but not limited to WhatsApp’s ability to share new information with Facebook about users’ interactions with new shopping and payment products.

To be clear: WhatsApp still uses strong end-to-end encryption, and there is no reason to doubt the security of the contents of your messages on WhatsApp. The issue here is other data about you, your messages, and your use of the app. We still offer guides for WhatsApp (for iOS and Android) in our Surveillance Self-Defense resources, as well as for Signal (for iOS and Android).

Then and Now

This story really starts in 2016, when WhatsApp changed its privacy policy for the first time since its 2014 acquisition to allow Facebook access to several kinds of WhatsApp user data, including phone numbers and usage metadata (e.g. information about how long and how often you use the app, as well as your operating system, IP address, mobile network, etc.). Then, as now, public statements about the policy highlighted how this sharing would help WhatsApp users communicate with businesses and receive more “relevant” ads on Facebook.

At the time, WhatsApp gave users a limited option to opt out of the change. Specifically, users had 30 days after first seeing the 2016 privacy policy notice to opt out of “shar[ing] my WhatsApp account information with Facebook to improve my Facebook ads and product experiences.” The emphasis is ours; it meant that WhatsApp users were able to opt out of seeing visible changes to Facebook ads or Facebook friend recommendations, but could not opt out of the data collection and sharing itself.

If you were a WhatsApp user in August 2016 and opted out within the 30-day grace period, that choice will still be in effect. You can check by going to the “Account” section of your settings and selecting “Request account info.” The more than one billion users who have joined since then, however, did not have the option to refuse this expanded sharing of their data, and have been subject to the 2016 policy this entire time.

Now, WhatsApp is changing the terms again. The new terms and privacy policy are mainly concerned with how businesses on WhatsApp can store and host their communications. This is happening as WhatsApp plans to roll out new commerce tools in the app like Facebook Shops. Taken together, this renders the borders between WhatsApp and Facebook (and Facebook-owned Instagram) even more permeable and ambiguous. Information about WhatsApp users’ interactions with Shops will be available to Facebook, and can be used to target the ads you see on Facebook and Instagram. On top of the WhatsApp user data Facebook already has access to, this is one more category of information that can now be shared and used for ad targeting. And there’s still no meaningful way to opt out.

So when WhatsApp says that its data sharing practices and policies haven’t changed, it is correct—and that’s exactly the problem. For over four years now, those practices and policies have represented an erosion of Facebook’s and WhatsApp’s original promises to keep the apps separate, and these new products mean the scope of data that WhatsApp has access to, and can share with Facebook, is only expanding.

All of this looks different for users in the EU, who are protected by the EU’s General Data Protection Regulation, or GDPR. The GDPR prevents WhatsApp from simply passing on user data to Facebook without the permission of its users. As user consent must be freely given, voluntary, and unambiguous, the all-or-nothing consent framework that appeared to many WhatsApp users last week is not allowed. Tying consent for the performance of a service—in this case, private communication on WhatsApp—to additional data processing by Facebook—like shopping, payments, and data sharing for targeted advertising—violates the “coupling prohibition” under the GDPR.

The Problems with Messenger Monetization

Facebook has been looking to monetize its messaging properties for years. WhatsApp’s 2016 privacy policy change paved the way for Facebook to make money off it, and its recent announcements and changes point to a monetization strategy focused on commercial transactions that span WhatsApp, Facebook, and Instagram.

Offering a hub of services on top of core messaging functionality is not new—LINE and especially WeChat are two long-standing examples of “everything apps”—but it is a problem for privacy and competition, especially given WhatsApp’s pledge to remain a “standalone” product from Facebook. Even more dangerously, this kind of mission creep might give those who would like to undermine secure communications another pretense to limit, or demand access to, those technologies.

With three major social media and messaging properties in its “family of companies”—WhatsApp, Facebook Messenger, and Instagram Direct—Facebook is positioned to blur the lines between various services with anticompetitive, user-unfriendly tactics. When WhatsApp bundles new Facebook commerce services around the core messaging function, it bundles the terms users must agree to as well. The message this sends to users is clear: regardless of what services you choose to interact with (and even regardless of whether or when those services are rolled out in your geography), you have to agree to all of it or you’re out of luck. We’ve addressed similar user choice issues around Instagram’s recent update.

After these new shopping and payment features, it wouldn’t be unreasonable to expect WhatsApp to drift toward even more data sharing for advertising and targeting purposes. After all, monetizing a messenger isn’t just about making it easier for you to find businesses; it’s also about making it easier for businesses to find you.

Facebook is no stranger to building and then exploiting user trust. Part of WhatsApp’s immense value to Facebook was, and still is, its reputation for industry-leading privacy and security. We hope that doesn’t change any further.  


How COVID Changed Content Moderation: Year in Review 2020

In a year that saw every facet of online life reshaped by the coronavirus pandemic, online content moderation and platform censorship were no exception.

After a successful Who Has Your Back? campaign in 2019 to encourage large platforms to adopt best practices and endorse the Santa Clara Principles, 2020 was poised to be a year of more progress toward transparency and accountability in content moderation across the board. The pandemic changed that, however, as companies relied even more on automated tools in response to disrupted content moderator workforces and new types and volumes of misinformation.

At a moment when online platforms became newly vital to people’s work, education, and lives, this uptick in automation threatens freedom of expression online. That makes the Santa Clara Principles on Transparency and Accountability in Content Moderation more important than ever—and, like clockwork, transparency reporting later in the year demonstrated the pitfalls and costs of automated content moderation.

As the pandemic wore on, new laws regulating fake news online led to censorship and prosecutions across the world, including notable cases in Cambodia, India, and Turkey that targeted and harmed journalists and activists.

In May, Facebook announced its long-awaited Oversight Board. We had been skeptical from day one, and were disappointed to see the Board launch without adequate representation from the Middle East, North Africa, or Southeast Asia, and further missing advocates for LGBTQ or disability communities. Although the Board was designed to identify and decide Facebook’s most globally significant content disputes, the Board’s composition was and is more directed at parochial U.S. concerns.

In June, Facebook disabled the accounts of more than 60 Tunisian users with no notice or transparency. We reminded companies how vital their platforms are to speech; while the PR storm at the time swirled around whether or not they fact-check President Trump, we cannot forget that those most impacted by corporate speech controls are not politicians, celebrities, or right-wing provocateurs, but some of the world’s most vulnerable people who lack the access to corporate policymakers to which states and Hollywood have become accustomed.

As the EU’s regulation to curb “terrorist” or violent extremist content online moved forward in the latter half of the year, the Global Internet Forum to Counter Terrorism (GIFCT) took center stage as a driving force behind the bulk of allegedly terrorism-related takedowns and censorship online. And in September, we saw Zoom, Facebook, and YouTube cite U.S. terrorism designations when they refused to host Palestinian activist Leila Khaled.

At the same time, EFF has put forward EU policy principles throughout 2020 that would give users, not content cartels like GIFCT, more control over and visibility into content decisions.

The United States’ presidential election in November drove home the same problems we saw with Facebook’s Oversight Board in May and the string of disappeared accounts in June: tech companies and online platforms have focused on American concerns and politics to the detriment of addressing problems in—and learning from—the rest of the world. While EFF made clear what we were looking for and demanding from companies as they tailored content moderation policies to the U.S. election, we also reminded companies to first and foremost listen to their global user base.

Looking ahead to 2021, we will continue our efforts to hold platforms and their policies accountable to their users. In particular, we’ll be watching developments in AI and automation for content moderation, how platforms handle COVID vaccine misinformation, and how they apply election-related policies to significant elections coming up around the world, including in Uganda, Peru, Kyrgyzstan, and Iran.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.


European Commission’s Proposed Digital Services Act Got Several Things Right, But Improvements Are Necessary to Put Users in Control

The European Commission is set to release today a draft of the Digital Services Act, the most significant reform of European Internet regulations in two decades. The proposal, which will modernize the backbone of the EU’s Internet legislation—the e-Commerce Directive—sets out new responsibilities and rules for how Facebook, Amazon, and other companies that host content handle and make decisions about billions of users’ posts, comments, messages, photos, and videos.

This is a great opportunity for the EU to reinvigorate principles like transparency, openness, and informational self-determination. Many users feel locked into a few powerful platforms and at the mercy of algorithmic decision systems they don’t understand. It’s time to change this.

We obtained a copy of the 85-page draft and, while we are still reviewing all the sections, we zeroed in on several provisions pertaining to liability for illegal content, content moderation, and interoperability, three of the most important issues that affect users’ fundamental rights to free speech and expression on the Internet.

What we found is a mixed bag with some promising proposals. The Commission got it right setting limits on content removal and allowing users to challenge censorship decisions. We are also glad to see that general monitoring of users is not a policy option and that liability for speech rests with the speaker, and not with platforms that host what users post or share online. But the proposal doesn’t address user control over data or establish requirements that the mega platforms work towards interoperability. Thus, there is space for improvement and we will work with the EU Parliament and the Council, which must agree on a text for it to become law, to make sure that the EU fixes what is broken and puts users back in control.

Content liability and monitoring

The new EU Internet bill preserves the key pillars of the current Internet rules embodied in the EU’s e-Commerce Directive. The Commission followed our recommendation to refrain from forcing platforms to monitor and censor what users say or upload online. It seems to have learned a lesson from recent disastrous Internet bills like Article 17, which makes platforms police users’ speech.

The draft allows intermediaries to continue to benefit from comprehensive liability exemptions so, as a principle, they will not be held liable for user content. Due to a European-style “good samaritan” clause, this includes situations where platforms voluntarily act against illegal content. However, the devil lies in the details and we need to make sure that platforms are not nudged to employ “voluntary” upload filters.

New due-diligence obligations

The DSA sets out new due diligence obligations for flagging illegal content for all providers of intermediary services, and establishes special type and size-oriented obligations for online platforms, including the very large ones.

We said from the start that a one-size fits all approach to Internet regulations for social media networks does not work for an Internet that is monopolized by a few powerful platforms. We can therefore only support new due diligence obligations that are matched to the type and size of the platform. The Commission rightly recognizes that the silencing of speech is a systemic risk on very large platforms and that transparency about content moderation can improve the status quo. However, we will carefully analyze other, potentially problematic provisions, such as requiring platforms to report certain types of illegal content to law enforcement authorities. Rules on supervision, investigation, and enforcement deserve in-depth scrutiny from the European Parliament and the Council.

Takedown notices and complaint handling

Here, the Commission has taken a welcome first step towards more procedural justice. Significantly, the Commission acknowledges that platforms frequently make mistakes when moderating content. Recognizing that users deserve more transparency about platforms’ decisions to remove content or close accounts, the draft regulations call for online platforms to provide a user-friendly complaint handling system and restore content or accounts that were wrongly removed.

However, we have concerns that platforms, rather than courts, are increasingly becoming the arbiters of what speech can or cannot be posted online. A harmonized notification system for all sorts of content will increase the risk that a platform becomes aware of the illegality of content and is thus held liable for it.

Interoperability measures are missing

The Commission missed the mark on giving users more freedom and control over their Internet experience, as rules on interoperability are absent from the proposal. That may be addressed in the Digital Markets Act draft proposal. If the EU wants to break the power of platforms that monopolize the Internet, it needs regulations that will enable users to communicate with friends across platform boundaries, or be able to follow their favorite content across different platforms without having to create several accounts.

Court/administrative content takedown orders

The Internet is global, and takedown orders of global reach are immensely unjust and impair users’ freedom. The draft rules address the perils of worldwide takedown orders by requiring that such orders take users’ rights into account and that their territorial scope not exceed what is necessary.

Sanctions

Under the proposed regulations, the largest platforms can be fined up to six percent of their annual revenue for violating rules about hate speech and the sale of illegal goods. Proper enforcement actions and dissuasive sanctions are important tools to change the current digital space that is monopolized by very large platforms. That being said, high fines are only good if the substance of the regulations is good, which we will study in great detail in the next few weeks.

Non-EU platforms

Non-EU platform providers will face compliance duties if their services have a substantial connection to the EU. The proposed rules take particular aim at companies outside the Union, such as those in the U.S., that offer services to EU users. But the criteria for imposing these duties are not clear, and we’re concerned that if non-EU platforms are obligated to have legal representation in the EU, some will decide against offering services in the EU.

 


Federal and State Antitrust Suits Challenging Facebook’s Acquisitions are a Welcome Sight

Antitrust enforcers charged with protecting us from monopolists have awoken from decades-long hibernation to finally address something users have known, and been paying for with their private data, for years: Facebook’s acquisitions of rival platforms have harmed social media users by reducing competition, leaving them with fewer choices and creating a personal data-hoovering behemoth whose profiling capabilities only cement its dominance.

Now the government’s enforcers want Facebook broken up. The company’s acquisitions of Instagram in 2012 and WhatsApp in 2014 are at the center of lawsuits filed yesterday by the Federal Trade Commission (FTC) and forty U.S. states and territories that accuse the giant platform of having and illegally maintaining monopoly power in the “personal social networking” market. Facebook CEO Mark Zuckerberg, the lawsuits allege, strategized that it was better to buy rather than compete. Acquiring Instagram and WhatsApp, the lawsuits allege, deprives social media users of the benefits of competition—more choice, quality, and innovation.

The suits also focus on how Facebook treats companies that want to interoperate with its services. Facebook has long recognized that the ability to interoperate with an incumbent platform is a powerful anti-monopoly weapon. That’s why, say the lawsuits, Facebook attaches conditions when it allows app developers to use its APIs: they can’t provide services that compete with Facebook’s functions, and they can’t connect with or promote other social networks.

Like many antitrust suits, a key issue will be whether the court accepts the governments’ definition of the relevant market that’s being monopolized. In other words, is “personal social networking services” a unique type of service that Facebook dominates? Or does Facebook compete head-to-head with everything from email to television as one player among many? That issue is sure to be hotly contested as the government and states grapple with Facebook about what other companies are part of the relevant market.  

Facebook will probably also argue that its acquisitions were good for consumers and weren’t illegal from an antitrust standpoint because, even if they gave the company market dominance, they led to innovation that benefited users. Because no one can know for sure what would have happened if Instagram and WhatsApp had remained independent, Facebook will argue, the courts can do nothing now.

Tell that to former Instagram and WhatsApp users who saw the platforms they chose over Facebook be subsumed into Facebook’s ecosystem. Those users thought their preferred network, and their data, could be kept separate from Facebook’s; first because they were actually separate, and then because Facebook told them so, only to go back on its word, siphon off their data, and be opaque about the privacy implications to boot.

Antitrust regulators were mostly asleep at the wheel. Meantime, Instagram users saw the Instagram Direct logo disappear and be replaced with the Facebook Messenger logo. Facebook continues to blur the lines between the two apps, we noted last month, as part of a broader plan to consolidate Instagram Direct, Facebook Messenger, and WhatsApp. In a recent messaging “update,” Facebook encouraged Instagram users to take advantage of new “cross-platform messaging” features that in essence give you Facebook Messenger inside Instagram. But hey, you get innovations like colors in chats and new emojis.

Facebook will also have to defend its 2013 acquisition of VPN maker Onavo, which was specifically called out in the states’ lawsuit. Onavo’s data-gathering features were billed as a way for Facebook customers to keep their web browsing safe. But as it turns out, Facebook was using Onavo to gather intelligence about potential rivals by seeing how many messages users were sending through WhatsApp, which is what led it to buy WhatsApp. Facebook shut down the Onavo service after the practice was revealed. Whoops.

The enforcers aren’t asking that Facebook pay damages in the lawsuit. Rather, they want a court to require Facebook to divest Instagram, WhatsApp and possibly other acquisitions, and to limit the companies’ future mergers and acquisitions.

That’s the right approach. Even though company break-ups are hard to achieve—the last significant technology company to be broken up was AT&T in 1982—spinning off Facebook’s acquisitions could inject competition into a field where it’s been stifled for many years now. Even the pursuit of a break-up and restrictions on future mergers can create needed space for competition in the future. That’s why these lawsuits, though they won’t be easy to win, are a welcome sight.


Trump’s Ban on TikTok Violates First Amendment by Eliminating Unique Platform for Political Speech, Activism of Millions of Users, EFF Tells Court

We filed a friend-of-the-court brief—primarily written by the First Amendment Clinic at the Sandra Day O’Connor College of Law—in support of a TikTok employee who is challenging President Donald Trump’s ban on TikTok and was seeking a temporary restraining order (TRO). The employee contends that Trump’s executive order infringes the Fifth Amendment rights of TikTok’s U.S.-based employees. Our brief, which is joined by two prominent TikTok users, urges the court to consider the First Amendment rights of millions of TikTok users when it evaluates the plaintiff’s claims.

Notwithstanding its simple premise, TikTok has grown to have an important influence in American political discourse and organizing. Unlike other platforms, users on TikTok do not need to “follow” other users to see what they post. TikTok thus uniquely allows its users to reach wide and diverse audiences. That’s why the two TikTok users who joined our brief use the platform. Lillith Ashworth, whose critiques of Democratic presidential candidates went viral last year, uses TikTok to talk about U.S. politics and geopolitics. The other user, Jynx, maintains an 18+ adult-only account, where they post content that centers on radical leftist liberation, feminism, and decolonial politics, as well as the labor rights of strippers.

Our brief argues that in evaluating the plaintiff’s claims, the court must consider the ban’s First Amendment implications. The Supreme Court has established that rights set forth in the Bill of Rights work together; as a result, the plaintiff’s Fifth Amendment claims are enhanced by First Amendment considerations. We say in our brief:

A ban on TikTok violates fundamental First Amendment principles by eliminating a specific type of speaking, the unique expression of a TikTok user communicating with others through that platform, without sufficient considerations for the users’ speech. Even though the order facially targets the platform, its censorial effects are felt most directly by the users, and thus their First Amendment rights must be considered in analyzing its legality.

EFF, the First Amendment Clinic, and the individual amici urge the court to adopt a higher standard of scrutiny when reviewing the plaintiff’s claims against the president. Not only are the plaintiff’s Fifth Amendment liberties at stake, but millions of TikTok users have First Amendment freedoms at stake. The Fifth Amendment and the First Amendment are each critical in securing life, liberty, and due process of law. When these amendments are examined separately, they each deserve careful analysis; but when the interests protected by these amendments come together, a court should apply an even higher standard of scrutiny.

The hearing on the TRO scheduled for tomorrow was canceled after the government promised the court that it did not intend to include the payment of wages and salaries within the executive order’s definition of prohibited transactions, thus addressing the plaintiff’s most urgent claims.

After This Week’s Hack, It Is Past Time for Twitter to End-to-End Encrypt Direct Messages

Earlier this week, chaos reigned supreme on Twitter as high-profile public figures—from Elon Musk to Jeff Bezos to President Barack Obama—started tweeting links to the same bitcoin scam.

Twitter’s public statement and reporting from Motherboard suggest attackers gained access to an internal admin tool at the company, and used it to take over these accounts. Twitter says that approximately 130 accounts were targeted. According to Reuters, the attackers offered accounts for sale right before the bitcoin scam attack.

The full extent of the attack is unclear at this point, including what other capabilities the attackers might have had, or what other user information they could have accessed and how. Users cannot avoid a hack like this by strengthening their password or using two-factor authentication (though you should still take those steps to protect against other, much more common attacks). Instead, it’s Twitter’s responsibility to provide robust internal safeguards. Even with Twitter’s strong security team, it is almost impossible to defend against all insider threats and social engineering attacks—so these safeguards must prevent even an insider from getting unnecessary access.

Twitter direct messages (or DMs), some of the most sensitive user data on the platform, are vulnerable to this week’s kind of internal compromise. That’s because they are not end-to-end encrypted, so Twitter itself has access to them. That means Twitter can hand them over in response to law enforcement requests, they can be leaked, and—in the case of this week’s attack—internal access can be abused by malicious hackers and Twitter employees themselves.

End-to-end encryption provides the robust internal safeguard that Twitter needs. Twitter wouldn’t have to worry about whether or not this week’s attackers read or exfiltrated DMs if it had end-to-end encrypted them, like we have been asking Twitter to do for years.
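
To make concrete why this matters, here is a minimal sketch of the end-to-end encryption idea using the PyNaCl (libsodium) library. It illustrates the general public-key approach only; it is not Twitter's design or any real DM protocol. The point is that each user holds their own keys and the service only ever stores ciphertext, so even an attacker with internal admin access learns nothing about message contents.

```python
# Minimal sketch of end-to-end encryption with PyNaCl (libsodium bindings).
# Illustrative only; this is not Twitter's design or a real DM protocol.
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their own device.
# The platform never sees the private keys.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts a DM to Bob using her private key and Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at 6pm")

# The service only stores and relays `ciphertext`. Without a private key,
# neither an employee nor an attacker with admin access can read it.
receiving_box = Box(bob_private, alice_private.public_key)
plaintext = receiving_box.decrypt(ciphertext)
assert plaintext == b"meet at 6pm"
```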

Senator Ron Wyden also called for Twitter to end-to-end encrypt DMs after the hack, reminding Twitter CEO Jack Dorsey that he had reassured the Senator two years ago that end-to-end encryption was in the works.

Many other popular messaging systems are already using end-to-end encryption, including WhatsApp, iMessage, and Signal. Even Facebook Messenger offers an end-to-end encrypted option, and Facebook has announced plans to end-to-end encrypt all its messaging tools. It’s a no-brainer that Twitter should protect your DMs too, and they have been unencrypted for far too long.

Finally, let’s all pour one out for Twitter’s Incident Response team, living the security response nightmare in real time. We appreciate their work, and @TwitterSupport for providing ongoing updates on the investigation.


EFF to Court: Social Media Users Have Privacy and Free Speech Interests in Their Public Information

Special thanks to legal intern Rachel Sommers, who was the lead author of this post.

Visa applicants to the United States are required to disclose personal information including their work, travel, and family histories. And as of May 2019, they are required to register their social media accounts with the U.S. government. According to the State Department, approximately 14.7 million people will be affected by this new policy each year.

EFF recently filed an amicus brief in Doc Society v. Pompeo, a case challenging this “Registration Requirement” under the First Amendment. The plaintiffs in the case, two U.S.-based documentary film organizations that regularly collaborate with non-U.S. filmmakers and other international partners, argue that the Registration Requirement violates the expressive and associational rights of both their non-U.S.-based and U.S.-based members and partners. After the government filed a motion to dismiss the lawsuit, we filed our brief in district court in support of the plaintiffs’ opposition to dismissal. 

In our brief, we argue that the Registration Requirement invades privacy and chills free speech and association of both visa applicants and those in their social networks, including U.S. persons, despite the fact that the policy targets only publicly available information. This is amplified by the staggering number of social media users affected and the vast amounts of personal information they publicly share—both intentionally and unintentionally—on their social media accounts.

Social media profiles paint alarmingly detailed pictures of their users’ personal lives. By monitoring applicants’ social media profiles, the government can obtain information that it otherwise would not have access to through the visa application process. For example, visa applicants are not required to disclose their political views. However, applicants might choose to post their beliefs on their social media profiles. Those seeking to conceal such information might still be exposed by comments and tags made by other users. And due to the complex interactions of social media networks, studies have shown that personal information about users such as sexual orientation can reliably be inferred even when the user doesn’t expressly share that information. Although consular officers might be instructed to ignore this information, it is not unreasonable to fear that it might influence their decisions anyway.

Just as other users’ online activity can reveal information about visa applicants, so too can visa applicants’ online activity reveal information about other users, including U.S. persons. For example, if a visa applicant tags another user in a political rant or posts photographs of themselves and the other user at a political rally, government officials might correctly infer that the other user shares the applicant’s political beliefs. In fact, one study demonstrated that it is possible to accurately predict personal information about those who do not use any form of social media based solely on personal information and contact lists shared by those who do. The government’s surveillance of visa applicants’ social media profiles thus facilitates the surveillance of millions—if not billions—more people.
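
As a toy illustration of how this kind of inference can work (this is not the methodology of the studies cited above, and the names and labels below are invented), consider a simple homophily heuristic: guess an attribute for a person from the most common attribute their contacts share publicly.

```python
# Toy homophily heuristic: infer a label for a person from the labels their
# contacts share publicly. Hypothetical data; not the cited studies' method.
from collections import Counter

contacts = {
    "applicant": ["a", "b", "c", "d"],
    "non_user": ["a", "b", "applicant"],  # no account, but appears in others' contact lists
}

# Labels some users have chosen to share publicly.
public_labels = {"a": "attends rallies", "b": "attends rallies", "c": "apolitical"}

def infer_label(person):
    """Guess a label from the most common public label among a person's contacts."""
    labels = [public_labels[c] for c in contacts.get(person, []) if c in public_labels]
    return Counter(labels).most_common(1)[0][0] if labels else None

print(infer_label("applicant"))  # "attends rallies", though the applicant never said so
print(infer_label("non_user"))   # the inference reaches people with no profile at all
```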

Because social media users have privacy interests in their public social media profiles, government surveillance of digital content risks chilling free speech. If visa applicants know that the government can glean vast amounts of personal information about them from their profiles—or that their anonymous or pseudonymous accounts can be linked to their real-world identities—they will be inclined to engage in self-censorship. Many will likely curtail or alter their behavior online—or even disengage from social media altogether. Importantly, because of the interconnected nature of social media, these chilling effects extend to those in visa applicants’ social networks, including U.S. persons.

Studies confirm these chilling effects. Citizen Lab found that 62 percent of survey respondents would be less likely to “speak or write about certain topics online” if they knew that the government was engaged in online surveillance. A Pew Research Center survey found that 34 percent of its survey respondents who were aware of the online surveillance programs revealed by Edward Snowden had taken at least one step to shield their information from the government, including using social media less often, uninstalling certain apps, and avoiding the use of certain terms in their digital communications.

One might be tempted to argue that concerned applicants can simply set their accounts to private. Some users choose to share their personal information—including their names, locations, photographs, relationships, interests, and opinions—with the public writ large. But others do so unintentionally. Given the difficulties associated with navigating privacy settings within and across platforms and the fact that privacy settings often change without warning, there is good reason to believe that many users publicly share more personal information than they think they do. Moreover, some applicants might fear that setting their accounts to private will negatively impact their applications. Others—especially those using social media anonymously or pseudonymously—might be loath to maximize their privacy settings because they use their platforms with the specific intention of reaching large audiences.

These chilling effects are further strengthened by the broad scope of the Registration Requirement, which allows the government to continue surveilling applicants’ social media profiles once the application process is over. Personal information obtained from those profiles can also be collected and stored in government databases for decades. And that information can be shared with other domestic and foreign governmental entities, as well as current and prospective employers and other third parties. It is no wonder, then, that social media users might severely limit or change the way they use social media.

Secrecy should not be a prerequisite for privacy—and the review and collection by the government of personal information that is clearly outside the scope of the visa application process creates unwarranted chilling effects on both visa applicants and their social media associates, including U.S. persons. We hope that the D.C. district court denies the government’s motion to dismiss the case and ultimately strikes down the Registration Requirement as unconstitutional under the First Amendment.


The Executive Order Targeting Social Media Gets the FTC, Its Job, and the Law Wrong

This is one of a series of blog posts about President Trump’s May 28 Executive Order. Other posts are here, here, and here.

The inaptly named Executive Order on Preventing Online Censorship seeks to insert the federal government into private Internet speech in several ways. In particular, Sections 4 and 5 seek to address possible deceptive practices, but end up being unnecessary at best and legally untenable at worst.

These provisions are motivated in part by concerns, which we share, that the dominant platforms do not adequately inform users about their standards for moderating content, and that their own free speech rhetoric often doesn’t match their practices. But the EO’s provisions either don’t help, or introduce new and even more dangerous problems.

Section 4(c) says, “The FTC (Federal Trade Commission) shall consider taking action, as appropriate and consistent with applicable law, to prohibit unfair or deceptive acts or practices in or affecting commerce, pursuant to section 45 of title 15, United States Code. Such unfair or deceptive acts or practice may include practices by entities covered by Section 230 that restrict speech in ways that do not align with those entities’ public representations about those practices.”

Well, sure. Platforms should be honest about their restriction practices, and held accountable when they lie about them. The thing is, the FTC already has the ability to “consider taking action” about deceptive commercial practices.

But the real difficulty comes with the other parts of this section. Section 4(a) sets out the erroneous legal position that large online platforms are “public forums” that are legally barred from exercising viewpoint discrimination and have little ability to limit the categories of content that may be published on their sites. As we discuss in detail in our post dedicated to Section 230, every court that has considered this legal question has rejected it, including in recent decisions by the U.S. Courts of Appeals for the Ninth and D.C. Circuits. And for good reason: treating social media companies like “public forums” gives users less ability to respond to misuse, not more.

Instead, those courts have correctly adopted the rule on editorial freedom from the Supreme Court’s 1974 decision in Miami Herald Publishing Co. v. Tornillo. In that case, the court rejected strikingly similar arguments—that the newspapers of the day were misusing their editorial authority to favor one side over the other in public debates and that government intervention was necessary to “insure fairness and accuracy and to provide for some accountability.” Sound familiar?

The Supreme Court didn’t go for it: the “treatment of public issues and public officials—whether fair or unfair—constitute the exercise of editorial control and judgment. It has yet to be demonstrated how governmental regulation of this crucial process can be exercised consistent with First Amendment guarantees of a free press as they have evolved to this time.”

The current Supreme Court agrees. Just last term, in Manhattan Community Access Corp. v. Halleck, the Supreme Court affirmed that the act of serving as a platform for the speech of others did not eliminate that platform’s own First Amendment right to editorial freedom.

But the EO doesn’t just get the law wrong—it wants the FTC to punish platforms that don’t adhere to the erroneous position that online platforms are “public forums” legally barred from editorial freedom. Section 4(d) commands the FTC to consider whether the dominant platforms are inherently engaging in unfair practices by not operating as public forums as set forth in Section 4(a). This means that a platform could be completely honest, transparent, and open about its content moderation practices but still face penalties because it did not act like a public forum. So, platforms have a choice—take their guidance from the Supreme Court or from the Trump administration.

Additionally, Section 4(b) refers to the White House’s Tech Bias Reporting Tool, launched last year to collect reports of political bias. The EO states that 16,000 reports were received and that they will be forwarded to the FTC. We filed a Freedom of Information Act (FOIA) request with the White House’s Office of Science and Technology Policy for those complaints last year and were told that the office had no records (https://www.eff.org/document/eff-fioa-request-tech-bias-story-sharing-tool).

Section 5 commands the Attorney General to convene a group to look at existing state laws and propose model state legislation to address unfair and deceptive practices by online platforms. This group will be empowered to collect publicly available information about: how platforms track user interactions with other users; the use of “algorithms to suppress political alignment or viewpoint”; differential policies when applied to the Chinese government; reliance on third-party entities with “indicia of bias”; and viewpoint discrimination with respect to user monetization. To the extent that this means decisions will be made based on actual data rather than anecdote and supposition, that is a good thing. But given this pretty one-sided list, there does seem to be a predetermined political conclusion the EO wants to reach, and the proposals that come out of this process may create yet another set of problems.

All of this exacerbates a growing environment of legal confusion for technology and its users that bodes ill for online expression. Keep in mind that “entities covered by section 230” describes a huge population of online services that facilitate online user communication, from Wikimedia to the Internet Archive to the comments section of local newspapers. However you feel about Big Tech, rest assured that the EO’s effects will not be confined to the small group of companies that can afford to navigate these choppy waters.

Categories
Content Blocking Creativity & Innovation free speech Intelwars Section 230 of the Communications Decency Act Social Networks

Trump’s Executive Order Threatens to Leverage Government’s Advertising Dollars to Pressure Online Platforms

This is one of a series of blog posts about President Trump’s May 28 Executive Order. Other posts are here, here, and here.

The inaptly named Executive Order on Preventing Online Censorship (EO) seeks to insert the federal government into private Internet speech in several ways. Section 3 of the EO threatens to leverage the federal government’s significant online advertising spending to coerce platforms to conform to the government’s desired editorial position.

This raises significant First Amendment concerns.

The EO provides:

Sec. 3.  Protecting Federal Taxpayer Dollars from Financing Online Platforms That Restrict Free Speech.  (a)  The head of each executive department and agency (agency) shall review its agency’s Federal spending on advertising and marketing paid to online platforms.  Such review shall include the amount of money spent, the online platforms that receive Federal dollars, and the statutory authorities available to restrict their receipt of advertising dollars.

(b)  Within 30 days of the date of this order, the head of each agency shall report its findings to the Director of the Office of Management and Budget.

(c)  The Department of Justice shall review the viewpoint-based speech restrictions imposed by each online platform identified in the report described in subsection (b) of this section and assess whether any online platforms are problematic vehicles for government speech due to viewpoint discrimination, deception to consumers, or other bad practices.

The First Amendment is implicated by this provision because it is, at its essence, the government punishing a speaker for expressing a political viewpoint. The Supreme Court has recognized that “[t]he expression of an editorial opinion . . . lies at the heart of First Amendment protection.” The First Amendment thus generally protects speakers against enforced neutrality.

Although the government may have broad leeway to decide where it wants to run its advertisements, here it seems that the government would otherwise place advertisements on these platforms but for the sole fact that it dislikes the political viewpoint reflected by the platform’s editorial and curatorial decisions. This is true regardless of whether the platform actually has an editorial viewpoint or if the government simply perceives a viewpoint it finds inappropriate.

This decision is especially suspect when the platform’s speech is unrelated to the advertisement or the government program or policy being advertised. It might present a different situation if the message in the government’s advertisement would be undermined by the platform’s editorial decisions, or if, by advertising, the government would be perceived as adopting the platform’s viewpoint. But neither of those is contemplated by the EO.

The EO thus seems purely retaliatory, and designed solely to coerce the platforms to meet the government’s conception of acceptable “neutrality”—a severe penalty for having a political viewpoint. The goal of federal government advertising is to reach the broadest audience possible: think of the Consumer Product Safety Commission’s Quinn the Quarantine Fox ads, or the National Park Service’s promotions about its units. This advertising is not a reward for the platform for its perceived neutrality. It’s a service to Americans who need vital information.

In other contexts, the Supreme Court has made clear that the government’s spending decisions generally cannot be “the product of invidious viewpoint discrimination.” The court has applied this rule to strike down a property tax exemption that was available only to those who took loyalty oaths, explaining that “the deterrent effect is the same as if the State were to fine them for this speech.” The court also applied it when a county canceled a contract with a trash hauler who was a fervent critic of the county’s government. Even when the court rejected a First Amendment challenge to a requirement that the National Endowment for the Arts consider “general standards of decency and respect for the diverse beliefs and values of the American public” as one of many factors in awarding arts grants, it emphasized that the criterion did not give the government authority to “leverage its power to award subsidies on the basis of subjective criteria into a penalty on disfavored viewpoints,” and that funding decisions should not be “calculated to drive certain ideas or viewpoints from the marketplace.”

By denying ad dollars that it would otherwise spend solely because it disagrees with a platform’s editorial views, or dislikes that it has editorial views, the government violates these fundamental principles. And this in turn harms the public, which may need or want information contained in government advertisements.

Categories
Content Blocking free speech Intelwars net neutrality Section 230 of the Communications Decency Act Social Networks

Trump’s Executive Order Seeks To Have FCC Regulate Platforms. Here’s Why It Won’t Happen

This is one of a series of blog posts about President Trump’s May 28 Executive Order. Other posts are here and here.

The inaptly named Executive Order on Preventing Online Censorship seeks to insert the federal government into private Internet speech in several ways. Through Section 2 of the Executive Order (EO), the president has attempted to demand the start of a new administrative rulemaking. Despite the ham-fisted language, such a process can’t come into being, no matter how much someone might wish it.

The EO attempts to enlist the Secretary of Commerce and Attorney General to draft a rulemaking petition asking the Federal Communications Commission (FCC), an independent agency, to interpret 47 U.S.C. § 230 (“Section 230”), a law that underlies much of the architecture for the modern Internet.

Quite simply, this isn’t allowed.

Specifically, the petition will ask the FCC to examine:

“(i) the interaction between subparagraphs (c)(1) and (c)(2) of section 230, in particular to clarify and determine the circumstances under which a provider of an interactive computer service that restricts access to content in a manner not specifically protected by subparagraph (c)(2)(A) may also not be able to claim protection under subparagraph (c)(1), which merely states that a provider shall not be treated as a publisher or speaker for making third-party content available and does not address the provider’s responsibility for its own editorial decisions;

“(ii)  the conditions under which an action restricting access to or availability of material is not “taken in good faith” within the meaning of subparagraph (c)(2)(A) of section 230, particularly whether actions can be “taken in good faith” if they are:

“(A)  deceptive, pretextual, or inconsistent with a provider’s terms of service; or

“(B)  taken after failing to provide adequate notice, reasoned explanation, or a meaningful opportunity to be heard; and

“(iii)  any other proposed regulations that the NTIA concludes may be appropriate to advance the policy described in subsection (a) of this section.”

There are several significant legal obstacles to this happening.

First, the Federal Communications Commission (FCC) has no regulatory authority over the platforms the President wishes the agency to regulate. The FCC is a telecommunications and spectrum regulator: only the communications infrastructure industry (companies such as AT&T, Comcast, and Frontier) and the public airwaves are subject to the agency’s regulatory authority. This is the position of both the current, Trump-appointed FCC Chair and the courts that have considered the question.

In fact, this is why the issue of net neutrality is legally premised on whether or not broadband companies are telecommunications carriers. While we disagree with current FCC leadership on that question, neither this FCC nor any previous one has taken the position that social media companies are telecommunications carriers. So to implement regulations targeting social media companies, the FCC would have to explain how—under what legal authority—it is allowed to do so. We don’t see it doing so.

But say the FCC ignores this likely fatal flaw and proceeds anyway. The EO triggers a long and slow process that is unlikely to be completed this year, much less to result in an enforcement action. That process will begin with a Notice of Proposed Rulemaking (NPRM), in which the FCC issues a statement explaining its rationale for regulating these companies, the authorities it has to regulate them, and the regulations it intends to propose. The commission must then solicit public comment in response to its statement.

The process also involves public comment periods and agreement by a majority of FCC Commissioners on the regulations they want to issue. Absent a majority, nothing can be issued and the proposed regulations effectively die from inaction. If a majority of FCC Commissioners do agree and move forward, a lawsuit will inevitably follow to test the legal merits of the FCC’s decision, both on whether the government followed the proper procedures in issuing the regulation and whether it has the legal authority to issue rules in the first place.

Needless to say, the EO has initiated a long and uncertain process, and certainly one that will not be completed before the November election, if ever.

Categories
Bloggers' Rights free speech Intelwars Section 230 of the Communications Decency Act Security Social Networks

Dangers of Trump’s Executive Order Explained

The inaptly named Executive Order on Preventing Online Censorship (EO) is a mess on many levels: it’s likely unconstitutional on several grounds, built on false premises, and bad policy to boot. We are no fans of the way dominant social media platforms moderate user content. But the EO, and its clear intent to retaliate against Twitter for marking the president’s tweets for fact-checking, demonstrates that governmental mandates are the wrong way to address concerns about faulty moderation practices.

The EO contains several key provisions. We will examine them in separate posts linked here:

1. The FCC rule-making provision
2. The misinterpretation of and attack on Section 230
3. Threats to pull government advertising
4. Review of unfair or deceptive practices

Although we will focus on the intended legal consequences of the EO, we must also acknowledge the danger the Executive Order poses even if it is just political theater and never has any legal effect. The mere threat of heavy-handed speech regulation can inhibit speakers who want to avoid getting into a fight with the government, and deny readers information they want to receive. The Supreme Court has recognized that “people do not lightly disregard public officers’ thinly veiled threats” and thus even “informal contacts” by government against speakers may violate the First Amendment.

The EO’s threats to free expression and retaliation for constitutionally-protected editorial decisions by a private entity are not even thinly veiled: they should have no place in any serious discussion about concerns over the dominance of a few social media companies and how they moderate user content.

That said, we too are disturbed by the current state of content moderation on the big platforms. So, while we firmly disagree with the EO, we have been highly critical of the platforms’ failure to address some of the same issues targeted in the EO’s policy statement, specifically: first, that users deserve more transparency about how, when and how much content is moderated; second, that decisions often appear inconsistent; and, third, that content guidelines are often vague and unhelpful. Starting long before the president got involved, we have said repeatedly that the content moderation system is broken and called for platforms to fix it. We have documented a range of egregious content moderation decisions (see our onlinecensorship.org, Takedown Hall of Shame, and TOSsed Out projects). We have proposed a human rights framing for content moderation called the Santa Clara Principles, urged companies to adopt it, and then monitored whether they did so (see our 2018 and 2019 Who Has Your Back reports).

But we have rejected government mandates as a solution, and this EO demonstrates why it is indeed the wrong approach. In the hands of a retaliatory regime, government mandates on speech will inevitably be used to punish disfavored speakers and platforms, and for other oppressive and repressive purposes. Those decisions will disproportionately impact the marginalized. Regardless of the dismal state of content moderation, it is truly dangerous to put the government in control of online communication channels.

Some have proposed that the EO is simply an attempt to bring some due process and transparency to content moderation. However, our analysis of the various parts of the EO illuminates why that’s not true.

What about Competition?

For all its bluster, the EO doesn’t address one of the biggest underlying threats to online speech and user rights: the concentration of power in a few social media companies.

If the president and other social media critics really want to ensure that all voices have a chance to be heard, if they are really concerned that a few large platforms have too much practical power to police speech, the answer is not to create a new centralized speech bureaucracy, or promote the creation of fifty separate ones in the states. A better and actually constitutional option is to reduce the power of the social media giants and increase the power of users by promoting real competition in the social media space. This means eliminating the legal barriers to the development of tools that will let users control their own Internet experience. Instead of enshrining Google, Facebook, Amazon, Apple, Twitter, and Microsoft as the Internet’s permanent overlords, and then striving to make them as benign as possible, we can fix the Internet by making Big Tech less central to its future.

The Santa Clara Principles provide a framework for making content moderation at scale more respectful of human rights. Promoting competition provides a way to make the problems caused by content moderation by the big tech companies less important. Neither of these seem likely to be accomplished by the EO. But the chilling effect the EO will likely have on hosts of speech, and, consequently, the public—which relies on the Internet to speak out and be heard—is likely very real.

Categories
Intelwars privacy Security Social Networks

How Twitter’s Default Settings Can Leak Your Phone Number

Twitter has publicly disclosed a security “incident” that points to long-standing problems with how the service handles phone numbers. Twitter announced it had discovered and shut down “a large network of fake accounts” that were uploading large numbers of phone numbers and using tools in Twitter’s API to match them to individual usernames. This type of activity can be used to build a reverse-lookup tool, to find the phone number associated with a given username.

It turns out at least one of those people uploading massive lists of phone numbers was a security researcher, whose findings TechCrunch reported on in December.

Problems with tools that allow users to find accounts using the phone numbers associated with them are not new at Twitter (or at Facebook, for that matter). And, given the way these features are designed, their potential for abuse is not likely to go away anytime soon.

The best way for Twitter to protect its users is to minimize the number of accounts with phone numbers tied to them, and make it clear to users when and how those numbers might be exposed. That’s why Twitter needs to stop pressuring users to add their phone numbers to their profiles and stop making those phone numbers discoverable by default.

How It Works

When you are new to a service or first download an app, you may see a prompt to upload your contacts to find people you already know on the app. Twitter offers one of those contact upload tools.

The problem is that, if Twitter wants to connect you with your friends via their phone numbers, it needs to offer an API to support it. While that kind of API can and should come with limitations to prevent someone from maliciously revealing people’s identities and contact information, there will almost always be a way around it. In Twitter’s case, one of the limitations in place was to reject anyone who tried to upload a long list of sequential phone numbers—a sign that this person was almost certainly not uploading an address book in an attempt to find friends.

The workaround is almost comically simple: someone could just upload a long list of randomized phone numbers instead. And that’s how the security researcher whose work tipped Twitter off to this problem was able to match up phone numbers and usernames not just for people in his contacts, but for 17 million unsuspecting Twitter users, including high-profile officials and politicians around the world.
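To make concrete why that limitation is so easy to sidestep, here is a minimal illustrative sketch in Python. It is not Twitter’s actual code or API, and the heuristic and phone numbers are hypothetical; the point is simply that a check which only rejects obviously sequential uploads cannot tell a randomized brute-force list apart from a genuine address book.

```python
import random

def looks_sequential(numbers):
    # Hypothetical server-side heuristic: flag an upload if most
    # consecutive entries increase by exactly 1 (a brute-force pattern).
    hits = sum(1 for a, b in zip(numbers, numbers[1:]) if b - a == 1)
    return hits > len(numbers) // 2

# An upload that simply enumerates numbers in order trips the check...
sequential_batch = list(range(12_025_550_000, 12_025_560_000))
print(looks_sequential(sequential_batch))    # True -> would be rejected

# ...but the very same numbers, drawn in random order, sail past it.
# That is the essence of the workaround described above.
randomized_batch = random.sample(range(12_025_550_000, 12_025_560_000), 10_000)
print(looks_sequential(randomized_batch))    # almost certainly False -> accepted
```

The broader lesson is the one this post draws: any single pattern-based check on uploaded contact lists can be routed around, so the durable protection is not cleverer filtering but settings, and defaults, that keep users’ phone numbers out of the matching pool in the first place.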

Who It Affects

This tool can only match phone numbers to Twitter accounts for those who 1) have “phone number discoverability” turned on in their settings and 2) have a phone number associated with their account. If either of those does not apply to you, then your account was not exposed by this problem. For a step-by-step guide to checking your settings, visit our tutorial here.

Vague Promises

This is not the only time Twitter has recently messed up its management and protection of user phone numbers: just last October, Twitter fessed up to using the phone numbers users provided for two-factor authentication for targeted advertising.

But, then as now, Twitter’s plans to fix the problems that exposed user information in the first place are troublingly vague. Twitter’s announcement claimed that the company has made a “number of changes to this [contact upload tool] so that it could no longer return specific account names in response to queries.”

But what exactly are those changes, and how will they work? After failing the public trust, Twitter owes its users more transparency so that they can judge for themselves whether the fixes were adequate.

Categories
Article 13 Creativity & Innovation EUROPEAN UNION Fair Use free speech Intelwars Social Networks transparency

Rights Groups to European Commission: Prioritize Users’ Rights, Get Public Input For Article 17 Application Guidelines

The implementation of Article 17 (formerly Article 13) into national laws will have a profound effect on what users can say and share online. The controversial rule, part of the EU’s copyright directive approved last year, turns tech companies and online service operators into copyright police. Platforms are liable for any uploaded content on their sites that infringes someone’s copyright, absent authorization from rightsholders. To escape liability, online service operators have to make best efforts to ensure that infringing content is not available on their platforms, which in practice is likely to require scanning and filtering of billions of daily social media posts and content uploads containing copyrighted material.

The content moderation practices of Internet platforms are already faulty and opaque. Layering copyright enforcement onto this already broken system will censor even more speech. It’s paramount that protections for users’ rights be baked into the guidelines the European Commission (EC) is developing for how member states should implement the controversial rule. The guidelines are non-binding but politically influential.

The commission has held four meetings with stakeholders in recent months to gather information about copyright licensing and content moderation practices. Two more meetings are scheduled for this spring, after which the EC is expected to begin drafting guidelines for the application of Article 17, which must be implemented in national laws by June 7, 2021.

The fifth meeting was held today in Brussels. The good news is EFF and other digital rights organizations have a seat at the table, alongside rightsholders from the music and film industries and representatives of big tech companies like Google and Facebook. The bad news is that the commission’s proposed guidelines probably won’t keep users’ rights to free speech and freedom of expression from being trampled as internet service providers, fearful of liability, race to over-block content.

That’s why EFF and more than 40 user advocate and digital rights groups sent an open letter to the EC asking the commissioners to ensure that implementation guidelines focus on user rights, specifically free speech, and limit the use of automated filtering, which is notoriously inaccurate. The guidelines must ensure that protecting legitimate, fair uses of copyrighted material for research, criticism, review, or parody takes precedence over content blocking measures Internet service providers employ to comply with Article 17, the letter says. What’s more, the guidelines must make clear that automated filtering technologies can only be used if content-sharing providers can show that users aren’t being negatively affected.

Further, we asked the commission to share the draft guidelines with rights organizations and the public, and allow both to comment on and suggest improvements to ensure that they comply with European Union civil and human rights requirements. As we told the EC in the letter, “This request is based on the requirement of transparency, which is a core principle of the rule of law.” EFF and its partners want to “ensure that the guidelines are in line with the right to freedom of expression and information and also data protection guaranteed by the Charter of Fundamental Rights.”

The EC is scheduled to hold the next stakeholder meeting in February in preparation for drafting guidelines. We will keep the pressure on to protect users from censorship and content blocking brought on by this incredibly dangerous directive.
