
The GDPR, Privacy and Monopoly

In Privacy Without Monopoly: Data Protection and Interoperability, we took a thorough look at the privacy implications of various kinds of interoperability. We examined the potential privacy risks of interoperability mandates, such as those contemplated by 2020’s ACCESS Act (USA), the Digital Services Act and Digital Markets Act (EU), and the recommendations presented in the Competition and Markets Authority report on online markets and digital advertising (UK). 

We also looked at the privacy implications of “competitive compatibility” (comcom, AKA adversarial interoperability), where new services are able to interoperate with existing incumbents without their permission, by using reverse-engineering, bots, scraping, and other  improvised techniques common to unsanctioned innovation.

Our analysis concluded that while interoperability creates new privacy risks (for example, that a new firm might misappropriate user data under cover of helping users move from a dominant service to a new rival), these risks can largely be mitigated with thoughtful regulation and strong enforcement. More importantly, interoperability also has new privacy benefits, both because it makes it easier to leave a service with unsuitable privacy policies, and because this creates real costs for dominant firms that do not respect their users’ privacy: namely, an easy way for those users to make their displeasure known by leaving the service.

Critics of interoperability (including the dominant firms targeted by interoperability proposals) emphasize the fact that weakening a tech platform’s ability to control its users weakens its power to defend its users.

 They’re not wrong, but they’re not complete either. It’s fine for companies to defend their users’ privacy—we should accept nothing less—but the standards for defending user-privacy shouldn’t be set by corporate fiat in a remote boardroom, they should come from democratically accountable law and regulation.

The United States lags in this regard: Americans whose privacy is violated have to rely on patchy (and often absent) state privacy laws. The country needs—and deserves—a strong federal privacy law with a private right of action.

That’s something Europeans actually have. The General Data Protection Regulation (GDPR), a powerful, far-reaching, and comprehensive (if flawed and sometimes frustrating) privacy law, came into effect in 2018.

The European Commission’s pending Digital Services Act (DSA) and Digital Markets Act (DMA) both contemplate some degree of interoperability, prompting two questions:

  1. Does the GDPR mean that the EU doesn’t need interoperability in order to protect Europeans’ privacy? And
  2. Does the GDPR mean that interoperability is impossible, because there is no way to satisfy data protection requirements while permitting third-party access to an online service?

We think the answers are “no” and “no,” respectively. Below, we explain why.

Does the GDPR mean that the EU doesn’t need interoperability in order to protect Europeans’ privacy?

Increased interoperability can help to address user lock-in and ultimately create opportunities for services to offer better data protection.

The European Data Protection Supervisor has weighed in on the relationship between the GDPR and the Digital Markets Act (DMA), affirming that interoperability can advance the GDPR’s goals.

Note that the GDPR doesn’t directly mandate interoperability, but rather “data portability,” the ability to take your data from one online service to another. In this regard, the GDPR represents the first two steps of a three-step process for full technological self-determination: 

  1. The right to access your data, and
  2. The right to take your data somewhere else.

The GDPR’s data portability framework is an important start! Lawmakers correctly identified the potential of data portability to promote competition among platform services and to reduce the risk of user lock-in by lowering switching costs for users.

The law is clear on the duty of platforms to provide data in a structured, commonly used, and machine-readable format, and on users’ right to transmit that data without hindrance from one data controller to another. Where technically feasible, users also have the right to ask the data controller to transmit the data directly to another controller.

Recital 68 of the GDPR explains that data controllers should be encouraged to develop interoperable formats that enable data portability. The WP29, a former official European data protection advisory body, explained that this could be implemented by making application programming interfaces (APIs) available.
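To make the “structured, commonly used and machine-readable format” requirement concrete, here is a minimal sketch, in Python with Flask, of the kind of portability API that Recital 68 and the WP29 guidance point toward. The endpoint path, field names, and data are purely illustrative, not any platform’s real interface:

```python
# Hypothetical portability endpoint: serves a user's data as structured,
# machine-readable JSON. In a real deployment this would sit behind
# authentication and the consent checks discussed elsewhere in this post.
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for the data a real service would pull from its own storage.
EXPORTS = {
    "user-123": {
        "profile": {"display_name": "Example User", "joined": "2019-04-02"},
        "posts": [{"id": 1, "text": "Hello!", "created": "2021-05-30T12:00:00Z"}],
        "contacts": [{"handle": "friend-456"}],
    }
}

@app.route("/portability/v1/users/<user_id>/export")
def export_user_data(user_id):
    data = EXPORTS.get(user_id)
    if data is None:
        return jsonify({"error": "unknown user"}), 404
    return jsonify(data)

if __name__ == "__main__":
    app.run()
```

The engineering here is unremarkable, and that is the point: a machine-readable export is a small, well-understood piece of work; the GDPR’s shortfall lies less in the format than in where the exported data can actually go.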

However, the GDPR’s data portability limits and interoperability shortcomings have become more obvious since it came into effect. These shortcomings are exacerbated by lax enforcement. Data portability rights are insufficient to get Europeans the technological self-determination the GDPR seeks to achieve.

The limits the GDPR places on which data you have the right to export, and when you can demand that export, have not served their purpose. They have left users with a right to data portability, but few options about where to port that data to.

Missing from the GDPR is step three:

      3. The right to interoperate with the service you just left.

The DMA proposal is a legislative way of filling in that missing third step, creating a “real time data portability” obligation, which is a step toward real interop, of the sort that will allow you to leave a service, but remain in contact with the users who stayed behind. An interop mandate breathes life into the moribund idea of data-portability.

Does the GDPR mean that interoperability is impossible, because there is no way to satisfy data protection requirements while permitting third-party access to an online service?

The GDPR is very far-reaching, and European officials are still coming to grips with its implications. It’s conceivable that the Commission could propose a regulation that cannot be reconciled with EU data protection rules. We learned that in 2019, when the EU Parliament adopted the Copyright Directive without removing the controversial and ill-conceived Article 13 (now Article 17). Article 17’s proponents confidently asserted that it would result in mandatory copyright filters for all major online platforms, not realizing that those filters cannot be reconciled with the GDPR.

But we don’t think that’s what’s going on here. Interoperability—both the narrow interop contemplated in the DMA, and more ambitious forms of interop beyond the conservative approach the Commission is taking—is fully compatible with European data protection, both in terms of what Europeans legitimately expect and what the GDPR guarantees.

Indeed, the existence of the GDPR solves the thorniest problem involved in interop and privacy. By establishing the rules for how providers must treat different types of data and when and how consent must be obtained and from whom during the construction and operation of an interoperable service, the GDPR moves hard calls out of the corporate boardroom and into a democratic and accountable realm.

Facebook often asserts that its duty to other users means that it has to block you from bringing some of “your” data with you if you want to leave for a rival service. There is definitely some material on Facebook that is not yours, like private conversations between two or more other people. Even if you could figure out how to access those conversations, we want Facebook to take steps to block your access and prevent you from taking that data elsewhere.

But what about when Facebook asserts that its privacy duties mean it can’t let you bring the replies to your private messages, the comments on your public posts, or the entries in your address book with you to a rival service? These are less clear-cut than the case of other people’s private conversations, but blocking you from accessing this data also helps Facebook lock you onto its platform, which is also one of the most surveilled environments in the history of data-collection.

There’s something genuinely perverse about deferring these decisions to the reigning world champions of digital surveillance, especially because an unfavorable ruling about which data you can legitimately take with you when you leave Facebook might leave you stuck on Facebook, without a ready means to address any privacy concerns you have about Facebook’s policies.

This is where the GDPR comes in. Rather than asking whether Facebook thinks you have the right to take certain data with you or to continue accessing that data from a rival platform, the GDPR lets us ask the law which kinds of data connections are legitimate, and when consent from other implicated users is warranted. Regulation can make good, accountable decisions about whether a survey app deserves access to all of the “likes” by all of its users’ friends (Facebook decided it did, and the data ended up in the hands of Cambridge Analytica), or whether a user should be able to download a portable list of their friends to help switch to another service (which Facebook continues to prevent).

The point of an interoperability mandate—either the modest version in the DMA or a more robust version that allows full interop—is to allow alternatives to high-surveillance environments like Facebook to thrive by reducing switching costs. There’s a hard collective action problem of getting all your friends to leave Facebook at the same time as you. If people can leave Facebook but stay in touch with their Facebook friends, they don’t need to wait for everyone else in their social circle to feel the same way. They can leave today.

In a world where platforms—giants, startups, co-ops, nonprofits, tinkerers’ hobbies—all treat the GDPR as the baseline for data-processing, services can differentiate themselves by going beyond the GDPR, sparking a race to the top for user privacy.

Consent, Minimization and Security

We can divide all the data that can be passed from a dominant platform to a new, interoperable rival into several categories. There is data that should not be passed. For example, a private conversation between two or more parties who do not want to leave the service and who have no connection to the new service. There is data that should be passed after a simple request from the user. For example, your own photos that you uploaded, with your own annotations; your own private and public messages, etc. Then there is data generated by others about you, such as ratings. Finally, there is someone else’s personal information contained in a reply to a message you posted.

The last category is tricky, and it turns on the GDPR’s very fulcrum: consent. The GDPR’s rules on data portability clarify that exporting data needs to respect the rights and freedoms of others. Thus, although there is no ban on porting data that does not belong to the requesting user, data from other users shouldn’t be passed on without their explicit consent or another GDPR legal basis, and without further safeguards.

That poses a unique challenge for allowing users to take their data with them to other platforms, when that data implicates other users—but it also promises a unique benefit to those other users.

If the data you take with you to another platform implicates other users, the GDPR requires that they consent to it. The GDPR’s rules for this are complex, but also flexible.

For example, say, in the future, that Facebook obtains consent from users to allow their friends to take the comments, annotations, and messages they send to those friends with them to new services. If you quit Facebook and take your data (including your friends’ contributions to it) to a new service, that service doesn’t have to bother all your friends to get their consent again—under the WP29 Guidelines, so long as the new service uses the data in a way that is consistent with the uses Facebook obtained consent for in the first place, that consent carries over.

But even though the new service doesn’t have to obtain consent from your friends, it does have to notify them within 30 days – so your friends will always know where their data ended up.

And the new platform has all the same GDPR obligations that Facebook has: they must only process data when they have a “lawful basis” to do so; they must practice data minimization; they must maintain the confidentiality and security of the data; and they must be accountable for its use.

None of that prevents a new service from asking your friends for consent when you bring their data along with you from Facebook. A new service might decide to do this just to be sure that they are satisfying the “lawfulness” obligations under the GDPR.

One way to obtain that consent is to incorporate it into Facebook’s own consent “onboarding”—the consent Facebook obtains when each user creates their account. To comply with the GDPR, Facebook already has to obtain consent for a broad range of data-processing activities. If Facebook were legally required to permit interoperability, it could amend its onboarding process to include consent for the additional uses involved in interop.

Of course, the GDPR does not permit far-reaching, speculative consent. There will be cases where no amount of onboarding consent can satisfy either the GDPR or the legitimate privacy expectations of users. In these cases, Facebook can serve as a “consent conduit,” through which consent for friends to take blended data with them to a rival platform can be sought, obtained, or declined.

Such a system would mean that some people who leave Facebook would have to abandon some of the data they’d hoped to take with them—their friends’ contact details, say, or the replies to a thread they started—and it would also mean that users who stayed behind would face a certain amount of administrative burden when their friends tried to leave the service. Facebook might dislike this on the grounds that it “degraded the user experience,” but on the other hand, a flurry of notices from friends and family who are leaving Facebook behind might spur the users who stayed to reconsider that decision and leave as well.

For users pondering whether to allow their friends to take their blended data with them onto a new platform, the GDPR presents a vital assurance: because the GDPR does not permit companies to seek speculative, blanket consent for new purposes that you haven’t already consented to, and because the companies your friends take your data to have no way of contacting you, they generally cannot lawfully make any further use of that data (except under one of the other narrow bases permitted by the GDPR, for example, to fulfil a “legitimate interest”). Your friends can still access it, but neither they, nor the services they’ve fled to, can process your data beyond the scope of the initial consent to move it to the new context. Once the data and you are separated, there is no way for third parties to obtain the consent they’d need to lawfully repurpose it for new products or services.

Beyond consent, the GDPR binds online services to two other vital obligations: “data minimization” and “data security.” These two requirements act as a further backstop to users whose data travels with their friends to a new platform.

Data minimization means that any user data that lands on a new platform has to be strictly necessary for its users’ purposes (whether or not there might be some commercial reason to retain it). That means that if a Facebook rival imports your comments on its new user’s posts, any irrelevant data that Facebook transmits along with them (say, your location when you left the comment, or which link brought you to the post) must be discarded. This provides a second layer of protection for users whose friends migrate to new services: not only is their consent required before their blended data travels to the new service, but that service must not retain or process any extraneous information that seeps in along the way.
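To make that concrete, here is a minimal sketch of what a data-minimizing import might look like; the field names are hypothetical, not Facebook’s or any rival’s actual schema:

```python
# Hypothetical import step: keep only the fields the new service needs to
# display a comment, and discard everything else the exporting platform
# happened to send along (location, referrer, and so on).
NEEDED_FIELDS = {"author", "text", "created_at", "in_reply_to"}

def minimize_comment(raw_comment: dict) -> dict:
    """Return only the strictly necessary fields from an imported comment."""
    return {k: v for k, v in raw_comment.items() if k in NEEDED_FIELDS}

imported = {
    "author": "friend-456",
    "text": "Nice photo!",
    "created_at": "2021-05-30T12:00:00Z",
    "in_reply_to": "post-789",
    # Extraneous data that must not be retained:
    "geo": {"lat": 52.52, "lon": 13.40},
    "referrer": "https://example.com/some-link",
}

print(minimize_comment(imported))
# {'author': 'friend-456', 'text': 'Nice photo!', 'created_at': '2021-05-30T12:00:00Z', 'in_reply_to': 'post-789'}
```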

The GDPR’s security guarantee, meanwhile, guards against improper handling of the data you consent to let your friends take with them to new services. That means the data has to be encrypted in transit, and likewise at rest on the rival service’s servers. Even if the new service is a startup, it has a regulated, affirmative duty to practice good security across the board, with real liability if a material omission on its part leads to a breach.
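For the at-rest half of that obligation, a minimal sketch using Python’s widely used cryptography package might look like the following; the record contents, file path, and key handling are illustrative only:

```python
# Hypothetical at-rest protection for an imported record: encrypt before
# writing to disk, decrypt only when the service actually needs the data.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, kept in a secrets manager, never stored next to the data
fernet = Fernet(key)

record = b'{"author": "friend-456", "text": "Nice photo!"}'

ciphertext = fernet.encrypt(record)  # what actually lands on the new service's disk
with open("imported_record.enc", "wb") as fh:
    fh.write(ciphertext)

# Later, when the record is needed again:
with open("imported_record.enc", "rb") as fh:
    plaintext = fernet.decrypt(fh.read())
print(plaintext)
```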

Without interoperability, the monopolistic high-surveillance platforms are likely to enjoy long-term, sturdy dominance. The collective action problem of getting everyone on Facebook whose company you enjoy to leave at the same time you do means that anyone who leaves Facebook incurs a high switching cost.

Interoperability allows users to depart Facebook for rival platforms, including those that both honor the GDPR and go beyond its requirements. These smaller firms will have less political and economic influence than the monopolists whose dominance they erode, and when they do go wrong, their errors will be less consequential because they impact fewer users.

Without interoperability, privacy’s best hope is to gentle Facebook, rendering it biddable and forcing it to abandon its deeply held belief in enrichment through nonconsensual surveillance—and to do all of this without the threat of an effective competitor that Facebook users can flee to no matter how badly it treats them.

Interoperability without privacy safeguards is a potential disaster, provoking a competition to see who can extract the most data from users while offering the least benefit in return. Every legislative and regulatory interoperability proposal in the US, the UK, and the EU contains some kind of privacy consideration, but the EU alone has a region-wide, strong privacy regulation that creates a consistent standard for data-protection no matter what measure is being contemplated. Having both components – an interoperability requirement and a comprehensive privacy regulation – is the best way to ensure interoperability leads to competition in desirable activities, not privacy invasions.


Ring Changed How Police Request Door Camera Footage: What it Means and Doesn’t Mean

Amazon Ring has announced that it will change the way police can request footage from millions of doorbell cameras in communities across the country. Rather than the current system, in which police can send automatic bulk email requests to individual Ring users in an area of interest of up to half a square mile, police will now publicly post their requests to Ring’s accompanying Neighbors app. Users of that app will see a “Request for Assistance” on their feed, unless they opt out of seeing such requests, and then Ring customers in the area of interest (still up to half a square mile) can respond by reviewing and providing their footage.

Because only a portion of Ring users also are Neighbors users, and some of them may opt out of receiving police requests, this new system may  reduce the number of people who receive police requests, though we wonder whether Ring will now push more of its users to register for the app. 

This new model also may increase transparency over how police officers use and abuse the Ring system, especially as to people of color, immigrants, and protesters. Previously, in order to learn about police requests to Ring users, investigative reporters and civil liberties groups had to file public records requests with police departments–which consumed significant time and often yielded little information from recalcitrant agencies. Through this labor-intensive process, EFF revealed that the Los Angeles Police Department targeted Black Lives Matter protests in May and June 2020 with bulk Ring requests for doorbell camera footage that likely included First Amendment protected activities. Now, users will be able to see every digital request a police department has made to residents for Ring footage by scrolling through a department’s public page on the app. 

But making it easier to monitor historical requests can only do so much. It certainly does not address the larger problem with Ring and Neighbors: the network is predicated on perpetuating irrational fear of neighborhood crime, often yielding disproportionate scrutiny against people of color, all for the purposes of selling more cameras. Ring does so through police partnerships, which now encompass 1 in every 10 police departments in the United States. At their core, these partnerships facilitate bulk requests from police officers to Ring customers for their camera footage, built on a growing Ring surveillance network of millions of public-facing cameras. EFF adamantly opposes these Ring-police partnerships and advocates for their dissolution.

Nor does new transparency about bulk officer-to-resident requests through Ring erase the long history of secrecy about these shady partnerships. For example, Amazon has provided free Ring cameras to police, and limited what police were allowed to say about Ring, even about the existence of the partnership.

Notably, Amazon has moved Ring functionality to its Neighbors app. Neighbors is a problematic technology. Like its peers Nextdoor and Citizen, it encourages its users to report supposedly suspicious people–often resulting in racially biased posts that endanger innocent residents and passersby. 

Ring’s small reforms invite  bigger questions: Why does a customer-focused technology company need to develop and maintain a feature for law enforcement in the first place? Why must Ring and other technology companies continue to offer police free features to facilitate surveillance and the transfer of information from users to the government? 

Here’s some free advice for Ring: Want to make your product less harmful to vulnerable populations? Stop facilitating their surveillance and harassment at the hands of police. 


Facebook’s Policy Shift on Politicians Is a Welcome Step

We are happy to see the news that Facebook is putting an end to a policy that has long privileged the speech of politicians over that of ordinary users. The policy change, first reported on Friday by The Verge, is something that EFF has been pushing for since as early as 2019.

Back then, Facebook executive Nick Clegg, a former politician himself, famously pondered: “Would it be acceptable to society at large to have a private company in effect become a self-appointed referee for everything that politicians say? I don’t believe it would be.” 

Perhaps Clegg had a point—we’ve long said that companies are ineffective arbiters of what the world says—but that hardly justifies holding politicians to a lower standard than the average person. International standards do consider the identity of the speaker, but only as one of many factors. For example, the United Nations’ Rabat Plan of Action outlines a six-part threshold test that takes into account “(1) the social and political context, (2) status of the speaker, (3) intent to incite the audience against a target group, (4) content and form of the speech, (5) extent of its dissemination and (6) likelihood of harm, including imminence.” Facebook’s Oversight Board recently endorsed the Plan as a framework for assessing the removal of posts that may incite hostility or violence.

Facebook has deviated very far from the Rabat standard thanks, in part, to the policy it is finally repudiating. For example, it has banned elected officials from parties disfavored by the U.S. government, such as Hezbollah, Hamas, and the Kurdistan Workers Party (PKK), all of which appear on the government’s list of designated terrorist organizations—even though Facebook is not legally obligated to remove their speech. And in 2018, the company deleted the account of Chechen leader Ramzan Kadyrov, claiming that it was legally obligated to do so after the leader was placed on a sanctions list. Legal experts familiar with the law of international sanctions have disagreed, on the grounds that the sanctions are economic in nature and do not apply to speech.

So this decision is a good step in the right direction. But Facebook has many steps to go, including finally—and publicly—endorsing and implementing the Santa Clara Principles.

But ultimately, the real problem is that Facebook’s policy choices have so much power in the first place. It’s worth noting that this move coincides with a massive effort to persuade the U.S. Congress to impose new regulations that are likely to entrench Facebook’s power over free expression in the U.S. and around the world. If users, activists and, yes, politicians want real progress in defending free expression, we must fight for a world where changes in Facebook’s community standards don’t merit headlines at all—because they just don’t matter that much.

 


Organizing in the Public Interest: MusicBrainz

This blog post is part of a series, looking at the public interest internet—the parts of the internet that don’t garner the headlines of Facebook or Google, but quietly provide public goods and useful services without requiring the scale or the business practices of the tech giants. Read our first two parts or our introduction.

Last time, we saw how much of the early internet’s content was created by its users—and subsequently purchased by tech companies. By capturing and monopolizing this early data, these companies were able to monetize and scale this work faster than the network of volunteers that first created it for use by everybody. It’s a pattern that has happened many times in the network’s history: call it the enclosure of the digital commons. Despite this familiar story, the older public interest internet has continued to survive side-by-side with the tech giants it spawned: unlikely and unwilling to pull in the big investment dollars that could lead to accelerated growth, but also tough enough to persist in its own ecosystem. Some of these projects you’ve heard of—Wikipedia, or the GNU free software project, for instance. Some, because they fill smaller niches and aren’t visible to the average Internet user, are less well-known. The public interest internet fills the spaces between tech giants like dark matter; invisibly holding the whole digital universe together.

Sometimes, the story of a project’s switch to the commercial model is better known than its continuing existence in the public interest space. The notorious example in our third post was the commercialization of the publicly-built CD Database (CDDB): when a commercial offshoot of this free, user-built database, Gracenote, locked down access, forks like freedb and gnudb continued to offer the service free to its audience of participating CD users.

Gracenote’s co-founder, Steve Scherf, claimed that without commercial investment, CDDB’s free alternatives were doomed to “stagnation”. While alternatives like gnudb have survived, it’s hard to argue that either freedb or gnudb have innovated beyond their original goal of providing and collecting CD track listings. Then again, that’s exactly what they set out to do, and they’ve done it admirably for decades since.

But can innovation and growth take place within the public interest internet? CDDB’s commercialization parlayed its initial market into a variety of other music-based offerings. Their development of these products led to them being purchased, at various points, by AV manufacturer Escient, Sony, Tribune Media, and most recently, Nielsen. Each sale made money for its investors. Can a free alternative likewise build on its beginnings, instead of just preserving them for its original users?

MusicBrainz, a Community-Driven Alternative to Gracenote

Among the CDDB users who were thrown by its switch to a closed system in the 1990s was Robert Kaye. Kaye was a music lover and, at the time, a coder working on one of the earliest MP3 encoders and players at Xing. Now he and a small staff work full-time on MusicBrainz, a community-driven alternative to Gracenote. (Disclosure: EFF special advisor Cory Doctorow is on the board of MetaBrainz, the non-profit that oversees MusicBrainz).

“We were using CDDB in our service,” he told me from his home in Barcelona. “Then one day, we received a notice that said you guys need to show our [Escient, CDDB’s first commercial owner] logo when a CD is looked up. This immediately screwed over blind users who were using a text interface of another open source CD player that couldn’t comply with the requirement. And it pissed me off because I’d typed in a hundred or so CDs into that database… so that was my impetus to start the CD index, which was the precursor to MusicBrainz.”


MusicBrainz has continued ever since to offer a CDDB-compatible CD metadata database, free for anyone to use. The bulk of its user-contributed data has been put into the public domain, and supplementary data—such as extra tags added by volunteers—is provided under a non-commercial, attribution license. 

Over time, MusicBrainz has expanded by creating other publicly available, free-to-use databases of music data, often as a fallback for when other projects commercialize and lock down. For instance, Audioscrobbler was an independent system that collected information on what music you’ve listened to (no matter on what platform you heard it), to learn and provide recommendations based on its users’ contributions, but under your control. It was merged into Last.fm, an early Spotify-like streaming service, which was then sold to CBS. When CBS seemed to be neglecting the “scrobbling” community, MusicBrainz created ListenBrainz, which re-implemented features that had been lost over time. The plan, says Kaye, is to create a similarly independent recommendation system. 

While the new giants of Internet music—Spotify, Apple Music, Amazon—have been building closed machine-learning models to data-mine their users, and their musical interests, MusicBrainz has been working in the open with Barcelona’s Pompeu Fabra University to derive new metadata from the MusicBrainz communities’ contributions. Automatic deductions of genre, mood, beats-per-minute and other information are added to the AcousticBrainz database for everyone to use. These algorithms learn from their contributors’ corrections, and the fixes they provide are added to the commonwealth of public data for everyone to benefit from.

MusicBrainz’ aspirations are in tune with the early hopes of the Internet, and after twenty years, they appear to have proven that the Internet can support and expand a long-term public good, as opposed to a proprietary, venture capital-driven growth model. But what’s to stop the organization from going the same way as those other projects with their lofty goals? Kaye works full-time on MusicBrainz along with eight other employees: what’s to say that they’re not exclusively profiteering from the wider unpaid community in the same way that larger companies like Google benefit from their users’ contributions?

MusicBrainz has some good old-fashioned pre-Internet institutional protections. It is managed as a 501(c) non-profit, the MetaBrainz Foundation, which places some theoretical constraints on how it might be bought out. Another old Internet value is radical transparency, and the organization has that in spades. All of its finances, from profit and loss sheets to employment costs to its server outlay, along with its board meeting notes, are published online.

Another factor, says Kaye, is keeping a clear delineation between the work done by MusicBrainz’s paid staff and the work of the MusicBrainz volunteer community. “My team should work on the things that aren’t fun to work on. The volunteers work on the fun things,” he says. When you’re running a large web service built on the contributions of a community, there’s no end of volunteers for interesting projects, but, as Kaye notes, “there’s an awful lot of things that are simply not fun, right? Our team is focused on doing these things.” It helps that MetaBrainz, the foundation, hires almost exclusively from long-term MusicBrainz community members.

Perhaps MusicBrainz’s biggest defense against its own decline is the software (and data) licenses it uses for its databases and services. In the event of the organization’s separation from the desires of its community, all its composition and output—its digital assets, the institutional history—are laid out so that the community can clone its structure and create another, near-identical institution closer to its needs. The code is open source; the data is free to use; the radical transparency of the financial structures means that the organization itself can be reconstructed from scratch if need be.

Such forks are painful. Anyone who has recently watched the volunteer staff and community of Freenode, the distributed Internet Relay Chat (IRC) network, part ways with the network’s owner and start again at Libera.chat, will have seen this. Forks can be divisive in a community, and can be reputationally devastating to those who are abandoned by the community they claimed to lead and represent. MusicBrainz staff’s livelihood depends on its users in a way that even the most commercially sensitive corporation does not. 

It’s unlikely that a company would place its future viability so directly in the hands of its users. But it’s this self-imposed sword of Damocles hanging over Rob Kaye and his staff’s heads that fuels the communities’ trust in their intentions.

Where Does the Money Come From?

Open licenses, however, can also make it harder for projects to gather funding to persist. Where does MusicBrainz’ money come from? If anyone can use their database for free, why don’t all their potential revenue sources do just that, free-riding off the community without ever paying back? Why doesn’t a commercial company reproduce what MusicBrainz does, using the same resources that a community would use to fork the project?

MusicBrainz’s open finances show that, despite those generous licenses, they’re doing fine. The project’s transparency lets us see that it brought in around $400K in revenue in 2020, and had $400K in costs (it experienced a slight loss, but other years have been profitable enough to make this a minor blip). The revenue comes as a combination of small donors and larger sponsors, including giants like Google, who use MusicBrainz’ data and pay for a support contract.

Given that those sponsors could free-ride, how does Kaye get them to pay? He has some unorthodox strategies (most famously, sending a cake to Amazon to get them to honor a three-year-old invoice), but the most common reason seems to be that an open database maintainer that is responsive to a wider community is also easier for commercial concerns to interface with, both technically and contractually. Technologists building out a music tool or service turn to MusicBrainz for the same reason as they might pick an open source project: it’s just easier to slot it into their system without having to jump through authentication hoops or begin negotiations with a sales team. Then, when a company forms around that initial hack, its executives eventually realize that they now have a real dependency on a project with whom they have no contractual or financial relationship. A support contract means that they have someone to call up if it goes down; a financial relationship means that it’s less likely to disappear tomorrow.
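That low technical friction is easy to see. The sketch below queries MusicBrainz’s public web service (the API lives at https://musicbrainz.org/ws/2/) for release metadata with nothing more than an HTTP request and the descriptive User-Agent header that MusicBrainz asks clients to send; the query itself is just an example:

```python
# Minimal lookup against the public MusicBrainz web service. No API key or
# sales negotiation required; MusicBrainz asks only for a meaningful
# User-Agent and respect for its rate limits.
import requests

resp = requests.get(
    "https://musicbrainz.org/ws/2/release/",
    params={"query": 'artist:"Nirvana" AND release:"Nevermind"', "fmt": "json"},
    headers={"User-Agent": "ExampleApp/0.1 (contact@example.com)"},
    timeout=10,
)
resp.raise_for_status()

for release in resp.json().get("releases", [])[:5]:
    print(release.get("title"), "-", release.get("date", "unknown date"))
```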


Again, commercial alternatives may make the same offer, but while a public interest non-profit like MusicBrainz might vanish if it fails its community, or simply runs out of money, those other private companies may well have other reasons to exit their commitments with their customers. When Sony bought Gracenote, it was presumably partly so that it could support its products that used Gracenote’s databases. After Sony sold Gracenote, it ended up terminating its own use of the databases. Sony announced to its valued customers in 2019 that Sony Blu-Ray and Home Theater products would no longer have CD and DVD recognition features. The same thing happened to Sony’s mobile Music app in 2020, which stopped being able to recognize CDs when it was cut off from Gracenote’s service. We can have no insight into these closed, commercial deals, but we can presume that Sony and Gracenote’s new owner could not come to an amicable agreement.

By contrast, if Sony had used MusicBrainz’ data, they would have been able to carry on regardless. They’d be assured that no competitor would buy out MusicBrainz from under them, or lock their products out of an advertised feature. And even if MusicBrainz the non-profit died, there would be a much better chance that an API-compatible alternative would spring up from the ashes. If it was that important, Sony could have supported the community directly. As it is, Sony paid $260 million for Gracenote. For their CD services, at least, they could have had a more stable service deal with MusicBrainz for $1500 a month.

Over two decades after the user rebellion that created it, MusicBrainz continues to tick along. Its staff is drawn from music fans around the world, and meets up every year at a conference paid for by the MetaBrainz Foundation. Its contributors know that they can always depend on its data staying free; its paying customers know that they can always depend on its data being usable in their products. MusicBrainz staff can be assured that they won’t be bought up by big tech, and they can see the budget that they have to work with.

It’s not perfect. A transparent non-profit that aspires to internet values can be as flawed as any other. MusicBrainz suffered a reputational hit last year when personal data leaked from its website, for instance. But by continuing to exist, even with such mistakes, and despite multiple economic downturns, it demonstrates that a non-profit, dedicated to the public interest, can thrive without stagnating, or selling its users out.

But, but, but. While it’s good to know public interest services are successful in niche territories like music recognition, what about the parts of the digital world that really seem to need a more democratic, decentralized alternative—and yet notoriously lack them? Sites like Facebook, Twitter, and Google have not only built their empires from others’ data, they have locked their customers in, apparently with no escape. Could an alternative, public interest social network be possible? And what would that look like?

We’ll cover these in a later part of our series. (For a sneak preview, check out the recorded discussions at “Reimagining the Internet”, from our friends at the Knight First Amendment Institute at Columbia University and the Initiative on Digital Public Infrastructure at the University of Massachusetts, Amherst, which explore in-depth many of the topics we’ve discussed here.)


Supreme Court Overturns Overbroad Interpretation of CFAA, Protecting Security Researchers and Everyday Users

EFF has long fought to reform vague, dangerous computer crime laws like the CFAA. We’re gratified that the Supreme Court today acknowledged that overbroad application of the CFAA risks turning nearly any user of the Internet into a criminal based on arbitrary terms of service. We remember the tragic and unjust results of the CFAA’s misuse, such as the death of Aaron Swartz, and we will continue to fight to ensure that computer crime laws no longer chill security research, journalism, and other novel and interoperable uses of technology that ultimately benefit all of us.

EFF filed briefs both encouraging the Court to take today’s case and urging it to make clear that violating terms of service is not a crime under the CFAA. In the first, filed alongside the Center for Democracy and Technology and New America’s Open Technology Institute, we argued that Congress intended to outlaw computer break-ins that disrupted or destroyed computer functionality, not anything that the service provider simply didn’t want to have happen. In the second, filed on behalf of computer security researchers and organizations that employ and support them, we explained that the broad interpretation of the CFAA puts computer security researchers at legal risk for engaging in socially beneficial security testing through standard security research practices, such as accessing publicly available data in a manner beneficial to the public yet prohibited by the owner of the data. 

Today’s win is an important victory for users everywhere. The Court rightly held that exceeding authorized access under the CFAA does not encompass “violations of circumstance-based access restrictions on employers’ computers.” Thus, “an individual ‘exceeds authorized access’ when he accesses a computer with authorization but then obtains information located in particular areas of the computer— such as files, folders, or databases—that are off limits to him.” Rejecting the Government’s reading allowing CFAA charges for any website terms of service violation, the Court adopted a “gates-up-or-down” approach: either you are entitled to access the information or you are not. This means that private parties’ terms of service limitations on how you can use information, or for what purposes you can access it, are not criminally enforced by the CFAA.


EFF at 30: Surveillance Is Not Obligatory, with Edward Snowden

To commemorate the Electronic Frontier Foundation’s 30th anniversary, we present EFF30 Fireside Chats. This limited series of livestreamed conversations looks back at some of the biggest issues in internet history and their effects on the modern web.

To celebrate 30 years of defending online freedom, EFF was proud to welcome NSA whistleblower Edward Snowden for a chat about surveillance, privacy, and the concrete ways we can improve our digital world, as part of our EFF30 Fireside Chat series. EFF Executive Director Cindy Cohn, EFF Director of Engineering for Certbot Alexis Hancock, and EFF Policy Analyst Matthew Guariglia weighed in on the way the internet (and surveillance) actually function, the impact that has on modern culture and activism, and how we’re grappling with the cracks this pandemic has revealed—and widened—in our digital world. 

You can watch the full conversation here or read the transcript.

On June 3, we’ll be holding our fourth EFF30 Fireside Chat, on how to free the internet, with net neutrality pioneer Gigi Sohn. EFF co-founder John Perry Barlow once wrote, “We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before.” This year marked the 25th anniversary of this audacious essay denouncing centralized authority on the blossoming internet. But modern tech has strayed far from the utopia of individual freedom that 90s netizens envisioned. We’ll be discussing corporatization, activism, and the fate of the internet, framed by Barlow’s “Declaration of the Independence of Cyberspace,” with Gigi, along with EFF Senior Legislative Counsel Ernesto Falcon and EFF Associate Director of Policy and Activism Katharine Trendacosta.

RSVP to the next EFF30 Fireside Chat

The Internet is Not Made of Magic

Snowden opened the discussion by explaining the reality that all of our internet usage is made up of a giant mesh of companies and providers. The internet is not magic—it’s other people’s computers: “All of our communications—structurally—are intermediated by other people’s computers and infrastructure…[in the past] all of these lines that you were riding across—the people who ran them were taking notes.” We’ve come a long way from that time when our communications were largely unencrypted, and everything you typed into the Google search box “was visible to everybody else who was on that Starbucks network with you, and your Internet Service Provider, who knew this person who paid for this account searched for this thing on Google….anybody who was between your communications could take notes.”

[Embedded video: https://www.youtube.com/embed/PYRaSOIbiOA]

How Can Tech Protect Us from Surveillance?

In 2013, Snowden came forward with details about the PRISM program, through which the NSA and FBI worked directly with large companies to see what was in individuals’ internet communications and activity, making much more public the notion that our digital lives were not safe from spying. This has led to a change in people’s awareness of this exploitation, Snowden said, and myriad solutions have come about to solve parts of what is essentially an ecosystem problem: some technical, some legal, some political, some individual. “Maybe you install a different app. Maybe you stop using Facebook. Maybe you don’t take your phone with you, or start using an encrypted messenger like Signal instead of something like SMS.” 


When it comes to the legal cases, like EFF’s case against the NSA, the courts are finally starting to respond. Technical solutions, like the expansion of encryption in everyday online usage, are also playing a part, Alexis Hancock, EFF’s Director of Engineering for Certbot, explained. “Just yesterday, I checked on a benchmark that said that 95% of web traffic is encrypted—leaps and bounds since 2013.” In 2015, web browsers started displaying “this site is not secure” messages on unencrypted sites, and that’s where EFF’s Certbot tool steps in. Certbot is a “free, open source software that we work on to automatically supply free SSL, or secure, certificates for traffic in transit, automating it for websites everywhere.” This keeps data private in transit—adding a layer of protection over what is traveling between your request and a website’s server. This is one of those things that doesn’t get talked about a lot, partly because these are pieces that you don’t see and shouldn’t have to see, but they give people security. “Nobody sells you a car without brakes—nobody should sell you a browser without security.”

[Embedded video: https://www.youtube.com/embed/cJWq6ub0CQs]

Balancing the Needs of the Pandemic and the Dangers of Surveillance

We’ve moved the privacy needle forward in many ways since 2013, but in 2020, a global catastrophe could have set us back: the COVID-19 pandemic. As Hancock described it, EFF’s focus for protecting privacy during the pandemic was to track “where technology can and can’t help, and when is technology being presented as a silver bullet for certain issues around the pandemic when people are the center for being able to bring us out of this.”


Our fear was primarily scope creep, she explained: from contact tracing to digital credentials, many of these systems already exist, but we must ask, “what are we actually trying to solve here? Are we actually creating more barriers to healthcare?” Contact tracing, for example, must put privacy first and foremost—because making it trustworthy is key to making it effective. 

[Embedded video: https://www.youtube.com/embed/R9CIDUhGOgU]

The Melting Borders Between Corporate, Government, Local, and Federal Surveillance 

But the pandemic, unfortunately, isn’t the only nascent danger to our privacy. EFF’s Matthew Guariglia described the merging of both government and corporate surveillance, and federal and local surveillance, that’s happening around the country today: “Police make very effective marketers, and a lot of the manufacturers of technology are counting on it….If you are living in the United States today you are likely walking past or carrying around street level surveillance everywhere you go, and this goes double if you live in a concentrated urban setting or you live in an overpoliced community.”


From automated license plate readers to private and public security cameras to Shotspotter devices that listen for gunshots but also record cars backfiring and fireworks, this matters now more than ever, as the country reckons with a history of dangerous and inequitable overpolicing: “If a Shotspotter misfires, and sends armed police to the site of what they think is a shooting, there is likely to be a higher chance for a more violent encounter with police who think they’re going to a shooting.” This is equally true for a variety of these technologies, from automated license plate readers to facial recognition, which police claim are used for leads, but are too often accepted as fact. 

“Should we compile records that are so comprehensive?” asked Snowden about the way these records aren’t only collected, but queried, allowing government and companies to ask for the firehose of data. “We don’t even care what it is, we interrelate it with something else. We saw this license plate show up outside our store at a strip mall and we want to know how much money they have.” This is why the need for legal protections is so important, added Executive Director Cindy Cohn: “The technical tools are not going to get to the place where the phone company doesn’t know where your phone is. But the legal protections can make sure that the company is very limited in what they can do with that information—especially when the government comes knocking.”

[Embedded video: https://www.youtube.com/embed/cLlVb_W8OmA]

 

After All This, Is Privacy Dead?

All these privacy-invasive regimes may lead some to wonder if privacy, or anonymity, are, to put it bluntly, dying. That’s exactly what one audience member asked during the question and answer section of the chat. “I don’t think it’s inevitable,” said Guariglia. “There is a looming backlash of people who have had quite enough.” Hancock added that optimism is both realistic and required: “No technology makes you a ghost online—none of it, even the most secure, anonymous-driven tools out there. And I don’t think that it comes down to your own personal burden…There is actually a more collective unit now that are noticing that this burden is not yours to bear…It’s going to take firing on all cylinders, with activism, technology, and legislation. But there are people fighting for you out there. Once you start looking, you’ll find them.” 


“So many people care,” Snowden said. “But they feel like they can’t do anything….Does it have to be that way?…Governments live in a permissionless world, but we don’t. Does it have to be that way?” If you’re looking for a lever to pull—look at the presumptions these mass data collection systems make, and what happens if they fail: “They do it because mass surveillance is cheap…could we make these systems unlawful for corporations, and costly [for others]? I think in all cases, the answer is yes.”

[Embedded video: https://www.youtube.com/embed/EaeKVAbMO6s]

Democracy, social movements, our relationships, and your own well-being all require private space to thrive. If you missed this chat, please take an hour to watch it—whether you’re a privacy activist or an ordinary person, it’s critical for the safety of our society that we push back on all forms of surveillance, and protect our ability to communicate, congregate, and coordinate without fear of reprisal. We deeply appreciate Edward Snowden joining us for this EFF30 Fireside Chat and discussing how we can fight back against surveillance, as difficult as it may seem. As Hancock said (yes, quoting the anime The Last Airbender): “If you look for darkness, that’s all you’ll ever see. But if you look for lightness, you will find it.”

___________________________

Check out additional recaps of EFF’s 30th anniversary conversation series, and don’t miss our next program where we’ll tackle digital access and the open web with Gigi Sohn on June 3, 2021—EFF30 Fireside Chat: Free the Internet.


Amid Systemic Censorship of Palestinian Voices, Facebook Owes Users Transparency

Over the past few weeks, as protests in—and in solidarity with—Palestine have grown, so too have violations of the freedom of expression of Palestinians and their allies by major social media companies. From posts incorrectly flagged by Facebook as incitement to violence, to financial censorship of relief payments made on Venmo, to the removal of Instagram Stories (which also heavily affected activists in Colombia, Canada, and Brazil), Palestinians are experiencing an unprecedented level of censorship at a time when digital communications are absolutely critical.

The vitality of social media during a time like this cannot be overstated. Journalistic coverage from the ground is minimal—owing to a number of factors, including restrictions on movement by Israeli authorities—while, as the New York Times reported, misinformation is rife and has been repeated by otherwise reliable media sources. Israeli officials have even been caught spreading misinformation on social media.

Palestinian digital rights organization 7amleh has spent the past few weeks documenting content removals, and a coalition of more than twenty organizations, including EFF, has reached out to social media companies, including Facebook and Twitter. The demands include that the companies immediately stop censoring—and reinstate—the accounts and content of Palestinian voices, open an investigation into the takedowns, and transparently and publicly share the results of those investigations.

A brief history

Palestinians face a number of obstacles when it comes to online expression. Depending on where they reside, they may be subject to differing legal regimes, and face censorship from both Israeli and Palestinian authorities. Most Silicon Valley tech companies have offices in Israel (but not Palestine), while some—such as Facebook—have struck particular deals with the Israeli government to address incitement. While incitement to violence is indeed against the company’s community standards, groups like 7amleh say that this agreement results in inconsistent application of the rules, with incitement against Palestinians often allowed to remain on the platform.

Additionally, the presence of Hamas—which is the democratically-elected government of Gaza, but is also listed as a terrorist organization by the United States and the European Union—complicates things for Palestinians, as any mention of the group (including, at times, something as simple as the group’s flag flying in the background of an image) can result in content removals.

And it isn't just Hamas—last week, Buzzfeed documented an instance where references to Jerusalem's Al Aqsa mosque, one of the holiest sites in Islam, were removed because "Al Aqsa" also appears in the name of another designated group, the Al Aqsa Martyrs' Brigade. Although Facebook apologized for the error, this kind of mistake has become all too common, particularly as reliance on automated moderation has increased amidst the pandemic.

“Dangerous Individuals and Organizations”

Facebook’s Community Standard on Dangerous Individuals and Organizations gained a fair bit of attention a few weeks back when the Facebook Oversight Board affirmed that President Trump violated the standard with several of his January 6 posts. But the standard is also regularly used as justification for the widespread removal of content by Facebook pertaining to Palestine, as well as other countries like Lebanon. And it isn’t just Facebook—last Fall, Zoom came under scrutiny for banning an academic event at San Francisco State University (SFSU) at which Palestinian figure Leila Khaled, alleged to belong to another US-listed terrorist organization, was to speak.

SFSU fell victim to censorship again in April of this year when its Arab and Muslim Ethnicities and Diasporas (AMED) Studies Program discovered that its Facebook event "Whose Narratives? What Free Speech for Palestine?," scheduled for April 23, had been taken down for violating Facebook Community Standards. Shortly thereafter, the program's entire page, "AMED STUDIES at SFSU," was deleted, along with its years of archival material on classes, syllabi, webinars, and vital discussions not only on Palestine but on Black, Indigenous, Asian and Latinx liberation, gender and sexual justice, and a variety of Jewish voices and perspectives, including opposition to Zionism. Although no specific violation was noted, Facebook has since confirmed that the post and the page were removed for violating the Dangerous Individuals and Organizations standard. This was in addition to cancellations by other platforms including Google, Zoom, and Eventbrite.

SFSU's AMED Studies Program gets censored by Facebook

Given the frequency and the high-profile contexts in which Facebook’s Dangerous Individuals and Organizations Standard is applied, the company should take extra care to make sure the standard reflects freedom of expression and other human rights values. But to the contrary, the standard is a mess of vagueness and overall lack of clarity—a point that the Oversight Board has emphasized.

Facebook has said that the purpose of this community standard is to “prevent and disrupt real-world harm.” In the Trump ruling, the Oversight Board found that President Trump’s January 6 posts readily violated the Standard. “The user praised and supported people involved in a continuing riot where people died, lawmakers were put at serious risk of harm, and a key democratic process was disrupted. Moreover, at the time when these restrictions were extended on January 7, the situation was fluid and serious safety concerns remained.”

But in two previous decisions, the Oversight Board criticized the standard. In a decision overturning Facebook’s removal of a post featuring a quotation misattributed to Joseph Goebbels, the Oversight Board admonished Facebook for not including all aspects of its policy on dangerous individuals and organizations in the community standard.

Facebook apparently has self-designated lists of individuals and organizations subject to the policy that it does not share with users, and treats any quoting of such persons as an "expression of support" unless the user provides additional context to make their benign intent explicit, a condition also not disclosed to users. Facebook's lists evidently include US-designated foreign terrorist organizations, but also seem to go beyond that list.

As the Oversight Board concluded, “this results in speech being suppressed which poses no risk of harm” and found that the standard fell short of international human rights standards: “the policy lacks clear examples that explain the application of ‘support,’ ‘praise’ and ‘representation,’ making it difficult for users to understand this Community Standard. This adds to concerns around legality and may create a perception of arbitrary enforcement among users.” Moreover, “the policy fails to explain how it ascertains a user’s intent, making it hard for users to foresee how and when the policy will apply and conduct themselves accordingly.”

The Oversight Board recommended that Facebook explain and provide examples of the application of key terms used in the policy, including the meanings of “praise,” “support,” and “representation.” The Board also recommended that the community standard provide clearer guidance to users on making their intent apparent when discussing such groups, and that a public list of “dangerous” organizations and individuals be provided to users.

The United Nations Special Rapporteur on Freedom of Expression also expressed concern that the standard, and specifically the language of “praise” and “support,” was “excessively vague.”

Recommendations

Policies such as Facebook’s that restrict references to designated terrorist organizations may be well-intentioned, but in their blunt application, they can have serious consequences for documentation of crimes—including war crimes—as well as vital expression, including counterspeech, satire, and artistic expression, as we’ve previously documented. While companies, including Facebook, have regularly claimed that they are required to remove such content by law, it is unclear to what extent this is true. The legal obligations are murky at best. Regardless, Facebook should be transparent about the composition of its “Dangerous Individuals and Organizations” list so that users can make informed decisions about what they post.

But while some content may require removal under certain jurisdictions, it is clear that other decisions are made on the basis of internal policies and external pressure—and are often not in the best interest of the individuals that they claim to serve. This is why it is vital that companies include vulnerable communities—in this case, Palestinians—in policy conversations.

Finally, transparency and appropriate notice to users would go a long way toward mitigating the harm of such takedowns—as would ensuring that every user has the opportunity to appeal content decisions in every circumstance. The Santa Clara Principles on Transparency and Accountability in Content Moderation offer a baseline for companies.

Categories
Commentary Intelwars Legal Analysis privacy Security Technical Analysis

How Your DNA—or Someone Else’s—Can Send You to Jail

Although DNA is individual to you—a "fingerprint" of your genetic code—DNA samples don't always tell a complete story. The DNA samples used in criminal prosecutions are generally of low quality, making them particularly complicated to analyze. They are not very concentrated, not very complete, or are a mixture of multiple individuals' DNA—and often, all of these conditions are true. If a DNA sample is like a fingerprint, analyzing mixed DNA samples in criminal prosecutions can often be like attempting to isolate a single person's print from a doorknob of a public building after hundreds of people have touched it. Despite the challenges in analyzing these DNA samples, prosecutors frequently introduce those analyses in trials, using tools that have not been reviewed and jargon that can mislead the jury—giving a false sense of scientific certainty to a very uncertain process. This is why it is essential that any DNA analysis tool's source code be made available for evaluation. It is critical to determine whether the software is reliable enough to be used in the legal system, and what weight its results should be given.

A Breakdown of DNA Data

To understand why DNA software analyses can be so misleading, it helps to know a tiny bit about how it works. To start, DNA sequences are commonly called genes. A more generic way to refer to a specific location in the gene sequence is a “locus” (plural “loci”). The variants of a given gene or of the DNA found at a particular locus are called “alleles.” To oversimplify, if a gene is like a highway, the numbered exits are loci, and alleles are the specific towns at each exit.

[P]rosecutors frequently introduce those analyses in trials, using tools that have not been reviewed and jargon that can mislead the jury—giving a false sense of scientific certainty to a very uncertain process.

Forensic DNA analysis typically focuses on around 13 to 20 loci and the alleles present at each locus, making up a person's DNA profile. By looking at a sufficient number of loci whose alleles vary across the population, a kind of fingerprint can be established. Put another way, knowing the specific towns and exits a driver drove past can also help you figure out which highway they drove on.
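
To make the loci-and-alleles picture concrete, here is a minimal sketch of how a simplified DNA profile might be represented and compared in code. The locus names, allele numbers, and three-locus profiles are invented for illustration only; real forensic profiles use standardized loci, and many more of them.

```python
# Hypothetical illustration only: a DNA profile as a mapping from locus to the
# pair of alleles observed there (one inherited from each parent). Locus names
# and allele numbers are made up; real kits use standardized loci.
ProfileType = dict[str, tuple[int, int]]

suspect_profile: ProfileType = {
    "LOCUS_A": (11, 14),
    "LOCUS_B": (8, 8),
    "LOCUS_C": (29, 31),
}

crime_scene_profile: ProfileType = {
    "LOCUS_A": (11, 14),
    "LOCUS_B": (8, 9),    # differs from the suspect at this locus
    "LOCUS_C": (29, 31),
}

def matching_loci(a: ProfileType, b: ProfileType) -> int:
    """Count loci at which both alleles agree (order-insensitive)."""
    shared = set(a) & set(b)
    return sum(sorted(a[locus]) == sorted(b[locus]) for locus in shared)

print(matching_loci(suspect_profile, crime_scene_profile))  # -> 2 (of 3 loci)
```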

To figure out the alleles present in a DNA sample, a scientist chops the DNA into different alleles, then uses an electric charge to draw it through a gel in a method called electrophoresis. Different alleles will travel at different rates, and the scientist can measure how far each one traveled and look up which allele corresponds to that length. The DNA is also stained with a dye, so that the more of it there is, the darker that blob will be on the gel.

Analysts infer what alleles are present based on how far they traveled through the gel, and deduce what amounts are present based on how dark the band is—which can work well in an untainted, high quality sample. Generally, the higher the concentration of cells from an individual and the less contaminated the sample by any other person’s DNA, the more accurate and reliable the generated DNA profile.

The Difficulty of Analyzing DNA Mixtures

Our DNA is found in all of our cells. The more cells that we shed, the higher the concentration of our DNA can be found, which generally also means more accuracy from DNA testing. However, our DNA can also be transferred from one object to another. So it’s possible that your DNA can be found on items you’ve never had contact with or at locations you’ve never been. For example, if you’re sitting in a doctor’s waiting room and scratch your face, your DNA may be found on the magazines on a table next to you that you never flipped through. Your DNA left on a jacket you lent a friend can transfer onto items they brush by or at locations they travel to. 

Given the ease at which DNA is deposited, it is no surprise that DNA samples from crime scenes are often a mixture of DNA from multiple individuals, or “donors.” Investigators gather DNA samples by swiping a cotton swab at the location that the perpetrator may have deposited their DNA, such as a firearm, a container of contraband, or the body of a victim. In many cases where the perpetrator’s bodily fluids are not involved, the DNA sample may only contain a small amount of the perpetrator’s DNA, which could be less than a few cells, and is likely to also contain the DNA of others. This makes trying to identify whether a person’s DNA is found in a complex DNA mixture a very difficult problem. It’s like having to figure out whether someone drove on a specific interstate when all you have is an incomplete and possibly inaccurate list of towns and exits they passed, all of which could have been from any one of the roads they used. You don’t know the number of roads they drove on, and can only guess at which towns and exits were connected. 

Running these DNA mixture samples through electrophoresis creates much noisier results, which often contain errors that indicate additional alleles at a locus or miss alleles that are present. Human analysts then decide which alleles appear dark enough in the gel to count and which are light enough to ignore. At least, that is how traditional DNA analysis worked: in a binary way, where an allele either counted or did not count as part of a specific DNA donor profile.
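
As a rough sketch of that binary, threshold-based approach, the snippet below calls an allele only if its band is dark enough; the intensity values and cutoff are invented numbers, not anything a real laboratory would rely on.

```python
# Toy illustration of the traditional, binary approach: an allele either
# "counts" or it doesn't, depending on how dark its band appears. All numbers
# here are invented; real labs use calibrated instruments and validated
# analytical thresholds.
band_intensities = {      # allele -> measured band darkness (arbitrary units)
    "allele_12": 0.92,
    "allele_15": 0.40,
    "allele_17": 0.08,    # faint band: background noise, or a minor contributor?
}

CALL_THRESHOLD = 0.25     # anything darker than this is "called" as present

called_alleles = [allele for allele, darkness in band_intensities.items()
                  if darkness >= CALL_THRESHOLD]
print(called_alleles)     # ['allele_12', 'allele_15'] -- faint allele_17 is dropped
```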

Probabilistic Genotyping Software and Their Problems 

Enter probabilistic genotyping software. The proprietors of these programs—the two biggest players are STRMix and TrueAllele—claim that their products, using statistical modeling, can determine the likelihood that a DNA profile or combinations of DNA profiles contributed to a DNA mixture, instead of the binary approach. Prosecutors often describe the analysis from these programs this way: It is X times more likely that defendant, rather than a random person, contributed to this DNA mixture sample.

However, these tools, like any statistical model, can be constructed poorly. And which assumptions are incorporated into them, and how, can cause the results to vary. They can be analogized to the election forecast models from FiveThirtyEight, The Economist, and The New York Times. They all use statistical modeling, but the final numbers differ because of the myriad design differences from each publisher. Probabilistic genotyping software is the same: each program uses statistical modeling, but the output probability is affected by how that model is built. Like the different election models, different probabilistic DNA programs take diverging approaches to which factors are considered, counteracted, or ignored, and at what thresholds. Additionally, input from human analysts, such as the hypothetical number of people who contributed to the DNA mixture, also changes the calculation. If this is less rigorous than you expected, that's exactly the point—and the problem. In our highway analogy, this is like a software program that purports to tell you how likely it is that you drove on a specific road based on a list of towns and exits you passed. Not only is the result affected by the completeness and accuracy of the list, but the map the software uses, and the data available to it, matter tremendously as well.
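
To see how much those modeling assumptions matter, here is a deliberately oversimplified, single-locus likelihood-ratio sketch. It is not how STRMix, TrueAllele, or any other real product works; the allele frequencies, drop-out rates, and model structure are all invented, purely to show that changing a single assumption changes the "X times more likely" figure a jury would hear.

```python
# A deliberately oversimplified likelihood-ratio sketch for one locus.
# Real probabilistic genotyping models many loci plus degradation, stutter,
# drop-in/drop-out, and more; every number below is invented.
def likelihood(observed, contributors, allele_freq, dropout_p):
    """P(observed alleles | assumed contributors), under a crude model: each
    contributor allele is detected unless it 'drops out', and any observed
    allele not explained by a contributor is attributed to a random person."""
    explained = {allele for person in contributors for allele in person}
    p = 1.0
    for allele in observed:
        if allele in explained:
            p *= (1 - dropout_p)                # expected allele was detected
        else:
            p *= allele_freq.get(allele, 0.01)  # needs an unknown contributor
    return p

freqs = {"11": 0.20, "14": 0.10, "8": 0.05}     # invented population frequencies
mixture = ["11", "14", "8"]                     # alleles seen in the sample
suspect = ("11", "14")

for dropout in (0.05, 0.20):                    # two plausible-sounding assumptions
    with_suspect = likelihood(mixture, [suspect], freqs, dropout)
    without_suspect = likelihood(mixture, [], freqs, dropout)
    print(f"assumed drop-out {dropout}: likelihood ratio = {with_suspect / without_suspect:.1f}")
# Same data, but the reported ratio changes with the assumption (here ~45 vs ~32).
```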

If this is less rigorous than you expected, that’s exactly the point—and the problem. 

Because of these complex variables, a probability result is always specific to how the program used was designed, the conditions at the lab, and any additional or discretionary input used during the analysis. In practice, different DNA analysis programs have produced substantially different probabilities for whether a defendant's DNA appeared in the same DNA sample, with breathtaking discrepancies that have reached the millions-fold.

And yet it is impossible to determine which result, or software, is the most accurate. There is no objective truth against which those numbers can be compared. We simply cannot know the true probability that a person contributed to a DNA mixture. In controlled testing, we know whether a person's DNA was part of a DNA mixture or not, but there is no way to figure out whether it was 100 times more likely that the donor's DNA rather than an unknown person's contributed to the mixture, or a million times more likely. And while there is no reason to assume that the tool that outputs the highest statistical likelihood is the most accurate, the software's designers may nevertheless be incentivized to program their product in a way that is more likely to output a larger number, because "1 quintillion" sounds more precise than "10,000"—especially when there is no way to objectively evaluate the accuracy.

DNA Software Review is Essential

Because of these issues, it is critical to examine any DNA software’s source code that is used in the legal system. We need to know exactly how these statistical models are built, and looking at the source code is the only way to discover non-obvious coding errors. Yet, the companies that created these programs have fought against the release of the source code—even when it would only be examined by the defendant’s legal team and be sealed under a court order. In the rare instances where the software code was reviewed, researchers have found programming errors with the potential to implicate innocent people.

Forensic DNA analyses have the whiff of science—but without source code review, it’s impossible to know whether or not they pass the smell test. Despite the opacity of their design and the impossibility of measuring their accuracy, these programs have become widely used in the legal system. EFF has challenged—and continues to challenge—the failure to disclose the source code of these programs. The continued use of these tools, the accuracy of which cannot be ensured, threatens the administration of justice and the reliability of verdicts in criminal prosecutions.

Categories
Commentary Intelwars The Public Interest Internet

Outliving Outrage on the Public Interest Internet: the CDDB Story

This is the second in our blog series on the public interest internet: past, present and future.

In our previous blog post, we discussed how in the early days of the internet, regulators feared that without strict copyright enforcement and pre-packaged entertainment, the new digital frontier would be empty of content. But the public interest internet barn-raised to fill the gap—before the fledgling digital giants commercialised and enclosed those innovations. These enclosures did not go unnoticed, however—and some worked to keep the public interest internet alive.

Compact discs (CDs) were the cutting edge of the digital revolution a decade before the web. Their adoption initially followed Lehman's rightsholder-led transition – where existing publishers led the charge into a new medium, rather than the user-led homesteading of the internet. The existing record labels maintained control of CD production and distribution, and did little to exploit the new tech—but they did profit from bringing their old back catalogues onto the new digital format. The format was immensely profitable, because everyone re-bought their existing vinyl collections to move them onto CD. Beyond the improved fidelity of CDs, the music industry had no incentive to add new functionality to CDs or their players. When CD players were first introduced, they were sold exclusively as self-contained music devices—a straight-up replacement for record players that you could plug into speakers or your hi-fi "music centre," but not much else. They were digital, but in no way online or integrated with any other digital technology.

The exception was the CD playing hardware that was incorporated into the latest multimedia PCs—a repurposing of the dedicated music playing hardware which sent the CD to the PC as a pile of digital data. With this tech, you could use CDs as a read-only data store, a fixed set of data, a “CD-ROM”; or you could insert a CD music disc, and use your desktop PC to read in and play its digital audio files through tinny desktop speakers, or headphones.

The crazy thing was that those music CDs contained raw dumps of audio, but almost nothing else. There was no bonus artist info stored on the CDs; no digital record of the CD title, no digital version of the CD’s cover image JPEG, not even a user-readable filename or two: just 74 minutes of untitled digital sound data, split into separate tracks, like its vinyl forebear. Consequently, a PC with a CD player could read and play a CD, but had no idea what it was playing. About the only additional information a computer could extract from the CD beyond the raw audio was the total number of tracks, and how long each track lasted. Plug a CD into a player or a PC, and all it could tell you was that you were now listening to Track 3 of 12.

Around about the same time as movie enthusiasts were building the IMDb, music enthusiasts were solving this problem by collectively building their own compact disc database—the CD Database (CDDB). Programmer Ti Kan wrote open source client software that would auto-run when a CD was put into a computer, and grab the number of tracks and their lengths. This client would query a public online database (designed by another coder, Steve Scherf) to see if anyone else had seen a CD with the same fingerprint. If no one had, the program would pop up a window asking the PC user to enter the album details themselves, and would upload that information to the collective store, ready for the next user to find. All it took was one volunteer to enter the album info and associate it with the unique fingerprint of track durations, and every future CDDB client owner could grab the data and display it the moment the CD was inserted, and let its user pick tracks by their name, peruse artist details, and so on.
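
The mechanism can be sketched in a few lines: derive a fingerprint from the only data a player can read off the disc (the track count and track lengths), then use it as the key into a shared, volunteer-populated database. The fingerprint function below is a made-up simplification rather than the actual CDDB/freedb disc-ID algorithm, and the album data is invented.

```python
# Toy sketch of the CDDB idea. The only things a player can read from a music
# CD are the number of tracks and their lengths, so that becomes the lookup
# key. This fingerprint is a made-up simplification, not the real disc ID.
def disc_fingerprint(track_lengths_seconds: list[int]) -> str:
    total = sum(track_lengths_seconds)
    return f"{len(track_lengths_seconds):02d}-{total:05d}-" + \
           "-".join(str(t) for t in track_lengths_seconds)

shared_db: dict[str, dict] = {}

# One volunteer who owns the disc types in the metadata once...
lengths = [214, 187, 305, 256]                     # invented track lengths
shared_db[disc_fingerprint(lengths)] = {
    "artist": "Example Artist",
    "album": "Example Album",
    "tracks": ["Intro", "Second Song", "Third Song", "Outro"],
}

# ...and every later client can turn "Track 3 of 4" into an actual title.
entry = shared_db.get(disc_fingerprint([214, 187, 305, 256]))
print(entry["tracks"][2] if entry else "Unknown disc: please enter the details")
```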

The modern internet, buffeted as it is by monopolies, exploitation, and market and regulatory failure, still allows people to organize at low cost, with high levels of informality.

When it started, most users of the CDDB had to precede much of their music-listening time with a short burst of volunteer data entry. But within months, the collective contributions of the Internet’s music fans had created a unique catalogue of current music that far exceeded the information contained even in expensive, proprietary industry databases. Deprived of any useful digital accommodations by the music industry, CD fans, armed with the user-empowering PC and the internet, built their own solution.

This story, too, does not have a happy ending. In fact, in some ways the CDDB is the most notorious tale of enclosure on the early Net. Kan and Scherf soon realised the valuable asset that they were sitting on, and along with the hosting administrator of the original database server, built it into a commercial company, just as the overseers of Cardiff's movie database had. Between 2000 and 2001, as "Gracenote", this commercial company shifted from a free service, incorporated by its many happy users into a slew of open source players, to serving hardware companies, whom it charged for a CD recognition service. It changed its client software to a closed proprietary software license, attached restrictive requirements on any code that used its API, and eventually blocked clients that did not agree to its license entirely.

The wider CDDB community was outraged, and the bitterness persisted online for years afterwards. Five years later, Scherf defended his actions in a Wired magazine interview. His explanation was the same as the IMDb founders': that finding a commercial owner and business model was the only way to fund CDDB as a viable ongoing concern. He noted that other groups of volunteers, notably an alternative service called freedb, had forked the database and client code from a point just before Gracenote locked it up. He agreed that was their right, and encouraged them to keep at it, but expressed scepticism that they would survive. "The focus and dedication required for CDDB to grow could not be found in a community effort," he told Wired. "If you look at how stagnant efforts like freedb have been, you'll see what I mean." By locking down and commercializing CDDB, Scherf said that he "fully expect[ed] our disc-recognition service to be running for decades to come."

Scherf may have overestimated the lifetime of CDs, and underestimated the persistence of free versions of the CDDB. While freedb closed last year, Gnudb, an alternative derived from freedb, continues to operate. Its far smaller set of contributors don't cover as much of the latest CD releases, but its data remains open for everyone to use—not just for the remaining CD diehards, but also as a permanent historical record of the CD era's back catalogue: its authors, its releases, and every single track. Publicly available, publicly collected, and publicly usable, in perpetuity. Whatever criticisms might be laid at the feet of this form of the public interest internet, fragility is not one of them. It hasn't changed much, which may count as stagnation to Scherf—especially compared to the multi-million dollar company that Gracenote has become. But as Gracenote itself was bought up (first by Sony, then by Nielsen), re-branded, and re-focused, its predecessor has distinctly failed to disappear.

Some Internet services do survive and prosper by becoming the largest, or by being bought by the largest. These success stories are very visible, if not organically, then because they can afford marketers and publicists. If we listen exclusively to these louder voices, our assumption would be that the story of the Internet is one of consolidation and monopolization. And if—or perhaps just when—these conglomerates go bad, their failings are just as visible.

But smaller stories, successful or not, are harder to see. When we dive into this area, things become more complicated. Public interest internet services can be engulfed and transformed into strictly commercial operations, but they don’t have to be. In fact, they can persist and outlast their commercial cousins.

And that's because the modern internet, buffeted as it is by monopolies, exploitation, and market and regulatory failure, still allows people to organize at low cost, with high levels of informality, in a way that can often be more efficient, flexible and antifragile than strictly commercial, private interest services, or the centrally planned government production of public goods.

Next week: we continue our look at music recognition, and see how public interest internet initiatives can not only hang on as long as their commercial rivals, but continue to innovate, grow, and financially support their communities.

Categories
Commentary Intelwars The Public Interest Internet

Introducing the Public Interest Internet

Say the word “internet” these days, and most people will call to mind images of Mark Zuckerberg and Jeff Bezos, of Google and Twitter: sprawling, intrusive, unaccountable. This tiny handful of vast tech corporations and their distant CEOs demand our online attention and dominate the offline headlines. 

But on the real internet, one or two clicks away from that handful of conglomerates, there remains a wider, more diverse, and more generous world. Often run by volunteers, frequently without any obvious institutional affiliation, sometimes tiny, often local, but free for everyone online to use and contribute to, this internet preceded Big Tech, and inspired the earliest, most optimistic vision of its future place in society.

When Big Tech is long gone, a better future will come from the seed of this public interest internet: seeds that are being planted now, and which need everyone to nurture them. 

The word “internet” has been so effectively hijacked by its most dystopian corners that it’s grown harder to even refer to this older element of online life, let alone bring it back into the forefront of society’s consideration. In his work documenting this space and exploring its future, academic, entrepreneur, and author Ethan Zuckerman has named it our “digital public infrastructure.” Hana Schank and her colleagues at the New America think tank have revitalized discussions around what they call “public interest technology.”  In Europe, activists, academics and public sector broadcasters talk about the benefits of the internet’s “public spaces” and improving and expanding the “public stack.” Author and activist Eli Pariser has dedicated a new venture to advancing better digital spaces—what its participants describe as the “New Public”.

Not to be outdone, we at EFF have long used the internal term: “the public interest internet.” While these names don’t quite point to exactly the same phenomenon, they all capture some aspect of the original promise of the internet. Over the last two decades, that promise largely disappeared from wider consideration.  By fading from view, it has grown underappreciated, underfunded, and largely undefended. Whatever you might call it, we see our mission to not just act as the public interest internet’s legal counsel when it is under threat, but also to champion it when it goes unrecognized. 

This blog series, we hope, will serve as a guided tour of some of the less visible parts of the modern public interest internet. None of the stories here, about organizations, collectives, and ongoing projects, have grabbed the attention of the media or congressional committees (at least, not as effectively as Big Tech and its moguls). Nonetheless, they remain just as vital a part of the digital space. They not only better represent the spirit and vision of the early internet, they underlie much of its continuing success: a renewable resource that tech monopolies and individual users alike continue to draw from.

When Big Tech is long gone, a better future will come from the seed of this public interest internet: seeds that are being planted now, and which need everyone to nurture them until they’re strong enough to sustain our future in a more open and free society. 

But before we look into the future, let’s take a look at the past, to a time when the internet was made from nothing but the public—and because of that, governments and corporations declared that it could never prosper.

Categories
Commentary Intelwars Security Education

Surveillance Self-Defense Playlist: Getting to Know Your Phone

We are launching a new Privacy Breakdown of Mobile Phones “playlist” on Surveillance Self-Defense, EFF’s online guide to defending yourself and your friends from surveillance by using secure technology and developing careful practices. This guided tour walks through the ways your phone communicates with the world, how your phone is tracked, and how that tracking data can be analyzed. We hope to reach everyone from those who may have a smartphone for the first time, to those who have had one for years and want to know more, to savvy users who are ready to level up.

The operating systems (OS) on our phones weren’t originally built with user privacy in mind or optimized fully to keep threatening services at bay. Along with the phone’s software, different hardware components have been added over time to make the average smartphone a Swiss army knife of capabilities, many of which can be exploited to invade your privacy and threaten your digital security. This new resource attempts to map out the hardware and software components, the relationships between the two, and what threats they can create. These threats can come from individual malicious hackers or organized groups all the way up to government level professionals. This guide will help users understand a wide range of topics relevant to mobile privacy, including: 

  • Location Tracking: Encompassing more than just GPS, your phone can be tracked through cellular data and WiFi as well. Find out the various ways your phone identifies your location.
  • Spying on Mobile Communications: The systems our phone calls were built on were based on a model that didn’t prioritize hiding information. That means targeted surveillance is a risk.
  • Phone Components and Sensors: A modern phone can contain several kinds of radio transmitters and receivers, including WiFi, Bluetooth, Cellular, and GPS.
  • Malware: Malicious software, or malware, can alter your phone in ways that make spying on you much easier.
  • Pros and Cons of Turning Your Phone Off: Turning your phone off can provide a simple solution to surveillance in certain cases, but the act of switching it off can itself be correlated with where and when it was turned off.
  • Burner Phones: Sometimes portrayed as a tool of criminals, burner phones are also often used by activists and journalists. Know the do’s and don’ts of having a “burner.”
  • Phone Analysis and Seized Phones: When your phone is seized and analyzed by law enforcement, certain patterns and analysis techniques are commonly used to draw conclusions about you and your phone use.

This isn’t meant to be a comprehensive breakdown of CPU architecture in phones, but rather of the capabilities that affect your privacy more frequently, whether that is making a phone call, texting, or using navigation to get to a destination you have never been to before. We hope to give the reader a bird’s-eye view of how that rectangle in your hand works, take away the mystery behind specific privacy and security threats, and empower you with information you can use to protect yourself.

EFF is grateful for the support of the National Democratic Institute in providing funding for this security playlist. NDI is a private, nonprofit, nongovernmental organization focused on supporting democracy and human rights around the world. Learn more by visiting https://NDI.org.

Categories
Commentary Intelwars Legal Analysis privacy

Foreign Intelligence Surveillance Court Rubber Stamps Mass Surveillance Under Section 702 – Again

As someone once said, "the Founders did not fight a revolution to gain the right to government agency protocols." Well, it was not just someone; it was Chief Justice John Roberts. He flatly rejected the government's claim that agency protocols could solve the Fourth Amendment violations created by police searches of our communications stored in the cloud and accessible through our phones.

Apparently, the Foreign Intelligence Surveillance Court (FISC) didn’t get the memo. That’s because, under a recently declassified decision from November 2020, the FISC again found that a series of overly complex but still ultimately swiss cheese agency protocols — that are admittedly not even being followed — resolve the Fourth Amendment problems caused by the massive governmental seizures and searches of our communications currently occurring under FISA Section 702. The annual review by the FISC is required by law — it’s supposed to ensure that both the policies and the practices of the mass surveillance under 702 are sufficient. It failed on both counts.  

The protocols themselves are inherently problematic. The law only requires that intelligence officials "reasonably believe" the "target" of an investigation to be a foreigner abroad — it is immaterial to the initial collection that there is an American, with full constitutional rights, on the other side of a communication.

Justice Roberts was concerned with a single phone seized pursuant to a lawful arrest. The FISC is apparently unconcerned when it rubber stamps mass surveillance impacting, by the government's own admission, hundreds of thousands of nonsuspect Americans.

What’s going on here?  

From where we sit, it seems clear that the FISC continues to suffer from a massive case of national security constitutional-itis. That is the affliction (not really, we made it up) where ordinarily careful judges sworn to defend the Constitution effectively ignore the flagrant Fourth Amendment violations that occur when the NSA, the FBI, and, to a lesser extent, the CIA and NCTC misuse the justification of national security to spy on Americans en masse. And this malady means that even when the agencies completely fail to follow the court's previous orders, they still get a pass to keep spying.

The FISC decision is disappointing on at least two levels. First, the protocols themselves are not sufficient to protect Americans’ privacy. They allow the government to tap into the Internet backbone and seize our international (and lots of domestic) communications as they flow by — ostensibly to see if they have been targeted. This is itself a constitutional violation, as we have long argued in our Jewel v. NSA case. We await the Ninth Circuit’s decision in Jewel on the government’s claim that this spying that everyone knows about is too secret to be submitted for real constitutional review by a public adversarial court (as opposed to the one-sided review by the rubber-stamping FISC).  

But even after that, the protocols themselves are swiss cheese when it comes to protecting Americans. At the outset, unlike traditional foreign intelligence surveillance, under Section 702, FISC judges do not authorize individualized warrants for specific targets. Rather, the role of a FISC judge under Section 702 is to approve abstract protocols that govern the Executive Branch’s mass surveillance and then review whether they have been followed.  

The protocols themselves are inherently problematic. The law only requires that intelligence officials "reasonably believe" the "target" of an investigation to be a foreigner abroad — it is immaterial to the initial collection that there is an American, with full constitutional rights, on the other side of a conversation whose communications are both seized and searched without a warrant. It is also immaterial that the individuals targeted turn out to be U.S. persons. This was one of the many problems that ultimately led to the decommissioning of the Call Detail Records program, which, despite being Congress's attempt to rein in the program that started under Section 215 of the Patriot Act, still conducted mass surveillance of communications metadata, including illegally and inadvertently collecting millions of call detail records from Americans.

Next, the protocols allow collection for any "foreign intelligence" purpose, which is a much broader scope than merely searching for terrorists. The term encompasses information that, for instance, could give the U.S. an advantage in trade negotiations. Once these communications are collected, the protocols allow the FBI to use the information for domestic criminal prosecutions if related to national security. This is what Senator Wyden and others in Congress have rightly pointed out is a "backdoor" warrantless search. And those are just a few of the problems.

While the protocols are complex and confusing, the end result is that nearly all Americans have their international communications seized initially and a huge number of them are seized and searched by the FBI, NSA, CIA and NCTC, often multiple times for various reasons, all without individual suspicion, much less a warrant.

Second, the government agencies — especially the FBI — apparently cannot be bothered to follow even these weak protocols. This means that in practice, we users don't even get that minimal protection. The FISC decision reports that the FBI has never limited its searches to just those related to national security. Instead, agents query the 702 system for investigations relating to health care fraud, transnational organized crime, violent gangs, domestic terrorism, public corruption, and bribery. And that's in just the seven FBI field offices reviewed. This is not a new problem, as the FISC notes, although the court once again seems to think that the FBI just needs to be told again to comply and to conduct proper training (which it has failed to do for years). The court notes that it is likely that other field offices also did searches for ordinary crimes, but that the FBI also failed to do proper oversight, so we just don't know how widespread the practice is.

A federal court would accept no such tomfoolery… Yet the FISC is perfectly willing to sign off on the FBI's failures and the Bureau's flagrant disregard of its own rulings for year upon year.

Next, the querying system for this sensitive information had been designed to make it hard not to search the 702-collected data, including by requiring agents to opt out (not in) to searching the 702 data and then timing out that opt-out after only thirty minutes. And even then, the agents could just toggle “yes” to search 702 collected data, with no secondary checking prior to those searches. This happened multiple times (that we know of) to allow for searches without any national security justification. The FBI also continued to improperly conduct bulk searches, which are large batch queries using multiple search terms without written justifications as required by the protocols. Even the FISC calls these searches “indiscriminate,” yet it reauthorized the program.  

In her excellent analysis of the decision, Marcy Wheeler lists out the agency excuses that the Court accepted:

  • It took time for them to make the changes in their systems
  • It took time to train everyone
  • Once everyone got trained they all got sent home for COVID 
  • Given mandatory training, personnel “should be aware” of the requirements, even if actual practice demonstrates they’re not
  • FBI doesn’t do that many field reviews
  • Evidence of violations is not sufficient evidence to find that the program inadequately protects privacy
  • The opt-out system for FISA material — which is very similar to one governing the phone and Internet dragnet at NSA until 2011 that also failed to do its job — failed to do its job
  • The FBI has always provided national security justifications for a series of violations involving their tracking system where an Agent didn’t originally claim one
  • Bulk queries have operated like that since November 2019
  • He’s concerned but will require more reporting

And the dog also ate their homework. While more reporting sounds nice, that's the same thing the court ordered the last time, and the time before that. Reporting of problems should lead to something actually being done to stop the problems.

At this point, it's just embarrassing. A federal court would accept no such tomfoolery from an impoverished criminal defendant facing years in prison. Yet the FISC is perfectly willing to sign off on the FBI and NSA failures and the agencies' flagrant disregard of its own rulings for year upon year. Not all FISC decisions are disappointing. In 2017, we were heartened that another FISC judge had been so fed up that the court issued requirements that led to the end of the "about" searching of collected upstream data and even its partial destruction. And the extra reporting requirements do give us at least a glimpse into how bad it is that we wouldn't otherwise have.

But this time the FISC has let us all down again. It’s time for the judiciary, whether a part of the FISC or not, to inoculate themselves against the problem of throwing out the Fourth Amendment whenever the Executive Branch invokes national security, particularly when the constitutional violations are so flagrant, long-standing and pervasive. The judiciary needs to recognize mass spying as unconstitutional and stop what remains of it. Americans deserve better than this charade of oversight. 

Categories
Commentary Corporate Speech Controls Intelwars

The Florida Deplatforming Law is Unconstitutional. Always has Been.

Last week, the Florida Legislature passed a bill prohibiting social media platforms from “knowingly deplatforming” a candidate (the Transparency in Technology Act, SB 7072), on pain of a fine of up to $250k per day, unless, I kid you not, the platform owns a sufficiently large theme park. 

Governor DeSantis is expected to sign it into law, as he has called for laws like this, citing social media companies' de-platforming of Donald Trump as an example of the political bias of what he called "oligarchs in Silicon Valley." The law is not just about candidates: it also bans "shadow-banning" and cancels cancel culture by prohibiting the censoring of "journalistic enterprises," with "censorship" including things like posting "an addendum" to the content, i.e., fact checks.

This law, like similar previous efforts, is mostly performative, as it almost certainly will be found unconstitutional. Indeed, the parallels with a nearly 50-year-old compelled speech precedent are uncanny. In 1974, in Miami Herald Publishing Co. v. Tornillo, the Supreme Court struck down another Florida statute that attempted to compel the publication of candidate speech.

At the time, Florida had a dusty "right of reply" law on the books, which had not really been used, giving candidates the right to demand that any newspaper that criticized them print a reply to the newspaper's charges, at no cost. The Miami Herald had criticized Florida House candidate Pat Tornillo, and refused to carry Tornillo's reply. Tornillo sued.

Tornillo lost at the trial court, but found some solace on appeal to the Florida Supreme Court. The Florida high court held that the law was constitutional, writing that the statute "enhances rather than abridges freedom of speech and press protected by the First Amendment," much like the proponents of today's new law argue.

So off the case went to the US Supreme Court. Proponents of the right of reply raised the same arguments used today—that government action was needed to ensure fairness and accuracy, because "the 'marketplace of ideas' is today a monopoly controlled by the owners of the market."

Like today, the proponents argued new technology changed everything. As the Court acknowledged in 1974, “[i]n the past half century a communications revolution has seen the introduction of radio and television into our lives, the promise of a global community through the use of communications satellites, and the specter of a ‘wired’ nation by means of an expanding cable television network with two-way capabilities.”  Today, you might say that a wired nation with two-way communications had arrived in the global community, but you can’t say the Court didn’t consider this concern.

The Court also accepted that the consolidation of major media meant “the dominant features of a press that has become noncompetitive and enormously powerful and influential in its capacity to manipulate popular opinion and change the course of events,” and acknowledged the development of what the court called “advocacy journalism,” eerily similar to the arguments raised today. 

Paraphrasing the arguments made in favor of the law, the Court wrote “The abuses of bias and manipulative reportage are, likewise, said to be the result of the vast accumulations of unreviewable power in the modern media empires. In effect, it is claimed, the public has lost any ability to respond or to contribute in a meaningful way to the debate on issues,” just like today’s proponents of the Transparency in Technology Act.

The Court was not swayed, not because this was dismissed as an issue, but because government coercion could not be the answer. “However much validity may be found in these arguments, at each point the implementation of a remedy such as an enforceable right of access necessarily calls for some mechanism, either governmental or consensual. If it is governmental coercion, this at once brings about a confrontation with the express provisions of the First Amendment.” There is much to dislike about content moderation practices, but giving the government more control is not the answer.

Even if one should decry the lack of responsibility of the media, the Court recognized “press responsibility is not mandated by the Constitution and like many other virtues it cannot be legislated.”  Accordingly, Miami Herald v. Tornillo reversed the Florida Supreme Court, and held the Florida statute compelling publication of candidates’ replies unconstitutional.

Since Tornillo, courts have consistently applied it as binding precedent, including applying Tornillo to social media and internet search engines, the very targets of the Transparency in Technology Act (unless they own a theme park). Indeed, the compelled speech doctrine has even been used to strike down other attempts to counter perceived censorship of conservative speakers.[FN1]  

With the strong parallels with Tornillo, you might wonder why the Florida Legislature would pass a law doomed to failure, costing the state the time and expense of defending it in court. Politics, of course. The legislators who passed this bill probably knew it was unconstitutional, but may have seen political value in passing the base-pleasing statute, and blaming the courts when it gets struck down. 

Politics is also the reason for the much-ridiculed exception for theme park owners. It’s actually a problem for the law itself. As the Supreme Court explained in Florida Star v BJF, carve-outs like this make the bill even more susceptible to a First Amendment challenge as under-inclusive.  Theme parks are big business in Florida, and the law’s definition of social media platform would otherwise fit Comcast (which owns Universal Studios’ theme parks), Disney, and even Legoland.  Performative legislation is less politically useful if it attacks a key employer and economic driver of your state. The theme park exception has also raised all sorts of amusing possibilities for the big internet companies to address this law by simply purchasing a theme park, which could easily be less expensive than compliance, even with the minimum 25 acres and 1 million visitors/year. Much as Section 230 Land would be high on my own must-visit list, striking the law down is the better solution.

The law is bad, and the legislature should feel bad for passing it, but this does not mean that the control that the large internet companies have on our public conversations isn't an important policy issue. As we have explained to courts considering the broader issue, if a candidate for office is suspended or banned from social media during an election, the public needs to know why, and the candidate needs a process to appeal the decision. And this is not just for politicians – more often it is marginalized communities that bear the brunt of bad content moderation decisions. It is critical that the social platform companies provide transparency, accountability and meaningful due process to all impacted speakers, in the US and around the globe, and ensure that the enforcement of their content guidelines is fair, unbiased, proportional, and respectful of all users' rights.

This is why EFF and a wide range of non-profit organizations in the internet space worked together to develop the Santa Clara Principles, which call upon social media to (1) publish the numbers of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines; (2) provide notice to each user whose content is taken down or account is suspended about the reason for the removal or suspension; and (3) provide a meaningful opportunity for timely appeal of any content removal or account suspension. 

FN1: Provisions like Transparency in Technology Act’s ban on addendums to posts (such as fact checking or link to authoritative sources) are not covered by the compelled speech doctrine, but rather fail as prior restraints on speech. We need not spend much time on that, as the Supreme Court has roundly rejected prior restraint.

Categories
Biden Biden lost his mask Big day blaze Blaze podcasts Blaze tv Blazetv Blazetv youtube Commentary conservative Conservative commentary Conservative podcast Coronavirus COVID-19 Covid-19 vaccine Dandelion Facebook.com Flower Intelwars Jill Jill Biden Joe Joe Biden Kamala Harris Mandate Mask Mask mandate news Pat Pat gray Pat gray podcast Pat gray radio Pat gray unleashed Pat gray videos Pat grey Pat grey unleashed Political commentary Radio Social commentary TALK RADIO TheBlaze Vaccine vaccines Video Youtube.com

Pat Gray: Our fearless leader, Joe Biden, lost his mask

On Friday’s show, Pat Gray played two videos which he described as “very special for a special leader.”

The first video featured President Joe Biden as he approached the microphone to address a crowd in Duluth, Georgia, to promote his first 100 days in office. The first 30 seconds of the clip showed a panicked President Biden searching for his mask. “I can’t find my mask. I’m in trouble,” Biden said to the crowd.

The second video shows President Biden and first lady Jill Biden walking toward Marine One headed for Georgia. In the clip, Biden stops to pick up a dandelion for Jill. Though the media had painted the interaction as romantic, Pat, Jeffy, and Keith had a different take. Watch the clip for more. Can’t watch? Download the podcast here.

Want more from Pat Gray?

To enjoy more of Pat’s biting analysis and signature wit as he restores common sense to a senseless world, subscribe to BlazeTV — the largest multi-platform network of voices who love America, defend the Constitution and live the American dream.

Categories
Commentary free speech Intelwars Section 230 of the Communications Decency Act

EFF at 30: Protecting Free Speech, with Senator Ron Wyden

To commemorate the Electronic Frontier Foundation’s 30th anniversary, we present EFF30 Fireside Chats. This limited series of livestreamed conversations looks back at some of the biggest issues in internet history and their effects on the modern web.

To celebrate 30 years of defending online freedom, EFF was proud to welcome Senator Ron Wyden as our second special guest in EFF’s yearlong Fireside Chat series. Senator Wyden is a longtime supporter of digital rights, and as co-author of Section 230, one of the key pieces of legislation protecting speech online, he’s a well-recognized champion of free speech. EFF’s Legal Director, Dr. Corynne McSherry, spoke with the senator about the fight to protect free expression and how Section 230, despite recent attacks, is still the “single best law for small businesses and single best law for free speech.” He also answered questions from the audience about some of the hot topics that have swirled around the legislation for the last few years. 

You can watch the full conversation here or read the transcript.

On May 5, we’ll be holding our third EFF30 Fireside Chat, on surveillance, with special guest Edward Snowden. He will be joined by EFF Executive Director Cindy Cohn, EFF Director of Engineering for Certbot Alexis Hancock, and EFF Policy Analyst Matthew Guariglia as they weigh in on surveillance in modern culture, activism, and the future of privacy. 

RSVP NOW

Section 230 and Social Movements

Senator Wyden began the fireside chat with a reminder that some of the most important, and often divisive, social issues of the last few years, from #BlackLivesMatter to the #MeToo movement, would likely be censored much more heavily on platforms without Section 230. That’s because the law gives platforms both the power to moderate as they see fit, and partial immunity from liability for what’s posted on those sites, making the speech the legal responsibility of the original speaker.

Section 230…has always been for the person who doesn’t have deep pockets

The First Amendment protects most speech online, but without Section 230, many platforms would be unable to host much of this important, but controversial speech because they would be stuck in litigation far more often. Section 230 has been essential for those who “don’t own their own TV stations” and others “without deep pockets” for getting their messages online, Wyden explained. 

[Embedded video: https://www.youtube.com/embed/ELSJofIhnRM (content served from youtube.com)]

Wyden also discussed the history of Section 230, which was passed in 1996. "[Senator Chris Cox] and I wanted to make sure that innovators and creators and people who had promising ideas and wanted to know how they were going to get them out – we wanted to make sure that this new concept known as the internet could facilitate that."

[Embedded video: https://www.youtube.com/embed/F916aJbM96Q (content served from youtube.com)]

Misconceptions Around Section 230

Wyden took aim at several of the misconceptions around 230, like the fact that the law is a benefit only for Big Tech. “One of the things that makes me angry…the one [idea] that really infuriates me, is that Section 230 is some kind of windfall for Big Tech. The fact of the matter is Big Tech’s got so much money that they can buy themselves out of any kind of legal scrape. We sure learned that when the first bill to start unraveling Section 230 passed, called SESTA/FOSTA.”

We need that fact-finding so that we make smart technology policy

[Embedded video: https://www.youtube.com/embed/twOpQY2htzs (content served from youtube.com)]

Another common misunderstanding around the law is that it mandates platforms to be “neutral.” This couldn’t be further from the truth, Wyden explained: “There’s not a single word in Section 230 that requires neutrality….The point was essentially to let ‘lots of flowers bloom.’ If you want to have a conservative platform, more power to you…If you want to have a progressive platform, more power to you.“ 

How to Think About Changes to Intermediary Liability Laws

All the positive benefits for online speech that Section 230 enables don’t mean that Section 230 is perfect, however. But before making changes to the law, Wyden suggested, “There ought to be some basic fact finding before the Congress just jumps in to making sweeping changes to speech online.” EFF Legal Director Corynne McSherry agreed wholeheartedly: “We need that fact-finding so that we make smart technology policy,” adding that we need look no further than our experience with SESTA/FOSTA and its collateral damage to prove this point. 

There are other ways to improve the online ecosystem as well. Asked for his thoughts on better ways to address problems, Senator Wyden was blunt: “The first thing we ought to do is tackle the incredible abuses in the privacy area. Every other week in this country Americans learn about what amounts to yet another privacy disaster.”

Another area where we can improve the online ecosystem is in data sales and collection. Wyden recently introduced a bill, “The Fourth Amendment is Not For Sale,” that would help rein in the problem of apps and commercial data brokers selling things like user location data.

To wrap up the discussion, Senator Wyden took some questions about potential changes to Section 230. He lambasted SESTA/FOSTA, which EFF is challenging in court on behalf of two human rights organizations, a digital library, an activist for sex workers, and a certified massage therapist, as an example of a poorly guided amendment. 

Senator Wyden pointed out that every time a proposal to amend the law comes up, there should be a rubric of several questions asked about how the change would work, and what impact it would have on users. (EFF has its own rubric for laws that would affect intermediary liability for just these purposes.)

We thank Senator Wyden for joining us to discuss free speech, Section 230, and the battle for digital rights. Please join us in the continuation of this fireside chat series on May 5 as we discuss surveillance with whistleblower Edward Snowden.

Categories
Commentary COVID-19 and Digital Rights Intelwars

No Digital Vaccine Bouncers

The U.S. is distributing more vaccines and the population is gradually becoming vaccinated. Returning to normal activity and movement has become the main focus for many Americans who want to travel or see family.

An increasingly common proposal to get there is digital proof-of-vaccination, sometimes called “Vaccine Passports.” On the surface, this may seem like a reasonable solution. But to “return to normal,” we also have to consider that inequity and problems with access are part of that normal. These proposals also require a new infrastructure and culture of doorkeepers at public places who regularly require visitors to display a token as a condition of entry. This would be a giant step towards pervasive tracking of our day-to-day movements. And these systems would create new ways for corporations to monetize our data and for thieves to steal our data.

That’s why EFF opposes new systems of digital proof-of-vaccination as a condition of going about our day-to-day lives. They’re not “vaccine passports” that will speed our way back to normal. They’re “vaccine bouncers” that will unnecessarily scrutinize us at doorways and unfairly turn many of us away.

What Are Vaccine Bouncers?

So-called “vaccine passports” are digital credentials proposed as convenient, accessible ways to store and present your medical data digitally. In this case, that data is proof that you have been vaccinated. These are not actual passports for international travel, nor are they directly related to the systems we already have in place to prove vaccination. Though the proposals vary, all of them are new ways of displaying medical data that are not typical for our society as a whole.

These schemes require the creation of a vast new electronic gatekeeping system. People will need to download a token to their phone, or in some cases may print that token and carry it with them. Public places will need to acquire devices that can read these tokens. To enter public places, people will need to display their token to a doorkeeper. Many people will be bounced away at the door, because they are not vaccinated, or they left their phone at home, or the system is malfunctioning. This new infrastructure and culture will be difficult to dismantle when we reach herd immunity.

We already have vaccination documents we need to obtain for international travel to certain countries. But even the World Health Organization (W.H.O.), the entity that issues Yellow Cards to determine if one has had a Yellow Fever vaccine, has come out against vaccine passports.

Requiring people to present their medical data to go to the grocery store, access public services, and other vital activities calls into question who will be ultimately barred from coming in. A large number of people not only in the U.S., but worldwide, do not have access to any COVID vaccines. Many others do not have access to mobile phones, or even to the printers required to create the paper QR code that is sometimes suggested as the supposed work-around.

Also, many solutions will be built by private companies offering smartphone applications, meaning they will give rise to new databases of information not protected by any privacy law, with data transmitted on a daily basis, far more frequently than submitting a one-time paper proof-of-vaccination to a school. Since we have no adequate federal data privacy law, we are relying on the pinky-promises of private companies to keep our data private and secure.

We’ve already seen mission creep with digital bouncer systems. Years ago, some bars deployed devices that scanned patrons’ identification as a condition of entry. The rationale was to quickly ascertain, and then forget, a narrow fact about patrons: whether they are old enough to buy alcohol, and thus enter the premises. Then these devices started to also collect information from patrons, which bars share with each other. Thus, we are not comforted when we hear people today say: “don’t worry, digital vaccine bouncers will only check whether a person was vaccinated, and will not also collect information about them.” Once the infrastructure is built, it requires just a few lines of code to turn digital bouncers into digital panopticons.
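
To make that point concrete, here is a minimal, purely hypothetical sketch in Python. The function and field names are invented for illustration and do not come from any real credential system; the sketch simply contrasts a door check that verifies and forgets with the same check after the “few lines of code” that turn it into a tracking system.

```python
# Hypothetical sketch only; not modeled on any real vaccine-credential system.
visit_log = []  # in version 2, a permanent record of who went where, and when

def check_in_v1(token: dict) -> bool:
    # Verify and forget: the only output is a yes/no answer at the door.
    return token.get("vaccinated") is True

def check_in_v2(token: dict, venue: str, timestamp: str) -> bool:
    # Same check as v1...
    allowed = token.get("vaccinated") is True
    # ...plus the few added lines: retain identity, place, and time for later use.
    visit_log.append({
        "name": token.get("name"),
        "venue": venue,
        "time": timestamp,
        "admitted": allowed,
    })
    return allowed
```

Nothing about the yes/no answer at the door changes between the two versions; the only difference is what happens to the data afterward.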

Temporary Measures with Long Term Consequences

When we get to an approximation of normal, what is the plan for vaccine passports? Most proposals are not clear on this point. What will become of that medical data? Will there be a push for making this a permanent part of life?

As with any massive new technological system, it will take significant time and great effort to make the system work. We’ve already seen how easy it is to evade New York’s new vaccine bouncer system, and how other digital COVID systems, due to their flaws, fail to advance public health. Even with the best efforts, by the time the bugs are worked out of a new digital vaccine system for COVID, it may not be helpful to combat the pandemic. There’s no need to rush into building a system that will only provide value to the companies that profit by building it.

Instead, our scarce resources should go to getting more people vaccinated. We are all in this together, so we should be opening up avenues of access for everyone to a better future in this pandemic. We should not be creating more issues, concerns, and barriers with experimental technology that needs to be worked out during one of the most devastating modern global crises of our time.

Categories
Commentary competition Intelwars

Fighting FLoC and Fighting Monopoly Are Fully Compatible

Are tech giants really damned if they do and damned if they don’t (protect our privacy)?

That’s a damned good question that’s been occasioned by Google’s announcement that they’re killing the invasive, tracking third-party cookie (yay!) and replacing it with FLoC, an alternative tracking scheme that will make it harder for everyone except Google to track you (uh, yay?). (You can find out if Google is FLoCing with you with our Am I FLoCed tool.)

Google’s move to kill the third-party cookie has been greeted with both cheers and derision. On the one hand, some people are happy to see the death of one of the internet’s most invasive technologies. We’re glad to see it go, too – but we’re pretty upset to see that it’s going to be replaced with a highly invasive alternative tracking technology (bad enough) that can eliminate the majority of Google’s competitors in the data-acquisition and ad-targeting sectors in a single stroke (worse). 

It’s no wonder that so many people have concluded that privacy and antitrust are on a collision course. Google says nuking the third-party cookie will help our privacy, specifically because it will remove so many of its (often more unethical) ad-tech competitors from the web. 

But privacy and competition are not in conflict. As EFF’s recent white paper demonstrated, we can have Privacy Without Monopoly. In fact, we can’t settle for anything less.

FLoC is quite a power-move for Google. Faced with growing concerns about privacy, the company proposes to solve them by making itself the protector of our privacy, walling us off from third-party tracking except when Google does it. All the advertisers that rely on non-Google ad-targeting will have to move to Google, and pay for its services, using a marketplace that it has rigged in its favor. To give credit where it is due, the move does mean that some bad actors in the digital ad space may be thwarted. But it’s a very cramped view of how online privacy should work. Google’s version of protecting our privacy is appointing itself the gatekeeper who decides when we’re spied on, while skimming from advertisers with nowhere else to go. Compare that with Apple, which just shifted the default to “no” for all online surveillance by apps, period (go, Apple!).

And while here we think Apple is better than Google, that’s not how any of this should work. The truth is, despite occasional counter-examples, the tech giants can’t be relied on to step up to provide real privacy for users when it conflicts with their business models.  The baseline for privacy should be a matter of law and basic human rights, not just a matter of corporate whim. America is long, long overdue for a federal privacy law with a private right of action. Users  must be empowered to enforce privacy accountability, instead of relying on the largesse of the giants or on overstretched civil servants. 

Just because FLoC is billed as pro-privacy and also criticized as anti-competitive, it doesn’t mean that privacy and competition aren’t compatible. To understand how that can be, first remember the reason to support competition: not for its own sake, but for what it can deliver to internet users. The benefit of well-thought-through competition is more control over our digital lives and better (not just more) choices.

Competition on its own is meaningless or even harmful: who wants companies to compete to see which one can trick or coerce you into surrendering your fundamental human rights in the most grotesque and humiliating ways at the least benefit to you? To make competition work for users, start with Competitive Compatibility and interoperability – the ability to connect new services to existing ones, with or without permission from their operators, so long as you’re helping users exercise more choice over their online lives.  A competitive internet – one dominated by interoperable services – would be one where you didn’t have to choose between your social relationships and your privacy. When all your friends are on Facebook, hanging out with them online means subjecting yourself to Facebook’s totalizing, creepy, harmful surveillance. 

But if Facebook were forced to be interoperable, then rival services that didn’t spy on you could enter the market, and you could use those services to talk to your friends who were still on Facebook (for reasons beyond your understanding). Done poorly, this could be worse for privacy; done well, it does not have to be. Interoperability is key to smashing monopoly power, and interoperability’s benefits depend on strong laws protecting privacy.

With or without interoperability, we need a strong privacy law. Tech companies unilaterally deciding what user privacy means is dangerous, even when they come up with a good answer (Apple), and especially when their answer comes packaged in a nakedly anticompetitive power grab (Google). Of course, it doesn’t help that some of the world’s largest, most powerful corporations depend on this unilateral power, and use some of their tremendous profits to fight every attempt to create a strong national privacy law that empowers users to hold them accountable.

Competition and privacy reinforce each other in technical ways, too: The lack of competition is the reason online tracking technologies all feed the same two companies’ data warehouses: these companies dominate logins, search, social media and the other areas that the people who build and maintain our digital tools need to succeed. A diverse and competitive online world is one with substantial technical hurdles to building the kinds of personal dossiers on users that today’s ad-tech companies depend on for their profitability. 

The only sense in which “pro-privacy” and “competition” are in tension is the twisted sense implied by FLoC, where “pro-privacy” means “only one company gets to track you and present who you are to others.”  

Of course that’s incompatible with competition.

(What’s more, FLoC won’t even deliver that meaningless assurance. As we note in our original post, FLoC also creates real opportunities for fingerprinting and other forms of re-identification. FLoC is anti-competitive and anti-privacy.)
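
A rough back-of-the-envelope calculation illustrates the fingerprinting worry. The numbers below are assumptions chosen for illustration, not measurements of FLoC’s actual cohort count or of real-world browser entropy; the point is that a stable cohort ID handed to every site adds a sizeable chunk of identifying information to signals sites can already observe.

```python
import math

# Illustrative assumptions, not measurements.
assumed_cohorts = 30_000                  # order-of-magnitude guess at the number of cohorts
cohort_bits = math.log2(assumed_cohorts)  # ~14.9 bits if every cohort were equally likely

# Rough, assumed entropy of a few signals a site can already see.
other_signal_bits = {"user_agent": 10, "timezone": 3, "screen_resolution": 5}

combined_bits = cohort_bits + sum(other_signal_bits.values())
print(f"cohort alone: {cohort_bits:.1f} bits, combined: {combined_bits:.1f} bits")
# ~33 bits means roughly 2**33 (about 8.6 billion) distinguishable combinations:
# enough, in principle, to single out one browser among everyone on the internet.
```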

Real privacy – less data-collection, less data-retention and less data-processing, with explicit consent when those activities take place – is perfectly compatible with competition. It’s one of the main reasons to want antitrust enforcement.

All of this is much easier to understand if you think about the issues from the perspective of users, not corporations. You can be pro-Apple (when Apple is laying waste to Facebook’s ability to collect our data) and anti-Apple (when Apple is skimming a destructive ransom from software vendors like Hey). This is only a contradiction if you think of it from Apple’s point of view – but if you think of it from the users’ point of view, there’s no contradiction at all.

We want competition because we want users to be in control of their digital lives – to have digital self-determination and choices that support that self-determination. Right now, that means that we need a strong privacy law and a competitive landscape that gives breathing space to better options than Google’s “track everything but in a slightly different way” FLoC.  

 As always, when companies have their users’ backs, EFF has the companies’ backs. And as always, the reason we get their backs is because we care about users, not companies.

We fight for the users.

Categories
Blockchain Commentary Intelwars International

Indian Government’s Plans to Ban Cryptocurrency Outright Are A Bad Idea

While Turkey hit the headlines last week with a ban on paying for items with cryptocurrency, the government of India appears to be moving towards outlawing cryptocurrency completely. An unnamed senior government official told Reuters last month that a forthcoming bill this parliamentary session would include the prohibition of the “possession, issuance, mining, trading and transferring [of] crypto-assets.” Officials have subsequently done little to dispel the concern that they are seeking a full cryptocurrency ban: in response to questions by Indian MPs about the timing and the content of a potential Cryptocurrency Act, the Finance Ministry was non-committal, beyond stating that the bill would follow “due process.” 

If rumors of a complete ban accurately describe the bill, it would be a drastic and over-reaching prohibition that would require draconian oversight and control to enforce. But it would also be in keeping with previous overreactions to cryptocurrency by regulators and politicians in India.

Indian regulators’ involvement with cryptocurrency began four years ago with concerns about consumer safety in the face of scams, Ponzi schemes, and the unclear future of many blockchain projects. The central bank issued a circular prohibiting all regulated entities, including banks, from servicing businesses dealing in virtual currencies. Nearly two years later, the ban was overturned by the Indian Supreme Court on the grounds that it amounted to disproportionate regulatory action in the absence of evidence of harm caused to the regulated entities. A subsequent report in 2019 by the Finance Ministry proposed a draft bill that would have led to a broad ban on the use of cryptocurrency. It’s this bill that commentators suspect will form the core of the new legislation.

The Indian government is worried about the use of cryptocurrency to facilitate illegal activity, but this ignores the many entirely legal uses for cryptocurrencies that already exist and that will continue to develop in the future. Cryptocurrency is naturally more censorship-resistant than many other forms of financial instruments currently available. It provides a powerful market alternative to the existing financial behemoths that exercise control over much of our online transactions today, so that websites engaged in legal (but controversial) speech have a way to receive funds when existing financial institutions refuse to serve them. Cryptocurrency innovation also holds the promise of righting other power imbalances: it can expand financial inclusion by lowering the cost of credit, offering instant transaction resolution, and enhancing customer verification processes. Cryptocurrency can help unbanked individuals get access to financial services.

If the proposed cryptocurrency bill does impose a full prohibition, as rumors suggest, the Indian government should consider, too, the enforcement regime it would have to create. Many cryptocurrencies, including Bitcoin, offer some privacy-enhancing features which make it relatively easy for the geographical location of a cryptocurrency transaction to be concealed, so while India’s cryptocurrency users would be prohibited from using local, regulated cryptocurrency services, they could still covertly join the rest of the world’s cryptocurrency markets. As the Internet and Mobile Association of India has warned, the result would be that Indian cryptocurrency transactions would move to “illicit” sites that would be far worse at protecting consumers.
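
For illustration, here is a simplified, hypothetical sketch of the fields a Bitcoin-style transaction actually carries (values are invented placeholders, and real transactions contain more detail). What matters for enforcement is what is absent: nothing in the transaction itself records who made it or where it was made.

```python
# Simplified, illustrative transaction structure; field values are placeholders.
transaction = {
    "version": 2,
    "inputs": [{
        "prev_txid": "9f3c...",                    # reference to the coins being spent
        "vout": 0,
        "script_sig": "<signature> <public key>",  # proof the spender controls them
    }],
    "outputs": [{
        "value_satoshis": 150_000,                 # amount sent
        "script_pubkey": "<conditions for spending, e.g. a key hash>",
    }],
    "locktime": 0,
}
# No name, account, IP address, or geographic field: a regulator has to look at
# network traffic or exchange records, not the transaction data, to locate a user.
```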

Moreover, if the Indian government plans to effectively police its own draconian rules, it would need to seek to block, disrupt, and spy on Internet traffic to detect or prevent cryptocurrency transactions. Those are certainly powers that the past and present Indian administrations have sought: but unless they are truly necessary and proportionate to a legitimate aim, such interference will violate international law, and, if India’s Supreme Court decides they are unreasonable, will fail once again to pass judicial muster.

The Indian government has claimed that it does want to support blockchain technology in general. In particular, the current government has promoted the idea of a “Digital Rupee”, which it expects to be placed on a statutory footing in the same bill that bans private cryptocurrencies. It’s unclear what the two actions have in common. A centrally-run digital currency has no reason to be implemented on a blockchain, a technology that is primarily needed for distributed trust consensus, and has little applicability when the government itself is providing the centralized backstop for trust. Meanwhile, legitimate companies and individuals exploring the blockchain for purposes for which it is well-suited will always fear falling afoul of the country’s criminal sanctions—which will, Reuters’ source claims, include ten-year prison sentences in its list of punishments. Such liability would be the severest disincentive to any independent investor or innovator, whether they are commercial or working in the public interest.

Addressing potential concerns around cryptocurrency by banning the entire technology would be excessive and unjust. It denies Indians access to the innovations that may come from this sector, and, if enforced at all, would require prying into Indians’ digital communications to an unnecessary and disproportionate degree.

Categories
Commentary Intelwars privacy

Senators Demand Answers on the Dangers of Predictive Policing

Predictive policing is dangerous and yet its use among law enforcement agencies is growing. Predictive policing advocates, and companies that make millions selling technology to police departments, like to say the technology is based on “data” and therefore it cannot be racially biased. But this technology will disproportionately hurt Black and other overpoliced communities, because the data was created by a criminal punishment system that is racially biased. For example, a data set of arrests, even if they are nominally devoid of any racial information, can still be dangerous by virtue of the fact that police make a disparately high number of arrests in Black neighborhoods.
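
A toy simulation makes that feedback loop visible. In the sketch below, every number is invented for illustration and no real police department or vendor product is modeled: two neighborhoods have identical underlying offense rates, patrols are allocated according to past arrest counts, and arrests can only be recorded where officers are sent.

```python
import random

true_offense_rate = {"A": 0.10, "B": 0.10}  # identical by construction
arrests = {"A": 5, "B": 1}                  # historical record, already skewed
PATROLS_PER_DAY = 10

for day in range(365):
    for _ in range(PATROLS_PER_DAY):
        # "Predictive" allocation: patrol wherever past arrests are highest.
        share_a = arrests["A"] / (arrests["A"] + arrests["B"])
        hood = "A" if random.random() < share_a else "B"
        # An arrest can only be recorded where an officer is present.
        if random.random() < true_offense_rate[hood]:
            arrests[hood] += 1

print(arrests)  # the gap widens even though both neighborhoods offend at the same rate
```

The resulting arrest data looks like objective confirmation that one neighborhood is more dangerous, when the only real difference was where officers were sent to look.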

Technology can never predict crime. Rather, it can invite police to regard with suspicion those people who were victims of crime, or live and work in places where crime has been committed in the past. 

For all these reasons and more, EFF has argued that the technology should be banned from being used by law enforcement agencies, and some cities across the United States have already begun to do so. 

Now, a group of our federal elected officials is raising concerns about the dangers of predictive policing. Sen. Ron Wyden penned a probing letter to Attorney General Garland asking about how the technology is used. He is joined by Rep. Yvette Clarke, Sen. Ed Markey, Sen. Elizabeth Warren, Sen. Jeffrey Merkley, Sen. Alex Padilla, Sen. Raphael Warnock, and Rep. Sheila Jackson Lee. 

They ask, among other things, whether the U.S. Department of Justice (DOJ) has done any legal analysis to see if the use of predictive policing complies with the 1964 Civil Rights Act. It’s clear that the Senators and Representatives are concerned with the harmful legitimizing effects “data” can have on racially biased policing: “These algorithms, which automate police decisions, not only suffer from a lack of meaningful oversight over whether they actually improve public safety, but are also likely to amplify prejudices against historically marginalized groups.”

The elected officials are also concerned about how many jurisdictions the DOJ has helped fund predictive policing in, along with the data collection required to run such programs, and whether these programs are measured in any real way for efficacy, reliability, and validity. This is important considering that many of the algorithms being used are withheld from public scrutiny on the assertion that they are proprietary and operated by private companies. Recently, an audit by the state of Utah found that the state had contracted with a company for surveillance, data analysis, and predictive AI, yet the company actually had no functioning AI and was able to hide that fact inside the black box of proprietary secrets. 

You can read more of the questions the elected officials asked of the Attorney General in the full letter, which you can find below. 

Categories
Commentary free speech Intelwars Student Privacy

Proctoring Tools and Dragnet Investigations Rob Students of Due Process

Like many schools, Dartmouth College has increasingly turned to technology to monitor students taking exams at home. And while many universities have used proctoring tools that purport to help educators prevent cheating, Dartmouth’s Geisel School of Medicine has gone dangerously further. Apparently working under an assumption of guilt, the university is in the midst of a dragnet investigation of complicated system logs, searching for data that might reveal student misconduct, without a clear understanding of how those logs can be littered with false positives. Worse still, those attempting to assert their rights have been met with a university administration more willing to trust opaque investigations of inconclusive data sets rather than their own students.

The Boston Globe explains that the medical school administration’s attempts to detect supposed cheating have become a flashpoint on campus, exemplifying a worrying trend of schools prioritizing misleading data over the word of their students. The misguided dragnet investigation has cast a shadow over the career aspirations of over twenty medical students.

What’s Wrong With Dartmouth’s Investigation

In March, Dartmouth’s Committee on Student Performance and Conduct (CSPC) accused several students of accessing restricted materials online during exams. These accusations were based on a flawed review of an entire year’s worth of the students’ log data from Canvas, the online learning platform that contains class lectures and information. This broad search was instigated by a single incident of confirmed misconduct, according to a contentious town hall between administrators and students (we’ve re-uploaded this town hall, as it is now behind a Dartmouth login screen). These logs show traffic between students’ devices and specific files on Canvas, some of which contain class materials, such as lecture slides. At first glance, the logs showing that a student’s device connected to class files would appear incriminating: timestamps indicate the files were retrieved while students were taking exams. 

But after reviewing the logs that were sent to EFF by a student advocate, it is clear to us that there is no way to determine whether this traffic happened intentionally, or instead automatically, as background requests from student devices, such as cell phones, that were logged into Canvas but not in use. In other words, rather than the files being deliberately accessed during exams, the logs could have easily been generated by the automated syncing of course material to devices logged into Canvas but not used during an exam. It’s simply impossible to know from the logs alone if a student intentionally accessed any of the files, or if the pings exist due to automatic refresh processes that are commonplace in most websites and online services. Most of us don’t log out of every app, service, or webpage on our smartphones when we’re not using them.

Much like a cell phone pinging a tower, the logs show files being pinged at short intervals and sometimes being accessed at the exact second that students are also entering information into the exam, suggesting a non-deliberate process. The logs also reveal that the files accessed are largely irrelevant to the tests in question, further indicating an automated, random process. A UCLA statistician wrote a letter explaining that even an automated process can result in multiple false-positive outcomes. Canvas’ own documentation explicitly states that the data in these logs “is meant to be used for rollups and analysis in the aggregate, not in isolation for auditing or other high-stakes analysis involving examining single users or small samples.” Given the technical realities of how these background refreshes take place, the log data alone should be nowhere near sufficient to convict a student of academic dishonesty. 
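
As an illustration of how weak this kind of evidence is, consider the sort of sanity check a reviewer could run before accusing anyone. The sketch below is ours, not Dartmouth’s or Canvas’s; the field names, file names, and thresholds are invented. It flags request patterns consistent with background syncing: evenly spaced requests for files that have nothing to do with the exam being taken.

```python
from datetime import datetime, timedelta

# Invented example entries: three requests, 15 minutes apart to the second,
# for files unrelated to the exam in progress.
log = [
    {"time": datetime(2021, 3, 1, 9, 0, 2),  "file": "week2_slides.pdf"},
    {"time": datetime(2021, 3, 1, 9, 15, 2), "file": "week7_slides.pdf"},
    {"time": datetime(2021, 3, 1, 9, 30, 2), "file": "syllabus.pdf"},
]
exam_files = {"exam3_questions.pdf"}

def looks_like_background_sync(entries, tolerance=timedelta(seconds=5)):
    if len(entries) < 2:
        return False  # not enough requests to establish a pattern
    gaps = [later["time"] - earlier["time"] for earlier, later in zip(entries, entries[1:])]
    evenly_spaced = all(abs(gap - gaps[0]) <= tolerance for gap in gaps)
    off_topic = all(entry["file"] not in exam_files for entry in entries)
    return evenly_spaced and off_topic

print(looks_like_background_sync(log))  # True: consistent with an idle device refreshing
```

A pattern like this is exactly what an idle phone left logged into Canvas would produce, which is one more reason the logs alone cannot distinguish cheating from a device sitting untouched in a backpack.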

Along with The Foundation for Individual Rights in Education (FIRE), EFF sent a letter to the Dean of the Medical School on March 30th, explaining how these background connections work and pointing out that the university has likely turned random correlations into accusations of misconduct. The Dean’s reply was that the cases are being reviewed fairly. We disagree.

It appears that the administration is the victim of confirmation bias, turning fallacious evidence of misconduct into accusations of cheating. The school has admitted in some cases that the log data appeared to have been created automatically, acquitting some students who pushed back. But other students have been sanctioned, apparently entirely based on this spurious interpretation of the log data. Many others are anxiously waiting to hear whether they will be convicted so they can begin the appeal process, potentially with legal counsel. 

These convictions carry heavy weight, leaving permanent marks on student transcripts that could make it harder for them to enter residencies and complete their medical training. At this level of education, this is not just about being accused of cheating on a specific exam. Being convicted of academic dishonesty could derail an entire career. 

University Stifles Speech After Students Express Concerns Online

Worse still, following posts from an anonymous Instagram account apparently run by students concerned about the cheating accusations and how they were being handled, the Office of  Student Affairs introduced a new social media policy.

An anonymous Instagram account detailed some concerns students have with how these cheating allegations were being handled (accessed April 7). As of April 15, the account was offline.

The policy was emailed to students on April 7 but backdated to April 5—the day the Instagram posts appeared. The new policy states that, “Disparaging other members of the Geisel UME community will trigger disciplinary review.” It also prohibits social media speech that is not “courteous, respectful, and considerate of others” or speech that is “inappropriate.” Finally, the policy warns, “Students who do not follow these expectations may face disciplinary actions including dismissal from the School of Medicine.” 

One might wonder whether such a policy is legal. Unfortunately, Dartmouth is a private institution and so not prohibited by the First Amendment from regulating student speech.

If it were a public university with a narrower ability to regulate student speech, the school would be stepping outside the bounds of its authority if it enforced the social media policy against medical school students speaking out about the cheating scandal. On the one hand, courts have upheld the regulation of speech by students in professional programs at public universities under codes of ethics and other established guidance on professional conduct. For example, in a case about a mortuary student’s posts on Facebook, the Minnesota Supreme Court held that a university may regulate students’ social media speech if the rules are “narrowly tailored and directly related to established professional conduct standards.” Similarly, in a case about a nursing student’s posts on Facebook, the Eleventh Circuit held that “professional school[s] have discretion to require compliance with recognized standards of the profession, both on and off campus, so long as their actions are reasonably related to legitimate pedagogical concerns.” On the other hand, the Sixth Circuit has held that a university can’t invoke a professional code of ethics to discipline a student when doing so is clearly a “pretext” for punishing the student for her constitutionally protected speech.

Although the Dartmouth medical school is immune from a claim that its social media policy violates the First Amendment, it seems that the policy might unfortunately be a pretext to punish students for legitimate speech. Although the policy states that the school is concerned about social media posts that are “lapses in the standards of professionalism,” the timing of the policy suggests that the administrators are sending a message to students who dare speak out against the school’s dubious allegations of cheating. This will surely have a chilling effect on the community to the extent that students will refrain from expressing their opinions about events that occur on campus and affect their future careers. The Instagram account was later taken down, indicating that the chilling effect on speech may have already occurred. (Several days later, a person not affiliated with Dartmouth, and therefore protected from reprisal, has reposted many of the original Instagram’s posts.)

Students are at the mercy of private universities when it comes to whether their freedom of speech will be respected. Students select private schools based on their academic reputation and history, and don’t necessarily think about a school’s speech policies. Private schools shouldn’t take advantage of this, and should instead seek to sincerely uphold free speech principles.

Investigations of Students Must Start With Concrete Evidence

Though this investigation wasn’t the result of proctoring software, it is part and parcel of a larger problem: educators using the pandemic as an excuse to comb for evidence of cheating in places that are far outside their technical expertise. Proctoring tools and investigations like this one flag students based on flawed metrics and misunderstandings of technical processes, rather than concrete evidence of misconduct. 

Simply put: these logs should not be used as the sole evidence for potentially ruining a student’s career. 

Proctoring software that assumes all students take tests the same way—for example, in rooms that they can control, their eyes straight ahead, fingers typing at a routine pace—puts a black mark on the record of students who operate outside the norm. One problem that has been widely documented with proctoring software is that students with disabilities (especially those with motor impairments) are consistently flagged as exhibiting suspicious behavior by software suites intended to detect cheating. Other proctoring software has flagged students for technical snafus such as device crashes and the Internet cutting out, as well as completely normal behavior that could indicate misconduct if you squint hard enough.

For the last year, we’ve seen far too many schools ignore legitimate student concerns about inadequate, or overbroad, anti-cheating software. Across the country, thousands of students, and some parents, have created petitions against the use of proctoring tools, most of which (though not all) have been ignored. Students taking the California and New York bar exams—as well as several advocacy organizations and a group of deans—advocated against the use of proctoring tools for those exams. As expected, many of those students then experienced “significant software problems” with the Examsoft proctoring software, specifically, causing some students to fail. 

Many proctoring companies have defended their dangerous, inequitable, privacy-invasive, and often flawed software tools by pointing out that humans—meaning teachers or administrators—usually have the ability to review flagged exams to determine whether or not a student was actually cheating. That defense rings hollow when those reviewing the results don’t have the technical expertise—or in some cases, the time or inclination—to properly examine them.

Similar to schools that rely heavily on flawed proctoring software, Dartmouth medical school has cast suspicion on students by relying on access logs that are far from concrete evidence of cheating. Simply put: these logs should not be used as the sole evidence for potentially ruining a student’s career. 

The Dartmouth faculty has stated that they will not continue to look at Canvas logs in the future for violations (51:45 into the video of the town hall). That’s a good step forward. We insist that the school also look beyond these logs for the students currently being investigated, and end this dragnet investigation entirely, unless additional evidence is presented.

Categories
Armageddon asteroid asteroids blaze Blaze podcasts Blaze tv Blazetv Commentary conservative Conservative commentary Facebook.com Intelwars launch NASA news Pat Pat gray Pat gray podcast Pat gray radio Pat gray unleashed Pat gray videos Pat grey unleashed Pat unleashed Pathhead Plan plans Political commentary political humor political news Politics News Punch Radio rocket sci-fi Science Fiction Shuttle Social commentary Space Space ship Spaceship TALK RADIO TheBlaze Trending news Video Youtube.com

NASA approved to test new technology that will PUNCH an asteroid headed toward Earth

It would be impressive if technology existed that could deflect an asteroid on a path toward Earth and save mankind from devastating impact. Well, NASA plans to test new technology that aims to do just that.

On Wednesday’s show, Pat Gray and Jeff Fisher detailed how NASA has approved a project called the Double Asteroid Redirection Test, the aim of which is to throw a “small” asteroid off course in October 2022.

Pat called the technology amazing and later added that it would be impressive if we have the capability to move an asteroid out of its flight path. Watch the clip for more from Pat.

Can’t watch? Download the podcast here.

Use promo code PAT to save $10 on one year of BlazeTV.

Want more from Pat Gray?

To enjoy more of Pat’s biting analysis and signature wit as he restores common sense to a senseless world, subscribe to BlazeTV — the largest multi-platform network of voices who love America, defend the Constitution and live the American dream.

Categories
Coders' Rights Project Commentary competition Intelwars privacy

553,000,000 Reasons Not to Let Facebook Make Decisions About Your Privacy

Another day, another horrific Facebook privacy scandal. We know what comes next: Facebook will argue that losing a lot of our data means bad third-party actors are the real problem, and that we should trust Facebook to make more decisions about our data to protect against them. If history is any indication, that’ll work. But if we finally wise up, we’ll respond to this latest crisis with serious action: passing America’s long-overdue federal privacy law (with a private right of action) and forcing interoperability on Facebook so that its user/hostages can escape its walled garden.

Facebook created this problem, but that doesn’t make the company qualified to fix it, nor does it mean we should trust them to do so. 

In January 2021, Motherboard reported on a bot that was selling records from a 500 million-plus person trove of Facebook data, offering phone numbers and other personal information. Facebook said the data had been scraped by using a bug that was available as early as 2016, and which the company claimed to have patched in 2019. Last week, a dataset containing 553 million Facebook users’ data—including phone numbers, full names, locations, email addresses, and biographical information—was published for free online. (It appears this is the same dataset Motherboard reported on in January.) More than half a billion current and former Facebook users are now at high risk of various kinds of fraud.

While this breach is especially ghastly, it’s also just another scandal for Facebook, a company that spent decades pursuing deceptive and anticompetitive tactics to amass largely nonconsensual dossiers on its 2.6 billion users as well as many billions of people who have no Facebook, Instagram or WhatsApp account, including many who never had an account with Facebook.

Based on past experience, Facebook’s next move is all but inevitable: after regretting this irretrievable data breach, the company will double down on the tactics that lock its users into its walled gardens, in the name of defending their privacy. That’s exactly what the company did during the Cambridge Analytica fiasco, when it used the pretense of protecting users from dangerous third-parties to lock out competitors, including those who use Facebook’s APIs to help users part ways with the service without losing touch with their friends, families, communities, and professional networks.

According to Facebook, the data in this half-billion-person breach was harvested thanks to a bug in its code. We get that. Bugs happen. That’s why we’re totally unapologetic about defending the rights of security researchers and other bug-hunters who help discover and fix those bugs. The problem isn’t that a Facebook programmer made a mistake: the problem is that this mistake was so consequential.

Facebook doesn’t need all this data to offer its users a social networking experience: it needs that data so it can market itself to advertisers, who paid the company $84.1 billion in 2020. It warehoused that data for its own benefit, in full knowledge that bugs happen, and that a bug could expose all of that data, permanently. 

Given all that, why do users stay on Facebook? For many, it’s a hostage situation: their friends, families, communities, and professional networks are on Facebook, so that’s where they have to be. Meanwhile, those friends, family members, communities, and professional networks are stuck on Facebook because their friends are there, too. Deleting Facebook comes at a very high cost.

It doesn’t have to be this way. Historically, new online services—including, at one time, Facebook—have smashed big companies’ walled gardens, allowing those former user-hostages to escape from dominant services but still exchange messages with the communities they left behind, using techniques like scraping, bots, and other honorable tools of reverse-engineering freedom fighters. 

Facebook has gone to extreme lengths to keep this from ever happening to its services. Not only has it sued rivals who gave its users the ability to communicate with their Facebook friends without subjecting themselves to Facebook’s surveillance, the company also bought out successful upstart rivals specifically because it knew it was losing users to them. It’s a winning combination: use the law to prevent rivals from giving users more control over their privacy, use the monopoly rents those locked-in users generate to buy out anyone who tries to compete with you.

Those 553,000,000 users whose lives are now an eternal open book to the whole internet never had a chance. Facebook took them hostage. It harvested their data. It bought out the services they preferred over Facebook. 

And now that 553,000,000 people should be very, very angry at Facebook, we need to watch carefully to make sure that the company doesn’t capitalize on their anger by further increasing its advantage. As governments from the EU to the U.S. to the UK consider proposals to force Facebook to open up to rivals so that users can leave Facebook without shattering their social connections, Facebook will doubtless argue that such a move will make it impossible for Facebook to prevent the next breach of this type.

Facebook is also likely to weaponize this breach in its ongoing war against accountability: namely, against a scrappy group of academics and Facebook users. Ad Observer and Ad Observatory are a pair of projects from NYU’s Online Transparency Project that scrape the ads its volunteers are served by Facebook and place them in a public repository, where scholars, researchers, and journalists can track how badly Facebook is living up to its promise to halt paid political disinformation.

Facebook argues that any scraping—even highly targeted, careful, publicly auditable scraping that holds the company to account—is an invitation to indiscriminate mass-scraping of the sort that compromised the half-billion-plus users in the current breach. Instead of scraping its ads, the company says that its critics should rely on a repository that Facebook itself provides, and trust that the company will voluntarily reveal any breaches of its own policies.

From Facebook’s point of view, a half-billion person breach is a half-billion excuses not to open its walled garden or permit accountability research into its policies. In fact, the worse the breach, the more latitude Facebook will argue it should get: “If this is what happens when we’re not being forced to allow competitors and critics to interoperate with our system, imagine what will happen if these digital trustbusters get their way!”

Don’t be fooled. Privacy does not come from monopoly. No one came down off a mountain with two stone tablets, intoning “Thou must gather and retain as much user data as is technologically feasible!” The decision to gobble up all this data and keep it around forever has very little to do with making Facebook a nice place to chat with your friends and everything to do with maximizing the company’s profits. 

Facebook’s data breach problems are the inevitable result of monopoly, in particular the knowledge that it can heap endless abuses on its users and retain them. Even if users quit Facebook, they’re likely to end up on acquired Facebook subsidiaries like Instagram or WhatsApp, and even if they don’t, Facebook will still get to maintain its dossiers on their digital lives.

Facebook’s breaches are proof that we shouldn’t trust Facebook—not that we should trust it more. Creating a problem in no way qualifies you to solve that problem. As we argued in our January white-paper, Privacy Without Monopoly: Data Protection and Interoperability, the right way to protect users is with a federal privacy law with a private right of action.

Right now, Facebook’s users have to rely on Facebook to safeguard their interests. That doesn’t just mean crossing their fingers and hoping Facebook won’t make another half-billion-user blunder—it also means hoping that Facebook won’t intentionally disclose their information to a third party as part of its normal advertising activities. 

Facebook is not qualified to decide what the limits on its own data-processing should be. Those limits should come from democratically accountable legislatures, not autocratic billionaire CEOs. America is sorely lacking a federal privacy law, particularly one that empowers internet users to sue companies that violate their privacy. A privacy law with a private right of action would mean that you wouldn’t be hostage to the self-interested privacy decisions of vast corporations, and it would mean that when they did you dirty, you could get justice on your own, without having to convince a District Attorney or Attorney General to go to bat for you.

A federal privacy law with a private right of action would open a vast possible universe of new interoperable services that plugged into companies like Facebook, allowing users to leave without cancelling their lives; these new services would have to play by the federal privacy rules, too.

That’s not what we’re going to hear from Facebook, though: in Facebookland, the answer to their abuse of our trust is to give them more of our trust; the answer to the existential crisis of their massive scale is to make them even bigger. Facebook created this problem, and they are absolutely incapable of solving it.

Categories
Commentary Economic Crisis Intelwars prepper preppers shortage shortages toilet paper toilet paper shortage Toilet Paper Shortages

Yes, America Is On The Verge Of Yet Another Toilet Paper Crisis

It is starting to happen again.  Do you remember how “panic buying” caused a massive shortage of toilet paper and other basic essentials during the early days of the COVID pandemic last year?  Well, shortages are back, but this time around different factors are at play.  We are being told that the shortages should just be temporary, and that is good news.  But this is yet another example that shows how exceedingly vulnerable global supply chains have become in this day and age.  If another major global crisis were to suddenly strike, we could quickly be facing long-term shortages of certain items that would be quite severe.  So hopefully this short-term toilet paper shortage will be a wake up call for all of us.  The following comes from Yahoo News…

Who can forget last year’s empty store shelves and pandemic panic shopping, right? Cleaning products, hand sanitizer and toilet paper were all but impossible to find, and only recently have name-brand disinfectant wipes and sprays begun showing up in stores again.

Well, brace yourself, because another product shortage is looming — and yes, it may very well become tough to find toilet paper again.

This new toilet paper shortage is not being caused by “panic buying”.  Instead, two factors have combined to create serious delays in getting products from overseas…

For six days, Egypt’s Suez Canal was blocked by a giant cargo vessel that got stuck and held up hundreds of other ships on a route that handles about 12% of world trade. The Ever Given — about as long as the Empire State Building is tall — was finally freed Monday, but analysts say it could take more than a week to clear the backup.

Meanwhile, a shortage of shipping containers also is causing problems in the cargo transport industry. Many of the factories that build the giant metal boxes are in China — and a number of them shut down in the early days of the COVID crisis, cutting supplies short.

Needless to say, it won’t just be toilet paper that is affected.

Even now, you are probably noticing empty shelves at some of your favorite retail outlets, and that will probably continue to be the case for quite some time.

Meanwhile, prices on many of our favorite consumer products are going to be rising substantially.  For example, even though we are being told that the current toilet paper shortages will just be “temporary”, the price increases that are coming will not be…

The maker of the Cottonelle, Scott and Viva brands announced Wednesday that it will hike prices on “a majority of its North America consumer products business,” including toilet paper and baby care items.

Kimberly-Clark Corporation blamed rising commodity costs for the increases.

In response to the COVID pandemic, governments all over the globe have been creating, borrowing and spending money like there is no tomorrow.

As a result, commodity prices have shot through the roof, and companies are starting to pass those price increases along to consumers.

Our politicians always seem to think that “more money” is the solution, but their reckless policies are starting to cause some very serious long-term problems.  Here in the United States, we have now seen the money supply grow at an insane pace for 11 months in a row…

In February, money supply growth hit yet another all-time high. February’s surge in money-supply growth makes February the eleventh month in a row of remarkably high growth, and came in the wake of unprecedented quantitative easing, central bank asset purchases, and various stimulus packages.

During February 2021, year-over-year (YOY) growth in the money supply was at 39.1 percent. That’s up slightly from January’s rate of 38.7 percent, and up from the February 2020 rate of 7.3 percent. Historically this is a very large surge in growth, year over year. It is also quite a reversal from the trend that only just ended in August of 2019 when growth rates were nearly bottoming out around 2 percent.
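
To make the arithmetic in that excerpt concrete, here is a minimal sketch of how a year-over-year (YOY) growth rate is computed.  The money-supply levels in it are made-up placeholders chosen only to land in the ballpark of the quoted figure; they are not the actual series the excerpt refers to.

```python
# Minimal sketch of the year-over-year (YOY) growth arithmetic cited above.
# The money-supply figures below are illustrative placeholders, not the
# actual data series from the quoted report.

def yoy_growth(current: float, year_ago: float) -> float:
    """Return year-over-year growth as a percentage."""
    return (current - year_ago) / year_ago * 100.0

# Hypothetical money-supply levels (in billions of dollars):
feb_2020 = 14_000.0   # placeholder level a year earlier
feb_2021 = 19_474.0   # placeholder level in the current month

print(f"YOY growth: {yoy_growth(feb_2021, feb_2020):.1f}%")  # ~39.1% with these placeholders
```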

We are on a hyperinflationary path, and everyone knows that this story is not going to end well.

One area where we are already seeing tremendous inflation is in home prices…

House prices rose by 11.2% from a year ago, the biggest increase since the peak of Housing Bubble 1 in 2006, according to today’s National Case-Shiller Home Price Index for January.

The index is a good measure of “house-price inflation” because it’s based on the “sales pairs” method, comparing the sales price of a house in the current month to the price of the same house when it sold previously, thereby tracking the amount of dollars it takes to buy the same house over time.
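
To illustrate the “sales pairs” idea the excerpt describes, here is a toy sketch that pairs each resale of a house with the previous sale of that same house and works out an annualized appreciation rate.  The sale records and field names are hypothetical, and the real Case-Shiller index layers interval and value weighting on top of this basic repeat-sales comparison.

```python
# Toy illustration of the "sales pairs" idea: track how many dollars it takes
# to buy the *same* house over time by pairing each resale with the prior sale
# of that property. This shows only the core idea; the actual Case-Shiller
# methodology adds interval and value weighting.
from collections import defaultdict

# Hypothetical sale records: (house_id, year, price in dollars)
sales = [
    ("A", 2015, 300_000), ("A", 2021, 420_000),
    ("B", 2018, 250_000), ("B", 2021, 310_000),
    ("C", 2019, 500_000), ("C", 2021, 560_000),
]

# Group sales by house and sort chronologically.
by_house = defaultdict(list)
for house_id, year, price in sales:
    by_house[house_id].append((year, price))

# Build sales pairs: each resale compared against the prior sale of the same house.
for house_id, history in by_house.items():
    history.sort()
    for (y0, p0), (y1, p1) in zip(history, history[1:]):
        annualized = (p1 / p0) ** (1 / (y1 - y0)) - 1
        print(f"House {house_id}: {y0}->{y1}, ~{annualized:.1%} per year")
```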

But that number doesn’t tell the entire story.

Home prices are actually going down in some core urban areas, and they are escalating dramatically in desirable rural and suburban communities around the nation.  This is something that I discussed in my previous article entitled “We Have Never Seen A Home Buying Frenzy Quite Like This”.

Those at the very top of the economic pyramid are gobbling up homes at a feverish pace, but meanwhile an increasing number of Americans are no longer able to afford homes at all.  Homelessness and poverty are absolutely exploding all over the country, and things are particularly bad on the west coast.

In Seattle, tents have taken over sidewalks and parks all over the city.  If you drive through the heart of Seattle, you will literally see homeless encampments everywhere that you go…

Fox News analyst Lawrence Jones spoke with Seattle residents about the homeless crisis and encampments taking over their neighborhood parks.

When asked how the presence of homeless tents and encampments in her neighborhood made her feel, one Seattle resident told Jones that it made her feel “sad” and “depressed.”

In one residential area of Seattle, tents filled parks and sports fields where young children would typically play.

I would like to tell you that this homelessness crisis is just temporary, but I can’t.

Unfortunately, the truth is that economic conditions in America will eventually get a whole lot worse.

The short-term toilet paper shortages that we are facing now are an inconvenience, but the truth is that they aren’t even worth comparing to the problems that we will be facing down the road.

So stock up and get prepared while you still can, because things aren’t going to be getting any easier as we roll into a very uncertain future.

***Michael’s new book entitled “Lost Prophecies Of The Future Of America” is now available in paperback and for the Kindle on Amazon.***

About the Author: My name is Michael Snyder and my brand new book entitled “Lost Prophecies Of The Future Of America” is now available on Amazon.com.  In addition to my new book, I have written four others that are available on Amazon.com including The Beginning Of The End, Get Prepared Now, and Living A Life That Really Matters. (#CommissionsEarned)  By purchasing the books you help to support the work that my wife and I are doing, and by giving it to others you help to multiply the impact that we are having on people all over the globe.  I have published thousands of articles on The Economic Collapse Blog, End Of The American Dream and The Most Important News, and the articles that I publish on those sites are republished on dozens of other prominent websites all over the globe.  I always freely and happily allow others to republish my articles on their own websites, but I also ask that they include this “About the Author” section with each article.  The material contained in this article is for general information purposes only, and readers should consult licensed professionals before making any legal, business, financial or health decisions.  I encourage you to follow me on social media on Facebook, Twitter and Parler, and any way that you can share these articles with others is a great help.  During these very challenging times, people will need hope more than ever before, and it is our goal to share the gospel of Jesus Christ with as many people as we possibly can.

