Global Law Enforcement Convention Weakens Privacy & Human Rights

The Council of Europe Cybercrime Committee’s (T-CY) recent decision to approve new international rules for law enforcement access to user data without strong privacy protections is a blow to global human rights in the digital age. The final version of the draft Second Additional Protocol to the Council of Europe’s (CoE) widely adopted Budapest Cybercrime Convention, approved by the T-CY drafting committee on May 28th, places few limits on law enforcement data collection. As such, the Protocol can endanger technology users, journalists, activists, and vulnerable populations in countries with flimsy privacy protections, and weaken everyone’s right to privacy and free expression across the globe.

The Protocol now heads to the CoE’s Parliamentary Assembly (PACE) for its opinion. PACE’s Committee on Legal Affairs and Human Rights can recommend further amendments and decide which ones will be adopted by the Standing Committee or the Plenary. Then, the Committee of Ministers will vote on whether to integrate PACE’s recommendations into the final text. The CoE plans to finalize the Protocol’s adoption by November. If adopted, the Protocol will open for signature, sometime before 2022, to any country that has signed the Budapest Convention.

Countries’ next opportunity to act comes at the signature stage, when they can reserve the right not to abide by certain provisions in the Protocol, especially Article 7 on direct cooperation between law enforcement and companies holding user data.

If countries sign the Protocol as it stands and in its entirety, it will reshape how state police access digital data from Internet companies based in other countries by prioritizing law enforcement demands, sidestepping judicial oversight, and lowering the bar for privacy safeguards. 

CoE’s Historical Commitment to Transparency Conspicuously Absent

While transparency and a strong commitment to engaging with external stakeholders have been hallmarks of CoE treaty development, the new Protocol’s drafting process lacked robust engagement with civil society. The T-CY adopted internal rules that have fostered a largely opaque process, led by public safety and law enforcement officials. T-CY’s periodic consultations with external stakeholders and the public have lacked important details, offered short response timelines, and failed to meaningfully address criticisms.

In 2018, nearly 100 public interest groups called on the CoE to allow for expert civil society input on the Protocol’s development. In 2019, the European Data Protection Board (EDPB) similarly called on T-CY to ensure “early and more proactive involvement of data protection authorities” in the drafting process, a call it felt the need to reiterate earlier this year. And when presenting the Protocol’s draft text for final public comment, T-CY provided only 2.5 weeks, a timeframe that the EDPB noted “does not allow for a timely and in-depth analysis” from stakeholders. That version of the Protocol also failed to include the explanatory text for the data protection safeguards, which was only published later, in the final version of May 28, without public consultation. Even other branches of the CoE, such as its data protection committee, have found it difficult to provide meaningful input under these conditions. 

Last week, over 40 civil society organizations called on CoE to provide an additional opportunity to comment on the final text of the Protocol. The Protocol aims to set a new global standard across countries with widely varying commitments to privacy and human rights. Meaningful input from external stakeholders including digital rights organizations and privacy regulators is essential. Unfortunately, CoE refused and will likely vote to open the Protocol for state signatures starting in November.

With limited incorporation of civil society input, it is perhaps no surprise that the final Protocol places law enforcement concerns first while human rights protections and privacy safeguards remain largely an afterthought. Instead of attempting to elevate global privacy protections, the Protocol’s central safeguards are left largely optional in an attempt to accommodate countries that lack adequate protections. As a result, the Protocol encourages global standards to harmonize at the lowest common denominator, weakening everyone’s right to privacy and free expression.

Eroding Global Protection for Online Anonymity 

The new Protocol provides few safeguards for online anonymity, posing a threat to the safety of activists, dissidents, journalists, and the free expression rights of everyday people who go online to comment on and criticize politicians and governments. When Internet companies turn subscriber information over to law enforcement, the real-world consequences can be dire. Anonymity also plays an important role in facilitating opinion and expression online and is necessary for activists and protestors around the world. Yet the new Protocol fails to acknowledge the important privacy interests it places in jeopardy and, by ensuring most of its safeguards are optional, permits police access to sensitive personal data without systematic judicial supervision. 

As a starting point, the new Protocol’s explanatory text claims that “subscriber information … does not allow precise conclusions concerning the private lives and daily habits of individuals concerned,” deeming it less intrusive than other categories of data.

This characterization is directly at odds with growing recognition that police frequently use subscriber data access to identify deeply private anonymous communications and activity. Indeed, the Court of Justice of the European Union (CJEU) recently held that letting states associate subscriber data with anonymous digital activity can constitute a ‘serious’ interference with privacy. The Protocol’s attempt to paint identification capabilities as non-intrusive even conflicts with the case law of the CoE’s own European Court of Human Rights (ECtHR). By encoding the opposite conclusion in an international protocol, the new explanatory text can deter future courts from properly recognizing the importance of online anonymity. As the ECtHR has held, doing so would “deny the necessary protection to information which might reveal a good deal about the online activity of an individual, including sensitive details of his or her interests, beliefs and intimate lifestyle.”

Articles 7 and 8 of the Protocol in particular adopt intrusive police powers while requiring few safeguards. Under Article 7, states must clear all legal obstacles to “direct cooperation” between local companies and law enforcement. Any privacy laws that prevent Internet companies from voluntarily identifying customers to foreign police without a court order are incompatible with Article 7 and must be amended. “Direct cooperation” is intended to be the primary means of accessing subscriber data, but Article 8 provides a supplementary power to force disclosure from companies that refuse to cooperate. While Article 8 does not require judicial supervision of police, countries with strong privacy protections may continue relying on their own courts when forcing a local service provider to identify customers. Both Articles 7 and 8 also allow countries to screen and refuse any subscriber data demands that might threaten a state’s essential interests. But these screening mechanisms also remain optional, and refusals are to be “strictly limited,” with the need to protect private data invoked only in “exceptional cases.” 

By leaving most privacy and human rights protections to each state’s discretion, Articles 7 and 8 permit access to sensitive identification data under conditions that the ECtHR described as “offer[ing] virtually no protection from arbitrary interference … and no safeguards against abuse by State officials.”

The Protocol’s drafters have resisted calls from civil society and privacy regulators to require some form of judicial supervision in Articles 7 and 8. Some police agencies object to reliance on the courts, arguing that judicial supervision leads to slower results. But systemic involvement of the courts is a critical safeguard when access to sensitive personal data is at stake. The Office of the Privacy Commissioner of Canada put it cogently: “Independent judicial oversight may take time, but it’s indispensable in the specific context of law enforcement investigations.” Incorporating judicial supervision as a minimum threshold for cross-border access is also feasible. Indeed, a majority of states in T-CY’s own survey require prior judicial authorization for at least some forms of subscriber data in their respective national laws. 

At a minimum, the new Protocol text is flawed for its failure to recognize the deeply private nature of anonymous online activity and the serious threat posed to human rights when State officials are allowed open-ended access to identification data. Granting states this access makes the world less free and seriously threatens free expression. Article 7’s emphasis on non-judicial ‘cooperation’ between police and Internet companies poses a particularly insidious risk, and must not form part of the final adopted Convention.

Imposing Optional Privacy Standards

Article 14, which was recently publicized for the first time, is intended to provide detailed safeguards for personal information. Many of these protections are important, imposing limits on the treatment of sensitive data, the retention of personal data, and the use of personal data in automated decision-making, particularly in countries without data protection laws. The detailed protections are complex, and civil society groups continue to unpack their full legal impact. That being said, some shortcomings are immediately evident.

Some of Article 14’s protections actively undermine privacy—for example, paragraph 14.2.a prohibits signatories from imposing any additional “generic data protection conditions” when limiting the use of personal data. Paragraph 14.1.d also strictly limits when a country’s data protection laws can prevent law enforcement-driven personal data transfers to another country. 

More generally, and in stark contrast to the Protocol’s lawful access obligations, the detailed privacy safeguards encoded in Article 14 are not mandatory and can be ignored if countries have other arrangements in place (Article 14.1). States can rely on a wide variety of agreements to bypass the Article 14 protections. The OECD is currently negotiating an agreement that might systematically displace the Article 14 protections and, under the United States Clarifying Lawful Overseas Use of Data (CLOUD) Act, the U.S. executive branch can enter into “agreements” with other states to facilitate law enforcement transfers. Paragraph 14.1.c even contemplates informal agreements that are neither binding, nor even public, meaning that countries can secretly and systematically bypass the Article 14 safeguards. No real obligations are put in place to ensure these alternative arrangements provide an adequate or even sufficient level of privacy protection. States can therefore rely on the Protocol’s law enforcement powers while using side agreements to bypass its privacy protections, a particularly troubling development given the low data protection standards of many anticipated signatories. 

The Article 14 protections are also problematic because they appear to fall short of the minimum data protection that the CJEU has required. The full list of protections in Article 14, for example, resembles the list the European Commission inserted into its ‘Privacy Shield’ agreement. Internet companies relied upon the Privacy Shield to transfer personal data from the European Union (EU) to the United States for commercial purposes until the CJEU invalidated the agreement in 2020, finding its privacy protections and remedies insufficient. Similarly, clause 14.6 limits the use of personal data in purely automated decision-making systems that would have significant adverse effects on the relevant individual interests. But the CJEU has also found that an international agreement for transferring air passenger data to Canada for public safety objectives was inconsistent with EU data protection guarantees despite the inclusion of a similar provision.

Conclusion

These and other substantive problems with the Protocol are concerning. Cross-border data access is rapidly becoming common in even routine criminal investigations, as every aspect of our lives continues its steady migration to the digital world. Instead of baking robust human rights and privacy protections into cross-border investigations, the Protocol discourages court oversight, renders most of its safeguards optional, and generally weakens privacy and freedom of expression.

Van Buren is a Victory Against Overbroad Interpretations of the CFAA, and Protects Security Researchers

The Supreme Court’s Van Buren decision today overturned a dangerous precedent and clarified the notoriously ambiguous meaning of “exceeding authorized access” in the Computer Fraud and Abuse Act, the federal computer crime law that’s been misused to prosecute beneficial and important online activity. 

The decision is a victory for all Internet users, as it affirmed that online services cannot use the CFAA’s criminal provisions to enforce limitations on how or why you use their service, including for purposes such as collecting evidence of discrimination or identifying security vulnerabilities. It also rejected the use of troubling physical-world analogies and legal theories to interpret the law, which in the past have resulted in some of its most dangerous abuses.

The Van Buren decision is especially good news for security researchers, whose work discovering security vulnerabilities is vital to the public interest but often requires accessing computers in ways that contravene terms of service. Under the Department of Justice’s reading of the law, the CFAA allowed criminal charges against individuals for any website terms of service violation. But a majority of the Supreme Court rejected the DOJ’s interpretation. And although the high court did not narrow the CFAA as much as EFF would have liked, leaving open the question of whether the law requires circumvention of a technological access barrier, it provided good language that should help protect researchers, investigative journalists, and others. 

The CFAA makes it a crime to “intentionally access[] a computer without authorization or exceed[] authorized access, and thereby obtain[] . . . information from any protected computer,” but does not define what authorization means for purposes of exceeding authorized access. In Van Buren, a former Georgia police officer was accused of taking money in exchange for looking up a license plate in a law enforcement database. This was a database he was otherwise entitled to access, and Van Buren was charged with exceeding authorized access under the CFAA. The Eleventh Circuit analysis had turned on the computer owner’s unilateral policies regarding use of its networks, allowing private parties to make EULA, TOS, or other use policies criminally enforceable. 

The Supreme Court rightly overturned the Eleventh Circuit, and held that exceeding authorized access under the CFAA does not encompass “violations of circumstance-based access restrictions on employers’ computers.” Rather, the statute’s prohibition is limited to someone who “accesses a computer with authorization but then obtains information located in particular areas of the computer—such as files, folders, or databases—that are off limits to him.” The Court adopted a “gates-up-or-down” approach: either you are entitled to access the information or you are not. If you need to break through a digital gate to get in, entry is a crime, but if you are allowed through an open gateway, it’s not a crime to be inside.

This means that private parties’ terms of service limitations on how you can use information, or for what purposes you can access it, are not criminally enforceable under the CFAA. For example, if you can look at housing ads as a user, it is not a hacking crime to pull them for your bias-in-housing research project, even if the TOS forbids it. Van Buren is really good news for port scanning, for example: so long as the computer is open to the public, you don’t have to worry about its conditions of use when scanning its ports.

While the decision was centered around the interpretation of the statute’s text, the Court bolstered its conclusion with the policy concerns raised by the amici, including a brief EFF filed on behalf of computer security researchers and organizations that employ and support them. The Court’s explanation is worth quoting in depth:

If the “exceeds authorized access” clause criminalizes every violation of a computer-use policy, then millions of otherwise law-abiding citizens are criminals. Take the workplace. Employers commonly state that computers and electronic devices can be used only for business purposes. So on the Government’s reading of the statute, an employee who sends a personal e-mail or reads the news using her work computer has violated the CFAA. Or consider the Internet. Many websites, services, and databases … authorize a user’s access only upon his agreement to follow specified terms of service. If the “exceeds authorized access” clause encompasses violations of circumstance-based access restrictions on employers’ computers, it is difficult to see why it would not also encompass violations of such restrictions on website providers’ computers. And indeed, numerous amici explain why the Government’s reading [would] criminalize everything from embellishing an online-dating profile to using a pseudonym on Facebook.

This analysis shows the Court recognized the tremendous danger of an overly broad CFAA, and explicitly rejected the Government’s arguments for retaining wide powers, tempered only by their prosecutorial discretion. 

Left Unresolved: Whether CFAA Violations Require Technical Access Limitations 

The Court’s decision was limited in one important respect. In a footnote, the Court left open the question of whether the enforceable access restriction means only “technological (or ‘code-based’) limitations on access, or instead also looks to limits contained in contracts or policies,” meaning that the opinion neither adopted nor rejected either path. EFF has argued in courts and in legislative reform efforts for many years that it’s not a computer hacking crime unless someone hacks through a technological defense.

This footnote is a bit odd, as the bulk of the majority opinion seems to point toward the law requiring someone to defeat technological limitations on access, and throws shade at criminalizing TOS violations. In most cases, the scope of your access once on a computer is defined by technology, such as an access control list or a requirement to reenter a password. Professor Orin Kerr suggested that this may have been a necessary limitation to build the six-justice majority.

Later in the Van Buren opinion, the Court rejected a Government argument that a rule against “using a confidential database for a non-law-enforcement purpose” should be treated as a criminally enforceable access restriction, different from “using information from the database for a non-law-enforcement purpose” (emphasis original). This makes sense under the “gates-up-or-down” approach adopted by the Court. Together with the policy concerns about enforcing terms of service quoted above, this helps us understand the limitation footnote: it suggests that cleverly writing a TOS will not easily turn a conditional rule about why you can access information, or what you can do with it later, into a criminally enforceable access restriction.

Nevertheless, leaving the question open means that we will have to litigate whether and under what circumstance a contract or written policy can amount to an access restriction in the years to come. For example, in Facebook v. Power Ventures, the Ninth Circuit found that a cease and desist letter removing authorization was sufficient to create a CFAA violation for later access, even though a violation of the Facebook terms alone was not. Service providers will likely argue that this is the sort of non-technical access restriction that was left unresolved by Van Buren. 

Court’s Narrow CFAA Interpretation Should Help Security Researchers

Even though the majority opinion left this important CFAA question unresolved, the decision still offers plenty of language that will be helpful in later cases on the scope of the statute. That’s because the Van Buren majority’s focus on the CFAA’s technical definitions, and the types of computer access that the law restricts, should provide guidance to lower courts that narrows the law’s reach.

This is a win because broad CFAA interpretations have often deterred or chilled important security research and investigative journalism. The CFAA put these activities in legal jeopardy, in part, because courts often struggle when using non-digital legal concepts and physical analogies to interpret the statute. Indeed, one of the principal disagreements between the Van Buren majority and dissent is whether the CFAA should be interpreted based on physical property law doctrines, such as trespass and theft.

The majority opinion ruled that, in principle, computer access is different from the physical world precisely because the CFAA contains so many technical terms and definitions. “When interpreting statutes, courts take note of terms that carry ‘technical meaning[s],’” the majority wrote. 

The rule is particularly true for the CFAA because it focuses on malicious computer use and intrusions, the majority wrote. For example, the term “access” in the context of computer use has its own specific, well established meaning: “In the computing context, ‘access’ references the act of entering a computer ‘system itself’ or a particular ‘part of a computer system,’ such as files, folders, or databases.” Based on that definition, the CFAA’s “exceeding authorized access” restriction should be limited to prohibiting “the act of entering a part of the system to which a computer user lacks access privileges.” 

The majority also recognized that the portions of the CFAA that define damage and loss are premised on harm to computer files and data, rather than general non-digital harm such as trespassing on another person’s property: “The statutory definitions of ‘damage’ and ‘loss’ thus focus on technological harms—such as the corruption of files—of the type unauthorized users cause to computer systems and data,” the Court wrote. This is important because loss and damage are prerequisites to civil CFAA claims, and the ability of private entities to enforce the CFAA has been a threat that deters security research when companies might rather their vulnerabilities remain unknown to the public. 

Because the CFAA’s definitions of loss and damages focus on harm to computer files, systems, or data, the majority wrote that they “are ill fitted, however, to remediating ‘misuse’ of sensitive information that employees may permissibly access using their computers.”

The Supreme Court’s Van Buren decision rightly limits the CFAA’s prohibition on “exceeding authorized access” to prohibiting someone from accessing particular computer files, services, or other parts of the computer that are otherwise off-limits to them. And the Court’s overturning of the Eleventh Circuit decision that permitted CFAA liability based on someone violating a website’s terms of service or an employer’s computer-use restrictions ensures that lots of important, legitimate computer use is not a crime.

But there is still more work to be done to ensure that computer crime laws are not misused against researchers, journalists, activists, and everyday internet users. As longtime advocates against overbroad interpretations of the CFAA, EFF will continue to lead efforts to push courts and lawmakers to further narrow the CFAA and similar state computer crime laws so they can no longer be misused.

If Not Overturned, a Bad Copyright Decision Will Lead Many Americans to Lose Internet Access

This post was co-written by EFF Legal Intern Lara Ellenberg

In going after internet service providers (ISPs) for the actions of just a few of their users, Sony Music, other major record labels, and music publishing companies have found a way to cut people off from the internet based on mere accusations of copyright infringement. When these music companies sued Cox Communications, an ISP, the court got the law wrong. It effectively decided that the only way for an ISP to avoid being liable for infringement by its users is to terminate a household or business’s account after a small number of accusations—perhaps only two. The court also allowed a damages formula that can lead to nearly unlimited damages, with no relationship to any actual harm suffered. If not overturned, this decision will lead to an untold number of people losing vital internet access as ISPs start to cut off more and more customers to avoid massive damages.

EFF, together with the Center for Democracy & Technology, the American Library Association, the Association of College and Research Libraries, the Association of Research Libraries, and Public Knowledge filed an amicus brief this week urging the U.S. Court of Appeals for the Fourth Circuit to protect internet subscribers’ access to essential internet services by overturning the district court’s decision.

The district court agreed with Sony that Cox is responsible when its subscribers—home and business internet users—infringe the copyright in music recordings by sharing them on peer-to-peer networks. It effectively found that Cox didn’t terminate accounts of supposedly infringing subscribers aggressively enough. An earlier lawsuit found that Cox wasn’t protected by the Digital Millennium Copyright Act’s (DMCA) safe harbor provisions that protect certain internet intermediaries, including ISPs, if they comply with the DMCA’s requirements. One of those requirements is implementing a policy of terminating “subscribers and account holders … who are repeat infringers” in “appropriate circumstances.” The court ruled in that earlier case that Cox didn’t terminate enough customers who had been accused of infringement by the music companies.

In this case, the same court found that Cox was on the hook for the copyright infringement of its customers and upheld the jury verdict of $1 billion in damages—by far the largest amount ever awarded in a copyright case.

The District Court Got the Law Wrong

When an ISP isn’t protected by the DMCA’s safe harbor provision, it can sometimes be held responsible for copyright infringement by its users under “secondary liability” doctrines. The district court found Cox liable under both varieties of secondary liability—contributory infringement and vicarious liability—but misapplied both of them, with potentially disastrous consequences.

An ISP can be contributorily liable if it knew that a customer infringed on someone else’s copyright but didn’t take “simple measures” available to it to stop further infringement. Judge O’Grady’s jury instructions wrongly implied that because Cox didn’t terminate infringing users’ accounts, it failed to take “simple measures.” But the law doesn’t require ISPs to terminate accounts to avoid liability. The district court improperly imported a termination requirement from the DMCA’s safe harbor provision (which was already knocked out earlier in the case). In fact, the steps Cox took short of termination actually stopped most copyright infringement—a fact the district court simply ignored.

The district court also got it wrong on vicarious liability. Vicarious liability comes from the common law of agency. It holds that people who are a step removed from copyright infringement (the “principal,” for example, a flea market operator) can be held liable for the copyright infringement of its “agent” (for example, someone who sells bootleg DVDs at that flea market), when the principal had the “right and ability to supervise” the agent. In this case, the court decided that because Cox could terminate accounts accused of copyright infringement, it had the ability to supervise those accounts. But that’s not how other courts have ruled. For example, the Ninth Circuit decided in 2019 that Zillow was not responsible when some of its users uploaded copyrighted photos to real estate listings, even though Zillow could have terminated those users’ accounts. In reality, ISPs don’t supervise the Internet activity of their users. That would require a level of surveillance and control that users won’t tolerate, and that EFF fights against every day.

The consequence of getting the law wrong on secondary liability here, combined with the $1 billion damage award, is that ISPs will terminate accounts more frequently to avoid massive damages, and cut many more people off from the internet than is necessary to actually address copyright infringement.

The District Court’s Decision Violates Due Process and Harms All Internet Users

Not only did the decision get the law on secondary liability wrong, it also offends basic ideas of due process. In a different context, the Supreme Court decided that civil damages can violate the Constitution’s due process requirement when the amount is excessive, especially when it fails to consider the public interests at stake. In the case against Cox, the district court ignored both the fact that a $1 billion damages award is excessive, and that its decision will cause ISPs to terminate accounts more readily and, in the process, cut off many more people from the internet than necessary.

Having robust internet access is an important public interest, but when ISPs start over-enforcing to avoid having to pay billion-dollar damages awards, that access is threatened. Millions of internet users rely on shared accounts, for example at home, in libraries, or at work. If ISPs begin to terminate accounts more aggressively, the impact will be felt disproportionately by the many users who have done nothing wrong but only happen to be using the same internet connection as someone who was flagged for copyright infringement.

More than a year after the start of the COVID-19 pandemic, it’s more obvious than ever that internet access is essential for work, education, social activities, healthcare, and much more. If the district court’s decision isn’t overturned, many more people will lose access in a time when no one can afford not to use the internet. That harm will be especially felt by people of color, poorer people, women, and those living in rural areas—all of whom rely disproportionately on shared or public internet accounts. And since millions of Americans have access to just a single broadband provider, losing access to a (shared) internet account essentially means losing internet access altogether. This loss of broadband access because of stepped-up termination will also worsen the racial and economic digital divide. This is not just unfair to internet users who have done nothing wrong, but also overly harsh in the case of most copyright infringers. Being effectively cut off from society when an ISP terminates your account is excessive, given the actual costs of non-commercial copyright infringement to large corporations like Sony Music.

It’s clear that Judge O’Grady misunderstood the impact of losing Internet access. In a hearing on Cox’s earlier infringement case in 2015, he called concerns about losing access “completely hysterical,” and compared them to “my son complaining when I took his electronics away when he watched YouTube videos instead of doing homework.” Of course, this wasn’t a valid comparison in 2015 and it rightly sounds absurd today. That’s why, as the case comes before the Fourth Circuit, we’re asking the court to get the law right and center the importance of preserving internet access in its decision.

How Your DNA—or Someone Else’s—Can Send You to Jail

Although DNA is individual to you—a “fingerprint” of your genetic code—DNA samples don’t always tell a complete story. The DNA samples used in criminal prosecutions are generally of low quality, making them particularly complicated to analyze. They are not very concentrated, not very complete, or are a mixture of multiple individuals’ DNA—and often, all of these conditions are true. If a DNA sample is like a fingerprint, analyzing mixed DNA samples in criminal prosecutions can often be like attempting to isolate a single person’s print from a doorknob of a public building after hundreds of people have touched it. Despite the challenges in analyzing these DNA samples, prosecutors frequently introduce those analyses in trials, using tools that have not been reviewed and jargon that can mislead the jury—giving a false sense of scientific certainty to a very uncertain process. This is why it is essential that any DNA analysis tool’s source code be made available for evaluation. It is critical to determine whether the software is reliable enough to be used in the legal system, and what weight its results should be given.

A Breakdown of DNA Data

To understand why DNA software analyses can be so misleading, it helps to know a tiny bit about how it works. To start, DNA sequences are commonly called genes. A more generic way to refer to a specific location in the gene sequence is a “locus” (plural “loci”). The variants of a given gene or of the DNA found at a particular locus are called “alleles.” To oversimplify, if a gene is like a highway, the numbered exits are loci, and alleles are the specific towns at each exit.

[P]rosecutors frequently introduce those analyses in trials, using tools that have not been reviewed and jargon that can mislead the jury—giving a false sense of scientific certainty to a very uncertain process.

Forensic DNA analysis typically focuses on around 13 to 20 loci and the allele present at each locus, which together make up a person’s DNA profile. By looking at a sufficient number of loci whose alleles vary across the population, a kind of fingerprint can be established. Put another way, knowing the specific towns and exits a driver passed can help you figure out which highway they drove on.
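
To make the idea of a profile concrete, here is a minimal sketch in Python of a profile represented as a mapping from locus to allele pair, compared against a single-source sample. The locus names are real CODIS loci, but the allele values are invented, and real profiles also record peak heights and must account for drop-out and other artifacts; this is an illustration of the data structure, not how any actual forensic software works.

    # A toy DNA profile: locus name -> the pair of alleles typed there.
    # Locus names are real CODIS loci; the allele values are invented.
    REFERENCE_PROFILE = {
        "D8S1179": (12, 14),
        "D21S11": (28, 30),
        "TH01": (6, 9.3),   # 9.3 is a real microvariant allele at TH01
    }

    def profiles_match(sample: dict, reference: dict) -> bool:
        """Single-source comparison: every locus typed in both must agree."""
        return all(
            sorted(sample[locus]) == sorted(alleles)
            for locus, alleles in reference.items()
            if locus in sample
        )

    # Order within a locus doesn't matter: (14, 12) is the same genotype.
    sample = {"D8S1179": (14, 12), "D21S11": (28, 30), "TH01": (6, 9.3)}
    print(profiles_match(sample, REFERENCE_PROFILE))  # True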

To figure out the alleles present in a DNA sample, a scientist chops the DNA into different alleles, then uses an electric charge to draw it through a gel in a method called electrophoresis. Different alleles will travel at different rates, and the scientist can measure how far each one traveled and look up which allele corresponds to that length. The DNA is also stained with a dye, so that the more of it there is, the darker that blob will be on the gel.

Analysts infer what alleles are present based on how far they traveled through the gel, and deduce what amounts are present based on how dark the band is—which can work well in an untainted, high quality sample. Generally, the higher the concentration of cells from an individual and the less contaminated the sample by any other person’s DNA, the more accurate and reliable the generated DNA profile.
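
That inference step can be sketched in a few lines of Python. Every number below, the size bins and the detection threshold alike, is invented for illustration; the point is that the analyst’s cutoff decides which alleles “count,” so a borderline peak silently drops out of the profile.

    # Toy "allele calling" from electrophoresis output. Real instruments
    # report fragment sizes in base pairs and peak intensities; all the
    # numbers here are invented.
    ALLELE_BINS = {            # fragment size range -> allele label
        (118, 122): 6,
        (126, 130): 8,
        (132, 136): 9.3,
    }
    DETECTION_THRESHOLD = 150  # peaks dimmer than this are ignored

    def call_alleles(peaks: list[tuple[float, float]]) -> list:
        """peaks: (fragment_size, intensity) pairs measured from the gel."""
        called = []
        for size, intensity in peaks:
            if intensity < DETECTION_THRESHOLD:
                continue  # the analyst's threshold decides what "counts"
            for (low, high), allele in ALLELE_BINS.items():
                if low <= size <= high:
                    called.append(allele)
        return called

    # A real allele whose peak measures intensity 149 simply vanishes:
    print(call_alleles([(120, 900), (128, 149), (134, 400)]))  # [6, 9.3]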

The Difficulty of Analyzing DNA Mixtures

Our DNA is found in all of our cells. The more cells we shed, the higher the concentration of DNA we leave behind, which generally also means more accurate DNA testing. However, our DNA can also be transferred from one object to another. So it’s possible for your DNA to be found on items you’ve never had contact with, or at locations you’ve never been. For example, if you’re sitting in a doctor’s waiting room and scratch your face, your DNA may be found on the magazines on the table next to you that you never flipped through. Your DNA left on a jacket you lent a friend can transfer onto items they brush by or at locations they travel to.

Given the ease with which DNA is deposited, it is no surprise that DNA samples from crime scenes are often a mixture of DNA from multiple individuals, or “donors.” Investigators gather DNA samples by swiping a cotton swab at the location where the perpetrator may have deposited their DNA, such as a firearm, a container of contraband, or the body of a victim. In many cases where the perpetrator’s bodily fluids are not involved, the DNA sample may contain only a small amount of the perpetrator’s DNA, perhaps from less than a few cells, and is likely to also contain the DNA of others. This makes trying to identify whether a person’s DNA is found in a complex DNA mixture a very difficult problem. It’s like having to figure out whether someone drove on a specific interstate when all you have is an incomplete and possibly inaccurate list of towns and exits they passed, all of which could have been from any one of the roads they used. You don’t know the number of roads they drove on, and can only guess at which towns and exits were connected.

Running these DNA mixture samples through electrophoresis creates much noisier results, which often contain errors that suggest additional alleles at a locus or mask alleles that are present. Human analysts then decide which alleles appear dark enough in the gel to count and which are light enough to ignore. At least, that is how traditional DNA analysis worked, in a binary way: an allele either counted or did not count as part of a specific DNA donor profile.

Probabilistic Genotyping Software and Their Problems 

Enter probabilistic genotyping software. The proprietors of these programs—the two biggest players are STRmix and TrueAllele—claim that their products, using statistical modeling, can determine the likelihood that a DNA profile or combination of DNA profiles contributed to a DNA mixture, instead of taking the binary approach. Prosecutors often describe the analysis from these programs this way: it is X times more likely that the defendant, rather than a random person, contributed to this DNA mixture sample.

However, these tools, like any statistical model, can be constructed poorly. Which assumptions are incorporated into them, and how, can cause the results to vary. They can be analogized to the election forecast models from FiveThirtyEight, The Economist, and The New York Times. They all use statistical modeling, but the final numbers differ because of the myriad design differences from each publisher. Probabilistic genotyping software is the same: the programs all use statistical modeling, but the output probability is affected by how each model is built. Like the different election models, different probabilistic DNA software takes diverging approaches to which factors are considered, counteracted, or ignored, and at what thresholds. Additionally, input from human analysts, such as the hypothetical number of people who contributed to the DNA mixture, also changes the calculation. If this is less rigorous than you expected, that’s exactly the point—and the problem. In our highway analogy, this is like a software program that purports to tell you how likely it is that you drove on a specific road based on a list of towns and exits you passed. Not only is the result affected by the completeness and accuracy of the list, but the map the software uses, and the data available to it, matter tremendously as well.

If this is less rigorous than you expected, that’s exactly the point—and the problem. 
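
To see how modeling choices drive the output, consider a deliberately simplified likelihood ratio for a clean, single-source sample, where the ratio reduces to the reciprocal of the random match probability. Everything here, including both allele-frequency tables, is invented, and real probabilistic genotyping must additionally model mixtures, drop-out, and peak heights. Even in this toy version, swapping one frequency table for another changes the reported number roughly fifteen-fold.

    # Toy likelihood ratio (LR) for a perfect SINGLE-SOURCE match:
    # LR = P(evidence | suspect is source) / P(evidence | random person).
    # With a perfect match the numerator is 1, so LR = 1 / random match
    # probability. All allele frequencies below are invented.

    def genotype_frequency(alleles, freqs):
        a, b = alleles
        return freqs[a] ** 2 if a == b else 2 * freqs[a] * freqs[b]

    def likelihood_ratio(profile, locus_freqs):
        rmp = 1.0  # random match probability, multiplied across loci
        for locus, alleles in profile.items():
            rmp *= genotype_frequency(alleles, locus_freqs[locus])
        return 1.0 / rmp

    profile = {"TH01": (6, 9.3), "D8S1179": (12, 14)}

    freqs_a = {"TH01": {6: 0.23, 9.3: 0.30}, "D8S1179": {12: 0.14, 14: 0.20}}
    freqs_b = {"TH01": {6: 0.10, 9.3: 0.15}, "D8S1179": {12: 0.08, 14: 0.11}}

    print(round(likelihood_ratio(profile, freqs_a)))  # 129
    print(round(likelihood_ratio(profile, freqs_b)))  # 1894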

Because of these complex variables, a probability result is always specific to how the program used was designed, the conditions at the lab, and any additional or discretionary input used during the analysis. In practice, different DNA analysis programs have produced substantially different probabilities for whether a defendant’s DNA appeared in the same DNA sample, with breathtaking discrepancies that have reached the millions-fold.

And yet it is impossible to determine which result, or software, is the most accurate. There is no objective truth against which those numbers can be compared. We simply cannot know what the probability that a person contributed to a DNA mixture is. In controlled testing, we know whether a person’s DNA was part of a DNA mixture or not, but there is no way to figure out whether it was 100 times more likely that the donor’s DNA rather than an unknown person’s contributed to the mixture, or a million times more likely. And while there is no reason to assume that the tool that outputs the highest statistical likelihood is the most accurate, the software’s designers may nevertheless be incentivized to program their product in a way that is more likely to output a larger number, because “1 quintillion” sounds more precise than “10,000”—especially when there is no way to objectively evaluate the accuracy.

DNA Software Review is Essential

Because of these issues, it is critical to examine any DNA software’s source code that is used in the legal system. We need to know exactly how these statistical models are built, and looking at the source code is the only way to discover non-obvious coding errors. Yet, the companies that created these programs have fought against the release of the source code—even when it would only be examined by the defendant’s legal team and be sealed under a court order. In the rare instances where the software code was reviewed, researchers have found programming errors with the potential to implicate innocent people.

Forensic DNA analyses have the whiff of science—but without source code review, it’s impossible to know whether or not they pass the smell test. Despite the opacity of their design and the impossibility of measuring their accuracy, these programs have become widely used in the legal system. EFF has challenged—and continues to challenge—the failure to disclose the source code of these programs. The continued use of these tools, the accuracy of which cannot be ensured, threatens the administration of justice and the reliability of verdicts in criminal prosecutions.

Foreign Intelligence Surveillance Court Rubber Stamps Mass Surveillance Under Section 702 – Again

As someone once said, “the Founders did not fight a revolution to gain the right to government agency protocols.” Well, it was not just anyone: it was Chief Justice John Roberts. He flatly rejected the government’s claim that agency protocols could solve the Fourth Amendment violations created by police searches of our communications stored in the cloud and accessible through our phones.

Apparently, the Foreign Intelligence Surveillance Court (FISC) didn’t get the memo. That’s because, under a recently declassified decision from November 2020, the FISC again found that a series of overly complex but ultimately Swiss-cheese agency protocols — which are admittedly not even being followed — resolve the Fourth Amendment problems caused by the massive governmental seizures and searches of our communications currently occurring under FISA Section 702. The annual review by the FISC is required by law — it’s supposed to ensure that both the policies and the practices of mass surveillance under Section 702 are sufficient. It failed on both counts.

The protocols themselves are inherently problematic. The law only requires that intelligence officials “reasonably believe” the “target” of an investigation to be a foreigner abroad — it is immaterial to the initial collection that there is an American, with full constitutional rights, on the other side of a communication.

Chief Justice Roberts was concerned with a single phone seized pursuant to a lawful arrest. The FISC is apparently unconcerned when it rubber-stamps mass surveillance impacting, by the government’s own admission, hundreds of thousands of non-suspect Americans.

What’s going on here?  

From where we sit, it seems clear that the FISC continues to suffer from a massive case of national security constitutional-itis. That is the affliction (not really, we made it up) in which ordinarily careful judges sworn to defend the Constitution effectively ignore the flagrant Fourth Amendment violations that occur when the NSA, the FBI, and (to a lesser extent) the CIA and NCTC misuse the justification of national security to spy on Americans en masse. And this malady means that even when the agencies completely fail to follow the court’s previous orders, they still get a pass to keep spying.

The FISC decision is disappointing on at least two levels. First, the protocols themselves are not sufficient to protect Americans’ privacy. They allow the government to tap into the Internet backbone and seize our international (and lots of domestic) communications as they flow by — ostensibly to see if they have been targeted. This is itself a constitutional violation, as we have long argued in our Jewel v. NSA case. We await the Ninth Circuit’s decision in Jewel on the government’s claim that this spying that everyone knows about is too secret to be submitted for real constitutional review by a public adversarial court (as opposed to the one-sided review by the rubber-stamping FISC).  

But even after that, the protocols themselves are swiss cheese when it comes to protecting Americans. At the outset, unlike traditional foreign intelligence surveillance, under Section 702, FISC judges do not authorize individualized warrants for specific targets. Rather, the role of a FISC judge under Section 702 is to approve abstract protocols that govern the Executive Branch’s mass surveillance and then review whether they have been followed.  

The protocols themselves are inherently problematic. The law only requires that intelligence officials “reasonably believe” the “target” of an investigation to be a foreigner abroad — it is immaterial to the initial collection that there is an American, with full constitutional rights, on the other side of a conversation whose communications are both seized and searched without a warrant. It is also immaterial that the individuals targeted turn out to be U.S. persons. This was one of the many problems that ultimately led to the decommissioning of the Call Detail Records program, which, despite being Congress’s attempt to rein in the surveillance that began under Section 215 of the Patriot Act, still enabled mass surveillance of communications metadata, including the illegal, inadvertent collection of millions of call detail records of American persons.

Next, the protocols allow collection for any “foreign intelligence” purpose, which is a much broader scope than merely searching for terrorists. The term encompasses information that, for instance, could give the U.S. an advantage in trade negotiations. Once these communications are collected, the protocols allow the FBI to use the information for domestic criminal prosecutions if related to national security. This is what Senator Wyden and others in Congress have rightly called a “backdoor” warrantless search. And those are just a few of the problems.

While the protocols are complex and confusing, the end result is that nearly all Americans have their international communications seized initially and a huge number of them are seized and searched by the FBI, NSA, CIA and NCTC, often multiple times for various reasons, all without individual suspicion, much less a warrant.

Second, the government agencies — especially the FBI — apparently cannot be bothered to follow even these weak protocols. This means that in practice, we users don’t even get that minimal protection. The FISC decision reports that the FBI has never limited its searches to just those related to national security. Instead, agents query the 702 system for investigations relating to health care fraud, transnational organized crime, violent gangs, domestic terrorism, public corruption, and bribery. And that’s in just the seven FBI field offices reviewed. This is not a new problem, as the FISC notes, although the court once again seems to think that the FBI just needs to be told again to follow the rules and to conduct proper training (something the Bureau has failed to do for years). The court notes that it is likely that other field offices also ran searches for ordinary crimes, but because the FBI also failed to conduct proper oversight, we simply don’t know how widespread the practice is.

A federal court would accept no such tomfoolery … Yet the FISC is perfectly willing to sign off on the FBI’s failures and the Bureau’s flagrant disregard of its own rulings for year upon year.

Next, the querying system for this sensitive information had been designed to make it hard not to search the 702-collected data, including by requiring agents to opt out (not in) of searching the 702 data and then timing out that opt-out after only thirty minutes. And even then, agents could simply toggle “yes” to search 702-collected data, with no secondary check prior to those searches. Multiple times (that we know of), this allowed searches without any national security justification. The FBI also continued to improperly conduct bulk searches, which are large batch queries using multiple search terms without the written justifications required by the protocols. Even the FISC calls these searches “indiscriminate,” yet it reauthorized the program.

In her excellent analysis of the decision, Marcy Wheeler lists out the agency excuses that the Court accepted:

  • It took time for them to make the changes in their systems
  • It took time to train everyone
  • Once everyone got trained they all got sent home for COVID 
  • Given mandatory training, personnel “should be aware” of the requirements, even if actual practice demonstrates they’re not
  • FBI doesn’t do that many field reviews
  • Evidence of violations is not sufficient evidence to find that the program inadequately protects privacy
  • The opt-out system for FISA material — which is very similar to one governing the phone and Internet dragnet at NSA until 2011 that also failed to do its job — failed to do its job
  • The FBI has always provided national security justifications for a series of violations involving their tracking system where an Agent didn’t originally claim one
  • Bulk queries have operated like that since November 2019
  • He’s concerned but will require more reporting

And the dog also ate their homework. While more reporting sounds nice, that’s the same thing the court ordered last time, and the time before that. Reporting of problems should lead to something actually being done to stop the problems.

At this point, it’s just embarrassing. A federal court would accept no such tomfoolery from an impoverished criminal defendant facing years in prison. Yet the FISC is perfectly willing to sign off on the FBI and NSA failures and the agencies’ flagrant disregard of its own rulings for year upon year. Not all FISC decisions are disappointing. In 2017, we were heartened that another FISC judge was so fed up that the court issued requirements that led to the end of “about” searching of collected upstream data and even its partial destruction. And the extra reporting requirements do at least give us a glimpse, which we wouldn’t otherwise have, into how bad things are.

But this time the FISC has let us all down again. It’s time for the judiciary, whether part of the FISC or not, to inoculate itself against the habit of throwing out the Fourth Amendment whenever the Executive Branch invokes national security, particularly when the constitutional violations are so flagrant, long-standing, and pervasive. The judiciary needs to recognize mass spying as unconstitutional and stop what remains of it. Americans deserve better than this charade of oversight.

Your Service Provider’s Terms of Service Shouldn’t Overrule Your Fourth Amendment Rights

Last week, EFF, the ACLU, and the ACLU of Minnesota filed an amicus brief in State v. Pauli, a case in the Minnesota Supreme Court, where we argue that cloud storage providers’ terms of service (TOS) can’t take away your Fourth Amendment rights. This is the first case on this important issue to reach a state supreme court. If the lower courts’ rulings stand, anyone in Minnesota who violated any term of a provider’s TOS could lose Fourth Amendment protections over all the files in their account.

The facts of the case are a little hazy, but at some point, Dropbox identified video files in Mr. Pauli’s account as child pornography and submitted the files to the National Center for Missing and Exploited Children (NCMEC), a private, quasi-governmental entity created by statute that works closely with law enforcement on child exploitation issues. After viewing the files, an NCMEC employee forwarded them, along with a report, to the Minnesota Bureau of Criminal Apprehension. This ultimately led to Pauli’s indictment on child pornography charges. Pauli challenged the search, but the trial court held that Dropbox’s TOS—which notified Pauli that Dropbox could monitor his account and disclose information to third parties if it believed such disclosure was necessary to comply with the law—nullified Pauli’s expectation of privacy in the video files. After the appellate court agreed, Pauli petitioned the state supreme court for review.

The lower courts’ analysis is simply wrong. Under this logic, your Fourth Amendment rights rise or fall based on unilateral contracts with your service providers—contracts that none of us read or negotiate but all of us must agree to so that we can use services that are a necessary part of daily life. As we argued in our brief, a company’s TOS should not dictate your constitutional rights, because terms of service are rules about the relationship between you and your service provider—not you and the government.

Companies draft terms of service to govern how their platforms may be used, and the terms of these contracts are extremely broad. Companies’ TOS control what kind of content you can post, how you can use the platform, and how platforms can protect themselves against fraud and other damage. Actions that could violate a company’s TOS include not just criminal activity, such as possessing child pornography, but also—as defined solely by the provider—actions like uploading content that defames someone or contains profanity, sharing a copyrighted article without permission from the copyright holder, or marketing your small business to all of your friends without their advance consent. While some might find activities such as these objectionable or annoying, they shouldn’t justify the government ignoring your Fourth Amendment right to privacy in your files simply because you store them in the cloud.

Given the vast amount of storage many service providers offer (most offer up to 2 terabytes for a small annual fee), accounts can hold tens of thousands of private and personal files, including photos, messages, diaries, medical records, legal data, and videos—each of which could reveal intimate details about our private and professional lives. Storing these records in the cloud with a service provider allows users to free up space on their personal devices, access their files from anywhere, and share (or not share) their files with others. The convenience and cost savings offered by commercial third-party cloud-storage providers means that very few of us would take the trouble to set up our own server to try to achieve privately all that we can do with our data when we could store it with a commercial service provider. But this also means that the only way to take advantage of this convenience is if we agree to a company’s TOS.

And several billion of us do agree every day. Since its advent in 2007, Dropbox’s user-base has soared to more than 700 million registered users. Apple offers free iCloud storage to users of its more than 1.5 billion active phones, tablets, laptops, and other devices around the world. And Google’s suite of cloud services—which includes both Gmail and Google Drive (offering access to stored and shareable documents, spreadsheets, photos, slide presentations, videos, and more)—enjoy 2 billion monthly active users. These users would be shocked to discover that by agreeing to their providers’ TOS, they could be giving up an expectation of privacy in their most private records.

In 2018, in Carpenter v. United States, all nine justices on the Supreme Court agreed that even if we store electronic equivalents of our Fourth Amendment-protected “papers” and “effects” with a third-party provider, we still retain privacy interests in those records. These constitutional rights would be meaningless, however, if they could be ignored simply because a user agreed to and then somehow violated their provider’s TOS.

The appellate court’s ruling in Pauli allows private agreements to trump bedrock Fourth Amendment guarantees for private communications and cloud-stored records. The ruling affects far more than child pornography cases: anyone who violated any term of a provider’s TOS could lose Fourth Amendment protections over all the files in their account.

We hope the Minnesota Supreme Court will reject such a sweeping invalidation of constitutional rights. We look forward to the court’s decision.

Categories
Fair Use Intelwars Legal Analysis

Victory for Fair Use: The Supreme Court Reverses the Federal Circuit in Oracle v. Google

In a win for innovation, the U.S. Supreme Court has held that Google’s use of certain Java Application Programming Interfaces (APIs) is a lawful fair use. In doing so, the Court reversed the previous rulings by the Federal Circuit and recognized that copyright only promotes innovation and creativity when it provides breathing room for those who are building on what has come before.

This decision gives more legal certainty to software developers’ common practice of using, re-using, and re-implementing software interfaces written by others, a custom that underlies most of the internet and personal computing technologies we use every day.

To briefly summarize over ten years of litigation: Oracle claims a copyright on the Java APIs—essentially names and formats for calling computer functions—and argues that Google infringed that copyright by using (reimplementing) certain Java APIs in the Android OS. When it created Android, Google wrote its own set of basic functions similar to Java (its own implementing code). But in order to allow developers to write their own programs for Android, Google used certain specifications of the Java APIs (sometimes called the “declaring code”).
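
To make the distinction concrete, here is a simplified, purely illustrative sketch in Java, built around the java.lang.Math.max function that featured in the litigation. The declaring code is the signature programmers already know and call; the implementing code is the body that actually performs the task, which Google wrote itself:

    // Illustrative sketch only, not Oracle's or Google's actual code.
    public final class MyMath {
        // Declaring code: mirrors the familiar Java API signature, so
        // programmers' existing knowledge and habits still work.
        public static int max(int x, int y) {
            // Implementing code: independently written instructions
            // that carry out the task the declaration names.
            return (x >= y) ? x : y;
        }
    }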

APIs provide a common language that lets programs talk to each other. They also let programmers operate with a familiar interface, even on a competitive platform. It would strike at the heart of innovation and collaboration to declare them copyrightable.

EFF filed numerous amicus briefs in this case explaining why the APIs should not be copyrightable and why, in any event, it is not infringement to use them in the way Google did. As we’ve explained before, the two Federal Circuit opinions are a disaster for innovation in computer software. Its first decision—that APIs are entitled to copyright protection—ran contrary to the views of most other courts and the long-held expectations of computer scientists. Indeed, excluding APIs from copyright protection was essential to the development of modern computers and the internet.

Then the second decision made things worse. The Federal Circuit’s first opinion had at least held that a jury should decide whether Google’s use of the Java APIs was fair, and in fact a jury did just that. But Oracle appealed again, and in 2018 the same three Federal Circuit judges reversed the jury’s verdict and held that Google had not engaged in fair use as a matter of law.

Fortunately, the Supreme Court agreed to review the case. In a 6-2 decision, Justice Breyer explained why Google’s use of the Java APIs was a fair use as a matter of law. First, the Court discussed some basic principles of the fair use doctrine, writing that fair use “permits courts to avoid rigid application of the copyright statute when, on occasion, it would stifle the very creativity which that law is designed to foster.”

Furthermore, the court stated:

Fair use “can play an important role in determining the lawful scope of a computer program copyright . . . It can help to distinguish among technologies. It can distinguish between expressive and functional features of computer code where those features are mixed. It can focus on the legitimate need to provide incentives to produce copyrighted material while examining the extent to which yet further protection creates unrelated or illegitimate harms in other markets or to the development of other products.”

In doing so, the decision underlined the real purpose of copyright: to incentivize innovation and creativity. When copyright does the opposite, fair use provides an important safety valve.

Justice Breyer then turned to the specific fair use statutory factors. Appropriately for a functional software copyright case, he first discussed the nature of the copyrighted work. The Java APIs are a “user interface” that allow users (here the developers of Android applications) to “manipulate and control” task-performing computer programs. The Court observed that the declaring code of the Java APIs differs from other kinds of copyrightable computer code—it’s “inextricably bound together” with uncopyrightable features, such as a system of computer tasks and their organization and the use of specific programming commands (the Java “method calls”). As the Court noted:

Unlike many other programs, its value in significant part derives from the value that those who do not hold copyrights, namely, computer programmers, invest of their own time and effort to learn the API’s system. And unlike many other programs, its value lies in its efforts to encourage programmers to learn and to use that system so that they will use (and continue to use) Sun-related implementing programs that Google did not copy.

Thus, since the declaring code is “further than are most computer programs (such as the implementing code) from the core of copyright,” this factor favored fair use.

Justice Breyer then discussed the purpose and character of the use. Here, the opinion shed some important light on when a use is “transformative” in the context of functional aspects of computer software, creating something new rather than simply taking the place of the original. Although Google copied parts of the Java API “precisely,” Google did so to create products fulfilling new purposes and to offer programmers “a highly creative and innovative tool” for smartphone development. Such use “was consistent with that creative ‘progress’ that is the basic constitutional objective of copyright itself.”

The Court discussed “the numerous ways in which reimplementing an interface can further the development of computer programs,” such as allowing different programs to speak to each other and letting programmers continue to use their acquired skills. The jury also heard that reuse of APIs is common industry practice. Thus, the opinion concluded that the “purpose and character” of Google’s copying was transformative, so the first factor favored fair use.

Next, the Court considered the third fair use factor, the amount and substantiality of the portion used. As a factual matter in this case, the 11,500 lines of declaring code that Google used were less than one percent of the total Java SE program. And Google copied even that declaring code only to permit programmers to apply their knowledge and experience with the Java APIs to writing new programs for Android smartphones. Since the amount of copying was “tethered” to a valid and transformative purpose, the “substantiality” factor favored fair use.

Finally, several reasons led Justice Breyer to conclude that the fourth factor, market effects, favored Google. Independent of Android’s introduction in the marketplace, Sun didn’t have the ability to build a viable smartphone. And any sources of Sun’s lost revenue were a result of the investment by third parties (programmers) in learning and using Java. Thus, “given programmers’ investment in learning the Sun Java API, to allow enforcement of Oracle’s copyright here would risk harm to the public. Given the costs and difficulties of producing alternative APIs with similar appeal to programmers, allowing enforcement here would make of the Sun Java API’s declaring code a lock limiting the future creativity of new programs.” This “lock” would interfere with copyright’s basic objectives.

The Court concluded that “where Google reimplemented a user interface, taking only what was needed to allow users to put their accrued talents to work in a new and transformative program, Google’s copying of the Sun Java API was a fair use of that material as a matter of law.”

The Supreme Court left for another day the issue of whether functional aspects of computer software are copyrightable in the first place. Nevertheless, we are pleased that the Court recognized the overall importance of fair use in software cases, and the public interest in allowing programmers, developers, and other users to continue to use their acquired knowledge and experience with software interfaces in subsequent platforms.

Categories
Intelwars Legal Analysis Right to Record

First Circuit Upholds First Amendment Right to Secretly Audio Record the Police

EFF applauds the U.S. Court of Appeals for the First Circuit for holding that the First Amendment protects individuals when they secretly audio record on-duty police officers. EFF filed an amicus brief in the case, Martin v. Rollins, which was brought by the ACLU of Massachusetts on behalf of two civil rights activists. This is a victory for people within the jurisdiction of the First Circuit (Massachusetts, Maine, New Hampshire, Puerto Rico and Rhode Island) who want to record an interaction with police officers without exposing themselves to possible reprisals for visibly recording.

The First Circuit struck down as unconstitutional the Massachusetts anti-eavesdropping (or wiretapping) statute to the extent it prohibits the secret audio recording of police officers performing their official duties in public. The law generally makes it a crime to secretly audio record any conversation without consent, even where participants have no reasonable expectation of privacy—a breadth that makes the Massachusetts statute unique among the states.

The First Circuit had previously held in Glik v. Cunniffe (2011) that the plaintiff had a First Amendment right to record police officers arresting another man in Boston Common. Glik had used his cell phone to openly record both audio and video of the incident. The court had held that the audio recording did not violate the Massachusetts anti-eavesdropping statute’s prohibition on secret recording because Glik’s cell phone was visible to officers.

Thus, following Glik, the question remained open as to whether individuals have a First Amendment right to secretly audio record police officers, or if instead they could be punished under the Massachusetts statute for doing so. (A few years after Glik, in Gericke v. Begin (2014), the First Circuit held that the plaintiff had a First Amendment right to openly record the police during someone else’s traffic stop to the extent she wasn’t interfering with them.)

The First Circuit in Martin held that recording on-duty police officers, even secretly, is protected newsgathering activity similar to that of professional reporters that “serve[s] the very same interest in promoting public awareness of the conduct of law enforcement—with all the accountability that the provision of such information promotes.” The court further explained that recording “play[s] a critical role in informing the public about how the police are conducting themselves, whether by documenting their heroism, dispelling claims of their misconduct, or facilitating the public’s ability to hold them to account for their wrongdoing.”

The ability to secretly audio record on-duty police officers is especially important given that many officers retaliate against civilians who openly record them, as happened in a recent Tenth Circuit case. The First Circuit agreed with the Martin plaintiffs that secret recording can be a “better tool” to gather information about police officers, because officers are less likely to be disrupted and, more importantly, secret recording may be the only way to ensure that recording “occurs at all.” The court stated that “the undisputed record supports the Martin Plaintiffs’ concern that open recording puts them at risk of physical harm and retaliation.”

Finally, the court was not persuaded that the privacy interests of civilians who speak with or near police officers are burdened by secretly audio recording on-duty police officers. The court reasoned that “an individual’s privacy interests are hardly at their zenith in speaking audibly in a public space within earshot of a police officer.”

Given the critical importance of recordings for police accountability, the First Amendment right to record police officers exercising their official duties has been recognized by a growing number of federal jurisdictions. In addition to the First Circuit, federal appellate courts in the Third, Fifth, Seventh, Ninth, and Eleventh Circuits have directly upheld this right.

Disappointingly, the Tenth Circuit recently dodged the question. For all the reasons in the First Circuit’s Martin decision, the Tenth Circuit erred, and the remaining circuits must recognize the First Amendment right to record on-duty police officers as the law of the land.

Categories
Intelwars Legal Analysis Right to Record

Tenth Circuit Misses Opportunity to Affirm the First Amendment Right to Record the Police

We are disappointed that the U.S. Court of Appeals for the Tenth Circuit this week dodged a critical constitutional question: whether individuals have a First Amendment right to record on-duty police officers.

EFF had filed an amicus brief in the case, Frasier v. Evans, asking the court to affirm the existence of the right to record the police in the states under the court’s jurisdiction (Colorado, Oklahoma, Kansas, New Mexico, Wyoming, and Utah, and those portions of the Yellowstone National Park extending into Montana and Idaho).

Frasier had used his tablet to record Denver police officers engaging in what he believed to be excessive force: the officers repeatedly punched a suspect in the face to get drugs out of his mouth as his head bounced off the pavement, and they tripped his pregnant girlfriend. Frasier filed a First Amendment retaliation claim against the officers for detaining and questioning him, searching his tablet, and attempting to delete the video.

Qualified Immunity Strikes Again

In addition to refusing to affirmatively recognize the First Amendment right to record the police, the Tenth Circuit held that even if such a right did exist today, the police officers who sought to intimidate Frasier could not be held liable for violating his constitutional right because they had “qualified immunity”—that is, because the right to record the police wasn’t clearly established in the Tenth Circuit at the time of the incident in August 2014.

The court held not only that the right had not been objectively established in federal case law, but also that it was irrelevant that the officers subjectively knew the right existed based on trainings they received from their own police department. Qualified immunity is a pernicious legal doctrine that often allows culpable government actors to avoid accountability for violations of constitutional rights.

Thus, the police officers who clearly retaliated against Frasier are off the hook, even though “the Denver Police Department had been training its officers since February 2007” that individuals have a First Amendment right to record them, and even though “each of the officers in this case had testified unequivocally that, as of August 2014, they were aware that members of the public had the right to record them.”

Recordings of Police Officers Are Critical for Accountability

As we wrote last year in our guide to recording police officers, “[r]ecordings of police officers, whether by witnesses to an incident with officers, individuals who are themselves interacting with officers, or by members of the press, are an invaluable tool in the fight for police accountability. Often, it’s the video alone that leads to disciplinary action, firing, or prosecution of an officer.”

This is particularly true of the murder of George Floyd by former Minneapolis police officer Derek Chauvin. Chauvin’s criminal trial began this week, and the fact that Chauvin is being prosecuted at all is due in large part to the brave bystanders who recorded the scene.

Notwithstanding the critical importance of recordings for police accountability, the First Amendment right to record police officers exercising their official duties has not been recognized by all federal jurisdictions. Federal appellate courts in the First, Third, Fifth, Seventh, Ninth, and Eleventh Circuits have directly upheld this right.

We had hoped that the Tenth Circuit would join this list. Instead, the court stated, “because we ultimately determine that any First Amendment right that Mr. Frasier had to record the officers was not clearly established at the time he did so, we see no reason to risk the possibility of glibly announcing new constitutional rights … that will have no effect whatsoever on the case.”

This statement by the court is surprisingly dismissive given the important role courts play in upholding constitutional rights. Even with the court’s holding that the police officers had qualified immunity against Frasier’s First Amendment claim, a declaration that the right to record the police does, in fact, exist within the Tenth Circuit would unequivocally have helped to protect the millions of Americans who live within the court’s jurisdiction from police misconduct.

But the Tenth Circuit refused to do so, leaving this critical question to another case and another appellate panel.  

All is Not Lost in Colorado

Although the Tenth Circuit refused to recognize that the right to record the police exists as a matter of constitutional law throughout its jurisdiction, it is comforting that the Colorado Legislature passed two statutes in the wake of the Frasier case.

The first law created a statutory right for civilians to record police officers (Colo. Rev. Stat. § 16-3-311). The second created a civil cause of action against police officers who interfere with an individual’s lawful attempt to record an incident involving a police officer, or who destroy, damage, or seize a recording or recording device (Colo. Rev. Stat. § 13-21-128).

Additionally, the Denver Police Department revised its operations manual to prohibit punching a suspect to get drugs out of his mouth (Sec. 116.06(3)(b)), and to explicitly state that civilians have a right to record the police and that officers may not infringe on this right (Sec. 107.04(3)).

Categories
competition Intelwars Legal Analysis

Local Franchising, Big Cities, and Fiber Broadband

In 2005, the Federal Communications Commission (FCC) made a foundational decision on how broadband competition policy would work with the entry of fiber to the home. In short, the FCC concluded that competition was growing, that government policy should defer to market forces, and that the era of communications monopoly was rapidly ending. The very next year, at the request of companies like Verizon and AT&T, some states, including California, passed laws that consolidated local “franchises” into single statewide franchises, making the same assumption the FCC did.

They were wrong. To explore just how wrong, EFF and the Technology Law and Policy Center have published our newest white paper on the effects of these decisions. The research digs into New York, which decided to retain power at the local level, to see what we can learn from the state as we look to the future. The big takeaway is that large cities that do not have local franchise authority are losing out because they lack the negotiating leverage needed to push private fiber to all city residents, particularly low-income residents.

What are Franchises and Why Did They Change in 2006?

Franchises are basically the negotiated agreements between broadband carriers and local communities on how access will be provisioned—essentially, a license to do business in the area. These franchises exist because Internet service providers (ISPs) need access to taxpayer-funded rights-of-way infrastructure such as roads, sewers, and other means of traveling throughout a community (and it would be practically impossible for ISPs to bypass that existing public infrastructure). ISPs also benefit because these agreements conveniently lay out a roadmap for deploying broadband access. The public interest goal of a franchise is to reach an agreement that fairly compensates taxpayers for the infrastructure they created.

Earlier franchises were used by the cable television industry to secure a monopoly in a city in exchange for widespread deployment. The city would agree to the monopoly as long as the cable company built out to everyone. This is very similar to the original agreement AT&T secured from the federal government back in the telephone monopoly era. The cable TV monopoly status was, in turn, used to secure preferential financing to fund the construction of the coaxial cable infrastructure that we still mostly use today (although much of that infrastructure has been converted to hybrid fiber/cable, with the cable part still connected to residential dwellings). The cable industry made the argument to the banks when securing funding that because it had no competitors in a community, it would be able to pay back corporate debt quite easily. That part, at least, worked very well; cable is widespread today in cities. Congress later abolished monopolies in franchising in 1992 with the passage of the Cable Television Consumer Protection and Competition Act, which also forced negotiation between cable and television broadcasters.

In 2005, Verizon, which was set to launch their FiOS fiber network, led a lobbying effort in DC to rethink local franchising power. The broadband industry as a whole was hoping for a simpler process than having to negotiate city by city (like cable). After securing a massive deregulatory gift from the FCC’s decision to classify broadband as an information service not subject to federal competition policy of the 90s—or, as we found out years later in the courts, to net neutrality—the broadband industry wanted new rules for fiber to the home. 

The industry argument was that we were no longer in the monopoly era of communications (spoiler alert: we are very much in a monopoly era for broadband access), and that competition would do the work to meet the policy goals of universal, affordable, and open Internet services. Congress came very close to agreeing to eliminate franchise authority and take it away from local communities in 2006, but the bill was stalled in the Senate by a bipartisan pair of Senators who wanted net neutrality protections. In response to their loss in Congress, Verizon and others took their argument to the states. Just months later they secured statewide franchising in California. They failed to secure the same change in New York. The industry’s failure there gives us the opportunity to compare two very large states and big cities to see how both approaches have played out for communities.

New York Shows That Local Franchising Works for Big Cities

Local governments without full local franchise power are extraordinarily constrained when it comes to pushing private service providers. With franchise power, local problems are hashed out during negotiations, when communities and companies are considering the mutual benefit of expanded private broadband services in exchange for access to the taxpayer-funded public rights of way. Without it, those negotiations never take place.

As Verizon lobbied to eliminate local franchising, New York’s state legislature studied the issue thoroughly through its state public utilities commission (PUC). The PUC’s research noted that local power promotes local competition, that it addresses antitrust concerns with communications infrastructure, and that competition requires special attention to promote new, smaller entrants. It concluded that public interest regulation tailored to large incumbents, designed to prevent discrimination, was best done at the local level. After that research, New York decided not to eliminate local franchising, and it has stayed the course despite 16 years of lobbying by the big ISPs.

The benefits of this decision are clearly illustrated in New York City, which understood that its massive population, wealthy communities, business sector, and density would allow Verizon to deliver fiber to every single home at a profit. This was signed into a franchise in 2008. When Verizon discontinued its fiber service expansion in 2010, the city reminded the company that they had an agreement. Verizon argued that the law’s requirement that service “pass” a home—commonly understood as meaning connecting the home—just meant that fiber was somewhere near the house, and that wireless broadband was the same. The city decided to take Verizon to court to enforce its franchise in 2014. While the litigation was lengthy, in November 2020 the city secured a settlement from Verizon to build another 500,000 fiber-to-the-home connections in low-income communities. Compare that to big California cities like Oakland and Los Angeles, where studies are showing rampant digital redlining of fiber against low-income people, particularly in Black neighborhoods.

Big Cities Should Be 100 Percent Fibered, But They Need Their Power Back 

Carriers often want policymakers to think about broadband deployment as an isolated house-by-house effort, rather than community-wide or regional efforts. They do this to justify the digital redlining of fiber that happens across the country in cities that lack power to stop it. 

Our research shows that when you zero in on a per-household basis, you find that some homes aren’t profitable to connect in isolation. But when you look at communities in the aggregate, you would be hard-pressed to find a single large American city where you couldn’t turn a profit with fiber. That only comes from averaging out your costs and averaging out your profits—and laws that prohibit socioeconomic discrimination in broadband deployment force averaging rather than isolating. If states want more private fiber expanded in their big cities, it seems clear that allowing cities to negotiate on their own behalf is one powerful option that needs to be restored, particularly in California. It won’t solve the entire problem, but it can be a piece of how we get fiber to every home and business.

For a more in-depth look at the issue, read more in the white paper.

Categories
competition Intelwars Legal Analysis Legislative Analysis net neutrality

AT&T’s HBO Max Deal Was Never Free

When AT&T launched HBO Max, it came to light that usage of the service would not count against the data caps of AT&T customers, a practice known as “zero-rating.” This means that people on limited data plans could watch as much HBO Max content as they wished without incurring overage fees. AT&T just declared that it will stop this practice, citing California’s net neutrality law as the reason. No matter what spin the telecom giant offers, this does not mean something “free” was taken away. That deal was never free to begin with.

It should be noted that net neutrality doesn’t prevent companies from zero rating in a non-discriminatory way. If AT&T wanted to zero rate all video streaming services, it could. What net neutrality laws prevent is ISPs using their control over Internet access to advantage their own content, or charging services for special access to their customer base. In the case of HBO Max and zero rating, since AT&T owns HBO Max, it costs AT&T nothing to zero rate HBO Max. Other services had to pay for the same treatment or be disadvantaged when AT&T customers chose HBO Max to avoid overage fees.
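
To see how this tilts the playing field, consider a minimal, purely hypothetical sketch of cap accounting with a zero-rating exemption. This is an assumption-laden illustration, not how any carrier’s billing system actually works, and the host name is made up for the example:

    import java.util.Set;

    // Toy model of a metered data plan with a zero-rating exemption.
    public class CapMeter {
        // Hypothetical list of hosts whose traffic rides free.
        private static final Set<String> ZERO_RATED = Set.of("hbomax.example.com");
        private final long capBytes;
        private long usedBytes = 0;

        public CapMeter(long capBytes) { this.capBytes = capBytes; }

        public void record(String host, long bytes) {
            // Traffic to the exempt host never counts against the cap,
            // so every rival service pushes the customer toward overage
            // fees while the carrier's own service does not.
            if (!ZERO_RATED.contains(host)) {
                usedBytes += bytes;
            }
        }

        public boolean overCap() { return usedBytes > capBytes; }
    }

Every gigabyte streamed from a competitor moves the customer closer to overage fees, while the carrier’s affiliated service never does. That asymmetry, not any engineering necessity, is what the exemption buys.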

This is why AT&T is claiming that it’s being forced to stop offering a “free” service because of California’s net neutrality rule. Rather than admit that the wireless industry knows zero rating can be used to shape traffic and user behavior, and that users, not carriers, should determine their own Internet experience, AT&T wants to turn this consumer victory into a defeat. This basic consumer protection is long overdue; it took this long only because former FCC Chairman Ajit Pai decided in 2017 to abandon net neutrality and terminate investigations into AT&T’s unlawful practice, which prompted California to pass S.B. 822 in the first place.

You Already Paid AT&T to Offer the HBO Max Deal

American Internet services—mobile and to the home—are vastly more expensive than they should be. We pay more for worse service than people in many other countries do, and practices like zero rating are part of the reason.

A comprehensive study by Epicenter.works showed that after zero rating was banned in the EU, consumers received cheaper mobile data over the years. If an ISP cannot drive users toward its own verticals through artificial scarcity schemes like data caps, then to appeal to customers it has to raise its caps and become less willing to penalize people simply for using the service they purchased. In fact, the infrastructure being laid out for modern wireless, fiber optics, has so much capacity that data caps would make no sense if the market were more competitive.

It is also very important to understand that moving data is getting cheaper and easier. As we move to fiber-backed infrastructure, the cost of moving data is coming down, speeds are going up exponentially, and the congestion challenges of the early days of the iPhone are a distant memory.

Yet even though moving data is cheaper, AT&T’s prices haven’t changed accordingly. Profits for the companies grow, but consumers aren’t seeing prices that match the lowering cost of data. You have essentially paid the price of a real unlimited Internet plan for one with data caps, which continue to exist so that telecom companies can charge more for unlimited plans and collect overage fees. We know the problem isn’t actual capacity, since AT&T lifted data caps at the start of the COVID-19 pandemic. If data caps and related data scarcity schemes were necessary for the operation of the network, then a time when usage is rising by double digits should have meant AT&T needed to keep its data caps intact and enforce them to keep things running. It didn’t, because fiber-connected towers have more than enough capacity to handle growth, unlike the older non-fibered cable systems that now throttle uploads.

AT&T’s Zero Rating Favored Big Tech and Was Anticompetitive

Competition among video streaming services is fierce and should be protected and enhanced. User-generated content on services like Twitch and YouTube and premium content from Netflix, Disney+, or Amazon Prime are all competing for your attention and eyeballs. AT&T wanted to give HBO Max a leg up by making the other services more expensive via a data cap, or by having them pay AT&T for an exemption, so that money flowed to AT&T even when you were not watching its product. Such a structure makes it impossible for a small independent content creator to be competitive: they lack the resources to pay for an exemption and would need to provide content compelling enough for AT&T customers to pay extra to watch.

Furthermore, as the Epicenter.works study discovered, obtaining a zero-rating exemption took substantial resources from Internet companies, making it something only the Googles, Facebooks, and similarly large Internet companies could regularly engage in, but not medium and small companies. AT&T doesn’t mind that, because it just means more ways to extract rents from more players on the Internet, despite being fully compensated by users for an unfettered Internet experience.

Low-Income Advocates Fought Hard to Ban AT&T’s Zero Rating

During the debate in California, AT&T attempted to reframe its zero-rating practice as “free data” and came awfully close to convincing Sacramento to leave it alone. But advocates representing the low-income residents of California came out in strong support of the California net neutrality law’s zero-rating provisions. Studies by the Pew Research Center showed that when income is limited, consumers opt to use only mobile phones for Internet access as opposed to both wireline and wireless service. Groups like the Western Center on Law and Poverty pointed out that for these low-income users, AT&T was giving them a lesser Internet and not equal access to higher-income users.

And that is the ultimate point of net neutrality: to ensure everyone has equal access to an Internet free from ISP gatekeeper decisions. When you take into consideration that AT&T is one of the most indebted companies on planet Earth, it starts to make sense why, in the absence of federal net neutrality, AT&T sought out any and every way to nickel and dime everything that touches its network. But with California’s law starting to come online, users finally have a law that will stand against the effort to convert the Internet into cable television. Whether or not we get federal protection, it seems clear that the states are proving right now that they can be an effective backstop, and the work of preserving a free and open Internet will continue not just in DC but in the remaining 49 states.

Categories
free speech Intelwars Legal Analysis

EFF to First Circuit: Schools Should Not Be Policing Students’ Weekend Snapchat Posts

This blog post was co-written by EFF intern Haley Amster.

EFF filed an amicus brief in the U.S. Court of Appeals for the First Circuit urging the court to hold that under the First Amendment public schools may not punish students for their off-campus speech, including posting to social media while off campus.

The Supreme Court has long held that students have the same constitutional rights to speak in their communities as do adults, and this principle should not change in the social media age. In its landmark 1969 student speech decision, Tinker v. Des Moines Independent Community School District, the Supreme Court held that a school could not punish students for wearing black armbands at school to protest the Vietnam War. In a resounding victory for the free speech rights of students, the Court made clear that school administrators are generally forbidden from policing student speech except in a narrow set of exceptional circumstances: when (1) a student’s expression actually causes a substantial disruption on school premises; (2) school officials reasonably forecast a substantial disruption; or (3) the speech invades the rights of other students.

However, because Tinker dealt with students’ antiwar speech at school, the Court did not explicitly address the question of whether schools have any authority to regulate student speech that occurs outside of school. At the time, it may have seemed obvious that students can publish op-eds or attend protests outside of school, and that the school has no authority to punish students for that speech even if it’s highly controversial and even if other students talk about it in school the next day. As we argued in our amicus brief, the Supreme Court’s three student speech cases following Tinker all involved discipline related to speech that may reasonably be characterized as on-campus.

In the social media age, the line between off- and on-campus has been blurred. Students frequently engage in speech on the Internet outside of school, and that speech is then brought into school by students on their smartphones and other mobile devices. Schools are increasingly punishing students for off-campus Internet speech brought onto campus.

In our amicus brief, EFF urged the First Circuit to make clear that schools have no authority under Tinker to police students’ off-campus speech, including when that speech occurs on social media. The case, Doe v. Hopkinton, involves two public high school students, “John Doe” and “Ben Bloggs,” who were suspended for making comments in a private Snapchat group that their school considered to be bullying. Doe and Bloggs filed suit asserting the school suspension violated their First Amendment rights.

The school made no attempt to show the lower court that Doe and Bloggs sent the messages at issue while on campus, and the federal judge erroneously concluded that “it does not matter whether any particular message was sent from an on- or off-campus location.”

As we explained in our amicus brief, that conclusion was wrong. Tinker made clear that students’ speech is entitled to First Amendment protection, and authorized schools to punish student speech only in narrow circumstances to ensure the safety and functioning of the school. The Supreme Court has never authorized or suggested that public schools have any authority to reach into students’ private lives and punish them for their speech while off school grounds or after school hours.

This is exactly what another federal appeals court considering this question concluded last summer. In B.L. v. Mahanoy Area School District, a high school student who had failed to advance from junior varsity to the varsity cheerleading squad posted a Snapchat selfie over the weekend with text that said, among other things, “fuck cheer.” One of her Snapchat connections took a screen shot of the post and shared it with the cheerleading coaches, who suspended the student from participation in the junior varsity cheer squad.

The Third Circuit in Mahanoy made clear that the narrow set of circumstances established in Tinker where a school may regulate disruptive student speech applies only to speech uttered at school. As such, it held that schools have no authority to punish students for their off-campus speech—even when that speech “involves the school, mentions teachers or administrators, is shared with or accessible to students, or reaches the school environment.”

This conclusion is especially critical given that students use social media to engage in a wide variety of self-expression, political speech, and activism. As we highlighted in our amicus brief, this includes expressing dissatisfaction with their schools’ COVID-19 safety protocols, calling out instances of racism at schools, and organizing protests against school gun violence. It is essential that courts draw a bright line prohibiting schools from policing off-campus speech so that students can exercise their constitutional rights outside of school without fear that they might be punished for it come Monday morning.

Mahanoy is currently on appeal to the Supreme Court, which will consider the case this spring. We hope that the First Circuit and the Supreme Court will take this opportunity to reaffirm the free speech rights of public-school students and draw clear limits on schools’ ability to police students’ private lives.

Categories
Computer Fraud And Abuse Act Reform Intelwars Legal Analysis

Raid on COVID Whistleblower in Florida Shows the Need to Reform Overbroad Computer Crime Laws and the Risks of Over-Reliance on IP Addresses

Monday’s armed Florida State Trooper raid on the Tallahassee, Florida home of data scientist and COVID whistleblower Rebekah Jones was shocking on many levels. The incident smacks of retaliation against someone working to provide the public with truthful information about the most pressing issue facing both Florida and our nation: the spread and impact of COVID-19. It was an act of retaliation that depended on two broken systems that EFF has spent decades trying to fix: first, our computer crime laws are so poorly written and broadly interpreted that they allow for outrageous misuses of police, prosecutorial, and judicial resources; and second, police continue to overstate the reliability of IP addresses as a means of identifying people or locations.

On the first point, it seems that the police asked for, the prosecutors sought, and the Court granted a warrant for a home raid by state police in response to a single text message. The message, sent to a group of governmental and nongovernmental people working on tracking COVID, urged members to speak up about the government hiding and manipulating information about the COVID outbreak in Florida.

How could a text message urging people to do the right thing ever result in an armed police home raid? Sadly, the answer lies in the vagueness and overbreadth of the Florida computer crime law, which closely mirrors the language in the federal Computer Fraud and Abuse Act (laws in many states across the country are likewise based on the CFAA). The law makes it a crime – a serious felony – to have “unauthorized access” to a computer. But it doesn’t define what “unauthorized” means. In cases across the country, and in one currently pending before the U.S. Supreme Court called Van Buren, we’ve seen that the lack of a clear definition and boundaries around the word “authorized” causes great harm. Here, based upon the Affidavit in the Rebekah Jones case, the police took the position that sending a single text message to a group that you are not (or are no longer) a part of is “unauthorized” access to a computer and so is a crime that merits an armed home police raid. This, despite the obvious fact that no harm happened as a result of people getting a single message urging them to do the right thing.

This isn’t just a one-off misuse: in other cases, we’ve seen the criminalization of “unauthorized” access used to threaten security researchers who investigate the tools we all rely on, prosecute a mother for impersonating her daughter on a social network, threaten journalists seeking to scrape Facebook to figure out what it is doing with our data, and prosecute employees who did disloyal things on company computers. “Unauthorized” access was also used to prosecute our friend Aaron Swartz, and threaten him with decades in jail for downloading academic articles from the JSTOR database. Facing such threats, he committed suicide.  

In fact, if you’ve ever shared a password with a family member or asked someone else to log into a service on your behalf or even lied about your age on a dating website, you’ve likely engaged in “unauthorized” access under some court interpretations. We urged the Supreme Court in the Van Buren case to rule that violations of terms of use (as opposed to overcoming technical blocks) can never be criminal CFAA violations. This won’t entirely fix the law, but it will take away some of the most egregious misuses. 

Even with the broader definition of “unauthorized,” though, it’s unclear whether the text message in question was criminal. The Affidavit from the police confirms that the text group shared a single user name and password, and some have even said that the credentials were publicly available. Either way, it’s hard to see how the text could have been “unauthorized” if there was no technical or other notice to Ms. Jones that sending a message to the list was not allowed. Yet this wafer-thin reed was accepted by a Court as a basis for a search warrant of Ms. Jones’ family home.

On the second point, the Affidavit indicates that the police relied heavily on the IP address of the sender of the message to seek a warrant to send armed police to Ms. Jones’ home. The Affidavit fails to state how the police were able to connect the IP address with the physical address, simply stating that they used “investigative resources.” Press reports claim that Comcast – the ISP that handled that IP address – did confirm that Ms. Jones’ home was the customer associated with the IP address, but that isn’t stated in the Affidavit. In other cases, the use of notoriously imprecise public reverse IP lookup tools has resulted in raids of the wrong homes, sometimes multiple times, so it is important that the police explain to the Court what they did to confirm the address and not just hide behind “investigative resources.”
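
For a sense of how little a public reverse lookup actually reveals, here is a minimal Java sketch (the address below comes from a reserved documentation range, chosen purely as a stand-in). The answer comes back from whoever operates the reverse-DNS zone and typically names the ISP or its equipment, not a subscriber or a street address:

    import java.net.InetAddress;

    public class ReverseLookup {
        public static void main(String[] args) throws Exception {
            // Reverse-resolve an IP address to a hostname. Only the
            // ISP's internal records can tie an address to a customer
            // account, and even then only to a household's shared
            // router, not to the person at the keyboard.
            InetAddress addr = InetAddress.getByName("203.0.113.7");
            System.out.println(addr.getCanonicalHostName());
        }
    }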

EFF has long warned that overreliance on IP addresses as a basis for either the identity or location of a suspect is dangerous. Police all too often liken an IP address to a “fingerprint,” a misleading comparison that suggests IP-based identifications are much more reliable than they really are, making the metaphor a dangerous one. The metaphor really falls apart when you consider the reality that a single IP address used by a home network is usually providing Internet connectivity to multiple people with several different digital devices, making it difficult to pinpoint a particular individual. Here, the police did tell the court that Ms. Jones had recently worked for the Florida Department of Health, so the IP address wasn’t the only fact before the court, but it’s still pretty thin for a home invasion warrant, rather than, say, a simple police request that Ms. Jones come in for questioning.

Even if it turns out Florida police were correct in this case – and for now Ms. Jones has denied sending the text – the rest of us should be concerned that IP addresses alone, combined with some undisclosed “investigative resources,” can be the basis for a judge allowing armed police into your home. And it shows that judges must scrutinize both IP address evidence and law enforcement claims about its reliability, along with other supporting evidence, before authorizing search warrants.

This case confirms our serious, ongoing national failure to protect whistleblowers. And in this case – as with Edward Snowden, Reality Winner, Chelsea Manning and many others – it’s clear that part of protecting whistleblowers means updating our computer crime laws to ensure that they can’t be used as  ready tools for prosecutorial overreach and misconduct. We also need to continue to educate judges about the unreliability of IP addresses so they require more information than just vague conclusions from police before granting search warrants.

All too often, misunderstandings about computers and digital networks lead to gross miscarriages of justice. But computers and the Internet are here to stay. It’s long past time we ensured that our criminal laws and processes stopped relying on outdated and imprecise words like “authorized” and metaphors like “fingerprints,” and instead applied technical rigor when deliberating about technology.

Categories
competition Intelwars Legal Analysis Legislative Analysis

The Last Smash and Grab at the Federal Communications Commission

AT&T and Verizon have secured arguably one of the biggest regulatory benefits ever granted by the Federal Communications Commission (FCC): the agency is ending the last remnants of telecom competition law. In return for this massive gift from the federal government, they will give the public absolutely nothing.

A Little Bit of Telecom History 

When the Department of Justice successfully broke up the AT&T monopoly into regional companies, it needed Congress to pass a law to open up those regional companies (known as Incumbent Local Exchange Carriers, or ILECs) to competition. To do that, Congress passed the Telecommunications Act of 1996, which established bedrock competition law and reaffirmed the non-discrimination policies that net neutrality is based on. The law created a new industry that would interoperate with the ILECs. These companies were called Competitive Local Exchange Carriers (CLECs), and many already existed locally at the time, selling early dialup Internet access over AT&T telephone lines along with local phone services. As broadband came to market, CLECs used the copper wires of ILECs to sell competitive DSL services. In the early years the policy worked: competition sprang forth, with thousands of new companies and massive new competition-based investment in the telecom sector in general (see chart below).

Source: Data assembled by the California Public Utilities Commission in 2005

Congress intervened to create the competitive market through the FCC, but at the same time gave the FCC the keys (through a process called “forbearance”) to eliminate the regulatory interventions should competition take root on its own. The FCC began applying forbearance just a few years after the passage of the ’96 Act, with arguably its most significant decision coming in 2005, when it ruled, as Verizon FiOS entered the market, that the fiber wires being deployed by ILECs did not have to be shared with CLECs the way copper wires were. Many states followed course, though with notable resistance, because it was questionable whether future networks could meet the goals of affordability and universality through market forces alone, without strong competition policy. One California Public Utilities Commissioner expressed concern that “within a few years there may not be any ZIP codes left in California with more than five or six providers.”

What Little Competition Remains Today

The ILECs today, essentially AT&T and Verizon, no longer deploy fiber broadband and have abandoned competition with cable companies to pursue wireless services. Without access to ILEC fiber, CLECs began to deploy their own fiber networks, financed by the revenues they gained from copper DSL customers, even in rural markets. But for the last two years AT&T and Verizon have been trying to put a stop to that by asking the FCC to use its forbearance power to eliminate copper sharing as well, which they achieved today. Worse yet, AT&T is already positioning itself to abandon DSL copper lines rather than upgrade them to fiber in markets across the country, leaving people with a cable monopoly or undependable cell phone service for Internet. In other words, with today’s decision the FCC will effectively make the digital divide worse for hundreds of thousands of Americans.

Broadband competition has been in a long, slow decline for well over a decade since the FCC’s 2005 decision. In the years that followed, the industry rapidly consolidated, with smaller companies being snuffed out or closing shop. Virtually every single prediction the FCC made about the market in 2005 has failed to pan out, and the end result is that a huge number of Americans now face regional monopolies as their need for high-speed access has grown dramatically during the pandemic. It is time to rethink the approach, but today we got a double down.

The FCC’s Decision is Not Final and a Future FCC Can Chart a New Course

The decline of competition didn’t happen exclusively at the hands of the current FCC, but the signs of regional monopolization were obvious at the start of 2017, when the agency decided to abandon its authority over the industry and repeal net neutrality rules. By approving today’s AT&T/Verizon petition to end the 1996 Act’s final remnants of competition policy, rather than improving and modernizing them to promote high-speed access competition, the FCC has decided that big ISPs know best. But we can see what is going to happen next. AT&T will work hard to eliminate any small competitors on its copper lines, because their presence impairs its ability to replace those lines with fiber, all while AT&T itself will not provide the fiber replacement, resulting in a worsening of the digital divide across the country. The right policy would make sure every American gets a fiber line rather than being disconnected. None of this has to happen, though, and EFF will work hard in the states and in DC to bring back competition in broadband access.

Categories
Commentary competition EU Policy Intelwars Legal Analysis

EU vs Big Tech: Leaked Enforcement Plans and the Dutch-French Counterproposal

At the end of September, multiple press outlets published a leaked set of antimonopoly enforcement proposals for the new EU Digital Markets Act, which EU officials say they will finalize this year.

The proposals confront the stark fact that the Internet has been thoroughly dominated by a handful of giant, U.S.-based firms, which compete on a global stage with a few giant Chinese counterparts and a handful of companies from Russia and elsewhere. The early promise of a vibrant, dynamic Internet, where giants were routinely toppled by upstarts helmed by outsiders, seems to have died, strangled by a monopolistic moment in which the Internet has decayed into “a group of five websites, each consisting of screenshots of text from the other four.”

Anti-Monopoly Laws Have Been Under-Enforced

The tech sector is not exceptional in this regard: from professional wrestling to eyeglasses to movies to beer to beef and poultry, global markets have collapsed into oligarchies, with each sector dominated by a handful of companies (or just one).

Fatalistic explanations for the unchecked rise of today’s monopolized markets—things like network effects and first-mover advantage—are not the whole story. If these factors completely accounted for tech’s concentration, then how do we explain wrestling’s concentration? Does professional wrestling enjoy network effects too?

A simpler, more parsimonious explanation for the rise of monopolies across the whole economy can be found in the enforcement of anti-monopoly law, or rather, the lack thereof, especially in the U.S. For about forty years, the U.S. and many other governments have embraced a Reagan-era theory of anti-monopoly called “the consumer welfare standard.” This ideology, associated with Chicago School economic theorists, counsels governments to permit monopolistic behavior – mergers between large companies, “predatory acquisitions” of small companies that could pose future threats, and the creation of vertically integrated companies that control large parts of their supply chain – so long as there is no proof that this will lead to price-rises in the immediate aftermath of these actions.

For four decades, successive U.S. administrations from both parties, and many of their liberal and conservative counterparts around the world, have embraced this ideology and have sat by as firms have grown not by selling more products than their competitors, or by making better products than their competitors, but rather by ceasing to compete altogether by merging with one another to create a “kill zone” of products and services that no one can compete with.

After generations in ascendancy, the consumer welfare doctrine is finally facing a serious challenge, and not a moment too soon. In the U.S., both houses of Congress held sweeping hearings on tech companies’ anticompetitive conduct, and the House’s bold report on its lengthy, deep investigation into tech monopolism signaled a political establishment ready to go beyond consumer welfare and return to a more muscular, pre-Reagan form of competition enforcement anchored in the idea that monopolies are bad for society, and that we should prevent them because they hurt workers and consumers, and because they distort politics and smother innovation — and not merely because they sometimes make prices go up.

A New Set of Anti-Monopoly Tools for the European Union

These new EU leaks are part of this trend, and in them, we find a made-in-Europe suite of antimonopoly enforcement proposals that are, by and large, very welcome indeed. The EU defines a new, highly regulated sub-industry within tech called a “gatekeeper platform” — a platform that exercises “market power” within its niche (the precise definition of this term is hotly contested). For these gatekeepers, the EU proposes a long list of prohibitions:

  • A ban on platforms’ use of customer transaction data unless that data is also made available to the companies on the platform (so Amazon would have to share the bookselling data it uses in its own publishing efforts with the publishers that sell through its platform, or stop using that data altogether)
  • Platforms will have to obtain users’ consent before combining data about their use of the platform with other data from third parties
  • A ban on “preferential ranking” of platforms’ own offerings in their search results: if you search for an address, Google will have to show you the best map preview for that address, even if that’s not Google Maps
  • Platforms like iOS and Android can’t just pre-load their devices exclusively with their own apps, nor could Google require Android manufacturers to preinstall Google’s preferred apps, and not other apps, on Android devices
  • A ban on devices that use “technical measures” (that’s what lawyers call DRM — any technology that stops you from doing what you want with your stuff) to prevent you from removing pre-installed apps.
  • A ban on contracts that force businesses to offer their wares everywhere on the same terms as the platform demands — for example, if platforms require monthly subscriptions, a business could offer the same product for a one-time payment somewhere else.
  • A ban on contracts that punish businesses on platforms for telling their customers about ways to use their products without using the platform (so a mobile game could inform you that you can buy cheaper power-ups if you use the company’s website instead of the app)
  • A ban on systems that don’t let you install unapproved apps (AKA “side-loading”)
  • A ban on gag-clauses in contracts that prohibit companies from complaining about the way the platform runs its business
  • A ban on requiring that you use a specific email provider to use a platform (think of the way that Android requires a Gmail address)
  • A requirement that users be able to opt out of signing into services operated by the platform they’re using — so you could sign into YouTube without being signed into Gmail

On top of those rules, there’s a bunch of “compliance” systems to make sure they’re not being broken:

  • Ad platforms will have to submit to annual audits that will help advertisers understand who saw their ads and in what context
  • Ad platforms will have to submit to annual audits disclosing their “cross-service tracking” of users and explaining how this complies with the GDPR, the EU’s privacy rules
  • Gatekeepers will have to produce documents on demand from regulators to demonstrate their compliance with rules
  • Gatekeepers will have to notify regulators of any planned mergers, acquisitions or partnerships
  • Gatekeepers will have to pay employees to act as compliance officers, watchdogging their internal operations

In addition to all this, the leak reveals a “greylist” of activities that regulators will intervene to stop:

  • Any action that prevents sellers on a platform from acquiring “essential information” that the platform collects on their customers
  • Collecting more data than is needed to operate a platform
  • Preventing sellers on a platform from using the data that the platform collects on their customers
  • Anything that creates barriers preventing businesses on a platform or their customers from migrating to a rival’s platform
  • Keeping an ad platform’s click and search data secret — platforms will have to sell this data on a “fair, reasonable and non-discriminatory” basis
  • Any steps that stop users from accessing a rival’s products or services on a platform
  • App store policies that ban third-party sellers from replicating an operating system vendor’s own apps
  • Locking users into a platform’s own identity service
  • Platforms that degrade quality of service for competitors using the platform
  • Locking platform sellers into using the platform’s payment-processor, delivery service or insurance
  • Platforms that offer discounts on their services to some businesses but not others
  • Platforms that block interoperability for delivery, payment and analytics
  • Platforms that degrade connections to rivals’ services
  • Platforms that “mislead” users into switching from a third-party’s services to the platform’s own
  • Platforms that practice “tying” – forcing users to access unrelated third-party apps or services (think of an operating system vendor that requires you to get a subscription to a partner’s antivirus tools).

One worrying omission from this list: interoperability rules for dominant companies. The walled gardens with which dominant platforms imprison their users are a serious barrier to new competitors. Forcing them to install gateways – ways for users of new services to communicate with the friends and services they left behind when they switched – will go a long way toward reducing the power of the dominant companies. That is a more durable remedy than passing rules to force those dominant actors to use their power wisely.

That said, there’s plenty to like about these proposals, but the devil is in the details.

In particular, we’re concerned that all the rules in the world do no good if they are not enforced. Whether a company has “degraded service” to a rival is hard to determine from the outside — can we be certain that service problems are a deliberate act of sabotage? What about companies’ claims that these are just normal technical issues arising from providing service to a third party whose servers and network connections are out of its control?

Harder still is telling whether a search result unduly preferences a platform’s products over rivals’: the platforms will say (they do say) that they link to their own services ahead of others because they rank their results by quality and their weather reports, stores, maps, or videos are simply better than everyone else’s. Creating an objective metric of the “right” way to present search results is certain to be contentious, even among people of goodwill who agree that the platform’s own services aren’t best.

What to do then? Well, as economists like to say, “incentives matter.” Companies preference their own offerings in search, retail, pre-loading, and tying because they have those offerings. A platform that competes with its customers has an incentive to cheat on any rules of conduct in order to preference its products over the competing products offered by third parties.

Traditional antimonopoly law recognized this obvious economic truth, and responded to it with a policy called “structural separation”: this was an industry-by-industry ban on certain kinds of vertical integration. For example, rail companies were banned from operating freight companies that competed with the freighters who used the rails; banks were banned from owning businesses that competed with the businesses they loaned money to. The theory of structural separation is that in some cases, dominant companies simply can’t be trusted not to cheat on behalf of their subsidiaries, and catching them cheating is really hard, so we just remove the temptation by banning them from operating subsidiaries that benefit from cheating.

A structural separation regime for tech — say, one that prevented store-owners from competing with the businesses that sold things in their store, or one that prevented search companies from running ad-companies that would incentivize them to distort their search results — would take the pressure off of many of the EU’s most urgent (and hardest-to-enforce) rules. Not only would companies who broke those rules fail to profit by doing so, but detecting their cheating would be a lot easier.

Imposing structural separation is not an easy task. Given the degree of vertical integration in the tech sector today, structural separation would mean unwinding hundreds of mergers, spinning off independent companies, or requiring independent management and control of subsidiaries. The companies will fight this tooth-and-nail.

But despite this, there is political will for separation. The Dutch and French governments have both signaled their displeasure with the leaked proposal, insisting that it doesn’t go far enough, signing a (non-public) position paper that calls for structural separation, with breakups “on the table.”

Whatever happens with these proposals, the direction of travel is clear. Monopolies are once again being recognized as a problem in and of themselves, regardless of their impact on short-term prices. It’s a welcome, long overdue change.

Categories
Biometrics face surveillance Genetic Information Privacy Intelwars Legal Analysis Mandatory National IDs and Biometric Databases medical privacy

EFF Files Comment Opposing the Department of Homeland Security’s Massive Expansion of Biometric Surveillance

EFF, joined by several leading civil liberties and immigrant rights organizations, recently filed a comment calling on the Department of Homeland Security (DHS) to withdraw a proposed rule that would exponentially expand biometrics collection from both U.S. citizens and noncitizens who apply for immigration benefits and would allow DHS to mandate the collection of face data, iris scans, palm prints, voice prints, and DNA. DHS received more than 5,000 comments in response to the proposed rule, and five U.S. Senators also demanded that DHS abandon the proposal.    

DHS’s biometrics database is already the second largest in the world. It contains biometrics from more than 260 million people. If DHS’s proposed rule takes effect, DHS estimates that it would nearly double the number of people added to that database each year, to over 6 million people. And, equally important, the rule would expand both the types of biometrics DHS collects and how DHS uses them.  

What the Rule Would Do

Currently, DHS requires applicants for certain, but not all, immigration benefits to submit fingerprints, photographs, or signatures. DHS’s proposed rule would change that regime in three significant ways.   

First, the proposed rule would make mandatory biometrics submission the default for anyone who submits an application for an immigration benefit. In addition to adding millions of non-citizens, this change would sweep in hundreds of thousands of U.S. citizens and lawful permanent residents who file applications on behalf of family members each year. DHS also proposes to lift its restrictions on the collection of biometrics from children to allow the agency to mandate collection from children under the age of 14. 

Second, the proposed rule would expand the types of biometrics DHS can collect from applicants. The rule would explicitly give DHS the authority to collect palm prints, photographs “including facial images specifically for facial recognition, as well as photographs of physical or anatomical features such as scars, skin marks, and tattoos,” voice prints, iris images, and DNA. In addition, by proposing a new and expansive definition of the term “biometrics,” DHS is laying the groundwork to collect behavioral biometrics, which can identify a person through the analysis of their movements, such as their gait or the way they type. 
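
To make “the way they type” concrete: below is a minimal sketch of keystroke dynamics, one behavioral biometric, using invented timing data and a crude matching threshold; real systems build far richer statistical profiles, which is part of why this category of data is so revealing.

```python
import statistics

# Minimal sketch, assuming invented keystroke logs of (key, down_t, up_t) in
# seconds. The feature set and threshold below are illustrative assumptions,
# not any real biometric system's model.

def dwell_and_flight(events):
    """Extract per-key hold times and between-key gaps from keystroke events."""
    dwell = [up - down for _, down, up in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwell, flight

def profile(events):
    """Summarize a typing session as (mean dwell, mean flight)."""
    dwell, flight = dwell_and_flight(events)
    return (statistics.mean(dwell), statistics.mean(flight))

def same_typist(p1, p2, tol=0.03):
    """Crude match: both timing averages agree within `tol` seconds."""
    return all(abs(a - b) < tol for a, b in zip(p1, p2))

# Hypothetical sessions typed by the same person on different days.
session_a = [("p", 0.00, 0.09), ("a", 0.14, 0.22), ("s", 0.27, 0.35)]
session_b = [("w", 0.00, 0.10), ("o", 0.15, 0.24), ("r", 0.28, 0.36)]
print(same_typist(profile(session_a), profile(session_b)))  # True
```

Note that nothing in this sketch requires the typist's cooperation or knowledge; the timing data can be harvested from any instrumented keyboard or web form.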

Third, the proposed rule would expand how DHS uses biometrics. The proposal states that a core goal of DHS’s expansion of biometrics collection would be to implement “enhanced and continuous vetting,” which would require immigrants “be subjected to continued and subsequent evaluation to ensure they continue to present no risk of causing harm subsequent to their entry.” This type of enhanced vetting was originally contemplated in Executive Order 13780, which also banned nationals of Iran, Libya, Somalia, Sudan, Syria, and Yemen from entering the United States. While DHS offers few details about what such a program would entail, it appears that DHS would collect biometric data as part of routine immigration applications in order to share that data with other law enforcement agencies and monitor individuals indefinitely.

The Rule Is Fatally Flawed and Must Be Stopped 

EFF and our partners oppose this proposed rule on multiple grounds. It fails to take into account the serious privacy and security risks of expanding biometrics collection; it threatens First Amendment activity; and it does not adequately address the risk of error in the technologies and databases that store biometric data. Lastly, DHS has failed to provide sufficient justification for these drastic changes, and the proposed changes exceed DHS’s statutory authority.

Privacy and Security Threats

The breadth of the information DHS wants to collect is massive. DHS’s new definition of biometrics would allow for virtually unbounded biometrics collection in the future, creating untold threats to privacy and personal autonomy. This is especially true of behavioral biometrics, which can be collected without a person’s knowledge or consent, expose highly personal and sensitive information about a person beyond mere identity, and allow for tracking on a mass scale. Notably, both Democratic and Republican members of Congress have condemned China’s similar use of biometrics to track the Uyghur Muslim population in Xinjiang.

Of the new types of biometrics DHS plans to collect, DNA collection presents unique threats to privacy. Unlike other biometrics such as fingerprints, DNA contains our most private and personal information. DHS plans to collect DNA specifically to determine genetic family relationships and will store that relationship information with each DNA profile, allowing the agency to identify and map immigrant families and, over time, whole immigrant communities. DHS suggests that it will store DNA data indefinitely and makes clear that it retains the authority to share this data with law enforcement. Sharing this data with law enforcement only increases the risk that those required to give samples will be erroneously linked to a crime, while exacerbating problems related to the disproportionate number of people of color whose samples are included in government DNA databases.

The government’s increased collection of highly sensitive personal data is troubling not only because of the ways the government might use it, but also because that data could end up in the hands of bad actors. Put simply, DHS has not demonstrated that it can keep biometrics safe. For example, just last month, DHS’s Office of Inspector General (OIG) found that the agency’s inadequate security practices enabled bad actors to steal nearly 200,000 travelers’ face images from a subcontractor’s computers. A Government Accountability Office report similarly “identified long-standing challenges in CBP’s efforts to develop and implement [its biometric entry and exit] system.” There have also been serious security breaches from insiders at USCIS. And other federal agencies have had similar challenges in securing biometric data: in 2015, sensitive data on more than 25 million people stored in the Office of Personnel Management databases was stolen. And, as the multiple security breaches of India’s Aadhaar national biometric database have shown in the international context, these breaches can expose millions of individuals to fraud and identity theft.

The risk of security breaches to children’s biometrics is especially acute. A recent U.S. Senate Commerce Committee report collects a number of studies that “indicate that large numbers of children in the United States are victims of identity theft.” Breaches of children’s biometric data further exacerbate this security risk because biometrics cannot be changed. As a recent UNICEF report explains, the collection of children’s biometric information exposes them to “lifelong data risks” that are not possible to presently evaluate. Never before has biometric information been collected from birth, and we do not know how the data collected today will be used in the future.

First Amendment Risks

This massive collection of biometric data—and the danger that it could be leaked—places a significant burden on First Amendment activity. By collecting and retaining biometric data like face recognition and sharing it broadly with federal, state, and local agencies, as well as with contractors and foreign governments, DHS lays the groundwork for a vast surveillance and tracking network that could impact individuals and communities for years to come. DHS could soon build a database large enough to identify and track all people in public places, without their knowledge—not just in places the agency oversees, like at the border, but anywhere there are cameras. This burden falls disproportionately on communities of color, immigrants, religious minorities, and other marginalized groups that are the most likely to encounter DHS. 

If immigrants and their U.S. citizen and permanent resident family members know the government can request, retain, and share with other law enforcement agencies their most intimate biometric information at every stage of the immigration lifecycle, many may self-censor and refrain from asserting their First Amendment rights. Studies show that surveillance systems and the overcollection of data by the government chill expressive and religious activity. For example, in 2013, a study involving Muslims in New York and New Jersey found excessive police surveillance in Muslim communities had a significant chilling effect on First Amendment-protected activities.

Problems with Biometric Technology

DHS’s decision to move forward with biometrics expansion is also questionable because the agency fails to consider the lack of reliability of many biometric technologies and the databases that store this information. One of the methods DHS proposes to employ to collect DNA, known as Rapid DNA, has been shown to be error prone. Meanwhile, studies have found significant error rates across face recognition systems for people with darker skin, and especially for Black women. 

Moreover, it remains far from clear that collecting more biometrics will make DHS’s already flawed databases any more accurate. In fact, in a recent case challenging the reliability of DHS databases, a federal district court found that independent investigations of several DHS databases highlighted high error rates within the systems. For example, in 2017, the DHS OIG found that the database used for information about visa overstays was wrong 42 percent of the time. Other databases used to identify lawful permanent residents and people with protected status had a 30 percent error rate.

DHS’s Flawed Justification

DHS has offered little justification for this massive expansion of biometric data collection. In the proposed rule, DHS suggests that the new system will “provide DHS with the improved ability to identify and limit fraud.” However, the scant evidence that DHS offers to demonstrate the existence of fraud cannot justify its expansive changes. For example, DHS purports to justify its collection of DNA from children based on the fact that there were “432 incidents of fraudulent family claims” between July 1, 2019 and November 7, 2019 along the southern border. Not only does DHS fail to define what constitutes a “fraudulent family,” but it also leaves out that during that same period, an estimated 100,000 family units crossed the southern border, meaning that the so-called “fraudulent family” units made up less than one-half of one percent of all family crossings. And we’ve seen this before: the Trump administration has a troubling record of raising false alarms about fraud in the immigration context.
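
The arithmetic behind that half-a-percent figure, using only the numbers cited above:

```python
# DHS's own figures, as cited in the text
fraudulent_claims = 432      # alleged "fraudulent family claims," Jul 1 - Nov 7, 2019
family_crossings = 100_000   # estimated family-unit crossings in the same period
print(f"{fraudulent_claims / family_crossings:.3%}")  # 0.432%, under half a percent
```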

In addition, DHS does not address the privacy costs discussed in depth above. The proposed rule merely notes that “[t]here could be some unquantified impacts related to privacy concerns for risks associated with the collection.” And of course, the changes would come at a considerable financial cost to taxpayers, at a time when USCIS is already experiencing fiscal challenges. Even with the millions of dollars in new fees USCIS will collect, the rule is estimated to cost anywhere from $2.25 billion to $5 billion over the next 10 years. DHS also notes that additional costs could arise.

Beyond DHS’s Mandate

Congress has not given DHS the authority to expand biometrics collection in this manner. When Congress has wanted DHS to use biometrics, it has said so clearly. For example, after 9/11, Congress directed DHS to “develop a plan to accelerate the full implementation of an automated biometric entry and exit data system.” But DHS can point to no such authorization in this instance. In fact, Congress is actively considering measures to restrict the government’s use of biometrics. It is not the place of a federal agency to supersede debate in Congress. Elected lawmakers must resolve these important matters through the democratic process before DHS can put forward a proposal like this one, which seeks to perform an end run around the democratic process.

What’s Next

If DHS makes this rule final, Congress has the power to block it from taking effect. We hope that DHS will take seriously our comments. But if it doesn’t, Congress will be hearing from us and our members.

Categories
Intelwars Legal Analysis

Victory! EFF Wins Appeal for Access to Wiretap Application Records

Imagine learning that you were wiretapped by law enforcement, but couldn’t get any information about why. That’s what happened to retired California Highway Patrol officer Miguel Guerrero, and EFF sued on his behalf to get more information about the surveillance. This week, a California appeals court ruled in his case that people who are targets of wiretaps are entitled to inspect the wiretap materials, including the order application and intercepted communications, if a judge finds that such access would be in the interests of justice. This is a huge victory for transparency and accountability in California courts.

This case arose from the grossly disproportionate volume of wiretaps issued by the Riverside County Superior Court in 2014 and 2015. In those years, that single suburban county issued almost twice as many wiretaps as the rest of California combined, and almost one-fifth of all state and federal wiretaps issued nationwide. After journalists exposed Riverside County’s massive surveillance campaign, watchdog groups and even a federal judge warned that the sheer scale of the wiretaps suggested that the applications and authorizations violated federal law.

Guerrero learned from family members that his phone number was the subject of a wiretap order in 2015. Guerrero, a former law enforcement officer, has no criminal record, and was never arrested or charged with any crime in relation to the wiretap. And, although the law requires that targets of wiretaps receive notice within 90 days of the wiretap’s conclusion, he never received any such notice. He wanted to see the records both to inform the public and to assess whether to bring an action challenging the legality of the wiretap.

When we first went to court, the judge ruled that targets of wiretaps can unseal the wiretap application and order only by proving “good cause” for disclosure. The court then found that neither Guerrero’s desire to pursue a civil action nor the grossly disproportionate volume of wiretaps established good cause for disclosure, commenting that the number of wiretaps was “nothing more than routine.” The court further rejected our argument that the public has a First Amendment right of access to the wiretap order and application.

We appealed, and the Court of Appeal agreed that the trial court erred. The appeals court made clear that, under California law, the target of a wiretap need not show good cause. Instead, the target of a wiretap need only demonstrate that disclosure of the wiretap order and application is “in the interest of justice”—which unlike the good cause standard, does not include any presumption of secrecy.

Importantly, the court provided guidance for how to assess the “interest of justice” in this context, becoming one of the first courts in the nation to interpret this standard. As the court explained, the “interest of justice” analysis requires a court to consider the requester’s interest in access, the government’s interest in secrecy, the interests of other intercepted persons, and, significantly, the public interest. In considering the public interest, the court explained, courts should consider the huge volume of wiretaps approved in Riverside County. The court specifically rejected the trial court’s assessment that those statistics, on their own, were irrelevant without an independent showing of nefarious conduct.

The case now returns to the trial court, where the judge must apply the Court of Appeal’s analysis. We hope Mr. Guerrero will finally get some answers.

Categories
competition Intelwars Legal Analysis Legislative Analysis

California Is Putting Together A Broadband Plan. We Have Thoughts.

Right now the California Public Utilities Commission and the California Broadband Council are collecting public comment to create the California Broadband Plan, per Governor Newsom’s Executive Order N-73-20. The order’s purpose is to establish a means of delivering 100 Mbps-capable Internet connections to the roughly 2 million Californians who lack access to a high-speed connection. These Californians overwhelmingly live in two types of places: rural areas and low-income urban neighborhoods.

California’s Systemic Broadband Access Failures Will Require a Lot of Work to Fix, But It Can Be Done

California has some major broadband access problems, despite the size and reach of its economy. The state has the largest number of students (1.298 million) in the country who lack high-speed Internet access. When we see kids going to fast food parking lots for Internet access, like the two little girls in Salinas, California doing their homework over Taco Bell parking lot WiFi, that is a pretty clear sign of policy failure in cities. When 2 million, mostly rural, Californian residents are dependent on the now-bankrupt Frontier Communications—which received billions in federal subsidies and spent much of it on obsolete copper DSL instead of profitable fiber to the home—that is a pretty clear sign of both market failure and policy failure. And when studies of California cities find that ISPs are more likely to avoid serving Black neighborhoods with fiber in Los Angeles and have deployment patterns of high-speed access that mirror 1930s-era redlining in Oakland, we have a failure to modernize and enforce our existing non-discrimination laws.

As bad as things are today, particularly in light of how the pandemic has shifted more of our needs online, the state has a great opportunity to fix these problems. The Electronic Frontier Foundation has been studying this issue of 21st-century access to the Internet and, based on our legal and technical research, we’ve submitted the following recommendations to the state outlining how to bring fiber-to-the-home to all people.

Here is the good news. It is already commercially feasible to deliver fiber to the home to 90% of households in the United States (and California) within a handful of years—if we adopt the right policies soon. Public sector deployments led by rural cooperatives across the country are proving that even in the absolutely hardest cases, such as a 7,500-person cooperative reaching only 2.5 people per square mile, it’s possible to deliver gigabit fiber-to-the-home. So, contrary to what a legacy industry overly reliant on limiting customers to yesterday’s infrastructure may say, or the arguments of policymakers who believe that some residents deserve second-class access to the Internet, it not only can be done, it should already be happening.

EFF’s Recommendations to the State of California to Deliver a Fiber for All Future

If our current and past policies have failed us, what is the new approach? You can read our full filing here (pdf), but the main points we make are as follows:

  1. Prioritize local private and public options and de-emphasize reliance on large national ISPs tethered to 3-to-5-year return-on-investment formulas. That short-term focus is incompatible with resolving the digital divide, which will require 10-year or longer return-on-investment strategies (that’s ok with fiber that will last several decades); see the sketch after this list.
  2. Mandate full fiber deployment in California cities with a density of more than 1,000 people per square mile in areas where low-income neighborhoods are being denied fiber infrastructure, in violation of current law. Give Internet service providers (ISPs) an opportunity to remedy, but make the consequence of failing to do so the loss of both their franchise and the right to do business in California.
  3. Promote open access fiber policies and sharing opportunities in fiber infrastructure to reduce the overall cost of deployment. Leverage existing fiber needs in non-telecom markets such as electrical utilities to jointly work with telecom providers. 
  4. Adopt statewide San Francisco’s tenants ordinance, which expanded broadband competition through the city’s apartment complexes and ended the monopoly arrangement between cable companies and landlords. 
  5. Have the state assist in the creation of special districts to fill the role of rural cooperatives where none exists. Provide support with feasibility studies, technical education, long-term financing, and regulation to ensure such networks can interconnect with the nearest fiber infrastructure.
  6. Begin mapping public sector fiber infrastructure, and open it up to shared uses to facilitate more local public and private options. Require private ISPs to provide mapping data of their fiber deployments to facilitate sharing, and intervene to address monopoly withholding of fiber access if necessary.  
  7. Standardize and expedite the permitting process for deploying fiber. Explore ways to improve micro-trenching policy to make it easier to deploy fiber in cities.    
  8. Support local government efforts to build fiber by financially supporting their ability to access the bond market and obtain long term low interest debt financing.
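
A back-of-the-envelope sketch of the return-on-investment point in recommendation 1, using invented cost and revenue figures rather than EFF estimates: a dense urban build clears a 3-to-5-year payback screen, while a rural build only pencils out with the patient, decade-scale financing the plan recommends.

```python
# Toy payback model. All dollar figures and take rates are illustrative
# assumptions, not data from EFF's filing.
def payback_years(capex_per_home, take_rate, monthly_revenue, monthly_cost):
    """Years until subscriber margin repays construction cost per home passed."""
    annual_margin_per_home = take_rate * (monthly_revenue - monthly_cost) * 12
    return capex_per_home / annual_margin_per_home

# Hypothetical dense-city build vs. hypothetical rural build:
urban = payback_years(capex_per_home=800, take_rate=0.5,
                      monthly_revenue=60, monthly_cost=20)
rural = payback_years(capex_per_home=4000, take_rate=0.6,
                      monthly_revenue=60, monthly_cost=20)
print(f"urban: {urban:.1f} years, rural: {rural:.1f} years")
# urban ~3.3 years clears a 3-to-5-year screen; rural ~13.9 years does not,
# even though fiber's multi-decade lifespan would comfortably repay the loan.
```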

We hope that state policymakers recognize the importance of fiber optics as the core ingredient for 21st-century networks. It is already a universally adopted medium for delivering high-capacity networks, with more than 1 billion gigabit fiber connections coming online in a few years (primarily led by China). And to the extent that policymakers wish to see 5G high-speed wireless in all places, the industry has made it clear that is contingent on dense fiber networks being everywhere first.

We urge the state to start the work needed to bring broadband access to every Californian as soon as possible. It will be hard work, but we can do it—if we start with the right policies, right now.

Categories
Intelwars Legal Analysis privacy

Come Back with a Warrant for my Virtual House

Virtual Reality and Augmented Reality in your home can involve the creation of an intimate portrait of your private life. VR/AR headsets can capture audio and video of the inside of your house, telemetry about your movements, and depth data and images that can build a highly accurate geometric representation of your home, one that can map exactly where that mug sits on your coffee table, all generated by a simultaneous localization and mapping (SLAM) system. As Facebook’s Reality Labs explains, their “high-accuracy depth capture system, which uses dots projected into the scene in infrared, serves to capture the exact shape of big objects like tables and chairs and also smaller ones, like the remote control on the couch.” VR/AR providers can create “Replica re-creations of the real spaces that even a careful observer might think are real,” which is both the promise of and the privacy problem with this technology.
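
To see why this data is so sensitive, consider a minimal sketch, assuming invented camera intrinsics and a toy depth frame, of how a headset’s depth captures become a 3D model of a room:

```python
import numpy as np

# Minimal sketch: how depth frames become a 3D map of a home. The intrinsics,
# pose, and depth values below are illustrative assumptions, not any headset
# vendor's actual pipeline.

FX = FY = 525.0           # focal length in pixels (assumed)
CX, CY = 320.0, 240.0     # principal point for a 640x480 depth sensor (assumed)

def depth_to_points(depth, pose):
    """Back-project a depth image into world-space 3D points.

    depth: (H, W) array of distances in meters (0 means no reading)
    pose:  (4, 4) camera-to-world transform estimated by the SLAM tracker
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - CX) * depth / FX         # pinhole camera model
    y = (v - CY) * depth / FY
    pts = np.stack([x, y, depth, np.ones_like(depth)], axis=-1).reshape(-1, 4)
    pts = pts[pts[:, 2] > 0]          # drop pixels with no depth reading
    return (pose @ pts.T).T[:, :3]    # express points in the shared world frame

# Every frame adds points; over time the cloud becomes a precise model of the
# home, which is the record at issue here.
frame = np.random.uniform(0.5, 4.0, (480, 640))  # stand-in depth frame (meters)
cloud = depth_to_points(frame, np.eye(4))        # identity pose for the sketch
print(cloud.shape)                               # 307,200 points from one frame
```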

If the government wants to get that information, it needs to bring a warrant. 

Nearly twenty years ago, the Supreme Court examined another technology that would allow law enforcement to look through your walls into the sanctity of your private space—thermal imaging. In Kyllo v. United States, the Court held that a thermal scan, even from a public place outside the house, to monitor the heat emanating from your home was a Fourth Amendment search, and required a warrant. This was an important case, building upon earlier cases like United States v. Karo, which found a search when the remote activation of a beeper showed a can of ether was inside a home.

More critically, Kyllo established the principle that where new technologies1 “explore details of the home that would previously have been unknowable without physical intrusion, the surveillance is a ‘search’ and is presumptively unreasonable without a warrant.” A VR/AR setup at home can provide a wealth of information—“details of the home”—that was previously unknowable without the police coming in through the door.

This is important, not just to stop people from seeing the dirty dishes in your sink, or the politically provocative books on your bookshelf.  The protection of your home from government intrusion is essential to preserve your right to be left alone, and to have autonomy in your thoughts and expression without the fear of Big Brother breathing down your neck. While you can choose to share your home with friends, family or the public, the ability to make that choice is a fundamental freedom essential to human rights.

Of course, a service provider may require sharing this information before providing certain services. You might want to invite your family to a Covid-safe housewarming, their avatars appearing in an exact replica of your new home, sharing the joy of seeing your new space. To get the full experience and fulfill the promise of the new technology, the details of your house—your furnishings, the art on your walls, the books on your shelf—may need to be shared with a service provider to be enjoyed by your friends. And that, at the same time, creates a tempting target for law enforcement wanting to look inside your house.

Of course, the ideal would be that strong encryption and security measures would protect that information, such that only the intended visitors to your virtual house could get to wander the space, and the government would be unable to obtain the unencrypted information from a third-party.  But we also need to recognize that governments will continue to press for unencrypted access to private spaces. Even where encryption is strong between end points, governments may, like the United Kingdom, ask for the ability to insert an invisible ghost to attend the committee of correspondence meeting you hold in your virtual dining room. 

While it is clear that monitoring the real-time audio in your virtual home requires a wiretap order, the government may argue that it can still observe a virtual home in real time. Not so. Carpenter v. United States provides the constitutional basis to keep the government at bay when the technology is not enough. Two years ago, in a landmark decision, the Supreme Court established that accessing historical records containing the physical locations of cellphones required a search warrant, even though they were held by a third party. Carpenter cast needed doubt on the third-party doctrine, which allows access to third-party held records without a warrant, noting that “few could have imagined a society in which a phone goes wherever its owner goes, conveying to the wireless carrier not just dialed digits, but a detailed and comprehensive record of the person’s movements.”

Likewise, when the third-party doctrine was created in 1979, few could have imagined a society in which VR/AR systems can map, in glorious three dimensional detail, the interior of one’s home and their personal behavior and movements, conveying to the VR/AR service provider a detailed and comprehensive record of the goings on of a person’s house. Carpenter and Kyllo stand strongly for requiring a warrant for any information created by your VR/AR devices that shows the interior of your private spaces, regardless of whether that information is held by a service provider.

In California, where many VR/AR service providers are based, CalECPA generally requires a warrant or wiretap order before the government may obtain this sensitive data from service providers, with a narrow exception for subpoenas, where “access to the information via the subpoena is not otherwise prohibited by state or federal law.”  Under Kyllo and Carpenter, warrantless access to your home through VR/AR technology is prohibited by the ultimate federal law, the Constitution.

We need to be able to take advantage of the awesomeness of this new technology, where you can have a fully formed virtual space—and invite your friends to join you from afar—without creating a dystopian future where the government can teleport into a photo-realistic version of your house, able to search all the nooks and crannies measured and recorded by the tech, without a warrant. 

Carpenter led to a sea change in the law, and has since been cited in hundreds of criminal and civil cases across the country, challenging the third-party doctrine for surveillance sources like real-time location tracking, 24/7 video cameras, and automatic license plate readers. Still, the development of the doctrine will take time. No court has yet ruled on a warrant for a virtual search of your house. For now, it is up to the service providers to give a pledge, backed by a quarrel of steely-eyed privacy lawyers, that if the government comes to knock on your VR door, they will say “Come back with a warrant.”

  • 1. Kyllo used the phrase “device that is not in general public use,” which sets up an unfortunate and unnecessary test that could erode our privacy as new technologies become more widespread. Right now, the technology to surreptitiously view the interior of a SLAM-mapped home is not in general use, and even when VR and AR are ubiquitous, courts have recognized that technologies to surveil cell phones are not “in general public use,” even though the cell phones themselves are.
Categories
competition Intelwars Legal Analysis Patents

Throwing Out the FTC’s Suit Against Qualcomm Moves Antitrust Law in the Wrong Direction

The government bestows temporary monopolies in the form of patents to promote future innovation and economic growth. Antitrust law empowers the government to break up monopolies when their power is so great and their conduct is so corrosive of competition that they can dictate market outcomes without worrying about their rivals. In theory, patent and antitrust law serve the same goals—promoting economic and technological development—but in practice, they often butt heads.

The relationship between antitrust and patent law is especially thorny when it comes to “standards-essential patents” or “SEPs.” These are patents that cover technologies considered “essential” for implementing standards—agreed-upon rules and protocols that allow different manufacturers’ devices to communicate with each other using shared network infrastructure. Some technology standards become standards by achieving widespread adoption through market forces (the QWERTY keyboard layout is one example). But many are the result of extensive deliberation and cooperation among industry players (including competitors), like the MP3 audio compression and 3G wireless communication standards.

Standards can enhance competition and consumer choice, but they also massively inflate the value of patents deemed essential to the standard, and give their owners the power to sue companies that implement the standard for money damages or injunctions to block them from using their SEPs. When standards cover critical features like wireless connectivity, SEP owners wield a huge amount of “hold-up” power because their patents allow them to effectively block access to the standard altogether. That lets them charge unduly large tolls to anyone who wants to implement the standard.

To minimize that risk, standard-setting organizations typically require companies that want their patented technology incorporated into a standard to promise in advance to license their SEPs to others on fair, reasonable, and non-discriminatory (FRAND) terms. But that promise strikes at a key tension between antitrust and patent law: patent owners have no obligation to let anyone use technology their patent covers, but to get those technologies incorporated into standards, patent owners usually have to promise that they will give permission to anyone who wants to implement the standard as long as they pay a reasonable license fee. 

Qualcomm is one of the most important and dominant companies in the history of wireless communication standards. It is a multinational conglomerate that has owned patents on every major wireless communication standard since its first CDMA patent in 1985, and it participates in the standard-setting organizations that define those standards. Qualcomm is unusual in that it not only licenses SEPs, but also supplies the modem chips used by a wide range of devices. These include chips that implement wireless communication standards, which lie at the heart of every mobile computing device.

Although Qualcomm promised to license its SEPs (including patents essential to CDMA, 3G, 4G, and 5G) on FRAND terms, its conduct has to many looked unfair, unreasonable, and highly discriminatory. In particular, Qualcomm has drawn scrutiny for bundling tens of thousands of patents together—including many that are not standard-essential—and offering portfolio-only licenses no matter what licensees actually want or need; refusing to sell modem chips to anyone without a SEP license and threatening to withhold chips from companies trying to negotiate different license terms; refusing to license anyone other than original-equipment manufacturers (OEMs); and insisting on royalties calculated as a percentage of the sale price of a handset sold to end users for hundreds of dollars, despite the minimal contribution of any particular patent to the retail value.

In 2017, the U.S. Federal Trade Commission sued Qualcomm for violating both sections of the Sherman Antitrust Act by engaging in a number of anticompetitive SEP licensing practices. In May 2019, the U.S. District Court for the Northern District of California agreed with the FTC, identifying numerous instances of Qualcomm’s unlawful, anticompetitive conduct in a comprehensive 233-page opinion. We were pleased to see the FTC take action and the district court credit the overwhelming evidence that Qualcomm’s conduct is corrosive to market-based competition and threatens to cement Qualcomm’s dominance for years to come.

But this month, a panel of judges from the Court of Appeals for the Ninth Circuit unanimously overturned the district court’s decision, reasoning that Qualcomm’s conduct was “hypercompetitive” but not “anticompetitive,” and therefore not a violation of antitrust law. To reach that result, the Ninth Circuit made the patent grant more powerful and antitrust law weaker than ever.

According to the Ninth Circuit, patent owners don’t have a duty to let anyone use what their patent covers, and therefore Qualcomm had no duty to license its SEPs to anyone. But that framing requires ignoring the promises Qualcomm made to license its SEPs on reasonable and non-discriminatory terms—promises that courts in this country and around the world have consistently enforced. It also means ignoring antitrust principles like the essential facilities doctrine, which limits the ability of a monopolist with hold-up power over an essential facility (like a port) to shut out rivals. Instead, the Ninth Circuit held, rather simplistically, that a duty to deal could arise only where a monopolist had previously provided access and then reversed its policy.

But even when Qualcomm restricted its licensing policies in critical ways, the Ninth Circuit found reasons to approve those restrictions. For example, Qualcomm stopped licensing its patents to chip manufacturers and started licensing them only to OEMs. This had a major benefit: it let Qualcomm charge a much higher royalty rate based on the high retail price of the end-user devices, like smartphones and tablets, that OEMs make and sell. If Qualcomm had continued to license to chip suppliers, its patents would be “exhausted” once the chips were sold to OEMs, extinguishing Qualcomm’s right to assert its patents and control how the chips were used.

Patent exhaustion is a century-old doctrine that protects the rights of consumers to use things they buy without getting the patent owner’s permission again and again. Patent exhaustion is important because it prevents price-gouging, but also because it protects space for innovation by letting people use things they buy freely, including to build innovations of their own. The doctrine thus helps patent law serve its underlying goal—promoting economic growth and innovation. In other words, the doctrine of exhaustion is baked into the patent grant; it is not optional. Nevertheless, the Ninth Circuit wholeheartedly approved of Qualcomm’s efforts to avoid exhaustion—even when that meant cutting off access to previous licensees (chip-makers) in ways that let Qualcomm charge far more in licensing fees than its SEPs could possibly have contributed to the retail value of the final product.

It makes no sense that Qualcomm could contract around a fundamental principle like patent exhaustion, but at the same time did not assume any antitrust duty to deal under these circumstances. Worse, it’s harmful for the economy, innovation, and consumers. Unfortunately, the kind of harm that antitrust law recognizes is limited to harm affecting “competition” or the “competitive process.” Antitrust law, at least as the Ninth Circuit interprets it, doesn’t do nearly enough to address the harm downstream consumers experience when they pay inflated prices for high-tech devices, and miss out on innovation that might have developed from fair, reasonable, and non-discriminatory licensing practices.

We hope the FTC sticks to its guns and asks the Ninth Circuit to rehear this case en banc and reconsider the panel’s decision. Otherwise, antitrust law will become an even weaker weapon against innovation-stifling conduct in technology markets.

Categories
border searches Intelwars Legal Analysis

Civil Rights and First Amendment Defenders Urge Court of Appeals to Require a Warrant for Border Device Searches

Last month, EFF, along with co-counsel ACLU and ACLU of Massachusetts, filed a brief in Alasaad v. Wolf urging the U.S. Court of Appeals for the First Circuit to require a warrant for searches of electronic devices at the border. In fiscal year 2019, border officers searched over 40,000 electronic devices, more than an eight-fold increase since 2012. Because of the significant privacy interests that travelers have in the digital data on their devices, we argued that the government’s warrantless, and usually suspicionless, searches and seizures of electronic devices violate the First and Fourth Amendments to the U.S. Constitution.

Seven amicus briefs were filed in support of our position:

  • Advancing Justice – Asian Law Caucus and law firm WilmerHale, on behalf of 24 civil rights organizations including the Council on American-Islamic Relations, the Center for Constitutional Rights, and the CLEAR Project, filed an amicus brief highlighting how the government’s border search policies disparately target members of the Arab, Middle Eastern, Muslim, and South Asian communities, and arguing that a warrant standard can help curtail discriminatory profiling.
  • The Constitutional Accountability Center filed an amicus brief discussing how the Fourth Amendment’s protection of personal papers from searches requires border officers to obtain a warrant or, at minimum, have reasonable suspicion before a search.
  • The Yale Media Freedom and Information Access Clinic and law firm Brown Rudnick, on behalf of 18 First Amendment legal scholars, filed an amicus brief focusing on the privacy dimension of free expression and explaining that the First Amendment requires a warrant for border searches of electronic devices.
  • The Harvard Cyberlaw Clinic, on behalf of the Harvard Immigration and Refugee Clinic, filed an amicus brief arguing that border device searches have profound chilling effects on free speech, which unduly impacts immigrant communities.
  • The Knight First Amendment Institute at Columbia University, the Reporters Committee for Freedom of the Press, and 12 media organizations filed an amicus brief that underscored the implications of electronic device searches at the border on the rights of journalists, and argued that a warrant is necessary under the First and Fourth Amendments.
  • The National Association of Criminal Defense Lawyers filed an amicus brief highlighting the impact of border device searches on criminal defense attorneys who often carry sensitive information relating to clients, and argued that a warrant is necessary under the Fourth and Sixth Amendments.
  • Law firm Covington & Burling, on behalf of the Center for Democracy and Technology, Brennan Center for Justice, R Street Institute, and TechFreedom, filed an amicus brief focusing on the intrusiveness of so-called “basic” searches and argued that the Fourth Amendment requires a warrant, or at least reasonable suspicion, for border searches of electronic devices.

EFF, ACLU, and ACLU of Massachusetts originally filed this lawsuit in September 2017 on behalf of 11 travelers, 10 U.S. citizens and one permanent resident, who have all suffered warrantless, suspicionless device searches due to the government’s policies.

In November 2019, the district court ruled that border officers must have reasonable suspicion that a device contains digital contraband for any search of the device’s digital content. As part of this ruling, the district court concluded that wholly suspicionless device searches at the border are unconstitutional. The government appealed on this issue, and we cross-appealed on the issue of whether border device searches actually require a warrant.

We are proud to see such a diverse array of organizations and individuals file briefs supporting a warrant standard for border searches of electronic devices. We anticipate that the First Circuit will hear our case later this year or in early 2021.

Categories
Artificial Intelligence & Machine Learning Intelwars Legal Analysis transparency

Victory! Court Orders CA Prisons to Release Race of Parole Candidates

In a win for transparency, a state court judge ordered the California Department of Corrections and Rehabilitation (CDCR) to disclose records regarding the race and ethnicity of parole candidates. This is also a win for innovation, because the plaintiffs will use this data to build new technology in service of criminal justice reform and racial justice.

In Voss v. CDCR, EFF represented a team of researchers (known as Project Recon) from Stanford University and University of Oregon who are attempting to study California parole suitability determinations using machine-learning models. This involves using automation to review over 50,000 parole hearing transcripts and identify various factors that influence parole determinations. Project Recon’s ultimate goal is to develop an AI tool that can identify parole denials that may have been influenced by improper factors as potential candidates for reconsideration. Project Recon’s work must account for many variables, including the race and ethnicity of individuals who appeared before the parole board.
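
As a rough illustration (not Project Recon’s actual pipeline), a minimal sketch of mining hearing transcripts for factors that correlate with outcomes might look like this, with toy transcripts and invented labels standing in for the real corpus:

```python
# Minimal sketch, assuming toy data. The transcripts, labels, and model
# choice below are illustrative assumptions, not Project Recon's methods.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

transcripts = [  # stand-ins for ~50,000 parole hearing transcripts
    "the inmate expressed remorse and completed vocational programming",
    "the panel noted a recent disciplinary violation and denied release",
    "letters of support and a stable parole plan were presented",
    "the district attorney opposed release citing the commitment offense",
]
granted = np.array([1, 0, 1, 0])  # hypothetical outcomes: 1 = parole granted

# Convert each transcript into word-frequency features and fit a simple model.
vec = TfidfVectorizer(ngram_range=(1, 2))
X = vec.fit_transform(transcripts)
model = LogisticRegression().fit(X, granted)

# The learned weights hint at which factors track grant/denial decisions;
# joining those factors with race and ethnicity data is what would let
# researchers test whether improper considerations influenced outcomes.
terms = vec.get_feature_names_out()
weights = model.coef_[0]
for i in np.argsort(weights)[-5:]:
    print(f"{terms[i]!r} -> {weights[i]:+.3f}")
```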

Project Recon is a promising example of how AI might be used to identify and correct racial bias in our criminal justice system.

In September 2018, Project Recon requested from CDCR race and ethnicity information of parole candidates. CDCR denied the request, claiming that the information was not subject to the California Public Records Act (CPRA). Instead, CDCR shuttled the researchers through its discretionary research review process, where they remained in limbo for nearly a year. Ultimately, the head of the parole board declined to support the team’s request because one of its members had previously published research critical of California’s parole process.

In June 2020, EFF filed a lawsuit on behalf of Project Recon alleging that CDCR violated the CPRA and the First Amendment. Soon after, our case was consolidated with a similar case, Brodheim v. CDCR. We moved for a writ of mandate ordering CDCR to disclose the race data.

In its opposition, CDCR claimed it was protecting the privacy of incarcerated people, and that race data constituted “criminal offender record information” and was therefore exempt from disclosure. EFF pointed out that the public interest in disclosure is high—especially since racial disparities in the criminal justice system are a national topic of conversation—and thus was not outweighed by the public interest in nondisclosure. EFF also argued that race data could not constitute “criminal offender record information” since race has nothing to do with someone’s criminal record, but rather is demographic information.

The court agreed. It reasoned that the public has a strong public interest in disclosure of race and ethnicity data of parole candidates:

[T]his case unquestionably involves a weighty public interest in disclosure, i.e., to shed light on whether the parole process is infected by racial or ethnic bias. The importance of that public interest is vividly highlighted by the current national focus on the role of race in the criminal justice system and in American society generally . . . . Disclosure insures that government activity is open to the sharp eye of public scrutiny.  

Accordingly, the court ordered CDCR to produce the requested records. Last week, CDCR declined to appeal the court’s decision and produced the records.

Apart from being a win for transparency and open government, this case also is important for racial justice. As we identified in our briefing, CDCR has a history of racial bias, which the U.S. Supreme Court and California appellate courts alike have recognized. That makes it all the more important for information about potential racial disparities in parole determinations to be open for the public to analyze and debate.

Moreover, this case is a win for beneficial AI innovation. In a world where AI is often proposed for harmful and biased uses, Project Recon is an example of AI for good. Rather than substitute for human decision-making, the AI that Project Recon is attempting to build would shed light on human decision-making by reviewing past decisions and identifying where bias may have played a role. This innovative use of technology to identify systemic biases, including racial disparities, is the type of AI use we should support and encourage.

Categories
Intelwars Legal Analysis privacy Security

EFF and ACLU Tell Federal Court that Forensic Software Source Code Must Be Disclosed

Can secret software be used to generate key evidence against a criminal defendant? In an amicus filed ten days ago with the United States District Court for the Western District of Pennsylvania, EFF and the ACLU of Pennsylvania explain that secret forensic technology is inconsistent with criminal defendants’ constitutional rights and the public’s right to oversee the criminal trial process. Our amicus in the case of United States v. Ellis also explains why source code, and other aspects of forensic software programs used in a criminal prosecution, must be disclosed in order to ensure that innocent people do not end up behind bars, or worse—on death row.

The Constitution guarantees anyone accused of a crime due process and a fair trial. Embedded in those foundational ideals is the Sixth Amendment right to confront the evidence used against you. As the Supreme Court has recognized, the Confrontation Clause’s central purpose was to ensure that evidence of a crime was reliable by subjecting it to rigorous testing and challenges. This means that defendants must be given enough information to allow them to examine and challenge the accuracy of evidence relied on by the government.

In addition, the public has a constitutional right of access to court proceedings. While this right is not absolute, it is clearly implicated here, where the government seeks to use secret software to generate evidence of criminal culpability.

In this case, Mr. Ellis was accused of violating a federal law prohibiting people who have been previously convicted of a felony from possessing a firearm (18 U.S.C. § 922(g)(1)). The weapon had not been found in Mr. Ellis’s possession, but was found in a car he was allegedly driving. Law enforcement officers retrieved a swab of a DNA mixture from the gun, which they submitted for analysis by the police forensic lab. The lab results were inconclusive as to whether Mr. Ellis could have contributed to the DNA in the mixture. The mixture sample was then sent to Cybergenetics, the owner of the probabilistic DNA software TrueAllele. Using TrueAllele, the company ran numerous variations of tests on the sample using different hypotheses to adjust the program settings, including alternative theories regarding the number of people whose DNA was in the mixture.

Prosecutors in the case seek to rely on the result of one particular analysis, based on the assumption that four people contributed to the DNA sample from the gun. The results of that analysis suggest that Mr. Ellis’s DNA was present on the gun. In response, Mr. Ellis’s attorney requested the source code for TrueAllele, but the government refused to disclose it, arguing that the information is protected as a trade secret.
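To see why the contributor-count assumption matters, consider a toy, single-locus likelihood-ratio calculation. This sketch is not TrueAllele’s model: the allele frequencies are invented, and real probabilistic genotyping also weighs peak heights, stutter, and allele dropout. It shows only how changing the hypothesized number of contributors changes the reported strength of the evidence.

```python
# Toy single-locus likelihood ratio: how the assumed number of mixture
# contributors changes the result. NOT TrueAllele's model; frequencies
# are invented and the match criterion is deliberately simplistic.
from itertools import combinations_with_replacement, product

FREQS = {"A": 0.3, "B": 0.5, "C": 0.2}  # invented allele frequencies
OBSERVED = frozenset({"A", "B"})        # alleles detected in the mixture
SUSPECT = frozenset({"A", "B"})         # suspect's genotype at this locus

def genotypes():
    """All unordered genotypes with Hardy-Weinberg probabilities."""
    for a1, a2 in combinations_with_replacement(sorted(FREQS), 2):
        p = FREQS[a1] * FREQS[a2] * (1 if a1 == a2 else 2)
        yield frozenset((a1, a2)), p

def prob_exact_match(n_unknowns, known=()):
    """P(known + n unknown contributors show exactly the observed alleles)."""
    total = 0.0
    base = frozenset().union(*known)
    for combo in product(list(genotypes()), repeat=n_unknowns):
        alleles = base.union(*(g for g, _ in combo))
        if alleles == OBSERVED:
            p = 1.0
            for _, gp in combo:
                p *= gp
            total += p
    return total

for n in (2, 3, 4):
    # Hp: suspect plus n-1 unknowns; Hd: n unknowns, suspect excluded.
    lr = prob_exact_match(n - 1, known=[SUSPECT]) / prob_exact_match(n)
    print(f"Assuming {n} contributors: LR = {lr:.2f}")
```

Because modeling choices like these move the bottom-line number, the defense’s ability to examine how the actual software makes them is exactly what is at stake.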

As EFF has previously pointed out, DNA analysis programs are no more immune to errors and bugs than any other software, and criminal defendants cannot be forced to take anyone’s word about the evidence used to imprison them. Independent examination of the source code of forensic software similar to TrueAllele has revealed mistakes and flaws that call into question the accuracy of these tools and their suitability for the criminal justice system. A defendant’s Sixth Amendment right to confrontation requires that they be provided with the information necessary to challenge and expose any material defects in the purported evidence of their guilt. In an exceptional case, a court could issue a protective order limiting disclosure to the defense team, but the default must be disclosure.

Without disclosure, we, as the public, cannot have confidence in a verdict against Mr. Ellis.

Categories
free speech Intelwars Legal Analysis social media surveillance

In Historic Opinion, Third Circuit Protects Public School Students’ Off-Campus Social Media Speech

The U.S. Court of Appeals for the Third Circuit issued an historic opinion in B.L. v. Mahanoy Area School District, upholding the free speech rights of public school students. The court adopted the position EFF urged in our amicus brief that the First Amendment prohibits disciplining public school students for off-campus social media speech.

B.L. was a high school student who had failed to make the varsity cheerleading squad and was placed on junior varsity instead. Out of frustration, she posted—over the weekend and off school grounds—a Snapchat selfie with text that said, among other things, “fuck cheer.” One of her Snapchat connections took a screenshot of the “snap” and shared it with the cheerleading coaches, who suspended B.L. from the J.V. squad for one year. She and her parents sought administrative relief to no avail, and eventually sued the school district with the help of the ACLU of Pennsylvania.

In its opinion protecting B.L.’s social media speech under the First Amendment, the Third Circuit issued three key holdings.

Social Media Post Was “Off-Campus” Speech

First, the Third Circuit held that B.L.’s post was indeed “off-campus” speech. The court recognized that the question of whether student speech is “on-campus” or “off-campus” is a “tricky” one whose “difficulty has only increased after the digital revolution.” Nevertheless, the court concluded that “a student’s online speech is not rendered ‘on campus’ simply because it involves the school, mentions teachers or administrators, is shared with or accessible to students, or reaches the school environment.”

Therefore, B.L.’s Snapchat post was “off-campus” speech because she “created the snap away from campus, over the weekend, and without school resources, and she shared it on a social media platform unaffiliated with the school.”

The court quoted EFF’s amicus brief to highlight why protecting off-campus social media speech is so critical:

Students use social media and other forms of communication with remarkable frequency. Sometimes the conversation online is a high-minded one, with students “participating in issue- or cause-focused groups, encouraging other people to take action on issues they care about, and finding information on protests or rallies.”

Vulgar Off-Campus Social Media Speech is Not Punishable

Second, the Third Circuit reaffirmed its prior holding that public school officials’ authority to punish students for vulgar, lewd, profane, or otherwise offensive speech, recognized in the Supreme Court’s opinion in Bethel School District No. 403 v. Fraser (1986), does not extend to off-campus speech.

The court held that the fact that B.L.’s punishment related to an extracurricular activity (cheerleading) was immaterial. The school district had argued that students have “no constitutionally protected property right to participate in extracurricular activities.” But the court expressed concern about any form of punishment being “used to control students’ free expression in an area traditionally beyond regulation.”

Off-Campus Social Media Speech That “Substantially Disrupts” the On-Campus Environment is Not Punishable

Third, the Third Circuit finally answered a question left open by its prior decisions: whether public school officials may punish students for off-campus speech that is likely to “substantially disrupt” the on-campus environment. School administrators often make this argument based on a misinterpretation of the U.S. Supreme Court’s opinion in Tinker v. Des Moines Independent Community School District (1969).

Tinker involved only on-campus speech: students wearing black armbands on school grounds, during school hours, to protest the Vietnam War. The Supreme Court held that the school violated the student protestors’ First Amendment rights by suspending them for refusing to remove the armbands because the students’ speech did not “materially and substantially disrupt the work and discipline of the school,” and school officials did not reasonably forecast such disruption.

Tinker was a resounding free speech victory when it was decided, reversing the previously widespread assumption that school administrators had wide latitude to punish student speech on campus. Nevertheless, lower courts have more recently read Tinker as a sword against student speech rather than a shield protecting it, allowing schools to punish student off-campus speech they deem “disruptive.”

The Third Circuit unequivocally rejected reading Tinker as creating a pathway to punish student off-campus speech, such as B.L.’s Snapchat post. The court concisely defined “off-campus” speech as “speech that is outside school-owned, -operated, or -supervised channels and that is not reasonably interpreted as bearing the school’s imprimatur.”

The Third Circuit noted that EFF was the only amicus to argue that the court should reach this holding (p. 22 n.8). The court reasoned that “social media has continued its expansion into every corner of modern life,” and that it was time to end the “legal uncertainty” that “in this context creates unique problems.” The court stated, “Obscure lines between permissible and impermissible speech have an independent chilling effect on speech.”

Possible Limits on Student Social Media Speech

The Third Circuit clarified that schools may still punish on-campus disruption caused by an off-campus social media post: a “student who, on campus, shares or reacts to controversial off-campus speech in a disruptive manner” may be disciplined. That is, a “school can punish any disruptive speech or expressive conduct within the school context that meets” the Supreme Court’s demanding standards for actual and serious disruption of the school day.

Thus, “a student who opens his cellphone and shows a classmate a Facebook post from the night before” may be punished if that post, by virtue of being affirmatively shared on campus by the original poster, “substantially disrupts” the on-campus environment. Similarly, if other students act disruptively on campus in response to that Facebook post, they may be punished—but not the original poster if he himself did not share the post on campus.

Additionally, the Third Circuit “reserv[ed] for another day the First Amendment implications of off-campus student speech that threatens violence or harasses others,” an issue that was not presented in this case.

Supreme Court Review Possible

The Third Circuit’s opinion is historic because the court is the first federal appellate court to hold that Tinker’s substantial disruption exception does not apply to off-campus speech.

Other circuits, citing Tinker, have upheld the regulation of off-campus speech in various contexts and under a variety of tests, such as when it is “reasonably foreseeable” that off-campus speech will reach the school environment, or when off-campus speech has a sufficient “nexus” to a school’s “pedagogical interests.”

The Third Circuit rejected all these approaches. The court argued that its “sister circuits have adopted tests that sweep far too much speech into the realm of schools’ authority.” The court was critical of these approaches because they “subvert[] the longstanding principle that heightened authority over student speech is the exception rather than the rule.”

Because there is a circuit split on this important First Amendment student speech issue, it is possible that the school district will seek certiorari and that the Supreme Court will grant review. Until then, we can celebrate this historic win for public school students’ free speech rights.
