
VIDEO: Man slaps face of French President Emmanuel Macron; security quickly takes him down

French President Emmanuel Macron was slapped in the face by a man he greeted on the other side of a security barrier during a Tuesday visit to the southeastern region of the country, Reuters reported.

What are the details?

Video of the incident shows Macron, dressed in a white dress shirt, extending his arm toward a man dressed in a green shirt. Suddenly the man slaps Macron’s left cheek, and security swiftly takes the culprit down.


Video: Man slaps President Emmanuel Macron (“Hombre abofetea al presidente Emmanuel Macron”), via youtu.be

Reuters said prior to delivering the slap, the man could be heard shouting “A Bas La Macronie,” which means “Down with Macronia.” The outlet added that the man also shouted “Montjoie Saint Denis” — the battle cry of the French armies when the country was still a monarchy.

While Macron was ushered from the spot of the slap, he remained near the crowd for a few more seconds and also appeared to talk to someone on the other side of the barrier, Reuters said.

Two 28-year-old men — the man who slapped Macron and another accompanying him — were in police custody at 1:45 p.m. for alleged violence against a person holding public authority, CNN reported, citing the prosecutor’s office of the city of Valence.

The identity and motive of the man who slapped Macron were unclear, Reuters reported.

Macron’s visit involved meeting restaurateurs and students to discuss life returning to normal amid COVID-19, Reuters also said. Bars and restaurants will be able to reopen to indoor customers after seven months of closure, BBC News reported, adding that starting Wednesday France’s overnight curfew is being pushed back from 9 p.m. to 11 p.m.

BBC News also said Macron had just visited a hotel school in Tain-l’Hermitage and that his visit to the area was set to continue Tuesday.

Other politicians react

Politicians have denounced the slap, BBC News reported.

Prime Minister Jean Castex told the National Assembly that while democracy means debate and legitimate disagreement, “it must never in any case mean violence, verbal aggression, and even less physical attack,” the network said.

Far-left leader Jean-Luc Mélenchon tweeted his “solidarity with the president” after the incident, BBC News said, while far-right leader Marine Le Pen — a prominent Macron rival — said that “while democratic debate can be bitter, it can never tolerate physical violence.”


Security Tips for Online LGBTQ+ Dating

Dating is risky. Aside from the typical worries of possible rejection or lack of romantic chemistry, LGBTQIA people often have added safety considerations to keep in mind. Sometimes staying in the proverbial closet is a matter of personal security. Even if someone is open with their community about being LGBTQ+, they can be harmed by oppressive governments, bigoted law enforcement, and individuals with hateful beliefs. So here’s some advice for staying safe while online dating as an LGBTQIA+ person:

Step One: Threat Modeling

The first step is making a personal digital security plan. You should start with looking at your own security situation from a high level. This is often called threat modeling and risk assessment. Simply put, this is taking inventory of the things you want to protect and what adversaries or risks you might be facing. In the context of online dating, your protected assets might include details about your sexuality, gender identity, contacts of friends and family, HIV status, political affiliation, etc. 

Let’s say that you want to join a dating app, chat over the app, exchange pictures, meet someone safely, and avoid stalking and harassment. Threat modeling is how you assess what you want to protect and from whom. 

We touch in this post on a few considerations for people in countries where homosexuality is criminalized, which may include targeted harassment by law enforcement. But this guide is by no means comprehensive. Refer to materials by LGBTQ+ organizations in those countries for specific tips on your threat model.

Securely Setting Up Dating Profiles

When making a new dating account, make sure to use a unique email address to register. Often you will need to verify the registration process through that email account, so it’s likely you’ll need to provide a legitimate address. Consider creating an email address strictly for that dating app. Oftentimes there are ways to discover if an email address is associated with an account on a given platform, so using a unique one can prevent others from potentially knowing you’re on that app. Alternatively, you might use a disposable temporary email address service. But if you do so, keep in mind that you won’t be able to access it in the future, such as if you need to recover a locked account. 

The same logic applies to using phone numbers when registering for a new dating account. Consider using a temporary or disposable phone number. While this can be more difficult than using your regular phone number, there are plenty of free and paid virtual telephone services available that offer secondary phone numbers. For example, Google Voice is a service that offers a secondary phone number attached to your normal one, registered through a Google account. If your higher security priority is to abstain from giving data to a privacy-invasive company like Google, a “burner” pay-as-you-go phone service like Mint Mobile is worth checking out. 

When choosing profile photos, be mindful of images that might accidentally give away your location or identity. Even the smallest clues in an image can expose its location. Some people use pictures with relatively empty backgrounds, or taken in places they don’t go to regularly.

Make sure to check out the privacy and security sections in your profile settings menu. You can usually configure how others can find you, whether you’re visible to others, whether location services are on (that is, when an app is able to track your location through your phone), and more. Turn off anything that gives away your location or other information, and later you can selectively decide which features to reactivate, if any. More mobile phone privacy information can be found on this Surveillance Self Defense guide.

Communicating via Phone, Email, or In-App Messaging

Generally speaking, using an end-to-end encrypted messaging service is the best way to go for secure texting. For some options, like Signal or WhatsApp, you may be able to use a secondary phone number to keep your “real” phone number private.

For phone calls, you may want to use a virtual phone service that allows you to screen calls, use secondary phone numbers, block numbers, and more. These aren’t always free, but research can bring up “freemium” versions that give you free access to limited features.

Be wary of messaging features within apps that offer deletion options or disappearing messages, like Snapchat. Many images and messages sent through these apps are never truly deleted, and may still exist on the company’s servers. And even if you send someone a message that self-deletes or notifies you if they take a screenshot, that person can still take a picture of it with another device, bypassing any notifications. Also, Snapchat has a map feature that shows live public posts around the world as they go up. With diligence, someone could determine your location by tracing any public posts you make through this feature.

Sharing Photos

If the person you’re chatting with has earned a bit of your trust and you want to share pictures with them, consider not just what they can see about you in the image itself, as described above, but also what they can learn about you by examining data embedded in the file.

EXIF metadata lives inside an image file and describes the geolocation where it was taken, the device it was made with, the date, and more. Although some apps have gotten better at automatically withholding EXIF data from uploaded images, you still should manually remove it from any images you share with others, especially if you send them directly over phone messaging.

One quick way is to send the image to yourself on Signal messenger, which automatically strips EXIF data. When you search for your own name in contacts, a “Note to Self” feature will come up, giving you a chat screen where you can send things to yourself:

Screenshot of Signal’s Note to Self feature

Before sharing your photo, you can verify the results by using a tool to view EXIF data on an image file, before and after removing EXIF data.
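If you want to do this check yourself, here is a minimal Python sketch using the Pillow imaging library (the filenames are hypothetical and you would substitute your own). It prints whatever EXIF tags a photo carries, then saves a copy rebuilt from the raw pixels only, which leaves the metadata behind. Treat it as an illustration of the general technique, not a guarantee for every image format.

```python
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("photo.jpg")  # hypothetical filename

# Print whatever EXIF tags the file carries (camera model, date, GPS info, ...)
for tag_id, value in img.getexif().items():
    print(TAGS.get(tag_id, tag_id), value)

# Rebuild the image from raw pixel data only, so no metadata is carried over,
# then save the clean copy for sharing.
clean = Image.new(img.mode, img.size)
clean.putdata(list(img.getdata()))
clean.save("photo_clean.jpg")
```

Running the same loop on photo_clean.jpg afterwards should print nothing.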

For some people, it might be valuable to use a watermarking app to add your username or some kind of signature to images. This can verify who you are to others and prevent anyone from using your images to impersonate you. There are many free and mostly-free options in iPhone and Android app stores. Consider a lightweight version that allows you to easily place text on an image and lets you screenshot the result. Keep in mind that watermarking a picture is a quick way to identify yourself, which in itself is a trade-off.

Watermark example overlaid on an image of the LGBTQ+ Pride flag
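If you prefer the do-it-yourself route, a simple text watermark of the kind described above can also be added with a few lines of Python and Pillow. This is a rough sketch with a hypothetical filename and handle, using Pillow’s default font rather than anything fancy.

```python
from PIL import Image, ImageDraw, ImageFont

img = Image.open("photo.jpg").convert("RGBA")       # hypothetical filename
overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))

draw = ImageDraw.Draw(overlay)
font = ImageFont.load_default()
# Draw a semi-transparent handle in the lower-left corner of the image.
draw.text((12, img.height - 24), "@my_handle", fill=(255, 255, 255, 160), font=font)

watermarked = Image.alpha_composite(img, overlay).convert("RGB")
watermarked.save("photo_watermarked.jpg")
```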

Sexting Safely

Much of what we’ve already gone over will step up your security when it comes to sexting, but here are some extra precautions:

Seek clearly communicated consent between you and romantic partners about how intimate pictures can be shared or saved. This is great non-technical security at work. If anyone else is in an image you want to share, make sure you have their consent as well. Also, be thoughtful as to whether or not to include your face in any images you share.

As we mentioned above, your location can be determined by public posts you make and Snapchat’s map application.

For video chatting with a partner, consider a service like Jitsi that allows temporary rooms, requires no registration, and is designed with privacy in mind. Many other services are not built with privacy in mind and require account registration, for example.

Meeting Someone AFK

Say you’ve taken all the above precautions, someone online has gained your trust, and you want to meet them away-from-keyboard and in real life. Always meet first somewhere public and busy with other people. Even better, meet in an area more likely to be accepting of LGBTQIA+ people. Tell a friend beforehand all the details about where you’re going and who you are meeting, and agree on a time at which you promise to check back in to let them know you’re OK.

If you’re living in one of the 69 countries where homosexuality is illegal and criminalized, make sure to check in with local advocacy groups about your area. Knowing your rights as a citizen will help keep you safe if you’re stopped by law enforcement.

Privacy and Security is a Group Effort

Although the world is often hostile to non-normative expressions of love and identity, your personal security, online and off, is much better supported when you include the help of others that you trust. Keeping each other safe, accountable, and cared for gets easier when you have more people involved. A network is always stronger when every node on it is fortified against potential threats. 

Happy Pride Month—keep each other safe.


Van Buren is a Victory Against Overbroad Interpretations of the CFAA, and Protects Security Researchers

The Supreme Court’s Van Buren decision today overturned a dangerous precedent and clarified the notoriously ambiguous meaning of “exceeding authorized access” in the Computer Fraud and Abuse Act, the federal computer crime law that’s been misused to prosecute beneficial and important online activity. 

The decision is a victory for all Internet users, as it affirmed that online services cannot use the CFAA’s criminal provisions to enforce limitations on how or why you use their service, including for purposes such as collecting evidence of discrimination or identifying security vulnerabilities. It also rejected the use of troubling physical-world analogies and legal theories to interpret the law, which in the past have resulted in some of its most dangerous abuses.

The Van Buren decision is especially good news for security researchers, whose work discovering security vulnerabilities is vital to the public interest but often requires accessing computers in ways that contravene terms of service. Under the Department of Justice’s reading of the law, the CFAA allowed criminal charges against individuals for any website terms of service violation. But a majority of the Supreme Court rejected the DOJ’s interpretation. And although the high court did not narrow the CFAA as much as EFF would have liked, leaving open the question of whether the law requires circumvention of a technological access barrier, it provided good language that should help protect researchers, investigative journalists, and others. 

The CFAA makes it a crime to “intentionally access[] a computer without authorization or exceed[] authorized access, and thereby obtain[] . . . information from any protected computer,” but does not define what authorization means for purposes of exceeding authorized access. In Van Buren, a former Georgia police officer was accused of taking money in exchange for looking up a license plate in a law enforcement database. This was a database he was otherwise entitled to access, and Van Buren was charged with exceeding authorized access under the CFAA. The Eleventh Circuit analysis had turned on the computer owner’s unilateral policies regarding use of its networks, allowing private parties to make EULA, TOS, or other use policies criminally enforceable. 

The Supreme Court rightly overturned the Eleventh Circuit, and held that exceeding authorized access under the CFAA does not encompass “violations of circumstance-based access restrictions on employers’ computers.” Rather, the statute’s prohibition is limited to someone who “accesses a computer with authorization but then obtains information located in particular areas of the computer—such as files, folders, or databases—that are off limits to him.” The Court adopted a “gates-up-or-down” approach: either you are entitled to access the information or you are not. If you need to break through a digital gate to get in, entry is a crime, but if you are allowed through an open gateway, it’s not a crime to be inside.

This means that private parties’ terms of service limitations on how you can use information, or for what purposes you can access it, are not criminally enforced by the CFAA. For example, if you can look at housing ads as a user, it is not a hacking crime to pull them for your bias-in-housing research project, even if the TOS forbids it. Van Buren is really good news for port scanning, for example: so long as the computer is open to the public, you don’t have to worry about the conditions for use to scan the port. 

While the decision was centered around the interpretation of the statute’s text, the Court bolstered its conclusion with the policy concerns raised by the amici, including a brief EFF filed on behalf of computer security researchers and organizations that employ and support them. The Court’s explanation is worth quoting in depth:

If the “exceeds authorized access” clause criminalizes every violation of a computer-use policy, then millions of otherwise law-abiding citizens are criminals. Take the workplace. Employers commonly state that computers and electronic devices can be used only for business purposes. So on the Government’s reading of the statute, an employee who sends a personal e-mail or reads the news using her work computer has violated the CFAA. Or consider the Internet. Many websites, services, and databases …. authorize a user’s access only upon his agreement to follow specified terms of service. If the “exceeds authorized access” clause encompasses violations of circumstance-based access restrictions on employers’ computers, it is difficult to see why it would not also encompass violations of such restrictions on website providers’ computers. And indeed, numerous amici explain why the Government’s reading [would]  criminalize everything from embellishing an online-dating profile to using a pseudonym on Facebook.

This analysis shows the Court recognized the tremendous danger of an overly broad CFAA, and explicitly rejected the Government’s arguments for retaining wide powers, tempered only by their prosecutorial discretion. 

Left Unresolved: Whether CFAA Violations Require Technical Access Limitations 

The Court’s decision was limited in one important respect. In a footnote, the Court left as an open question if the enforceable access restriction meant only “technological (or ‘code-based’) limitations on access, or instead also looks to limits contained in contracts or policies,” meaning that the opinion neither adopted nor rejected either path. EFF has argued in courts and legislative reform efforts for many years that it’s not a computer hacking crime without hacking through a technological defense. 

This footnote is a bit odd, as the bulk of the majority opinion seems to point toward the law requiring someone to defeat technological limitations on access, and throws shade at criminalizing TOS violations. In most cases, the scope of your access once on a computer is defined by technology, such as an access control list or a requirement to reenter a password. Professor Orin Kerr suggested that this may have been a necessary limitation to build the six-justice majority.

Later in the Van Buren opinion, the Court rejected a Government argument that a rule against “using a confidential database for a non-law-enforcement purpose” should be treated as a criminally enforceable access restriction, different from “using information from the database for a non-law-enforcement purpose” (emphasis original). This makes sense under the “gates-up-or-down” approach adopted by the Court. Together with the policy issues the Court acknowledged regarding enforcing terms of service quoted above, this helps us understand the limitation footnote, suggesting cleverly writing a TOS will not easily turn a conditional rule on why you can access, or what you can do with information later, into a criminally enforceable access restriction.

Nevertheless, leaving the question open means that we will have to litigate whether and under what circumstance a contract or written policy can amount to an access restriction in the years to come. For example, in Facebook v. Power Ventures, the Ninth Circuit found that a cease and desist letter removing authorization was sufficient to create a CFAA violation for later access, even though a violation of the Facebook terms alone was not. Service providers will likely argue that this is the sort of non-technical access restriction that was left unresolved by Van Buren. 

Court’s Narrow CFAA Interpretation Should Help Security Researchers

Even though the majority opinion left this important CFAA question unresolved, the decision still offers plenty of language that will be helpful for later cases on the scope of the statute. That’s because the Van Buren majority’s focus on the CFAA’s technical definitions, and the types of computer access that the law restricts, should provide guidance to lower courts that narrows the law’s reach.

This is a win because broad CFAA interpretations have in the past often deterred or chilled important security research and investigative journalism. The CFAA put these activities in legal jeopardy, in part, because courts often struggle with using non-digital legal concepts and physical analogies to interpret the statute. Indeed, one of the principal disagreements between the Van Buren majority and dissent is whether the CFAA should be interpreted based on physical property law doctrines, such as trespass and theft.

The majority opinion ruled that, in principle, computer access is different from the physical world precisely because the CFAA contains so many technical terms and definitions. “When interpreting statutes, courts take note of terms that carry ‘technical meaning[s],’” the majority wrote. 

The rule is particularly true for the CFAA because it focuses on malicious computer use and intrusions, the majority wrote. For example, the term “access” in the context of computer use has its own specific, well established meaning: “In the computing context, ‘access’ references the act of entering a computer ‘system itself’ or a particular ‘part of a computer system,’ such as files, folders, or databases.” Based on that definition, the CFAA’s “exceeding authorized access” restriction should be limited to prohibiting “the act of entering a part of the system to which a computer user lacks access privileges.” 

The majority also recognized that the portions of the CFAA that define damage and loss are premised on harm to computer files and data, rather than general non-digital harm such as trespassing on another person’s property: “The statutory definitions of ‘damage’ and ‘loss’ thus focus on technological harms—such as the corruption of files—of the type unauthorized users cause to computer systems and data,” the Court wrote. This is important because loss and damage are prerequisites to civil CFAA claims, and the ability of private entities to enforce the CFAA has been a threat that deters security research when companies might rather their vulnerabilities remain unknown to the public. 

Because the CFAA’s definitions of loss and damages focus on harm to computer files, systems, or data, the majority wrote that they “are ill fitted, however, to remediating ‘misuse’ of sensitive information that employees may permissibly access using their computers.”

The Supreme Court’s Van Buren decision rightly limits the CFAA’s prohibition on “exceeding authorized access” to prohibiting someone from accessing particular computer files, services, or other parts of the computer that are otherwise off-limits to them. And the Court’s overturning of the Eleventh Circuit decision that permitted CFAA liability based on someone violating a website’s terms of service or an employer’s computer use restrictions ensures that lots of important, legitimate computer use is not a crime.

But there is still more work to be done to ensure that computer crime laws are not misused against researchers, journalists, activists, and everyday internet users. As longtime advocates against overbroad interpretations of the CFAA, EFF will continue to lead efforts to push courts and lawmakers to further narrow the CFAA and similar state computer crime laws so they can no longer be misused.


Supreme Court Overturns Overbroad Interpretation of CFAA, Protecting Security Researchers and Everyday Users

EFF has long fought to reform vague, dangerous computer crime laws like the CFAA. We’re gratified that the Supreme Court today acknowledged that overbroad application of the CFAA risks turning nearly any user of the Internet into a criminal based on arbitrary terms of service. We remember the tragic and unjust results of the CFAA’s misuse, such as the death of Aaron Swartz, and we will continue to fight to ensure that computer crime laws no longer chill security research, journalism, and other novel and interoperable uses of technology that ultimately benefit all of us.

EFF filed briefs both encouraging the Court to take today’s case and urging it to make clear that violating terms of service is not a crime under the CFAA. In the first, filed alongside the Center for Democracy and Technology and New America’s Open Technology Institute, we argued that Congress intended to outlaw computer break-ins that disrupted or destroyed computer functionality, not anything that the service provider simply didn’t want to have happen. In the second, filed on behalf of computer security researchers and organizations that employ and support them, we explained that the broad interpretation of the CFAA puts computer security researchers at legal risk for engaging in socially beneficial security testing through standard security research practices, such as accessing publicly available data in a manner beneficial to the public yet prohibited by the owner of the data. 

Today’s win is an important victory for users everywhere. The Court rightly held that exceeding authorized access under the CFAA does not encompass “violations of circumstance-based access restrictions on employers’ computers.” Thus, “an individual ‘exceeds authorized access’ when he accesses a computer with authorization but then obtains information located in particular areas of the computer— such as files, folders, or databases—that are off limits to him.” Rejecting the Government’s reading allowing CFAA charges for any website terms of service violation, the Court adopted a “gates-up-or-down” approach: either you are entitled to access the information or you are not. This means that private parties’ terms of service limitations on how you can use information, or for what purposes you can access it, are not criminally enforced by the CFAA.


Fighting Disciplinary Technologies

An expanding category of software, apps, and devices is normalizing cradle-to-grave surveillance in more and more aspects of everyday life. At EFF we call them “disciplinary technologies.” They typically show up in the areas of life where surveillance is most accepted and where power imbalances are the norm: in our workplaces, our schools, and in our homes.

At work, employee-monitoring “bossware” puts workers’ privacy and security at risk with invasive time-tracking and “productivity” features that go far beyond what is necessary and proportionate to manage a workforce. At school, programs like remote proctoring and social media monitoring follow students home and into other parts of their online lives. And at home, stalkerware, parental monitoring “kidware” apps, home monitoring systems, and other consumer tech monitor and control intimate partners, household members, and even neighbors. In all of these settings, subjects and victims often do not know they are being surveilled, or are coerced into it by bosses, administrators, partners, or others with power over them.

Disciplinary technologies are often marketed for benign purposes: monitoring performance, confirming compliance with policy and expectations, or ensuring safety. But in practice, these technologies are non-consensual violations of a subject’s autonomy and privacy, usually with only a vague connection to their stated goals (and with no evidence they could ever actually achieve them). Together, they capture different aspects of the same broader trend: the appearance of off-the-shelf technology that makes it easier than ever for regular people to track, control, and punish others without their consent.

The application of disciplinary technologies does not meet standards for informed, voluntary, meaningful consent. In workplaces and schools, subjects might face firing, suspension, or other severe punishment if they refuse to use or install certain software—and a choice between invasive monitoring and losing one’s job or education is not a choice at all. Whether the surveillance is happening on a workplace- or school-owned device versus a personal one is immaterial to how we think of disciplinary technology: privacy is a human right, and egregious surveillance violates it regardless of whose device or network it’s happening on.

And even when its victims might have enough power to say no, disciplinary technology seeks a way to bypass consent. Too often, monitoring software is deliberately designed to fool the end-user into thinking they are not being watched, and to thwart them if they take steps to remove it. Nowhere is this more true than with stalkerware and kidware—which, more often than not, are the exact same apps used in different ways.

There is nothing new about disciplinary technology. Use of monitoring software in workplaces and educational technology in schools, for example, has been on the rise for years. But the pandemic has turbo-charged the use of disciplinary technology on the premise that, if in-person monitoring is not possible, ever-more invasive remote surveillance must take its place. This group of technologies and the norms it reinforces are becoming more and more mainstream, and we must address them as a whole.

To determine the extent to which certain software, apps, and devices fit under this umbrella, we look at a few key areas:

The surveillance is the point. Disciplinary technologies share similar goals. The privacy invasions from disciplinary tech are not accidents or externalities: the ability to monitor others without consent, catch them in the act, and punish them is a selling point of the system. In particular, disciplinary technologies tend to create targets and opportunities to punish them where none existed before.

This distinction is particularly salient in schools. Some educational technology, while inviting in third parties and collecting student data in the background, still serves clear classroom or educational purposes. But when the stated goal is affirmative surveillance of students—via face recognition, keylogging, location tracking, device monitoring, social media monitoring, and more—we look at that as a disciplinary technology.

Consumer and enterprise audiences. Disciplinary technologies are typically marketed to and used by consumers and enterprise entities in a private capacity, rather than the police, the military, or other groups we traditionally associate with state-mandated surveillance or punishment. This is not to say that law enforcement and the state do not use technology for the sole purpose of monitoring and discipline, or that they always use it for acceptable purposes. What disciplinary technologies do is extend that misuse.

With the wider promotion and acceptance of these intrusive tools, ordinary citizens and the private institutions they rely on increasingly deputize themselves to enforce norms and punish deviations. Our workplaces, schools, homes, and neighborhoods are filled with cameras and microphones. Our personal devices are locked down to prevent us from countermanding the instructions that others have inserted into them. Citizens are urged to become police, in a digital world increasingly outfitted for the needs of a future police state.

Discriminatory impact. Disciplinary technologies disproportionately hurt marginalized groups. In the workplace, the most dystopian surveillance is used on the workers with the least power. In schools, programs like remote proctoring disadvantage disabled students, Black and brown students, and students without access to a stable internet connection or a dedicated room for test-taking. Now, as schools receive COVID relief funding, surveillance vendors are pushing expensive tools that will disproportionately discriminate against the students already most likely to be hardest hit by the pandemic. And in the home, it is most often (but certainly not exclusively) women, children, and the elderly who are subject to the most abusive non-consensual surveillance and monitoring.

And in the end, it’s not clear that disciplinary technologies even work for their advertised uses. Bossware does not conclusively improve business outcomes, and instead negatively affects employees’ job satisfaction and commitment. Similarly, test proctoring software fails to accurately detect or prevent cheating, instead producing rampant false positives and overflagging. And there’s little to no independent evidence that school surveillance is an effective safety measure, but plenty of evidence that monitoring students and children decreases perceptions of safety, equity, and support, negatively affects academic outcomes, and can have a chilling effect on development that disproportionately affects minoritized groups and young women. If the goal is simply to use surveillance to give authority figures even more power, then disciplinary technology could be said to “work”—but at great expense to its unwilling targets, and to society as a whole.

The Way Forward

Fighting just one disciplinary technology at a time will not work. Each use case is another head of the same Hydra that reflects the same impulses and surveillance trends. If we narrowly fight stalkerware apps but leave kidware and bossware in place, the fundamental technology will still be available to those who wish to abuse it with impunity. And fighting student surveillance alone is untenable when scholarly bossware can still leak into school and academic environments.

The typical rallying cries around user choice, transparency, and strict privacy and security standards are not complete remedies when the surveillance is the consumer selling point. Fixing the spread of disciplinary technology needs stronger medicine. We need to combat the growing belief, funded by disciplinary technology’s makers, that spying on your colleagues, students, friends, family, and neighbors through subterfuge, coercion, and force is somehow acceptable behavior for a person or organization. We need to show how flimsy disciplinary technologies’ promises are; how damaging its implementations can be; and how, for every supposedly reasonable scenario its glossy advertising depicts, the reality is that misuse is the rule, not the exception.

We’re working at EFF to craft solutions to the problems of disciplinary technology, from demanding that anti-virus companies and app stores recognize spyware more explicitly, to pushing companies to design for abuse cases, to exposing the misuse of surveillance technology in our schools and on our streets. Tools that put machines in power over ordinary people are a sickening reversal of how technology should work. It will take technologists, consumers, activists, and the law to put it right.


How Your DNA—or Someone Else’s—Can Send You to Jail

Although DNA is individual to you—a “fingerprint” of your genetic code—DNA samples don’t always tell a complete story. The DNA samples used in criminal prosecutions are generally of low quality, making them particularly complicated to analyze. They are not very concentrated, not very complete, or are a mixture of multiple individuals’ DNA—and often, all of these conditions are true. If a DNA sample is like a fingerprint, analyzing mixed DNA samples in criminal prosecutions can often be like attempting to isolate a single person’s print from a doorknob of a public building after hundreds of people have touched it. Despite the challenges in analyzing these DNA samples, prosecutors frequently introduce those analyses in trials, using tools that have not been reviewed and jargon that can mislead the jury—giving a false sense of scientific certainty to a very uncertain process. This is why it is essential that any DNA analysis tool’s source code is made available for evaluation. It is critical to determine whether the software is reliable enough to be used in the legal system, and what weight its results should be given.

A Breakdown of DNA Data

To understand why DNA software analyses can be so misleading, it helps to know a tiny bit about how it works. To start, DNA sequences are commonly called genes. A more generic way to refer to a specific location in the gene sequence is a “locus” (plural “loci”). The variants of a given gene or of the DNA found at a particular locus are called “alleles.” To oversimplify, if a gene is like a highway, the numbered exits are loci, and alleles are the specific towns at each exit.


Forensic DNA analysis typically focuses on around 13 to 20 loci and the allele present at each locus, making up a person’s DNA profile. By looking at a sufficient number of loci, whose alleles vary across the population, a kind of fingerprint can be established. Put another way, knowing the specific towns and exits a driver drove past can also help you figure out which highway they drove on.
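To make the “fingerprint” intuition concrete, here is a toy back-of-the-envelope calculation in Python. The allele frequencies are invented for illustration, and real forensic statistics involve corrections this sketch ignores; the point is only that multiplying modest per-locus frequencies across many loci quickly yields a very small random match probability.

```python
# Invented allele frequencies for a handful of loci (not real population data).
# Under simple Hardy-Weinberg assumptions, a heterozygous genotype (a, b) has
# frequency 2 * p_a * p_b, and a homozygous genotype (a, a) has frequency p_a ** 2.
profile = [
    (0.08, 0.11),
    (0.05, 0.20),
    (0.12, 0.12),   # homozygous at this locus
    (0.07, 0.15),
]

random_match_probability = 1.0
for p_a, p_b in profile:
    genotype_freq = p_a ** 2 if p_a == p_b else 2 * p_a * p_b
    random_match_probability *= genotype_freq

print(f"Across {len(profile)} loci: about 1 in {1 / random_match_probability:,.0f}")
# Even four invented loci give roughly 1 in 9 million; the 13 to 20 loci used
# in real profiles push the number far lower.
```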

To figure out the alleles present in a DNA sample, a scientist chops the DNA into different alleles, then uses an electric charge to draw it through a gel in a method called electrophoresis. Different alleles will travel at different rates, and the scientist can measure how far each one traveled and look up which allele corresponds to that length. The DNA is also stained with a dye, so that the more of it there is, the darker that blob will be on the gel.

Analysts infer what alleles are present based on how far they traveled through the gel, and deduce what amounts are present based on how dark the band is—which can work well in an untainted, high quality sample. Generally, the higher the concentration of cells from an individual and the less contaminated the sample by any other person’s DNA, the more accurate and reliable the generated DNA profile.

The Difficulty of Analyzing DNA Mixtures

Our DNA is found in all of our cells. The more cells we shed, the higher the concentration of our DNA that can be found, which generally also means more accuracy from DNA testing. However, our DNA can also be transferred from one object to another. So it’s possible that your DNA can be found on items you’ve never had contact with or at locations you’ve never been. For example, if you’re sitting in a doctor’s waiting room and scratch your face, your DNA may be found on the magazines on a table next to you that you never flipped through. Your DNA left on a jacket you lent a friend can transfer onto items they brush by or at locations they travel to.

Given the ease with which DNA is deposited, it is no surprise that DNA samples from crime scenes are often a mixture of DNA from multiple individuals, or “donors.” Investigators gather DNA samples by swiping a cotton swab at the location that the perpetrator may have deposited their DNA, such as a firearm, a container of contraband, or the body of a victim. In many cases where the perpetrator’s bodily fluids are not involved, the DNA sample may only contain a small amount of the perpetrator’s DNA, which could be less than a few cells, and is likely to also contain the DNA of others. This makes trying to identify whether a person’s DNA is found in a complex DNA mixture a very difficult problem. It’s like having to figure out whether someone drove on a specific interstate when all you have is an incomplete and possibly inaccurate list of towns and exits they passed, all of which could have been from any one of the roads they used. You don’t know the number of roads they drove on, and can only guess at which towns and exits were connected.

Running these DNA mixture samples through electrophoresis creates much noisier results, which often contain errors that indicate additional alleles at a locus or ignore alleles that are present. Human analysts then decide which alleles appear dark enough in the gel to count and which are light enough to ignore. At least, that is how traditional DNA analysis worked: an allele either counted or did not count as part of a specific DNA donor profile.

Probabilistic Genotyping Software and Their Problems 

Enter probabilistic genotyping software. The proprietors of these programs—the two biggest players are STRMix and TrueAllele—claim that their products, using statistical modeling, can determine the likelihood that a DNA profile or combinations of DNA profiles contributed to a DNA mixture, instead of the binary approach. Prosecutors often describe the analysis from these programs this way: It is X times more likely that defendant, rather than a random person, contributed to this DNA mixture sample.

However, these tools, like any statistical model, can be constructed poorly. And which assumptions are incorporated in them, and how, can cause the results to vary. They can be analogized to the election forecast models from FiveThirtyEight, The Economist, and The New York Times. They all use statistical modeling, but the final numbers are different because of the myriad design differences from each publisher. Probabilistic genotyping software is the same: these programs all use statistical modeling, but the output probability is affected by how that model is built. Like the different election models, different probabilistic DNA software has diverging approaches for which, and at what threshold, factors are considered, counteracted, or ignored. Additionally, input from human analysts, such as the hypothetical number of people who contributed to the DNA mixture, also changes the calculation. If this is less rigorous than you expected, that’s exactly the point—and the problem. In our highway analogy, this is like a software program that purports to tell you how likely it is that you drove on a specific road based on a list of towns and exits you passed. Not only is the result affected by the completeness and accuracy of the list, but the map the software uses, and the data available to it, matter tremendously as well.
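As a deliberately oversimplified illustration of that sensitivity, the Python sketch below computes a single-source, single-locus likelihood ratio for a suspect who carries alleles A and B. It is nothing like a real probabilistic genotyping model, which must also weigh dropout, drop-in, peak heights, and an assumed number of contributors, but it shows how the same evidence yields very different numbers once you change just one modeling input, here the assumed allele-frequency table.

```python
def likelihood_ratio(p_a, p_b):
    """LR for observing alleles A and B at one locus, single source only."""
    p_given_suspect = 1.0           # the suspect's (A, B) genotype explains the evidence
    p_given_random = 2 * p_a * p_b  # a random person must happen to carry both alleles
    return p_given_suspect / p_given_random

# Same evidence, two different assumed allele-frequency tables:
print(likelihood_ratio(0.05, 0.10))  # -> 100.0
print(likelihood_ratio(0.01, 0.02))  # -> 2500.0
```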


Because of these complex variables, a probability result is always specific to how the program used was designed, the conditions at the lab, and any additional or discretionary input used during the analysis. In practice, different DNA analysis programs have resulted in substantially different probabilities for whether a defendant’s DNA appeared in the same DNA sample, even breathtaking discrepancies in the millions-fold!

And yet it is impossible to determine which result, or software, is the most accurate. There is no objective truth against which those numbers can be compared. We simply cannot know what the probability that a person contributed to a DNA mixture is. In controlled testing, we know whether a person’s DNA was part of a DNA mixture or not, but there is no way to figure out whether it was 100 times more likely that the donor’s DNA rather than an unknown person’s contributed to the mixture, or a million times more likely. And while there is no reason to assume that the tool that outputs the highest statistical likelihood is the most accurate, the software’s designers may nevertheless be incentivized to program their product in a way that is more likely to output a larger number, because “1 quintillion” sounds more precise than “10,000”—especially when there is no way to objectively evaluate the accuracy.

DNA Software Review is Essential

Because of these issues, it is critical to examine any DNA software’s source code that is used in the legal system. We need to know exactly how these statistical models are built, and looking at the source code is the only way to discover non-obvious coding errors. Yet, the companies that created these programs have fought against the release of the source code—even when it would only be examined by the defendant’s legal team and be sealed under a court order. In the rare instances where the software code was reviewed, researchers have found programming errors with the potential to implicate innocent people.

Forensic DNA analyses have the whiff of science—but without source code review, it’s impossible to know whether or not they pass the smell test. Despite the opacity of their design and the impossibility of measuring their accuracy, these programs have become widely used in the legal system. EFF has challenged—and continues to challenge—the failure to disclose the source code of these programs. The continued use of these tools, the accuracy of which cannot be ensured, threatens the administration of justice and the reliability of verdicts in criminal prosecutions.


FAQ: DarkSide Ransomware Group and Colonial Pipeline

With the attack on Colonial Pipeline by a ransomware group causing panic buying and shortages of gasoline on the US East Coast, many are left with more questions than answers to what exactly is going on. We have provided a short FAQ to the most common technical questions that are being raised, in an effort to shine light on some of what we already know.

What is Ransomware?

Ransomware is a combination of “ransom”—holding stolen property to extort money for its return or release—and “malware”—malicious software installed on a machine. The principle is simple: the malware encrypts the victim’s files so that they can no longer use them and demands payment from the victim before decrypting them.

Most often, ransomware uses a vulnerability to infect a system or network and encrypt files to deny the owner access to those files. The key to decrypt the files is possessed by a third party—the extortionist—who then (usually through a piece of text left on the desktop or other obvious means) communicates instructions to the victim on how to pay them in exchange for the decryption key or program.

Most modern ransomware uses a combination of public-key encryption and symmetric encryption in order to lock out the victim from their files. Since the decryption and encryption keys are separate in public-key encryption, the extortionist can guarantee that the decryption key is never (not even briefly, during the execution of the ransomware code) transmitted to the victim before payment.
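To make the mechanics concrete, here is a minimal sketch of that hybrid pattern using Python’s cryptography library. It is a generic illustration of public-key plus symmetric encryption, not the code of DarkSide or any real ransomware family: bulk data is encrypted with a fast symmetric key, and that key is then sealed with a public key so only whoever holds the matching private key can recover it.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The extortionist's keypair; the private key never leaves their hands.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Bulk data is encrypted quickly with a random symmetric key.
sym_key = Fernet.generate_key()
ciphertext = Fernet(sym_key).encrypt(b"contents of a victim file")

# The symmetric key itself is sealed with the public key, so nothing on the
# victim's machine can undo the encryption without the attacker's private key.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(sym_key, oaep)

# Only the private-key holder can unwrap the symmetric key and decrypt.
recovered = Fernet(private_key.decrypt(wrapped_key, oaep)).decrypt(ciphertext)
```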

Extortionists in ransomware attacks are mainly motivated by the prospect of payment. Other forms of cyberattack are most often used by hackers motivated by political or personal factors.

What is the Ransomware Industry?

Although ransomware has existed since the late 1980s, its use has expanded exponentially in recent years. This is partly due to the effectiveness of cryptocurrencies in facilitating payments to anonymous, remote recipients. An extortionist can demand payment in the form of bitcoin in exchange for decryption keys, rather than relying on older, much more regulated financial exchanges. This has driven the growth of a $1.4 billion ransomware industry in the US, based solely on locking out users and companies from their files. Average payments to extortionists are increasing as well. A report by Coveware shows a 31% growth in the average payment between Q2 and Q3 of 2020.

The WannaCry attack in 2017 was one of the largest ransomware incidents to date. Using a leaked NSA exploit dubbed “EternalBlue,” WannaCry spread to more than 200,000 machines across the world, demanding payment from operators of unpatched Windows systems. Displaying a message with a bitcoin address to send payment to, the attack cost hundreds of millions to billions of dollars. An investigation of WannaCry code by a number of information security firms and the FBI pointed to the hacking group behind the attack having connections to the North Korean state apparatus.

What is DarkSide?

The FBI revealed on Monday that the hacking group DarkSide is behind the latest ransomware attack on Colonial Pipeline. DarkSide is a relatively new ransomware group, only appearing on the scene in August 2020 in Russian-language hacking forums. They have positioned themselves as a new type of ransomware-as-a-service business, attempting to cultivate “trust” and a sense of reliability between themselves and their victims. In order to ensure payment, DarkSide has found it useful to establish a reputation which ensures that when victims deliver the ransom, they are guaranteed to receive a decryption key for their files. In this vein, the group has established a modern, polished website called DarkSide Leaks, aimed at reaching out to journalists and establishing a public face. They say that they solely target well-funded individuals and corporations which are able to pay the ransom asked for, and have a code of conduct claiming not to target hospitals, schools, or non-profits. They have also attempted to burnish their image with token donations to charity. DarkSide, which reportedly asks for ransoms ranging from $200,000 to $2,000,000, produced receipts showing a total of $20,000 in donations to the charities Children International and The Water Project. The charities refused to accept the money.

DarkSide claims that they are not affiliated with any government, and that their motives are purely financial gain—a claim that has been assessed as most likely true by cybersecurity firm Flashpoint. However, DarkSide code analyzed by the firm Cybereason has been shown to check the system’s language settings as a very first step, and halt the attack if the result is a language “associated with former Soviet Bloc nations.” This has fuelled speculation in the US that Russia may be affording the group special protection, or at least turning a blind eye to their misdeeds.

The result has been profitable for the cyber-extortion group. In mid-April, the group obtained $11 million from a high-profile victim. Bloomberg reports that Colonial Pipeline paid $5 million to the group.

What exactly happened last Friday?

Colonial Pipeline has operated continuously since the early 1960s, supplying 45% of the US East Coast’s gasoline, in addition to diesel and jet fuel. On Friday, May 7th, it shut down 5,500 miles of its pipeline infrastructure in response to a cyber-extortion attempt. The pipeline restarted on May 12th. Though the incident is still under investigation, the FBI confirmed on Monday what was already speculated: DarkSide was behind the attack.

In an apparent response to—though not an admission of involvement in—the attack, DarkSide released a statement on their website stating that they would introduce “moderation” to “avoid social consequences in the future.”

Why did they target Colonial Pipeline?

If patterns are any indication, DarkSide chose Colonial as a “big game” target due to the deep pockets of the firm, which is worth about $8 billion. Still, many suspect that DarkSide is now feeling a dawning sense of dread as the lateral effects of their attack play out: panic buying, gas shortages, involvement by federal investigators, and an executive order by President Biden intended to bolster America’s cyberdefenses in response. Escalated to the level of an international incident, DarkSide may see the independence and latitude they are reported to enjoy dissipate under geopolitical pressure.

What can I do to defend myself against ransomware?

Frequently backing up your data to an external hard drive or cloud storage provider will ensure you are able to retrieve it later. If you already have a backup, do not plug the external hard drive into your computer after it is infected: the ransomware will likely target any new device that is recognized. You may need to reinstall your operating system, replace your hard drive, or bring it to a specialist to ensure complete removal of any infection.

You can also follow our guide to keeping your data safe. The Cybersecurity and Infrastructure Security Agency (CISA) has also provided a detailed guide on protecting yourself from ransomware. Note that it’s much easier to defend yourself against malware than to remove it once you’re infected, so it is always advisable to take proactive steps to defend yourself.


Russia: American “Germ Warfare” Could Kill Millions

A Russian Security Council member has argued that the United States is becoming a major exporter of deadly biological weapons to nations across the globe. Yuri Averyanov, the first deputy secretary of the country’s top national defense body, said the body count would be in the millions if this type of weapon is ever deployed.

According to a report by RT, Averyanov gave an interview to RIA Novosti on Tuesday to warn that “lethal and dangerous microorganisms… could potentially be released into the environment, allegedly by mistake.” He added that such an attack, if used against Russia, “would lead to a massive destruction of the civilian population” both within the country and in neighboring states. He also said that Washington is currently working to increase its biological weapons capabilities.

Dr. Anthony Fauci has known ties to “germ warfare,” so is Averyanov’s accusation out of left field? You decide. Use critical thinking and discernment and look into things. It’s getting crazy out there, and problems could really begin if people actually face biological warfare instead of just a fake scamdemic designed to impoverish and control them.

Tyrant Fauci EXPOSED: Explain The $3.7 Million In Funding To Wuhan Lab

These biological warfare programs seek to weaponize viruses and other pathogens “primarily for military purposes,” says Averyanov.

Last month, the secretary of the Security Council, Nikolai Patrushev, also warned about research facilities near the borders of Russia and China, suggesting that they could be used as part of a concerted biological warfare effort against the two nations. “There are research centers where Americans help local scientists develop new ways to combat dangerous diseases,” he said, but claimed that “the authorities in the countries where these centers are located have no real idea what is happening within their walls.” (RT)

The U.S., along with 183 other countries, has “banned” biological warfare, but they wrote the rules, and there should be no expectation that they will follow them. After all, they wrote down a bill of rights that is supposed to “protect” the rights of those being ruled over by the ruling class, but they regularly disregard that piece of paper. So who’s to say they won’t break other agreements?

It’s time to wake up to the reality that government is slavery and we should stop relying on the worst of humanity to save it.



Capitol security officials resign under pressure from Congressional leadership over breach

Three top officials responsible for protecting the U.S. Capitol and congressional chambers all resigned on Thursday, amid calls for heads to roll after the building was stormed by a mob of angry Trump supporters the day before.

What are the details?

First to go was House sergeant-at-arms Paul Irving, whose resignation was announced by Speaker Nancy Pelosi (D-Calif.) at her weekly news conference. The speaker also called for Capitol Police Chief Steven Sund to resign, criticizing the top cop for a “failure of leadership.”

According to The New York Times, Pelosi said of Sund, “He hasn’t even called us since this happened.” The Associated Press reported that the head of the Capitol Police union also called for Sund to step down.

Chief Sund announced his resignation that evening, writing in a short memorandum that he would remain in his active leadership post until Jan. 17, but then transition into “sick leave status” to use up the 440 hours remaining on his balance of paid sick time off.

Senate Majority Leader Mitch McConnell (R-Ky.) then released a statement confirming that he had “requested and received the resignation of Michael Stenger, the Senate Sergeant at Arms and Doorkeeper, effective immediately.”

Congressional leaders echoed the public dismay over how easily the Capitol building and its chambers were breached.

The AP also reported that three days prior to the protest, U.S. Capitol police turned down an offer from the Pentagon to send National Guard troops to assist in keeping the Capitol grounds secure. The department also turned down the Justice Department’s initial offer to send FBI agents on Wednesday as the building was being swarmed.

Capitol police also faced heavy criticism Wednesday when video circulating online showed officers appearing to allow outside barrier gates to open for protesters demanding to enter the premises. TheBlaze made repeated requests seeking comment on the footage Wednesday evening, but never received a response.

Another video published by TheBlaze showed officers using great effort to keep barriers in place, but they were overcome by the mob, which forced its way past law enforcement.

In a separate statement Wednesday, McConnell praised the Capitol police officers “who stood bravely in harm’s way during yesterday’s failed insurrection,” before saying that the events of the day “represented a massive failure of institutions, protocols, and planning that are supposed to protect the first branch of our federal government.”

The Senate majority leader called for thorough investigations and “significant changes” before concluding:

“The ultimate blame for yesterday lies with the unhinged criminals who broke down doors, trampled our nation’s flag, fought with law enforcement, and tried to disrupt our democracy, and with those who incited them. But this fact does not and will not preclude our addressing the shocking failures in the Capitol’s security posture and protocols.”

Share
Categories
Intelwars Security Security Education

Doxxing: Tips To Protect Yourself Online & How to Minimize Harm

“Doxxing” is an eerie, cyber-sounding term that gets thrown around more and more these days, but what exactly does it mean? Simply put, it’s when a person or other entity exposes information about you, publicly available or secret, for the purpose of causing harm. It might be information you intended to keep secret, like your personal address or legal name. Often it is publicly available data that can be readily found online with just a bit of digging, like your phone number or workplace address.

By itself, being doxxed can be dangerous, as it may reveal information about you that could harm you if it were publicly known. More often it is used to escalate to greater harm such as mass online harassment, in-person violence, or targeting other members of your community. Your political beliefs or status as a member of a marginalized community can amplify these threats.

Although you aren’t always faced with the option, taking control of your data and considering precautionary steps to advance your personal security are best done before you’re threatened with a potential doxxing. Privacy does not work retroactively. A great place to start is to develop your personal threat model. After you’ve done that, you can take specific measures to advance your data hygiene.

First Steps To Protect Yourself

First: Take a look at the information that is already publicly available about you online. This is as simple as opening up a search engine and entering your name/nickname/handle/avatar and seeing what comes up. It’s common to be overwhelmed by what you find: there can be much more data about you readily available online than you expected, for anyone who cares to do a little digging. Remind yourself that this is normal, and that you are on your way to reducing that information and taking the necessary steps to protect yourself. Take note of any pieces that strike you as high priority to deal with. Keep track both of what the information is and where you found it.

Second: Identify who you can trust with your secrets. Friends, family, chosen family? If you are fearful of being doxxed, you’ll want to speak with these people directly. Not only because they can be implicated in a doxxing incident, but also because there is strength in your community. These trusted folks can help you plan how to prevent an incident from happening, and also what to do in the event of one (more on that below). Keep in mind that this list will change over time. It’s natural for relationships to ebb and flow, and so will the amount of trust you place in them. Set a reminder for yourself to check in on this list once a year or so.

Set some data sharing community ground rules such as asking for permission before taking/posting photos, refraining from geotagging those photos, or using code words to imply something else that only trusted people know. These are all examples of steps you can take to strengthen your social community’s security posture.

Third: Read up on the policies your online accounts have. Most major social media platforms and other popular web apps have policies and procedures in place that protect users against doxxing and allow them to report any violations. Review that information and note how to get in contact with their support teams.

With these non-technical steps out of the way, you can begin to think about the more technical steps you can take: both as precautionary steps ahead of time, and if you have to respond to a doxxing incident.

Minimizing Your Publicly Available Data

The most obvious protective measure you can take to prevent being doxxed is to reduce the amount of material there is about you online.

Data brokers are companies that subsist entirely off collecting this data, repackaging it, and selling it to the highest bidders. The information they gather often comes from public records and online trackers, augmented by commercial transactional data. It is a parasitic, rotten industry that survives by invading the privacy of everyday people online. Due to public pressure, many of these companies offer ways for users to opt out of their data being shared. We recommend starting with White Pages, Instant Check Mate, Acxiom, Intelius, and Spokeo. Also take a look at these other helpful guides on how to remove yourself from people-finding services and data brokerage companies.

For a more thorough—though more costly—approach, several professional services like DeleteMe or Privacy Duck claim to help minimize the data available about you online from these data brokers and similar sources. Beware that data brokers work by continually scraping public records and repopulating their data, and so services like these can require ongoing subscriptions to be most effective. They also cannot (and do not) promise comprehensive data minimization across all possible sources. Users should conduct their own research and consider whether these kinds of services can successfully target the data sources you are most concerned about.

Safe Browsing

Sometimes software behaving as expected can lead to our secrets ending up in places they shouldn’t. For example, suggested friends lists can sometimes “out” you to people despite your having multiple accounts for the very purpose of keeping parts of your life separate.

Most other common examples are the fault of user tracking, which you have the power to minimize. If that’s a concern you want to address, here are some steps you can take:

Check how “fingerprintable” your browser is with our tool Cover Your Tracks. This will give you an idea of how capable those very trackers are of uniquely identifying you and your actions online. We also recommend adding our install-and-forget tracker blocking tool, Privacy Badger, which is designed to silently halt those trackers and let you browse in peace.

From there, you can begin to assess the rest of your personal data hygiene online. To protect your account security, are you using strong unique passwords and multi-factor authentication on each of your accounts? Both of these steps will do wonders in preventing your account from being maliciously hijacked.

As you consider each of the accounts you use online, we highly recommend taking a moment with each to look at what information you share. Do you share with them the bare minimum so that you can continue to use their software, or are you giving more than what’s necessary? Instead of listing your mother’s maiden name, prom date, or pet’s name in response to security questions, consider entering a random passphrase and keeping it in your password manager. And instead of handing over your phone number—a common bit of information behind account compromise and unwanted identification—consider what the phone number will be used for, whether Facebook or Twitter or whoever actually needs it, and whether you can replace your mobile number with something less individually identifying, like a Google Voice number. Remaining mindful of what information you’re sharing, as well as when and where you’re sharing it, will do wonders for your data hygiene.
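
For the security-question trick, here is a minimal sketch using Python’s standard secrets module; the function name and length are only illustrative. It generates a random string you can paste in as the “answer” and store in your password manager:

    import secrets
    import string

    def random_answer(length: int = 24) -> str:
        """Generate a random string to use in place of a guessable
        security-question answer; store it in your password manager
        alongside the account entry."""
        alphabet = string.ascii_letters + string.digits
        return "".join(secrets.choice(alphabet) for _ in range(length))

    # One unique "answer" per account and per question:
    print(random_answer())

Because the answer is random and never reused, a data breach at one service tells an attacker nothing about your answers elsewhere.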

Incident Response Plan

Being doxxed is a stressful, scary thing to endure. In the event of it happening, the last thing you’ll want to be doing is scrambling at the last minute to figure out how to respond. Having a ready-made plan in place will do wonders for you. Here are some suggestions on where to start:

Decide which accounts to lock or temporarily deactivate if you’re being doxxed. Make a list. It will help to walk through the process of deactivating/locking the account for each so that you can take note of any special steps that they may require.

Have a spreadsheet template handy to record incidents as they happen. You’ll want to have fields ready to mark when something took place, who it appears to be from, where it’s happening, and any details about what happened. Making this log will be incredibly useful: it can help you identify where the weakness is in your personal data security, as well as provide a detailed log of events that you could pass along to others.
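
If it helps to have something ready-made, a small sketch like the one below (the filename and field names are only suggestions) creates and appends to a CSV log that opens in any spreadsheet application:

    import csv
    import os
    from datetime import datetime, timezone

    LOG_FILE = "incident_log.csv"  # illustrative filename; keep it somewhere private
    FIELDS = ["timestamp", "platform", "apparent_source", "what_happened", "evidence"]

    def log_incident(platform, apparent_source, what_happened, evidence=""):
        """Append one incident to the CSV log, writing the header row if the file is new."""
        is_new = not os.path.exists(LOG_FILE)
        with open(LOG_FILE, "a", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if is_new:
                writer.writeheader()
            writer.writerow({
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "platform": platform,
                "apparent_source": apparent_source,
                "what_happened": what_happened,
                "evidence": evidence,
            })

    # Example entry:
    log_incident("example social network", "@unknown_account",
                 "posted my home address in replies", "screenshot-001.png")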

Care for Yourself, Care for Others

Finally, you’ll want to include people from your trusted networks to help you in this process. Knowing you’ve got friends to support you if you’re being doxxed will not only ease the burden of stress and labor, but can also alert them to how they might be implicated. We recommend going over this whole process with a trusted friend. Knowing they’re available to take over during a crisis will ease your mind. Reciprocating that help for them builds community trust.

Data hygiene is a form of community self-care. Establishing data hygiene standards with your close network can be a way of caring for yourself, and them. After all, an incident on one node of a network could compromise other nodes on the same network. Caring for your own data hygiene will in part strengthen your community’s, and vice versa.

Share
Categories
Commentary COVID-19 and Digital Rights Intelwars medical privacy Security Surveillance and Human Rights Technical Analysis

Vaccine Passports: A Stamp of Inequity

A COVID vaccine has been approved and vaccinations have begun. With them have come proposals of ways to prove you have been vaccinated, based on the presumption that vaccination renders a person immune and unable to spread the virus. This is unclear. It also raises digital rights concerns, particularly if you look at the history of healthcare access, and consider how it maps onto current proposals to digitize and streamline “vaccination passports” for travel.

We must make sure that, in our scramble to reopen the economy, we do not overlook inequity of access to the vaccine; how personal health data in newly minted digital systems operate as gatekeepers to workplaces, schools, and other spaces; and the potential that today’s vaccine passport will act as a catalyst toward tomorrow’s system of national digital identification that can be used to systematically collect and store our personal information.

We have already witnessed problems with COVID-19 testing and its intersection with digital rights. Some individuals weren’t able to access testing simply because they did not have access to a vehicle. The digital divide emerged in places like San Francisco’s Tenderloin district, one of the city’s poorest neighborhoods, where many weren’t able to access testing because they did not have a smartphone. The danger of further social inequity is just one reason why we opposed a since-vetoed bill in California that proposed to create a blockchain-based system of verifiable credentials for medical test results, including COVID-19 antibody tests. We must draw on the lessons from the recent past and earlier vaccination efforts as we go forward.

Current Proposals

EFF is focused on proposals to distribute these vaccination credentials digitally. While paper-based credentials are possible, too, most proposed plans involve digital implementations. In fact, some companies already have digital passport systems. CLEAR is rolling out a HealthPass that logs testing or vaccination status. This company provides pre-flight screening in major airports around the country. Ticketmaster has considered partnering with CLEAR for another “Health Pass.” Such partnerships could lead to another intertwined network of unprecedented sharing of personal information, similar to issues we have currently with data brokers and advertising information.

Some have suggested using the W3C’s (World Wide Web Consortium) Verifiable Credentials and Decentralized Identifier specifications as a potential way to standardize vaccination passports. However, these standards do little to solve the equity issues of unequal access to vaccination and digital technologies. Nor are they exempt from attacks that can potentially leak data.

Advocates of digital systems have suggested they would address the fraud and forgery concerns raised by paper-based credentials. Proposals like CommonPass—which notifies users of local travel rules and attempts to verify that airline passengers are complying with those rules—are designed to face this issue head-on. Informing users of local information is a great feature. However, these systems do little to address the more prevalent fraud targeting individuals during this pandemic. Until these vaccinations become accessible to all, concerns over fabrication should not overshadow concerns about access to the vaccine in the first place.

Blockchain Is Not a Silver Bullet

Many proposals for vaccination passports reference blockchain technology, a distributed public ledger, as a means to share vaccine credentials. But some qualities of blockchain are at odds with privacy. One is immutability: once recorded, personal health information can’t be changed. Immutability may have anti-forgery benefits, but that does more for the credential verifier than the credential holder. Permanence eliminates the ability to delete or correct sensitive personal information in the system.

Also, many healthcare systems have centralized authorities. One of blockchain’s main selling points is peer-to-peer decentralization—an attribute that’s diametrically opposed to the implementation of a health mandate. Interoperability of data with the private sector does not equate to decentralization of data.

Privacy is much more than just preventing a data breach or forgery. Limiting a definition of “privacy” to just these measures would short-change our need to control our personal information. Framing our policy goals should not be left to private companies seeking to sell products they say will help mitigate a pandemic. And, as researcher Harry Halpin notes in a recent paper,

“temporary measures meant for a purpose as seemingly harmless as reviving tourism could become normalized as the blockchain-based identity databases are by design permanent and are difficult to disassemble once the crisis has passed.”

For these reasons, layering blockchain to improve security or privacy for health documentation doesn’t make sense in this context, and has the potential to do far more harm than good.

Lessons Learned Should Be Lessons Applied

The COVID-19 pandemic is unprecedented in our lifetimes, but there are lessons we can learn from the past. In 2009, the H1N1 (“swine flu”) vaccination rollout was plagued with inequitable access. With COVID-19 vaccine supply potentially limited for the next six months, more of the same can occur. A digitized system based on proof of immunization will amplify the lack of access. Resources, especially tax dollars, should be focused on giving people more information about and access to vaccinations, rather than creating a digital fence against those who haven’t been vaccinated yet—and subjecting people who have been vaccinated to new privacy risks.

Trust is critical to public health. Today, many people are already wary of the COVID vaccination. Sweeping in smartphone-based products and new privacy concerns would only harm public health efforts to ease the public’s mind. Immunizations and providing proof of immunizations are not new. However, there’s a big difference between utilizing existing systems to adapt to a public health crisis and vendor-driven efforts to deploy new, potentially dangerous technology under the guise of helping us all move past this pandemic.

Share
Categories
Intelwars Security

DNS, DoH, and ODoH, Oh My: Year-in-Review 2020

Government knowledge of what sites activists have visited can put them at risk of serious injury, arrest, or even death. This makes it a vitally important priority to secure DNS. DNS over HTTPS (DoH) is a protocol that encrypts the Domain Name System (DNS) by performing lookups over the secure HTTPS protocol. DNS translates human-readable domain names (such as eff.org) into machine-routable IP addresses (such as 173.239.79.196), but it has traditionally done this via cleartext queries over UDP port 53 (Do53). This allows anyone who can snoop on your connection—whether it’s your government, your ISP, or the hacker next to you on the same coffee shop WiFi—to see what domain you’re accessing and when.
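
To make the difference concrete, here is a minimal sketch of a lookup carried over HTTPS rather than cleartext UDP, using the JSON flavor of DoH that Cloudflare’s public resolver exposes (endpoint URL, Accept header, and response fields are per that resolver’s public documentation at the time of writing; other resolvers offer similar interfaces):

    import json
    import urllib.request

    # JSON-style DoH query against a public resolver.
    url = "https://cloudflare-dns.com/dns-query?name=eff.org&type=A"
    req = urllib.request.Request(url, headers={"Accept": "application/dns-json"})

    with urllib.request.urlopen(req) as resp:
        answer = json.loads(resp.read().decode("utf-8"))

    # Each answer record carries the resolved address ("data") and its TTL.
    # Both the question and the answer travel inside an ordinary HTTPS
    # connection, so an on-path observer sees only that you talked to the
    # resolver, not which domain you asked about.
    for record in answer.get("Answer", []):
        print(record["name"], record["TTL"], record["data"])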

In 2019, the effort to secure DNS through DoH made tremendous progress both in terms of the deployment of DoH infrastructure and in the Internet Engineering Task Force (IETF), an Internet governance body tasked with standardizing the protocols we all rely on. This progress was made despite large pushback by the Internet Service Providers’ Association in the UK, citing difficulties DoH would present to British ISPs, which are mandated by law to filter adult content.

2020 has also seen great strides in the deployment of DNS over HTTPS (DoH). In February, Firefox began the rollout of DoH to its users in the US, using Cloudflare’s DoH infrastructure to provide lookups by default. Google’s Chrome browser followed suit in May by switching users to DoH if their DNS provider supports it. Meanwhile, the list of publicly available DoH resolvers has expanded to the dozens, many of which implement strong privacy policies, such as not keeping connection logs.

This year’s expansion of DoH deployments has alleviated some of the problems critics have cited, such as the centralization of DoH infrastructure. Previously, only a few large Internet technology companies like Cloudflare and Google had deployed DoH servers at scale. This facilitated these companies’ access to large troves of DNS query data, which could theoretically be exploited to mine sensitive data on DoH users. Mozilla has sought to protect their Firefox users from this danger by requiring the browser’s DoH resolvers to observe strict privacy practices, outlined in their Trusted Recursive Resolver (TRR) policy document. Comcast joined Mozilla’s TRR partners Cloudflare and NextDNS in June.

In addition to policy and deployment strategies to alleviate the privacy concerns of DoH infrastructure centralization, a group of University of Washington academics and Cloudflare technologists published a paper late last month proposing a new protocol called Oblivious DNS over HTTPS (ODoH). The protocol introduces a proxy node to the DoH network layout. Instead of directly requesting records via DoH, a client creates a request for the DNS record, along with a symmetric key of their choice. The client then encrypts the request and symmetric key to the public key of the DoH server they wish to act as a resolver. The client sends this request to the proxy, along with the identity of the DoH resolver they wish to use. The proxy removes all identifying pieces of information from the request, such as the requester’s IP address, and forwards the request to the resolver. The resolver decrypts the request and symmetric key, recursively resolves the request, encrypts the response to the symmetric key provided, and sends it back to the ODoH proxy. The proxy forwards the encrypted response to the client, which is then able to decrypt it using the symmetric key it has retained in memory, and retrieve the DNS response. At no point does the proxy see the unencrypted request, nor does the resolver ever see the identity of the client.
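
The flow is easier to follow laid out step by step. The sketch below is purely illustrative: it is not the real ODoH wire format (the actual protocol uses HPKE and binary DNS messages), and the XOR “encryption” is a toy placeholder whose only purpose is to show which party can read which message:

    import json
    import os

    def toy_encrypt(key: bytes, data: dict) -> bytes:
        """Placeholder cipher (XOR mask over JSON). Not real cryptography."""
        raw = json.dumps(data).encode()
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(raw))

    def toy_decrypt(key: bytes, blob: bytes) -> dict:
        raw = bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))
        return json.loads(raw.decode())

    class Resolver:
        def __init__(self):
            self._key = os.urandom(32)  # stands in for the resolver's key pair

        @property
        def public_key(self) -> bytes:
            return self._key  # in real ODoH the client fetches a genuine public key

        def handle(self, encrypted_query: bytes) -> bytes:
            query = toy_decrypt(self._key, encrypted_query)
            answer = {"name": query["name"], "address": "192.0.2.1"}  # pretend recursive lookup
            # Encrypt the answer to the client's per-query symmetric key.
            return toy_encrypt(bytes.fromhex(query["sym_key"]), answer)

    class Proxy:
        def forward(self, encrypted_query: bytes, resolver: Resolver) -> bytes:
            # The proxy strips client metadata (source IP, etc.) and relays
            # ciphertext it cannot read to the resolver the client chose.
            return resolver.handle(encrypted_query)

    # Client: choose a per-query symmetric key, encrypt the query plus that key
    # to the resolver, and send it via the proxy. Only the client and resolver
    # ever see the plaintext query; only the proxy ever sees who the client is.
    resolver, proxy = Resolver(), Proxy()
    sym_key = os.urandom(32)
    query = toy_encrypt(resolver.public_key, {"name": "eff.org", "sym_key": sym_key.hex()})
    response = toy_decrypt(sym_key, proxy.forward(query, resolver))
    print(response)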

ODoH guarantees that, in the absence of collusion between the proxy and the resolver, no one entity is able to determine both the identity of the requester and the content of the request. This is important because if powerful entities (whether it be your government, ISP, or even DNS resolver) know which people accessed what domain (and when), it gives that entity enormous power over those people. ODoH gives users a technological way to ensure that their domain lookups are secure and private so long as they trust that the proxy and the resolver do not join forces. This is a much lower level of trust than trusting that a single entity does not misuse the DNS queries you send them.

Looking ahead, one possibility worries us: using ODoH gives software developers an easy way to comply with the demands of a censorship regime in order to distribute their software without telling the regime the identity of users they’re censoring. If a software developer wished to gain distribution rights in Saudi Arabia or China, for example, they could choose a reputable ODoH proxy to connect to a resolver that refuses to resolve censored domains. A version of their software would be allowed for distribution in these countries, so long as it had a censorious resolver baked in. This would remove any potential culpability that software developers have for revealing the identity of a user to a government that can put them in danger, but it also facilitates the act of censorship. In traditional DoH, this is not possible. Giving developers an easy-out by facilitating “anonymous” censorship is a worrying prospect.

Nevertheless, the expansion of DoH infrastructure and conceptualization of ODoH is a net win for the Internet. Going into 2021, these developments give us hope for a future where our domain lookups will universally be both secure and private. It’s about time.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.

Share
Categories
Intelwars privacy Security Security Education Technical Analysis

macOS Leaks Application Usage, Forces Apple to Make Hard Decisions

Last week, users of macOS noticed that attempting to open non-Apple applications while connected to the Internet resulted in long delays, if the applications opened at all. The interruptions were caused by a macOS security service attempting to reach Apple’s Online Certificate Status Protocol (OCSP) server, which had become unreachable due to internal errors. When security researchers looked into the contents of the OCSP requests, they found that these requests contained a hash of the developer’s certificate for the application that was being run, which was used by Apple in security checks.[1] The developer certificate contains a description of the individual, company, or organization which coded the application (e.g. Adobe or Tor Project), and thus leaks to Apple that an application by this developer was opened.

Moreover, OCSP requests are not encrypted. This means that any passive listener also learns which application a macOS user is opening and when.[2] Those with this attack capability include any upstream service provider of the user; Akamai, the content delivery network hosting Apple’s OCSP service; or any hacker on the same network as you when you connect to, say, your local coffee shop’s WiFi. A detailed explanation can be found in this article.

Part of the concern that accompanied this privacy leak was the exclusion of userspace applications like Little Snitch from the ability to detect or block this traffic. Even if altering traffic to essential security services on macOS poses a risk, we encourage Apple to allow power users the ability to choose trusted applications to control where their traffic is sent.

Apple quickly announced a new encrypted protocol for checking developer certificates and that they would allow users to opt out of the security checks. However, these changes will not roll out until sometime next year. Developing a new protocol and implementing it in software is not an overnight process, so it would be unfair to hold Apple to an impossible standard.

But why has Apple not simply turned the OCSP requests off for now? To answer this question, we have to discuss what the OCSP developer certificate check actually does. It prevents unwanted or malicious software from being run on macOS machines. If Apple detects that a developer has shipped malware (either through theft of signing keys or malice), Apple can revoke that developer’s certificate. When macOS next opens that application, Apple’s OCSP server will respond to the request (through a system service called `trustd`) that the developer is no longer trusted. So the application doesn’t open, thus preventing the malware from being run.
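
Conceptually, the check looks something like the sketch below. This is not Apple’s code; the function names are hypothetical placeholders meant only to illustrate the logic described above: hash the developer certificate, ask a revocation service about that hash, and refuse to launch the application if the certificate has been revoked.

    import hashlib

    def developer_cert_hash(cert_der: bytes) -> str:
        """Hash of the developer certificate shipped with the application."""
        return hashlib.sha256(cert_der).hexdigest()

    def is_revoked(cert_hash: str) -> bool:
        # Stand-in for the network round trip to an OCSP-style responder. In
        # the incident described above, that query went out unencrypted, so a
        # passive observer could see which developer's certificate was checked.
        revoked_hashes = set()  # in reality, answered by the responder
        return cert_hash in revoked_hashes

    def may_launch(cert_der: bytes) -> bool:
        """Open the application only if its developer certificate is still trusted."""
        return not is_revoked(developer_cert_hash(cert_der))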

Fixing this privacy leak, while maintaining the safety of applications by checking for developer certificate revocations through OCSP, is not as simple as fixing an ordinary bug in code. This is a structural bug, so it requires structural fixes. In this case, Apple faces a balancing act between user privacy and safety. A criticism can be made that they haven’t given users the option to weigh the dilemma on their own, and simply made the decision for them. This is a valid critique. But the inevitable response is equally valid: that users shouldn’t be forced to understand a difficult topic and its underlying trade-offs simply to use their machines.

Apple made a difficult choice to preserve user safety, but at the peril of their more privacy-focused users. macOS users who understand the risks and prefer privacy can take steps to block the OCSP requests. We recommend that users who do this set a reminder for themselves to restore these OCSP requests once Apple adds the ability to encrypt them.
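
One workaround that circulated during the incident was to point the OCSP hostname (ocsp.apple.com, at the time) at an unroutable address in /etc/hosts so the lookup never leaves the machine. The rough sketch below shows the idea; note that it requires administrator rights, also disables the malware-revocation check, and should be reverted once the encrypted protocol ships.

    HOSTS_FILE = "/etc/hosts"
    ENTRY = "0.0.0.0 ocsp.apple.com  # temporarily block developer-certificate OCSP lookups\n"

    with open(HOSTS_FILE, "r", encoding="utf-8") as f:
        contents = f.read()

    # Append the entry only once; run with administrator privileges (e.g. via sudo).
    if "ocsp.apple.com" not in contents:
        with open(HOSTS_FILE, "a", encoding="utf-8") as f:
            f.write(ENTRY)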

[1] Initial reports of the failure claimed Apple was receiving hashes of the application itself, which would have been even worse, if it were true.

[2] Companies such as Adobe develop many different applications, so an attacker would be able to establish that the application being opened is one of the set of all applications that Adobe has signed for macOS. Tor, on the other hand, almost exclusively develops a single application for end-users: the Tor Browser. So an attacker observing the Tor developer certificate will be able to determine that Tor Browser is being opened, even if the user takes steps to obscure their traffic within the app.

Share
Categories
Commentary Election Security Intelwars Security

Elections Are Partisan Affairs. Election Security Isn’t.

An Open Letter on Election Security

Voting is the cornerstone of our democracy. And since computers are deeply involved in all segments of voting at this point, computer security is vital to the protection of this fundamental right.  Everyone needs to be able to trust that the critical infrastructure systems we rely upon to safeguard our votes are defended, that problems are transparently identified, assessed and addressed, and that misinformation about election security is quickly and effectively refuted.  

While the work is not finished, we have made progress in making our elections more secure, and ensuring that problems are found and corrected. Paper ballots and risk-limiting audits have become more common.  Voting security experts have made great strides in moving elections to a more robust system that relies less on the hope of perfect software and systems.

This requires keeping partisan politics away from cybersecurity issues arising from elections. Obviously elections themselves are partisan. But the machinery of them should not be.  And the transparent assessment of potential problems or the assessment of allegations of security failure—even when they could affect the outcome of an election—must be free of partisan pressures.  Bottom line: election security officials and computer security experts must be able to do their jobs without fear of retribution for finding and publicly stating the truth about the security and integrity of the election. 

We are profoundly disturbed by reports that the White House is pressuring Chris Krebs, director of the Cybersecurity and Infrastructure Security Agency (CISA), to change CISA’s reports on election security. This comes just after Bryan Ware, assistant director for cybersecurity at CISA, resigned at the White House’s request. Director Krebs has said he expects to be fired but has refused to join the effort to cast doubt on the systems in place to support election technology and the election officials who run it. Instead, CISA published a joint statement denouncing “unfounded claims and opportunities for misinformation about the process of our elections.” The White House pressure threatens to introduce partisanship, and unfounded allegations, into the expert, nonpartisan evaluation of election security.

We urge the White House to reverse course and support election security and the processes and people necessary to safeguard our vote.  

Signed,

(Organizations and companies)

Electronic Frontier Foundation

Bugcrowd

Center for Democracy & Technology

Disclose.io

ICS Village

SCYTHE, Inc.

Verified Voting

(Affiliations are for identification purposes only; listed alphabetically by surname.)

William T. Adler, Senior Technologist, Elections & Democracy, Center for Democracy & Technology
Matt Blaze, McDevitt Chair of Computer Science and Law, Georgetown University
Jeff Bleich, U.S. Ambassador to Australia (ret.)
Jake Braun, Executive Director, University of Chicago Harris Cyber Policy Initiative
Graham Brookie, Director and Managing Editor, Digital Forensic Research Lab, The Atlantic Council
Emerson T. Brooking, Resident Fellow, Digital Forensic Research Lab of the Atlantic Council.
Duncan Buell, NCR Professor of Computer Science and Engineering, University of South Carolina
Jack Cable, Independent Security Researcher.
Joel Cardella, Director, Product & Software Security, Thermo Fisher Scientific
Stephen Checkoway, Assistant Professor of Computer Science, Oberlin College
Casey Ellis, Chairman/Founder/CTO, Bugcrowd
Larry Diamond, Senior Fellow, Hoover Institution and Principal Investigator, Global Digital Policy Incubator, Stanford University
Renée DiResta, Research Manager, Stanford Internet Observatory
Michael Fischer, Professor of Computer Science, Yale University
Camille François, Chief Innovation Officer, Graphika
The Grugq, Independent Security Researcher
Joseph Lorenzo Hall, Senior Vice President for a Strong Internet at The Internet Society (ISOC)
Candice Hoke, Founding Co-Director, Center for Cybersecurity & Privacy Protection, Cleveland State University
David Jefferson, Computer Scientist, Lawrence Livermore National Laboratory (retired)
Douglas W. Jones, Associate Professor of Computer Science, University of Iowa
Lou Katz, Commissioner, Oakland Privacy Advisory Commission
Joseph Kiniry, Principal Scientist, Galois, CEO and Chief Scientist, Free & Fair
Katie Moussouris, CEO, LutaSecurity
Peter G. Neumann, Chief Scientist, SRI International Computer Science Lab
Marc Rogers, Director of Cybersecurity, Okta
Aviel D. Rubin, Professor of Computer Science, Johns Hopkins University
John E. Savage, An Wang Emeritus Professor of Computer Science, Brown University
Bruce Schneier, Cyber Project Fellow and Lecturer, Harvard Kennedy School
Alex Stamos, Director, Stanford Internet Observatory
Philip B. Stark, Associate Dean, Mathematical and Physical Sciences, University of California, Berkeley
Camille Stewart, Cyber Fellow, Harvard Belfer Center
Megan Stifel, Executive Director, Americas; and Director, Craig Newmark Philanthropies Trustworthy Internet and Democracy Program, Global Cyber Alliance
Sara-Jayne Terp, CEO Bodacea Light Research
Cris Thomas (Space Rogue), Global Strategy Lead, IBM X-Force Red
Maurice Turner, Election Security Expert
Poorvi L. Vora, Professor of Computer Science, The George Washington University
Dan S. Wallach, Professor, Departments of Computer Science and Electrical & Computer Engineering, Rice Scholar, Baker Institute for Public Policy, Rice University
Nate Warfield, Security Researcher
Elizabeth Wharton, Chief of Staff, SCYTHE, Inc.
Tarah Wheeler, Belfer Center Cyber Fellow, Harvard University Kennedy School, and member EFF Advisory Board
Beau Woods, Founder/CEO of Stratigos Security and Cyber Safety Innovation Fellow at the Atlantic Council.
Daniel M. Zimmerman, Principal Researcher, Galois and Principled Computer Scientist, Free & Fair

Share
Categories
brainwashing civil disturbance contested election Donald Trump elections are selections Emergency Preparedness fence fences George floyd Headline News Intelwars Joe Biden looting non scalable fence political elitists Politicians Preparedness propaganda Protection Riots Security WHITE HOUSE

“Non-scalable Fence” To Be Put Up Around White House Before Elections

Federal authorities are going to put up a “non-scalable” fence around the White House in advance of the elections Tuesday.  The extra layer of security marks the most high-profile example to date of authorities preparing for unrest following this year’s election.

The plan has been to stir up the public for a long time. Therefore, putting up the fence shows that they expect their plan to create violence and discord to work. According to a report by CNN, the fence will offer protection for politicians, particularly if there is no clear winner come November 4.

As CNN previously reported, the immediate perimeters around the White House have already been largely blocked off to the public this year for a range of reasons, from construction on the White House gate to protests and looting that occurred in downtown Washington in the wake of the police killing of George Floyd in Minneapolis back in May.

DC Metro police have been preparing their officers for well over a year, as they do ahead of every general election, ensuring that they are prepared to handle everything from civil disturbance to crowd control to potential disruptions to metro transit, Patrick Burke, executive director of the Washington, DC, Police Foundation, previously told CNN.

Police have also been working with intelligence officials to ensure the security of Washington, DC’s airspace in the event of an attack from above, Burke added, as they routinely do when preparing for moments of heightened anxiety.

“If there’s no winner, you will see significant deployments of officers at all levels across the capital,” said Burke. “Officers will get cancellations of days off, extensions of shifts, and full deployments of officers across the city.” –CNN

Unfortunately, every effort will be made to foment disgust and bring the political division in the United States to a breaking point. When that finally happens, please be where you’d like to be. Everything is in place to cause massive civil unrest no matter how this election goes. And it will go the way that will get the biggest response, because they’ve already promised us that much. Hopefully, you have a safe place to be as this week unfolds, just in case.

Another Secret Model: A Contested 2020 Election

Do not fear, but remain aware and brace yourself for the worst while hoping for the best.

The post “Non-scalable Fence” To Be Put Up Around White House Before Elections first appeared on SHTF Plan – When It Hits The Fan, Don't Say We Didn't Warn You.

Share
Categories
Election integrity Intelwars Philadelphia Security voter fraud Voting Machines warehouse watch

Days after computer theft from Philly elections warehouse, reporter strolls inside, walks by rows of voting machines — with no one else around

After a report in a major newspaper of stolen computer equipment from a major U.S. city’s elections warehouse, one may be inclined to conclude that the premises would be buttoned up a bit afterward.

But apparently that wasn’t the case at Philadelphia’s elections warehouse — at least not on Thursday when WHYY-TV reporter Max Marin said he was able to enter the facility with no problem, walk past rows of voting machines, and just hang out all by his lonesome for several minutes while recording the breach on his cellphone:

Marin wrote that he “strolled past hundreds of voting machines, various boxes, and other unidentified equipment without seeing other people.”

“Eventually” he “stumbled upon a staffer in an office, who said press was not allowed in the building and escorted the reporter to the door, locking it behind him. The staffer declined to answer questions about security, or answer why it was so easy to enter. No security cameras were immediately visible, either inside or outside the building,” Marin added.

He also noted in his report that upon leaving the warehouse, a guard was visible at the other side of the building and more staffers arrived at the facility later, with one taking up a station outside the door.

What’s the background?

The Philadelphia Inquirer noted earlier this week that the items stolen were a laptop belonging to an on-site employee for the company that supplies the voting machines and several memory sticks used to program the machines. The paper said the theft sparked a “scramble to investigate and to ensure the machines had not been compromised.”

City officials privately expressed concern that President Donald Trump and his allies might use news of the theft to cast doubt on the integrity of the city’s elections “in light of false claims and conspiracy theories he cited during Tuesday’s presidential debate,” the Inquirer reported.

The paper added that officials “initially refused to confirm the theft or that an investigation had been opened. They only did so after The Inquirer informed them it would be reporting the incident based on sources who were not authorized to publicly discuss it.”

Far-left Mayor Jim Kenney weighed in, telling the paper in a statement: “I have immediately committed to making necessary police resources available to investigate this incident and find the perpetrators. I have also committed to the city commissioners additional resources to provide enhanced security at the warehouse going forward. This matter should not deter Philadelphians from voting, nor from having confidence in the security of this election.”

Yet the WHYY reporter still got inside

In the wake of Marin’s report of lax security at the warehouse, he said Deputy Commissioner Nick Custodio told him a security guard should have been stationed outside the door he walked through — but didn’t know if the guard was supposed to be there 24 hours or only during operating hours.

Custodio works for the Office of City Commissioners, which oversees elections in Philadelphia, and told WHYY he would address the situation.

In response to the theft and Marin’s breach of the warehouse, city spokesperson Mike Dunn told the station new safeguards would include:

  • Greatly increasing the number of security personnel stationed at the site (24/7);
  • Adding a round-the-clock police presence;
  • Instituting a strict logging procedure for anyone entering and exiting the buildings;
  • Enforcing strict adherence to the current policy.
Share
Categories
1984 Big Brother Children Consent COVID-19 DHS Education Federal Law George Orwell Government Headline News Hoax Intelwars Kids LIES mass surveillance Monitoring Orwellian Police State public schools put away anyone scamdemic school district School resource officers Security session United States USSA virtual learning Warrants watching welfare checks on the rise

Virtual School Dangers: The Hazards of a Police State Education During COVID-19

This article was originally published by  John W. Whitehead at The Rutherford Institute. 

“There was of course no way of knowing whether you were being watched at any given moment. How often, or on what system, the Thought Police plugged in on any individual wire was guesswork. It was even conceivable that they watched everybody all the time. But at any rate they could plug in your wire whenever they wanted to. You had to live—did live, from habit that became instinct—in the assumption that every sound you made was overheard, and, except in darkness, every movement scrutinized.”—George Orwell, 1984

Once upon a time in America, parents breathed a sigh of relief when their kids went back to school after a summer’s hiatus, content in the knowledge that for a good portion of the day, their kids would be gainfully occupied, out of harm’s way, and out of trouble.

Back then, if you talked back to a teacher, or played a prank on a classmate, or just failed to do your homework, you might find yourself in detention or doing an extra writing assignment after school or suffering through a parent-teacher conference about your shortcomings.

Of course, that was before school shootings became a part of our national lexicon.

As a result, over the course of the past 30 years, the need to keep the schools “safe” from drugs and weapons has become a thinly disguised, profit-driven campaign to transform them into quasi-prisons, complete with surveillance cameras, metal detectors, police patrols, zero-tolerance policies, lockdowns, drug-sniffing dogs, school resource officers, strip searches, and active shooter drills.

Suddenly, under school zero-tolerance policies, students were being punished with suspension, expulsion, and even arrest for childish behavior and minor transgressions such as playing cops and robbers on the playground, bringing LEGOs to school, or having a food fight.

Things got even worse once schools started to rely on police (school resource officers) to “deal with minor rulebreaking: sagging pants, disrespectful comments, brief physical skirmishes.”

As a result, students are being subjected to police tactics such as handcuffs, leg shackles, tasers, and excessive force for “acting up,” in addition to being ticketed, fined, and sent to court for behavior perceived as defiant, disruptive, or disorderly such as spraying perfume and writing on a desk.

This is what constitutes a police state education these days: lessons in compliance meted out with aggressive, totalitarian tactics.

The COVID-19 pandemic has added yet another troubling layer to the ways in which students (and their families) can run afoul of a police state education now that school (virtual or in-person) is back in session.

Significant numbers of schools within the nation’s 13,000 school districts have opted to hold their classes online, in-person or a hybrid of the two, fearing further outbreaks of the virus. Yet this unprecedented foray into the virtual world carries its own unique risks.

Apart from the technological logistics of ensuring that millions of students across the country have adequate computer and internet access, consider the Fourth Amendment ramifications of having students attend school online via video classes from the privacy of their homes.

Suddenly, you’ve got government officials (in this case, teachers or anyone at the school on the other end of that virtual connection) being allowed carte blanche visual access to the inside of one’s private home without a warrant.

Anything those school officials see—anything they hear—anything they photograph or record—during that virtual visit becomes fair game for scrutiny and investigation not just by school officials but by every interconnected government agency to which that information can be relayed: the police, social services, animal control, the Department of Homeland Security, you name it.

After all, this is the age of overcriminalization, when the federal criminal code is so vast that the average American unknowingly commits about three federal felonies per day and a U.S. Attorney can find a way to charge just about anyone with violating federal law.

It’s a train wreck just waiting to happen.

In fact, we’re already seeing this play out across the country. For instance, a 12-year-old Colorado boy was suspended for flashing a toy gun across his computer screen during an online art class. Without bothering to notify or consult with the boy’s parents, police carried out a welfare check on Isaiah Elliott, who suffers from ADHD and learning disabilities.

An 11-year-old Maryland boy had police descend on his home in search of weapons after school officials spied a BB gun on the boy’s bedroom wall during a Google Meet class on his laptop. School officials reported the sighting to the school resource officer, who then called the police.

And in New York and Massachusetts, growing numbers of parents are being visited by social services after being reported to the state child neglect and abuse hotline, all because their kids failed to sign in for some of their online classes. Charges of neglect, in some instances, can lead to children being removed from their homes.

You see what this is, don’t you?

This is how a seemingly well-meaning program (virtual classrooms) becomes another means by which the government can intrude into our private lives, further normalizing the idea of constant surveillance and desensitizing us to the dangers of an existence in which we are never safe from the all-seeing eyes of Big Brother.

This is how the police sidestep the Fourth Amendment’s requirement for probable cause and a court-issued warrant in order to spy on us in the privacy of our homes: by putting school officials in a position to serve as spies and snitches via online portals and virtual classrooms, and by establishing open virtual doorways into our homes through which the police can enter uninvited and poke around.

Welfare checks. Police searches for weapons. Reports to Social Services.

It’s only a matter of time before the self-righteous Nanny State uses this COVID-19 pandemic as yet another means by which it can dictate every aspect of our lives.

At the moment, it’s America’s young people who are the guinea pigs for the police state’s experiment in virtual authoritarianism. Already, school administrators are wrestling with how to handle student discipline for in-person classes and online learning in the midst of COVID-19.

Mark my words, this will take school zero-tolerance policies—and their associated harsh disciplinary penalties—to a whole new level once you have teachers empowered to act as the Thought Police.

As Kalyn Belsha reports for Chalkbeat, “In Jacksonville, Florida, students who don’t wear a mask repeatedly could be removed from school and made to learn online. In some Texas districts, intentionally coughing on someone can be classified as assault. In Memphis, minor misbehaviors could land students in an online ‘supervised study.’”

Depending on the state and the school district, failing to wear a face mask could constitute a dress code violation. In Utah, not wearing a face mask at school constitutes a criminal misdemeanor. In Texas, it’s considered an assault to intentionally spit, sneeze, or cough on someone else. Anyone removing their mask before spitting or coughing could be given a suspension from school.

Virtual learning presents its own challenges with educators warning dire consequences for students who violate school standards for dress code and workspaces, even while “learning” at home. According to Chalkbeat, “In Shelby County, Tennessee, which includes Memphis, that means no pajamas, hats, or hoods on screen, and students’ shirts must have sleeves. (The district is providing ‘flexibility’ on clothing bottoms and footwear when a student’s full-body won’t be seen on video.) Other rules might be even tougher to follow: The district is also requiring students’ work stations to be clear of ‘foreign objects’ and says students shouldn’t eat or drink during virtual classes.”

See how quickly the Nanny State a.k.a. Police State takes over?

All it takes for you to cease being the master of your own home is to have a child engaged in virtual learning. Suddenly, the government gets to have a say in how you order your space and when those in your home can eat and drink and what clothes they wear.

If you think the schools won’t overreact in a virtual forum, you should think again.

These are the same schools that have been plagued by a lack of common sense when it comes to enforcing zero-tolerance policies for weapons, violence, and drugs.

These are the very same schools that have exposed students to a steady diet of draconian zero-tolerance policies that criminalize childish behavior, overreaching anti-bullying statutes that criminalize speech, school resource officers (police) tasked with disciplining and/or arresting so-called “disorderly” students, standardized testing that emphasizes rote answers over critical thinking, politically correct mindsets that teach young people to censor themselves and those around them, and extensive biometric and surveillance systems that, coupled with the rest, acclimate young people to a world in which they have no freedom of thought, speech or movement.

Zero tolerance policies that were intended to make schools safer by discouraging the use of actual drugs and weapons by students have turned students into suspects to be treated as criminals by school officials and law enforcement alike while criminalizing childish behavior.

For instance, 9-year-old Patrick Timoney was sent to the principal’s office and threatened with suspension after school officials discovered that one of his LEGOs was holding a 2-inch toy gun. David Morales, an 8-year-old Rhode Island student, ran afoul of his school’s zero-tolerance policies after he wore a hat to school decorated with an American flag and tiny plastic Army figures in honor of American troops. School officials declared the hat out of bounds because the toy soldiers were carrying miniature guns.

A high school sophomore was suspended for violating the school’s no-cell-phone policy after he took a call from his father, a master sergeant in the U.S. Army who was serving in Iraq at the time. In Houston, an 8th grader was suspended for wearing rosary beads to school in memory of her grandmother (the school has a zero-tolerance policy against the rosary, which the school insists can be interpreted as a sign of gang involvement).

Even imaginary weapons (hand-drawn pictures of guns, pencils twirled in a “threatening” manner, imaginary bows and arrows, even fingers positioned like guns) can also land a student in detention. Equally outrageous was the case in New Jersey where several kindergartners were suspended from school for three days for playing a make-believe game of “cops and robbers” during recess and using their fingers as guns.

With the distinctions between student offenses erased, and all offenses expellable, we now find ourselves in the midst of what Time magazine described as a “national crackdown on Alka-Seltzer.” Students have actually been suspended from school for possession of the fizzy tablets in violation of zero-tolerance drug policies. Students have also been penalized for such inane “crimes” as bringing nail clippers to school, using Listerine or Scope, and carrying fold-out combs that resemble switchblades.

A 13-year-old boy in Manassas, Virginia, who accepted a Certs breath mint from a classmate, was actually suspended and required to attend drug-awareness classes, while a 12-year-old boy who said he brought powdered sugar to school for a science project was charged with a felony for possessing a look-alike drug.

Acts of kindness, concern, basic manners or just engaging in childish behavior can also result in suspensions.

One 13-year-old was given detention for exposing the school to “liability” by sharing his lunch with a hungry friend. A third-grader was suspended for shaving her head in sympathy for a friend who had lost her hair to chemotherapy. And then there was the high school senior who was suspended for saying “bless you” after a fellow classmate sneezed.

In South Carolina, where it’s against the law to disturb a school, more than a thousand students a year—some as young as 7 years old—“face criminal charges for not following directions, loitering, cursing, or the vague allegation of acting ‘obnoxiously.’ If charged as adults, they can be held in jail for up to 90 days.”

Things get even worse when you add police to the mix.

Thanks to a combination of media hype, political pandering, and financial incentives, the use of armed police officers (a.k.a. school resource officers) to patrol school hallways has risen dramatically in the years since the Columbine school shooting (nearly 20,000 by 2003). What this means, notes Mother Jones, is greater police “involvement in routine discipline matters that principals and parents used to address without involvement from law enforcement officers.”

Funded by the U.S. Department of Justice, these school resource officers (SROs) have become de facto wardens in the elementary, middle, and high schools, doling out their own brand of justice to the so-called “criminals” in their midst with the help of tasers, pepper spray, batons, and brute force.

The horror stories are legion.

One SRO is accused of punching a 13-year-old student in the face for cutting in the cafeteria line. That same cop put another student in a chokehold a week later, allegedly knocking the student unconscious and causing a brain injury.

In Pennsylvania, a student was tased after ignoring an order to put his cell phone away.

A 12-year-old New York student was hauled out of school in handcuffs for doodling on her desk with an erasable marker. Another 12-year-old was handcuffed and jailed after he stomped in a puddle, splashing classmates.

On any given day when school is in session, kids who “act up” in class are pinned facedown on the floor, locked in dark closets, tied up with straps, bungee cords, and duct tape, handcuffed, leg shackled, tasered, or otherwise restrained, immobilized or placed in solitary confinement in order to bring them under “control.”

In almost every case, these undeniably harsh methods are used to punish kids for simply failing to follow directions or throwing tantrums.

Very rarely do the kids pose any credible danger to themselves or others.

For example, a 4-year-old Virginia preschooler was handcuffed, leg shackled and transported to the sheriff’s office after reportedly throwing blocks and climbing on top of the furniture. School officials claim the restraints were necessary to protect the adults from injury.

A 6-year-old kindergarten student in a Georgia public school was handcuffed, transported to the police station, and charged with simple battery of a schoolteacher and criminal damage to property for throwing a temper tantrum at school.

This is the end product of all those so-called school “safety” policies, which run the gamut from zero-tolerance policies that punish all infractions harshly to surveillance cameras, metal detectors, random searches, drug-sniffing dogs, school-wide lockdowns, active-shooter drills, and militarized police officers.

Yet these police state tactics did not make the schools any safer.

As I point out in my book Battlefield America: The War on the American People, police state tactics never make anyone safer so much as they present the illusion of safety and indoctrinate the populace to comply, fear, and march in lockstep with the government’s dictates.

Now with virtual learning in the midst of this COVID-19 pandemic, the stakes are even higher.

It won’t be long before you start to see police carrying out knock-and-talk investigations based on whatever speculative information is gleaned from those daily virtual classroom sessions that allow government officials entry to your homes in violation of the Fourth Amendment.

It won’t take much at all for SWAT teams to start crashing through doors based on erroneous assumptions about whatever mistaken “contraband” someone may have glimpsed in the background of a virtual classroom session: a maple leaf that looks like marijuana, a jar of sugar that looks like cocaine, a toy gun, someone playfully shouting for help in the distance.

This may sound far-fetched now, but it’s only a matter of time before this slippery slope becomes yet another mile marker on the one-way road to tyranny.

The post Virtual School Dangers: The Hazards of a Police State Education During COVID-19 first appeared on SHTF Plan – When It Hits The Fan, Don't Say We Didn't Warn You.

Share
Categories
Encrypting the Web Intelwars privacy Security

Cryptographer and Entrepreneur Jon Callas Joins EFF as Technology Projects Director

Some of the most important work we do at EFF is build technologies to protect users’ privacy and security, and give developers tools to make the entire Internet ecosystem more safe and secure. Every day, EFF’s talented and dedicated computer scientists and engineers are creating and making improvements to our free, open source extensions, add-ons, and software to solve the problems of creepy tracking and unreliable encryption on the web.

Joining EFF this week to direct and shepherd these technology projects is internationally recognized cybersecurity and encryption expert Jon Callas. He will be working with our technologists on Privacy Badger, a browser add-on that stops advertisers and other third-party trackers from secretly tracking users’ web browsing, and HTTPS Everywhere, a Firefox, Chrome, and Opera extension that encrypts user communications with major websites, to name a few of EFF’s tech tools.

Callas will also bring his considerable crypto and security chops to our policy efforts around encryption and securing the web. In the last two decades he has designed and built core cryptographic and concurrent programming systems that are in use by hundreds of millions of people.

As an entrepreneur, Callas has been at the center of key security and privacy advancements in mobile communications and email—the best-known of which is PGP (Pretty Good Privacy), one of the most important encryption standards to date. He was chief scientist at the original PGP Inc., and co-founded PGP Corp. in 2002. Later, Callas was chief technology officer at Internet security company Entrust, and co-founded encrypted communications firm Silent Circle, where he led teams making apps for encrypted chat and phone calls, including secure conference calls and an extra-secure Android phone called Blackphone.

Callas also did a couple of stints at Apple, where he helped design the encryption system to protect data stored on the Mac, and led a team that hacked new products to expose vulnerabilities before release. Along the way, he has garnered extensive leadership experience, having managed large engineering teams. In 2018, Callas left the corporate world to focus on policy as a technology fellow at the ACLU. In July he took aim at fatal flaws in the UK’s proposal to force service providers to give the government “exceptional access” to people’s encrypted communications (in other words, let the government secretly access private, encrypted messages).

The proposal’s authors denied the plan would “break” encryption, saying it would merely suppress notifications that the government happened to be accessing communications that people believe are secure, private, and free of interception. As Callas wrote at the ACLU, “a proposal that keeps encryption while breaking confidentiality is a distinction without a difference.”

We couldn’t agree more. EFF has fought since its founding 30 years ago to keep the government from breaking, or forcing others to break, encryption, and for people’s right to private and secure communications. We’re proud to have such a talented and passionate advocate for these principles on our team. Welcome, Jon!

Share
Categories
Intelwars Legal Analysis privacy Security

EFF and ACLU Tell Federal Court that Forensic Software Source Code Must Be Disclosed

Can secret software be used to generate key evidence against a criminal defendant? In an amicus brief filed ten days ago with the United States District Court for the Western District of Pennsylvania, EFF and the ACLU of Pennsylvania explain that secret forensic technology is inconsistent with criminal defendants’ constitutional rights and the public’s right to oversee the criminal trial process. Our brief in the case of United States v. Ellis also explains why source code, and other aspects of forensic software programs used in a criminal prosecution, must be disclosed in order to ensure that innocent people do not end up behind bars, or worse—on death row.

The Constitution guarantees anyone accused of a crime due process and a fair trial. Embedded in those foundational ideals is the Sixth Amendment right to confront the evidence used against you. As the Supreme Court has recognized, the Confrontation Clause’s central purpose was to ensure that evidence of a crime was reliable by subjecting it to rigorous testing and challenges. This means that defendants must be given enough information to allow them to examine and challenge the accuracy of evidence relied on by the government.

In addition, the public has a constitutional right of access to court proceedings. While this right is not absolute, it is clearly implicated here, where the government seeks to use secret software to generate evidence of criminal culpability.

In this case, Mr. Ellis was accused of violating a federal law prohibiting people who have been previously convicted of a felony from possessing a firearm (18 U.S.C. 922(g)(1)). The weapon had not been found in Mr. Ellis’s possession, but was found in a car he was allegedly driving. Law enforcement officers retrieved a swab of a DNA mixture from the gun, which they submitted to the police forensic lab for analysis. The lab results were inconclusive as to whether Mr. Ellis could have contributed to the DNA in the mixture. The mixture sample was then sent to Cybergenetics, the owner of the probabilistic DNA software TrueAllele. Using TrueAllele, the company ran numerous variations of tests on the sample using different hypotheses to adjust the program settings, including alternative theories regarding the number of people whose DNA was in the mixture.

Prosecutors in the case seek to rely on the result of one particular analysis based on the assumption that four people contributed to the DNA sample from the gun. The results of this particular analysis suggest that Mr. Ellis’s DNA was present on the gun. In response, Mr. Ellis’s attorney requested the source code for TrueAllele, but the government refused to disclose it, arguing that the information is protected by trade secrets.

As EFF has previously pointed out, DNA analysis programs are not uniquely immune to errors and bugs, and criminal defendants cannot be forced to take anyone’s word when it comes to the evidence used to imprison them. Independent examination of the source code of forensic software similar to TrueAllele has revealed mistakes and flaws that call into question the accuracy of these tools and their suitability for the criminal justice system. A defendant’s Sixth Amendment right to confrontation requires that they are provided with the information necessary to challenge and expose any material defects in the purported evidence of their guilt. In an exceptional case, a court could issue a protective order limiting disclosure to the defense team, but the default must be disclosure.

Without disclosure, we, as the public, cannot have confidence in a verdict against Mr. Ellis.

Share
Categories
Agenda 21 civil unrest Civilians control disobedience Emergency Preparedness freedom George Soros Headline News hypocritical immoral Intelwars jeff merkley Mainstream media Manipulation Martial Law mind control oregon paid agitators peace police brutality Police State Politicians portland power predictive programming rioters Rioting Roy Wyden Security Self-Defense suspended rights War Warning

Senator Warns: The U.S. Is “Staring Down The Barrel of Martial Law”

Martial law should be considered nothing less than a straight-up war on the American people. Now, an Oregon senator is warning that Americans are “staring down the barrel” of martial law.

Martial Law Masquerading as Law and Order: The Police State’s Language of Force

Anyone who relishes any level of freedom and security should never accept martial law. It is, by far, one of the most evil and deceitful ways to wage war on the very people the ruling class has been stealing from for years to ensure it can outgun them. We are on the cusp of it, too, and some areas are already under martial law. The government has declared war on its own people, and it will not give up power willingly. You have two choices: obey or disobey. Neither is good.

Agenda 21 Requires Martial Law

In interviews with the Guardian, Democrat Ron Wyden said the federal government’s authoritarian tactics in Portland and other cities posed an “enormous” threat to democracy, while his fellow senator Jeff Merkley described them as “an all-out assault in military-style fashion.” Keep in mind that it isn’t as if the protesters in Portland are being peaceful, either. If anything, they are being destructive, and regular citizens are being restrained from protecting themselves in the midst of so many laws and regulations.

Racism is Dead! Racist Elk Statue Burns In Portland, Oregon

Speaking by phone, Wyden said: “Unless America draws a line in the sand right now, I think we could be staring down the barrel of martial law in the middle of a presidential election.”  Protesting peacefully and rioting are two different things.  Are some of the rioters paid agitators? That’s more likely than not. We know George Soros is funding the majority of this violence.  So should cities just burn? No, that’s not a solution either. It is possible to be against the violence of the federal agents and the violence of the rioters at the same time.  In fact, it’s hypocritical and immoral to take any other position.

You Can Be Against Police Brutality & Looting and Rioting At The Same Time

Prepare for martial law and continued civil unrest. People are not going to be content, nor will they accept the outcome of the election in November, regardless of which puppet the Federal Reserve has already decided will sit on the throne. If you don’t know how to defend yourself or at least some of the fundamental basics of self-defense, now would be an excellent time to learn. The mainstream media and the politicians are using predictive programming to tell us what’s coming, and from what I can tell, it will not be a fun autumn or winter this year.

Those Who Planned The Enslavement of Mankind Warn Of “A Dark Winter” For Us

Share
Categories
Intelwars Security

EFF Welcomes Cybersecurity Expert Tarah Wheeler to Advisory Board

Cybersecurity policy expert. Security researcher. Women in tech advocate. Entrepreneur. Tarah Wheeler’s expertise and experience encompass the most pressing issues in tech, and we’re honored to announce that she is joining EFF’s advisory board. She will be helping us with our work on information security, data privacy, building diverse and effective engineering teams, and influencing the future of cybersecurity.

Wheeler has long been involved in making tech systems more secure for everyone. She is an International Security Fellow at New America’s International Security Program, leading a new international cybersecurity capacity building project with the Hewlett Foundation’s Cyber Initiative. At Splunk, a big data analytics platform, Wheeler was head of offensive security and technical data privacy. Earlier she was senior director of engineering and principal security advocate at Symantec Website Security, and designed systems at encrypted mobile communications firm Silent Circle. Wheeler is founder of information security consultancy Red Queen Technologies, and her 2018 Foreign Policy article on cyberwar called attention to cyberwarfare’s impact on civilians.

In May Wheeler received the US/UK Fulbright Cyber Security Scholar Award for distinguished scholars in the field. She will conduct research at the University of Oxford and with the UK National Health Service (NHS) on defining cyber war crimes and mitigating civilian bystander harms in nation-state sponsored cyberattacks. Wheeler’s Fulbright-supported research will explore both the technical and the social elements of protecting people against cyberconflict by examining the civilian impact of the WannaCry ransomware attack on the NHS.

Adding diversity and ensuring gender equity in tech and infosec has been a focus of Wheeler’s for nearly a decade. She is the lead author of the 2016 best-selling book Women In Tech: Take Your Career to The Next Level With Practical Advice And Inspiring Stories, which provides guidance from top female engineers, entrepreneurs, gamers, and coders.

Wheeler is also a poker player, and says the game isn’t unlike cybersecurity work. Fixing security problems is tough, and in the moment can feel like everything rests on a single decision. “But, over time, you start to fine-tune your sense of decision making,” Wheeler said in an interview last year.

“That’s what poker is like—folding what you have calculated is likely not a winning hand, even if you’re not perfectly sure. Being sure enough that you make a good decision and following through on that good decision and gradually tuning your game so you’re better over time is what poker brought me when it comes to my decision-making process in cybersecurity.”

Welcome to EFF, Tarah!

Share
Categories
Intelwars privacy Security Social Networks

After This Week’s Hack, It Is Past Time for Twitter to End-to-End Encrypt Direct Messages

Earlier this week, chaos reigned supreme on Twitter as high-profile public figures—from Elon Musk to Jeff Bezos to former President Barack Obama—started tweeting links to the same bitcoin scam.

Twitter’s public statement and reporting from Motherboard suggest attackers gained access to an internal admin tool at the company, and used it to take over these accounts. Twitter says that approximately 130 accounts were targeted. According to Reuters, the attackers offered accounts for sale right before the bitcoin scam attack.

The full extent of the attack is unclear at this point, including what other capabilities the attackers might have had, or what other user information they could have accessed and how. Users cannot avoid a hack like this by strengthening their password or using two-factor authentication (though you should still take those steps to protect against other, much more common attacks). Instead, it’s Twitter’s responsibility to provide robust internal safeguards. Even with Twitter’s strong security team, it is almost impossible to defend against all insider threats and social engineering attacks—so these safeguards must prevent even an insider from getting unnecessary access.

Twitter direct messages (or DMs), some of the most sensitive user data on the platform, are vulnerable to this week’s kind of internal compromise. That’s because they are not end-to-end encrypted, so Twitter itself has access to them. That means Twitter can hand them over in response to law enforcement requests, they can be leaked, and—in the case of this week’s attack—internal access can be abused by malicious hackers and Twitter employees themselves.

End-to-end encryption provides the robust internal safeguard that Twitter needs. Twitter wouldn’t have to worry about whether or not this week’s attackers read or exfiltrated DMs if it had end-to-end encrypted them, like we have been asking Twitter to do for years.
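To make the distinction concrete, here is a minimal sketch of what end-to-end encryption looks like, written with the PyNaCl library purely for illustration (this is our own toy example, not a description of anything Twitter has built): the platform stores and relays only ciphertext, so neither an insider nor an attacker holding an internal admin tool has anything useful to read.

```python
# Illustrative only: a toy end-to-end encryption flow using PyNaCl.
# Key pairs live on the users' devices; the platform never sees private keys.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()   # generated on Alice's device
bob_key = PrivateKey.generate()     # generated on Bob's device

# Alice encrypts a DM for Bob with her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"this DM is for your eyes only")

# The platform stores and relays only `ciphertext`. An employee, or an
# attacker with access to an internal admin tool, cannot recover the text.

# Bob decrypts on his own device with his private key and Alice's public key.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"this DM is for your eyes only"
```

In a real messenger the key exchange and encryption happen transparently inside the client apps; the important property is simply that private keys, and therefore readable message contents, never touch the company’s servers.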

Senator Ron Wyden also called for Twitter to end-to-end encrypt DMs after the hack, reminding Twitter CEO Jack Dorsey that two years ago Dorsey had reassured the Senator that end-to-end encryption was in the works.

Many other popular messaging systems are already using end-to-end encryption, including WhatsApp, iMessage, and Signal. Even Facebook Messenger offers an end-to-end encrypted option, and Facebook has announced plans to end-to-end encrypt all its messaging tools. It’s a no-brainer that Twitter should protect your DMs too, and they have been unencrypted for far too long.

Finally, let’s all pour one out for Twitter’s Incident Response team, living the security response nightmare in real time. We appreciate their work, and @TwitterSupport for providing ongoing updates on the investigation.

Share
Categories
competition Intelwars International privacy Security transparency

OTF’s Work Is Vital for a Free and Open Internet

Keeping the internet open, free, and secure requires eternal vigilance and the constant cooperation of freedom defenders all over the web and the world. Over the past eight years, the Open Technology Fund (OTF) has fostered a global community and provided support—both monetary and in-kind—to more than four hundred projects that seek to combat censorship and repressive surveillance, enabling more than two billion people in over 60 countries to more safely access the open Internet and advocate for democracy.

OTF has earned trust over the years through its open source ethos, transparency, and a commitment to independence from its funder, the US Agency for Global Media (USAGM), which receives its funding through Congressional appropriations.

In the past week, USAGM has removed OTF’s leadership and independent expert board, prompting a number of organizations and individuals to call into question OTF’s ability to continue its work and maintain trust among the various communities it serves. USAGM’s new leadership has been lobbied to redirect funding for OTF’s open source projects to a new set of closed-source tools, leaving many well-established tools in the lurch.

Why OTF Matters

EFF has maintained a strong relationship with OTF since its inception. Several of our staff members serve or have served on its Advisory Council, and OTF’s annual summits have provided crucial links between EFF and the international democracy tech community. OTF’s support has been vital to the development of EFF’s software projects and policy initiatives. Guidance and funding from OTF have been foundational to Certbot, helping the operators of tens of millions of websites use EFF’s tool to generate and install Let’s Encrypt certificates. The OTF-sponsored fellowship for Wafa Ben-Hassine produced impactful research and policy analysis about how Arab governments repress online speech.

OTF’s funding is focused on tools to help individuals living under repressive governments. For example, OTF-funded circumvention technologies including Lantern and WireGuard are used by tens of millions of people around the world, including millions of daily users in China. OTF also incubated and assisted in the initial development of the Signal Protocol, the encryption back-end used by both Signal and WhatsApp. By sponsoring Let’s Encrypt’s implementation of multi-perspective validation, OTF helped protect the 227 million sites using Let’s Encrypt from BGP attacks, a favorite technique of nation-states that hijack websites for censorship and propaganda purposes.

While these tools are designed for users living under repressive governments, they are used by individuals and groups all over the world, and benefit movements as diverse as Hong Kong’s Democracy movement, the movement for Black lives, and LGBTQ+ rights defenders. 

OTF requires public, verifiable security audits for all of its open-source software grantees. These audits greatly reduce risk for the vulnerable people who use OTF-funded technology. Perhaps more importantly, they are a necessary step in creating trust between US-funded software and foreign activists in repressive regimes.  Without that trust, it is difficult to ask people to risk their lives on OTF’s work.

Help Us #SaveInternetFreedom

It is not just OTF that is under threat, but the entire ecosystem of open source, secure technologies—and the global community that builds those tools. We urge you to join EFF and more than 400 other organizations in signing the open letter, which asks members of Congress to:

  • Require USAGM to honor existing FY2019 and FY2020 spending plans to support the Open Technology Fund;
  • Require all US-Government internet freedom funds to be awarded via an open, fair, competitive, and evidence-based decision process;
  • Require all internet freedom technologies supported with US-Government funds to remain fully open-source in perpetuity;
  • Require regular security audits for all internet freedom technologies supported with US-Government funds; and
  • Pass the Open Technology Fund Authorization Act.

EFF is proud to join the voices of hundreds of organizations and individuals across the globe calling on USAGM and OTF’s board to recommit to the value of open source technology, robust security audits, and support for global Internet freedom. These core values—which have been a mainstay of OTF’s philanthropy—are vital to uplifting the voices of billions of technology users facing repression all over the world.

Share
Categories
COVID-19 and Digital Rights Digital Rights and the Black-led Movement Against Police Violence Intelwars privacy Security Security Education Technical Analysis

Staying Private While Using Google Docs for Legal & Mutual Aid Work

Regardless of your opinion about Google, their suite of collaborative document editing tools provides a powerful resource in this tumultuous time. Across the country, grassroots groups organizing mutual aid relief work in response to COVID-19 and legal aid as part of the recent wave of protests have relied on Google Docs to coordinate efforts and get help to those that need it. Alternatives to the collaborative tools either do not scale well, are not as usable or intuitive, or just plain aren’t available. Using Google Sheets to coordinate who needs help and how can provide much-needed relief to those hit hardest. But it’s easy to use these tools in a way Google didn’t envision, and trigger account security lockouts in the process.

The need for privacy when doing sensitive work is often paramount, so it’s understandable that organizers won’t want to use their personal Google accounts. But administering aid documents from a single centralized account and sharing the password among peers is not recommended. If one person accessing the account connects from an IP address Google has marked as suspicious, Google may lock the account for some time (this can happen for a variety of reasons—a neighbor piggybacking on your WiFi and using it to hack a website, for example). The bottom line: the more IP addresses that connect to a single account, the more likely the account is to be flagged as suspicious.

In addition, sharing a password makes it easy for someone to change that password, locking everyone else out. It also means that you can’t protect the account with 2-step verification without a lot of difficulty. With 2-step verification, every sign-in requires a temporary code from an authenticator app or a hardware authentication key in addition to the password, which protects the account from various password-stealing attacks.
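For readers curious about what that app-generated code actually is: most authenticator apps implement the time-based one-time password (TOTP) standard. The rough sketch below uses the pyotp library for illustration; it is not how Google implements 2-step verification internally, just the general scheme.

```python
# Illustrative TOTP sketch using pyotp (pip install pyotp).
import pyotp

# At enrollment, the service generates a shared secret; the user loads it
# into an authenticator app, usually by scanning a QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# At sign-in, the app derives a short-lived code from the secret and the
# current time; the service computes the same code and compares.
code = totp.now()
print("Current code:", code)
print("Accepted:", totp.verify(code))  # True within the ~30-second window
```

Because the code changes every 30 seconds or so and is derived from a secret the attacker does not have, a stolen password alone is no longer enough to sign in.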

For any documents that you create, you’ll want clear attribution for any changes made, even if it is attributable only to a pseudonym. This helps ensure that if false or malicious data is introduced, you know where it came from. Google Docs and Sheets allow you to see the history of changes to a document, and who made those changes. You can also revert to a previous version of the document.

Unfortunately, in our testing we found that Google requires a valid phone number to create and edit documents from an account. (Instead of using Google Sheets to organize data, you might consider Google Forms, which allows you to build out a custom form that anyone can submit to, even without an account.) The author of a document can also share the document via a link with editor or commenter permissions, but this also requires a Google account. Google already has a mechanism for determining if a user is legitimate, via its reCAPTCHA service. Instead of requiring sensitive identifying information like phone numbers, it should allow users to create anonymous or pseudonymous accounts without having to link a phone number.

There are a number of routes to getting a phone number that Google will accept and send you a verification code for. The best method for setting up your account depends on how private you want the account to be. Your real phone number is often easily linked back to your address.  One step of removal is using a service that generates real phone numbers that can accept SMS messages. There are many such services out there, and most will have you sign up with your real phone number to generate those numbers. These include apps like Burner and full communications platforms such as Twilio.  When you establish an account relying on a phone number generated by a third-party (but, ultimately, connected to your phone number), linking a document to your identity will require information from both Google as well as the third-party service. For extra privacy, users should look into purchasing a prepaid SIM card and use a burner phone to receive the verification SMS. If you’re going down this route, you’ll probably also be interested in using a VPN or Tor Browser when collaborating.
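If you go the third-party-number route and are comfortable with a bit of scripting, the sketch below shows roughly how that might look with Twilio’s Python SDK: buying an SMS-capable number, giving it to Google during sign-up, and then reading the verification code from the message log. This assumes you already have a funded Twilio account; the credentials and region below are placeholders, and the Twilio account itself is still tied to whatever phone number and payment details you registered with.

```python
# Rough sketch: obtaining an SMS-capable number with Twilio's Python SDK
# (pip install twilio). The account SID and auth token are placeholders.
from twilio.rest import Client

client = Client("ACxxxxxxxxxxxxxxxx", "your_auth_token")

# Find and purchase one SMS-capable US number.
candidate = client.available_phone_numbers("US").local.list(
    sms_enabled=True, limit=1
)[0]
number = client.incoming_phone_numbers.create(phone_number=candidate.phone_number)
print("Use this number for the Google verification step:", number.phone_number)

# After Google sends the verification SMS, read recent inbound messages
# to that number and look for the code.
for message in client.messages.list(to=number.phone_number, limit=5):
    print(message.from_, message.body)
```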

There is not a one-size-fits-all solution to collaborating privately with Google Docs. Your decisions on how private you want to be will depend on your own security plan, as well as that of your collaborators.

Share