
Emails from 2016 Show Amazon Ring’s Hold on the LAPD Through Camera Giveaways

In March 2016, “smart” doorbell camera maker Ring was a growing company attempting to market its wireless smart security camera when it received an email from an officer in the Los Angeles Police Department (LAPD) Gang and Narcotics Division, who was interested in purchasing a slew of devices.

The Los Angeles detective wanted 20 cameras, consisting of 10 doorbell cameras and 10 “stick up” cameras, which retailed for nearly $3,000. Ring, headquartered in nearby Santa Monica, first offered a discount but quickly sweetened the deal: “I’d be happy to send you those units free of charge,” a Ring employee told the officer, according to emails released in response to California Public Records Act (CPRA) requests filed by EFF and NBC’s Clark Fouraker. These emails are also the subject of a detailed new report from the Los Angeles Times.

Email from Ring employee to LAPD officer

Ring offered nearly $3,000 worth of camera equipment to the LAPD in 2016, to aid in an investigation.

A few months later, in July 2016, Ring was working with an LAPD officer to distribute a discount code that would allow officers to purchase Ring cameras for $50 off. As a growing number of people used his discount code, Ring offered the officer more and more free equipment.

Officers were offered rewards based on how many people had used their personal coupon codes to order products.

These officers receiving free equipment, either for an investigation or for their “hard work” helping to promote the sale of Ring through discount codes, were not isolated incidents. Across the LAPD—from the gang division in Downtown to community policing units in East Los Angeles and Brentwood—Ring offered, or officers requested, thousands of dollars’ worth of free products in exchange for officers’ promotion of Ring products to fellow officers and the larger community, seemingly in violation of department prohibitions on both accepting gifts from vendors and endorsing products.

In another incident, the LAPD asked Ring for cameras to aid in an investigation involving a slew of church break-ins. Ring offered to send the police a number of cameras free of charge, but not without recognizing a marketing opportunity: “If the church sees value in the devices, perhaps it’s something that they can talk about with their members. Let’s talk more about this on the phone, but for now, I’ll get those devices sent out ASAP.”

While offering free cameras to aid in a string of church robberies, a Ring representative suggested marketing the cameras to the church’s members.

The LAPD released over 3,000 pages of emails from 2016 between Ring representatives and LAPD personnel in response to the CPRA requests. The records show that leading up to Ring’s official launch of partnerships with police departments—which now number almost 150 in California and over 2000 across the country—Ring worked steadily with Los Angeles police officers to provide free or discounted cameras for official and personal use, and in return, the LAPD worked to encourage the spread of Ring’s products throughout the community. The emails show officers were ready to tout the Ring camera as a device they used themselves, one they “love,” “completely believe in,” and “support.”

In an email, an employee of the LAPD says they recommend Ring’s doorbell camera to everyone they meet.

For over a year, EFF has been sounding the alarm about Ring and its police partnerships, which have in effect created neighborhood-wide surveillance networks without public input or debate. As part of these partnerships, Ring controls when and how police speak about Ring—with the company often requiring final say over statements and admonishing police departments who stray from the script.

Racial justice and civil liberties advocates have continually pointed out how Ring enables racial profiling. Rather than making people feel safer in their own homes, Ring cameras can often have the reverse effect. By having a supposed crime-fighting tool alert a user every time a person approaches their home, the user can easily get the impression that their home is under siege. This paranoia can turn public neighborhoods filled with innocent pedestrians and workers into de facto police states where Ring owners can report “suspicious” people to their neighbors via Ring’s Neighbors social media platform, or the police. In a recent investigation, VICE found that a vast majority of people labeled “suspicious” were people of color. Ring, with its motion detection alerts, gives residents a digitally aided way of enforcing who does and does not belong in their neighborhood based on their own biases and prejudices.

Ring also has serious implications for First Amendment activities. Earlier this year, EFF reported that the LAPD requested footage from Ring cameras related to protests in Los Angeles following the police murder of George Floyd.

These emails further add to these concerns, as they point to a scheme in which public servants have used their positions for private gain and contributed to an environment of fear and suspicion in communities already deeply divided.

When confronted by police encouraging residents to mount security cameras, people should not have to decide whether their local police are operating out of a real concern over safety—or whether they are motivated by the prospect of receiving free equipment.

EFF has submitted a letter raising these concerns and calling on the California Attorney General to initiate a public integrity investigation into the relationship between Ring and the LAPD. The public has a right to know whether officers in their communities have received or are receiving benefits from Ring, and whether those benefits have influenced when and whether police have encouraged communities to buy and use Ring cameras. Although the incidents recorded in these emails occurred primarily in 2016, Ring’s police partnerships and influence have only spread in the years since. It’s time for the California Department of Justice to step in and use its authority to investigate if and when Ring wielded inappropriate influence over California’s police and sheriff’s departments.

Emails between the LAPD and Ring:
https://www.documentcloud.org/documents/20485679-19-4563-emails-binder-b-final/#document/p1097

EFF’s Letter to the California Department of Justice on the relationship between the LAPD and Ring:
https://www.eff.org/document/eff-letter-ca-ag-lapdring


VPNs and Trust

TorrentFreak surveyed nineteen VPN providers, asking them questions about their privacy practices: what data they keep, how they respond to court orders, what country they are incorporated in, and so on.

Most interesting to me are the home countries of these companies. Express VPN is incorporated in the British Virgin Islands. NordVPN is incorporated in Panama. There are VPNs from the Seychelles, Malaysia, and Bulgaria. There are VPNs from more traditional countries like the U.S., Switzerland, Canada, and Sweden. Presumably all of those companies follow the laws of their home country.

And it matters. I’ve been thinking about this since Trojan Shield was made public. This is the joint US/Australia-run encrypted messaging service that lured criminals to use it, and then spied on everything they did. Or, at least, Australian law enforcement spied on everyone. The FBI wasn’t able to because the US has better privacy laws.

We don’t talk about it a lot, but VPNs are entirely based on trust. As a consumer, you have no idea which company will best protect your privacy. You don’t know the data protection laws of the Seychelles or Panama. You don’t know which countries can put extra-legal pressure on companies operating within their jurisdiction. You don’t know who actually owns and runs the VPNs. You don’t even know which foreign companies the NSA has targeted for mass surveillance. All you can do is make your best guess, and hope you guessed well.


TikTok Can Now Collect Biometric Data

This is probably worth paying attention to:

A change to TikTok’s U.S. privacy policy on Wednesday introduced a new section that says the social video app “may collect biometric identifiers and biometric information” from its users’ content. This includes things like “faceprints and voiceprints,” the policy explained. Reached for comment, TikTok could not confirm what product developments necessitated the addition of biometric data to its list of disclosures about the information it automatically collects from users, but said it would ask for consent in the case such data collection practices began.


The GDPR, Privacy and Monopoly

In Privacy Without Monopoly: Data Protection and Interoperability, we took a thorough look at the privacy implications of various kinds of interoperability. We examined the potential privacy risks of interoperability mandates, such as those contemplated by 2020’s ACCESS Act (USA), the Digital Services Act and Digital Markets Act (EU), and the recommendations presented in the Competition and Markets Authority report on online markets and digital advertising (UK). 

We also looked at the privacy implications of “competitive compatibility” (comcom, AKA adversarial interoperability), where new services are able to interoperate with existing incumbents without their permission, by using reverse-engineering, bots, scraping, and other improvised techniques common to unsanctioned innovation.

Our analysis concluded that while interoperability creates new privacy risks (for example, that a new firm might misappropriate user data under cover of helping users move from a dominant service to a new rival), these risks can largely be mitigated with thoughtful regulation and strong enforcement. More importantly, interoperability also has new privacy benefits, both because it makes it easier to leave a service with unsuitable privacy policies, and because this creates real costs for dominant firms that do not respect their users’ privacy: namely, an easy way for those users to make their displeasure known by leaving the service.

Critics of interoperability (including the dominant firms targeted by interoperability proposals) emphasize the fact that weakening a tech platform’s ability to control its users weakens its power to defend its users.

They’re not wrong, but they’re not complete either. It’s fine for companies to defend their users’ privacy—we should accept nothing less—but the standards for defending user privacy shouldn’t be set by corporate fiat in a remote boardroom; they should come from democratically accountable law and regulation.

The United States lags in this regard: Americans whose privacy is violated have to rely on patchy (and often absent) state privacy laws. The country needs—and deserves—a strong federal privacy law with a private right of action.

That’s something Europeans actually have. The General Data Protection Regulation (GDPR), a powerful, far-reaching, and comprehensive (if flawed and sometimes frustrating) privacy law came into effect in 2018.

The European Commission’s pending Digital Services Act (DSA) and Digital Markets Act (DMA) both contemplate some degree of interoperability, prompting two questions:

  1. Does the GDPR mean that the EU doesn’t need interoperability in order to protect Europeans’ privacy? And
  2. Does the GDPR mean that interoperability is impossible, because there is no way to satisfy data protection requirements while permitting third-party access to an online service?

We think the answers are “no” and “no,” respectively. Below, we explain why.

Does the GDPR mean that the EU doesn’t need interoperability in order to protect Europeans’ privacy?

Increased interoperability can help to address user lock-in and ultimately create opportunities for services to offer better data protection.

The European Data Protection Supervisor has weighed in on the relation between the GDPR and the Digital Markets Act (DMA), and they affirmed that interoperability can advance the GDPR’s goals.

Note that the GDPR doesn’t directly mandate interoperability, but rather “data portability,” the ability to take your data from one online service to another. In this regard, the GDPR represents the first two steps of a three-step process for full technological self-determination: 

  1. The right to access your data, and
  2. The right to take your data somewhere else.

The GDPR’s data portability framework is an important start! Lawmakers correctly identified the potential of data portability to help promote competition of platform services and to reduce the risk of user lock-in by reducing switching costs for users.

The law is clear on the duty of platforms to provide data in a structured, commonly used, and machine-readable format, and on users’ right to transmit that data without hindrance from one data controller to another. Where technically feasible, users also have the right to ask the data controller to transmit the data directly to another controller.

Recital 68 of the GDPR explains that data controllers should be encouraged to develop interoperable formats that enable data portability. The WP29, a former official European data protection advisory body, explained that this could be implemented by making application programme interfaces (APIs) available.
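To make the idea of a structured, commonly used, machine-readable export concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the GDPR and the WP29 guidance do not prescribe a schema, so the field names and layout below are illustrative only, and a real service would typically expose something like this through a documented API rather than a file download.

    import json

    # Hypothetical example of a portable, machine-readable export.
    # The schema is illustrative; no particular format is mandated by the GDPR.
    portable_export = {
        "subject": {"user_id": "u-12345", "display_name": "Example User"},
        "provided_data": {
            "posts": [
                {"id": "p-1", "created": "2021-05-01T10:00:00Z",
                 "text": "Hello from my old account"},
            ],
            "contacts": [
                {"name": "A. Friend", "email": "friend@example.org"},
            ],
        },
        "export_format": {"version": "1.0", "encoding": "utf-8"},
    }

    # JSON keeps the export both human-inspectable and trivially parseable,
    # so a receiving controller can import it without reverse-engineering
    # a proprietary format.
    with open("data_portability_export.json", "w", encoding="utf-8") as fh:
        json.dump(portable_export, fh, indent=2, ensure_ascii=False)

The same structure could just as well be served from an API endpoint, which is what the WP29 suggestion about APIs amounts to in practice.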

However, the GDPR’s data portability limits and interoperability shortcomings have become more obvious since it came into effect. These shortcomings are exacerbated by lax enforcement. Data portability rights are insufficient to get Europeans the technological self-determination the GDPR seeks to achieve.

The limits the GDPR places on which data you have the right to export, and when you can demand that export, have not served their purpose. They have left users with a right to data portability, but few options about where to port that data to.

Missing from the GDPR is step three:

      3. The right to interoperate with the service you just left.

The DMA proposal is a legislative way of filling in that missing third step, creating a “real time data portability” obligation, which is a step toward real interop, of the sort that will allow you to leave a service, but remain in contact with the users who stayed behind. An interop mandate breathes life into the moribund idea of data-portability.

Does the GDPR mean that interoperability is impossible, because there is no way to satisfy data protection requirements while permitting third-party access to an online service?

The GDPR is very far-reaching, and European officials are still coming to grips with its implications. It’s conceivable that the Commission could propose a regulation that cannot be reconciled with EU data protection rules. We learned that in 2019, when the EU Parliament adopted the Copyright Directive without striking down the controversial and ill-conceived Article 13 (now Article 17). Article 17’s proponents confidently asserted that it would result in mandatory copyright filters for all major online platforms, not realizing that those filters cannot be reconciled with the GDPR.

But we don’t think that’s what’s going on here. Interoperability—both the narrow interop contemplated in the DMA, and more ambitious forms of interop beyond the conservative approach the Commission is taking—is fully compatible with European data protection, both in terms of what Europeans legitimately expect and what the GDPR guarantees.

Indeed, the existence of the GDPR solves the thorniest problem involved in interop and privacy. By establishing the rules for how providers must treat different types of data and when and how consent must be obtained and from whom during the construction and operation of an interoperable service, the GDPR moves hard calls out of the corporate boardroom and into a democratic and accountable realm.

Facebook often asserts that its duty to other users means that it has to block you from bringing some of “your” data with you if you want to leave for a rival service. There is definitely some material on Facebook that is not yours, like private conversations between two or more other people. Even if you could figure out how to access those conversations, we want Facebook to take steps to block your access and prevent you from taking that data elsewhere.

But what about when Facebook asserts that its privacy duties mean it can’t let you bring the replies to your private messages, the comments on your public posts, or the entries in your address book with you to a rival service? These are less clear-cut than the case of other people’s private conversations, but blocking you from accessing this data also helps Facebook lock you onto its platform, which is also one of the most surveilled environments in the history of data-collection.

There’s something genuinely perverse about deferring these decisions to the reigning world champions of digital surveillance, especially because an unfavorable ruling about which data you can legitimately take with you when you leave Facebook might leave you stuck on Facebook, without a ready means to address any privacy concerns you have about Facebook’s policies.

This is where the GDPR comes in. Rather than asking whether Facebook thinks you have the right to take certain data with you or to continue accessing that data from a rival platform, the GDPR lets us ask the law which kinds of data connections are legitimate, and when consent from other implicated users is warranted. Regulation can make good, accountable decisions about whether a survey app deserves access to all of the “likes” by all of its users’ friends (Facebook decided it did, and the data ended up in the hands of Cambridge Analytica), or whether a user should be able to download a portable list of their friends to help switch to another service (which Facebook continues to prevent).

The point of an interoperability mandate—either the modest version in the DMA or a more robust version that allows full interop—is to allow alternatives to high-surveillance environments like Facebook to thrive by reducing switching costs. There’s a hard collective action problem of getting all your friends to leave Facebook at the same time as you. If people can leave Facebook but stay in touch with their Facebook friends, they don’t need to wait for everyone else in their social circle to feel the same way. They can leave today.

In a world where platforms—giants, startups, co-ops, nonprofits, tinkerers’ hobbies—all treat the GDPR as the baseline for data-processing, services can differentiate themselves by going beyond the GDPR, sparking a race to the top for user privacy.

Consent, Minimization and Security

We can divide all the data that can be passed from a dominant platform to a new, interoperable rival into several categories. There is data that should not be passed. For example, a private conversation between two or more parties who do not want to leave the service and who have no connection to the new service. There is data that should be passed after a simple request from the user. For example, your own photos that you uploaded, with your own annotations; your own private and public messages, etc. Then there is data generated by others about you, such as ratings. Finally, there is someone else’s personal information contained in a reply to a message you posted.

The last category is tricky, and it turns on the GDPR’s very fulcrum: consent. The GDPR’s rules on data portability clarify that exporting data needs to respect the rights and freedoms of others. Thus, although there is no ban on porting data that does not belong to the requesting user, data from other users shouldn’t be passed on without their explicit consent (or another GDPR legal basis) and further safeguards.

That poses a unique challenge for allowing users to take their data with them to other platforms, when that data implicates other users—but it also promises a unique benefit to those other users.

If the data you take with you to another platform implicates other users, the GDPR requires that they consent to it. The GDPR’s rules for this are complex, but also flexible.

For example, say, in the future, that Facebook obtains consent from users to allow their friends to take the comments, annotations, and messages they send to those friends with them to new services. If you quit Facebook and take your data (including your friends’ contributions to it) to a new service, the service doesn’t have to bother all your friends to get their consent again—under the WP29 Guidelines, so long as the new service uses the data in a way that is consistent with the uses Facebook obtained consent for in the first place, that consent carries over.

But even though the new service doesn’t have to obtain consent from your friends, it does have to notify them within 30 days – so your friends will always know where their data ended up.

And the new platform has all the same GDPR obligations that Facebook has: they must only process data when they have a “lawful basis” to do so; they must practice data minimization; they must maintain the confidentiality and security of the data; and they must be accountable for its use.

None of that prevents a new service from asking your friends for consent when you bring their data along with you from Facebook. A new service might decide to do this just to be sure that they are satisfying the “lawfulness” obligations under the GDPR.

One way to obtain that consent is to incorporate it into Facebook’s own consent “onboarding”—the consent Facebook obtains when each user creates their account. To comply with the GDPR, Facebook already has to obtain consent for a broad range of data-processing activities. If Facebook were legally required to permit interoperability, it could amend its onboarding process to include consent for the additional uses involved in interop.

Of course, the GDPR does not permit far-reaching, speculative consent. There will be cases where no amount of onboarding consent can satisfy either the GDPR or the legitimate privacy expectations of users. In these cases, Facebook can serve as a “consent conduit,” through which users’ consent to allow their departing friends to take blended data with them to a rival platform can be sought, obtained, or declined.

Such a system would mean that some people who leave Facebook would have to abandon some of the data they’d hope to take with them—their friends’ contact details, say, or the replies to a thread they started—and it would also mean that users who stayed behind would face a certain amount of administrative burden when their friends tried to leave the service. Facebook might dislike this on the grounds that it “degraded the user experience,” but on the other hand, a flurry of notices from friends and family who are leaving Facebook behind might spur the users who stayed to reconsider that decision and leave as well.

For users pondering whether to allow their friends to take their blended data with them onto a new platform, the GDPR presents a vital assurance: because the GDPR does not permit companies to seek speculative, blanket consent for future activities or new purposes that you haven’t already consented to, and because the companies your friends take your data to have no way of contacting you, they generally cannot lawfully make any further use of that data (except under one of the other narrow bases permitted by the GDPR, for example, to fulfil a “legitimate interest”). Your friends can still access it, but neither they, nor the services they’ve fled to, can process your data beyond the scope of the initial consent to move it to the new context. Once the data and you are separated, there is no way for third parties to obtain the consent they’d need to lawfully repurpose it for new products or services.

Beyond consent, the GDPR binds online services to two other vital obligations: “data minimization” and “data security.” These two requirements act as a further backstop to users whose data travels with their friends to a new platform.

Data minimization means that any user data that lands on a new platform has to be strictly necessary for its users’ purposes (whether or not there might be some commercial reason to retain it). That means that if a Facebook rival imports your comments to its new user’s posts, any irrelevant data that Facebook transmits along with that data (say, your location when you left the comment, or which link brought you to the post), must be discarded. This provides a second layer of protection for users whose friends migrate to new services: not only is their consent required before their blended data travels to the new service, but that service must not retain or process any extraneous information that seeps in along the way.
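To illustrate what minimization can look like at the point of import, here is a minimal Python sketch. The incoming record and the set of required fields are hypothetical; the point is simply that anything beyond what the stated purpose needs is dropped before storage rather than quietly retained.

    # Fields the new service actually needs in order to display a migrated comment.
    REQUIRED_FIELDS = {"comment_id", "author", "text", "created"}

    def minimize(record: dict) -> dict:
        """Keep only the fields strictly necessary for the import purpose."""
        return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

    incoming = {
        "comment_id": "c-42",
        "author": "friend-123",
        "text": "Congrats on the new job!",
        "created": "2020-11-02T18:30:00Z",
        # Extraneous data that must not be retained under data minimization:
        "geo": {"lat": 34.05, "lon": -118.24},
        "referrer_url": "https://example.com/some-link",
        "device_fingerprint": "abc123",
    }

    stored = minimize(incoming)
    print(stored)  # only comment_id, author, text, and created survive the import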

The GDPR’s security guarantee, meanwhile, guards against improper handling of the data you consent to let your friends take with them to new services. That means the data has to be encrypted in transit, and likewise at rest on the rival service’s servers. Even if the new service is a startup, it has a regulated, affirmative duty to practice good security across the board, with real liability if it commits a material omission that leads to a breach.
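As a concrete, hedged sketch of the at-rest half of that obligation, the Python snippet below uses the third-party cryptography package (pip install cryptography). The data and variable names are made up, and a real deployment would also need key management, TLS for data in transit, and access controls, none of which are shown here.

    from cryptography.fernet import Fernet

    # In practice the key would live in a secrets manager or KMS, not in code.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    migrated_comment = b'{"author": "friend-123", "text": "Congrats!"}'

    # This ciphertext is what should be written to the rival service's storage.
    encrypted_blob = cipher.encrypt(migrated_comment)

    # Later, when the data is legitimately needed, it can be decrypted.
    assert cipher.decrypt(encrypted_blob) == migrated_comment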

Without interoperability, the monopolistic high-surveillance platforms are likely to enjoy long term, sturdy dominance. The collective action problem represented by getting all the people on Facebook whose company you enjoy to leave at the same time you do means that anyone who leaves Facebook incurs a high switching cost.

Interoperability allows users to depart Facebook for rival platforms, including those that both honor the GDPR and go beyond its requirements. These smaller firms will have less political and economic influence than the monopolists whose dominance they erode, and when they do go wrong, their errors will be less consequential because they impact fewer users.

Without interoperability, privacy’s best hope is to gentle Facebook, rendering it biddable and forcing it to abandon its deeply held beliefs in enrichment through nonconsensual surveillance —and to do all of this without the threat of an effective competitor that Facebook users can flee to no matter how badly it treats them.

Interoperability without privacy safeguards is a potential disaster, provoking a competition to see who can extract the most data from users while offering the least benefit in return. Every legislative and regulatory interoperability proposal in the US, the UK, and the EU contains some kind of privacy consideration, but the EU alone has a region-wide, strong privacy regulation that creates a consistent standard for data-protection no matter what measure is being contemplated. Having both components – an interoperability requirement and a comprehensive privacy regulation – is the best way to ensure interoperability leads to competition in desirable activities, not privacy invasions.


Big Data Profits If We Deregulate HIPAA

This blog post was written by Kenny Gutierrez, EFF Bridge Fellow.

Recently proposed modifications to the federal Health Insurance Portability and Accountability Act (HIPAA) would erode the privacy of your most personal and intimate health data. The Office for Civil Rights (OCR), which is part of the U.S. Department of Health and Human Services (HHS), proposes loosening our health privacy protections to address misunderstandings by health professionals about currently permissible disclosures.

EFF recently filed objections to the proposed modifications. The most troubling change would expand the sharing of your health data without your permission, by enlarging the definition of “health care operations” to include “case management” and “care coordination,” especially worrisome because these broad terms are not defined. Additionally, the modifications seek to lower the standard for disclosure in emergencies. They also would require covered entities to disclose personal health information (PHI) to mobile health applications not covered by HIPAA upon patient request. Individually, the changes are troublesome enough. When combined, the impact on the release of PHI, with and without consent, is a threat to patient health and privacy.

Trust in Healthcare is Crucial

The proposed modifications would undermine the trust patients need in order to disclose their sensitive and intimate medical information to health professionals. If patients no longer feel their doctors will protect their PHI, they will not disclose it, or will not seek treatment at all. For example, since there is pervasive prejudice and stigma surrounding addiction, an opiate-dependent patient will probably be less likely to seek treatment, or to fully disclose the severity of their condition, if they fear their diagnosis could be shared without their consent. Consequently, the HHS proposal will hinder care coordination and case management. That would increase the cost of healthcare, because of decreased preventative care in the short term and increased treatment in the long term, which is significantly more expensive. Untreated mental illness costs the nation more than $100 billion annually. Currently, only 2.5 million of the 21.2 million people suffering from mental illness seek treatment.

The current HIPAA privacy rule is flexible enough, counter to the misguided assertions of some health care professionals. It protects patient privacy while allowing disclosure, without patient consent, in critical instances such as for treatment, in an emergency, and when a patient is a threat to themselves or public safety.

So, why does HHS seek to modify an already flexible rule? Two congressional hearings, in 2013 and 2015, revealed that there is significant misunderstanding of HIPAA and its permissive disclosures amongst medical professionals. As a result, HIPAA is misperceived as rigidly anti-disclosure and mistakenly framed as a “regulatory barrier” or “burden.” Many of the proposed modifications double down on this misunderstanding with privacy deregulation, rather than directly addressing some professionals’ confusion with improved training, education, and guidance.

The HHS Proposals Would Reduce Our Health Privacy

Modifications to HIPAA will cause more problems than solutions. Here is a brief overview of the most troubling modifications:

  1. The proposed rule would massively expand a covered entity’s (CE) use and disclosure of personal health information (PHI) without patient consent. Specifically, it allows unconsented use and disclosure for “care coordination” and “case management,” without adequately defining these vague and overbroad terms. This expanded exception would swallow the consent requirement for many uses and disclosure decisions. Consequently, Big Data (such as corporate data brokers) would obtain and sell this PHI. That could lead to discrimination in insurance policies, housing, employment, and other critical areas because of pre-existing medical conditions, such as substance abuse, mental health illness, or severe disabilities that carry a stigma.
  2. HHS seeks to lower the standard of unconsented disclosure from “professional judgment” to “good faith belief.” This would undermine patient trust. Currently, a covered entity may disclose some PHI based on their “professional judgment” that it is in the individual’s best interest. The modification would lower this standard to a “good faith belief,” and apparently shift the burden to the injured individual to prove their doctor’s lack of good faith. Professional judgment is properly narrower: it is objective and grounded in expert standards. “Good faith” is both broader and subjective.
  3. Currently, to disclose PHI in an emergency, the standard for disclosure is “imminent” harm, which invokes a level of certainty that harm is surely impending. HHS proposes instead just “reasonably foreseeable” harm, which is too broad and permissive. This could lead to a doctor disclosing your PHI because you have a sugar-filled diet, you’re a smoker, or you have unprotected sex. Harm in such cases would not be “imminent,” but it could be “reasonably foreseeable.”

Weaker HIPAA Rules for Phone Health Apps Would Hand Our Data to Brokers

The proposed modifications will likely result in more intimate, sensitive, and highly valuable information being sent to entities not covered by HIPAA, including data brokers.

Most Americans have personal health applications on their phones for health goals, such as weight management, stress management, and smoking cessation. However, these apps are not covered by HIPAA privacy protections.

A 2014 Federal Trade Commission study revealed that 12 personal health apps and devices transmitted information to 76 different third parties, and some of the data could be linked back to specific users. In addition, 18 third parties received device-specific identifiers, and 22 received other key health information.

If the proposed HIPAA modifications are adopted, a covered provider would be required to share a patient’s PHI with their health app’s developer upon the patient’s request. This places too much burden on patients. They are often ill-equipped to understand privacy policies, terms of use, and permissions. They may also not realize all of the consequences of such sharing of personal health information. In many ways, the deck is stacked against them. App and device policies, practices, and permissions are often confusing and unclear.

Worse, depending on where the PHI is stored, other apps may grant themselves access to your PHI through their own separate permissions. Such permissions have serious consequences because many apps can access data on one’s device that is unrelated to what the app is supposed to do. In a study of 99 apps, researchers found that free apps included more unnecessary permissions than paid apps.

Next Steps

During the pandemic, we have learned once again the importance of trust in the health care system. Ignoring CDC guidelines, many people have not worn masks or practiced social distancing, which has fueled the spread of the virus. These are symptoms of public distrust of health care professionals. Trust is critical in prevention, diagnosis, and treatment.

The proposed HHS changes to HIPAA’s health privacy rules would undoubtedly lead to increased disclosures of PHI without patient consent, undermining the necessary trust the health care system requires. That’s why EFF opposes these changes and will keep fighting for your health privacy.


A Year of Action in Support of the Black-Led Movement Against Police Violence and Racism

“Black lives matter on the streets. Black lives matter on the internet.” A year ago, EFF’s Executive Director, Cindy Cohn, shared these words in EFF’s statement about the police killings of Breonna Taylor and George Floyd. Cindy spoke for all of us in committing EFF to redouble its efforts to support the movement for Black lives. She promised we would continue providing guides and resources for protesters and journalists on the front lines; support our allies as they navigate the complexities of technology and the law; and resist surveillance and other high-tech abuses while protecting the rights to organize, assemble, and speak securely and freely.

Like many of you, the anniversary of George Floyd’s murder has inspired us to reflect on these commitments and the work of so many courageous people who stood up to demand justice. Our world has been irrevocably changed. While there is still an immeasurably long way to go toward becoming a truly just society, EFF is inspired by this leaderful movement and humbled as we reflect on the ways in which we have been able to support its critical work.

Surveillance Self-Defense for Protesters and Journalists 

EFF believes that people engaged in the Black-led movement against police violence deserve to hold those in power accountable and inspire others through the act of protest, without fear of police surveillance of our faces, bodies, electronic devices, and other digital assets. So, as protests began to spread throughout the nation, we worked quickly to publish a guide to cell phone surveillance at protests, including steps protesters can take to protect themselves.

We also worked with the National Lawyers Guild (NLG) to develop a guide to observing visible, and invisible, surveillance at protests—in video and blog form. The published guide and accompanying training materials were made available to participants in the NLG’s Legal Observer program. The 25-minute videos—available in English and Spanish—explain how protesters and legal observers can identify various police surveillance technologies, like body-worn cameras, drones, and automated license plate readers. Knowing what technologies the police use at a protest can help defense attorneys understand what types of evidence the police agencies may hold, find exculpatory evidence, and potentially provide avenues for discovery in litigation to enforce police accountability.

We also significantly updated our Surveillance Self-Defense guide to attending protests. We elaborated on our guidance on documenting protests, in order to minimize the risk of exposing other protesters to harmful action by law enforcement or vigilantes; gave practical tips for maintaining anonymity and physical safety in transit to and at protests; and recommended options for anonymizing images and scrubbing metadata. Documenting police brutality during protest is necessary. Our aim is to provide options to mitigate risk when fighting for a better world.
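For readers who want a concrete starting point for scrubbing metadata, here is a minimal sketch in Python using the Pillow imaging library (pip install Pillow). The file names are placeholders, and this is only one of several possible approaches: it re-saves just the pixel data of a photo, which drops embedded metadata such as EXIF GPS coordinates along the way.

    from PIL import Image

    def strip_metadata(src_path: str, dst_path: str) -> None:
        """Re-save only the pixel data, leaving EXIF and other metadata behind."""
        with Image.open(src_path) as img:
            pixels = list(img.getdata())           # copy the raw pixel values
            clean = Image.new(img.mode, img.size)  # a fresh image with no metadata
            clean.putdata(pixels)
            clean.save(dst_path)

    strip_metadata("protest_photo.jpg", "protest_photo_clean.jpg")

Note that scrubbing metadata does not remove identifying details visible in the frame itself, such as faces or street signs, which is why anonymizing the image itself also matters.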

Protecting the Right to Record the Police

Using our phones to record on-duty police action is a powerful way to expose and end police brutality and racism. In the words of Darnella Frazier: “My video didn’t save George Floyd, but it put his murderer away and off the streets.” Many have followed in her courageous footsteps. For example, Caron Nazario used his phone to film excessive police force against him during a traffic stop. Likewise, countless protesters against police abuse have used their phones to document police abuse against other protesters. As demonstrations heated up last spring, EFF published advice on how to safely and legally record police.

EFF also has filed many amicus briefs in support of your right to record on-duty police. Earlier this year, one of these cases expanded First Amendment protection of this vital tool for social change. Unfortunately, another court dodged the issue by hiding behind “qualified immunity,” which is one reason EFF calls on Congress to repeal this dangerous doctrine. Fortunately, six federal appellate courts have squarely vindicated your right to film police. We’ll keep fighting until every court does so.

Revealing Police Surveillance of Protesters

As we learned after Occupy Wall Street, the #NoDAPL movement, and the 2014-2015 Movement for Black Lives uprisings, sometimes it takes years to learn about all the police surveillance measures used against protest movements. EFF has helped expose the local, state, federal, and private surveillance that the government unleashed on activists, organizers, and protestors during last summer’s Black-led protests against police violence.

In July 2020, public records requests we sent to the semi-public Union Square Business Improvement District (USBID) in San Francisco revealed that the USBID collaborated with the San Francisco Police Department (SFPD) to spy on protesters. Specifically, they gave the SFPD a large “data dump” of footage (USBID’s phrase). They also granted police live access to their cameras for a week in order to surveil protests.

In February 2021, public records we obtained from the Los Angeles Police Department (LAPD) revealed that LAPD detectives had requested footage of protests from residents’ Ring surveillance doorbell cameras. The requests, from detective squads allegedly investigating illegal activity in proximity to the First Amendment-protected protest, sought an undisclosed number of hours of footage. The LAPD’s use of Ring doorbell cameras for political surveillance, and the SFPD’s use of USBID cameras for the same purpose, demonstrate how police are increasingly reliant on non-city and privately-owned, highly-networked security cameras, thus blurring the lines between private and public surveillance.

Enforcing Legal Limits on Police Spying

In October 2020, EFF and the ACLU of Northern California filed a lawsuit against the City of San Francisco regarding its illegal video surveillance of protestors against police violence and racism, revealed through our public records requests discussed above. SFPD’s real-time monitoring of dissidents violated the City’s Surveillance Technology Ordinance, enacted in 2019, which bars city agencies like the SFPD from acquiring, borrowing, or using surveillance technology, unless they first obtain approval from the Board of Supervisors following a public process with ample opportunity for community members to make their voices heard.

The lawsuit was filed on behalf of three activists of color who participated in and organized protests against police violence in May and June of 2020. They seek a court order requiring San Francisco and its police to stop using surveillance technologies in violation of the Ordinance.

Helping Communities Say “No” to Surveillance Technology  

Around the country, EFF is working with local activists to ban government use of face recognition technology, a particularly pernicious form of biometric surveillance. Since 2019, when San Francisco became the first city to adopt such a ban, more than a dozen communities across the country have followed San Francisco’s lead. In each city, residents stood up to say “no,” and their elected representatives answered that call. In the weeks and months following the nationwide protests against police violence, we continued to work closely with our fellow Electronic Frontier Alliance members, local ACLU chapters, and other dedicated organizers to support new bans on government face surveillance across the United States, including in Boston, MA, Portland, OR, Minneapolis, MN, and King County, WA.

Last year’s protests for police accountability made a big difference in New York City, where we actively supported the work of local advocates for three years to pass a surveillance transparency ordinance. The city’s long-overdue POST Act was passed as part of a three-bill package that many had considered longshots before the protests. However, amid calls to defund the police, many of the bill’s detractors, including New York City Mayor Bill de Blasio, came to see the measure as appropriate and balanced.

EFF also aided our allies in St. Louis and Baltimore, who put the brakes on a panopticon-like aerial surveillance system, developed by a vendor ominously named Persistent Surveillance Systems. The spy plane program first invaded the privacy of Baltimore residents in the wake of the in-custody killing of Freddie Gray by police. EFF submitted a friend-of-the-court brief in a federal civil rights lawsuit, filed by the ACLU, challenging Baltimore’s aerial surveillance program. We were joined by the Brennan Center for Justice, the Electronic Privacy Information Center, FreedomWorks, the National Association of Criminal Defense Lawyers, and the Rutherford Institute. In St. Louis, EFF and local advocates—including the ACLU of Missouri and Electronic Frontier Alliance member Privacy Watch STL—worked to educate lawmakers and their constituents about the dangers and unconstitutionality of a bill that would have forced the City to enter into a contract to replicate the Baltimore spying program over St. Louis.

Ending the Sale of Face Recognition Technology to Law Enforcement

Protesters compelled companies around the country to reconcile their relationship to a deadly system of policing with their press releases in support of Black lives. Some companies heeded the calls from activists to stop their sale of face recognition technology to police departments. In June 2020, IBM, Microsoft, and Amazon paused these sales. Amazon said its pause would continue until such time as the government could “place stronger regulations to govern the ethical use of facial recognition.”

This was, in many ways, an admission of guilt: companies recognized how harmful face recognition is in the hands of police departments. One year later, the regulatory landscape at the federal level has hardly moved. Following increased pressure by a coalition of civil rights and racial justice organizations, Amazon recently announced it was indefinitely extending its moratorium on selling Rekognition, its face recognition product, to police.

These are significant victories for activists, but the fight is not over. With companies like Clearview AI continuing to sell their face surveillance products to police, we still need a federal ban on government use of face recognition.

The Fight Is Far From Over

Throughout the last year of historic protests for Black lives, it has been more apparent than ever that longstanding EFF concerns, such as law enforcement surveillance and freedom of expression, are part of our nation’s long-needed reckoning with racial injustice.

EFF will continue to stand with our neighbors, communities mourning the victims of police homicide, and the Black-led movement against police violence. We stand with the protesters demanding true and lasting justice. We stand with the journalists facing arrest and other forms of violence for exposing these atrocities. And we will stand with all those using their cameras, phones, and other digital tools to lift up the voices of the survivors, those we’ve lost, and all who demand a truly safe and just future. 


Civil Society Groups Seek More Time to Review, Comment on Rushed Global Treaty for Intrusive Cross Border Police Powers

Electronic Frontier Foundation (EFF), European Digital Rights (EDRi), and 40 other civil society organizations urged the Council of Europe’s Parliamentary Assembly and Committee of Ministers to allow more time for them to provide much-needed analysis and feedback on the flawed cross-border police surveillance treaty its cybercrime committee rushed to approve without adequate privacy safeguards.

Digital and human rights groups were largely sidelined and excluded during the drafting process of the Second Additional Protocol to the Budapest Convention, an international treaty that will establish global procedures for law enforcement in one country to access personal user data from technology companies in other countries. The CoE Cybercrime Committee (T-CY)—which oversees the Budapest Convention—adopted internal rules in 2017 that fostered a narrower range of participants in the drafting of this new Protocol.

The process has been largely opaque, led by public safety and law enforcement officials. And T-CY’s periodic consultations with civil society and the public have been criticized for their lack of detail, their short response timelines, and the lack of insight they offer into countries’ deliberations on these issues. The T-CY rushed approval of the text on May 28th, signing off on provisions that place few limits on, and provide little oversight of, police access to sensitive user data held by Internet companies around the world.

The Protocol now heads to the Council of Europe Parliamentary Assembly (PACE) Committee on Legal Affairs and Human Rights, which can recommend further amendments. We hope the PACE will hear civil society’s privacy concerns and issue an opinion addressing the lack of adequate data protection safeguards. 

In a letter dated March 31st to PACE President Rik Daems and Chair of the Committee of Ministers Péter Szijjártó, digital and human rights groups said the treaty will likely be used extensively, with far-reaching implications for the security and privacy of people everywhere. It is imperative that the fundamental rights guaranteed in the European Convention on Human Rights and in other agreements are not sidestepped, even if respecting them takes time, in favor of easier law enforcement access to user data free of judicial oversight and strong privacy protections. The CoE’s plan is to finalize the Protocol’s adoption by November and begin accepting signatures from countries sometime before 2022.

“We know that the Council of Europe has set high standards for its consultative process and has a strong commitment to stakeholder engagement,” EFF and its allies said in the letter. “The importance of meaningful outreach is all the more important given the global reach of the draft Protocol, and the anticipated inclusion of many signatory parties who are not bound by the Council’s central human rights and data protection instruments.”

In 2018, EFF, along with 93 civil society organizations from across the globe, asked the T-CY to invite civil society as experts to the drafting plenary meetings, as is customary in other Council of Europe committee sessions. The goal was to listen to Member States’ opinions and build upon the richness of the discussion among States and experts, a discussion that civil society missed because we were not invited to observe the drafting process. While EFF has participated in every public consultation of the T-CY process since our 2018 coalition letter, that level of participation has fallen short of meaningful multi-stakeholder principles of transparency, inclusion, and accountability. As Tamir Israel (CIPPIC) and Katitza Rodriguez (EFF) explained:

With limited incorporation of civil society input, it is perhaps no surprise that the final Protocol places law enforcement concerns first while human rights protections and privacy safeguards remain largely an afterthought. Instead of attempting to elevate global privacy protections, the Protocol’s central safeguards are left largely optional in an attempt to accommodate countries that lack adequate protections. As a result, the Protocol encourages global standards to harmonize at the lowest common denominator, weakening everyone’s right to privacy and free expression.

The full text of the letter:

Re: Ensuring Meaningful Consultation in Cybercrime Negotiations

We, the undersigned individuals and organizations, write to ask for a meaningful opportunity to give the final draft text of the proposed second additional protocol to Convention 185, the Budapest Cybercrime Convention, the full and detailed consideration which it deserves. We specifically ask that you provide external stakeholders further opportunity to comment on the significant changes introduced to the text on the eve of the final consultation round ending on 6th May, 2021.

The Second Additional Protocol aims to standardise cross-border access by law enforcement authorities to electronic personal data. While competing initiatives are also underway at the United Nations and the OECD, the draft Protocol has the potential to become the global standard for such cross-border access, not least because of the large number of states which have already ratified the principal Convention. In these circumstances, it is imperative that the Protocol should lay down adequate standards for the protection of fundamental rights.

Furthermore, the initiative comes at a time when even routine criminal investigations increasingly include cross-border investigative elements and, in consequence, the protocol is likely to be used widely. The protocol therefore assumes great significance in setting international standards, and is likely to be used extensively, with far-reaching implications for privacy and human rights around the world. It is important that its terms are carefully considered and ensure a proportionate balance between the objective of securing or recovering data for the purposes of law enforcement and the protection of fundamental rights guaranteed in the European Convention on Human Rights and in other relevant national and international instruments.

In light of the importance of this initiative, many of us have been following this process closely and have participated actively, including at the Octopus Conference in Strasbourg in November, 2019 and the most recent and final consultation round which ended on 6th May, 2021.

Although many of us were able to engage meaningfully with the text as it stood in past consultation rounds, it is significant that these earlier iterations of the text were incomplete and lacked provisions to protect the privacy of personal data. In the event, the complete text of the draft Protocol was not publicly available before 12th April, 2021. The complete draft text introduces a number of significant alterations, most notably the inclusion of Article 14, which added for the first time proposed minimum standards for privacy and data protection. While external stakeholders were previously notified that these provisions were under active consideration and would be published in due course, the publication of the revised draft on 12th April offered the first opportunity to examine these provisions and consider other elements of the Protocol in the full light of these promised protections.

We were particularly pleased to see the addition of Article 14, and welcome its important underlying intent—to balance law enforcement objectives with fundamental rights. However, the manner in which this is done is, of necessity, complex and intricate, and, even on a cursory preliminary examination, it is apparent that there are elements of the article which require careful and thoughtful scrutiny, and which, in the light of such scrutiny, might be capable of improvement.[1]

As a number of stakeholders have noted,[2] the latest (and final) consultation window was too short. It is essential that adequate time is afforded to allow a meaningful analysis of this provision and that all interested parties be given a proper chance to comment. We believe that such continued engagement can serve only to improve the text.

The introduction of Article 14 is particularly detailed and transformative in its impact on the entirety of the draft Protocol. Keeping in mind the multiple national systems potentially impacted by the draft Protocol, providing meaningful feedback on this long anticipated set of safeguards within the comment window has proven extremely difficult for civil society groups, data protection authorities and a wide range of other concerned experts.

Complicating our analysis further are gaps in the Explanatory Report accompanying the draft Protocol. We acknowledge that the Explanatory Report might continue to evolve, even after the Protocol itself is finalised, but the absence of elaboration on a pivotal provision such as Article 14 poses challenges to our understanding of its implications and our resulting ability meaningfully to engage in this important treaty process.

We know that the Council of Europe has set high standards for its consultative process and has a strong commitment to stakeholder engagement. The importance of meaningful outreach is all the more important given the global reach of the draft Protocol, and the anticipated inclusion of many signatory parties who are not bound by the Council’s central human rights and data protection instruments. Misalignments between Article 14 and existing legal frameworks on data protection such as Convention 108/108+ similarly demand careful scrutiny so that their implications are fully understood.

In these circumstances, we anticipate that the Council will wish to accord the highest priority to ensuring that fundamental rights are adequately safeguarded and that the consultation process is sufficiently robust to instill public confidence in the Protocol across the myriad jurisdictions which are to consider its adoption. The Council will, of course, appreciate that these objectives cannot be achieved without meaningful stakeholder input.

We are anxious to assist the Council in this process. In that regard, constructive stakeholder engagement requires a proper opportunity fully to assess the draft protocol in its entirety, including the many and extensive changes introduced in April 2021. We anticipate that the Council will share this concern, and to that end we respectfully suggest that the proposed text (inclusive of a completed explanatory report) be widely disseminated and that a minimum period of 45 days be set aside for interested stakeholders to submit comments.

We do realise that the T-CY Committee had hoped for an imminent conclusion to the drafting process. That said, adding a few months to a treaty process that has already spanned several years of internal drafting is both necessary and proportionate, particularly when the benefits of doing so will include improved public accountability and legitimacy, a more effective framework for balancing law enforcement objectives with fundamental rights, and a finalised text that reflects the considered input of civil society.

We very much look forward to continuing our engagement with the Council both on this and on future matters.

With best regards,

  1. Electronic Frontier Foundation (international)
  2. European Digital Rights (European Union)
  3. The Council of Bars and Law Societies of Europe (CCBE) (European Union)
  4. Access Now (International)
  5. ARTICLE19 (Global)
  6. ARTICLE19 Brazil and South America
  7. Association for Progressive Communications (APC)
  8. Association of Technology, Education, Development, Research and Communication – TEDIC (Paraguay)
  9. Asociación Colombiana de Usuarios de Internet (Colombia)
  10. Asociación por los Derechos Civiles (ADC) (Argentina)
  11. British Columbia Civil Liberties Association (Canada)
  12. Chaos Computer Club e.V. (Germany)
  13. Content Development & Intellectual Property (CODE-IP) Trust (Kenya)
  14. Dataskydd.net (Sweden)
  15. Derechos Digitales (Latinoamérica)
  16. Digitale Gesellschaft (Germany)
  17. Digital Rights Ireland (Ireland)
  18. Danilo Doneda, Director of Cedis/IDP and member of the National Council for Data Protection and Privacy (Brazil)
  19. Electronic Frontier Finland (Finland)
  20. epicenter.works (Austria)
  21. Fundación Acceso (Centroamérica)
  22. Fundacion Karisma (Colombia)
  23. Fundación Huaira (Ecuador)
  24. Fundación InternetBolivia.org (Bolivia)
  25. Hiperderecho (Peru)
  26. Homo Digitalis (Greece)
  27. Human Rights Watch (international)
  28. Instituto Panameño de Derecho y Nuevas Tecnologías – IPANDETEC (Central America)
  29. Instituto Beta: Internet e Democracia – IBIDEM (Brazil)
  30. Institute for Technology and Society – ITS Rio (Brazil)
  31. International Civil Liberties Monitoring Group (ICLMG)
  32. Iuridicium Remedium z.s. (Czech Republic)
  33. IT-Pol Denmark (Denmark)
  34. Douwe Korff, Emeritus Professor of International Law, London Metropolitan University
  35. Laboratório de Políticas Públicas e Internet – LAPIN (Brazil)
  36. Laura Schertel Mendes, Professor, Brasilia University and Director of Cedis/IDP (Brazil)
  37. Open Net Korea (Korea)
  38. OpenMedia (Canada)
  39. Privacy International (international)
  40. R3D: Red en Defensa de los Derechos Digitales (México)
  41. Samuelson-Glushko Canadian Internet Policy & Public Interest Clinic – CIPPIC (Canada)
  42. Usuarios Digitales (Ecuador)
  43. org (Netherlands)
  44. Xnet (Spain)

[1] See, for example Access Now, comments on the draft 2nd Additional Protocol to the Budapest Convention on Cybercrime, available at: https://rm.coe.int/0900001680a25783; EDPB, contribution to the 6th round of consultations on the draft Second Additional Protocol to the Council of Europe Budapest Convention on Cybercrime, available at: https://edpb.europa.eu/system/files/2021-05/edpb_contribution052021_6throundconsultations_budapestconvention_en.pdf.

[2] Alessandra Pierucci, Correspondence to Ms. Chloé Berthélémy, dated 17 May 2021; Consultative Committee of the Convention for the Protection of Individuals with Regard to Automated Processing of Personal Data, Directorate General Human Rights and Rule of Law, Opinion on Draft Second Additional Protocol, May 7, 2021, https://rm.coe.int/opinion-of-the-committee-of-convention-108-on-the-draft-second-additio/1680a26489; EDPB, see footnote 1; Joint Civil Society letter, 2 May: available at https://edri.org/wp-content/uploads/2021/05/20210420_LetterCoECyberCrimeProtocol_6thRound.pdf.  

Share
Categories
Intelwars privacy Social Networks

VICTORY: You Can Now Make Your Venmo Friends List Private. Here’s How.

It took two and a half years and one national security incident, but Venmo did it, folks: users now have privacy settings to hide their friends lists.

EFF first pointed out the problem with Venmo friends lists in early 2019 with our “Fix It Already” campaign. While Venmo offered a setting to make your payments and transactions private, there was no option to hide your friends list. No matter how many settings you tinkered with, Venmo would show your full friends list to anyone else with a Venmo account. That meant an effectively public record of the people you exchange money with regularly, along with whoever the app might have automatically imported from your phone contact list or even your Facebook friends list. The only way to make a friends list “private” was to manually delete friends one at a time; turn off auto-syncing; and, when the app wouldn’t even let users do that, monitor for auto-populated friends and remove them one by one, too.

This public-no-matter-what friends list design was a privacy disaster waiting to happen, and it happened to the President of the United States. Using the app’s search tool and all those public friends lists, Buzzfeed News found President Biden’s account in less than 10 minutes, as well as those of members of the Biden family, senior staffers, and members of Congress.  This appears to have been the last straw for Venmo: after more than two years of effectively ignoring calls from EFF, Mozilla, and others, the company has finally started to roll out privacy settings for friends lists.

As we’ve noted before, this is the bare minimum. Providing more privacy settings options so users can opt-out of the publication of their friends list is a step in the right direction. But what Venmo—and any other payment app—must do next is make privacy the default for transactions and friends lists, not just an option buried in the settings.

In the meantime, follow these steps to lock down your Venmo account:

  1. Tap the three lines in the top right corner of your home screen and select Settings near the bottom. From the settings screen, select Privacy and then Friends List. (If the Friends List option does not appear, try updating your app, restarting it, or restarting your phone.)
       

  2. The settings will look like this by default.

  3. Change the privacy setting to Private. If you do not wish to appear in your friends’ own friends lists—after all, they may not set theirs to private—click the toggle off at the bottom. The final result should look like this.

  4. Back on the Privacy settings page, make sure your Default privacy settings look like this: set your default privacy option for all future payments to Private.

  5. Now select Past Transactions.

  6. Select Change All to Private.

  7. Confirm the change and click Change to Private.

  8. Now go all the way back to the main settings page, and select Friends & social.

  9. From here, you may see options to unlink your Venmo account from your Facebook account, Facebook friends list, and phone contact list. (Venmo may not give you all of these options if, for example, you originally signed up for Venmo with your Facebook account.) Click all the toggles off if possible.

Obviously your specific privacy preferences are up to you, but following the steps above should protect you from the most egregious snafus that the company has caused over the years with its public-by-default—or entirely missing—privacy settings. Although it shouldn’t take a national security risk to force a company to focus on privacy, we’re glad that Venmo has finally provided friends list privacy options.

Share
Categories
Commentary Intelwars privacy

Ring Changed How Police Request Door Camera Footage: What it Means and Doesn’t Mean

Amazon Ring has announced that it will change the way police can request footage from millions of doorbell cameras in communities across the country. Rather than the current system, in which police can send automatic bulk email requests to individual Ring users in an area of interest up to half a square mile, police will now publicly post their requests to Ring’s accompanying Neighbors app. Users of that app will see a “Request for Assistance” on their feed, unless they opt out of seeing such requests, and then Ring customers in the area of interest (still up to half a square mile) can respond by reviewing and providing their footage.

Because only a portion of Ring users also are Neighbors users, and some of them may opt out of receiving police requests, this new system may reduce the number of people who receive police requests, though we wonder whether Ring will now push more of its users to register for the app.

This new model also may increase transparency over how police officers use and abuse the Ring system, especially as to people of color, immigrants, and protesters. Previously, in order to learn about police requests to Ring users, investigative reporters and civil liberties groups had to file public records requests with police departments–which consumed significant time and often yielded little information from recalcitrant agencies. Through this labor-intensive process, EFF revealed that the Los Angeles Police Department targeted Black Lives Matter protests in May and June 2020 with bulk Ring requests for doorbell camera footage that likely included First Amendment protected activities. Now, users will be able to see every digital request a police department has made to residents for Ring footage by scrolling through a department’s public page on the app. 

But making it easier to monitor historical requests can only do so much. It certainly does not address the larger problem with Ring and Neighbors: the network is predicated on perpetuating irrational fear of neighborhood crime, often yielding disproportionate scrutiny against people of color, all for the purposes of selling more cameras. Ring does so through police partnerships, which now encompass 1 in every 10 police departments in the United States. At their core, these partnerships facilitate bulk requests from police officers to Ring customers for their camera footage, built on a growing Ring surveillance network of millions of public-facing cameras. EFF adamantly opposes these Ring-police partnerships and advocates for their dissolution.

Nor does new transparency about bulk officer-to-resident requests through Ring erase the long history of secrecy about these shady partnerships. For example, Amazon has provided free Ring cameras to police, and limited what police were allowed to say about Ring, even including the existence of the partnership. 

Notably, Amazon has moved Ring functionality to its Neighbors app. Neighbors is a problematic technology. Like its peers Nextdoor and Citizen, it encourages its users to report supposedly suspicious people–often resulting in racially biased posts that endanger innocent residents and passersby. 

Ring’s small reforms invite bigger questions: Why does a customer-focused technology company need to develop and maintain a feature for law enforcement in the first place? Why must Ring and other technology companies continue to offer police free features to facilitate surveillance and the transfer of information from users to the government?

Here’s some free advice for Ring: Want to make your product less harmful to vulnerable populations? Stop facilitating their surveillance and harassment at the hands of police. 

Share
Categories
Intelwars privacy Security

Security Tips for Online LGBTQ+ Dating

Dating is risky. Aside from the typical worries of possible rejection or lack of romantic chemistry, LGBTQIA people often have added safety considerations to keep in mind. Sometimes staying in the proverbial closet is a matter of personal security. Even if someone is open with their community about being LGBTQ+, they can be harmed by oppressive governments, bigoted law enforcement, and individuals with hateful beliefs. So here’s some advice for staying safe while online dating as an LGBTQIA+ person:

Step One: Threat Modeling

The first step is making a personal digital security plan. You should start with looking at your own security situation from a high level. This is often called threat modeling and risk assessment. Simply put, this is taking inventory of the things you want to protect and what adversaries or risks you might be facing. In the context of online dating, your protected assets might include details about your sexuality, gender identity, contacts of friends and family, HIV status, political affiliation, etc. 

Let’s say that you want to join a dating app, chat over the app, exchange pictures, meet someone safely, and avoid stalking and harassment. Threat modeling is how you assess what you want to protect and from whom. 

We touch in this post on a few considerations for people in countries where homosexuality is criminalized, which may include targeted harassment by law enforcement. But this guide is by no means comprehensive. Refer to materials by LGBTQ+ organizations in those countries for specific tips on your threat model.

Securely Setting Up Dating Profiles

When making a new dating account, make sure to use a unique email address to register. Often you will need to verify the registration process through that email account, so it’s likely you’ll need to provide a legitimate address. Consider creating an email address strictly for that dating app. Oftentimes there are ways to discover if an email address is associated with an account on a given platform, so using a unique one can prevent others from potentially knowing you’re on that app. Alternatively, you might use a disposable temporary email address service. But if you do so, keep in mind that you won’t be able to access it in the future, such as if you need to recover a locked account. 

The same logic applies to using phone numbers when registering for a new dating account. Consider using a temporary or disposable phone number. While this can be more difficult than using your regular phone number, there are plenty of free and paid virtual telephone services available that offer secondary phone numbers. For example, Google Voice is a service that offers a secondary phone number attached to your normal one, registered through a Google account. If your higher security priority is to abstain from giving data to a privacy-invasive company like Google, a “burner” pay-as-you-go phone service like Mint Mobile is worth checking out. 

When choosing profile photos, be mindful of images that might accidentally give away your location or identity. Even the smallest clues in an image can expose its location. Some people use pictures with relatively empty backgrounds, or taken in places they don’t go to regularly.

Make sure to check out the privacy and security sections in your profile settings menu. You can usually configure how others can find you, whether you’re visible to others, whether location services are on (that is, when an app is able to track your location through your phone), and more. Turn off anything that gives away your location or other information, and later you can selectively decide which features to reactivate, if any. More mobile phone privacy information can be found on this Surveillance Self Defense guide.

Communicating via Phone, Email, or In-App Messaging

Generally speaking, using an end-to-end encrypted messaging service is the best way to go for secure texting. For some options, like Signal or WhatsApp, you may be able to use a secondary phone number to keep your “real” phone number private.

For phone calls, you may want to use a virtual phone service that allows you to screen calls, use secondary phone numbers, block numbers, and more. These aren’t always free, but research can bring up “freemium” versions that give you free access to limited features.

Be wary of messaging features within apps that offer deletion options or disappearing messages, like Snapchat. Many images and messages sent through these apps are never truly deleted, and may still exist on the company’s servers. And even if you send someone a message that self-deletes or notifies you if they take a screenshot, that person can still take a picture of it with another device, bypassing any notifications. Also, Snapchat has a map feature that shows live public posts around the world as they go up. With diligence, someone could determine your location by tracing any public posts you make through this feature.

Sharing Photos

If the person you’re chatting with has earned a bit of your trust and you want to share pictures with them, consider not just what they can see about you in the image itself, as described above, but also what they can learn about you by examining data embedded in the file.

EXIF metadata lives inside an image file and describes the geolocation where it was taken, the device it was made with, the date, and more. Although some apps have gotten better at automatically withholding EXIF data from uploaded images, you still should manually remove it from any images you share with others, especially if you send them directly over phone messaging.
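If you would rather strip that metadata yourself before sharing, here is a minimal sketch using the Pillow imaging library in Python (our choice for illustration; any reputable EXIF-removal tool will do). It re-saves only the pixel data, which drops EXIF and other embedded metadata; the file names are placeholders.

```python
# Minimal sketch, assuming Pillow is installed (pip install Pillow).
# File names below are placeholders.
from PIL import Image

def strip_metadata(src: str, dst: str) -> None:
    """Re-save an image with pixel data only, dropping EXIF and other metadata."""
    with Image.open(src) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))  # copy pixels, not metadata
        clean.save(dst)

strip_metadata("beach_selfie.jpg", "beach_selfie_clean.jpg")
```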

One quick way is to send the image to yourself on Signal messenger, which automatically strips EXIF data. When you search for your own name in contacts, a feature will come up with “Note to Self” where you have a chat screen to send things to yourself:

Screenshot of Signal’s Note to Self feature

Before sharing your photo, you can verify the results by using a tool to view EXIF data on an image file, before and after removing EXIF data.
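One way to do that check is with a short script. The sketch below, again using Pillow, prints whatever EXIF tags a file still carries; run it on the original and on the cleaned copy, and the file name is a placeholder.

```python
# Minimal sketch, assuming Pillow is installed; "photo.jpg" is a placeholder.
from PIL import Image
from PIL.ExifTags import TAGS

with Image.open("photo.jpg") as img:
    exif = img.getexif()
    if not exif:
        print("No EXIF data found.")
    for tag_id, value in exif.items():
        # Translate numeric tag IDs (e.g. 271) into readable names (e.g. "Make").
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")
```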

For some people, it might be valuable to use a watermarking app to add your username or some kind of signature to images. This can verify who you are to others and prevent anyone from using your images to impersonate you. There are many free and mostly-free options in iPhone and Android app stores. Consider a lightweight version that allows you to easily place text on an image and lets you screenshot the result. Keep in mind that watermarking a picture is a quick way to identify yourself, which in itself is a trade-off.

watermark example overlaid on an image of the lgbtq+ pride flag
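If you want to experiment without installing a separate app, a basic text watermark can also be added with a few lines of Pillow. In the sketch below, the handle, position, and file names are all placeholders.

```python
# Minimal sketch, assuming Pillow is installed; handle and coordinates are placeholders.
from PIL import Image, ImageDraw

with Image.open("beach_selfie_clean.jpg") as img:
    marked = img.convert("RGB")
    draw = ImageDraw.Draw(marked)
    # Draw the handle in white near the bottom-left corner using Pillow's default font.
    draw.text((10, marked.height - 20), "@my_handle", fill=(255, 255, 255))
    marked.save("beach_selfie_marked.jpg")
```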

Sexting Safely

Much of what we’ve already gone over will step up your security when it comes to sexting, but here are some extra precautions:

Seek clearly communicated consent between you and romantic partners about how intimate pictures can be shared or saved. This is great non-technical security at work. If anyone else is in an image you want to share, make sure you have their consent as well. Also, be thoughtful as to whether or not to include your face in any images you share.

As we mentioned above, your location can be determined by public posts you make and Snapchat’s map application.

For video chatting with a partner, consider a service like Jitsi that allows temporary rooms, no registration, and is designed with privacy in mind. Many services are not built with privacy in mind, and require account registration, for example. 

Meeting Someone AFK

Say you’ve taken all the above precautions, someone online has gained your trust, and you want to meet them away-from-keyboard and in real life. Always meet first somewhere public and occupied with other people. Even better, meet in an area more likely to be accepting of LGBTQIA+ people. Tell a friend beforehand all the details about where you’re going, who you are meeting, and a time by which you promise to check back in with them to confirm you’re OK.

If you’re living in one of the 69 countries where homosexuality is criminalized, make sure to check in with local advocacy groups about your area. Knowing your rights as a citizen will help keep you safe if you’re stopped by law enforcement.

Privacy and Security is a Group Effort

Although the world is often hostile to non-normative expressions of love and identity, your personal security, online and off, is much better supported when you include the help of others that you trust. Keeping each other safe, accountable, and cared for gets easier when you have more people involved. A network is always stronger when every node on it is fortified against potential threats. 

Happy Pride Month—keep each other safe.

Share
Categories
Intelwars International Privacy Standards Legal Analysis Necessary and Proportionate privacy Surveillance and Human Rights

Global Law Enforcement Convention Weakens Privacy & Human Rights

The Council of Europe Cybercrime Committee’s (T-CY) recent decision to approve new international rules for law enforcement access to user data without strong privacy protections is a blow for global human rights in the digital age. The final version of the draft Second Additional Protocol to the Council of Europe’s (CoE) widely adopted Budapest Cybercrime Convention, approved by the T-CY drafting committee on May 28th, places few limits on law enforcement data collection. As such, the Protocol can endanger technology users, journalists, activists, and vulnerable populations in countries with flimsy privacy protections and weaken everyone’s right to privacy and free expression across the globe. 

The Protocol now heads to members of the CoE’s Parliamentary Assembly (PACE) for their opinion. PACE’s Committee on Legal Affairs and Human Rights can recommend further amendments, and decide which ones will be adopted by the Standing Committee or the Plenary. Then, the Committee of Ministers will vote on whether to integrate PACE’s recommendations into the final text. The CoE’s plan is to finalize the Protocol’s adoption by November. If adopted, the Protocol will be open for signature to any country that has signed the Budapest Convention sometime before 2022.

The next step comes at the signature stage, when countries can ask to reserve the right not to abide by certain provisions in the Protocol, especially Article 7 on direct cooperation between law enforcement and companies holding user data.

If countries sign the Protocol as it stands and in its entirety, it will reshape how state police access digital data from Internet companies based in other countries by prioritizing law enforcement demands, sidestepping judicial oversight, and lowering the bar for privacy safeguards. 

CoE’s Historical Commitment to Transparency Conspicuously Absent

While transparency and a strong commitment to engaging with external stakeholders have been hallmarks of CoE treaty development, the new Protocol’s drafting process lacked robust engagement with civil society. The T-CY adopted internal rules that have fostered a largely opaque process, led by public safety and law enforcement officials. T-CY’s periodic consultations with external stakeholders and the public have lacked important details, offered short response timelines, and failed to meaningfully address criticisms.

In 2018, nearly 100 public interest groups called on the CoE to allow for expert civil society input on the Protocol’s development. In 2019, the European Data Protection Board (EDPB) similarly called on T-CY to ensure “early and more proactive involvement of data protection authorities” in the drafting process, a call it felt the need to reiterate earlier this year. And when presenting the Protocol’s draft text for final public comment, T-CY provided only 2.5 weeks, a timeframe that the EDPB noted “does not allow for a timely and in-depth analysis” from stakeholders. That version of the Protocol also failed to include the explanatory text for the data protection safeguards, which was only published later, in the final version of May 28, without public consultation. Even other branches of the CoE, such as its data protection committee, have found it difficult to provide meaningful input under these conditions. 

Last week, over 40 civil society organizations called on CoE to provide an additional opportunity to comment on the final text of the Protocol. The Protocol aims to set a new global standard across countries with widely varying commitments to privacy and human rights. Meaningful input from external stakeholders including digital rights organizations and privacy regulators is essential. Unfortunately, CoE refused and will likely vote to open the Protocol for state signatures starting in November.

With limited incorporation of civil society input, it is perhaps no surprise that the final Protocol places law enforcement concerns first while human rights protections and privacy safeguards remain largely an afterthought. Instead of attempting to elevate global privacy protections, the Protocol’s central safeguards are left largely optional in an attempt to accommodate countries that lack adequate protections. As a result, the Protocol encourages global standards to harmonize at the lowest common denominator, weakening everyone’s right to privacy and free expression.

Eroding Global Protection for Online Anonymity 

The new Protocol provides few safeguards for online anonymity, posing a threat to the safety of activists, dissidents, journalists, and the free expression rights of everyday people who go online to comment on and criticize politicians and governments. When Internet companies turn subscriber information over to law enforcement, the real-world consequences can be dire. Anonymity also plays an important role in facilitating opinion and expression online and is necessary for activists and protestors around the world. Yet the new Protocol fails to acknowledge the important privacy interests it places in jeopardy and, by ensuring most of its safeguards are optional, permits police access to sensitive personal data without systematic judicial supervision. 

As a starting point, the new Protocol’s explanatory text claims that “subscriber information … does not allow precise conclusions concerning the private lives and daily habits of individuals concerned,” deeming it less intrusive than other categories of data.

This characterization is directly at odds with growing recognition that police frequently use subscriber data access to identify deeply private anonymous communications and activity. Indeed, the Court of Justice of the European Union (CJEU) recently held that letting states associate subscriber data with anonymous digital activity can constitute a ‘serious’ interference with privacy. The Protocol’s attempt to paint identification capabilities as non-intrusive even conflicts with CoE’s own European Court of Human Rights (ECtHR). By encoding the opposite conclusion in an international protocol, the new explanatory text can deter future courts from properly recognizing the importance of online anonymity. As the ECtHR has held, doing so would “deny the necessary protection to information which might reveal a good deal about the online activity of an individual, including sensitive details of his or her interests, beliefs and intimate lifestyle.”

Articles 7 and 8 of the Protocol in particular adopt intrusive police powers while requiring few safeguards. Under Article 7, states must clear all legal obstacles to “direct cooperation” between local companies and law enforcement. Any privacy laws that prevent Internet companies from voluntarily identifying customers to foreign police without a court order are incompatible with Article 7 and must be amended. “Direct cooperation” is intended to be the primary means of accessing subscriber data, but Article 8 provides a supplementary power to force disclosure from companies that refuse to cooperate. While Article 8 does not require judicial supervision of police, countries with strong privacy protections may continue relying on their own courts when forcing a local service provider to identify customers. Both Articles 7 and 8 also allow countries to screen and refuse any subscriber data demands that might threaten a state’s essential interests. But these screening mechanisms also remain optional, and refusals are to be “strictly limited,” with the need to protect private data invoked only in “exceptional cases.” 

By leaving most privacy and human rights protections to each state’s discretion, Articles 7 and 8 permit access to sensitive identification data under conditions that the ECtHR described as “offer[ing] virtually no protection from arbitrary interference … and no safeguards against abuse by State officials.”

The Protocol’s drafters have resisted calls from civil society and privacy regulators to require some form of judicial supervision in Articles 7 and 8. Some police agencies object to reliance on the courts, arguing that judicial supervision leads to slower results. But systemic involvement of the courts is a critical safeguard when access to sensitive personal data is at stake. The Office of the Privacy Commissioner of Canada put it cogently: “Independent judicial oversight may take time, but it’s indispensable in the specific context of law enforcement investigations.” Incorporating judicial supervision as a minimum threshold for cross-border access is also feasible. Indeed, a majority of states in T-CY’s own survey require prior judicial authorization for at least some forms of subscriber data in their respective national laws. 

At a minimum, the new Protocol text is flawed for its failure to recognize the deeply private nature of anonymous online activity and the serious threat posed to human rights when State officials are allowed open-ended access to identification data. Granting states this access makes the world less free and seriously threatens free expression. Article 7’s emphasis on non-judicial ‘cooperation’ between police and Internet companies poses a particularly insidious risk, and must not form part of the final adopted Convention.

Imposing Optional Privacy Standards

Article 14, which was recently publicized for the first time, is intended to provide detailed safeguards for personal information. Many of these protections are important, imposing limits on the treatment of sensitive data, the retention of personal data, and the use of personal data in automated decision-making, particularly in countries without data protection laws. The detailed protections are complex, and civil society groups continue to unpack their full legal impact. That being said, some shortcomings are immediately evident.

Some of Article 14’s protections actively undermine privacy—for example, paragraph 14.2.a prohibits signatories from imposing any additional “generic data protection conditions” when limiting the use of personal data. Paragraph 14.1.d also strictly limits when a country’s data protection laws can prevent law enforcement-driven personal data transfers to another country. 

More generally, and in stark contrast to the Protocol’s lawful access obligations, the detailed privacy safeguards encoded in Article 14 are not mandatory and can be ignored if countries have other arrangements in place (Article 14.1). States can rely on a wide variety of agreements to bypass the Article 14 protections. The OECD is currently negotiating an agreement that might systematically displace the Article 14 protections and, under the United States Clarifying Lawful Overseas Use of Data (CLOUD) Act, the U.S. executive branch can enter into “agreements” with other states to facilitate law enforcement transfers. Paragraph 14.1.c even contemplates informal agreements that are neither binding, nor even public, meaning that countries can secretly and systematically bypass the Article 14 safeguards. No real obligations are put in place to ensure these alternative arrangements provide an adequate or even sufficient level of privacy protection. States can therefore rely on the Protocol’s law enforcement powers while using side agreements to bypass its privacy protections, a particularly troubling development given the low data protection standards of many anticipated signatories. 

The Article 14 protections are also problematic because they appear to fall short of the minimum data protection that the CJEU has required. The full list of protections in Article 14, for example, resembles that inserted by the European Commission into its ‘Privacy Shield’ agreement. Internet companies relied upon the Privacy Shield to facilitate economic transfers of personal data from the European Union (EU) to the United States until the CJEU invalidated the agreement in 2020, finding its privacy protections and remedies insufficient. Similarly, clause 14.6 limits the use of personal data in purely automated decision-making systems that will have significant adverse effects on relevant individual interests. But the CJEU has also found that an international agreement for transferring air passenger data to Canada for public safety objectives was inconsistent with EU data protection guarantees despite the inclusion of a similar provision.

Conclusion

These and other substantive problems with the Protocol are concerning. Cross-border data access is rapidly becoming common in even routine criminal investigations, as every aspect of our lives continues its steady migration to the digital world. Instead of baking robust human rights and privacy protections into cross-border investigations, the Protocol discourages court oversight, renders most of its safeguards optional, and generally weakens privacy and freedom of expression.

Share
Categories
Big tech Biometrics Ccp China communist china Intelwars privacy Social Media Tiktok Tiktok biometrics

Chinese-owned TikTok can now collect your kids’ faceprints and voiceprints

Chinese-owned TikTok made a quiet update to its privacy policy in the United States this week. The massively popular social video app gave itself permission to collect biometric data of U.S. users, which includes faceprints and voiceprints.

“We may collect information about the images and audio that are a part of your User Content, such as identifying the objects and scenery that appear, the existence and location within an image of face and body features and attributes, the nature of the audio, and the text of the words spoken in your User Content,” the new privacy policy that was introduced on Wednesday stated.

“We may collect biometric identifiers and biometric information as defined under US laws, such as faceprints and voiceprints, from your User Content,” TikTok said. “Where required by law, we will seek any required permissions from you prior to any such collection.”

TechCrunch reported, “Only a handful of U.S. states have biometric privacy laws, including Illinois, Washington, California, Texas and New York. If TikTok only requested consent, ‘where required by law,’ it could mean users in other states would not have to be informed about the data collection.”

TikTok claimed it may need to “collect this information to enable special video effects, for content moderation, for demographic classification, for content and ad recommendations, and for other non-personally-identifying operations.”

In response to various questions about what data the company is now collecting on users, how it defines “faceprints and voiceprints,” what data it might collect in the future, and what it might do with that information, a TikTok spokesperson told The Verge on Thursday: “As part of our ongoing commitment to transparency, we recently updated our Privacy Policy to provide more clarity on the information we may collect.”

The current expansion of TikTok’s privacy policy is especially concerning since TikTok recently settled a class-action lawsuit filed in the United States over collecting and sharing personal and biometric information of users without their consent. The suit alleged that the app collected “highly sensitive personal data” to track users and target ads to them. TikTok rejected the allegations but said it didn’t want to spend time litigating the issue, and settled the class-action lawsuit by paying out a whopping $92 million to the users.

“As part of the settlement, TikTok has agreed to avoid several behaviors that could compromise user privacy unless it specifically discloses those behaviors in its privacy policy,” according to The Verge. “Those behaviors include storing biometric information, collecting GPS or clipboard data, and sending or storing US users’ data outside the country.”

In December 2019, TikTok Inc. and parent company ByteDance Technology Co. agreed to pay $1.1 million to settle a lawsuit over alleged children’s privacy violations. The lawsuit alleged that the kid-friendly app collected children’s information without their parents’ consent.

“The companies compiled and disclosed personal information and viewing data, including lip-syncing videos, of children who used their Musical.ly app, and sold it to third-party advertisers,” according to Bloomberg.

In February 2019, TikTok’s parent company agreed to pay $5.7 million to settle Federal Trade Commission allegations of Children’s Online Privacy Protection Act violations. The company purportedly collected personal information from children illegally, according to the FTC, which added, “This is the largest civil penalty ever obtained by the Commission in a children’s privacy case.”

TikTok is a short-form video-sharing social media platform with nearly 100 million users in the U.S. that is especially popular with younger people – over 47% of TikTok’s users are between the ages of 10 and 29. TikTok is owned by the Beijing-based ByteDance Ltd., one of China’s biggest tech companies.

Last summer, then-President Donald Trump attempted to ban TikTok in the United States over spying concerns and the app’s connection to the CCP. Last year, then-Secretary of State Mike Pompeo said using TikTok puts “your private information in the hands of the Chinese Communist Party.” The Defense Department previously warned TikTok has “potential security risks associated with its use.” The Transportation Security Administration, U.S. Army, and U.S. Navy prohibited TikTok from being used on government-issued phones.

“What the American people have to understand is all of the data that goes into those mobile apps that kids have so much fun with and seem so convenient, it goes right to servers in China, right to the Chinese military, the Chinese communist party, and the agencies which want to steal our intellectual property,” said former White House trade adviser Peter Navarro.

In the end, the Trump administration was not able to institute a TikTok ban.

Share
Categories
Conferences Fear Intelwars privacy risks Security Conferences

Security and Human Behavior (SHB) 2021

Today is the second day of the fourteenth Workshop on Security and Human Behavior. The University of Cambridge is the host, but we’re all on Zoom.

SHB is a small, annual, invitational workshop of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The forty or so attendees include psychologists, economists, computer security researchers, sociologists, political scientists, criminologists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

Our goal is always to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to six to eight minutes, with the rest of the time for open discussion. The format translates well to Zoom, and we’re using random breakouts for the breaks between sessions.

I always find this workshop to be the most intellectually stimulating two days of my professional year. It influences my thinking in different, and sometimes surprising, ways.

This year’s schedule is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks.

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, eleventh, twelfth, and thirteenth SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops. Ross also maintains a good webpage of psychology and security resources.

Share
Categories
Intelwars Policy Analysis privacy

Your Avatar is You, However You See Yourself, and You Should Control Your Experience and Your Data

Virtual worlds are increasingly providing sophisticated, realistic, and often immersive experiences that are the stuff of fantasy. You can enter them by generating an avatar – a representation of the user that could take the form of an animal, a superhero, a historic figure; each is some version of yourself or the image you’d like to project. You can often choose to express yourself by selecting how to customize your character. For many, avatar customization is key to satisfying and immersive gameplay or online experiences. Avatars used to be relatively crude, even cartoonish representations, but they are becoming increasingly life-like, with nuanced facial expressions backed by a wealth of available emotes and actions. Most games and online spaces now offer at least a few options for choosing your avatar, with some providing in-depth tools to modify every aspect of your digital representation.

There is a broad array of personal and business applications for these avatars as well, from digital influencers, celebrities, and customer service representatives to your digital persona in the virtual workplace. Virtual reality and augmented reality promise to take avatars to the next level, allowing the avatar’s movement to mirror the user’s gestures, expressions, and physicality.

The ability to customize how you want to be perceived in a virtual world can be incredibly empowering. It enables embodying rich personas to fit the environment and the circumstances, or adopting a mask to shield your privacy and personal self from what you wish to make public. You might use one persona for gaming, another for a professional setting, and a third for a private space with your friends.

An avatar can help someone remove constraints imposed on them by wider societal biases. For example, trans and gender non-conforming individuals can more accurately reflect their true self, relieving the effects of gender dysphoria and transphobia, which has been shown to have therapeutic benefits. For people with disabilities, avatars can allow participants to pursue unique activities through which they can meet and interact with others. In some cases, avatars can help avoid harassment. For example, researchers found some women choose male avatars to avoid misogyny in World of Warcraft.

Facebook, owner of Oculus VR and heavily investing in AR, has highlighted its technical progress in one Facebook Research project called Codec Avatar. The Codec Avatars research project focuses on ultra-realistic avatars, potentially modeled directly on users’ bodies, and modeling the user’s voice, movements, and likeness, looking to power the ‘future of connection’ with avatars that enable what Facebook calls ‘social presence’ in their VR platform. 

Social presence combines the telepresence aspect of a VR experience and the social element of being able to share the experience with other people. In order to deliver what Facebook envisions for an “authentic social connection” in virtual reality, you have to pass the mother test: ‘your mother has to love your avatar before the two of you feel comfortable interacting as you would in real life’, as Yaser Sheikh, Director of Research at Facebook Reality Labs, put it. 

While we’d hope your mother would love whatever avatar you make, Facebook seems to mean the Codec Avatars are striving to be indistinguishable from their human counterparts–a “picture-perfect representation of the sender’s likeness,” that has the “unique qualities that make you instantly recognizable,” as captured by a full-body scan, and animated by ego-centric surveillance. While some may prefer exact replicas like these, the project is not yet embracing a future that allows people the freedom to be whoever they want to be online.

By contrast, Epic Games has introduced MetaHumans, which also allows lifelike animation techniques via its Unreal Engine and motion capture, but does not require a copy of the user. Instead, it allows the user the choice to create and control how they appear in virtual worlds.

Facebook’s plan for Codec Avatars is to verify users “through a combination of user authentication, device authentication, and hardware encryption,” and it is “exploring the idea of securing future avatars through an authentic account.” This obsession with authenticated perfect replicas mirrors Facebook’s controversial history of insisting on “real names,” later loosened somewhat to allow “authentic names,” without resolving the inherent problems. Indeed, Facebook’s insistence on tying your Oculus account to your Facebook account (and its authentic name) already brings these policies together, for the worse. If Facebook insists on indistinguishable avatars, tied to a Facebook “authentic” account, in its future of social presence, this will put the names policy on steroids.

Facebook should respect the will of individuals not to disclose their real identities online.

Until the end of next year, Oculus still allows existing users to keep a separate unique VR profile to log in, which does not need to be your Facebook name. With Facebook login, users can still set their name as visible to ‘Only Me’ in Oculus settings, so that at least people on Oculus won’t be able to find you by your Facebook name.  But this is a far cry from designing an online identity system that celebrates the power of avatars to enable people to be who they want to be.

Lifelike Avatars and Profiles of Faces, Bodies, and Behavior on the Horizon

A key part of realistic avatars is mimicking the body and facial expressions of the user, derived from collecting your non-verbal and body cues (the way you frown, tilt your head, or move your eyes) and your body structure and motion. Facebook’s Modular Codec Avatar system seeks to make “inferences about what a face should look like” to construct authentic simulations, compared to the original Codec Avatar system, which relied more on direct comparison with a person.

While still a long way from the hyper-realistic Codec Avatar project, Facebook has recently announced a substantial step down that path, rolling out avatars with real-time animated gestures and expressions for Facebook’s virtual reality world, Horizon. These avatars will later be available for other Quest app developers to integrate into their own work.

Facebook’s Codec Avatar research suggests that it will eventually require a lot of sensitive information about its users’ faces and body language: both their detailed physical appearance and structure (to recognize you for authentication applications, and to produce photorealistic avatars, including full-body avatars), and their moment-to-moment emotions and behaviors, captured in order to replicate them realistically in real-time in a virtual or augmented social setting.

While this technology is still in development, the inferences coming from these egocentric data collection practices require even stronger human rights protections. Algorithmic tools can leverage a platform’s intimate knowledge of its users, assembled from thousands of seemingly unrelated data points, to make inferences drawn from both individual and collective behavior.

Research using unsophisticated cartoon avatars suggests that others may accurately infer some personality traits of the user from their avatar alone. Animating hyper-realistic avatars of natural persons, such as Facebook’s Codec Avatars, will require collecting much more personal data. Think of it like walking around strapped to a dubious lie-detector, which measures your temperature, body responses, and heart-rate, as you go about your day.

Inferences based on egocentric collection of data about users’ emotions, attention, likes, or dislikes give platforms the power to control what your virtual vision sees, what your virtual body looks like, and how your avatar can behave. While wearing your headsets, you will see the 3D world through a lens made by those who control the infrastructure.

Realistic Avatars Require Robust Security

Hyper-realistic avatars also raise concerns about “deep fakes”. Right now, deep fakes involving a synthetic video or audio “recording” may be mistaken for a real recording of the people it depicts. The unauthorized use of an avatar could also be confused with the real person it depicts. While any avatar, realistic or not, may be driven by a third party, hyper-realistic avatars, with human-like expressions and gestures, can more easily build trust.  Worse, in a dystopian future, realistic avatars of people you know could be animated automatically, for advertising or influencing opinion. For example, imagine an uncannily convincing ad where hyper-realistic avatars of your friends swoon over a product, or where an avatar of your crush tells you how good you’ll look in a new line of clothes.  More nefariously, hyper-realistic avatars of familiar people could be used for social engineering, or to draw people down the rabbit hole of conspiracy theories and radicalization.

‘Deep fake’ issues with a third party independently making a realistic fake depiction of a real person are well covered by existing law. The personal data captured to make ultra-realistic avatars, which is not otherwise readily available to the public, should not be used to act out expressions or interactions that people did not actually consent to present. To protect against this and put the user in charge of their experience, users must have strong security measures around the use of their accounts, what data is collected, and how this data is used.

A secure system for authentication does not require a verified match to one’s offline self. For some, of course, a verification linked to an offline identity may be valuable, but for others, the true value may lie in a way to connect without revealing their identity. Even if a user is presenting differently from their IRL body, they may still want to develop a consistent reputation and goodwill with their avatar persona, especially if it is used across a range of experiences. This important security and authentication can be provided without requiring a link to an authentic name account, or verification that the avatar presented matches the offline body.

For example, the service could verify if the driver of the avatar was the same person who made it, without simultaneously revealing who the driver was offline. With appropriate privacy controls and data use limitations, a VR/AR device is well-positioned to verify the account holder biometrically, and thereby verify a consistent driver, even if that was not matched to an offline identity. 

Transparency and User Control Are Vital for the Avatars of the Virtual World

In the era of life-like avatars, it is even more important for companies to give users transparency and control over the algorithms that underpin why their avatar will behave in specific ways, and to provide users strong controls over the use of inferences.

Facebook’s Responsible Innovation Principles, which allude to more transparency and control, are an important first step, but they remain incomplete and flawed. The first principle (“Never surprise people”)  fortunately implies greater transparency moving forward. Indeed, many of the biggest privacy scandals have stemmed from people being surprised by unfair data processing practices, even if the practice had been included in a privacy policy.  Simply informing people of your data practices, even if effectively done, does not ensure that the practices are good ones. 

Likewise, the second principle (“Provide controls that matter”) does not necessarily ensure that you as a user will have the controls over everything you think matters. One might debate over what falls into the category of things that “matter” enough to have controls, like the biometric data collected or the inferences generated by the service, or the look of one’s avatar.   This is particularly important when there can be so much data collected in a life-like avatar, and raises critical questions on how it could be used, even as the tech is in its infancy.  For example, if the experience requires an avatar that’s designed to reflect your identity, what is at stake inside the experience is your sense of self. The platform won’t just control the quality of the experience you observe (like watching a movie), but rather control an experience that has your identity and sense of self at its core. This is an unprecedented ability to potentially produce highly tailored forms of psychological manipulation according to your behavior in real-time.

Without strong user controls, social VR platforms or third-party developers may be tempted to use this data for other purposes, including psychological profiling of users’ emotions, interests, and attitudes, such as detecting nuances of how people feel about particular situations, topics, or other people.  It could be used to make emotionally manipulative content that subtly mirrors the appearance or mannerisms of people close to us, perhaps in ways we can’t quite put our fingers on.  

Data protection laws, like the GDPR, require that personal data collected for a specific purpose (like making your avatar more emotionally realistic in a VR experience) should not be used for other purposes (like calibrating ads to optimize your emotional reactions to them or mimicking your mannerisms in ads shown to your friends). 

Facebook’s VR/AR policies for third-party developers prevent them (and rightly so) from using Oculus user data for marketing or advertising, performing or facilitating surveillance for law enforcement purposes (without a valid court order), attempting to identify a natural person, or combining user data with data from a third party, among other things. But the company has not committed to these restrictions, or to allowing strong user controls, on its own uses of data.

Facebook should clarify and expand upon their principles, and confirm they understand that transparency and controls that “matter” include transparency about and control over not only the form and shape of the avatar but also the use or disclosure of the inferences the platform will make about users (their behavior, emotions, personality, etc.), including the processing of personal data running in the background. 

We urge Facebook to give users control and put people in charge of their experience. The notion that people must replicate their physical forms online to achieve the “power of connection” fails to recognize that many people wish to connect in a variety of ways, including the use of different avatars to express themselves. For some, their avatar may indeed be a perfect replica of their real-world bodies. Indeed, it is critical for inclusion to allow avatar design options that reflect the diversity of users. But for others, their authentic self is what they’ve designed in their minds or know in their hearts, and what they are finally able to reflect in glorious high resolution in a virtual world.


Share
Categories
Intelwars Necessary and Proportionate Policy Analysis privacy Surveillance and Human Rights

Your Avatar is You, However You See Yourself, and You Should Control Your Experience and Your Data

Virtual worlds are increasingly providing sophisticated, realistic, and often immersive experiences that are the stuff of fantasy. You can enter them by generating an avatar – a representation of the user that could take the form of an animal, a superhero, a historic figure, each some version of yourself or the image you’d like to project. You can often choose to express yourself by selecting how to customize your character. For many, Avatar customization is key for satisfying and immersive gameplay or online experience. Avatars used to be relatively crude, even cartoonish representations, but they are becoming increasingly life-like, with nuanced facial expressions backed by a wealth of available emotes and actions. Most games and online spaces now offer at least a few options for choosing your avatar, with some providing in-depth tools to modify every aspect of your digital representation. 

There is a broad array of personal and business applications for these avatars as well- from digital influencers, celebrities, customer service representatives, to your digital persona in the virtual workplace. Virtual reality and augmented reality promise to take avatars to the next level, allowing the avatar’s movement to mirror the user’s gestures, expressions, and physicality. 

The ability to customize how you want to be perceived in a virtual world can be incredibly empowering. It enables embodying rich personas to fit the environment and the circumstances, or adopting a mask to shield your privacy and personal self from what you wish to make public. You might use one persona for gaming, another for a professional setting, and a third for a private space with your friends.

An avatar can help someone remove constraints imposed on them by wider societal biases. For example, trans and gender non-conforming individuals can more accurately reflect their true selves, relieving the effects of gender dysphoria and transphobia, which has been shown to have therapeutic benefits. For people with disabilities, avatars can open up activities through which they can meet and interact with others. In some cases, avatars can help avoid harassment; for example, researchers found that some women choose male avatars to avoid misogyny in World of Warcraft. 

Facebook, owner of Oculus VR and heavily investing in AR, has highlighted its technical progress in a Facebook Research project called Codec Avatars. The Codec Avatars research project focuses on ultra-realistic avatars, potentially modeled directly on users’ bodies and built from models of each user’s voice, movements, and likeness, looking to power the ‘future of connection’ with avatars that enable what Facebook calls ‘social presence’ in its VR platform. 

Social presence combines the telepresence aspect of a VR experience and the social element of being able to share the experience with other people. In order to deliver what Facebook envisions for an “authentic social connection” in virtual reality, you have to pass the mother test: ‘your mother has to love your avatar before the two of you feel comfortable interacting as you would in real life’, as Yaser Sheikh, Director of Research at Facebook Reality Labs, put it. 

While we’d hope your mother would love whatever avatar you make, Facebook seems to mean the Codec Avatars are striving to be indistinguishable from their human counterparts–a “picture-perfect representation of the sender’s likeness,” that has the “unique qualities that make you instantly recognizable,” as captured by a full-body scan, and animated by ego-centric surveillance. While some may prefer exact replicas like these, the project is not yet embracing a future that allows people the freedom to be whoever they want to be online.

By contrast, Epic Games has introduced MetaHumans, which also allows lifelike animation techniques via its Unreal Engine and motion capture, but does not require a copy of the user. Instead, it allows the user the choice to create and control how they appear in virtual worlds.

Facebook’s plan for Codec Avatars is to verify users “through a combination of user authentication, device authentication, and hardware encryption,” and the company is “exploring the idea of securing future avatars through an authentic account.” This obsession with authenticated perfect replicas mirrors Facebook’s controversial history of insisting on “real names,” later loosened somewhat to allow “authentic names,” without resolving the inherent problems. Indeed, Facebook’s insistence on tying your Oculus account to your Facebook account (and its authentic name) already brings these policies together, for the worse. If Facebook insists on indistinguishable avatars, tied to a Facebook “authentic” account, in its future of social presence, this will put the names policy on steroids.

Facebook should respect the will of individuals not to disclose their real identities online.

Until the end of next year, Oculus still allows existing users to keep a separate unique VR profile to log in, which does not need to be your Facebook name. With Facebook login, users can still set their name as visible to ‘Only Me’ in Oculus settings, so that at least people on Oculus won’t be able to find you by your Facebook name.  But this is a far cry from designing an online identity system that celebrates the power of avatars to enable people to be who they want to be.

Lifelike Avatars and Profiles of Faces, Bodies, and Behavior on the Horizon

A key part of realistic avatars is mimicking the body and facial expressions of the user, derived from collecting your non-verbal and body cues (the way you frown, tilt your head, or move your eyes) and your body structure and motion. Facebook’s Modular Codec Avatar system seeks to make “inferences about what a face should look like” to construct authentic simulations, compared to the original Codec Avatar system, which relied more on direct comparison with a person.

While still a long way from the hyper-realistic Codec Avatar project, Facebook has recently announced a substantial step down that path, rolling out avatars with real-time animated gestures and expressions for Facebook’s virtual reality world, Horizon. These avatars will later be available for other Quest app developers to integrate into their own work.

Facebook’s Codec Avatar research suggests that it will eventually require a lot of sensitive information about its users’ faces and body language: both their detailed physical appearance and structure (to recognize and authenticate them, and to produce photorealistic avatars, including full-body avatars), and their moment-to-moment emotions and behaviors, captured in order to replicate them realistically in real time in a virtual or augmented social setting. 

While this technology is still in development, the inferences that flow from these egocentric data collection practices call for even stronger human rights protections. Algorithmic tools can leverage the platform’s intimate knowledge of its users, assembled from thousands of seemingly unrelated data points, to draw inferences from both individual and collective behavior. 

Research using unsophisticated cartoon avatars has suggested that even simple avatars can accurately convey some personality traits of the user. Animating hyper-realistic avatars of natural persons, such as Facebook’s Codec Avatars, will require collecting much more personal data. Think of it like walking around strapped to a dubious lie detector that measures your temperature, body responses, and heart rate as you go about your day. 

Inferences based on the egocentric collection of data about users’ emotions, attention, likes, and dislikes give platforms the power to control what your virtual vision sees, what your virtual body looks like, and how your avatar can behave. While wearing your headset, you will see the 3D world through a lens made by those who control the infrastructure. 

Realistic Avatars Require Robust Security

Hyper-realistic avatars also raise concerns about “deep fakes.” Right now, a deep fake involving a synthetic video or audio “recording” may be mistaken for a real recording of the people it depicts. The unauthorized use of an avatar could likewise be confused with the real person it depicts. While any avatar, realistic or not, may be driven by a third party, hyper-realistic avatars, with human-like expressions and gestures, can more easily build trust. Worse, in a dystopian future, realistic avatars of people you know could be animated automatically, for advertising or for influencing opinion. For example, imagine an uncannily convincing ad where hyper-realistic avatars of your friends swoon over a product, or where an avatar of your crush tells you how good you’ll look in a new line of clothes. More nefariously, hyper-realistic avatars of familiar people could be used for social engineering, or to draw people down the rabbit hole of conspiracy theories and radicalization.

‘Deep fake’ issues, in which a third party independently makes a realistic fake depiction of a real person, are well covered by existing law. The personal data captured to make ultra-realistic avatars, which is not otherwise readily available to the public, should not be used to act out expressions or interactions that people did not actually consent to present. To protect against this and put the user in charge of their experience, users must have strong security measures around the use of their accounts, what data is collected, and how this data is used.

A secure system for authentication does not require a verified match to one’s offline self. For some, of course, verification linked to an offline identity may be valuable, but for others, the true value may lie in a way to connect without revealing their identity. Even if a user is presenting differently from their IRL body, they may still want to develop a consistent reputation and goodwill with their avatar persona, especially if it is used across a range of experiences. This important security and authentication can be provided without requiring a link to an authentic-name account, or verification that the avatar presented matches the offline body. 

For example, the service could verify that the driver of the avatar is the same person who made it, without simultaneously revealing who the driver is offline. With appropriate privacy controls and data use limitations, a VR/AR device is well-positioned to verify the account holder biometrically, and thereby verify a consistent driver, even if that identity is not matched to an offline one. 
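To make that idea concrete, here is a minimal sketch of one way a service might check for a consistent driver using a key held on the headset, without ever learning an offline identity. This is an illustrative assumption, not a description of any platform’s actual system; the function names are hypothetical, and it uses the Ed25519 primitives from the Python cryptography package.

```python
# Sketch: verify that the person driving an avatar holds the same key that
# enrolled it, without learning who they are offline. Illustrative only.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# Enrollment: the headset generates a keypair; only the public key is
# registered alongside the avatar. No legal name or Facebook account is needed.
device_key = Ed25519PrivateKey.generate()
registered_public_key = device_key.public_key()

def issue_challenge() -> bytes:
    """Service side: a random nonce the driver must sign for this session."""
    return os.urandom(32)

def sign_challenge(private_key: Ed25519PrivateKey, challenge: bytes) -> bytes:
    """Device side: prove possession of the enrolled key."""
    return private_key.sign(challenge)

def driver_is_consistent(public_key: Ed25519PublicKey,
                         challenge: bytes, signature: bytes) -> bool:
    """Service side: the key that enrolled the avatar also signed this session."""
    try:
        public_key.verify(signature, challenge)
        return True
    except InvalidSignature:
        return False

challenge = issue_challenge()
signature = sign_challenge(device_key, challenge)
print(driver_is_consistent(registered_public_key, challenge, signature))  # True
```

Because only the public key is ever registered, the service can recognize “the same driver as before” without holding a legal name, a face scan, or a Facebook account.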

Transparency and User Control Are Vital for the Avatars of the Virtual World

In the era of life-like avatars, it is even more important for companies to give users transparency about the algorithms that determine why their avatars behave in specific ways, and to provide strong user controls over the use of inferences. 

Facebook’s Responsible Innovation Principles, which allude to more transparency and control, are an important first step, but they remain incomplete and flawed. The first principle (“Never surprise people”)  fortunately implies greater transparency moving forward. Indeed, many of the biggest privacy scandals have stemmed from people being surprised by unfair data processing practices, even if the practice had been included in a privacy policy.  Simply informing people of your data practices, even if effectively done, does not ensure that the practices are good ones. 

Likewise, the second principle (“Provide controls that matter”) does not necessarily ensure that you as a user will have controls over everything you think matters. One might debate what falls into the category of things that “matter” enough to have controls: the biometric data collected, the inferences generated by the service, or the look of one’s avatar. This is particularly important when so much data can be collected through a life-like avatar, and it raises critical questions about how that data could be used, even while the tech is in its infancy. For example, if the experience requires an avatar that’s designed to reflect your identity, what is at stake inside the experience is your sense of self. The platform won’t just control the quality of the experience you observe (like watching a movie); it will control an experience that has your identity and sense of self at its core. This is an unprecedented ability to produce highly tailored forms of psychological manipulation keyed to your behavior in real time.

Without strong user controls, social VR platforms or third-party developers may be tempted to use this data for other purposes, including psychological profiling of users’ emotions, interests, and attitudes, such as detecting nuances of how people feel about particular situations, topics, or other people.  It could be used to make emotionally manipulative content that subtly mirrors the appearance or mannerisms of people close to us, perhaps in ways we can’t quite put our fingers on.  

Data protection laws, like the GDPR, require that personal data collected for a specific purpose (like making your avatar more emotionally realistic in a VR experience) should not be used for other purposes (like calibrating ads to optimize your emotional reactions to them or mimicking your mannerisms in ads shown to your friends). 

Facebook’s VR/AR policies for third-party developers rightly prohibit them from using Oculus user data for marketing or advertising, performing or facilitating surveillance for law enforcement purposes (without a valid court order), attempting to identify a natural person, or combining user data with data from a third party, among other things. But the company has not committed to these restrictions, or to allowing strong user controls, for its own uses of data. 

Facebook should clarify and expand upon their principles, and confirm they understand that transparency and controls that “matter” include transparency about and control over not only the form and shape of the avatar but also the use or disclosure of the inferences the platform will make about users (their behavior, emotions, personality, etc.), including the processing of personal data running in the background. 

We urge Facebook to give users control and put people in charge of their experience. The notion that people must replicate their physical forms online to achieve the “power of connection” fails to recognize that many people wish to connect in a variety of ways, including through different avatars that express who they are. For some, their avatar may indeed be a perfect replica of their real-world body; it is critical for inclusion to allow avatar design options that reflect the diversity of users. But for others, their authentic self is what they’ve designed in their minds or know in their hearts, and what they are finally able to reflect in glorious high resolution in a virtual world. 


Categories
anonymity Commentary Encrypting the Web free speech Intelwars Locational Privacy privacy Street-Level Surveillance

EFF at 30: Surveillance Is Not Obligatory, with Edward Snowden

To commemorate the Electronic Frontier Foundation’s 30th anniversary, we present EFF30 Fireside Chats. This limited series of livestreamed conversations looks back at some of the biggest issues in internet history and their effects on the modern web.

To celebrate 30 years of defending online freedom, EFF was proud to welcome NSA whistleblower Edward Snowden for a chat about surveillance, privacy, and the concrete ways we can improve our digital world, as part of our EFF30 Fireside Chat series. EFF Executive Director Cindy Cohn, EFF Director of Engineering for Certbot Alexis Hancock, and EFF Policy Analyst Matthew Guariglia weighed in on the way the internet (and surveillance) actually function, the impact that has on modern culture and activism, and how we’re grappling with the cracks this pandemic has revealed—and widened—in our digital world. 

You can watch the full conversation here or read the transcript.

On June 3, we’ll be holding our fourth EFF30 Fireside Chat, on how to free the internet, with net neutrality pioneer Gigi Sohn. EFF co-founder John Perry Barlow once wrote, “We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before.” This year marked the 25th anniversary of this audacious essay denouncing centralized authority on the blossoming internet. But modern tech has strayed far from the utopia of individual freedom that 90s netizens envisioned. We’ll be discussing corporatization, activism, and the fate of the internet, framed by Barlow’s “Declaration of the Independence of Cyberspace,” with Gigi, along with EFF Senior Legislative Counsel Ernesto Falcon and EFF Associate Director of Policy and Activism Katharine Trendacosta.

RSVP to the next EFF30 Fireside Chat

The Internet is Not Made of Magic

Snowden opened the discussion by explaining the reality that all of our internet usage is made up of a giant mesh of companies and providers. The internet is not magic—it’s other people’s computers: “All of our communications—structurally—are intermediated by other people’s computers and infrastructure…[in the past] all of these lines that you were riding across—the people who ran them were taking notes.” We’ve come a long way from that time when our communications were largely unencrypted, and everything you typed into the Google search box “was visible to everybody else who was on that Starbucks network with you, and your Internet Service Provider, who knew this person who paid for this account searched for this thing on Google….anybody who was between your communications could take notes.”

[Embedded video: https://www.youtube.com/embed/PYRaSOIbiOA (content served from youtube.com)]

How Can Tech Protect Us from Surveillance?

In 2013, Snowden came forward with details about the PRISM program, through which the NSA and FBI worked directly with large companies to see what was in individuals’ internet communications and activity, making much more public the notion that our digital lives were not safe from spying. This has led to a change in people’s awareness of this exploitation, Snowden said, and myriad solutions have come about to solve parts of what is essentially an ecosystem problem: some technical, some legal, some political, some individual. “Maybe you install a different app. Maybe you stop using Facebook. Maybe you don’t take your phone with you, or start using an encrypted messenger like Signal instead of something like SMS.” 

Nobody sells you a car without brakes—nobody should sell you a browser without security.

When it comes to the legal cases, like EFF’s case against the NSA, the courts are finally starting to respond. Technical solutions, like the expansion of encryption in everyday online usage, are also playing a part, Alexis Hancock, EFF’s Director of Engineering for Certbot, explained. “Just yesterday, I checked on a benchmark that said that 95% of web traffic is encrypted—leaps and bounds since 2013.” In 2015, web browsers started displaying “this site is not secure” messages on unencrypted sites, and that’s where EFF’s Certbot tool steps in. Certbot is a “free, open source software that we work on to automatically supply free SSL, or secure, certificates for traffic in transit, automating it for websites everywhere.” This keeps data private in transit, adding a layer of protection over what is traveling between your request and a website’s server. This is one of the things that doesn’t get talked about a lot, partly because these are pieces that you don’t see and shouldn’t have to see, but they are what give people security. “Nobody sells you a car without brakes—nobody should sell you a browser without security.”  
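For readers curious what that in-transit protection looks like from the outside, the short sketch below uses Python’s standard ssl and socket modules to fetch and print the certificate a site presents; the certificate is the kind of credential that tools like Certbot obtain and renew automatically. The domain example.com is just a placeholder.

```python
# Sketch: inspect the TLS certificate a site presents, i.e. the credential
# that enables encryption in transit. example.com is a placeholder domain.
import socket
import ssl

HOST = "example.com"

context = ssl.create_default_context()  # verifies the certificate chain and hostname
with socket.create_connection((HOST, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()

print("Issued to:", dict(pair[0] for pair in cert["subject"]).get("commonName"))
print("Issued by:", dict(pair[0] for pair in cert["issuer"]).get("commonName"))
print("Expires  :", cert["notAfter"])
```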

[Embedded video: https://www.youtube.com/embed/cJWq6ub0CQs (content served from youtube.com)]

Balancing the Needs of the Pandemic and the Dangers of Surveillance

We’ve moved the privacy needle forward in many ways since 2013, but in 2020, a global catastrophe could have set us back: the COVID-19 pandemic. As Hancock described it, EFF’s focus for protecting privacy during the pandemic was to track “where technology can and can’t help, and when is technology being presented as a silver bullet for certain issues around the pandemic when people are the center for being able to bring us out of this.”

There is a looming backlash of people who have had quite enough.

Our fear was primarily scope creep, she explained: from contact tracing to digital credentials, many of these systems already exist, but we must ask, “what are we actually trying to solve here? Are we actually creating more barriers to healthcare?” Contact tracing, for example, must put privacy first and foremost—because making it trustworthy is key to making it effective. 

[Embedded video: https://www.youtube.com/embed/R9CIDUhGOgU (content served from youtube.com)]

The Melting Borders Between Corporate, Government, Local, and Federal Surveillance 

But the pandemic, unfortunately, isn’t the only nascent danger to our privacy. EFF’s Matthew Guariglia described the merging of both government and corporate surveillance, and federal and local surveillance, that’s happening around the country today: “Police make very effective marketers, and a lot of the manufacturers of technology are counting on it….If you are living in the United States today you are likely walking past or carrying around street level surveillance everywhere you go, and this goes double if you live in a concentrated urban setting or you live in an overpoliced community.”

Police make very effective marketers, and a lot of the manufacturers of technology are counting on it

From automated license plate readers to private and public security cameras to Shotspotter devices that listen for gunshots but also record cars backfiring and fireworks, this matters now more than ever, as the country reckons with a history of dangerous and inequitable overpolicing: “If a Shotspotter misfires, and sends armed police to the site of what they think is a shooting, there is likely to be a higher chance for a more violent encounter with police who think they’re going to a shooting.” This is equally true for a variety of these technologies, from automated license plate readers to facial recognition, which police claim are used for leads, but are too often accepted as fact. 

“Should we compile records that are so comprehensive?” asked Snowden about the way these records aren’t only collected, but queried, allowing government and companies to ask for the firehose of data. “We don’t even care what it is, we interrelate it with something else. We saw this license plate show up outside our store at a strip mall and we want to know how much money they have.” This is why the need for legal protections is so important, added Executive Director Cindy Cohn: “The technical tools are not going to get to the place where the phone company doesn’t know where your phone is. But the legal protections can make sure that the company is very limited in what they can do with that information—especially when the government comes knocking.”

[Embedded video: https://www.youtube.com/embed/cLlVb_W8OmA (content served from youtube.com)]

 

After All This, Is Privacy Dead?

All these privacy-invasive regimes may lead some to wonder if privacy, or anonymity, are, to put it bluntly, dying. That’s exactly what one audience member asked during the question and answer section of the chat. “I don’t think it’s inevitable,” said Guariglia. “There is a looming backlash of people who have had quite enough.” Hancock added that optimism is both realistic and required: “No technology makes you a ghost online—none of it, even the most secure, anonymous-driven tools out there. And I don’t think that it comes down to your own personal burden…There is actually a more collective unit now that are noticing that this burden is not yours to bear…It’s going to take firing on all cylinders, with activism, technology, and legislation. But there are people fighting for you out there. Once you start looking, you’ll find them.” 

If you look for darkness, that’s all you’ll ever see. But if you look for lightness, you will find it.

“So many people care,” Snowden said. “But they feel like they can’t do anything….Does it have to be that way?…Governments live in a permissionless world, but we don’t. Does it have to be that way?” If you’re looking for a lever to pull—look at the presumptions these mass data collection systems make, and what happens if they fail: “They do it because mass surveillance is cheap…could we make these systems unlawful for corporations, and costly [for others]? I think in all cases, the answer is yes.”

[Embedded video: https://www.youtube.com/embed/EaeKVAbMO6s (content served from youtube.com)]

Democracy, social movements, our relationships, and your own well-being all require private space to thrive. If you missed this chat, please take an hour to watch it—whether you’re a privacy activist or an ordinary person, it’s critical for the safety of our society that we push back on all forms of surveillance, and protect our ability to communicate, congregate, and coordinate without fear of reprisal. We deeply appreciate Edward Snowden joining us for this EFF30 Fireside Chat and discussing how we can fight back against surveillance, as difficult as it may seem. As Hancock said (yes, quoting the anime The Last Airbender): “If you look for darkness, that’s all you’ll ever see. But if you look for lightness, you will find it.”

___________________________

Check out additional recaps of EFF’s 30th anniversary conversation series, and don’t miss our next program where we’ll tackle digital access and the open web with Gigi Sohn on June 3, 2021—EFF30 Fireside Chat: Free the Internet.

Categories
Intelwars International International Privacy Standards Necessary and Proportionate Policy Analysis privacy Surveillance and Human Rights

Why Indian Courts Should Reject Traceability Obligations

End-to-end encryption is under attack in India. The Indian government’s new and dangerous online intermediary rules force messaging applications to track—and be able to identify—the originator of any message, a requirement that is fundamentally incompatible with the privacy and security protections of strong encryption. Three petitions have been filed (by Facebook, WhatsApp, and Arimbrathodiyil) asking the Indian High Courts (in Delhi and Kerala) to strike down these rules.

The traceability provision—Rule 4(2) in the “Intermediary Guidelines and Digital Media Ethics Code” rules (English version starts at page 19)—was adopted by the Ministry of Electronics and Information Technology earlier this year. The rules require that any large social media intermediary that provides messaging “shall enable the identification of the first originator of the information on its computer resource” in response to a court order or a decryption request issued under the 2009 Decryption Rules. (The Decryption Rules allow authorities to request the interception, monitoring, or decryption of any information generated, transmitted, received, or stored in any computer resource.)

The minister has claimed that the rules will “[not] impact the normal functioning of WhatsApp” and said that “the entire debate on whether encryption would be maintained or not is misplaced” because technology companies can still decide to use encryption—so long as they accept the “responsibility to find a technical solution, whether through encryption or otherwise” that permits traceability. WhatsApp strongly disagrees, writing that “traceability breaks end-to-end encryption and would severely undermine the privacy of billions of people who communicate digitally.” 

The Indian government’s assertion is bizarre because the rules compel intermediaries to know information about the content of users’ messages that they currently do not know and that is currently protected by encryption. This legal mandate seeks to change WhatsApp’s security model and technology, and it seems to assume that such a change needn’t matter to users and needn’t bother companies.

That’s wrong. WhatsApp uses a privacy-by-design implementation that protects users’ secure communication by making forwarded messages indistinguishable, from the server’s point of view, from other kinds of communication. When a WhatsApp user forwards a message using the arrow, the forward is marked on the client side, but the fact that the message has been forwarded is not visible to the WhatsApp server. The traceability mandate would force WhatsApp to change the application so that this information, previously invisible to the server, becomes visible.
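As a toy illustration of that client-side design (not WhatsApp’s actual protocol), the sketch below keeps the “forwarded” flag inside the encrypted payload, so a relaying server can route the message without learning whether it was forwarded. It uses symmetric Fernet encryption from the Python cryptography package as a stand-in for end-to-end encryption, and every name in it is hypothetical.

```python
# Toy model: client-side metadata stays inside the ciphertext, so the server
# sees only an opaque blob plus routing information. Not WhatsApp's real protocol.
import json
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()   # stand-in for keys only the two clients hold
client_cipher = Fernet(shared_key)

def client_send(text: str, forwarded: bool) -> dict:
    """Sender's device: the 'forwarded' marker is encrypted along with the body."""
    payload = json.dumps({"body": text, "forwarded": forwarded}).encode()
    return {"to": "recipient-123", "blob": client_cipher.encrypt(payload)}

def server_relay(envelope: dict) -> dict:
    """Server: can read the routing field, but the blob is opaque ciphertext."""
    return envelope  # passes the ciphertext along unchanged

def client_receive(envelope: dict) -> dict:
    """Recipient's device: decrypts and can render a 'Forwarded' label locally."""
    return json.loads(client_cipher.decrypt(envelope["blob"]))

message = client_receive(server_relay(client_send("see you at 6", forwarded=True)))
print(message)  # {'body': 'see you at 6', 'forwarded': True}
```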

The Indian government also defended the rules by noting that legal safeguards restrict the process of gaining access to the identity of a person who originated a message, that such orders can only be issued for national security and serious crime investigations, and on the basis that “it is not any individual who can trace the first originator of the information.” However, messaging services do not know ahead of time which messages will or will not be subject to such orders; as WhatsApp has noted,

there is no way to predict which message a government would want to investigate in the future. In doing so, a government that chooses to mandate traceability is effectively mandating a new form of mass surveillance. To comply, messaging services would have to keep giant databases of every message you send, or add a permanent identity stamp—like a fingerprint—to private messages with friends, family, colleagues, doctors, and businesses. Companies would be collecting more information about their users at a time when people want companies to have less information about them.  

India’s legal safeguards will not solve the core problem:

The rules represent a technical mandate for companies to re-engineer or re-design their systems for every user, not just for criminal suspects.

The overall design of messaging services must change to comply with the government’s demand to identify the originator of a message.

Such changes move companies away from privacy-focused engineering and data minimization principles that should characterize secure private messaging apps.

This provision is one of many features of the new rules that pose a threat to expression and privacy online, but it’s drawn particular attention because of the way it comes into collision with end-to-end encryption. WhatsApp previously wrote:

“Traceability” is intended to do the opposite by requiring private messaging services like WhatsApp to keep track of who-said-what and who-shared-what for billions of messages sent every day. Traceability requires messaging services to store information that can be used to ascertain the content of people’s messages, thereby breaking the very guarantees that end-to-end encryption provides. In order to trace even one message, services would have to trace every message.
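A rough sketch of why that is: to answer “who first sent this message?” later, a relay has to record an identifier for every message as it passes through, before knowing which message will ever be the subject of an order. The hypothetical sketch below keeps a hash-to-first-sender table, which is exactly the kind of ever-growing database the quote describes; it models no real service.

```python
# Sketch: the record-keeping a "first originator" lookup forces on a relay.
# Hypothetical illustration only; no real service is modeled here.
import hashlib

first_seen: dict[str, str] = {}  # content fingerprint -> first sender seen

def relay(sender: str, message: str) -> None:
    """To trace any one message later, every message must be fingerprinted now."""
    fingerprint = hashlib.sha256(message.encode()).hexdigest()
    first_seen.setdefault(fingerprint, sender)

def trace_originator(message: str) -> str | None:
    """The lookup a traceability order would demand."""
    return first_seen.get(hashlib.sha256(message.encode()).hexdigest())

relay("alice", "meet at the protest, 5pm")
relay("bob", "meet at the protest, 5pm")             # a forward of the same content
print(trace_originator("meet at the protest, 5pm"))  # alice
print(len(first_seen))  # grows with every distinct message ever sent
```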

Rule 4(2) applies to WhatsApp, Telegram, Signal, iMessage, or any “significant social media intermediaries” with more than 5 million registered users in India. It can also apply to federated social networks such as Mastodon or Matrix if the government decides these pose a “material risk of harm” to national security (rule 6). Free and open-source software developers are also afraid that they’ll be targeted next by this rule (and other parts of the intermediary rules), including for developing or operating more decentralized services. So Facebook and WhatsApp aren’t the only ones seeking to have the rules struck down; a free software developer named Praveen Arimbrathodiyil, who helps run community social networking services in India, has also sued, citing the burdens and risks of the rules for free and open-source software and not-for-profit communications tools and platforms.

This fight is playing out across the world. EFF has long said that end-to-end encryption, where intermediaries do not know the content of users’ messages, is a vitally important feature for private communications, and has criticized tech companies that don’t offer it or offer it in a watered-down or confusing way. Its end-to-end messaging encryption features are something WhatsApp is doing right—following industry best practices on how to protect users—and the government should not try to take this away.

Categories
cryptocurency Digital privacy hush Intelwars Podcasts privacy

Episode-2882- Duke Leto on Hush Cryptocurrency and Digital Privacy

Duke Leto joins us today to discuss the cryptocurrency project known as Hush (ticker HUSH). Hush is a dPoW cryptocurrency that is 100% private, with full encryption by default. Built upon zkSNARKs with added enhanced privacy features, it… Continue reading →

Categories
Intelwars International Necessary and Proportionate privacy ¿Quién defiende tus datos?

Chile’s New “Who Defends Your Data?” Report Shows ISPs’ Race to Champion User Privacy

Derechos Digitales’ fourth ¿Quien Defiende Tus Datos? (Who Defends Your Data?) report on Chilean ISPs’ data privacy practices launched today, showing that companies must keep improving their commitments to user rights if they want to hold their leading positions. Although Claro  (América Móvil) remains at the forefront as in 2019’s report, Movistar (Telefónica) and GTD have made progress in all the evaluated categories. WOM lost points and ended in a tie with Entel in the second position, while VTR lagged behind.

Over the last four years, certain transparency practices that once seemed unusual in Latin America have become increasingly more common. In Chile, they have even become a default. This year, all companies evaluated except for VTR received credit for adopting three important industry-accepted best practices: publishing law enforcement guidelines, which help provide a glimpse into the process and standard companies use for analyzing government requests for user data; disclosing personal data processing practices in contracts and policies; and releasing transparency reports.

Overall, the publishing of transparency reports has also become more common. These are critical for understanding a company’s practice of managing user data and its handling of government data requests. VTR is the only company that has not updated its transparency report recently—since May 2019. After the last edition, GTD published its first transparency report and law enforcement guidelines. Similarly, for the first time Movistar has released specific guidelines for authorities requesting access to users’ data in Chile, and received credit for denying legally controversial government requests for users’ data.

Most of the companies also have policies stating their right to provide user notification when there is no secrecy obligation in place or its term has expired. But as in the previous edition, earning a full star in this category requires more than that. Companies have to clearly set up a notification procedure or make concrete efforts to put one in place. Derechos Digitales also urged providers to engage in legislative discussions regarding Chile’s cybercrime bill, in favor of stronger safeguards for user notification. Claro has upheld the right to notification within the country’s data protection law reform and has raised concerns against attempts to increase the data retention period for communications metadata in the cybercrime bill.

Responding to concerns over governments’ use of location data in the context of the COVID pandemic, the new report also sheds light on whether ISPs have made public commitments not to disclose user location data without a prior judicial order, unless that data is anonymized and aggregated. While the pandemic has changed society in many ways, it has not reduced the need for privacy when it comes to sensitive personal data. Companies’ policies should also push back against sensitive personal data requests that target groups rather than individuals. In addition, the study aimed to spot which providers went public about their anonymized and aggregated location data-sharing agreements with private and public institutions. Movistar is the only company that has disclosed such agreements.

Together, the six researched companies account for 88.3% of fixed Internet users and 99.2% of mobile connections in Chile.

This year’s report rates providers in five criteria overall: data protection policies, law enforcement guidelines, defending users in courts or Congress, transparency reports, and user notification. The full report is available in Spanish, and here we highlight the main findings.

Main results

Data Protection Policies and ARCO Rights

Compared to 2019’s edition, Movistar and GTD improved their marks on data protection policies. Companies should not only publish those policies, but commit to support user-centric data protection principles inspired by the bill reforming the data protection law, under discussion in Chilean Congress. GTD has overcome its poor score from 2019, and has earned a full star in this category this year. Movistar received a partial score for failing to commit to the complete set of principles. On the upside, the ISP has devised a specific page to inform users about their ARCO rights (access, rectification, cancellation, and opposition). The report highlights other positive remarks for WOM, Claro, and Entel for providing a specific point of contact for users to demand these rights. WOM went above and beyond, and has made it easier for users to unsubscribe from the provider’s targeted ads database. 

Transparency Reports and Law Enforcement Guidelines

Both transparency reports and law enforcement guidelines have become an industry norm among Chile’s main ISPs. All featured companies have published them, although VTR has failed to disclose an updated transparency report since the 2019 study. Among many advances since the last edition, GTD disclosed its first transparency report, covering government data requests during 2019. The company earned a partial score in this category for not releasing new statistical data about 2020’s requests.

As for law enforcement guidelines, not all companies clearly state the need for a judicial order to hand over different kinds of communication metadata to authorities. Claro, Entel, and GTD have more explicit commitments in this sense. VTR requests a judicial order before carrying out interception measures or handing call records to authorities. However, the ISP does not mention this requirement for other metadata, such as IP addresses. Movistar’s guidelines are detailed about the types of user data that the government can ask for, but it refers to judicial authorization only when addressing the interception of communications.

Finally, WOM’s 2021 guidelines explicitly require a warrant before handing over phone and tower traffic data, as well as geolocation data. As the report points out, in early 2020 WOM was featured in the news as the only ISP to comply with a direct and massive location data request made by prosecutors, a report the company denied. We’ve written about this case as an example of worrisome reverse searches, which target all users in a particular area instead of specific individuals. Directly related to this concern, this year’s report underscores Claro’s and Entel’s commitment to only comply with individualized personal data requests. 

Pushing for User Notification about Data Requests

Claro remains in the lead when it comes to user notification. Beyond stating in company policy that it has a right to notify users when this is not prohibited by law (as the other companies do, except for Movistar), Claro’s policies also describe the user notice procedure for data requests in civil, labor, and family judicial cases. Derechos Digitales points out the ISP has also explored with the Public Prosecutor’s Office ways to implement such notification with regard to criminal cases, once the secrecy obligation has expired. WOM’s transparency report mentions similar efforts, urging authorities to collaborate in providing information to ISPs about the status of investigations and legal cases, so they are aware when a secrecy obligation is no longer in effect. As the company says:

“Achieving advances in this area would allow the various stakeholders to continue to comply with their legal duties and at the same time make progress in terms of transparency and safeguarding users’ rights.”

Having Users’ Backs Before Disproportionate Data Requests and Legislative Proposals

Companies can also stand with their users by challenging disproportionate data requests or defending users’ privacy in Congress. WOM and Claro have specific sections on their websites listing some of their work on this front (see, respectively, the tabs “protocolo de entrega de información a la autoridad” and “relación con la autoridad”). Such reports include Claro’s meetings with Chilean senators who take part in the commission discussing the cybercrime bill. The ISP reports having emphasized concerns about the expansion of the mandatory retention period for metadata, as well as suggesting that the reform of the country’s data protection law should explicitly authorize telecom operators to notify users about surveillance measures. 

Entel and Movistar have received equally high scores in this category. Entel, in particular, has kept up its fight against a disproportionate request made by Chile’s telecommunications regulator (Subtel) for subscriber data. In 2018, the regulator asked for personal information pertaining to the totality of Entel’s customer base in order to share it with private research companies carrying out satisfaction surveys. Other Chilean ISPs received the same request, but only Entel challenged the legal grounds of Subtel’s authority to make such a demand. The case, which was first reported in this category in the last edition, had a new development in late 2019, when the Supreme Court confirmed the sanctions against Entel for not delivering the data, but reduced the company’s fine. Civil society groups Derechos Digitales, Fundación Datos Protegidos, and Fundación Abriendo Datos have recently released a statement stressing how Subtel’s request conflicts with data protection principles, particularly purpose limitation, proportionality, and data security.

Movistar‘s credit in this category also relates to a Subtel request for subscriber data, this one in 2019. The ISP denied the demand, pointing out a legal tension between the agency’s oversight authority to request customer personal data without user consent and privacy safeguards provided by Chile’s Constitution and data protection law that set limits on personal data-sharing.

***

Since its first edition in 2017, Chile’s reports have shown solid and continuous progress, fostering ISP competition toward stronger standards and commitments in favor of users’ privacy and transparency. Derechos Digitales’ work is part of a series of reports across Latin America and Spain adapted from EFF’s Who Has Your Back? report, which for nearly a decade has evaluated the practices of major global tech companies.

Categories
Apple Blazetv Facebook.com google Intelwars Pat gray Pat gray unleashed Pat unleashed privacy Third party tracking

Pat Gray: Apple’s Privacy Campaign is a JOKE

Have you seen Apple’s new privacy campaign? In their latest ad, Apple boasts that it now allows users to choose whether or not apps can track them and collect their personal information.

Apple claims: “Privacy is a fundamental human right. At Apple, it’s also one of our core values. Your devices are important to so many parts of your life. What you share from those experiences, and who you share it with, should be up to you. We design Apple products to protect your privacy and give you control over your information. It’s not always easy. But that’s the kind of innovation we believe in.”

In this clip, Pat Gray reacted to the Apple ad in amusement, calling the move a joke. “First of all, Google paid billions to Apple for Safari [Apple’s default web browser] to route all internet searches through Google,” Pat said. He went on to say that while Apple now gives users more control over third-party tracking, the tech company still licenses Google as Safari’s default search provider, which generates revenue off of internet searches.

Watch the clip below for more. Can’t watch? Download the podcast here.

Want more from Pat Gray?

To enjoy more of Pat’s biting analysis and signature wit as he restores common sense to a senseless world, subscribe to BlazeTV — the largest multi-platform network of voices who love America, defend the Constitution and live the American dream.

Categories
Intelwars privacy Security Student Privacy

Fighting Disciplinary Technologies

An expanding category of software, apps, and devices is normalizing cradle-to-grave surveillance in more and more aspects of everyday life. At EFF we call them “disciplinary technologies.” They typically show up in the areas of life where surveillance is most accepted and where power imbalances are the norm: in our workplaces, our schools, and in our homes.

At work, employee-monitoring “bossware” puts workers’ privacy and security at risk with invasive time-tracking and “productivity” features that go far beyond what is necessary and proportionate to manage a workforce. At school, programs like remote proctoring and social media monitoring follow students home and into other parts of their online lives. And at home, stalkerware, parental monitoring “kidware” apps, home monitoring systems, and other consumer tech monitor and control intimate partners, household members, and even neighbors. In all of these settings, subjects and victims often do not know they are being surveilled, or are coerced into it by bosses, administrators, partners, or others with power over them.

Disciplinary technologies are often marketed for benign purposes: monitoring performance, confirming compliance with policy and expectations, or ensuring safety. But in practice, these technologies are non-consensual violations of a subject’s autonomy and privacy, usually with only a vague connection to their stated goals (and with no evidence they could ever actually achieve them). Together, they capture different aspects of the same broader trend: the appearance of off-the-shelf technology that makes it easier than ever for regular people to track, control, and punish others without their consent.

The application of disciplinary technologies does not meet standards for informed, voluntary, meaningful consent. In workplaces and schools, subjects might face firing, suspension, or other severe punishment if they refuse to use or install certain software—and a choice between invasive monitoring and losing one’s job or education is not a choice at all. Whether the surveillance is happening on a workplace- or school-owned device versus a personal one is immaterial to how we think of disciplinary technology: privacy is a human right, and egregious surveillance violates it regardless of whose device or network it’s happening on.

And even when its victims might have enough power to say no, disciplinary technology seeks a way to bypass consent. Too often, monitoring software is deliberately designed to fool the end-user into thinking they are not being watched, and to thwart them if they take steps to remove it. Nowhere is this more true than with stalkerware and kidware—which, more often than not, are the exact same apps used in different ways.

There is nothing new about disciplinary technology. Use of monitoring software in workplaces and educational technology in schools, for example, has been on the rise for years. But the pandemic has turbo-charged the use of disciplinary technology on the premise that, if in-person monitoring is not possible, ever-more invasive remote surveillance must take its place. This group of technologies and the norms it reinforces are becoming more and more mainstream, and we must address them as a whole.

To determine the extent to which certain software, apps, and devices fit under this umbrella, we look at a few key areas:

The surveillance is the point. Disciplinary technologies share similar goals. The privacy invasions from disciplinary tech are not accidents or externalities: the ability to monitor others without consent, catch them in the act, and punish them is a selling point of the system. In particular, disciplinary technologies tend to create targets and opportunities to punish them where none existed before.

This distinction is particularly salient in schools. Some educational technology, while inviting in third parties and collecting student data in the background, still serves clear classroom or educational purposes. But when the stated goal is affirmative surveillance of students—via face recognition, keylogging, location tracking, device monitoring, social media monitoring, and more—we look at that as a disciplinary technology.

Consumer and enterprise audiences. Disciplinary technologies are typically marketed to and used by consumers and enterprise entities in a private capacity, rather than the police, the military, or other groups we traditionally associate with state-mandated surveillance or punishment. This is not to say that law enforcement and the state do not use technology for the sole purpose of monitoring and discipline, or that they always use it for acceptable purposes. What disciplinary technologies do is extend that misuse.

With the wider promotion and acceptance of these intrusive tools, ordinary citizens and the private institutions they rely on increasingly deputize themselves to enforce norms and punish deviations. Our workplaces, schools, homes, and neighborhoods are filled with cameras and microphones. Our personal devices are locked down to prevent us from countermanding the instructions that others have inserted into them. Citizens are urged to become police, in a digital world increasingly outfitted for the needs of a future police state.

Discriminatory impact. Disciplinary technologies disproportionately hurt marginalized groups. In the workplace, the most dystopian surveillance is used on the workers with the least power. In schools, programs like remote proctoring disadvantage disabled students, Black and brown students, and students without access to a stable internet connection or a dedicated room for test-taking. Now, as schools receive COVID relief funding, surveillance vendors are pushing expensive tools that will disproportionately discriminate against the students already most likely to be hardest hit by the pandemic. And in the home, it is most often (but certainly not exclusively) women, children, and the elderly who are subject to the most abusive non-consensual surveillance and monitoring.

And in the end, it’s not clear that disciplinary technologies even work for their advertised uses. Bossware does not conclusively improve business outcomes, and instead negatively affects employees’ job satisfaction and commitment. Similarly, test proctoring software fails to accurately detect or prevent cheating, instead producing rampant false positives and overflagging. And there’s little to no independent evidence that school surveillance is an effective safety measure, but plenty of evidence that monitoring students and children decreases perceptions of safety, equity, and support, negatively affects academic outcomes, and can have a chilling effect on development that disproportionately affects minoritized groups and young women. If the goal is simply to use surveillance to give authority figures even more power, then disciplinary technology could be said to “work”—but at great expense to its unwilling targets, and to society as a whole.

The Way Forward

Fighting just one disciplinary technology at a time will not work. Each use case is another head of the same Hydra that reflects the same impulses and surveillance trends. If we narrowly fight stalkerware apps but leave kidware and bossware in place, the fundamental technology will still be available to those who wish to abuse it with impunity. And fighting student surveillance alone is untenable when scholarly bossware can still leak into school and academic environments.

The typical rallying cries around user choice, transparency, and strict privacy and security standards are not complete remedies when the surveillance is the consumer selling point. Fixing the spread of disciplinary technology needs stronger medicine. We need to combat the growing belief, funded by disciplinary technology’s makers, that spying on your colleagues, students, friends, family, and neighbors through subterfuge, coercion, and force is somehow acceptable behavior for a person or organization. We need to show how flimsy disciplinary technologies’ promises are; how damaging its implementations can be; and how, for every supposedly reasonable scenario its glossy advertising depicts, the reality is that misuse is the rule, not the exception.

We’re working at EFF to craft solutions to the problems of disciplinary technology, from demanding that anti-virus companies and app stores recognize spyware more explicitly, to pushing companies to design for abuse cases, to exposing the misuse of surveillance technology in our schools and in our streets. Tools that put machines in power over ordinary people are a sickening reversal of how technology should work. It will take technologists, consumers, activists, and the law to put it right.

Categories
Intelwars privacy

Help Bring Dark Patterns To Light

On social media, shopping sites, and even children’s apps, companies are using deceptive user experience design techniques to trick us into giving away our data, sharing our phone numbers and contact lists, and submitting to fees and subscriptions. Every day, we’re exploited for profit through “dark patterns”: design tactics used in websites and apps to manipulate you into doing things you probably would not do otherwise. 

So today, we’re joining Consumer Reports, Access Now, PEN America, and Harry Brignull (founder of DarkPatterns.org), in announcing the Dark Patterns Tip Line. It’s an online platform hosted by Consumer Reports that allows people to submit and highlight deceptive design patterns they see in everyday products and services.

Your submissions will help privacy advocates, policymakers, and agency enforcers hold companies accountable for their dishonest and harmful practices. Especially misleading designs will be featured on the site.

Dark patterns can be deceptive in a variety of ways. For example, a website may trick visitors into submitting to unwanted follow-up emails by making the email opt-out checkbox on a checkout page harder to see: for instance, using a smaller font or placing the opt-out in an inconspicuous place in the user flow. Consider this example from Carfax:

 "Send me special offers and other helpful information from CARFAX" checkbox, already ticked to agree by default.

The screenshot was gathered from a post by Reddit user u/dbilbey to the Asshole Design subreddit in September 2020.

Another example: Grubhub hid a 15% service fee under the misleadingly vague  “taxes and fees” line of its receipt. 

 "This 15% helps Grubhub cover operating costs."

The screenshot was taken directly from the Grubhub iOS app in September 2020.

You can find many more samples of dark patterns on the “sightings” page of the Dark Patterns Tip Line.

The process for submitting a dark pattern to the Tip Line is simple. Just enter the name and type of company responsible, a short description of the deceptive design, and where you encountered it. You can also include a screenshot of the design.  Submitting to the Dark Pattern Tip Line requires you to agree to the Consumer Reports User Agreement and Privacy Policy. The Dark Patterns Tip Line site has some special limitations on Consumer Reports’ use of your email, and the site doesn’t use cookies or web tracking. You can opt-out of some of the permissions granted in the Consumer Reports Privacy Policy here.

 "The box did not allow me to unsubscribe" and the category "felt tricked" selected.

A sample submission to the Dark Patterns Tip Line.

Please share the Tip Line with people you think may be interested in submitting, such as community organizations, friends, family, and colleagues. For now, the Dark Patterns Tip Line is collecting submissions until June 9th.

Help us shine a light on these deceptive designs, and fight to end them, by submitting any dark patterns you’ve come across to the Dark Patterns Tip Line.

Categories
Apple Censorship China Intelwars privacy Surveillance

Apple Censorship and Surveillance in China

Good investigative reporting on how Apple is participating in and assisting with Chinese censorship and surveillance.

Categories
Commentary Intelwars Legal Analysis privacy Security Technical Analysis

How Your DNA—or Someone Else’s—Can Send You to Jail

Although DNA is individual to you—a “fingerprint” of your genetic code—DNA samples don’t always tell a complete story. The DNA samples used in criminal prosecutions are generally of low quality, making them particularly complicated to analyze. They are not very concentrated, not very complete, or are a mixture of multiple individuals’ DNA—and often, all of these conditions are true at once. If a DNA sample is like a fingerprint, analyzing mixed DNA samples in criminal prosecutions can often be like attempting to isolate a single person’s print from the doorknob of a public building after hundreds of people have touched it. Despite the challenges in analyzing these DNA samples, prosecutors frequently introduce those analyses at trial, using tools that have not been reviewed and jargon that can mislead the jury—giving a false sense of scientific certainty to a very uncertain process. This is why it is essential that any DNA analysis tool’s source code be made available for evaluation. It is critical to determine whether the software is reliable enough to be used in the legal system, and what weight its results should be given.

A Breakdown of DNA Data

To understand why DNA software analyses can be so misleading, it helps to know a tiny bit about how the underlying analysis works. To start, DNA sequences are commonly called genes. A more generic way to refer to a specific location in the gene sequence is a “locus” (plural “loci”). The variants of a given gene, or of the DNA found at a particular locus, are called “alleles.” To oversimplify: if a gene is like a highway, the numbered exits are loci, and alleles are the specific towns at each exit.

[P]rosecutors frequently introduce those analyses in trials, using tools that have not been reviewed and jargon that can mislead the jury—giving a false sense of scientific certainty to a very uncertain process.

Forensic DNA analysis typically focuses on around 13 to 20 loci and the alleles present at each locus, which together make up a person’s DNA profile. By looking at a sufficient number of loci whose alleles vary widely across the population, a kind of fingerprint can be established. Put another way, knowing the specific towns and exits a driver passed can help you figure out which highway they drove on.
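To make the idea of a profile concrete, here is a minimal Python sketch, not drawn from any forensic tool, of how a profile could be represented as a mapping from loci to allele pairs and compared locus by locus. The locus names are real forensic markers, but the allele values and the comparison logic are illustrative assumptions only.

```python
# A toy representation of a DNA profile: each locus maps to the pair of alleles
# observed there (one inherited from each parent). The allele values below are
# invented for illustration.

ProfileType = dict[str, tuple[int, int]]

suspect_profile: ProfileType = {
    "D8S1179": (12, 14),
    "D21S11": (28, 30),
    "TH01": (6, 9),
    # ...a real profile would cover roughly 13 to 20 loci
}

crime_scene_profile: ProfileType = {
    "D8S1179": (12, 14),
    "D21S11": (28, 30),
    "TH01": (7, 9),
}

def loci_in_common(a: ProfileType, b: ProfileType) -> int:
    """Count loci where both profiles show the same pair of alleles."""
    shared = set(a) & set(b)
    return sum(1 for locus in shared if sorted(a[locus]) == sorted(b[locus]))

print(loci_in_common(suspect_profile, crime_scene_profile))  # prints 2
```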

To figure out the alleles present in a DNA sample, a scientist chops the DNA into fragments corresponding to different alleles, then uses an electric charge to draw them through a gel in a method called electrophoresis. Different alleles travel at different rates, so the scientist can measure how far each one traveled and look up which allele corresponds to that length. The DNA is also stained with a dye, so the more of it there is, the darker its band will be on the gel.

Analysts infer which alleles are present based on how far they traveled through the gel, and deduce how much DNA is present based on how dark each band is—which works well for an untainted, high-quality sample. Generally, the higher the concentration of cells from an individual and the less the sample is contaminated by any other person’s DNA, the more accurate and reliable the generated DNA profile.
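As a rough illustration of that allele-calling step, here is a short Python sketch. The fragment-length table, intensity scale, and minimum-intensity threshold are invented for illustration; they are not values from any real protocol or instrument.

```python
# A toy allele caller: look up the measured fragment length (in base pairs) in a
# table mapping lengths to allele names at one locus, and refuse to call a band
# that is too faint. All numbers here are invented.

ALLELE_LADDER = {  # fragment length (bp) -> allele designation
    179: "6",
    183: "7",
    187: "8",
    191: "9",
}

def call_allele(measured_bp: int, intensity: float, min_intensity: float = 0.2):
    """Return the allele whose table entry is closest to the measured length,
    or None if the band is too faint to call confidently."""
    if intensity < min_intensity:
        return None  # too faint: an analyst would not count this band
    closest_bp = min(ALLELE_LADDER, key=lambda bp: abs(bp - measured_bp))
    return ALLELE_LADDER[closest_bp]

print(call_allele(184, 0.9))   # "7"  -- strong, unambiguous band
print(call_allele(188, 0.05))  # None -- band too faint to call
```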

The Difficulty of Analyzing DNA Mixtures

Our DNA is found in all of our cells. The more cells we shed, the higher the concentration of DNA that can be found, which generally also means more accurate DNA testing. However, our DNA can also be transferred from one object to another, so it’s possible for your DNA to be found on items you’ve never had contact with or at locations you’ve never been. For example, if you’re sitting in a doctor’s waiting room and scratch your face, your DNA may be found on the magazines on the table next to you that you never flipped through. Your DNA left on a jacket you lent a friend can transfer onto items they brush by or at locations they travel to.

Given the ease with which DNA is deposited, it is no surprise that DNA samples from crime scenes are often a mixture of DNA from multiple individuals, or “donors.” Investigators gather DNA samples by swiping a cotton swab at the location where the perpetrator may have deposited their DNA, such as a firearm, a container of contraband, or the body of a victim. In many cases where the perpetrator’s bodily fluids are not involved, the sample may contain only a tiny amount of the perpetrator’s DNA, perhaps just a few cells, and is likely to also contain the DNA of others. This makes trying to identify whether a person’s DNA is found in a complex DNA mixture a very difficult problem. It’s like having to figure out whether someone drove on a specific interstate when all you have is an incomplete and possibly inaccurate list of towns and exits they passed, all of which could have come from any one of the roads they used. You don’t know the number of roads they drove on, and can only guess at which towns and exits were connected.

Running these DNA mixture samples through electrophoresis produces much noisier results, which often contain errors that suggest additional alleles at a locus or obscure alleles that are present. Human analysts then decide which alleles appear dark enough in the gel to count and which are light enough to ignore. Traditional DNA analysis, at least, worked in this binary way: an allele either counted or did not count as part of a specific DNA donor profile.
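The sketch below, with invented peak heights and thresholds, shows how this binary, threshold-driven approach can change which alleles “count” at a single locus depending on where the analyst draws the line.

```python
# A toy mixture at one locus: each observed allele maps to a peak height
# (how dark the band is). The heights and thresholds are invented.

mixture_peaks = {
    "12": 1450,
    "14": 1300,
    "15": 210,   # faint peak: a real minor contributor, or just noise?
    "17": 95,
}

def called_alleles(peaks: dict[str, int], threshold: int) -> list[str]:
    """Alleles counted as present under a given analytical threshold."""
    return [allele for allele, height in peaks.items() if height >= threshold]

print(called_alleles(mixture_peaks, threshold=50))   # ['12', '14', '15', '17']
print(called_alleles(mixture_peaks, threshold=150))  # ['12', '14', '15']
print(called_alleles(mixture_peaks, threshold=300))  # ['12', '14']
```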

Probabilistic Genotyping Software and Their Problems 

Enter probabilistic genotyping software. The proprietors of these programs—the two biggest players are STRMix and TrueAllele—claim that their products, using statistical modeling, can determine the likelihood that a given DNA profile, or combination of profiles, contributed to a DNA mixture, rather than making a binary call. Prosecutors often describe the analysis from these programs this way: it is X times more likely that the defendant, rather than a random person, contributed to this DNA mixture sample.

However, these tools, like any statistical model, can be constructed poorly. Whether assumptions are incorporated, which ones, and how they are weighted can all cause the results to vary. These programs can be analogized to the election forecast models from FiveThirtyEight, The Economist, and The New York Times: they all use statistical modeling, but the final numbers differ because of myriad design decisions by each publisher. Probabilistic genotyping software is the same. Each product uses statistical modeling, but the output probability depends on how that model is built. Like the different election models, different probabilistic DNA programs take diverging approaches to which factors are considered, at what thresholds, and whether they are counteracted or ignored. Additionally, input from human analysts, such as the hypothesized number of people who contributed to the DNA mixture, also changes the calculation. If this is less rigorous than you expected, that’s exactly the point—and the problem. In our highway analogy, this is like a software program that purports to tell you how likely it is that you drove on a specific road based on a list of towns and exits you passed. Not only is the result affected by the completeness and accuracy of the list, but the map the software uses, and the data available to it, matter tremendously as well.

If this is less rigorous than you expected, that’s exactly the point—and the problem. 
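To make that concrete, here is a deliberately oversimplified, single-locus likelihood ratio sketch in Python. It is a textbook-style toy, not how STRMix, TrueAllele, or any real program computes its numbers: it ignores peak heights, drop-out, drop-in, and relatedness, assumes exactly two contributors, and uses invented allele frequencies. Even so, it shows how the same evidence can yield very different numbers depending on the assumptions and reference data built into the model.

```python
# Toy scenario: a mixture at one locus shows exactly alleles A, B, C, and D, and
# the suspect is an A/B heterozygote. Compare two hypotheses:
#   H1: the contributors are the suspect plus one unknown person (who must be C/D).
#   H2: the contributors are two unknown people who together show exactly A, B, C, D.
# Under standard population-genetics simplifications (independent alleles), the
# likelihood ratio reduces to a simple formula. Allele frequencies are invented.

def lr_single_locus(freqs: dict[str, float]) -> float:
    pA, pB, pC, pD = freqs["A"], freqs["B"], freqs["C"], freqs["D"]
    p_evidence_h1 = 2 * pC * pD             # the unknown is a C/D heterozygote
    p_evidence_h2 = 24 * pA * pB * pC * pD  # every pairing of two heterozygotes covering A-D
    return p_evidence_h1 / p_evidence_h2    # simplifies to 1 / (12 * pA * pB)

# The same evidence, evaluated against two hypothetical reference databases:
print(lr_single_locus({"A": 0.05, "B": 0.05, "C": 0.10, "D": 0.10}))  # about 33
print(lr_single_locus({"A": 0.01, "B": 0.02, "C": 0.10, "D": 0.10}))  # about 417
```

Real programs differ along far more dimensions than the allele-frequency database assumed here, which is part of why their outputs can diverge so widely.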

Because of these complex variables, a probability result is always specific to how the program was designed, the conditions at the lab, and any additional or discretionary input used during the analysis. In practice, different DNA analysis programs have produced substantially different probabilities for whether a defendant’s DNA appeared in the same DNA sample, with discrepancies sometimes reaching factors in the millions.

And yet it is impossible to determine which result, or which software, is the most accurate, because there is no objective truth against which those numbers can be compared. We simply cannot know the true probability that a person contributed to a DNA mixture. In controlled testing, we know whether a person’s DNA was part of a DNA mixture or not, but there is no way to verify whether it was 100 times more likely, or a million times more likely, that the donor’s DNA rather than an unknown person’s contributed to the mixture. And while there is no reason to assume that the tool that outputs the highest statistical likelihood is the most accurate, the software’s designers may nevertheless be incentivized to program their product to output larger numbers, because “1 quintillion” sounds more precise than “10,000”—especially when there is no way to objectively evaluate the accuracy.

DNA Software Review is Essential

Because of these issues, it is critical to examine the source code of any DNA analysis software used in the legal system. We need to know exactly how these statistical models are built, and looking at the source code is the only way to discover non-obvious coding errors. Yet the companies that created these programs have fought against releasing their source code—even when it would be examined only by the defendant’s legal team and sealed under a court order. In the rare instances where the software code has been reviewed, researchers have found programming errors with the potential to implicate innocent people.

Forensic DNA analyses have the whiff of science—but without source code review, it’s impossible to know whether or not they pass the smell test. Despite the opacity of their design and the impossibility of measuring their accuracy, these programs have become widely used in the legal system. EFF has challenged—and continues to challenge—the failure to disclose the source code of these programs. The continued use of these tools, the accuracy of which cannot be ensured, threatens the administration of justice and the reliability of verdicts in criminal prosecutions.
