
Visa Wants to Buy Plaid, and With It, Transaction Data for Millions of People

Visa, the credit card network, is trying to buy financial technology company Plaid for $5.3 billion. The merger is bad for a number of reasons. First and foremost, it would allow a giant company with a controlling market share and a history of anticompetitive practices to snap up its fast-growing competition in the market for payment apps. But Plaid is more than a potential disruptor: it’s also sitting on a massive amount of financial data acquired through questionable means. By buying Plaid, Visa is buying all of its data. And Plaid’s users—even those protected by California’s new privacy law—can’t do anything about it.

Since mergers and acquisitions often fall outside the purview of privacy laws, only a pointed intervention by government authorities can stop the sale. Thankfully, this month, the US Department of Justice filed a lawsuit to do just that. This merger is about more than just competition in the financial technology (fintech) space; it’s about the exploitation of sensitive data from hundreds of millions of people. Courts should stop the merger to protect both competition and privacy.

Visa’s Monopolistic Hedge

The Department of Justice lawsuit outlines a very simple motive for the acquisition. Visa, it says, already controls around 70% of the digital debit card payment market, from which it earned approximately $2 billion last year. (Mastercard, at 25% market share, is Visa’s only significant competitor.) Thanks to network effects with merchants and consumers, plus exclusivity clauses in its agreements with banks, Visa is comfortably insulated from threats by traditional competitors. But apps like Venmo have started—just barely—to eat away at the digital transaction market. And Plaid sits at the center of that new wave, providing the infrastructure that Venmo and hundreds of other apps use to send money around the world.

According to the DoJ, a Visa executive predicted that Plaid would undercut its debit card processing business eventually, and that buying Plaid would be an “insurance policy” to protect Visa’s dominant market share. The lawsuit alleges that Plaid already had plans to leverage its relationships with banks and consumers to launch a new debit service. Seen through this lens, the acquisition is a simple preemptive strike against an emerging threat in one of Visa’s core markets. Challenging the purchase of a smaller company by a giant one, under the theory that the purchase eliminates future competition rather than creating a monopoly in the short term, is a strong step for the DoJ, and one we hope to see repeated in technology markets.

But users’ interest in the Visa-Plaid merger should extend beyond fears of market concentration. Both companies are deeply involved in the collection and monetization of personal data. And as the DoJ’s lawsuit underscores, “Acquiring Plaid would also give Visa access to Plaid’s enormous trove of consumer data, including real-time sensitive information about merchants and Visa’s rivals.”

Plaid, Yodlee, and the sorry state of fintech privacy

Plaid is what’s known as a “data aggregator” in the fintech space. It provides the infrastructure that connects banks to financial apps like Venmo and Coinbase, and its customers are usually apps that need programmatic access to a bank account.

It works like this: first, an app developer installs code from Plaid. When a user downloads the app, Plaid asks the user for their bank credentials, then logs in on their behalf. Plaid then has access to all the information the bank would normally share with the user, including balances, assets, transaction history, and debt. It collects data from the bank and passes it along to the app developer. From then on, the app can use Plaid’s services to initiate electronic transfers to and from the bank account, or to collect new information about the user’s activity.
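
To make that flow concrete, here is a minimal sketch of what an app's integration with a Plaid-style aggregator roughly looks like. The endpoint paths, field names, and credentials are hypothetical stand-ins, not Plaid's actual API; the point is that once the user hands over their bank login, the app (and the aggregator sitting in the middle) holds a long-lived token it can use to pull fresh data at any time.

```python
# Hypothetical sketch of an app's integration with a Plaid-style data
# aggregator. Endpoint paths and field names are illustrative stand-ins,
# not Plaid's real API.
import requests

AGGREGATOR = "https://api.aggregator.example.com"   # stand-in for Plaid
APP_ID, APP_SECRET = "my-app-id", "my-app-secret"   # the app's credentials

def exchange_link_token(public_token: str) -> str:
    """After the user types their bank password into the aggregator's
    login widget, the app trades the short-lived token it receives for a
    long-lived access token."""
    resp = requests.post(f"{AGGREGATOR}/token/exchange", json={
        "app_id": APP_ID,
        "app_secret": APP_SECRET,
        "public_token": public_token,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

def fetch_transactions(access_token: str, start: str, end: str) -> list:
    """With that token, the app (and the aggregator in the middle) can pull
    balances and transaction history on an ongoing basis."""
    resp = requests.post(f"{AGGREGATOR}/transactions/get", json={
        "app_id": APP_ID,
        "app_secret": APP_SECRET,
        "access_token": access_token,
        "start_date": start,
        "end_date": end,
    })
    resp.raise_for_status()
    return resp.json()["transactions"]

if __name__ == "__main__":
    token = exchange_link_token("public-token-from-login-widget")
    for txn in fetch_transactions(token, "2020-01-01", "2020-11-01"):
        print(txn["date"], txn["name"], txn["amount"])
```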

In a shadowy industry, Plaid has tried to cultivate a reputation as the “trustworthy” data aggregator. Envestnet/Yodlee, a direct competitor, has long sold consumer behavior data to marketers and hedge funds. The company claims the data are “anonymous,” but reporters have discovered that that’s not always the case. And Finicity, another financial data aggregator, uses its access to moonlight as a credit reporting agency. A glance at data broker listings shows a thriving marketplace for individually identified transaction data, with dozens of sellers and untold numbers of buyers. But Plaid is adamant that it doesn’t sell or monetize user data beyond its core business proposition. Until recently, Plaid has been mentioned alongside Yodlee mostly to contrast the two companies’ approaches, when it’s been mentioned at all.

Now, in the wake of the Visa announcement, two new lawsuits (Cottle et al v. Plaid Inc and Evans v. Plaid Inc) claim that Plaid has exploited users all along. Chief among the accusations is that Plaid’s interface misleads users into sharing their bank passwords with the company, a practice that plaintiffs allege runs afoul of California’s anti-phishing law. The lawsuits also claim that Plaid collected much more data than was necessary, deceived users about what it was doing, and made money by selling that data back to the apps which used it.

EFF is not involved in either lawsuit against Visa/Plaid, nor are we taking any position on the validity of the legal claims. We’re not privy to any information that hasn’t been reported publicly. But many of the facts presented by the lawsuits are relatively straightforward, and can be verified with Plaid’s own documentation. For example, at the time of writing, https://plaid.com/demo/ still hosts an example sign-in flow with Plaid. Plaid does not dispute that it collects users’ real bank credentials in order to log in on their behalf. You can see for yourself what that looks like: the interface puts the bank’s logo front and center, and looks for all the world like a secure OAuth page. Ask yourself whether, seeing this for the first time, you’d really understand who’s getting what information.

A series of mobile application screenshots showing how a user would log in to their Citi bank account with Plaid.

Who’s getting your credentials? Not just Citi.

Many users might not realize the scope of the data that Plaid receives. Plaid’s Transactions API gives both Plaid and app developers access to a user’s entire transaction and balance history, including a geolocation and category for each purchase made. Plaid’s other APIs grant access to users’ liabilities, including credit card debt and student loans; their investments, including individual stocks and bonds; and identity information, including name, address, email, and phone number.
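
For a sense of how granular that access is, here is a hypothetical example of the kind of record a transactions endpoint might return for a single purchase. The field names are illustrative, loosely modeled on Plaid's public documentation; note the per-purchase category and geolocation.

```python
# Hypothetical transaction record from a data aggregator (field names are
# illustrative). One of these exists for every purchase in the linked
# account's history.
example_transaction = {
    "account_id": "k5Vr3yG7wQ",
    "amount": 12.74,
    "iso_currency_code": "USD",
    "date": "2020-11-13",
    "name": "Corner Coffee #42",
    "category": ["Food and Drink", "Coffee Shop"],
    "location": {                       # per-purchase geolocation
        "address": "123 Main St",
        "city": "Oakland",
        "region": "CA",
        "country": "US",
        "lat": 37.8044,
        "lon": -122.2712,
    },
    "payment_channel": "in store",
    "pending": False,
}
```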

A screenshot from a mobile application, stating "Plaid Demo uses Plaid to link your bank," with a button labeled "continue" at the bottom.

A screenshot from Plaid’s demo. What, exactly, does “link” mean?

For some products, Plaid’s demo will throw up a dialog box asking users to “Allow” the app to access certain kinds of data. (It doesn’t explain that Plaid will have access as well.) When we tested it, access to the “transactions,” “auth,” “identity,” and “investments” products didn’t trigger any prompts beyond the default “X uses Plaid to link to your bank” screen. It’s unclear how users are supposed to know what information an app will actually get, much less what they’ll do with it. And once a user enters their password, the data starts flowing.

Users can view the data they’re sharing through Plaid, and revoke access, after creating an account at my.plaid.com. This tool, which was apparently introduced in mid-2018 (after GDPR went into effect in Europe), is useful—for users who know where to look. But nothing in the standard “sign in with Plaid” flow directs users to the tool, or even lets them know it exists.

On the whole, it’s clear that Plaid was using questionable design practices to “nudge” people into sharing sensitive information.

What’s in it for Visa?

Whatever Plaid has been doing with its data until now, things are about to change.

Plaid is a hot fintech startup, but Visa thinks it can squeeze more out of Plaid than the company is making on its own. Visa is paying approximately 50 times Plaid’s annual revenue to acquire the company—a “very steep” sum by traditional metrics.

A huge part of Plaid’s value is its data. Like a canal on a major trade route, it sits at a key point between users and their banks, observing and directing flows of personal information both into and out of the financial system. Plaid currently makes money by charging apps for access to its system, like levying tariffs on those who pass through its port. But Visa is positioned to do much more.

For one, Visa already runs a targeted-advertising wing using customer transaction data, and thus has a straightforward way to monetize Plaid’s data stream. Visa aggregates transaction data from its own customers to create “audiences” based on their behavior, which it sells to marketers. It offers over two hundred pre-configured categories of users, including “recently engaged,” “international traveler – Mexico,” and “likely to have recently shifted spend from gasoline to public transportation services.” It also lets clients create custom audiences based on what people bought, where they bought it, and how much they spent.
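
As a rough illustration of how such "audiences" can be derived from records like the one above, here is a short sketch. The segment definitions are hypothetical paraphrases of Visa's published category names, not its actual methodology.

```python
# Illustrative sketch of deriving advertising "audiences" from per-user
# transaction histories. Segment logic is hypothetical.
from collections import defaultdict

def build_audiences(transactions_by_user: dict) -> dict:
    """transactions_by_user maps a user id to a list of transaction dicts
    shaped like example_transaction above."""
    audiences = defaultdict(set)
    for user_id, txns in transactions_by_user.items():
        gas = sum(t["amount"] for t in txns
                  if "Gas Stations" in t["category"])
        transit = sum(t["amount"] for t in txns
                      if "Public Transportation" in t["category"])
        if transit > gas:
            audiences["shifted spend from gasoline to public transit"].add(user_id)
        if any(t.get("location", {}).get("country") == "MX" for t in txns):
            audiences["international traveler - Mexico"].add(user_id)
        if any("Jewelry" in t["category"] for t in txns):
            audiences["recently engaged (jewelry purchases as a proxy)"].add(user_id)
    return audiences
```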

A page from a Visa brochure for advertisers, inviting the reader to "discuss the best data points for your needs," including "category and brand spending," "time filters," "spend filters," and "travel filters."

Source: https://web.archive.org/web/20201125173340/https://usa.visa.com/dam/VCOM/global/run-your-business/documents/vsa218-08-visa-catalog-digital-may-2019-v2.pdf

Plaid’s wealth of transaction, liability, and identity information is good for more than selling ads. It can also be used to build financial profiles for credit underwriting, an obviously attractive application for credit-card magnate Visa, and to perform “identity matching” and other useful services for advertisers and lenders. Documents uncovered by the DoJ show that Visa is well aware of the value in Plaid’s data.

A sketch of a volcano partially submerged underwater. The words "bank connections, account validation, asset confirmation" are written on the volcano above the waterline and "fraud tools, identity matching, credit decisioning/underwriting, payment rails and delivery, advertising and marketing, financial management" are below the waterline.

Illustration by a Visa executive of Plaid’s untapped potential, included in Department of Justice filings. The executive “analogized Plaid to an island ‘volcano’ whose current capabilities are just ‘the tip showing above the water’ and warned that ‘what lies beneath, though, is a massive opportunity – one that threatens Visa.’” Note “identity matching,” “credit decisioning,” and “advertising and marketing”—all data-based businesses.

Through Plaid, Visa is about to acquire transaction data from millions of users of its competitors: banks, other credit and debit cards, and fintech apps. As TechCrunch has reported, “Buying Plaid is insurance against disruption for Visa, and also a way to know who to buy.” The DoJ went deeper into the data grab’s anticompetitive effects: “With this insight into which fintechs are more likely to develop competitive alternative payments methods, Visa could take steps to partner with, buy out, or otherwise disadvantage these up-and-coming competitors,” positioning Visa to “insulate itself from competition.”

The Data-Sale Loophole

The California Privacy Rights Act, which amends the California Consumer Privacy Act (CCPA), was passed by California voters in early November. It’s the strongest law of its kind in the U.S., and it gives people a general right to opt out of the sale of their data. In addition, the Gramm-Leach-Bliley Act (GLBA), a federal law regulating financial institutions, allows Americans to tell financial institutions not to share their personal financial information. Since the CPRA exempts businesses which are already subject to GLBA, it’s not clear which of the two governs Plaid. But neither law restricts the transfer of data during a merger or acquisition. Plaid’s own privacy policy claims, loudly and clearly, that “We do not sell or rent personal information that we collect.” But elsewhere in the same section, Plaid admits it may share data “in connection with a change in ownership or control of all or a part of our business (such as a merger, acquisition, reorganization, or bankruptcy).” In other words, the data was always for sale under one condition: you had to buy everything.

That’s what Visa is doing. It’s acquiring everything Plaid has ever collected and—more importantly—access to data flows from everyone who uses a Plaid-connected app. It can monetize the data in ways Plaid never could. And the move completely side-steps restrictions on old-fashioned data sales.

Stop the Merger

It’s easy to draw parallels from the Visa/Plaid deal to other recent mergers. Some, like Facebook buying Instagram or Google buying YouTube, gave large companies footholds in new or emerging markets. Others, like Facebook’s purchase of Onavo, gave them data they could use to surveil both users and competitors. Still others, like Google’s acquisitions of DoubleClick and Fitbit, gave them abundant new inflows of personal information that they could fold into their existing databases. Visa’s acquisition of Plaid does all three.

The DoJ’s lawsuit argues that the acquisition would “unlawfully maintain Visa’s monopoly” and “unlawfully extend [Visa’s] advantage” in the U.S. online debit market, violating both the Clayton and Sherman antitrust acts. The courts should block Visa from buying up a nascent competitor and torrents of questionably-acquired data in one move.

Beyond this specific case, Congress should take a hard look at the trend of data-grab mergers taking place across the industry. New privacy laws often regulate the sharing or sale of data across company boundaries. That’s great as far as it goes—but it’s completely sidestepped by mergers and acquisitions. Visa, Google, and Facebook don’t need to buy water by the bucket, they can just buy the well. Moreover, analysts predict that this deal, if allowed to go through, could set off a spree of other fintech acquisitions. It may have already begun: just months after Visa announced its intention to buy Plaid, Mastercard (Visa’s rival in the debit duopoly) began the process of acquiring Plaid competitor Finicity. It’s long past time for better merger review and meaningful, enforceable restrictions on how companies can use our personal information.


MSM Wants You To CANCEL Your Thanksgiving

The mainstream media is telling people to cancel their Thanksgiving. They say it’s to prevent the spread of COVID-19. Social media users content with living in tyranny are already shaming those who are choosing to continue living their lives.

“At the risk of sounding like the Thanksgiving Grinch, let me be clear: A big feast with all of your loved ones is unnecessary, perhaps even immoral, during a global pandemic,” writes Suzette Hackney for USA Today.  “There really should be no debate. COVID-19 has canceled traditional Thanksgiving,” Hackney adds.

These puppets for the draconian elitists in government and the banking system desperately want your life to be upended and to destroy what it means to be a human. Why do you think they are pushing masks, which have been shown to be ineffective at preventing the coronavirus? Because it’s a dehumanizing shame ritual.

CDC Study: Most COVID-19 Cases Were Admitted Mask Wearers

Last week, Chicago’s mayor issued a stay-at-home advisory effective for 30 days. Michigan Gov. Gretchen Whitmer shut down in-person classes at high schools and colleges, along with indoor dining, casinos, and movie theaters for three weeks. Washington Gov. Jay Inslee restricted indoor gatherings, eat-in restaurants, and shuttered gyms for at least four weeks. On Monday, officials in Philadelphia announced that all public or private indoor gatherings of any size are banned at least through Jan. 1. California Gov. Gavin Newsom also announced that indoor dining, gyms, and movie theaters must either remain closed or shut down in 41 of the state’s 58 counties. –USA Today

Ohio & Illinois Order More Lockdowns and Restrictions As COVID-19 Cases Surge

This tyranny will not end until we, the public as a whole, decide that we’ve had enough. It’s up to us. The mainstream media is also propagandizing the public in an attempt to convince them to let go of all of their freedom, including privacy within their own homes.

The problem is what people are doing behind closed doors. It’s what we don’t see that is contributing to surging numbers of cases. –Suzette Hackney for USA Today

The “hope” the mainstream media is sticking to is that a vaccine is on the horizon.  Just roll up your sleeve and take this rushed concoction of God only knows what and everything will be fine. Except, they have already said a vaccine won’t make things go back to normal.

They Moved The Goalposts…AGAIN!: “It’s Not Over When The Vaccine Arrives”

The post MSM Wants You To CANCEL Your Thanksgiving first appeared on SHTF Plan – When It Hits The Fan, Don't Say We Didn't Warn You.



Video Analytics User Manuals Are a Guide to Dystopia

A few years ago, when you saw a security camera, you may have thought that the video feed went to a VCR somewhere in a back office that could only be accessed when a crime occurred. Or maybe you imagined a sleepy guard who only paid half attention, and only when they discovered a crime in progress. In the age of internet connectivity, it’s easy to imagine footage sitting on a server somewhere, with any image inaccessible except to someone willing to fast-forward through hundreds of hours of footage.

That may be how it worked in 1990s heist movies, and it may be how a homeowner still sorts through their own home security camera footage. But that’s not how cameras operate in today’s security environment. Instead, advanced algorithms are watching every frame on every camera and documenting every person, animal, vehicle, and backpack as they move through physical space, and thus from camera to camera, over an extended period of time.

The term “video analytics” seems boring, but don’t confuse it with how many views you got on your YouTube “how to poach an egg” tutorial. In a law enforcement or private security context, video analytics refers to using machine learning, artificial intelligence, and computer vision to automate ubiquitous surveillance. 

Through the Atlas of Surveillance project, EFF has found more than 35 law enforcement agencies that use advanced video analytics technology. That number is steadily growing as we discover new vendors, contracts, and capabilities. To better understand how this software works, who uses it, and what it’s capable of, EFF has acquired a number of user manuals. And yes, they are even scarier than we thought. 

Briefcam, which is often packaged with Genetec video technology, is frequently used at real-time crime centers. These are police surveillance facilities that aggregate camera footage and other surveillance information from across a jurisdiction. Dozens of police departments use Briefcam to search through hours of footage from multiple cameras in order to, for instance, zero in on a particular face or a specific colored backpack. This power of video analytics software would be particularly scary if used to identify people out exercising their First Amendment right to protest.

Avigilon systems are a bit more opaque, since they are often sold to businesses, which aren’t subject to the same transparency laws. In San Francisco, for instance, Avigilon provides the cameras and software for at least six business improvement districts (BIDs) and Community Benefit Districts (CBDs). These districts blanket neighborhoods in surveillance cameras and relay the footage back to a central control room. Avigilon’s video analytics can undertake object identification (such as whether things are cars and people), license plate reading, and potentially face recognition.

You can read the Avigilon user manual here, and the Briefcam manual here.

But what exactly are these software systems’ capabilities? Here’s what we learned:

Pick a Face, Track a Face, Rate a Face

Instructions on how to select a face

If you’re watching video footage on Briefcam, you can select any face, then add it to a “watchlist.” Then, with a few more clicks, you can retrieve every piece of video you have with that person’s face in it. 

Briefcam assigns all face images 1-3 stars. One star: the AI can’t even recognize it as a person. Two stars: medium confidence. Three stars: high confidence.  
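
A minimal sketch of how that rating and watchlist lookup could work is below. The confidence thresholds and data shapes are our own assumptions, not Briefcam's.

```python
# Hypothetical sketch of star-rating and watchlist retrieval. Thresholds
# and data shapes are assumptions, not Briefcam's actual values.
def stars(confidence: float) -> int:
    if confidence < 0.3:
        return 1          # can't even confirm this is a person
    if confidence < 0.7:
        return 2          # medium confidence
    return 3              # high confidence

def footage_with_face(detections, watchlisted_face_id):
    """detections: iterable of (camera, timestamp, face_id, confidence)
    tuples produced by the face matcher."""
    return [
        (camera, timestamp)
        for camera, timestamp, face_id, confidence in detections
        if face_id == watchlisted_face_id and stars(confidence) >= 2
    ]
```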

Detection of Unusual Events

A chart showing the difference between the algorithms.

Avigilon has a pair of algorithms that it uses to predict what it calls “unusual events.” 

The first can detect “unusual motions,” essentially patterns of pixels that don’t match what you’d normally expect in the scene. It takes two weeks to train this self-learning algorithm.  The second can detect “unusual activity” involving cars and people. It only takes a week to train. 

Also, there’s “Tampering Detection” which, depending on how you set it, can be triggered by a moving shadow:

Enter a value between 1-10 to select how sensitive a camera is to tampering Events. Tampering is a sudden change in the camera field of view, usually caused by someone unexpectedly moving the camera. Lower the setting if small changes in the scene, like moving shadows, cause tampering events. If the camera is installed indoors and the scene is unlikely to change, you can increase the setting to capture more unusual events.
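
Conceptually, that sensitivity knob just scales how much the scene has to change between frames before an event fires. Here is a toy version; the thresholds are invented for illustration and have nothing to do with Avigilon's internals.

```python
# Toy tampering detector: the 1-10 sensitivity setting scales a
# frame-difference threshold. All numbers are invented for illustration.
import numpy as np

def tampering_event(prev_frame: np.ndarray, frame: np.ndarray,
                    sensitivity: int) -> bool:
    """Frames are grayscale arrays with values in [0, 255]; sensitivity
    runs from 1 (least sensitive) to 10 (most sensitive)."""
    change = np.mean(np.abs(frame.astype(int) - prev_frame.astype(int))) / 255
    threshold = 0.55 - 0.05 * sensitivity  # sensitivity 10 trips at ~5% change
    return change > threshold              # e.g. shadows may trip high settings
```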

Pink Hair and Short Sleeves 

Color tool

With Briefcam’s shade filter, a person searching a crowd could filter by the color and length of items of clothing, accessories, or even hair. Briefcam’s manual even states the program can search a crowd or a large collection of footage for someone with pink hair. 

In addition, users of BriefCam can search specifically by what a person is wearing and other “personal attributes.” Law enforcement attempting to sift through crowd footage or hours of video could search for someone by specifying blue jeans or a yellow short-sleeved shirt.
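
Functionally, that kind of search is just a filter over the attributes the classifier has already attached to each detected person, something like the following sketch (attribute names are hypothetical).

```python
# Hypothetical attribute search over people detected in footage. Attribute
# names are invented; real systems expose similar filters in a GUI.
def matches(person: dict, query: dict) -> bool:
    return all(person.get(key) == value for key, value in query.items())

def search(detections: list, **query) -> list:
    return [person for person in detections if matches(person, query)]

# Example queries an operator might run:
#   search(detections, hair_color="pink")
#   search(detections, lower_wear="jeans", lower_color="blue",
#          upper_sleeve="short", upper_color="yellow")
```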

Man, Woman, Child, Animal

BriefCam sorts people and objects into specific categories to make them easier for the system to search for. BriefCam breaks people into the three categories of “man,” “woman,” and “child.” Scientific studies show that this type of categorization can misidentify gender nonconforming, nonbinary, trans, and disabled people whose bodies may not conform to the rigid criteria the software looks for when sorting people. Such misidentification can have real-world harms, like triggering misguided investigations or denying access.

The software also breaks down other categories, including distinguishing between different types of vehicles and recognizing animals.

Proximity Alert

An example of the proximity filter

In addition to monitoring the total number of objects in a frame or the relative size of objects, BriefCam can detect proximity between people and the duration of their contact. This might make BriefCam a prime candidate for “COVID-19 washing,” or rebranding invasive surveillance technology as a potential solution to the current public health crisis. 
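
The underlying computation is simple: track the distance between every pair of detected people frame by frame and add up how long each pair stays close. A sketch, with made-up units and thresholds:

```python
# Illustrative proximity/contact-duration calculation. Distance units,
# thresholds, and frame rate are made up for the example.
from collections import defaultdict
from itertools import combinations
from math import dist

def contact_durations(frames, max_distance=2.0, seconds_per_frame=1.0):
    """frames: a list of {person_id: (x, y)} position dicts, one per frame."""
    durations = defaultdict(float)
    for positions in frames:
        for a, b in combinations(sorted(positions), 2):
            if dist(positions[a], positions[b]) <= max_distance:
                durations[(a, b)] += seconds_per_frame
    return durations   # e.g. {("person_12", "person_31"): 47.0}
```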

Avigilon also claims it can detect skin temperature, raising another possible assertion of public health benefit. But, as we’ve argued before, remote thermal imaging can often be very inaccurate and can fail to detect virus carriers who are asymptomatic.

Public health is a collective effort. Deploying invasive surveillance technologies that could easily be used to monitor protestors and track political figures is likely to breed more distrust of the government. This will make public health collaboration less likely, not more. 

Watchlists 

One feature available with both Briefcam and Avigilon is watchlists, and we don’t mean a notebook full of names. Instead, the systems allow you to upload folders of faces and spreadsheets of license plates, and then the algorithm will find matches and track the targets’ movement. The underlying watchlists can be extremely problematic. For example, EFF has looked at hundreds of policy documents for automated license plate readers (ALPRs) and it is very rare for an agency to describe the rules for adding someone to a watchlist.
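
For license plates, the mechanics are about as simple as it sounds: load a spreadsheet of plates, compare every camera read against it, and raise an alert on a hit. A sketch, with a made-up file format:

```python
# Hypothetical plate-watchlist check. The CSV format and alert shape are
# made up; the point is how little stands between "on the list" and "alert".
import csv

def load_watchlist(path: str) -> set:
    with open(path, newline="") as f:
        return {row["plate"].strip().upper() for row in csv.DictReader(f)}

def check_read(plate: str, camera: str, timestamp: str, watchlist: set) -> dict:
    if plate.upper() in watchlist:
        return {"alert": True, "plate": plate,
                "camera": camera, "time": timestamp}
    return {"alert": False}
```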

Vehicles Worldwide 

Often, ALPRs are associated with England, the birthplace of the technology, and the United States, where it has metastasized. But Avigilon already has its sights set on new markets and has programmed its technology to identify license plates across six continents. 

It’s worth noting that Avigilon is owned by Motorola Solutions, the same company that operates the infamous ALPR provider Vigilant Solutions.

Conclusion

We’re heading into a dangerous time. The lack of oversight of police acquisition and use of surveillance technology has dangerous consequences for those misidentified or caught up in the self-fulfilling prophecies of AI policing.

In fact, Dr. Rashall Brackney, the Charlottesville Police Chief, described these video analytics at a recent panel as perpetuating racial bias. “[These technologies] are often incorrect,” she said. “Over and over they create false positives in identifying suspects.”

This new era of video analytics capabilities causes at least two problems. First, police could rely more and more on this secretive technology to dictate who to investigate and arrest by, for instance, identifying the wrong hooded and backpacked suspect. Second, people who attend political or religious gatherings will justifiably fear being identified, tracked, and punished. 

Over a dozen cities across the United States have banned government use of face recognition, and that’s a great start. But this only goes so far. Surveillance companies are already planning ways to get around these bans by using other types of video analytic tools to identify people. Now is the time to push for more comprehensive legislation to defend our civil liberties and hold police accountable. 

To learn more about Real-Time Crime Centers, read our latest report here.

Banner image source: Mesquite Police Department pricing proposal.


Introducing Cover Your Tracks!

Today, we’re pleased to announce Cover Your Tracks, the newest edition and rebranding of our historic browser fingerprinting and tracker awareness tool Panopticlick. Cover Your Tracks picks up where Panopticlick left off. Panopticlick was about letting users know that browser fingerprinting was possible; Cover Your Tracks is about giving users the tools to fight back against the trackers, and improve the web ecosystem to provide privacy for everyone.

A demonstration of the new, green Cover Your Tracks website, which uses animal paw prints to illustrate the concept of tracking and fingerprinting your browser. The user clicks "Test your fingerprint" to get results.

A screen capture of the front page of coveryourtracks.eff.org. The mouse clicks on “Test your browser” button, which loads a results page with a summary of protections the browser has in place against fingerprinting and tracking. The mouse scrolls down to toggle to “detailed view”, which shows more information about each metric, such as further information on System Fonts, Language, and AudioContext fingerprint, among many other metrics.

Over a decade ago, we launched Panopticlick as an experiment to see whether the different characteristics that a browser communicates to a website, when viewed in combination, could be used as a unique identifier that tracks a user as they browse the web. We asked users to participate in an experiment to test their browsers, and found that overwhelmingly the answer was yes—browsers were leaking information that allowed web trackers to follow their movements.

A screenshot of the older orange Panopticlick website, which shows a human fingerprint graphic.

The old Panopticlick website.

In this new iteration, Cover Your Tracks aims to make browser fingerprinting and tracking more understandable to the average user.  With helpful explainers accompanying each browser characteristic and how it contributes to their fingerprint, users get an in-depth look into just how trackers can use their browser against them.

Our browsers leave traces of identifiable information just like an animal might leave tracks in the wild. These traces can be combined into a unique identifier which follows users’ browsing of the web, like wildlife that has been tagged by an animal tracker. And, on the web as in the wild, one of the best ways to confuse trackers and make it hard for them to identify you individually is to blend in with the crowd. Some browsers are able to protect their users by making all instances of their browser look the same, regardless of the computer each is running on. In this way, there is strength in numbers. Users can also “cover their tracks,” protecting themselves by installing extensions like our own Privacy Badger.
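
At its core, a fingerprinting script concatenates many such characteristics and reduces them to a single identifier. The sketch below is a simplified model of the idea, not Cover Your Tracks' actual code; real trackers measure many more signals (canvas, WebGL, AudioContext, installed fonts, and so on).

```python
# Simplified model of browser fingerprinting: many characteristics are
# combined and hashed into one identifier. Not Cover Your Tracks' code.
import hashlib

def fingerprint(characteristics: dict) -> str:
    blob = "|".join(f"{key}={characteristics[key]}"
                    for key in sorted(characteristics))
    return hashlib.sha256(blob.encode()).hexdigest()

print(fingerprint({
    "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) ...",
    "screen": "1680x1050x30",
    "timezone": "America/Los_Angeles",
    "language": "en-US",
    "fonts": "Arial,Helvetica,Times New Roman,...",
    "do_not_track": "1",
}))
```

Two browsers that report identical characteristics produce identical hashes, which is why making browsers look alike (or randomizing what they report) frustrates this kind of tracking.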

 "Every time you visit a website, your browser sends little bits of information about itself.

A screenshot from Cover Your Tracks’ learning page, https://coveryourtracks.eff.org/learn

For beginners, we’ve created a new learning page detailing the methodology we use to mimic trackers and test browsers, as well as next steps users can take to learn more and protect themselves. Because tracking and fingerprinting are so complex, we wanted to provide users a way to deep-dive into exactly what kind of tracking might be happening, and how it is performed.

We have also worked with browser vendors such as Brave to provide more accurate results for browsers that are employing novel anti-fingerprinting techniques. Add-ons and browsers that randomize the results of fingerprinting metrics have the potential to confuse trackers and mitigate the effects of fingerprinting as a method of tracking. In the coming months, we will provide new infographics that show users how they can become safer by using browsers that fit in with large pools of other browsers.

We invite you to test your own browser and learn more – just head over to Cover Your Tracks!


macOS Leaks Application Usage, Forces Apple to Make Hard Decisions

Last week, users of macOS noticed that attempting to open non-Apple applications while connected to the Internet resulted in long delays, if the applications opened at all. The interruptions were caused by a macOS security service attempting to reach Apple’s Online Certificate Status Protocol (OCSP) server, which had become unreachable due to internal errors. When security researchers looked into the contents of the OCSP requests, they found that these requests contained a hash of the developer’s certificate for the application that was being run, which was used by Apple in security checks.[1] The developer certificate contains a description of the individual, company, or organization which coded the application (e.g. Adobe or Tor Project), and thus leaks to Apple that an application by this developer was opened.

Moreover, OCSP requests are not encrypted. This means that any passive listener also learns which application a macOS user is opening and when.[2] Those with this attack capability include any of your upstream service providers; Akamai, the content delivery network that hosts Apple’s OCSP service; and any hacker on the same network as you when you connect to, say, your local coffee shop’s WiFi. A detailed explanation can be found in this article.

Part of the concern that accompanied this privacy leak was that userspace applications like Little Snitch were excluded from the ability to detect or block this traffic. Even if altering traffic to essential security services on macOS poses a risk, we encourage Apple to give power users the ability to choose trusted applications to control where their traffic is sent.

Apple quickly announced a new encrypted protocol for checking developer certificates and that they would allow users to opt out of the security checks. However, these changes will not roll out until sometime next year. Developing a new protocol and implementing it in software is not an overnight process, so it would be unfair to hold Apple to an impossible standard.

But why has Apple not simply turned the OCSP requests off for now? To answer this question, we have to discuss what the OCSP developer certificate check actually does. It prevents unwanted or malicious software from being run on macOS machines. If Apple detects that a developer has shipped malware (either through theft of signing keys or malice), Apple can revoke that developer’s certificate. When macOS next opens that application, Apple’s OCSP server will respond to the request (through a system service called `trustd`) that the developer is no longer trusted. So the application doesn’t open, thus preventing the malware from being run.
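
In rough pseudocode, the check works something like the sketch below. This is a simplified model, not Apple's implementation: the real check uses the binary OCSP wire format and a certificate chain, but the soft-fail logic (when the server can't be reached, the app launches anyway) and the plaintext leak are the relevant parts.

```python
# Simplified model (not Apple's code) of a developer-certificate
# revocation check with soft-fail behavior. The endpoint is a stand-in.
import hashlib
import requests

OCSP_URL = "http://ocsp.example.com"   # stand-in; the real responder is Apple's

def developer_cert_status(developer_cert_der: bytes) -> str:
    cert_hash = hashlib.sha1(developer_cert_der).hexdigest()
    try:
        # Plain HTTP: anyone on the network path can observe which
        # developer's certificate is being checked, and therefore infer
        # which application is being launched.
        resp = requests.get(f"{OCSP_URL}/status/{cert_hash}", timeout=5)
        return resp.json()["status"]       # "good" or "revoked"
    except requests.RequestException:
        return "unknown"                   # soft fail: don't block the launch

def may_launch(developer_cert_der: bytes) -> bool:
    return developer_cert_status(developer_cert_der) != "revoked"
```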

Fixing this privacy leak, while maintaining the safety of applications by checking for developer certificate revocations through OCSP, is not as simple as fixing an ordinary bug in code. This is a structural bug, so it requires structural fixes. In this case, Apple faces a balancing act between user privacy and safety. A criticism can be made that they haven’t given users the option to weigh the dilemma on their own, and simply made the decision for them. This is a valid critique. But the inevitable response is equally valid: that users shouldn’t be forced to understand a difficult topic and its underlying trade-offs simply to use their machines.

Apple made a difficult choice to preserve user safety, but at the peril of their more privacy-focused users. macOS users who understand the risks and prefer privacy can take steps to block the OCSP requests. We recommend that users who do this set a reminder for themselves to restore these OCSP requests once Apple adds the ability to encrypt them.

[1] Initial reports of the failure claimed Apple was receiving hashes of the application itself, which would have been even worse, if it were true.

[2] Companies such as Adobe develop many different applications, so an attacker would be able to establish that the application being opened is one of the set of all applications that Adobe has signed for macOS. Tor, on the other hand, almost exclusively develops a single application for end-users: the Tor Browser. So an attacker observing the Tor developer certificate will be able to determine that Tor Browser is being opened, even if the user takes steps to obscure their traffic within the app.


Don’t Blame Section 230 for Big Tech’s Failures. Blame Big Tech.

Next time you hear someone blame Section 230 for a problem with social media platforms, ask yourself two questions: first, was this problem actually caused by Section 230? Second, would weakening Section 230 solve the problem? Politicians and commentators on both sides of the aisle frequently blame Section 230 for big tech companies’ failures, but their reform proposals wouldn’t actually address the problems they attribute to Big Tech. If lawmakers are concerned about large social media platforms’ outsized influence on the world of online speech, they ought to confront the lack of meaningful competition among those platforms and the ways in which those platforms fail to let users control or even see how they’re using our data. Undermining Section 230 won’t fix Twitter and Facebook; in fact, it risks making matters worse by further insulating big players from competition and disruption.


Section 230 says that if you break the law online, you should be the one held responsible, not the website, app, or forum where you said it. Similarly, if you forward an email or even retweet a tweet, you’re protected by Section 230 in the event that that material is found unlawful. It has some exceptions—most notably, that it doesn’t shield platforms from liability under federal criminal law—but at its heart, Section 230 is just common sense: you should be held responsible for your speech online, not the platform that hosted your speech or another party.

Without Section 230, the Internet would be a very different place, one with fewer spaces where we’re all free to speak out and share our opinions. Social media wouldn’t exist—at least in its current form—and neither would important educational and cultural platforms like Wikipedia and the Internet Archive. The legal risk associated with operating such a service would deter any entrepreneur from starting one, let alone a nonprofit.

As commentators of all political stripes have targeted large Internet companies with their ire, it’s become fashionable to blame Section 230 for those companies’ failings. But Section 230 isn’t why five companies dominate the market for speech online, or why the marketing and behavior analysis decisions that guide Big Tech’s practices are so often opaque to users.

The Problem with Social Media Isn’t Politics; It’s Power

A recent Congressional hearing with the heads of Facebook, Twitter, and Google demonstrated the highly politicized nature of today’s criticisms of Big Tech. Republicans scolded the companies for “censoring” and fact-checking conservative speakers while Democrats demanded that they do more to curb misleading and harmful statements.

There’s a nugget of truth in both parties’ criticisms: it’s a problem that just a few tech companies wield immense control over what speakers and messages are allowed online. It’s a problem that those same companies fail to enforce their own policies consistently or offer users meaningful opportunity to appeal bad moderation decisions. There’s little hope of a competitor with fairer speech moderation practices taking hold given the big players’ practice of acquiring would-be competitors before they can ever threaten the status quo.

Unfortunately, trying to legislate that platforms moderate “neutrally” would create immense legal risk for any new social media platform—raising, rather than lowering, the barrier to entry for new platforms. Can a platform filter out spam while still maintaining its “neutrality”? What if that spam has a political message? Twitter and Facebook would have the large legal budgets and financial cushions to litigate those questions, but smaller platforms wouldn’t.


Likewise, if Twitter and Facebook faced serious competition, then the decisions they make about how to handle (or not handle) hateful speech or disinformation wouldn’t have nearly the influence they have today on online discourse. If there were twenty major social media platforms, then the decisions that any one of them makes to host, remove, or factcheck the latest misleading post about the election results wouldn’t have the same effect on the public discourse. The Internet is a better place when multiple moderation philosophies can coexist, some more restrictive and some more permissive.

The hearing showed Congress’ shortsightedness when it comes to regulation of large Internet companies. In their drive to use the hearing for their political ends, both parties ignored the factors that led to Twitter, Facebook, and Google’s outsized power and remedies to bring competition and choice into the social media space.

Ironically, though calls to reform Section 230 are frequently motivated by disappointment in Big Tech’s speech moderation policies, evidence shows that further reforms to Section 230 would make it more difficult for new entrants to compete with Facebook or Twitter. It shouldn’t escape our attention that Facebook was one of the first tech companies to endorse SESTA/FOSTA, the 2018 law that significantly undermined Section 230’s protections for free speech online, or that Facebook is now leading the charge for further reforms to Section 230 (PDF). Any law that makes it more difficult for a platform to maintain Section 230’s liability shield will also make it more difficult for new startups to compete with Big Tech. (Just weeks after SESTA/FOSTA passed and put multiple dating sites out of business, Facebook announced that it was entering the online dating world.) We shouldn’t be surprised that Facebook has joined Section 230’s critics: it literally has the most to gain from decimating the law.

Remember, speech moderation at scale is hard. It’s one thing for platforms to come to a decision about how to handle divisive posts by a few public figures; it’s quite another for them to create rules affecting everyone’s speech and enforce them consistently and transparently. When platforms err on the side of censorship, marginalized communities are silenced disproportionately. Congress should not try to pass laws dictating how Internet companies should moderate their platforms. Such laws would not pass Constitutional scrutiny, would harden the market for social media platforms from new entrants, and would almost certainly censor innocent people unfairly.

Then How Should Congress Keep Platforms in Check? Some Ideas You Won’t Hear from Big Tech

While large tech companies might clamor for regulations that would hamstring their competitors, they’re notably silent on reforms that would curb the practices that allow them to dominate the Internet today. That’s why EFF recommends that Congress update antitrust law to stop the flood of mergers and acquisitions that have made competition in Big Tech an illusion. Before the government approves a merger, the companies should have to prove that the merger would not increase their monopoly power or unduly harm competition.

But even updating antitrust policy is not enough: big tech companies will stop at nothing to protect their black box of behavioral targeting from even a shred of transparency. Facebook recently demonstrated this when it threatened the Ad Observatory, an NYU project to shed light on how the platform was showing different political advertising messages to different segments of its user base. Major social media platforms’ business models thrive on practices that keep users in the dark about what information they collect on us and how it’s used. Decisions about what material (including advertising) to deliver to users are informed by a web of inferences about users, inferences that are usually impossible for users even to see, let alone correct.

Because of the link between social media’s speech moderation policies and its irresponsible management of user data, Congress can’t improve Big Tech’s practices without addressing its surveillance-based business models. And although large tech companies have endorsed changes to Section 230 and may endorse further changes to Section 230 in the future, they will probably never endorse real, comprehensive privacy-protective legislation.


Any federal privacy bill must have a private right of action: if a company breaks the law and infringes on our privacy rights, it’s not enough to put a government agency in charge of enforcing the law. Users should have the right to sue the companies, and it should be impossible to sign away those rights in a terms-of-service agreement. The law must also forbid companies from selling privacy as a service: all users must enjoy the same privacy rights regardless of what we’re paying—or being paid—for the service.

The recent fights over the California Consumer Privacy Act serve as a useful example of how tech companies can give lip service to the idea of privacy-protecting legislation while actually insulating themselves from it. After the law passed in 2018, the Internet Association—a trade group representing Big Tech powerhouses like Facebook, Twitter, and Google—spent nearly $176,000 lobbying the California legislature to weaken the law. Most damningly, the IA tried to pass a bill exempting surveillance-based advertising from the practices from which the law protects consumers. That’s right: big tech companies tried to pass a law protecting their own invasive advertising practices that helped cement their dominance in the first place. That the Internet Association and its members have fought tooth-and-nail to stop privacy protective legislation while lobbying for bills undermining Section 230 says all you need to know about which type of regulation they see as the greater threat to their bottom line.

Section 230 has become a hot topic for politicians and commentators on both sides of the aisle. Whether it’s Republicans criticizing Big Tech for allegedly censoring conservatives or Democrats alleging that online platforms don’t do enough to fight harmful speech online, both sides seem increasingly convinced that they can change Big Tech’s social media practices by undermining Section 230. But history has shown that making it more difficult for platforms to maintain Section 230 protections will further isolate a few large tech companies from meaningful competition. If Congress wants to keep Big Tech in check, it must address the real problems head-on, passing legislation that will bring competition to Internet platforms and curb the unchecked, opaque user data practices at the heart of social media’s business models.

You’ll never hear Big Tech advocate that.


Introducing “How to Fix the Internet,” a New Podcast from EFF

Today EFF is launching How to Fix the Internet, a new podcast mini-series to examine potential solutions to six ills facing the modern digital landscape. Over the course of 6 episodes, we’ll consider how current tech policy isn’t working well for users and invite experts to join us in imagining a better future. Hosted by EFF’s Executive Director Cindy Cohn and our Director of Strategy Danny O’Brien, How to Fix the Internet digs into the gritty technical details and the case law surrounding these digital rights topics, while charting a course toward how we can better defend the rights of users.  

It’s easy to see all the things wrong with the modern Internet, and how the reality of most peoples’ experience online doesn’t align with the dreams of its early creators. How did we go astray and what should we do now?  And what would our world look like if we got it right? This podcast mini-series will tackle those questions with regard to six specific topics of concern: the FISA Court, U.S. broadband access, the third-party doctrine, barriers to interoperable technology, law enforcement use of face recognition technology, and digital first sale. In each episode, we are joined by a guest to examine how the current system is failing, consider different possibilities for solutions, and imagine a better future. After all, we can’t build a better world unless we can imagine it.

We are launching the podcast with two episodes: The Secret Court Approving Secret Surveillance, featuring the Cato Institute’s specialist in surveillance legal policy Julian Sanchez; and Why Does My Internet Suck?, featuring Gigi Sohn, one of the nation’s leading advocates for open, affordable, and democratic communications networks. Future episodes will be released on Tuesdays.

We’ve also created a hub page for How to Fix the Internet. This page includes links to all of our episodes, ways to subscribe, and detailed show notes. In the show notes, we’ve included all the  books mentioned in each podcast, as well as substantial legal resources—including key opinions in the cases we talk about, briefs filed by EFF, bios of our guests, and a full transcript of every episode. 

You can subscribe to How to Fix the Internet via Stitcher, TuneIn, Apple Podcasts, and Spotify, or through any of the other places you get your podcasts. If you have feedback on How to Fix the Internet, please email podcasts@eff.org.


Clearview’s Faceprinting is Not Sheltered from Biometric Privacy Litigation by the First Amendment

Clearview AI extracts faceprints from billions of people, without their consent, and uses these faceprints to offer a service to law enforcement agencies seeking to identify suspects in photos. Following an exposé by the New York Times this past January, Clearview faces more than ten lawsuits, including one brought by the ACLU, alleging the company’s faceprinting violates the Illinois Biometric Information Privacy Act (BIPA). That watershed law requires opt-in consent before a company collects a person’s biometrics. Clearview moved to dismiss, arguing that the First Amendment bars this BIPA claim.

EFF just filed an amicus brief in this case, arguing that applying BIPA to Clearview’s faceprinting does not offend the First Amendment. Following a short summary, this post walks through our arguments in detail. 

Above all, EFF agrees with the ACLU that Clearview should be held accountable for invading the biometric privacy of the millions of individuals whose faceprints it extracted without consent. EFF has a longstanding commitment to protecting both speech and privacy at the digital frontier, and the case brings these values into tension. But our brief explains that well-settled constitutional principles resolve this tension.

Faceprinting raises some First Amendment interests, because it is collection and creation of information for purposes of later expressing information. However, as practiced by Clearview, this faceprinting does not enjoy the highest level of First Amendment protection, because it does not concern speech on a public matter, and the company’s interests are solely economic. Under the correct First Amendment test, Clearview may not ignore BIPA, because there is a close fit between BIPA’s goals (protecting privacy, speech, and information security) and its means (requiring opt-in consent).

Clearview’s faceprinting enjoys some protection

The First Amendment protects not just free expression, but also the necessary predicates that enable expression, including the collection and creation of information. For example, the U.S. Supreme Court has ruled that the First Amendment applies to reading books in libraries, gathering news inside courtrooms, creating video games, and newspapers’ purchasing of ink by the barrel.

Thus, courts across the country have held that the First Amendment protects our right to use our smartphones to record on-duty police officers. In the words of one federal appellate court: “The right to publish or broadcast an audio or audiovisual recording would be insecure, or largely ineffective, if the antecedent act of making the recording is wholly unprotected.” EFF has filed many amicus briefs in support of this right to record, and published suggestions about how to safely exercise this right during Black-led protests against police violence and racism.

Faceprinting is both the collection and creation of information and therefore involves First Amendment-protected interests. It collects information about the shape and measurements of a person’s face. And it creates information about that face in the form of a numerical representation. 
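
To make "numerical representation" concrete: a faceprint is a vector of numbers computed from a face image, and two images of the same person yield nearby vectors. The sketch below uses the open-source face_recognition library (not Clearview's proprietary system, which operates at a far larger scale) to extract and compare 128-dimensional encodings.

```python
# Faceprint sketch using the open-source face_recognition library; this
# illustrates the concept, not Clearview's proprietary pipeline.
import face_recognition

def faceprint(path: str):
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)  # one 128-d vector per face
    return encodings[0] if encodings else None

probe = faceprint("probe_photo.jpg")
candidate = faceprint("scraped_social_media_photo.jpg")
if probe is not None and candidate is not None:
    distance = face_recognition.face_distance([candidate], probe)[0]
    # 0.6 is the library's commonly used threshold, not a universal constant.
    print("likely the same person" if distance < 0.6 else "different people")
```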

First Amendment protection of faceprinting is not diminished by the use of computer code to collect information about faces, or of mathematics to represent faces. Courts have consistently held that “code is speech,” because, like a musical score, it “is an expressive means for the exchange of information and ideas.” EFF has advocated for this principle from its founding through the present, in support of cryptographers, independent computer security researchers, inventors, and manufacturers of privacy-protective consumer tech.

Clearview’s faceprinting does not enjoy the strongest protection

First Amendment analysis only begins with determining whether the government’s regulation applies to speech or its necessary predicates. If so, the next step is to select the proper test. Here, courts should not apply “strict scrutiny,” one of the most searching levels of judicial inquiry, to BIPA’s limits on Clearview’s faceprinting. Rather, courts should apply “intermediate scrutiny,” for two intertwined reasons.

First, Clearview’s faceprinting does not concern a “public issue.” The Supreme Court has repeatedly held that the First Amendment is less protective of speech on “purely private” matters, compared to speech on public matters. It has done so, for example, where speech allegedly violated a wiretapping statute or the common law torts of defamation or emotional distress. The Court has explained that, consistent with the First Amendment’s core protection of robust public discourse, the universe of speech that involves matters of public concern is necessarily broad, but it is not unlimited.

Lower courts follow this distinction when speech allegedly violates the common law tort of publication of private facts, and when collection of information allegedly violates the common law tort of intrusion on seclusion. The courts held that these privacy torts do not violate the First Amendment as long as they do not restrict discussion of matters of public concern.

Second, Clearview’s interests in faceprinting are solely economic. The Supreme Court has long held that “commercial speech,” meaning “expression related solely to the economic interests of the speaker and its audience,” receives “lesser protection” compared to “other constitutionally guaranteed expression.” Thus, when faced with First Amendment challenges to laws that protect consumer data privacy from commercial data processing, lower courts apply intermediate judicial review under the commercial speech doctrine. These decisions frequently focused not just on the commercial motivation, but also the lack of a matter of public concern.

To be sure, faceprinting can be the predicate to expression that is relevant to matters of public concern. For example, a journalist or police reform advocate might use faceprinting to publicly name the unidentified police officer depicted in a video using excessive force against a protester. But this is not the application of faceprinting practiced by Clearview.

Instead, Clearview extracts faceprints from billions of face photos, absent any reason to think any particular person in those photos will engage in a matter of public concern. Indeed, the overwhelming majority of these people have not and will not. Clearview’s sole purpose is to sell the service of identifying people in probe photos, devoid of journalistic, artistic, scientific, or other purpose. It makes this service available to a select set of paying customers who are contractually forbidden from redistributing it.

In short, courts here should apply intermediate First Amendment review. To pass this test, BIPA must advance a “substantial interest,” and there must be a “close fit” between this interest and how BIPA limits speech.

Illinois has substantial interests

BIPA advances three substantial government interests.

First, Illinois has a substantial interest in protecting biometric privacy. We have a fundamental human right to privacy over our personal information. But everywhere we go, we display a unique and indelible marker that can be seen from a distance: our own faces. So corporations can use face surveillance technology (coupled with the ubiquity of digital cameras) to track where we go, who we are with, and what we are doing.

Second, Illinois has a substantial interest in protecting the many forms of expression that depend on privacy. These include the rights to confidentially engage in expressive activity, to speak anonymously, to converse privately, to confidentially receive unpopular ideas, and to confidentially gather newsworthy information from undisclosed sources. Police use faceprinting to identify protesters, including with Clearview’s help. Government officials can likewise use faceprinting to identify who attended a protest planning meeting, who visited an investigative reporter, who entered a theater showing a controversial movie, and who left an unsigned pamphlet on a doorstep. So Clearview is not the only party whose First Amendment interests are implicated by this case.

Third, Illinois has a substantial interest in protecting information security. Data thieves regularly steal vast troves of personal data. Criminals and foreign governments can use stolen faceprints to break into secured accounts that can be opened by the owner’s face. Indeed, a team of security researchers did this with 3D models based on Facebook photos.

There is a close fit between BIPA and Illinois’ interests 

BIPA requires private entities like Clearview to obtain a person’s opt-in consent before collecting their faceprint. There is a close fit between this rule and Illinois’ substantial interests. Information privacy requires, in the words of the Supreme Court, “the individual’s control of information concerning [their] person.” The problem is our lost control over our faceprints. The solution is to restore our control, by means of an opt-in consent requirement.

Opt-in consent is far more effective at restoring this control, compared to other approaches like opt-out consent. Many people won’t even know a business collected their faceprint. Of those who do, many won’t know they have the right to opt-out or how to do so. Even an informed person might be deterred because the process is time-consuming, confusing, and frustrating, as studies have shown. Indeed, many companies use “dark patterns” to purposefully design the user’s experience in a manner that manipulates so-called “agreement” to data processing.

Thus, numerous federal appellate and trial courts have upheld consumer data privacy laws like the one at issue here because of their close fit to substantial government interests.

Next steps

Moving forward, EFF will continue to advocate for strong biometric privacy laws, and robust judicial interpretations of those laws. We will also continue to support bans on government use of face surveillance, including (as here) acquisition of information from corporations that wield this dangerous technology. More broadly, Clearview’s faceprinting is another reminder of the need for comprehensive federal consumer data privacy legislation. Finally, EFF will continue to oppose poorly taken First Amendment challenges to such laws, as we’ve done here.

You can read here our amicus brief in ACLU v. Clearview AI.

Categories
Intelwars Laws privacy Voting

California Proposition 24 Passes

California’s Proposition 24, aimed at improving the California Consumer Privacy Act, passed this week. Analyses are decidedly mixed. I had mixed feelings about the proposition myself, but on the whole I supported it. The proposition has some serious flaws, and was watered down by industry, but voting for privacy generally seems like a good thing.

Categories
arrr Cryptocurrency Information Security Intelwars investing Online Privacy Podcasts privacy

Episode-2767- Draeth Kata on Pirate Chain the Ultimate Privacy Crypto Currency

Draeth is an engineer by day who is also a captain of Pirate Chain and President of the BPSAA. He was introduced to crypto in 2016 through mining, and eventually found Pirate Chain a couple of months after its genesis block. Continue reading →

The post Episode-2767- Draeth Kata on Pirate Chain the Ultimate Privacy Crypto Currency first appeared on The Survival Podcast.

Categories
Commentary Intelwars privacy

Police Will Pilot a Program to Live-Stream Amazon Ring Cameras

This is not a drill. Red alert: The police surveillance center in Jackson, Mississippi, will be conducting a 45-day pilot program to live stream the Amazon Ring cameras of participating residents. 

Since Ring first made a splash in the private security camera market, we’ve been warning of its potential to undermine the civil liberties of its users and their communities. We’ve been especially concerned with Ring’s 1,000+ partnerships with local police departments, which facilitate bulk footage requests directly from users without oversight or having to acquire a warrant. 

While people buy Ring cameras and put them on their front door to keep their packages safe, police use them to build comprehensive CCTV camera networks blanketing whole neighborhoods. This  serves two police purposes. First, it allows police departments to avoid the cost of buying surveillance equipment and to put that burden onto consumers by convincing them they need cameras to keep their property safe. Second, it evades the natural reaction of fear and distrust that many people would have if they learned police were putting up dozens of cameras on their block, one for every house. 

Now, our worst fears have been confirmed. Police in Jackson, Mississippi, have started a pilot program that would allow Ring owners to patch the camera streams from their front doors directly to a police Real Time Crime Center. The footage from your front door includes you coming and going from your house, your neighbors taking out the trash, and the dog walkers and delivery people who do their jobs in your street. In Jackson, this footage can now be live streamed directly onto a dozen monitors scrutinized by police around the clock. Even if you refuse to allow your footage to be used that way, your neighbor’s camera pointed at your house may still be transmitting directly to the police. 

Only a few months ago, Jackson stood up for its residents, becoming the first city in the southern United States to ban police use of face recognition technology. Clearly, this is a city that understands invasive surveillance technology when it sees it, and knows when police have overstepped their ability to invade privacy. 

If police want to build a surveillance camera network, they should only  do so in ways that are transparent and accountable, and ensure active resident participation in the process. If residents say “no” to spy cameras, then police must not deploy them. The choices you and your neighbors make as consumers should not be hijacked by police to roll out surveillance technologies. The decision making process must be left to communities. 

Categories
Biometrics face surveillance Intelwars privacy Street-Level Surveillance

No Police Body Cams Without Strict Safeguards

EFF opposes police Body Worn Cameras (BWCs), unless they come with strict safeguards to ensure they actually promote officer accountability without surveilling the public. Police already have too many surveillance technologies, and deploy them all too frequently against people of color and protesters. We have taken this approach since 2015, when we opposed a federal grant to the LAPD for purchase of BWCs, because the LAPD failed to adopt necessary safeguards about camera activation, public access to footage, officer misuse of footage, and face recognition. Also, communities must be empowered to decide for themselves whether police may deploy BWCs on their streets.

Prompted by Black-led protests against police violence and racism, lawmakers across the country are exploring new ways to promote police accountability. Such laws are long overdue. A leading example is the federal Justice in Policing Act (H.R. 7120 and S. 3912). Unfortunately, this bill (among others) would expand BWCs absent necessary safeguards. We respectfully recommend amendments.

Necessary BWC safeguards

Police BWCs are a threat to privacy, protest, and racial justice. If worn by hundreds of thousands of police officers, BWCs would massively expand the power of government to record video and audio of what we are doing as we go about our lives in public places, and in many private places, too. The footage might be kept forever, routinely subjected to face surveillance, and used in combination with other surveillance technologies like stationary pole cameras. Police use of BWCs at protests could discourage people from making their voices heard. Given the many ongoing inequities in our criminal justice system, BWCs will be aimed most often at people of color, immigrants, and other vulnerable groups. All of this might discourage people from seeking out officers for assistance. In short, BWCs might undermine community trust in law enforcement.

So EFF opposes BWCs, absent the following safeguards, among others.

Mandated activation of BWCs. Officers must be required to activate their cameras at the start of all investigative encounters with civilians, and leave them on until the encounter ends. Otherwise, officers could subvert any accountability benefits of BWCs by simply turning them off when misconduct is imminent, or not turning them on. In narrow circumstances where civilians have heightened privacy interests (like crime victims and during warrantless home searches), officers should give civilians the option to deactivate BWCs.

No political spying with BWCs. Police must not use BWCs to gather information about how people are exercising their First Amendment rights to speak, associate, or practice their religion. Government surveillance chills and deters such protected activity.

Retention of BWC footage. All BWC footage should be held for a few months, to allow injured civilians sufficient time to come forward and seek evidence. Then footage should be promptly destroyed, to reduce the risks of data breach, employee misuse, and long-term surveillance of the public. However, if footage depicts an officer’s use of force or an episode subject to a civilian’s complaint, then the footage must be retained for a lengthier period. Stored footage must be secured from access or alteration by data thieves and agency employees.

No face surveillance with BWCs. Government must not use face surveillance, period. This includes equipping BWCs with facial recognition technology, or applying such technology to footage from BWCs. Last year EFF supported a California law (A.B. 1215) that placed a three-year moratorium on use of face surveillance with BWCs. Likewise, EFF in 2019 and 2020 joined scores of privacy and civil rights groups in opposing any federal use of face surveillance, and also any federal funding of state and local face surveillance.

Officer review of footage. If footage depicts use of force or an episode subject to a civilian complaint, then an officer must not be allowed to review the footage, or any department reports based on the footage, until after they make an initial statement about the event. Given the malleability of human memory, a video can alter or even overwrite a recollection. And some officers might use footage to better “testi-lie.”

Public access to footage. If footage depicts a particular person, then that person must have access to it. If footage depicts police use of force, then all members of the general public must have access to it. If a person seeks footage that does not depict them or use of force, then whether they may have access must depend on a weighing by a court of (a) the benefits of disclosure to police accountability, and (b) the costs of disclosure to the privacy of a depicted member of the public. If the footage does not depict police misconduct, then disclosure will rarely have a police accountability benefit. In many cases, blurring of civilian faces might diminish privacy concerns. In no case should footage be withheld on the grounds it is a police investigatory record.

Enforcement of these rules. If footage is recorded or retained in violation of these rules, then it must not be admissible in court. If footage that should have been recorded or retained under these rules is missing, then a civil rights plaintiff or criminal defendant must receive an evidentiary presumption that the missing footage would have helped them. Members of a community should have a private right of action to enforce BWC rules when a police department or its officers violate them. And departments must discipline officers who break these rules.

Community control over BWCs. Local police and sheriffs must not acquire or use BWCs, or any other surveillance technology, absent permission from their city council or county board, after ample opportunity for residents to make their voices heard. This is commonly called community control over police surveillance (CCOPS). Likewise, federal and state law enforcement must not deploy BWCs absent notice to the public and an opportunity for opponents to object.

Many groups have published model BWC rules, including the ACLU, the Leadership Conference on Civil and Human Rights, the Constitution Project, and the Police Executive Research Forum. The safeguards discussed above are among the rules in some of these models.

Amending the Justice in Policing Act

We appreciate that the Justice in Policing Act’s section on federal BWCs (Sec. 372) contains safeguards discussed above. We respectfully request three amendments to the bill’s provisions on BWCs.

Federal grants for state and local BWCs. The bill provides federal grants to state and local police to purchase BWCs. See Sec. 382. But the bill’s rules on these BWCs are far weaker than the bill’s rules on federal BWCs, and lack safeguards discussed above. State and local BWCs are no less threatening to privacy, speech, and racial justice than federal BWCs. For too long, BWCs have flooded into our communities, often with federal funding, in the absence of adequate safeguards. Thus, please amend the bill to apply all of its rules for federal BWCs to any grants for state and local BWCs.

Also, please amend the bill to prohibit state and local agencies from obtaining federal grants for BWCs unless they first use a CCOPS process to obtain permission from their public and elected officials. If the residents of a community do not want their police to deploy BWCs, then the federal government must not fund BWCs in that community.

For federally funded BWCs used by state and local police, these federal rules should be a floor and not a ceiling. Thus, the bill must expressly not preempt state and local rules that ensure even more police accountability and civilian privacy than does the federal bill.

Face surveillance with BWCs. The bill allows the application of face recognition technology to footage from BWCs, provided there is judicial authorization. See Sec. 372(q)(2) & 382(c)(1)(E). But EFF opposes any government use of face surveillance, even with this limit. We especially oppose face surveillance in connection with police BWCs. Thus, please amend the bill to prohibit any equipping of police BWCs with facial recognition technology, and any application of such technology to footage from BWCs, as to both federal BWCs and federal funds for state and local BWCs.

Public access to BWC footage. For purposes of public access to federal BWC footage under the federal Freedom of Information Act (FOIA), the bill divides footage into three categories. If footage depicts an officer’s use of force, then anyone may obtain it. If footage depicts both use of force and resulting grievous injury, then such release must be expedited and occur within five days. We agree with these two rules.

The bill further provides that if footage does not depict use of force, then it may only be released with the written permission of the civilian depicted. See Secs. 372(a)(4), (j), & (l). This is a reasonable attempt to balance the accountability benefits and privacy harms of public disclosure of BWC footage. Still, we respectfully suggest a somewhat different approach. The civilian depicted should not have an absolute prerogative to veto public disclosure. Rather, the civilian’s opposition should be one factor in the larger balancing by a court of the privacy and accountability interests. Courts routinely conduct such balancing upon assertion of the FOIA exemptions for personal privacy.

For example, if footage shows an officer cussing at the mayor, the public should have access, even if the mayor believes release would be politically damaging to them. Likewise, if footage shows officers ransacking a car absent any suspicion, the public should have access, even if the police department conditioned a cash settlement with the driver on a non-disclosure agreement. In such cases, the accountability benefits of disclosure would outweigh the privacy harms. On the other hand, if footage does not depict officer misconduct, disclosure will rarely be justified.

Conclusion

The time is long past for new measures to end violence and racism in policing. But BWCs are not a panacea. Indeed, without necessary safeguards, BWCs will make the problem worse, by expanding police surveillance of our communities without improving police accountability. We urge policymakers: do not put more BWCs on our streets, unless they are subject to strict safeguards.

Categories
backdoors Intelwars national security policy NSA privacy Surveillance Terrorism

The NSA is Refusing to Disclose its Policy on Backdooring Commercial Products

Senator Ron Wyden asked, and the NSA didn’t answer:

The NSA has long sought agreements with technology companies under which they would build special access for the spy agency into their products, according to disclosures by former NSA contractor Edward Snowden and reporting by Reuters and others.

These so-called back doors enable the NSA and other agencies to scan large amounts of traffic without a warrant. Agency advocates say the practice has eased collection of vital intelligence in other countries, including interception of terrorist communications.

The agency developed new rules for such practices after the Snowden leaks in order to reduce the chances of exposure and compromise, three former intelligence officials told Reuters. But aides to Senator Ron Wyden, a leading Democrat on the Senate Intelligence Committee, say the NSA has stonewalled on providing even the gist of the new guidelines.

[…]

The agency declined to say how it had updated its policies on obtaining special access to commercial products. NSA officials said the agency has been rebuilding trust with the private sector through such measures as offering warnings about software flaws.

“At NSA, it’s common practice to constantly assess processes to identify and determine best practices,” said Anne Neuberger, who heads NSA’s year-old Cybersecurity Directorate. “We don’t share specific processes and procedures.”

Three former senior intelligence agency figures told Reuters that the NSA now requires that before a back door is sought, the agency must weigh the potential fallout and arrange for some kind of warning if the back door gets discovered and manipulated by adversaries.

The article goes on to talk about Juniper Networks equipment, which had the NSA-created DUAL_EC PRNG backdoor in its products. That backdoor was taken advantage of by an unnamed foreign adversary.
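For readers wondering why this particular backdoor matters, here is a simplified sketch of the publicly documented weakness in Dual_EC_DRBG, following the Shumow-Ferguson analysis. It is a simplification for illustration, not a description of the Juniper incident itself.

```latex
% Simplified sketch of the Dual_EC_DRBG weakness (after Shumow-Ferguson).
% P and Q are the generator's public elliptic-curve constants; s_i is its
% secret internal state, and x(.) takes the x-coordinate of a curve point.
\begin{align*}
  s_i &= x(s_{i-1} \cdot P) &&\text{(state update)}\\
  r_i &= x(s_i \cdot Q), \quad \text{output}_i = \mathrm{truncate}(r_i) &&\text{(output)}
\end{align*}
% If whoever chose the constants also knows a secret d with P = d*Q, then from
% one (nearly full) output r_i they can reconstruct the point R = s_i * Q and compute
\begin{align*}
  x(d \cdot R) \;=\; x\bigl(s_i \cdot (d \cdot Q)\bigr) \;=\; x(s_i \cdot P) \;=\; s_{i+1},
\end{align*}
% recovering the generator's next internal state and, with it, every "random"
% value it produces afterward, including any session keys derived from them.
```

Anyone without the secret d sees only apparently random output, which is why the provenance of the constants mattered so much.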

Juniper Networks got into hot water over Dual EC two years later. At the end of 2015, the maker of internet switches disclosed that it had detected malicious code in some firewall products. Researchers later determined that hackers had turned the firewalls into their own spy tool by altering Juniper’s version of Dual EC.

Juniper said little about the incident. But the company acknowledged to security researcher Andy Isaacson in 2016 that it had installed Dual EC as part of a “customer requirement,” according to a previously undisclosed contemporaneous message seen by Reuters. Isaacson and other researchers believe that customer was a U.S. government agency, since only the U.S. is known to have insisted on Dual EC elsewhere.

Juniper has never identified the customer, and declined to comment for this story.

Likewise, the company never identified the hackers. But two people familiar with the case told Reuters that investigators concluded the Chinese government was behind it. They declined to detail the evidence they used.

Okay, lots of unsubstantiated claims and innuendo here. And Neuberger is right; the NSA shouldn’t share specific processes and procedures. But as long as this is a democratic country, the NSA has an obligation to disclose its general processes and procedures so we all know what they’re doing in our name. And if it’s still putting surveillance ahead of security.

Categories
Intelwars privacy

Why Getting Paid for Your Data Is a Bad Deal

One bad privacy idea that won’t die is the so-called “data dividend,” which imagines a world where companies have to pay you in order to use your data.

Sound too good to be true? It is.

Let’s be clear: getting paid for your data—probably no more than a handful of dollars at most—isn’t going to fix what’s wrong with privacy today. Yes, a data dividend may sound at first blush like a way to get some extra money and stick it to tech companies. But that line of thinking is misguided, and falls apart quickly when applied to the reality of privacy today. In truth, the data dividend scheme hurts consumers, benefits companies, and frames privacy as a commodity rather than a right.

EFF strongly opposes data dividends and policies that lay the groundwork for people to think of the monetary value of their data rather than view it as a fundamental right. You wouldn’t place a price tag on your freedom to speak. We shouldn’t place one on our privacy, either.

Think You’re Sticking It to Big Tech? Think Again

Supporters of data dividends correctly recognize one thing: when it comes to privacy in the United States, the companies that collect information currently hold far more power than the individual consumers continually tapped for that information.

But data dividends do not meaningfully correct that imbalance. Here are three questions to help consider the likely outcomes of a data dividend policy:

  • Who will determine how much you get paid to trade away your privacy?
  • What makes your data valuable to companies?
  • What does the average person gain from a data dividend, and what do they lose?

Data dividend plans are thin on details regarding who will set the value of data. Logically, however, companies have the most information about the value they can extract from our data. They also have a vested interest in setting that value as low as possible. Legislation in Oregon to value health data would have allowed companies to set that value, leaving little chance that consumers would get anywhere near a fair shake. Even if a third party, such as a government panel, were tasked with setting a value, the companies would still be the primary sources of information about how they plan to monetize data.

Privacy should not be a luxury. It should not be a bargaining chip. It should never have a price tag.

Which brings us to a second question: why and in what ways do companies value data? Data is the lifeblood of many industries. Some of that data is organized by consumer and then used to deliver targeted ads. But it’s also highly valuable to companies in the aggregate—not necessarily on an individual basis. That’s one reason why data collection can often be so voracious. A principal point of collecting data is to identify trends—to sell ads, to predict behavior, etc.— and it’s hard to do that without getting a lot of information. Thus, any valuation that focuses solely on individualized data, to the exclusion of aggregate data, will be woefully inadequate. This is another reason why individuals aren’t well-positioned to advocate for good prices for themselves.

Even for companies that make a lot of money, the average revenue per user may be quite small. For example, Facebook earned some $69 billion in revenue in 2019. For the year, it averaged about $7 revenue per user, globally, per quarter. Let’s say that again: Facebook is a massive, global company with billions of users, but each user only offers Facebook a modest amount in revenue. Profit per user will be much smaller, so there is no possibility that legislation will require companies to make payouts on a revenue-per-customer basis. As a result, the likely outcome of a data dividend law (even as applied to an extremely profitable company like Facebook) would be that each user receives, in exchange for their personal information over the course of an entire year, a very small piece of the pie—perhaps just a few dollars.
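A back-of-the-envelope sketch of that arithmetic, using only the figures cited above, makes the point concrete. The user count is implied by those figures rather than reported separately, and the payout share is a made-up assumption for illustration.

```python
# Back-of-the-envelope math using the figures cited above.
annual_revenue = 69e9            # ~$69 billion in revenue (2019)
revenue_per_user_quarter = 7.0   # ~$7 of revenue per user per quarter

# The user base implied by those two figures: roughly 2.5 billion people.
implied_users = annual_revenue / (revenue_per_user_quarter * 4)
print(f"implied users: {implied_users / 1e9:.1f} billion")

# Even a hypothetical dividend paying out 10% of revenue (an assumption made
# up for illustration) would amount to only a few dollars per person per year.
hypothetical_payout_share = 0.10
annual_dividend_per_user = revenue_per_user_quarter * 4 * hypothetical_payout_share
print(f"hypothetical annual dividend per user: ${annual_dividend_per_user:.2f}")
```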

Those small checks in exchange for intimate details about you are not a fairer trade than we have now. The companies would still have nearly unlimited power to do what they want with your data. That would be a bargain for the companies, who could then wipe their hands of concerns about privacy. But it would leave users in the lurch.

All that adds up to a stark conclusion: if where we’ve been is any indication of where we’re going, there won’t be much benefit from a data dividend. What we really need is stronger privacy laws to protect how businesses process our data—which we can, and should do, as a separate and more protective measure.

Whatever the Payout, The Cost Is Too High

And what do we lose by agreeing to a data dividend? We stand to lose a lot. Data dividends will likely be most attractive to those for whom even a small bit of extra money would do a lot. Those vulnerable people—low-income Americans and often communities of color—should not be incentivized to pour more data into a system that already exploits them and uses data to discriminate against them. Privacy is a human right, not a commodity. A system of data dividends would contribute to a society of privacy “haves” and “have-nots.”

Also, as we’ve said before, a specific piece of information can be priceless to a particular person and yet command a very low market price. Public information feeds a lot of the data ecosystem. But even non-public data, such as your location data, may cost a company less than a penny to buy—and cost you your physical safety if it falls into the wrong hands. Likewise, companies currently sell lists of 1,000 people with conditions such as anorexia, depression, and erectile dysfunction for $79 per list—or eight cents per listed person. Such information in the wrong hands could cause great harm.

There is no simple way to set a value for data. If someone asked how much they should pay you to identify where you went to high school, you’d probably give that up for free. But if a mortgage company uses that same data to infer that you’re in a population that is less likely to repay a mortgage—as a Berkeley study found was true for Black and Latinx applicants—it could cost you the chance to buy a home.

Pay-For-Privacy

Those who follow our work know that EFF also opposes “pay-for-privacy” schemes, referring to offers from a company to give you a discount on a good or service in exchange for letting them collect your information.

In a recent example of this, AT&T said it will introduce mobile plans that knock between $5 and $10 off people’s phone bills if they agree to watch more targeted ads on their phone.  “I believe there’s a segment of our customer base where, given a choice, they would take some load of advertising for a $5 or $10 reduction in their mobile bill,” AT&T Chief Executive Officer John Stankey said to Reuters in September.

Again, there are people for whom $5 or $10 per month would go a long way to make ends meet. That also means, functionally, that similar plans would prey on those who can’t afford to protect themselves. We should be enacting privacy policies that protect everyone, not exploitative schemes that treat lower-income people as second-class citizens.

Pay-for-privacy and data dividends are two sides of the same coin. Some data dividend proponents, such as former presidential candidate Andrew Yang, draw a direct line between the two. Once you recognize that data have some set monetary value, as schemes such as AT&T’s do, it paves the way for data dividends. EFF opposes both of these ideas, as both would lead to an exchange of data that would endanger people and commodify privacy.

It Doesn’t Have to Be Like This

Advocacy of a data dividend—or pay-for-privacy— as the solution to our privacy woes admits defeat. It yields to the incorrect notion that privacy is dead, and is worth no more than a coin flipped your way by someone who holds all the cards.

It undermines privacy to encourage people to accept the scraps of an exploitative system. This further lines the pockets of those who already exploit our data, and exacerbates unfair treatment of people who can’t afford to pay for their basic rights.

There is no reason to concede defeat to these schemes. Privacy is not dead—theoretically or practically, despite what people who profit from abusing your privacy want you to think. As Dipayan Ghosh has said, privacy nihilists ignore a key part of the data economy, “[your] behavioral data are temporally sensitive.” Much of your information has an expiration date, and companies that rely on it will always want to come back to the well for more of it. As the source, consumers should have more of the control.

That’s why we need to change the system and redress the imbalance. Consumers should have real control over their information, and ways to stand up and advocate for themselves. EFF’s top priorities for privacy laws include granting every person the right to sue companies for violating their privacy, and prohibiting discrimination against those who exercise their rights.

It’s also why we advocate strongly for laws that make privacy the default—requiring companies to get your opt-in consent before using your information, and to minimize how they process your data to what they need to serve your needs. That places meaningful power with the consumer — and gives you the choice to say “no.” Allowing a company to pay you for your data may sound appealing in theory. In practice, unlike in meaningful privacy regimes, it would strip you of choice, hand all your data to the companies, and give you pennies in return.

Data dividends run down the wrong path to exercising control, and would dig us deeper into a system that reduces our privacy to just another cost of doing business. Privacy should not be a luxury. It should not be a bargaining chip. It should never have a price tag.

Categories
cell phones Intelwars Law Enforcement privacy Surveillance tracking

IMSI-Catchers from Canada

Gizmodo is reporting that Harris Corp. is no longer selling Stingray IMSI-catchers (and, presumably, its follow-on models Hailstorm and Crossbow) to local governments:

L3Harris Technologies, formerly known as the Harris Corporation, notified police agencies last year that it planned to discontinue sales of its surveillance boxes at the local level, according to government records. Additionally, the company would no longer offer access to software upgrades or replacement parts, effectively slapping an expiration date on boxes currently in use. Any advancements in cellular technology, such as the rollout of 5G networks in most major U.S. cities, would render them obsolete.

The article goes on to talk about replacement surveillance systems from the Canadian company Octasic.

Octasic’s Nyxcell V800 can target most modern phones while maintaining the ability to capture older GSM devices. Florida’s state police agency described the device, made for in-vehicle use, as capable of targeting eight frequency bands including GSM (2G), CDMA2000 (3G), and LTE (4G).

[…]

A 2018 patent assigned to Octasic claims that Nyxcell forces a connection with nearby mobile devices when its signal is stronger than the nearest legitimate cellular tower. Once connected, Nyxcell prompts devices to divulge information about their signal strength relative to nearby cell towers. These reported signal strengths (intra-frequency measurement reports) are then used to triangulate the position of a phone.

Octasic appears to lean heavily on the work of Indian engineers and scientists overseas. A self-published biography of the company notes that while the company is headquartered in Montreal, it has “R&D facilities in India,” as well as a “worldwide sales support network.” Nyxcell’s website, which is only a single page requesting contact information, does not mention Octasic by name. Gizmodo was, however, able to recover domain records identifying Octasic as the owner.
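The patent language above describes a generic technique: once a phone reports how strongly it hears several transmitters, those signal strengths can be converted into rough distance estimates and intersected to locate the handset. The sketch below illustrates that idea with a simple log-distance path-loss model and a brute-force least-squares search; the constants and positions are invented for illustration, and this is not a description of Octasic’s actual implementation.

```python
import numpy as np

# Illustrative constants for a log-distance path-loss model (not Octasic's).
TX_POWER_DBM = -40.0       # assumed received power at 1 meter
PATH_LOSS_EXPONENT = 3.0   # assumed urban propagation exponent

def rssi_to_distance(rssi_dbm):
    """Convert a reported signal strength (dBm) into a rough distance (m)."""
    return 10 ** ((TX_POWER_DBM - rssi_dbm) / (10 * PATH_LOSS_EXPONENT))

# Known positions (meters) of three reference transmitters, and the signal
# strengths a target phone reported for each of them (made-up values).
towers = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0]])
reported_rssi = np.array([-110.0, -115.0, -112.0])
distances = rssi_to_distance(reported_rssi)

# Brute-force least-squares search over a coarse grid; a real system would
# use a proper nonlinear solver and better propagation models.
xs, ys = np.meshgrid(np.arange(0, 501, 5.0), np.arange(0, 501, 5.0))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
errors = ((np.linalg.norm(grid[:, None, :] - towers[None, :, :], axis=2)
           - distances) ** 2).sum(axis=1)
print("estimated phone position (m):", grid[errors.argmin()])
```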

Categories
Intelwars International Necessary and Proportionate privacy ¿Quién defiende tus datos?

Peru’s Third Who Defends Your Data? Report: Stronger Commitments from ISPs, But Imbalances and Gaps to Bridge

Hiperderecho, Peru’s leading digital rights organization, today launched its third ¿Quién Defiende Tus Datos? (Who Defends Your Data?) report, which seeks to hold telecom companies accountable for their users’ privacy. The new Peruvian edition shows improvements compared to 2019’s evaluation.

Movistar and Claro commit to requiring a warrant before handing over both users’ communications content and metadata to the government. The two companies also earned credit for defending users’ privacy in Congress or for challenging government requests; none scored any stars in this category last year. Claro stands out with detailed law enforcement guidelines, including an explanatory chart of the procedures the company follows when law enforcement requests communications data. However, Claro should be more specific about the type of communications data covered by the guidelines. All companies received full stars for their privacy policies, while only three did so in the previous report. Overall, Movistar and Claro are tied in the lead. Entel and Bitel lag behind, with the former holding a slight advantage.

Quien Defiende Tus Datos is part of a series across Latin America and Spain carried out in collaboration with EFF and inspired by our Who Has Your Back? project. This year’s edition evaluates the four largest Internet Service Providers (ISPs) in Peru: Telefónica-Movistar, Claro, Entel, and Bitel.

Hiperderecho assessed Peruvian ISPs on seven criteria concerning privacy policies, transparency, user notification, judicial authorization, defense of human rights, digital security, and law enforcement guidelines. In contrast to last year, the report adds two new categories: whether ISPs publish law enforcement guidelines, and whether companies commit to users’ digital security. The full report is available in Spanish, and here we outline the main results:

Regarding transparency reports, Movistar leads the way, earning a full star, while Claro receives a partial star. To earn credit, a report had to provide useful data about how many requests the company received and how many times it complied. It should also include details about the government agencies that made the requests and the authorities’ justifications. For the first time, Claro has provided statistical figures on government demands that require the “lifting of the secrecy of communication (LST).” However, Claro has failed to clarify which types of data (IP addresses and other technical identifiers) are protected under this legal regime. Since Peru’s Telecommunications Law and its regulation protect both content and personal information obtained through the provision of telecom services under communications secrecy, we assume Claro might include both. Yet, as a best practice, the ISP should be more explicit about the types of data, including technical identifiers, protected under communications secrecy. As Movistar does, Claro should also break down its statistics on government requests into content interception and metadata.

Movistar and Claro have published their law enforcement guidelines. While Movistar only released a general global policy applicable to its subsidiaries, Claro stands out with detailed guidelines for Peru, including an explanatory chart for the company’s procedures before law enforcement requests for communications data. On the downside, the document broadly refers to “lifting the secrecy of communication” requests without defining what it entails. It should give users greater insight into which kind of data is included in the outlined procedures and whether they are mostly focused on authorities’ access to communications content or refer to specific metadata requests.

Entel, Bitel, Claro, and Movistar have published easy-to-understand privacy policies applicable to their services. All of the ISPs’ policies provide information about the data collected (such as name, address, and records related to the service provision) and the cases in which the company shares personal data with third parties. Claro and Movistar receive full credit in the judicial authorization category for having policies or other documents indicating their commitment to request a judicial order before handing over communications data unless the law mandates otherwise. Similarly, Entel states that it shares users’ data with the government in compliance with the law. Peruvian law grants the specialized police investigation unit the power to request access to metadata from telecom operators in specific emergencies set by Legislative Decree 1182, with subsequent judicial review.

Latin American countries still have a long way to go in shedding light on government surveillance practices. Publishing meaningful transparency reports and law enforcement guidelines are two critical measures that companies should commit to. User notification is the third. In Peru, none of the ISPs have committed to notifying users of a government request at the earliest moment allowed by law. Yet Movistar and Claro have provided further information on their reasons and their interpretation of the law behind this refusal.

In the digital security category, all companies have received credit for using HTTPS on their websites and for providing secure methods, such as two-step authentication, in their online channels. All companies but Bitel have scored for the promotion of human rights. While Entel receives a partial score for joining local multi-stakeholder forums, Movistar and Claro earn full stars in this category. Among other actions, Movistar has sent comments to Congress in favor of users’ privacy, and Claro has challenged a disproportionate request issued by the country’s tax administration agency (SUNAT) before Peru’s data protection authority.

We are glad to see that Peru’s third report shows significant progress, but much remains to be done to protect users’ privacy. Entel and Bitel have to catch up with the larger regional providers, and Movistar and Claro can go further to complete their chart of stars. Hiperderecho will remain vigilant through its ¿Quién Defiende Tus Datos? reports.

Categories
Commentary Digital Rights and the Black-led Movement Against Police Violence Intelwars privacy

Members of Congress Join the Fight for Protest Surveillance Transparency

Three members of Congress have joined the fight for the right to protest by sending a letter asking the Privacy and Civil Liberties Oversight Board (PCLOB) to investigate federal surveillance of protesters. We commend these elected officials for doing what they can to help ensure our constitutional right to protest and for taking the interests and safety of protesters to heart.

It often takes years, if not longer, to learn the full scope of government surveillance used against demonstrators involved in a specific action or protest movement. Four months since the murder of George Floyd began a new round of Black-led protests against police violence, there has been a slow and steady trickle of revelations about law enforcement agencies deploying advanced surveillance technology at protests around the country. For example, we learned recently that the Federal Bureau of Investigation sent a team specializing in cellular phone exploitation to Portland, site of some of the largest and most sustained protests. Before that, we learned about federal, state, and local aerial surveillance conducted over protests in at least 15 cities. Now, Rep. Anna Eshoo, Rep. Bobby Rush, and Sen. Ron Wyden have asked the PCLOB to dig deeper.

The PCLOB is an independent agency in the executive branch, created in 2004, which undertakes far-ranging investigations into issues related to privacy and civil liberties, including mass surveillance of the internet and cellular communications, facial recognition technology at airports, and terrorism watchlists. In addition to asking the PCLOB to investigate who used what surveillance where, and how it negatively impacted the First Amendment right to protest, the trio of Eshoo, Rush, and Wyden asks the PCLOB to investigate and enumerate the legal authorities under which agencies are surveilling protests and whether agencies have followed required processes for use of intelligence equipment domestically. The letter continues:

“PCLOB should investigate what legal authorities federal agencies are using to surveil protesters to help Congress understand if agencies’ interpretations of specific provisions of federal statutes or of the Constitution are consistent with congressional intent. This will help inform whether Congress needs to amend existing statutes or consider legislation to ensure agency actions are consistent with congressional intent.”

We agree with these politicians that government surveillance of protesters is a threat to all of our civil liberties and an affront to a robust, active, and informed democracy. With a guarantee of more protests to come in the upcoming weeks and months, Congress and the PCLOB board must act swiftly to protect our right to protest, investigate how much harm government surveillance has caused, and identify  illegal behavior by these agencies.

In the meantime, if you plan on protesting, make sure you’ve reviewed EFF’s surveillance self-defense guide for protesters.

Categories
anonymity Intelwars International Privacy Standards Necessary and Proportionate Policy Analysis privacy

Augmented Reality Must Have Augmented Privacy

Imagine walking down the street, looking for a good cup of coffee. In the distance, a storefront glows in green through your smart glasses, indicating a well-reviewed cafe with a sterling public health score. You follow the holographic arrows to the crosswalk, as your wearables silently signal the self-driving cars to be sure they stop for your right of way. In the crowd ahead you recognize someone, but can’t quite place them. A query and response later, “Cameron” pops above their head, along with the context needed to remember they were a classmate from university. You greet them, each of you glad to avoid the awkwardness of not recalling an acquaintance. 

This is the stuff of science fiction, sometimes utopian, but often a warning against dystopia. Lurking in every gadget that can enhance your life is a danger to privacy and security. In either case, augmented reality is coming closer to being an everyday reality.

In 2013, Google Glass stirred a backlash, but the promise of augmented reality bringing 3D models and computer interfaces into the physical world (while recording everything in the process) is re-emerging, and so is the public outcry over privacy and “always-on” recording. Seven years later, companies are still pushing augmented reality glasses, which display digital images and data in the wearer’s field of view. The Chinese company Nreal, Facebook, and Apple are all experimenting with similar technology.

Digitizing the World in 3D

Several technologies are converging to create a live map of different parts of our world, from Augmented or Virtual Reality to autonomous vehicles. They are creating “machine-readable, 1:1 scale models” of the world that are continuously updated in real time. Some implement such models through point clouds: datasets of points, produced by a scanner, that recreate the surfaces (not the interiors) of objects or a space. Each point has three coordinates positioning it in space. To make sense of the millions (or billions) of points, software using machine learning can recognize the objects in the point clouds, which together look like a digital replica of the world, or a map of your house and everything inside it.
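Concretely, a point cloud is just a long list of 3D coordinates, which is part of what makes it so easy to store, merge, and mine. Here is a minimal sketch of that representation; the scan values are made up, and real pipelines use far more sophisticated segmentation and object-recognition models.

```python
import numpy as np

# A point cloud is just an (N, 3) array of x, y, z coordinates, one row per
# surface point returned by a scanner. These values are made up.
points = np.array([
    [0.02, 1.10, 0.35],
    [0.03, 1.12, 0.36],
    [2.41, 0.08, 0.90],
    [2.43, 0.07, 0.91],
])

# Even without machine learning, simple geometry already reveals the layout of
# a scanned space: its overall extent, and clusters of points that correspond
# to individual surfaces or objects.
mins, maxs = points.min(axis=0), points.max(axis=0)
print("bounding box:", mins, "to", maxs)

# Crude clustering by snapping points to a 1-meter voxel grid.
occupied_voxels = np.unique(np.floor(points), axis=0)
print("occupied 1 m voxels:", occupied_voxels)
```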

The promise of creating a persistent 3D digital clone of the world, aligned with real-world coordinates, goes by many names: “world’s digital twin,” “parallel digital universe,” “Mirrorworld,” “The Spatial Web,” “Magic Verse,” or a “Metaverse.” Whatever you call it, this new parallel digital world will introduce a new world of privacy concerns, even for those who choose never to wear an AR device. For instance, Facebook Live Maps will seek to create a shared virtual map. LiveMaps will rely on users’ crowd-sourced maps collected by future AR devices with client-mapping functionality. Open AR, an interoperable AR Cloud, and Microsoft’s Azure Digital Twins are seeking to model and create a digital representation of an environment.

Facebook’s Project Aria continues that trend, and will aid Facebook in recording live 3D maps and developing AI models for Facebook’s first generation of wearable augmented reality devices. Aria’s uniqueness, in contrast to autonomous cars, is its “egocentric” data collection of the environment: the recorded data will come from the wearer’s perspective, a more “intimate” type of data. Project Aria is a 3D live-mapping tool and software with an AI development tool, not a prototype of a product, nor an AR device, since it lacks a display. Aria’s research glasses, which are not for sale, will be worn only by trained Facebook staffers and contractors to collect data from the wearer’s point of view. For example, if the AR wearer records a building and the building later burns down, the next time any AR wearer walks by, the device can detect the change and update the 3D map in real time.

A Portal to Augmented Privacy Threats

In terms of sensors, Aria’s glasses will include, among others, a magnetometer, a barometer, a GPS chip, and two inertial measurement units (IMUs). Together, these sensors will track where the wearer is (location), where the wearer is moving (motion), and what the wearer is looking at (orientation), a much more precise way of pinpointing the wearer’s position. While GPS often doesn’t work inside a building, for example, a sophisticated IMU can let a device keep estimating its position indoors when GPS signals are unavailable.
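A crude sketch of why the IMUs matter indoors: even without GPS, integrating acceleration twice yields a position estimate relative to the last known fix. This dead-reckoning illustration uses made-up values and drifts quickly in practice; real systems correct it with filtering and map constraints.

```python
import numpy as np

# Dead reckoning from IMU samples: integrate acceleration into velocity, and
# velocity into position, starting from the last known GPS fix. The values
# are made up; real systems fuse this with filtering to control drift.
dt = 0.01                                      # 100 Hz sample rate
accel = np.tile([0.2, 0.0, 0.0], (500, 1))     # 5 s of gentle forward push (m/s^2)

position = np.zeros(3)                         # last known fix
velocity = np.zeros(3)
for a in accel:
    velocity = velocity + a * dt
    position = position + velocity * dt

print("estimated displacement (m):", position)  # roughly 2.5 m forward
```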

A machine learning algorithm will build a model of the environment, based on all the input data collected by the hardware, to recognize specific objects and build a 3D map of your space and the things in it. It can estimate distances, for instance how far the wearer is from an object. It can also identify the wearer’s context and activities: Are you reading a book? Your device might then offer you a reading recommendation.

The Bystanders’ Right to Private Life

Imagine a future where anyone you see wearing glasses could be recording your conversations with “always on” microphones and cameras, updating the map of where you are in precise detail and real-time. In this dystopia, the possibility of being recorded looms over every walk in the park, every conversation in a bar, and indeed, everything you do near other people. 

During Aria’s research phase, Facebook will be recording its own contractors’ interactions with the world. It is taking certain precautions. It asks for the owner’s permission before recording in privately owned venues such as a bar or restaurant. It avoids sensitive areas, like restrooms and protests. It blurs people’s faces and license plates. Yet there are still many other ways to identify individuals, from tattoos to people’s gait, and these should be obfuscated, too.

These blurring protections mirror those used by other public mapping mechanisms like Google Street View. These have proven reasonable—but far from infallible—in safeguarding bystanders’ privacy. Google Street View also benefits from focusing on objects, which only need occasional recording. It’s unclear if these protections remain adequate for perpetual crowd-sourced recordings, which focus on human interactions. Once Facebook and other AR companies release their first generation of AR devices, it will likely take concerted efforts by civil society to keep obfuscation techniques like blurring in commercial products. We hope those products do not layer robust identification technologies, such as facial recognition, on top of the existing AR interface. 
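For a sense of what the blurring protections discussed above involve in practice, here is a minimal sketch using OpenCV’s bundled Haar-cascade face detector and a Gaussian blur; the input filename is a placeholder for one frame of a recording. Production systems use far stronger detectors and, as noted above, face blurring alone does nothing about gait, tattoos, or other identifiers.

```python
import cv2

# Minimal face-blurring sketch: detect faces with OpenCV's bundled Haar
# cascade and blur each detected region. Real obfuscation pipelines use far
# more robust detectors and also handle plates, tattoos, gait, and more.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("frame.jpg")  # placeholder: one frame from a recording
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    region = image[y:y + h, x:x + w]
    image[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)

cv2.imwrite("frame_blurred.jpg", image)
```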

The AR Panopticon

If AR glasses with “always-on” microphones and cameras or powerful 3D-mapping sensors become widely adopted, the scope and scale of the problem change as well. The company behind any AR system could then have a live audio/visual window into all corners of the world, with the ability to locate and identify anyone at any time, especially if facial or other recognition technologies are included in the package. The result? A global panopticon society of constant surveillance in public or semi-public spaces.

In modern times, the panopticon has become a metaphor for a dystopian surveillance state, where the government has cameras observing your every action. Worse, you never know if you are a target, as law enforcement looks to new technology to deepen their already rich ability to surveil our lives.

Legal Protection Against Panopticon

To fight back against this dystopia, and especially government access to this panopticon, our first line of defense in the United States is the Constitution. Around the world, we all enjoy the protection of international human rights law. Last week, we explained how police need to come back with a warrant before conducting a search of virtual representations of your private spaces. While AR measuring and modeling in public and semi-public spaces is different from private spaces, key Constitutional and international human rights principles still provide significant legal protection against police access. 

In Carpenter v. United States, the U.S. Supreme Court recognized the privacy challenges with understanding the risks of new technologies, warning courts to “tread carefully …  to ensure that we do not ‘embarrass the future.’” 

To not embarrass the future, we must recognize that throughout history people have enjoyed effective anonymity and privacy when conducting activities in public or semi-public spaces. As the United Nations’ Free Speech Rapporteur made clear, anonymity is a “common human desire to protect one’s identity from the crowd…” Likewise, the Council of Europe has recognized that while any person moving in public areas may expect a lesser degree of privacy, “they do not and should not expect to be deprived of their rights and freedoms including those related to their own private sphere.” Similarly, the European Court of Human Rights, has recognized that a “zone of interaction of a person with others, even in a public context, may fall within the scope of “private life.” Even in public places, the “systematic or permanent recording and the subsequent processing of images could raise questions affecting the private life of individuals.” Over forty years ago, in Katz v. United States, the U.S. Supreme Court also recognized “what [one] seeks to preserve as private, even in an area accessible to the public, may be constitutionally protected.” 

This makes sense because the natural limits of human memory make it difficult to remember details about people we encounter in the street; which effectively offers us some level of privacy and anonymity in public spaces. Electronic devices, however, can remember perfectly, and collect these memories in a centralized database to be potentially used by corporate and state actors. Already this sense of privacy has been eroded by public camera networks, ubiquitous cellphone cameras, license plate readers, and RFID trackers—requiring legal protections. Indeed, the European Court of Human Rights requires “clear detailed rules…, especially as the technology available for use [is] continually becoming more sophisticated.” 

If smartglasses become as common as smartphones, we risk losing even more of the privacy of crowds. Far more thorough records of our sensitive public actions, including going to a political rally or protest, or even going to a church or a doctor’s office, can go down on your permanent record. 

This technological problem was brought to the modern era in United States v. Jones, where the Supreme Court held that GPS tracking of a vehicle was a search, subject to the protection of the Fourth Amendment. Jones was a convoluted decision, with three separate opinions supporting this result. But within the three were five Justices – a majority – who ruled that prolonged GPS tracking violated Jones’ reasonable expectation of privacy, despite Jones driving in public where a police officer could have followed him in a car. Justice Alito explained the difference, in his concurring opinion (joined by Justices Ginsburg, Breyer, and Kagan):

In the pre-computer age, the greatest protections of privacy were neither constitutional nor statutory, but practical. Traditional surveillance for any extended period of time was difficult and costly and therefore rarely undertaken. … Only an investigation of unusual importance could have justified such an expenditure of law enforcement resources. Devices like the one used in the present case, however, make long-term monitoring relatively easy and cheap.

The Jones analysis recognizes that police use of automated surveillance technology to systematically track our movements in public places upsets the balance of power protected by the Constitution and violates the societal norms of privacy that are fundamental to human society.  

In Carpenter, the Supreme Court extended Jones to tracking people’s movement through cell-site location information (CSLI). Carpenter recognized that “when the Government tracks the location of a cell phone it achieves near perfect surveillance as if it had attached an ankle monitor to the phone’s user.”  The Court rejected the government’s argument that under the troubling “third-party doctrine,” Mr. Carpenter had no reasonable expectation of privacy in his CSLI because he had already disclosed it to a third party, namely, his phone service provider. 

AR is Even More Privacy Invasive Than GPS and CSLI

Like GPS devices and CSLI, AR devices are an automated technology that systematically documents what we are doing, so AR triggers strong Fourth Amendment protection. Of course, ubiquitous AR devices will provide even more perfect surveillance than GPS and CSLI, not only tracking the user's own information but also opening a telling window into the lives of all the bystanders around the user. 

With enough smartglasses in a location, one could create a virtual time machine to revisit that exact moment in time and space. This is the very thing that concerned the Carpenter court:

the Government can now travel back in time to retrace a person’s whereabouts, subject only to the retention policies of the wireless carriers, which currently maintain records for up to five years. Critically, because location information is continually logged for all of the 400 million devices in the United States — not just those belonging to persons who might happen to come under investigation — this newfound tracking capacity runs against everyone.

Likewise, the Special Rapporteur on the Protection of Human Rights explained that a collect-it-all approach is incompatible with the right to privacy:

Shortly put, it is incompatible with existing concepts of privacy for States to collect all communications or metadata all the time indiscriminately. The very essence of the right to the privacy of communication is that infringements must be exceptional, and justified on a case-by-case basis.

AR is location tracking on steroids. AR can be enhanced with overlays such as facial recognition, transforming smartglasses into a powerful identification tool capable of delivering a rich and instantaneous profile of any random person on the street to the wearer, to a massive database, and to any corporate or government agent (or data thief) who can access that database. With additional emerging and unproven visual analytics (everything from aggression analysis to lie detection based on facial expressions is being proposed), this technology poses a truly staggering threat of surveillance and bias. 

Thus, the need for such legal safeguards, as required in Canada v. European Union, is "all the greater where personal data is subject to automated processing. Those considerations apply particularly where the protection of the particular category of personal data that is sensitive data is at stake." 

Augmented reality will expose our public, social, and inner lives in a way that may be even more invasive than the smartphone's "revealing montage of the user's life" that the Supreme Court protected in Riley v. California. Thus it is critical for courts, legislators, and executive officers to recognize that the government cannot access the records generated by AR without a warrant.

Corporations Can Invade AR Privacy, Too

Even more must be done to protect against a descent into AR dystopia. Manufacturers and service providers must resist the urge, all too common in Silicon Valley, to "collect it all" in case the data may be useful later. Instead, the less data companies collect and store now, the less data the government can seize later. 

This is why tech companies should protect not only their users' right to privacy against government surveillance but also their users' right to data protection. Companies must, therefore, collect, use, and share their users' AR data only as minimally necessary to provide the specific service their users asked for. Companies should also limit the amount of data transmitted to the cloud and the period for which it is retained, while investing in robust security and strong encryption, with user-held keys, to give users control over the information collected. Moreover, we need strong transparency policies, explicitly stating the purposes for and means of data processing, and allowing users to securely access and port their data. 
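As one concrete illustration of the user-held-keys point, here is a minimal sketch of encrypting AR data on the device before it is ever uploaded, using the widely available Python cryptography library. The function names and data are hypothetical, and a real deployment would pair this with proper key management in a secure keystore; this is an editorial sketch, not any vendor's actual pipeline.

# Minimal sketch: encrypt AR sensor data on-device with a user-held key
# before anything is sent to a server. Hypothetical example only.
# Requires the third-party "cryptography" package.
from cryptography.fernet import Fernet

def make_user_key() -> bytes:
    # The key stays with the user (for example, in the device's keystore);
    # the service provider never sees it.
    return Fernet.generate_key()

def encrypt_for_upload(user_key: bytes, spatial_map: bytes) -> bytes:
    # Only ciphertext leaves the device, so a later demand to the provider
    # yields data the provider itself cannot read.
    return Fernet(user_key).encrypt(spatial_map)

if __name__ == "__main__":
    key = make_user_key()
    blob = encrypt_for_upload(key, b"room mesh + object labels")
    # The plaintext is recoverable only with the user's key.
    assert Fernet(key).decrypt(blob) == b"room mesh + object labels"
    print(len(blob), "bytes of ciphertext uploaded")

The design choice matters legally as well as technically: if only the user holds the key, the provider has nothing intelligible to hand over, and the warrant requirement discussed above is reinforced by the architecture itself.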

Likewise, legislatures should look to the augmented reality future and augment our protections against government and corporate overreach. Congress gave extra protection to phone calls by passing the Wiretap Act in 1968, and expanded statutory protections to email and subscriber records in 1986 with the Electronic Communications Privacy Act. Many jurisdictions have eavesdropping laws that require all-party consent before recording a conversation. Likewise, hidden camera and paparazzi laws can limit taking photographs and recording videos, even in places open to the public, though they are generally silent on the advanced surveillance possible with technologies like spatial mapping. Modernization of these statutory privacy safeguards, with new laws like CalECPA, has taken a long time and remains incomplete. 

Through strong policy, robust transparency, wise courts, modernized statutes, and privacy-by-design engineering, we can and must have augmented reality with augmented privacy. The future is tomorrow, so let’s make it a future we would want to live in.

Categories
Intelwars Legislative Analysis Necessary and Proportionate privacy Surveillance and Human Rights

Latin American Governments Must Commit to Surveillance Transparency

This post is the second in a series about our new State of Communications Privacy Laws report, a set of questions and answers about privacy and data protection in eight Latin American countries and Spain. The series’ first post was “A Look-Back and Ahead on Data Protection in Latin America and Spain.” The reports cover Argentina, Brazil, Chile, Colombia, Mexico, Paraguay, Panama, Peru, and Spain.

Although the full extent of government surveillance technology in Latin America remains mostly unknown, media reports have revealed multiple scandals. Intelligence and law enforcement agencies have deployed powerful spying tools in Latin American presidential politics and used them against political adversaries, opposition journalists, lawmakers, dissident groups, judges, activists, and unions. These tools have also been used to glean embarrassing or compromising information on political targets. All too often, Latin America’s weak democratic institutions have failed to prevent such government abuse of power.

High Tech Spying in Latam, Past and Present

Examples abound in Latin America of documented government abuses of surveillance technologies. Surveillance rose to public prominence in Peru in the 1990s with a scandal involving the former director of the Intelligence Agency and former President Fujimori. Fujimori’s conviction marked the first time in history that a democratically elected president had been tried and found guilty in his own country for human rights abuses, including illegal wiretapping of opposition figures’ phones. In the 2000s, the Colombian intelligence agency (DAS) was caught wiretapping political opponents. Ricardo A. Martinelli, Panama’s President from 2009 to 2014, was accused of using the spyware “Pegasus” to snoop on political opponents, union leaders, and journalists. (A court last year rejected illegal wiretapping charges against him because of “reasonable doubts”.) 

In Chile in 2017, civil society worked to grasp how the Intelligence Directorate of Chile's Carabineros (Dipolcar and its special intelligence unit) had "intercepted" the encrypted WhatsApp and Telegram messages of eight Mapuche indigenous community leaders. These leaders had been detained as part of the Huracán Operation. Carabineros shifted its explanation of how it had procured the messages: it first claimed generic "interception of messages," but it later emerged that a keylogger and other malicious software had been used to plant fake evidence. Expert examinations within a Prosecutor's Office investigation and the report of a Congressional investigative committee concluded that the evidence was fabricated. The Huracán Operation also involved fraudulent manipulation of seized devices and communications obtained without proper judicial authorization. This is but one of many abuses by Chilean authorities against the Mapuche.

Leaked U.S. diplomatic cables showed collaboration in communications surveillance between the U.S. Drug Enforcement Administration and Latin American governments such as Paraguay and Panama. This included cooperation between the U.S. government and Paraguayan telecom companies.

History repeats itself. Just a few weeks ago, a report revealed that between 2012 and 2018, the government of Mexico City operated an intelligence center that targeted political adversaries, including the current Mexican President and the current mayor of Mexico City. Likewise, Brazilians learned just a few weeks ago about Cortex, the surveillance program of the Ministry of Justice's Integrated Operations Secretariat (SEOPI), created to fight "organized crime." The Intercept Brazil revealed that Cortex integrates automated license plate readers (ALPRs) with other databases such as Rais, the Ministry of Economy's labor database. Indeed, Cortex reportedly cross-references Rais records about employee "address, dependents, salary, and position" with location data obtained from 6,000 ALPRs in at least 12 Brazilian states. According to the Intercept's anonymous source, around 10,000 intelligence and law enforcement agents have access to the system. The context of this new revelation recalls a previous scandal involving the same Ministry of Justice secretariat. In July of this year, Brazil's Supreme Court ordered the Ministry of Justice to halt SEOPI's intelligence-gathering against political opponents, after SEOPI had compiled an intelligence dossier about police officers and teachers linked to the opposition movement. The Ministry of Justice dismissed SEOPI's intelligence director.

Sunlight is the Best Disinfectant

The European Court of Human Rights has held that “a system of secret surveillance set up to protect national security may undermine or even destroy democracy under the cloak of defending it.” In a recent report, the Inter-American Commission’s Free Expression Rapporteur reinforced the call for transparency. The report stresses that people should, at least, have information on the legal framework for surveillance and its purpose; the procedures for the authorization, selection of targets, and handling of data in surveillance programs; the protocols adopted for the exchange, storage, and destruction of the intercepted material; and which government agencies are in charge of implementing and supervising those programs. 

Transparency is vital for accountability and democracy. Without it, civil society cannot even begin to check government overreach. Surveillance powers and the interpretation of such laws must always be on the public record. The law must compel the State to provide rigorous reporting and individual notification. The absence of such transparency violates human rights standards and the rule of law. Transparency is all the more critical where, for operational reasons, certain aspects of the system remain secret. 

Secrecy prevents meaningful public policy debates on these matters of extreme importance: the public can’t respond to abuses of power if it can’t see them. There are many methods states and communication companies can implement to increase transparency about privacy, government data access requests, and legal surveillance authorities.

Policy Recommendations

States should publish transparency reports of law enforcement demands to access customers’ information.
The UN Special Rapporteur on free expression has called upon States to disclose general information about the number of requests for interception and surveillance that have been approved and rejected. Such disclosure should include a breakdown of demands by service provider, type of investigation, number of affected persons, and period covered, among other factors. Unfortunately, a culture of secrecy around states' transparency reporting remains a real problem in Latin America. 

Brazil and Mexico have regulations that compel agencies to publish transparency reports, and they do disclose statistical information. However, Argentina, Colombia, Chile, Paraguay, Peru, and Spain do not have a concrete law that requires them to do so, and in practice they do not post such reports. Of course, as the IACHR's Freedom of Expression Rapporteur has pointed out, the lack of a specific obligation to publish public interest data should not prevent States from publishing it. The Rapporteur states that the public has the right to access information about a surveillance agency's functions, activities, and management of public resources.

Mexico's Transparency Law requires governmental agencies to regularly disclose statistical information about data demands made to telecom providers for interceptions, access to communications records, and access to real-time location data. Agencies must also keep the data updated. 

Brazil’s decree 8.771/2016 obliges each federal agency to publish, on its website, yearly statistical reports about their requests for access to Internet users’ subscriber data. The statistical reports should include the number of demands, the list of ISPs and Internet applications from which data has been requested, the number of requests granted and rejected, and the number of users affected. Moreover, Brazil’s National Council of Justice created a public database with statistics on communications interceptions authorized by courts. The system breaks the data down per month and court in the following categories: number of started and ongoing requests, number of new and ongoing criminal procedures, number of monitored phones, number of monitored VOIP communications, and number of monitored electronic addresses.  

Companies should publish detailed statistical transparency reports regarding all government access to their customers’ data.
The legal frameworks in Argentina, Brazil, Colombia, Chile, Mexico, Peru, Panama, Paraguay, and Spain do not prohibit companies from publishing statistical data on government requests for user data and other investigative techniques used by authorities. But of the countries we studied, the only one where ISPs publish this information is Chile. Large and small Chilean ISPs have published transparency reports, including Telefonica, WOM!, VTR, Entel, and most recently GTD Manquehue. We haven't seen similar developments in other countries. While America Móvil (Claro) operates in all the Latam countries covered in our reports, only in Chile does it publish a report with statistical figures for government data requests.

Telefónica-Movistar is among the few companies to fully embrace transparency reporting throughout all the Latam countries where it operates. Others should follow. In Central America, Millicom-Tigo has generally issued consolidated data for Costa Rica, El Salvador, Guatemala, Honduras, and, more recently, Panama. This is less helpful, and it deviates from the general standard of publishing aggregate data per country rather than per multi-country region. The company does the same for South America, where it publishes consolidated statistical data for Bolivia, Colombia, and Paraguay. In 2018, Millicom-Tigo first followed the industry standard by posting aggregate data just for Colombia. 

AT&T publishes detailed data for the United States, but very little information for Latam countries, except for Mexico, where more data is available. The type of data requested by governments depends on the services AT&T provides in each country (whether it is broadband, mobile, or only TV and entertainment). AT&T should provide information like the number of rejected requests or the applicable legal framework for all the countries where it operates. 

In Spain, Orange published its latest report in 2018, while Ono Vodafone’s last report refers to 2016-2017 requests.

Many local Latam telcos have failed to publish transparency reports.

  • In Argentina: Cablevision, Claro, Telecom, Telecentro, and IPLAN.
  • In Brazil: Claro, Oi, Algar, and Nextel.
  • In Colombia: Claro and EMCALI.
  • In Panama: Cable & Wireless Panamá (Más Móvil), Claro, and Digicel.
  • In Perú: Claro, Entel, Olo, Bitel, and Inkacel.
  • In Paraguay: Claro, Personal, Copaco, Vox, and Chaco Comunicaciones.
  • In México: Telmex/Telcel (América Móvil), Axtel, Megacable, Izzi, and Totalplay. 

At a minimum, companies' transparency reports should disclose the number of government data requests per country, broken down by key types of data, the applicable legal authorities, and the number of requests challenged or denied. The reports we reviewed usually provide separate numbers for content and metadata, which is important. AT&T also includes real-time access to location data for Mexico. Telefónica and the Mexico section of AT&T's report disclose the number of rejected requests; Millicom doesn't provide this information. None of the reports distinguish criminal orders from national security requests; AT&T does so only for the United States. Reports should also allow readers to learn the number of affected users or devices; disclosing only the number of requests isn't enough, since one legal demand may refer to more than one customer or device. Telefónica indicates the number of accesses affected for both interception and metadata in Argentina, Brazil, Chile, Mexico, and Peru. In Spain, the system used by security forces to send judicial orders for metadata still doesn't allow this breakdown. And in Colombia, it's not even possible to count the number of interception requests for mobile lines. 
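To make the recommended breakdown concrete, here is a hypothetical sketch of what a single row of such a report could capture. The field and type names are our own illustration, and the example values are placeholders, not any carrier's actual figures or reporting format.

# Hypothetical schema for one row of a company transparency report,
# illustrating the minimum fields discussed above. Editorial sketch only.
from dataclasses import dataclass
from enum import Enum

class DataType(Enum):
    CONTENT = "communications content"
    METADATA = "metadata"
    REALTIME_LOCATION = "real-time location"

class RequestKind(Enum):
    CRIMINAL = "criminal order"
    NATIONAL_SECURITY = "national security request"

@dataclass
class TransparencyEntry:
    country: str                      # per country, never per multi-country region
    period: str                       # reporting period, e.g. a calendar year
    data_type: DataType
    request_kind: RequestKind
    legal_authority: str              # statute or order type invoked
    requests_received: int
    requests_rejected: int            # rejected or challenged by the company
    users_or_devices_affected: int    # one request can cover many customers

# Illustrative placeholder values, not real figures.
example = TransparencyEntry(
    country="Chile", period="2019",
    data_type=DataType.METADATA, request_kind=RequestKind.CRIMINAL,
    legal_authority="judicial order", requests_received=1200,
    requests_rejected=45, users_or_devices_affected=2100,
)

Publishing at this level of granularity would let readers compare countries, data types, and legal authorities directly, rather than inferring them from a single aggregate number.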

Of course, companies’ transparency reports depend on their knowledge of when surveillance takes place through their systems. Such knowledge is missing–and so transparency reporting is not possible–when police and other government agencies compel providers to give law enforcement direct access to their servers. The 2018 UN High Commissioner on Human Rights report recognized that such direct access systems are a serious concern; they are particularly prone to abuse and tend to circumvent critical procedural safeguards. According to Millicom’s report, direct access requirements to telecom companies’ mobile networks in Honduras, El Salvador, and Colombia prevent the ISPs from knowing how often or for what periods interception occurs. Millicom points out that a similar practice exists in Paraguay. Yet, in this case, Millicom states the procedures allow them to view the judicial order required for authorities to initiate the interception. 

Companies should publish guidelines for government agencies seeking users’ data. 
It is important for the public to know how police and other government agencies obtain customer data from service providers. To ensure public access to this information, providers should publish the request guidelines they give to government agencies. 

Chilean telecom companies publish their law enforcement guidelines. WOM and VTR detail the integrated systems and contact channels they use to receive government requests, and the information that requests and judicial orders should contain, such as the investigative targets and procedures. They break details down by type of interception (like emergency cases and deadline extension) and users’ information (such as traffic data).  

GTD Manquehue has a similar model but doesn't specify information related to urgent interceptions and extension requests. Claro also includes contact channels and some important requisites, particularly for traffic and other associated data. Entel doesn't indicate contact channels for data requests but goes beyond the others in explaining the applicable legal framework and the requirements orders must fulfill. In turn, Telefónica-Movistar's guidelines are vague in setting legal requirements but provide great detail about the kind of metadata and subscriber information authorities can access. 

Telefónica and Millicom have global guidelines for all law enforcement requests. They apply to their subsidiaries, which usually don't publish local specifications. While Telefónica's guidelines commit to relevant principles and display a flow chart for assessing government requests, Millicom outlines the five steps of its process for law enforcement assistance. Both give valuable insight into the companies' procedures. But they shouldn't supplant the release of more specific guidelines at the domestic level, showing how the global policies apply to local contexts and rules. 

Secret laws—about government access to data or anything else—are unacceptable.
Law is only legitimate if people know it exists and can act based on that knowledge. It allows people the fundamental fairness of understanding when they can expect privacy from the government and when they cannot.  As we’ve noted before, it avoids the Kafkaesque situations in which people, like Joseph K in The Trial, cannot figure out what they did that resulted in the government accessing their data. The UN Report on the Right to Privacy in the Digital Age states that “Secret rules … do not have the necessary qualities of ‘law’ … [a] law that is accessible, but that does not have foreseeable effects, will not be adequate.” The Council of Europe’s Parliamentary Assembly likewise has condemned the “extensive use of secret laws and regulations.”  

Yet the Peruvian guidelines on data sharing by ISPs with police have been declared "reserved information." In striking contrast, Peruvian wiretapping protocols are deemed public.  

Service providers should notify all their customers, wherever they live, when the government seeks their data. Such notice should be immediate, unless doing so would endanger the investigation.
The notification principle is essential to restrict improper data requests from the government to service providers. Before the revolution in electronic communication, the police had to knock on a suspect’s door and show their warrant. The person searched could observe whether the police searched or seized their written correspondence, and if they felt the intrusion was improper, ask a court to intervene. 

Electronic surveillance, however, is much more surreptitious. Data can be intercepted or acquired directly from telecom or Internet providers without the individual knowing. It is often impossible for people to know that their data has been accessed unless the evidence leads to criminal charges. As a result, the innocent are the least likely to discover the violation of their privacy rights. Indeed, new technologies have enabled covert remote searches of personal computers. Any delay in notification has to be justified to a court and tied to an actual danger to the investigation or harm to a person. The UN High Commissioner for Human Rights has recognized that users who have been subject to surveillance should be notified, at least ex post facto.

Peru and Chile provide the two best standards in the region to notify the persons affected.  Unfortunately, notification is often delayed. Peru’s Criminal Procedure Code allows informing the surveilled person once the access procedures are completed. The affected person may ask for judicial re-examination within three days of receiving notice. Such post-access notification is permitted only if the investigation scope allows it and does not endanger another person. 

Chilean law has a similar provision. The access procedure is secret by default. However, the state must notify the affected person after the interception has ended, if the investigative scope allows it and notice won’t jeopardize another person. If the data demand is secret, then the prosecutor must set a term of no more than 40 days, which may be extended one more time. 

Argentinian criminal law does not include any obligation or prohibition to inform the individual, not even when the access is over. The subject of the investigation may learn about the evidence used in a criminal proceeding. However, an individual may never know that the government accessed their data if it was not used by the prosecutor. 

There is no legal obligation in Brazil that compels either the State or companies to provide notice a priori. The Telephone Interception Law imposes a general secrecy requirement. Another statute authorizes the judge to determine secrecy issues. Companies could voluntarily notify the user if a gag order is not legally or judicially set, or subsequently after secrecy is lifted.  

In Spain, secrecy is the norm. This applies to interception of communications, malware, location tracking, and communications data access. The company obliged to carry out the investigative measures is sworn to secrecy on pain of criminal penalty. 

Freedom of Information Laws and investigative reporting are needed to shine a light on governmental data requests and secret surveillance. Whistleblowers’ legal protection is required, too.

States in the region are required to respond to public record requests and must provide information ex officio. The Inter-American Court recognizes that it is “essential that State authorities are governed by the principle of maximum disclosure, which establishes the presumption that all information is accessible, subject to a limited system of exceptions.” The Court also echoed the 2004 joint declaration by the rapporteurs on freedom of expression of the UN, the OAS, and the OSCE, in which it stipulated that “[p]ublic authorities should be required to publish proactively, even in the absence of a request, a range of information of public interest. Systems should be put in place to increase, over time, the amount of data subject to such routine disclosure.” 

The Mexican Transparency Law obliges governmental agencies to automatically disclose and update information about government access to company data. In contrast, the Peruvian Transparency Law only compels the State to disclose on request the information it creates or has in its possession, with certain exceptions. So if aggregate information on the details of data requests existed, it could be accessible through FOIA requests; but if it does not, the law does not require the agency to create a new record.

In Latin America, NGOs have used these public access laws to learn more about high tech surveillance in their countries.  In Argentina, ADC filed a FOIA request after the City of Buenos Aires announced it would deploy facial recognition software over its CCTV cameras’ infrastructure. Buenos Aires’ administration disclosed responsive information about the legal basis and purposes for implementing the technology, the government body responsible for its operation, and the software purchase. ODIA made further requests about the system’s technical aspects, and Access Now followed suit in Córdoba.

In the wake of revelations about the use of "Pegasus" malware to spy on journalists, activists, lawyers, and politicians in Mexico, the digital rights NGO R3D filed a FOIA request in 2017 seeking documents about the purchase of "Pegasus." After receiving part of the agreement, R3D challenged the decision of the country's Transparency and Data Protection Authority (INAI) to classify Pegasus' technical specifications and operation methods. In 2018, a judge overruled INAI's resolution, holding that serious human rights violations and acts of corruption must never be confidential information. 

In other countries, digital media have shed light on the number of government data access demands. For example, INFOBAE in Argentina published a story reporting the leaked number of interceptions and other statistical information. Another outlet in Chile revealed the number of telephone interception requests based on the public records law. 

The IACHR Rapporteur stresses the important role of investigative journalists and whistleblowers in its new Freedom of Expression report. The Rapporteur's recommendations underscore the need for legislation protecting the rights of journalists and others. The law should also protect their sources against direct and indirect exposure, including intrusion through surveillance. Whistleblowers who expose human rights violations or other wrongdoing should also receive legal protection against retaliation. 

Conclusions
Governments often confuse a need for secrecy in a specific investigation with an overarching reticence to describe a surveillance technology’s technical capabilities, legal authorities, and aggregate uses. But civil society’s knowledge of these technologies is crucial to public oversight and government accountability. Democracy cannot flourish and persist without the capacity to learn about and provide effective remedies to abuses and violations of privacy and other rights. 

Secrecy must be the exception, not the norm. It must be limited in time and strictly necessary and proportionate to protect specific legitimate aims. We still have a long way ahead in making transparency the norm. Government practices, state regulations, and companies’ actions must build upon the transparency principles set forth in our recommendations. 

Categories
Edward Snowden google Intelwars national security policy NSA privacy searches Surveillance

Google Responds to Warrants for “About” Searches

One of the things we learned from the Snowden documents is that the NSA conducts "about" searches. That is, searches based on activities and not identifiers. A normal search would be on a name, or IP address, or phone number. An about search would be something like "show me anyone that has used this particular name in a communication," or "show me anyone who was at this particular location within this time frame." These searches are legal when conducted for the purpose of foreign surveillance, but the worry about using them domestically is that they are unconstitutionally broad. After all, the only way to know who said a particular name is to know what everyone said, and the only way to know who was at a particular location is to know where everyone was. The very nature of these searches requires mass surveillance.
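To see why the two kinds of query differ so sharply, consider a toy sketch of each over a hypothetical message store. The data model and names are invented for illustration; the point is only that the "about" query must read everyone's content, while the identifier query does not.

# Sketch of an identifier-based search versus an "about" search over a toy
# message store. Hypothetical data for illustration only.
messages = [
    {"sender": "+1-555-0100", "body": "meet at the corner at noon"},
    {"sender": "+1-555-0101", "body": "did you see John Doe yesterday?"},
    {"sender": "+1-555-0102", "body": "invoice attached"},
]

def identifier_search(records, phone):
    # Conventional search: pull the records tied to one known identifier.
    return [m for m in records if m["sender"] == phone]

def about_search(records, name):
    # "About" search: to learn who mentioned a name, every message body
    # must be read, which is why the technique presupposes mass collection.
    return [m for m in records if name.lower() in m["body"].lower()]

print(identifier_search(messages, "+1-555-0100"))
print(about_search(messages, "john doe"))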

The FBI does not conduct mass surveillance. But many US corporations do, as a normal part of their business model. And the FBI uses that surveillance infrastructure to conduct its own about searches. Here’s an arson case where the FBI asked Google who searched for a particular street address:

Homeland Security special agent Sylvette Reynoso testified that her team began by asking Google to produce a list of public IP addresses used to google the home of the victim in the run-up to the arson. The Chocolate Factory [Google] complied with the warrant, and gave the investigators the list. As Reynoso put it:

On June 15, 2020, the Honorable Ramon E. Reyes, Jr., United States Magistrate Judge for the Eastern District of New York, authorized a search warrant to Google for users who had searched the address of the Residence close in time to the arson.

The records indicated two IPv6 addresses had been used to search for the address three times: one the day before the SUV was set on fire, and the other two about an hour before the attack. The IPv6 addresses were traced to Verizon Wireless, which told the investigators that the addresses were in use by an account belonging to Williams.

Google’s response is that this is rare:

While word of these sort of requests for the identities of people making specific searches will raise the eyebrows of privacy-conscious users, Google told The Register the warrants are a very rare occurrence, and its team fights overly broad or vague requests.

“We vigorously protect the privacy of our users while supporting the important work of law enforcement,” Google’s director of law enforcement and information security Richard Salgado told us. “We require a warrant and push to narrow the scope of these particular demands when overly broad, including by objecting in court when appropriate.

“These data demands represent less than one per cent of total warrants and a small fraction of the overall legal demands for user data that we currently receive.”

Here’s another example of what seems to be about data leading to a false arrest.

According to the lawsuit, police investigating the murder knew months before they arrested Molina that the location data obtained from Google often showed him in two places at once, and that he was not the only person who drove the Honda registered under his name.

Avondale police knew almost two months before they arrested Molina that another man, his stepfather, sometimes drove Molina's white Honda. On October 25, 2018, police obtained records showing that Molina's Honda had been impounded earlier that year after Molina's stepfather was caught driving the car without a license.

Data obtained by Avondale police from Google did show that a device logged into Molina’s Google account was in the area at the time of Knight’s murder. Yet on a different date, the location data from Google also showed that Molina was at a retirement community in Scottsdale (where his mother worked) while debit card records showed that Molina had made a purchase at a Walmart across town at the exact same time.

Molina’s attorneys argue that this and other instances like it should have made it clear to Avondale police that Google’s account-location data is not always reliable in determining the actual location of a person.

“About” searches might be rare, but that doesn’t make them a good idea. We have knowingly and willingly built the architecture of a police state, just so companies can show us ads. (And it is increasingly apparent that the advertising-supported Internet is heading for a crash.)

Categories
ALPR Cell Site Simulator Facial Recognition Fourth Amendment Intelwars privacy State Bills stingray Surveillance

Status Report: Nullifying the National Surveillance State

In 2014, the Tenth Amendment Center dove headfirst into the fight against unconstitutional federal surveillance when it spearheaded efforts to turn off the water at the NSA facility in Bluffdale, Utah, and to cut off other critical state and local services to other NSA facilities.

We haven’t turned off the water in Utah — yet. But we did win some victories. In 2014, California Gov. Jerry Brown signed SB828 into law, laying the foundation for the state to turn off water, electricity and other resources to any federal agency engaged in mass warrantless surveillance. In 2018, Michigan built on this foundation with the passage of HB4430. The new law prohibits the state and its political subdivisions from assisting, participating with, or providing “material support or resources, to a federal agency to enable it to collect, or to facilitate in the collection or use of a person’s electronic data,” without a warrant or under a few other carefully defined exceptions. 

Although NSA spying remains the most high-profile warrantless surveillance program, the federal government has created a national surveillance network that extends well beyond the operation of this single agency. In fact, state and local law enforcement have become vital cogs in the national surveillance state. 

State, local, and federal governments work together to conduct surveillance in many ways. As a result, efforts to protect privacy at the state and local level have a significant spillover effect at the national level.

While continuing efforts to cut off resources to NSA facilities in recent years, we also focused on other state-federal surveillance partnerships that feed into the national spy-state.

ALPR/License Plate Tracking

As reported in the Wall Street Journal, the federal government, via the Drug Enforcement Administration (DEA), tracks the location of millions of vehicles through data provided by automatic license plate readers (ALPRs) operated at the state and local level. It has engaged in this for nearly 10 years, all without a warrant, or even public notice of the policy.

Currently, six states have placed significant restrictions on the use of ALPRs. Activists are expected to push several states to consider similar restrictions in the next legislative session.

Facial Recognition and Biometric Surveillance

Facial recognition is the newest frontier in the national surveillance state. Over the last few years, the federal government has spearheaded a drive to expand the use of this invasive technology. At the same time, some state and local governments have aggressively pushed back.

In a nutshell, without state and local cooperation, the feds have a much more difficult time gathering information. Passage of laws banning or restricting the use of facial recognition eliminates one avenue for gathering biometric data. Simply put, data that doesn’t exist cannot be entered into federal databases.

In 2019, California enacted a law that prohibits a law enforcement agency or law enforcement official from installing, activating, or using any biometric surveillance system in connection with an officer camera or data collected by an officer camera. This includes body-worn and handheld devices. This new law had a significant impact. After its enactment, San Diego shut down one of the largest facial recognition programs in the country in order to comply with the law.

Washington state passed a bill that would require a warrant for ongoing and real-time facial recognition surveillance. The bill doesn't completely ban the use of facial recognition, and there is some concern about how police will interpret the statute, but it takes a good first step toward addressing the issue.

New York passed a bill that would place a moratorium on the use of facial recognition in schools. At the time of this report, it is awaiting Gov. Cuomo’s signature.

There have also been a large number of local facial recognition bans implemented in the last year, particularly in California and Massachusetts.

Stingrays and Electronic Data Collection

Cell site simulators, more commonly called “stingrays,” are portable devices used for cell phone surveillance and location tracking. They essentially spoof cell phone towers, tricking any device within range into connecting to the stingray instead of the cell tower, allowing law enforcement to sweep up all communications content within range of that tower. The stingray will also locate and track any person in possession of a phone or other electronic device that tries to connect to the tower.

In 2019, New Mexico barred warrantless stingray spying in its Electronic Communications Privacy Act. The law requires police to obtain a warrant or wiretap order before deploying a stingray device, unless they have the explicit permission of the owner or authorized possessor of the device, or if the device is lost or stolen.

In the 2020 session, New Mexico expanded protections under that 2019 law by limiting the retention and use of incidentally-collected data.

Also in 2020, the Maryland legislature passed a bill to ban warrantless stingray spying by adding provisions to existing statutes limiting warrantless location tracking through electronic devices. The bill addresses the use of cell-site simulators, requiring police to get a court order based on probable cause before deploying a stingray device. The bill also bars police from using a stingray to obtain communication content and spells out explicit criteria law enforcement must meet in order to justify such an order.

Two other states also expanded their restrictions on warrantless government access to electronic data last year.

Utah passed a bill expanding its electronic data protection by barring law enforcement agencies from accessing electronic information or data transmitted to a “remote computing service” without a warrant based on probable cause in most situations. In effect, it prohibits police from accessing information uploaded into the “cloud” without a warrant. The state previously prohibited both the use of stingrays and accessing data on a device without a warrant. 

Illinois also expanded its protection of electronic data in 2019. Under the old law, police were required to get a court order based on probable cause before obtaining a person’s current or future location information using a stingray or other means. The new law removes the words “current or future” from the statute. In effect, the law now includes historical location information under the court order requirement. 

This is an overview of the most recent moves to limit surveillance and chip away at the ever-growing national surveillance state. To get more details on state efforts to undermine government spying, along with other unconstitutional federal actions and programs, make sure you read our latest State of the Nullification Movement report. You can download it for free HERE.

Categories
Apple Intelwars ios privacy Security engineering

New Privacy Features in iOS 14

A good rundown.

Categories
Intelwars Legal Analysis privacy

Come Back with a Warrant for my Virtual House

Virtual Reality and Augmented Reality in your home can involve the creation of an intimate portrait of your private life. VR/AR headsets can capture audio and video of the inside of your house, telemetry about your movements, and depth data and images that can build a highly accurate geometric representation of your home, all generated by a simultaneous localization and mapping (SLAM) system that can map exactly where that mug sits on your coffee table. As Facebook's Reality Labs explains, their "high-accuracy depth capture system, which uses dots projected into the scene in infrared, serves to capture the exact shape of big objects like tables and chairs and also smaller ones, like the remote control on the couch." VR/AR providers can create "Replica re-creations of the real spaces that even a careful observer might think are real," which is both the promise of and the privacy problem with this technology.
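To make concrete how much a spatial map can reveal, here is a hypothetical sketch of the kind of record a home SLAM scan could produce. The field names, the sample object, and the figures are invented for illustration; no vendor's actual data format is implied.

# Editorial sketch of a record a home SLAM scan could produce.
# All names and numbers are hypothetical.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DetectedObject:
    label: str                               # e.g. "mug", "bookshelf", "prescription bottle"
    position_m: Tuple[float, float, float]   # location within the room, in meters
    confidence: float

@dataclass
class RoomScan:
    room_label: str
    captured_at: str                         # timestamp of the scan
    mesh_vertex_count: int                   # density of the reconstructed geometry
    objects: List[DetectedObject] = field(default_factory=list)

# Illustrative values only.
scan = RoomScan(
    room_label="living room",
    captured_at="2020-11-21T09:14:00Z",
    mesh_vertex_count=1_250_000,
    objects=[DetectedObject("mug", (1.2, 0.4, 0.8), 0.97)],
)

Even this stripped-down sketch shows why such records are so revealing: the geometry, the object labels, and the timestamps together describe what is in your home and when, down to the mug on the coffee table.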

If the government wants to get that information, it needs to bring a warrant. 

Nearly twenty years ago, the Supreme Court examined another technology that would allow law enforcement to look through your walls into the sanctity of your private space—thermal imaging. In Kyllo v. United States, the Court held that a thermal scan to monitor the heat emanating from your home, even when conducted from a public place outside the house, was a Fourth Amendment search and required a warrant. This was an important case, building upon earlier cases like United States v. Karo, which found a search when the remote activation of a beeper showed that a can of ether was inside a home. 

More critically, Kyllo established the principle that when new technologies1 "explore details of the home that would previously have been unknowable without physical intrusion, the surveillance is a 'search' and is presumptively unreasonable without a warrant." A VR/AR setup at home can provide a wealth of information—"details of the home"—that was previously unknowable without the police coming in through the door.

This is important, not just to stop people from seeing the dirty dishes in your sink, or the politically provocative books on your bookshelf.  The protection of your home from government intrusion is essential to preserve your right to be left alone, and to have autonomy in your thoughts and expression without the fear of Big Brother breathing down your neck. While you can choose to share your home with friends, family or the public, the ability to make that choice is a fundamental freedom essential to human rights.

Of course, a service provider may require sharing this information before providing certain services. You might want to invite your family to a Covid-safe housewarming, their avatars appearing in an exact replica of your new home, sharing the joy of seeing your new space. To get the full experience and fulfill the promise of the new technology, the details of your house—your furnishings, the art on your walls, the books on your shelf—may need to be shared with a service provider to be enjoyed by your friends. And, at the same time, this creates a tempting target for law enforcement wanting to look inside your house. 

Of course, the ideal would be that strong encryption and security measures would protect that information, such that only the intended visitors to your virtual house could get to wander the space, and the government would be unable to obtain the unencrypted information from a third-party.  But we also need to recognize that governments will continue to press for unencrypted access to private spaces. Even where encryption is strong between end points, governments may, like the United Kingdom, ask for the ability to insert an invisible ghost to attend the committee of correspondence meeting you hold in your virtual dining room. 

While it is clear that monitoring the real-time audio in your virtual home requires a wiretap order, the government may argue that it can still observe a virtual home in real time. Not so. Carpenter v. United States provides the constitutional basis to keep the government at bay when the technology is not enough. Two years ago, in a landmark decision, the Supreme Court established that accessing historical records containing the physical locations of cellphones required a search warrant, even though they were held by a third party. Carpenter cast needed doubt on the third-party doctrine, which allows access to third-party-held records without a warrant, noting that "few could have imagined a society in which a phone goes wherever its owner goes, conveying to the wireless carrier not just dialed digits, but a detailed and comprehensive record of the person's movements."

Likewise, when the third-party doctrine was created in 1979, few could have imagined a society in which VR/AR systems can map, in glorious three-dimensional detail, the interior of one's home and one's personal behavior and movements, conveying to the VR/AR service provider a detailed and comprehensive record of the goings-on in a person's house. Carpenter and Kyllo stand strongly for requiring a warrant for any information created by your VR/AR devices that shows the interior of your private spaces, regardless of whether that information is held by a service provider.

In California, where many VR/AR service providers are based, CalECPA generally requires a warrant or wiretap order before the government may obtain this sensitive data from service providers, with a narrow exception for subpoenas, where “access to the information via the subpoena is not otherwise prohibited by state or federal law.”  Under Kyllo and Carpenter, warrantless access to your home through VR/AR technology is prohibited by the ultimate federal law, the Constitution.

We need to be able to take advantage of the awesomeness of this new technology, where you can have a fully formed virtual space—and invite your friends to join you from afar—without creating a dystopian future where the government can teleport into a photo-realistic version of your house, able to search all the nooks and crannies measured and recorded by the tech, without a warrant. 

Carpenter led to a sea change in the law and has since been cited in hundreds of criminal and civil cases across the country, challenging the third-party doctrine for surveillance sources like real-time location tracking, 24/7 video cameras, and automatic license plate readers. Still, the development of the doctrine will take time. No court has yet ruled on a warrant for a virtual search of your house. For now, it is up to the service providers to give a pledge, backed by a quarrel of steely-eyed privacy lawyers, that if the government comes to knock on your VR door, they will say "Come back with a warrant." 

  • 1. Kyllo used the phrase “device that is not in general public use,” which sets up an unfortunate and unnecessary test that could erode our privacy as new technologies become more widespread. Right now, the technology to surreptitiously view the interior of a SLAM-mapped home is not in general use, and even when VR and AR are ubiquitous, courts have recognized that technologies to surveil cell phones are not “in general public use,” even though the cell phones themselves are.