
EFF Files Comment Opposing the Department of Homeland Security’s Massive Expansion of Biometric Surveillance

EFF, joined by several leading civil liberties and immigrant rights organizations, recently filed a comment calling on the Department of Homeland Security (DHS) to withdraw a proposed rule that would exponentially expand biometrics collection from both U.S. citizens and noncitizens who apply for immigration benefits. The rule would allow DHS to mandate the collection of face data, iris scans, palm prints, voice prints, and DNA. DHS received more than 5,000 comments in response to the proposed rule, and five U.S. Senators also demanded that DHS abandon the proposal.

DHS’s biometrics database is already the second largest in the world. It contains biometrics from more than 260 million people. If DHS’s proposed rule takes effect, DHS estimates that it would nearly double the number of people added to that database each year, to over 6 million people. And, equally important, the rule would expand both the types of biometrics DHS collects and how DHS uses them.  

What the Rule Would Do

Currently, DHS requires applicants for certain, but not all, immigration benefits to submit fingerprints, photographs, or signatures. DHS’s proposed rule would change that regime in three significant ways.   

First, the proposed rule would make biometrics submission mandatory by default for anyone who submits an application for an immigration benefit. In addition to adding millions of noncitizens, this change would sweep in hundreds of thousands of U.S. citizens and lawful permanent residents who file applications on behalf of family members each year. DHS also proposes to lift its restrictions on collecting biometrics from children, allowing the agency to mandate collection from children under the age of 14.

Second, the proposed rule would expand the types of biometrics DHS can collect from applicants. The rule would explicitly give DHS the authority to collect palm prints, photographs “including facial images specifically for facial recognition, as well as photographs of physical or anatomical features such as scars, skin marks, and tattoos,” voice prints, iris images, and DNA. In addition, by proposing a new and expansive definition of the term “biometrics,” DHS is laying the groundwork to collect behavioral biometrics, which can identify a person through the analysis of their movements, such as their gait or the way they type. 

Third, the proposed rule would expand how DHS uses biometrics. The proposal states that a core goal of DHS’s expansion of biometrics collection would be to implement “enhanced and continuous vetting,” which would require immigrants “be subjected to continued and subsequent evaluation to ensure they continue to present no risk of causing harm subsequent to their entry.” This type of enhanced vetting was originally contemplated in Executive Order 13780, which also banned nationals of Iran, Libya, Somalia, Sudan, Syria, and Yemen from entering the United States. While DHS offers few details about what such a program would entail, it appears that DHS would collect biometric data as part of routine immigration applications in order to share that data with other law enforcement agencies and monitor individuals indefinitely.
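To give a sense of how the behavioral biometrics contemplated above work, consider typing patterns: the timing of keystrokes alone can form a fingerprint-like profile. The following minimal Python sketch illustrates the general technique; it is our illustration, not anything specified in the proposed rule, and the features and threshold are assumptions:

    import numpy as np

    def keystroke_features(events):
        """Timing profile from (key, press_time, release_time) events in seconds.

        Dwell time (how long each key is held) and flight time (the gap
        between releasing one key and pressing the next) are classic
        keystroke-dynamics features; real systems use many more.
        """
        dwell = [release - press for _, press, release in events]
        flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
        return np.array([np.mean(dwell), np.std(dwell), np.mean(flight), np.std(flight)])

    def same_typist(profile_a, profile_b, threshold=0.05):
        """Crude check: profiles within a small distance suggest the same typist."""
        return float(np.linalg.norm(profile_a - profile_b)) < threshold

Because key-event timing can be logged by any application or webpage, this kind of data can be collected silently, which is part of what makes an open-ended definition of "biometrics" so concerning.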

The Rule Is Fatally Flawed and Must Be Stopped 

EFF and our partners oppose this proposed rule on multiple grounds. It fails to take into account the serious privacy and security risks of expanding biometrics collection; it threatens First Amendment activity; and it does not adequately address the risk of error in the technologies and databases that store biometric data. Lastly, DHS has failed to provide sufficient justification for these drastic changes, and the proposed changes exceed DHS’s statutory authority.

Privacy and Security Threats

The breadth of the information DHS wants to collect is massive. DHS’s new definition of biometrics would allow for virtually unbounded biometrics collection in the future, creating untold threats to privacy and personal autonomy. This is especially true of behavioral biometrics, which can be collected without a person’s knowledge or consent, expose highly personal and sensitive information about a person beyond mere identity, and allow for tracking on a mass scale. Notably, both Democratic and Republican members of Congress have condemned China’s similar use of biometrics to track the Uyghur Muslim population in Xinjiang.

Of the new types of biometrics DHS plans to collect, DNA presents unique threats to privacy. Unlike other biometrics such as fingerprints, DNA contains our most private and personal information. DHS plans to collect DNA specifically to determine genetic family relationships and will store that relationship information with each DNA profile, allowing the agency to identify and map immigrant families and, over time, whole immigrant communities. DHS suggests that it will store DNA data indefinitely and makes clear that it retains the authority to share this data with law enforcement. Sharing this data with law enforcement only increases the risk that those required to give samples will be erroneously linked to a crime, while exacerbating the disproportionate representation of people of color in government DNA databases.

The government’s increased collection of highly sensitive personal data is troubling not only because of the ways the government might use it, but also because that data could end up in the hands of bad actors. Put simply, DHS has not demonstrated that it can keep biometrics safe. For example, just last month, DHS’s Office of Inspector General (OIG) found that the agency’s inadequate security practices enabled bad actors to steal nearly 200,000 travelers’ face images from a subcontractor’s computers. A Government Accountability Office report similarly “identified long-standing challenges in CBP’s efforts to develop and implement [its biometric entry and exit] system.” There have also been serious security breaches by insiders at USCIS. Other federal agencies have had similar trouble securing biometric data: in 2015, sensitive data on more than 25 million people stored in Office of Personnel Management databases was stolen. And as the multiple security breaches of India’s Aadhaar national biometric database have shown, these breaches can expose millions of individuals to fraud and identity theft.

The risk of security breaches is especially acute for children’s biometrics. A recent U.S. Senate Commerce Committee report collects a number of studies that “indicate that large numbers of children in the United States are victims of identity theft.” Breaches of children’s biometric data compound this risk because biometrics cannot be changed. As a recent UNICEF report explains, collecting children’s biometric information exposes them to “lifelong data risks” that cannot yet be evaluated. Never before has biometric information been collected from birth, and we do not know how the data collected today will be used in the future.

First Amendment Risks

This massive collection of biometric data—and the danger that it could be leaked—places a significant burden on First Amendment activity. By collecting and retaining biometric data like face recognition and sharing it broadly with federal, state, and local agencies, as well as with contractors and foreign governments, DHS lays the groundwork for a vast surveillance and tracking network that could impact individuals and communities for years to come. DHS could soon build a database large enough to identify and track all people in public places, without their knowledge—not just in places the agency oversees, like at the border, but anywhere there are cameras. This burden falls disproportionately on communities of color, immigrants, religious minorities, and other marginalized groups that are the most likely to encounter DHS. 

If immigrants and their U.S. citizen and permanent resident family members know the government can request, retain, and share with other law enforcement agencies their most intimate biometric information at every stage of the immigration lifecycle, many may self-censor and refrain from asserting their First Amendment rights. Studies show that surveillance systems and the overcollection of data by the government chill expressive and religious activity. For example, in 2013, a study involving Muslims in New York and New Jersey found excessive police surveillance in Muslim communities had a significant chilling effect on First Amendment-protected activities.

Problems with Biometric Technology

DHS’s decision to move forward with biometrics expansion is also questionable because the agency fails to account for the unreliability of many biometric technologies and of the databases that store this information. One of the methods DHS proposes to employ to collect DNA, known as Rapid DNA, has been shown to be error-prone. Meanwhile, studies have found significant error rates across face recognition systems for people with darker skin, and especially for Black women.

Moreover, it remains far from clear that collecting more biometrics will make DHS’s already flawed databases any more accurate. In fact, in a recent case challenging the reliability of DHS databases, a federal district court found that independent investigations of several DHS databases highlighted high error rates within the systems. For example, in 2017, the DHS OIG found that the database used for information about visa overstays was wrong 42 percent of the time. Other databases used to identify lawful permanent residents and people with protected status had a 30 percent error rate.

DHS’s Flawed Justification

DHS has offered little justification for this massive expansion of biometric data collection. In the proposed rule, DHS suggests that the new system will “provide DHS with the improved ability to identify and limit fraud.” However, the scant evidence DHS offers to demonstrate the existence of fraud cannot justify such expansive changes. For example, DHS purports to justify its collection of DNA from children based on the fact that there were “432 incidents of fraudulent family claims” along the southern border between July 1, 2019 and November 7, 2019. Not only does DHS fail to define what constitutes a “fraudulent family,” but it also leaves out that during that same period, an estimated 100,000 family units crossed the southern border, meaning that the so-called “fraudulent family” units made up less than one-half of one percent of all family crossings. And we’ve seen this before: the Trump administration has a troubling record of raising false alarms about fraud in the immigration context.

In addition, DHS does not address the privacy costs discussed above. The proposed rule merely notes that “[t]here could be some unquantified impacts related to privacy concerns for risks associated with the collection.” And of course, the changes would come at a considerable financial cost to taxpayers, at a time when USCIS is already experiencing fiscal challenges. Even with the millions of dollars in new fees USCIS will collect, the rule is estimated to cost anywhere from $2.25 billion to $5 billion over the next 10 years. DHS also notes that additional costs could arise.

Beyond DHS’s Mandate

Congress has not given DHS the authority to expand biometrics collection in this manner. When Congress has wanted DHS to use biometrics, it has said so clearly. For example, after 9/11, Congress directed DHS to “develop a plan to accelerate the full implementation of an automated biometric entry and exit data system.” But DHS can point to no such authorization here. In fact, Congress is actively considering measures to restrict the government’s use of biometrics. It is not a federal agency’s place to preempt that debate. Elected lawmakers must resolve these important matters through the democratic process, and the proposed rule seeks to perform an end run around it.

What’s Next

If DHS makes this rule final, Congress has the power to block it from taking effect. We hope that DHS will take our comments seriously. But if it doesn’t, Congress will be hearing from us and our members.


Detecting Deep Fakes with a Heartbeat

Researchers can detect deep fakes because they don’t convincingly mimic human blood circulation in the face:

In particular, video of a person’s face contains subtle shifts in color that result from pulses in blood circulation. You might imagine that these changes would be too minute to detect merely from a video, but viewing videos that have been enhanced to exaggerate these color shifts will quickly disabuse you of that notion. This phenomenon forms the basis of a technique called photoplethysmography, or PPG for short, which can be used, for example, to monitor newborns without having to attach anything to their very sensitive skin.

Deep fakes don’t lack such circulation-induced shifts in color, but they don’t recreate them with high fidelity. The researchers at SUNY and Intel found that “biological signals are not coherently preserved in different synthetic facial parts” and that “synthetic content does not contain frames with stable PPG.” Translation: Deep fakes can’t convincingly mimic how your pulse shows up in your face.

The inconsistencies in PPG signals found in deep fakes provided these researchers with the basis for a deep-learning system of their own, dubbed FakeCatcher, which can categorize videos of a person’s face as either real or fake with greater than 90 percent accuracy. And these same three researchers followed this study with another demonstrating that this approach can be applied not only to revealing that a video is fake, but also to show what software was used to create it.
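To make the pulse-consistency idea concrete, here is a minimal Python sketch in the spirit of the approach described above; it is an illustration, not the researchers' actual FakeCatcher pipeline, and the face regions, filter band, and tolerance are assumptions:

    import numpy as np
    from scipy.signal import butter, filtfilt

    def region_pulse_hz(frames, y0, y1, x0, x1, fps=30.0):
        """Estimate the dominant pulse frequency (Hz) in one face region.

        frames: float array of shape (T, H, W, 3), RGB video of a face;
        the green channel carries the strongest PPG signal.
        """
        trace = frames[:, y0:y1, x0:x1, 1].mean(axis=(1, 2))  # mean green per frame
        trace = trace - trace.mean()
        # Keep 0.7-4 Hz (roughly 42-240 beats per minute), the plausible pulse range.
        b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
        filtered = filtfilt(b, a, trace)
        spectrum = np.abs(np.fft.rfft(filtered))
        freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
        return freqs[np.argmax(spectrum)]

    def pulse_is_coherent(frames, fps=30.0, tolerance_hz=0.2):
        """Real faces should show one heart rate in every region; fakes often don't."""
        h, w = frames.shape[1], frames.shape[2]
        regions = [
            (0, h // 4, w // 4, 3 * w // 4),       # forehead (assumes a centered face)
            (h // 2, 3 * h // 4, 0, w // 4),       # left cheek
            (h // 2, 3 * h // 4, 3 * w // 4, w),   # right cheek
        ]
        estimates = [region_pulse_hz(frames, *r, fps=fps) for r in regions]
        return max(estimates) - min(estimates) < tolerance_hz

On a real face, a single heartbeat drives every region, so the per-region estimates should agree; in a synthesized face they tend to disagree, which is the incoherence the researchers exploit.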

Of course, this is an arms race. I expect deep fake programs to become good enough to fool FakeCatcher in a few months.


Biofascist State, Partisan Vaccines, Lockdown Unconstitutional – New World Next Week


This week on the New World Next Week: the biosecurity corporate fascist state marches on; the vaccine debate turns partisan; and a Pennsylvania judge rules the arbitrary lockdown closures are unconstitutional.


Interview 1577 – New World Next Week with James Evan Pilato


This week on the New World Next Week: the biosecurity corporate fascist state marches on; the vaccine debate turns partisan; and a Pennsylvania judge rules the arbitrary lockdown closures are unconstitutional.

EFF Tells California Supreme Court Not to Require ExamSoft for Bar Exam

This week, EFF sent a letter (PDF) to the Supreme Court of California objecting to the required use of the proctoring tool ExamSoft for the October 2020 California Bar Exam. The letter says test takers should not be forced to give their biometric data to ExamSoft, which can use it for marketing purposes, share it with third parties, or hand it over to law enforcement, with no way for test takers to opt out and delete this information. This remote proctoring solution forces Bar applicants to surrender the privacy and security of their personal biometric information, violating the California Consumer Privacy Act. EFF asked the California Bar to devise an alternative option for the roughly five thousand test takers expected next month.

ExamSoft is a popular proctoring and assessment software product that purports to allow remote testing while determining whether a student is cheating. To do so, it uses various privacy-invasive technical monitoring techniques, such as comparing test takers’ images using facial recognition, tracking eye movement, recording patterns of keystrokes, and recording video and audio of students’ surroundings as they take the test. The data ExamSoft collects includes “facial recognition and biometric data of each individual test taker for an extended period of time, including a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry.” Additionally, ExamSoft has access to the device’s webcam, microphone, and screen for the duration of the exam.

ExamSoft’s collection of test takers’ biometric and other personal data implicates the California Consumer Privacy Act. At a minimum, the letter states, the State Bar of California must provide a mechanism for students to opt out of the sale of their data, and to delete it, to comply with this law: 

The California Bar should clearly inform test takers of their protections under the CCPA. Before test takers are asked to use such an invasive piece of software, the California Bar should confirm that, at an absolute minimum, it has in place a mechanism to allow test takers to access their ExamSoft data, to opt out of the “sale” of their data, and to request its deletion. Students should have all of these rights without facing threat of punishment. It is bad enough that the use of ExamSoft puts the state in the regrettable position of coercing students into compromising their privacy and security in exchange for their sole chance to take the Bar Exam. It should not compound that by denying them their rights under state privacy law.

In addition to these privacy invasions, proctoring software brings with it many other potential dangers, including threats to security: vast troves of personal data have already leaked from one proctoring company, ProctorU, affecting 440,000 users. The ACLU has also expressed concerns with the software’s use of facial recognition, which will “exacerbate racial and socioeconomic inequities in the legal profession and beyond.” Lastly, this type of software has been shown to have technical issues that could cause students unexpected problems while taking the Bar Exam, and it imposes requirements that could harm users who cannot meet them, such as a relatively new laptop and broadband speeds that many households do not have. Other states have canceled the use of proctoring software for their bar exams due to the inability to ensure a “secure and reliable” experience. California should take this into account when considering its use of proctoring software.

The entrance fee for becoming a lawyer in California should not include compromising personal privacy and security. The Bar Exam is already a nerve-wracking, anxiety-inducing test. We ask the Supreme Court of California to take seriously the risks presented by ExamSoft and pursue alternatives that do not put exam takers in jeopardy.


If Privacy Dies in VR, It Dies in Real Life

If you aren’t an enthusiast, chances are you haven’t used a Virtual Reality (VR) or Augmented Reality (AR) headset. The hype around this technology, however, is nearly inescapable. We’re not just talking about dancing with lightsabers; there’s been a lot of talk about how VR/AR will revolutionize entertainment, education, and even activism. EFF has long been interested in the potential of this technology, and has even developed our own VR experience, Spot the Surveillance, which places users on a street corner amidst police spying technologies. 

It’s easy to be swept up in the excitement of a new technology, but utopian visions must not veil the emerging ethical and legal concerns in VR/AR. The devices are new, but the tech giants behind them aren’t. Any VR/AR headset you use today is likely made by one of a handful of corporate giants: Sony, Microsoft, HTC, and Facebook. As such, this budding industry has inherited a lot of issues from its creators. VR and AR hardware aren’t household devices quite yet, but if they succeed, there’s a chance they will creep into all of our personal and professional lives, guided by the precedents set today.

A Step Backwards: Requiring Facebook Login for Oculus

This is why Oculus’ announcement last week shocked and infuriated many users. Oculus, acquired by Facebook in 2014, announced that it will require a Facebook account for all users within the next two years. At the time of the acquisition, Oculus offered distressed users an assurance that “[y]ou will not need a Facebook account to use or develop for the Rift [headset].”

There’s good cause to be alarmed. Eliminating alternative logins forces Oculus users to accept Facebook’s Community Standards or risk potentially bricking their device. With this lack of choice, users can no longer freely give meaningful consent, and they lose the freedom to be anonymous on their own device. That is because Oculus owners will also need to adopt Facebook’s controversial real name policy. The policy requires users to register what Facebook calls their “authentic identity”—one known by friends and family and found on acceptable documents—in order to use the social network. Without anonymity, Oculus hangs users in sensitive contexts out to dry, such as VR activists in Hong Kong or LGBTQ+ users who cannot safely reveal their identity.

Logging in to Facebook on an Oculus product already shares usage data with Facebook to inform the ads you see. Facebook already has a vast collection of data, collected from across the web and even your own devices. Combining this with the sensitive biometric and environmental data detected by Oculus headsets further tramples user privacy. And Facebook should know: the company recently agreed to pay $650 million for violating Illinois’ biometric law (BIPA) by collecting user biometric data without consent. For companies like Facebook, which are built on capturing your attention and selling it to advertisers, this data is a potential gold mine. Having eye-tracking data on users, for example, can cement monopolistic power in online advertising regardless of how effective the targeting actually is; Facebook merely needs the ad industry to believe it has an advantage.

Facebook violating the trust of users of its acquired companies (like Instagram and WhatsApp) may not be surprising. After all, it has a long trail of broken promises made while paying lip service to privacy concerns. What’s troubling in this instance, however, is the position of Oculus in the VR/AR industry. Facebook is poised to shape the medium as a whole and normalize mass user surveillance, as Google has already done with smartphones.

Defending Fundamental Human Rights in All Realities

Strapping these devices to ourselves lets us enter a virtual world, but at a price: these companies enter our lives and gain access to intimate details about us through biometric data. How we move and interact with the world offers insight, by proxy, into how we think and feel in the moment. Eye-tracking technology, long used in cognitive science research, is already being developed for headsets, which sets the stage for unprecedented privacy and security risks. If aggregated, those in control of this biometric data may be able to identify patterns that let them more precisely predict (or cause) certain behavior and even emotions in the virtual world. It may allow companies to exploit users’ emotional vulnerabilities through strategies that are difficult for the user to perceive and resist. What makes the collection of this sort of biometric data particularly frightening is that, unlike a credit card or password, it is information about us we cannot change. Once collected, there is little users can do to mitigate the harm done by leaks or by their data being monetized by additional parties.

Threats to our privacy don’t stop there. A VR/AR setup will also be densely packed with cameras, microphones, and myriad other sensors to help us interact with the real world—or at least not crash into it. That means information about your home, your office, or even your community is collected, and potentially available to the government. Even if you personally never use this equipment, sharing a space with someone who does puts your privacy at risk. Without meaningful user consent and restrictions on collection, a menacing future may take shape where average people using AR further proliferate precise audio and video surveillance in public and private spaces. It’s not hard to imagine these raw data feeds integrating with the new generations of automatic mass surveillance technology such as face recognition.

Companies like Oculus need to do more than “think about privacy.” Industry leaders need to commit to the principles of privacy by design, security, transparency, and data minimization. By default, only data necessary to the core functions of the device or software should be collected; even then, developers should use encryption, delete data as soon as reasonably possible, and keep data on local devices. Any collection or use of information beyond this, particularly when shared with additional parties, must be opt-in, with specific, freely given user consent. For consent to be freely given, Facebook must offer an alternative login so the user has a real choice. Effective safeguards must also be in place to ensure companies honor their promises to users and to prevent Cambridge Analytica-style data scandals involving third-party developers. Companies should, for example, carry out a Data Protection Impact Assessment to identify and minimize data protection risks whenever processing is likely to result in a high risk to individuals. While we encourage these companies to compete on privacy, it seems unlikely most tech giants would do so willingly. Privacy must also be the default on all devices, not a niche or premium feature.

We all need to keep the pressure on state legislatures and Congress to adopt strong, comprehensive consumer privacy laws in the United States to control what big tech can get away with. These new laws must not preempt stronger state laws, they must provide users with a private right of action, and they should not include “data dividends” or pay-for-privacy schemes.

Antitrust enforcers should also take note of yet another broken promise about privacy, and think twice before allowing Facebook to acquire data-rich companies like Oculus in the future. Mergers shouldn’t be allowed based on promises to keep the user data from acquired companies separate from Facebook’s other troves of user data when Facebook has broken such promises so many times before.

The future of privacy in VR/AR will depend on swift action now, while the industry is still budding. Developers need to be critical of the technology and information they utilize, and of how they can make their work more secure and transparent. Enthusiasts and reviewers should prioritize open and privacy-conscious devices while these products are still just entertainment accessories. Activists and researchers must create a future where AR and VR work in the best interests of users and society overall.

Left unchecked, we fear VR/AR development will follow the trail left by smartphones and IoT. Developers, users, and governments must ensure it does not ride its hype into an inescapable, insecure, proprietary, and privacy-invasive ecosystem. The hardware and software may go a long way toward fulfilling long-promised visions of technology, but they must not do so while trampling on our human rights.


Yet Another Biometric: Bioacoustic Signatures

Sound waves through the body are unique enough to be a biometric:

“Modeling allowed us to infer what structures or material features of the human body actually differentiated people,” explains Joo Yong Sim, one of the ETRI researchers who conducted the study. “For example, we could see how the structure, size, and weight of the bones, as well as the stiffness of the joints, affect the bioacoustics spectrum.”

[…]

Notably, the researchers were concerned that the accuracy of this approach could diminish with time, since the human body constantly changes its cells, matrices, and fluid content. To account for this, they acquired the acoustic data of participants at three separate intervals, each 30 days apart.

“We were very surprised that people’s bioacoustics spectral pattern maintained well over time, despite the concern that the pattern would change greatly,” says Sim. “These results suggest that the bioacoustics signature reflects more anatomical features than changes in water, body temperature, or biomolecule concentration in blood that change from day to day.”

It’s not great. A 97% accuracy is worse than fingerprints and iris scans, and while the researchers were able to reproduce the biometric a month later, it almost certainly changes as we age, gain and lose weight, and so on. Still, interesting.
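The published system presumably involves a full machine-learning pipeline, but the core matching step can be sketched simply: treat the body's frequency response to a known excitation as a feature vector and compare it to an enrolled template. The Python below is a rough sketch of that general idea, not the ETRI team's method; the signal lengths and threshold are illustrative assumptions:

    import numpy as np

    def bioacoustic_spectrum(response):
        """Unit-normalized magnitude spectrum of the measured response.

        response: 1-D array of the acoustic signal after it has passed
        through the hand/body, sampled at a fixed rate.
        """
        windowed = response * np.hanning(len(response))  # reduce spectral leakage
        spectrum = np.abs(np.fft.rfft(windowed))
        return spectrum / (np.linalg.norm(spectrum) + 1e-12)

    def same_person(template, probe, threshold=0.95):
        """Cosine similarity between enrolled and probe spectra (illustrative threshold)."""
        return float(np.dot(template, probe)) >= threshold

    # Toy demo with synthetic signals standing in for measured data:
    rng = np.random.default_rng(0)
    signal = rng.standard_normal(4096)
    enrolled = bioacoustic_spectrum(signal)
    probe = bioacoustic_spectrum(signal + 0.1 * rng.standard_normal(4096))
    print(same_person(enrolled, probe))  # True: nearly identical responses

A real system would add an excitation transducer, per-user calibration, and a trained classifier; the point of the sketch is only that the "signature" is ultimately a spectrum compared against a stored template.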


Sen. Merkley Leads on Biometric Privacy

Businesses across the world are harvesting and monetizing our biometrics without our knowledge or consent. For example, Clearview AI extracted faceprints from three billion people, and now it sells face-matching services to police departments. Likewise, retail stores use face surveillance to identify customers they deem more likely than others to engage in shoplifting, often based on error-prone, racially biased criminal justice data. Other businesses profit from tracking our fingerprints, iris scans, and other biometrics.

So it is great news that U.S. Sens. Jeff Merkley and Bernie Sanders have introduced the National Biometric Information Privacy Act (BIPA). The Act requires businesses to get your opt-in consent before collecting or sharing your biometrics; to delete your biometrics in a timely fashion; and to store your biometrics securely. Most importantly, the bill empowers you (and us) to sue businesses that break these rules.

Biometric surveillance raises special privacy concerns. We expose our biometrics wherever we go, and unlike passwords and ID numbers, we can’t get new biometrics when the ones we’re born with leak. That’s bad news for us. But for governments and businesses (often in partnership), the permanence of our unalterable biometrics makes it easy to track our whereabouts, activities, and associations. That’s why biometric surveillance is on the rise.

The National BIPA bill improves upon Illinois’ BIPA law, which is one of the most important privacy laws in the United States. EFF and other privacy advocates have long worked in legislatures and courts to protect and expand this model.

EFF looks forward to helping enact this important federal biometric privacy law.


Trust Stamp, Serbian Protest, Dicks Censored – New World Next Week


This week on the New World Next Week: All the pieces come together in the new Trust Stamp biometric ID / vaccination history / cashless payment system; Serbs force the government to scrap its corona curfew plans; and Dan Dicks is censored off of YouTube.


Interview 1564 – New World Next Week with James Evan Pilato


This week on the New World Next Week: All the pieces come together in the new Trust Stamp biometric ID / vaccination history / cashless payment system; Serbs force the government to scrap its corona curfew plans; and Dan Dicks is censored off of YouTube.

California: Stand Up to Face Surveillance

EFF has joined a broad coalition of civil liberties, civil rights, and labor advocates to oppose A.B. 2261, which threatens to normalize the increased use of face surveillance of Californians where they live and work. Our allies include the ACLU of California, Oakland Privacy, the California Employment Lawyers Association, Service Employees International Union (SEIU) of California, and California Teamsters.

A.B. 2261 is currently before the Assembly Appropriations Committee. It purports to regulate face surveillance in the name of privacy concerns during this pandemic. In fact, as written, this bill would give a legislative imprimatur to the dangerous and invasive use of face surveillance by setting weak minimum standards that allow governments and corporations to pay lip service to privacy without actually preventing the harms of face surveillance. The risk is greater now than ever. Government officials already are pushing to use pandemic management tools to surveil and control protests across the country against racism and police brutality.

TAKE ACTION

Stand Up to Face Surveillance

Any bill that smooths a path for increased face surveillance is not the answer, especially as we confront this moment’s crises. Several companies and government agencies have proposed expanding the use of this technology in light of the pandemic, even though there is no proof that face surveillance can be a meaningful tool to address the COVID-19 crisis. What is well-documented is how this technology exacerbates existing biases in policing. It also harms our privacy, by making it impossible to go about our lives without government and corporations monitoring where we go, what we are doing, and who we are with. And it chills our First Amendment rights to gather and protest.

Surveillance infrastructure set up in times of crisis is not easily rolled back. Many governments already employ powerful spying technologies in ways that harm minority communities. This includes spying on the social media of activists, particularly advocates for racial justice such as participants in the Black-led movement for racial and economic justice. Also, police watch lists are often over-inclusive and error-riddled, and cameras often are over-deployed in minority areas—effectively criminalizing entire communities. If history is any guide, we expect police will engage in racial profiling with face surveillance technology, too.

This is a flawed and dangerous technology at any time, and especially now when its use could further target communities of color who already are disparately impacted by both the pandemic and police violence.

We urge Chairperson Gonzalez and the members of the Assembly Appropriations Committee to stop this bill from moving forward, and to listen to the voices of their constituents who are concerned about the harmful effect it could have on their everyday lives as they seek some sense of normalcy. Californians: please tell your lawmakers to stand against face surveillance.

TAKE ACTION

Stand Up to Face Surveillance


EFF Testifies Today on Law Enforcement Use of Face Recognition Before Presidential Commission on Law Enforcement and the Administration of Justice

The Presidential Commission on Law Enforcement and the Administration of Justice invited EFF to testify on law enforcement use of face recognition. The Commission, which was established via Executive Order and convened by Attorney General William Barr earlier this year, is tasked with addressing the serious issues confronting law enforcement and is made up of representatives from federal law enforcement as well as police chiefs and sheriffs from around the country.

We testified orally and provided the Commission with a copy of our whitepaper, Face Off: Law Enforcement Use of Face Recognition Technology. The following is our oral testimony:

President’s Commission on Law Enforcement and the Administration of Justice
Hearing on Law Enforcement’s Use of Facial Recognition Technology

Oral Testimony of
Jennifer Lynch
Surveillance Litigation Director
Electronic Frontier Foundation (EFF)

April 22, 2020

Thank you very much for the opportunity to discuss law enforcement’s use of facial recognition technologies with you today. I am the surveillance litigation director at the Electronic Frontier Foundation, a 30-year-old nonprofit dedicated to the protection of civil liberties and privacy in new technologies.

In the last few years, face recognition has advanced significantly. Now, law enforcement officers can use mobile devices to capture face recognition-ready photographs of people they stop on the street; surveillance cameras and body-worn cameras boast real-time face scanning and identification capabilities; and the FBI and many other state and federal agencies have access to millions, if not hundreds of millions, of face recognition images of law-abiding Americans.

However, the adoption of face recognition technologies has occurred without meaningful oversight, without proper accuracy testing, and without legal protections to prevent misuse. This has led to the development of unproven systems that will impinge on constitutional rights and disproportionately impact people of color.

Face recognition and similar technologies make it possible to identify and track people, both in real time and in the past, including at lawful political protests and other sensitive gatherings. Widespread use of face recognition by the government—especially to identify people secretly when they walk around in public—will fundamentally change the society in which we live. It will, for example, chill and deter people from exercising their First Amendment protected rights to speak, assemble, and associate with others. Countless studies have shown that when people think the government is watching them, they alter their behavior to try to avoid scrutiny, even when they are doing absolutely nothing wrong. And this burden falls disproportionately on communities of color, immigrants, religious minorities, and other marginalized groups.

The right to speak anonymously and to associate with others without the government watching is fundamental to a democracy. And it’s not just EFF saying that—the founding fathers used pseudonyms in the Federalist Papers to debate what kind of government we should form in this country, and the Supreme Court has consistently recognized that anonymous speech and association are necessary for the First Amendment right to free speech to be at all meaningful.

Face recognition’s chilling effect is exacerbated by inaccuracies in face recognition systems. For example, FBI’s own testing found its face recognition system failed to even detect a match from a gallery of images nearly 15% of the time. Similarly, the ACLU showed that Amazon’s face recognition product, which it aggressively markets to law enforcement, falsely matched 28 members of Congress to mugshot photos.

The threats from face recognition will disproportionately impact people of color, both because face recognition misidentifies African Americans and ethnic minorities at higher rates than whites, and because mug shot databases include a disproportionate number of African Americans, Latinos, and immigrants.

This has real-world consequences; an inaccurate system will implicate people for crimes they didn’t commit. Using face recognition as the first step in an investigation can bias the investigation toward a particular suspect. Human backup identification, which has its own problems, frequently only confirms this bias. This means face recognition will shift the burden onto defendants to show they are not who the system says they are.

Despite these known challenges, federal and state agencies have for years failed to be transparent about their use of face recognition. For example, the public had no idea how many images were accessible to the FBI’s FACE Services Unit until Government Accountability Office reports from 2016 and 2019 revealed the Bureau can access more than 641 million images—most of which were taken for non-criminal reasons like obtaining a driver’s license or a passport.

State agencies have been just as intransigent in providing information on their face recognition systems. EFF partnered with the Georgetown Center on Privacy and Technology to survey which states were currently using face recognition and with whom they were sharing their data, a project we call “Who Has Your Face.” Many states, including Connecticut, Louisiana, Kentucky, and Alabama, failed or refused to respond to our public records requests. And other states, like Idaho and Oklahoma, told us they did not use face recognition, but other sources, like the GAO reports and records from the American Association of Motor Vehicle Administrators (AAMVA), seem to contradict this.

Law enforcement officers have also hidden their partnerships with private companies from the public. Earlier this year, the public learned that a company called Clearview AI had been actively marketing its face recognition technology to law enforcement, and claimed that more than 1,000 agencies around the country had used its services. But up until the middle of January, most of the general public had never even heard of the company. Even the New Jersey Attorney General was surprised to learn—after reading the New York Times article that broke the story—that officers in his own state were using the technology, and that Clearview was using his image to sell its services to other agencies.

Unfortunately, the police have been just as tight-lipped with defendants and defense attorneys about their use of face recognition. For example, in Florida, law enforcement officers have used face recognition to try to identify suspects for almost 20 years, conducting up to 8,000 searches per month. However, Florida defense attorneys are almost never told that face recognition was used in their clients’ cases. This infringes defendants’ constitutional due process right to challenge evidence brought against them.

Without transparency, accountability, and proper security protocols in place, face recognition systems will be subject to misuse. For example, the Baltimore Police used face recognition and social media to identify and arrest people in the protests following Freddie Gray’s death. And Clearview AI used its own face recognition technology to monitor a journalist and encouraged police officers to use it to identify family and friends.

Americans should not be forced to submit to criminal face recognition searches merely because they want to drive a car. And they shouldn’t have to fear that their every move will be tracked if the networks of surveillance cameras that already blanket many cities are linked to face recognition.

But without meaningful restrictions on face recognition, this is where we may be headed. Without protections, it could be relatively easy for governments to amass databases of images of all Americans—or work with a shady company like Clearview AI to do it for them—and then use those databases to identify and track people as they go about their daily lives. 

In response to these challenges, I encourage this commission to do two things: First, to conduct a thorough nationwide study of current and proposed law enforcement practices with regard to face recognition at the federal, state, and local level, and second, to develop model policies for agencies that will meaningfully restrict law enforcement access to and use of this technology. Once completed, both of these should be easily available to the general public.

Thank you once again for the invitation to testify. My written testimony, a white paper I wrote on law enforcement use of face recognition, provides additional information and recommendations. I am happy to respond to questions.


Facial Recognition Companies Profit From COVID-19 By Adding Thermal Imaging

This article was originally published by Mass Private I at the MassPrivateI Blog.

The biometrics industry has never been known to miss an opportunity to make a profit. Especially when it comes at the expense of everyone’s privacy. Since the outbreak of COVID-19, facial recognition companies have been hard at work creating a new sales pitch that will allow them to maximize their profits.

Across the globe, facial-recognition companies are hard at work trying to convince politicians, law enforcement and the public that thermal imaging cameras will help stop the spread of COVID-19.

Forbes.com is all too eager to jump on the thermal imaging bandwagon, claiming that Athena Security’s facial recognition/thermal imaging software can scan 1,000 people per hour.

“With Covid-19, you see the problem at airports today,” Athena Security CEO Lisa Falzone explains. “Passengers are waiting in lines to manually take their temperatures, which slows traffic down. With our technology, we can analyze 1000 people per hour.”

In fact, Athena Security has gone so far as to combine fever detection, facial recognition, and gun detection into an all-in-one screening system.

“Our Fever Detection COVID19 Screening System is now a part of our platform along with our gun detection system which connects directly to your current security camera system to deliver fast, accurate threat detection – including guns, knives, and aggressive action. Our system can also alert you to falls, accidents, and unwelcome visitors.”

As Forbes.com points out, what makes Athena’s software unique is its ability to send out real-time alerts to authorities.

“What’s different about Athena’s system, explains Falzone, is its ability to send out immediate alerts to the appropriate parties, who can then make an informed decision on how to act.”

The scariest thing about the proliferation of facial recognition/thermal imaging cameras is it gives law enforcement near god-like powers to identify and quarantine anyone they choose.

Updated 3/25:

Real-time facial recognition being offered to universities and hospitals for free 
 
The biometrics industry has put a new spin on marketing by offering universities and hospitals free real-time facial recognition.
 
A recent Jumio press release revealed that they are donating their facial recognition identity verification services to U.S. and U.K. hospitals and universities.

“Starting today, Jumio will provide free identity verification services through our AI-powered, fully automated solution, Jumio Go, to any qualifying organization directly involved in helping with COVID-19 relief including (but not limited to): Hospitals and Universities. This free offer is powered by Jumio Go, our real-time, fully automated identity verification solution.”

DroneUp suggests that law enforcement could use thermal imaging drones as an excuse to monitor people for COVID-19.

“Assurance is key in uncertain times. That is why DroneUp is functioning in conjunction with FAA regulatory precautions as we collaborate with state and local officials to ensure safety for all. We strongly encourage our fellow drone operators to also become keenly aware of the protocol and regulations as the pandemic evolves.”

That same scenario is being played out across the country, with more than 1,500 police departments using drones to monitor the public. In Michigan, the Hillsdale Sheriff’s Office recently purchased a DJI Matrice 210 drone equipped with thermal imaging.

Three police departments, in particular, have taken drone surveillance to a whole new level.

The Owensboro Police Department in Kentucky has created a 10-member thermal imaging drone surveillance team.

“The Owensboro Police Department recently created a 10-member Unmanned Aerial Vehicle Team who were trained and certified to fly the department’s $2,300 drone inside city limits.”

In Texas, the Public Safety Unmanned Response Team has partnered with the Airborne Incident Response Team to help expand police drone surveillance.

“The objective is to create a robust network of drone responders and geographic information systems experts capable of rendering direct assistance and location intelligence during complex emergencies, rapidly expanding incidents, and major disasters. Increased communication and cooperation are essential to the successful deployment of drones and other unmanned systems in support of public safety operations,” said Travis Calendine, chairperson of the Public Safety Unmanned Response Team North Texas.

The Chula Vista Police Department (CVPD), arguably the most famous drone surveillance police department in the country, is creating a 52 square mile drone surveillance zone.

“The CVPD can proactively cover about 17 of the 52 square miles of our city with drones launched from two sites. Our goal this year is to add launch sites to provide 100% aerial support coverage during daylight hours 7 days a week by the end of the year,” said Vern Sallee, patrol operations captain.

The worldwide fear of the coronavirus is so widespread that MIT, Harvard, and The Mayo Clinic have designed a new COVID-19 warning app called “Private Kit.”


As Fast Company explains, Private Kit will allegedly alert a user when someone infected with the COVID-19 virus is close.  Maybe they could use this app to alert them when a “Walking Dead” zombie is close by?

“Researchers just released an app in beta that alerts people when they come into contact with someone who’s been diagnosed with COVID-19, potentially reducing the virus’s spread if the app is downloaded by a large chunk of the population.”

Of course, there is a catch to using Private Kit: users will lose their privacy.

“After you download the app and consent to sharing your location (which is necessary in order for the app to work), the app starts tracking you. Should you cross paths with someone who’s been diagnosed with the coronavirus who also has the app, you’ll receive a notification telling you when and for how long.”
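The underlying check is easy to sketch. Assuming a naive centralized design (the article does not detail Private Kit's actual protocol, and the distance and time thresholds below are illustrative), "crossing paths" amounts to finding points in two location traces that are close in both time and space:

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters between two lat/lon points."""
        r = 6371000.0  # Earth radius in meters
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def crossed_paths(trace_a, trace_b, max_meters=10.0, max_seconds=1800):
        """Return (time_a, time_b) pairs where two traces nearly coincide.

        Each trace is a list of (unix_time, lat, lon) tuples; the
        thresholds are illustrative, not Private Kit's actual parameters.
        """
        hits = []
        for t1, lat1, lon1 in trace_a:
            for t2, lat2, lon2 in trace_b:
                close_in_time = abs(t1 - t2) <= max_seconds
                if close_in_time and haversine_m(lat1, lon1, lat2, lon2) <= max_meters:
                    hits.append((t1, t2))
        return hits

Even this toy version makes the privacy stakes plain: whoever runs the comparison sees both users' raw location histories, which is exactly the tracking trade-off described above.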

Facial recognition companies like Veriff show how little they care about everyone’s privacy by offering universities and health care providers 1 million free identity verifications.

“Potential beneficiaries of free verification’s range from universities who need to verify people taking exams to marketplaces that have volunteers who help people in need and to registries that could tackle fake accounts and set up reliable databases about people in quarantine. But also, digital health care service providers, organizations fighting fake news and beyond.”

If corporations, politicians and law enforcement can convince an apathetic public that facial recognition, thermal imaging, and gun detection cameras can keep us safe, then our privacy will vanish before COVID-19 runs its course.


DOJ Moves Forward with Dangerous Plan to Collect DNA from Immigrant Detainees

The Department of Justice’s (DOJ) recently issued final rule requiring the collection of DNA from hundreds of thousands of individuals in immigration detention is a dangerous and unprecedented expansion of biometric screening based not on alleged conduct, but instead on immigration status. This type of forcible DNA collection erodes civil liberties and demonstrates the government’s willingness to weaponize biometrics in order to surveil vulnerable communities.

DOJ finalized its October 2019 Notice of Proposed Rulemaking, making no amendments despite receiving over 40,000 public comments—including one by EFF—the overwhelming majority of which opposed the mandatory DNA collection proposal.

The final rule institutionalizes a practice that is a marked departure from prior DNA collection policies. It draws its authority from the DNA Fingerprint Act of 2005, which granted the Attorney General power to direct federal agencies to collect DNA from “individuals who are arrested, facing charges, or convicted or from non-United States persons who are detained under the authority of the United States.” DOJ regulations implementing the Act specifically exempted the Department of Homeland Security (DHS) from collecting DNA from certain classes of non-U.S. persons, including individuals for whom collection is “not feasible because of operational exigencies or resource limitations,” as identified by the DHS Secretary in consultation with the Attorney General. In 2010, then-DHS Secretary Janet Napolitano used that provision to exclude from DNA collection individuals in immigration custody not charged with a crime and from individuals awaiting deportation proceedings.

In the final rule, DOJ removes the DHS Secretary’s authority to exclude certain classes of individuals from DNA collection because of resource limitations and only allows the Attorney General to make that determination. DOJ estimates that it will collect nearly 750,000 additional DNA profiles annually from immigrant detainees, which will then be added to the Combined DNA Index System (CODIS), the FBI’s national DNA database.

In January 2020, DHS began planning for this vast DNA collection program, releasing an implementation policy for Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) titled “CBP and ICE DNA Collection.” The policy sets out a five-phase implementation plan over three years. Phase I, which began on January 6, 2020, outlines pilot programs at a Border Patrol sector in Detroit, Michigan, and a port of entry in Eagle Pass, Texas. A subset of CBP officers at these locations collect DNA from immigrants with criminal convictions and from immigrants and U.S. persons (defined as U.S. citizens and legal permanent residents) who are referred for prosecution, including children as young as 14 years old. Subsequent implementation phases permit more CBP officers to collect DNA until Phase V, which allows for DNA collection from all individuals detained under U.S. authority, including people in immigration detention who have never been arrested, charged, or convicted of any criminal offense.

In response to DHS’s implementation policy, U.S. Representatives Rashida Tlaib, Veronica Escobar, and Joaquin Castro sent a letter to the DHS Acting Secretary expressing opposition to the pilot programs. Mandatory DNA collection from immigrants constitutes a privacy invasion, criminalizes immigrant communities, and overburdens federal crime labs, they told DHS. The letter also asked for additional information on the implementation plan, including the privacy protections in place and the administrative burden and backlog the plan will create.

As we highlighted in our comments, DOJ’s final rule marks an unprecedented shift from DNA collection based on a criminal arrest or conviction to DNA collection based on immigration status. After the Supreme Court’s decision in Maryland v. King (2013), which upheld a Maryland statute authorizing DNA collection from individuals arrested for a violent felony offense, states have rapidly expanded DNA collection to encompass more and more offenses—even when DNA is not implicated in the nature of the offense. For example, in Virginia, the ACLU and other advocates fought against a bill that would have added obstruction of justice and shoplifting as offenses for which DNA could be collected. DOJ’s final rule further erodes civil liberties by requiring forcible DNA collection based on false assumptions linking crime to immigration status, despite ample evidence to the contrary.

This DNA collection has serious consequences. Studies have shown that increasing the number of profiles in DNA databases doesn’t solve more crimes. A 2010 RAND report instead stated that the ability of police to solve crimes using DNA is “more strongly related to the number of crime-scene samples than to the number of offender profiles in the database.” There’s no indication that adding nearly 750,000 profiles of immigrant detainees per year will do anything except add more noise to CODIS.

Moreover, inclusion in a DNA database increases the likelihood that an innocent person will be implicated in a crime. We previously wrote about a case where a man was charged with murder during a brutal home invasion because his DNA was found on the victim’s fingernails. In reality, he had been treated by EMTs earlier in the evening, who later responded to the crime scene and likely carried his DNA with them.

The final rule also allows CODIS to indefinitely retain DNA samples from people in immigration detention—even if they later permanently leave the country or adjust their status to become permanent residents or citizens. Indefinite retention creates the opportunity for future misuse, especially since DNA samples reveal ample information about us—from familial relationships to medical history—and may imply characteristics like race and ethnicity. Some have even suggested DNA can reveal intelligence and sexual orientation, although this has been disproved. We’ve seen DNA misuse in the context of genetic genealogy databases, where people voluntarily provide DNA to private companies for ancestry or health analysis, and law enforcement later accesses the database to solve crimes. In 2015, a New Orleans filmmaker was nearly implicated in a cold case murder after police accessed a private genealogy database without a warrant and identified an “exceptionally good match” between the crime scene sample and the filmmaker’s father’s DNA profile.

Lastly, the final rule exacerbates the existing racial disparities in our criminal justice system by subjecting communities of color to genetic surveillance. Black and Latino men are already overrepresented in DNA databases. Adding 750,000 profiles of immigrant detainees annually—who are almost entirely people of color, and the vast majority of whom are Latinx—will further skew the 18 million profiles already in CODIS.

The final rule is yet another example of the government weaponizing biometrics as a form of surveillance of vulnerable communities. This dangerous expansion of DNA collection brings us one step closer to genetic surveillance of the entire population.

Categories
announcement Biometrics Intelwars Mandatory National IDs and Biometric Databases

Announcing Who Has Your Face

The government and law enforcement should not be scanning your photos with face recognition technology. But right now, at least half of Americans are likely in government face recognition databases—often thanks to secretive agreements between state and federal government agencies—without any of us having opted in. Although the majority of Americans are in these databases, it’s nearly impossible to know whether or not your photo has been included. Today, EFF is launching a new project to help fight back: Who Has Your Face.

Who Has Your Face includes a short quiz that you can use to learn which U.S. government agencies may have access to your photo for facial recognition purposes, as well as a longer resource page describing in detail the photo sharing we discovered. This project, a collaboration with the Center on Privacy & Technology at Georgetown Law, aims to shine a light on the photo sharing that has allowed the Department of Homeland Security, Immigration and Customs Enforcement, dozens of Departments of Motor Vehicles, the Federal Bureau of Investigation, law enforcement, and many other agencies to use face surveillance on millions of people without their knowledge.
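Conceptually, the quiz works as a lookup from a few facts about you to the agencies our records research associates with each one. The mapping below is a hypothetical illustration of that structure, not EFF’s actual data:

```python
# Hypothetical rules mapping quiz answers to agencies that may hold your
# photo; the real mappings come from public-records research.
SHARING_RULES = {
    "drivers_license": ["State DMV", "FBI (where sharing agreements exist)"],
    "passport": ["State Department"],
    "visa_or_green_card": ["DHS", "ICE"],
}

def likely_holders(answers: set[str]) -> list[str]:
    """Collect every agency implicated by at least one answer."""
    agencies = {a for answer in answers for a in SHARING_RULES.get(answer, [])}
    return sorted(agencies)

print(likely_holders({"drivers_license", "passport"}))
# ['FBI (where sharing agreements exist)', 'State DMV', 'State Department']
```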


This project builds on The Perpetual Lineup, a project of the Center on Privacy & Technology at Georgetown Law, and on EFF’s research into the growth of government databases like this one. To bring this project to you, we reviewed thousands of pages of public records to determine as clearly as possible which government photos of U.S. citizens, residents, and travelers are shared with which agencies for facial recognition purposes.

A screenshot of the Who Has Your Face quiz results: after answering a few short questions, the tool lists the agencies that likely have access to your image.

Individuals Don’t Know They’re In Facial Recognition Databases and Can’t Opt Out

As U.S. government agencies have increased the type of information they collect on individuals, expanding from fingerprints to faceprints, and adding voice data, DNA, scars and tattoos, they’ve also hoovered up more and more information from individuals without their knowledge. Much of this is collected during fairly common practices like applying for a driver’s license. 

The number of people affected by face recognition is staggering: we count at least 27 states where the FBI can search or request data from driver’s license and ID databases. In June of last year, the Government Accountability Office reported only 21. The total number of DMVs with facial recognition is now at least 43, only four of which entirely restrict data sharing. That puts two-thirds of the population of the U.S. at risk of misidentification, with no choice to opt out. That number is unconscionable. These data-sharing agreements, made between agencies with little or no room for input from those they affect, violate the privacy of thousands of people every day.

Data sharing is especially dangerous for vulnerable individuals and populations, and is especially egregious in some states: in Maryland, for example, undocumented individuals can obtain driver’s licenses and IDs, but data-sharing agreements also allow ICE to run face recognition on those DMV databases. This turns the legal protection of a driver’s license into a way for ICE to target undocumented individuals for deportation. Florida, the third most populous state in the nation, has the longest-running facial recognition database in the country, and offers over 250 agencies access to DMV photos for facial recognition purposes.

Lack of Transparency Thwarts Attempts to Learn Who’s At Risk

Despite hundreds of hours of research, it’s still not possible to know precisely which agencies are sharing which photos, and with whom. Each agency across the U.S., from state DMVs to the State Department, shares access to its photos differently, depending on agreements with local police, other states, and federal agencies. We were continuously thwarted in our research by non-responsive government agencies, conflicting information and agreements, and the generally covert nature of these policies. This is a huge problem: it should be easy to learn who has the personal data you’ve been required to hand over in exchange for a driver’s license, or for re-entry into the country after visiting family abroad.

But agencies responded to requests for transparency in very different ways. When sent the same public records request, some DMVs gave the precise number of facial recognition requests they had received from outside agencies but not which agencies sent them: Wisconsin’s DMV, for example, received 238 requests in 2016, and Nevada received 788 requests between June 14, 2015, and March 8, 2018. Other DMVs disclosed who had made requests and how many: a list of agency requests to Utah’s Department of Public Safety included Immigration and Customs Enforcement, the Department of Homeland Security, various state Fusion Centers, state Secret Service agencies, and the United States Office of National Drug Control Policy. Utah also responded with data about how successful the requests had been.

A spreadsheet of facial recognition requests to the Utah Statewide Information & Analysis Center, with columns for Date Submitted, Ticket ID, Agency, Other, Case Number, and Query Results.

Still others did not respond, regarded the questions as overbroad, or claimed to have no responsive records. Alabama’s DMV, for example, essentially ignored the request until we sent an example of the Memorandum of Understanding we believed it had signed.

Reports also contradict one another. The American Association of Motor Vehicle Administrators (AAMVA), a tax-exempt nonprofit that serves as an “information clearinghouse” for Departments of Motor Vehicles across the United States and allows members to interactively request and verify license and ID applicants’ images, reported just three months ago that Idaho’s Transportation Department and Oklahoma’s Department of Public Safety have facial recognition; yet both of those states responded to our requests by saying they did not.

Another area of confusion: three states confirmed to take part in AAMVA’s National Digital Exchange Program do not have facial recognition systems of their own, so whether and how they comply with those agreements is unclear. The REAL ID Act, which requires state licenses to adhere to certain uniform standards if they are to be accepted for some federal purposes, also complicates matters. Many states interpret it as requiring them to provide electronic access to all other states to information contained in their motor vehicle database, and to offer some access to federal agencies such as DHS or ICE. But in some states, sharing this data with the federal government is explicitly forbidden by law. In Utah, for example, the state DMV granted federal access to its database despite the state legislature rejecting the federal info-sharing required under REAL ID.

This level of confusion and obfuscation is, frankly, unacceptable. It should be simple for anyone to learn who has their private, biometric data, and we must work to make it easier.

It’s Time to Ban Government Use of Face Surveillance

Lack of transparency is, of course, only part of the problem. Face surveillance is a growing menace to our privacy even when the agencies with access to the technology are clear about it. Police-worn body cameras with face surveillance can record the words, deeds, and locations of much of the population at a given time. The Department of Homeland Security and Customs and Border Protection can use face surveillance to track individuals throughout their travels. Government use of face recognition data obtained from private companies, like Clearview AI, poses additional threats. Government must not be allowed to implement this always-on panopticon.

Thankfully, more and more laws that ban government use of this technology are passing around the country. In addition to the several states that currently don’t allow or don’t have face recognition at DMVs (California, Idaho, Louisiana, Missouri, New Hampshire, Oklahoma, Virginia, and Wyoming), cities like San Francisco, Berkeley, and Oakland in California, and Somerville in Massachusetts have also passed bans on its use by city governments. California has even passed a moratorium on government use of face recognition with mobile cameras. As more cities pass these bans, we hope more states join in protecting their residents, and in being transparent about who has access to every technology that could endanger civil liberties. It’s time to ban government use of face surveillance. 

Learn more about who has your face by visiting Who Has Your Face. To help ban government use of face recognition in your city, visit our About Face campaign.

Categories
Biometrics Commentary face surveillance Intelwars privacy

Clearview AI—Yet Another Example of Why We Need A Ban on Law Enforcement Use of Face Recognition Now

This week, additional stories came out about Clearview AI, the company we wrote about earlier that’s marketing a powerful facial recognition tool to law enforcement. These stories discuss some of the police departments around the country that have been secretly using Clearview’s technology, and they show, yet again, why we need strict federal, state, and local laws that ban—or at least press pause—on law enforcement use of face recognition.

Clearview’s service allows law enforcement officers to upload a photo of an unidentified person to its database and see publicly-posted photos of that person along with links to where those photos were posted on the internet. This could allow the police to learn that person’s identity along with significant and highly personal information. Clearview claims to have amassed a dataset of over three billion face images by scraping millions of websites, including news sites and sites like Facebook, YouTube, and Venmo. Clearview’s technology doesn’t appear to be limited to static photos but can also scan for faces in videos on social media sites.
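Clearview has not published how its system works, but face-search services of this kind are generally understood to map each scraped photo to an embedding vector and rank the gallery by similarity to a probe image. The sketch below is a generic illustration under that assumption; `embed_face` stands in for a hypothetical embedding model:

```python
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder for a face-embedding model: a network
    trained so that photos of the same person map to nearby vectors."""
    raise NotImplementedError

def search_gallery(probe_embedding: np.ndarray,
                   gallery: list[tuple[np.ndarray, str]],
                   top_k: int = 5) -> list[tuple[float, str]]:
    """Rank scraped photos (embedding, source URL) by cosine similarity
    to the probe face and return the best matches with their URLs."""
    q = probe_embedding / np.linalg.norm(probe_embedding)
    scored = [(float(q @ (e / np.linalg.norm(e))), url) for e, url in gallery]
    return sorted(scored, reverse=True)[:top_k]
```

Note that the privacy harm lives in the gallery: because each embedding is stored alongside the URL where the photo was scraped, a single match can lead straight back to a person’s name and online life.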

Clearview has been actively marketing its face recognition technology to law enforcement, and it claims more than 1,000 agencies around the country have used its services. But up until last week, most of the general public had never even heard of the company. Even the New Jersey Attorney General was surprised to learn—after reading the New York Times article that broke the story—that officers in his own state were using the technology, and that Clearview was using his image to sell its services to other agencies.

All of this shows, yet again, why we need to press pause on law enforcement use of face recognition. Without a moratorium or a ban, law enforcement agencies will continue to exploit technologies like Clearview’s and hide their use from the public.

Law Enforcement Abuse of Face Recognition Technology Impacts Communities

Police abuse of facial recognition technology is not theoretical: it’s happening today. Law enforcement has already used “live” face recognition on public streets and at political protests. Police in the UK continue to use real-time face recognition to identify people they’ve added to questionable “watchlists,” despite high error rates, serious flaws, and significant public outcry. During the protests surrounding the death of Freddie Gray in 2015, Baltimore Police ran social media photos against a face recognition database to identify protesters and arrest them. Agencies in Florida have used face recognition thousands of times to try to identify unknown suspects without ever informing those suspects or their defense attorneys about the practice. NYPD officers appear to have been using Clearview on their personal devices without department approval and after the agency’s official face recognition unit rejected the technology. And even Clearview itself seems to have used its technology to monitor a journalist working on a story about its product.

Law enforcement agencies often argue they must have access to new technology—no matter how privacy invasive—to help them solve the most heinous of crimes. Clearview itself has said it “exists to help law enforcement agencies solve the toughest cases.” But recent reporting shows just how quickly that argument slides down its slippery slope. Clifton, New Jersey officers used Clearview to identify “shoplifters, an Apple Store thief and a good Samaritan who had punched out a man threatening people with a knife.” And a lieutenant in Green Bay, Wisconsin told a colleague to “feel free to run wild with your searches,” including using the technology on family and friends.

Widespread Use of Face Recognition Will Chill Speech and Fundamentally Change Our Democracy

Face recognition and similar technologies make it possible to identify and track people in real time, including at lawful political protests and other sensitive gatherings. Widespread use of face recognition by the government—especially to identify people secretly when they walk around in public—will fundamentally change the society in which we live. It will, for example, chill and deter people from exercising their First Amendment protected rights to speak, assemble, and associate with others. Countless studies have shown that when people think the government is watching them, they alter their behavior to try to avoid scrutiny. And this burden falls disproportionately on communities of color, immigrants, religious minorities, and other marginalized groups.

The right to speak anonymously and to associate with others without the government watching is fundamental to a democracy. And it’s not just EFF saying that: the founding fathers used pseudonyms in the Federalist Papers to debate what kind of government we should form in this country, and the Supreme Court has consistently recognized that anonymous speech and association are necessary for the First Amendment right to free speech to be meaningful at all.

What Can You Do?

Clearview isn’t the first company to sell a questionable facial recognition product to law enforcement, and it probably won’t be the last. Last year, Amazon promotional videos encouraged police agencies to acquire the company’s “Rekognition” face recognition technology and use it with body cameras and smart cameras to track people throughout cities; this was the same technology the ACLU later showed to be highly inaccurate. At least two U.S. cities have already used Rekognition.

But communities are starting to push back. Several communities around the country as well as the state of California have already passed bans and moratoria on at least some of the most egregious government uses of face recognition. Even Congress has shown, through a series of hearings on face recognition, that there’s bipartisan objection to carte blanche use of face recognition by the police. 

EFF has supported and continues to support these new laws as well as ongoing legislative efforts to curb the use of face recognition in Washington, Massachusetts, and New York. Without an official moratorium or ban, high-level attorneys have argued police use of the technology is perfectly legal. That’s why now is the time to reach out to your local city council, board of supervisors, and state or federal legislators and tell them we need meaningful restrictions on law enforcement use of face recognition. We need to stop the government from using this technology before it’s too late.
