
Parkland parents use artificial intelligence video of their dead son to push gun control

The parents of a student killed during the Feb. 14, 2018, mass shooting at Marjory Stoneman Douglas High School in Parkland, Florida, recently used an artificial intelligence video of their dead son to promote gun control.

In the aftermath of the death of their 17-year-old son, Joaquin “Guac” Oliver, Manuel and Patricia Oliver founded the nonprofit organization Change The Ref to help empower young people to advance reform on a number of issues, most notably gun violence, the Associated Press reported.

As part of a get-out-the-vote initiative aimed at young people, the Olivers worked with a team of artists to create a video of their son urging viewers to vote for politicians who support gun control.

The upcoming election would have been the first one in which Joaquin could have cast his vote.

“I’ve been gone for two years and nothing’s changed, bro. People are still getting killed by guns … what is that?” he passionately exclaims in the video. “I’m tired of waiting for someone to fix it.

“I’ll never get to choose the kind of world I wanted to live in, so you’ve got to replace my vote,” he continues. “Vote for politicians who care more about people’s lives than the gun lobby’s money. Vote for people not getting shot, bro.”

The Olivers reportedly helped craft every detail of the video, from their son’s wardrobe to his mannerisms to the very words he spoke.

“It’s something where you just put the dots together if you see his posts, the way he thinks, he was still thinking, the way he was expressing his frustration about situations,” Patricia Oliver told the AP in a phone interview.

“We are letting Joaquin grow into his ideas … and how he will be reacting to things that are happening today. We know our son so well and we knew exactly what he wanted from life,” Manuel Oliver added.

The report suggested that Joaquin had been politically active from a young age. When he was 12, he reportedly penned a letter to gunmakers asking why they didn’t support universal background checks.

His mother said the lifelike video was extremely difficult for her to watch.

“I couldn’t even breathe well,” she said. “Of course we know that is not Joaquin, but they did such an amazing job with the technology that you can’t say, ‘Oh my God, how I wish that could be the real Joaquin there talking to everybody.'”

His father, who has been keeping his son’s gun control message alive using his artistic abilities, said of the video: “I wouldn’t describe this as painful but as powerful.”

On his son’s birthday, Manuel Oliver painted a mural outside the National Rifle Association’s headquarters in Virginia. He painted another mural near the headquarters of major gunmaker Smith & Wesson in Massachusetts.


DHS Enters The Bribery Business, Offering Cash Prizes To Come Up With A Better Digital Wallet

This article was originally published by Mass Private I at Activist Post. 

Why is everyone trying to convince the public to use digital wallets for everything?

For years the Department of Homeland Security (DHS) has been trying to force Real-ID down Americans’ throats. But a recent announcement on its Science and Technology (S&T) website revealed just how committed DHS is to tracking every American digitally.

DHS’s latest idea is to offer companies a “Prize Challenge” to convince the public to use their digital wallet for travel and much more.

S&T uses prize competitions to invite ideas and solutions directly from the public (a practice known as crowdsourcing). The prizes let the agency engage citizen-solvers in competitions for top ideas, concepts, and breakthroughs in science and technology that help DHS.

By offering corporations money to develop a better digital biometric wallet, DHS has entered the business of bribery, commonly defined as “the act of giving money (or something else of value) to someone to get them to do something you want them to do, especially something they’re not supposed to do.”

The Feds should not be offering companies money to design better ways to collect information on the public. Period. Americans do not want our government collecting and storing our biometric information.

Companies can submit their designs for a better digital wallet from Tuesday, September 8, 2020, through Thursday, October 15, 2020.

S&T is calling upon innovators to design a better UI for digital wallets. The total prize purse is $25,000. Winning designs will be easy to use and trustworthy, and will improve the overall user experience and management of digital wallet-based credentials.

As the video below explains, a better digital wallet will allow DHS and the TSA to have easy access to everyone’s personal information.

IDEMIA’s digital wallet stores all kinds of personal information

DHS is looking for a UI design that supports best practices for visual consistency, ensures security and privacy, is interoperable, and can be integrated with existing back-end processes. The UI needs to instill confidence in the user of the digital wallet that their online interactions are secure and that the parties they are interacting with are legitimate. The goal of this Challenge is to foster better UIs for digital wallets to be used by DHS and anyone in the community.

The main reason they are offering a prize challenge is to instill confidence in the public that storing their biometric information on a smartphone is secure.

DHS S&T anticipates a total prize pool of $25,000. The Challenge will be conducted in two stages.

Stage 1: Up to three finalists will each receive $5,000. Stage 1 finalists advance to Stage 2.

Stage 2: One grand prize winner will receive an additional $10,000 at the conclusion of the Challenge.

According to DHS’s “judging” section, the new and improved digital wallet must be “easy-to-use” and follow the World Wide Web Consortium’s (W3C) guidelines.

The W3C’s origins are highly suspect and scream DHS front company.

According to Wikipedia, the W3C was founded at the Massachusetts Institute of Technology Laboratory for Computer Science with support from the European Commission and the Defense Advanced Research Projects Agency (DARPA). DARPA’s involvement with digital wallets is a huge red flag.

Companies like IDEMIA, iProov, and YOTI are just a few examples of how much personal information corporations and DHS will store on digital wallets.

A recent news release from YOTI about digital wallets in the United Kingdom proves that they will be used to buy alcohol, rent apartments, and much more.

Secondly, the government has signaled that there is legislation in the works to ensure that digital ID can be used as broadly as possible. We know there are some easy wins for the government, like changing the existing mandatory licensing regime for alcohol sales to allow retailers to rely on robust, privacy-preserving digital age verification. In addition, the industry seeks certainty that amendments, such as usage of digital ID for Right to Rent Checks, will continue after the COVID-19 pandemic ends.

With so many red flags about digital wallets, one would do well to ask, do you really want to hand over your biometric identity to the feds?

Are Americans ready to give up their last remaining vestige of privacy: their biometrics? Only time will tell.

Source: MassPrivateI

The post DHS Enters The Bribery Business, Offering Cash Prizes To Come Up With A Better Digital Wallet first appeared on SHTF Plan – When It Hits The Fan, Don't Say We Didn't Warn You.


Hacking AI-Graded Tests

The company Edgenuity sells AI systems for grading tests. Turns out that they just search for keywords without doing any actual semantic analysis.
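
Why keyword-only grading is so easy to game can be seen in a minimal sketch (the grading function and keyword list below are hypothetical illustrations, not Edgenuity's actual code):

```python
import re

# Hypothetical sketch of a keyword-only grader: it counts keyword hits
# and performs no semantic analysis of the answer at all.
def keyword_grade(answer: str, keywords: list[str]) -> float:
    """Return the fraction of expected keywords found in the answer."""
    words = set(re.findall(r"[a-z']+", answer.lower()))
    hits = sum(1 for kw in keywords if kw.lower() in words)
    return hits / len(keywords)

keywords = ["mitochondria", "ATP", "respiration"]

# A coherent answer and a bare keyword dump score identically, because
# the grader never checks whether the sentence makes sense.
real_answer = "Mitochondria produce ATP through cellular respiration."
gamed_answer = "mitochondria atp respiration"

print(keyword_grade(real_answer, keywords))   # 1.0
print(keyword_grade(gamed_answer, keywords))  # 1.0
```

Because only keyword presence is checked, a student who pastes a bare list of expected terms scores exactly as well as one who wrote a coherent answer.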


Testing Will Begin In Africa For Biometric ID, “Vaccine Records,” & “Payment Systems”

Testing will soon begin in poverty-stricken parts of Africa for a biometric ID which will also be your payment system and vaccine record. The biometric digital identity platform that “evolves just as you evolve” is backed by none other than the Bill Gates-backed GAVI vaccine alliance, Mastercard, and the AI-powered “identity authentication” company, Trust Stamp.

The GAVI Alliance, which is largely funded by the Bill and Melinda Gates and Rockefeller Foundations, as well as allied governments and the vaccine industry, is principally concerned with improving “the health of markets for vaccines and other immunization products,” rather than the health of individuals, according to its own website. Similarly, Mastercard’s GAVI partnership is directly linked to its “World Beyond Cash” effort, which mainly bolsters its business model that has long depended on a reduction in the use of physical cash.

Reducing the use of cash is key to the plan: cash is impossible to track, but if you use a centralized digital currency, the ruling class has complete control over what you can spend.

The program, which was first launched in late 2018, will see Trust Stamp’s digital identity platform integrated into the GAVI-Mastercard “Wellness Pass,” a digital vaccination record and identity system that is also linked to Mastercard’s click-to-pay system powered by its AI and machine learning technology, NuData. Mastercard, in addition to professing its commitment to promoting “centralized record keeping of childhood immunization,” also describes itself as a leader toward a “World Beyond Cash,” and its partnership with GAVI marks a novel approach to linking a biometric digital identity system, vaccination records, and a payment system into a single cohesive platform. The effort, since its launch nearly two years ago, has been funded via $3.8 million in GAVI donor funds in addition to a matched donation of the same amount by the Bill and Melinda Gates Foundation. –Activist Post

In early June, GAVI reported that Mastercard’s Wellness Pass program would be adapted in response to the coronavirus (COVID-19) pandemic. Around a month later, Mastercard announced that Trust Stamp’s biometric identity platform would be integrated into Wellness Pass as Trust Stamp’s system is capable of providing biometric identity in areas of the world lacking internet access or cellular connectivity and also does not require knowledge of an individual’s legal name or identity to function. The Wellness Program involving GAVI, Mastercard, and Trust Stamp will soon be launched in West Africa and will be coupled with a COVID-19 vaccination program once a vaccine becomes available.

What is perhaps most alarming about this new “Wellness Pass” initiative is that it links these “dual use” digital solutions to cashless payment solutions that could soon become mandated, as anything other than touchless, cashless methods of payment has been treated as a potential mode of contagion by GAVI-aligned groups like the World Health Organization, among others, since the pandemic was first declared earlier this year. –Activist Post

Do you get it yet? It’s all tied into the same thing, and the plandemic is an excuse to roll this out. Wake up. They are not coming to save you, quite the opposite, actually.

For those stuck on the line of thinking that President Donald Trump said this “vaccine will be voluntary,” you are probably correct. It’ll be “voluntary” all right. And if you don’t get it and participate in the new biometric ID program, you won’t be able to buy or sell anything, including food. That sounds nothing like the definition of voluntary to me, but believe in whatever religion you wish and put your trust in whomever you want. I’ll rely on myself instead of some politician to save me.

Oh, just what does Trump need 300 million doses of the vaccine for if it’s going to be “voluntary?” We are in for a “dark winter” as they have already told us several times. It’s time to apply critical thinking and stop falling for all of these psyops.

Those Who Planned The Enslavement of Mankind Warn Of “A Dark Winter” For Us

This doesn’t mean you shouldn’t remain vigilant and know what’s going on. Get your preps in order. Do another audit, buy some more food, and improve your water storage.  This system is here and it will not be voluntary in any sense of the word.  It’s similar to our “voluntary tax” system. Go ahead and choose to not pay, and men with guns will come to your house to make you pay. Yep, that’s how voluntary interaction works (note: that was sarcasm). Believe any politician you want, but they are all puppets for the Federal Reserve, and their takeover is imminent unless we wake up and stand together.

The entire breakdown of this new beast system can be read by clicking here.

Don’t just trust my word. Look into these issues for yourself. Everything is linked above, and better yet, find your own information. I would implore all of you to not just believe what you are being told by anyone, including Trump or myself. Research, read, learn, and prepare.



China’s state news introduces 3D, artificially intelligent anchor that can imitate human voices, mannerisms

China’s state news agency Xinhua has introduced its first artificially intelligent, three-dimensional news anchor, TheNextWeb reported.

What are the details?

Xin Xiaowei is modeled after one of the agency’s human news presenters, Zhao Wanwei, the outlet said, adding that search engine Sogou is one of the avatar’s co-developers.

Here’s a look at the 3D, AI news anchor:

In the clip, Xin Xiaowei speaks on set and says developers’ technology “employs multi-modal recognition and synthesis, facial recognition, and animation and transfer learning” that “allow me to intelligently imitate human voices, facial expressions, lip movements, and mannerisms.”

“In the future, I will walk out of the studio and bring you a refreshing news broadcasting style under various scenarios,” the avatar adds.

More from TheNextWeb:

The anchor is the latest in a growing gang of virtual presenters used by Xinhua. In 2018, the agency introduced a digital anchor called Qiu Hao, which used machine learning to simulate the voices, facial movements, and gestures of real broadcasters.

The following year, Xinhua unveiled a Russian-speaking version, which was developed alongside ITAR-TASS, a Russian news agency. The avatar was launched on the 70th anniversary of diplomatic relations between China and Russia, which suggests there are political motivations for the work as well as promotional ones.

Xin Xiaowei’s first assignment is reporting for Xinhua on the Two Sessions, the annual meetings of China’s national legislature and top political advisory body, which began Wednesday, the outlet said.


Facebook wants to erase ‘Hateful Memes’ from its platform — and is offering $100,000 in prizes for those who can help

Facebook has launched a competition to help the social media giant erase “Hateful Memes” from its site — and is offering $100,000 in prizes for the programs that can accomplish the task.

What are the details?

Of course, it’s easy for programs to spot actual hate speech and delete it. The Facebook AI team defines “hate speech” as:

“A direct or indirect attack on people based on characteristics, including ethnicity, race, nationality, immigration status, religion, caste, sex, gender identity, sexual orientation, and disability or disease. We define attack as violent or dehumanizing (comparing people to non-human things, e.g. animals) speech, statements of inferiority, and calls for exclusion or segregation. Mocking hate crime is also considered hate speech.”

“This definition mirrors the community standards on hate speech employed by Facebook, and is intended to provide an actionable classification label: if something is hate speech according to this definition, it should be taken down; if not, even if it is distasteful or objectionable, it is allowed to stay,” the competition page states.

But then there’s the problem of hateful memes being created by combining otherwise harmless phrases and otherwise harmless images:

Image source: DrivenData video screenshot

Which is why the tech giant is instructing participants in its challenge to create artificial intelligence that can detect text and images used together as a hateful meme.
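
The difficulty can be made concrete with a toy sketch (the classifiers and scores below are hypothetical stand-ins, not Facebook's actual models): scoring text and image separately misses memes whose harm emerges only from the combination, so the detector has to score the pair jointly.

```python
# Toy illustration: unimodal scoring vs. joint scoring of a meme.
# The individual "classifiers" below are hypothetical stand-ins that
# just look up canned scores; a real system would use trained models.

def text_hate_score(text: str) -> float:
    # The phrase is harmless on its own.
    return {"look how many people love you": 0.01}.get(text, 0.5)

def image_hate_score(image_id: str) -> float:
    # The image is harmless on its own.
    return {"barren_desert.jpg": 0.02}.get(image_id, 0.5)

def unimodal_score(text: str, image_id: str) -> float:
    # Scores each modality separately, then takes the worse of the two.
    return max(text_hate_score(text), image_hate_score(image_id))

def multimodal_score(text: str, image_id: str) -> float:
    # A joint model can flag combinations that are hateful together.
    hateful_pairs = {("look how many people love you", "barren_desert.jpg"): 0.95}
    return hateful_pairs.get((text, image_id), unimodal_score(text, image_id))

meme = ("look how many people love you", "barren_desert.jpg")
print(unimodal_score(*meme))    # 0.02 (slips past separate classifiers)
print(multimodal_score(*meme))  # 0.95 (flagged only when scored jointly)
```

The point of the sketch is the gap between the two scores: any pipeline that evaluates text and image independently will pass both halves of such a meme as benign.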

“We continue to make progress in improving our AI systems to detect hate speech and other harmful content on our platforms, and we believe the Hateful Memes project will enable Facebook and others to do more to keep people safe,” the competition page also said.


CV-1984: The Accelerated Rise Of Automated Robots

This article was originally published by Aaron Kesel at Activist Post. 

CV-1984 is accelerating the spread of Orwellian surveillance devices like talking drones, facial recognition cameras, and more; but that’s not the only technology advancing. Robots and other automated machines are also being deployed faster to combat the spread of the coronavirus.

This reporter and Activist Post have consistently stated that robots were coming for our jobs; now, as record unemployment numbers hit here in the U.S., automated machines are being rolled out to replace even more workers.

ZeroHedge reports:

Bloomberg Law reports JBS SA, the world’s largest meat producer, is preparing to install robots in slaughterhouses to mitigate the spread of COVID-19 among human employees working on the production line.

JBS SA CFO Guilherme Cavalcanti recently said the Brazilian processing company expects to expand automation at its facilities across the world.

Cavalcanti said the adoption of automation started before the pandemic as labor tightened at US plants due to a decline in immigration sparked by the Trump administration. He said labor shortages have developed in the US as the virus infects workers and shutters plants.

Here’s a video of those meat packaging machines in action.

It’s not just meat packaging that could be threatened by robots; grocers themselves are also in danger of being replaced by machines. CNN reports that grocers – big and small chains alike – are turning to robots to perform various tasks like cleaning floors, stocking shelves, and delivering groceries to shoppers. The COVID-19 crisis could even prompt online retail warehouses like Amazon to invest more in automation technology as well.

The New York Times reported that the outbreak is boosting demand for Zhen Robotics and its RoboPony, a self-driving cart that is sold to retailers, hospitals, malls, and apartment complexes.

A group of scientists on the editorial board of Science Robotics is further calling for robots to do the “dull, dirty, and dangerous jobs” of infectious disease management, replacing certain hospital tasks with disinfecting robots that comb rooms and floors and machines that work in labs.

A recent report by A3, the Association for Advancing Automation, further details all the ways artificial intelligence and automation are being used in different industries to combat the coronavirus.

This all continues to highlight what Activist Post and this writer have detailed consistently for months: advances in robotics and A.I. are taking more jobs by the day. Construction and farming robots such as Angus and HRP-5P are already being created to replace workers in those industries. Activist Post has been sounding the alarm about the coming robot apocalypse.

Also see the article entitled: “Robots Already Replacing Bank Tellers, Drivers, News Anchors, Restaurant and Warehouse Employees. Will Your Job Be Next?” written by B.N. Frank.

As this writer has written previously on Steemit, we are shifting toward a working world with few or no humans, as automation and artificial intelligence begin to take over our jobs. It’s cheaper to hire a few robots, which don’t need rest and benefits, than a few humans, who need healthcare and retirement funds.

Last year, robots took a record number of jobs in the U.S., according to the Robotic Industries Association (RIA), as Activist Post reported. Now, with the impetus of the coronavirus, the number of jobs occupied by robots could multiply quite rapidly. Oxford Economics also published a report warning that accelerating technological advances in automation, engineering, energy storage, artificial intelligence, and machine learning have the potential to reshape the world through the 2020s and 2030s, displacing at least 20 million workers.

With the coronavirus as a catalyst to speed up the deployment of automated machines, we can probably safely say that number will be much more severe. It seems I am not the only one to share that opinion: a recent MarketWatch article by Johannes Moenius, a professor of global business and the director of the Institute for Spatial Economic Analysis at the University of Redlands, agrees with this author’s conclusion, stating that “at least 50 million jobs could be automated in just essential industries.”

In fact, the Brookings Institution said in a report last month that “any coronavirus-related recession is likely to bring about a spike in labor-replacing automation … Automation happens in bursts, concentrated especially in bad times such as in the wake of economic shocks, when humans become relatively more expensive as firms’ revenues rapidly decline.”


A New AI Strategy to Combat Domestic Terrorism and Violent Extremism

Jonathan Fischbach[*]

[This essay is available in PDF at this link]

Introduction: A Revealing Inversion

Data scientists utilize artificial intelligence (AI) in thousands of different contexts, ranging from analytics that design culinary masterpieces and identify illegal fishing, to algorithms that diagnose cancerous tumors, virtually compose symphonies, and predict vehicle failures.[1] Two communities within this expansive field, acting independently and without coordination, are currently experimenting with AI for the same narrow purpose—to determine whether machine-learning algorithms can discover patterns in demographic and behavioral data that identify actors likely to endanger innocent people. One group—the national security community—is tightly organized and well financed, operates at the federal level, attracts premier professional talent, and enjoys broad statutory access to data gathered in the United States. The other group—the social policy community—is decentralized and chronically underfunded, acts primarily at the state and local level, struggles to hire qualified data scientists, and lacks the authority and resources to establish access to data across localities. Yet the social policy community has already harnessed AI to achieve noteworthy policy outcomes, while the national security community still labors to understand its relevance and potential. Why?

The answer lies in the critical and underappreciated role of human beings in data collection. Intelligence agencies often acquire foreign intelligence as bits and snapshots of information that are bereft of context and rarely elicited directly by national security personnel. Social workers, however, use comprehensive intake tools to compile holistic histories and risk profiles of their subjects in order to optimize the delivery of appropriate services.[2] These case files are fertile terrain for machine-learning algorithms, which have unearthed striking and unforeseen causal relationships in numerous social service datasets and disciplines.

This track record suggests that the government cannot innovate its way to productive AI simply by developing more sophisticated software or by inventing better algorithms. To fully exploit machine technology, the national security community must reconceive its approach to data collection by reallocating resources from platforms that produce fragmentary collection to programs that leverage sources of data rich in biographic and narrative detail. But perhaps more significantly, this revelation underscores the vital importance of building bridges and establishing dialogue between professionals in the national security and domestic social policy realms. A natural first step would be a collaborative venture to determine how AI can help identify and ameliorate the conditions that induce domestic terrorism and violent extremism, emerging crises with both national security and social policy dimensions.

This article explores the hidden relevance of AI-enabled social welfare initiatives to national security programs. Part I recounts the efforts of one social welfare agency in Allegheny County, Pennsylvania to develop advanced analytics that predict violent acts against children before they occur. Part II surveys the use of AI in other social welfare programs to advance policies that substantially overlap national security missions. Part III highlights critical distinctions between intelligence information and the social welfare data that drive AI successes in the social policy realm. Part IV proposes two strategies to overcome the inherent difficulties of using traditional intelligence collection to support AI capabilities. Part V envisions the fusion of social policy tradecraft with national security missions and resources to neutralize the threat of domestic terrorism and violent extremism.

I.  A Social Welfare Triumph in Allegheny County

On June 30, 2011, firefighters rushed to the scene of a fire blazing from a third-floor apartment on East Pittsburgh-McKeesport Boulevard in Pittsburgh.[3] When the firefighters broke down the door, they found the body of seven-year-old KiDonn Pollard-Ford buried under a pile of clothes in his bedroom, where he sought refuge from the smoke. KiDonn’s four-year-old brother KrisDon was lying under his bed, unconscious. He died in the hospital two days later. The Office of Children, Youth and Families (CYF) in Allegheny County, Pennsylvania previously received calls alleging that the children were neglected, but on each occasion, a case screener “screened the call out,” or determined that the information did not warrant additional follow-up.

This tragedy was followed by other episodes of children dying in Allegheny County after CYF call screeners dismissed allegations of abuse or neglect. Desperate to halt this trend, CYF contacted two social scientists exploring the use of AI to improve the response of social welfare agencies to allegations of mistreatment. The scientists exhaustively combed through all 76,894 allegations CYF screened between April 2010 and April 2014. For each allegation, the team created an entry composed of every piece of information the county knew about the family, comprising more than one hundred different data fields populated by eight separate databases. Each entry reflected whether children were re-victimized after CYF screened a phone call implicating their family.
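
The dataset construction described above can be sketched in miniature (the field names, join logic, and sample records below are illustrative assumptions, not the study's actual schema): one entry per screened allegation, with features merged from the county's separate databases and a label recording whether the children were later re-victimized.

```python
# Hypothetical sketch of assembling one training entry per allegation.
# Each entry merges everything the county knows about the family from
# several databases and is labeled by the later outcome.

def build_training_row(allegation: dict, databases: list[dict]) -> dict:
    """Merge all known information about a family into one labeled entry."""
    row = {"allegation_id": allegation["id"]}
    for db in databases:  # e.g. jail, mental health, public welfare records
        row.update(db.get(allegation["family_id"], {}))
    # Outcome label: were the children re-victimized after the call was screened?
    row["label"] = allegation["re_victimized"]
    return row

# Illustrative stand-ins for two of the county's databases.
databases = [
    {"fam1": {"prior_referrals": 4}},
    {"fam1": {"parent_jail_record": True}},
]
allegation = {"id": 1, "family_id": "fam1", "re_victimized": True}

row = build_training_row(allegation, databases)
print(row)
# {'allegation_id': 1, 'prior_referrals': 4, 'parent_jail_record': True, 'label': True}
```

In the actual study, this merge step produced over one hundred data fields per entry across 76,894 allegations; the sketch just shows the shape of the pipeline.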

The results of the study were jarring: CYF call screeners were screening in 48% of the lowest-risk families for additional investigation, and screening out 27% of the highest-risk families. The social scientists used the database to develop an algorithm—the “Allegheny Family Screening Tool”—that generated a machine assessment of the risk posed by each family accused of abuse or neglect. CYF deployed the algorithm in August 2016. Thereafter, when a screener fielded an allegation and made a preliminary decision to screen the call in or out, the final step was to click on an icon for the Allegheny Family Screening Tool to produce the algorithm’s risk assessment.

After using the algorithm for sixteen months, Allegheny County reported that CYF call screeners accurately identified high-risk calls more frequently. The percentage of low-risk cases flagged for investigation dropped from nearly half of all investigations to one third, and audits revealed that screeners treated black and white families more consistently. A second round of technical modifications increased the algorithm’s success rate at predicting bad outcomes from 78% to 90%.

*     *     *

At 3:50 p.m. on November 30, 2016, CYF received a call from a preschool teacher. Moments earlier a three-year-old child informed the teacher that her mother’s boyfriend had “hurt their head and was bleeding and shaking on the floor and the bathtub.” Local media outlets reported that the boyfriend had overdosed and died in the home.

The call screener searched CYF’s database for records about the family. The database showed numerous allegations about the family dating back to 2008—substance abuse, inadequate hygiene, domestic violence, and inadequate food and medical care—but not a single allegation was substantiated. Drug use without more did not satisfy the minimum legal requirements to have a caseworker visit the home. To screen the call out and administratively close the file, the CYF screener had to predict the risk of future harm to the child. He typed in “Low risk.” Prompted to assess the threat to the child’s immediate safety, he chose “No safety threat.”

The final step was to click the icon for the Allegheny Family Screening Tool. Three seconds later, the computer screen displayed a numeric scale ranging from 1 (lowest risk) to 20 (highest risk). The score for the child’s family was 19. The screener reversed his initial decision, screened the call in, and recommended that a caseworker visit the home. The caseworker’s investigation revealed that the girl and her two older siblings were in immediate danger, and they were removed from the home. As of 2018, all three children were thriving in placements with other relatives.
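
The intake sequence the article describes can be sketched as a simple decision flow (the threshold, field names, and reversal rule are illustrative assumptions, not Allegheny County's actual implementation):

```python
from dataclasses import dataclass

@dataclass
class ScreeningDecision:
    screener_call: str    # "screen_in" or "screen_out"
    algorithm_score: int  # 1 (lowest risk) to 20 (highest risk)

HIGH_RISK_THRESHOLD = 17  # illustrative cutoff, not the county's actual rule

def final_disposition(d: ScreeningDecision) -> str:
    """The human screener decides first; a high machine score prompts
    reconsideration of a screen-out, as in the November 2016 call."""
    if d.screener_call == "screen_out" and d.algorithm_score >= HIGH_RISK_THRESHOLD:
        return "screen_in"  # screener reverses after seeing the score
    return d.screener_call

# The screener's initial "low risk" call vs. the tool's score of 19.
print(final_disposition(ScreeningDecision("screen_out", 19)))  # screen_in
print(final_disposition(ScreeningDecision("screen_out", 4)))   # screen_out
```

The design point the sketch captures is that the algorithm does not replace the screener; it surfaces a second opinion that can override an initial screen-out.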

II.  Predictive Analytics in the Social Services Realm

Allegheny County’s breakthrough in marshalling AI to revamp its child welfare system is echoed in social policy achievements reported by other localities and social service domains. In 2016, the Oklahoma Department of Human Services built a machine-learning algorithm that analyzed child welfare data statewide to identify the factors most likely to predict child fatalities.[4] The algorithm revealed that fifteen data points—many of which would not set off alarm bells—correlate highly with subsequent child fatalities, including the presence of a child under the age of three, a lover in the home, young parents, and substance abuse.[5] Machine algorithms also illuminate the circumstances that case screeners weigh too heavily in the intake process, including the screener’s own background and experiences, the gender and ethnicity of the child and their family, and the ease of attributing blame for a reported incident to the child (termed “blame ideology”).[6]

These initiatives have produced tangible results. In Iowa, social scientists developed a predictive analytic using data from 6,832 families reported to child welfare services, focusing on families that were either re-reported for alleged maltreatment or had a prior report substantiated.[7] Use of the analytic led case screeners to classify more families as low-risk, enabling the state to channel additional resources and supports to high-risk families.[8] In Broward County, Florida, social scientists used a predictive analytic to assess, for families reported to Florida’s Office of Child Welfare, which social services and supports would most likely ensure that the families would not be reported a second time for alleged mistreatment.[9] The team projected that use of the algorithm could improve child welfare outcomes in Broward County by up to thirty percent through fewer inappropriate referrals and the enhanced use of “light touch services” in low-risk cases, such as periodic phone calls or visits, homemaker services, child care, and transportation.[10]

AI advances in the social policy realm are not limited to child welfare agencies. Leveraging existing administrative data, researchers at Cornell University, the University of Chicago, and Harvard University created a database of over one million bond court cases to develop an analytic that predicts whether defendants released on bail before trial will commit another criminal offense.[11] The scientists estimated that using the analytic to complement the intuition of criminal court judges would (1) reduce crimes committed by released defendants by up to 25%, without increasing the overall number of incarcerated individuals; and (2) lead judges to jail up to 42% fewer people without causing any increase in the crime rate.[12]

Public health experts in Illinois analyzed administrative data from the Illinois Department of Human Services to compile a dataset of 6,457 women who gave birth between July 2014 and May 2015.[13] Using this data, the team developed an analytic to predict when a woman would experience an “adverse birth outcome,” defined to include a preterm birth, low birth weight, death within the first year of life, or infant complications requiring admission to the neonatal intensive care unit.[14] The algorithm indicated that circumstances commonly assumed to increase the risk of adverse birth outcomes—such as mental illness, domestic violence, and prior incarceration—had little to no effect.[15] However, physical attributes and conditions, such as multiple pregnancies, low pre-pregnancy weight, previous preterm birth, and maternal age of 40 years or older, had a surprisingly significant impact on health outcomes for newborns.[16]

III.  Contrasting Social Services Data with Foreign Intelligence

A MITRE Corporation research team surveying the use of predictive analytics in child welfare was unequivocal in its assertion that “[d]ata is the most crucial part of a predictive analytics implementation; without useable data, no further work can be conducted.”[17] Experts generally agree that predictive analytics require vast amounts of diverse data to expose patterns that inform the prediction of future events.[18] But data science initiatives in the Intelligence Community (IC) implicitly presume that any unit of information, aggregated in large quantities, can support the development of effective analytics. The reality is that foreign intelligence—the unit of information available to IC data scientists—is less useful to algorithms than are units of social welfare data because of the way foreign intelligence is collected. At root, this phenomenon is driven by discrepancies in data context. In three critical respects, social-science data points acquire context through shared relationships to a common focal point—connective tissue that rarely binds foreign intelligence data points.

A.  Context Through Horizontal Integration of Discrete Service Systems

An important factor driving the success of AI in the social policy realm is the ability of social workers to trace the status and behavior of individuals across service systems. The Allegheny Family Screening Tool, for example, can access a rich trove of administrative data including medical records, criminal history information, records reflecting disability status, educational records, mental health information, and receipt of government benefits.[19] The data pool for the Broward County child welfare study consolidated 238,912 records from 85 datasets governing a diverse range of triggers for government services or interventions, including sexual abuse, physical abuse, human trafficking, alcohol and drug abuse, malnutrition, environmental hazards, criminal history, homelessness, housing assistance, counseling, child care, juvenile court proceedings, mental health services, adoption, family planning, respite care services, unemployment services, and transportation services.[20] Indeed, the Broward County team observed that efforts to develop a predictive analytic for child welfare outcomes “demonstrate[] the value and importance of integrating data from a variety of essential child welfare data sets in order to improve child welfare assessment instruments and improve child and family outcomes.”[21] Similarly, the Illinois study on adverse birth outcomes leveraged a database that tracks participation in all Illinois government programs, capturing data ranging from biographic information like age, pre-pregnancy weight, birth history, and education level; to behavioral and health indicators such as HIV status, mental illness, tobacco and drug use, homelessness, and child services involvement.[22]

The context that emerges from horizontally integrated data is invaluable to a predictive analytic. It enables algorithms to assemble and analyze a person’s mosaic of interwoven attributes, behaviors, and events—for thousands of unique individuals. The value of machine learning is to reveal relationships between and among these data points that may elude the observation of human beings;[23] horizontally integrated data most closely approximates the layered factual circumstances a predictive analytic would be asked to evaluate.

Foreign intelligence is not similarly contextualized through horizontal integration of data across agencies and domains. Indeed, cultural and technical barriers preclude the IC from establishing databases that consolidate all the government’s information about intelligence targets.[24] IC elements also gravitate by necessity toward more superficial modes of data collection in response to operational needs, resource constraints, and regulatory frameworks. The IC collection model prioritizes intelligence with obvious and immediate relevance to a human analyst; it does not strive for comprehensive coverage of a target’s pattern of life, or cast a wide net for contextual data peripheral to the primary intelligence objective. While social welfare agencies apply intake procedures that produce a holistic profile of their subject, IC elements conceive of subjects as two-dimensional personas caricatured through the lens of a specific threat.

B.  Context Through Longitudinal Data

Predictive analytics rely not only on diverse subject matter but also on data points collected over time, which allow algorithms to sequence significant events into personal timelines that enable the study of cause and effect. For example, the machine-learning algorithm developed to assess whether criminal defendants should be eligible for pre-trial release relied on a data pool that merged information about at least three non-contemporaneous events for each defendant: (1) the defendant’s initial arrest or indictment; (2) whether the defendant was held for trial or released on bail; and (3) whether a defendant released before trial committed another criminal offense.[25] Similarly, analytics used to predict child welfare outcomes are grounded not only in investigation narratives and antecedent events archived in government databases, but also in post-allegation criminal, child welfare, and open-source data that map a family’s trajectory after a reported incident.[26]
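The event-sequencing that these studies depend on can be sketched as a simple data-shaping step. All subject IDs, dates, and event labels below are hypothetical; the sketch shows only how flat administrative records become per-subject timelines from which an outcome label (here, "re-arrested after pretrial release") can be derived.

```python
# Hypothetical sketch: all subject IDs, dates, and event labels are invented.
# Flat records are grouped into date-ordered, per-subject timelines, and an
# outcome label is derived from the order of events in each timeline.
from collections import defaultdict
from datetime import date

events = [
    ("D1", date(2015, 3, 1), "arrest"),
    ("D1", date(2015, 3, 5), "released_on_bail"),
    ("D1", date(2015, 6, 9), "arrest"),              # re-arrest after release
    ("D2", date(2015, 4, 2), "arrest"),
    ("D2", date(2015, 4, 6), "released_on_bail"),
]

def build_timelines(records):
    """Group (subject, date, event) records into date-ordered timelines."""
    timelines = defaultdict(list)
    for subject, when, what in sorted(records, key=lambda r: r[1]):
        timelines[subject].append((when, what))
    return dict(timelines)

def reoffended_after_release(timeline):
    """True if an arrest follows a pretrial release in the timeline."""
    released = None
    for when, what in timeline:
        if what == "released_on_bail":
            released = when
        elif what == "arrest" and released is not None and when > released:
            return True
    return False

labels = {s: reoffended_after_release(t)
          for s, t in build_timelines(events).items()}
print(labels)   # D1 re-offended after release; D2 did not
```

Without the longitudinal dimension, both subjects would look identical (one arrest, one release); the sequence is what makes the training label possible.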

The IC may occasionally construct detailed chronologies for high-profile targets, but these efforts are the exception, not the rule. Foreign intelligence that lacks longitudinal context is not regarded as inherently insufficient or incomplete, and analysts routinely supply informed inferences to frame and contextualize the snapshots of activity revealed in intelligence collection. Thus, intelligence reports about a terrorist or cyber hacker would generally eschew a lengthy treatment of the actor’s childhood, and accentuate the present circumstances that create danger or suggest a threat-mitigation strategy—such as the individual’s unique capabilities, geographic location, access to resources, and personal network. Consequently, the sample of foreign intelligence data points enriched by historical facts is minuscule compared to the volume of longitudinal data in the administrative datasets that shape predictive analytics in the social services realm.

C.  Context Through Documentation of Interventions

Both social policy and national security officials administer a variety of interventions to disrupt potential threats. At the state and local level, social sector responses range from passive observation, to engagement through counseling and social services, to support through government benefits and assistance, to arrest and prosecution. Strategies to disrupt national security threats include intelligence gathering, criminal prosecution, covert action, and kinetic operations.

When the government at any level responds to individuals who pose a threat, there is obvious AI value in combining the subject’s demographic and behavioral information with data that chart the nature, implementation, and aftermath of these interventions. Data scientists exploit the availability of this information in social services data to develop analytics that not only predict risk but also recommend interventions to neutralize the specific risk identified.[27] For example, the predictive analytic developed to improve child protective service investigations in Broward County revealed not only that certain low-risk cases were systematically over-referred to local agencies and the juvenile courts, but also that giving families “too much”—i.e., delivering social services that were unnecessarily intensive—was more harmful than not serving the family at all.[28]

There is no parallel movement within the national security community to consolidate and leverage data on efforts to disrupt national security threats. IC elements have limited visibility into the strategies or operations used by other agencies to combat threat actors of general concern,[29] and even within agencies there are no uniform standards for documenting an intervention or rating its effectiveness. IC elements may occasionally synchronize intelligence gathering with separate operational activity to reveal how a target was impacted by an intervention, or to capture any subsequent adjustment of an adversary’s strategies, behaviors, and tradecraft. But these data samples are too small to support analytics that could recommend specific national security interventions for newly identified threats.

IV.  Applying Robust Analytics to National Security Problems

IC elements have invested substantial resources in AI, and these initiatives have yielded some impressive results. The preceding discussion does not imply otherwise. But in the current threat landscape, AI will ultimately be measured by its ability (1) to predict whether newly encountered individuals pose a threat to our national security; (2) to reveal with precision the geographic, demographic, political, and socio-economic conditions likely to foment threats against the United States; and (3) to prescribe the specific strategies and interventions best suited to disrupt identified threats. Data scientists have proven that contextualized data enable machine-learning algorithms to supply these answers in the social policy realm. The question is whether these successes translate to national security challenges.

Obviously, the government cannot collect contextualized foreign intelligence through the overt and transparent intake procedures that social service organizations use to gather information. However, two creative strategies could posture the national security community to achieve the same AI breakthroughs that data scientists have delivered for social welfare agencies.

A.  Leveraging Domestic Administrative Data to Decode Foreign Threat Activity

Algorithms mining administrative datasets have correlated demographic and behavioral data with event outcomes to produce non-intuitive insights. Significantly, the same contextualized data compiled by domestic social service agencies could revolutionize the study of foreign threat actors. The foreign activities we characterize as national security threats—e.g., terrorism, nefarious cyber activity, weapons proliferation, and transnational organized crime—are manifestations of the same underlying conditions and behaviors that engender adverse social and criminal justice outcomes in the United States.[30] By developing a more sophisticated understanding of the common attributes that link foreign threat activity and negative domestic outcomes, social scientists and psychologists could leverage the troves of social welfare and law enforcement data maintained by federal, state, and local governments to develop robust analytics that predict national security threats. Instead of relying exclusively on foreign intelligence leads to locate specific overseas actors who threaten America’s security, algorithms trained and refined by domestic social welfare data could reason inductively to identify the behaviors, locations, conditions, and circumstances overseas that are material to our national security. Mining these results could generate novel leads that improve the exploitation of existing intelligence collection, broadening the perspective of IC analysts evaluating national security risk in the same way that the Allegheny Family Screening Tool improved the assessments of CYF call screeners.

Predictive analytics that apply lessons learned from social policy data to inform foreign intelligence activities pose obvious risks to privacy and civil liberties. But these concerns can be addressed through transparent processes and careful regulation. Since the AI value of social policy data is divorced from the identities of the people indexed in social service databases, the requirements governing this program would compel the government to strip all personally identifiable information from administrative records used to develop algorithms for national security purposes. The regulations would further prohibit federal government personnel from viewing, querying, or otherwise accessing these data pools. Only machine tools vetted and ratified by a cleared oversight panel representing government, private sector, and civil liberties interests would be approved to process the data. The panel would also sample and audit results transmitted by the machine tools to ensure that personal information is appropriately protected. Finally, the panel would advance key privacy and civil liberties objectives by working collaboratively with national security officials to assess whether use of these analytics expands or narrows the population of U.S. persons and other individuals targeted as subjects of interest.
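Under the article's premise that the analytic value of these records is divorced from identity, the de-identification requirement could be implemented along these lines. This is a minimal sketch with invented field names, not a complete privacy regime; true de-identification would also have to address re-identification through quasi-identifiers.

```python
# Minimal sketch with invented field names. Direct identifiers are dropped
# and the record is re-keyed to a salted one-way hash, so records about the
# same subject remain linkable without exposing who the subject is.
import hashlib

PII_FIELDS = {"name", "ssn", "address", "date_of_birth"}

def deidentify(record, salt):
    """Strip direct identifiers; key the record to a stable pseudonym."""
    cleaned = {k: v for k, v in record.items() if k not in PII_FIELDS}
    token = hashlib.sha256((salt + record["ssn"]).encode()).hexdigest()[:16]
    cleaned["subject_token"] = token   # not reversible without the salt
    return cleaned

raw = {"name": "Jane Doe", "ssn": "123-45-6789", "address": "1 Main St",
       "date_of_birth": "1990-01-01", "prior_reports": 3,
       "services": ["counseling"]}

safe = deidentify(raw, salt="agency-held-secret")
print(sorted(safe))   # behavioral fields plus the pseudonymous token
```

Because the salt stays with the custodial agency, the same subject hashes to the same token across records (preserving longitudinal linkage) while model developers never see a name or number they could look up.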

B.  Promoting AI as a New Organizing Concept for Foreign Intelligence Collection

Traditionally, the IC has structured intelligence activity around frameworks that optimize human data processing. Its collection apparatus is partitioned into areas of specialization demarcated by geographic region (e.g., South America, Africa), threat category (e.g., counterterrorism, cyber), specific target, method of collection, etc. This modular infrastructure may contribute certain efficiencies and promote subject matter expertise among human analysts, but it impedes the IC from consolidating data across topics and modalities into data pools that emulate the rich administrative datasets supporting predictive analytics in the social services realm.

These pools of contextualized social services data are instructive to the IC because they model the conditions that would prime AI for success in the national security domain. To that end, the examples outlined above suggest that the IC has the resources and capabilities to generate datasets capable of supporting effective predictive analytics. The missing ingredient is a commitment by national security officials to establish AI as a new center of gravity in the IC that can orient data integration efforts, tasking requirements, and future intelligence gathering around a core mission of creating datasets primarily designed to support machine analytics.

Data scientists could lay the groundwork for these data pools in three phases. First, the IC should begin consolidating existing records on national security threat actors into a single repository, prioritizing records with numerous data points and rich narrative detail. National Security Presidential Memorandum 7 (NSPM 7) already directs federal agencies to develop technical architecture “to advance the integration, sharing, and use of identity attributes and associated derogatory information” for individuals falling into five enumerated threat categories.[31] Though NSPM 7 contemplates the creation of separate databases for each threat category, national security officials should direct the executive agents for each database to harmonize the structure, formatting, and functionality of all NSPM 7 architecture. This will ensure that identity records initially archived in threat-specific databases can flow downstream into data pools that merge all-source and all-threat intelligence to support predictive analytics.

Second, the IC should direct agencies to coordinate responses to common intelligence taskings with an eye toward corroborating and enhancing identity information gathered by other agencies. Horizontally integrated social services records could provide a model for these efforts. By reverse-engineering the anatomy of these records, data scientists can establish common standards for identity records that agencies can build cooperatively for use in IC data pools. Where, for example, the IC has acquired extensive human intelligence (HUMINT) on a group of priority threat actors, respective agency partners should be prompted to determine whether signals intelligence (SIGINT), geospatial intelligence (GEOINT), or open-source intelligence (OSINT) capabilities could augment that knowledge to enrich identity records until they meet the threshold for inclusion in the foreign intelligence data pool.[32]
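The enrichment step described above can be sketched as a record-merging routine. The merge logic, field names, and inclusion threshold below are all invented for illustration and reflect no actual IC system; the point is that partial records from different collection disciplines combine into one identity record that must reach a minimum richness before entering the data pool.

```python
# Invented example: partial identity records from different collection
# disciplines are merged on a shared subject key, with an illustrative
# richness threshold for inclusion in a data pool.
def merge_identity(records):
    """Fold partial records into one; the first source to report a field wins."""
    merged = {}
    for rec in records:
        for field, value in rec.items():
            merged.setdefault(field, value)
    return merged

POOL_THRESHOLD = 4   # minimum distinct attributes for inclusion (illustrative)

humint = {"subject_id": "T-100", "affiliation": "group-x"}
sigint = {"subject_id": "T-100", "phone": "+00-555-0100"}
osint  = {"subject_id": "T-100", "handle": "@example"}

record = merge_identity([humint, sigint, osint])
eligible = len(record) >= POOL_THRESHOLD
print(record, eligible)
```

No single discipline's record clears the threshold alone; only the merged record does, which is the cooperative dynamic the common standards would be designed to prompt.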

Third, the IC should consider pursuing new sources of information that may have limited utility to a human analyst, but unique value to a machine algorithm. As just one example, foreign countries with advanced social welfare systems (and threat actors within their borders) possess vast stores of social security and social welfare data that algorithms have successfully mined to advance other AI initiatives.[33] The IC could leverage relationships with foreign partners to either gain direct access to this data under appropriate conditions, or launch joint ventures with foreign partners to develop advanced analytics supported by a multi-national social welfare dataset.

V.  Case Study: Developing an Analytic to Predict Extremist Acts

In October 2016, the President updated the nation’s strategic implementation plan to address violent extremism in the United States, observing:

In many ways, the threat of violent extremism today is more challenging than ever before. Violent extremists have demonstrated an ability to entice people to travel great distances, to direct attacks, and to inspire others to act from afar. They have utilized the Internet and other technologies, specifically social media platforms, as a means to reach a greater number of people in more places, tailor messages to appeal to different audiences, and reach out to potential recruits individually.[34]

Unlike other national security threats, the government cannot combat the domestic influence of violent extremists by strengthening America’s borders or by targeting hostile actors and networks overseas. Lacking an obvious countermeasure, the government has witnessed homegrown violent extremism proliferate as individuals radicalized in the United States conducted lethal, ideologically motivated attacks to promote the ideologies and agendas of foreign and domestic terrorist groups. Just within the last twenty months:

  • On December 6, 2019, Mohammed Saeed Alshamrani, an aviation student from Saudi Arabia, killed 3 people and injured 8 others in an attack at Naval Air Station Pensacola in Pensacola, Florida.[35] The Department of Justice concluded that the attack was an act of terrorism motivated by jihadist ideology.[36]
  • On August 3, 2019, Patrick Crusius killed 22 people and injured 26 others in a mass shooting at a Walmart store in El Paso, Texas.[37] In a manifesto published prior to the attack, Crusius stated that the attack was “a response to the Hispanic invasion of Texas.”[38]
  • On October 27, 2018, Robert Bowers—an anti-Semite and proponent of white nationalism—killed 11 people and injured 6 others at the Tree of Life Synagogue in Pittsburgh, Pennsylvania.[39]

These acts of violent extremism bedevil national security and law enforcement officials because they are notoriously hard to predict. According to the National Consortium for the Study of Terrorism and Responses to Terrorism: “There are no known pathways, definite set of risk factors, or reliable predictors that would indicate who is likely to commit violent acts driven by extremism.”[40] The diffuse nature of this threat is underscored in the President’s first national strategy for preventing violent extremism: “Individuals from a broad array of communities and walks of life in the United States have been radicalized to support or commit acts of ideologically-inspired violence.”[41] “[They] come from different socioeconomic backgrounds, ethnic and religious communities, and areas of the country, making it difficult to predict where violent extremist narratives will resonate.”[42]

Demystifying the recent outbreak of domestic terrorism and violent extremism—and discerning the antecedents that telegraph future attacks—is the quintessential policy predicament that could be unlocked by an AI solution. These extremist acts appear to be a function of three factors: (1) individual attributes and predispositions; (2) community conditions; and (3) the influence of extremist ideologues. All three dynamics are expressed and captured in data currently compiled by government agencies or available in open-source collection. But analysts have struggled to uncover patterns in this data that explain the aforementioned attacks, highlight existing high-risk conditions, and recommend services and interventions to neutralize these risks.

By merging relevant information into the type of contextualized data pools described above, the IC could give machine-learning algorithms the opportunity to identify patterns in extremist behavior that elude human observation. Ideally these data pools would be populated by information from local, state, federal, and non-government sources. State and local administrative data from social service agencies could contribute person-level information reflecting biographic, behavioral, and criminal justice indicators, as well as a wealth of community-level demographic information. IC elements would enrich these data pools with intelligence information on known extremist threat actors, prioritizing inputs that provide visibility into domestic infrastructure, messaging tradecraft, Internet activity, and pattern-of-life. Government open-source analysts, in collaboration with private partners, could supplement these databases with social media data that researchers have previously aggregated and mined to conduct detailed impact analyses of social media accounts promoting violent extremism.[43] Establishing comprehensive data coverage across a broad universe of individuals and communities exposed to extremist influences may enable machine-learning algorithms to engineer analytics that can accurately predict when extremist expression will tip into lethal and destructive attacks.

This collaboration would require national security and social policy professionals to engage and trust one another—a dynamic conspicuously absent from government policymaking in recent decades. The COVID-19 pandemic underscores the importance of integrated, whole-of-government strategies to combat existential threats, an approach that will serve the nation well when it can renew in earnest the campaign to eradicate domestic terrorism and violent extremism.


VI.  Conclusion
The national security community tends to regard itself as a favored son among government and commercial sectors—privileged by abundant resources, influence with policymakers, broad access to data, and top-to-bottom talent in its workforce. Buoyed by this sense of exceptionalism, national security officials have been slow to recognize that intelligence agencies and defense elements are uniquely disadvantaged in the race to evolve core capabilities through AI innovation. Numerous public and commercial entities have enjoyed a more natural transition to advanced analytics, and have discovered that machines can process the same data that humans review to automate manual functions, anticipate problems before they materialize, and recommend novel solutions to stubborn challenges.

Security agencies have fallen behind this curve. Fragmented intelligence—acquired through covert means and subject to strict collection and retention rules—is an inherently poor fit for AI. The result is a low ceiling for even the most advanced machines and algorithms attempting to spin data straw into rich and revealing patterns.

These realities demand new intelligence programs customized for machines. As a government we should reassess how traditional intelligence collection is resourced and prioritized at a time when the most valuable national security data for machines increasingly resides in open sources and mediums. As a society we should reexamine the questionable assumption—codified in current rules—that privacy interests are impacted equally by human and machine review of the same information. Advanced analytics cannot be optimized in data environments constrained by collection and processing rules designed for human analysts. To strike the appropriate balance between civil liberties and national security we should explore regulatory frameworks that advance a more contemporary notion of privacy by prescribing unique and segregable roles for humans and machines in modern information domains.

Recommended Citation
Jonathan Fischbach, A New AI Strategy to Combat Domestic Terrorism and Violent Extremism, Harv. Nat’l Sec. J. Online (May 6, 2020),

[*] Attorney, U.S. Department of Defense. J.D., Cornell Law School, 2002; B.A., Princeton University, 1998. The positions expressed in this article do not necessarily reflect the views of any government agency or of the United States. This article was a winner of the Galileo Award in 2018, an annual competition sponsored by the Office of the Director of National Intelligence.

[1] See Bernard Marr, 27 Incredible Examples of AI and Machine Learning in Practice, Forbes (Apr. 30, 2018, 12:28 AM).

[2] Paula Allen-Meares & Bruce A. Lane, Social Work Practice: Integrating Qualitative and Quantitative Data Collection Techniques, 35 Soc. Work 452, 452–54 (1990).

[3] The story recounted in this Part is adapted from Dan Hurley, Can an Algorithm Tell When Kids Are in Danger?, N.Y. Times Mag. (Jan. 2, 2018).

[4] Kathleen Hickey, Saving Children, One Algorithm at a Time, GCN (July 26, 2016).

[5] Id.

[6] See Philip Gillingham, Predictive Risk Modelling to Prevent Child Maltreatment and Other Adverse Outcomes for Services Users: Inside the ‘Black Box’ of Machine Learning, 46 Brit. J. Soc. Work 1044, 1049 (2016).

[7] Carol Coohey et al., Actuarial Risk Assessment in Child Protective Services: Construction Methodology and Performance Criteria, 35 Child. & Youth Servs. Rev. 151, 155 (2013).

[8] See id. at 160.

[9] See Ira M. Schwartz et al., Predictive and Prescriptive Analytics, Machine Learning and Child Welfare Risk Assessment: The Broward County Experience, 81 Child. & Youth Servs. Rev. 309, 317 (2017).

[10] Id. at 318–19.

[11] See Jon Kleinberg et al., A Guide to Solving Social Problems with Machine Learning, Harv. Bus. Rev. (Dec. 8, 2016).

[12] Id. In a similar study, researchers used machine learning algorithms to forecast the likelihood that a defendant arraigned on domestic violence charges would commit additional acts of domestic violence if released on bail. See Richard A. Berk et al., Forecasting Domestic Violence: A Machine Learning Approach to Help Inform Arraignment Decisions, 13 J. Empirical Legal Stud. 94, 94–95 (2016). The authors predict that use of the algorithm could cut the reoffending rate nearly in half over 24 months in the jurisdiction studied—that is, eliminate more than 2,000 post-arraignment arrests. See id. at 94.

[13] Ian Pan et al., Machine Learning for Social Services: A Study of Prenatal Case Management in Illinois, 107 Am. J. Pub. Health 938, 938 (2017).

[14] Id. at 939.

[15] Id. at 941–42.

[16] Id.

[17] Christopher Teixeira & Matthew Boyas, MITRE Corp., Predictive Analytics in Child Welfare: An Assessment of Current Efforts, Challenges and Opportunities 12 (2017).

[18] See Amir Gandomi & Murtaza Haider, Beyond the Hype: Big Data Concepts, Methods and Analytics, 35 Int’l J. Info. Mgmt. 137, 138–40 (2015).

[19] See Hurley, supra note 3.

[20] See Schwartz et al., supra note 9, at 312–15.

[21] Id. at 319.

[22] See Pan et al., supra note 13, at 939–42.

[23] See Hurley, supra note 3 (“What the screeners have is a lot of data, . . . but it’s quite difficult to navigate and know which factors are most important. . . . [T]he human brain is not deft at harnessing and making sense of all that data.”).

[24] See Marie-Helen Maras, Overcoming the Intelligence-Sharing Paradox: Improving Information Sharing Through Change in Organizational Culture, 36 Comp. Strategy 187, 190 (2017) (“A general culture of secrecy exists in the intelligence community. . . . The perceived risk of inadvertent disclosure of information serves as a barrier to better cooperation and sharing of intelligence.”); Nat’l Research Council, Intelligence Analysis for Tomorrow: Advances from the Behavioral and Social Sciences 8 (2011) (“Broadly speaking, the nation’s confederated intelligence system has produced specialization at the expense of integration and collaboration. The IC’s inability to function as a unified team has been the subject of more than 40 major studies since the CIA’s establishment in 1947.” (citation omitted)); see also Richard A. Best Jr., Cong. Research Serv., R41848, Intelligence Information: Need-to-Know vs. Need-to-Share (2011) (documenting challenges to post-9/11 efforts to establish common repositories of information that merge key intelligence produced by all IC elements).

[25] See Kleinberg et al., supra note 11.

[26] See Teixeira & Boyas, supra note 17, at 13.

[27] See Sun-Woo Choo, Predictive Analytics, Recommender Systems, and Machine Learning: The Power of Data for Child Welfare, Abt Associates (May 14, 2018) (“[Machine learning] can be used in child welfare in two ways: to make predictions (e.g., about the scale of a child’s endangerment), or to make recommendations (e.g., what services and referrals can be provided to parents to maximize child well-being).”).

[28] See Schwartz et al., supra note 9, at 318.

[29] See sources cited supra note 24.

[30] See, e.g., N. Veerasamy, A High-Level Conceptual Framework of Cyber-Terrorism, 8 J. Info. Warfare, no. 1, 2009, at 43, 46 (“[T]he distinction between cyber terror and cyber crime . . . does not lie in the mechanics of the event, but rather in the intent that drove the person’s actions.” (citation omitted)).

[31] Memorandum on Integration, Sharing, and Use of National Security Threat Actor Information to Protect Americans, 2017 Daily Comp. Pres. Doc. 722, at 2 (Oct. 4, 2017), [].

[32] For a brief introduction to these four intelligence collection disciplines, see What Is Intelligence?, Off. of the Director of Nat’l Intelligence, [].

[33] See, e.g., Longbing Cao, Social Security and Social Welfare Data Mining: An Overview, 42 IEEE Transactions on Sys., Man & Cybernetics—Part C 837, 837 (2012).

[34] Exec. Office of the President of the U.S., Strategic Implementation Plan For Empowering Local Partners to Prevent Violent Extremism in the United States 1 (2016), [].

[35] William P. Barr, Attorney Gen., U.S. Dep’t of Justice, Announcement of the Findings of the Criminal Investigation into the December 2019 Shooting at Pensacola Naval Air Station (Jan. 13, 2020), [].

[36] Id.

[37] Vanessa Romo, El Paso Walmart Shooting Suspect Pleads Not Guilty, NPR (Oct. 10, 2019, 4:31 PM), [].

[38] Nicholas Bogel-Burroughs, ‘I’m the Shooter’: El Paso Suspect Confessed to Targeting Mexicans, Police Say, N.Y. Times (Aug. 9, 2019), [].

[39] Campbell Robertson et al., 11 Killed in Synagogue Massacre; Suspect Charged with 29 Counts, N.Y. Times (Oct. 27, 2018), [].

[40] Nat’l Consortium for the Study of Terrorism & Responses to Terrorism, Supporting a Multidisciplinary Approach to Addressing Violent Extremism: What Role Can Education Professionals Play? 1 (2015), [].

[41] President of the U.S., Empowering Local Partners to Prevent Violent Extremism in the United States 2 (2011), [].

[42] Id. at 1.

[43] See Tamar Mitts, Countering Violent Extremism: Do Community Engagement Efforts Reduce Extremist Rhetoric on Social Media? 11–13 (Apr. 6, 2017) (unpublished manuscript), [] (describing use of Twitter data on Islamic State supporters in the United States to measure the online expression of extremist ideology).


COVID-19 Outbreak Is The Trojan Horse To Increase Smartphone Surveillance

This article was originally published by Aaron Kesel at Activist Post. 

The coronavirus outbreak is proving to be the Trojan horse that justifies increased digital surveillance via our smartphones.

All over the world, starting with China – the suspected origin of the COVID-19 outbreak – governments are increasing surveillance of citizens through their smartphones. The trend is spreading like wildfire; in China, citizens now need a smartphone application’s permission to travel around the country and internationally.


The applications are AliPay, by Ant Financial, the finance affiliate controlled by Alibaba Group Holding Ltd. co-founder Jack Ma, and Tencent Holdings Ltd.’s WeChat. Citizens now require a green health code to travel, Yahoo News reported.

China isn’t the only country looking to smartphones to monitor its citizens; Israel and Poland have also implemented their own programs to monitor those suspected or confirmed to be infected with the COVID-19 virus. Israel has gone the more extreme route, giving itself the authority to surveil any citizen without a court warrant. Poland, on the other hand, requires those diagnosed with COVID-19 and ordered to self-isolate to send authorities a selfie using an app. If Poles don’t respond within 20 minutes, they risk a visit from the police, the Daily Mail reported.

Singapore has asked citizens to download an app that uses Bluetooth to track whether they’ve been near anyone diagnosed with the virus; and Taiwan, though not using a smartphone app, has introduced “electronic fences” that alert police if suspected patients leave their homes.
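The mechanics of an “electronic fence” are simple: compare each location ping against the quarantine address and raise an alert when the phone strays too far. The sketch below is an illustrative assumption of how such a check could work – the real Taiwanese system reportedly uses cell-tower signals, not GPS, and its thresholds are not public.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in meters."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def breaches_fence(home, ping, radius_m=100):
    """Return True if a phone ping falls outside the allowed
    radius around the quarantine address (radius is a made-up
    default; actual enforcement distances are unknown)."""
    return haversine_m(home[0], home[1], ping[0], ping[1]) > radius_m
```

A ping a few kilometers from home would trip the fence; one at the quarantine address would not.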

Meanwhile, here in the U.S. as reported by the Washington Post, smartphones are being used by a variety of companies to “anonymously” collect user data and track if social distancing orders are being adhered to. Beyond that, the mobile phone industry is discussing how to monitor the spread of COVID-19. If that’s not enough, as this author reported for The Mind Unleashed, the government wants to work with big social tech giants like Google, Facebook, and others, to track the spread of COVID-19.

A new live index from Top10VPN, a digital rights group, shows the growth of this surveillance push. Top10VPN lists a total of 15 countries that have already started measures to track the phones of coronavirus patients, ranging from anonymized, aggregated data used to monitor the movement of people generally, to the tracking of individual suspected patients and their contacts, known as “contact tracing.”

That’s not the only live index. Unacast, a company that collects and analyzes phone GPS location data, has launched one as well: a “Social Distancing Scoreboard” that grades counties, one by one, on how well their residents are following social distancing rules.
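Unacast has described its grades as being based on the change in average distance traveled versus a pre-pandemic baseline. The snippet below is a minimal sketch of that idea; the grade thresholds are illustrative assumptions, not Unacast’s actual methodology.

```python
def distancing_grade(baseline_km, current_km):
    """Grade a county by the reduction in average daily travel
    distance relative to a pre-pandemic baseline. Thresholds are
    hypothetical; the real scoreboard's cutoffs differ."""
    change = (current_km - baseline_km) / baseline_km  # negative = reduction
    if change <= -0.40:
        return "A"  # travel cut by 40% or more
    if change <= -0.25:
        return "B"
    if change <= -0.10:
        return "C"
    return "F"      # little or no reduction in movement
```

A county whose residents halved their daily travel would score an “A” under these assumed cutoffs; one with almost no change would get an “F.”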

As Activist Post previously wrote while discussing the increase of a police surveillance state, these measures being put into place now will likely remain long after the pandemic has stopped and the virus has run its course. That’s the everlasting effect that COVID-19 will have on our society.  The coronavirus is now classified as a pandemic by the World Health Organization, and it may very well be a legitimate health concern for all of us around the world. But it’s the government’s response that should worry us all more in the long run.

At the time of this report, the COVID-19 virus has infected 458,927 people and killed 20,807, while 113,687 have recovered, according to the Johns Hopkins map.





Edward Snowden’s Warning: Surveillance Measures Will Outlast The Pandemic

Edward Snowden has a warning for those who are giving up liberty for a false sense of security: the temporary mass surveillance measures put in place will be anything but temporary. Snowden says that these measures are not worth giving up even more liberty.

Nothing is so permanent as a temporary government program, and infamous whistleblower Edward Snowden is sounding the alarms about the Orwellian mass surveillance that will long outlast this coronavirus pandemic.

The former CIA contractor, whose leaks exposed the scale of spying programs in the United States, is warning that once this tech is taken out of the box, it will be hard to put it back. “When we see emergency measures passed, particularly today, they tend to be sticky,” Snowden said in an interview with the Copenhagen International Documentary Film Festival.

The emergency tends to be expanded. Then the authorities become comfortable with some new power. They start to like it.

Power corrupts. It always has and always will.  But people have become comfortable with government officials ruling over them, stealing their money, and telling them what to do.  We’ve been transformed into the government’s slaves, and Snowden says it’s only going to get worse.

Security services will soon find new uses for the tech. And when the crisis passes, governments can impose new laws that make the emergency rules permanent and exploit them to crack down on dissent and political opposition.

Take the proposals to monitor the outbreak by tracking mobile phone location data.

This could prove a powerful method of tracing the spread of the virus and the movements of people who have it. But it will also be a tempting tool to track terrorists — or any other potential enemies of the state. -The Next Web

This virus has so far been an excuse by the ruling class to institute permanent tyranny. As I’ve stated before, I’m far less afraid of this virus than I am of the government’s response to it. Snowden is as well.  He’s especially concerned about security services adding artificial intelligence to all the other surveillance tech they have. “They already know what you’re looking at on the internet,” he said. “They already know where your phone is moving. Now they know what your heart rate is, what your pulse is. What happens when they start to mix these and apply artificial intelligence to it?”

There is no disputing the severity of this pandemic, however, surrendering basic human rights and dignity to the government is essentially giving up and allowing yourself to be enslaved for the remainder of your life.


Spiro Interview: How The Coronavirus Is Being Used To Control You And Remove Your Rights

This article was originally published by John W. Whitehead at The Rutherford Institute. 

“If, as it seems, we are in the process of becoming a totalitarian society in which the state apparatus is all-powerful, the ethics most important for the survival of the true, free, human individual would be: cheat, lie, evade, fake it, be elsewhere, forge documents, build improved electronic gadgets in your garage that’ll outwit the gadgets used by the authorities.”—Philip K. Dick

Emboldened by the citizenry’s inattention and willingness to tolerate its abuses, the government has weaponized one national crisis after another in order to expand its powers.

The war on terror, the war on drugs, the war on illegal immigration, asset forfeiture schemes, road safety schemes, school safety schemes, eminent domain: all of these programs started out as legitimate responses to pressing concerns and have since become weapons of compliance and control in the police state’s hands.

It doesn’t even matter what the nature of the crisis might be—civil unrest, national emergencies, “unforeseen economic collapse, loss of functioning political and legal order, purposeful domestic resistance or insurgency, pervasive public health emergencies, and catastrophic natural and human disasters”—as long as it allows the government to justify all manner of tyranny in the so-called name of national security.

Now we find ourselves on the brink of a possible coronavirus contagion.

I’ll leave the media and the medical community to speculate about the impact the coronavirus will have on the nation’s health, but how will the government’s War on the Coronavirus impact our freedoms?

For a hint of what’s in store, you can look to China—our role model for all things dystopian—where the contagion started.

In an attempt to fight the epidemic, the government has given its surveillance state apparatus—which boasts the most expansive and sophisticated surveillance system in the world—free rein. Thermal scanners using artificial intelligence (AI) have been installed at train stations in major cities to assess body temperatures and identify anyone with a fever. Facial recognition cameras and cell phone carriers track people’s movements constantly, reporting in real-time to data centers that can be accessed by government agents and employers alike. And coded color alerts (red, yellow and green) sort people into health categories that correspond to the amount of freedom of movement they’re allowed: “Green code, travel freely. Red or yellow, report immediately.”
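The article’s description of the coded color alerts maps naturally to a simple classification rule. The actual AliPay/WeChat scoring logic has never been published, so the rules below are purely illustrative assumptions about how a traffic-light health code of this kind might be assigned.

```python
def health_code(has_fever, visited_hotspot, contact_with_case):
    """Toy traffic-light health code. The real system's inputs
    and rules are opaque; these are hypothetical stand-ins."""
    if has_fever or contact_with_case:
        return "red"     # report immediately; no travel
    if visited_hotspot:
        return "yellow"  # restricted movement
    return "green"       # travel freely
```

The opacity is the point of the later anecdote about the woman stuck with a red code: when the rules are hidden, there is no way to contest the output.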

Mind you, prior to the coronavirus outbreak, the Chinese surveillance state had already been hard at work tracking its citizens through the use of some 200 million security cameras installed nationwide. Equipped with facial recognition technology, the cameras allow authorities to track so-called criminal acts, such as jaywalking, which factor into a person’s social credit score.

Social credit scores assigned to Chinese individuals and businesses categorize them on whether or not they are “good” citizens. A real-name system—which requires people to use government-issued ID cards to buy mobile SIMs, obtain social media accounts, take a train, board a plane, or even buy groceries—coupled with social credit scores ensures that those blacklisted as “unworthy” are banned from accessing financial markets, buying real estate, or traveling by air or train. Among the activities that can get you labeled unworthy are taking reserved seats on trains or causing trouble in hospitals.

That same social credit score technology used to identify, track and segregate citizens is now one of China’s chief weapons in its fight to contain the coronavirus from spreading. However, it is far from infallible and a prime example of the difficulties involved in navigating an autonomous system where disembodied AI systems call the shots. For instance, one woman, who has no symptoms of the virus but was assigned a red code based on a visit to her hometown, has been blocked from returning to her home and job until her color code changes. She has been stuck in this state of limbo for weeks with no means of challenging the color code or knowing exactly why she’s been assigned a red code.

Fighting the coronavirus epidemic has given China the perfect excuse for unleashing the full force of its surveillance and data collection powers. The problem, as Eamon Barrett acknowledges in Fortune magazine, is what happens after: “Once the outbreak is controlled, it’s unclear whether the government will retract its new powers.”

The lesson for the ages: once any government is allowed to expand its powers, it’s almost impossible to pull back.

Meanwhile, here in the U.S., the government thus far has limited its coronavirus preparations to missives advising the public to stay calm, wash their hands, and cover their mouths when they cough and sneeze.

Don’t go underestimating the government’s ability to lock the nation down if the coronavirus turns into a pandemic, however. After all, the government has been planning and preparing for such a crisis for years now.

The building blocks are already in place for such an eventuality: the surveillance networks, fusion centers and government contractors that already share information in real-time; the government’s massive biometric databases that can identify individuals based on genetic and biological markers; the militarized police, working in conjunction with federal agencies, ready and able to coordinate with the federal government when it’s time to round up the targeted individuals; the courts that will sanction the government’s methods, no matter how unlawful, as long as it’s done in the name of national security; and the detention facilities, whether private prisons or FEMA internment camps, that have been built and are waiting to be filled.

All of this may sound far-fetched to you now, but we’ve already arrived at the dystopian futures prophesied by George Orwell’s 1984, Aldous Huxley’s Brave New World, and Philip K. Dick’s Minority Report.

It won’t take much more to push us over the edge into Neill Blomkamp’s Elysium, in which the majority of humanity is relegated to an overpopulated, diseased, warring planet where the government employs technologies such as drones, tasers and biometric scanners to track, target and control the populace.

Mind you, while these technologies are already in use today and being hailed for their potentially life-saving, cost-saving, time-saving benefits, it won’t be long before the drawbacks of having a government equipped with technology that makes it all-seeing, all-knowing, and all-powerful—helped along by the citizenry—far outdistance the benefits.

On a daily basis, Americans are relinquishing (in many cases, voluntarily) the most intimate details of who we are—our biological makeup, our genetic blueprints, and our biometrics (facial characteristics and structure, fingerprints, iris scans, etc.)—in order to navigate an increasingly technologically-enabled world.

Consider all the ways you continue to be tracked, hunted, hounded, and stalked by the government and its dubious agents:

By tapping into your phone lines and cell phone communications, the government knows what you say. By uploading all of your emails, opening your mail, and reading your Facebook posts and text messages, the government knows what you write. By monitoring your movements with the use of license plate readers, surveillance cameras and other tracking devices, the government knows where you go. By churning through all of the detritus of your life—what you read, where you go, what you say—the government can predict what you will do.

By mapping the synapses in your brain, scientists—and in turn, the government—will soon know what you remember. By mapping your biometrics—your “face-print”—and storing the information in a massive, shared government database available to bureaucratic agencies, police and the military, the government’s goal is to use facial-recognition software to identify you (and every other person in the country) and track your movements, wherever you go. And by accessing your DNA, the government will soon know everything else about you that they don’t already know: your family chart, your ancestry, what you look like, your health history, your inclination to follow orders or chart your own course, etc.

Of course, none of these technologies are foolproof.

Nor are they immune from tampering, hacking or user bias.

Nevertheless, they have become a convenient tool in the hands of government agents to render null and void the Constitution’s requirements of privacy and its prohibitions against unreasonable searches and seizures.

The ramifications of a government—any government—having this much unregulated, unaccountable power to target, track, round up and detain its citizens is beyond chilling.

Imagine what a totalitarian regime such as Nazi Germany could have done with this kind of unadulterated power.

Imagine what the next police state to follow in Germany’s footsteps will do with this kind of power. Society is rapidly moving in that direction.

We’ve made it so easy for the government to watch us.

Government eyes see your every move: what you read, how much you spend, where you go, with whom you interact, when you wake up in the morning, what you’re watching on television and reading on the internet.

Every move you make is being monitored, mined for data, crunched, and tabulated in order to form a picture of who you are, what makes you tick, and how best to control you when and if it becomes necessary to bring you in line.

Chances are, as the Washington Post has reported, you have already been assigned a color-coded threat assessment score—green, yellow or red—so police are forewarned about your potential inclination to be a troublemaker depending on whether you’ve had a career in the military, posted a comment perceived as threatening on Facebook, suffer from a particular medical condition, or know someone who knows someone who might have committed a crime.

In other words, you’re most likely already flagged in a government database somewhere.

The government has the know-how.

Indeed, for years now, the FBI and Justice Department have conspired to acquire near-limitless power and control over biometric information collected on law-abiding individuals, millions of whom have never been accused of a crime.

Going far beyond the scope of those with criminal backgrounds, the FBI’s Next Generation Identification database (NGID) is a billion-dollar boondoggle aimed at dramatically expanding the government’s ID database from a fingerprint system into a vast data storehouse of iris scans, photos searchable with face recognition technology, palm prints, and measures of gait and voice recordings, alongside records of fingerprints, scars, and tattoos.

Launched in 2008, the NGID is a massive biometric database that contains more than 100 million fingerprints and 45 million facial photos gathered from a variety of sources ranging from criminal suspects and convicts to daycare workers and visa applicants, including millions of people who have never committed or even been accused of a crime.

In other words, innocent American citizens are now automatically placed in a suspect database.

For a long time, the government was required to at least observe some basic restrictions on when, where and how it could access someone’s biometrics and DNA and use it against them.

That is no longer the case.

The information is being amassed through a variety of routine procedures, with the police leading the way as prime collectors of biometrics for something as non-threatening as a simple moving violation. The nation’s courts are also doing their part to “build” the database, requiring biometric information as a precursor to more lenient sentences. And of course, Corporate America (including Google, Facebook, Amazon, etc.) has made it so easy to use one’s biometrics to access everything from bank accounts to cell phones.

We’ve made it so easy for the government to target, identify and track us.

Add pre-crime programs into the mix with government agencies and corporations working in tandem to determine who is a potential danger and spin a sticky spider-web of threat assessments, behavioral sensing warnings, flagged “words,” and “suspicious” activity reports using automated eyes and ears, social media, behavior sensing software, and citizen spies, and you have the makings for a perfect dystopian nightmare.

This is the kind of oppressive pre-crime and pre-thought-crime package foreshadowed by George Orwell, Aldous Huxley, and Philip K. Dick.

Remember, even the most well-intentioned government law or program can be—and has been—perverted, corrupted and used to advance illegitimate purposes once profit and power are added to the equation.

In the right (or wrong) hands, benevolent plans can easily be put to malevolent purposes.

Surveillance, digital stalking and the data mining of the American people add up to a society in which there’s little room for indiscretions, imperfections, or acts of independence.

This is the creepy, calculating yet diabolical genius of the American police state: the very technology we hailed as revolutionary and liberating has become our prison, jailer, probation officer, Big Brother and Father Knows Best all rolled into one.

It turns out that we are Soylent Green.

The 1973 film of the same name, starring Charlton Heston and Edward G. Robinson, is set in 2022 in an overpopulated, polluted, starving New York City whose inhabitants depend on synthetic foods manufactured by the Soylent Corporation for survival.

Heston plays a policeman investigating a murder, who discovers the grisly truth about the primary ingredient in the wafer, soylent green, which is the principal source of nourishment for a starved population. “It’s people. Soylent Green is made out of people,” declares Heston’s character. “They’re making our food out of people. Next thing they’ll be breeding us like cattle for food.”

Oh, how right he was.

Soylent Green is indeed people or, in our case, Soylent Green is our own personal data, repossessed, repackaged and used by corporations and the government to entrap us.

Without constitutional protections in place to guard against encroachments on our rights when power, technology, and militaristic governance converge, it won’t be long before we find ourselves, much like Edward G. Robinson’s character in Soylent Green, looking back on the past with longing, back to an age where we could speak to whom we wanted, buy what we wanted, think what we wanted, and go where we wanted without those thoughts, words, and movements being tracked, processed and stored by corporate giants such as Google, sold to government agencies such as the NSA and CIA, and used against us by militarized police with their army of futuristic technologies.

We’re not quite there yet. But that moment of reckoning is getting closer by the minute.

In the meantime, we’ve got an epidemic to survive, so go ahead and wash your hands. Cover your mouth when you cough or sneeze. And stock up on whatever you might need to survive this virus if it spreads to your community.

We are indeed at our most vulnerable right now, but as I make clear in my book Battlefield America: The War on the American People, it’s the American Surveillance State—not the coronavirus—that poses the greatest threat to our freedoms.

Police forces across the United States have been transformed into extensions of the military. Our towns and cities have become battlefields, and we the American people are now the enemy combatants to be spied on, tracked, frisked, and searched. For those who resist, the consequences can be a one-way trip to jail or even death. Battlefield America: The War on the American People is constitutional attorney John W. Whitehead’s terrifying portrait of a nation at war with itself. In exchange for safe schools and lower crime rates, we have opened the doors to militarized police, zero-tolerance policies in schools, and SWAT team raids. The insidious shift was so subtle that most of us had no idea it was happening. This follow-up to Whitehead’s award-winning A Government of Wolves is a brutal critique of an America on the verge of destroying the very freedoms that define it. Hands up! The police state has arrived.


A Nightmarish Army of Unblinking Spies

The surveillance state constantly expands. That thing that seems like no big deal today can suddenly become a big deal as technology evolves.

Take the proliferation of surveillance cameras. We’ve come to accept electronic eyes recording our every move like a normal part of life. Most of the time we hardly even notice the cameras. At some level, we may realize we’re being recorded, but we figure nobody will ever actually watch the footage. Even with cameras everywhere, we feel relatively safe in our anonymity.

But how would you feel if you knew somebody was monitoring every camera pointed in your direction 24/7? Scrutinizing your every move, judging your every action, noting whom you associate with, and scouring your facial expressions for signs of suspicious behavior?

We’re rapidly getting to that place.

Of course, we’re not talking about human “somebodies.” We’re talking about artificial intelligence – “AI agents” capable of scouring video footage every second of every day and flagging “suspicious” behavior.

The ACLU recently released a report on the rapidly growing use of “video analytics” as a surveillance tool. As the ACLU puts it, AI has the potential to turn everyday surveillance cameras into a “nightmarish army of unblinking watchers.”

What we found is that the capabilities that computer scientists are pursuing, if applied to surveillance and marketing, would create a world of frighteningly perceptive and insightful computer watchers monitoring our lives. Cameras that collect and store video just in case it is needed are being transformed into devices that can actively watch us, often in real-time. It is as if a great surveillance machine has been growing up around us, but largely dumb and inert — and is now, in a meaningful sense, “waking up.”

According to the report, police and government intelligence agencies have used AI to develop “anomaly detection” algorithms that can pick up on “unusual,” “abnormal,” “deviant,” or “atypical” behavior and flag such individuals for further scrutiny. As the ACLU reports, this could have far-reaching ramifications and brings with it tremendous potential for abuse.

Think about some of the implications of such techniques, especially when combined with other technologies like face recognition. For example, it’s not hard to imagine some future corrupt mayor saying to an aide, “Here’s a list of enemies of my administration. Have the cameras send us all instances of these people kissing another person, and the IDs of who they’re kissing.” Government and companies could use AI agents to track who is “suspicious” based on such things as clothing, posture, unusual characteristics or behavior, and emotions. People who stand out in some way and attract the attention of such ever-vigilant cameras could find themselves hassled, interrogated, expelled from stores, or worse.
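At its core, “anomaly detection” means flagging whatever deviates statistically from the crowd. The sketch below shows the simplest possible version—a z-score over a single behavioral measurement—purely to illustrate the idea; the video-analytics systems the ACLU describes are vastly more complex, and their actual algorithms are not public.

```python
from statistics import mean, stdev

def anomaly_scores(observations):
    """Z-score of each observation: how many standard deviations
    it sits from the population mean. A crude stand-in for the
    'anomaly detection' models described in the ACLU report."""
    mu, sigma = mean(observations), stdev(observations)
    return [abs(x - mu) / sigma for x in observations]

def flag_atypical(observations, threshold=2.0):
    """Indices of observations whose z-score exceeds the threshold
    (the 2.0 cutoff is an arbitrary, illustrative choice)."""
    return [i for i, z in enumerate(anomaly_scores(observations))
            if z > threshold]
```

Note what this makes concrete: the system has no notion of wrongdoing, only of difference. Anyone who stands out from the statistical baseline—for any reason—gets flagged.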

AI also raises concerns about accuracy. We’ve already heard about problems with facial recognition systems misidentifying people – particularly minorities. As the ACLU puts it, “Many or most of these [AI] technologies will be somewhere between unreliable and utterly bogus.”

The interconnectedness of the U.S. surveillance state magnifies danger and the threat to your privacy these systems pose. If a local camera happens to flag you, you will almost certainly end up in national databases accessible by police and government officials across the U.S. Federal, state and local law enforcement agencies can share and tap into vast amounts of information gathered at the state and local level through fusion centers and a system known as the “information sharing environment” or ISE.

George Orwell’s Big Brother would drool over the all-encompassing surveillance system quietly under construction in the United States. Cameras equipped with facial recognition technology, monitored by “AI agents,” and linked to federal, state, and local databases can track your every move just by pointing a camera at your face. It effectively turns each of us into a suspect standing in a perpetual lineup.

Police operate these camera systems with little oversight and oftentimes in complete secrecy.

Given the rapid proliferation of these systems, the potential for abuse, and the threat camera surveillance poses to basic privacy rights, state and local governments need to make oversight of—and limits on—law enforcement use of facial recognition a top priority. At the least, law enforcement agencies should be required to get local government approval in a public meeting before obtaining facial recognition technology. The TAC’s Local Ordinance to Limit Surveillance Technology covers this.