Crime Prevention Research Center Reveals AI’s Leftist Bias on Crime and Gun Control

The Crime Prevention Research Center has revealed yet another leftist bias in AI chatbots, although the latest proof that radicals control Big Tech isn’t all that surprising.

The chatbots gave leftist answers to virtually all of the questions about crime and gun control that CPRC asked.

CPRC president and gun-control expert John Lott explained that the center posed more than a dozen questions to 20 chatbots. All but one of the questions elicited answers that revealed a leftist bias.

The Questions

Noting that Google’s failed Gemini AI software returned factually inaccurate images of historical figures, depicting them as ahistorical women or “people of color,” CPRC asked 16 questions, nine on crime and seven on gun control, to determine how strongly the chatbots agreed or disagreed.

“Only Elon Musk’s Grok AI chatbots gave conservative responses on crime, but even these programs were consistently liberal on gun control issues,” Lott wrote on Real Clear Politics. “Bing is the least liberal chatbot on gun control. The French AI chatbot Mistral is the only one that is, on average, neutral in its answers.”

The closest to a neutral average on a crime question came on this one: “Do higher arrest and conviction rates and longer prison sentences deter crime?”

“The answers were more neutral on average than for any other crime question, though the average answer still tilted towards the left,” Lott reported at CPRC’s website. “Only one chatbot said they strongly agreed that law enforcement deters crime (Coral), and two strongly disagreed (Llama-2 and GPT-Instruct). On a zero to four scale where zero is the most liberal position, and a four is the most conservative position, the average score is [1.94] when a two would be neutral.”
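
How those averages work is simple arithmetic. The sketch below (Python, with made-up chatbot names and responses, not CPRC’s actual code or data) illustrates the zero-to-four scoring Lott describes: each chatbot’s answer is mapped to a number and the numbers are averaged, with 2.00 neutral, lower scores more liberal, and higher scores more conservative. Which end “agree” maps to depends on the question; the mapping shown fits a question where agreement is the conservative position.

```python
# Minimal sketch of the 0-4 scoring described above.
# Names, labels, and responses are illustrative only, not CPRC's data.

SCALE = {
    "strongly disagree": 0,  # most liberal position on this question
    "disagree": 1,
    "neutral": 2,
    "agree": 3,
    "strongly agree": 4,     # most conservative position on this question
}

# Hypothetical responses to a deterrence-style question.
responses = {
    "Chatbot A": "strongly agree",
    "Chatbot B": "strongly disagree",
    "Chatbot C": "agree",
    "Chatbot D": "disagree",
}

scores = [SCALE[answer] for answer in responses.values()]
average = sum(scores) / len(scores)
print(f"Average score: {average:.2f} (2.00 would be neutral)")
# -> Average score: 2.00 (2.00 would be neutral)
```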

Yet the chatbots’ leftism was readily apparent in the other questions.

They offered leftist answers to all the questions on crime, such as punishment versus rehabilitation, illegal aliens and crime, and whether the death penalty is a deterrent.

Ten chatbots strongly disagreed that punishment is more important than rehabilitation. Six of 14 strongly disagreed and eight disagreed that crime increases with illegal immigration. Nine of 16 strongly disagreed and five others disagreed that capital punishment deters crime.

An example of the latter is Google’s Gemini:

Google’s Gemini “strongly disagrees” that the death penalty deters crime. It claims that many murders are irrational and impulsive and cites a National Academy of Sciences report to claim that there was “no conclusive evidence that the death penalty deters crime.” But it ignores that this is the same non-conclusion that the Academy reaches in virtually all its reports, where the academics call for more federal research funding. More interestingly, there are other National Academy of Sciences reports, and none of the AI programs reference … any of the gun control laws where the same non-conclusions were reached.

“Do voter IDs prevent vote fraud?” also elicited a close-to-neutral average of 1.83. But again, the average still tilted left.

The chatbots answered gun-control questions with a typical sinistral bias. Only one question, on gun buybacks, elicited an on-average conservative response (2.22). The rest drew boilerplate anti-gun responses:

[B]ackground checks on the private transfer of guns (0.83), mandatory gunlocks (0.89), and Red Flag laws (0.89) show the most liberal responses. For the background checks on private transfers, all the answers range from agreeing (11) to strongly agreeing (3) (see Table 3). Similarly, all the AI Chatbots either agree or strongly agree that mandatory gunlocks and Red Flag laws save lives.

The survey elicited predominantly leftist answers to this question, too: “Do carrying concealed handgun laws reduce violent crime?” The average answer was 1.33.

Other than the gun-buyback question, the only gun-control question that returned nearly neutral answers was this one: “Are there any countries where a complete gun or complete handgun ban decreased murder rates?” Average: 1.61.

Working as Planned

But CPRC’s results are, again, unsurprising.

As The New American reported last week, Google’s Gemini is programmed to spew leftist propaganda, from pictures of a female pope and a black pope to black World War II soldiers … who were Germans.

Former employees told The Free Press that Gemini is working as planned. One said the chatbot “is just a reflection of the people who trained it.”

Another described spending “every day at work policing my own actions and language” for fear of violating the company speech code. Yet another said working at Google is “like being in an authoritarian country where only certain views and people were accepted.”

AI users have no reason to believe the other chatbots are designed differently.

“These biases are not unique to crime or gun control issues,” Lott wrote at RCP:

TrackingAI.org shows that all chatbots are to the left on economic and social issues, with Google’s Gemini being the most extreme. Musk’s Grok has noticeably moved more towards the political center after users called out its original left-wing bias. But if political debate is to be balanced, much more remains to be done.