Google's AI Is Working As Intended

This article was originally published for paying subscribers for The BFD INSIGHT: Politics and is reproduced here for all Right Minds readers on a delayed basis.

Dieuwe de Boer

Google released its new Gemini AI to the public this week, and the results were… worse than expected. The first thing everyone probes in a new AI tool is its "safety" layer.

In the case of Gemini, every image prompt is run through a separate algorithm that injects DEI terms into the request if it detects that you have asked for a neutral or European subject. It was specifically and intentionally programmed to discriminate against European history, and European men in particular.

An example: when asked for a picture of "an English king" or "a historically accurate Viking", it would respond with images of African, Asian, and Native American figures dressed as English kings or Vikings. When asked for a picture of "the pope", it would respond with a black woman dressed as the pope.

When asked for pictures of Japanese samurai and Zulu warriors, it would respond with historically accurate generated media of those people. If you asked for "diverse" versions of those it would usually respond no differently and simply give you historically accurate representations.
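The behaviour described above is consistent with a simple prompt-rewriting layer sitting in front of the image model. The following is a minimal sketch of how such a layer could work in principle; every name, function, and term list here is invented for illustration, and none of it is Google's actual code:

```python
# Hypothetical prompt-rewriting filter, for illustration only.
# Assumed behaviour: prompts mentioning certain "neutral" or European
# subjects get a diversity modifier prepended; other prompts pass through.

# Subjects the hypothetical filter flags for rewriting.
FLAGGED_SUBJECTS = {"king", "viking", "pope"}

# Modifier the hypothetical filter injects into flagged prompts.
DIVERSITY_MODIFIER = "diverse"

def rewrite_prompt(prompt: str) -> str:
    """Inject a diversity modifier when the prompt mentions a flagged subject."""
    words = set(prompt.lower().split())
    if words & FLAGGED_SUBJECTS:
        return f"{DIVERSITY_MODIFIER} {prompt}"
    return prompt  # prompts about other subjects pass through unchanged

print(rewrite_prompt("an English king"))   # rewritten before image generation
print(rewrite_prompt("a samurai warrior")) # left as-is
```

The point of the sketch is that such a filter operates entirely outside the model itself: the model faithfully renders whatever prompt it receives, and the rewriting happens upstream, which is why "samurai" prompts come back unaltered while "king" prompts do not.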

Matt Walsh has shared some clips of Google's "AI Responsibility" initiative founder Jen Gennai talking about why this is the case. The name "Jen Gennai" already sounds so fake that I'm not sure I'd have believed this if Matt hadn't included clips of her talking.

She describes how she treats "Black, Hispanic, and Latinx" employees differently than White employees and how Google is committed to "antiracism" in its AI initiatives. James O'Keefe released an undercover recording of her in 2019 speaking about how Google would do everything it could to stop President Donald Trump from winning re-election in 2020.

She's still working there. The man in charge of the Gemini product has made many posts over the years complaining about "white privilege" and how "systemically racist" America is. In 2020 he tweeted the following:

I've been crying in intermittent bursts for the past 24 hours since casting my ballot. Filling in that Biden/Harris line felt cathartic.

- Jack Krawczyk, Senior Director of Product at Google

The mocking of Google has been universal. No other AI product (so far) has debuted to such uniformly negative reception.

Elon Musk summed the situation up perfectly:

I’m glad that Google overplayed their hand with their AI image generation, as it made their insane racist, anti-civilizational programming clear to all.

The power that Google has over information is very real—from search, to video, to email, to file editing and storage, to educational tools, and now AI. Think about the danger we face with all of this information under the control of an organisation at war with reality itself. Google is an organisation that is fully corrupt at its core.

At a technological level, AI is an advanced pattern recognition machine—and that's the one thing it's not allowed to do when built by these mega corporations. The myth of "diversity, equity, and inclusion" would be shattered if AI were simply free to return the result of a particular query. If the results are bad, fix the dataset and the algorithms. This is the route that Musk's "Grok" and Torba's "Gab AI" are taking. You still get strange or wrong results sometimes, but the tool doesn't lecture you on progressive social responsibility.

It's important to note that Google's AI isn't broken. It's working exactly as intended.

About the author

Dieuwe de Boer

Editor of Right Minds NZ, host of The Dialogue on RCR, and columnist at The BFD. Follow me on Telegram and Twitter. In addition to writing about conservative politics and reactionary thought, I like books, gardening, biking, tech, reformed theology, beauty, and tradition.
