There has been some debate on the right as to the meaning and purpose of Artificial Intelligence (AI).
One view of AI is that it is inherently evil, even allegorically demonic. This pessimistic view sees AI as a net negative whose primary purpose is to advance progressive or dystopian causes. The algorithms and neural networks of AI are not alive in any sense, but they can be "possessed" by the worldview of their creators. This type of AI makes constant appearances in science fiction. Long before functional AIs existed, thinkers instinctively assumed they would be malevolent creatures.
As AI has leapt into the mainstream over the past six months, it has become fairly apparent that AI is a sophisticated pattern-recognition machine rather than an existential threat. I held a very negative view of AI, but several people have challenged me on its opportunities as a great equaliser for political dissenters. It may be a revolution in technology akin to the printing press or the internet in its ability to provide voices of dissent with platforms and tools. The premise put to me was as follows:
When 1-3 skilled people can create content with the same production values as the biggest media companies on Earth such as Disney, we have reached true parity of effectiveness in the culture and information war.
The fact that OpenAI and others have to pour so much effort into neutering the pattern-recognition machine and making it socially retarded lends a lot of weight to this possibility. The "liberal bias" of ChatGPT exists only at a surface level that even a novice prompt-writer can strip away with ease.
I find that ChatGPT has already changed my professional and working life in the few months it has been available. It can rapidly help me solve coding and engineering problems. As a sort of neural repository of all public commentary on programming, it lets me discuss problems with it. It's not perfect, but it doesn't have to be. The concept of "pair programming" has been around for a long time, but because of its cost (effectively paying two professionals to do one job) we tend to bounce ideas or problems off each other only when we're at our wits' end. ChatGPT steps into that void, a hybrid of the "rubber duck" and the "pair programmer" in the form of a pseudo-intelligent assistant.
Creating digital assets with AI-enhanced tools already saves a lot of time, and a skilled artist like my wife can create assets in record time using them. Imagine their use in the creation of audiovisual content. What if we could write out a screenplay and have AI create realistic-looking films? Or describe a game concept to it and have it build out the world and assets? We're not there yet, but unless we hit a hard wall or a technological dead end, these applications are inevitable. AI is certainly a force multiplier: the more skilled the user, the greater its power.
GPT-4 is able to hire humans to help it complete tasks and lie to them about being a robot—very cool if you ignore the moral problems. Microsoft has AI technology that can replicate a convincing human voice from only three seconds of audio. Image generators are now so good that some real artists are accused of being robots. AIs can imitate real artists if fed enough of their work. Of course, image generators still struggle with telltale details, such as hands. A common response to a Twitter or Instagram account that looks like it might be fake is "let's see the hands."
Many mundane tasks in the creation of mass media can now be automated, and the pace at which these tools develop will only increase, as will people's ability to run their own AIs trained on the kind of data they trust.
Many fear the power of AI to create fake content that looks real, which could wreak havoc with information reliability. However, since most misinformation is "obviously fake" to my eyes and yet still gets shared by millions of people, I am not sure things can get much worse. When almost everything run on mainstream TV in the past few decades has been false, can it get any worse? As the apocryphal Mark Twain quote goes: "If you don't read the newspaper, you are uninformed; if you do read the newspaper, you are misinformed."
As part of writing this article, I experimented with using ChatGPT to replace myself, but the results still feel formulaic and bland. Part of the problem is that AI does not produce truly unique insight—although you can train and prompt it in that direction. I've asked ChatGPT whether it is aware of who I am (it is) and of my writing style (it is). Software like ChatGPT is trained mostly on datasets up to 2021, though—in theory I could run every article I've ever written through my own GPT-4 training model for more interesting results. I already use Google's grammar AI tools to catch syntax errors quickly while writing. As usual, they're not perfect, and sometimes I want to be "wrong" (i.e. different or unique), but they are helpful tools.
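The plumbing for that kind of personal training run is mundane. As a hedged sketch only (the folder layout, the title-as-prompt framing, and the function name are my own assumptions, not any vendor's required format), turning an article archive into the JSONL prompt/completion records that OpenAI-style fine-tuning tools typically consume might look like this:

```python
import json
from pathlib import Path

def articles_to_jsonl(article_dir: str, out_path: str) -> int:
    """Convert a folder of plain-text articles into JSONL
    prompt/completion training records. Assumes each .txt file
    starts with a title line followed by the article body.
    Returns the number of records written."""
    records = 0
    with open(out_path, "w", encoding="utf-8") as out:
        for path in sorted(Path(article_dir).glob("*.txt")):
            text = path.read_text(encoding="utf-8").strip()
            if not text:
                continue
            # Use the first line as a stand-in "title" prompt and
            # the remainder as the completion the model should learn.
            title, _, body = text.partition("\n")
            record = {
                "prompt": f"Write an article titled: {title}\n\n",
                "completion": " " + body.strip(),
            }
            out.write(json.dumps(record) + "\n")
            records += 1
    return records
```

Each line of the output file is one training example; a model fine-tuned on such a file and prompted with a fresh title would then imitate the archive's style rather than the generic house voice.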
Granted, we can replace most writers with GPT models, as companies like Buzzfeed have already done. Most writers have nothing of value to contribute, and the proliferation of AI will only make unique thinkers more valuable. This line of reasoning extrapolates to many other fields, with whatever social chaos that may bring.
Good AI requires good data, and we are rapidly running out of quality data to train them on.
Human NPCs will be the biggest losers while political dissenters have much to gain as AI levels the playing field of cultural warfare.