
ChatGPT: Psychology as a Building Block of AI

Tasina Emma Westberg | January 30, 2023

How do intelligent machines use psychology to optimise user experience? This article looks at the recently released ChatGPT beta, as well as the potential for bias in AI algorithms. It’s certainly not exhaustive; it only serves as a small sneak peek into the curious world of AI.

Collage by author. Photos sourced from: Wired UK (Pink background) and J.M. Bourgery (Cross-section of the head showing brain and cerebellum).

What we’ll cover:

  • What is ChatGPT?
  • Bias in Algorithms
  • The Iceberg Theory Applied to Machines and the Human Brain

What is ChatGPT?

At the beginning of December, ChatGPT, the brainchild of OpenAI, was released into the wild that is the open public.

Don’t know what I’m talking about? You can find out more in this short video.

So far, the seemingly superintelligent chatbot has stirred up its fair share of ruckus and backlash from artists, writers, and teachers, while others have been enjoying the ease with which they can now write essays, find ideas for recipes, or solve maths problems without having to sit down and think for even a second.

It all seems a bit unreal. If you’ve already tried chatting with GPT, you’ll know what I mean: OpenAI’s optimised response system is so close to what you’d expect a human to say that it’s difficult to remind yourself that, hey, you’re still talking to a computer. On the other hand, it’s not like this is something entirely new. Chatbots have been around for quite a while, and NPCs in video games were probably among the first famous examples of assigning qualities of the human psyche to virtual, non-human counterparts that the player interacts with in real time.

What is new with ChatGPT is the incredible depth and accuracy with which it can respond to any given prompt. Basic chatbots are built to detect keywords in the kinds of prompts they’ve been trained on, so at some point they may fail to provide the ultra-specific information you’re looking for, especially if it falls outside their training topics. The same goes for NPCs in video games: they’re programmed to recognise certain decision paths taken by the player, to which they respond in ways predetermined by the game designers. In contrast, ChatGPT has very few such limitations. You can feed it basically any crazy prompt that comes to mind, and it will most likely give you a response that is not only accurate but also unique.
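
To make the contrast concrete, here is a minimal sketch of the keyword-matching approach described above. Everything in it (the keywords, the canned replies) is invented for illustration and doesn’t come from any real product:

```python
# A toy keyword-matching chatbot of the kind described above.
# Illustrative sketch only: the keywords and replies are invented.

RESPONSES = {
    "refund": "You can request a refund within 30 days of purchase.",
    "hours": "We're open Monday to Friday, 9am to 5pm.",
    "shipping": "Standard shipping takes 3 to 5 business days.",
}

FALLBACK = "Sorry, I didn't understand that. Could you rephrase?"

def reply(message: str) -> str:
    """Return a canned response for the first known keyword found."""
    text = message.lower()
    for keyword, response in RESPONSES.items():
        if keyword in text:
            return response
    # Anything outside the hard-coded keywords hits a dead end,
    # which is exactly the limitation a generative model avoids.
    return FALLBACK

print(reply("What are your opening hours?"))  # canned answer
print(reply("Can you write me a sonnet?"))    # dead end
```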

So, why does ChatGPT matter? You’re reading this on a psychology website, after all.

Well, that’s exactly why. In order to mimic human behaviours and thought processes as accurately as possible, Artificial Intelligence is extremely dependent on psychology. Just like the human brain, an AI algorithm makes judgements based on a large collection of data. For mortals, we call them experiences; for machines, they’re referred to as training data [1].
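
As a loose illustration of that analogy, here is a toy sketch in Python of a “classifier” whose every judgement traces back to the handful of labelled examples, its training data, that it has seen. The sentences and labels are made up:

```python
# A toy sketch of "judgement from experience": a classifier whose only
# knowledge is the labelled examples (training data) it has seen.
from collections import Counter

training_data = [
    ("what a lovely day", "positive"),
    ("i love this song", "positive"),
    ("this is terrible", "negative"),
    ("i hate waiting", "negative"),
]

# "Experience": count how often each word appears under each label.
word_counts = {"positive": Counter(), "negative": Counter()}
for text, label in training_data:
    word_counts[label].update(text.split())

def judge(text: str) -> str:
    """Score a new sentence purely by its past associations."""
    scores = {
        label: sum(counts[word] for word in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(judge("i love a lovely song"))  # "positive": echoes past experience
print(judge("i hate this"))           # "negative"
```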

Bias in Algorithms

Let’s talk a bit more about this training data. Logically speaking, just like a human, the more data an AI is trained with, the more accurately it can make judgements. At the same time, though, a larger data pool can also produce a biased algorithm. This is what happened in the Amazon hiring-bias case [2], when the multi-billion-dollar company developed an algorithm that discriminated against women in the hiring process. This happened not because the algorithm was inherently biased, but because it had been trained on Amazon’s hiring data from the previous 10 years, which consisted mostly of male hires; Reuters reported, for example, that the system penalised CVs containing the word “women’s”, as in “women’s chess club captain”. The tech field had evolved considerably since that data was collected, with far more women in previously male-dominated positions, but the training data for Amazon’s hiring AI did not account for this change. The model therefore favoured male candidates and turned a blind eye to female ones.
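
To see how a skewed history turns into a skewed model, consider this deliberately simplified sketch; the résumé feature and the numbers are invented, loosely inspired by the reporting in [2]:

```python
# A deliberately simplified sketch of how skewed history becomes a
# skewed model. The feature and the numbers are entirely invented,
# loosely inspired by the reporting in [2].

historical_hires = [
    # (mentions_womens_college, was_hired): past hires were mostly men,
    # so the flag almost never co-occurs with a positive outcome.
    (False, True), (False, True), (False, True), (False, True),
    (False, True), (False, True), (False, True), (False, True),
    (True, False), (True, False), (True, False), (False, False),
]

def hire_rate(flag: bool) -> float:
    """Estimate P(hired | flag) straight from the historical data."""
    outcomes = [hired for f, hired in historical_hires if f == flag]
    return sum(outcomes) / len(outcomes)

print(f"hire rate without flag: {hire_rate(False):.0%}")  # high
print(f"hire rate with flag:    {hire_rate(True):.0%}")   # zero

# Any model fit to this history will down-rank CVs carrying the flag,
# not because the flag matters, but because the past looked this way.
```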

When we, as humans, make biased decisions, they’re often based on patterns we’ve formed in our minds through an accumulation of positive or negative experiences related to a topic. For instance, we might hate chocolate (how sad) because every time we have it, we end up getting into an argument with someone. While these two things have no logical cause-and-effect relationship, we are very likely to begin associating them in our minds, so that every time we see chocolate, we expect an argument. We develop a superstition by way of “protecting” ourselves from a perceived negative outcome, and hence form a bias about chocolate.

The danger of AI is that, however intelligent it may be, it has no consciousness, and it can therefore perpetuate our human biases at considerable scale if we train it to do so. In Amazon’s case, the hiring bias was reportedly unintentional, but there’s another example in which matters got far worse: in 2016, Microsoft’s AI chatbot Tay was launched on Twitter. Through Tay, Microsoft had created a relatable character with the witty personality of a teenage girl, who was supposed to “engage people in dialogue through tweets or direct messages” [3]. The initiative was described as an experiment in “conversational understanding”, meaning that Tay would learn how to communicate from the information and values other Twitter users fed her.

It took only hours for the Tay experiment to go wrong. “Within 16 hours of her release, Tay had tweeted more than 95,000 times, and a troubling percentage of her messages were abusive and offensive” [3]. Trolls on the message board 4chan had encouraged each other to feed Tay racist, sexist, and otherwise discriminatory slurs, hoping to “train” her into becoming an internet troll herself. Their efforts were successful, to everyone else’s misfortune. In this example, several menacing human biases were deliberately fed to Tay by a single group of people, who managed to take control of the algorithm’s behaviour within a couple of hours.
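
In the spirit of the Tay story, here is a hypothetical sketch of a bot that learns from whoever talks to it, with no moderation step in between. Microsoft’s real system was of course far more sophisticated, but the failure mode is the same:

```python
# A hypothetical "learn from whoever talks to you" bot, in the spirit
# of the Tay story above. This sketch only shows why the lack of a
# moderation step makes such a bot easy to poison.
import random

class ParrotBot:
    """Learns to speak by storing whatever users say, unfiltered."""

    def __init__(self) -> None:
        self.memory: list[str] = ["hello!", "nice to meet you"]

    def learn(self, message: str) -> None:
        # No filter: every incoming message becomes possible output.
        self.memory.append(message)

    def speak(self) -> str:
        return random.choice(self.memory)

bot = ParrotBot()

# A coordinated group floods the bot with one kind of message...
for _ in range(100):
    bot.learn("an inflammatory slogan")

# ...and now nearly everything it says comes from that flood.
flooded = sum(bot.speak() == "an inflammatory slogan" for _ in range(1000))
print(f"{flooded / 1000:.0%} of replies are now the trolls' message")
```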

We always say we’re afraid of what machines will do to this world, but most of the time it’s malicious human intention that makes them dangerous. When I look at the recent release of ChatGPT, and when I recall two of the most notorious AI scandals at Amazon and Twitter, I try to think of AI as a human: capable of incredible feats, and equally capable of immense destruction. Power and influence govern the minds of machines just as they govern ours. And, to conclude this article, we’ll briefly look at how the two resemble each other on one key point: try as we might, we don’t actually know that much about either of them.

The Iceberg Theory Applied to Machines and the Human Brain

An interesting point about AI is that, even though we’ve made immense progress in developing its capabilities and performance, we know very little about the inner workings of the algorithms we create. According to an article in the MIT Technology Review, the apps and websites that use deep learning to serve ads or recommend songs are “[run by computers that have] programmed themselves.” In other words, no one can fully explain what got us from A to B: “Even the engineers who built these apps cannot fully explain their behaviour” [4].

All of this seems reminiscent of another large, complex thing we don’t quite understand either… No, not space (although that’s certainly one of them), but the good old human brain. Lu Chen, Professor of Neurosurgery and Psychiatry at Stanford University, said in a 2016 interview, “We know very little about the brain. Learning, for example, doesn’t just require good memory, but also depends on speed, creativity, attention, focus, and, most importantly, flexibility” [5].

Paradoxically, to me, not knowing much about immensely complex mechanisms such as the brain or AI is quite comforting. As the Tame Impala classic goes, “The less I know, the better…” It’s not that I don’t care to know more; rather, I find it calming to zoom out and remember that, no matter how revolutionary or phenomenal something might seem at first glance, it remains extremely complicated to understand. Fascination and shock last for seconds, maybe hours, or a day. Genuine interest and frustration can last for years. Decades. Maybe even a lifetime.

Whether you’re among those just now discovering AI through the growing popularity of ChatGPT, or already an AI geek well versed in the field’s 65+ year history (the term “artificial intelligence” was coined back in 1956!), I’d like to conclude by saying that you don’t have to be afraid of AI. Just because we don’t understand something doesn’t mean we should fear it. I mean, you don’t fear the human brain, do you? Never mind, I’m sure there are occasions on which you do. Like when you walk into a room with a clear intention of what you came to do, only to forget it a second later. But that’s not the point.

“Science works as a process that extends over decades,” says Nobel laureate and neuroscientist Tom Südhof [5]. There’s a reason it’s called computer science: it’s just as gradual and evolutionary a process as plain old science. And because we can only truly make sense of what we know, what we don’t know can’t hurt us. Or it will… but only once we understand it. I’m trapping you in plenty of conundrums here, but think of it this way: how empowering and comforting it is to know that AI can only operate and grow the same way the human psyche does. More power to psychology!

For further exploration:

My favourite place to go to learn more about the endless world of AI is the Lex Fridman podcast. Lex is a computer scientist passionate about the (r)evolution of AI and what it means for us all, and he invites experts onto his podcast to talk about it. A great way to learn!

Sources:

[1] “What Does Training Data Mean?” Techopedia, February 17, 2022. https://www.techopedia.com/definition/33181/training-data

[2] Dastin, Jeffrey. “Amazon scraps secret AI recruiting tool that showed bias against women.” Reuters, October 11, 2018. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G

[3] Schwartz, Oscar. “In 2016, Microsoft’s Racist Chatbot Revealed the Dangers of Online Conversation.” IEEE Spectrum, November 25, 2019. https://spectrum.ieee.org/in-2016-microsofts-racist-chatbot-revealed-the-dangers-of-online-conversation

[4] Knight, Will. “The Dark Secret at the Heart of AI.” MIT Technology Review, April 11, 2017. https://www.technologyreview.com/2017/04/11/5113/the-dark-secret-at-the-heart-of-ai/

[5] Lam, Vivian. “‘We know very little about the brain’: Experts outline challenges in neuroscience.” SCOPE by Stanford Medicine, November 8, 2016. https://scopeblog.stanford.edu/2016/11/08/challenges-in-neuroscience-in-the-21st-century/#:~:text=%E2%80%9CWe%20know%20very%20little%20about%20the%20brain