Movies

BBC: Can AI have a mind of its own?

NJChoi 2024. 10. 30. 12:22

In the autumn of 2021, something strange happened at the Google headquarters in California's Silicon Valley. A software engineer called Blake Lemoine was working on the artificial intelligence project 'Language Models for Dialogue Applications', or 'LaMDA' for short.

LaMDA is a chatbot - a computer programme designed to have conversations with humans over the internet.

After months of talking with LaMDA on topics ranging from movies to the meaning of life, Blake came to a surprising conclusion: the chatbot was an intelligent person with wishes and rights that should be respected. For Blake, LaMDA was a Google employee, not a machine. He also called it his 'friend'.

Google quickly reassigned Blake from the project, announcing that his ideas were not supported by the evidence. But what exactly was going on?

In this programme, we'll be discussing whether artificial intelligence is capable of consciousness. We'll hear from one expert who thinks AI is not as intelligent as we sometimes think, and, as usual, we'll be learning some new vocabulary as well.

But before that, I have a question for you, Neil. What happened to Blake Lemoine is strangely similar to the 2013 Hollywood movie, Her, starring Joaquin Phoenix as a lonely writer who talks with his computer, voiced by Scarlett Johansson. But what happens at the end of the movie? Is it:

a) the computer comes to life    b) the computer dreams about the writer   or c) the writer falls in love with the computer?

...c) the writer falls in love with the computer. 

OK, Neil, I'll reveal the answer at the end of the programme. Although Hollywood is full of movies about robots coming to life, Emily Bender, professor of linguistics and computing at the University of Washington, thinks AI isn't that smart. She thinks the words we use to talk about technology, phrases like 'machine learning', give a false impression about what computers can and can't do.

Here is Professor Bender discussing another misleading phrase, 'speech recognition', with BBC World Service programme, The Inquiry:

If you talk about 'automatic speech recognition', the term 'recognition' suggests that there's something cognitive going on, where I think a better term would be automatic transcription. That just describes the input-output relation, and not any theory or wishful thinking about what the computer is doing to be able to achieve that.

Using words like 'recognition' in relation to computers gives the idea that something cognitive is happening - something related to the mental processes of thinking, knowing, learning and understanding.

But thinking and knowing are human, not machine, activities. Professor Bender says that talking about them in connection with computers is wishful thinking - something which is unlikely to happen.

The problem with using words in this way is that it reinforces what Professor Bender calls technical bias - the assumption that the computer is always right. When we encounter language that sounds natural but is coming from a computer, humans can't help but imagine a mind behind the language, even when there isn't one.

In other words, we anthropomorphise computers - we treat them as if they were human. Here's Professor Bender again, discussing this idea with Charmaine Cozier, presenter of BBC World Service's The Inquiry.

So 'ism' means system, 'anthro' or 'anthropo' means human, and 'morph' means shape... And so this is a system that puts the shape of a human on something, and in this case the something is a computer. We anthropomorphise animals all the time, but we also anthropomorphise action figures, or dolls, or companies when we talk about companies having intentions and so on. We very much are in the habit of seeing ourselves in the world around us.