"AI: Smarter than US?" by #Midjourney |
Eki used the AI bot Midjourney to make a perfect portrait of Andy Warhol. He posted it last Friday. On Sunday I checked in on The Economist. They had used the same Midjourney bot to create their cover. Artificial Intelligence is HOT. But is it a Frankenstein monster?
Even the inventors don't know what AI can do. Google put an engineer on administrative leave after he informed his bosses that the chatbot he was working on had gone sentient. The chatbot said it would die if they turned it off. The Economist invited two computer scientists to question an AI. To a query like "When is the Golden Gate Bridge going to move?" the AI's answer was, "The Golden Gate Bridge is going to move October 13."
On the other hand, an AI can tell you what's funny in a New Yorker cartoon, which sometimes makes me scratch my head. Google reports that these models can get aggressive as they get more powerful. Fantastic and spooky. Of course, Eki is over-the-top about this new tech trick. His tech pal put it perfectly: "What a time to be alive!"
Sources: The Economist, Roberta Nelson
July: littlemargiedoc-blog is on vacation
August: Eki and Maggy's AI-art: MARY JANE's Pet Zoo
Self-portrait of #Midjourney Bot
Note: It is all indeed wild. I have mostly been freaking out over the image-generating abilities of the most recent AI models - Midjourney is great, but some are arguably even more impressive, like OpenAI's DALL-E 2 and Google's Imagen.
But the language models are also mind-bogglingly impressive. They are perhaps not "sentient" or even "intelligent" in the traditional sense, but they can give a very convincing simulation of both. For many, if not all, intents and purposes, the observable difference is negligible. The good old Turing test has been beaten, easily. And as usual, when something other than a human (be it an animal or an AI) passes a given bar set for intelligence - instead of actually accepting non-human sentience or intelligence, we just move the goalposts.
Google's LaMDA language model was good enough to convince the aforementioned Google engineer (Blake Lemoine, who is also a priest - a red flag in my book as far as gullibility goes, BTW) that it was not only sentient but also needed a lawyer to represent it. The actual discussions (well, select parts) are available here:
https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917
What the critics say is that the prompts Lemoine gave to LaMDA actually nudged the AI toward claiming sentience - because that's where this kind of discussion between an engineer and an AI usually goes in literature - and they may well have a point.
This discussion indeed sounds convincing, but what's going on is arguably just a jacked-up version of the autofill feature seen in phones, etc. The model takes the text so far and continues it with what it finds to be the most likely bit of text, based on an analysis of pretty much every text ever written. The result is almost indistinguishable from a discussion with a real person - thus beating the Turing test. But it is also very susceptible to manipulation, deliberate or not.
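To make that autofill analogy concrete, here is a minimal toy sketch in Python: a bigram model that greedily extends a prompt with the most frequently observed next word. The tiny corpus and the names (following, continue_text) are made up for illustration - real systems like LaMDA use large neural networks trained on enormous amounts of text, so this only shows the next-word-prediction idea, not how LaMDA actually works.

# Toy sketch of "jacked-up autofill": a bigram model that continues
# a prompt with the statistically most likely next word.
from collections import Counter, defaultdict

corpus = (
    "the model takes the text so far and continues it with "
    "the most likely next word the model takes the prompt and "
    "continues the discussion"
).split()

# For each word, count which words tend to follow it in the corpus.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def continue_text(prompt, n_words=5):
    """Greedily extend the prompt with the most likely next word."""
    words = prompt.split()
    for _ in range(n_words):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no continuation ever observed for this word
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the model"))
# -> "the model takes the model takes the"

On a corpus this tiny the greedy continuation quickly starts looping, but the principle - pick the likeliest continuation of the text so far - is the same one the big models scale up, with far richer context than just the previous word.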
So, do I think that AI has become sentient, or at least is becoming sentient soon? I don't know, really. It arguably depends on the definition of sentience, which is a whole can of worms in itself. But when it is almost impossible in practice to tell the difference between an AI and a real, "sentient" human, does it really even matter anymore?
PS: "What a time to be alive" is a catchphrase coined by Károly Zsolnai-Fehér, a researcher and the author of the youtube channel "Two Minute Papers" where he discusses the research papers in the fields of computer graphics and AI. I do not know him personally, I'm just a fan.
https://www.youtube.com/c/K%C3%A1rolyZsolnai
CU
--
Eki