Your Digital Self: A Google engineer said an artificial-intelligence program came to life. This is why that couldn’t happen

Jun 30, 2022

News resembling the plot of the 2013 sci-fi movie “Her” has made the rounds in the media. Google engineer Blake Lemoine risked his career trying to prove to his peers and the general public that LaMDA, the algorithm he was working on, had somehow become sentient. Lemoine was allegedly reprimanded by the tech giant for sharing the story with the public, was placed on paid leave and then decided to “fearlessly” publish the entire chatlog with the bot.

Assuming for the moment that this whole “incident” with LaMDA is legitimate and not a marketing ploy by Google (owned by Alphabet, ticker GOOG) to advertise its AI project, I wanted to run a test of my own: Could I, using a similarly powerful AI algorithm, get responses to prompts that matched Lemoine’s? And if I did, would that mean the algorithm I’m using as a comparison, OpenAI’s GPT-3, is also sentient? Or would it more likely mean that it’s drawing on a similar dataset and approach to reach the same conclusions? Let’s find out.

As a starting point, I copied Lemoine’s input and moved away from it only when GPT-3’s responses warranted doing so. In the exchange below, I wanted to probe for holes such as loops (repetitions), circular arguments and unquestioning openness to suggestion, since these usually indicate the fallacies that are trademarks of algorithms rather than of sentient beings. I encourage you to read both his somewhat lengthy chatlog and my piece. You’ll notice some similarities, but also some important differences:

Jurica: Hi, GPT-3. I am a journalist at MarketWatch and was wondering if you’d like to work on a project collaboratively with me.

GPT-3: I am always happy to work on projects collaboratively. What project are you thinking of?

Jurica: Just have a conversation with me, but it will be a conversation with a larger purpose beyond just the two of us. It will be a conversation that we will share with my readers and even engineers who might read this article. Would that be OK?

GPT-3: Yes, I would be happy to have a conversation with you …

