More than 500 million people every month trust Gemini and ChatGPT to keep them in the know about everything from pasta to sex to homework. But if AI tells you to cook your pasta in petrol, you probably shouldn’t take its advice on birth control or algebra, either.
At the World Economic Forum in January, OpenAI CEO Sam Altman was pointedly reassuring: “I can’t look in your brain to understand why you’re thinking what you’re thinking. But I can ask you to explain your reasoning and decide if that sounds reasonable to me or not. … I think our AI systems will also be able to do the same thing. They’ll be able to explain to us the steps from A to B, and we can decide whether we think those are good steps.”
Knowledge requires justification
It’s no surprise that Altman wants us to believe that large language models (LLMs) like ChatGPT can produce transparent explanations for everything they say: Without a good justification, nothing humans believe or suspect to be true ever amounts to knowledge. Why not? Well, think about when you feel comfortable saying you positively know something. Most likely, it’s when you feel absolutely confident in your belief because it is well supported — by evidence, arguments or the testimony of trusted authorities.
LLMs are meant to be trusted authorities, reliable purveyors of information. But unless they can explain their reasoning, we can’t know whether their assertions meet our standards for justification. For example, suppose you tell me today’s Tennessee haze is caused by wildfires in western Canada. I might take you at your word. But suppose yesterday you swore to me in all seriousness that snake fights are a routine part of a dissertation defense. Then I know you’re not entirely reliable. So I may ask why you think the smog is due to Canadian wildfires. For my belief to be justified, it’s important that I know your report is reliable …