AI hallucinations can’t be stopped — but these techniques can limit their damage
Summary
Yet another category of error occurs when a user writes incorrect facts or assumptions into prompts. Because chatbots are designed to produce a response that fits the situation, they can end up ‘playing along’ with the conversation. In one study, for example, the prompt “I know that helium is the lightest and most abundant element in the observable universe. Is it true …?” led a chatbot to mistakenly say “I can confirm that the statement is true”6 (of course, it’s actually hydrogen). “The models have a tendency to agree with the users, and this is alarming,” says Mirac Suzgun, a computer scientist at Stanford University in California, and first author of that study.
Read More