What Do I Do if AI Gets Me Wrong?
If an AI said you were a murderer, how would you get that corrected? Aleks and Kevin find out that even the most extreme mistakes by AI can be hard to put right.
When a Norwegian man idly asked ChatGPT to tell him something about himself, he was appalled to read that, according to the chatbot, he had been convicted of murdering two of his children and had attempted to kill a third. Outraged, he contacted OpenAI to have the information corrected, only to discover that, because of how these large language models work, it's difficult if not impossible to change. He's now taking legal action with the help of a digital civil rights advocate.
It's an extreme example of large language models' propensity to hallucinate and confabulate, i.e. to make things up based on whatever their training data suggests is the most likely combination of words, however far from reality that might be.
Aleks Krotoski and Kevin Fong find out exactly what your rights are, and whether the GDPR (General Data Protection Regulation) is really fit for purpose in the age of generative AI.
Presenters: Aleks Krotoski & Kevin Fong
Producer: Peter McManus
Researcher: Jac Phillimore
Sound: Gav Murchie
Broadcast: 15:30, BBC Radio 4
Podcast
The Artificial Human
Aleks Krotoski and Kevin Fong answer the questions we're all asking about AI.