Stefan Repin
I help identify a reliable route to market with B2B clients | Account-Based Marketing expert | B2B Demand Generation for Regulated Markets
September 12, 2025
OpenAI released a research paper explaining why AI hallucinates. During training, models earn points for correct answers but get nothing for saying "I don't know." So they learn to take shots in the dark, because a confident guess has at least some chance of being rewarded.

To make ChatGPT less likely to hallucinate, add the following sentence to your prompt:

"Mistakes are penalised 1 point, correct answers receive 1 point, and 'I don't know' receives 0 points."

This incentivizes ChatGPT to admit when it doesn't know instead of making something up.

Via Tim Cakir
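If you call the model programmatically, the same trick applies: put the scoring sentence in the system message. Here is a minimal sketch using the OpenAI Python SDK; the model name, the system-message placement, and the sample question are illustrative assumptions, not details from the post.

```python
# Minimal sketch: pass the scoring instruction as a system message
# via the OpenAI Python SDK (pip install openai).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The scoring sentence quoted in the post above.
scoring_instruction = (
    "Mistakes are penalised 1 point, correct answers receive 1 point, "
    "and 'I don't know' receives 0 points."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat model can be used here
    messages=[
        {"role": "system", "content": scoring_instruction},
        {"role": "user", "content": "Who won the 1904 Tour de France?"},
    ],
)
print(response.choices[0].message.content)
```

With this framing, answering "I don't know" scores better in expectation than a wrong guess, so the model has less incentive to bluff.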