Rebutting Sean Carroll on LLMs and AGI
New evidence from the past week (e.g., Anthropic's "alignment faking" paper and OpenAI's new o3 model scoring 87% on ARC-AGI) gave me the courage to speak truth to podcast.