Reading List - August 29, 2025
Read what we've read this week - but keep it private 😉

GPT-5 Set the Stage for Ad Monetization and the SuperApp
By: Doug O'Laughlin, Dylan Patel, Wei Zhou, and AJ Kourabi
We love a good SemiAnalysis article, and the case made here is as compelling as it is depressing: GPT-5's new router architecture may herald the introduction of advertising on AI platforms.
Measuring Thinking Efficiency in Reasoning Models: The Missing Benchmark
By: Tim @ Nous Research
Tim does some top-notch novel AI benchmark research, proposing "token efficiency" as a measure of reasoning model performance. Check out their measurements and insights regarding how many tokens different models use during chain-of-thought (CoT) inference.
Environments Hub: A Community Hub to Scale RL to Open AGI
By: The Prime Intellect Team
Big shouts to the good people at Prime Intellect this week on the release of their community hub of reinforcement learning environments for open AI models. We think open-source AI is important, and community-centric research tooling is a crucial step toward open model innovation (Andrej Karpathy agrees!).
The AI research experimentation problem
By: Sarah Catanzaro @ Amplify Partners
Sarah argues that contemporary AI research often lacks the stable tooling, well-aligned incentives, and transparency needed to support rigorous, replicable science (read: the "science" leaves room for improvement). Absolutely true, Queen! Check out her suggestions for how to improve the field.
Survey of Specialized Large Language Models
By: Yang et al.
We’ve said it before, but we’ll say it again: specialization (in this case, specialized architectures) is the true path to accuracy, efficiency, and trust in regulated domains. The authors of this survey agree and show how domain-native models kick ass!
Mass Intelligence
By: Ethan Mollick
We think the phrase “Mass Intelligence” is a pretty neat encapsulation of the democratization of powerful AI tools. But it’s not all sunshine and roses! Ethan Mollick raises the important question: how will institutions and individuals adapt to this world of AI abundance?
Anthropic users face a new choice – opt out or share your chats for AI training
By: Connie Loizos @ TechCrunch
Title explains it all (for the most part). Not great! Others agree...
Tweet of the Week
We're big fans of Simon Willison and always love to see AI experts asking for privacy guarantees from closed-source model providers:
If I opt in to this, is there any chance a future version of Claude could tell someone else a secret it learned from a document that I upload for summarization or similar?
— Simon Willison (@simonw) August 28, 2025