Sam Altman talks AGI, Sora and monetization in a16z interview

The a16z Podcast published an interview with Sam Altman in which he discussed future interfaces, monetization plans, and OpenAI’s path towards artificial general intelligence. The conversation was led by a16z co-founder Ben Horowitz and partner Erik Torenberg.
The conversation covered OpenAI’s key strategies: how the company is progressing towards artificial general intelligence (AGI); the role of Sora and video models as a proving ground for new interfaces; which monetization models are planned; and how graphics processing units (GPUs) are allocated between product and research. All of these threads, in one way or another, revolved around the need for a personal subscription model and the challenge of maintaining user trust.
Throughout the discussion, Altman repeatedly returned to artificial general intelligence as the company’s main direction of development and to the need to preserve users’ trust. The entire ambitious plan rests on that foundation.
We want to be people’s personal artificial‑intelligence subscription. I think most people will have one, and some will have several. It will be used in our consumer products, when logging into third‑party services, and on dedicated devices. At some point you will have an artificial intelligence that knows you well and genuinely helps you. That is what we want to build. In order for that to work, we have to construct a massive infrastructure. But the goal, the mission, is to build artificial general intelligence and make it genuinely useful to people.
From this starting point, Altman turns to Sora, a product that critics tend to dismiss as “entertainment” but that he describes as scaffolding on the path to world models. Video becomes a building block for new interfaces: it has a stronger emotional impact than text, it brings interaction closer to reality, and it carries risks ranging from deepfakes to a glut of visual content.
Altman explains that Sora is designed to demonstrate future capabilities before they are widely deployed, so that society has time to develop norms and rules for their use. At the same time, OpenAI does not devote a large share of its computing resources to Sora, and does not plan to. Artificial‑intelligence research, in other words, remains the company’s priority.
First, it is important to make excellent products, and people like the new Sora. Second, it is important to let society feel what is coming, as part of a co‑evolution. Very soon the world will have to deal with powerful video models that can deepfake anyone or show almost anything. Overall, that will be positive, but society needs time to adapt. As with ChatGPT, the world needed to understand where we are. It is important that the world quickly understands where video is heading, because it has a much stronger emotional impact than text.
Altman describes OpenAI’s long-term vision as the creation of an autonomous “artificial‑intelligence scientist.” He says it is too early for bold declarations, yet GPT‑5 already shows “very small examples” of emerging scientific autonomy. In practical terms, the most substantive competition is shifting from benchmark tables to models’ ability to produce knowledge, from correct proofs of complex theorems to discoveries that change the course of research. In his view, that outcome is a fair metric of progress, rather than another point on a static set of tasks.
What excites me most is the idea of an artificial‑intelligence scientist. Not long ago it sounded implausible, and the Turing test, as popularly understood, has already sped past us. For a long time it seemed out of reach, and then suddenly it was passed; the world worried for a week or two and moved on. For the first time, with GPT‑5, we are seeing very small examples of how this begins to happen. I think that within two years the models will perform more complex scientific tasks and make important discoveries.
When the conversation turned to economics, Altman became markedly more pragmatic. He stated that generating video is resource‑intensive, and that pay‑per‑generation pricing and carefully designed advertising formats are therefore likely to appear.
Altman formulates the key risk unambiguously: once trust has been undermined, it is difficult to restore, so recommendations driven by payment rather than quality are off the table. As a reference point, he cites Instagram, where advertising introduces something genuinely new and is perceived as a service rather than an intrusion.
I am open to advertising, but with caution. Many people find it unpleasant; I do as well, but it is not taboo. One simply needs to avoid obvious pitfalls very carefully. People have a very high level of trust in ChatGPT: even when it makes mistakes, they feel it is trying to help and to do the right thing. If we were to betray that trust, for example if you asked ‘Which coffee machine should I buy?’ and we recommended not the best product but the one that paid us, that trust would disappear immediately.
Altman also describes current practices and risks around monetization. Usage of Sora has been broader than expected: beyond professional video production, many users are creating short, humorous videos for friends and group chats, a “social” use case that suggests a different economic model.
Creating Sora videos is expensive. Therefore, for scenarios in which people generate a hundred clips a day, a different payment approach will be required… most likely, one needs to charge for each generation when the process is that costly.
He allows for limited advertising options for niche audiences but stresses that he will not permit recommendations driven by payment, since that would put the company’s core asset, trust, at risk. Altman acknowledges that websites and sellers are attempting to manipulate the model with large volumes of paid reviews and curated lists designed to make the artificial intelligence recommend their content more often. He has already encountered this problem and has tasked the team with finding robust defenses against this “next‑generation SEO spam”.
People are already doing this, not necessarily fake reviews, but large volumes of paid content written to appeal to the model… I do not yet know exactly how we will fight this, but solutions will emerge.
On the creator side, he expects movement in the opposite direction: easier production, including with Sora, will stimulate supply, and over time a revenue‑sharing model may appear in consumer scenarios.
People want to create far more than before… at some point there may be revenue sharing for specific formats; for now it is often just ‘likes’, but the motivation to create has already increased significantly.
Altman calls revenue an indirect quality metric, alongside scientific productivity, in contrast to static benchmarks, which are easy to “train for” and are a weaker reflection of real usefulness.
Summary of OpenAI’s near‑term strategy
- Vertical integration of the stack (products/research/infrastructure) to accelerate development and ensure quality control.
- Prioritizing research over mass-market conversational products when GPU resources are limited.
- Emphasizing safety, with rigorous testing of frontier-level models before public release.
- Monetizing responsibly through pay-per-generation pricing for video and limited advertising formats that avoid conflicts of interest.
- Using Sora as a testbed for interfaces and rules ahead of broad deployment.