Most Gen AI Players Remain 'Far Away' from Profiting: Interview with Andy Wu (www.library.hbs.edu)

🤖 AI Summary
Harvard Business School’s Andy Wu warns that generative AI is wildly promising but economically precarious: training state‑of‑the‑art models requires huge fixed investments, and every single inference (a user prompt → model response) incurs nontrivial variable costs—think several cents of electricity and chip time per image—and OpenAI projects more than $150 billion in inference spending through 2030. Current go‑to monetization (free tiers + $20/month subscriptions) mirrors legacy software models that assumed near‑zero marginal cost; those price points are far too low to cover today’s variable costs, so firms face a mismatch between enormous operational outlays and limited revenue per use.

That mismatch shapes competitive and technical strategy. Nvidia (the “shovel seller”) and Meta (a “jewelry maker” leveraging platform synergies) have been market winners, while OpenAI and other foundation‑model builders risk commoditization: weak IP, rapid open‑source forks (e.g., Llama, Grok, DeepSeek), and low barriers to entry compress pricing power. Technically, bigger models raise both fixed training and per‑inference costs, pushing the field toward smaller, efficient models that match quality at lower runtime expense.

Wu expects a shift to genuine pay‑per‑usage pricing and warns of a possible market reckoning or “bubble” if value creation outpaces the ability to capture sustainable economic returns—especially for chip/cloud suppliers (“neoclouds”) that are highly leveraged.
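The subscription-versus-variable-cost mismatch Wu describes can be sketched as back-of-the-envelope unit economics. The $20/month price point comes from the article; the per-inference cost and usage levels below are illustrative assumptions (the interview says only "several cents" per image), not reported figures.

```python
# Back-of-the-envelope unit economics for a flat-rate AI subscription.
# Only the $20/month price point is from the article; the cost and
# usage numbers are hypothetical assumptions for illustration.

def monthly_margin(price: float, cost_per_inference: float, inferences: int) -> float:
    """Revenue minus variable (inference) cost for one subscriber-month."""
    return price - cost_per_inference * inferences

PRICE = 20.00   # flat monthly subscription (from the article)
COST = 0.04     # assumed ~4 cents of compute per inference (hypothetical)

for usage in (100, 500, 1000):
    margin = monthly_margin(PRICE, COST, usage)
    print(f"{usage:>5} inferences/month -> margin ${margin:+.2f}")
```

Under these assumed numbers the subscriber breaks even at 500 inferences a month and becomes loss-making beyond that, which is the core of Wu's argument for pay-per-usage pricing: a flat fee caps revenue while heavy users' variable costs keep growing.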