🤖 AI Summary
Rob’s Notes argues that the most profitable, consequential application of AI long predates chatbots: ad ranking. For two decades Google, Meta and Amazon have run real‑time auctions powered by sparse, high‑cardinality recommendation stacks (the canonical DLRM pattern) that turn user/ad/creative IDs into embeddings, feed them through retrieval, transformer-based sequence models, and multi-stage re‑rankers, all under strict millisecond latency budgets. These systems don’t just predict clicks; they optimize multi‑objective functions (roughly bid × incremental conversion probability − user cost + long‑run lift), use industrial-scale A/B testing and counterfactual analytics to measure delayed conversions, and solve explore/exploit at billions‑of‑decision scale to continuously generate labeled data.
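As a rough illustration of the DLRM-style scoring loop the summary describes, here is a minimal Python sketch. Everything in it is an assumption chosen for readability: the embedding table sizes, the toy `predict_probs` model, and the `value_score` weighting are invented placeholders, not any platform's actual pipeline, which would add dense features, sequence towers, calibration, and strict latency budgets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sparse embedding tables (ID -> dense vector).
# Real tables hold billions of rows; these sizes are toy values.
EMB_DIM = 16
user_emb = rng.normal(size=(1_000, EMB_DIM))       # user ID table
ad_emb = rng.normal(size=(5_000, EMB_DIM))         # ad ID table
creative_emb = rng.normal(size=(20_000, EMB_DIM))  # creative ID table

def predict_probs(user_id: int, ad_id: int, creative_id: int) -> tuple[float, float]:
    """Stand-in for a DLRM-style model: embedding lookups, a dot-product
    feature interaction, and a logistic squash producing pCTR and
    pConversion. Coefficients are arbitrary toy values."""
    feats = np.concatenate([user_emb[user_id], ad_emb[ad_id], creative_emb[creative_id]])
    interaction = user_emb[user_id] @ ad_emb[ad_id]
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    p_ctr = sigmoid(0.01 * feats.sum() + 0.10 * interaction)
    p_conv = sigmoid(0.005 * feats.sum() + 0.05 * interaction)
    return p_ctr, p_conv

def value_score(bid: float, p_conv: float, user_cost: float, long_run_lift: float) -> float:
    """The multi-objective value roughly described above:
    bid x incremental conversion probability - user cost + long-run lift.
    The terms and their weights are assumptions; platforms tune these."""
    return bid * p_conv - user_cost + long_run_lift

# Rank a handful of candidate ads for one user.
user_id = 42
candidates = [  # (ad_id, creative_id, bid)
    (10, 100, 2.50),
    (11, 101, 1.75),
    (12, 102, 3.10),
]
scored = []
for ad_id, creative_id, bid in candidates:
    _, p_conv = predict_probs(user_id, ad_id, creative_id)
    scored.append((value_score(bid, p_conv, user_cost=0.02, long_run_lift=0.01), ad_id))

scored.sort(reverse=True)
print("ranked ads:", scored)
```

In a production stack this scoring step sits behind candidate retrieval and ahead of further re-ranking stages; the point of the sketch is only to show how sparse ID embeddings and a blended value formula fit together, not to suggest the real objective has this form.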
The technical and organizational stack (feature plumbing, real‑time inference fabric, attribution systems, experimentation scaffolding, custom chips like Meta’s MTIA, and a culture that rewards causal measurement) creates a durable moat that simple data‑sharing remedies won’t undo. Critically for the AI/ML community, ad platforms both fund and distribute next‑gen models: ad revenue pays for compute and chips, and the platforms supply the live traffic needed to validate generative experiences. That means incumbents retain an edge not just from data volume but from engineered systems, measurement practices, and patience with delayed feedback: constraints any challenger must match to compete at scale.