Employees, AI, and AI Employees (slobodan.me)

🤖 AI Summary
Two founders documented a 36‑day experiment, spread over five months, in which they tried to build an "AI cofounder" (CofounderGPT), and concluded that it failed: not for lack of clever demos, but because current LLMs lack sustained agency, legal standing, emotions, and genuine responsibility. The piece argues that while models and tools (GPT-5 Pro, Claude Opus, Cursor, Claude Code, Codex) have dramatically improved coding, reasoning, and tool integration, key technical limits remain (context window size, robust multi‑step reasoning, reliable tool chaining, and verifiable decision‑making) that prevent an AI from acting as a full cofounder who can accept consequences or steer long‑term strategy. For the AI/ML community this frames a practical roadmap: most software engineering activities will be augmented (faster research, scaffolding, tests, API schemas) or commoditized (routine code, payloads, CI pipelines), while higher‑order work (intent and prioritization, domain modeling, security tradeoffs, incident response, and long‑term architecture) will stay human‑centric for the foreseeable future. Using a Wardley‑style evolution map, the author predicts a gradual shift from "employee + AI" (custom tool use) toward more standardized AI employees, but warns that legal, ethical, and reliability questions must be solved before knowledge work can truly be automated.