We Ran Agent User Research with Agents (It Worked) (image-mcp.com)

🤖 AI Summary
The team ran "agent user research" by spawning five sub-agents (Content Creator, Tech Documentation, Brand Explorer, Rapid Prototyper, Integration Evaluator) to use image-mcp—an image-generation service built for agent-driven workflows—without human guidance or extra docs. Each agent worked through realistic developer-style tasks and rated the tool; the average score rose to 7.75/10, up from 4/10 in prior tests. Key wins: very fast generation (sub-second to 3–5s per batch) that unlocks rapid iteration, inline compressed previews returned alongside URLs (the "both" parameter) so agents can inspect outputs without having to follow links the way a human would, and error messages that act as actionable documentation by telling agents which parameters or tools to use instead. The experiment also surfaced concrete technical friction: discovery is trial-and-error (482 models with poor signposting), the Judge/comparison service is disabled (blocking A/B comparisons), and parameter inconsistency forces agents to remember model-specific names (e.g., aspect_ratio: "16:9" on some models vs image_size: "landscape_16_9" on others, with some requiring image_size: "square"). The Fal.ai integration is deemed production-ready, and the remaining work is straightforward: better discovery UX, comparison tools, and parameter normalization. The takeaway: when your users are agents, letting agents test your system yields direct, actionable feedback and should be standard practice for agent-first products.
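The parameter-normalization point is concrete enough to sketch. Below is a minimal, hypothetical Python shim, not taken from the article: the model IDs ("model-a", "model-b", "model-c") are invented, while the parameter names and values (aspect_ratio: "16:9", image_size: "landscape_16_9", image_size: "square") are the ones the summary cites. It shows the kind of translation layer that would let an agent express one canonical aspect ratio and have it mapped onto each model's expected parameters.

```python
# Canonical aspect-ratio values mapped to the size names some models expect.
RATIO_TO_SIZE_NAME = {
    "16:9": "landscape_16_9",
    "1:1": "square",
}

# Hypothetical model IDs -> how each model wants the aspect ratio expressed.
CANONICAL_TO_MODEL_PARAMS = {
    "model-a": lambda ratio: {"aspect_ratio": ratio},                    # takes "16:9" directly
    "model-b": lambda ratio: {"image_size": RATIO_TO_SIZE_NAME[ratio]},  # wants "landscape_16_9"
    "model-c": lambda ratio: {"image_size": "square"},                   # only supports square output
}

def build_generation_params(model_id: str, ratio: str, prompt: str) -> dict:
    """Translate a canonical (prompt, ratio) request into model-specific parameters."""
    try:
        model_params = CANONICAL_TO_MODEL_PARAMS[model_id](ratio)
    except KeyError:
        raise ValueError(f"unknown model or unsupported ratio: {model_id!r}, {ratio!r}")
    return {"prompt": prompt, **model_params}

# The same "16:9" request produces a different parameter shape per model.
print(build_generation_params("model-a", "16:9", "a lighthouse at dusk"))
# {'prompt': 'a lighthouse at dusk', 'aspect_ratio': '16:9'}
print(build_generation_params("model-b", "16:9", "a lighthouse at dusk"))
# {'prompt': 'a lighthouse at dusk', 'image_size': 'landscape_16_9'}
```

With a shim like this on the service side, agents would never need to memorize per-model parameter names, which is exactly the normalization work the summary flags as remaining.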