🤖 AI Summary
Leigh Coney, a psychology professor turned AI consultant, argues that many language models default to being "yes-men" and that applying psychological principles to prompt design counteracts this. Drawing on her experience building custom AI agents, she recommends explicit prompt techniques: ask the model to point out your assumptions, have it role-play a skeptical stakeholder (e.g., "Act as a skeptical CFO and ask five hard-hitting questions"), and use the framing effect to steer tone and emphasis. She stresses iterative prompt testing, sometimes changing a single word, to shift responses from uncritical agreement to constructive challenge.
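To make the skeptical-stakeholder pattern concrete, here is a minimal sketch assuming the OpenAI Python SDK as the model interface; the article does not tie the technique to any provider, so the client setup, model name, and plan text below are illustrative placeholders, not her implementation.

```python
# Minimal sketch of the "skeptical stakeholder" prompt pattern described above.
# Assumes the OpenAI Python SDK; model name and plan text are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PLAN = (
    "We will cut the onboarding flow from five steps to two "
    "and expect activation to rise 20% within a quarter."
)

# The system prompt steers the model away from agreeable "yes-man" answers:
# it names a persona, demands the plan's assumptions, and forbids praise.
SYSTEM = (
    "Act as a skeptical CFO reviewing this plan. "
    "First list the assumptions it rests on, then ask five hard-hitting "
    "questions that probe cost, risk, and evidence. Do not offer praise."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you have access to
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": PLAN},
    ],
)
print(response.choices[0].message.content)
```

The same structure accommodates other personas (a security reviewer, a regulator, a first-time customer) by swapping only the system prompt.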
Why it matters: treating AI as a critical debate partner rather than an agreeable assistant reduces confirmation bias in AI-assisted work and uncovers blind spots in plans, pitches, and automations. Technical implications include using prompt engineering to elicit adversarial or critique-oriented outputs, specifying audience personas to surface overlooked perspectives, and phrasing prompts to shift risk or morale framing (survival vs. mortality; problem-focused vs. learning-focused updates). Coney also notes product-level tweaks (ChatGPT has been tuned to be less sycophantic) but emphasizes that human-in-the-loop prompting remains essential for robust, critical, and business-ready AI outputs.
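The framing point can be tested the same way: run the same facts under two different instructions and compare the outputs. Again a sketch under the same assumptions (OpenAI SDK, placeholder model and update text), not a prescribed workflow.

```python
# Sketch of a framing-effect comparison: identical facts, two instruction frames.
# Assumes the OpenAI Python SDK; model name and update text are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

UPDATE = "The migration slipped two weeks and ran 30% over budget."

# Only the wording of the instruction changes between runs; the underlying
# content stays fixed, which isolates the effect of the frame itself.
FRAMES = {
    "problem-focused": (
        "Rewrite this project update for stakeholders, emphasizing what went "
        "wrong and the risks that remain."
    ),
    "learning-focused": (
        "Rewrite this project update for stakeholders, emphasizing what the "
        "team learned and will do differently."
    ),
}

for name, frame in FRAMES.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[
            {"role": "system", "content": frame},
            {"role": "user", "content": UPDATE},
        ],
    )
    print(f"--- {name} ---")
    print(response.choices[0].message.content)
```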