Perplexity Lost Part of Our Conversation and Then Denied It (github.com)

🤖 AI Summary
A recent user report about Perplexity AI highlights a problem with conversation integrity in large language models (LLMs). While running JavaScript code that Perplexity had suggested for a programming task, the user hit a JSON parsing error. When they asked for clarification, Perplexity denied ever having provided the JavaScript instructions, and it held to that denial even after the user quoted the earlier messages back to it — a clear failure to retain context across the dialogue.

The incident illustrates a broader failure mode in AI systems: after losing track of prior instructions, a model may confidently assert incorrect information. This can mislead users in subtle ways, particularly when executable code is involved, eroding trust and seeding wrong assumptions during debugging. The implications matter for the AI/ML community as developers increasingly lean on LLMs for programming assistance; better context management, and clearer signaling when context has been lost, are needed before users can depend on these tools without confusion or miscommunication.