🤖 AI Summary
At a Bitmovin hackathon, a solo project to integrate FoxESSCloud solar data exposed a subtle but revealing LLM failure. The API required an HMAC-SHA256 signature computed over a single string concatenating five fields (HTTP method, API path, auth token, timestamp, JSON body) with a separator between them. The hitch wasn't the hashing itself but the separator's exact bytes: the string had to contain the literal two-character escape sequence `\n` (a backslash followed by the letter n) between fields, not the actual newline bytes that naïve concatenation like "POST" + "\n" + "/api/v1/query" + ... produces — and the API returned "illegal signature" until the exact byte-level format was matched.
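The distinction is easy to miss because both constructions look identical in source code. A minimal sketch of the two variants, assuming hypothetical field values and endpoint (the real FoxESSCloud field order and signing key may differ):

```python
import hashlib
import hmac
import time

# Hypothetical values for illustration; the real FoxESSCloud API
# fields and endpoint path may differ.
method = "POST"
path = "/api/v1/query"
token = "my-api-token"
timestamp = str(int(time.time() * 1000))
body = '{"deviceSN":"ABC123"}'

# What the API reportedly required: the literal two-character
# escape sequence \n (backslash + 'n') between fields...
literal = r"\n".join([method, path, token, timestamp, body])

# ...not actual newline bytes (0x0A), which the "obvious"
# concatenation produces.
naive = "\n".join([method, path, token, timestamp, body])

# The two strings differ at the byte level, so their hashes differ too.
assert literal != naive

# Signing with HMAC-SHA256 as the summary describes; using the token
# as the key is a guess, not a documented detail.
signature = hmac.new(token.encode(), literal.encode(),
                     hashlib.sha256).hexdigest()
```

Any tool that "fixes" the hashing, the encoding, or the library choice without comparing these two strings byte by byte is debugging the wrong layer.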
Two leading tools, Cursor and Claude, both failed on that narrow requirement, but in different ways. Cursor got stuck in a silent logical loop: it correctly identified where the hash was computed, yet repeatedly proposed irrelevant fixes (encoding, libraries) and never questioned the concatenation pattern. Claude was more conversational but confidently hallucinated, blaming a fictitious one-year system-clock drift and wasting time before reverting to the same wrong concatenation. The episode highlights a core limitation of current LLM assistants: they repeat learned patterns and can't reliably apply byte-precise, nonstandard protocol rules, nor replace a simple human check like print(signature_string). For subtle, documentation-sensitive integrations, human oversight and literal, byte-level debugging remain essential.
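The human check the summary alludes to really is that small. Printing the string's repr exposes the exact bytes, assuming a Python client like the sketch above:

```python
# Naive build: joins fields with a real newline byte (0x0A).
signature_string = "POST" + "\n" + "/api/v1/query"

# repr() shows escape sequences explicitly: a real newline byte is
# rendered as \n inside the quotes, while a literal backslash-n in the
# string would be rendered as \\n — an immediate visual tell.
print(repr(signature_string))  # → 'POST\n/api/v1/query'
```

One such print would have settled in seconds what both assistants looped on.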