Notes on "Prothean AI" (aphyr.com)

🤖 AI Summary
Prothean Systems claimed last week to have achieved “Emergent General Intelligence (EGI),” boasting 100% accuracy on “all 400 tasks” of the ARC-AGI-2 challenge in 0.887 seconds and inviting researchers to verify the work. A close read of their public materials turns up multiple falsehoods and pseudoscience: ARC-AGI-2 actually provides 1,000 public training tasks and 120 evaluation tasks (not 400); the dataset repository they cite is the wrong one; and the demo, claimed to be “local” with “no uploads,” actually issues Wikipedia queries and multiple reads and writes to a Firebase backend. The announcement also appears to solicit investors while withholding the promised reproducibility repository.

Technical artifacts in the white paper and demo reveal further red flags:
- “Memory DNA” advertises a flashy nine-tier compression pipeline, but the demo performs a single LZString.compressToUTF16 call.
- The “Guardian” integrity filter is just regex checks (email addresses, 16-digit card numbers, keywords like password/api_key) with a trivial threshold.
- The “Universal Pattern Engine” computes “semantic distance” from string lengths and selects from a fixed list of bridges.
- The “Radiant Data Tree” uses an impossible φ^n depth formula.
- The “transcendence score” applies a modular wrap (×φ mod 1.0), making the metric non-monotonic.

Collectively, these issues, along with the prose style and commit patterns, suggest heavy LLM authorship and confabulation. For the AI community this is a textbook case for skepticism: demand reproducible code, correct dataset references, verifiable benchmarks, and sanity-checking of the math and implementation before accepting extraordinary AGI claims.
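The article characterizes “Guardian” as nothing more than a handful of regexes with a trivial threshold. A minimal sketch of that kind of filter (the pattern set and function names here are my own illustrative guesses, not Prothean’s actual code) shows how little it takes:

```python
import re

# Hypothetical reconstruction of a "Guardian"-style integrity filter:
# a few regexes plus a naive threshold. Patterns are illustrative guesses.
PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),         # email addresses
    re.compile(r"\b\d{16}\b"),                      # 16-digit card numbers
    re.compile(r"password|api_key", re.IGNORECASE), # sensitive keywords
]

def guardian_score(text: str) -> float:
    """Fraction of patterns that match -- a trivial 'integrity' metric."""
    hits = sum(1 for p in PATTERNS if p.search(text))
    return hits / len(PATTERNS)

def is_flagged(text: str, threshold: float = 0.3) -> bool:
    # One match out of three already clears a 0.3 threshold.
    return guardian_score(text) >= threshold
```

A system like this flags “my password is hunter2” and waves through essentially everything else, which is the point the article makes: regex matching is not an “integrity” analysis.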
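The “semantic distance from string lengths” claim is easy to see through with a sketch (again my own reconstruction of the described behavior, not their code): any two unrelated strings of equal length come out “semantically identical.”

```python
# Length-based "semantic distance" as described in the article:
# the result depends only on string lengths, not meaning.
def semantic_distance(a: str, b: str) -> float:
    if not a and not b:
        return 0.0
    return abs(len(a) - len(b)) / max(len(a), len(b))

print(semantic_distance("cat", "dog"))       # 0.0 -- "identical", despite no shared meaning
print(semantic_distance("cat", "category"))  # 0.625 -- "distant", despite related meaning
```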
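The non-monotonicity of the “transcendence score” follows directly from the ×φ mod 1.0 wrap the article quotes; this sketch demonstrates it with concrete numbers:

```python
# The "transcendence score" reportedly wraps a value through (x * phi) mod 1.0.
# The modular wrap makes the metric non-monotonic: a larger input can
# produce a smaller score, so the number is useless as a quality measure.
PHI = (1 + 5 ** 0.5) / 2  # golden ratio, ~1.618

def transcendence(x: float) -> float:
    return (x * PHI) % 1.0

print(transcendence(0.5))  # ~0.809
print(transcendence(0.7))  # ~0.133 -- higher input, lower "transcendence"
```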