Is ChatGPT and OpenAI Stealing Ideas? (medium.com)

🤖 AI Summary
HugstonOne says it built a novel "memory toggle" that lets users switch between a single-layer memory (the LLM drawing only on a local agent) and a dual-layer system that combines the LLM's internal memory with persistent local storage on the user's device. The team reports a clear technical tradeoff: with dual memory enabled, token generation dropped to ~40–50 tokens/sec versus 80+ tokens/sec on the single-layer path (direct streaming to the chat bubble). The slowdown was traced to the extra logic and pathways required to synchronize and consult two memory layers, even though local storage remained active in both modes.

The feature follows an earlier episode in which HugstonOne prototyped an embedded code editor, using GPT-5 for tuning, then saw similar capabilities appear in ChatGPT days later. Beyond the engineering, HugstonOne raises a broader IP and ethics alarm: the team had not shared its code publicly and is on a paid ChatGPT plan that forbids training on its data, yet saw a near-identical memory-on/off UI roll out "too quickly" to be coincidental.

The company frames this as a structural risk for startups: if platform tools or scraped materials can be used to replicate innovations without consent or credit, the incentive to build shrinks. The incident underscores hard questions for the AI community and regulators about data-usage policies, transparency in model development, and enforceable boundaries that protect small teams while preserving open innovation. HugstonOne says it will document features more rigorously and monitor the market as it presses for clarity.
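The described tradeoff, that consulting and synchronizing a second persistent layer on every token costs throughput, can be sketched in a few lines. This is an illustrative toy, not HugstonOne's implementation; all names (`MemoryToggle`, `LocalStore`, `consume_token`) are assumptions made for the example:

```python
class LocalStore:
    """Stand-in for persistent on-device storage (e.g. a file or DB)."""

    def __init__(self):
        self.records = []

    def append(self, text):
        self.records.append(text)

    def recall(self, n=3):
        # Return the most recent n records.
        return self.records[-n:]


class MemoryToggle:
    """Toy model of a single-layer vs dual-layer memory path."""

    def __init__(self, dual_layer=False):
        self.dual_layer = dual_layer
        self.store = LocalStore()   # persistent local layer
        self.session = []           # the LLM's in-context memory

    def consume_token(self, token):
        """Route one generated token through the active memory path."""
        self.session.append(token)
        if self.dual_layer:
            # Dual-layer path: persist the token AND consult the store,
            # adding per-token work that the single-layer path skips --
            # the kind of overhead the article blames for the slowdown.
            self.store.append(token)
            _ = self.store.recall()
        return token


single = MemoryToggle(dual_layer=False)
dual = MemoryToggle(dual_layer=True)
for tok in ["hello", "world"]:
    single.consume_token(tok)
    dual.consume_token(tok)
```

In this sketch both instances keep the session memory, but only the dual-layer one pays the extra write-and-lookup per token, mirroring the report that local storage stayed active in both modes while only the dual path slowed generation.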