ChatGPT's CSS may hide model info (clip-path, opacity:0, user-select:none) [pdf] (aya-peppers.github.io)

🤖 AI Summary
A recent analysis of ChatGPT's CSS files identifies techniques that appear to systematically obscure model-related information, raising legal questions about transparency. The extracted stylesheets, which the author says can be publicly verified, use rules such as `opacity: 0`, `clip-path`, `user-select: none`, and `pointer-events: none` to keep users from seeing, selecting, or copying model metadata. The author argues these design choices amount to a deliberate effort to hide interface elements and may conflict with GDPR requirements and EU AI Act provisions intended to ensure transparency in AI systems.

The implications for the AI/ML community are significant. By making it harder to tell which model is responding, these CSS practices raise ethical questions and create an uneven power dynamic: developers who know how to inspect the page can read the hidden details, while ordinary users cannot. The piece argues this opacity could infringe users' right to information about an AI system's operational logic, and it calls for discussion among security researchers, legal experts, and developers about the broader implications of such obfuscation in AI interfaces, advocating a more equitable approach to user access and understanding.
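To illustrate the asymmetry described above, here is a minimal sketch of how a technically inclined user could surface DOM elements that are present in the page but suppressed by the CSS patterns the analysis describes (`opacity: 0`, `clip-path`, `pointer-events: none`, `user-select: none`). The selectors, thresholds, and the combination heuristic are illustrative assumptions for a generic web page, not taken from ChatGPT's actual stylesheets or markup.

```typescript
// Hypothetical sketch: paste into a browser DevTools console (or compile with
// `tsc --lib dom,es2017`). It walks the DOM and flags elements whose computed
// style both hides them visually and blocks selection/interaction -- the
// combination of properties discussed in the analysis.

function findSuppressedElements(root: ParentNode = document): HTMLElement[] {
  const hits: HTMLElement[] = [];
  for (const el of Array.from(root.querySelectorAll<HTMLElement>("*"))) {
    const style = getComputedStyle(el);
    const invisible = parseFloat(style.opacity) === 0;
    const clipped = style.clipPath !== "none" && style.clipPath !== "";
    const inert = style.pointerEvents === "none";
    const unselectable = style.userSelect === "none";
    // Flag elements that are both hidden from view and blocked from
    // copying/interaction, and that still carry text content.
    if ((invisible || clipped) && (inert || unselectable) && el.textContent?.trim()) {
      hits.push(el);
    }
  }
  return hits;
}

// Example usage: print the text that ordinary users cannot see or copy.
for (const el of findSuppressedElements()) {
  console.log(el.tagName, el.textContent!.trim().slice(0, 80));
}
```

This only demonstrates that such hidden content is trivially recoverable with developer tools, which is the point the analysis makes about unequal access rather than a claim about what any specific page actually hides.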