Imago Obscura: An Image Privacy AI Co-Pilot to Identify and Mitigate Risks (cmu-spuds.github.io)

🤖 AI Summary
Imago Obscura is an intent-aware AI “co-pilot” for image privacy that helps people identify and mitigate privacy risks before sharing photos. Built after a formative study with seven image-editing experts, the system asks users to state their sharing intent and concerns, surfaces contextually relevant risks (e.g., faces, license plates, location cues), and then recommends and applies obfuscation strategies such as blurring, inpainting, and generative content replacement. A lab study with 15 participants using their own photos showed the tool improved users’ awareness of privacy threats and their ability to make safer sharing decisions.

Technically, Imago Obscura stitches together an ensemble of models into an open-source image editor: a vision model for object detection and annotation, a multimodal large language model that maps user intent to pertinent privacy risks, a segmentation model to precisely localize sensitive regions, and an image generation/inpainting model to apply automated obfuscations.

The human-centered, three-phase design (formative study → system build → user evaluation) demonstrates a practical pipeline for combining vision, LLM reasoning, and generative editing to operationalize image privacy. For AI/ML practitioners, the work is significant because it shows how multimodal model ensembles and intent-aware interfaces can translate privacy research into usable tools, and it provides an evaluated blueprint for integrating automated obfuscation into real-world workflows.
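To make the four-stage ensemble concrete, here is a minimal Python sketch of that detect → reason → localize → edit flow. Everything in it is an illustrative stand-in: the function and class names (`detect_objects`, `rank_risks`, `segment`, `obfuscate`, `Region`, `Risk`) are hypothetical, not the authors' API, and each stub would wrap a real model in practice (an object detector, a multimodal LLM, a segmenter, and a generative inpainter).

```python
# Hypothetical sketch of the model-ensemble pipeline described above.
# Every stage is a stub standing in for a real model; none of these
# names come from the Imago Obscura codebase.
from dataclasses import dataclass


@dataclass
class Region:
    label: str                      # e.g. "face", "license_plate"
    box: tuple[int, int, int, int]  # (x1, y1, x2, y2) in pixels


@dataclass
class Risk:
    region: Region
    rationale: str         # why this region is risky for this intent
    suggested_action: str  # "blur", "inpaint", or "replace"


def detect_objects(image: str) -> list[Region]:
    """Stage 1 (vision model): detect and annotate objects in the image."""
    return [Region("face", (40, 20, 120, 110)),
            Region("license_plate", (300, 400, 420, 440))]


def rank_risks(regions: list[Region], intent: str, concerns: str) -> list[Risk]:
    """Stage 2 (multimodal LLM): map the user's stated sharing intent and
    concerns to the subset of detected regions that are actually risky."""
    risky = [r for r in regions if r.label in {"face", "license_plate"}]
    return [Risk(r, f"identifiable given intent: {intent!r}", "inpaint")
            for r in risky]


def segment(region: Region) -> list[tuple[int, int]]:
    """Stage 3 (segmentation model): refine a bounding box into a
    pixel-level mask (here, trivially, every pixel in the box)."""
    x1, y1, x2, y2 = region.box
    return [(x, y) for x in range(x1, x2) for y in range(y1, y2)]


def obfuscate(image: str, mask: list[tuple[int, int]], action: str) -> str:
    """Stage 4 (generative model): blur, inpaint, or replace masked pixels."""
    return f"{image} [{len(mask)} px {action}ed]"


def copilot(image: str, intent: str, concerns: str) -> str:
    """End-to-end flow: detect -> reason about risk -> localize -> edit."""
    edited = image
    for risk in rank_risks(detect_objects(image), intent, concerns):
        edited = obfuscate(edited, segment(risk.region), risk.suggested_action)
    return edited


print(copilot("party.jpg", "post publicly on social media", "bystanders"))
```

The design point the sketch illustrates is that the multimodal LLM sits between detection and editing: the detector proposes everything visible, while intent-aware reasoning decides which regions matter for this user's sharing context before any pixels are touched.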