🤖 AI Summary
At CWI’s Secure Computation mini-symposium, Srini Devadas presented PAC Privacy, a practical, automated framework for controlling output privacy and an alternative to differential privacy (DP). Instead of deriving function-specific sensitivity bounds, PAC Privacy runs the function many times on random subsamples (Devadas illustrated with ~100 trials), measures the empirical variance of the outputs, and adds noise proportional to that variance before releasing a single output. This directly limits information leakage (e.g., it thwarts differencing attacks on averages) and satisfies a formal information-theoretic privacy guarantee. Because it requires no bespoke mathematical sensitivity analysis, PAC Privacy can be applied to arbitrary functions and complements multi-party computation (MPC): MPC protects how a result is computed, while PAC Privacy governs how much the result itself reveals.
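To make the subsample-measure-add-noise recipe concrete, here is a minimal Python sketch. It is an illustration under stated assumptions, not the published PAC Privacy algorithm: the names `pac_private_release`, `subsample_frac`, and `noise_scale` are hypothetical, and the real mechanism calibrates anisotropic noise from the estimated output covariance rather than this simplified per-coordinate scaling.

```python
import numpy as np

def pac_private_release(f, data, n_trials=100, subsample_frac=0.5,
                        noise_scale=1.0, rng=None):
    """Sketch of a PAC-Privacy-style release: probe f's output
    variance empirically, then add proportional Gaussian noise."""
    rng = rng or np.random.default_rng()
    n = len(data)
    k = max(1, int(subsample_frac * n))

    # Run f on many random subsamples to see how much its output
    # moves with the underlying data (~100 trials, per the talk).
    outputs = np.array([
        f(data[rng.choice(n, size=k, replace=False)])
        for _ in range(n_trials)
    ])

    # Empirical per-coordinate standard deviation of the outputs.
    sigma = outputs.std(axis=0)

    # Release one output with variance-proportional noise added.
    release = f(data[rng.choice(n, size=k, replace=False)])
    return release + rng.normal(0.0, noise_scale * sigma)

# Example: privately release an average, blunting differencing attacks.
data = np.random.default_rng(0).normal(50.0, 10.0, size=1000)
print(pac_private_release(lambda x: x.mean(), data))
```

Note that nothing here depends on the structure of `f`: the same loop works for a mean, a trained model’s weights, or any other black-box function, which is exactly why no bespoke sensitivity analysis is needed.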
Shweta Shinde’s keynote flagged worrying trends for hardware-based confidentiality: modern TEEs are evolving from TrustZone/PSP and SGX toward SEV-SNP, TDX, and Arm CCA, but designers are trading security for integration and performance, e.g., dropping Merkle-tree memory integrity checks and expanding the attack surface by extending TEEs to whole VMs, containers, and GPUs. Cloud providers increasingly rely on custom silicon (Azure Cobalt, AWS Nitro), which reduces the scope for independent verification. For AI/ML, these shifts mean both new practical tools (PAC Privacy + MPC) for disclosure-risk control and renewed caution when relying on TEEs for secure training or inference.
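For context on what those dropped integrity checks bought, here is a minimal Python sketch of Merkle-tree memory integrity: keep only a small root hash in trusted on-chip storage and detect any tampering with off-chip memory. The block layout and hash choice are illustrative, not any vendor’s actual design.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(blocks):
    """Hash tree over memory blocks; any single-bit change in any
    block propagates up and changes the root."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# A TEE that stores only the root on-chip can detect DRAM tampering:
memory = [b"block-0", b"block-1", b"block-2", b"block-3"]
trusted_root = merkle_root(memory)

memory[2] = b"tampered"                       # attacker rewrites DRAM
assert merkle_root(memory) != trusted_root    # integrity check fires
```

Removing this layer (as some VM-scale TEEs do for performance) means memory encryption alone must carry the load, and replay or splicing of ciphertext blocks may go undetected.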