🤖 AI Summary
A large two-wave survey representative of the Swiss population (n1=1,514; n2=1,488) compared public attitudes toward AI before and after the GenAI boom sparked by ChatGPT and found a measurable drop in acceptance and a stronger demand for human oversight. After ChatGPT's launch, the share of respondents who said AI is "not acceptable at all" rose from 23% to 30%, while preference for human-only decision-making climbed from 18% to 26%. The study also documents widening social cleavages: educational, linguistic, and gender gaps in AI acceptance became more pronounced post-boom.

For the AI/ML community, these results challenge the assumption that broad exposure to generative models automatically increases public trust in, or readiness for, automated systems. The findings imply that developers and deployers must prioritize human-in-the-loop designs, clearer communication about capabilities and limits, and targeted engagement with underrepresented groups to avoid exacerbating inequalities. From a technical and policy perspective, the study strengthens the case for rigorous user-centered evaluation, transparency mechanisms, and governance that accounts for shifting societal preferences, especially in high-stakes decision contexts where demand for human oversight has risen markedly.