Explainable AI affects understandability, trust and usability (chuniversiteit.nl)

🤖 AI Summary
A recent study examines the complex relationship between Explainable AI (XAI) and user perceptions of trust and usability, particularly in the context of combating disinformation. AI can amplify the spread of false information through deepfakes and misleading content, but it also offers opportunities for better detection and mitigation. The researchers used design science methods to build user-friendly XAI interfaces, establishing guidelines that emphasize clarity, simplicity, and user control in how AI decisions are explained.

Despite these intentions, the findings showed that integrating XAI did not significantly improve understandability or trust among users, which raises questions about the effectiveness of current XAI approaches. Instead, demographic factors such as age and baseline trust levels played a more decisive role in shaping user experience than the XAI features themselves. Older participants, for instance, reported lower trust and usability, suggesting that any benefit from explanatory tools may be overshadowed by user expectations and cognitive load.

The study therefore advocates a tailored approach to XAI design: make explanations optional, guard against cognitive overload, and refine interfaces continuously based on user feedback to serve diverse audiences. Overall, these insights may prompt a reevaluation of how XAI is implemented in disinformation detection systems, with greater attention to user adaptability and individual differences.