Impact of misinformation from generative AI on user information processing: How people understand misinformation from generative AI
Authors: Donghee Shin, Amy Koerber, Joon Soo Lim
Source: New Media & Society, 2024, Volume 27, Issue 7
Abstract: This study examines the impact of artificial intelligence (AI) on the ways in which users process and respond to misinformation in generative artificial intelligence (GenAI) contexts. Drawing on the heuristic-systematic model and the concept of diagnosticity, our approach examines a cognitive model for processing misinformation in GenAI. The study's findings revealed that users with a high-heuristic processing mechanism, which affects positive diagnostic perception, were more likely to proactively discern misinformation than users with low-heuristic processing and low perceived diagnosticity. When exposed to misinformation from GenAI, users' perceived diagnosticity of misinformation can be accurately predicted by the ways in which they perform heuristic-systematic evaluations. With this focus on misinformation processing, this study provides theoretical insights and relevant recommendations for firms to become more resilient in protecting users from the detrimental impacts of misinformation.
Keywords: algorithmic effects on misinformation; algorithmic misinformation; ChatGPT; generative AI; heuristic-systematic process; misinformation-processing model
Compiled and translated by: Ren Yanlin, Liu Xin