Credit and blame in the age of AI: who deserves recognition for AI-generated content?
12 February 2025
Artificial intelligence (AI) has rapidly evolved, becoming a tool that assists with creative and intellectual tasks, from writing to programming. But as AI-generated content proliferates, a crucial ethical question emerges: Who deserves credit for beneficial AI-assisted work, and who should bear responsibility for harmful outcomes?
A recent study published in the Annals of the New York Academy of Sciences explored this issue by analyzing how people in four countries (China, Singapore, the United Kingdom, and the United States) attribute credit and blame to human users of AI. The study focused on the role of personalization, in which an AI model is fine-tuned on an individual's previous work, and its effect on moral responsibility.
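For readers wondering what "fine-tuned on an individual's previous work" looks like in practice, the sketch below assembles a user's past blog posts into a JSONL fine-tuning dataset in the chat format used by services such as OpenAI's fine-tuning endpoint. It is a minimal illustration of the general technique, not the procedure used in the study; the folder name, prompt wording, and file layout are hypothetical.

```python
# Illustrative sketch: turning a user's past writing into a fine-tuning
# dataset so a model learns to imitate their personal style.
# The directory and prompts below are hypothetical, not study materials.
import json
from pathlib import Path

POSTS_DIR = Path("my_past_posts")  # hypothetical folder of the user's old posts
OUTPUT_FILE = "personalization_dataset.jsonl"

examples = []
for post_file in sorted(POSTS_DIR.glob("*.txt")):
    text = post_file.read_text(encoding="utf-8")
    title = post_file.stem.replace("_", " ")
    # One training example per post: the prompt asks for a post on the
    # topic, and the user's actual article serves as the target output.
    examples.append({
        "messages": [
            {"role": "system", "content": "You write in this author's personal style."},
            {"role": "user", "content": f"Write a blog post titled: {title}"},
            {"role": "assistant", "content": text},
        ]
    })

# JSONL: one JSON object per line, the format commonly expected by
# chat-model fine-tuning APIs.
with open(OUTPUT_FILE, "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")

print(f"Wrote {len(examples)} training examples to {OUTPUT_FILE}")
```

In the study's framing, a model trained on data like this counts as "personalized," in contrast to an off-the-shelf model that everyone uses in the same form.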
Credit Attribution: AI Use Diminishes Perceived Human Effort
The study found that when individuals used a standard AI model (e.g., ChatGPT) to generate a beneficial blog post, they were given significantly less credit than if they had written the post themselves. Participants judged that the AI had done most of the work, which reduced the human's perceived contribution in terms of effort and creativity.
However, personalization played a significant role in restoring credit. When users employed a personalized AI—one fine-tuned on their past writings—participants attributed more credit to them. This suggests that people see outputs from personalized AI as more reflective of a user’s own intellectual effort, rather than simply the AI’s work.
Interestingly, credit attributions for personalized AI-assisted content were statistically similar to credit for human-only work, indicating that personalization might help bridge the so-called achievement gap, where AI use can diminish human recognition.
Blame Attribution: The Standards Remain the Same
When it came to harmful AI-generated content—such as disinformation—people attributed similar levels of blame to human users regardless of whether they used a standard or personalized AI model. This aligns with the idea that responsibility for negative outcomes remains with the human user, even when AI is involved.
One notable cultural difference emerged: participants from China tended to assign slightly more blame when harmful content was produced using personalized AI. This might reflect a perception that personalized AI systems reflect their users’ past biases or shortcomings, making them more accountable for the final output.
Cultural Differences in Perceptions of AI Responsibility
The study also revealed national differences in how people judge AI use. In the UK, participants were more inclined to blame individuals for using AI in any capacity, even if it was not personalized. This suggests a stronger ethical stance against AI involvement in content creation compared to other countries in the study.
Meanwhile, Chinese participants did not show a significant difference in their credit attributions between personalized and standard AI, suggesting a more neutral or pragmatic approach to AI-assisted achievements.
Implications for AI Ethics and Policy
These findings carry important ethical and policy implications. As personalized AI systems become more common, they may help users retain credit for their work, but the potential for blame remains unchanged. This has consequences for fields like journalism, law, and academia, where authorship and accountability are critical.
The study suggests that AI personalization could mitigate some of the concerns about AI replacing human creativity by ensuring that users still receive recognition for their contributions. However, it also raises concerns about the ethical use of AI, as human users cannot evade responsibility for harmful content, even if AI plays a role in its generation.
Final Thoughts: Navigating AI’s Role in Human Creativity
As AI tools continue to integrate into creative and professional domains, society must grapple with evolving notions of authorship and accountability. This study underscores the need for clear guidelines on AI-assisted work—both in recognizing human contributions to beneficial content and ensuring accountability for misuse.
With personalized AI offering a potential middle ground, the challenge ahead lies in balancing technological advancement with ethical responsibility. Understanding how different cultures and societies view AI-generated content will be crucial for shaping fair policies and practices in the AI-driven future.
This blog is based on the paper: Earp, B. D., Porsdam Mann, S., Liu, P., Hannikainen, I., Khan, M. A., Chu, Y., & Savulescu, J. (2024). Credit and blame for AI-generated content: Effects of personalization in four countries. Annals of the New York Academy of Sciences, 1542, 51–57. https://doi.org/10.1111/nyas.15258. We were assisted by AI in the writing of this blog.