Man vs. machine: Experimental evidence on the quality and perceptions of AI-generated research content
Citation
Keenan, Michael; Koo, Jawoo; Mwangi, Christine; Karachiwalla, Naureen; Breisinger, Clemens; and Kim, MinAh. 2024. Man vs. machine: Experimental evidence on the quality and perceptions of AI-generated research content. IFPRI Discussion Paper 2321. Washington, DC: International Food Policy Research Institute. https://hdl.handle.net/10568/169363
Abstract/Description
Academic researchers want their research to be understood and used by non-technical audiences, but that requires more accessible communication in the form of shorter, non-technical summaries. The researcher must both signal the quality of the research and ensure that the content is salient by making it more readable. AI tools can improve salience; however, they can also blur the quality signal, since true effort becomes difficult to observe. We implement an online factorial experiment that provides non-technical audiences with a blog post about an academic paper, varying both the actual author of the post (human or ChatGPT, each summarizing the same paper) and whether respondents are told the post was written by a human or an AI tool. Even though AI-generated blogs are objectively of higher quality, they are rated lower, but not when the author is disclosed as AI, indicating that signaling is important and can be distorted by AI. Use of the blog does not vary by experimental arm. The findings suggest that, provided disclosure statements are included, researchers can potentially use AI to reduce effort costs without compromising signaling or salience.
Author ORCID identifiers
Jawoo Koo https://orcid.org/0000-0003-3424-9229
Naureen Karachiwalla https://orcid.org/0000-0001-6662-106X
Clemens Breisinger https://orcid.org/0000-0001-6955-0682