April 4, 2023
At a Glance
- Cambridge University Press unveils AI ethics principles to maintain academic standards amid a rise in tools like ChatGPT
- AI can’t receive authorship credit and any use of AI tools must be clearly stated in publications
A new set of AI ethics policies from Cambridge University Press has banned AI from being treated as an author of academic papers and books that it publishes.
The publishing house said the guidelines are aimed at maintaining academic standards as tools like ChatGPT raise concerns about plagiarism, originality and accuracy.
“It’s obvious that tools like ChatGPT cannot and should not be treated as authors,” said Mandy Hill, managing director for academic publishing at Cambridge University Press.
Hill added: “We want our new policy to help the thousands of researchers we publish each year and their many readers. We will continue to work with them as we navigate the potential biases, flaws, and compelling opportunities of AI.”
The newly unveiled principles dictate that authors are accountable for the originality, integrity and accuracy of their work. The use of AI must also be stated in the research papers, just as methodologies, software and tools are already clearly stated in papers.
Any use of AI must not breach the publishing house’s plagiarism policy, meaning scholarly works must be the author’s own and not present others’ ideas, data, words or other material without “adequate citation and transparent referencing.”
“Generative AI introduces many issues for academic researchers and educators,” according to Cambridge University Press’ series editor, R. Michael Alvarez.
Alvarez, who uses AI and language models in his research to analyze online trolling, said that academics and authors will “be having this conversation about the opportunities and pitfalls presented by generative AI for academic publishing for many years to come.”