On Wednesday, renowned scientific journal Nature announced in an editorial that it will not publish images or video created using generative AI tools. The ban comes amid the publication’s concerns over research integrity, consent, privacy, and intellectual property protection as generative AI tools increasingly permeate the world of science and art.
Founded in November 1869, Nature publishes peer-reviewed research from various academic disciplines, mainly in science and technology. It is one of the world’s most cited and most influential scientific journals.
Nature says its recent decision on AI artwork followed months of intense discussions and consultations prompted by the rising popularity and advancing capabilities of generative AI tools like ChatGPT and Midjourney.
“Apart from in articles that are specifically about AI, Nature will not be publishing any content in which photography, videos or illustrations have been created wholly or partly using generative AI, at least for the foreseeable future,” the publication wrote in an editorial attributed to the journal itself.
The publication considers the issue to fall under its ethical guidelines covering integrity and transparency in its published works, which include being able to cite the sources of data within images:
“Why are we disallowing the use of generative AI in visual content? Ultimately, it is a question of integrity. The process of publishing — as far as both science and art are concerned — is underpinned by a shared commitment to integrity. That includes transparency. As researchers, editors and publishers, we all need to know the sources of data and images, so that these can be verified as accurate and true. Existing generative AI tools do not provide access to their sources so that such verification can happen.”
As a result, all artists, filmmakers, illustrators, and photographers commissioned by Nature “will be asked to confirm that none of the work they submit has been generated or augmented using generative AI.”
Part of the problem, Nature notes, is that the output of these tools cannot be traced back to its origins: each image is synthesized from millions of images fed into an AI model.
That fact also leads to issues concerning consent and permission, especially related to personal identification or intellectual property rights. Here, too, Nature says that generative AI falls short, routinely using copyright-protected works for training without obtaining the necessary permissions. And then there’s the issue of falsehoods: The publication cites deepfakes as accelerating the spread of false information.
However, Nature is not wholly against the use of AI tools. The journal will still permit the inclusion of text produced with the assistance of generative AI tools like ChatGPT, provided that appropriate caveats accompany it. The use of these large language model (LLM) tools must be explicitly documented in a paper’s methods or acknowledgments section, and authors must provide sources for all data, including data generated with AI assistance. The journal has firmly stated, though, that no LLM tool will be recognized as an author on a research paper.