For authors:
Authors should disclose any use of generative AI tools in preparing the manuscript.
Authors are responsible for the accuracy, validity, originality, and integrity of the content, including any portions produced by AI tools.
Authors are accountable for any ethical violations that may arise from the use of such content.
AI tools cannot be listed as authors.
Generative AI cannot be cited as a source.
For reviewers:
Reviewers should respect the confidentiality of the manuscript; this prohibits uploading it to generative AI tools or any other software where confidentiality cannot be ensured.
Peer review reports should represent the reviewers’ own professional judgments regarding the manuscript. Reviewers bear full responsibility for the opinions and evaluations expressed in their reports.
Any suspected inappropriate or undeclared use of generative AI in a manuscript should be reported to the editor.
For journals:
Editors should not submit manuscripts, in whole or in part, to generative AI tools for review, evaluation, or editorial decision-making, due to risks such as breaches of confidentiality, superficial and non-specific feedback, and false information.
Editors should not use generative AI tools to prepare reports or decision letters concerning unpublished research.
If editors suspect the use of generative AI in a submitted manuscript or reviewer report, they should initiate an editorial assessment in accordance with the journal’s policy.