Generative AI Use Policy

International Journal on Management Education and Emerging Technology (IJMEET) acknowledges the rapid advancement of generative AI tools (such as ChatGPT, Bard, and Copilot) and their potential role in academic research and writing. To maintain transparency, academic integrity, and ethical standards in scholarly publishing, IJMEET adopts the following guidelines on the use of generative AI:

  • For Authors:
    • Disclosure Requirement: Authors must explicitly disclose if generative AI tools were used in the preparation of the manuscript (e.g., drafting, editing, or data analysis support).
    • Human Accountability: AI tools must not be credited as authors. Only individuals who meet authorship criteria may be listed as authors.
    • Content Review: Authors are fully responsible for verifying the accuracy, originality, and ethical compliance of any AI-assisted content included in the manuscript.
    • No Full AI-Generated Manuscripts: IJMEET does not accept fully AI-generated submissions. Manuscripts must be predominantly authored by humans.
  • For Peer Reviewers:
    • Confidentiality Assurance: Reviewers must not use generative AI tools to read, summarize, or comment on confidential manuscript submissions.
    • Reviewer Responsibility: All reviews must be written by the assigned reviewer, without assistance from AI tools.
  • For Editors: Editors must ensure that all editorial decisions are made independently and are not delegated to AI systems. They may use AI tools for administrative tasks (e.g., grammar checks) but remain accountable for all editorial outcomes.
  • Misuse: Failure to disclose AI usage or improper reliance on AI tools may result in manuscript rejection, retraction, or other actions consistent with COPE guidelines.