
AI Guidelines for Research

All researchers at George Mason University (faculty, staff, and students) are responsible for ensuring that AI is used responsibly in research, in accordance with these guidelines. These guidelines apply to all sponsored and non-sponsored research conducted at Mason. Principal Investigators are responsible for ensuring that all members of their team adhere to these guidelines.

AI guideline documentation will continue to evolve as tools, student expectations, and pedagogical approaches change.

Accountability 

Researchers using AI tools for scholarship activities are fully accountable for all aspects of the research process, including verifying the accuracy and integrity of generated content. Researchers are accountable for their decision to use a tool and how they choose to use it. Judging whether a particular AI tool is suitable for a specific research purpose may require a sophisticated understanding of how the tool works.

Security and Confidentiality

Uploading research data, grant proposals, confidential and/or non-anonymized interview transcripts, or analytical results into an AI tool could publicly disclose that content. Researchers must be aware of and follow all data-security and privacy requirements appropriate to their project and to the sensitivity of the data. Research involving or using sensitive data requires a protected AI environment that does not share information outside the project.

Explainability

Researchers are responsible for appropriately explaining the inputs, workings, and outputs of an AI method or tool to the academic community.

Disclosure

Researchers must be transparent in disclosing their use of AI tools in research. Researchers involved in proposing, reviewing, performing, or disseminating research are responsible for abiding by the policies and standards governing the use of AI in their field of study and in the venue of dissemination.

Mitigating Biases 

Researchers should make reasonable attempts to understand, evaluate, and mitigate biases in AI tools and their data stores.

Authorship 

AI models should not be listed as co-authors or cited as authors because they cannot take responsibility for the accuracy and integrity of the work. See the Committee on Publication Ethics (COPE) guidelines on authorship.

Peer Review

When reviewing a proposal submitted for consideration to a funding agency, or when reviewing a manuscript submitted for consideration at a symposium, conference, or journal, follow the guidelines provided by the specific scientific venue or the funding agency. Be aware that some agencies, such as the National Institutes of Health, currently prohibit AI use in peer-review processes. When in doubt, always ask.

Use of Research Software Incorporating AI

Researchers should fully and explicitly include any use of AI in research processes in their Institutional Review Board (IRB) protocols and consult with Information Technology Services (ITS) to understand levels of risk. Researchers should be able to answer questions about whether AI tools will retain the input prompts and data, must understand how any analysis is being conducted, and must protect against security risks.

Record-keeping 

Researchers should save and maintain appropriate records of prompts entered and output generated by AI tools to document their research conduct and support replicability.
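
Record-keeping can be as lightweight as appending each prompt and response to a structured log. The Python sketch below is one minimal illustration, not a Mason standard: the log_ai_interaction helper, the field names, and the file location are hypothetical and should be adapted to the project's data-management plan.

import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical log location; in practice it should live with the
# project's other research records and be covered by its backup plan.
LOG_FILE = Path("ai_research_log.jsonl")

def log_ai_interaction(tool: str, model: str, prompt: str, output: str) -> None:
    """Append one prompt/output record to a JSON Lines audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,      # product or service used (illustrative field)
        "model": model,    # model/version string reported by the tool
        "prompt": prompt,
        "output": output,
        # A content hash makes later truncation or edits detectable.
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

if __name__ == "__main__":
    log_ai_interaction(
        tool="example-chat-tool",    # hypothetical tool name
        model="example-model-v1",    # hypothetical model identifier
        prompt="Summarize these (non-sensitive) field notes.",
        output="(model response recorded here)",
    )

Keeping the records in an append-only JSON Lines file makes each interaction self-describing and timestamped, which supports both documentation of research conduct and later replication.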


AI Tool Questions 

  • Tool selection: What options are available? What are their pros and cons? 
• Is your intended use covered by a Mason license? If not, what license would you need and what are the steps to procure it? 
  • Is Architectural Standards Review Board (ASRB) review required for your intended use? 
  • What computational resources are needed for your intended use?

Data Questions 

What are the inputs for your intended use? Do they involve any of the following: 

  • Sensitive data (e.g., CUI, PII, or HIPAA- or FERPA-protected data) 
  • Confidentiality requirements 
  • Privacy requirements 
  • Copyrighted and/or proprietary information 
  • Contractually specified data-management or data-security requirements (e.g., data use agreements, or terms and conditions specific to a sponsored project)

Use of AI in Research Checklist

  • Have you ensured that no confidential information is being entered into AI tools? 
  • Have you verified the accuracy and integrity of all generated content? 
  • Have you disclosed all uses of AI at all stages of the research process? 
  • Have you made explicit attempts to mitigate biases in generated content? 
  • Have you adhered to the Committee on Publication Ethics (COPE) guidelines for authorship? 
  • Have you checked your agency's guidelines for AI use in peer-review processes? 
  • Have you worked with the IRB and ITS to ensure that all AI tools meet expectations for confidentiality and security? 
  • Have you checked whether your use of AI is permitted by all data agreements and security requirements associated with your project?

George Mason Resources

  • The Architectural Standards Review Board (ASRB) is responsible for reviewing, verifying compliance, and providing recommendations with regard to new and upgraded software and hardware procurement projects. 
  • DDSS works with students, faculty, and staff by providing research support to facilitate data-focused research and teaching across the university in all disciplines. 
  • The Information Technology Security Office (ITSO) provides computer users and system administrators with the tools and information they need to secure their systems. 
  • The Institutional Review Board (IRB) provides review and oversight of all human-subjects research at the university and is charged with protecting the rights and welfare of human subjects recruited to participate in research in which the university is engaged. 
  • The Office of Research Computing (ORC) coordinates and supports the computing-related needs of the university's research community. 
  • The Office of Research Integrity and Assurance (ORIA) is responsible for compliance with federal and other relevant regulations pertaining to the ethical and responsible conduct of research (e.g., IRB, IACUC, Export Controls, Conflict of Interest).