Why you should be careful

Most of us want to do research ethically. So it's not surprising that I'm often approached by doctoral researchers with a question: "Would it be ethical to do X?" They're typically seeking a clear answer—yes or no. However, the reality is more complicated.

That is what makes ethical decisions different from simply following rules, policies, and guidelines. When there's an established policy—for instance, your university's guidelines on data storage—there isn't much space for ethical decision-making: you should just follow the rule. However, there are many cases where no rules exist, and perhaps none can exist, particularly in emerging areas like GenAI use in research.

Even if you consider just the two dominant frameworks in ethics—deontology and consequentialism—you quickly discover that they often contradict each other. For instance, a consequentialist might support using AI to help write academic papers if it leads to more efficient knowledge dissemination, while a deontologist might object on principles of intellectual authenticity.

What makes a decision ethical, therefore, isn't necessarily the specific choice you make, but rather the process through which you arrive at that choice. This involves carefully considering potential benefits and harms, weighing various stakeholder perspectives, and making a conscious, informed decision. Let's talk about what this means in practice.

Follow guidelines and policies

Before engaging with Generative AI tools in your research, make sure you are familiar with the relevant guidelines. Such guidelines exist at multiple levels: national, institutional (e.g. UTS), and organisational (e.g. publishers and funding agencies). Avoid making assumptions about what is permitted, because policies can contradict one another. For instance, while Springer Nature allows the use of LLMs for AI-assisted copy editing without declaring their use, your funding agency might explicitly prohibit any use of AI in grant applications.

You also need to consider how broader research policies and guidelines might affect your use of AI tools. For example, if you're working with protected data that cannot be put in a cloud environment, it automatically means that you cannot include it as part of your prompt to a chatbot. Another example is peer review: reviewers are bound by confidentiality agreements, and uploading a manuscript to a chatbot would constitute a breach of this confidentiality.

Reach out to your supervisor and experienced researchers

Academia operates through unwritten rules, norms, and expectations that are often no less important than formal policies. Following these conventions matters especially because academia is a reputation-based institution: a faux pas, particularly regarding research integrity and ethics, could have significant long-term negative effects on your career. As early career researchers often lack access to this unwritten knowledge, reaching out to more experienced researchers can prove extremely valuable. Your supervisor, in particular, can provide crucial guidance on field-specific considerations.

Moreover, having multiple perspectives is almost always beneficial. Other researchers might have encountered similar challenges or could point to potential issues you hadn’t considered. The conversations themselves can be valuable learning experiences. They could reveal how experienced researchers navigate ethical challenges and balance competing priorities—a skill that could be useful in your academic career, regardless of the specific technologies involved.

It is your call

After confirming that your planned use of GenAI complies with relevant policies, obtaining necessary ethical approvals, and consulting with your supervisor, you still need to carefully consider the potential implications of your choices. To make this a bit easier, in the following videos I’ll discuss why we should be careful when using Generative AI.

On authorship and accountability

On bias and discrimination