Generative AI Guidance
Last Updated: 19 March 2024
Artificial Intelligence (AI) is not a new idea, but recent high-profile developments, such as ChatGPT, have demonstrated just how powerful and transformative AI can be. Different forms of AI are already powering advances in transport, business, medicine, engineering, and the arts, and AI also has huge potential for education and research.
Understanding how generative AI works, and knowing when it is appropriate to use, is critical in an academic context. This guide is intended to complement the University’s official guidance on using AI in your studies and to support you in understanding the strengths and limitations of generative AI, how to use it ethically, and how to reference it appropriately.
What is Artificial Intelligence?
The British scientist Alan Turing is widely credited with starting the conversation around what would later become Artificial Intelligence with his 1950 paper Computing Machinery and Intelligence, in which he asked “can machines think?” (IBM, 2023). In the same paper he defined a test, known today as “the Turing Test”, in which a human participant has to distinguish between a text response from a human and a text response from a computer (or a computer program).
The field of Artificial Intelligence has developed alongside advances in computer science and technology since the 1950s and, while many definitions of Artificial Intelligence have emerged, here are a few from leading experts:
IBM (2023) have a detailed introduction to AI, as well as to associated concepts such as machine learning, deep learning, and large language models.
What is generative AI?
Generative AI is a form of AI that can produce text, images, music, video, code, or other content in response to a prompt written in natural language. Generative AI tools can do this because the models behind them, often Large Language Models, have been trained on vast amounts of data in order to produce human-like responses.
Some generative AI tools, such as ChatGPT, Bard, and Claude, are designed to produce results in the form of text, while others, such as DALL-E and Midjourney, are designed to output visual art. There are many other generative AI tools capable of producing content in other formats, such as presentations, videos, and more.
The University's official guidance
The official guidance for students on how to use generative AI in your studies is available on BruNet.
It states that the University “won't stop the use of these programmes” but that “it’s important that AI is not used unethically to pass off academic work generated by AI as your own”, elaborating further that:
“the use of any type of generative artificial intelligence tools (such as text generating, image generating, computer software generating, and translators) is not permitted in your assignment unless your module leader has explicitly specified that their use is permitted”
Any permitted use of generative AI should be done in a transparent, ethical, and critically engaged manner.
Separate guidance aimed at academic staff on using generative AI in teaching and assessment is available on the staff intranet.
Can I use generative AI in my work?
You may be able to use generative AI to assist you with your academic work but, before you do, make sure you have done the following:
There will be times when you are permitted to use generative AI and others when you are not. Always pay attention to the advice you are given, and remember that you should never simply copy and paste content produced by generative AI and present it as your own work; to do so would likely be considered Academic Misconduct.
What are the limitations of generative AI?
Generative AI tools can be useful when used appropriately, but they do have serious limitations.
Engaging critically with generative AI
Generative AI tools can be useful when used properly but make sure that you fully understand their limitations and the legal, ethical, and data protection issues they present before using them. Here is a summary of key points on how to engage critically with generative AI:
Legal, ethical, and data protection considerations
Every generative AI platform will have its own terms, conditions, and ways of working, and there are a number of legal, ethical, and data protection issues to consider before using generative AI:
Academic misconduct and referencing generative AI
Presenting AI-generated content as your own work is a form of academic misconduct. If your work is thought to contain content that you did not create yourself, it can lead to a lengthy investigation and very serious consequences. The University’s academic misconduct policy sets out the policy and procedure on this, and you can learn more about avoiding plagiarism by completing the Library’s short online course, or by attending a session on avoiding plagiarism.
Referencing can be used to acknowledge the use of AI-generated content, but this is generally limited to situations where you include AI-generated content as stand-alone examples that are clearly set apart from the main body of your work. You should not copy and paste generated content and integrate it into your work in a way that makes it seem like your own.
The advice on how to reference AI-generated content will depend on the referencing style used by your department, and that advice may change over time. Here are examples of how to reference generative AI in some of the most commonly used referencing styles at Brunel (correct as of March 2024):
Harvard (Cite Them Right)
AI generated text should be referenced as a personal communication, because the responses generated are unique to you (they are non-reproducible). You should include your generated text within quotation marks, but if the text is over three lines (or twenty words) long then it should be indented.
Example citation: (OpenAI ChatGPT, 2023)
Example reference list entry: OpenAI ChatGPT (2023) ChatGPT response to Joe Bloggs, 6 September.
You should check with your lecturer or module-leader whether you should include the transcript of your interaction as an appendix to your work. For maximum transparency, we recommend this as a good approach.
For more information, including how to reference AI-generated images, see Cite Them Right’s guidance on generative AI using Harvard.
APA (7th Edition)
AI generated text should be referenced as the output of a software program or algorithm.
You should include your generated text within quotation marks, but if the text is over three lines (or twenty words) long then it should be indented.
Example citation: (OpenAI, 2023)
Example reference list entry: OpenAI. (2023). ChatGPT (Version 4) [Large Language Model]. https://chat.openai.com/.
You should check with your lecturer or module-leader whether you should include the transcript of your interaction as an appendix to your work. For maximum transparency, we recommend this as a good approach.
For more information see Cite Them Right’s guidance on generative AI using APA 7th.
IEEE
Include a reference number as you would normally. The format of the reference would look like this:
[1] ChatGPT, “Request for definition of artificial intelligence,” Oct. 10, 2023.
The style guidance does not suggest including the transcript of your chat in an appendix, but check your module handbook or ask your lecturer just in case. For maximum transparency, we recommend doing so.
For more information see Cite Them Right’s guidance on generative AI using IEEE.
OSCOLA
Indicate your footnote with a number in superscript, as usual. The footnote and bibliography entries will be the same, for example:
ChatGPT, ‘Text generated on New York Convention by ChatGPT to Joe Bloggs’ (18 March 2023) <https://chat.openai.com/c/c628045f-a94d-495f-a8e2-508d2afb3a99> accessed 18 March 2024.
You should check with your lecturer or module-leader whether you should include the transcript of your interaction as an appendix to your work. This guidance may change upon the publication of OSCOLA 5th edition.
Library support for generative AI
A brief list of generative AI tools
New generative AI tools are emerging all the time, but here is a selection of popular ones used in academia:
Microsoft Copilot (which integrates GPT-4 and DALL-E)
References
Google (2023) Learn as you search (and browse) using generative AI. At: https://blog.google/products/search/google-search-generative-ai-learning-features/ (Accessed 09 Oct 2023).
IBM (2023) What is artificial intelligence (AI)? At: https://www.ibm.com/topics/artificial-intelligence (Accessed 09 Oct 2023).
Kumar, A. and Davenport, T. (2023) How to make generative AI greener. At: https://hbr.org/2023/07/how-to-make-generative-ai-greener (Accessed 09 Oct 2023).
McCarthy, J. (2007) What is Artificial Intelligence? At: https://www-formal.stanford.edu/jmc/whatisai.pdf (Accessed 09 Oct 2023).
Russell, S. and Norvig, P. (2021) Artificial intelligence: a modern approach. 4th Ed. Harlow: Pearson Education.
Ryan-Mosley, T. (2023) ‘How generative AI is boosting the spread of disinformation and propaganda’, MIT Technology Review, 4 October. At: https://www.technologyreview.com/2023/10/04/1080801/generative-ai-boosting-disinformation-and-propaganda-freedom-house/ (Accessed 09 Oct 2023).
Strubell, E. et al. (2019) ‘Energy and policy considerations for deep learning in NLP’, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, 28th July – 2nd August. https://arxiv.org/abs/1906.02243.
Tegmark, M. (2018) Life 3.0. London: Penguin.