Generative AI - Guidance for students: Overview

(Banner image by Gerd Altmann, available via Pixabay)


Last Updated: 19 March 2024

Artificial Intelligence (AI) is not a new idea, but recent high-profile developments, such as ChatGPT, have demonstrated just how powerful and transformative AI has the potential to be. Different forms of AI are already being used to power technological advances in transport, business, medicine, engineering, and the arts, and AI also has huge potential for education and research.

Understanding how generative AI works, as well as knowing when to use it appropriately, is critical when using it in an academic context. This guide is intended to complement the University’s official guidance on using AI in your studies and to support you in understanding the strengths and limitations of generative AI, how to use it in an ethical manner, as well as how to reference it appropriately.


What is Artificial Intelligence? 


The British scientist Alan Turing is widely credited with starting the conversation around what would later become Artificial Intelligence. In his 1950 paper Computing Machinery and Intelligence, he asked “can machines think?” (IBM, 2023) and defined a test, known today as “the Turing Test”, in which a human participant has to distinguish between a text response written by a human and one generated by a computer program.

The field of Artificial Intelligence has developed alongside advancements in computer science and technology since the 1950s and, while many definitions of Artificial Intelligence have emerged, here are a few by leading experts:

  • “the science and engineering of making intelligent machines, especially intelligent computer programs” (McCarthy, 2007, p.2)
  • “the field concerned with not just understanding but also building intelligent entities - machines that can compute how to act effectively and safely in a wide variety of novel situations” (Russell and Norvig, 2021, p.19)
  • “non-biological intelligence” (Tegmark, 2018, p.39)

IBM (2023) have a detailed introduction to AI, as well as associated concepts such as machine learning, deep learning, and large language models.


What is generative AI?

Generative AI is a form of AI which is able to produce text, images, music, video, code, or other content based on a prompt written in natural language. Generative AI tools can do this because they have been trained on large amounts of data, often in the form of Large Language Models, in order to produce more human-like responses.

Some generative AI tools, such as ChatGPT, Bard, and Claude, are designed to produce results in the form of text, while others, such as DALL-E and Midjourney, are designed to output works of visual art. There are many other generative AI tools capable of producing content in other formats, such as presentations, videos, and more.


The University's official guidance


The official guidance for students on how to use generative AI in your studies is available on BruNet.

It states that the University “won't stop the use of these programmes” but that “it’s important that AI is not used unethically to pass off academic work generated by AI as your own”, elaborating further that:

the use of any type of generative artificial intelligence tools (such as text generating, image generating, computer software generating, and translators) is not permitted in your assignment unless your module leader has explicitly specified that their use is permitted

Any permitted use of generative AI should be done in a transparent, ethical, and critically engaged manner.

Separate guidance aimed at academic staff on using generative AI in teaching and assessment is available on the staff intranet.


Can I use generative AI in my work?


You may be able to use generative AI to assist you with your academic work but, before you do, make sure you have done the following:

  • Carefully read and understood the University’s guidance on using generative AI.
  • Ensure you have viewed and understood all of the guidance provided for your module or assignment. Check your module handbook / assignment brief, look at lecture slides, or speak to your module leader / lecturer. Do your best to find out whether you are allowed to use generative AI, and the extent to which you are permitted to use it. 
  • Consider what you are being assessed on and whether generative AI is appropriate. Even if you only use generative AI to assist you, will your work be a genuine reflection of your abilities? For example, if you’re an English Literature student being assessed on your ability to summarise text and present a clear argument in English, then is using generative AI to help you summarise and/or paraphrase appropriate? If you’re a Games Design student being assessed on your ability to draw a character concept, is presenting work produced solely by DALL-E or Midjourney appropriate? It is crucial that your work reflects your capabilities and that you approach it with a sense of transparency, honesty, and integrity.
  • Question whether it will actually help you. It can seem like a good way to cut corners, but spending time trying to get an answer via generative AI is often time that could be spent doing work yourself.

There will be times when you are permitted to use generative AI and others when you are not. Always pay attention to the advice you are given, and remember that you should never simply copy and paste content produced by generative AI and present it as your own work; doing so would likely be considered academic misconduct.


What are the limitations of generative AI?


Generative AI tools can be useful when used appropriately, but they do have serious limitations:

  • Their results are reliant on how well you write and frame your prompt.
  • They are not integrated with, nor can they access, any of Brunel’s systems, which means they are unable to access any course or module specific material on Brightspace.
  • They can only learn from content available on the surface web; they do not currently have access to commercially available journal collections and other specialist databases, the kind of content that the Library provides access to.
  • They can make up information and/or reproduce false information from the web, including citations and references. These are sometimes referred to as “hallucinations”.
  • They do not necessarily have access to up-to-date information. ChatGPT, for example, was initially trained only on data available up to 2021. As of March 2024, ChatGPT 4 is able to access current events via the web, but the free version of ChatGPT is based on the previous model and is still limited.
  • Different levels of access (free vs paid subscriptions) can lead to different quality and/or compromised results. For example, the free versions of ChatGPT are more likely to produce “hallucinations” and provide false or out-of-date information than the paid versions.
  • They are only as good as the data they are trained on. It is possible that the data and web content they have been trained on is biased or flawed itself, which means there is risk that the results produced by generative AI perpetuate or amplify these biases.
  • They are designed to learn from the data they have been trained on and to reproduce it in a human-like way, but are not capable of evaluating or critically examining the sources used. They may reproduce harmful or inappropriate content without realising it.
  • They have a limited capacity for higher-order thinking skills and creativity, which means you can’t rely on them to produce results that will be of a suitable academic level.


Engaging critically with generative AI


Generative AI tools can be useful when used properly but make sure that you fully understand their limitations and the legal, ethical, and data protection issues they present before using them. Here is a summary of key points on how to engage critically with generative AI:

  • Always cross-reference the information presented, including any references, against more reliable academic sources. If you’re not sure how to do this, consult your academic liaison librarian.
  • Do not rely on generative AI as a source in itself, such as referencing a definition or piece of information on a topic. Generative AI is not a credible academic source.
  • Question the generative AI on its reasoning and how it produced its results, for example: “How did you come up with that answer?” or “What evidence can you provide to back up that answer?”
  • Be aware of bias and discrimination. Generative AI scrapes data from the web in huge quantities, including biased, harmful, and/or misinformed content that perpetuates existing biases.
  • Do not overuse or rely fully upon generative AI. As a result of the issues raised above, relying solely on generative AI can seriously compromise your work, so ensure you seek out multiple perspectives and assume that whatever is produced by generative AI is inherently unreliable and potentially biased.  
  • Use it to assist you, but do not present work produced by generative AI as your own. It is important that you develop your work yourself to demonstrate your abilities, learning, and knowledge of a topic.
  • Do not share any personal, confidential, or sensitive information. Any data entered into generative AI can be fed into the data sets that the generative AI relies upon.


Legal, ethical, and data protection considerations


Every generative AI platform will have its own terms, conditions, and ways of working, and there are a number of legal, ethical, and data protection issues to consider before using generative AI:

  • The data they are trained on is scraped from freely available web sources, which can include illegal copies or entire reproductions of works. You could inadvertently plagiarise or breach the copyright of someone’s work when using generative AI.
  • In the case of visual art produced by generative AI tools such as DALL-E and Midjourney, these tools have been accused of copying and/or creating collages of existing artwork found online, rather than producing genuinely original works. Not giving due credit and/or payment to the original artists makes this a very contentious and complicated issue relating to legal ownership, intellectual property, and copyright, and a number of high-profile legal cases are currently examining these issues.
  • Most generative AI tools require you to log in or create an account, which means they can collect personal data and keep track of everything you enter. Depending on the tool being used, its results may also be influenced by your personal browsing history (Google, 2023).
  • You should not enter any personal, sensitive, or confidential data. This goes for your own personal data (like asking a generative AI to write a CV for you) but also any research data that you might collect via primary research.
  • Generative AI is being used to create “deep fakes”: video content designed to appear authentic but which is actually fake. Deep fakes often feature the likeness of a person that has been digitally recreated and applied to an existing, or entirely new, video.
  • Web-based technologies like generative AI have a significant environmental impact. The data centre industry, which powers much of the web, as well as things like generative AI, uses significant amounts of electricity (Kumar and Davenport, 2023), and that’s before we consider the carbon footprint of producing and maintaining the computer hardware, cooling machinery, and buildings required. It can depend on the method used to process data, but the computational resources required to train and produce results can be significant (Strubell et al., 2019).
  • Generative AI is actively being used to spread misinformation and political propaganda, as well as being employed as sophisticated means of censoring information (Ryan-Mosley, 2023).


Academic misconduct and referencing generative AI


Presenting AI-generated content as your own work is a form of academic misconduct. If your work is thought to contain content that you did not create yourself, it can lead to a lengthy investigation and very serious consequences. The University’s academic misconduct policy sets out the policy and procedure on this, and you can learn more about avoiding plagiarism by completing the Library’s short online course, or by attending a session on avoiding plagiarism.

Referencing can be used to acknowledge the use of AI generated content, but this is generally limited to situations where you are including AI generated content as stand-alone examples that are clearly set apart from the main body of your work. You should not be copying and pasting generated content and integrating it into your work in a way that makes it seem like your own.

The advice on how to reference AI generated content will depend on the referencing style used by your department, and the advice provided may be prone to change. Here are examples of how to reference generative AI in some of the most commonly used referencing styles at Brunel (correct as of March 2024):


Harvard (Cite Them Right)

AI generated text should be referenced as a personal communication, because the responses generated are unique to you (they are non-reproducible). You should include your generated text within quotation marks, but if the text is over three lines (or twenty words) long then it should be indented.

Example citation: (OpenAI ChatGPT, 2023)

Example reference list entry: OpenAI ChatGPT (2023) ChatGPT response to Joe Bloggs, 6 September.

You should check with your lecturer or module-leader whether you should include the transcript of your interaction as an appendix to your work. For maximum transparency, we recommend this as a good approach.

For more information, including how to reference AI-generated images, see Cite Them Right’s guidance on generative AI using Harvard.

APA (7th Edition)

AI generated text should be referenced as the output of a software program or algorithm.

You should include your generated text within quotation marks, but if the text is over three lines (or twenty words) long then it should be indented.

Example citation: (OpenAI, 2023)

Example reference list entry: OpenAI. (2023). ChatGPT (Version 4) [Large Language Model].

You should check with your lecturer or module-leader whether you should include the transcript of your interaction as an appendix to your work. For maximum transparency, we recommend this as a good approach.

For more information see Cite Them Right’s guidance on generative AI using APA 7th.


IEEE

Include a reference number as you would normally. The format of the reference would look like this:

[1] ChatGPT, “Request for definition of artificial intelligence,” Oct. 10, 2023.

There is no suggestion that you should include the transcript of your chat in an appendix but check your module handbook or ask your lecturer, just in case. For maximum transparency, we recommend this as a good approach.

For more information see Cite Them Right’s guidance on generative AI using IEEE.


OSCOLA

Indicate your footnote with a number in superscript, as usual. The footnote and bibliography entries will be the same, for example:

ChatGPT, ‘Text generated on New York Convention by ChatGPT to Joe Bloggs’ (18 March 2023) < > accessed 18 March 2024.

You should check with your lecturer or module-leader whether you should include the transcript of your interaction as an appendix to your work. This guidance may change upon the publication of OSCOLA 5th edition.


Library support for generative AI


  • If you don’t understand how to use generative AI or need some advice on which tools to use, then the Library’s Digital Skills Advisors are able to help.
  • If you need help with evaluating or verifying information you obtained via generative AI, including using the library collections to find alternative academic sources, then you should speak to your Academic Liaison Librarian.
  • Our LibSmart programme consists of standalone sessions that you can sign up to, some of which cover generative AI and related topics.


A brief list of generative AI tools


New generative AI tools are emerging all the time, but here is a selection of popular ones used in academia:


  • Microsoft Copilot (which integrates GPT-4 and DALL-E)
  • Research Rabbit
  • Goblin Tools


References

Google (2023) Learn as you search (and browse) using generative AI. At: (Accessed 09 Oct 2023).

IBM (2023) What is artificial intelligence (AI)? At: (Accessed 09 Oct 2023).

Kumar, A. and Davenport, T. (2023) How to make generative AI greener. At: (Accessed 09 Oct 2023).

McCarthy, J. (2007) What is Artificial Intelligence? At: (Accessed 09 Oct 2023).

Russell, S. and Norvig, P. (2021) Artificial intelligence: a modern approach. 4th Ed. Harlow: Pearson Education.

Ryan-Mosley, T. (2023) ‘How generative AI is boosting the spread of disinformation and propaganda’, MIT Technology Review, 4 October. At: (Accessed 09 Oct 2023).

Strubell, E. et al. (2019) ‘Energy and policy considerations for deep learning in NLP’, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, 28th July – 2nd August.

Tegmark, M. (2018) Life 3.0. London: Penguin.