University of Gloucestershire Library

Business: Using Generative AI in your studies


About this page

This page suggests some ways you can use Generative AI to support your studies, particularly your dissertation, and prompts you to reflect on the benefits and limitations of using these tools in your work. The examples on this page have been tested using the free version of ChatGPT.

Important

Before you start

Before you use any Generative AI as part of your research or writing, you should read the University's Student guidance for using Generative AI in learning.

Always ask yourself: am I still the author of this work?

Generative AI tools can sound very plausible, even when they are fabricating information. This is why you must always check any information you take from a Gen AI tool, as the accuracy of any work you produce is ultimately your responsibility.

We recommend that you read this page, particularly the section on Critical use of AI, before working through the page on Using AI for your literature search.

What is Generative AI?

I asked ChatGPT to tell me briefly what Generative AI (Gen AI) is and got this answer:

"Generative AI is a type of artificial intelligence that creates new content, such as text, images, or music, by learning patterns from existing data. It can produce original outputs that resemble real-world data, enabling tasks like content creation and data synthesis. Examples include models like GPT for text generation and GANs for creating realistic images."

More specifically, the text-based Gen AI tools discussed on this page are designed to generate human-like text in response to input prompts. One of the key skills in using Gen AI tools is learning how to write good prompts that produce the output you want, a skill known as prompt engineering. Because you can converse with the tool, if the answer isn't what you want you can always refine your prompt and try again.

AI and academic integrity

The UoG Student guidance for using Generative AI in learning explains that you are expected to maintain academic integrity and honesty while using AI tools, noting that "any work submitted for assessment must represent a genuine demonstration of your own work, skills and subject knowledge".  Similar information appears in the student charter, which all students are asked to read and sign before enrolment each year.

It is your responsibility to act with academic integrity. In this context, that involves deciding whether any particular use of Gen AI tools (or other resources) is appropriate, or whether it might be considered academic misconduct. It might also involve considering whether you have used Gen AI tools and other resources responsibly and ethically. The information on this page is designed to help you reflect on this.

We would also suggest you consider the impact of any use you make of Gen AI on your overall learning. Creating a piece of assessed work is about the process as well as the final product, and skipping steps along the way will affect what you are able to take away from it.

You may wish to use these questions to help you to reflect on your use of Gen AI and other tools in your studies:

  • can I say with honesty and integrity that I am the author of this piece?
  • did I follow all the guidance and restrictions on the use of AI for this particular assessment?
  • did I critically evaluate, fact check, and follow up on all the information I received through Gen AI?
  • have I consulted multiple sources in the creation of this piece?
  • have I given appropriate credit to all tools and sources that I used in the creation of this piece?

Giving appropriate credit for the use of Gen AI includes accurate referencing. This is a complicated process because (a) Gen AI doesn't tend to cite its sources, so you won't know where the information has come from, and (b) your conversation with Gen AI isn't publicly available to be referenced. Cite Them Right includes more information about this aspect of AI usage, including suggested citations.

We strongly recommend that you check and follow up on any information you take from a Gen AI tool.  Search for corroboration, for other viewpoints, and for authoritative sources which you can reference.

Acknowledgements

As this is such a fast-moving area, I am grateful to Ian Clark from the University of East London for outlining the issues so clearly; see his guide on Artificial Intelligence for more information.

Critical use of AI

It is important to approach all new technologies critically and with curiosity, so that you understand the inputs they draw on and the outputs they generate. These are just some of the issues to consider when using Gen AI:

Bias

Gen AI tools learn patterns from existing data, so they can easily reproduce bias. A Bloomberg study in 2023 showed how an AI tool "amplifies stereotypes about race and gender", limiting the voices and worldviews being portrayed (Nicoletti and Bass, 2023).

Privacy

It is currently unclear exactly how Gen AI tools use the data you enter, so it is wise to be cautious and to avoid entering confidential or personal information. Check privacy policies and settings when you are using AI tools.

Environment

Is Gen AI bad for the environment? Opinions differ as to how much impact it has, particularly as companies are unwilling to disclose this type of information.  It is worth noting that no internet use comes without an impact, including web searches, the use of online library tools and cloud-based storage.  Data centres consume huge amounts of energy and water. 

In 2024 the Climate Action Against Disinformation (CAAD) coalition published a report focusing on the dangers AI poses to the environment. As well as "the vast increase in energy and water consumption required by AI systems like ChatGPT", which is likely to double in the next five to ten years, they warn that generative AI will fuel the spread of climate change denial and disinformation campaigns (CAAD, 2024).

Social Justice

As with all of the tools that enable our digital lives, a vast amount of hidden labour goes into developing and refining Gen AI tools. Workers carry out content moderation and data labelling. Professor Mark Graham, who researches this hidden workforce behind technology, notes that these workers face particular systemic inequalities, not least because their jobs can be performed by anyone, anywhere in the world, leaving little possibility for collective action and making regulation more difficult. He also notes that the invisibility of these workers reduces the accountability of their employers:

"precisely because AI presents itself as automated, very few people can imagine what the human labour on the other side of the screen looks like. AI companies are complicit in this subterfuge. They want to present themselves as technological innovators rather than as the firms behind vast digital sweatshops." (Graham, 2024).


References:

Climate Action Against Disinformation (CAAD) (2024) Artificial Intelligence Threats to Climate Change. Available at: https://foe.org/news/ai-threat-report/ (Accessed: 13 August 2024).

Graham, M. (2024) The hidden cost of AI: in conversation with Professor Mark Graham. Available at: https://www.ox.ac.uk/news/features/hidden-cost-ai-conversation-professor-mark-graham (Accessed: 13 August 2024).

Nicoletti, L. and Bass, D. (2023) Humans are biased. Generative AI is even worse. Available at: https://www.bloomberg.com/graphics/2023-generative-ai-bias/ (Accessed: 12 August 2024).

Further reading

Adib-Moghaddam, A. (2023) Is artificial intelligence racist?: the ethics of AI and the future of humanity. London: Bloomsbury Academic.

Broussard, M. (2024) More than a glitch: confronting race, gender, and ability bias in tech. Cambridge, Massachusetts: MIT Press.


Noble, S.U. (2018) Algorithms of oppression: how search engines reinforce racism. New York: New York University Press.

Rowe, N. (2023) 'Millions of Workers Are Training AI Models for Pennies', Wired. Available at: https://www.wired.com/story/millions-of-workers-are-training-ai-models-for-pennies/ (Accessed: 13 August 2024).

 

Disclaimer

Generative AI tools are constantly being developed and their functionality is changing. Aspects of this guide may become out of date very quickly, and guidance may be updated frequently.

The University of Gloucestershire does not currently endorse or provide any of the AI tools mentioned in this guide. Always refer back to the University's Student guidance for using Generative AI in learning if you have any questions.

This page was created by Rachel Reid, September 2024.