Learn about the practical applications, limitations and ethical considerations of using AI.
What is generative AI?
‘Generative artificial intelligence’ encompasses a range of tools that use AI models to generate seemingly new, realistic content such as text, images, audio and code in response to questions and prompts from users. Such AI models, which include ChatGPT and Microsoft Copilot, are trained on a large corpus of data, largely material sourced from the internet, and then taught to generate responses based upon it.
The capabilities of, and uses for, generative AI tools are developing rapidly and are becoming increasingly embedded in everyday online tools, such as search engines, MS Office and more.
What is generative AI good for?
Applications such as Grammarly and Microsoft Word have, for a long time, been using algorithms to suggest next words, correct spellings or provide alternative sentence structures. We don’t always use such tools passively. Rather, we can learn from them about how to effectively communicate knowledge and ideas. Similarly, used effectively, generative AI tools can provide all of us with valuable guidance, ideas and inspiration. This includes:
Overcoming barriers to learning
Generative AI tools can help students overcome barriers to learning, such as those imposed by assumed knowledge (often referred to as the ‘hidden curriculum’) or by inaccessible learning materials, and can support students in organising and planning their studies. For example, they can be used to:
- interpret assessment briefs, learning outcomes or feedback
- engage with, organise and revise from notes or lecture recordings/transcripts
- provide plans for study periods (e.g. ‘what should I do next?’)
- auto-generate transcripts or captions
- auto-generate audio (podcast) versions of texts.
Supporting self-directed learning
AI tools can help with:
- providing explanations, e.g. of key theories or concepts
- reviewing and critically analysing written materials to assess their validity or quality
- summarising lecture notes, texts and transcripts
- creating bespoke work or study plans
- generating self-study tests
- undertaking literature searches and initial reviews
- generating standards-based feedback.
Overcoming mental blocks and enabling creativity
This could include:
- overcoming writer's block
- suggesting a structure for a piece of work
- generating ideas for graphics, images and visuals
- improving written expression
- helping to improve grammar and writing structure (especially helpful if English is a second language)
- debugging code
- providing opportunities to experiment with different writing styles.
To support employability
This could include:
- identifying career opportunities
- making sense of job descriptions
- creating resumes and cover letters
- preparing for interviews.
For teaching and learning
Generative AI tools can also be helpful in creating:
- multiple-choice questions, answers and feedback
- simulated data sets
- submission exemplars
- fictional case studies
- educational games and characters (personas)
- ideas for essay questions
- learning outcomes
- marking criteria and rubrics
- references
- interview questions
- pub quiz questions and much more.
The tools are also useful for enhancing student engagement in the classroom.
Limitations of AI
Although generative AI offers numerous benefits, it is important to be aware of its limitations, particularly in educational settings.
The limitations listed below are adapted from helpful guidance for educators and students produced by OpenAI, the creators of ChatGPT:
- AI tools are not intelligent or sentient
- While their output can appear confident, plausible and well written, AI tools frequently get things wrong and can’t be relied upon to be accurate.
- They are prone to ‘hallucination’, whereby they sometimes make up facts, distort the truth or offer arguments that are wrong or don’t make sense.
- They don’t consistently provide sources for the content they produce.
- They perform better in subjects that are widely written about, and less well in niche or specialist areas.
- They don't always provide accurate references – they might generate well-formatted but fictitious citations.
- They can perpetuate harmful stereotypes and biases and are skewed towards Western perspectives and people.
This means these tools should be avoided for:
- research, e.g. as a substitute for Google Scholar/Web of Science etc.
- writing from scratch
- unsupervised editing
- tasks one lacks the knowledge to verify.
Ethical implications
Generative AI tools are not neutral. The world is still in the early days of grappling with the ethical and legal impacts and implications of their development and use. Engaging critically with generative AI is, therefore, important for staff and students.
- Exploitation and bias
Generative AI tools have been trained on sources which perpetuate biases (i.e. the internet). This is as much a concern for image-generation tools as it is for text-based tools. See, for example, this 2023 discussion by Dustin Hosseni, which illustrates the problematic intersections of racialised gender, race and ethnicity in AI-generated representations.
Such AI tools have also been refined using human labour, which can be exploitative and damaging. See, for example, this January 2023 Time article, which reports on how low-paid Kenyan workers were tasked by OpenAI with labelling textual descriptions of sexual abuse, hate speech and violence.
- Plagiarism and copyright considerations
AI platforms identify patterns and relationships, which they then use to create rules, and then make judgements and predictions when responding to a prompt. As this April 2023 report in the Harvard Business Review explains, “this process comes with legal risks, including intellectual property infringement. For example, does copyright, patent, trademark infringement apply to AI creations? Is it clear who owns the content that generative AI platforms create?” These are legal challenges that are still being resolved.
The legal relationship between intellectual property law and generative AI is currently under review by the UK government. A first step in this process is the creation of a working group of industry representatives from the technology, creative and research sectors and the Intellectual Property Office, which began meeting in June 2023, to draft a voluntary code of practice on copyright and AI.
- Sustainability considerations
AI data centres are taking a heavy toll on the planet through their use of raw materials in hardware production, the electronic waste they subsequently produce, and the energy and water required to operate them. For example, the International Energy Agency (2024) reported that a request made through ChatGPT consumes 10 times the electricity of a Google search. The agency also estimates that in the tech hub of Ireland, the rise of AI could see data centres account for nearly 35 per cent of the country’s energy use by 2026.
See also the September 2024 UoS Digital Humanities Climate Coalition report, The cloud and the climate: navigating AI-Powered futures.
Practical considerations
When using AI yourself, or guiding students in its use, it is important to protect one’s own and others' information and rights.
If you are entering university data, such as learning and teaching content, into an AI tool, use the licensed, data-protected version of Microsoft Copilot accessed via your Sussex account so that the data remains protected.
Before using generative AI tools
Read the terms of use and, if you decide to use the tool, comply with them. While interacting with generative AI, avoid providing the tools with certain information, as set out below, to protect your own and others' information and rights.
Don't share protected or highly protected data with AI tools not licensed by the University of Sussex
University of Sussex data protection rules apply to the use of generative AI. This includes student work, personally identifiable information, names, addresses, unpublished research data and results, biometric data, health and medical information, geolocation data, government-issued personal identifiers, confidential or commercially sensitive material, unpublished exams, and security credentials.
Express permission should be obtained from the person who controls the copyright
Without permission from the person who controls the copyright, third party copyrighted material, including e-texts, should not be entered into generative AI products.
Where possible, opt out of data collection
Microsoft Copilot, when accessed via your Sussex account, will ensure your data is protected and is the recommended AI tool. For other AI tools, you will need to ensure, where possible, that you opt out of data collection.
Find out how to access help and support and find additional links and examples of practice.
See more from Artificial Intelligence in teaching and assessment