Critically reflective AI compass for students

The use of artificial intelligence (AI) in artistic, creative and scientific practice offers manifold opportunities, but it also poses challenges and risks. This AI compass aims to help students understand and apply AI in a responsible fashion. Recognising the possibilities and limitations of AI is crucial to making ethical decisions.

When is the use of AI not acceptable?
The use of AI is not acceptable if it violates good academic, scientific and creative practice or if it is categorised as academic misconduct. This includes, for example, plagiarism, unauthorised aids and fraud.

When is the use of AI beneficial?
The use of AI can be beneficial if it serves the expansion of artistic, creative and scientific practice. For example, AI can promote accessibility and inclusion by translating and subtitling content. It can help students improve their writing skills and revise their own work. AI can assist students aiming to review and deepen existing knowledge. It can contribute to the development of critical thinking and help generate new ideas and develop concepts. 

AI, art and design
AI can help students develop initial concepts, create ideas and improve digital competencies. It can be a tool for experimenting with artistic elements and new methods, offering a platform for the exploration of creative approaches and the promotion of innovative thinking. AI is a powerful tool that can support students with different capabilities, expanding their potential and opening up new perspectives.
AI can be applied in different phases of the artistic, creative or scientific process, from concept development to research, from the use of tools to realisation, to AI as the actual result of an artistic, creative or scientific process. It allows for the creation of new art forms and methods, supports the analysis of large volumes of data and performs repetitive tasks. For instance, the translation of hand drawings into architectural designs in Lisa Ackerl's project ‘Baubonanza’ or the AI-assisted implementation of crochet instructions demonstrate how some new works could only be created with the use of AI.

Evidently, AI is not only involved in the implementation of scientific, creative and artistic practices. It can also be used creatively to expand human capabilities into new dimensions. However, such processes involve algorithms that are based on the data and work of others. Therefore, the part of a work that was not created independently must always be labelled and critically reviewed regarding ethical concerns. Ultimately, students themselves are responsible for the application of AI in their work.

The Critically Reflected AI Compass aims to provide guidance on how students can use AI and what they need to look out for. It is divided into five areas; some of the points mentioned appear in several areas:

Data and rights
Traceability
Responsibility
Labelling
Critical reflection

Data and rights

The responsible use of data and compliance with legal regulations in the context of AI use are of crucial importance. The most essential points are summarised here:

Sensitive data and data protection

  • Sensitive data includes, e.g., personal data that can identify an individual; (unpublished) research data; confidential information; meeting minutes; sensitive personal information such as health records or financial records.
  • Such data should not be entered into AI systems; if this is necessary nonetheless, consent of the people affected must be obtained or the data must be rendered anonymous.
  • Use data economically and minimise its use.
  • Follow the General Data Protection Regulation (GDPR) and other relevant data protection regulations.
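Where sensitive data must be entered into an AI system at all, the compass asks for anonymisation or pseudonymisation first. The sketch below is a minimal illustration of that idea using simple regex-based masking; it is our own example, not a prescribed procedure, and real anonymisation of personal data under the GDPR requires considerably more than pattern matching.

```python
import re

def redact(text: str) -> str:
    """Minimal pseudonymisation sketch: mask e-mail addresses and
    long digit sequences (e.g. student IDs or phone numbers) before
    text is entered into an external AI tool. Illustrative only --
    proper anonymisation needs far more than two regex rules."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)
    text = re.sub(r"\b\d{6,}\b", "[NUMBER]", text)
    return text

print(redact("Contact jane.doe@example.org, student ID 01234567."))
```

Even with such masking in place, the data-minimisation principle still applies: enter only what the task genuinely requires.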

Rights of others

  • Relevant rights include copyrights as well as personal and intellectual property rights.
  • Respect copyrights when using and creating AI-generated content.
  • Avoid using copyrighted material without authorisation.
  • Ensure that all data and materials used have been acquired legally and cited correctly.
  • Protect your own intellectual property by clearly documenting and labelling your work.
  • Be aware that the training of AI algorithms may violate existing copyrights and intellectual property rights. AI models are often trained using large data sets which may contain copyrighted content. Make sure that the training data has been acquired legally and does not violate any third-party rights as far as possible.

Awareness of bias and data processing

  • Be aware of potential bias that can result from the use of AI tools. Always analyse results critically.
  • Understand that your input can be processed by AI systems and possibly used for training purposes.
  • Pay attention to what data you enter and what consequences this could have.
  • Review results and underlying data on a regular basis in order to recognise and correct possible biases and errors.
  • Scrutinise the algorithms and methods used in AI systems and ensure transparency and traceability in decision-making processes.

Traceability

The responsible use of generative artificial intelligence requires transparent and accountable documentation of work processes, decisions and reasoning. This aims to ensure that artistic, creative and scientific independence is preserved and that the integrity of works can be verified.

Records

AI can be used in different ways: as a support tool or through the direct use of AI-generated content in texts, images, translations, audio-visual projects and other artistic fields. Every use of AI must be documented by accompanying records. These should clearly document and justify the creative and decision-making processes and AI-related results. Records should also explain whether AI-generated output forms part of the result itself or whether AI served as a (research) tool.

Records are important for ensuring transparency and traceability of generative AI use. They should include:

  • Name of the person who has used AI
  • Date or period of time: period of AI use
  • Individual concept: Describe the concept of your work.
  • Description of prompts: Which prompts were used in the AI tool to control and achieve results?
  • Description of the use of AI:
    - Was AI used as a tool or to create content?
    - Specification of use: How and why was AI used?
    - Reasoning for the use of AI: Why was AI chosen for the project?
  • Chronological development: documentation of the development of your concept in individual steps (not all AI prompts need to be listed).
  • Generated output including the AI-generated content
  • Use of the output and individual adjustment: Explanation of how the AI output was used and adjusted based on your own understanding of the topic.
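The record fields listed above could also be kept in a structured, machine-readable form. The sketch below shows one possible layout as a Python dataclass; the field names are our own illustration and are not prescribed by the compass.

```python
from dataclasses import dataclass, field

@dataclass
class AIUsageRecord:
    """One possible structure for the accompanying records described
    above. Field names are illustrative, not prescribed."""
    author: str                         # name of the person who used AI
    period: str                         # date or period of AI use
    concept: str                        # individual concept of the work
    prompts: list[str]                  # prompts used to steer the tool
    used_as: str                        # "tool" or "content generation"
    reasoning: str                      # why AI was chosen for the project
    development_steps: list[str] = field(default_factory=list)
    generated_output: str = ""          # the AI-generated content itself
    adjustments: str = ""               # how the output was used and adjusted

record = AIUsageRecord(
    author="A. Student",
    period="2024-05-01 to 2024-05-10",
    concept="Poster series on urban soundscapes",
    prompts=["suggest three colour palettes evoking traffic noise"],
    used_as="tool",
    reasoning="rapid exploration of palette variants",
)
```

Keeping records in a consistent structure like this makes them easier to hand in alongside a project and to examine later (see also "Future verifiability").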

Transparency and responsibility

Transparency and responsibility are imperative for the use of AI.

  • Clearly disclose which parts of the work were created with the assistance of AI and which parts were created independently.
  • Ensure that the final product is subject to your own responsibility and that the use of AI is transparent.
  • Document and reflect transparently on methods and results in order to ensure good scientific, artistic and creative work.
  • Regularly review and evaluate the methods and results in order to recognise AI-generated errors and biases. This is part of the critical reflection that is always expected of students at the University of Arts Linz.
  • Ensure that the use of AI is in accordance with the rules of good scientific, creative and artistic practice and complies with the requirements of the relevant course or the regulations for final theses.

Independent work

  • Ensure that your work reflects your own creative and intellectual accomplishment. Unless explicitly agreed otherwise, AI should be used as a supporting tool. It must not replace your own capabilities and thought processes. Students must clearly illustrate which part of the work they have performed themselves.
  • Make sure that your work can be recognised as your own: it should reflect your own creative and intellectual achievements, and AI must not be misused as a substitute for your own competencies.

Future verifiability 

Be aware that future software might be able to identify older AI-generated work, meaning that the origin and integrity of your work could also be examined in the long term. Therefore, clear documentation and labelling are essential for ensuring the transparency of your decisions in the future.

Responsibility

Responsible use of generative AI requires careful review and compliance with ethical, scientific, creative and artistic standards.

Students are responsible for the use of AI tools and AI-generated results. This includes the correctness of contents, possible bias and discrimination, legal aspects and compliance with good scientific practice.

Good scientific, creative and artistic practice and integrity

Ensure that your work is scientifically, creatively and artistically well-founded. Practise rigorous self-monitoring and critical review of AI-generated content and AI tools to ensure their integrity.

  • Use literature research and reliable sources to support and validate your work. Seek feedback from your teachers, experts and fellow students on a regular basis.
  • Document the entire work process in detail (including the methods and materials used without AI).
  • Adhere to the general rules of good scientific, creative and artistic work at the University of Arts Linz and to the requirements of your course (communicated via ufg-online and at the beginning of the course).

Exploring creative possibilities and boundaries

Be curious and use AI to explore new creative approaches. But always be critical regarding the aesthetic and artistic limitations of technology. Experiment with different AI tools to develop ideas and realise projects while reflecting on their impact on your creative practice.

  • Use AI to explore creative approaches and develop new perspectives by testing different algorithms and techniques.
  • Be aware of the boundaries of AI and critically reflect on its aesthetic and artistic limitations.
  • Experiment with different AI tools and reflect on how they affect your creative practice, e.g., by keeping a log on your experiments. 

Review of the content of results

AI tends to invent ('hallucinate') facts. You can by no means rely on the validity of results. It is the user's responsibility to review content generated by AI. Thoroughly check the integrity of AI-generated results and be aware of potential restrictions and bias inherent to the AI system or resulting from the use of an AI tool.

  • Carefully review the integrity of AI-generated information by comparing it to reliable sources.
  • Be aware that AI tools and AI-generated results may show bias. Critically reflect on potential limitations by scrutinising results. 

Ethics and integrity

Ensure ethical use and integrity of AI in your work by minding the following aspects:

  • Follow ethical standards and practices by keeping informed on the latest ethics guidelines of the University of Arts Linz.
  • Ensure the integrity of your work: Always apply honesty and transparency to your work and clearly document each step.
  • Ensure the security of data and AI tools used by complying with safety standards (see ‘Data and rights’) and regularly updating your security software.
  • Mind data privacy regulations and protect personal data in order to guarantee privacy by anonymising or pseudonymising personal data.

Responsibility and reliability

Make sure that your work is reliable and responsible.

  • Ensure reliability of results by reviewing them multiple times and having independent third parties validate them.
  • Be aware of how resource-intensive the use of generative AI can be. Always consider sustainable use and minimise waste.
  • Make sure that your work follows legal regulations by complying with all relevant laws and rules. If necessary, seek legal advice.
  • Consider the social impact of your work and act with social responsibility by assessing the potential effects of your work.
  • Be aware of potential negative effects and take preventative measures.

Responsibility and transparency

Accept responsibility for the contents you use and ensure transparency in your creative process.

  • List content sources correctly and cite them according to scientific standards.
  • Adhere to the principles of academic integrity and apply honesty and fairness to your work.
  • Assume responsibility for any content you use and its integrity.
  • Double-check the integrity of information to avoid inaccurate, misleading or completely fabricated material.
  • Clearly explain your use of AI and ensure transparency in your creative process by documenting methods and results in detail and making them comprehensible for others.

Labelling

Responsible use of generative AI requires clear and comprehensible labelling of the use of AI tools and AI-generated content. This guarantees transparency and accountability in the creative process.

Use of AI as a tool

Document and label the use of AI tools in your work (records).

  • Specify the AI tools used and their purpose.
  • Describe how AI tools were integrated into the creative process.
  • Ensure that the role the AI tools assume in your work is comprehensible and transparent.

Using AI-generated content

Mark AI-generated content as quotations and describe which parts of your work were developed with the assistance of AI. This includes the generation of ideas, texts and translations as well as creative and artistic content.

  • Clearly label paragraphs or elements generated by AI.
  • Cite AI-generated content by indicating that it was generated by AI. This is important for ensuring transparency.
  • In addition to source citation, adapt the citation system by describing the exact use and adaptations applied to AI-generated content. This helps illustrate initial generative processes and their role in further processing.

Transparency

Make sure that the use of AI in your work is clear and comprehensible.

  • Document all steps performed with AI tools in detail.
  • Explain all decision-making processes and methods leading to the use of AI.
  • Make sure that documentation is comprehensible to third parties.

Good scientific, creative and artistic practice

Follow the standards of good scientific, creative and artistic practice, also when using AI.

  • Comply with the ethical guidelines and standards of the University of Arts Linz.
  • Ensure the integrity and quality of your work through diligent documentation and critical reflection.
  • Regularly review your methods and results with regard to accuracy and fairness.

Accountability

Responsibility and transparency are essential to ensuring comprehensibility and accountability.

  • Assume responsibility for the correctness and integrity of AI-generated content. Be aware that you are responsible for the accuracy and reliability of AI-generated content in your work.
  • Be aware of the possible legal as well as ethical consequences of the use of AI.
  • Make sure that your work is verifiable regarding all AI tools and methods applied. Keep detailed records allowing examination by third parties.
  • Whether as a developer or as a user, you are responsible for results and their effects. This includes responsibility for potentially faulty or biased results.

Critical reflection

A responsible and reflective approach to generative AI is essential to artistic, creative and scientific practice.

Using clean systems

Clean systems are AI systems which are based on legally acquired and ethically justifiable training data, avoid processing sensitive or external data, and consider social as well as environmental criteria.

  • Legal training data: Only use AI systems whose training data originates from legitimate, legally sound sources. Ensure that no stolen or unethically obtained data has been used.
  • Avoiding sensitive data: In order to protect the privacy and rights of third parties as well as your own data and the data you have been entrusted with, do not feed your AI systems with personal or confidential information. Avoid entering data containing financial, health-related or other information requiring protection (see ‘Data and rights’).
  • Transparency and traceability: Use AI systems whose origin and datasets are transparent and traceable. Providers should disclose which data was used for AI training and how the data was collected. It should also be transparent whether and how the AI system processes your input and whether and where this information is stored.
  • Ethical considerations: When choosing and applying AI systems, take ethical aspects into account. Ensure that systems do not promote discrimination or bias and that they are used responsibly.
  • Environmental awareness: Be aware of how resource-intensive the development and use of AI systems are. Consider energy consumption and the ecological impact of the systems.
  • Social responsibility: Make sure that AI systems comply with social criteria and have no negative effect on specific groups. Consider inclusion and diversity of the training data. 
    You must also be aware that the process of data labelling in particular, a key step in developing AI systems, is mostly performed by workers in the Global South. These people often work under poor conditions (including psychological stress) for very low wages.
  • Inclusive and diverse training data: In order to avoid western, white, male bias, try to use AI systems whose training data includes a diverse and representative selection of data. Promote anti-racist, anti-sexist and non-colonial approaches to AI use.

Promoting AI literacy

Understand the basic functionalities and possibilities of AI tools as well as their limitations and risks. Use existing resources and contacts at the university and participate in continuous training. This includes:

  • Fundamentals and application possibilities: Learn about the functionalities and possible applications of AI tools. Understand how these tools operate and in which areas they can be applied.
  • Critical scrutiny: Stay critical regarding the promises and hype surrounding AI and question AI's impact on your discipline. Analyse the advantages and disadvantages of using AI.
  • Understand the limitations and risks of AI systems in order to use them responsibly.
  1. Inaccuracy: AI systems can deliver inaccurate, erroneous and made-up results. It is important to always scrutinise these results, compare them with reliable sources and then assess them critically.
  2. Human-in-the-loop and responsibility: Ensure that there is always a person involved in the decision-making process. Accept responsibility for the AI-generated content and make sure that decisions are not made exclusively by machines.
  3. Violation of intellectual property rights: AI models can generate content containing copyrighted material. Make sure that you do not violate any third-party rights and, if in doubt, clarify legal issues in advance.
  4. Bias and discrimination: AI systems can reinforce existing bias or produce discriminatory results. Be aware of these risks and work actively to identify and minimise bias. Use mechanisms to identify and avoid discrimination.
  5. Source transparency: Demand transparency from the AI systems you use. Make sure you know what data was used to train the AI and how the data was collected. Transparent sources are crucial to assessing the quality and ethics of AI results (see ‘clean systems’).
  • Resources and further training: Take advantage of the support offered by the University of Arts Linz to deepen your knowledge. Participate in workshops and training courses on the use of AI and contribute your own critical and reflected ideas.
  • Current developments: Stay informed about current developments and new tools in the field of AI. Scrutinise their social and artistic impact and discuss your experiences and best practice cases critically with fellow students and teachers.
  • Ethical discussions: Debate the use of AI with your teachers and clarify ethical issues, especially for final theses. Ensure that the use of AI tools in your work is ethically justifiable and appropriate.
  • Critically reflected use of AI tools: Use AI tools to support literature research, data analysis, writing processes, image editing, music composition, video production and other artistic processes, but critically scrutinise the results generated each time. Use AI to analyse complex data and check the results for validity and reliability.
  • Information assessment: Critically analyse information provided by AI tools and compare it with reliable sources.
  • Bias recognition: Be aware that AI systems can contain bias and perpetuate it in their results! Therefore, scrutinise every result regarding possible bias and the reasons behind it.
  • Error detection: Regularly check AI results for errors and inconsistencies and correct them if necessary.

Data transfer and impact assessment

Regarding data transfer and assessment of the consequences of AI use, consider the following aspects:

  • Data transfer: Share data with trustworthy parties only. Ensure compliance with data privacy regulations. Be aware that your input may be processed and passed on. Scrutinise which third party providers have access to the data and what risks are associated with this.
  • Storage of input: Gather information on where your input is saved. Consider the implications for data security and privacy.
  • Impact assessment: Evaluate the impact of your AI-assisted work on society, the environment and other relevant factors. Make sure there are no negative effects. Reflect on potential ethical dilemmas and long-term consequences of AI use.

People at the centre of attention

Ensure that people remain in control of AI use and that its benefits are distributed equitably. This includes:

  • People are in control: Make sure that people control the AI systems and decisions are not made by machines exclusively.
  • Human-in-the-loop: Integrate human feedback and oversight into the AI-assisted process. Ensure that critical decisions involve human supervision.
  • Shared benefit: Make sure that the benefits of AI are distributed equally and no specific group benefits disproportionately. Advocate for equitable access to the benefits of AI.

Fairness, bias and discrimination

Try to ensure fairness and avoid bias or discrimination. This includes:

  • Fairness: Make sure that AI-generated results are fair and equitable. Check regularly if AI systems operate in a fair and equitable manner.
  • Bias: Be aware of potential bias and try to minimise it. Develop methods for recognising and correcting bias.
  • Discrimination: Ensure that your work with AI has no discriminatory effects. Implement mechanisms for the identification and prevention of discrimination.

Environment and social factors

Consider the impact of AI on the environment and social factors, including the following:

  • Environmental impact: Assess the ecological impact of your AI-assisted work and try to find sustainable solutions. Think about how you can minimise energy and resource consumption.
  • Social factors: Consider the social impact of your work and make sure it is positive and equitable. Your work should contribute to the promotion of social justice.

Prosperity and benefits for the community

Aim for your work with AI to contribute to general prosperity. This includes:

  • Prosperity: Your work should contribute to the improvement of the general quality of life and prosperity. Think about how your work can support positive societal change.
  • Common good: Ensure that your work contributes to the benefit of society as a whole. Get involved in projects that promote the common good and create added social value.

25 September 2024, Version 1.0