In publica commoda

DEALING WITH AI MODELS IN STUDY AND TEACHING

Recommendations for Instructors

Based on the experience gained with AI models in study and teaching since 2023, the university reaffirms its recommendation to integrate these tools into teaching as broadly as possible and to promote their use by instructors and students.

General Assessment

ChatGPT is a particularly popular example of an AI-based chatbot. The service is available at https://chat.openai.com/chat. Technically, ChatGPT is based on a Large Language Model (LLM), many of which are now available. In the higher education context, it is especially significant that these models can generate high-quality texts in a question-and-answer format. This also applies to the generation of program source code. The recommendations presented below therefore apply explicitly both to traditional texts and to program code.

Here, we initially use ChatGPT as an example of the rapidly developing group of (text and image) AI models that are increasingly forming the basis of many different applications and services. For example, one might ask ChatGPT to introduce itself as follows:

What is ChatGPT, explained in simple terms?

ChatGPT is an AI-supported text system developed by OpenAI. "AI" stands for Artificial Intelligence, meaning computer programs that possess human-like abilities such as understanding and processing language. ChatGPT stands for "Conversational Generative Pre-trained Transformer" and is a model specialized in generating human-like conversations.
The model was trained using machine learning and large amounts of text from the internet. In doing so, it learned to recognize patterns in language in order to generate appropriate responses to user requests. ChatGPT can handle a wide variety of topics and tasks, such as answering questions, translating texts, making suggestions, or telling creative stories.

The capabilities of current AI chatbots are so advanced that they can “answer” many questions from typical bachelor-level examinations at the level of an average student. Texts generated by AI chatbots are so similar to those written by humans that their use can hardly be detected by automated identification methods such as plagiarism software. Even for human readers, distinguishing the two is already difficult today and will likely become nearly impossible in the foreseeable future. LLMs will therefore change learning and teaching. The availability of AI tools also affects which competencies will be required in future professional life.

However, LLMs provide no guarantee that the texts they generate are correct. On the contrary, such systems have also been trained on misinformation and may reproduce it. Completely incorrect or fabricated information may therefore be presented convincingly:

Why is the "Blue Tower" in Göttingen called the Blue Tower?

The Blue Tower in Göttingen is called so because it has a blue façade. The color was applied in the 18th century during a renovation of the tower. The tower is an important historic building in Göttingen and one of the city’s best-known landmarks. It is located near the Old Town Hall and formerly served as a defensive tower of the city wall. Today, it houses the Museum of Art and Cultural History.

In the academic context, it is additionally relevant that LLMs not only fabricate facts but may also “invent” sources and present them in the same format as real references. Listed literature references (including author initials, journal titles, volumes, page numbers, and similar details) may therefore be entirely fictitious. Users thus bear particular responsibility for how texts generated by LLMs are used.
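Because fabricated references can look formally perfect, they can only be caught by checking them outside the model. As a minimal, purely illustrative sketch (the helper name and the sample reference are invented for this example and are not part of any official tooling), one can extract DOI-like strings from an LLM-generated reference list so that each one can be resolved manually, e.g. via https://doi.org/:

```python
import re

# Matches the common form of a DOI, e.g. 10.1234/abc.2021.045
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[-._;()/:a-zA-Z0-9]+")

def extract_dois(text: str) -> list[str]:
    """Return all DOI-like substrings found in the text,
    with trailing sentence punctuation stripped."""
    return [m.rstrip(".,;)") for m in DOI_PATTERN.findall(text)]

# Invented sample reference, in the form an LLM might produce it:
reference = (
    "Miller, A. B. (2021). On towers and colours. "
    "Journal of Urban History, 12(3), 45-67. "
    "https://doi.org/10.1234/juh.2021.045."
)
print(extract_dois(reference))  # ['10.1234/juh.2021.045']
```

Each extracted DOI can then be checked by hand; a DOI that does not resolve is a strong indicator of a fabricated reference.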

The performance of AI language models is increasing rapidly, and extensive user feedback will further accelerate this development. At present, it is still unclear how far the capabilities of these systems will extend. A strategy for dealing with AI-supported systems must therefore not be built on the current limitations of these systems but must instead take a position of principle. Experience with Wikipedia has likewise shown that the use of such systems cannot and should not be prevented or prohibited. On the contrary, the University of Göttingen regards AI models as tools that will be used, and for whose intelligent application students and instructors must acquire competencies. A prerequisite for this is transparency: all participants must disclose the extent to which LLMs were used.

Recommendations for Action

The following recommendations can only be provisional due to the rapid pace of development. Language-based AI models will sooner or later have significant influence in many areas of society, which makes it essential for us as a university to repeatedly engage with both the opportunities and risks involved.


Teaching

Intelligent use of AI systems offers opportunities, but also entails some risks for university teaching. A central goal in dealing with AI systems should be the development of competencies among users. These include, for example:

  • an understanding of how AI systems work in general, which societal biases may be present in the training data and reproduced by AI, and how this can be addressed in research and teaching,
  • knowledge of which AI models can be used in which contexts,
  • the competence to assess relevant aspects of data protection, personal rights, and copyright when using AI, as well as an understanding of the difference between using self-hosted models at the GWDG (e.g., chat-ai.academiccloud.de) and external models (e.g., ChatGPT from OpenAI),
  • recognizing the relationship between how a question is formulated and the response generated by AI models, and, based on this, the competence to optimize a question with respect to the desired level of detail in the answer,
  • the competence to recognize the limits, relevance, and implications (values) of the answers AI systems provide to questions.
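The relationship between a question's formulation and the generated response can be made concrete by stating the desired level of detail explicitly in the prompt itself. The following sketch is purely illustrative: the function, the detail levels, and their wording are invented for this example and are not part of any AI service.

```python
# Invented detail levels; any phrasing that states the desired depth works.
DETAIL_HINTS = {
    "brief": "Answer in at most two sentences.",
    "standard": "Answer in one short paragraph.",
    "detailed": "Answer step by step and state your assumptions.",
}

def build_prompt(question: str, detail: str = "standard") -> str:
    """Append an explicit instruction about answer depth to a question."""
    if detail not in DETAIL_HINTS:
        raise ValueError(f"unknown detail level: {detail!r}")
    return f"{question.strip()} {DETAIL_HINTS[detail]}"

print(build_prompt("What is a Large Language Model?", "brief"))
# What is a Large Language Model? Answer in at most two sentences.
```

Comparing the answers the same model gives to the "brief" and "detailed" variants of a question is a simple exercise for building exactly this competence.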

Developing competence in the critical and responsible use of AI systems must become an integral part of teaching at our university. Competency orientation has long been an important aspect of higher education and requires particular attention when dealing with AI systems.

The following recommendations aim to support this goal in the context of using AI models in teaching:

  1. Consistent culture of transparency: The use of AI models such as ChatGPT should be disclosed openly by both instructors and students.
  2. Integration of AI models into teaching: Depending on the discipline, AI models can be incorporated into the teaching and learning process in very different ways. Various approaches can be shared across departments through good-practice examples.
  3. In subjects where producing original texts, images, or music is a core goal, AI systems should be actively integrated into the creation process as forward-looking tools and used in guided dialogue. Abandoning written assignments is not advisable.
  4. Where producing original texts and similar artifacts is more a means than a primary learning objective, AI support in text generation can free up space for students to focus on other learning goals, such as evaluation skills or reflection.
  5. Existing AI systems also significantly increase students’ self-responsibility for their own learning success. This can, for example, be supported by instructors providing opportunities for regular voluntary learning assessments (without AI).
  6. Even though the use of AI is generally encouraged, it can sometimes be pedagogically useful to adopt more differentiated approaches. Students must, in any case, have clarity about the rules for AI use in courses. This should be discussed with participants at the beginning of each semester in all courses.
  7. The GWDG offers a portfolio of AI services for all university members (students and instructors), e.g., its own chatbot (ChatAI). This chatbot provides access to AI models hosted by the GWDG as well as to external models such as those from OpenAI (ChatGPT). These offerings should be introduced and explained to students of all disciplines at the beginning of the semester.

Examinations

For examinations taken without supervision (e.g., term papers, essays), students have access to AI text models. Prohibiting their use is not advisable, not only because supervision is lacking, but also because meaningful use of such tools is already part of prior school education and will be important for future professional life. Proper use in the sense of good scientific practice (GSP) should therefore also be part of the examination. At the same time, it must be ensured that the submitted work represents an independently performed achievement. In principle, a critical analysis of the examination formats should be conducted for each course or module.

Against this background, the following recommendations are provided:

  • In supervised examinations (e.g., written exams and oral exams), the use of AI tools is not permitted unless explicitly allowed. Use during bathroom breaks or similar situations would constitute an attempt to deceive. In some cases, it may be advisable to use, for instance, an e-examination platform for assignments that are written at home, where the text must be submitted within a defined time frame.
  • In unsupervised examinations, the following should apply:
    1. AI models should generally be considered permissible aids in unsupervised exams, also because it cannot be monitored whether students refrained from using them.
    2. The permissibility of AI models as aids ends where parts of the submitted work originate from an AI model and are not transparently declared as such.
    3. The use of AI tools must be transparent. As a rule, students should explain, in addition to the submitted work, using guiding questions (see Appendix 1) how AI tools were used in creating the examination submission.
    4. As an alternative or complementary procedure for declaring AI use, introducing a methodology section is recommended, as is already standard practice in some disciplines. Here, all relevant tools used in preparing the work are explained.
    5. If AI competencies are not direct examination content, transparent use of AI models should have neither positive nor negative effects on the grading of the submission.
    6. Examiners should adjust examination formats and questions so that the independence of student work can still be verified even when AI tools are used. For suitable exam formats, examiners may decide to waive the declaration requirement.
    7. At the beginning of the semester, the specific requirements and framework for AI use should be discussed with students in each course. This should also reflect the students’ personal responsibility in accordance with good scientific practice regarding AI usage.
    8. The nature of independent work changes when using AI models. For example, text passages may be developed, evaluated, and linked with sources in a question-and-answer dialogue with an LLM based on competencies acquired in the course. It does not, per se, contradict independent work if such text blocks, once linked with previously identified sources, are incorporated into the submitted examination result.

All study committees and program deans should work together with instructors and students to consider the recommendations outlined here regarding AI models and their impact on teaching and examinations, and refine them as appropriate for the specific discipline. Special attention should also be paid to examination formats and the design of assessments.


Curricular Developments

In addition to integrating AI systems into individual courses, it is also useful to address these developments at the curricular level. In this context, improving the following competencies is considered important:

  • Media and IT competencies
  • Language and text competencies
  • Evaluation, assessment, and reflection competencies
  • Methodological competence (academic work) and good scientific practice

Next Steps

As a continuation of the recommendations presented here, more specific guidance will be developed for instructors on the “Use of AI Tools in Teaching” and “AI‑aware Examination Formats,” as well as recommendations for students. However, these can only reflect an interim status and will remain provisional in nature.

Overall, the recommendations aim to identify ways to integrate AI models in academic contexts. At the same time, they seek to ensure that AI systems are used responsibly and with awareness of their inherent potential.


Appendix 1: Guiding Questions for Students on the Use of AI

This template serves as a guide for ensuring transparency about AI use in independently completed written examinations, in particular final theses. The answers to the following guiding questions should make clear in what ways AI tools were used in creating the work.

In an extended annex, examples are provided for each question. These examples serve solely as orientation for formulating answers; they neither cover all possibilities nor constitute a recommendation for specific uses of AI. The questions are not meant to be worked through mechanically; rather, they provide a framework within which students can reflect on their use of AI while preparing the examination work. Only the questions applicable to the respective work need to be answered.

It should also be noted that different disciplines or programs may have specific requirements for AI use and its declaration. While in some disciplines the declaration may, for example, be provided solely as answers to these questions in an additional document attached to the work, in other disciplines it is customary to include specific information in the methodology section of the work. Instructors will provide details on the exact requirements in the courses.

  1. How was AI used in the ideation process?
  2. How was AI used in working with literature and sources?
  3. How was AI used in the process of academic writing?
  4. How was AI used in methodology, particularly in data collection and analysis?
  5. How was AI used in other activities?

Appendix 2: Guiding Questions for Transparency of AI Use in Examinations

The checklist provided as a separate annex is intended to support instructors in reviewing and adapting their courses in the context of Artificial Intelligence, with a particular focus on the design and conduct of examinations. The checklist is provided as a working document.