Centre for Pedagogical Innovation guidance on Generative AI in university teaching and learning at Brock University.
The emergence of Artificial Intelligence (AI) tools such as ChatGPT, New Bing, Google Gemini, Meta’s LLaMA, and Stanford’s Alpaca has implications for university teaching and learning as well as academic integrity.
The creativity and expertise of Brock University’s instructors, the integrity of its students, and policies such as the Guide to Academic Computing Behaviour in the Undergraduate Calendar and the Academic Integrity Policy provide a strong foundation for approaching AI, but there is still much to understand.
Brock University’s Provost and Vice-President, Academic has formed an Advisory Group on Artificial Intelligence.
This resource provides Brock University instructors and learners with a starting point for understanding AI’s implications.
Last updated: October 10, 2024
Frequently Asked Questions
ChatGPT is a large language model developed by OpenAI. It is based on the GPT (Generative Pre-trained Transformer) architecture, which was trained on an unprecedented amount of text from the internet as it existed up to 2021. The model was then trained on conversational data so that it can understand and generate human-like text. It has been widely recognized as a significant step beyond the AI tools previously available to the public.
Instructors are welcome and encouraged to test ChatGPT, which is currently free to use upon registration. Instructors can use their non-BrockU accounts to opt into Microsoft’s AI-powered New Bing experience. Google’s AI-powered Bard has a similar sign-up process. Meta’s LLaMA currently requires some software development knowledge to run on your own computer.
You can also test other similar AI tools to assess their capabilities, for instance, to see whether they can respond to the assignments used in your courses, or how well they improve the readability and grammar of a paragraph. Experimentation is also useful for assessing the limits of a tool.
Please note
- Due to high demand, access to ChatGPT is at times only available to ChatGPT Plus subscribers.
- ChatGPT was recently released as an iOS smartphone app that requires a subscription. Beware of imitators.
Professor Matt Bower, Interim Dean of the School of Education at Macquarie University, recently shared the following video summary, posted on YouTube, with the EDUCAUSE community.
Updated: May 23, 2023
Large Language Models (LLMs) are trained to predict the next word in a sentence, given the text that has already been written. Early approaches to this task (such as the next-word prediction on a smartphone keyboard) are only coherent within a few words, but GPT can pay attention to words and phrases written much earlier in the text, allowing it to maintain context for much longer. This capacity is refined through many training phases involving automation, outsourced human labour, and “beta testing” such as the current public availability. As a result, models like ChatGPT, and its underlying technology GPT-3, are good at predicting which words are most likely to come next in a sentence, which results in generally coherent text.
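To make this concrete, the short sketch below asks a small, openly available GPT model for its most likely next words after a prompt. It is a minimal illustration, assuming the Hugging Face transformers and PyTorch packages and the public gpt2 checkpoint; ChatGPT’s own model is not publicly downloadable, so this shows the mechanism rather than ChatGPT’s actual behaviour.

```python
# Minimal sketch of next-word prediction, assuming the Hugging Face
# `transformers` library, PyTorch, and the public `gpt2` checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The university library is open"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every vocabulary token

# The scores at the final position rank candidate next tokens.
top = torch.topk(logits[0, -1], k=5)
for token_id, score in zip(top.indices, top.values):
    print(repr(tokenizer.decode([int(token_id)])), float(score))
```

Running this prints the five tokens the model considers most likely to follow the prompt, which is the same prediction step that, repeated token by token, produces an entire generated text.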
One area where generative AI tools often fail is in reproducing facts or quotations. To a model trained to sound convincing, the only important aspect of a fact is that it sounds like a fact. This means that models like GPT-3 frequently generate claims that sound plausible but that an expert can see are clearly wrong.
Related areas where ChatGPT seems to struggle include citations and discussion of any event or concept that has received relatively little attention in online discourse. ChatGPT sometimes fabricates references when asked to provide them, perhaps treating authors, dates and publications as interchangeable elements rather than discrete facts. To see these limitations for yourself, try asking the system to generate your biography. Unless numerous accurate biographies of you exist online, ChatGPT is unlikely to generate a comprehensively correct one.
Chat-based LLMs have also proven to be an unreliable source about their own capacities, often reporting their ability to generate and recognize text differently depending on the context in which the question is asked. This is thought to be a symptom of chat tools’ bias towards appearing helpful, and it further demonstrates their unreliability.
Updated: April 27, 2023
As with all technologies, and especially those available at no financial cost, there are risks and considerations that should be taken into account before use, including how data will be stored and used. These include avoiding biases in AI outputs, being transparent about data collection and privacy, assuming responsibility for the limitations and potential inaccuracies of AI systems, clarifying ownership of AI-generated intellectual property, and being open about the use of AI in the curriculum.
AI can perpetuate biases that can be difficult to address because of the complexity of AI models and the perceived objectivity of receiving results from a machine. With AI models, bias can be introduced at several points, including the training data, the coding, the validation, and the presentation of results. Just as not everything written on the internet is accurate or free of bias, a large language model trained primarily on internet-based information can replicate inaccuracies and bias. The way that some AI tools, such as ChatGPT, obscure their sources of information can make assessing the value of responses difficult.
Updated: February 27, 2023
Yes. Some instructors may wish to use the technology to demonstrate how it can be used productively, or what its limitations are.
Asking or requiring your students to access these tools is complicated by the fact that these tools have not been vetted by the University for privacy or security. The University generally discourages the use of such systems for instruction until we are assured that the system is protecting any personal data (e.g., the email address used to register on the system).
Microsoft has indicated that they will integrate AI into the premium tier of MS Teams. Brock University does not have access to Teams Premium. Microsoft’s Azure Cognitive Service for Language and AI “Copilot” are being integrated into the Microsoft suite as part of Microsoft’s large investment in OpenAI (the developers of ChatGPT).
If you decide to ask or encourage students to use this or other AI systems in your courses, there are a few issues to consider before you do so:
- There may be some students who are opposed to using AI tools. Instructors should consider offering alternative forms of assessment for those students who might object to using the tools, assuming that AI is not a core part of the course.
- Instructors should consider indicating on their syllabus that AI tools may be used in the course and, as relevant, identify restrictions to this usage in relation to learning outcomes and assessments.
- Be aware that not everything generative AI technology produces is correct. You may wish to experiment with ChatGPT to see what kinds of errors it generates; citations are often fabricated, and false premises in prompts are sometimes accepted as fact.
- There is a risk that ChatGPT may produce plagiarized text or perpetuate biases inherent in the material on which it was trained.
- ChatGPT has at times been overwhelmed by users and unavailable for hours at a time. OpenAI may also change its terms of use without notice. If you plan on using it ‘live’ in the classroom, consider having a back-up plan.
Updated: March 21, 2023
The University expects students to complete assignments on their own, without any outside assistance, unless otherwise specified. Instructors are strongly encouraged to speak to their students about what tools, if any, are permitted in completing assessments. Written assignment instructions should indicate what types of tools are permitted; vague references to ‘the internet’ will generally not suffice today.
If an instructor indicates that use of AI tools is not permitted on an assessment, and a student is later found to have used such a tool on the assessment, the instructor should inform their Chair as the first step under the Brock University Academic Integrity Policy.
Some students may ask if they can create their assignment outline or first draft using ChatGPT and then simply edit the generated text; consider what your response to this question might be before discussing the assignment with your students, and perhaps address it pre-emptively.
You may wish to consider some of the tips for assessment design below. Instructors may choose to contact the Centre for Pedagogical Innovation for more information about assignment design. Consider what your learning goals are for the assignment, and how you can best achieve those in the context of this new technology.
If an instructor specified that no outside assistance was permitted on an assignment, then the use of ChatGPT can be considered unacknowledged assistance. This categorization is in keeping with how the University has classified the use of other unauthorized technology tools, such as Chegg, in the past.
We understand that instructors are concerned about students using artificial intelligence (AI)-generated output in ways that are not authorized by the instructor. However, in line with the recommendation from the Provost’s advisory group on artificial intelligence, the Centre for Pedagogical Innovation strongly cautions instructors against the use of tools that purport to detect the use of generative AI in student coursework, for two reasons:
- These tools are neither accurate nor reliable (Weber-Wulff et al., 2023); they produce both false negatives and false positives and are known to disproportionately disadvantage learners for whom English is a second language (Liang et al., 2023).
- As per Section 3 C.4 of the Faculty Handbook (Ownership of Student-Created Intellectual Property), the ownership of student-created works rests with the creator of the work. This means that student coursework is considered the student’s intellectual property (IP) and so may not be uploaded or transferred to third-party platforms (including AI detection tools).
If an instructor cannot accept academic work as meeting the expected standards of originality, namely when the use of unauthorized AI-generated content is suspected, they are required to follow the provision outlined in the Academic Integrity Policy under Appendix 3 (“Instructors are responsible for taking steps to detect plagiarism in all course work that is submitted by Students”).
For support with academic integrity processes, contact Ana Cassamali at acassamali@brocku.ca.
For support with course and assessment design, contact the Centre for Pedagogical Innovation at cpi@brocku.ca.
Updated: April 27, 2023
Yes. If you use multiple-choice quizzes/tests, assume that generative AI systems will be able to answer the questions unless they pertain to highly specific subjects; new knowledge; or the specifics of classroom discussions, the content of which cannot be found on the internet. Some instructors may wish to test this by using their multiple-choice/short answer assessments as prompts and reviewing ChatGPT’s responses.
Talking to students about ChatGPT and its limitations lets them know that you are aware of the technology, will likely generate interesting discussion, and helps to set guidelines. Let students know clearly, both verbally and in assignment instructions, which tools may or may not be used to complete an assignment. Advise students of the limitations of the technology and its propensity to generate erroneous content.
Students may be tempted to use AI even for assignments where the instructor has specifically precluded its use. Brock University has information on how instructors can support student integrity including designing submission methods and timelines that could reduce this temptation.
If you choose not to allow the use of AI tools on your assignments, here are some tips for designing assignments to which generative AI systems will have difficulty responding:
- ask students to respond to a specific reading, particularly one from the last year that may not be on the internet or may not have generated much commentary online. Generative systems struggle to create accurate responses to prompts for which there is little or no information on the internet.
- ask students to create a video or recording that explains or expands on their work.
- use a flipped classroom approach, and/or assign group work to be completed in class, with each member contributing.
- ask students to create a first draft of an assignment, or an entire assignment, by hand in class. (Consider the accessibility needs of students who may require accommodations.)
- call on students in class to explain or justify elements of their work.
- ask students to use ChatGPT to generate material, and then ask them to critique GPT’s response.
- request citations in all written assignments and, if feasible, spot-check them (see the sketch after this list); fabricated citations are among ChatGPT’s gravest shortcomings.
- talk to your colleagues about ideas for your discipline. Different disciplines, such as computer science, history, language studies and visual studies may be developing new norms of pedagogy.
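For instructors comfortable with a little scripting, the sketch below shows one way to spot-check a suspect reference against Crossref’s public metadata API. It is a rough illustration assuming the Python requests package, and the example citation is hypothetical. A match does not prove a citation is accurate, and a miss does not prove fabrication (not every venue is indexed by Crossref), so treat the result as a starting point only.

```python
# Rough sketch: spot-check a suspect citation against Crossref's public
# REST API. Assumes the `requests` package; the citation is hypothetical.
import requests

citation = "Smith, J. (2021). Machine learning in the humanities classroom."

resp = requests.get(
    "https://api.crossref.org/works",
    params={"query.bibliographic": citation, "rows": 3},
    timeout=10,
)
resp.raise_for_status()

items = resp.json()["message"]["items"]
if not items:
    print("No close matches found; the citation may be fabricated.")
for item in items:
    title = (item.get("title") or ["(no title)"])[0]
    print(title, "|", item.get("DOI", "no DOI"))
```

Comparing the returned titles and DOIs against the submitted citation is usually enough to flag references that do not correspond to any indexed publication.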
Updated: February 27, 2023
As with all assessment design revisions based on the availability of AI tools, it’s important to align the assessment with your learning objectives: what is the assessment activity measuring, and does this fit with the learning outcomes for the course? Visit CPI’s Assessment Design page for more information.
Brock University discourages the use of AI tools to analyze student work.
Transmitting student work to internet-based AI tools for the purpose of analysis falls within the scope of the Faculty Handbook Section 3, articles 10.1.1 and 10.4, which govern phrase-matching software programs. These sections mandate that if an instructor has decided to employ such systems, students must be informed in writing at the beginning of the course. Further, transmission of student work collected by the University to a third party can only occur after appropriate privacy and security reviews.
Students’ submitted work is considered their intellectual property (IP) and should be treated with care.
This question is still being actively debated by the global academic community, and we expect standards of practice to emerge in the coming months. The concerns range from transparency about assistance and the originality of work to the question of whether AI tools can meet the requirements for authorship at all, given that they cannot take responsibility for the submitted work.
The MLA has some guidance on the general question of citing AI output. However, this guidance predates ChatGPT, and may become obsolete as these new tools take on a greater presence in academic writing.
Updated: February 27, 2023
No. Large Language Model (LLM) technology is at the heart of a variety of generative AI products currently available, including writing assistants (e.g., Jasper, Writer, Moonbeam), image creation programs (e.g., DALL-E 2, Midjourney, Stable Diffusion), AI-assisted search engines (e.g., New Bing, Google Bard), and programs that assist people writing computer code (e.g., GitHub Copilot). It is also possible to build your own system on this underlying technology (GPT-3 or another LLM) if you are interested in doing so, as sketched below.
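As one illustration of what building on an LLM can look like, the sketch below calls a hosted model through OpenAI’s API. It is a minimal example, assuming the openai Python package (version 1.x) and an API key in the OPENAI_API_KEY environment variable; the model name is illustrative, and other providers and open models expose similar interfaces.

```python
# Minimal sketch of building on a hosted LLM, assuming the `openai`
# package (v1.x) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; use any available chat model
    messages=[
        {"role": "system", "content": "You are a concise writing tutor."},
        {"role": "user", "content": "Suggest one way to tighten this "
                                    "sentence: 'There are many reasons "
                                    "why this is the case.'"},
    ],
)
print(response.choices[0].message.content)
```

The system message shapes the model’s behaviour, and the user message carries the actual request; a purpose-built tool is essentially this pattern wrapped in an interface.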
It is also worth noting that a variety of products (online and mobile apps) have appeared that use GPT-3 or other LLM technology and require paid subscriptions. Some add features such as editing tools and templates; others do nothing more than the free version does and are meant to fool people into paying for a service that is currently free.
Updated: May 23, 2023
This FAQ incorporates, with permission, the thoughtful work prepared by the University of Toronto’s Vice-Provost, Innovations in Undergraduate Education, Dr. Susan McCahan.
Thanks to colleagues in Brock University’s Faculty of Education, the Digital Scholarship Lab, and the Academic Integrity Manager for their contributions.
Additional Resources
Brock University AI Essentials micro-credential
Brock University events and meetings
- Learn how AI is shaping reality at public talk, November 7, 2024
- Brock University, Senate Teaching and Learning Policy Committee (2023, January 20). Meeting #5 (2022-2023)
- Brock University, Senate (2023, March 22). 708th Meeting of Senate
- Coffee and conversation at the Brock LINC Innovation Social: AI Tools in the University, April 4, 2023
- Artificial Intelligence (AI) Day at Brock University, November 9, 2023
- AI Essentials for Educators: A Practical Guide for Next Generation Learning – Simon Chow, November 20, 2023
Around the Internet
- Clarke Gray, B. & Cormier, D. (2022, December 16). What can our classrooms look like after sites like Chegg, Photomath and OpenAI change what it means to ‘do your own work’? [Recording]
- Clarke Gray, B. (2023, January 9). Whither comes the data: Current uses of AI and data set training in higher ed. TRU Digital Detox
- Eaton, S. (2022, December 9). Sarah’s thoughts: Artificial intelligence and academic integrity.
- D’Agostino, S. (2023, January 12). ChatGPT advice academics can use now. Inside Higher Ed
- Feldstein, M. (2022, December 16). I would have cheated in college using ChatGPT. eLiterate
- McMurtrie, B. (2023, January 5). Will ChatGPT change the way you teach? The Chronicle of Higher Education
- Rosalsky, G., & Peaslee, E. (2023, January 17). This 22-year-old is trying to save us from ChatGPT before it changes writing forever. NPR
- Warner, J. (2023, January 4). How about we put learning at the center? Inside Higher Ed
Updated: October 10, 2024