A New Work Group at WNMU Is Addressing How Higher Education Can Respond to AI

The recent proliferation of generative artificial intelligence (AI) technologies raises many questions about the appropriate use of AI in academic settings. At WNMU, those questions prompted the formation of the AI Work Group, a body of faculty, staff, and students that began meeting in the spring of 2023.

WNMU Provost and Vice President of Academic Affairs Jack Crocker formed the AI Work Group after hearing a presentation by WNMU Professor of Biology Manda Jost at the spring semester Welcome Back Convocation. Jost explained the timing of her presentation: spring “was the first semester we had to really seriously start thinking about students using AI, both from the point of view of a tool that helps them and as something that might compromise their learning.”

Jost is committed to considering AI in its full complexity, not just as a dangerous temptation to cheat but also as a potentially useful resource. “AI are very powerful tools,” she said, “and we need to be training our students how to use them well and intelligently. If we are simply trying to shield them from AI, so they do their [own] homework, but we are not training them how to be intelligent users of AI, we are selling them short because these tools can do so many things quickly, often more efficiently and sometimes at a higher quality than humans do.” If the university does not teach students to use AI well, Jost suggested, they may be at a disadvantage when they graduate.

According to Jost, teaching AI skills is essentially a continuation of something WNMU already does: teaching digital literacy. She sees AI, however, as adding new complexity to that work. “WNMU has been teaching digital literacy for a long time, and . . . before the AI tools became so common, part of the digital literacy education was teaching students [how to] vet the results you might be getting from a search engine, such as Google. [Research is] not just about how to get information but about how you evaluate the quality of the information, the reliability of the information, and what is an ethical use of that information. Those questions I believe . . . are going to become more challenging with AI because there is going to be more of a tendency with these AI tools, like ChatGPT, to just ask it a question and take the output for granted.”

The ability to assess the quality and reliability of AI-generated material is especially important because products like ChatGPT are not always accurate, Jost indicated. ChatGPT has been known to misunderstand questions and commands, and it tends to make things up when it lacks sufficient data to compose a complete response. WNMU Senior Web Developer Sebastiano Marino said there are some things AI can do reliably and others it cannot. Marino advised, “Don’t ask ChatGPT about novels that have been turned into successful movies, because it might mix them up. Don’t try to debate with it because you might have an easy win, even when you are wrong.”

AI is also likely to reflect the culture, assumptions, and biases of its human programmers. “One of the big concerns about where AI is going is that AI tools could be biased—they are only as good as the information they are trained upon. And there have already been studies showing that the different [AI] language models have biases built into them—cultural biases, ideological biases, political biases,” Jost said.

Marino also advises caution with AI, even though he is optimistic about its potential. ChatGPT, he said, “encompasses most of the available knowledge as well as the known human downfalls and should be embraced … but with reason, since there are still a lot of concerns, not only with the accuracy and quality of AI-generated output but also related to data sourcing and privacy.”

These are the kinds of challenges the AI Work Group will be weighing as it continues its conversations about AI’s place in academics. So far, the group has identified a set of goals to guide its work. One is to develop a WNMU policy on AI. Another is to draft common syllabus language that individual instructors can tailor to the needs of their particular courses.

Beyond these tasks, the group hopes to educate faculty and students on the range and use of AI technologies and to explore and recommend new teaching methods that emphasize human skills, human thought processes, and interpersonal interaction. One of the more difficult questions it will address is how online courses will need to be modified.

Jost thinks the AI Work Group’s conversations will in many ways align with the questions central to the Applied Liberal Arts and Sciences at WNMU, questions common to every student’s academic experience at the university: What is truth? What is justice? What does it mean to be human? And what is the “good” life? She pointed to the third question as especially relevant to any discussion of AI, as it is at the heart of the “philosophical task of this group, which is to define what are those uniquely human skills and traits . . . that we need to emphasize now.”

The goals of the AI Work Group are broad, and there are bound to be surprises along the way, given how rapidly AI technologies are developing. As Jost said, “We are at the beginning of all this. We don’t know where it’s going to go.”
