OPINION: Instead of punishing students for using AI, colleges and universities must provide clear, consistent guidelines and rules

Haishan Yang is a success story — and a cautionary tale.

Yang was the first person from his rural village in China to earn a scholarship to attend graduate school abroad. After receiving his master’s degree in Austria, he earned a doctorate in economics in the United States and was working on a second Ph.D. when the University of Minnesota expelled him last fall.

Yang was accused of using generative AI on an open-book online exam he had to pass before he could begin writing his dissertation. Though he had previously used AI for translation and grammar support, he denied using it on the exam.

Yang’s expulsion amounted to what he called “a death penalty” — it cost him his student visa and derailed his promising career as an academic researcher.

He has since sued the university and a faculty member, and the litigation is ongoing in the U.S. District Court of Minnesota. The episode is damaging for everyone involved, and it underscores the lack of a cohesive AI strategy at a time when colleges and universities should be seeking to become “AI forward” and establish clear, consistent guidelines on AI use.

As AI use becomes routine in higher education and the workplace, institutions that expel students for using AI are likely punishing themselves in the long run. Instead, they should teach students to become effective and responsible users of the technologies their future employers will expect them to know.

With higher education teetering on the edge of an enrollment cliff, colleges and universities should embrace AI or risk losing students and scholars to institutions that take a more proactive approach to these transformative technologies.

Building an AI-forward campus culture begins with AI literacy. A new study suggests that in academic settings, students unfamiliar with AI may be more likely to become overly reliant on it.

By teaching students about AI’s capabilities, flaws and limitations, institutions can help students understand where and how to use these technologies to support their coursework.

Employers increasingly expect their early career hires to be AI literate, and the vast majority of college graduates say AI should be incorporated into college classes. Indeed, job candidates with AI skills and experience often land more interviews and command higher salaries.

If the primary purpose of college is to prepare learners for the workforce, then colleges must ensure that students know and understand AI. Harsh and inconsistent AI policies stand in direct conflict with this duty. Yet most institutions still lack acceptable-use AI policies.

In Yang’s case, the reviewers of the exam in question relied on AI detection software, which is far from an exact science. At best, this software is inconsistent. At worst, it simply doesn’t work and can display bias against neurodivergent students and students like Yang whose first language isn’t English.

If colleges are determined to use tech-driven solutions to detect AI-generated work, they should teach faculty about AI detection’s shortcomings and never rely solely on AI detection to make consequential decisions about failing or expelling a student.

The year before his expulsion, Yang submitted an assignment that included what may have been an AI prompt. As reported by Gizmodo, Yang’s assignment read: “re write it [sic], make it more casual, like a foreign student write but no ai.” Yang denied using AI; the university issued him a warning.

While that apparent prompt certainly raises questions about what happened during his open-book exam, I still wonder whether expulsion was the right decision. Could the university, professors or department have implemented a more robust strategy sooner, one that headed off potential misuse while capitalizing on AI’s upside for student learning?

Related: International students may be among the biggest early beneficiaries of ChatGPT

Creating an AI-forward campus means embracing AI technology — not dismissing or banning it — because research shows that safe AI strategies can have enormous benefits for higher education.

A majority of college students are using AI tools to handle basic tasks such as proofreading, brainstorming and summarizing lecture notes, a study by Educause found.

College faculty and institutional leaders say AI tools can power learning analytics; improve accessibility for students, faculty and staff with disabilities; and generally broaden access to higher education.

An AI-forward approach requires clear expectations and consistent policies throughout an institution, especially because so many colleges emphasize interdisciplinary research and scholarship. Putting guardrails around AI use is fine, but institutions should be extremely careful about how they use AI-detection tools.

Rather than using tech detection tools to play “gotcha,” educators should use the tools to support learners. For example, colleges should consider proactively equipping students with AI-detection tools so they can flag and address potential AI text in their own writing before they submit their assignments.

The California State University system has the potential to become a model for AI-forward culture. In February, the system announced a partnership with OpenAI to bring a version of ChatGPT customized for higher education to its 460,000 students and 63,000 faculty and staff at its 23 campuses.

That partnership will include free coaching and certifications to help everyone learn to use ChatGPT — and generative AI — effectively, and it will help students gain entry to apprenticeship programs in AI-driven industries so they can sharpen their AI skills.

This broad access to AI has the potential to enhance teaching, learning, research and administrative duties and give graduates the AI tools they’ll need to succeed in their careers.

By creating an AI-forward culture, institutions will be seen as innovative and welcoming of change as higher education enters a new era of increasing competition for students and resources.

Kelsey Behringer is the CEO of Packback.

Contact the opinion editor at opinion@hechingerreport.org.

This story about college AI policies was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s weekly newsletter.
