Can Oxford and Cambridge Save Harvard From ChatGPT?

Artificial intelligence (AI) is capable not just of disrupting higher education but of blowing it apart. The march of the smart machines is already well advanced. AI can easily pass standardized tests such as the GMAT (Graduate Management Admission Test) and the GRE (Graduate Record Examination) required by graduate schools. AI received a 3.34 GPA (grade point average) across Harvard freshman coursework and a B grade on the final exam of a typical core Wharton Business School MBA course.

What can be done to avoid a future in which AI institutionalizes cheating and robs education of any real content? This question is stirring an anxious debate in the university world, not least in the United States, a country that has long been a pacemaker in higher education and technology, but one that is losing confidence in its ability to combine equity with excellence. With the return to campus nigh, the Washington Post warns of an autumn of “chaos” and “turmoil.” This debate should also be coupled with another, equally pressing one: What does the ease with which machines can perform many of the functions of higher education as well as humans tell us about the deficiencies of the current educational model?

One solution to the problem is to ban students from using AI outright. Sciences Po in Paris and RV University in Bangalore are taking this draconian approach. But is it realistic to try to ban a technology that is rapidly becoming ubiquitous? And is it good preparation for life after university to prevent students from using a tool that they will later rely on at work? The banners risk making the same mistake as Socrates, who, in Plato’s Phaedrus, opposed writing things down on the grounds that it would weaken the memory and promote the appearance of wisdom, not true wisdom.

A more realistic solution is to let students use AI, but only if they do so responsibly. Use it to collect information, organize your notes, or check your spelling and facts. Refrain from getting it to write your essays or ace your tests. But this raises the practical question of how to draw the line. How do you tell if students have merely employed it to organize their notes (or check their facts) rather than to write their essays? And are you really doing research if you get a bot to do all the work and then merely fluff the material into an essay?

The “use it responsibly” argument opens the possibility of an academic future that is a cross between an arms race and a cat-and-mouse game. The arms race will consist of tech companies developing ever more sophisticated cheating apps and other tech companies developing ever more sophisticated apps to detect cheating. The cat-and-mouse game will consist of professors trying to spot the illicit use of AI and students trying to outwit them.

Neither approach seems to work, particularly for spotting cheating, let alone eliminating it. OpenAI, the maker of ChatGPT, unveiled an app this January that was supposed to expose AI-generated content, only to scrap it quietly because of its “low rate of accuracy.” Another company, Turnitin.com, has discovered that bots frequently flag human writing as AI-generated. A professor at Texas A&M, Jared Mumm, used ChatGPT to check whether his students might have been using the system to write their assignments. The bot claimed authorship, and the professor held up his students’ diplomas until they provided Google Docs timestamps showing that they had actually done the writing. It turns out that ChatGPT is over-enthusiastic in its claims of authorship.