8 Comments
Guy Wilson

Shreeharsh, I agree that it is difficult to have the conversations we need about AI. Educational Progressivism and politics are also tangled up with personal and institutional ambitions. If it can be done, then we should start exploring ways to get it implemented in regulations, even if we have to start out slowly and in small ways, with pressure from universities. I am skeptical that education alone can bring about this change. It may be that we need to find allies in professional fields that also need this capability, perhaps law or medicine.

You noted that OpenAI itself backed off when 30% of surveyed users suggested they would use the product less; if companies won't do this voluntarily, it would require enforceable regulation, something most AI companies resist tooth and nail. There are other vested interests that would likely push back: intelligence services conducting disinformation campaigns (though they could circumvent regulations), marketers who do not want their AI-generated product reviews caught, politicians who don't want to be seen using it, and so on.

Watermarking text in a robust way is hard. Going to the blog post from OpenAI about their watermarking (https://openai.com/index/understanding-the-source-of-what-we-see-and-hear-online/), they note:

> While it has been highly accurate and even effective against localized tampering, such as paraphrasing, it is less robust against globalized tampering; like using translation systems, rewording with another generative model, or asking the model to insert a special character in between every word and then deleting that character - making it trivial to circumvention by bad actors.

Researchers at the University of Reading claim to have created a watermark that cannot be easily altered, though even that can still be defeated.

I wonder about larger issues of ethics and integrity in society. In a subscriber-only post this week, Audrey Watters (https://2ndbreakfast.audreywatters.com/dishonor-code/) wonders whether the cultural shift outside universities has been significant enough that our plagiarism concerns are simply irrelevant. She doesn't put it this way, but do we live in a culture where lying, cheating, and deception are norms, and where developing deep skills and knowledge is obsolescent? And how far has that already seeped into academe? That is the larger context that we would have to change first.

Shreeharsh Kelkar

Thanks for the comment. Sorry for the delay (been a long week!). I agree with you that universities need more allies in this fight, but I think the fact that AI companies see students as a big part of their customer base gives us a lot of leverage. I also agree that AI companies will fight tooth and nail against this, but it's not as though we haven't seen this before with negative externalities like pollution. That said, I think this standardization exercise will work much better if AI companies get benefits from it, not just costs. And I think that is very possible: if we could have basic "law and order," it opens a window of opportunity for instructors to integrate AI into their assignments, and that is ultimately to the companies' benefit too. (Obviously, we would ask students to use only AIs with watermarking.)

Last but not least, I think watermarking will only work if it's adopted by all companies (this means students can't just copy-paste an output from ChatGPT, put it into Claude, and reword it), but this will also solve some of the "technical" issues companies are running into right now, with each company trying something out in its own software.

I will read the Audrey Watters post soon; while I have learned a lot from Watters, I think her posts are not designed for pragmatic solution-making, which is where I most want to go.

Guy Wilson

This is still weeks or months off, and so far it is only Google (unless they can get others to sign onto their particular watermarking technology), but it sounds like your wish might be granted.

https://blog.google/technology/ai/google-synthid-ai-content-detector/

Shreeharsh Kelkar

Oh wow. Thanks for this. I think I'm going to test this out! I hope other AI providers join! If we could have reliable and transparent AI detection, then I, for one, would love to integrate more AI into my assignments in my large classes.

Rob Nelson

I'm ambivalent about watermarking without falling into the opposing camps you outline. Or maybe I am in one of those camps, but deluding myself into thinking otherwise.

Regardless of my theoretical confusion, I am interested in the empirical question of whether watermarking works. The work of Soheil Feizi, John Kirchenbauer, and others at the U of Maryland seems like the best window into what's possible, and right now, as you say, their view seems to be not yet.

I'm not teaching history this fall. I plan to allow the use of AI tools and expect the students who choose to use them to share their analysis of what educational value they get from the tools. My assessment will be based on what they do in class in the form of presentations, short talks, and group-developed workshops conducted with non-digital technologies.

There will be less writing in these classes that gets graded, but writing will be part of developing in-class activities. This may fail miserably, but I'd rather do it this way than rely on tools that purport to validate students' non-use of AI in writing.

In any case, I want to live in the world you describe in the last paragraph, but I am not sure that world exists anymore.

Shreeharsh Kelkar

Thanks for the comment. Many of my colleagues are doing similar things, and I do the same in my smaller classes, which tend to be much more hands-on anyway.

The big problem for me is my large 100+ student classes (where I get graders but no TAs), and it is there that I most feel the need for an enforcement mechanism that will give me some baseline "law and order" so I can then think about assignments that let students use AI in a productive way.

Swen Werner

Hi, very interesting article. Maybe I have a solution, but it's not tech:

Watermarking fails for short or structured text.

It needs long, unconstrained output to embed detectable statistical patterns.

In rigid formats (e.g. reading responses), there’s no room for watermarking to “hide.”
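To make the length point concrete, here is a minimal sketch (Python) of a Kirchenbauer-style "green list" detector; it is not any vendor's actual scheme, and the word-level hashing and the green-list fraction GAMMA are illustrative assumptions. Detection rests on a z-score of how many green tokens appear, and that score only grows with the amount of unconstrained text.

```python
import hashlib
import math

GAMMA = 0.5  # assumed fraction of the vocabulary marked "green" at each step

def is_green(prev_word: str, word: str) -> bool:
    # Pseudo-randomly assign `word` to the green list, seeded by the previous word.
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] < 256 * GAMMA

def detection_z_score(words: list[str]) -> float:
    # z-score of the observed green count against the no-watermark null hypothesis.
    n = len(words) - 1
    hits = sum(is_green(prev, cur) for prev, cur in zip(words, words[1:]))
    return (hits - GAMMA * n) / math.sqrt(GAMMA * (1 - GAMMA) * n)

# Usage: detection_z_score(submitted_text.split())
```

With the same green-hit rate of roughly 75%, the score works out to about 0.5 times the square root of the length: a 30-word reading response gives z of roughly 2.7 (borderline at typical thresholds), while a 600-word essay gives roughly 12. That is why short or rigid formats leave the watermark nowhere to hide.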

Cryptographic methods don’t help unless the entire writing process is locked inside a controlled system. If students can type or paste from outside — even using their phone or asking someone else — the chain is broken, and detection becomes meaningless.
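A minimal sketch of why the chain breaks, using HMAC as a stand-in for what would really be a public-key signature and a hypothetical PROVIDER_KEY: the signature attests only to the exact bytes it was computed over, so one retyped or pasted sentence makes verification fail outright instead of flagging anything useful.

```python
import hashlib
import hmac

PROVIDER_KEY = b"hypothetical-provider-signing-key"  # illustrative stand-in only

def sign(text: str) -> str:
    # Provider side: sign the exact bytes of the generated text.
    return hmac.new(PROVIDER_KEY, text.encode(), hashlib.sha256).hexdigest()

def verify(text: str, signature: str) -> bool:
    # Checker side: valid only if the text is byte-for-byte what was signed.
    return hmac.compare_digest(sign(text), signature)

tag = sign("Generated paragraph from the provider.")
print(verify("Generated paragraph from the provider.", tag))                       # True
print(verify("Generated paragraph from the provider. One pasted sentence.", tag))  # False
```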

But that’s not the right way to tackle this.

If an LLM produces a passable answer, then the task is flawed — not just the method. Let students voluntarily use LLMs and let them learn their limits through firsthand failure — not punishment. Learned experience.

You give them the test and say:

“Use an LLM to create a future-proof prompt template that guarantees high grades with minimal effort.”

Let them believe it's a game about efficiency. But what they actually build is a case study in cognitive outsourcing.

Then you reveal:

“You didn’t build a shortcut.

You built a crutch — and the grade it gets you depends on how well you understand that.”

LLMs are easy to fool — if you understand how they encode, retrieve, and simulate patterns. They don’t reason — they simulate what looks like reasoning based on language. That’s not good or bad — it’s just how they work. And that means students risk being held back by a dumb machine — one they can’t beat by being smarter, because they already are. But in this world, being smart is no longer enough. They have to learn how not to get trapped by the tool.

Joseph Stitt

I don't think university professors are going to be able to organize in a way that has any discernible effect on the people driving the AI industry. And given the number of people associated with higher ed who have critiqued "punitive approaches" to academic misconduct as being outdated, evil, and so on, colleges and universities don't have the credibility to make the case.

But the more fundamental problem with an industry solution is the industry itself and the people who run it. When you make plagiarism mega-bots that vacuum up creative work from millions of writers and artists without compensating them, you're not the sort of person who cares about lying, cheating, or stealing.

I like the watermark idea conceptually, but I'm afraid this problem, which is a huge problem, is going to end up in the laps of the individual professors who care about it. There are many such professors out there doing the best they can. The thing that I would ask of universities is that they try to support these people--or at least leave them alone--instead of steamrolling them on the grounds that the customer is always right.
