How Students Actually Use AI, When No One Is Watching
If you sit still in a campus library long enough, you can hear a new kind of study group. It has one member and two minds. A student types. A model replies. The student frowns, then nods. Everyone else hears silence. That quiet is not a void, but a new homework ritual.
This essay braids two strands: Anthropic’s Education Report on real student conversations with Claude, drawn from about a million interactions filtered down to roughly 575,000 academic conversations, and local Tulane student surveys from consecutive semesters. The Anthropic team used a privacy-preserving pipeline called Clio that strips personal content and aggregates patterns, so nobody is peeking at anyone’s homework; that matters if we want honest signals from real use. The Tulane surveys, cleaned to exclude faculty who wandered into the student link, gave us 844 responses in the first run and 798 in the Spring run. Together they describe ordinary behavior with important consequences. Not a scare story. Not a miracle. Just the world we have to teach in.
Here is what the research found. Anthropic’s report sorts student interactions into four familiar postures. Sometimes a student wants a fast answer. Sometimes a fast paragraph. Sometimes a partner to reason with. Sometimes a coauthor. None of this is exotic. It is essentially the old spectrum of learning, from the single fact to the whole essay, now collapsed into one chat window. The shocking thing is not that students do all four. The shocking thing is that anyone is shocked.
The Work and the Worker
Students mostly talk to AI for two things that matter: creating and analyzing. They ask the model to draft, revise, explain, debug, and outline. In other words, they use it at the point where thinking is work. That should not scare us. It should make us curious. When a tool shows up exactly where effort lives, the lesson plan should not be to lock the toolbox.
Creating or improving educational content accounts for about 39 percent of conversations. Technical explanation and problem solving account for about 34 percent. The rest spreads across data analysis, research design, diagrams, translation, and proofreading. In terms of Bloom’s taxonomy, students often ask the model to operate at the higher levels first, Creating and Analyzing rather than Remembering, which is both the promise and the risk.
At Tulane, the patterns rhyme with the global story. The share of students who never use AI dropped from roughly 21.9 percent to 11.0 percent across our two survey runs. Daily use dipped from 19.0 to 14.2 percent, which sounds like retreat until you notice what is really happening: the middle bands are swelling. Between them, the two extremes shed roughly sixteen percentage points, and those points landed in the occasional-to-weekly middle. More students are using AI a few times a week, keeping what works, tossing what does not. They are becoming pragmatists, not evangelists. High familiarity with ChatGPT fell from about 60.6 percent to 31.8 percent in the Spring sample, likely because the cohort included more newcomers. Familiarity with Claude stayed low and steady around 5 to 6 percent, consistent with a STEM-heavy adoption pattern.
Confidence and the Check
Confidence is the interesting wrinkle. Many students do not feel sure they can judge AI output well. You can see the anxiety in their open responses. Hallucinations. Bad citations. Wobbly math. Code that looks right until it fails your test. That is not a reason to ban the thing. That is a reason to teach the thing. Education is the slow art of learning how to doubt your own first draft, whether it was written by you or your new silicon study buddy.
The share who rated themselves very or extremely confident at evaluating AI outputs slid from about 26.3 percent to 19.8 percent. Meanwhile, strong support for more AI in the curriculum, scored 8 to 10 on a 0 to 10 scale, cooled from about 37.9 to 20.9 percent, while strong opposition at 0 to 2 dropped from 22.8 to 15.4 percent. Put those numbers together and you get something interesting: both poles shrank, and opinion drifted toward the middle. Students have lived with the tool long enough to know they need instruction, not just access. They want the course to take the model seriously, then teach them how not to be fooled by it.