I keep carrying around this fantasy that one semester I’ll finally “arrive” with a finished course: I’ll dust off last year’s slides, tweak a date or two, and stroll into class to deliver a polished, low-effort masterpiece. Other professors, I imagine, do this. They skim their notes for five minutes, walk into the room, and effortlessly re-teach the course, students nodding along, absorbing every concept as if by osmosis.
I know this is a fantasy. More than anything, it erases the role of the students in the whole enterprise. Teaching is supposed to be a dialogue, not a monologue, and a good teacher adjusts to the actual humans in front of them, not an idealized class from last year. Still, my daydream is so far from how things really feel that I can’t help wondering whether there is some way to make this all a little easier on myself.

Almost seven years into academia, I’m still extensively rebuilding, rewriting, and re-thinking. I don’t have a perfect course; I’m not even sure I’m capable of leaving one alone. The course I taught this term, Big Data in Geophysics, is meant to bring students right up to the edge of current geophysics research that depends on massive datasets. It is a topic I care deeply about, because I’m convinced that most of the biggest breakthroughs in Earth science are data-inspired, not dreamed up on a blackboard. And some of the most exciting developments in how we observe Earth today come from scaling up measurements in entirely new ways. It’s a course that almost by design needs to be refreshed each time it’s taught.
And, of course, you can’t talk about big data in 2025 without talking about artificial intelligence. Not just building and training AI models, but using large language models, these new chatbots, as part of how we learn, read, write, and do research.
In earlier versions of this course, I spent a huge chunk of time just getting everyone up to speed on the computational tools. If you’re working with big geophysical datasets, you need to code. If you’re doing your “cutting-edge” research in Excel, I’m pretty comfortable saying you’re probably not at the cutting edge. (If you are, you’re definitely smarter than me.) So far, I’ve never had a student walk into this class with all the necessary programming skills. That meant we burned a lot of time on basics and had less room for the fun part: actually attacking real geophysical problems, the science.
This semester, I tried something different: we outsourced a lot of the low-level coding to AI. Students were allowed to use the latest LLMs to write code for them, as long as they could explain what the code did. The upside was thrilling: suddenly we could tackle problems that were completely out of reach for previous cohorts.
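To give a flavor of the kind of low-level code we now hand off: here is a minimal sketch of the sort of routine preprocessing an LLM can produce in seconds, and that a student is then expected to explain line by line. The synthetic trace, filter band, and parameter choices are my own illustration, not an actual course assignment.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Synthetic "seismic" trace: a 5 Hz signal buried in broadband noise,
# sampled at 100 Hz for 10 seconds.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(42)
trace = np.sin(2 * np.pi * 5 * t) + 0.5 * rng.standard_normal(t.size)

# Band-pass between 1 and 10 Hz, a routine first step when working with
# seismic data: a 4th-order Butterworth filter, applied forward and
# backward (filtfilt) so it introduces no phase shift.
nyquist = fs / 2
b, a = butter(4, [1 / nyquist, 10 / nyquist], btype="band")
filtered = filtfilt(b, a, trace)

print(filtered.shape)  # same length as the input trace
```

Boilerplate like this used to eat entire lab sessions; the point of the exercise now is not typing it, but being able to say what the Nyquist normalization is for and why a zero-phase filter matters.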
We didn’t stop there. We used LLMs to help us read and interpret papers. Faced with a paper in an unfamiliar subfield, students found that “reading with a chatbot” gave them a kind of guided tour: after loading the paper in as context, they could ask the model to unpack phrases like “Glen’s flow law” or “representative volume element” as they appear in a paper on seismic insights into ice-stream deformation. LLMs became a just-in-time tutor sitting beside them as they read.
We also used LLMs to draft and refine presentations. That’s how I do most of my writing now: get rough thoughts down quickly, ask an LLM to clean and sharpen them, then do a final pass myself. For student talks, generating figures and images with these tools dramatically improved the look and clarity of their presentations, without needing a professional graphic designer.
On my side, I leaned on LLMs to design in-class activities. Gamifying the learning experience with things like Jeopardy or University-Challenge-style quizzes gave the students, and me, some fun excursions from the regular rhythm of lectures and discussions. At one point, when I asked Claude.ai to convert a quiz into Jeopardy questions for fun, it even produced an interactive point-and-click Jeopardy board we could play live in class.
The advent of LLMs has fundamentally changed how I teach this course. This semester has been a kind of live experiment in figuring out how to do that well. I discovered some things that worked beautifully and others that really didn’t. In general, while LLMs are an incredible accelerator for learning, something I wish I’d had as a student, they are also a dangerously tempting shortcut around the hard parts of learning altogether.
Traditional “from-scratch” programming has, in the last year, largely given way to a new mode of working, what people sometimes call “vibe-coding”: you collaborate with the AI, nudging it in natural language until it produces usable code. But handing a powerful LLM to someone without solid foundations is a bit like giving superpowers to someone who hasn’t learned to walk yet. These tools are genuine game-changers for productivity, but only if you are the one driving and not the other way around. Students still need to understand how to think like a programmer, even if they’re not typing every line themselves. I saw in real time how easy it was to bypass that thinking: I had provided questions in Jupyter notebooks, expecting students to use LLMs as helpers, only to watch some type “Answer Question 4” and receive an uncannily polished solution. Some students, I suspect, stopped there and never tried to understand a word of the answer.

Class discussions were generally richer with LLMs in the mix, but I also worried that some students might be tempted to offload too much: upload a paper, ask for a bullet-point summary, skim the highlights, and skip the actual reading. In other words, the same tools that can deepen engagement with the material can, if used uncritically, become the perfect engine for superficial understanding.

I am probably never going to “arrive” at a finished course, and that might actually be the point. Teaching at the intersection of big data, Earth science, and AI means living in a moving target. My job is less about perfecting a static syllabus and more about learning, alongside my students, how to think clearly, ask better questions, and use these new tools without letting them do the thinking for us. If I can help them leave the class a little more curious and a little more in control of the machines they use, I will count that as a win. Just got to make a few more tweaks here and there…