N.B.: I wrote this while at the Recurse Center (recurse.com) back in October, but for some reason did not post it. I still think it’s worth reading, but I no longer completely endorse this essay, in part because I am no longer coding solely to learn to code.
The dilemma of competent LLMs
As someone learning to code, I find it alarming that AI models are already quite a bit better at programming than I am. This isn’t true in every context, but if you wanted a simple web app implementation of Flappy Bird, for instance, I would point you to Claude.
There’s an opportunity and a danger in this fact. The opportunity is twofold: a) I can create projects that are beyond my capability alone, and b) I can learn more quickly and more deeply on my own. But the danger is apparent: the skill I may end up learning may not be how to build a coding project, but how to ask ChatGPT questions.
There have been a couple of times when I have used an LLM to help me write a program and gone off the deep end. The experience is always the same: I ask a question, the model provides sample code that does more than answer that single question, and by the time I have another question, I no longer understand what’s happening. So I ask again, and the cycle repeats. At the end of the day, I might have a working project, but I don’t have a working knowledge of what happened.
A laundry list of approaches
So how should I use AI tools when learning how to code?
The first question to ask is what my goal is. In some cases, the goal may be to speed through familiar boilerplate, or to deal with a part of a project I’m less interested in. For example: suppose I need to parse text that I will later manipulate. If all I care about is learning the manipulation, then I should straight up ask the LLM to write the string-parsing code (the sketch after the list below shows the kind of code I mean). Some other time I might learn regular expressions, but now is not the time. More frequently, though, I am looking for help on a problem that I want to learn how to solve myself. A ready-made solution won’t teach me anything, but refusing to use the LLM is a cop-out. I have found some approaches that have worked for me:
- Have the LLM keep asking me leading questions (with no code) until I figure out the solution.
- Tell the LLM “do not provide any code until I ask”1
- Every time I ask the LLM a question, explain a hypothesis or strategy for answering it, and request feedback on that hypothesis.
- Only ask for feedback after I have a working example.
- When I receive a code sample, manually type it out to make sure I understand what it is doing.
- Ask really small questions that I might otherwise put to a search engine.
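To make the delegation example above concrete, here is a sketch of the kind of string-parsing code I would happily hand off to an LLM. The log format, the regex, and the parse_records name are all hypothetical stand-ins; the point is the division of labor, where the parsing is generated for me and the manipulation stays the part I actually practice.

```python
import re

# Hypothetical input format: lines like "2024-10-03 14:22:07 ERROR disk full"
LINE_RE = re.compile(
    r"(?P<date>\d{4}-\d{2}-\d{2}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}) "
    r"(?P<level>\w+) "
    r"(?P<message>.*)"
)

def parse_records(text):
    """Parse raw log text into dicts: the boilerplate I'd delegate to the LLM."""
    records = []
    for line in text.splitlines():
        match = LINE_RE.match(line)
        if match:
            records.append(match.groupdict())
    return records

# The part I actually want to learn: manipulating the parsed records myself.
sample = "2024-10-03 14:22:07 ERROR disk full\n2024-10-03 14:22:09 INFO retrying"
errors = [r for r in parse_records(sample) if r["level"] == "ERROR"]
print(errors)  # [{'date': '2024-10-03', 'time': '14:22:07', 'level': 'ERROR', 'message': 'disk full'}]
```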
LLM as answer key
These strategies are useful, but I think there’s more to say. If you want to solve a problem in a textbook, you can copy the solution down, but what’s the point? There are times and places to look at the solution, but only in service of a broader goal. The point of doing an exercise is not to get the answer. It is to give yourself the tools to solve a similar problem in a different context. In other words, exercises aim to increase your agency. With LLMs, the only difference is that the textbook is life, and the model is more than an answer key.
What ties the above strategies together is that they too aim to increase my agency. They balance making me exercise my independence with giving me the structure I need to tackle a harder problem. They don’t all function as answer keys (explaining a hypothesis treats the LLM more like a rubber duck), but the effect is similar.
LLM as API
One last thing: you might be wondering what the point is. If the AI is already better at certain technical tasks than I am, and only going to get better, isn’t learning to code a fool’s errand?
I don’t think so. If I want to do a coding project to accomplish something, I will be the one directing that project, no matter who is doing the grunt work of writing lines of code.2 At its most extreme, the LLM’s code will function as an API: something that I work with but don’t implement. Yet as with APIs, there’s quite a bit of value in learning what’s going on under the hood.
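Here is a minimal sketch of that analogy; everything in it is hypothetical (the shortest_path function stands in for code the model wrote, and the toy graph stands in for my actual project).

```python
from collections import deque

# Imagine this function was written entirely by an LLM (here, a toy BFS).
# I call it through its signature and docstring, the way I'd use a library API.
def shortest_path(graph, start, goal):
    """Return the shortest path from start to goal as a list, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbor in graph.get(path[-1], []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None

# My code: I direct the project, and the function above is a black box to me.
# Still, knowing there's a breadth-first search under the hood tells me what
# to expect from it, which is the value of looking under the hood.
city_graph = {"home": ["cafe", "park"], "cafe": ["office"], "park": ["office"]}
print(shortest_path(city_graph, "home", "office"))  # ['home', 'cafe', 'office']
```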