It's clear that generative AI is already being used by a majority, indeed a large majority, of programmers. That's a good thing. Even if the productivity gains are smaller than many people claim, 15% to 20% is significant. Nor is there anything to complain about in making it easier to learn programming and start a productive career. Simon Willison has described asking ChatGPT to help him learn Rust. It's amazing to have that kind of power at your fingertips.
But there is one concern that I share with a surprisingly large number of other software developers: does the use of generative AI widen the gap between entry-level junior developers and senior developers?
Generative AI makes many things easier. When I'm writing Python, I often forget to put colons where they're needed. I often forget the parentheses when I call print(), even though I've never used Python 2. (Old habits die hard; there are many older languages in which print is a statement rather than a function call.) And I usually have to look up the name of the pandas function for, well, just about anything, even though I use pandas quite a lot. Generative AI, whether you use GitHub Copilot, Gemini, or something else, eliminates that problem. I've written before that, for beginners, generative AI saves a lot of time, frustration, and mental space by reducing the need to memorize library functions and the arcane details of syntax that every language requires, details that only seem to multiply with each new release. (Walrus operator? Give me a break.)
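For anyone who hasn't tripped over these, here's a quick sketch of the two Python quirks I mean; the snippet is my own illustration, not from any AI session:

    # In Python 2, print was a statement; this line would have been legal:
    #     print "hello"
    # In Python 3, print() is an ordinary function, so the parentheses are required:
    print("hello")

    # The walrus operator (:=), added in Python 3.8, assigns and tests in one expression:
    import random
    if (n := random.randint(0, 10)) > 5:
        print(n, "is greater than 5")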
However, there is another side to this story. We are all lazy, and we don't like memorizing the names and signatures of all the functions in the libraries we use. But isn't knowing them a good thing? There is such a thing as fluency in a programming language, just as there is in a human language. You don't become fluent by using a phrasebook. A phrasebook might get you through a summer of backpacking around Europe, but if you want a job there, you're going to need to do a lot better. The same is true in almost any field. I have a PhD in English literature. I know that Wordsworth was born in 1770, the same year as Beethoven; that Coleridge was born in 1772; that a lot of important literature, in both Germany and England, appeared in 1798 (plus or minus a few years); and that the French Revolution happened in 1789. Does that suggest something important was going on, something bigger than a few poems by Wordsworth and Coleridge and a few symphonies by Beethoven? As it happens, there was. But would someone who wasn't aware of these basic facts think to give an AI a clue about what was happening when all these apparently separate events collided? Would they think to ask about the relationship between Wordsworth, Coleridge, and German thought, or about a Romantic movement that transcended individuals and even national borders? Or would we be stuck with islands of knowledge that never get connected, because we (not the AIs) are the ones who have to connect them? The problem isn't that the AI couldn't make the connection. It's that we wouldn't think to ask it to.
I see the same problem in programming. If you want to write a program, you need to know what you want to do. But to get good results from an AI, you also need an idea of how it can be done. You have to know what to ask and, to a surprising extent, how to ask it. I experienced this just the other day. I was doing some simple data analysis with Python and pandas, going along with a language model, asking “how do I do X” for every line of code I needed (much as you would with GitHub Copilot), partly as an experiment and partly because I don't use pandas very often. And the model backed me into a corner that I had to hack my way out of. How did I end up in that corner? Not because of the quality of the answers: every response to my prompts was correct, as I confirmed in my post-mortem by reading the documentation and testing the sample code the model provided. I was backed into a corner by a question I didn't know I needed to ask. I went to another language model, wrote a long prompt describing the whole problem I wanted to solve, compared its answer to my ugly hack, and then asked, “What does the reset_index() method do?” And then I felt (not wrongly) like a clueless newbie: if I had asked my first model to reset the index, I would never have been cornered.
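For readers who haven't run into it, here's a minimal sketch (my reconstruction with made-up data, not the code from that session) of why reset_index() matters: a groupby() aggregation moves the grouping keys into the result's index, and reset_index() turns them back into ordinary columns.

    import pandas as pd

    df = pd.DataFrame({
        "city": ["Boston", "Boston", "Chicago", "Chicago"],
        "sales": [100, 150, 200, 250],
    })

    # groupby() moves the grouping key ("city") into the result's index...
    totals = df.groupby("city")["sales"].sum()

    # ...which makes it awkward to merge, plot, or filter on "city" as a column.
    # reset_index() turns the index back into an ordinary column:
    totals = totals.reset_index()
    print(totals)  # two columns again: city, sales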
You could read this example as saying, “Look, you don't really need to know all the details of pandas; you just have to write better prompts and ask the AI to solve the whole problem.” Maybe so. But I think the real lesson is that you need to be fluent in the details. Whether you let a language model write your code in large chunks or one line at a time, if you don't know what you're doing, either approach will get you into trouble sooner rather than later. You probably don't need to know the fine points of pandas' groupby() function, but you do need to know that it exists, and you need to know that reset_index() exists. I had to ask GPT, “Wouldn't this work better if you used groupby()?” because I had asked it to write a program for which groupby() was the obvious solution, and it didn't use it. And you may need to check whether your model used groupby() correctly. Proper testing and debugging are not, and never will be, obsolete.
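One cheap form of that checking, sketched below with hypothetical data, is to compare the model's groupby() one-liner against a brute-force computation you can verify by eye:

    import pandas as pd

    df = pd.DataFrame({
        "city": ["Boston", "Boston", "Chicago"],
        "sales": [100, 150, 200],
    })

    # The model's answer:
    fast = df.groupby("city")["sales"].sum()

    # A slow version that's easy to check by hand:
    slow = {city: df.loc[df["city"] == city, "sales"].sum()
            for city in df["city"].unique()}

    # The two should agree exactly:
    assert fast.to_dict() == slow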
Why does this matter? Let's not worry about the distant future, when programming may no longer be needed. We need to ask how junior programmers entering the field now will become senior programmers if they rely heavily on tools like Copilot and ChatGPT. It's not that they shouldn't use these tools: programmers have always built better tools for themselves, generative AI is the latest generation of tooling, and one aspect of fluency has always been knowing which tools make you more productive and how to use them. But unlike older generations of tools, generative AI easily becomes a crutch. It can hinder learning rather than facilitate it. And junior programmers who never become fluent, who always need the phrasebook, will have trouble catching up to the seniors.
And that's a problem. I've said, as many of us have, that people who learn to use AI won't need to worry about losing their jobs to AI. But there's another side to that: people who learn to use AI without ever becoming fluent in what they're using it for will need to worry about losing their jobs to AI. They will be replaceable, literally, because they won't be able to do anything an AI can't do. They won't be able to write good prompts, because they'll have trouble imagining what's possible. They'll have trouble figuring out how to test, and they'll have trouble debugging when the AI fails. What do you need to learn? That's a difficult question, and my thoughts about fluency may not be correct. But I'd be willing to bet that people who are fluent in their languages and tools will use AI more effectively than those who aren't. I'd also bet that learning to look at the big picture, not just the code immediately in front of you, will take you far. Finally, the ability to connect the big picture to the microcosm of minute details is a skill that few people possess. I don't, and, if it's any comfort, I don't think the AIs do either.
So: learn to use AI. Learn to write good prompts. The ability to use AI has become table stakes for getting a job, and rightly so. But don't stop there. Don't let AI limit what you learn, and don't fall into the trap of thinking, “The AI knows this, so I don't have to.” AI can help you become fluent: the answer to “What does reset_index() do?” was revealing, even if the question was humbling to ask. It's certainly something I'm not likely to forget. Learn to ask the big-picture questions: what's the context this piece of code fits into? Asking those questions, rather than just accepting the AI's output, is the difference between using AI as a crutch and using it as a learning tool.