Over the years, many of us have become accustomed to letting computers do our thinking for us. “That's what the computer says” is a refrain in many bad customer service interactions. “This is what the data says” is a variation; “data” doesn't say much if you don't know how it was collected and how it was analyzed. “That's what the GPS says”: well, GPS is usually accurate, but I've seen GPS systems tell me to go the wrong way down a one-way street. And I've heard (from a friend who fixes boats) of boat owners who ran aground because their GPS told them to.
In many ways, we've come to think of computers and computing systems as oracles. That's an even greater temptation now that we have generative AI: ask a question and you'll get an answer. Maybe it will be a good answer. Maybe it will be a hallucination. Who knows? Whether you get facts or hallucinations, the AI's response will certainly be confident and authoritative. It's very good at that.
It's time we stopped listening to oracles, human or otherwise, and started thinking for ourselves. I'm not an AI skeptic; generative AI is great at generating ideas, summarizing, finding new information, and more. I'm concerned with what happens when humans delegate thinking to something else, whether or not it's a machine. If you use generative AI to help you think, so much the better; but if you're just repeating what the AI tells you, you're probably losing your ability to think independently. Like your muscles, your brain atrophies when it isn't used. We've heard that “people won't lose their jobs to AI, but people who don't use AI will lose their jobs to people who do.” Fair enough, but there's a deeper point. People who simply repeat what generative AI says, without understanding the answer, without thinking it through and making it their own, aren't doing anything an AI can't do. They are replaceable. They will lose their jobs to someone who can bring insights that the AI can't.
It's easy to fall prey to “AI is smarter than me,” “it's AGI” thinking. Maybe it is, but I still think AI is best at showing us what intelligence isn't. Intelligence isn't the ability to win games, even if you beat champions. (In fact, humans have discovered weaknesses in AlphaGo that let beginners defeat it.) It isn't the ability to create new works of art; we always need new art, but we don't need more Van Goghs, Mondrians, or even computer-generated Rutkowskis. (What AI means for Rutkowski's business model is an interesting legal question, but Van Gogh certainly isn't feeling the pressure.) It took Rutkowski to decide what it meant to create his artwork, just as it did Van Gogh and Mondrian. AI's ability to replicate that is technically interesting, but it doesn't really say anything about creativity. AI's ability to create new kinds of artwork under the direction of a human artist is an interesting direction to explore, but let's be clear: that's human initiative and creativity.
Humans are much better than AI at understanding very large contexts, contexts that dwarf a million tokens, contexts that include information we have no way of describing digitally. Humans are better than AI at synthesizing new kinds of information and striking out in new directions, at making something new. More than anything else, Ezra Pound's dictum “Make it New” is the theme of 20th and 21st century culture. It's one thing to ask AI for startup ideas, but I don't think AI would ever have created the web or, for that matter, social media (which really started with USENET newsgroups). AI will have trouble creating anything new because AI can't want anything, new or old. To borrow Henry Ford's alleged words, it would be great at designing faster horses if asked. Perhaps a bioengineer could ask an AI to decode a horse's DNA and make some improvements. But I don't think an AI could ever design an automobile without having seen one first, or without a human telling it to “put a steam engine on a tricycle.”
There's another important part of this problem. At DEF CON 2024, Moxie Marlinspike argued that the “magic” of software development has been lost because new developers are crammed into “black box abstraction layers.” It's hard to be innovative when all you know is React. Or Spring. Or some other big, overbuilt framework. Creativity comes from the bottom up, starting with the basics: the underlying machine and network. Nobody learns assembler anymore, and maybe that's a good thing, but does it limit creativity? Not because there's some really clever sequence of assembly language that would unlock a new set of capabilities, but because you won't unlock a new set of capabilities when you're locked into a set of abstractions. Similarly, I've seen arguments that one doesn't need to learn algorithms. After all, who will ever need to implement sort()? The problem is that sort() is a great exercise in problem solving, especially if you force yourself past the simple bubble sort to quicksort, merge sort, and beyond. The point isn't learning how to sort; it's learning how to solve problems. Seen from that perspective, generative AI is just another abstraction layer, another layer that creates distance between programmers, the machines they program, and the problems they solve. Abstractions are valuable, but what's more valuable is the ability to solve problems that aren't covered by the current set of abstractions.
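If you want to see what that exercise looks like, here's a minimal sketch in Python: the same list sorted twice, once with the naive bubble sort and once with merge sort. (The code is purely illustrative; the function names and sample data are made up for this sketch.)

```python
# Illustrative only: the same problem solved two ways, as a learning exercise.

def bubble_sort(items):
    """Naive O(n^2) sort: repeatedly swap adjacent out-of-order pairs."""
    items = list(items)  # work on a copy
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

def merge_sort(items):
    """O(n log n) sort: split the list in half, sort each half, then merge."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):  # take the smaller head each time
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]     # append whatever remains

if __name__ == "__main__":
    data = [5, 2, 9, 1, 5, 6]
    assert bubble_sort(data) == merge_sort(data) == sorted(data)
    print(merge_sort(data))
```

Writing the bubble sort takes a few minutes; working out why merge sort's divide-and-conquer approach beats it, and convincing yourself that the merge step is correct, is where the actual learning happens.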
Which brings me back to my point. AI is good, very good, at what it does. And it does a lot of things well. But we humans can't forget that it's our role to think. It's our role to want, to synthesize, to come up with new ideas. It's up to us to know, to be fluent in the technologies we're working with, and we can't hand that fluency over to generative AI if we want to generate new ideas. Perhaps AI can help us make those new ideas a reality, but not if we take shortcuts.
We need to think better. If AI forces us to do that, we'll be in good shape.