A recent article in Fast Company claims, “Thanks to AI, the coder is no longer king. All hail the QA engineer.” It's worth reading, and its argument is probably valid: generative AI will be used to build more and more software, AI makes mistakes, and it's hard to imagine a future in which it doesn't. So if we want software that works, quality assurance teams will grow in importance. Even if generative AI becomes much more reliable, the problem of finding the “last bug” will never go away.
However, the rise of QA raises several questions. First, one of the cornerstones of QA is testing. Generative AI can generate tests, of course—at least it can generate unit tests, which are fairly simple. Integration tests (tests of multiple modules) and acceptance tests (tests of the entire system) are more difficult. Even with unit tests, though, we run into a fundamental problem with AI: it can generate a test suite, but that test suite may itself contain bugs. What does “testing” mean when the test suite may be buggy? Testing is difficult because good testing goes beyond simply verifying specific behaviors.
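To make this concrete, here's a hypothetical sketch (my example, not the article's) of how a test generated from the code rather than from the specification can pass while enshrining the very bug it should catch. The discounted_total function and its tests are invented for illustration.

```python
import unittest

# Hypothetical spec: orders of $100 or more get a 10% discount.
def discounted_total(subtotal: float) -> float:
    # Bug: uses > instead of >=, so an order of exactly $100 gets no discount.
    if subtotal > 100:
        return round(subtotal * 0.90, 2)
    return subtotal

class TestDiscount(unittest.TestCase):
    def test_small_order(self):
        self.assertEqual(discounted_total(50), 50)

    def test_boundary_order(self):
        # A test derived from the code (not the spec) asserts the buggy
        # behavior, so the suite is green and the bug survives.
        self.assertEqual(discounted_total(100), 100)  # the spec says 90.0

if __name__ == "__main__":
    unittest.main()
```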
The problem grows with the complexity of the test. Bugs that arise when integrating multiple modules are more difficult to find, and they're even harder when you're testing the entire application. An AI may need to use Selenium or some other test framework to simulate clicking through the user interface. It will need to anticipate how users may become confused, as well as how users may (unintentionally or intentionally) misuse the application.
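For a sense of what that kind of UI-level test involves, here's a minimal sketch using Selenium's Python bindings; the URL, element IDs, and expected heading are placeholders I've made up, not part of any real application.

```python
# Minimal Selenium sketch: drive a browser through a hypothetical login flow.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")                      # placeholder URL
    driver.find_element(By.ID, "username").send_keys("test-user")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "submit").click()

    # Acceptance-style check: did the click actually land us on the dashboard?
    heading = driver.find_element(By.TAG_NAME, "h1").text
    assert "Dashboard" in heading, f"unexpected page heading: {heading!r}"
finally:
    driver.quit()
```

Even this toy version has to encode an expectation about where the user ends up; anticipating confusion and misuse means writing many more flows like it.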
Another difficulty with testing is that bugs aren't just minor slips and oversights. The most important bugs result from misunderstandings: misunderstanding a specification, or correctly implementing a specification that doesn't reflect what the customer needs. Can AI generate tests for these situations? An AI might be able to read and interpret a specification (especially if the specification is written in a machine-readable format—though that would be another form of programming). But it isn't clear how an AI could ever infer the relationship between a specification and the customer's actual intent: What does the customer really want? What is the software actually supposed to do?
Security is yet another issue: Is an AI system capable of red-teaming an application? I'll grant that AI should be able to do an excellent job of fuzzing, and we've seen game-playing AI discover “cheats.” Still, the more complex the test, the harder it is to know whether you're debugging the test or the software under test. We quickly run up against Kernighan's law: debugging is twice as hard as writing code in the first place, so if you write code that's at the limit of your ability, you're not smart enough to debug it. And what does that mean for code you didn't write? Humans have to test and debug code they didn't write all the time; that's called “maintaining legacy code.” But that doesn't make it easy or (for that matter) enjoyable.
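As a rough illustration of what fuzzing means here (my own toy example, nothing like a production fuzzer such as AFL or libFuzzer), the sketch below throws random byte strings at a stand-in parse function and records the inputs that make it blow up.

```python
# Toy fuzz loop: feed random bytes to a stand-in parser and collect crashes.
import random

def parse(data: bytes) -> dict:
    """Stand-in for the code under test."""
    text = data.decode("utf-8")              # raises on invalid UTF-8
    key, _, value = text.partition("=")
    if not key:
        raise ValueError("empty key")
    return {key: value}

random.seed(0)
crashes = []
for _ in range(10_000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(1, 32)))
    try:
        parse(blob)
    except Exception as exc:                  # any unhandled exception is a finding
        crashes.append((blob, repr(exc)))

print(f"{len(crashes)} inputs raised exceptions; first few:")
for blob, err in crashes[:3]:
    print(blob, err)
```

The hard part isn't generating inputs; it's deciding which failures matter and whether the fault is in the fuzz harness or in the code under test.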
Programming culture is another issue. At the first two companies I worked for, QA and testing were definitely not high-prestige jobs. Being assigned to QA was, if anything, a demotion, usually reserved for a good programmer who couldn't work well with the rest of the team. Has the culture changed since then? Cultures change slowly, so I doubt it. Unit testing has become a widespread practice, but it's easy to write a test suite that gives good coverage on paper while actually testing very little. As software developers come to understand the importance of unit testing, they start writing better, more comprehensive test suites. But what about AI? Will AI resist the “temptation” to write low-value tests?
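Here's the kind of low-value test I have in mind (a made-up example, runnable with pytest): it exercises the function so coverage looks healthy, but it pins down almost nothing about the behavior.

```python
# A made-up example of a test that inflates coverage without testing much.

def normalize_username(name: str) -> str:
    name = name.strip().lower()
    if not name:
        raise ValueError("username must not be empty")
    return name

def test_normalize_username_runs():
    # Runs the happy path, so most lines above count as "covered"...
    result = normalize_username("  Alice  ")
    # ...but the only assertion is that *something* came back.
    assert result is not None

def test_normalize_username_properly():
    # What a meaningful test looks like: it pins down the expected behavior.
    assert normalize_username("  Alice  ") == "alice"
    try:
        normalize_username("   ")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for blank input")
```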
Perhaps the biggest problem, though, is that prioritizing QA doesn't solve the problem that has plagued computing from the beginning: programmers who never understand the problem they're being asked to solve well enough. Answering a Quora question that has nothing to do with AI, Alan Mellor wrote:
We all start programming thinking about mastering a language, maybe using design patterns only clever people know.
Then our first real job shows us a whole new scene.
The language is simple. The problem domain is difficult.
I have programmed industrial controllers. I can now talk about factories, and PID control, and PLCs and acceleration of critical equipment.
I worked in PC games. I can talk about rigid body dynamics, matrix normalization, quaternions. Just a little.
I worked in marketing automation. I can talk about sales funnels, double opt-ins, transactional emails, drip feeds.
I worked in mobile games. I can talk about level design. One-way systems that force player flow. Phased reward systems.
Do you see that we have to learn about the business we code for?
The code is literally nothing. Language is nothing. The tech stack is nothing. Nobody gives a monkey's [sic], we can all do it.
To write a real app, you need to understand why it will be successful. What problem does it solve? What does this have to do with the real world? In other words, understand the domain.
Absolutely. That's a great description of what programming is really about. Elsewhere, I've written that AI might make programmers 50% more productive, though that figure is probably optimistic. But programmers only spend about 20% of their time coding. Getting 50% of 20% of your time back is significant, but it isn't revolutionary. To make it revolutionary, we'll need to do something better than spending more time writing test suites. That's where Mellor's insight into the nature of software development becomes important. Cranking out lines of code doesn't make software good; that's the easy part. Nor does cranking out test suites, and if generative AI can help write tests without compromising the quality of the testing, that would be a huge step forward. (I'm skeptical, at least for the time being.) The important part of software development is understanding the problem you're trying to solve. Grinding out test suites in the QA group doesn't help much if the software you're testing doesn't solve the right problem.
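To spell out the arithmetic behind that claim (using only the 50% and 20% figures above; everything else is back-of-the-envelope):

```python
# Back-of-the-envelope arithmetic for the productivity claim above.
coding_share = 0.20      # fraction of a programmer's time spent writing code
coding_speedup = 0.50    # assumed AI speedup on the coding portion only

time_saved = coding_share * coding_speedup    # 0.10 of total working time
new_total_time = 1.0 - time_saved             # 0.90 of the original time
overall_gain = 1.0 / new_total_time - 1.0     # roughly an 11% throughput gain

print(f"Time saved: {time_saved:.0%} of the total")
print(f"Overall productivity gain: {overall_gain:.1%}")
```

A gain of roughly 10% is worth having, but it isn't the kind of change that reshapes how software gets built.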
Software developers will need to devote more time to testing and QA. That's a given. But if all we get from AI is the ability to do what we can already do, we're playing a losing game. The only way to win is to do a better job of understanding the problems we need to solve.