Forget the AI doom and hype, let's make computers useful

Systems Approach  Full disclosure: I have a history with AI, having flirted with it in the 1980s (remember Expert Systems?) and then safely survived the AI winter of the late 1980s before finally landing on networking as my specialty.

And just as my Systems Approach colleague Larry Peterson has classics like a Pascal manual on his bookshelf, I still have a few AI books from the eighties, notably P.H. Winston's Artificial Intelligence (1984). That book is a bit of a blast from the past, in that much of it feels like it could have been written yesterday, starting with the preamble.

I was also curious to see some examples from 1984 of “what computers can do.” One example was solving seriously difficult calculus problems—notable because exact math seems beyond the capabilities of today's LLM-based systems.

Given that computers could already solve calculus problems in 1984, while basic mathematics stumps the systems we regard as state-of-the-art today, perhaps the progress in AI over the past 40 years is not as great as it first appears. (That said, there are better systems for handling calculus today; they are just not based on LLMs, and it's not clear that anyone refers to them as AI.)

One of the reasons I picked up my old copy of Winston was because of what he had to say about the definition of AI, since that is also a controversial topic. His first take on this is not very encouraging:

Well, that's rather circular, since you need to define intelligence somehow, as Winston admits. But he then describes two goals of AI:

  1. To make computers more useful
  2. To understand the principles that make intelligence possible.

In other words, intelligence is hard to define, but maybe studying AI can help us understand what it is. I would go so far as to say that, 40 years on, we are still debating what intelligence is. The first goal sounds laudable, but it obviously applies to a lot of non-AI technology as well.

The debate over the meaning of “AI” continues in the industry. I've seen plenty of grumbling that we wouldn't need the term Artificial General Intelligence, aka AGI, if only the term AI hadn't been so polluted by people marketing statistical models as AI. I don't really buy that. As far as I can tell, AI has always covered a wide range of computing techniques, most of which wouldn't fool anyone into thinking a computer was exhibiting human-level intelligence.

When I started re-engaging with the field of AI about eight years ago, neural networks – which some of my colleagues were using in 1988 before they fell out of favor – had made a startling comeback, to the point where deep neural networks had surpassed humans in image recognition in both speed and accuracy, albeit with some caveats. This rise of AI created a certain level of anxiety among my engineering colleagues at VMware, who sensed that a major technological shift was underway that (a) most of us didn't understand and (b) our employer was not in a position to take advantage of.

When I threw myself into learning how neural networks work (with the help of Rodney Brooks), I realized that the language we use to talk about AI systems has a significant impact on how we think about them. For example, by 2017 we were hearing a lot about “deep learning” and “deep neural networks”, and the use of the word “deep” has an interesting double meaning. If I say I have “deep thoughts,” you might imagine I'm thinking about the meaning of life or something equally weighty, and “deep learning” seems to mean something similar.

But actually the “depth” in “deep learning” refers to the depth of the neural network that supports the learning, measured in the number of layers. So it's not “deep” in the sense of meaningful; it's deep the way the deep end of a swimming pool is deep – it just has more water in it. This double meaning contributes to the illusion that neural networks are “thinking”.
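
To make that concrete, here is a minimal sketch in plain Python with NumPy (an illustration, not any particular framework's API): a “deep” network is just a stack of layers, and its depth is literally the length of that stack.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Each layer is nothing more than a weight matrix and a bias vector.
layer_sizes = [8, 32, 32, 32, 4]   # input -> three hidden layers -> output
layers = [
    (rng.normal(scale=0.1, size=(n_in, n_out)), np.zeros(n_out))
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])
]

print("depth:", len(layers))       # the "deep" in deep learning: just the layer count

def forward(x, layers):
    """Pass an input through every layer in turn; that's the whole network."""
    for i, (w, b) in enumerate(layers):
        x = x @ w + b
        if i < len(layers) - 1:    # no activation after the final layer
            x = relu(x)
    return x

print(forward(rng.normal(size=8), layers))
```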

A similar confusion applies to “learning,” which is where Brooks was very helpful: a deep neural network (DNN) gets better at a task the more training data it is exposed to, and in that sense it “learns” from experience – but the way it learns is nothing like the way humans learn things.
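
As an illustration of what that kind of “learning” amounts to, here is a minimal sketch using an assumed toy linear model rather than a real DNN: the weights are repeatedly nudged to reduce the error on the training data, and that is all that happens. A real DNN does the same thing across many layers via backpropagation, but the principle is identical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "experience": 200 training examples with a noisy linear relationship.
x = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = x @ true_w + 0.1 * rng.normal(size=200)

w = np.zeros(3)        # everything the model "knows" lives in these numbers
lr = 0.1

for step in range(500):
    pred = x @ w
    grad = 2 * x.T @ (pred - y) / len(y)   # gradient of the mean squared error
    w -= lr * grad                         # nudge the weights to reduce training error

print("learned weights:", w)   # close to true_w, but only because similar data was seen
```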

As an example of how DNNs learn, consider AlphaGo, the game-playing system that used neural networks to beat human grandmasters. According to the system's developers, where a human player would easily handle a change to the size of the board (normally a 19×19 grid), even a small change would render AlphaGo impotent until it had time to train on new data from the resized board.

To me, this clearly illustrates how the “learning” of DNNs is fundamentally unlike human learning, even though we use the same word: a neural network is unable to generalize from what it has “learned.” And speaking of which, AlphaGo was recently defeated by a human opponent who repeatedly used a style of play that wasn't in the training data. This inability to handle new situations seems to be a hallmark of AI systems.

Language matters

The language used to describe AI systems affects how we think about them. Unfortunately, given the reasonable pushback against recent AI hype, and some notable failures of AI systems, there may now be as many people convinced that AI is completely useless as there are members of the camp that says AI is about to achieve human-like intelligence.

I'm highly skeptical of the latter camp, as noted above, but I also think it would be unfortunate to ignore the positive effects of AI systems – or if you prefer, machine learning systems.

I'm currently helping some colleagues write a book on machine learning applications for networking, and it shouldn't surprise anyone to hear that there are plenty of networking problems amenable to ML-based solutions. In particular, network traffic traces are wonderful sources of data, and training data is the fodder on which machine learning systems thrive.

Applications ranging from denial-of-service prevention to malware detection to geolocation can make use of ML algorithms, and the book aims to help networking people understand that ML is not some magic powder you sprinkle on your data to obtain answers, but a set of engineering tools that can be selectively applied to solve real problems. In other words, it is neither a panacea nor an over-hyped placebo. The goal of the book is to help readers understand which ML tools are appropriate for different classes of networking problems.
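
To give a flavor of what that looks like, here is a minimal, hypothetical sketch of using an off-the-shelf classifier (a random forest from scikit-learn) to separate benign flows from DoS-like ones. The feature names and the synthetic data are my own illustrative assumptions, not taken from the book or from any real traffic trace.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000

# Hypothetical per-flow features: [packets/sec, mean packet size, distinct dest ports, SYN ratio]
benign = np.column_stack([
    rng.normal(50, 15, n), rng.normal(800, 200, n),
    rng.integers(1, 6, n), rng.uniform(0.0, 0.2, n),
])
attack = np.column_stack([
    rng.normal(5000, 1000, n), rng.normal(80, 20, n),
    rng.integers(1, 3, n), rng.uniform(0.7, 1.0, n),
])
X = np.vstack([benign, attack])
y = np.array([0] * n + [1] * n)   # 0 = benign flow, 1 = DoS-like flow

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```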

A story that caught my eye a while ago was Network Rail in the UK using AI to help manage the vegetation that grows alongside British railway lines. The key “AI” technology here is image recognition (for identifying plant species), leveraging the kind of capability that DNNs have delivered over the past decade. It may not be as exciting as the generative AI systems that captured the world's attention in 2023, but it is a good, practical application of techniques that sit under the AI umbrella.
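
For the curious, here is a minimal, hypothetical sketch of the image-recognition step using a pretrained convolutional network from torchvision. Nothing here comes from Network Rail's actual system: the filename is a placeholder, and the generic ImageNet classes stand in for a model trained on plant species.

```python
import torch
from PIL import Image
from torchvision import models

# Pretrained ImageNet weights stand in for a model trained on plant species.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()          # the matching resize/crop/normalize pipeline

img = Image.open("lineside_photo.jpg")     # placeholder filename
batch = preprocess(img).unsqueeze(0)       # shape: [1, 3, 224, 224]

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

top = int(probs.argmax())
print(weights.meta["categories"][top], float(probs[top]))
```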

My tendency these days is to use the term “machine learning” rather than AI when it's appropriate, in the hope of avoiding both the hype and the allergic reaction that “AI” now generates. And with Patrick Winston's words fresh in my mind, I might just settle for talking about “making computers useful.”
