When you challenge a computer to play a game of chess, interact with an intelligent assistant, type a question in ChatGPT, or create artwork on DALL-E, you are interacting with a program that computer scientists would classify as artificial intelligence.
But defining artificial intelligence can get complicated, especially when other terms like “robotics” and “machine learning” are thrown into the mix. To help you understand how these different fields and terms relate to each other, we’ve put together a quick guide.
What is a good definition of artificial intelligence?
Artificial intelligence is a field of study, similar to chemistry or physics, that kicked off in 1956.
“Artificial intelligence is about the science and engineering of making machines with human-like characteristics in how they see the world, how they move, how they play games and even how they learn,” says Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. “Artificial intelligence consists of many sub-components, and there are all kinds of algorithms that solve different problems in artificial intelligence.”
People tend to conflate artificial intelligence with robotics and machine learning, but these are separate, related fields, each with a different focus. Machine learning is generally grouped under the umbrella of artificial intelligence, but the two don’t overlap completely.
“Artificial intelligence is about decision making for machines. Robotics is about making computers move. And machine learning is about using data to make predictions about what might happen in the future or what the system should do,” adds Rus. “AI is a broad field. It’s about making decisions. You can make decisions by learning or you can make decisions by using models.”
AI generators like ChatGPT and DALL-E are machine learning programs, but the field of AI encompasses much more than machine learning, and machine learning doesn’t sit entirely inside AI. “Machine learning is a subfield of AI. It spans statistics and the broader field of artificial intelligence,” says Rus.
Complicating the picture is the fact that non-machine learning algorithms can also be used to solve problems in AI. For example, a computer can play Tic-Tac-Toe using a non-machine learning algorithm called minimax optimization. “It’s a direct algorithm. You build a decision tree and start navigating. There is no learning, there is no data in this algorithm,” says Rus. But it’s still a form of AI.
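The minimax idea Rus describes can be sketched in a few lines. The Python below is an illustrative toy, not any production program: it builds the decision tree she mentions by recursion, scoring a win for X as +1, a win for O as -1, and a draw as 0, with no data or learning involved.

```python
from functools import lru_cache

# The eight winning lines of a Tic-Tac-Toe board, indexed 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Best (score, move) for `player`: +1 means X wins, -1 O wins, 0 a draw."""
    w = winner(board)
    if w is not None:
        return (1, None) if w == 'X' else (-1, None)
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None                      # board full: a draw
    results = []
    for m in moves:                         # navigate the decision tree
        child = board[:m] + (player,) + board[m + 1:]
        score, _ = minimax(child, 'O' if player == 'X' else 'X')
        results.append((score, m))
    # X maximizes the score, O minimizes it.
    return max(results) if player == 'X' else min(results)

# Perfect play from an empty board ends in a draw (score 0):
score, move = minimax((None,) * 9, 'X')
```

Note that every decision comes from exhaustively exploring future positions, exactly the “no learning, no data” approach Rus contrasts with machine learning.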
In 1997, the Deep Blue program IBM used to defeat Garry Kasparov was AI but not machine learning, since it didn’t learn from game data. “The reasoning of the program was handcrafted,” says Rus. “Whereas AlphaGo [a more recent program that plays the board game Go] used machine learning to create its rules and its decisions for how to move.”
If robots are to move through the world, they need to understand their environment. This is where AI comes in: the robot has to recognize where the obstacles are and come up with a plan to get from point A to point B.
“There are ways robots can use models, Newtonian mechanics for example, to figure out how to move, how not to fall, and how to grab an object without dropping it,” says Rus. “If the robot needs to plan a path from point A to point B, the robot can look at the geometry of the room, figure out how to draw a line that doesn’t run into obstacles, and then follow that line.” That’s an example of a computer making decisions without machine learning, because the approach isn’t data-driven.
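The geometric path check Rus describes can be illustrated without any learning at all. The sketch below is our own toy construction, with obstacles simplified to circles: it tests whether the straight line from point A to point B stays clear of every obstacle, which is pure geometry rather than data.

```python
import math

def segment_clears_circle(a, b, center, radius):
    """True if the segment from a to b stays outside the circle (center, radius)."""
    (ax, ay), (bx, by), (cx, cy) = a, b, center
    dx, dy = bx - ax, by - ay
    # Project the circle's center onto the segment, clamped to its endpoints.
    t = ((cx - ax) * dx + (cy - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    nearest = (ax + t * dx, ay + t * dy)
    return math.dist(nearest, (cx, cy)) > radius

def straight_path_is_free(a, b, obstacles):
    """Can the robot draw a straight line from a to b without hitting anything?"""
    return all(segment_clears_circle(a, b, c, r) for c, r in obstacles)

# One circular obstacle of radius 1 sitting at (5, 0):
obstacles = [((5.0, 0.0), 1.0)]
blocked = straight_path_is_free((0, 0), (10, 0), obstacles)  # runs through it
clear = straight_path_is_free((0, 3), (10, 3), obstacles)    # passes well above
```

A real planner would search among candidate lines or waypoints, but the decision at each step is this same obstacle-free check, not a pattern learned from data.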
[Related: How a new AI mastered the tricky game of Stratego]
Or take, for example, teaching a robot to drive a car. In a machine learning-based solution, the robot could observe how humans steer and drive around corners, learning to turn the wheel a little or a lot depending on how sharp the turn is. In a non-machine learning solution, by comparison, the robot would simply look at the geometry of the road, consider the dynamics of the car, and from those calculate the angle to apply to the wheel to keep the car on the road. Both, however, are examples of artificial intelligence at work.
“In the model-based case, you look at the geometry, think about the physics, and calculate what the actuation should look like. In the data-driven [machine learning] case, you look at what the person did and remember it, and in the future, when you come across similar situations, you can do what the person did,” says Rus. “But both are solutions that let robots make decisions and move in the world.”
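The model-based steering calculation can be sketched with a kinematic bicycle model, a standard simplification from vehicle dynamics; the article names no particular model, so this is our illustrative choice. Given only the road’s curve radius and the car’s wheelbase, geometry yields the wheel angle directly, with no driving data.

```python
import math

def steering_angle(wheelbase_m, curve_radius_m):
    """Wheel angle (radians) that holds a bicycle-model car on a circular arc."""
    return math.atan(wheelbase_m / curve_radius_m)

# A sharper turn (smaller radius) demands a larger wheel angle,
# computed from the geometry and the car's dimensions alone.
gentle = steering_angle(2.7, 100.0)   # 2.7 m wheelbase, wide 100 m curve
sharp = steering_angle(2.7, 20.0)     # same car, tight 20 m curve
```

The data-driven alternative would instead fit this input-to-angle mapping from recordings of human drivers; both routes end in the same kind of decision.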
Can you tell me more about how machine learning works?
“When you’re doing data-driven machine learning, which people equate with AI, the situation is very different,” says Rus. “Machine learning uses data to determine the weights and parameters of a vast network called an artificial neural network.”
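Rus’s phrase “uses data to determine the weights” can be shown at the smallest possible scale. The toy below, our own illustration, trains a single artificial neuron, the building block of a neural network, to reproduce logical AND by nudging three parameters against labeled examples; real networks do the same with millions of weights.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training data: two inputs and the AND label for each pair.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, b = 0.0, 0.0, 0.0           # the "weights and parameters"
lr = 0.5                            # learning rate
for _ in range(5000):               # repeated passes over the data
    for (x1, x2), y in data:
        p = sigmoid(w1 * x1 + w2 * x2 + b)
        err = p - y                 # gradient of the cross-entropy loss
        w1 -= lr * err * x1         # nudge each weight against the error
        w2 -= lr * err * x2
        b -= lr * err

# After training, the learned weights reproduce AND:
predictions = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
```

Nothing here was hand-coded about AND itself; the behavior lives entirely in the three numbers the data shaped.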
Machine learning, as the name suggests, is the idea that software learns from data, as opposed to software that just follows rules written by humans.
“Most machine learning algorithms sort of just compute a set of statistics,” says Rayid Ghani, a professor in the machine learning department at Carnegie Mellon University. Before machine learning, if you wanted a computer to recognize an object, you had to describe it in painstaking detail. For example, if you wanted computer vision to recognize a stop sign, you had to write code describing the sign’s color, shape, and the specific features on its face.
“People realized it was exhausting for the people doing the describing. The big change that happened in machine learning is [that] people were much better at giving examples of things,” says Ghani. “The code people wrote no longer had to describe a stop sign; it had to differentiate things in category A from category B [a stop sign versus a yield sign, for example]. Then the computer figured out the distinctions, which was more efficient.”
Should we worry about artificial intelligence outperforming human intelligence?
The short answer at the moment: no.
Today’s AI is very limited in its abilities, able to do only specific things. “AI designed to play very specific games or recognize specific things can only do that. It’s not really good at anything else,” says Ghani. “So you have to develop a new system for every task.”
Rus suggests that, in a sense, AI research will yield tools, but not tools you can unleash autonomously in the world. ChatGPT, she notes, is impressive, but it’s not always accurate. “They’re the kind of tools that provide insights, suggestions, and ideas that people can act on,” she says. “And these insights, suggestions, and ideas are not the ultimate answer.”
Additionally, Ghani says that while these systems “appear to be intelligent,” they are really just looking for patterns. “They were just coded to put things together that happened together in the past, and to put them together in new ways.” A computer doesn’t learn on its own that falling over is bad. It needs feedback from a human programmer telling it so.
[Related: Why artificial intelligence is everywhere now]
Also, machine learning algorithms can be lazy. For example, imagine giving a system images of males, females, and non-binary people and telling it to distinguish between the three. It will find patterns that are different, but not necessarily ones that make sense or are important. If all men wear one color of clothing, or all photos of women are taken against the same color background, colors will be the characteristics these systems pick up on.
“It’s not intelligent; it’s basically saying, ‘You asked me to distinguish between three sets. The laziest way to distinguish them was that feature,’” says Ghani. Additionally, some systems are “designed to return the majority response from the internet for many of these things. Taking the majority’s response, which is often racist and sexist, is not what we want in the world.”
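The “laziest feature” failure Ghani describes is easy to reproduce with made-up data. In the toy below, background color happens to correlate perfectly with the class in the training set, so a one-line rule scores 100 percent there and then fails the moment the accident breaks.

```python
# Invented training data: (background_color, true_class). By accident of how
# the photos were taken, background color correlates perfectly with the class.
train = [("blue", "A"), ("blue", "A"), ("green", "B"), ("green", "B")]

def lazy_rule(background_color):
    """The shortcut a lazy learner settles on: classify by background alone."""
    return "A" if background_color == "blue" else "B"

# The shortcut looks flawless on the training data...
train_accuracy = sum(lazy_rule(color) == label for color, label in train) / len(train)

# ...but it says nothing about the subjects: a class-A photo shot against a
# green background would immediately be labeled "B".
surprise = lazy_rule("green")
```

Real learners pick such shortcuts statistically rather than via an explicit if-statement, but the failure mode is the same: the pattern found is real in the data and meaningless in the world.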
In his view, a lot of work still needs to be done: tailoring algorithms to specific use cases, making it understandable to humans how a model gets from its inputs to its results, and ensuring the input data is fair and accurate.
What does the next decade hold for AI?
Computer algorithms are good at taking in and synthesizing large amounts of information, while humans can only sort through a few things at a time. Understandably, then, computers tend to be much better at combing through a billion documents and finding recurring facts or patterns. Humans, however, are capable of digging into a single document, picking up on small details, and thinking them through.
“I think one of the things that’s overrated is the autonomy of AI, operating on its own in uncontrolled environments where humans are also present,” says Ghani. In very controlled environments, like setting the price of groceries within a certain range with the end goal of optimizing profit, AI works very well. Collaborating with people remains important, however, and he predicts the field will see many advances in systems designed for collaboration over the next few decades.
Drug research is a good example, he says. Humans still do a lot of the work with lab tests, and the computer simply uses machine learning to help them prioritize which experiments to run and which interactions to study.
“[AI algorithms] can do really extraordinary things much faster than we can. But the way to think about it is that these are tools designed to expand and improve the way we work,” says Rus. “And like all other tools, these solutions are not good or bad per se. They are what we make of them.”