Today, AI helps doctors diagnose patients, pilots fly commercial aircraft, and city planners predict traffic. But sometimes even the computer scientists who designed these AIs don’t know exactly how they arrive at their results. This is because AI is often self-taught, working off a simple set of instructions to create a unique array of rules and strategies.
So how exactly does an AI learn?
There are many different ways to build self-teaching programs. But they all rely on the three basic types of AI learning: unsupervised learning, supervised learning, and reinforcement learning.
First, unsupervised learning. This broad, pattern-seeking approach is ideal for analyzing large collections of data to identify similarities between cases and surface emerging patterns, all without human guidance.
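To make this concrete, here is a minimal sketch of unsupervised learning: a simple one-dimensional k-means that groups numbers into clusters without ever being told what the groups are. The data values and the choice of two clusters are illustrative assumptions, not from the article.

```python
# Unsupervised learning sketch: group unlabeled 1-D data points into
# k clusters by repeatedly assigning points to the nearest center and
# moving each center to the mean of its assigned points (k-means).

def kmeans_1d(points, k=2, iters=10):
    centers = points[:k]  # start with the first k points as centers
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assign each point to its nearest cluster center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Move each center to the mean of its assigned points.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

data = [1.0, 1.2, 0.8, 9.5, 10.1, 9.9]
groups = kmeans_1d(data)
print(groups)  # two natural groupings emerge without any labels
```

No human ever labels the data here; the structure is discovered purely from the similarities between the values themselves.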
Now for something more specific. Suppose computer scientists want to create an algorithm for a particular condition, and they begin by collecting labeled sets of data. They input that data into a program designed to identify features shared by cases with the same outcome. Based on how frequently it sees certain features, the program assigns values to those features, generating an algorithm for diagnosing future cases. The programmers then check the accuracy of the algorithm’s predictions against known results, and computer scientists can use the updated datasets to adjust the program’s parameters and improve its accuracy. This whole process is called supervised learning.
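A toy version of that idea might look like the following sketch: labeled examples are used to weight features by how often they appear alongside the positive label, and those weights then score new, unseen cases. The feature names, the weighting scheme, and the 0.5 decision threshold are all illustrative assumptions.

```python
# Supervised learning sketch: learn feature weights from human-labeled
# examples, then use those weights to classify new cases.

def train(examples):
    # examples: list of (set_of_features, label) pairs labeled by humans.
    counts, positives = {}, 0
    for features, label in examples:
        if label:
            positives += 1
            for f in features:
                counts[f] = counts.get(f, 0) + 1
    # Weight = fraction of positive examples that contain the feature.
    return {f: c / positives for f, c in counts.items()}

def predict(weights, features, threshold=0.5):
    # Average the learned weights of the observed features.
    score = sum(weights.get(f, 0) for f in features) / max(len(features), 1)
    return score >= threshold

labeled = [
    ({"fever", "cough"}, True),
    ({"fever", "fatigue"}, True),
    ({"fatigue"}, False),
]
weights = train(labeled)
print(predict(weights, {"fever", "cough"}))
```

The key contrast with the unsupervised approach is the labels: a human has already marked each training example as positive or negative, and that supervision is what the program learns from.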
Every problem has different stages, so different plans may be implemented at each stage, and those plans may change depending on how each individual responds. This is where reinforcement learning comes into play. A reinforcement learning program uses an iterative approach to gather feedback about which plans and solutions are most effective. It then compares that feedback against each problem to create a unique, optimal plan. As problems are solved and the program receives more feedback, it can continually update the plan for each one.
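The trial-and-feedback loop described above can be sketched as a simple epsilon-greedy agent: it tries different plans, records the reward each one earns, and gradually favors the plan with the best average outcome. The plan names, the hidden success rates, and the exploration rate are illustrative assumptions.

```python
import random

# Reinforcement learning sketch: choose between plans, receive feedback
# (reward), and shift toward the plan with the best average result.

random.seed(0)
success_rate = {"plan_a": 0.2, "plan_b": 0.8}  # hidden from the agent
totals = {p: 0.0 for p in success_rate}
tries = {p: 0 for p in success_rate}

for step in range(500):
    # Explore occasionally; otherwise exploit the best-known plan.
    if step < len(success_rate) or random.random() < 0.1:
        plan = random.choice(list(success_rate))
    else:
        plan = max(totals, key=lambda p: totals[p] / max(tries[p], 1))
    # Feedback: the environment rewards the plan with some probability.
    reward = 1.0 if random.random() < success_rate[plan] else 0.0
    totals[plan] += reward
    tries[plan] += 1

best = max(totals, key=lambda p: totals[p] / max(tries[p], 1))
print(best)
```

Unlike the supervised sketch, no one tells the agent the right answer up front; it discovers the better plan only through repeated feedback, which mirrors how such a program could keep adapting as responses change.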
None of these three techniques is inherently smarter than the others. While some require more or less human intervention, each has strengths and weaknesses that make it best suited for certain tasks. By using all three together, researchers can build complex AI systems in which individual programs supervise and teach one another.
For example, when an unsupervised learning program finds groups of similar cases, it could send that data to a connected supervised learning program, which could then incorporate the information into its predictions. Or perhaps dozens of reinforcement learning programs might simulate potential outcomes to collect feedback about different problem-solving plans.
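A stripped-down sketch of that first hand-off might look like this: an unsupervised-style grouping step buckets cases, and a supervised step then learns the majority label within each bucket. The fixed threshold stands in for a learned grouping, and all data values and labels are illustrative assumptions.

```python
# Pipeline sketch: an unsupervised-style grouping step feeds its group
# ids into a supervised step that learns from human-provided labels.

def group(value):
    # Stand-in for an unsupervised grouping step: bucket values coarsely.
    return "low" if value < 5 else "high"

def train(labeled_values):
    # Supervised step: learn the majority label within each group.
    votes = {}
    for value, label in labeled_values:
        votes.setdefault(group(value), []).append(label)
    return {g: max(set(ls), key=ls.count) for g, ls in votes.items()}

labeled = [(1, "ok"), (2, "ok"), (8, "alert"), (9, "alert"), (7, "ok")]
model = train(labeled)
print(model["low"], model["high"])
```

The point is the division of labor: one program organizes the data, and another uses that organization plus human labels to make predictions.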
There are countless ways to create these machine-learning systems, and some of the most powerful models mimic the relationships between neurons in the brain. These artificial neural networks can use millions of connections to tackle difficult tasks like image recognition, speech recognition, and even language translation. However, the more self-directed these models become, the harder it is for computer scientists to determine how these self-taught algorithms arrive at their solutions. Researchers are already looking at ways to make machine learning more transparent.
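The neuron-like connections mentioned above can be illustrated at the smallest possible scale: a single artificial neuron (a perceptron) that adjusts its connection weights from examples until it reproduces the logical OR function. Real networks chain millions of these units; the learning rate and epoch count here are illustrative assumptions.

```python
# Neural network sketch at minimal scale: one artificial neuron adjusts
# its connection weights from examples until it learns the OR function.

def step(x):
    return 1 if x > 0 else 0

weights = [0.0, 0.0]
bias = 0.0
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

for _ in range(20):  # repeat passes until the rule is learned
    for inputs, target in data:
        out = step(weights[0] * inputs[0] + weights[1] * inputs[1] + bias)
        error = target - out
        # Strengthen or weaken each connection toward the right answer.
        weights = [w + 0.1 * error * x for w, x in zip(weights, inputs)]
        bias += 0.1 * error

outputs = [step(weights[0] * a + weights[1] * b + bias)
           for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(outputs)
```

Nothing in this unit is hand-coded for OR; the behavior emerges from repeated weight adjustments, which is also why the inner workings of much larger networks are so hard to interpret.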
As AI becomes more involved in our daily lives, these enigmatic programs have increasingly large impacts on our work, health, and safety. So as machines continue learning to investigate, negotiate, and communicate, we must also teach them to teach each other to operate ethically.