Machine learning is a term we hear a lot these days. Since DeepMind’s AlphaGo beat world champion Go player Ke Jie in May 2017, it has also become famous beyond the circle of data scientists and techies. But only a few have knowledge of machine learning that goes deeper than what we know from science fiction movies. In this article I will try to answer two questions: why use machine learning (ML), and how does it actually work?
Why Machine Learning?
I am really bad at botany. If you show me any flower, I can only say whether it’s a rose or not. So, it would be great to have software that can figure out the names of at least a presentable set of domestic flowers. Let’s have a look at how a programmer would tackle this.
To solve the Flower-Detection-Problem (FDP), the first step is to define the set of flower types to classify. After a short inquiry, we notice that this will be quite a laborious task, as there are thousands of different types. To keep ourselves motivated, let’s limit the set to 500.
Next, we have to specify attributes describing each of the 500 flowers. Again, a challenging job: we can only guess which ones are relevant, and the more attributes we consider, the more complex the detection logic we have to implement later becomes. Anyway, here are some possible candidates I could think of:
- Total height
- Size of the blossom
- Number of blossoms
- Petal color
- Number of petals
- Diameter of the stalk
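To make the six attributes above concrete, here is a minimal sketch of how one flower could be described as a record. The field names, units and values are my own invention, chosen only for illustration:

```python
from dataclasses import dataclass

# Hypothetical record for the six attributes listed above; all values are invented.
@dataclass
class FlowerSample:
    total_height_cm: float
    blossom_size_cm: float
    blossom_count: int
    petal_color: str
    petal_count: int
    stalk_diameter_mm: float

rose = FlowerSample(
    total_height_cm=60.0,
    blossom_size_cm=8.0,
    blossom_count=1,
    petal_color="red",
    petal_count=30,
    stalk_diameter_mm=6.0,
)
print(rose.petal_color)  # red
```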
You might think: why not just use an image of the flower? Good point, but writing logic that inspects every pixel and tries to classify it is practically impossible. Implementing rules for just the six parameters above is also way too complex and time-consuming for a human being and would end up in a ton of confusing conditional constructs.
We see that even if we limit the classification set and the number of attributes we use for classification rules, a traditional programming approach to the FDP will fail painfully. You might already guess a better solution. You’re right: machine learning! ML can figure out rules that a programmer could never identify on their own in a satisfying way. Moreover, a computer can process data far faster and more precisely than a human being.
However, we still have to do steps one and two: specifying the input and the output. And that’s not all. We also have to provide a dataset, something the computer can learn from. Only the black box between input and output is generated by the machine learning algorithm.
How it basically works
As we said before, machine learning can figure out the rules that convert our input data into an output. This set of rules is called a model. A well-trained model is the output of every ML project. But how do we train a model? The recipe contains two main ingredients: a dataset and an algorithm. The algorithm takes the dataset and generates the model.
A dataset is basically a table or matrix of data. Each column represents one aspect of the problem; in our FDP case, one attribute such as petal color. These attributes are commonly called features. Each row is a set of feature values describing one specific instance of the problem, a so-called example, for instance one specific representation of a flower type.
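This table structure can be sketched with plain Python lists. The values below are invented, toy numbers, just to show the rows-as-examples, columns-as-features layout:

```python
# A tiny, invented dataset: each column is a feature, each row one example.
columns = ["height_cm", "petal_count", "petal_color"]

rows = [
    [60.0, 30, "red"],     # a rose
    [25.0,  6, "yellow"],  # a tulip
    [70.0, 34, "white"],   # another rose
]

# Access one feature (column) across all examples:
petal_counts = [row[1] for row in rows]
print(petal_counts)  # [30, 6, 34]
```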
The quality of the dataset highly affects the quality and reliability of the model. If you only have one example per flower type in your dataset, the model will be little more than a random generator. False or missing values in examples are also harmful. Including irrelevant attributes in the dataset is not helpful, but not very damaging either: it slows down the algorithm, but one strength of machine learning is finding patterns humans can’t see. So, if you’re not absolutely sure some data is irrelevant, keep it.
Algorithms are the core of machine learning. Here is where the magic happens. Data scientists and mathematicians have racked their brains to develop a broad selection of machine learning algorithms. The great challenge is to choose the right one. The algorithms can be grouped into different machine learning types.
The most widely applied learning type is supervised learning. This learning style is similar to how humans learn: from examples.
The dataset is then called training data and contains the right answer for each problem instance (example). In other words, it is a mapping table of input and output data. The training data has to be well prepared, and the research behind it can be very time-consuming.
The Flower-Detection-Problem can also be solved by a supervised learning algorithm. Let’s walk through one specific example here to make it easier to understand. Imagine we go to the florist and buy a rose. From this very rose we determine values for each attribute we specified earlier. This is our input data for the example. The output data, or label, is “this is a rose”. Now our training data contains one example. We should buy another rose, as it could have slightly different attribute values, and create another example. We continue doing this multiple times for each flower type and thus get the training data.
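The procedure above can be sketched in a few lines. This is not the algorithm a real flower classifier would use; it is a deliberately tiny 1-nearest-neighbour classifier on invented (height, petal count) measurements, chosen only because it makes the idea of learning from labeled examples visible:

```python
import math

# Invented training data: each example maps (height_cm, petal_count) to a label.
training_data = [
    ((60.0, 30), "rose"),
    ((62.0, 28), "rose"),
    ((25.0, 6),  "tulip"),
    ((28.0, 6),  "tulip"),
]

def classify(sample):
    """1-nearest-neighbour: answer with the label of the closest training example."""
    nearest_features, label = min(
        training_data,
        key=lambda pair: math.dist(pair[0], sample),
    )
    return label

print(classify((59.0, 31)))  # rose
print(classify((26.0, 7)))   # tulip
```

The more (and more varied) roses and tulips we measure, the better the classifier’s answers become, which is exactly why we keep buying flowers in the paragraph above.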
Some common applications of supervised learning include handwriting recognition, face detection on images, speech recognition and spam detection to name just a few.
The main difference between unsupervised and supervised learning lies in the dataset: the data is not labeled, and there is no given answer for each problem instance. The algorithms are not supposed to learn how to match input to output. Instead, the goal is to find patterns, similarities or deviations in the data.
Back to the Flower-Detection-Problem: we could also consider using an unsupervised learning algorithm if we had a different dataset. Suppose we had a bunch of photos of many different flower types. Depending on the ratio between the number of types and the number of photos per type, an unsupervised learning algorithm could group the pictures according to the flower shown in them, in a more or less precise way.
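One classic algorithm for this kind of grouping is k-means clustering. Below is a minimal sketch: instead of real photos, each point is an invented 2-D descriptor, and the two groups are deliberately well separated so the clustering is easy to follow. A real implementation would pick the initial centres randomly; here they are fixed so the result is deterministic:

```python
import math

# Invented 2-D "photo descriptors": two visibly separate groups, no labels.
points = [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9),
          (8.0, 8.1), (7.9, 8.3), (8.2, 7.8)]

def kmeans(points, centers, steps=10):
    """Plain k-means: alternate nearest-centre assignment and centre updates."""
    for _ in range(steps):
        clusters = [[] for _ in centers]
        for p in points:
            idx = min(range(len(centers)), key=lambda i: math.dist(p, centers[i]))
            clusters[idx].append(p)
        centers = [
            tuple(sum(c) / len(cluster) for c in zip(*cluster)) if cluster else ctr
            for cluster, ctr in zip(clusters, centers)
        ]
    return clusters

# Start from two arbitrary points as initial centres.
groups = kmeans(points, centers=[points[0], points[-1]])
print(sorted(len(g) for g in groups))  # [3, 3]
```

No label ever appears in the data; the algorithm discovers the two groups purely from the similarity of the points.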
In a second step, we could label the result and use it as training data for a supervised learning algorithm. Facebook does something similar when suggesting tags for people in your pictures. It uses unsupervised learning to find similar faces in its users’ pictures and asks them to tag a face in a picture. If you link a Facebook profile to a detected face, you have done the labeling for them.
As already mentioned above, collecting training data for a supervised learning algorithm can be very laborious and in many cases even impossible. When you can’t derive the solution of a problem from examples, you have to figure it out by trial and error. And that’s basically how reinforcement learning works.
An agent has to achieve a goal within a dynamic environment. For example, a robot needs to walk on bumpy ground. To do so, it has a set of actions to choose from. Usually, the agent will initially select actions according to an underlying rudimentary model, also known as the policy. The policy itself can be based on other machine learning models.
Depending on how well the agent performs, it will be rewarded or punished in the form of a scalar value. Of course, the agent wants to maximize its cumulative reward and will therefore adapt its policy according to the feedback from the environment.
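The reward-driven loop can be sketched with Q-learning, one common reinforcement learning algorithm, on an invented toy environment: a corridor of five cells where the agent starts on the left and is rewarded only for reaching the rightmost cell. Everything here (environment, rewards, constants) is made up for illustration:

```python
import random

# Toy environment: a corridor of 5 cells; the goal is the rightmost cell.
# Actions: 0 = step left, 1 = step right. Reaching the goal gives reward +1.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2  # learning rate, discount, exploration rate

q = [[0.0, 0.0] for _ in range(N_STATES)]  # the policy's value table
rng = random.Random(0)

for _ in range(200):  # training episodes
    state = 0
    while state != GOAL:
        # Explore sometimes, otherwise exploit the current policy.
        if rng.random() < EPSILON:
            action = rng.randrange(2)
        else:
            action = 0 if q[state][0] > q[state][1] else 1
        next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        q[state][action] += ALPHA * (
            reward + GAMMA * max(q[next_state]) - q[state][action]
        )
        state = next_state

# After training, the greedy policy steps right (toward the goal) in every state.
policy = [0 if left > right else 1 for left, right in q[:GOAL]]
print(policy)  # [1, 1, 1, 1]
```

The agent is never told the answer in advance; it discovers the "always step right" policy purely from the scalar reward signal, which is the essential difference from supervised learning.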
To improve the play of AlphaGo, reinforcement learning was used in games of AlphaGo against instances of itself. Training an AI to play computer games is a nice way to show the potential of reinforcement learning. Another cool example from DeepMind is a piece of software playing Atari games.
More useful applications of reinforcement learning can be found in robotics and financial market trading.
There is a bunch of awesome stuff you can do with machine learning, and people are already doing it. But this is only the beginning of an exciting journey. The biggest challenges are still ahead, but so are the greatest possibilities. Science fiction is turning into reality, and the potential is beyond imagination.
Now it’s up to you. This has only been a brief introduction. There is much more to explore, and many exciting things are waiting for you. You can find further reading here.
Stanford University also offers a machine learning online course.
Nice text. I wanted to add that ML is widely used nowadays in self-driving cars too (for example by Waymo, a subsidiary of Google). Indeed, since the diversity of situations a self-driving car controller can encounter is huge, it would be IMPOSSIBLE to use a traditional programming approach to address all of them. This is why self-driving car companies use ML: the car learns by itself while driving millions of kilometers assisted by a human driver.
I’m curious to know whether there are these kinds of projects at Palfinger…