Learning machine learning takes a lot of effort. It’s not just about knowing enough, it’s also about knowing why.
We’re sure you are all caught up on the top 10 machine learning algorithms we discussed a while ago, but what remains is to understand when to use these high-powered algorithms and why exactly to use them!
In this article, we will discuss and compare supervised and unsupervised machine learning algorithms, look at select applications, and understand how they work.
Before we dive in, let’s see what machine learning algorithms are, from scratch!
What are machine learning algorithms?
Machine learning algorithms are structures that are fed data to predict output values. As these structures continue to receive data, they optimize themselves and learn over time.
Machine learning algorithms are further classified into two groups: supervised and unsupervised.
Consider taking a walk in a zoo. All animals that you come across are classified in your brain differently based on previous knowledge and awareness of the animal characteristics, features and behaviors.
This is supervised learning. In this case, your brain acts as a supervisor, relating everything you’ve seen to previously acquired knowledge and finally assigning an animal to a certain class. Simply put, supervised learning needs a supervisor.
Now imagine a child is walking in that zoo with you. This is the child’s first visit to a zoo and he has no previous knowledge of the animal kingdom. Now if you were to not help the child, the child would be totally unsupervised when learning about these animals.
If this child were to make sense of what he saw at the zoo that day, he’d probably describe the animals as being big or small, having wings or not, having stripes or a plain coat, having a long neck or a long trunk, and so on.
The child would consider classifying animals with most commonality and assign them to different groups. In this case, the child is completely unsupervised. This is a case of unsupervised learning.
Now let’s talk about supervised and unsupervised learning in ML algorithms.
Supervised machine learning algorithms
The two key requirements of supervised learning in ML algorithms are having historical data and output data.
Borrowing from the example before, having previously seen zoo animals (historical data) and being aware of the classes these animals belong to (output data) makes this a case of supervised learning.
Supervised ML algorithms try to find and fit a relationship between the historical data fed to them and the output data. In supervised learning, the machine learns from data that is already labeled. It then applies that learning to label other data-sets based on what it learned from the training data-set.
Supervised learning tackles two major problems: regression and classification.
What is supervised learning in regression?
Regression involves predicting output values based on a relationship between continuous variables. To refresh your understanding of regression, you might want to check out our article on Top 10 ML algorithms.
Now how is regression a part of supervised learning?
If a regression algorithm were required to predict the sales of team jerseys for an apparel manufacturer in the IPL season, it would essentially establish a relationship between jersey sales and the IPL fervor. Here, the predicted output would be continuous sales figures.
Also, the machine would learn about the nature of the relationship from the sales data fed to it collected from the previous IPL matches. This is how regression is a part of supervised learning.
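As a rough sketch of this idea, here is a simple linear regression learning from labeled history and predicting a continuous value. The numbers below are invented for illustration, not real IPL sales data:

```python
# A minimal sketch of supervised regression: predicting jersey sales
# from match progress in a season. All figures here are made up.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical historical data: matches played so far (input)
# vs. jerseys sold (the continuous, labeled output).
matches_played = np.array([[1], [2], [3], [4], [5], [6]])
jerseys_sold = np.array([120, 150, 180, 210, 240, 270])  # perfectly linear, for clarity

model = LinearRegression()
model.fit(matches_played, jerseys_sold)  # learn from labeled history

predicted = model.predict([[7]])  # forecast sales for the next match
```

Because the training data is perfectly linear here, the model recovers the trend exactly; real sales data would be noisier, but the supervised workflow — fit on labeled history, then predict — stays the same.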
How does supervised learning fit in classification?
A classification problem simply requires us to group an incoming stream of data into classes based on some common set of features. This is the closest to the zoo problem we discussed.
If a system were tasked with classifying incoming people as male or female, a classification algorithm already trained on a data-set of people labeled male or female would come in handy.
The algorithm would leverage its previous knowledge and classify incoming data into the two classes based on common features and characteristics identified by it. You can learn more about machine learning classification algorithms here.
Supervised learning meets Naive Bayes classifier
Naive Bayes classifiers are popularly used for classification problems. The most common use would be to classify emails into spam or useful. To know how the algorithm achieves that, you might want to refer to our previous article- Top 10 Machine Learning Algorithms.
The classifier is trained on labeled data and used to group emails into two distinct classes; this is how the algorithm finds its place in the sphere of supervised learning. The algorithm uses Bayes’ theorem of probability to determine the classification.
It is called ‘naive’ as it assumes that the features or characteristics of the input data are independent of each other. In our email example, it would treat the probability of each word occurring as independent of every other word, which assumes that in an email titled ‘Best food services in New Delhi’, ‘New’ and ‘Delhi’ are independent words!
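To make this concrete, here is a toy sketch of a Naive Bayes spam filter. The tiny corpus below is invented for illustration; a real filter would train on thousands of labeled emails:

```python
# A toy Naive Bayes spam filter trained on a handful of labeled emails.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "win a free prize now",
    "limited offer win money",
    "meeting agenda for monday",
    "project report attached",
]
labels = ["spam", "spam", "ham", "ham"]  # supervised: every email is labeled

# Bag-of-words features: word order is ignored, which matches the
# 'naive' assumption that words occur independently of each other.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)

clf = MultinomialNB().fit(X, labels)

prediction = clf.predict(vectorizer.transform(["free money offer"]))[0]
```

Since "free", "money", and "offer" only appeared in the spam training emails, the classifier assigns the new message to the spam class.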
Supervised learning and Linear Discriminant Analysis
LDA is a prominent algorithm in dimensionality reduction.
What is Dimensionality Reduction?
Dimensionality reduction takes the stage when we’re dealing with heavy, complicated data-sets. Suppose you had to arrange your data based on certain characteristics but these characteristics kept getting repeated or duplicated. Or maybe some characteristics weren’t relevant to the data-set at all. In this case, it makes sense to alter your data by reducing such redundant characteristics or dimensions.
This is what dimensionality reduction deals with. Painting a wall is easier than painting a room, and painting a line on the floor is easier still. As dimensions go out the window, the job becomes easier to handle!
LDA reduces the dimensions or features of a labeled data-set. Since it uses a labeled data-set to learn, it becomes a part of supervised learning.
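A minimal sketch of LDA as supervised dimensionality reduction, using the classic iris data-set: the class labels guide which directions of the data are kept.

```python
# LDA: reduce a labeled data-set from 4 features to 2 components.
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)  # 4 features, 3 labeled classes

lda = LinearDiscriminantAnalysis(n_components=2)
X_reduced = lda.fit_transform(X, y)  # note: the labels y are required here
```

That required `y` argument is the tell-tale sign of supervised learning — an unsupervised reducer like PCA would not need it.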
Supervised learning and other algorithms in its kitty
Supervised learning has a number of other algorithms grouped under it.
Decision trees, Random Forests and Support Vector Machines are also algorithms which work on labeled input data. You can learn more about these supervised machine learning algorithms here.
Unsupervised machine learning algorithms
We’ve now arrived at unsupervised learning.
Unsupervised learning in machine learning algorithms happens when you have incoming input data but no clue what the output should look like.
In unsupervised learning, we drive in the dark trying to make sense of whatever data we have on our plate by performing a set of different functions on it. Unsupervised learning covers a number of different algorithms. Let’s discuss some of them!
Unsupervised learning and K-means!
Imagine a system is introduced to a large data-set completely unsupervised. It would try to put two and two together, and one approach it could use here is k-means.
K-means groups the data into k clusters based on similarities. A detailed explanation of the algorithm can be found here. Since the system does not know what to look for, it makes its own associations within the data, known as clusters. This is why k-means falls under the unsupervised learning umbrella.
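Here is a minimal k-means sketch on invented points: the algorithm receives unlabeled data and must come up with its own grouping into k clusters.

```python
# K-means on six unlabeled 2-D points forming two obvious blobs.
# Note that we never tell the algorithm which point belongs where.
import numpy as np
from sklearn.cluster import KMeans

points = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
                   [8.0, 8.0], [8.1, 7.9], [7.8, 8.2]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
cluster_labels = kmeans.labels_  # cluster id assigned to each point
```

The first three points end up in one cluster and the last three in the other — the grouping is discovered from the data alone, with no labels in sight.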
Unsupervised learning meets DBSCAN
DBSCAN stands for density-based spatial clustering of applications with noise. DBSCAN addresses some of the limitations of k-means, earning its spot as a popular machine learning algorithm.
DBSCAN makes for great recommendation systems. A number of services including Amazon and Netflix use previous or similar experiences to recommend new options to the users.
An easy way to imagine this is a system receiving purchase data from various users. DBSCAN groups data based on density: highly dense data reflects a possible pattern that can be formed.
It improves on other clustering algorithms by setting aside confusing or sparse data points as noise. The system then makes sense of the data by grouping it into clusters based on similarities. This is how it maps a user A’s preferences to those of a similar user B and suggests products purchased by user B to user A.
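A minimal DBSCAN sketch on invented points shows the two properties described above: unlike k-means, we never say how many clusters to expect, and sparse points get flagged as noise (label -1).

```python
# DBSCAN: dense regions become clusters, the lone point becomes noise.
import numpy as np
from sklearn.cluster import DBSCAN

points = np.array([[1.0, 1.0], [1.1, 1.0], [0.9, 1.1],   # dense group A
                   [5.0, 5.0], [5.1, 4.9], [4.9, 5.1],   # dense group B
                   [20.0, 20.0]])                          # lone outlier

# eps = neighborhood radius, min_samples = points needed to form density
cluster_labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(points)
```

The two dense groups come out as separate clusters, while the isolated point is labeled -1 (noise) — the behavior that makes DBSCAN robust for messy, real-world data like purchase histories.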
Unsupervised learning and Market Basket Analysis
Let us offer a little context on market basket analysis.
Market basket analysis is used to predict consumer behavior. It is much like going through the products in a consumer’s basket or shopping cart.
It falls under unsupervised learning because there is no clarity on the output you’re expecting. What you can do is sneak into different shopping carts and find patterns about which products are bought together, which products are most bought on a particular day or weekends, which customers are buying common products and so on.
Sellers use these insights when deciding the layout of their store. For example, protein shakers will sit close to gym apparel. The same analysis is also used to group similar customers together and suggest products to them based on each other’s purchases.
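A bare-bones sketch of the idea: counting which product pairs co-occur in shopping carts. The carts below are invented for illustration; a full market basket analysis would use association-rule mining (e.g. the Apriori algorithm), but pair counting captures the core intuition.

```python
# Count co-occurring product pairs across shopping carts (toy data).
from collections import Counter
from itertools import combinations

carts = [
    {"protein shaker", "gym shorts"},
    {"protein shaker", "gym shorts", "water bottle"},
    {"protein shaker", "gym shorts"},
    {"bread", "butter"},
    {"bread", "butter", "milk"},
]

pair_counts = Counter()
for cart in carts:
    # sorted() gives each pair a canonical order so counts aggregate
    for pair in combinations(sorted(cart), 2):
        pair_counts[pair] += 1

top_pair, top_count = pair_counts.most_common(1)[0]
```

The most frequent pair here is ("gym shorts", "protein shaker") — exactly the kind of signal a seller would use to shelve those products together.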
Unsupervised learning in Local Outlier Factor
Ever wondered how credit card frauds are detected?
Surely, there’s no one sitting at a bank tracking every transaction you make and judging whether it’s actually something you would buy! But close enough: banks use algorithms like local outlier factor, or LOF for short, to study unusual card activity. LOF detects anomalies or outliers in the purchase data to understand whether fraud has been committed.
These outliers are identified based on the density of distribution.
Unsupervised learning and Neural networks
Neural networks are named after the networks of neurons in the human brain. They are associated with deep learning, a sub-field of machine learning in artificial intelligence.
Artificial neural networks (ANN) are modeled after the working of human neural networks. They imitate or simulate processing of information just like a human. Of course, we haven’t achieved the same level of complexity as a human brain but that sure is the objective.
ANNs are built to explore complex relationships between the input data and the output. They learn continuously from an incoming stream of data and optimize themselves accordingly. When that data carries no labels, this makes them a part of unsupervised learning.
ANNs train on the input data themselves. Each node in the network acts as a processor and is responsible for a specific function. A combination of these nodes and networks is what gets complex tasks done.
Unsupervised learning and Principal Component Analysis
PCA is another algorithm used in dimensionality reduction.
PCA does things a little differently. It puts the data through a linear transformation, thus reducing dimensions. To do this, it finds patterns and associations among the features or dimensions by itself. Since the incoming data is not labeled, PCA becomes a part of unsupervised learning.

Now that we’ve covered the supervised and unsupervised machine learning algorithms, you might want to know about a third, distinguished class: semi-supervised learning.
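As a quick sketch before we move on: synthetic 3-dimensional data whose variation lies almost entirely along one direction, which PCA compresses into a single component with no labels in sight.

```python
# PCA: 3 correlated features reduced to 1 principal component.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
t = rng.normal(size=(100, 1))                 # one underlying signal
# Three features that are all (noisy) multiples of the same signal,
# i.e. redundant dimensions ripe for reduction.
X = np.hstack([t, 2 * t, -t]) + rng.normal(scale=0.01, size=(100, 3))

pca = PCA(n_components=1)
X_reduced = pca.fit_transform(X)              # 3 features -> 1 component
```

Notice that `fit_transform` receives only `X`, no labels — and because the three features are near-duplicates of one signal, the single component retains almost all of the variance.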
Semi-supervised learning
Semi-supervised learning relates to a situation where the incoming data (the data on which the system trains itself) is only partially labeled.
Consider the zoo example again. Only this time around, you have limited knowledge about the different animals. Say you were exposed to wolves, foxes and dogs before but not all types in the family. Now when you see a coyote in a zoo, you would not know how to classify it.
Similarly, consider the auto-tagging feature on social media platforms. The feature refers to the past tags that you’ve chosen and tags your new pictures accordingly.
In this case, let’s say not all your older pictures were tagged, so the system was trained on a stream of input data that was not fully labeled. This is a classic case of semi-supervised learning.
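A minimal sketch of this situation, using label propagation on invented points (scikit-learn's convention marks unlabeled samples with -1): only two points carry labels, yet those labels spread across the nearby unlabeled points.

```python
# Semi-supervised learning: propagate two known labels to the rest.
import numpy as np
from sklearn.semi_supervised import LabelPropagation

points = np.array([[1.0], [1.1], [0.9], [8.0], [8.1], [7.9]])
labels = np.array([0, -1, -1, 1, -1, -1])  # -1 means "no tag yet"

model = LabelPropagation().fit(points, labels)
inferred = model.transduction_  # labels inferred for every point
```

Each untagged point inherits the label of the nearby tagged one — much like the auto-tagging feature filling in names on pictures that were never tagged by hand.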
You’re almost there…
We hope you can now differentiate between supervised and unsupervised learning, their corresponding algorithms, and when to use them.
Let us know if you’ve attempted any of these algorithms before and your experience with it in the comments section!