Deep Learning over Machine Learning


What is Artificial Intelligence?

AI is the capability of a machine to imitate intelligent human behavior.

It is accomplished by studying how the human brain thinks, and how humans learn, decide, and work while trying to solve a problem. The outcomes of this study are used as a basis for developing intelligent software and systems.

Applications of AI (a few): speech recognition, natural language processing (NLP), and image recognition.

Technologies:

  • Artificial Intelligence – The term was coined in 1956, but for decades it remained largely a theoretical concept. Neural networks were widely discussed in the 1980s and 1990s, yet the computational power needed to implement them was not available at the time. In the late 1990s and 2000s, neural networks started to be used within machine learning, and in 2006 the term “Deep Learning” was coined for approaches that overcame the limitations of ML. Since around 2010, deep learning has been used extensively.
  • Machine Learning – a subset of AI. It has limitations of its own, and Deep Learning came into the picture to overcome them.
  • Deep Learning – a subset of Machine Learning that uses neural networks to simulate human-like decision making.

Machine Learning (ML)

ML is a type of AI that provides computers with the ability to learn without being explicitly programmed.
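To make this concrete, here is a minimal sketch of learning from examples rather than from hand-written rules. It assumes scikit-learn is installed, and the tiny dataset is purely illustrative:

```python
# A minimal sketch of "learning without explicit programming":
# instead of hand-coding rules, we fit a model to example data.
# Assumes scikit-learn is installed; the data is illustrative only.
from sklearn.tree import DecisionTreeClassifier

# Toy examples: [hours_studied, hours_slept] -> passed the exam (1) or not (0)
X = [[1, 4], [2, 8], [6, 7], [8, 6], [9, 8]]
y = [0, 0, 1, 1, 1]

model = DecisionTreeClassifier()
model.fit(X, y)                     # the mapping from X to y is learned from data
print(model.predict([[7, 7]]))      # e.g. [1] -- predicted from the learned rules
```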

Types of ML:

  1. Supervised – where you have input variables (x) and an output variable (y) and you use an algorithm to learn the mapping function from the input to the output.
  2. Unsupervised – the training of a model using information that is neither classified nor labelled. Such a model can be used to cluster the input data into classes based on its statistical properties, aiming for high intra-class similarity and low inter-class similarity.
  3. Reinforcement – learning by interacting with an environment. An agent learns from the consequences of its actions rather than from being taught explicitly, selecting actions on the basis of its past experiences (exploitation) as well as new choices (exploration).

Consider an agent that performs an action in an environment. If the action works well, the environment rewards the agent, which encourages it to repeat that action. If the action causes problems, the environment changes state, the agent tries another action, and the same process continues.
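As a rough illustration of this reward loop, the following sketch uses a toy two-action environment. The reward probabilities and the epsilon-greedy strategy are assumptions made for illustration, not part of the discussion above:

```python
# A minimal sketch of the reinforcement loop: act, receive a reward,
# update estimates, and balance exploitation against exploration.
import random

rewards = {"A": 0.8, "B": 0.3}           # hidden reward probabilities (assumed)
value = {"A": 0.0, "B": 0.0}             # the agent's estimates of each action
counts = {"A": 0, "B": 0}
epsilon = 0.1                            # exploration rate

for step in range(1000):
    # exploration (new choices) vs. exploitation (past experience)
    if random.random() < epsilon:
        action = random.choice(["A", "B"])
    else:
        action = max(value, key=value.get)
    # the environment rewards the action (1) or not (0)
    reward = 1 if random.random() < rewards[action] else 0
    # the agent learns from the consequence of its action
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]

print(value)   # the agent learns that action "A" pays off more often
```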

Limitations of Machine Learning

  • Traditional ML models are not useful when working with high-dimensional data, i.e., where there is a large number of inputs and outputs.
  • They cannot, on their own, solve crucial AI problems such as NLP and image recognition.
  • One of the biggest challenges with traditional machine learning models is the process called feature extraction.
  • A machine learning model cannot use features it is not given; the programmer has to identify and supply the features needed to predict the output.
  • For complex problems such as object recognition or handwriting recognition, this is a huge challenge.
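To see why feature extraction is a burden, consider a hand-crafted feature extractor for a spam filter. The specific features below are illustrative assumptions; the point is that the model can only learn from whatever the programmer chooses to encode:

```python
# A sketch of hand-crafted feature extraction, the step that classic ML
# depends on. The feature choices below are illustrative assumptions.
def extract_features(email_text):
    return [
        len(email_text),                          # message length
        email_text.lower().count("free"),         # suspicious keyword count
        sum(c.isupper() for c in email_text),     # amount of "shouting"
        email_text.count("!"),                    # punctuation abuse
    ]

# A traditional model only sees these four numbers, not the raw text;
# if a relevant cue is not encoded here, the model cannot discover it.
print(extract_features("FREE offer!!! Act now!"))
```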

DEEP LEARNING TO THE RESCUE

What is Deep Learning?

Deep Learning is a collection of statistical machine learning techniques used to learn feature hierarchies, often based on artificial neural networks.

A network with only a few hidden layers is called a shallow neural network, while a network with many hidden layers is called a deep neural network. The following are the highlights:

  • Deep Learning models are capable of focusing on the right features by themselves, requiring little guidance from the programmer.
  • These models also partially solve the dimensionality problem.
  • The idea behind Deep Learning is to build learning algorithms that mimic the brain.
  • Deep Learning is implemented through neural networks.
  • The motivation behind neural networks is the biological neuron.

Functionality of a Biological Neuron:

A neuron is nothing but a brain cell.

Dendrites act as receivers and provide the inputs to the neuron; since a neuron has multiple dendrites, it receives many inputs. The cell body contains the nucleus, which processes these inputs. The output then travels along the axon towards the axon terminals, from which the neuron fires it to the next neuron. Studies tell us there is a small gap between two neurons, called the synapse. This is how a biological neuron works.

An Artificial Neural Network works analogously. Like a biological neuron, it has multiple inputs, which are fed to a processing element that plays the role of the cell body. The processing element computes the summation of the products of each input and its weight, where the weights are initially chosen at random. This sum is passed to the transfer function F(S), and then to an activation function, which applies a threshold; either a step function or a sigmoid function can be used as the activation function. Once the threshold is exceeded, the neuron fires its output. The actual output is compared with the desired (expected) output; if they do not match, the process is repeated with different weights for the respective inputs, and the outputs are checked again. This iteration continues until the actual outputs equal the desired outputs.
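The loop just described can be sketched as a simple perceptron. This is a minimal illustration rather than a production implementation; it learns the logical AND function using a step activation and random initial weights:

```python
# A minimal perceptron sketch of the loop above: weighted sum, step-function
# threshold, compare with the desired output, adjust weights, repeat.
import random

inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
desired = [0, 0, 0, 1]                        # expected outputs (logical AND)

weights = [random.uniform(-1, 1) for _ in range(2)]   # randomly selected weights
bias = random.uniform(-1, 1)
rate = 0.1

def fire(x):
    s = sum(w * xi for w, xi in zip(weights, x)) + bias   # summation of inputs * weights
    return 1 if s > 0 else 0                               # step activation (threshold)

for _ in range(100):                          # iterate until outputs match
    for x, d in zip(inputs, desired):
        error = d - fire(x)                   # actual vs. desired output
        weights = [w + rate * error * xi for w, xi in zip(weights, x)]
        bias += rate * error

print([fire(x) for x in inputs])              # e.g. [0, 0, 0, 1]
```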

If we increase the number of hidden layers, the complexity of the model increases and it can solve a much wider range of problems.
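As a sketch of the shallow-versus-deep distinction, the following compares a network with one hidden layer against one with several. It assumes TensorFlow/Keras is installed; the layer sizes and the 10-feature input are arbitrary choices for illustration:

```python
# Shallow vs. deep: the structural difference is the number of hidden layers.
from tensorflow import keras
from tensorflow.keras import layers

# Shallow network: a single hidden layer.
shallow = keras.Sequential([
    keras.Input(shape=(10,)),                 # 10 input features (assumed)
    layers.Dense(16, activation="relu"),      # the one hidden layer
    layers.Dense(1, activation="sigmoid"),    # output
])

# Deep network: the same idea with several hidden layers stacked.
deep = keras.Sequential([
    keras.Input(shape=(10,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

deep.compile(optimizer="adam", loss="binary_crossentropy")
deep.summary()                                # shows the stacked hidden layers
```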

Applications of Deep Learning

  • Self-Driving Cars
  • Voice-Controlled Assistants
  • Automatic Image Caption Generation
  • Automatic Machine Translation
  • With Deep Learning, MIT is trying to predict the future
