Idea of AI
Most of us are probably familiar with "Artificial Intelligence" as it has been depicted in famous Hollywood movies such as "The Matrix", "The Terminator", and "Interstellar". Although Hollywood science fiction portrays AI as human-like robots taking over the planet, the actual state of AI technology is neither that smart nor that scary. Instead, AI now delivers benefits across industries, from retail and security services to autonomous driving and beyond. On screen it is the man-eating dinosaur of the "Jurassic" series; in real life it is closer to the fried chicken dish at your everyday diner.
What is Artificial Intelligence?
The idea itself is simple: "Can machines think?" asked Alan Turing in 1950. AI allows a machine to reason and make its own decisions without human involvement. Ever since Turing posed that question, people have been fascinated by the nature of AI, and, more or less, we are on our way there.
AI systems are commonly classified by how closely they can mimic human capabilities. Based on this comparison, all actual and hypothetical artificial intelligence systems can be assigned to one of three types:
ANI: Artificial Narrow Intelligence
Narrow AI, or weak AI, is goal-oriented and trained to perform a specific task. It is exceptionally competent at the task for which it was trained, drawing on information from a specific data set, but it cannot perform tasks unrelated to its single designed purpose. Narrow AI systems are thus not conscious, sentient, or driven by emotion the way humans are.
Some examples of ANI are Siri, autopilot on an airplane, chatbots, self-driving cars, etc. ANI is the only form of AI that currently exists in our world.
AGI: Artificial General Intelligence
General AI, often referred to as strong AI, is the concept of machines exhibiting human-level intellect. Such machines would be able to learn, comprehend, and act in ways indistinguishable from a human in a given situation.
Although General AI does not yet exist, it has been utilized in numerous Hollywood sci-fi films in which humans interact with robots that have consciousness, emotions, and self-awareness.
ASI: Artificial Super Intelligence
Think of "The Terminator": a machine that surpasses human intelligence in all respects. Artificial superintelligence is a hypothetical form of AI in which machines develop intelligence that outperforms even the brightest humans. With this sort of AI, machines would possess not only the versatile intelligence of humans but also problem-solving and decision-making abilities far superior to ours. Whether that would mean the end of humankind, as the sci-fi movies suggest, remains speculation.
Machine Learning
Machine Learning is a subset of Artificial Intelligence that relies on statistical learning algorithms to create systems that are able to learn and improve on their own without being explicitly programmed.
In machine learning, an algorithm is trained with a large amount of data, in which the algorithm itself learns information from the processed data.
Recommendation engines are a popular application of machine learning. Fraud detection, spam filtering, malware threat detection, business process automation (BPA), and predictive maintenance are all prominent applications. Machine learning is fundamental to today's largest companies, such as Facebook, Google, and Uber. For many businesses, machine learning has become a critical competitive advantage.
ML algorithms can be broadly divided into three categories: supervised, unsupervised, and reinforcement learning.
Supervised Learning
In this type of machine learning, data scientists supply algorithms with labeled training data and specify the variables between which the algorithm should assess relationships. Both the input and the output of the algorithm are specified.
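As a concrete sketch of "labeled input and output", consider fitting a linear rule to labeled examples with a least-squares solve. The data and the underlying rule (y = 2x + 1) are made up for illustration; real supervised pipelines typically use a library such as scikit-learn, but the idea is the same:

```python
import numpy as np

# Labeled training data: inputs X and their known outputs y (here y = 2x + 1)
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([3.0, 5.0, 7.0, 9.0])

# Append a bias column and solve the least-squares problem for the weights
X_b = np.hstack([X, np.ones((len(X), 1))])
weights, *_ = np.linalg.lstsq(X_b, y, rcond=None)

# The learned weights recover the labeled relationship: slope ~2, intercept ~1,
# so the model can now predict outputs for inputs it has never seen
prediction = np.array([5.0, 1.0]) @ weights  # predict y for x = 5
```

Because the training pairs fully specify the desired mapping, the algorithm's job is only to find parameters that reproduce it.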
Unsupervised Learning:
This type of machine learning uses algorithms trained on unlabeled data. The algorithm scans the data sets for meaningful correlations. Unlike in supervised learning, neither the training data nor the predictions and recommendations the algorithms output are predefined.
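A minimal example of finding structure without labels is k-means clustering. The toy data below has two obvious groups, but the algorithm is never told which point belongs to which; it discovers the grouping by alternating two steps:

```python
import numpy as np

# Unlabeled data: two natural groups, but no labels are given to the algorithm
data = np.array([[1.0], [1.2], [0.8], [8.0], [8.3], [7.9]])

# A minimal k-means loop: assign each point to its nearest centroid,
# then move each centroid to the mean of the points assigned to it
centroids = np.array([[0.0], [10.0]])
for _ in range(10):
    dists = np.abs(data - centroids.T)   # distance from every point to every centroid
    labels = dists.argmin(axis=1)        # nearest-centroid assignment
    centroids = np.array([data[labels == k].mean(axis=0) for k in range(2)])
```

The resulting centroids settle near the two group means (about 1.0 and 8.07), a correlation the algorithm found in the data rather than one that was predefined.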
Reinforcement Learning:
Reinforcement learning is often used in data science to teach a machine to complete a multi-step process with well-defined criteria. Data scientists train an algorithm to perform a task, giving it positive or negative cues as it works out how to complete it. In most cases, however, the algorithm itself decides which actions to take along the way.
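The "positive cue at the end, decisions along the way" loop can be sketched with tabular Q-learning on a made-up five-cell corridor (all of the environment, rewards, and hyperparameters below are illustrative assumptions, not a standard benchmark):

```python
import random

random.seed(0)

# A tiny corridor world: states 0..4, start at state 0, goal at state 4.
# Actions: 0 = move left, 1 = move right. Reaching the goal gives reward +1.
N_STATES, GOAL = 5, 4
q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q-table: value of each (state, action)
alpha, gamma, epsilon = 0.5, 0.9, 0.1       # learning rate, discount, exploration rate

for episode in range(200):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = 0 if q[state][0] > q[state][1] else 1
        next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
        reward = 1.0 if next_state == GOAL else 0.0   # positive cue only at the goal
        # Q-learning update: nudge the estimate toward reward + discounted future value
        q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state

# After training, the greedy policy moves right from every state toward the goal
policy = [0 if q[s][0] > q[s][1] else 1 for s in range(GOAL)]
```

No step-by-step labels were ever provided; the single terminal reward, propagated backwards through the Q-table, is enough for the algorithm to choose its own sequence of actions.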
Deep Learning
Deep learning is a machine learning approach inspired by the way the human brain filters information; essentially, it learns from examples. It enables a computer model to predict and classify information by filtering input data through multiple layers. Because most deep learning methods operate on neural network architectures, they are often referred to as deep neural networks. A multi-layer neural network in deep learning thus involves a large number of layers and parameters.
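The "filtering input data through multiple layers" can be made concrete with a forward pass through a tiny two-layer network. The weights below are fixed, hand-picked values purely for illustration; in a real network they would be learned from examples:

```python
import numpy as np

def relu(x):
    # Nonlinearity applied between layers; without it, stacked layers
    # would collapse into a single linear transformation
    return np.maximum(0.0, x)

x = np.array([1.0, -2.0, 0.5])                # one input example with 3 features

W1 = np.array([[0.2, 0.5],
               [0.1, 0.3],
               [0.4, 0.8]])                    # layer 1 weights: 3 features -> 2 units
b1 = np.array([0.0, 0.1])
W2 = np.array([[1.0], [-1.0]])                 # layer 2 weights: 2 units -> 1 output
b2 = np.array([0.5])

h = relu(x @ W1 + b1)    # first layer filters/transforms the raw input
out = h @ W2 + b2        # second layer turns the filtered features into a prediction
```

Each layer re-represents the output of the layer before it; deep networks simply stack many such transformations, with all the weight matrices learned jointly.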
Three fundamental network architectures are as follows.
Convolutional Neural Networks:
A convolutional neural network (CNN, or ConvNet) is a class of deep neural network most commonly applied to visual imagery. Its functioning is based on the shared-weight architecture of convolution kernels, or filters, that slide along input features and create translation-equivariant responses known as feature maps. The network learns to optimize the filters (or kernels) automatically, whereas traditional algorithms hand-engineer them. This independence from prior knowledge and human involvement in feature extraction is a major benefit.
Convolutional networks were inspired by biological processes in which single cortical neurons respond to stimuli only in a limited area of the visual field known as the receptive field. The receptive fields of different neurons partially overlap such that they cover the entire visual field.
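The core operation, a shared filter sliding over the input to produce a feature map, can be written in a few lines. Here the 3x3 kernel is a hand-set vertical-edge detector purely for illustration; as noted above, a real CNN would learn the kernel values:

```python
import numpy as np

# A 5x5 "image": left half dark (0), right half bright (1) -> a vertical edge
image = np.zeros((5, 5))
image[:, 2:] = 1.0

# A 3x3 kernel that responds where a dark region meets a bright one
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)

# Slide the SAME kernel (shared weights) across every position of the image;
# each position yields one response, and together they form the feature map
h = image.shape[0] - kernel.shape[0] + 1
w = image.shape[1] - kernel.shape[1] + 1
feature_map = np.zeros((h, w))
for i in range(h):
    for j in range(w):
        feature_map[i, j] = np.sum(image[i:i+3, j:j+3] * kernel)
```

The feature map lights up (value 3) exactly where the edge falls under the kernel's receptive field and stays at 0 elsewhere, which is why the responses are translation-equivariant: moving the edge in the input moves the response in the map.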
Recurrent Neural Networks:
A recurrent neural network (RNN) is a form of artificial neural network that operates on sequential or time-series data. Like feedforward and convolutional neural networks (CNNs), recurrent neural networks learn from training input. They are characterized by their "memory": a hidden state that lets information from previous inputs influence the current output. While typical deep neural networks assume that inputs and outputs are independent of one another, the output of a recurrent neural network depends on the previous elements of the sequence.
These deep learning algorithms are typically used for ordinal or temporal problems, such as language translation, natural language processing, speech recognition, and image captioning. They are integrated into popular applications such as Siri, voice search, and Google Translate.
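The "memory" can be demonstrated with a single recurrent unit stepping through a toy sequence. The weights are fixed, made-up scalars for illustration (training would learn them); the point is that the same weights are reused at every time step and the hidden state carries earlier inputs forward:

```python
import numpy as np

W_xh = np.array([[0.5]])    # input -> hidden weight
W_hh = np.array([[0.8]])    # hidden -> hidden weight (the recurrence)
b_h = np.array([0.0])

sequence = [1.0, 0.0, 0.0]  # only the FIRST step carries any input signal
h = np.zeros(1)             # hidden state, initially empty "memory"
history = []
for x_t in sequence:
    # The new state depends on the current input AND the previous state
    h = np.tanh(x_t * W_xh[0] + h @ W_hh + b_h)
    history.append(float(h[0]))
```

Even though the second and third inputs are zero, the hidden state stays nonzero: the network is still "remembering" the first input, with the memory fading gradually at each step.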
Recursive Neural Networks:
Recursive Neural Networks, or RvNNs, are nonlinear adaptive models capable of learning deeply structured information. A recursive neural network is more of a hierarchical network, where the input sequence has no temporal aspect, but must be processed hierarchically in the form of a tree.
The models have not yet been widely adopted, primarily because of their inherent complexity. Even so, RvNNs have proven useful for learning sequence and tree structures in natural language processing, notably for building continuous representations of phrases and sentences from word embeddings.
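The hierarchical, tree-shaped processing can be sketched as a recursive composition function. The word embeddings, the parse tree, and the random weight matrix below are all toy assumptions; a real RvNN would learn both the embeddings and the shared weights:

```python
import numpy as np

rng = np.random.default_rng(0)
# One shared weight matrix maps two concatenated 2-d children (4 values)
# to one 2-d parent vector; the SAME matrix is reused at every tree node
W = rng.standard_normal((2, 4)) * 0.1

embeddings = {"the":    np.array([0.1, 0.2]),
              "cat":    np.array([0.3, 0.1]),
              "sleeps": np.array([0.2, 0.4])}

def compose(node):
    # A node is either a word (leaf) or a (left, right) pair (internal node)
    if isinstance(node, str):
        return embeddings[node]
    left, right = node
    children = np.concatenate([compose(left), compose(right)])
    return np.tanh(W @ children)    # merge children bottom-up with shared weights

# Parse tree for "(the cat) sleeps": merge "the"+"cat" first, then add "sleeps"
sentence_vector = compose((("the", "cat"), "sleeps"))
```

Unlike the RNN above, there is no left-to-right time axis: the order of composition is dictated by the tree, which is what makes the model hierarchical rather than temporal.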