Artificial Intelligence (AI) has long been discussed in the academic world, but only in the last decade has it surfaced to the general public in the form of mobile applications, voice assistants, autonomous vehicles, robots, and so on. With AI’s commercial success came the hype, and buzzwords naturally started spreading, often bringing confusion to the mix. Let’s start by explaining the term Artificial Intelligence itself.
Artificial Intelligence (AI)
AI is, simply put, any system whose purpose is to mimic human intelligence when confronted with data. We are constantly receiving data from the environment via our senses, but it is the way we process these data that defines our intelligence. For example, your grandfather may get around the city by car with a paper map in hand, reading multiple signs while driving at the same time. Such intelligent behavior is clearly very complex and remains a challenge for AI. Some interesting examples of AI do not get the spotlight of a self-driving car or a robot: AI engines help companies automate decision-making processes for tasks that traditionally relied on humans poring over spreadsheets.
What determines whether a system can be called AI is its purpose and the user experience, not the underlying techniques. Machine Learning (ML), Deep Learning (DL), and Reinforcement Learning (RL) are the major areas that supply most of these techniques to engineers, and they usually yield results that feel more natural to the user.
Machine Learning (ML)
An ML algorithm can train on known data to learn patterns and extrapolate those learnings to unseen data. The ML algorithm relies on the variability of real-world data to learn, making heavy use of Statistics. Additionally, it learns in an optimized way that happens to suit today’s processors. Think of it as a student learning to classify photos of objects, where the photos of each object are very diverse. The student does this in a very clever way, prioritizing what to learn first in order to learn faster. There are two possibilities: the student can learn with supervision, where a teacher names every object during the training phase — this is called “supervised learning”; or the student can learn to differentiate between photos alone, without access to pre-existing labels or names — this is called “unsupervised learning”. You can see now how ML tries to mimic the human way of learning, and that’s why it is suitable to be the backbone of most AI engines.
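The two learning styles above can be sketched with a toy example on 1-D points. Below, a nearest-neighbor classifier plays the role of the supervised student (a “teacher” labeled every training point), while a simple two-means clustering plays the unsupervised student (no labels at all). The data and labels are made up for illustration; real systems use libraries such as scikit-learn.

```python
# Toy illustration of supervised vs. unsupervised learning on 1-D points.
# Data and labels below are hypothetical.

def nearest_neighbor_predict(train, query):
    """Supervised: each training point comes with a label from the 'teacher'."""
    # train: list of (value, label) pairs
    value, label = min(train, key=lambda pair: abs(pair[0] - query))
    return label

def two_means(points, iters=10):
    """Unsupervised: split unlabeled points into two groups by proximity."""
    a, b = min(points), max(points)          # initial cluster centers
    for _ in range(iters):
        ca = [p for p in points if abs(p - a) <= abs(p - b)]
        cb = [p for p in points if abs(p - a) > abs(p - b)]
        a = sum(ca) / len(ca)
        b = sum(cb) / len(cb)
    return sorted([a, b])

# Supervised: the "teacher" labeled small values 'cat' and large ones 'dog'.
labeled = [(1.0, "cat"), (1.2, "cat"), (8.0, "dog"), (8.5, "dog")]
print(nearest_neighbor_predict(labeled, 1.1))   # -> cat
print(nearest_neighbor_predict(labeled, 7.9))   # -> dog

# Unsupervised: no labels; the algorithm finds two groups on its own,
# with centers near 1.1 and 8.25.
print(two_means([1.0, 1.2, 8.0, 8.5]))
```

Note how the unsupervised version recovers the same two groups without ever being told what a “cat” or a “dog” is — it only exploits the structure in the data.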
Deep Learning (DL)
DL follows the same logic as ML, with the difference that it is based on Neural Networks. These were originally designed as abstractions of how neurons communicate with each other to learn, although it is now widely accepted that this is not how our brain is actually structured. By stacking multiple layers of neurons, we can build the deep Neural Networks that give DL its name. In 2012, Google managed to train a DL model on ten million images, and it was demonstrated that the model could successfully recognize cats in these images without any supervision, opening the floodgates to a new era in AI.
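To make the “multiple layers of neurons” idea concrete, here is a minimal two-layer network trained with backpropagation on the classic XOR problem, using plain NumPy. This is a pedagogical sketch, not a production DL framework; the layer sizes and learning rate are arbitrary choices.

```python
# A minimal two-layer neural network trained on XOR with plain NumPy.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden layer (4 neurons)
W2 = rng.normal(size=(4, 1))   # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for step in range(5000):
    h = sigmoid(X @ W1)                    # hidden activations
    out = sigmoid(h @ W2)                  # network output
    losses.append(float(np.mean((out - y) ** 2)))
    # Backpropagation: the chain rule applied layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out)
    W1 -= 0.5 * (X.T @ d_h)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")  # loss shrinks as it learns
```

XOR is a useful test case because no single layer of neurons can solve it — only the stacked, “deep” structure can.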
Reinforcement Learning (RL)
RL is basically a way to learn by running simulations of many possible scenarios within well-known constraints. For example, an RL algorithm can learn to navigate a maze, given the constraints of its walls, by trying out millions of different paths. RL has been achieving great results by borrowing optimization methods from DL to skip less promising paths, reducing the total number of simulations needed to reach reasonable results. It is easy to see how AI can benefit from the RL approach, since trying something out multiple times is one of the ways humans learn sequential tasks.
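The maze idea above can be sketched with tabular Q-learning, one of the simplest RL algorithms: the agent tries actions at random at first, collects rewards, and gradually learns which action is best in each state. The corridor “maze”, rewards, and hyperparameters below are made up for illustration.

```python
# A toy tabular Q-learning agent learning to cross a tiny corridor maze
# by trial and error. States, rewards, and parameters are illustrative.
import random

random.seed(0)
N = 5                     # corridor states 0..4; state 4 is the goal
ACTIONS = [-1, +1]        # move left or right (walls clamp the position)
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

alpha, gamma, eps = 0.5, 0.9, 0.2
for episode in range(500):                        # many simulated episodes
    s = 0
    while s != N - 1:
        if random.random() < eps:                 # explore a random action
            a = random.choice(ACTIONS)
        else:                                     # exploit the best known one
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N - 1)            # walls constrain movement
        r = 1.0 if s2 == N - 1 else 0.0           # reward only at the goal
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# Greedy policy after training: step right toward the goal in every state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)}
print(policy)
```

After training, the learned policy moves right in every state — the agent has discovered the goal purely from simulated trial and error, never being told where it is.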
Augmented or Enhanced Intelligence
AI can augment or enhance your intelligence by supplying options and information that guide your decisions. We have recently been augmenting our intelligence with all kinds of smartphone apps, often without even noticing it. Just imagine the number of reviews you would have to read to choose the next series to binge on, or the next product to buy. Navigation apps are the most obvious example of AI augmenting human intelligence, and they have arguably been the most successful one for quite some time now. These apps offer you a few routes and even adapt them on the fly when you miss a turn. Furthermore, some of these navigation apps use advanced ML models for their traffic prediction component.
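At the core of the route suggestion described above sits a shortest-path search over a road graph. The sketch below uses Dijkstra's algorithm on a hypothetical graph with made-up travel times; production navigation apps layer live traffic predictions on top of this kind of search.

```python
# Sketch of route suggestion: shortest-path search over a road graph.
# The toy graph and travel times below are hypothetical.
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: returns (total_minutes, route)."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + minutes, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical road graph: edges are (neighbor, travel time in minutes).
roads = {
    "home":   [("bridge", 10), ("tunnel", 15)],
    "bridge": [("downtown", 20)],
    "tunnel": [("downtown", 10)],
}
print(shortest_route(roads, "home", "downtown"))
# -> (25.0, ['home', 'tunnel', 'downtown'])
```

When the traffic model predicts congestion, the app effectively raises the travel-time weights on affected edges and re-runs this search — which is how your route adapts mid-drive.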
It is worth mentioning that using AI to augment intelligence is also a trend in the corporate world, where decisions are increasingly guided by these engines. In this context, there is usually a requirement to constantly monitor the performance of the AI systems and to allow for human intervention. This is called a “Human in the Loop”.
Human in the Loop
There are various possibilities for a Human in the Loop in the context of AI. In general, it is assumed that humans have better intuition and can adapt better to unusual situations than a machine. That is the reason, for example, for allowing humans to override AI outputs based on business rules. Another approach is to let humans provide inputs that change the AI computations in response to exceptional situations, such as an unforeseen catastrophe.
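The override pattern described above can be sketched as a thin layer between the model and the final decision: the AI proposes, but a human-supplied rule can veto or adjust the output. The forecast function, rule format, and numbers below are all hypothetical.

```python
# A minimal "Human in the Loop" override layer: the model proposes,
# human-entered business rules can veto or adjust. Values are hypothetical.

def model_forecast(store_id):
    # Stand-in for a real ML model's demand forecast.
    return {"store": store_id, "units": 120}

def apply_overrides(prediction, human_rules):
    """Apply human-supplied rules on top of the model's prediction."""
    for rule in human_rules:
        if rule["store"] == prediction["store"]:
            prediction = {**prediction, "units": rule["units"], "overridden": True}
    return prediction

# A planner knows store 7 is closed for renovation -- an exception the
# model has never seen -- so they force its forecast to zero.
rules = [{"store": 7, "units": 0}]
print(apply_overrides(model_forecast(7), rules))   # units forced to 0
print(apply_overrides(model_forecast(3), rules))   # model output kept
```

Keeping the override as a separate, auditable layer (rather than editing the model itself) makes it easy to monitor how often humans intervene — a useful health signal for the AI system.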
ML at Scale
When implementing AI in real-world applications, we would like to deploy it at scale. This ultimately requires ML at scale — that is, the underlying methods must be easily scalable. In this context, the system must be able to process data that are scattered across multiple users or customers, arrive in large volumes and at high frequency, and it must also return outputs with low latency.
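One common tactic for reconciling high-frequency input with low-latency output is micro-batching: incoming requests are grouped so a vectorized model call scores many at once, while the batch size stays bounded. The sketch below is illustrative; real serving stacks (and their batch sizes) vary widely.

```python
# Sketch of micro-batching for ML serving: bounded batches give the model
# high throughput without unbounded latency. Names and sizes are illustrative.

def micro_batches(stream, max_batch=32):
    """Group a stream of requests into batches of at most max_batch items."""
    batch = []
    for request in stream:
        batch.append(request)
        if len(batch) == max_batch:
            yield batch
            batch = []
    if batch:            # flush the final partial batch
        yield batch

requests = range(100)
batches = list(micro_batches(requests, max_batch=32))
print([len(b) for b in batches])   # -> [32, 32, 32, 4]
```

The bound on batch size is the latency knob: larger batches raise throughput, smaller ones shorten the wait before any single request is answered.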
As the AI field progresses, additional buzzwords will be constantly created. We, at ADC, work hard to demystify AI buzzword jargon and make it clear what is truly possible to achieve with the AI engine in our FreshIQ platform. Request a demo to learn more about the power of FreshIQ's Forecasting and explainable AI.
About the Author
Felipe Campos Penha is a Senior Data Scientist at ADC, where he works on the development of the AI engine behind the FreshIQ platform. Prior to joining the ADC team, Felipe was a Data Scientist and Data Analytics Manager at Neoway, where he played a key role in the company's international expansion by managing projects with US customers, especially in the Consumer and Packaged Goods industry. He holds a Ph.D. in Physics and devotes part of his time to Research & Development in partnership with universities, and to outreach initiatives in Data Science through blog posts, YouTube content, talks, and interviews.