Artificial intelligence systems for dummies: types, methods, examples of use

Artificial intelligence is a technology that allows a program to reproduce, and in some tasks even exceed, the capabilities of the human mind. The last few years have shown how rapidly this field is developing, bringing scenes from science fiction novels closer to reality. This article is a short guide to the world of artificial intelligence: we define the technology, describe its main types, advantages, and disadvantages, and survey development trends and examples of AI in use.
What is artificial intelligence?
Artificial intelligence (AI) is defined as a set of software algorithms that simulate several functions of the human brain in a dynamic computing environment. Artificial intelligence algorithms draw on several technologies that allow machines to sense, understand, plan, act, and learn in a way similar to humans.
AI systems perceive the environment, recognize objects, facilitate decision-making, solve complex problems, learn from past experience, and mimic patterns. These abilities can be combined to perform tasks such as driving a car autonomously or recognizing a face to unlock a device screen.
AI develops primarily through studying how the human brain works and applying this knowledge to “smart” machines. The main goal of artificial intelligence is to develop technology that allows computer systems to work independently of a person and make genuinely intelligent decisions.
History of artificial intelligence
- 1942 – Electromechanical machines built by Alan Turing’s team at Bletchley Park are breaking the German Enigma cipher, an early milestone in machine computation.
- 1950 – Alan Turing proposes his test of machine intelligence, now known as the Turing test.
- 1955 – American computer scientist John McCarthy coins the term “artificial intelligence,” which he formally presents at the Dartmouth conference a year later.
- 1958 – Lisp, one of the earliest high-level programming languages, is developed for AI research.
- 1959 – Computer gaming pioneer Arthur Samuel coins the term “machine learning.”
- 1961 – The first Unimate industrial robot is installed on the General Motors assembly line in New Jersey.
- 1966 – ELIZA, the first chatbot, appears at the MIT Artificial Intelligence Laboratory.
- 1970 – Work begins at Waseda University in Japan on WABOT-1, the first full-scale anthropomorphic robot (completed in 1972).
- 1986 – The first self-driving vehicle, a camera-equipped Mercedes-Benz van developed by Ernst Dickmanns’ team, drives autonomously on empty roads.
- 1995 – The ALICE chatbot (Artificial Linguistic Internet Computer Entity) is created, holding conversations in natural language.
- 1997 – IBM’s Deep Blue computer beats world chess champion Garry Kasparov.
- 1998 – the first robot with “emotions,” Kismet, is created.
- 2002 – iRobot releases the Roomba, the first mass-market autonomous robot vacuum cleaner.
- 2008 – Speech recognition reaches smartphones: Google adds voice search to its iPhone app.
- 2010 – Microsoft releases Kinect for Xbox 360, the first gaming device to track full-body movement using a 3D camera and infrared light.
- 2011 – The IBM Watson computer system beats two former champions on the television quiz show “Jeopardy!”
- 2011 – Apple integrates Siri, the first widely used voice-controlled personal assistant, into the iPhone 4S.
- 2016 – Hanson Robotics unveils the humanoid robot Sophia, which in 2017 becomes the first robot granted citizenship.
- 2017 – Amper, the first AI music composer, co-creates a commercially released album.
- 2020 – The revolutionary GPT-3 language model (Generative Pre-trained Transformer 3) generates high-quality, human-like text on a given topic.
- 2022 – OpenAI, the company behind GPT-3, releases ChatGPT, an advanced AI chatbot built on this model family that can hold a dialogue in natural language.
- 2023 – The largest players in the IT market begin integrating ChatGPT-style technology into their flagship products. Early in the year, Microsoft adds it to the Bing search engine, the Edge browser, and the Teams collaboration service, Kunlun Tech announces integration into the Opera browser, and Google races to respond with a competing chatbot.
The development of artificial intelligence systems continues rapidly, changing the world around us. We have already seen smart self-driving cars, AI chips, Microsoft’s powerful AI-powered Azure cloud infrastructure, and many other brilliant inventions. Below, we look at the changes AI technologies may bring in the near future.
How artificial intelligence works
An artificial intelligence system takes input data such as speech, text, or images and processes it using various rules and algorithms. After processing, the system produces a result, which is evaluated as a success or failure against the expected outcome.
The result is then assessed through analysis, discovery, and feedback. Finally, the system uses these scores to adjust its inputs, rules, algorithms, and target outcomes. The cycle repeats until the desired result is achieved.
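To make this cycle concrete, here is a minimal sketch in Python (all names and numbers are illustrative, not taken from any specific system): a single linear model processes inputs, scores its error against the target, and adjusts its parameters, repeating until the fit is good.

```python
# Minimal sketch of the cycle above: process input, evaluate the result,
# adjust, repeat. A single linear model learns y = 2x + 1 from examples.
# All names and numbers here are illustrative.

def train(samples, lr=0.01, epochs=300):
    w, b = 0.0, 0.0                      # untrained model parameters
    for _ in range(epochs):
        for x, target in samples:
            prediction = w * x + b       # 1. process the input
            error = prediction - target  # 2. evaluate against the target
            w -= lr * error * x          # 3. adjust using the feedback
            b -= lr * error
    return w, b

data = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b = train(data)
print(f"learned w={w:.2f}, b={b:.2f}")   # approaches w=2.00, b=1.00
```

Real systems replace the hand-written adjustment rule with learning algorithms over millions of parameters, but the loop of prediction, evaluation, and correction is the same.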
Key areas of artificial intelligence
The AI landscape comprises many advanced technologies that enable computer systems to understand human language, learn from examples, and make predictions. While each component of AI evolves independently, in combination with data, analytics, and automation these components can be applied to almost any business task, from supply chain optimization to customer service improvement.
- Machine learning or ML (Machine learning) is an AI-based approach in which systems automatically learn and improve from previous data sets without explicit programming. Machine learning underpins artificial intelligence tools ranging from automatic text and speech recognition applications to the complex computer vision systems embedded in robots and driverless vehicles.
- Deep learning or DL (Deep learning) is a subset of machine learning in which training data is processed using artificial neural networks. Deep learning powers solutions such as chatbots and virtual assistants, marketing recommender systems, and automatic machine translators.
- A neural network is a computer system that loosely models the neural connections of the human brain and makes deep learning possible. There are several main types of neural networks (NNs), including feed-forward NNs (FFNNs), convolutional NNs (CNNs), and recurrent NNs (RNNs). They are used for tasks such as speech recognition, image recognition, language translation, classification, and anomaly detection (a minimal FFNN sketch follows this list).
- Cognitive computing aims to recreate the human thought process in a computer model. The technology seeks to mimic the logic of human reasoning and improve interaction between people and machines, for example through AI recognition of human language and the meaning of images, followed by self-learning. Implementing a full cognitive computing system today is possible only in machines as complex as the IBM Watson supercomputer.
- Natural language processing or NLP (Natural language processing) is a technology that allows computers to understand, recognize, interpret, and reproduce human language and speech. The most striking examples of NLP today are advanced language models such as OpenAI’s GPT family.
- Computer vision or CV (Computer vision) uses deep learning and pattern identification to interpret image content – photos, raster and vector graphics, tables, PDFs, and videos. Autopilots of self-driving vehicles (for example, Tesla Autopilot) and AI systems that process information from traffic cameras are based on computer vision.
- Robotic process automation or RPA (Robotic process automation) is an artificial intelligence discipline that helps partially or fully automate repetitive manual operations by configuring software robots capable of interpreting, transmitting, and analyzing data. RPA systems are used in areas such as accounting (invoice processing), HR (recruitment and onboarding), retail (stock management), and help desk operations.
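To make the neural network idea concrete, here is the minimal, illustrative feed-forward sketch promised above. The weights are random rather than learned, so the output is meaningless; the point is only to show how data flows through weighted layers and non-linear activations.

```python
# Illustrative sketch of a feed-forward neural network (FFNN): data flows
# through weighted layers with non-linear activations. Weights are random
# here (not learned), so the output is meaningless; the structure is the point.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)            # hidden-layer non-linearity

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))      # squashes the output into (0, 1)

W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # 4 inputs -> 8 hidden units
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # 8 hidden -> 1 output

def forward(x):
    hidden = relu(x @ W1 + b1)
    return sigmoid(hidden @ W2 + b2)

sample = rng.normal(size=4)              # one input with 4 features
print(forward(sample))                   # e.g. a class probability
```

A CNN or RNN differs mainly in how the layers are wired (local convolutions, loops over time), not in this basic flow of weighted sums and activations.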
Types of artificial intelligence
Artificial intelligence technologies can be divided into two broad categories: capability-based AI and functionality-based AI. Each of these varieties, in turn, is divided into more specialized subcategories.

Based on capability
Narrow AI
Narrow or weak artificial intelligence (Narrow AI, NAI, or Artificial Narrow Intelligence, ANI) is a highly specialized AI trained to perform a specific task. Weak AI operates within a limited and predetermined set of parameters, constraints, and contexts.
Examples of NAI in use include content recommendations on popular streaming services and social networks, purchase suggestions on e-commerce sites, autonomous cars, and speech and image recognition systems.
General AI
General or strong artificial intelligence (General AI, GAI, or Artificial General Intelligence, AGI) is a version of AI that performs any intellectual task with human efficiency. General AI aims to develop a system that can think for itself as humans do. General AI is still in the research phase, and efforts are being made to develop machines with enhanced cognitive capabilities.
Super AI
Artificial superintelligence (Super AI, SAI) is a version of AI that surpasses human intelligence and can perform any task better than a human. The capabilities of the super AI machine include the following independent activities:
- thinking;
- reasoning;
- solving puzzles;
- making judgments;
- learning;
- communicating.
Today it is a hypothetical concept, but it represents the future of AI.
Based on functionality
Reactive machines
Reactive machines are a basic type of AI that does not store past experience or memories for future actions. Such systems focus on the current scenario and react with the best possible course of action. Popular examples of reactive machines include IBM’s Deep Blue chess supercomputer and Google DeepMind’s Go-playing program AlphaGo.
Machines with limited memory
Limited memory machines can store and use experience or data for a short period. For example, a self-driving car can store the speeds of nearby vehicles, the distances to them, speed limits, and other information needed to navigate through traffic.
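As a rough illustration (a hypothetical sketch, not any vendor’s actual code), limited memory can be modeled as a short, bounded buffer of recent observations:

```python
# Hypothetical sketch of "limited memory": the agent keeps only its most
# recent observations (here, the speed of a nearby car) and forgets the rest.
from collections import deque

class LimitedMemoryAgent:
    def __init__(self, horizon=5):
        self.recent_speeds = deque(maxlen=horizon)  # old entries fall out

    def observe(self, speed_kmh):
        self.recent_speeds.append(speed_kmh)

    def nearby_car_accelerating(self):
        s = list(self.recent_speeds)
        return len(s) >= 2 and s[-1] > s[0]

agent = LimitedMemoryAgent()
for speed in [48, 50, 53, 57, 60]:       # stream of observations
    agent.observe(speed)
print(agent.nearby_car_accelerating())   # True -> keep extra distance
```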
AI with Theory of Mind
Theory-of-mind AI refers to a type of AI that can understand human emotions and beliefs and is capable of human-like social interaction. Such artificial intelligence has yet to be developed and exists only as a concept.
Self-aware AI
Self-aware artificial intelligence (Self-aware AI) refers to superintelligent machines with a consciousness, feelings, emotions, and beliefs of their own. Such systems are expected to be smarter than the human mind and to surpass us at the tasks we give them.
Advantages of artificial intelligence
More efficient problem-solving
AI research is focused on developing algorithms that can draw logical conclusions and mimic human reasoning to solve complex problems. Artificial intelligence, such as stock market forecasting systems, offers methods for handling uncertain situations and puzzles with incomplete information through the practical use of probability theory.
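As a simple illustration of such probabilistic reasoning, the sketch below applies a Bayesian update, a classic probability-theory tool, to revise a belief as new evidence arrives; all the numbers are invented:

```python
# Sketch of decision-making with incomplete information via probability
# theory: a Bayesian update revises a belief (e.g. "the market will rise")
# as evidence arrives. The likelihood numbers are invented for illustration.

def bayes_update(prior, p_signal_if_rise, p_signal_if_fall):
    numerator = p_signal_if_rise * prior
    return numerator / (numerator + p_signal_if_fall * (1 - prior))

belief = 0.5  # initial belief that the market will rise
# each pair: P(signal | rise), P(signal | fall) for one observed signal
for signal in [(0.7, 0.4), (0.8, 0.5), (0.6, 0.3)]:
    belief = bayes_update(belief, *signal)
    print(f"updated belief: {belief:.2f}")  # rises with supporting evidence
```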
Easier planning
With the help of AI, a person can make predictions and assess the long-term consequences of future actions in order to make better decisions in the present. AI-powered planning makes it possible to achieve goals more effectively and optimize overall performance through predictive analytics, data analysis, forecasting, and optimization models. This is especially true for robotics, autonomous systems, cognitive assistants, and cybersecurity.
Development of creativity
AI can process huge amounts of data and consider options and alternatives to find new directions for creative thought or opportunities for social progress. For example, an artificial intelligence system can provide several interior design options for a 3D apartment layout or offer some unexpected solutions for a company’s corporate identity.
Opportunity for continuous learning
Machine learning refers to the ability of computer algorithms to improve through observation and experience. Artificial intelligence mainly uses two learning models, supervised and unsupervised; the main difference is the data they train on: labeled versus unlabeled.
Because AI systems learn on their own, they require little or no human intervention; ML technology, for example, supports a continuous, automated learning process.
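The difference between the two models is easy to see in code. This sketch, which assumes the scikit-learn library is installed, trains a supervised classifier on labeled data and an unsupervised clustering model on the same data without labels:

```python
# Sketch of the two learning models named above (assumes scikit-learn is
# installed): supervised learning uses the labels y, unsupervised ignores them.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

X, y = make_blobs(n_samples=200, centers=2, random_state=0)  # toy data

supervised = LogisticRegression().fit(X, y)       # learns from the labels
print("supervised accuracy:", supervised.score(X, y))

unsupervised = KMeans(n_clusters=2, n_init=10).fit(X)  # never sees y
print("discovered clusters:", unsupervised.labels_[:10])
```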
Creation of a knowledge representation system
AI research revolves around knowledge representation and knowledge engineering: describing “what is known” to machines through an ontology, a structured set of objects, relationships, and concepts.
Knowledge representation captures the information a computer uses to solve complex practical problems, such as diagnosing diseases or interacting with people in natural language. Researchers can use this represented knowledge to expand an AI knowledge base and to fine-tune and optimize AI models.
Encouraging Social Intelligence
Affective computing, also called Emotional AI (EAI), is a branch of AI that recognizes, interprets, and models human experiences, feelings, and emotions. With these techniques, computers can read facial expressions, body language, and tone of voice, allowing AI systems to interact and communicate at a human level. Research in “emotional AI” may eventually lead to genuine social intelligence in machines.
Disadvantages of artificial intelligence
Algorithm Bias
AI systems are trained on data, so their quality depends directly on the quality of that data, which makes bias hard to avoid. Bias may arise from racial, gender, social, or cultural prejudices inherent in humans and carried over to algorithms trained on human-generated content. Artificial intelligence bias can affect vital decisions, such as selecting candidates during hiring or determining eligibility for a loan.

The black box problem
Artificial intelligence algorithms often behave like “black boxes”: their inner workings are hidden from users and even from specialists. We can see the system’s prediction, but not how it arrived at that conclusion, which lowers confidence in the result.
High processing power requirements
The more AI algorithms are involved in a workflow, the more processor cores and GPUs they require. Hardware limitations are one of the main factors preventing the widespread penetration of artificial intelligence systems into all areas of the economy.
Complex integration
Integrating AI into an existing enterprise infrastructure is much harder than adding a plugin to a website or modifying an Excel spreadsheet. It is important to ensure that the current software and hardware meet the AI system’s requirements and that integration will not degrade current performance. In addition, an interface to the AI needs to be implemented to make its infrastructure easier to manage.
The use of artificial intelligence – the main trends
Information Security
Artificial intelligence can effectively complement the efforts of cybersecurity experts by taking some of the burden off them. Machines are great at quickly processing big data and successfully detecting strange or suspicious activity. Of course, even the most advanced AI cannot completely replace human intelligence, but together they can create more advanced tools for solving key information security problems, including:
- Detection and Prediction – AI models can detect even the smallest potential security threats, vulnerabilities, and malicious activities and stop them proactively (a brief anomaly-detection sketch follows this list).
- Network Security – By learning network traffic patterns, AI can help automate critical aspects of network security, such as security settings and network topology.
- Password Protection and Authentication – While a strong password remains a prerequisite for maintaining cybersecurity, AI technologies with biometric verifications (such as Face ID on the iPhone) can add an extra layer of security.
- Reducing the influence of the “human factor” – statistically, human error in routine operations is one of the most common causes of data leaks. AI can help avoid this problem, as it copes well with repetitive tasks.
- Combating AI-based cybersecurity threats – after early demonstrations of highly effective cyberattacks built with artificial intelligence (for example, attacks generated with Text-to-SQL models), it became clear that such attacks can only be countered by deploying neural networks in defense.
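As a hedged illustration of the detection idea referenced in the list, the sketch below trains an IsolationForest (a standard scikit-learn anomaly detector, not the algorithm behind any specific product) on synthetic “normal” traffic features and flags a simulated attack burst:

```python
# Hedged sketch of the "detection" idea above: an IsolationForest learns
# what normal traffic looks like and flags deviations. The feature values
# are synthetic stand-ins, not real telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# columns: requests per minute, average payload size in KB
normal_traffic = rng.normal(loc=[100, 4], scale=[10, 1], size=(500, 2))
attack_burst = rng.normal(loc=[900, 0.3], scale=[50, 0.1], size=(5, 2))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)              # learn the "normal" profile

print(detector.predict(attack_burst))     # -1 marks each row as an anomaly
```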
Implementation example: CyberFlow, a DDoS protection system based on artificial intelligence. AI enables it to effectively detect and prevent distributed denial-of-service attacks at all levels of the OSI network model and to create a unique security profile for each user account.
Chatbots and virtual assistants
Another AI trend that emerged last year and carried over into this one is smarter chatbots and virtual assistants. It was kickstarted by the COVID-19 pandemic, which forced businesses of all industries and sizes to rush to equip their employees with remote workspaces.
Most chatbots and virtual assistants use deep learning (DL) and natural language processing (NLP) technologies to automate routine tasks. One of the most promising applications of this type of AI is language modeling, which allows computers to understand the semantics of a language, form sentences using word prediction, and convert text into computer code.
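As a toy illustration of word prediction, the sketch below builds a bigram model that counts which word follows which and predicts the most frequent successor; production models such as GPT learn billions of parameters instead of raw counts:

```python
# Toy sketch of the "word prediction" behind language modeling: a bigram
# model counts which word follows which and predicts the most frequent
# successor. Models like GPT learn billions of parameters instead of counts.
from collections import Counter, defaultdict

corpus = "the bank approved the loan because the bank trusted the bank".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1            # count each observed word pair

def predict_next(word):
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))                # 'bank': it follows 'the' most often
```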
Implementation example: we constantly encounter “smart” chatbots when trying to contact the call center of a bank or a large organization, and we use voice assistants from IT industry leaders (for example, Alice from Yandex or Siri from Apple). A breakthrough in this area is the GPT-3 (Generative Pre-trained Transformer 3) neural network model from OpenAI, which uses deep learning across 175 billion parameters to process and generate human-like language. GPT-3 can create poetry, prose, news, posts, and jokes, translate from foreign languages, solve simple math problems, write descriptions, answer reading comprehension questions, structure information, and even program.
An improved version of this language model, GPT-3.5, is the basis of OpenAI’s most sensational AI product of recent times: the ChatGPT artificial intelligence chatbot. Officially launched on November 30, 2022, the project gathered a record audience of 100 million users in just two months.

In February 2023, Microsoft built the ChatGPT neural network into two flagship products: the Edge browser and the Bing search engine. The GPT natural language processing model that underpins ChatGPT will also form the basis of Microsoft’s Microsoft 365 Copilot platform. This generative artificial intelligence technology will automate various tasks in the popular office applications Word, Excel, Teams, PowerPoint, Outlook, Power Platform, and Viva.
Google announced the imminent release of ChatGPT’s closest competitor: the Bard neural network, based on its own LaMDA language model. The IT giant later said it would shortly bring text and image generation AI to all Google Workspace products, including Google Docs, Gmail, Sheets, and Slides.
Generative AI
Generative artificial intelligence is a technology that uses AI and machine learning algorithms to create new content. Its application covers a wide area – from creating visual, audio, and video materials and program code to software stress testing and developing advanced drugs.
Analysts at leading research firms such as Gartner and Info-Tech have included generative AI among the main artificial intelligence trends for the coming years, partly due to heightened public interest and strong prospects for commercialization.
However, the widespread adoption of generative AI carries risks. The main one today is the growing threat of so-called deepfakes: realistic imitations of a person, often a media celebrity or business figure, that can be used to deceive, misinform, or commit financial fraud. Real-time (“live”) deepfakes are especially dangerous in this regard. AI-based tools such as Microsoft’s VALL-E neural network, which can imitate a human voice from just three seconds of audio input, could usher in a new era of cybercrime.
Implementation example: the past year has been truly “explosive” for generative AI, pushing into the background technologies such as the metaverse and cryptocurrencies that previously led public opinion. Thanks to services such as Midjourney, DALL-E 2, Prisma Labs, and Stable Diffusion, the Internet has been flooded with extremely realistic images created by artificial intelligence at users’ request. And the generative AI research being done by big companies, such as Google’s music project MusicLM, can potentially revolutionize the current approach to creating any Internet content.
Computer vision technologies
“Computer vision” or “machine vision” refers to AI that uses machine learning algorithms to replicate human vision. Models are trained to identify patterns in real-world images and classify objects based on what they recognize.
This technology finds applications in the service industry, healthcare, agriculture, industrial manufacturing, autonomous vehicles, and security systems. For example, computer vision can scan inventory in warehouses in the retail sector or automatically locate pedestrians on video from traffic cameras when designing smart city security systems.
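As an illustration of the pedestrian use case (a sketch, not the code of any deployed system), OpenCV’s classic HOG + SVM person detector can locate people in a camera frame; it assumes the opencv-python package and an image at a hypothetical path:

```python
# Sketch of the pedestrian-detection use case above using OpenCV's classic
# HOG + SVM person detector (a pre-deep-learning method; production systems
# use neural networks). Assumes opencv-python is installed and a camera
# frame exists at the hypothetical path below.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("traffic_frame.jpg")            # hypothetical input frame
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))

for (x, y, w, h) in boxes:                          # box each detected person
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", frame)
print(f"pedestrians detected: {len(boxes)}")
```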
Implementation example: Tesla Vision, the most advanced integrated computer vision system to date, was created using the NVIDIA CUDA parallel computing platform. The software powers the latest generation of Tesla Autopilot and the company’s autonomous driving technologies.
Autonomous vehicles
As more car manufacturers invest in autonomous vehicles, the market penetration of self-driving vehicles is expected to increase significantly. By the 2030s and 2040s, autonomous vehicles will have a solid place in public bus and truck fleets, and by 2045 they could account for half of all new cars sold, according to forecasts by the non-profit Victoria Transport Policy Institute.

Implementation example: self-driving cars with computer vision are already being tested by companies such as Tesla, Uber, Google, Ford, GM, Aurora, and Cruise. In August 2021, Tesla unveiled its Dojo chip, designed to process the large volumes of images collected by the computer vision systems built into self-driving cars. And in December 2022, Waymo (formerly the Google Self-Driving Car Project) filed a final application for permission to operate fully autonomous taxis in California.
Digital twins
“Digital twin” refers to a perfectly synchronized, physically accurate virtual copy of a real object, process, or environment. This breakthrough digitalization technology has always gone hand in hand with artificial intelligence: AI enhances digital twins by analyzing probabilistic scenarios and running simulations, giving researchers more data to analyze and allowing them to make better decisions.
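As a toy illustration of running probabilistic scenarios on a twin (all numbers invented), the sketch below simulates a machine’s wear thousands of times to estimate the risk of failure before a maintenance date:

```python
# Toy sketch of "probabilistic scenarios" on a digital twin: a virtual copy
# of a machine is simulated thousands of times under uncertain daily wear to
# estimate the risk of failure before the next maintenance window. All
# numbers are invented for illustration.
import random

def days_to_failure(daily_wear=(0.8, 1.6), wear_limit=100.0):
    wear, days = 0.0, 0
    while wear < wear_limit:
        wear += random.uniform(*daily_wear)   # uncertain wear on the twin
        days += 1
    return days

runs = [days_to_failure() for _ in range(10_000)]
risk = sum(d < 90 for d in runs) / len(runs)
print(f"P(failure before day 90) ≈ {risk:.1%}")   # informs maintenance plans
```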
Improving the quality of education
In his much-discussed article “The Age of AI Has Begun,” Bill Gates stated that one of the most important breakthroughs artificial intelligence will make in the near future is a dramatic improvement in the quality and accessibility of education. According to the Microsoft founder, within the next 5-10 years AI-based software will revolutionize the learning process even more than the ubiquity of the PC once did. Breakthrough tools such as ChatGPT can help students better understand difficult concepts and choose subject areas for in-depth study, while teachers can use them to improve the quality of assessment when checking written assignments.
Development of innovative drugs
The widespread involvement of artificial intelligence in the creation of medicines could truly revolutionize the pharmaceutical industry. Thanks to tremendous speed and accuracy in data processing and new approaches to predicting results, AI can drive breakthroughs in critical areas on which millions of people depend. Machine intelligence offers hope of defeating diseases that are incurable today and of making the existing healthcare system more efficient by introducing automation and digital transformation throughout.
Implementation example: AlphaFold, a deep learning system developed by DeepMind, can model the three-dimensional structure of proteins in the human body. In early 2023, the reputable British scientific journal Chemical Science published a study reporting that, with the help of AlphaFold, scientists came close to a candidate treatment for liver cancer in just 30 days of work.
Conclusion
As the role of artificial intelligence grows in all aspects of society, science, and business, we see more and more examples of brilliant implementations of this technological paradigm. Today, “machines” help people automate production, analyze large amounts of data, improve the customer experience, and find new, inspiring creative ideas.
This qualitative transition has become possible as AI, machine learning, deep learning, and neural networks have become available to large companies, small businesses, and even ordinary users through ready-made services that work on the “click and ready” principle.
Contrary to the popular belief that AI will replace humans in all jobs, the coming years are likely to bring only greater integration between humans and machines. Such cooperation will benefit people: as accessibility increases, AI will enhance our cognitive skills, abilities, productivity, and decision-making.