Artificial intelligence (AI)

Artificial intelligence (AI) is the simulation of human intelligence capabilities by computer systems. Since the early 2020s, the term most often refers to technologies based on machine learning and neural networks.

The History of AI

The term “artificial intelligence” was coined in 1955 by American computer scientist John McCarthy. In 1956, the Dartmouth Summer Research Project on Artificial Intelligence, also known as the “Dartmouth workshop”, laid the foundations of this new branch of science aimed at modeling the human mind. However, in terms of AI research, year zero can be considered 1950, when Alan Turing formulated a method for determining whether a machine possesses intelligence (“Computing Machinery and Intelligence”, 1950). This method later became known as the Turing test. According to the test, a computer possesses intelligence if it can demonstrate intelligent behavior indistinguishable from that of a human.

Research into artificial intelligence also took off in the 1950s. It was during that decade that the first neural networks appeared, followed in the 1960s by the first chatbot. In the mid-1970s, AI development slowed due to funding cuts, limited computing power, and general disillusionment with the technology – a period that became known as the “AI winter”. Interest in the field briefly resurfaced in the 1980s with the development of expert systems potentially capable of replacing human specialists in certain areas. But these systems failed to live up to expectations, and another winter set in.

The turn of the millennium saw another revival of interest, driven largely by Deep Blue’s sensational 1997 victory over world chess champion Garry Kasparov. Another key factor was the massive boost in computing power and the emergence of the large data sets needed for machine learning. In the 2010s, various new technologies emerged under the “generative AI” umbrella: generative adversarial networks able to generate photorealistic images, as well as transformer neural networks, which enabled the training of large language models (LLMs) capable of receiving queries and responding in natural language.

Strong AI vs weak AI

Artificial intelligence comes in two flavors: strong and weak.

Strong AI, or AGI (artificial general intelligence), would be capable of performing tasks traditionally seen as requiring intelligence, such as establishing causal relationships and decision-making. As of October 2025, AGI remains a hypothetical concept. Some experts believe that only strong AI is worthy of being called artificial intelligence.

Weak AI solves a limited set of tasks for which it is trained – such as generating text or images on demand. Typically, weak AI is trained on large sets of homogeneous training data (for example, millions of labeled photographs of cats and dogs), and builds patterns based on this data. The trained model can then deliver verdicts on input data of the same format (for example, answering “cat” or “dog” when asked what a pet photo shows) or generate new data by analogy (creating photorealistic images of non-existent cats and dogs).
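As a rough illustration of this train-then-classify pattern, here is a toy nearest-centroid classifier in Python. The two-number “feature vectors” and their values are invented for the sketch; real image classifiers learn far richer representations.

```python
# Toy illustration (not a real vision model): "training" reduces labeled
# examples to one averaged pattern (centroid) per class; classification
# matches new input against those patterns. Features are hypothetical.
import math

def train(labeled_examples):
    """Average the feature vectors of each class into a centroid."""
    centroids = {}
    for label, features in labeled_examples:
        sums, count = centroids.get(label, ([0.0] * len(features), 0))
        centroids[label] = ([s + f for s, f in zip(sums, features)], count + 1)
    return {label: [s / count for s in sums]
            for label, (sums, count) in centroids.items()}

def classify(centroids, features):
    """Return the label whose centroid is nearest to the input."""
    return min(centroids, key=lambda label: math.dist(centroids[label], features))

# Hypothetical 2-D features, e.g. (ear pointiness, snout length)
training_data = [
    ("cat", (0.9, 0.2)), ("cat", (0.8, 0.3)),
    ("dog", (0.3, 0.8)), ("dog", (0.2, 0.9)),
]
model = train(training_data)
print(classify(model, (0.85, 0.25)))  # prints "cat" for a cat-like input
```

The same structure scales up: a neural network replaces the centroids with millions of learned parameters, but the division into a training phase and an inference phase stays the same.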

Examples of AI and their applications

As of 2025, we can divide AI-based technologies into the following broad categories:

  • Text-processing models – AI whose main function is natural language processing, such as LLMs. Such models are widely used in AI assistants (e.g., ChatGPT, Siri, Alexa, etc.), automatic translators (Google Adaptive Translation, DeepL, etc.), code generation tools (GitHub Copilot, Amazon CodeWhisperer, etc.), and many other popular tools.
  • Visual content processing systems – AI tasked with processing or generating images and videos. This includes computer-vision systems used in image search and for object recognition in self-driving and video surveillance systems. Also in this category are neural network image generators (Dall-E, Midjourney, Stable Diffusion, etc.) and video generators (Sora, Luma, etc.).
  • Audio content processing systems – AI capable of processing sound. Examples are biometric voice authentication systems, neural network voiceover tools, and music generators.

Thanks to its ability to generalize information, we can also use AI to analyze any big data. For example, AI powers the recommendation systems of streaming services – analyzing user interests and suggesting related content. Comparable systems have become widespread in medicine for analyzing symptoms and making preliminary diagnoses.
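As a hedged sketch of how such a recommendation system might work, the toy code below compares users’ rating histories with cosine similarity and suggests titles that a similar user liked. The users, titles, and ratings are all made up for illustration; production systems use far larger matrices and learned models.

```python
# Minimal user-based collaborative filtering: users with similar rating
# histories are assumed to share interests. All data here is invented.
import math

ratings = {
    "alice": {"Drama A": 5, "Drama B": 4, "Action A": 1},
    "bob":   {"Drama A": 4, "Drama B": 5, "Drama C": 4},
    "carol": {"Action A": 5, "Action B": 4},
}

def similarity(u, v):
    """Cosine similarity between two users' rating vectors."""
    common = ratings[u].keys() & ratings[v].keys()
    if not common:
        return 0.0
    dot = sum(ratings[u][i] * ratings[v][i] for i in common)
    norm = math.sqrt(sum(r * r for r in ratings[u].values())) * \
           math.sqrt(sum(r * r for r in ratings[v].values()))
    return dot / norm

def recommend(user):
    """Suggest items the most similar other user rated but this one hasn't."""
    peer = max((u for u in ratings if u != user),
               key=lambda u: similarity(user, u))
    return sorted(ratings[peer].keys() - ratings[user].keys())

print(recommend("alice"))  # bob is most similar, so his unseen titles
```

Here “alice” and “bob” both rate dramas highly, so the sketch recommends the drama that only “bob” has seen.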

AI in cybersecurity

The advance of AI has led to the emergence of new cybersecurity threats.

  • Deepfakes. Generative neural networks allow the creation of believable fake images, audio, and videos of real people. Threat actors can leverage these for extortion or phishing.
  • Generating malicious code. AI assistants from major companies are generally incapable of generating malicious code due to built-in protection mechanisms. However, the dark web is home to hacked versions of such assistants that can bypass restrictions to perform malevolent actions – including creating malware.
  • Generating phishing pages and emails. Scammers use LLMs to create phishing emails and websites in different languages.

Attackers can also target AI itself in an effort to bypass the built-in restrictions of legitimate models. These kinds of attacks are called jailbreaking (by analogy with gaining root access in iOS). Threat actors can also search for vulnerabilities in machine learning models for use in future attacks – for example, on security solutions.

For their part, security experts use AI to optimize resource-intensive tasks and improve protection.

  • Detecting malware. Modern security solutions use AI to process files and detect current threats – including polymorphic malware, where every specimen has a unique file hash.
  • Detecting fraud. AI can detect abnormal user activity (for example, transactions at unusual times or from unusual locations, which could point to a card data breach). Online banking and online gaming are two areas that leverage AI for this purpose.
  • Detecting email attacks. AI analyzes email traffic based on multiple parameters to catch phishing emails – including sophisticated spear phishing.
  • Detecting abnormal activity in corporate networks. AI can process telemetry and transaction logs in both digital and physical systems (for example, as part of SIEM solutions), and identify anomalous activity that may be an indicator of compromise or an attack.
  • Threat hunting. AI can detect previously unknown indicators of compromise in historical data to bring new and sophisticated threats to light.
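The malware-detection point above can be illustrated with a short sketch of why plain hash blocklists are not enough against polymorphic malware, and why generalizing models are needed instead. The “specimens” below are harmless placeholder bytes, not real malware.

```python
# Why plain hash blocklists fail against polymorphic malware: changing a
# single byte of a file yields a completely different hash, so every
# mutated specimen evades a hash-based signature list.
import hashlib

specimen_a = b"PAYLOAD" + b"\x00"      # original specimen
specimen_b = b"PAYLOAD" + b"\x01"      # same logic, one byte mutated

hash_a = hashlib.sha256(specimen_a).hexdigest()
hash_b = hashlib.sha256(specimen_b).hexdigest()

known_bad_hashes = {hash_a}            # signature list built from specimen A

print(hash_b in known_bad_hashes)      # prints False: the mutant slips past
print(hash_a == hash_b)                # prints False: hashes differ entirely
```

Because each mutation produces a brand-new hash, AI-based detection instead learns behavioral and structural patterns that stay stable across specimens.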
