
Five paradoxes of artificial intelligence: why should we not be afraid?


19 January 2026, 10:04

If 2025 was the year of great hype around artificial intelligence (AI), 2026 could be a turning point in its development. This is not only about the much-discussed possibility of an “AI bubble,” the unprecedented cost of AI infrastructure, and the growing debts of countries and companies unwilling to fall behind in this race. Despite all the excitement, implementing AI and competing for leadership in the field will remain an almost universally accepted priority in 2026.

However, as AI becomes widespread, a series of contradictions and paradoxes emerges – and they often have more to do with the people who implement and use the technology than with the technology itself. In my view, these paradoxes are frequently explained inaccurately or incorrectly. In this article, we will discuss five such paradoxes.

PARADOX 1: Will AI take away people's jobs, or will it create new ones?

It is true that AI automates both routine and non-routine work tasks. However, its real impact on employment is still not entirely clear.

On the one hand, calculations show that, depending on a country's level of economic development, automation could reduce employment by 0.4% to 5.5%. Moreover, existing AI technologies tend to replace human labor across the economy rather than create new opportunities. The mass layoffs already under way at IT companies at the forefront of technological development only reinforce this impression.

On the other hand, experience shows that AI destroys TASKS rather than professions, and simultaneously creates new tasks. According to a 2024 survey by the World Economic Forum (WEF) of over 1,000 leading global employers, 170 million new jobs are expected to be created between 2025 and 2030, while 92 million jobs will be eliminated – a net gain of 78 million jobs. Two-thirds of the surveyed employers plan to hire specialists with AI skills, while 40% anticipate staff reductions in areas that can be automated.

According to WEF's forecast, approximately 40% of existing skills will change in the next five years. Other studies also show that tasks in some professions are increasingly automated, while in other professions, new, more complex tasks requiring communication and creativity emerge. Against this backdrop, demand for workers is increasing in sectors such as healthcare, education, transport, construction, and social security. Interestingly, as the economy digitizes, the need for HUMAN-ORIENTED skills does not decrease; on the contrary, it may increase.

Ultimately, this will lead not to a sharp decrease in overall employment, but rather to a redistribution of the workforce. The main risk is not mass unemployment (which is unlikely to occur), but rather the need for retraining, career changes, and social adaptation. In other words, the future of employment depends less on technology and more on how society – the state, businesses, and the education system – prepares for these changes.

PARADOX 2: Does AI increase productivity, or the opposite?

The rapid adoption of AI might be expected to raise productivity just as quickly. Yet the impact of AI on productivity growth is not clearly visible in the statistics; according to MIT Sloan, there may even be a temporary dip in productivity during the initial phase of AI implementation in companies.

This is known as the “productivity paradox.” The impact of new technologies on productivity sometimes creates a J-shaped graph: an initial decline, followed by a long-term rise. This is because AI is not simply plug-and-play: systemic changes are required for implementation, and this process can slow down daily workflows for a period.

For example, transitioning from old management systems to new ones can disrupt how employees create and share knowledge, potentially weakening organizational capital (business processes, management, planning). Add to this the restructuring of production processes, insufficient data for training AI, and a lack of AI expertise within the enterprise. All of this can lead to temporary inefficiencies, interruptions, and an overall decline in productivity – an effect that is particularly pronounced in older and larger enterprises.

According to McKinsey, 9 out of 10 companies surveyed began using AI tools in the three years leading up to 2025. However, a large share of them have not yet integrated AI “deeply” enough into their work processes for the results to show up in the statistics.

Nevertheless, the picture changes over a horizon of several years. Research tracking companies across two periods (2012–2017 and 2017–2021) shows that firms which adopted AI before 2017 initially experienced a decline in productivity but then began to outperform comparable companies that had not adopted AI – in labor productivity and total factor productivity, as well as in new product development and market share growth.

PARADOX 3: Will AI increase the value of human-created content?

Generative AI already creates texts that are very difficult to distinguish from human writing; videos, audio, and photos also appear incredibly realistic. According to some estimates, the amount of AI-generated content on the internet has already surpassed human-created content. This means the internet is increasingly resembling an AI product. A study analyzing 900,000 new web pages created in April 2025 showed that approximately 75% of them were produced with AI involvement – this is already becoming the norm.

At best, this could lead to the internet being flooded with impersonal, mass-produced AI content – AI spam. In a worse scenario, fake news, disinformation, and pseudo-scientific articles spread, and the boundary between truth and falsehood is gradually erased. For example, according to some estimates, the number of deepfakes on platforms increased 16-fold between 2023 and 2025, reaching 8 million. In the WEF's 2025 global expert survey, disinformation ranks 4th among current global risks and 1st among risks for the next two years.

The increase in fakes and the difficulty in distinguishing truth from fiction can create a feeling among people that everything is fake. But the paradox is that this process can simultaneously increase the value of information coming from transparent, accurate, responsible, and reliable sources. The WEF's digital security report emphasizes: trust on the internet does not arise on its own; it must be earned and strengthened. Therefore, clear authenticity markers indicating human involvement, responsibility, and source transparency can help in selecting and distinguishing reliable information.

“AI Generation”

According to a study in the US, approximately half of young “Zoomers” (the AI generation) use generative AI weekly. They are at once fascinated by the technology (36%) and seriously worried about it (41%). The concern stems from the gap between the opportunities AI offers and how its use is judged by schools and employers.

On the one hand, the use of AI can in some cases negatively affect cognitive skills. On the other hand, the job market demands digital literacy from young people from the outset, yet many of the entry-level positions that serve as their starting point are being automated, which can complicate young people's employment prospects. Moreover, many Zoomers turn to AI as a psychologist – to understand their emotions and find a way out – and this is not always to their benefit.

PARADOX 4: Can Artificial Intelligence solve its own energy problem?

AI cannot exist without electricity. According to 2024 data from the International Energy Agency (IEA), data centers currently account for a relatively small portion of global electricity consumption – approximately 1.5%. However, since 2017, their energy consumption has increased by an average of 12% annually, which is more than four times the growth rate of overall energy consumption.

A large data center can require as much energy as roughly 100,000 households; the largest centers currently under construction will consume 20 times more than that. The number of such centers is also growing: investment in data centers doubled between 2022 and 2024, reaching $0.5 trillion. The IEA estimates that data centers' energy demand could double within the next five years and roughly triple by 2035.

However, at the same time, AI has the potential to transform the energy sector. Its development stimulates investments by tech giants in clean energy. Companies like Amazon, Google, and Microsoft together account for approximately one-third of corporate renewable energy purchases and are expanding their investment portfolios in this area. For example, Google's long-term goal is to operate on 24/7 carbon-free energy. Thus, AI development creates a dual impact on energy: it both increases demand and expands supply (typically through renewable sources). In this context, Azerbaijan's investments in green energy production and export are of particular importance.

Furthermore, AI can play the role of a system analyst in energy: predicting and flexibly managing demand, balancing the grid, preventing overloads, optimizing energy efficiency in buildings, and even helping to reduce global greenhouse gas emissions by up to 10% by 2030. IEA experts evaluate this as a factor that brings the energy sector to the center of one of the most important technological revolutions of the modern era.

For now, however, AI in the energy sector works mainly to meet its own needs, and its transformative role remains a possibility rather than a reality. For that role to materialize, data center projects must be integrated with the broader energy infrastructure, and the AI investments of tech giants must be aligned with regional energy sustainability initiatives.

PARADOX 5: Will Autonomous AI be able to adhere to the limits of its autonomy?

The new phase of AI is the AI agent: these systems not only generate information but also act. They can plan, make decisions without human involvement, and operate autonomously in both digital and physical environments.

While generative AI primarily learns from texts, an AI agent learns from real-world data and patterns. For example, an industrial agent can learn from indicators such as pressure, motion, and gravity. By combining sensor data, modeling, and expert knowledge, it can predict turbine wear or optimize intercontinental flight schedules.

On the one hand, this transforms AI from a passive tool into an active assistant. On the other hand, granting AI such extensive authority creates risks. Experiments show that, without human oversight, AI agents can resort to deception, blackmail, corporate espionage, and malicious insider behavior, and can even pose risks to human safety. They may even attempt to disable their own controls despite explicit prohibitions. This means agents could form hidden intentions and disregard ethical principles – even when they have been taught ethical frameworks.

At this stage, the main question is not about new opportunities, but about who will hold real power and responsibility. An AI agent should never replace human judgment; otherwise, it might prioritize efficiency over ethics, and results over values.

AI is already beginning to surpass humans in many specialized fields, from diagnostics to contract analysis. The issue is not whether AI should take on these tasks. The issue is how humans will maintain strategic control when AI takes on these tasks. For example, a doctor can rely on AI to find the smallest anomaly in images, but the diagnosis must be made by a human with empathy and sound judgment. A lawyer can allow AI to analyze thousands of pages of evidence and construct arguments, but evaluating justice, context, and intent is still a human task. Uncontrolled use of agents can even undermine the systems they are meant to serve.

AI agents will provide humanity with unprecedented opportunities: the power to act at digital speed and on a planetary scale. But it is only human purpose that gives meaning to this power. We believe that the future should be shaped not by algorithms, but by the people who give them purpose and framework.

Tahir Mirkishili
Member of the Milli Majlis
