Redefining the relationship between humans and AI

Algorithms already perform many human tasks with greater accuracy and efficiency. While AI cannot yet do everything a human brain can, progress in this field is rapid. What next for humanity?

Nicola Ferrarese

Decades in the making, artificial intelligence (AI) has fast emerged as a transformative force, reshaping the fabric of everyday life and permeating nearly every facet of human experience, from health and education to travel and work. This deepening integration means the relationship between humans and machines is rapidly evolving into something very different.

Today, the world stands on the edge of another (related) revolution: that brought about by Generative AI (GenAI), a type of AI that creates content based on everything it has been trained on. No longer confined to simply analysing, distilling, predicting, summarising or rephrasing, GenAI can appear to ‘think’ for itself.

AI in general has become indispensable in the realms of scientific research, pharmaceutical development, and data analysis, but questions persist about its trajectory and the extent to which it can rival human cognition.

At this juncture, existential and technological inquiries converge, rekindling one of philosophy’s oldest questions: What does it mean to be human? As AI algorithms rapidly evolve, they exhibit an extraordinary capacity for learning and simulation. Are we entering a new era of symbiosis between the human mind and AI? Is this an existential struggle that will redefine our roles, values, and very essence?

At its core, AI mirrors the human mind, crafted in symbols and programmed in a language of our own design. Yet it has the potential to surpass the boundaries of its creators. Do we embrace that as a natural extension of our capabilities, a means to transcend our biological limits and expand our consciousness? Or do we regard it as a force that could unsettle our psychological, social, and economic equilibrium?

Given that the technology is fast reshaping the dynamics between agent and instrument, between creator and creation, perhaps the more urgent question is not what machines can do, but what will be left for humanity.

Plato and Aristotle (Getty)

The origins of AI

Ancient philosophers inquired into the nature of the mind and cognition, with Aristotle and Plato asking whether thinking was a logical process that could be represented and replicated. As science and philosophy evolved, the notion of logical reasoning grew more complex; others asked whether human thought processes could be modelled using mathematical and mechanical systems.

Badi Al-Zaman Al-Jazari, a 12th-century polymath from Mesopotamia, laid the groundwork for modern automation and robotics through his advanced inventions, including automata, water clocks, and self-operating musical devices that utilised intricate hydraulic systems. His designs shaped the development of control systems and mechatronics, later inspiring the emergence of industrial robotics and AI.

In the 17th century, French philosopher René Descartes introduced the idea of 'thinking machines.' Thought, he said, could be broken down into mechanical operations. Meanwhile, German philosopher and mathematician Gottfried Wilhelm Leibniz suggested building a "calculating machine" to solve logical problems, much like a human.

By the 19th and early 20th centuries, scientists began developing the mathematical and logical foundations that would form the core of today's AI. Englishman George Boole introduced Boolean algebra, paving the way for the logical representation of mental operations. Decades later, the Austrian-born logician Kurt Gödel formulated theorems on formal systems and mathematical proof, contributing to the understanding of the limits of computation and, by extension, of artificial intelligence.

AI mirrors the human mind, crafted in symbols and programmed in a language of our own design

Simulating intelligence

One of the key figures in AI was English mathematician Alan Turing, whose Turing Machine was a mathematical model of computation. In 1950, he proposed the Turing Test to assess a machine's ability to think like a human. In 1956, the term "artificial intelligence" was coined at the Dartmouth Conference, which brought together researchers to explore how to create machines that could simulate human intelligence. 

Led by John McCarthy (a mathematics professor at Dartmouth College), Marvin Minsky (of Harvard), Nathaniel Rochester (of IBM Corporation) and Claude Shannon (of Bell Telephone Laboratories), the conference was based on the idea "that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it".

Expert systems developed in the 1970s and 80s relied on knowledge-based rules to solve problems. Early machine learning followed in the 1990s, the same decade in which IBM's supercomputer 'Deep Blue' defeated world chess champion Garry Kasparov in 1997. Its predecessor, 'Deep Thought,' had lost to Kasparov in 1989.

Deep learning has now revolutionised image and voice recognition, enabling smart assistants and self-driving vehicles, among much else, yet AI is still grounded in the same core principles that have guided its development. It relies on representing knowledge in a way that an algorithm can understand and process, using logical rules, semantic networks, and ontologies that structure information for machine decision-making.
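As a toy illustration of knowledge represented in a form an algorithm can process, the short Python fragment below encodes a few 'is-a' facts as a miniature semantic network and follows the links to answer a query. The facts, the names, and the network itself are invented purely for this example.

    # A miniature semantic network of "is-a" relations, stored as a dictionary.
    # The facts and queries are hypothetical, chosen only to illustrate the idea.
    is_a = {
        "sparrow": "bird",
        "bird": "animal",
        "salmon": "fish",
        "fish": "animal",
    }

    def is_kind_of(thing: str, category: str) -> bool:
        # Walk the is-a links upward until the category is found or the chain ends.
        while thing in is_a:
            thing = is_a[thing]
            if thing == category:
                return True
        return False

    print(is_kind_of("sparrow", "animal"))  # True: sparrow -> bird -> animal
    print(is_kind_of("salmon", "bird"))     # False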

Modern AI focuses on learning from data rather than direct programming. This is achieved through machine learning algorithms, including supervised learning, unsupervised learning, and reinforcement learning. Deep neural networks have since demonstrated the ability to learn complex concepts from unlabelled data. 
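To make 'learning from data rather than direct programming' concrete, the sketch below fits a straight line to noisy synthetic points by gradient descent, using only NumPy. Every detail, from the data to the learning rate and the number of steps, is an illustrative assumption rather than a description of any real system.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data following y = 3x + 1 plus noise. The rule itself is never
    # written into the learner; it is recovered from the labelled examples.
    x = rng.uniform(-1, 1, size=200)
    y = 3.0 * x + 1.0 + rng.normal(scale=0.1, size=200)

    w, b = 0.0, 0.0   # parameters to be learned
    lr = 0.1          # learning rate (an arbitrary illustrative value)

    for _ in range(500):
        err = (w * x + b) - y
        # Gradients of the mean squared error with respect to w and b
        w -= lr * 2 * np.mean(err * x)
        b -= lr * 2 * np.mean(err)

    print(f"learned w={w:.2f}, b={b:.2f}  (true values: 3.00 and 1.00)")

Scaled up by many orders of magnitude in data and parameters, the same principle underlies the supervised systems described above.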

Artificial intelligence pioneer Geoffrey Hinton speaks at the Thomson Reuters Financial and Risk Summit in Toronto, December 4, 2017. (Reuters)

Enhancing potential

AI generally adopts a rule-based approach, a logic-based approach, or a statistical approach that relies on data analysis and probability estimation. Artificial neural networks, whose modern revival owes much to Geoffrey Hinton, known as the "Godfather of Neural Networks", are the cornerstone of deep learning, which is used in image recognition, natural language processing, and autonomous vehicles.
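The difference between the rule-based and statistical styles can be sketched in a few lines of Python. The spam-filtering task, the word lists, and the labelled examples below are all invented for illustration; a genuine statistical filter would use far more data and a proper probabilistic model.

    from collections import Counter

    # Rule-based: the programmer writes the knowledge down explicitly.
    def is_spam_rule_based(message: str) -> bool:
        banned = {"winner", "free", "prize"}
        return any(word in banned for word in message.lower().split())

    # Statistical: word counts are estimated from labelled examples
    # (a bare-bones, naive-Bayes-flavoured score with no smoothing).
    spam_examples = ["free prize winner", "claim your free prize"]
    ham_examples = ["lunch at noon", "minutes from the meeting"]

    spam_counts = Counter(w for m in spam_examples for w in m.split())
    ham_counts = Counter(w for m in ham_examples for w in m.split())

    def is_spam_statistical(message: str) -> bool:
        words = message.lower().split()
        return sum(spam_counts[w] for w in words) > sum(ham_counts[w] for w in words)

    print(is_spam_rule_based("You are a WINNER"))         # True
    print(is_spam_statistical("free prize inside"))       # True
    print(is_spam_statistical("see you at the meeting"))  # False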

Ultimately, AI aims to enhance the relationship between humans and machines through the development of smart assistants, brain-computer interfaces, and intelligent robotics; however, AI should not be viewed as a direct competitor to human capabilities. 

Rather, it is a powerful tool that enhances human potential and helps perform tasks more efficiently. AI currently lacks the capacity for pure creative thinking and cannot yet make decisions grounded in ethics or what might be called 'human emotion'.

The remarkable progress in AI comes down to the abundance of big data (to feed machine learning algorithms), advances in computing power (letting these algorithms operate efficiently), developments in machine learning, investment, the proliferation of cloud computing, and the evolution of natural language processing, coupled with growing awareness of AI's value across various sectors.

In the workplace, AI has become a crucial partner to humans, capable of analysing vast volumes of data at exceptional speed to support better decision-making. In medicine, AI can diagnose disease or malignancy, but physicians are still needed for confirmation and treatment decisions. 

AI should not be seen as a direct competitor to human capabilities. Rather, it is a powerful tool that enhances human potential

In education and industry, AI is used to automate routine tasks, freeing humans to focus on more creative and strategic aspects. Even though AI can generate artistic or literary content, genuine creativity is still rooted in human experience, critical thinking, and the ability to draw inspiration from reality. AI assists in creative processes, but cannot replace the human sensibility that gives creativity its unique character.

In decision-making, AI crunches data and forecasts trends, but cannot grasp social and cultural complexities in the way humans can. It has huge potential to improve human life, but it also presents serious risks and requires an approach that balances innovation with ethical responsibility. Research is increasingly directed at technology that augments, rather than replaces, human abilities. Brain-computer interfaces, for instance, allow humans to control devices using their minds.

Redefining thought

The future of AI may redefine our very concept of intelligence. This is linked to the ambition to achieve Artificial General Intelligence (AGI)—a system that can perform any intellectual task that a human can do, including reasoning, learning, solving problems, understanding language, and making decisions.

AGI machines exceeding human capacity could trigger profound transformations in the economy, society, and our understanding of human existence. As progress continues, human-machine collaboration becomes essential, with people learning to work alongside AI rather than competing with it.

Coupling AI with quantum computing could lead to the world's most complex problems being solved, new drugs being discovered, and unbreakable encryption systems being created, but with great power comes great responsibility. 

AI needs data, much of which is personal or sensitive. What guarantees are in place to prevent this data from being used for surveillance or exploitation? Violating rights under the guise of 'service improvement' seems a poor trade-off. Likewise, some worry that algorithmic bias is "baked in" to the latest models, mirroring the societal biases embedded within the data on which they are trained. This raises the risk of discrimination.

Perhaps the biggest day-to-day concern is the prospect of large-scale job losses, as tasks are increasingly performed by algorithms. While AI may generate new jobs, many worry that the pace of change is exceeding society's ability to adapt.

Others fear that AI will be deployed destructively, whether for military purposes or surveillance. Will autonomous systems be able to make life-altering decisions without direct human oversight? Robust legal and regulatory frameworks are needed to protect humanity from any potential missteps, but these do not seem forthcoming.

There is an urgency to it. When it comes to agency and reason, humanity is no longer alone. We now share these capacities with digital entities capable of learning, analysing, deciding, and creating. Cooperation between humans and machines need not negate humanity. Rather, it may be an opportunity to rediscover it. 
