The Battle between Human Intelligence and Artificial Intelligence
Just this month, Tesla revealed a prototype of its humanoid robot, Optimus. This latest project will have an AI-chip-powered brain, cameras for eyes, microphones for ears, and the capacity to walk and carry 9 kilograms in each hand, amongst other things.
This, alongside other developments, is proof of the emergence of artificial intelligence (AI) as one of the most exciting technological innovations in the world of business. Even now, with the technology still in its relative infancy, AI is driving people around, delivering packages, trading securities, and translating languages. Its breathtaking abilities are starting to shape the lion’s share of industries. A hotly contested debate of late is how the relationship between human intelligence and artificial intelligence will play out as AI adoption grows. Will it be one of competition and conflict, or will we see eye to eye with our AI counterparts?
The word intelligence derives from the Latin intelligentia, meaning “the action or faculty of understanding”. And what does it mean to understand? Oscar Wilde wrote that “to define is to limit”, and so I think we must interpret intelligence in a rather broad and fluid sense. Intelligence cannot be reduced to a single characteristic or competency, and we should recognise that the abilities and understanding possessed by humans and by robots are different. When it comes to comparing AI and humans, we must consider them differently too.
Talos, Turing & Siri
The concept of intelligent robots stretches as far back as the presence of automatons like Talos and Pandora in Greek mythology. Talos was constructed by Hephaestus to help King Minos of Crete guard the island from invaders, while Pandora was essentially an ‘all-gifted’ robot. Where philosophy would then mull the presence of artificial beings, science fiction would imagine and depict it with more colour and drama. But in 1950, Alan Turing began to bring the art to life when he discussed how to build intelligent machines and test this intelligence in his seminal paper ‘Computing Machinery and Intelligence.’ Six years later, in 1956, the Dartmouth Summer Research Project on Artificial Intelligence catalysed AI research for the following decade.
However, in the 1970s, AI development entered its first winter, with funding drying up almost completely, before a small boom in research and development in the 1980s and then a second AI winter. IBM’s Deep Blue becoming the first computer to beat a reigning world chess champion, in 1997, was a critical point in the evolution of the technology. The same year, speech recognition software developed by Dragon Systems was implemented in Windows. In the mid-2000s we saw widespread adoption and exploration of AI by Big Tech, culminating in products like Google’s search engine, Google Translate, Siri, facial recognition, and Alexa.
And now, in 2022, we are in the golden era of AI, with sophisticated machine learning imitating how humans learn, quantum computing attempting to dramatically increase the power and speed of computation, and the embedding of AI in augmented reality enhancing the experience of the metaverse. With these developments on the precipice of reshaping industries, experts predict that using AI at a large scale will add as much as $15.7 trillion to the global economy by 2030. In 2021, venture capital (VC) funding for AI start-ups reached an eye-watering $89.2 billion as investors and entrepreneurs look to realise the power of robots. We might hope that, amidst all the hype surrounding the adoption of AI, we, like Minos with Talos, can firmly place our faith in it. Sceptics, however, warn of a scenario more akin to opening Pandora’s box and unleashing its evils.
What’s Inside Pandora’s Box?
It is no surprise that VCs globally have invested billions into AI start-ups in recent years. Mammoth as the funding required may be, the range of possibilities is almost beyond the scope of our wildest imagination. Tech giants such as Alphabet and Meta have pumped over $50 billion into R&D to build this brave new world of automation. You’ll find AI in the chatbot providing you with “excellent customer service”, helping you turn on the lights in your kitchen, and keeping your home clean, amongst an array of other things. AI is transforming processes in healthcare too, with machines accelerating vaccine development and drug design, enhancing quality of life, and possibly saving lives that otherwise would not have been saved.
Here in Ireland, we have companies like Manna Drone Delivery incorporating some AI to autonomously deliver coffees and pastries to people’s homes. And then there is the leveraging of AI in Web 3.0’s move to decentralisation, where it now enables some blockchain and token-based transactions.
We couldn’t have a conversation about cutting-edge technology like AI without mentioning the wide-eyed futurist that is Elon Musk. Alongside the development of Optimus and Tesla’s self-driving cars, there is the incredibly frightening neurotechnology of Musk’s Neuralink, which aims to create an implantable brain chip to record the brain’s activity and improve human intelligence. Across all these technologies, AI’s intelligence derives from its capacity to learn and make decisions based on the data it is fed.
On the other hand, humans rely on a different kind of intelligence, one that is certainly more expansive and intuitive. Humans rely on memory but, more critically, possess an emotional intelligence (EQ) that enables us to relate, adapt, empathise, and understand. The significance of EQ in the workplace was underlined by a study of 2,662 U.S. hiring managers, which found that a whopping 71% of employers value EQ over IQ. Consider the work of a nurse: the ability to show empathy, sympathy, and compassion is fundamental to doing the job competently. The same can be said for solicitors, actors, comedians, and many other fields where humans cannot be outsmarted because of the essential personal and emotional element. There are also question marks over AI’s ability to maintain the ethical standards that the human capacity for empathy allows us to uphold. There was shock and great disappointment when Microsoft’s automated Twitter account, Tay, was easily coaxed by users into publishing a flurry of anti-Semitic tweets in 2016.
‘Never send a human to do a machine’s job,’ goes the quote from the iconic film The Matrix. But is the doom-ridden depiction of AI in contemporary science fiction correct?
AI is impeccable at carrying out repetitive tasks. It does not tire in the way the human mind or body might. Automating these types of tasks makes a great deal of sense and achieves cost savings for businesses. AI also offers a solution to the distorting effect of the cognitive biases that colour managers’ decisions every day, a challenge that management theorists have contested and theorised about for over 50 years. However, the lack of EQ, and perhaps of an understanding of ethics, is an enormous challenge for AI and ultimately marks a limit to its abilities. Returning to the earlier point: do we really expect that the costs saved by automating something like nursing will be worth the loss of the personal human touch?

There is also the fact that while those in Big Tech and business talk very bullishly of the plethora of opportunities AI will generate, the public is less enthusiastic, harbouring fears about ethical dilemmas, surveillance, and data privacy. A study by Ipsos found that only 50% of people trust companies that use AI as much as they trust other companies. The public response to Mark Zuckerberg’s 1-hour-17-minute video describing his grand plans for the metaverse was one of partial ridicule but also unease. Some of the scepticism might be borne of headlines like the World Economic Forum’s prediction that AI will wipe out some 85 million jobs by 2025, though people should be cognisant that this loss will be offset by new jobs servicing AI. Harari, author of Sapiens, predicts that the job market of 2050 “may well be characterised by human-AI cooperation”.
A war of every robot against every man?
As we mentioned before, intelligence is not black and white, so we cannot easily identify a winner, nor should we want to. It is counterproductive to frame AI and human intelligence as opponents in a fight to the death unless we want our lives to actually turn into something reminiscent of a dystopian sci-fi film. Yes, the future of work will look very different from today’s, but in this world of new jobs the relationship between humans and AI will be one of collaboration, with AI supporting human productivity.
Companies will benefit from optimising collaboration between humans and artificial intelligence. What it will come down to is striking the optimal balance between how the human element and the AI element interact and synergise processes. I reject the suggestion that AI will be the last invention humanity will ever have to make; rather, the discovery of AI will enable humans to deliver the next great wave of industry-defining, ground-breaking innovations. But be warned: it is pivotal that, as we integrate the technology, we align its goals with our own. If we do not manage our AI, we may well find ourselves witnesses to Hawking’s stark prediction of the end of humanity at the hands of artificial intelligence.