The future of AI is often framed in dramatic, dystopian visions, like robots stealing human jobs and causing mass unemployment, or corporations and governments using powerful technologies for population surveillance. Having worked on AI for decades, I believe our future will be better because of AI, and that many of its flaws reflect our own foibles, and can be prevented.
The year 2016 was a turning point for artificial intelligence, when AlphaGo, created by British AI start-up DeepMind, beat Lee Sedol at Go. The board game, thousands of years old, was a target for AI researchers because of its strategic sophistication compared with chess, at which computers had long beaten human grandmasters through brute number-crunching force.
Since then, AI has boomed in terms of both new technical breakthroughs and commercial uptake. DeepMind, for instance, has continued to tackle long-standing, thorny problems. Its more recent AlphaFold project predicts a protein’s 3D structure with remarkable speed and precision, a feat that had eluded scientists for decades and that has positive implications for medicine and the life sciences. GPT-3, a large language model released in 2020, dramatically surpasses the performance of domain-specific natural language processing (NLP) tools, performing tasks that range from producing poetry and philosophical reflections to writing press releases and technical manuals. Businesses are racing to deploy AI in everything from agriculture to banking and retail. PricewaterhouseCoopers estimates that AI will create $15.7 trillion in economic value by 2030.
Job losses are not caused by firms that automate their manufacturing processes, but rather by firms that miss the critical juncture of automation.
This AI boom is down to several trends. One is the evolution and refinement of ‘deep learning’, which allows AI systems to become more accurate and effective. The second, fueling the first, is the proliferation of data sets on which AI systems train. The third is improvements in hardware and processing power.
Tech companies, industry observers, critics and academics are now grappling with what AI’s rapid progress will mean for the future of humanity. But tech forecasting is a minefield. When I worked on speech recognition in the 1980s and 90s, I thought it would be ready in ‘the next five years’, and I was wrong. I learned then how many factors shape the development and uptake of a new technology, from user interface to law and regulation. There can also be long lags between ideas and practical applications. Deep learning is powering many of the most exciting breakthroughs in AI right now, but the concept dates back to publications in 1967.
With all that said, what kind of predictions can we make today about where AI will take us in the next two decades? Will the future be one marked by mass unemployment, autonomous lethal weapons or uncontrolled AI that turns on its human creators, as pessimists fear? Could it give more power to commercial tech juggernauts, fattening their profits without making a meaningful difference for humanity? Might it even be a giant flop, with only a few enduring applications, leading to a new ‘AI winter’?
My belief is that, following Amara’s law, we tend to overestimate the effect of a technology in the short run and underestimate its effect in the long run. My vision of the future, outlined in my 2021 book AI 2041, which I co-wrote with science fiction writer Chen Qiufan, sets out a range of pathways in which AI, automation and robotics will affect our lives.
By 2041, I predict that AI will beat humans at an ever-increasing number of tasks. But that will not render us useless or unemployed. I think there will be many tasks that humans can still handle much better than software programs or robots, and I expect us to work symbiotically with our creation: it performs quantitative analysis, optimization and routine work, while we contribute creativity, strategy and passion.
I believe AI and automation will move upstream in life science, to optimise and transform drug discovery, pathology and diagnosis. We recently saw history in the making, with Insilico Medicine nominating a drug target and a novel molecule both discovered through AI. PCR test automation systems built for the COVID response are now being adapted for life science lab work in cell biology, molecular biology and stem cell research. This is only the beginning of AI’s impact on health and medicine.
Some of the more bombastic predictions look overconfident. For instance, I do not think we will reach ‘artificial general intelligence’ (AGI), a holy grail in which AI can perform with the flexibility and range of humans, in the medium term. There are many challenges on which we have made little progress, or which we do not even understand, such as how to model creativity, strategic thinking, reasoning, counterfactual thinking, emotions and consciousness.
I do envision potentially radical changes to society, though. I think AI could, within two decades, revolutionize how we make goods and services in ways that could challenge the fundamental premise of our economic models. It will transform material science and electricity generation, bringing about a reduction in energy costs of up to 90% that will, in turn, cut the prices of goods to near zero. That could in effect do away with scarcity, the cornerstone of modern economic thinking, and help usher in an era of material plenitude.
The power of AI should in turn make us conscious of its flaws and dangers: biases, new cyber security risks, deep fakes, privacy infringements, autonomous weapons and job displacement among them. But we should evaluate these threats carefully. If AI suffers from bias, that is in large part a reflection of us, its creators. If anything, AI systems can be built to identify and correct biases, including through responsible governance protocols, in ways that humans, oblivious to our own biases, cannot. In a world where a judge’s sentencing is harsher when he or she is hungry, the solution is to build technologies that are objective and self-correcting in ways that surpass our frailties, rather than to give up on technology and fall back on our fallible selves. We need more AI research on quantifying notions like “time well spent,” “fairness” and “happiness,” and engineers should be trained with a set of standard principles, like an adapted version of the physician’s Hippocratic oath.
In a world where AI transforms economies, we will, of course, need broader discussions about the kind of society we want to live in. If AI and related technologies drive down the cost of almost all goods, most of which will be produced for next to nothing, would money be phased out? If so, what would take money’s place to motivate people to live purpose-filled lives? Would any economic theory apply anymore? I believe that in the very long term we might see an economy built on a new social contract, one that guarantees a growing set of the basic services of a comfortable life, while redefining concepts like work, money and purpose, as well as the role of corporations and institutions. That is a world in which everyone should have a voice, and the design for it should start now.
This article first appeared on the Economist Impact website on April 18, 2022.