In October 2019, the activist movement Extinction Rebellion disrupted daily routines in major cities around the world to highlight the dangers of man-made climate change – and the fact that time is running out to do anything meaningful about it. In London, demonstrators glued themselves to office building doors, the pavement, trains and cars – even to the top of an aircraft about to take off!
They also ran a UK crowdfunding project with a target of £1 million to fund their activities – perhaps even to pay some of the fines their members picked up, though that's just a guess. As of October 29 the crowdfunding campaign is still running and has reached nearly £965,000, so they're almost there.
Yet within the same 30–50 year time frame we are usually told will be decisive for the climate, some people believe there are other threats – from artificial intelligence and robotics – that ought to be taken just as seriously, if not more so.
Robot threat to jobs
Many of us have become accustomed to doom-mongers' warnings about the threats to livelihoods from robots taking over repetitive and menial work. The predicted consequences usually include mass unemployment, with non-working people subsidized by far higher taxes levied on those still in work. How would norms of social inclusion and the rule of law cope with an ever more divisive and polarized world of haves and have-nots? And that includes having a sense of purpose as much as anything else.
Stuart Russell, a professor of computer science at the University of California, Berkeley, and one of the world's leading experts in AI, has weighed in with his own opinions in a new book published this month titled "Human Compatible: AI and the Problem of Control."
He asks readers to imagine a scenario in which a comparable risk is external, one in which advanced aliens from another world email the United Nations and say “we’re coming, we’ll be with you in 30 to 50 years.” Would our planet’s best minds be mobilized to prepare for this extra-terrestrial incursion more than we are preparing for the creation of our own super-humanly intelligent machines?
Pace of technology leaves controls behind
Technology continues to develop at an ever-faster pace. Machine learning-powered artificial intelligence is increasingly likely to enable automation to take on more complex tasks that were once thought to be 'machine-proof.'
Flying aircraft, for example, is a highly skilled profession, not one of the highly repetitive jobs that are supposedly under most threat from robots. Airline pilots can earn substantial incomes and generally receive public admiration. How close are we to that changing, with their role fully automated?
[Chart: The Probability of Job Automation by Occupation]
Lockheed Martin, the US global aerospace corporation, is currently sponsoring an open innovation challenge to combine AI, machine learning and fully autonomous flight. The goal is to create an AI framework that could pilot racing drones through high-speed aerial courses without any GPS, data relay or human intervention.
A field of 420 teams from 81 countries has been whittled down to nine finalists, who will compete in four races in the coming months. The winning team will receive $1 million, plus an extra $250,000 if their AI drone can beat a human-piloted drone: the challenge-winning drone will race the fastest 2019 DRL Allianz World Champion pilot at the end of the season.
However, there are often unintended consequences. In a less sporting context, the weaponization of drone technology has already been achieved. In 2016, the Islamic State of Iraq and the Levant (ISIL) carried out its first successful drone attack, killing two 'opponents' in northern Iraq. Terrorist groups are increasingly using drones and elementary artificial intelligence in attacks. Improved AI could prove a formidable threat, allowing non-state actors to automate killing on a massive scale and create incidents of mass destruction.
Laura Nolan, a former Google software engineer and member of the International Committee for Robot Arms Control, has warned that autonomous killer robots could accidentally start a war in the future. She has called for automated weaponry to be outlawed by international treaties. But which ones? And what treaties do terrorist organizations sign up to?
Terrorist groups aren't the only parties involved. Stuart Russell's book points out that Israel has developed an autonomous "loitering munition" called Harop, which can hunt and destroy objects it classes as hostile. Anti-personnel microdrones equipped with facial-recognition systems and explosive weaponry might already exist. Slaughterbots, they are called.
At the time when Extinction Rebellion were disrupting major cities around the world, and targeting hubs of finance, media and transport, Russell put forward the notion that the leading tech firms in Silicon Valley and China must learn to accept regulation in the area of weaponry. “Let’s hope it doesn’t require a Chernobyl-sized disaster (or worse),” he warns, “to overcome the industry’s resistance.”
But whilst authors and activists can point out and warn of the dangers, I have to ask the question again: who can introduce enforceable regulation, and with whose authority would they act? Any suggestions or comments out there?