Saturday, 3 June 2023
Is it too late to moderate the impact of artificial intelligence?
Artificial intelligence has become the hot topic. Will it destroy our way of life, will robots take over, will science fiction become reality? In the military world, AI is key to future, if not current, warfare. The three rival great powers, the US, China and Russia, are spending billions of dollars on developing AI weapons that will transform the battlefield of the future on land, in the air, and on and under the sea. Other countries, including the United Kingdom, India, Israel, South Korea and Iran, are also trying to exploit AI's potential for developing unmanned "thinking" weapon systems and platforms. AI is said to represent the third revolution in warfare, following the inventions of gunpowder and nuclear weapons. That revolution has been under way for years, but the availability of artificial intelligence in both the commercial and military research worlds has now set off a new arms race.
The war in Ukraine has largely been characterised as attritional warfare, with both sides dug into trenches and exchanging artillery fire, more Second World War than a war of the future. And yet glimpses of what a future war will look like have also emerged in Ukraine, with autonomous "loitering" drones, underwater drones and even uncrewed ground vehicles. The Kyiv government has also benefited from western-supplied AI geospatial intelligence from low-orbit satellites, which give an instant picture of Russian positions and movements, and a US AI company has given Kyiv facial-recognition software to help identify dead soldiers and pinpoint suspected Russian war criminals.

The AI revolution has created what has been termed "algorithmic warfare", and it is the sophistication of the algorithms that has led to concerns that, if the human element is removed, warfare of the future will be uncontrollable. The AI weapon that turns on its supposed human operator is just one example of how robotic systems could make their own judgments and act accordingly. The Pentagon has its own advanced "lethal autonomous weapons" programme to develop what are sometimes called "slaughterbots" or killer robots, which use AI to identify, select and kill human targets without human intervention. The drones used by the CIA and US special operations forces to kill terrorist targets in Iraq, Afghanistan, Syria and elsewhere were controlled by human operators in front of monitors thousands of miles away; if there was a risk of civilian casualties, strikes could be called off. In the case of lethal autonomous weapons, final decisions are made by algorithms using facial-recognition technology.
The US, China and Russia are all researching weapons platforms that can operate independently, whether drones, fighter aircraft or submarines. The US Air Force has already successfully tested an F-16 fighter jet flown by AI: it flew for 17 hours and was put through numerous challenging manoeuvres. The argument for more unmanned systems, such as fighter jets that can operate without putting human pilots at risk of enemy fire, is unassailable. In February, at a meeting in The Hague, more than 60 countries, including the US and China, endorsed the responsible use of AI in the military. But the agreement was not legally binding, and for the moment the world's biggest defence spenders are giving research into AI weapon systems higher priority than anything else.