Artificial intelligence (AI) is progressing by leaps and bounds, unleashing a legal and ethical debate worldwide as it promises to change human society forever. If a driverless car is involved in a fatal accident, for example, it is the operator of the algorithm who faces “product liability” rules. In conventional war, the prospect of machines killing humans is ethically chilling. More worrying still, it is feared that today’s proto-AI technologies will evolve into true AI super-intelligence so rapidly that there will be no time to research the pros and cons.
As apprehensions of a “hyper-war” scenario build, the main challenge remains how to keep the human factor in AI and prevent a drastic erosion of military security as combat built around the technology changes the dimensions of warfare. Every country today needs to re-evaluate its defense mechanisms and rethink its geostrategic defenses to fit the modern use of artificial intelligence.
Discussing the risks of “hyper-war,” August Cole, a senior fellow at the Atlantic Council, predicts that “the decision-making speed of machines is going to eclipse the political and civilian ability.” Because most AI algorithms are dual-use in nature, they can also be adapted for security purposes, and preparing for a “hyper-war” will soon be a priority.
The US and China have already announced that they intend to harness AI for military use. Recognising its military significance, Russian President Vladimir Putin has called AI the future for all mankind, one that would bring “colossal opportunities” and “threats that are difficult to predict.” Declaring that the country that leads in artificial intelligence will rule the world, Putin judged the gravest threat to be the one to nuclear stability.
To explore the possibility of nuclear mishaps caused by AI, the RAND Corporation has started a project known as Security 2040. One of its researchers, engineer Andrew Lohn, says, “This isn’t just a movie scenario, things that are relatively simple can raise tensions and lead us to some dangerous places if we are not careful.”
Basically, the fear is that computer miscalculations could lead to nuclear annihilation if machines taught to think and learn like humans suddenly go haywire and spin into a ‘Terminator’-style nightmare. Such systems would redefine the rules of nuclear deterrence, and there is a real possibility that the “red button” would no longer be under human control if things went wrong.
Because AI depends on data that can be analyzed in real time, an accessible, data-friendly ecosystem with cross-sharing enabled is imperative; it is the nations that promote open data sources and data sharing that advance most in AI. However, even the most sophisticated signature-based cyber protection will be constantly exposed to malware and other cyber threats, so whoever wins the race for AI superiority also carries a massive responsibility to secure these systems adequately. According to a McKinsey Global Institute study, the US ranks eighth in data-openness ratings, while China ranks 93rd.
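To see why purely signature-based protection leaves gaps, here is a minimal Python sketch of the idea; the hash database and helper name are invented for illustration and this is not any vendor’s product. A scanner like this only recognises files it has already seen, so a new or lightly altered sample passes unnoticed.

```python
# Minimal, purely illustrative signature-based scanner.
# The "signature database" below is a hypothetical placeholder.
import hashlib
from pathlib import Path

KNOWN_MALWARE_SHA256 = {
    "0" * 64,  # placeholder entry; a real database holds hashes of known samples
}

def is_flagged(path: Path) -> bool:
    """Flag a file only if its hash exactly matches a known signature."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in KNOWN_MALWARE_SHA256
```

A single changed byte yields a completely different hash, so novel or slightly modified malware sails straight past a check like this, which is the kind of exposure referred to above.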
Ahead of other countries, the US is experimenting with autonomous boats that can track submarines for thousands of miles, while China is exploring the use of “swarm intelligence” to enable teams of drones to hunt in unison. Russia, for its part, is planning an underwater doomsday drone that could deliver nuclear warheads powerful enough to vaporize entire cities.
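As a rough illustration of what “swarm intelligence” means at its simplest, the sketch below implements the textbook cohesion/alignment/separation flocking rules in Python. The weights and setup are arbitrary and purely pedagogical, not a description of any actual drone system.

```python
# Toy flocking ("boids") update: each agent steers toward the group centre,
# matches the group's average heading, and pushes away from close neighbours.
import numpy as np

def swarm_step(pos, vel, dt=0.1, w_cohesion=0.01, w_align=0.05, w_sep=0.1):
    """One update of N agents moving as a loose swarm; pos and vel are (N, 2) arrays."""
    cohesion = pos.mean(axis=0) - pos            # steer toward the group centre
    alignment = vel.mean(axis=0) - vel           # match the group's average velocity
    diff = pos[:, None, :] - pos[None, :, :]     # pairwise offsets between agents
    dist = np.linalg.norm(diff, axis=-1) + 1e-9
    separation = (diff / dist[..., None] ** 2).sum(axis=1)  # stronger push from closer neighbours
    vel = vel + w_cohesion * cohesion + w_align * alignment + w_sep * separation
    return pos + vel * dt, vel

# Example: 20 agents with random starting positions and velocities.
rng = np.random.default_rng(0)
pos, vel = rng.normal(size=(20, 2)), rng.normal(scale=0.1, size=(20, 2))
for _ in range(100):
    pos, vel = swarm_step(pos, vel)
```

The point of such rules is that coordinated group behaviour emerges from simple local interactions, with no central controller, which is what makes drone swarms both cheap to scale and hard to decapitate.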
Employing AI technology in the autonomous weapons program Project Maven, the US military is trying “to sift through the massive troves of data and video captured by surveillance and then alert human analysts of patterns or when there is abnormal or suspicious activity.” More than 3,000 employees of Google signed an open letter calling on the internet giant to abandon the project, declaring that it should not be in “the business of war and outsource moral responsibility.” They also expressed the fear that Google location history could be used for targeted killings. The CIA, meanwhile, is currently running around 137 pilot projects related to artificial intelligence.
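The quoted goal of flagging “abnormal or suspicious activity” is, at its core, anomaly detection. Below is a deliberately simplified Python sketch of that idea, using made-up detection counts and a basic statistical threshold; it is an assumption-laden illustration, not Project Maven’s actual pipeline.

```python
# Flag the hours whose activity count sits far outside the overall baseline.
from statistics import mean, stdev

def abnormal_hours(counts_per_hour, z_threshold=3.0):
    """Return indices of hours whose count deviates more than z_threshold sigmas from the mean."""
    mu, sigma = mean(counts_per_hour), stdev(counts_per_hour)
    return [
        hour for hour, count in enumerate(counts_per_hour)
        if sigma > 0 and abs(count - mu) / sigma > z_threshold
    ]

# Example: hypothetical vehicle detections per hour from an imagery feed.
detections = [4, 5, 3, 6, 4, 5, 4, 47, 5, 4, 6, 5]
print(abnormal_hours(detections))  # -> [7]: the spike an analyst would be alerted to
```

Real systems replace the hand-counted detections with outputs from computer-vision models, but the human-in-the-loop step at the end, an analyst reviewing the alert, is exactly where the current ethical debate sits.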
According to the McKinsey Global Institute, “AI-led automation can give the Chinese economy a productivity injection that would add 0.8 to 1.4 percentage points to GDP growth annually, depending on the speed of adoption.” It is therefore not surprising that China’s State Council has planned a $150-billion domestic AI industry to provide security, fight terrorism and improve speech-recognition programs; to begin with, a facial-recognition application by Baidu is being used to search for missing people. Even though China is behind the US and UK in AI, the sheer size of its market gives it an advantage in pilot testing and product development.
On the positive side, it is possible that these technologies could be suitably harnessed over time to provide stability and, ultimately, reliable security, since they would leave no room for human error. Until that time arrives, proactive measures should be taken to rule out dangers to humanity and prevent catastrophic miscalculations between nuclear powers.
Well, the danger of AI would appear to be to the nations that like war the most and the nation that has built its entire economy around war and the military. The US, for example, goes into Iraq to secure the oil for its international oil companies, causes the deaths of some 2,000,000 (two million) people, sets another 8,000,000 Iraqis on the road as refugees and flattens their country. To these people, what does it matter whether the disaster that struck them was caused by an AI decision on a computer or by a Dick Cheney, Paul Wolfowitz, G W Bush lie? They are still dead. They are still homeless. No, this writer is worried about the danger that a machine might decide to make the US look like Iraq after the illegal American invasion and occupation of that country. So personally I think AI is a good thing; if for nothing else, it will destroy the countries around the world that are already doing their best to destroy the world. In other words, it will introduce some semblance of justice to world affairs.
.
C’mon Sabena, what could be better than freedom from being sued by Stormy Daniels and Karen McDougal? A.I., naturally. Men of your country should seriously think about this technology and avoid getting trapped and charged for groping their women. After all, the women of your country weren’t meant by God to be groped; they were intended by God to be left alone, as is, no harm done by their male counterparts.
.
AI can be applied for GOOD purposes or EVIL purposes – it all depends on INTENT and MORAL JUDGMENT!
I worked on AI projects in the UK in the past, and my understanding is that AI is supposed to mimic the human mind. BUT there is a FLAW: we DON’T KNOW enough about the human mind to model an AI algorithm on it!
AI processes humongous amounts of data, recognises patterns, draws conclusions and then executes its decisions. It is as flawed as a human mind doing the same.
Science and technology will progress despite the inherent flaws… but it’s our MORALITY that needs to be balanced before we employ AI for the betterment of humanity. Oh… and OUR EGOS also need to be controlled before we take control of AI weapons.
.
LOL…
No more Stormy Daniels… No more Karen McDougal… No more Albert Weinstein sexbots who are guaranteed to come out of the closet and take a bite at your little brother eons later…
No more Access Hollywood or Access Hollywood tapes… No more Billy Bush jokes pre-authorised by Unc Jeb… Free, free as a bird at last and, most vital of all, Schwarzenegger’s holiday of dreams coming to life…
Dream on, sisters. Who needs the biological matter that fires up strip joints!
.
Those of us who have been in the IT business for more than half a century know only too well that when a computer calculates at 1 teraflop, it makes mistakes at the same speed.
IT has not delivered in the West. Its CAD/CAM/CAE/ERP model has been soundly beaten by Asia’s Total Quality/JIT/Kanban/Kaizen/FMS paradigm, which uses no computer. Using its home-grown processes, Asia has reduced production costs to a fraction of those in the West.
If experience is any guide, AI in the West will meet the same fate.
The Western model is work-oriented; the Asian one is thought-based. Humans will always beat the machine (except at dumb mechanical work).