Tech titans putting world at risk, says Dutch NGO
Use of AI in weapon systems heralds third revolution in warfare, after gunpowder and nuclear technology
By Ali Murat Alhas
ANKARA (AA) – Leading global companies like Amazon, Microsoft and Intel are putting the world at risk by developing killer robots, according to a report that surveyed major tech players about their stance on lethal autonomous weapons.
Dutch NGO Pax ranked 50 companies by criteria such as whether they were developing technology relevant to deadly artificial intelligence (AI) and whether they were working on related military projects.
The use of AI to allow weapon systems to autonomously select and attack targets has sparked ethical debates in recent years.
Experts warn that such weapons could jeopardize international security and usher in a third revolution in warfare, after gunpowder and nuclear technology.
While acknowledging that AI has the potential to benefit society, the NGO stressed the importance of avoiding its negative effects.
The NGO's global survey graded companies from 12 countries that work on hardware, AI software and system integration, pattern recognition, autonomous and swarming aerial systems, or ground robots.
As many as 21 companies fell into a “high concern” category, notably Amazon and Microsoft, which are both bidding for a $10 billion Pentagon contract to provide the cloud infrastructure for the US military.
This group also includes Palantir, a U.S. company which has been awarded an $800 million contract to develop an AI system “that can help soldiers analyze a combat zone in real time.”
Another 22 companies were placed in a “medium concern” category.
Google, General Robotics and Japan’s SoftBank were among seven companies found to be engaging in “best practice”.
The "best practice" ranking was based on firms' commitment to ensuring that their technology would not be used to develop or produce lethal autonomous weapon systems.
The "Stop Killer Robots" campaign urges countries to ban fully autonomous weapons, which it says would cross a moral threshold. Campaigners also warn that replacing traditional army units with machines could make decisions to go to war easier and therefore escalate tensions and conflicts.
At the recent G20 summit in Japan, Turkey called for devising international standards, ethics and norms to govern AI technology.
Last April, the EU released guidelines for companies and governments developing AI, including the need for human oversight, working towards societal and environmental wellbeing in a non-discriminatory way, and respecting privacy.