Tech industry vows to combat 'deceptive AI' in elections
Companies, including Amazon, Google, Meta, X, OpenAI sign Tech Accord to Combat Deceptive Use of AI in 2024 Elections
By Burak Bir
MUNICH, Germany (AA) - The world's leading technology companies pledged Friday to take action against the deceptive use of artificial intelligence (AI) in the 2024 elections.
In a joint statement, 20 tech companies, including Amazon, Google, Meta, Microsoft, OpenAI and X, vowed to work together to detect and counter harmful AI content.
"Today at the Munich Security Conference (MSC), leading technology companies pledged to help prevent deceptive AI content from interfering with this year’s global elections in which more than four billion people in over 40 countries will vote," said the statement.
The pledge came during the 60th edition of the MSC, a three-day meeting in Munich where the world's most pressing security challenges are being debated.
"The ‘Tech Accord to Combat Deceptive Use of AI in 2024 Elections’ is a set of commitments to deploy technology countering harmful AI-generated content meant to deceive voters," said the statement.
It noted that signatory companies pledge to work collaboratively on tools to detect and address the online distribution of such AI content, "drive educational campaigns, and provide transparency, among other concrete steps."
The accord is an important step toward safeguarding online communities against harmful AI content and builds on the signatory companies' ongoing work, the statement noted.
According to the Tech Accord to Combat Deceptive Use of AI in 2024 Elections, the companies agreed to specific commitments, including "developing and implementing technology to mitigate risks related to Deceptive AI Election content, including open-source tools where appropriate."
Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap Inc., Stability AI, TikTok, Trend Micro, Truepic and X have signed the accord as of Friday, added the statement.