Use of AI spurs privacy concerns in US

While the recent boom in AI causes both excitement and concern, investigations are underway into data collection by AI firms

By Sevgi Ceren Gokkoyun and Dilara Zengin Okay

WASHINGTON/NEW YORK (AA) – Though rapidly developing artificial intelligence (AI) technologies seemingly make our lives easier, users of AI tools are more concerned about data privacy than about the new technology replacing them in the workplace.

The use of AI tools shapes our work and social lives and brings with it privacy concerns.

While concerns over job losses due to AI replacing human workers are on the rise, the effect of AI tools on personal privacy has become a hot topic.

A survey of some 1,000 college-educated US consumers by the consulting firm KPMG showed that respondents believe the benefits of AI technology outweigh the risks of using it.

Some 42% of the consumers questioned said that generative AI tools have significantly impacted their personal lives, while the remaining 58% said such applications have shaped their professional lives; 51% of respondents expressed significant excitement over generative AI.

More than half of the participants in the KPMG survey believe that generative AI tools will bring improvements across a wide range of areas, from physical health and cybersecurity to personalized recommendations and education.

However, the participants expressed concerns over fake news and content, AI scams, data privacy, disinformation, and cybersecurity risks arising from the increased use of AI.

Among the participants, 51% expressed concerns over job losses due to AI replacing human workers.

As for opinions on federal regulations on AI development, 60% of Gen Z and Millennial respondents said current regulations are “just right” or “too much.”

Additionally, 36% of Gen X and 15% of Boomer and Traditionalist participants agreed with the current US government approach to regulating AI development.



- Biden administration’s executive order on AI

The Biden administration issued an executive order on the safe, secure, and trustworthy development and use of artificial intelligence on Oct. 30, 2023.

The order, issued to protect Americans from potential risks of AI tools, required companies developing AI technologies to share security test results and other information with the US government.

In addition, new rules were introduced to protect people against fraud involving AI-generated content by implementing verification measures.

Meanwhile, the US Federal Trade Commission (FTC) launched a wide-ranging investigation into ChatGPT-maker OpenAI last year for allegedly violating consumer protection laws.

The FTC launched an investigation into Alphabet, Amazon, Anthropic, Microsoft, and OpenAI’s generative AI investments and partnerships in January.

At the beginning of June, news reports in the US revealed that the Department of Justice would investigate chipmaker Nvidia over its role in the AI craze.

*Writing by Emir Yildirim in Istanbul

