The 20 participating companies pledged to "work collaboratively on tools to detect and address online distribution" of AI-generated content meant to deceive voters in elections around the globe, as well as to run educational campaigns and provide transparency, according to a press release.
The companies include Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, Snap, TikTok and X.
The head of the Munich Security Conference, where the accord was announced, touted the move as a "crucial step in advancing election integrity, increasing societal resilience, and creating trustworthy tech practices."
"[The Munich Security Conference] is proud to offer a platform for technology companies to take steps toward reining in threats emanating from AI while employing it for democratic good at the same time," Chair Christoph Heusgen said in a statement.
The accord comes as more than 4 billion people are set to head to the polls this year in more than 40 countries, including the U.S., the press release noted.
"With so many major elections taking place this year, it's vital we do what we can to prevent people being deceived by AI-generated content," Nick Clegg, the president of global affairs at Meta, said in a statement.
"This work is bigger than any one company and will require a huge effort across industry, government and civil society," Clegg added. "Hopefully, this accord can serve as a meaningful step from industry in meeting that challenge."
As political campaigns and their supporters increasingly use AI, concerns have grown about the rapidly advancing technology's power to deceive voters.
Last month, a robocall impersonating President Biden urged voters in New Hampshire not to vote in the state's primary election.
Read more in a full report at TheHill.com.