The new policies require creators to disclose when they've created or incorporated AI-generated content in their posts, including content that "realistically depicts an event that never happened" or shows "someone saying or doing something they didn't actually do."
"Creators who consistently choose not to disclose this information may be subject to content removal, suspension from the YouTube Partner Program, or other penalties," YouTube said in a news release on Tuesday.
"We'll work with creators before this rolls out to make sure they understand these new requirements," it added.
Viewers will also be allowed to submit a request form for YouTube to remove AI content "that simulates an identifiable individual, including their face or voice," our colleague Olafimihan Oshin reported.
The company noted that not all requests will be honored and that it will "consider a variety of factors when evaluating these requests."
"This could include whether the content is parody or satire, whether the person making the request can be uniquely identified, or whether it features a public official or well-known individual, in which case there may be a higher bar," the release said.
YouTube also said it plans to introduce a process through which its music industry partners can request the removal of AI content that "mimics an artist's unique singing or rapping voice."
The release comes after Meta, the parent company of Facebook and Instagram, announced last week that it would require political advertisers on its platforms to disclose when they use AI or other digital methods.
Read more in a full report at TheHill.com.