AI-generated media: Industry leaders and advocates publish ethical guidelines for synthetic media – Times of India

Partnership on AI (PAI) is a non-profit coalition of academic, civil society, industry, and media organisations that creates solutions so that AI advances positive outcomes for people and society. PAI has now unveiled a first-of-its-kind Framework for the ethical and responsible development, creation, and sharing of synthetic media. The Framework is backed by an inaugural cohort of launch partners including Adobe, BBC, CBC/Radio-Canada, Bumble, OpenAI, TikTok, and Witness, as well as synthetic media startups such as Synthesia, D-ID, and Respeecher. The full document, Partnership on AI’s Responsible Practices for Synthetic Media: A Framework for Collective Action, is available on PAI’s website.
What is the guideline framework?
Partnership on AI’s Responsible Practices for Synthetic Media is a set of guiding recommendations for those creating, sharing, and distributing synthetic media – also known as AI-generated media. The guidelines were developed through a year-long process with input from over a hundred contributors. The effort was prompted by a belief among industry experts that the evolving landscape of synthetic media represents a new frontier for creativity and expression, but also holds troubling potential for misinformation and manipulation if left unchecked.
Head of AI and Media Integrity at PAI, Claire Leibowicz said, “In the last few months alone we’ve seen AI-generated art, text, and music take the world by storm. As the field of artificially-generated content expands, we believe working towards a shared set of values, tactics, and practices is critically important and will help creators, content platforms, and distributors use this powerful technology responsibly.”
How PAI developed the guideline framework
PAI says it worked with over 50 organisations to refine the Framework, including synthetic media startups, social media and content platforms, news organisations, advocacy and human rights groups, academic institutions, policy professionals, experiential experts, and public commenters. The results of this effort build on PAI’s work over the past four years to evaluate the challenges and opportunities of synthetic and manipulated media.
What launch partners said about the guideline framework
Adobe: “Adobe launched the Content Authenticity Initiative (CAI) in 2019 to increase trust and transparency online. Since then, our membership has grown to over 900 leading media and tech companies, publishers, creators, and camera manufacturers working to address misinformation at scale through attribution,” said Andy Parsons, Senior Director of the Content Authenticity Initiative at Adobe. “As synthetic media techniques become increasingly powerful, we are committed to advancing standards and frameworks that promote ethical creation and use of digital content. We are excited to be involved in the PAI Framework and look forward to continuing to shape the future of responsible use of AI.”
BBC: “The BBC, as a PAI partner, is pleased to have made a contribution to developing PAI’s Responsible Practices for Synthetic Media,” said Jatin Aythora, Director of Research and Development at the BBC. “Establishing principles for the responsible use of synthetic media has enormous value as many organisations grapple with its implications. As a public service broadcaster with a focus on trust and safety, we look forward to reflecting work in this area in our own editorial guidelines as appropriate and continuing to support and develop work in this area.”

Bumble: “We are steadfast advocates for safe spaces online for less represented voices. Our work with PAI on developing and joining the Framework, alongside an amazing group of partners, is an extension of that,” said Payton Iheme, VP of Global Public Policy at Bumble. “We are especially optimistic about how we continue to show up to address the unique AI-enabled harms that affect women and marginalized voices.”
TikTok: “TikTok is built on the belief that trust and authenticity are necessary to foster safe, creative and joyful online communities, and we’re proud to support Partnership on AI’s Responsible Practices for Synthetic Media. Like many technologies, the advancement of synthetic media opens up both exciting creative opportunities as well as unique safety considerations,” said Chris Roberts, Head of Integrity and Authenticity Policy at TikTok. “We look forward to collaborating with our industry to advance thoughtful synthetic media approaches that empower creative expression by increasing transparency and guarding against potential risks.”