The Federal Trade Commission (FTC) this week released a new report detailing key takeaways from an October 2023 public virtual roundtable that examined the impact of generative artificial intelligence (AI) on creative fields.
The report highlights concerns raised by working creative professionals about the use of generative AI, including the collection of their work without consent, nondisclosure of that use, and the potential for AI-generated output to compete with human-made work. Key concerns raised during the roundtable include:
- Collection without Consent: Creative professionals expressed concern that their past work was being collected and used without their consent or awareness to train generative AI models. Some participants noted that AI developers may be using expansive interpretations of prior contractual agreements to justify this practice.
- Nondisclosure: Participants also expressed concern that they might not even know their works are being used, because many AI developers do not publicly disclose which works have been included in training data. This lack of transparency makes it difficult for creative professionals to track and manage their rights.
- Competing for Work with AI: Participants said that generative AI outputs are starting to appear in the venues where creative professionals compete for work, potentially making it more difficult for consumers and potential publishers to find human-made work. AI-generated content could also lower the quality standards expected of human-made work.
- Style Mimicry: Some participants expressed concern about generative AI tools being used to mimic their unique styles, brands, voices, and likenesses, which could allow strangers and former clients to create knockoffs, including synthetic voices and images. This could damage creative professionals' reputations and brand value.
- Fake Endorsements: Participants said generative AI has been used to create false depictions of artists selling products they never endorsed or used, and has been used by trolls to generate offensive content with artists' cloned voices. Beyond further reputational and brand harm, such misuse could also lead to legal action.
The report recommends that AI developers adopt an opt-in approach to using artists' work, giving artists control over whether their work is used for generative AI. This would allow creative professionals to protect their rights and ensure that their work is used in ways consistent with their values.
The report also recommends that AI developers be more transparent about their data practices, including the types of data they collect, how that data is used, and how it is protected. Such transparency would help build trust between AI developers and creative professionals and address concerns about the misuse of data.
The report notes that, although many of the concerns raised at the event fall outside the scope of the Commission's jurisdiction, targeted enforcement under the FTC's existing authority in AI-related markets can help protect fair competition and prevent unfair or deceptive acts or practices.