The long-standing stock photography agency Getty Images has co-written and published an open letter calling for copyright protection, responsible development, and full transparency in using generative AI models and AI-generated media.
This move comes months after the same agency sued the AI lab Stability AI for copyright infringement involving millions of images from Getty’s catalog, and the points the missive touches on are part of an ongoing discussion in the photography and artistic fields.
The open letter, signed by Getty Images, The Associated Press, Agence France-Presse, the European Pressphoto Agency, and The Authors Guild, among several other prominent press and media organizations, opens its title with “Preserving public trust in media,” and the signatories believe this trust can only be preserved through standardized policies for the training, usage, and publication of generative AI tools and their outputs.
The signing entities state their belief in the positive transformations that generative AI technology can bring to society and the media ecosystem. Their missive is not a call against this novel technology but rather an effort, aimed at industry companies, governments, and international law organizations, to raise awareness of the pressing concerns surrounding the vertiginous growth of generative AI beyond any legal framework that can properly contain it.
They bring attention to the fact that, even without malicious intent, the current state of generative AI model development can still potentially result in misinformation and copyright infringement, impacting the press and media industries.
One of the letter’s requests is to establish a framework that requires AI developers to obtain permission from copyright holders before using their intellectual property in training datasets for generative AI models, and before reproducing copies of their work in the images generated by those models.
Clearly, this relates to the rising concern and discontent among artists and photo agencies around the world over the fact that many current AI image-generation apps have been trained on billion-image datasets built from copyrighted content scraped from the web, without authorization and without fair compensation to copyright owners.
Getty Images, for example, has an ongoing lawsuit against Stability AI for allegedly using over 12 million photos from the agency’s library, without permission, in the training datasets for its popular AI image generator, Stable Diffusion.
The letter argues that developers should be legally required to obtain the rights to use copyrighted material in their datasets. It also argues that the new legal framework for AI visual technology should enable media companies, like the signatories themselves, to collectively negotiate with AI developers the terms and limitations of use for copyrighted content in generative AI model training.
The communication addresses another widely discussed issue with synthetic media: the fact that it still reproduces, and sometimes amplifies, social bias and potentially harmful concepts, and that it can occasionally produce incorrect or misleading content.
They strongly believe all developers of AI visual tools should define and follow policies that ensure the reduction, and eventual elimination, of these negative aspects in their results, fostering a healthier and more ethically responsible environment for AI-generated images.
It’s worth mentioning that many AI image-generation tools are already working on this and offer various filters and user guidelines aimed at precisely these problems.
Finally, the media and press organizations stress the importance of making AI-generated images as transparent as possible, to encourage responsible use of the technology and eliminate the potential for misinformation and other ill-intentioned uses.
They propose that all AI image-generation tools should clearly label their outputs as AI-generated, and that all users should be required to disclose the synthetic nature of the media they publish.
Adobe and many other companies have already subscribed to this idea through the Content Authenticity Initiative, which the software giant started last year and which includes the newly designed Adobe Content Credentials featuring, you guessed it, an AI-generated label for synthetic pictures. Stock photo agencies are also on board: those that offer AI-generated photos in their libraries include mandatory “AI-generated” and even “illustration” tags, clearly disclosing the origin of the visuals for all potential users.
All ten signing entities agree that the issue is not generative AI itself, but the fact that, so far, it has grown at such a pace that the policies and standards needed to regulate it lag far behind.
They believe the industry can adapt to this technological advance, as it has to other impactful advancements in the past, as long as appropriate laws and limitations are put in place and enforced, so that the livelihood and integrity of artists are protected and the trust and reputation of press content aren’t compromised.
What do you think of this open letter? Do you agree with their proposal?
Header image: Copyright by davidpereiras / photocase.com, all rights reserved