Fashion and luxury: Generative AI facing ethical, legal challenges
Nicola Mira
Should we be afraid of generative AI? How can we use it ethically? Are the designs created by means of generative AI copyrightable? What about the original creations exploited in this process? These were some of the questions to which the guest participants at the 22nd edition of the Rencontres internationales de la mode forum – staged on October 14 during the Hyères Festival by the French Fashion and Haute Couture Federation (FHCM) – tried to provide some answers. These questions and concerns are increasingly relevant for fashion labels: the use of generative AI has been growing exponentially for almost a year, within an almost non-existent legal framework.
The Hyères debate, exciting though it was, wasn’t altogether reassuring, because the road towards fencing in this disruptive new technology is looking tortuous. The EU is in the process of passing new legislation in this area, the AI Act, which was approved on first reading by the European Parliament in June 2023, and is set to become the world’s first law of its kind. It was proposed by the European Commission, and is currently in the final stages of discussion between the EU Council and Parliament, with a view to reaching a common position. “[The AI Act] is a harmonisation effort, it contains a set of rules that must be harmonised at EU level to ensure that artificial intelligence technologies are safe, that they are consistent with European values, and that they can also foster innovation,” said Eric Peters, the deputy head of unit in charge of the European Commission’s Digital Decade Strategy 2030.
“The bill has been essentially created to ban various inappropriate uses of these technologies, such as China’s social credit system, and also to highlight a number of technologies or purposes that are high-risk, and which will need to be regulated to ensure that those risks are properly controlled,” added Peters. The plan is to regulate the use of generative AI more strictly, introducing transparency obligations and setting limits for technological models considered at risk.
“The law will probably be passed in 2025, and become operational in 2026, but the issue needs to be tackled now. How will [the law] be applied, how will controls be implemented? These topics are still up for debate, and need to be thought through very carefully,” said Laurence Devillers, professor of artificial intelligence at Paris’s Sorbonne University/CNRS. She recounted an anecdote that says a lot about the challenges involved. Devillers was invited to join a research committee on AI ethics, and expected to meet mainly with scientists, but instead found herself “dealing with all the big tech giants from China and the USA, all of them keen to put a brake on regulation.”
Devillers warned against the growing hype around generative AI: “We must speak out against the generative AI myth. In most surveys, people see the machine as an authority. We must stop and overturn this belief, we must be critical. Big tech wants to scare us so that we won’t take action, but [AI] is just a machine, with no notion of time or space. AI is incapable of making sense. Of course, [AI] is a great tool, but you must know how to use it.” Devillers also questioned the future and impact of AI: “Generating synthetic data has major consequences. When we one day reach 80% artificial data, what shall we do with the original knowledge we currently have?”
Less than a year ago, almost no one had heard of generative AI. The EU law on digital services, the Digital Services Act, which regulates the digital world and online giants, came into force at the end of 2022, and did not even mention the word ‘AI’. But generative AI is now booming. “We are facing an absolutely extraordinary challenge. All stakeholders must harmonise their positions in a very short time, and the general public has only recently become aware of generative AI. We are facing a situation where we need to be active on all fronts, while the technology continues to evolve. The important thing is to set a course, and really make sure that we are sending the right signals to the players involved,” said Peters.
Meanwhile, generative AI is already flooding the market, and it doesn’t care about abiding by existing rules. For example, it is developed with no respect for copyright, as noted by Vincent Fauchoux, associate lawyer and managing partner at legal firm DDG, which advises both generative AI firms and users. “Built through practices like data scraping, and the storing and indexing of all kinds of documents, images and videos retrieved from the web without permission, these technologies have carried out the theft of the century. They have committed an original sin right from the outset,” he said.
“Finding the source of an output produced by generative AI is almost impossible, because we are dealing with an aggregated set. Likewise, one cannot find a tomato in its original form in a ratatouille. We’re unable to identify the original creation that served as [the output’s] basis. Original sin has left no trace,” added Fauchoux. He went on to say that, “although the technology has not been developed lawfully, the work created by generative AI is actually copyrightable, because its original ingredient is impossible to find.”
How to remedy this paradox? “A first solution would be entering into extraordinary agreements with the players involved. Another is advocating traceability at all levels in all production contracts,” said Fauchoux. The French Society of Authors, Composers and Music Publishers (Sacem) is considering, for example, the possibility of exercising a right of opposition on the use of its authors’ data, and of instituting an AI tax to authorise machines to use authors’ artworks for development purposes. Others are proposing that authors should allow generative AI publishers the use of their artistic content only if they are paid a royalty.
According to estimates cited by Bloomberg, the generative AI market is expected to be worth $67 billion in 2023, and $1.3 trillion by 2032. Investment in the sector is also expected to grow strongly. “There is a lot to be gained by using these technologies. But it must be done carefully, with a focus on good practices. We must help each other out, and share information so that human beings, who are creative, responsible and aware of the side effects, are able to intervene,” said Devillers.
As the EU is preparing to regulate the use of generative AI, Peters tried to reassure the audience: “A whole new system of governance will of course be put in place, with controls and rules ensuring that [generative AI] players are aligned with European values.” Besides adopting a common regulatory position, the speakers noted that it would also be crucial for more powerful players capable of leading the field to emerge in Europe.