News

We have participated in an international study on the caveats of "AI autophagy"

4 March 2025

“AI autophagy occurs when generative models begin to consume their own output”

Our colleague, Javier Del Ser, has participated in an international study analysing the emerging caveats of "AI autophagy", published in the prestigious Nature Machine Intelligence journal. The study, led by Imperial College London in collaboration with experts from diverse institutions, warns that artificial intelligence (AI) models trained on data generated by other AI systems risk performance collapse, loss of output diversity and adverse ethical consequences.

The phenomenon, known as AI autophagy, occurs when generative models begin to consume their own output, progressively degrading the quality of what they generate. The main findings of the study include the silent contamination of training data, the growing difficulty of distinguishing real from AI-generated content, and the lack of effective regulatory frameworks to address these challenges.
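As a toy illustration of this feedback loop (our own sketch, not code from the study), one can stand in for a generative model with a simple Gaussian fitted to its training data: when each new "generation" is trained only on samples drawn from the previous one, the fitted spread typically collapses over many generations, a minimal analogue of the loss of diversity the study describes.

```python
import random
import statistics

# Toy sketch of AI autophagy: the "generative model" is just a Gaussian
# fitted to its training data. Each generation is trained exclusively on
# synthetic samples drawn from the previous generation's model.
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(10)]  # small "real" dataset

sigmas = []
for generation in range(300):
    mu = statistics.fmean(data)      # fit the model: estimate mean...
    sigma = statistics.stdev(data)   # ...and spread from current data
    sigmas.append(sigma)
    # The next generation consumes only the current model's own output.
    data = [random.gauss(mu, sigma) for _ in range(10)]

print(f"initial spread: {sigmas[0]:.3f}, spread after 300 generations: {sigmas[-1]:.3g}")
```

With small per-generation datasets, the estimated spread performs a downward-drifting random walk, so the final spread is far below the initial one: diversity silently evaporates even though each individual generation looks locally plausible.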

The study offers a comprehensive analysis of the problem, synthesising previous studies and proposing technical solutions, such as the use of watermarking, hybrid training and detection tools. Furthermore, it underlines the need for global governance frameworks to ensure the ethical and transparent use of synthetic data.

Through this research, Javier Del Ser and the international team of scientists involved in the study are contributing to the development of strategies to mitigate the risks that arise when AI consumes its own output, helping to ensure its responsible and sustainable evolution.