Bob Sturm presents: “I am Troubled”

May 5, 2026

On March 6, 2026, the Cultural Analytics series welcomed Bob L. T. Sturm from KTH Royal Institute of Technology, Sweden, for a talk and discussion about his research in AI-generated music. The Cultural Analytics series is a joint initiative between the Berkeley Institute for Data Science (BIDS) and the School of Information, aimed at highlighting research that focuses on the data-driven analysis of cultural phenomena. The event started with a talk held at the School of Information, followed by a discussion held at the AI Futures Lab.

Transformation of music with LLMs

Sturm’s talk focused on how AI has transformed music production and practice, and on his research through the Music at the Frontiers of Artificial Creativity and Criticism (MUSAiC) project. Launched in 2020, MUSAiC studies how AI technology is changing our relationships with music. AI models have already changed the way we write, and they are making music production easier and more accessible as well. Sturm discussed the AI explosion of 2023 and the subsequent explosion of machine-generated music, as well as a potential future “musicpocalypse” (channeling Matthew Kirschenbaum’s “textpocalypse”), in which machine-generated music may become the norm. Platforms such as Suno AI allow users to generate music from text prompts, and Sturm noted the shift toward AI-produced music on streaming platforms such as Spotify.


Photo: Bob Sturm explains how his optimism about AI has changed over the course of the MUSAiC project

What can be done about this situation? Sturm recommends broader AI literacy: educating the public on how these models work and how to use them appropriately, and openly discussing the balance between their risks and benefits, can all help mitigate the impact of generative AI.

Contradictions in using and resisting AI

Sturm described the contradictions he experiences while using AI in his creative and teaching practices, specifically in balancing its risks and benefits. The first contradiction concerns copyright law: he disapproves of AI companies training their models on copyright-protected music, even though he himself creates music based on sampling and reusing other music.

The second contradiction he explored was how to investigate the possibilities of AI-generated music without overhyping what AI can accomplish. Sturm described experiments he conducted with LLMs, using text prompts to generate music; he even tried generating music from the text of emails in his spam folder to see how well the models handled different inputs. However, as he explained, “every time you interact with these systems, you are ‘voting’ for this system.” Sturm wants to continue experimenting with the possibilities AI can open up for him as a musician, while recognizing the drawbacks and limitations of what it can produce.

The final contradiction relates to teaching: how can resistance to overreliance on AI be built into courses and research that are themselves focused on AI? Sturm concluded that these contradictions are the sources of his troubles. There is no clear resolution, but he suggested some paths forward, including acceptance, indulgences (e.g., performing one beneficial action for each interaction with a generative AI platform), parasitical resistance, total rejection of AI, and working with small data and local compute.

A closer look at AI and musical meaning

Following the talk, the conversation continued at the AI Futures Lab, diving deeper into trends in AI-generated music and the meaning behind them. The discussion covered several compelling threads that extended Sturm’s contradictions.


Photo: The discussion with Sturm continued at AIFL

One highlight of the discussion was the challenge of automatically detecting AI-generated music. Sturm’s research team has considered steganography, the practice of concealing information within other data, as a way to watermark AI-generated music and detect it in larger datasets. The group also discussed probabilistic approaches, raising the question of whether it is even possible to identify the presence of human intention in a piece of music, and what it would mean to do so.

Another highlight came when the group explored the question of how AI-generated music should be assessed against cultural and social norms. As Sturm put it, “you don't create algorithms for music without a deep understanding of the dimensions of value and how it's used.” This led to an example of political groups using AI-generated music to spread ideological content, underscoring that the stakes of this research go far beyond music theory or computer science.

Troubles worth having

Sturm’s “troubles” are not his alone; they are shared by researchers, educators, and practitioners. As technology continues to reshape cultural production, events like this one serve as a reminder that the most important conversations happen not in the models themselves, but between the people trying to make sense of them.

Find the recording of the full talk on the I School website. To join more conversations like this and stay in touch with the BIDS community, please follow us on Bluesky and LinkedIn and subscribe to the BIDS newsletter.