AI Music Isn't the Problem. The Sameness Was Already Here.

AI-generated music can be its own category. But the flood of generic, interchangeable music didn't start with AI - it's been the reality for years.

The conversation around AI-generated music has been growing louder. Tools like Suno and Udio can produce a full song in seconds, and understandably, that makes a lot of people uneasy. There are real questions about artistry, authorship, and what the future of music looks like when machines can generate it on demand.

I’m not here to defend AI music. Honestly, I think it can comfortably exist in its own category - labeled, filtered, and left for people to engage with or ignore as they see fit. That seems reasonable and probably inevitable.

But I do think the AI panic is obscuring something we should’ve been talking about a long time ago. The problem everyone is afraid AI will create - a flood of generic, interchangeable music drowning out originality - isn’t a future threat. It’s been the reality for years. And we got here without any help from algorithms generating songs.

The Flood That Nobody Talks About

The numbers paint a pretty clear picture. According to Luminate’s 2025 Year-End Music Report, 106,000 new tracks are uploaded to streaming platforms every single day. The total catalog has ballooned to over 253 million tracks. Of those, 88% received fewer than 1,000 streams in the entire year. And nearly half - about 120.5 million tracks - got somewhere between zero and ten plays total for the whole year.

This didn’t happen overnight, and it isn’t because of AI. The independent and DIY distribution sector now accounts for 96.2% of daily uploads. The democratization of music production - affordable DAWs like FL Studio, Ableton, Logic Pro, and GarageBand, combined with distributors like DistroKid and TuneCore - removed every barrier between having an idea and putting it on Spotify. That’s genuinely wonderful in a lot of ways. More people than ever can make and share music, and some incredible artists have emerged from bedrooms with nothing but a laptop and talent.

But the side effect is undeniable. When the tools are the same, the tutorials are the same, the sample packs are the same, and the reference tracks are the same, a lot of the output ends up sounding… the same. Not because the people making it lack passion, but because the ecosystem quietly rewards familiarity over experimentation.

The Metalcore Question

I don’t think any genre illustrates this better than modern metalcore.

The early days of the genre were exciting. Bands like Converge, Integrity, and early Killswitch Engage were genuinely fusing hardcore punk energy with metal precision, and it felt fresh. But once the formula crystallized - aggressive verses, melodic choruses, low-tuned guitars, polished production, and breakdowns on schedule - it became remarkably difficult to tell one band from another unless someone had a truly distinctive voice or style.

This isn’t just an outsider’s complaint. Metal Hammer’s Stephen Hill noted that some later metalcore acts had more in common with “airbrushed boy bands” than with the counter-cultures that birthed the genre. Musicians within the scene have said similar things, sometimes more colorfully. YouTuber Jarrod Alonge once stitched together breakdowns from several well-known metalcore bands into a seamless medley, and the fact that it works - that you genuinely can’t tell where one band ends and another begins - says something.

There are always exceptions. There are always bands pushing boundaries within any genre. But the ratio of derivative to distinctive in modern metalcore is hard to ignore. The sheer volume of bands running the same playbook with the same production, the same song structures, and the same emotional beats creates a landscape where originality becomes the exception rather than the rule.

EDM and the Formula

Electronic dance music has a similar story, and some of its biggest names have been surprisingly candid about it.

Deadmau5 has openly said that mainstream EDM “all sounds the same.” Avicii, before his passing, remarked that most EDM lacked “longevity.” Porter Robinson described feeling that mainstream EDM was oriented toward “entertainment” rather than artistry, which led him to deliberately move away from it. Wolfgang Gartner described the dominant sound as “this mashup of every single subgenre possible, to try and appeal to the most people possible, with these cheesy played-out trance pads and vocal hooks.”

The underground electronic scene - deep techno, experimental house, drum and bass - has always had innovation and genuine creative risk-taking. But the mainstream EDM that fills festival stages and Spotify playlists often operates on a pretty narrow formula. Build, tension, drop, repeat. The sounds are accessible enough that replicating what’s popular is relatively straightforward, and the market rewards tracks that fit neatly into existing playlists rather than ones that challenge listeners.

This isn’t a condemnation of everyone working in electronic music. It’s an observation that when a genre’s commercial infrastructure optimizes for a specific formula, the output tends to converge. And that convergence happened long before anyone was worried about AI.

K-Pop: The Quiet Part Out Loud

Then there’s K-pop, which in some ways is the most fascinating example because it barely pretends to be anything other than what it is.

The K-pop system is, by its own architect’s description, a manufacturing process. Lee Soo-man, founder of SM Entertainment, called it “cultural technology” and described his approach as codifying “the entire process of producing culture into a form of technology by creating a formula and manualizing it.” Entertainment companies invest in trainee programs lasting 3 to 10 years, with intensive instruction in vocals, dance, languages, and media presence. The companies control song selection, production, choreography, and public image. SM Entertainment alone receives 300,000 applicants annually.

Even within the industry, voices are starting to question the formula. Min Hee-jin, the producer behind NewJeans, publicly called the factory system “a disease for the industry” in late 2024, saying “you will never reach the top spot if you only follow what’s already been done over and over again.”

K-pop has been enormously successful, and there are genuinely talented people within it. But the system is designed to produce a consistent product, and consistency by definition means less variation. When the conceptual overlaps between groups - visual aesthetics, lyrical themes, musical structures - keep growing, it becomes fair to ask how much of the output is art and how much is product designed to hit the same marks as the last successful product.

Where AI Actually Fits

So here’s the thing. AI-generated music can and probably should be its own category. Label it. Let listeners make informed choices about what they’re hearing. That’s perfectly fine.

But I think it’s worth being honest about the landscape AI is entering. It’s not disrupting a golden age of musical diversity. It’s arriving in a market that was already saturated with 106,000 tracks a day, where 88% of them disappear into the void, where entire genres have calcified around formulas, and where one of the most successful music industries on the planet openly describes its process as manufacturing.

AI didn’t create the sameness problem. It’s just the newest participant in a system that was already optimized for it. The bedroom producer revolution gave everyone the same tools and the same path. Genre conventions narrowed what was commercially viable. Algorithms rewarded what sounded like what was already working. And here we are.

None of this means great music isn’t being made. It absolutely is, in every genre, by people who care deeply about their craft. But those people were already swimming upstream before AI entered the picture. The challenge of standing out in a sea of similarity isn’t new - it’s just getting one more wave added to it.

Maybe instead of only asking “what do we do about AI music,” we should also be asking why so much of the human-made music already sounds like it could have been generated by a formula. That question has been waiting for us for a while now. AI just made it harder to avoid.
