This is not only a problem for Spotify, but for every platform on the internet that publishes content, particularly social media. Most people don't actively discern whether something they're looking at is slop, and that's a huge problem for the authenticity of information and everything downstream of it. This issue began with text content back in the 90s, when internet spam boomed after Eternal September expanded the reachable audience, and it evolved slowly until recently, when "passable" artificial content generation both improved exponentially and became financially feasible for bad actors.
There have been reports of Spotify being gamed by gangs in Sweden (and iirc abroad) to monetize artificial engagement[0], which ties in closely with some of the amateur research I've done on bots on Reddit. Before LLMs and other generative AI were publicly available, the "monetizable" content was mostly direct engagement (views, likes, followers, reposts, shotgunning ads, etc.), which spawned bot farms all over the internet selling exactly those results. On Reddit specifically, many of the more sophisticated bot networks and manipulators would simply repost content scraped from other sources like Youtube, Twitter, Quora, and others (including Reddit itself). Some of the more novel agents used Markov generators to get around bot detection tools (both first-party and third-party), but those would often produce nonsense that was easily discernible as such.
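For context on why that nonsense was so easy to spot: a word-level Markov generator of the kind those agents used is only a few lines of code. It picks each next word based solely on the previous one, so the output is locally plausible but globally incoherent. A minimal sketch (function names are mine, not from any particular bot):

```python
import random

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain

def generate(chain, start, length, seed=None):
    """Walk the chain from `start`, picking a random successor each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = chain.get(out[-1])
        if not options:  # dead end: word only appeared at the end of the corpus
            break
        out.append(rng.choice(options))
    return " ".join(out)
```

Every individual word pair in the output occurs somewhere in the scraped corpus, which is enough to slip past naive duplicate-content filters, but there's no sentence-level meaning at all - which is exactly the giveaway human readers picked up on.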
After generative AI took off toward the end of covid, these bot farms and nefarious agents capitalized on it instantly and heavily. This is particularly well known as an issue on Facebook, with the "like this image because this AI-generated 'person' lives in a gutter and has a birthday cake and is missing all seven of their limbs" pictures, but the text content they can produce is insidiously everywhere on sites like Reddit, Quora, and Twitter. A small subset of these agents are poorly made or buggy and have exposed their prompts directly, which is rather embarrassing, but others are incredibly sophisticated and have been used in campaigns that reach far beyond gaming outreach on platforms - many of these bot farms are now also being used for political disinformation and social engineering campaigns, to great effect.
Witnessing the effects of these agents in a place as mundane as a music playlist is a dismal annoyance, but learning that they are in fact being used to alter public opinion and policy, on top of culture and the arts, is disturbing. Many people have scorned large production studios such as A24 for their use of generative AI[1], but not being able to trust even assumedly-mundane content anywhere online is something that most people, and especially the average consumer, are not prepared for. People who are genuinely interested in anything will soon recognize that there is a market for gatekeeping content, but they aren't going to want to participate in it, because the barrier to entry will be completely different from that of the classic internet we have all generally come to accept and build upon.
[0]: https://www.stereogum.com/2235272/swedish-gangs-are-reported... [1]: https://petapixel.com/2024/04/23/a24-criticized-for-using-ai...