https://github.com/ListenNotes/ai-generated-fake-podcasts/bl...
Google is taking a different approach this time, moving quickly. While NotebookLM is indeed a remarkable tool for personal productivity and learning, it also opens the door for spammers to mass-produce content that isn't meant for human consumption.
Amidst all the praise for this project, I'd like to offer a different perspective. I hope the NotebookLM team sees this and recognizes the seriousness of the spam issue, which will only grow if left unaddressed. If you know someone on the team, please bring this to their attention: could the team provide a tool, or some plain-English guidelines, to help detect audio generated by NotebookLM? Is there a watermark or any other identifiable marker that can be checked?
Just recently, a Hacker News post highlighted how nearly all Google image results for "baby peacock" are AI-generated: https://news.ycombinator.com/item?id=41767648
It won't be long before we see a similar trend with low-quality, AI-generated fake podcasts flooding the internet.
What's new? Every novel class of genAI product has brought a tidal wave of slop, spam, and/or scams to the medium it generates. Anyone working on a product like this who doesn't anticipate it being used to mass-produce vapid white-noise "content" on an industrial scale hasn't been paying attention.