Awesome: if you're a victim of one AI company taking your voice, you can help yourself by sending your voice to another AI company!
> Audio is never used to train commercial models without explicit consent
I'm sure Mercor has explicit consent as well; legal teams are reasonably good at covering their asses with license terms.
Now 40k people have learned that biometrics aren't passwords. You can't rotate your voice.
In an idealized world, the legal system provides an accessible alternative to violence for resolving disputes, but it's increasingly wielded as an impossibly Kafkaesque system for maintaining corporate power over individuals.
I think "CYA" is an overly flowery term for the reality: they're blocking every avenue of legal recourse, while a variety of other avenues still exist, and adding friction to those requires expensive, ongoing countermeasures (owning multiple residences, hiring security, etc.)
(To be clear, I am advocating for a more accessible and level legal system, not for UHC-style violence.)
A lot of people were basically wiretapping themselves AND their businesses!
While many Mercor "contractors" claim Mercor over-reached with its data gathering via Insightful, it's kind of smart: people are too afraid to complain much, knowing they'd not only lose their primary job but also open themselves up to uncapped liability for willful misconduct.
[0] https://www.wsj.com/tech/ai/mercor-ai-startup-personal-data-...
Selling the solution to the problem you caused ought to be illegal.
The good thing about the grift economy is that it grifts itself — it's grifters all the way down, like the turtles!