"…a straight shot to safe superintelligence and in particular to spend a couple of years doing R&D on our product before bringing it to market," Gross said in an interview.
A couple years??
If you raise 1B in VC, it'd be a shame to burn it all at once :D
well since it's no longer ok to just suck up anyone's data and train your AI, it will be a new challenge for them to avoid that pitfall. I can imagine it will take some time...
I believe the commenter is concerned about how _short_ this timeline is. Superintelligence in a couple years? Like, the thing that can put nearly any person at a desk out of a job? My instinct with unicorns like this is to say 'actually it'll be five years and it won't even work', but Ilya has a track record worth believing in.
what laws have actually changed that make it no longer okay?
we all know that openai did it
There are class actions now like https://www.nytimes.com/2024/06/13/business/clearview-ai-fac...
Nobody even knew what OpenAI was up to when they were gathering training data - they got away with a lot. Now there is precedent and people are paying more attention. Data that was previously free/open now has a clause that it can't be used for AI training. OpenAI didn't have to deal with any of that.
Also, OpenAI used cheap labor in Africa to tag training data, which was also controversial. If someone did that now, they'd be the ones to pay. OpenAI can always say "we stopped," like Nike did with sweatshops.
A lot has changed.
A lot of APIs changed in response to OpenAI hoovering up data. Reddit's a big one that comes to mind. I'd argue that the last two years have seen the biggest change in the openness of the internet.
What do you expect? This seems like a hard problem to solve. Hard problems take time.
They’d need a year or two just to rebuild a ChatGPT-level LLM, and they want to go way beyond that.