> Sutskever said his new venture made sense because he "identified a mountain that's a bit different from what I was working on."

I guess the "mountain" is the key. "Safe" alone is far from being a product. As for current LLMs, I'd even question how valuable "safe" can be.

To be honest, given the way "safe" and "alignment" are perceived on r/LocalLLaMA, in two years it's not going to be very appealing.

We'll be able to reproduce most of GPT-4o's capabilities locally on affordable hardware, including "unsafe" and "unaligned" data, as quantization noise is drastically reduced, meaning smaller quantized models that run on good-enough hardware.
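For context, this is roughly what that local setup looks like already; a minimal sketch assuming llama-cpp-python and a 4-bit GGUF checkpoint (the model path and parameters below are placeholders, not a specific recommendation):

    # Minimal local inference with a quantized model via llama-cpp-python.
    # Assumption: a 4-bit GGUF checkpoint has been downloaded locally;
    # the path below is hypothetical.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",  # placeholder file
        n_ctx=4096,        # context window
        n_gpu_layers=-1,   # offload all layers to GPU if one is available
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Summarize what SSI is in one sentence."}],
        max_tokens=128,
    )
    print(out["choices"][0]["message"]["content"])

On consumer GPUs (or even CPU-only), this kind of quantized setup is what makes the "run it yourself" argument economically interesting.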

We'll see a huge reduction in price and inference times within two years, and whatever SSI trains won't be economically viable enough to recoup that $1B investment, guaranteed.

It all depends on GPT-5's performance. Right now Sonnet 3.5 is the best, but there's nothing really groundbreaking. SSI's success will depend on how much uplift it can provide over GPT-5, which already isn't expected to be a significant leap beyond GPT-4.