2. What are they doing? AGI/ASI is a neat trick, but then what? I’m not asking because I don’t think there is an answer; I’m asking because I want the REAL answer.

Larry Ellison was talking about RNA cancer vaccines. Well, I was the one who built the neural network model for the company with the US patent on this technique, and that pitch makes little sense. As the problem is understood today, the computational problems are 99% solved with laptop-class hardware. The remaining problems are not solved by neural networks but by molecular dynamics, which runs in FP64. Even if an FP8 neural structure approximation speeds up its share 100x, FP64 will still be 99% of the computation. So what we today call “AI infrastructure” is not appropriate for the task they talk about.

What is it appropriate for? Well, I know Sam is a bit uncreative, so I assume he’s just going to keep following the “HER” timeline and build a massive playground for LLMs to talk to each other and leave humanity behind. I don’t think that is necessarily unworthy of our Apollo-scale commitment, but there are serious questions about the honesty of the project, and about what transparency we should demand.

We’re obviously headed toward a symbiotic merger in which LLMs and GenAI are completely in control of our understanding of the world. There is a difference between watching a high-production movie for two hours and then going back to reality, versus a never-ending stream of false sensory information engineered individually to specifically control your behavior. The only question is whether we will be able to see behind the curtain of the great Oz. That’s what I mean by transparency: not financial or organizational, but actual code, data, model, and prompt transparency. Is this a fundamental right worth fighting for?
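The arithmetic behind the FP8/FP64 claim is just Amdahl's law: if the neural, FP8-acceleratable part is only a sliver of the total runtime, speeding it up enormously barely moves the whole job. A minimal sketch, with the 99%/1% split taken from the claim above and the 100x factor as an illustrative assumption:

```python
# Amdahl's-law sketch (illustrative numbers, not measurements):
# the FP64 molecular dynamics is assumed to be 99% of runtime,
# the FP8-amenable neural part the other 1%.

def overall_speedup(accel_fraction: float, accel_factor: float) -> float:
    """Speedup of the whole job when only a fraction of it gets faster."""
    return 1.0 / ((1.0 - accel_fraction) + accel_fraction / accel_factor)

neural_fraction = 0.01               # assumed FP8-acceleratable share
speedup = overall_speedup(neural_fraction, 100)   # 100x on that 1%

# FP64's share of the runtime that remains after the acceleration:
fp64_share = (1 - neural_fraction) / (
    (1 - neural_fraction) + neural_fraction / 100
)

print(f"overall speedup: {speedup:.3f}x")   # ~1.010x
print(f"FP64 share after: {fp64_share:.2%}")  # ~99.99%
```

A 100x FP8 accelerator buys about 1% end-to-end, and the FP64 work dominates even more afterward, which is the point: the bottleneck is the part that "AI infrastructure" does not accelerate.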