Two interesting points to consider

1. If it’s really amazing autocomplete, is there any meaningful distinction between it and AGI?

Being able to generalize, plan, execute, evaluate, and learn from the results could all be seen as building a search graph through inference over known or imagined data points. LLMs are already being applied to all of those steps, and we haven’t even seen what the next generation of compute now being built will enable.
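To make that framing concrete, here is a minimal sketch (my own toy construction, not anything from an actual LLM system) of the plan/execute/evaluate loop as best-first search over a graph of states, where `propose` stands in for model inference generating candidate next steps and `evaluate` scores how promising each one is:

```python
import heapq

def propose(state):
    # Stand-in for inference: generate candidate next states.
    return [state + 1, state * 2]

def evaluate(state, goal):
    # Stand-in for evaluation: lower score means closer to the goal.
    return abs(goal - state)

def search(start, goal, max_steps=1000):
    # Best-first search: repeatedly expand the most promising state.
    frontier = [(evaluate(start, goal), start, [start])]
    seen = {start}
    for _ in range(max_steps):
        if not frontier:
            break
        score, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in propose(state):
            if nxt not in seen and nxt <= goal * 2:
                seen.add(nxt)
                heapq.heappush(
                    frontier, (evaluate(nxt, goal), nxt, path + [nxt])
                )
    return None

print(search(1, 10))  # a valid path from 1 to 10, e.g. [1, 2, 4, 8, 9, 10]
```

The point is only structural: swap the toy `propose` for a model’s sampled continuations and `evaluate` for a learned critic, and the loop looks a lot like the generalize/plan/execute/evaluate cycle described above.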

2. “Fancy autocomplete” is too narrow a label for the range of use cases CUDA is already supporting, which go well beyond textual prediction.

If information of every modality can be “autocompleted,” that’s a pretty incredible leap for robotics.

* edited to compensate for iPhone autocomplete, the irony.