IMO it was mostly that people didn't want to rewrite (and maintain) their code for a new proprietary programming model they were unfamiliar with. People also didn't want to invest in hardware that could only run code written in CUDA.

Lots of people wanted (and Intel tried to sell, somewhat successfully) something they could just plug in and run the parallel implementations they'd already written for supercomputers using x86. It seemed easier. Why invest all of this effort into CUDA when Intel was going to come along and make your current code run just as fast as this strange CUDA stuff in a year or two?

Deep learning is quite different from the earlier uses of CUDA. Those use cases were often massive, often old, FORTRAN programs where, to get things running well, you had to write many separate kernels targeting each bit, and everything had to stay on the GPU to avoid expensive copies between GPU and CPU. Early CUDA was also a lot less programmable than it is now, with huge performance penalties for relatively small "mistakes". On top of that, many of the key contributors were scientists rather than professional programmers, who see programming as getting in the way of doing what they actually want to do. They don't want to spend time completely rewriting their applications and optimizing CUDA kernels; they want to keep making incremental modifications to existing codebases.

Then deep learning came along, and researchers were already using frameworks (Lua Torch, Caffe, Theano). The framework authors only had to support the few operations required to get Convnets working fast on GPUs, and it took minimal effort for researchers to run them. It grew a lot from there, but going from "nothing" to "most people can run their Convnet research on GPUs" was much easier for these frameworks than it was for any large traditional HPC scientific application.

Thanks!

It seems funny though: the advantages of GPGPU are so obvious and unambiguous compared to AI. But then again, with every new technology you probably also had management pushing to use technology_a for <insert something inappropriate for technology_a>.

Likewise, in a few decades, when the way we work with AI has matured and become completely normal, it might be hard to imagine why people today questioned its use. But they won't know about the million stupid uses of AI we're confronted with every day :)

> The advantages of GPGPU are so obvious and unambiguous

I remember being a bit surprised when I started reading about GPUs being tasked with processes that weren't what we'd previously understood to be their role (way before I heard of CUDA). For some reason that I don't recall, I was thinking about that moment in tech just the other day.

It wasn't always obvious that the earth revolves around the sun, or that using a mouse would become standard for computing. Knowledge is built. We're pretty lucky to stand atop the giants who came before us.

I didn't know about CUDA until however many years ago. Definitely didn't know how early it began. Definitely didn't know there was pushback when it was introduced. Interesting stuff.

I'm dealing with someone in 2026 insisting that everything has to be written in Python and rely entirely on torch.compile for acceleration rather than any bespoke GPU kernels. Times change, people don't.