Let's say you already have deep knowledge of GPU architecture and experience optimizing GPU code to shave 0.5ms off a kernel's runtime. But you got that experience writing graphics code for rendering, and have little knowledge of AI beyond a surface-level understanding of how neural networks work.

How can I leverage that experience into earning the huge amounts of money that AI companies seem to be paying? Most job listings I've looked at require a PhD specifically in AI/math and 15 years of experience (I have a master's in CS, and am nowhere close to 15 years of experience).

I've only done the CUDA side (and not professionally), so I've wondered myself how much those skills transfer in either direction. I imagine some of the specific techniques are fairly different, but a lot of it is just your mental model for programming, which can be a bit of a shift if you're not used to it.

I'd think things like optimizing for occupancy/memory throughput, ensuring coalesced memory accesses, tuning block sizes, using fast math alternatives, writing parallel algorithms, and working with profiling tools like Nsight are fairly transferable?
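
For what it's worth, the coalescing point is exactly the kind of thing that carries over directly. A rough sketch (hypothetical kernel names, purely for illustration) of the same element-wise scale written with coalesced vs. strided global-memory access:

    #include <cuda_runtime.h>

    // Coalesced: consecutive threads in a warp read consecutive addresses,
    // so each warp's loads collapse into a few wide memory transactions.
    __global__ void scale_coalesced(const float* in, float* out, int n, float s) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = in[i] * s;
    }

    // Strided: consecutive threads are `stride` elements apart, so each warp
    // issues many separate transactions and wastes memory bandwidth.
    __global__ void scale_strided(const float* in, float* out, int n, int stride, float s) {
        int i = (blockIdx.x * blockDim.x + threadIdx.x) * stride;
        if (i < n) out[i] = in[i] * s;
    }

Profiling the two in Nsight shows the bandwidth difference immediately, and that workflow is the same whether the kernel comes from a rendering pipeline or an ML workload.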

I don't have a great answer except to learn as much about AI as possible - the easiest starting point is Simon Prince's book, and it's free online. Maybe start submitting changes to PyTorch? Get a name for yourself? I don't know.

Most companies aren't doing a lot of heavy GPU optimization. That's why DeepSeek was able to come out of nowhere. Most (not all) AI research takes the hardware (and most of the software) stack as a given and is about architecture, loss functions, data mix, activation functions, and so on.

Speculation - a good amount of work will go towards these optimizations in the future (and at the big shops like OpenAI, a good amount already does).

Is this hypothetical person someone you know? If yes, please email me at pavel at centml dotz ai
You can get paid that without the GPU experience, so yes. Getting up to speed on this is mostly a function of how quickly you can understand what modern ML architectures look like.