Not a stupid question at all! Imo, you can definitely dive deep into CUDA and GPU architecture without needing to be a math whiz. Think of it like this: you can be a great car mechanic without being the engineer who designed the engine.

Start with understanding parallel computing concepts and how GPUs are structured for it. Optimization is key - learn about memory access patterns, thread management, and how to profile your code to find bottlenecks. There are tons of great resources online, and NVIDIA's own documentation is surprisingly good.
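
To make the memory access point concrete, here's about the simplest kernel there is: a vector add where consecutive threads touch consecutive addresses, which is the coalesced access pattern the docs keep going on about. Just a toy sketch (assumes you have the CUDA toolkit installed; the names are made up for the example):

    #include <cstdio>
    #include <cuda_runtime.h>

    // Each thread handles one element; consecutive threads read/write
    // consecutive addresses, so accesses coalesce into few memory transactions.
    __global__ void vecAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float *a, *b, *c;
        // Unified memory keeps the example short; real code often uses
        // explicit cudaMalloc/cudaMemcpy to control host<->device transfers.
        cudaMallocManaged(&a, bytes);
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

        int threads = 256;                         // threads per block
        int blocks = (n + threads - 1) / threads;  // enough blocks to cover n
        vecAdd<<<blocks, threads>>>(a, b, c, n);
        cudaDeviceSynchronize();

        printf("c[0] = %f\n", c[0]);  // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

Compile with nvcc vecadd.cu -o vecadd, then point a profiler like Nsight Compute at it (ncu ./vecadd) to see what the memory traffic actually looks like - that profile/tweak/re-profile loop is where most of the learning happens.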

As for the data engineering side, tbh, it's tougher to get into MLE without ML knowledge. However, focusing on the data pipeline, feature engineering, and data quality aspects for ML projects might be a way in.

Thanks for the help!

> As for the data engineering side, tbh, it's tougher to get into MLE without ML knowledge. However, focusing on the data pipeline, feature engineering, and data quality aspects for ML projects might be a way in.

I have a feeling that companies usually expect MLEs to handle both ML/AI and data engineering, so this might indeed be a dead end. Somehow I'm just not very interested in the ML side of MLE, so I'll let that thought lie dormant for now.

> Start with understanding parallel computing concepts and how GPUs are structured for it. Optimization is key - learn about memory access patterns, thread management, and how to profile your code to find bottlenecks. There are tons of great resources online, and NVIDIA's own documentation is surprisingly good.

Thanks a lot! I'll keep these points in mind while learning. I think I need to go through more basic CompArch materials first, though. I'm not a good programmer :D

Agreed, not sure how much math is really needed.