Is AI ready to crawl through all open source and find and fix all the potential security bugs, or all bugs for that matter? If so, will that become a commercial service or a free one?

Will AI be able to detect bugs and back doors that require multiple pieces of code working together rather than being in a single piece of code? Humans have a hard time with this.

- Hypothetical example: an authentication bug in sshd that requires a flaw in systemd, which in turn requires a flaw in udev, nss, PAM, or some underlying library, yet each individual library or daemon, examined on its own, contains nothing that a professional penetration-testing organization such as NCC Group or Google's Project Zero would flag. In other words, will AI soon find more complex bugs in a year than Tavis has found in his career? Will competing AIs start uncovering all the state-sponsored complex bugs, and ultimately build a map suggesting a common set of developers who may need to be notified? Will there be a table logging the cases where AI found things that professional human penetration testers could not?

No, that would require AGI. Actual reasoning.

Adversaries are already finding issues, though, using proven means such as code review and fuzzing.
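At its simplest, fuzzing just means throwing random inputs at a parser and watching for failures the author didn't anticipate. A minimal sketch in Python (the target `parse_record` and its length-prefixed format are hypothetical, chosen only to give the fuzzer something to chew on):

```python
import random

def parse_record(data: bytes) -> tuple[int, bytes]:
    """Toy target: a length-prefixed record parser (hypothetical).

    Format: one length byte N, followed by N payload bytes.
    Malformed input is rejected with ValueError.
    """
    if len(data) < 1:
        raise ValueError("empty input")
    n = data[0]
    payload = data[1:1 + n]
    if len(payload) != n:
        raise ValueError("truncated payload")
    return n, payload

def fuzz(target, iterations: int = 10_000, seed: int = 0) -> list[bytes]:
    """Feed random byte strings to `target` and collect any inputs that
    trigger an exception other than the expected ValueError."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(16)))
        try:
            target(data)
        except ValueError:
            pass                    # expected rejection of bad input
        except Exception:
            crashes.append(data)    # unexpected failure: a finding
    return crashes
```

Real fuzzers (AFL, libFuzzer) add coverage feedback and input mutation on top of this loop, which is what makes them effective against nontrivial targets.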

Google Project Zero consists of a team of rock-star hackers. I don't see LLMs even replacing junior devs right now.

Seems like there is more gain on the adversary side of this equation. Think nation-states like North Korea or China, and commercial entities like NSO Group (makers of Pegasus).