(TL02) (INCYBER) AI and Secure Code Learning: An Empirical Analysis of 420 AI-Generated Security Fixes

Tuesday, March 31, 2026 10:05 AM to 10:35 AM · 30 min. (Europe/Paris)

Information

AI-generated code can look secure while still being wrong. AI-generated fixes often arrive as large, polished “textbook” patches that feel professional but frequently leave the real vulnerability in place. Don’t confuse confident output with correct security.
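A minimal sketch of this failure mode, using a hypothetical SQL-injection scenario (the function names and the blocklist “fix” are illustrative, not taken from the talk): the patched version adds defensive-looking input checks, yet still builds the query by string formatting, so the underlying injection remains.

```python
import sqlite3

def setup_db():
    # Tiny in-memory database for the demonstration.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")
    return conn

def find_user_ai_patched(conn, name):
    # A polished-looking "fix": rejects inputs containing scary keywords.
    # It reads like security, but the root cause (string-built SQL) is untouched.
    for bad in ("DROP", "DELETE", "--", ";"):
        if bad in name.upper():
            raise ValueError("suspicious input")
    query = f"SELECT secret FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_fixed(conn, name):
    # The actual fix: a parameterized query, so input is data, never SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

if __name__ == "__main__":
    conn = setup_db()
    payload = "x' OR '1'='1"  # contains none of the blocklisted tokens
    print(find_user_ai_patched(conn, payload))  # still leaks every secret
    print(find_user_fixed(conn, payload))       # safely returns nothing
```

Reading the diff of the “AI-patched” version, a reviewer who only skims sees added validation and assumes progress; only by reasoning about the root cause does the surviving injection become obvious.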

Over-reliance on AI erodes your security skills. When developers reflexively accept AI suggestions, they stop reading code deeply, stop reasoning about root causes, and gradually lose their secure-coding instincts.

Humans must stay in control of security. The message isn’t “don’t use AI” – it’s “don’t let it drive alone.” Use AI as a power tool: understand every change, verify the actual fix, and treat AI-generated code as untrusted until you have proven it safe.
Room: Tech Lab
Event: INCYBER
Type of session: Tech Lab