GitHub is adding AI-powered security detections to GitHub Code Security, targeting languages and frameworks that CodeQL's traditional static analysis does not cover. In 30 days of internal testing, the system processed over 170,000 findings with more than 80% positive developer feedback. Coverage includes Shell/Bash, Dockerfiles, Terraform/HCL, and PHP. Public preview lands in early Q2.
The detection results surface inside pull requests alongside existing CodeQL findings, flagging issues like string-built SQL queries, insecure cryptographic algorithms, and exposed infrastructure configurations. Copilot Autofix connects directly to those findings: it has resolved over 460,000 security alerts in 2025 so far, cutting average time to fix from 1.29 hours to 0.66 hours. That pairing of detection and remediation inside the PR workflow is the operational core of what GitHub is building here.
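To make the first issue class concrete, here is a minimal Python/sqlite3 sketch of a string-built SQL query of the kind these detections flag, next to the parameterized fix. The table, data, and input are hypothetical illustrations, not taken from GitHub's post:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

user_input = "1 OR 1=1"  # attacker-controlled value

# Flagged pattern: the query is assembled by string interpolation, so the
# input is parsed as SQL and the OR clause matches every row.
unsafe = conn.execute(f"SELECT name FROM users WHERE id = {user_input}")
print(len(unsafe.fetchall()))  # 2 — both rows leak

# Remediated pattern: a parameterized query binds the input as data, so the
# injection string matches no row.
safe = conn.execute("SELECT name FROM users WHERE id = ?", (user_input,))
print(len(safe.fetchall()))  # 0
```

The fix is the same shape Copilot Autofix proposes for this class: replace string assembly with placeholder binding so untrusted input can never change the query's structure.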
The full post is worth reading for how GitHub frames this as an "agentic detection platform" rather than a discrete feature, and for what that architectural choice implies about where CodeQL and AI-powered analysis diverge or converge over time. The RSAC booth #2327 demo will show the enforcement layer, where policy gates sit at the merge point. That governance piece is the least detailed part of the post and the most consequential for enterprise security teams.
[READ ORIGINAL →]