Quality and Shared Responsibility: The Next Chapter of GitHub's Bug Bounty Program
The security research community has always been a cornerstone of GitHub’s security strategy. With over 180 million developers relying on our platform, we recognize that external researchers bring invaluable insights to help us identify and address vulnerabilities. Our bug bounty program is built on the principle that collaboration fuels improvement, and we remain deeply committed to this partnership. However, the security landscape is shifting rapidly, and we are adapting to ensure our program continues to drive meaningful results. In this update, we share our observations, the steps we are taking to enhance quality, and our vision for the future of bug bounty at GitHub.
Why is GitHub raising the bar on submission quality?
The volume of bug bounty submissions has surged across the industry, partly due to the accessibility of new tools, including AI. While it is encouraging to see more people engaging in security research, this growth has also brought an increase in low-effort reports. Many submissions lack working proofs of concept, describe theoretical attacks that do not hold up, or cover issues already listed as ineligible. Rather than shutting down our program—as some have done—we are investing in higher standards. By raising the bar, we can focus resources on reports with real impact, reward researchers effectively, and maintain a sustainable program that benefits everyone.

How has the volume of submissions affected GitHub’s program?
Over the past year, the number of submissions has grown significantly, mirroring broader industry trends. AI lowers the barrier to entry, which in many ways is positive because it expands the pool of potential discoverers. However, this has also led to a disproportionate rise in reports that do not demonstrate genuine security impact. For instance, submissions may lack a concrete attack path or describe scenarios that could not be realistically exploited. This influx of noise can overwhelm response teams and delay the processing of valid reports. GitHub is addressing this by implementing stricter upfront evaluations, ensuring that only well-prepared submissions reach the analysis stage.
What makes a strong submission in GitHub’s updated program?
A strong submission begins with a working proof of concept that clearly shows security impact. Instead of merely suggesting a vulnerability, researchers should demonstrate what an attacker can actually achieve—crossing a boundary or accessing protected data. Additionally, reports must be checked against our published scope and ineligible findings list. Submitting known out-of-scope issues (such as missing security headers without exploitation) leads to immediate closure as Not Applicable, which may affect a researcher’s HackerOne Signal. Finally, validation is crucial: whether using scanners, static analysis, or AI, every finding should be manually verified before submission. A false positive caught early saves everyone time.
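To make the "ineligible without exploitation" point concrete, here is a minimal sketch of the kind of scanner-style check that, on its own, does not constitute a reportable finding. The header names are real standard headers, but the function and endpoint are illustrative, not part of any GitHub tooling:

```python
# Hypothetical triage sketch: flagging missing security headers.
# A report consisting only of output like this is ineligible --
# it notes a hardening gap without demonstrating security impact.

SECURITY_HEADERS = [
    "content-security-policy",
    "strict-transport-security",
    "x-content-type-options",
]

def missing_security_headers(headers: dict[str, str]) -> list[str]:
    """Return the security headers absent from a response's header map."""
    present = {name.lower() for name in headers}
    return [h for h in SECURITY_HEADERS if h not in present]

# Example response headers from a hypothetical endpoint:
response_headers = {"Content-Type": "text/html",
                    "X-Content-Type-Options": "nosniff"}
print(missing_security_headers(response_headers))

# A strong submission goes further: it shows what the gap enables,
# e.g. a working XSS payload that the missing CSP would have blocked.
```

The takeaway is the comment at the end: the output of a check like this is the starting point of research, not the finished report.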
Does GitHub accept reports generated with AI assistance?
Absolutely. GitHub welcomes the use of AI in security research. AI tools are a powerful force for innovation, and we have no objection to researchers incorporating them into their workflows. However, the responsibility for validating outputs remains with the researcher. AI might suggest potential vulnerabilities, but those suggestions must be tested and confirmed before submission. Submitting raw AI outputs without verification leads to low-quality reports that waste effort. We encourage researchers to treat AI as a productivity enhancer, not a replacement for critical thinking and manual validation. This approach ensures that every submission is a genuine contribution.
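One way to operationalize "validate before you submit" is a simple pre-submission gate over each draft finding. This is a hypothetical sketch; the field names below are illustrative and not part of any GitHub or HackerOne API:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    in_scope: bool          # checked against the published scope
    poc_verified: bool      # PoC reproduced manually, not just AI-suggested
    impact_described: bool  # concrete attacker outcome written up

def ready_to_submit(finding: Finding) -> bool:
    """A finding should clear every check before it is reported."""
    return (finding.in_scope
            and finding.poc_verified
            and finding.impact_described)

# An AI-suggested finding that has not been reproduced yet:
draft = Finding("SSRF in image fetcher", in_scope=True,
                poc_verified=False, impact_described=True)
print(ready_to_submit(draft))  # False: reproduce the PoC first
```

The gate is deliberately strict: a finding that fails any check goes back for more work, which is exactly the discipline the program asks of AI-assisted research.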

What is the future direction of GitHub’s bug bounty program?
GitHub is committed to the long-term health of its bug bounty program. Rather than scaling back, we are investing in measures that reward quality over quantity. This includes clearer evaluation criteria, better communication of expectations, and recognition for researchers who consistently provide high-impact reports. We also aim to foster shared responsibility between researchers and the platform. By collectively focusing on verified, impactful findings, we can reduce noise and make the program more efficient. The future means tighter integration with the security research community, leveraging tools (including AI) responsibly, and continuously adapting to emerging threats while upholding fairness and transparency in all interactions.
How does shared responsibility play a role in the program’s success?
Shared responsibility means that both GitHub and researchers have roles in maintaining program quality. For GitHub, it involves setting clear guidelines, responding promptly to valid reports, and offering fair rewards. For researchers, it entails careful preparation: understanding the scope, providing working proofs of concept, and validating findings before submission. This partnership reduces friction and ensures that legitimate vulnerabilities get the attention they deserve. When both sides commit to quality, the program thrives. Ultimately, the goal is to create a virtuous cycle where good reports lead to better security, which in turn attracts more talented researchers. This collaborative ethic is what makes bug bounties a powerful tool for platform safety.