Our Ranking Methodology
How we evaluate and rank security tools and audit firms — transparent criteria, no pay-to-play.
Web3 Security AI is an independent resource. No firm pays for ranking or featured placement. Here's exactly how we evaluate tools and auditors.
Tool Scoring
We assess security tools across five dimensions; a scoring sketch follows the list.
1. Community Adoption
- GitHub stars and fork count
- npm/pip download trends
- Mentions in security researcher blogs and Twitter/X
- Usage in public audit reports
2. Active Maintenance
- Commit frequency in the last 90 days
- Issue response time
- Release cadence
- Dependency freshness
3. Detection Accuracy
- Known vulnerability detection rate (where benchmarks exist)
- False positive rate reported by practitioners
- Coverage of OWASP and SWC (Smart Contract Weakness Classification) vulnerability categories
4. Practitioner Endorsement
- Recommendations from recognized security researchers
- Adoption by top audit firms in their workflows
- Mentions in security conference talks (ETHSecurity, DeFi Security Summit)
5. Ecosystem Coverage
- Number of chains and languages supported
- Integration with development frameworks (Foundry, Hardhat, Anchor)
- Interoperability with other security tools
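To make the aggregation concrete, here is a minimal sketch of how the five dimension scores could roll up into one composite tool score. The `ToolScores` shape, the 0-10 scale, and the equal weighting are illustrative assumptions; the methodology above does not prescribe tool weights.

```typescript
// Illustrative only: dimension names mirror the list above; the 0-10
// scale and equal weighting are assumptions, not published values.
interface ToolScores {
  communityAdoption: number;        // 0-10
  activeMaintenance: number;        // 0-10
  detectionAccuracy: number;        // 0-10
  practitionerEndorsement: number;  // 0-10
  ecosystemCoverage: number;        // 0-10
}

// Average the five dimensions into a single 0-10 composite score.
function toolScore(s: ToolScores): number {
  const dims = [
    s.communityAdoption,
    s.activeMaintenance,
    s.detectionAccuracy,
    s.practitionerEndorsement,
    s.ecosystemCoverage,
  ];
  return dims.reduce((sum, d) => sum + d, 0) / dims.length;
}

// Example: a well-adopted, actively maintained tool with middling accuracy.
const example = toolScore({
  communityAdoption: 8,
  activeMaintenance: 9,
  detectionAccuracy: 6,
  practitionerEndorsement: 7,
  ecosystemCoverage: 5,
}); // => 7
```

Equal weights are the simplest defensible default when no per-dimension weights are published; a composite like this is only as good as the sub-scores feeding it.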
Auditor Scoring
Audit firms are evaluated on six weighted criteria; a worked scoring example follows the list. There is no widely adopted open standard for scoring audit firms: this methodology is our own, developed from years of working with auditors and observing outcomes.
1. Public Track Record (Weight: 25%)
- Number and quality of public audit reports
- Severity and depth of findings
- Quality of remediation guidance
- Consistency of audit thoroughness across engagements
2. Post-Audit Exploit Rate (Weight: 25%)
- How many protocols were exploited after being audited by the firm?
- We track major exploits (>$1M) against the firm's client list
- Lower is better, but context matters: scope limitations and findings the client never patched can shift responsibility away from the firm
3. Specialization Depth (Weight: 15%)
- Does the firm have deep expertise in the relevant domain?
- Chain-specific knowledge (EVM vs Solana vs Move vs ZK)
- Protocol-type expertise (DeFi, bridges, governance, NFT)
4. Transparency (Weight: 15%)
- Are audit reports publicly available?
- Does the firm disclose methodology?
- Are findings categorized with clear severity ratings?
5. Community Reputation (Weight: 10%)
- Researcher sentiment on Twitter/X and Discord
- Bug bounty hunter feedback
- Peer recognition from other audit firms
6. Responsiveness (Weight: 10%)
- Turnaround time for initial engagement
- Communication quality during audit process
- Post-audit support and re-audit availability
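The criterion weights above sum to 100%, so the composite is a straightforward weighted average. Here is a minimal sketch: the weights come directly from the list above, while the `AuditorScores` shape and the 0-10 sub-score scale are assumptions for illustration.

```typescript
// Criterion weights are taken from the list above; the 0-10 sub-scores
// and the `AuditorScores` shape are illustrative assumptions.
interface AuditorScores {
  publicTrackRecord: number;     // 0-10, weight 25%
  postAuditExploitRate: number;  // 0-10 (higher = fewer exploits), weight 25%
  specializationDepth: number;   // 0-10, weight 15%
  transparency: number;          // 0-10, weight 15%
  communityReputation: number;   // 0-10, weight 10%
  responsiveness: number;        // 0-10, weight 10%
}

// Weighted sum; weights total 1.0, so the result stays on the 0-10 scale.
function auditorScore(s: AuditorScores): number {
  return (
    0.25 * s.publicTrackRecord +
    0.25 * s.postAuditExploitRate +
    0.15 * s.specializationDepth +
    0.15 * s.transparency +
    0.10 * s.communityReputation +
    0.10 * s.responsiveness
  );
}

// Example: strong track record, one qualifying post-audit exploit.
const firm = auditorScore({
  publicTrackRecord: 9,
  postAuditExploitRate: 6,
  specializationDepth: 8,
  transparency: 9,
  communityReputation: 7,
  responsiveness: 8,
}); // => 2.25 + 1.5 + 1.2 + 1.35 + 0.7 + 0.8 = 7.8
```

Note that the exploit-rate criterion enters as a score where higher means fewer qualifying exploits, matching "lower is better" above.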
What "Featured" Means
A tool or auditor marked as "Featured" on our homepage scored highest across these criteria in its category. Featured status is reviewed monthly.
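In scoring terms, "Featured" is simply the top composite score per category. A minimal sketch, assuming a hypothetical `Ranked` record:

```typescript
// Hypothetical shape: a ranked entry with a name, category, and composite score.
interface Ranked {
  name: string;
  category: "tool" | "auditor";
  score: number; // composite from the rubrics above
}

// Pick the highest-scoring entry in a category; ties go to the first seen.
function featured(entries: Ranked[], category: Ranked["category"]): Ranked | undefined {
  return entries
    .filter((e) => e.category === category)
    .reduce<Ranked | undefined>(
      (best, e) => (best === undefined || e.score > best.score ? e : best),
      undefined
    );
}
```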
What We Don't Do
- No pay-to-play. Featured placement cannot be purchased.
- No self-reporting. We verify claims against public data.
- No single-metric ranking. A firm with 500 audits isn't automatically better than one with 50 if the quality differs.
Limitations
- Post-audit exploit data is incomplete — not all exploits are public, and scope limitations aren't always disclosed.
- Community sentiment can be biased by marketing spend.
- Newer firms have less data to evaluate, which may disadvantage them.
- We're practitioners, not academics — this methodology reflects real-world experience, not peer-reviewed research.
Feedback
Think we're missing something? Disagree with a ranking? Contact us — we take corrections seriously.