Replies: 3 comments 1 reply
-
I would suggest no. Manual pen testers use many automated tools, and automation often requires manual review and triage. This is a very complex issue that I think we want to keep away from.
-
Hi. First, I think the testing part is out of scope for ASVS. ASVS provides requirements, not information or guidelines on how to test them. Of course, the more helpful it is the better, but I see this as testing-guide territory.

Second, I don't want to give misleading guidance. Whether something is automatable really depends on the application. The topic is so often covered naively: you run a scanner and "now you are safe". For example: is a Content-Security-Policy response header field set? One makes an HTTP request to the front page and draws conclusions from that, but in reality you need to cover all responses from the different proxied endpoints on the same domain, including error messages from each endpoint and extra headers set in program code. And if testing requires specific permissions, or walking through a complicated business-logic workflow, scanners don't help much.

On the other hand, I also don't want to say something is never automatable, because "close to everything" is automatable if you automate those processes. Which automation tools "complement" which requirements is an equally iffy topic, for the same reasons. Additionally, we don't want to highlight any specific tools: they are not part of ASVS, and we don't control their direction or future releases. Whatever we say at release must stay valid for years.

To summarize: if I see an article or presentation claiming "I automated testing for N requirements from ASVS in my application", I believe it; if I see "I automated N requirements from ASVS (for every application)", I call it BS by default. There are some generic requirements you can be fairly sure about (e.g. HTTPS preload, code review), but in general the point stands.
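To make the CSP example above concrete, here is a minimal sketch (the function names and the sample paths are mine, purely illustrative) of why checking only the front page is not enough: the audit has to look at the headers of every response, including error pages from each proxied endpoint.

```python
# Hypothetical sketch, not from ASVS or any real scanner: given a map of
# path -> response headers, report which responses lack a CSP header.
def missing_csp(responses: dict[str, dict[str, str]]) -> list[str]:
    """List the paths whose response carries no Content-Security-Policy."""
    return [
        path
        for path, headers in responses.items()
        # Header names are case-insensitive, so compare lowercased keys.
        if "content-security-policy" not in {k.lower() for k in headers}
    ]

# A naive scanner requests only "/" and concludes the header is set;
# here it is the error page from a proxied endpoint that omits it.
responses = {
    "/": {"Content-Security-Policy": "default-src 'self'"},
    "/api/404": {"Content-Type": "text/html"},  # error page, no CSP
}
print(missing_csp(responses))  # ['/api/404']
```

The hard part a scanner cannot see is the *input*: enumerating every endpoint, error path, and header-setting code path is exactly the application-specific work the comment describes.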
-
Yes, 👍 💯 there are incentives for people (executives, etc.) to want to simply "have" security with whatever tooling is available (sometimes via a big check), so that they can think, or claim, they are safe and tick the security box ☑️ without having to have (and pay) an actual human verify the requirements, or verify the tooling's claims about security.
-
I recently came across an article suggesting that at least fifty percent of the ASVS items could be automated. This got me thinking: would it make sense for ASVS to include an identifier for each item indicating whether it is:
- Manual (requires human review)
- Automatable
- Partially automatable
Value Proposition:
For organizations adopting ASVS, this could provide clarity on how to distribute the workload across teams:
- Manual testers or penetration testers could focus on items requiring human oversight.
- Developers could address automatable items through unit tests or API-level automation.
- Automated testing teams could leverage tools like SAST, SCA, and DAST for partially automatable items.
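As an illustration of the second bullet, here is a minimal sketch (the helper name and the requirement wording are mine, not ASVS text) of an "automatable item" expressed as a developer unit test: checking that a session cookie is flagged Secure and HttpOnly.

```python
# Hypothetical sketch: a security requirement encoded as a unit test.
def cookie_flags_ok(set_cookie_header: str) -> bool:
    """Return True if the Set-Cookie value carries Secure and HttpOnly."""
    # Cookie attributes are case-insensitive, separated by semicolons.
    attrs = {part.strip().lower() for part in set_cookie_header.split(";")}
    return "secure" in attrs and "httponly" in attrs

def test_session_cookie_is_hardened():
    header = "session=abc123; Path=/; Secure; HttpOnly; SameSite=Strict"
    assert cookie_flags_ok(header)
```

A check like this runs on every build, which is the kind of item a feasibility marker would route to developers rather than to manual testers.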
This distinction would streamline the implementation of ASVS by clarifying what needs manual attention versus what can be handled by automation. It could also help organizations allocate resources more effectively, reducing friction in integrating ASVS into their workflows.
For example, leveraging tools like OWASP RAT to narrow down relevant requirements and then using automation feasibility markers could further simplify adoption.
Would this be a feasible enhancement for ASVS, and if so, what would be the best approach for categorizing items based on automation potential?
Here's the article for context:
https://codific.com/requirements-driven-testing-the-best-roi-security-practice/