Static code analysis is the practice of examining an application's source code, bytecode, or binaries without executing the program. The code under review is analyzed to identify defects, flaws, or vulnerabilities that may compromise the integrity or security of the application.
The roots of static code analysis pre-date computers and transistors themselves. In 1936, mathematician Alan Turing studied the halting problem: determining whether an arbitrary computer program, given some input, will finish running or continue forever. Turing concluded that while it is possible to decide halting for a specific program and input, no general algorithm can decide it for all programs. It was at this point that the concept of static code analysis, reasoning about a program's behavior without running it, was born.
However, it was not until forty-two years later, in 1978, that static code analysis started to emerge in commercially available products. The first to reach the market was "lint", written by Stephen C. Johnson at AT&T Bell Laboratories. The name is a metaphor for small programming errors that can have big consequences, like the small pieces of fabric caught in the lint trap of a clothes dryer.
The lint program, which originally examined C source code, applied stricter checks than the C compiler to identify programming errors, bugs, code-style problems, and suspicious constructs, issuing warning and error messages to assist the programmer.
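Lint's checks originally targeted C, but the same class of suspicious construct exists in every language. As a concrete illustration (in Python, where modern linters flag this exact pattern), consider the classic mutable default argument:

```python
# Suspicious construct: a mutable default argument is created once,
# at function definition time, and is shared across every call.
def append_item(item, bucket=[]):
    bucket.append(item)
    return bucket

# Both calls return the SAME list object, which silently accumulates state.
first = append_item("a")
second = append_item("b")   # ["a", "b"] -- not ["b"] as a reader might expect

# The lint-clean rewrite uses a sentinel and allocates a fresh list per call.
def append_item_fixed(item, bucket=None):
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket
```

The unfixed version compiles and runs, which is exactly why a stricter-than-the-compiler analysis pass is needed to catch it.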
Since the late 1970s, static code analysis tooling has continued to evolve and is now part of the application security testing (AST) market segment. In fact, linters exist for most modern languages in use today, including interpreted languages (like JavaScript and Python) that have no compilation phase.
Static code analysis solutions focus on one (or more) of the following aspects of the application under review:
The illustration below provides a conceptual view of how these components work together, overlapping slightly, to protect the integrity of the source code being analyzed:
Static code analysis tools often do not focus on every aspect noted above, which is why categories of static code analysis were defined.
General vulnerability analysis includes the original checks performed by the "lint" program, expanded to cover logic flaws, hardcoded secrets, data leaks, authorization bypasses, back doors, and logic bombs in the source code.
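To make the hardcoded-secret check concrete, its core can be sketched as a pattern match over source lines. This is a deliberately simplified, hypothetical sketch; production SAST engines use token-level and entropy-based analysis rather than a single regular expression:

```python
import re

# Naive rule: a credential-like name assigned a string literal.
# A hypothetical, deliberately simplified pattern set.
SECRET_PATTERNS = [
    re.compile(r'(password|passwd|secret|api_key|token)\s*=\s*["\'][^"\']+["\']',
               re.IGNORECASE),
]

def find_hardcoded_secrets(source):
    """Return (line_number, line) pairs that look like hardcoded secrets."""
    findings = []
    for number, line in enumerate(source.splitlines(), start=1):
        if any(pattern.search(line) for pattern in SECRET_PATTERNS):
            findings.append((number, line.strip()))
    return findings

sample = '''
db_host = "localhost"
db_password = "hunter2"
api_key = os.environ["API_KEY"]
'''
```

Note that the environment-variable lookup is correctly ignored: only the literal credential on line 3 of the sample is flagged.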
The language and framework security component focuses on locating items such as: cross-site request forgery (unauthorized requests), session fixation (assume a valid user’s session), and clickjacking (tricking the user into clicking an element disguised as another) within a given application instance.
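Session fixation, for example, is defeated by issuing a fresh session identifier at the moment of authentication, so an identifier planted by an attacker before login never becomes an authenticated session. A minimal sketch, using a hypothetical in-memory SessionStore rather than any real framework:

```python
import secrets

class SessionStore:
    """Hypothetical in-memory session store illustrating fixation defense."""

    def __init__(self):
        self.sessions = {}  # session_id -> {"user": ..., "authenticated": bool}

    def create_anonymous(self):
        session_id = secrets.token_hex(16)
        self.sessions[session_id] = {"user": None, "authenticated": False}
        return session_id

    def login(self, old_session_id, user):
        # Rotate the identifier on privilege change: invalidate the old id
        # and bind the authenticated state to a brand-new one.
        self.sessions.pop(old_session_id, None)
        new_session_id = secrets.token_hex(16)
        self.sessions[new_session_id] = {"user": user, "authenticated": True}
        return new_session_id

store = SessionStore()
attacker_planted = store.create_anonymous()          # id an attacker could know
victim_session = store.login(attacker_planted, "alice")
# The planted id is now invalid; only the rotated id carries authentication.
```

A SAST rule for this category looks for login flows that reuse the pre-authentication session identifier instead of rotating it.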
The compliance component seeks potential violations with standards like: SOC 2 (Secure Data Management), PCI-DSS (Payment Card Industry/Data Security Standard), GDPR (EU General Data Protection Regulation), and CCPA (California Consumer Privacy Act).
The end-to-end component focuses on customer-facing client tier validation and can also include services and APIs that are available for end-user consumption. While the end-to-end component does not interact with the lower-level categorizations, the results of non-compliance for those categories may surface to this level.

Section 2
In order to represent and differentiate aspects of the application stack, application security testing (AST) is branched into four different categorizations:
Consider the following diagram for a Java-based RESTful microservice written using the Spring Boot framework:
The goal of this document is to concentrate on the AST categorization of static code analysis, focusing on custom code introduced to meet business needs and objectives.

Section 3
While SAST (static AST) focuses on the custom business logic or service code, Dynamic AST (DAST) tools validate the running application, service, or APIs for vulnerabilities at that layer of the application stack.
DAST tools perform black box (or outside-in) security testing — where the tooling has no understanding of the technologies or frameworks used to create the service being validated. Because of this fact, DAST tooling does not analyze the source code in any way.
A DAST solution requires the application to be in a running state and can locate run-time vulnerabilities that are not exposed by SAST tools.
The following table is intended to provide a simple comparison between SAST and DAST tooling:
DAST solutions are not intended to replace SAST solutions, and both are required to reduce potential vulnerabilities in the application.

Section 4
The cost for a feature team to participate in a two-week sprint using the Agile methodology ranges between $60,000 and $100,000 (USD) — depending on the size and geographical location of the team. With this assumption in mind, time is certainly money when it comes to delivering new functionality.
Static code analysis tools can provide the following benefits to development teams:
Section 5
Until recently, three major roadblocks impeded the necessary implementation levels of static code analysis:
Because of these limitations, static code analysis was not part of the standard development lifecycle. CI/CD pipelines could not include it, so analysis was left in the hands of application security engineers and executed only periodically. Consequently, results were often outdated by the time they reached the feature developer who had introduced the flagged code.

Section 6
In order for static code analysis tools to be effective, modernization is needed to address the challenges presented above. The next generation of SAST tools being considered should meet the following needs:
With these needs in place, feature teams will inherently produce better source code, since any items noted by the static code analysis processing will be addressed while the code is fresh in the developer’s mind and before a peer review begins. In fact, a next-generation SAST design will serve as a mechanism to keep vulnerabilities inside the development environment.
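One technique behind such comprehensive, path-aware engines is taint analysis: tracking whether untrusted input can reach a sensitive sink along any assignment path. The toy sketch below (the statement tuples and names are invented for illustration, nothing like a production engine) shows the core idea:

```python
# Each statement is ("assign", target, operands) or ("sink", name, operands).
# Variables fed by untrusted input are tainted; taint propagates via assignment.
def find_tainted_sinks(statements, tainted_inputs):
    tainted = set(tainted_inputs)
    findings = []
    for kind, name, operands in statements:
        if kind == "assign":
            if any(op in tainted for op in operands):
                tainted.add(name)
            else:
                tainted.discard(name)  # overwritten with clean data
        elif kind == "sink":
            if any(op in tainted for op in operands):
                findings.append(name)
    return findings

program = [
    ("assign", "raw", ["request.args"]),   # user-controlled source
    ("assign", "query", ["raw"]),          # taint propagates
    ("assign", "safe", ["constant"]),      # clean data
    ("sink", "execute_sql", ["query"]),    # flagged: tainted data reaches sink
    ("sink", "log_write", ["safe"]),       # not flagged
]
```

Because the analysis follows the actual flow of data, the clean `log_write` call is not reported, which is precisely how path understanding cuts false positives.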
With a comprehensive SAST engine, fewer false positives are expected, and additional vulnerabilities can surface from an understanding of all the paths through the original source code.

Section 7
Getting started with static code analysis involves seeking out vendors that provide a product in the SAST market. Some current vendors with known CI/CD support are noted below:
To gain a comparison around the performance and effectiveness of each vendor, the OWASP Benchmark for Security Automation should be utilized. The OWASP Benchmark is an open and free Java test suite designed to facilitate comparisons of different static code analysis tools. The test suite measures the speed, coverage, and accuracy of vulnerability detection tools and services.
There are four possible test outcomes in the Benchmark:
1. Tool correctly identifies a real vulnerability (True Positive – TP)
2. Tool fails to identify a real vulnerability (False Negative – FN)
3. Tool correctly ignores a false alarm (True Negative – TN)
4. Tool fails to ignore a false alarm (False Positive – FP)
The diagram below represents how the OWASP Benchmark should be interpreted:
Static code analysis tools should strive to reach the target area in the illustration above, where the true positive rate falls in the 85-100% range and the false positive rate in the 0-15% range.
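The Benchmark distills these four outcomes into a single score per tool: the true positive rate minus the false positive rate (a form of Youden's index), so a tool that simply flags everything scores zero. Computing it is straightforward:

```python
def benchmark_score(tp, fn, tn, fp):
    """OWASP Benchmark style score: true positive rate minus false positive rate."""
    tpr = tp / (tp + fn)  # fraction of real vulnerabilities correctly flagged
    fpr = fp / (fp + tn)  # fraction of safe code incorrectly flagged
    return tpr - fpr

# A tool finding 90% of real issues with a 10% false alarm rate scores about 0.8,
# inside the target area described above.
strong_tool = benchmark_score(tp=90, fn=10, tn=90, fp=10)

# A tool that flags everything has a perfect TPR but scores exactly zero.
flag_everything = benchmark_score(tp=100, fn=0, tn=0, fp=100)
```

The counts here are made-up examples; real Benchmark runs derive them from the test suite's thousands of known-vulnerable and known-safe cases.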
As part of the analysis of potential SAST vendors, the OWASP Benchmark should be a key part of the decision-making process, even if Java is not the primary language that will use the SAST solution at implementation time, because the OWASP Benchmark is the most thorough tool for comparing static code analysis solutions. Keeping track of false positives should be included in the analysis of each vendor under review.
The setup and configuration for each vendor will vary depending on the design of their product. However, once the product has been set up and configured, the next step is to allow the SAST product to access the source code that will be reviewed.
Before taking the time to integrate a new solution into your CI/CD tooling, it is a good idea to determine how long the analysis will take. Running an initial scan manually not only validates the functionality of the product against the supplied source code, but also provides basic information about the amount of time required to perform the analysis.
Most SAST tools will include full and partial scan functionality. It is a good idea to perform multiple iterations of all available scan modes for product comparison analysis.
In addition to the OWASP Benchmark results, the SAST findings themselves should be reviewed to understand the gaps reported by each static code analysis tool under review. This review should weigh factors such as overall performance, ease of use, and product feature set, which may vary based upon each customer's needs. Understanding how the results differ can also become a factor in the decision-making process.
The following high-level illustration is intended to present the ideal design for SAST in a feature team’s CI/CD pipeline:
The example above represents the following flow:
1. Developer creates new feature branch off the develop branch.
2. Developer makes changes according to acceptance criteria noted by the Product Owner.
3. Developer commits code and pushes changes to the origin/remote host.
4. The CI/CD solution executes the following pipeline:
a. The compile stage validates the branch can be compiled.
b. The scan stage executes the SAST solution to identify any vulnerabilities.
c. The test stage fires all unit and integration tests.
d. The containerize stage creates the expected container for future usage.
5. Any failures result in a failed build, which requires developer attention.
6. When ready and free of any build errors, the peer review aspect of the development lifecycle begins.
a. If approved, the branch is merged into the develop branch.
b. If issues exist, the feature developer addresses them and the process returns to step three (above).
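The failure gating in steps four and five can be sketched as stages executed in order, where the scan stage fails the build on any SAST finding (the stage names and `scan_findings` list are illustrative, not tied to any CI product):

```python
# Illustrative stage results; in a real pipeline each stage shells out to
# the compiler, the SAST scanner, the test runner, and the image builder.
def run_pipeline(stages):
    """Run stages in order; stop at the first failure and fail the build."""
    for name, stage in stages:
        if not stage():
            return f"build failed at stage: {name}"
    return "build passed"

scan_findings = ["hardcoded secret in Config.java"]  # pretend SAST output

stages = [
    ("compile", lambda: True),
    ("scan", lambda: len(scan_findings) == 0),  # fail the build on any finding
    ("test", lambda: True),
    ("containerize", lambda: True),
]
```

With a non-empty findings list the build halts at the scan stage, so the test and containerize stages never run until the developer resolves the vulnerability.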
When static code analysis tooling is included in the CI/CD pipeline, any issue found by the analysis (as defined by the implementation) results in a failed build, which forces the feature developer(s) to address the issue before the changes can be peer reviewed.

Section 8
While this document is focused on getting started with static code analysis, there are a few advanced topics that evaluators may wish to include in the decision-making process:
Section 9
Static code analysis is a vital requirement for all teams producing features and functionality for customer-facing products, services, and APIs. At the minimum, SAST solutions should be part of the development lifecycle, participating in the CI/CD pipeline and utilized as part of the peer review process.
While quality solutions require a cost investment, that cost should be weighed against the cost of attempting equivalent manual checks during peer review. Most likely, the cost to maintain static code analysis will be easy to justify.
When seeking a static code analysis solution in your organization, the following questions should be considered and ranked for each vendor under review:
□ Is the user-interface for the solution easy to use and effective?
□ Does your organization adhere to any compliance regulations?
□ Does the solution support CI/CD integration?
□ Does the solution's scan time introduce any challenges in the development lifecycle?
□ What is the vendor's OWASP Benchmark for Security Automation score?
□ What is the core design behind the SAST scan engine?
□ How are false positives remediated?
□ What advanced features are required by your organization?
Source: https://dzone.com/refcardz/getting-started-with-static-code-analysis