Evaluation Results

The figure below shows the results of the STIP (Security Testing Improvement Profiling) evaluation for all case studies, considering the positive effects of the DIAMONDS techniques and tools. The red line shows the maximum possible increase if all DIAMONDS techniques and tools could be combined without side effects.

Figure: Case study evaluation (© Fraunhofer FOKUS)

In the following, we show the radar diagrams for each case study separately. Each radar diagram shows the situation with DIAMONDS in blue and the situation without DIAMONDS in red.

Security Testing Improvement Profiling (STIP)

The Security Testing Improvement Profiling (STIP) approach has been developed in the DIAMONDS project to assess the progress that could be achieved in selected key areas of the security-testing domain. The key areas describe major aspects or activities of a security testing process and are chosen so that they cover the most relevant DIAMONDS innovations.

We have defined the key areas to be self-contained and distinct, so that each area represents a relevant aspect of a security testing process. For each key area we have defined a four-level performance scale whose levels are hierarchically organized and build on each other. The levels can be used to evaluate concrete security testing processes with respect to their performance in the corresponding key area. Each higher level represents an improvement for the underlying security testing process.
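The level-based profiling can be illustrated with a small sketch. The key-area names below match the list in this section, but the function, the example scores, and the improvement computation are hypothetical illustrations, not part of the STIP definition or actual evaluation data:

```python
# Illustrative sketch of STIP-style profiling: each key area is rated on a
# four-level performance scale (1 = lowest, 4 = highest). All scores here
# are hypothetical, not actual DIAMONDS evaluation results.

KEY_AREAS = [
    "Security risk assessment",
    "Security test identification",
    "Fuzzing",
]

def improvement_profile(before: dict, after: dict) -> dict:
    """Per-key-area improvement: level difference on the 1..4 scale."""
    profile = {}
    for area in KEY_AREAS:
        b, a = before[area], after[area]
        if not (1 <= b <= 4 and 1 <= a <= 4):
            raise ValueError(f"levels must be within 1..4 for {area}")
        profile[area] = a - b
    return profile

# Hypothetical ratings without and with the new techniques:
before = {"Security risk assessment": 1, "Security test identification": 2, "Fuzzing": 1}
after  = {"Security risk assessment": 3, "Security test identification": 3, "Fuzzing": 4}

print(improvement_profile(before, after))
```

Plotting such a before/after profile over all key areas yields exactly the kind of radar diagram shown for the case studies.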

Key Areas

  • Security risk assessment – Security risk assessment is a process for identifying, analysing, and evaluating security risks.
  • Security test identification – Test identification is the process of identifying test purposes and appropriate security testing methods, techniques and tools.
  • Automated generation of test models – For model-based security testing (e.g. fuzzing, mutation based testing) various kinds of models are required, which can be either created manually or generated automatically.
  • Security test generation – Security test generation is about the automation of security test design.
  • Fuzzing – Fuzzing is about injecting invalid or random inputs in order to reveal unexpected behaviour, identify errors, and expose potential vulnerabilities.
  • Security test execution automation – The automation of security test execution comprises the automatic application of malicious data to the SUT, the automatic assessment of the SUT's state and output to clearly identify a security flaw, and the automatic control of the test execution with respect to different kinds of coverage.
  • Security passive testing/security monitoring – Security monitoring based on passive testing consists of detecting errors, vulnerabilities and security flaws in a system under test (SUT) or in operation by observing its behaviour (input/output) without interfering with its normal operations.
  • Static security testing – Static security testing involves analysing an application without executing it. One of its main components is code analysis.
  • Security test tool integration – Tool integration is the ability of tools to cooperate with respect to data interchange.
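As a minimal illustration of the mutation-based fuzzing idea described in the key areas above, the following sketch feeds randomly mutated inputs to a toy parser with a deliberately planted crash. The parser, the mutation operators, and all names are hypothetical examples, not DIAMONDS tooling:

```python
import random

def parse_length_prefixed(data: bytes) -> bytes:
    """Toy SUT with a planted flaw: the length prefix is read without
    checking that the input is non-empty, so b"" crashes with IndexError."""
    length = data[0]
    return bytes(data[1:1 + length])

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Produce an invalid/random variant of the seed input."""
    data = bytearray(seed)
    op = rng.randrange(3)
    if op == 0 and data:                       # flip one random bit
        pos = rng.randrange(len(data))
        data[pos] ^= 1 << rng.randrange(8)
    elif op == 1:                              # insert a random byte
        data.insert(rng.randrange(len(data) + 1), rng.randrange(256))
    else:                                      # truncate at a random position
        data = data[:rng.randrange(len(data) + 1)]
    return bytes(data)

def fuzz(seed: bytes, iterations: int = 500, rng_seed: int = 0):
    """Apply mutated inputs to the SUT and collect unexpected crashes."""
    rng = random.Random(rng_seed)
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed, rng)
        try:
            parse_length_prefixed(candidate)
        except IndexError as exc:              # unexpected behaviour: a crash
            crashes.append((candidate, exc))
    return crashes

crashes = fuzz(b"\x03abc")
print(f"found {len(crashes)} crashing inputs")
```

Real fuzzers add coverage feedback, corpus management, and smarter mutation strategies, but the loop above captures the core idea: generate invalid inputs automatically and watch the SUT for behaviour that its specification does not allow.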