Automated Methodology for Common Criteria Certification

Certification & Accreditation Frameworks & Standards

AMC3

Active

Defence-related Research Action (DEFRA)

January 2024 – August 2027

45 months

Wim Mees

Belgian Defence relies increasingly on software, both as pure software applications and as part of cyber-physical systems. When this software suffers from defects, vulnerabilities and weaknesses, attackers may exploit these flaws to tamper with mission-critical systems or to exfiltrate sensitive information. To mitigate this risk and to ensure that software is dependable and trustworthy, certification and accreditation activities have traditionally been integrated into the software lifecycle.

Software assurance through certification and accreditation suffers from the fact that these processes are extremely resource- and time-consuming. They therefore represent an obstacle to the adoption of more agile DevOps development methodologies, as well as to the rapid deployment of bug fixes and security patches. A structured and largely automated approach must therefore be developed that reconciles the requirement for more frequent software updates with the need to ensure that the software remains trustworthy and dependable.

The goal of this project is precisely to develop a methodology for performing automated certification and accreditation, to assemble a set of tools that support this methodology, and to validate the methodology on two typical Defence-related use cases. The first use case is an in-house developed Advanced Persistent Threat (APT) detection tool for protecting government and military networks; the second is a software component of a weapon system.

AMC3 is a collaboration between UCLOUVAIN, CETIC, FN Herstal and the Cyber Defence Lab of the Royal Military Academy. It aims to design and prototype a flexible (incremental) certification methodology, together with a platform that automatically (re-)generates evidence, curates this evidence and (re)creates assurance cases for product certification schemes such as the Common Criteria (CC). Evaluation assurance levels (EALs) define the extent of verification by describing the depth and rigor of an evaluation; the methodology should be able to handle different EALs, from non-critical to critical security requirements. To create such a methodology, the following research sub-objectives are defined. A first group of sub-objectives focuses on the automated generation of evidence:

  1. automate the generation of design-time cybersecurity certification evidence for different target EALs, covering the entire spectrum from 1 to 7 and involving both testing and formal approaches, as advocated by the Common Criteria process;
  2. automate the generation of evidence for incremental cybersecurity product certification, determining which parts of a new software version need to be recertified (a minimal change-detection sketch follows this list), and adapt validation techniques to maintain efficiency;
  3. automate the generation of run-time evidence to verify that assumptions on the operating environment hold, and propose corrections via machine learning when they do not.

A second group of sub-objectives focuses on curating and evaluating the generated evidence:

  4. automatically curate the evidence produced by the many design-time and run-time monitoring tools into a common model, dealing with interoperability issues such as different data formats (see the curation sketch below);
  5. automatically generate assurance cases whose argumentation shows that the security functional requirements (SFRs) are satisfied by the curated evidence, for different EALs;
  6. automatically assess how much confidence can be placed in the argumentation of the assurance cases for different EALs, and optimize it (see the confidence sketch below).

A last sub-objective concerns validation:

  7. validate the AMC3 methodology on industrial Belgian Defence case studies, to demonstrate that the methodology scales to industrial-size Defence systems and remains cost-effective at scale.
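The change-detection sketch below illustrates the kind of automation sub-objective 2 calls for: comparing per-component digests between the certified version and a new version, then propagating the impact along the dependency graph to flag everything that needs re-evaluation. All names and the propagation rule are illustrative assumptions, not the AMC3 design.

```python
import hashlib
from pathlib import Path

# Hypothetical sketch for sub-objective 2: flag the components of a new
# software version that need re-evaluation. A component is flagged when its
# content digest changed, or when any component it depends on was flagged.

def digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def needs_recertification(
    certified_digests: dict[str, str],   # component -> digest in the certified version
    new_files: dict[str, Path],          # component -> file in the new version
    depends_on: dict[str, set[str]],     # component -> its direct dependencies
) -> set[str]:
    changed = {name for name, path in new_files.items()
               if certified_digests.get(name) != digest(path)}
    # Propagate impact: a component inherits "changed" from its dependencies.
    affected = set(changed)
    grew = True
    while grew:
        grew = False
        for name, deps in depends_on.items():
            if name not in affected and deps & affected:
                affected.add(name)
                grew = True
    return affected
```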
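For sub-objective 4, curation essentially means normalising heterogeneous tool outputs into one shared evidence model. The sketch below shows the general idea with a hypothetical `EvidenceItem` record and one adapter for JUnit-style test reports; the actual AMC3 common model and tool adapters are part of the research, so every field and function here is an assumption.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

# Hypothetical sketch for sub-objective 4: one common record type for
# evidence, with one adapter per tool resolving format differences at the
# boundary. Neither the fields nor the adapter reflect the AMC3 schema.

@dataclass
class EvidenceItem:
    source_tool: str                # e.g. "junit", "frama-c", "coq"
    kind: str                       # "test", "static-analysis", "formal-proof", ...
    target: str                     # the software element the evidence is about
    verdict: str                    # "pass" | "fail" | "inconclusive"
    produced_at: datetime
    raw: dict[str, Any] = field(default_factory=dict)  # original payload, kept for audit

def curate_junit(report: dict[str, Any]) -> list[EvidenceItem]:
    """Adapter for a JUnit-style test report (assumed structure)."""
    return [
        EvidenceItem(
            source_tool="junit",
            kind="test",
            target=case["classname"],
            verdict="fail" if case.get("failures") else "pass",
            produced_at=datetime.now(timezone.utc),
            raw=case,
        )
        for case in report.get("testcases", [])
    ]
```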
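Finally, the confidence sketch for sub-objective 6 shows one simple way to propagate confidence through an assurance-case tree: a claim is only as believable as its own evidence and all of its sub-claims together, aggregated here with a product rule. Both the data structure and the aggregation rule are deliberately naive placeholders; choosing and justifying the real confidence calculus is precisely what the sub-objective is about.

```python
from dataclasses import dataclass, field

# Hypothetical sketch for sub-objective 6: propagate confidence through an
# assurance-case tree with a naive product rule. The real confidence
# calculus (and how to optimize weak arguments) is an open research question.

@dataclass
class Claim:
    statement: str
    local_confidence: float = 1.0   # confidence in the claim's own evidence
    subclaims: list["Claim"] = field(default_factory=list)

def confidence(claim: Claim) -> float:
    """A claim holds only if its own evidence and all of its sub-claims hold."""
    c = claim.local_confidence
    for sub in claim.subclaims:
        c *= confidence(sub)
    return c

# Illustrative assurance case for one CC SFR (claims are made up).
root = Claim("SFR FAU_GEN.1 is satisfied", subclaims=[
    Claim("audit events are generated", local_confidence=0.95),
    Claim("audit records contain the required fields", local_confidence=0.90),
])
print(f"confidence in the root claim: {confidence(root):.2f}")
```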