Cyber Evaluation and Testing Assessment Toolkit (CETAT)

Description:

OBJECTIVE: Develop innovative tools and techniques that aid in a formalized, cyber-based forensic evaluation and assessment of a System under Test (SuT).

DESCRIPTION: The U.S. Air Force Command and Control (C2) core function must address the increased need to develop and employ proper testing and evaluation capabilities for high-fidelity security assessments of its SuTs. Analyzing overall system performance and behavior prior to implementation and transition activities is driving organic, cyber-centric system testing methodologies, which can produce volumes of data that must be evaluated post-exercise with manual processes and system-level tools. Deriving accurate qualitative and quantitative measurements of a designed system while informing the level of composition "goodness" (e.g., toward a quantitatively measurable certification and accreditation activity), regardless of intended domain or system-level implementation, depends on many contributing factors. To produce definable and repeatable performance metrics, technologies are needed to aid in gauging attack success, independent of the attacker's starting point or level of authorization to the underlying system. A reasonably quantified output should take a form that not only measures the success of the attack but may also lead to improvements in components along the SuT's attack vectors and the eventual implementation of potential mitigation strategies. This should lead to a concrete measurement of the attack and its effects, based on a functional requirement to prove or disprove the basic claims of the system, while capturing system-level sustainability, any potential degradation of service, and the likelihood of future attacks. Analysis of observed events across multiple client sessions, and correlation between collection points, requires a focused solution. Adherence to testing and evaluation Rules of Engagement (RoE) should not negatively impact the collection of test data. Multiple attacker starting zones, with potentially varying degrees of privilege escalation, can make log capture and real-time, dynamic awareness of attack effects challenging. Coverage against multiple classes of attacks and the distributed nature of today's services-centric systems warrant measurement at the physical network level to properly characterize attack effects at the node and network layers. Measurement of undue stress at the CPU or network layer should inform system orchestrators and administrators based on best-practice composition techniques and acceptable system thresholds. Understanding and validating adversary location through various checkpoints, primary and secondary decision points, and situational analysis may help capture any unforeseen consequences of attack on any attack vector within the system, and lends itself to a repeatable measure of success and failure. This effort seeks to research, augment, and develop cyber testing and evaluation analysis tools that incorporate best-practice concepts, so that services-based systems can be designed and evaluated for performance and behavior characteristics. Other desirable features of this toolkit might include the ability to dynamically capture the efficacy of information technology assets under test, run simulations over heterogeneous system models, generate reports supporting testing and certification activities, and perform change-impact analysis on existing systems using the models specified for them.
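
As an illustration of the event correlation and stress-threshold measurement described above, the following minimal Python sketch groups events from multiple collection points by node and time window and flags samples that exceed assumed CPU and network ceilings. The record fields (source, node, cpu_pct, net_mbps) and the threshold values are illustrative assumptions, not requirements of this topic.

    # Minimal sketch: correlate events from multiple collection points and
    # flag nodes whose CPU or network load exceeds acceptable thresholds.
    # Field names and threshold values are illustrative assumptions.
    from dataclasses import dataclass
    from collections import defaultdict

    @dataclass
    class Event:
        timestamp: float   # seconds since start of the test run
        source: str        # collection point that observed the event
        node: str          # SuT node the event pertains to
        cpu_pct: float     # sampled CPU utilization, 0-100
        net_mbps: float    # sampled network throughput

    CPU_THRESHOLD = 90.0   # assumed best-practice ceiling, percent
    NET_THRESHOLD = 800.0  # assumed acceptable throughput ceiling, Mbps

    def correlate(events, window=1.0):
        """Group events observed within `window` seconds on the same node,
        regardless of which collection point reported them."""
        groups = defaultdict(list)
        for ev in sorted(events, key=lambda e: e.timestamp):
            groups[(ev.node, int(ev.timestamp // window))].append(ev)
        return groups

    def stress_alerts(events):
        """Yield (node, timestamp, reason) for samples above a threshold."""
        for ev in events:
            if ev.cpu_pct > CPU_THRESHOLD:
                yield (ev.node, ev.timestamp, f"CPU {ev.cpu_pct:.0f}%")
            if ev.net_mbps > NET_THRESHOLD:
                yield (ev.node, ev.timestamp, f"network {ev.net_mbps:.0f} Mbps")

    if __name__ == "__main__":
        sample = [
            Event(0.2, "sensor-a", "web-01", 42.0, 120.0),
            Event(0.4, "sensor-b", "web-01", 95.0, 910.0),  # both thresholds
            Event(1.7, "sensor-a", "db-01", 55.0, 300.0),
        ]
        for node, ts, reason in stress_alerts(sample):
            print(f"t={ts:.1f}s {node}: {reason}")
        print(f"{len(correlate(sample))} correlated groups")

A real toolkit would replace the hard-coded ceilings with per-system acceptable thresholds supplied by orchestrators, but the grouping-by-node-and-window step is the core of cross-collection-point correlation.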
PHASE I: Describe and develop an open framework, techniques, and tools for the aggregation, visualization, and dynamic presentation of a combination of specific logging functionality, network and node behavior, and attacker starting point, to aid in the real-time security assessment of heterogeneous nodes and diverse network topologies, to include cloud-based assessments.

PHASE II: Develop, implement, and validate a prototype system that leverages the capabilities of Phase I. The prototype should produce an adequate representation of the factors in an attack and give proper coverage of the target environment and the functional goals of the assessment. Accuracy across multiple runs should yield a solution that can be replayed in real or near real time to aid security members from multiple teams (i.e., Red/Blue/White), easing the assessment of the security claims of the system and ensuring the safety of military operations.

PHASE III DUAL USE APPLICATIONS: Assessments of distributed systems help assure critical mission technologies. A standardized, automated evaluation of the critical components of a security assessment, without manual log and data gathering post-exercise, will give security assessment teams and system designers the ability to alter testing specifics based on a better, more focused representation of the state of the system during attack. This will allow members of the defense industry and the commercial domain to conduct a more thorough evaluation of their existing systems and will result in visualized awareness regardless of the order of magnitude of the logging. A focus on log interpretation and integration from disparate sources has the potential to become a central service in the civilian domain, where real-time situational awareness at the system level is mandatory.
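
As a minimal sketch of the Phase I aggregation idea and the Phase II replay requirement, the following Python fragment merges per-source log streams (each already sorted by timestamp) into one unified timeline and emits it at an adjustable pace, approximating a near-real-time replay for Red/Blue/White team review. The log record format and the pacing scheme are illustrative assumptions.

    # Minimal sketch: merge sorted per-source log streams into one timeline
    # and replay it with pacing that approximates the original run.
    # The (timestamp, node, message) record format is an assumption.
    import heapq
    import time

    def replay(streams, speed=1.0):
        """Merge sorted (timestamp, node, message) streams and emit records
        in order, sleeping between records to approximate original pacing
        (speed > 1 runs faster than real time)."""
        merged = heapq.merge(*streams, key=lambda rec: rec[0])
        prev_ts = None
        for ts, node, message in merged:
            if prev_ts is not None:
                time.sleep(max(0.0, (ts - prev_ts) / speed))
            prev_ts = ts
            print(f"[{ts:8.3f}] {node}: {message}")

    if __name__ == "__main__":
        firewall = [(0.10, "fw-01", "blocked inbound 10.0.0.5:445"),
                    (2.30, "fw-01", "allowed outbound 10.0.0.9:443")]
        host = [(0.50, "web-01", "privilege escalation attempt (uid 1001 -> 0)"),
                (1.90, "web-01", "service restart: httpd")]
        replay([iter(firewall), iter(host)], speed=10.0)  # 10x real time

Merging at the timestamp level is what lets disparate sources (firewall, host, cloud) appear as a single coherent record of the exercise; a production framework would add clock alignment across collection points and a visualization layer on top of the merged stream.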