Digital Twins & the Defense Industry’s Digital Transformation

Posted April 30, 2023 | Technology | Amplify
AMPLIFY, VOL. 36, NO. 4
ABSTRACT
Alexander Weber discusses the use of digital twins in radar systems. It is a good example of using digital twins both to simulate products that are costly to build (especially if they are built incorrectly) and to address compliance requirements. Weber explains how the model was verified and how closely the simulated data corresponds to the real data.


Digital twins are becoming more commonplace in the defense industry. One example is a digital twin for solid-state radar developed by a US Department of Defense (DoD) contractor to assess high-fidelity search-and-track radar-performance metrics. The metrics are comparable to those of the deployed system the digital twin represents (referred to here as the “end item”), reducing testing costs for both the contractor and DoD.

Cost savings from the radar digital twin (RDT) stem from the ability to digitally assess end item software, verify system-level requirements, support pre-mission analysis and events without hardware, and conduct virtual warfighter training and exercises.

These activities have saved millions of dollars, savings that increase year after year and can be reinvested in product development. Expansion and adoption of these types of digital twins are expected to support additional cost savings and faster product delivery as they evolve (see Figure 1).

Figure 1. Digital twin overview

RDT model development costs less than traditional modeling and simulation (M&S). RDT’s use of wrapped tactical code (WTC) methodology is also an advantage: it keeps development in lockstep with the development of the end item system. (Note that “tactical code” refers to the end item software.)
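
To make the WTC concept concrete, the Python sketch below illustrates the pattern: the tactical code is left untouched, and a thin wrapper feeds it simulated measurements instead of live hardware input. All names here (e.g., `tactical_track_update`) are hypothetical stand-ins; the actual RDT code and interfaces are not public.

```python
# Minimal sketch of the wrapped-tactical-code idea: the unmodified end item
# software (represented by `tactical_track_update`, a hypothetical placeholder)
# is driven by a simulation harness instead of live hardware input.

def tactical_track_update(state: dict, measurement: dict) -> dict:
    """Placeholder for the real end item tracking code (unchanged by the twin)."""
    alpha = 0.8  # illustrative smoothing gain, not a real radar parameter
    state["range_m"] = alpha * state["range_m"] + (1 - alpha) * measurement["range_m"]
    return state

class WrappedTacticalCode:
    """Simulation wrapper exposing the same call pattern the deployed system uses."""
    def __init__(self):
        self.state = {"range_m": 10_000.0}

    def step(self, simulated_measurement: dict) -> dict:
        # The wrapper only marshals simulated data into the tactical code;
        # the tactical logic itself is identical to the fielded build.
        self.state = tactical_track_update(self.state, simulated_measurement)
        return self.state

twin = WrappedTacticalCode()
for t in range(3):
    print(twin.step({"range_m": 10_000.0 - 50.0 * t}))
```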

RDT provides various configurations to support multiple use cases. Its interface is identical to that of the end item radar system, so it supports interoperability with defense system (DS) models that adhere to the end item interfaces. RDT supports a high-fidelity, non-real-time configuration, which is used for singleton radar simulations (using parallelized Monte Carlo simulations1) and for integration into traditional M&S.

RDT also supports a medium-fidelity, real-time configuration. This configuration is used for pairwise testing with other DS models and as a substitute for the end item radar in interface-level testing and assessments. The digital twin’s operators include the prime radar contractor, sister contractors that develop DS models, and warfighters (primarily pilots and sailors).

A deployed radar system requires multiple cabinets of equipment to keep up with demanding real-time requirements. RDT’s hardware requirements are lower because they’re based on the computing resources needed for the deployed radar systems minus the resources needed for redundancy and processes not required in the twin, such as calibration. This allows RDT to be run in both a real-time and a non-real-time configuration on a single computer with a single software build.

This configuration can be scaled to execute multiple instances of the RDT and run in parallel in a clustered or cloud computing environment to allow for Monte Carlo statistical variation and support high volumes of data. This data is used to assess the radar performance across hundreds or thousands of simulations. Note that RDT is platform-agnostic; it can be run on any machine that meets RDT’s minimum hardware requirements. Cloud computing solutions are the most popular choice for running RDT.
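
The parallelization pattern can be pictured with the minimal sketch below, which assumes a hypothetical `run_rdt` entry point that executes one non-real-time simulation per random seed; a production setup would dispatch runs to cluster or cloud workers rather than local processes.

```python
# Sketch of parallelized Monte Carlo runs: many independent simulations,
# each with its own random seed, executed concurrently and aggregated.
import random
from multiprocessing import Pool

def run_rdt(seed: int) -> float:
    """Stand-in for one RDT simulation; returns a notional detection range (km)."""
    rng = random.Random(seed)
    return 180.0 + rng.gauss(0.0, 5.0)  # illustrative statistical variation only

if __name__ == "__main__":
    seeds = range(1000)  # hundreds or thousands of independent runs
    with Pool() as pool:
        results = pool.map(run_rdt, seeds)
    mean = sum(results) / len(results)
    print(f"mean detection range over {len(results)} runs: {mean:.1f} km")
```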

RDT’s Modular Open Systems Approach (MOSA) architecture allows for the construction of additional digital twins.2 The additional twins represent variations of the radar system with few changes to source material. Various radar components can be swapped depending on the configuration the user wants to test. Components of RDT are also swappable with traditional M&S components.
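
A simple way to picture the swappable-component idea is a common interface that each subsystem variant implements; the class and method names below are illustrative only, not the contractor’s actual architecture.

```python
# Sketch of MOSA-style component swapping: each radar subsystem implements a
# shared interface, so a high-fidelity model, a faster medium-fidelity variant,
# or a traditional M&S component can be substituted without touching the rest
# of the twin.
from abc import ABC, abstractmethod

class SignalProcessor(ABC):
    @abstractmethod
    def process(self, returns: list[float]) -> float: ...

class HighFidelityProcessor(SignalProcessor):
    def process(self, returns: list[float]) -> float:
        # placeholder for detailed pulse integration
        return sum(returns) / len(returns)

class MediumFidelityProcessor(SignalProcessor):
    def process(self, returns: list[float]) -> float:
        # cheaper approximation, as used in the real-time configuration
        return max(returns)

def build_twin(processor: SignalProcessor):
    """Assemble a twin around whichever component variant was supplied."""
    return lambda returns: processor.process(returns)

twin_rt = build_twin(MediumFidelityProcessor())  # real-time configuration
twin_hf = build_twin(HighFidelityProcessor())    # non-real-time configuration
print(twin_rt([0.2, 0.9, 0.4]), twin_hf([0.2, 0.9, 0.4]))
```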

This blend of end item–deployed code and M&S models facilitates issue isolation and identification in both models. The end item–deployed code uses the same input as a system integration lab or test environment, so issues identified in fielded configurations can be easily tested with RDT.

In addition, RDT outputs the same data as the deployed system, due to the WTC methodology. This allows the same debugging tools, strategies, and procedures developed for the deployed radar system to be used with RDT, supporting reuse.

RDT provides quick assessment of the end item–deployed software in a simulated environment. Since RDT uses WTC methodology, the behavior of the end item system, including its features and bugs, is present within RDT. RDT builds are produced in lockstep with the deployed system.

RDT’s ability to mirror the behavior of the deployed system has been a leading driver of ROI. For example, RDT builds can be deployed at the same time as, or sooner than, the builds that reach the laboratory floor. RDT loads are regression tested with the same inputs as the builds in the deployed configuration. Crashes, software bugs, and radar-performance issues associated with these loads are thus identified earlier in the test cycle for the end item system. Early identification means issues can be resolved with less effort (versus staffing a laboratory of engineers to run the same inputs as RDT).
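
The regression workflow might look something like the following sketch, which replays a recorded scenario set against a new RDT load and flags crashes or metric drift; the command-line interface, file layout, and metric names are assumptions for illustration.

```python
# Hedged sketch of load regression: rerun recorded inputs against each new
# RDT build and surface problems before the build reaches the lab floor.
import json
import subprocess
from pathlib import Path

def execute_load(rdt_binary: str, scenario: Path) -> dict:
    """Run one recorded scenario through an RDT build; return parsed metrics."""
    proc = subprocess.run([rdt_binary, "--scenario", str(scenario)],
                          capture_output=True, text=True)
    if proc.returncode != 0:
        return {"crashed": True}  # crash caught before the lab sees it
    return {"crashed": False, **json.loads(proc.stdout)}

def regression_test(rdt_binary: str, scenario_dir: Path,
                    baseline: dict[str, float]) -> bool:
    """Replay every recorded scenario; flag crashes and metric drift."""
    passed = True
    for scenario in sorted(scenario_dir.glob("*.scn")):
        result = execute_load(rdt_binary, scenario)
        if result["crashed"]:
            print(f"CRASH: {scenario.name}")
            passed = False
        elif abs(result["track_error_m"] - baseline[scenario.name]) > 1.0:
            print(f"DRIFT: {scenario.name}")  # performance regression
            passed = False
    return passed
```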

Issue identification using RDT is more sophisticated than identifying problems in single-input situations. Radar-performance metrics require far more than one laboratory test to assess. A typical test input in the end item configuration runs for tens of minutes and requires multiple engineers to process the data. This process is often repeated to obtain a larger sample size.

RDT’s parallelization feature allows hundreds or thousands of these simulations to be completed in far less time than the end item system requires (on the order of 100x less). Since RDT exists in a digital-only environment, it can easily be paired with automation. The output of these parallelized simulations can be fed into analysis scripts that assess key radar metrics across the distribution of data.
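
As a rough illustration of those analysis scripts, the sketch below aggregates a key metric across many run outputs and flags statistical outliers, the kind of edge case a single laboratory run would likely miss; the JSON-lines output format is an assumption for the example.

```python
# Sketch of distribution-level analysis over parallel simulation outputs.
import json
import statistics
from pathlib import Path

def collect_metric(run_dir: Path, metric: str) -> list[float]:
    """Gather one metric from every parallel run's JSON-lines output."""
    values = []
    for output in run_dir.glob("run_*.jsonl"):
        for line in output.read_text().splitlines():
            values.append(json.loads(line)[metric])
    return values

def flag_outliers(values: list[float], k: float = 3.0) -> list[float]:
    """Return samples more than k standard deviations from the mean."""
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)
    return [v for v in values if abs(v - mu) > k * sigma]
```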

This has led to the discovery of harder-to-find edge-case issues in the end item software, saving thousands of person-hours. Figure 2 shows RDT Monte Carlo simulation output for low-level radar-performance metrics. The colored curves are 10 individual simulations, run in the clustered cloud computing configuration, and the black curve is reference data from the end item radar system. The figure shows how closely RDT tracks the end item radar system.

Figure 2. 10 RDT Monte Carlo simulations (shown in color) vs. referent deployed radar system (shown in black) vs. time

Facilitating Radar System Verification

DoD works with its contractors to define radar system requirements, which can range from high-level functionality to low-level, radar-specific performance metrics. RDT’s use of WTC methodology lets potential buyers observe radar functionality, as written in the requirements, within the twin. Government personnel can assess and observe the behavior of proposed radar systems at their own sites without the need to buy hardware and without support from the contractor.

Formal verification of radar system requirements requires a government witness and an assessment of the data produced during testing events. Certain requirements cannot feasibly be tested using the end item radar system for several reasons, including the amount of data required for verification and hardware limitations. RDT excels in these situations — it is inherently not hardware-limited due to its ability to be clustered and run in non-real-time.

Radar system-level performance requirements have now been verified solely using RDT, and verification test procedures for newer radar systems’ requirements are being written to include it. For requirements to be verified using RDT, the model must first be validated, understood, and accepted. RDT must be compared to the deployed end item radar system and match all customer-defined criteria. These criteria range from low-level sensor measurements to high-level metrics involving the objects identified by the radar. Validation requires data from live data-collection events. Once validated, RDT is accepted by the government and used in additional federated models to represent the end item radar configuration. At this point, RDT can be used for event pre-mission analysis.
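
The validation comparison itself can be pictured as a tolerance check of RDT output against live reference data sampled on the same time base; the tolerance and metric values below are placeholders, not the actual customer-defined criteria.

```python
# Minimal sketch of the validation check: every RDT sample must fall within
# a customer-defined band around the live reference data.
def validate(rdt_series: list[float], reference: list[float],
             tolerance: float) -> bool:
    """True if every RDT sample is within `tolerance` of the live data."""
    if len(rdt_series) != len(reference):
        raise ValueError("series must be sampled on the same time base")
    return all(abs(r - ref) <= tolerance
               for r, ref in zip(rdt_series, reference))

# Example: a notional low-level metric sampled at four time steps
print(validate([1.01, 0.98, 1.02, 1.00],
               [1.00, 1.00, 1.00, 1.00],
               tolerance=0.05))  # True
```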

Figure 3 shows the output of the end item hardware-in-the-loop radar system with RDT output overlaid. Both charts show low-level radar metrics as a function of time. We can see that the curves follow the same trends. Closer analysis shows that RDT is within the acceptability criteria for formal validation as specified by DoD.

Figure 3. Example RDT data vs. referent deployed radar system data vs. time

For testing purposes, DoD conducts events using live equipment so potential radar systems can be observed. Pre-mission analysis occurs before these events, which take place over months, with DoD and contractors working in tandem to ensure that the systems will be ready and the event will be successful. RDT is now heavily used during this lead-up to generate accurate data representing the mission; its WTC methodology allows developers to assess performance and to identify and resolve issues in the end item system before the event.

Traditional M&S is commonly used in pre-mission analysis, but it does not use WTC methodology, introducing risk. Historically, traditional M&S has been used to develop the design for a radar system. As the design matured, the end item–deployed software would enter development, trailing the traditional M&S. In preparation for an event, teams would use Monte Carlo statistical-variation simulations of the traditional M&S to assess potential performance. This leaves the end item software untested to the same degree; under this approach, teams have identified issues in the end item software late in development.

In recent years, the digital twin has been maintained during the development of a radar system, including through to the pre-mission analysis. Having RDT as another test point helps the development team identify issues earlier and gives the teams a more accurate look at the expected radar performance before execution. A blend of traditional design with a digital twin facilitates faster development across most systems engineering development phases.

Digital-Only Missions

Given the cost of months-long test events, DoD is now investing in digital-only missions. These events use a federated model to create interoperability between various parts of the DS. The federated model components include RDT along with digital twins for other mission participants.

The digital missions provide the same level of configurability and output as the end item system, execute in non-real-time, and require only a single operator to configure the system. The rest of execution is done overnight on cloud-based computers. The resulting data is assessed using automated scripting solutions, and information on issues flows to each digital twin development team.
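
In spirit, the single-operator workflow reduces to configure once, execute unattended, and route findings automatically. The sketch below is a deliberately simplified stand-in for that loop; the component names and issue records are hypothetical.

```python
# Sketch of the digital-mission loop: unattended federated execution followed
# by automated routing of findings to each twin's development team.
def run_federated_mission(config: dict) -> list[dict]:
    """Stand-in for the overnight cloud execution; returns issue records."""
    return [{"component": "rdt", "summary": "track drop at handover"},
            {"component": "battle_manager", "summary": "late cue message"}]

def route_issues(issues: list[dict], owners: dict[str, str]) -> None:
    """Send each finding to the team that owns the affected twin."""
    for issue in issues:
        team = owners.get(issue["component"], "unassigned")
        print(f"-> {team}: {issue['summary']}")

issues = run_federated_mission({"scenario": "digital_mission_01"})
route_issues(issues, {"rdt": "radar team", "battle_manager": "BM team"})
```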

During these digital events, the teams have identified deployed radar system software issues in a purely digital environment, marking a milestone in DoD’s movement toward a digitally transformed work environment.

Integration with Other Defense Systems

RDT’s real-time configuration is an important tool for interface-level testing with other digital twins or M&S models. RDT’s MOSA architecture allows radar components to be swapped with M&S models or lower-fidelity variants. RDT uses a medium-fidelity signal-processing model during real-time configuration to ensure simulations are running on time. This version of RDT can be delivered to the other elements of the DS, where radar-to-radar and radar-to-command-center interoperability is tested. This configuration has led to the discovery of numerous software issues in product interfaces.

This real-time configuration can be scaled to represent the radar system in laboratory environments. In situations where hardware is limited or under contention, RDT can be used in place of the end item system. This configuration, referred to as “tactical representation mode,” is just now being adopted in government radar programs. As with digital-twin-to-digital-twin interface-level testing, RDT is used in tactical representation mode to test the interface of the radar with the end item code from other products, such as a battle manager, another radar, or a command center.

Whereas laboratories often use tools to simulate operator button presses, tactical representation mode incorporates that automation into the model. RDT in tactical representation mode is a new configuration in this environment, and it offers the same interface and configurability as the deployed system without the need for large amounts of computing resources.
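
The difference can be illustrated with a model that accepts scripted operator commands on the same interface the deployed console exposes, rather than relying on an external button-press simulator; the command names below are hypothetical.

```python
# Sketch of operator automation built into the model itself: scripted
# commands arrive on the same command surface the deployed console uses.
class RdtTacticalRepMode:
    def __init__(self):
        self.radiating = False

    def operator_command(self, command: str) -> str:
        """Handle a console-style command; names are illustrative only."""
        if command == "RADIATE_ON":
            self.radiating = True
        elif command == "RADIATE_OFF":
            self.radiating = False
        return f"{command} -> radiating={self.radiating}"

twin = RdtTacticalRepMode()
for cmd in ["RADIATE_ON", "RADIATE_OFF"]:  # scripted operator inputs
    print(twin.operator_command(cmd))
```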

Conclusion

Digital twins are an important part of a larger push toward digital transformation in the defense industry. Although their initial development was challenging and adoption has been slower than desired, more industry players are now recognizing their potential and investing in them.

Since its development, RDT has provided millions of dollars of benefits each year, leading its developer to believe that RDT, and digital twins in general, will eventually become part of everyday development and data analysis at DoD. RDT’s ability to identify design issues, participate in events, and facilitate requirements verification has more than proven its worth.

References

1 “What Is the Monte Carlo Simulation?” Amazon Web Services (AWS), accessed April 2023.

2 “Modular Open Systems Approach.” Office of the Under Secretary of Defense, Research and Engineering, US Department of Defense (DoD), accessed April 2023.

About The Author
Alexander Weber
Alexander Weber is a Senior Engineer at Lockheed Martin Rotary Mission Systems–Radar Sensor Systems, where he leads the Solid State Radar Digital Simulation Team. Mr. Weber created a radar digital twin that has been provided to the US Department of Defense (DoD) and is used in high-fidelity radar performance predictions. His interests center on expanding digital twin use cases into additional aspects of radar development.