
Exploring Digital Twin Potential for Energy & Defense

Posted April 30, 2023 | Technology | Amplify
  
ABSTRACT
Jason Radel explores a digital twin framework for the ingestion, management, and visualization of digital twins, integrating light detection and ranging data, photographs and scans, and other engineering documents. The article includes case studies from the energy and defense sectors, demonstrating how such an approach can be used to manage digital twins across industries.

 

To demonstrate how powerful a tool digital twins are becoming, this article examines a digital twin framework that was used to create, adjust, and deploy a digital twin of a nuclear power plant in the Middle East and another of a North Atlantic Treaty Organization (NATO) member's naval ship.

The digital twin framework can ingest multiple forms of 3D models/3D data and convert them to a standardized format. This allows any twin created using the framework to be modified without changing the original data, creating an environment in which ingested data can quickly be used to create new applications.

Creating a Nuclear Power Plant Digital Twin

A nuclear power plant wanted a digital twin to use for virtual training, a process that involved combining engineering drawings with scans of the plant; developing a storage method conducive to additions, changes, and variations; producing a visualization via immersive display systems; and creating a platform capable of rapidly crafting training scenarios. The digital twin framework consists of three parts: a digital twin creator to ingest and contain digital twin assets, a digital twin application where assets could be used to create new applications, and a digital twin manager to visualize solutions (see Figure 1).

Figure 1. The three-part digital twin framework: the digital twin creator ingests many types of user data and stores it in a common format; the digital twin application creates applications and can interface with external simulation or other services, such as learning management systems; the digital twin manager handles the visualization of the digital twin on a variety of hardware

The engineering drawings supplied by the power plant comprised more than 20,000 .i.dgn files, around 10,000 of which were tagged with a variety of metadata. The files contained nearly 10,000 pieces of equipment and more than 100,000 valves and instruments in total, all of which needed to be tagged, searchable, and viewable within the digital twin.

Light detection and ranging (LiDAR) data of one of the reactor facilities and surrounding buildings was obtained using Leica RTC360 laser scanners. These instruments collect LiDAR data and, immediately after each measurement, capture a series of high-dynamic-range photographs that the scanner uses to create a 360-degree panoramic photo. These panoramas are referred to as "sphere maps" because they are overlaid onto the interior of spheres in the 3D space for viewing. The individual point clouds, inherently colorless due to the measurement process, can be colorized by mapping photograph pixels to point cloud positions.
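
To make the pixel-to-point mapping concrete, the sketch below colorizes a point cloud from a panoramic image. It is a minimal illustration, not the Leica pipeline: the equirectangular panorama format, known scanner pose, and array shapes are all assumptions.

```python
import numpy as np

def colorize_points(points, pano, scanner_origin):
    """Assign each point the color of the panorama pixel its ray passes through.

    points:  (N, 3) XYZ positions in the scanner's frame (meters)
    pano:    (H, W, 3) equirectangular sphere-map image (uint8 RGB) -- assumed format
    scanner_origin: (3,) scanner position in the same frame
    """
    h, w, _ = pano.shape
    rel = points - scanner_origin                              # rays from scanner to points
    r = np.linalg.norm(rel, axis=1)
    azimuth = np.arctan2(rel[:, 1], rel[:, 0])                 # -pi..pi around the vertical axis
    elevation = np.arcsin(np.clip(rel[:, 2] / r, -1.0, 1.0))   # -pi/2..pi/2

    # Equirectangular projection: longitude -> column, latitude -> row
    u = ((azimuth + np.pi) / (2 * np.pi) * (w - 1)).astype(int)
    v = ((np.pi / 2 - elevation) / np.pi * (h - 1)).astype(int)
    return pano[v, u]                                          # (N, 3) per-point RGB
```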

More than 11,000 scans/photographs were captured at the scanners' full resolution, yielding more than 11,000 colorized point clouds of approximately 4 gigabytes each. The scans were organized by building, by building level, and (in some cases) by quadrant within a level. Using Leica Cyclone software, these smaller point cloud groups were combined into a single point cloud through a process known as "point cloud registration." The merged cloud was then down-sampled by removing points closer than 2 mm to one another (rather than down-sampling by measurement angle), so objects far from the scanners retained full resolution while the resolution of objects close to the scanner was reduced, shrinking the overall data size.
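
Registration itself was done in Leica Cyclone, but the two operations described above are easy to sketch with the open source Open3D library. The file names and the coarse pre-alignment are assumptions, and a 2 mm voxel grid is used here to approximate the 2 mm minimum-spacing rule.

```python
import numpy as np
import open3d as o3d

# Hypothetical scan files; the real project registered clouds in Leica Cyclone.
source = o3d.io.read_point_cloud("scan_001.ply")
target = o3d.io.read_point_cloud("scan_002.ply")

# Refine an assumed rough alignment with point-to-point ICP.
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.05,   # 5 cm correspondence search radius
    init=np.eye(4),                     # assumes the scans start roughly aligned
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
source.transform(result.transformation)

# Merge, then thin to roughly 2 mm minimum point spacing: a 2 mm voxel grid
# collapses dense near-field points while leaving sparse distant geometry intact.
merged = source + target
thinned = merged.voxel_down_sample(voxel_size=0.002)
```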

The next step was converting all the data into a common format in the digital twin framework. A tool was created to convert a variety of data types into a common format conducive to graphical rendering. The conversion was done using a series of sequential steps, visualized in the user interface (UI) as colored node blocks wired together. These node networks can be saved for later reuse or used as starting points for other, similar data in the future. The process of converting different data types into that framework data format is called “data ingestion.”
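
As a rough illustration of the node-network idea, the sketch below chains named steps into a reusable pipeline. The step names and payloads are invented for illustration; the real tool ingests CAD and LiDAR data through a graphical UI.

```python
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class Node:
    """One block in the ingestion UI: a named, reusable conversion step."""
    name: str
    run: Callable[[Any], Any]

class Pipeline:
    """A chain of nodes wired together; saved pipelines can be reused on similar data."""
    def __init__(self, nodes: List[Node]):
        self.nodes = nodes

    def __call__(self, data: Any) -> Any:
        for node in self.nodes:          # each node feeds its output to the next
            data = node.run(data)
        return data

# Hypothetical three-step ingestion: parse -> tag -> convert to the common format.
ingest = Pipeline([
    Node("parse",   lambda raw: {"geometry": raw}),
    Node("tag",     lambda d: {**d, "tags": ["unit-1", "valve"]}),
    Node("convert", lambda d: {**d, "format": "framework-common"}),
])
print(ingest("raw .dgn bytes"))
```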

The ingested data was stored with its 3D models, material properties, and other metadata. The data could then be viewed in a 3D viewer, with textures and lighting properties automatically applied to all like materials (see Figure 2).

Figure 2. Imported CAD model (left), with material properties tagged (middle), and with final lighting/shading added (right)

After the computer-aided design (CAD) and LiDAR data was ingested, it was aligned and oriented in a common coordinate system, including a common origin, scale, and orientation. Each individual point in a point cloud has an accuracy of roughly ±2 mm, so combining two scans adds inaccuracy to the resulting data. Although the inaccuracy is small, it accumulates as more point clouds are registered together. For example, the width of a building level measured using 100 or more point cloud scans could be off by as much as half a meter. When the point cloud was overlaid onto the CAD model, this offset created an inaccurate association between the two, which was significant when switching between CAD and LiDAR data views.

To fix the inaccuracies between the CAD data and point cloud, the point cloud was “stitched” onto the CAD using foundational components, such as walls, floors, and doorways.
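
The article does not detail the stitching algorithm, but one standard way to compute such an alignment is a best-fit rigid transform (Kabsch/Procrustes) between matched landmarks, such as corners of walls, floors, and doorways identified in both the point cloud and the CAD model. A minimal sketch, assuming matched landmark arrays are already in hand:

```python
import numpy as np

def rigid_align(cloud_pts: np.ndarray, cad_pts: np.ndarray):
    """Best-fit rotation R and translation t mapping cloud_pts onto cad_pts.

    Both inputs are (N, 3) arrays of matched landmark positions, e.g.,
    wall corners and doorway edges found in both data sets."""
    pc, qc = cloud_pts.mean(axis=0), cad_pts.mean(axis=0)
    H = (cloud_pts - pc).T @ (cad_pts - qc)      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = qc - R @ pc
    return R, t                                  # apply as: R @ p + t
```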

The result was a dramatic improvement in accuracy between the CAD model and the point cloud data. The process also effectively located and oriented the panoramic photographs taken from the 3D scanner with respect to the CAD data.

This resulted in a CAD model, point cloud data, and sphere maps with a common coordinate system and scaling. This enabled smooth switching between various data views, including point cloud data, sphere maps, CAD data, or a mixture of these. Figure 3 shows a hybrid view of a sphere map and point cloud data. Distances close to the viewer show sphere map photographs; distances farther from the viewer show point cloud data.
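
The hybrid view boils down to a per-point distance test against the viewer. A toy version of that selection, using the 5-meter cutoff shown in Figure 3 (the array layout is an assumption):

```python
import numpy as np

def split_by_range(points: np.ndarray, viewer_pos: np.ndarray, cutoff_m: float = 5.0):
    """Partition scene points around the viewer: sphere-map imagery renders
    inside cutoff_m, and the colorized point cloud renders beyond it."""
    dist = np.linalg.norm(points - viewer_pos, axis=1)
    near = dist <= cutoff_m
    return points[near], points[~near]   # (sphere-map zone, point-cloud zone)
```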

Figure 3. A common coordinate system lets users view hybrid scenes of photographs/point cloud/models in 3D; the scenery within 5 meters of the viewer (up to the red dotted lines) is captured by camera; everything beyond 5 meters changes to point cloud data taken from LiDAR scanners

To make the digital twin functional, the digital twin designers worked with a partner to connect the 3D model to simulation software, allowing the digital twin to change based on the data from the simulation.

Storing the Digital Twin

The philosophy behind storing the data in the digital twin framework was based on two points:

  1. Store data such that the original data cannot be altered or corrupted. This prevents the need to recollect or re-ingest large data sets.

  2. Store the data such that it can be readily modified, copied, or removed from the digital twin.

The data was stored in layers, with the original data set serving as the base layer and layers of modifications applied sequentially to the base. For example, a cabinet could be modified by adding a layer to the cabinet asset that applied burn marks onto its exterior, and another layer could be added to swap components in the cabinet with melted or destroyed components. The final asset could be used in appropriate training scenarios, and the original cabinet asset could be easily recovered at any time for use in another scenario.
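
The cabinet example maps naturally onto a small data structure: an immutable base asset plus an ordered stack of modification layers. A minimal sketch (the asset fields and layer contents are invented for illustration):

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Asset = Dict[str, object]

@dataclass
class LayeredAsset:
    """An immutable base plus an ordered stack of modifications.

    The base is never mutated, so the original ingest never needs to be
    recollected, and any variant can be rebuilt or discarded at will."""
    base: Asset
    layers: List[Callable[[Asset], Asset]] = field(default_factory=list)

    def resolve(self) -> Asset:
        asset = dict(self.base)          # work on a copy; keep the base pristine
        for layer in self.layers:
            asset = layer(asset)
        return asset

# The article's cabinet example: burn marks, then melted internal components.
cabinet = LayeredAsset(base={"mesh": "cabinet.glb", "material": "painted_steel"})
cabinet.layers.append(lambda a: {**a, "decals": ["burn_marks"]})
cabinet.layers.append(lambda a: {**a, "components": "melted_set"})
damaged = cabinet.resolve()              # variant for the fire-training scenario
pristine = cabinet.base                  # original asset, untouched
```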

The physical storage was done on a series of servers, with data primarily organized by reactor unit, then by building level, then by sectors on these levels. Asset bundles could be created to group systems together (e.g., pipes, valves, and other components connected to the plant’s turbine generator system).

This enabled views of isolated systems, a powerful training tool. For example, the turbine generator and all supporting components could be viewed together, with the rest of the plant components hidden. These systems could also be rendered with semitransparent walls, making it easier to view their internal workings.

Visualizing the Digital Twin

3D data, especially complex 3D data, is best viewed on 3D displays. Three solutions were developed for visualizing the digital twin: large flat-panel displays, head-mounted displays, and an immersive theater.

The first viewing option was four 60-inch LCD screens tiled in a 2x2 configuration. These were arrayed on the wall of a classroom at the plant and used for instructional purposes. To provide a sense of immersion, the field of view rendered onto the four screens was set equal to the angle the display wall subtends for a viewer seated in the center of the classroom. Although not true 3D, this provided the sense of looking out a window when viewing the 3D scenery, adding to the sense of realism (see Figure 4).
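
The matching angle is simple trigonometry: a display wall of width w viewed from distance d subtends 2·arctan(w/2d). A quick sketch with hypothetical classroom numbers:

```python
import math

def subtended_fov_deg(wall_width_m: float, viewing_distance_m: float) -> float:
    """Horizontal angle a flat display wall subtends at the viewer's eye;
    using this as the render camera's FOV makes the wall read like a window."""
    return math.degrees(2 * math.atan(wall_width_m / (2 * viewing_distance_m)))

# Hypothetical numbers: two 60-inch 16:9 panels side by side are ~2.66 m wide;
# from a seat 4 m away they subtend about 37 degrees horizontally.
print(round(subtended_fov_deg(2.66, 4.0), 1))   # -> 36.8
```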

Figure 4. A view of the digital twin; information related to the valve in cyan displays in a pop-up dialog box when a user clicks on the valve

The second visualization solution involved an industrial virtual reality headset with a foveated display. The headset tracks the user's gaze and uses a beam splitter to combine a small, high-resolution display with a larger, standard-resolution display covering the viewer's periphery, resulting in increased visual acuity.1 The display requires powerful graphics cards, but the results are excellent.

The final visualization solution was Station IX, an immersive theater (see Figure 5). The display is 22 feet in diameter and consists of seven curved mirrors combined with projectors and a curved central display screen. The combination of projection imagery and off-axis curved mirrors provides 3D immersive visuals to groups of people located in the center.2 The system provides highly effective 3D depth perception for distances of 5 feet or more from the user without headgear.

Figure 5. The 22-foot-diameter immersive theater uses curved mirrors and projectors combined with a curved front projection screen to provide immersive 3D visuals for up to five users

Creating Digital Twin Applications

The first application for the digital twin involved connecting it to the plant’s physical control room simulator, which was created for training exercises. Because our digital twin interfaced with the same simulation, we could connect the physical control room simulator to the digital twin. Changes made in the control room triggered changes in the digital twin that could be visualized virtually in real time. Changes made to the digital twin resulted in changes to instrument readings, alerts, and other data in the control room. For example, an application was created in which a user could virtually perform maintenance on a critical piece of equipment, and if the user made a mistake, alarms and alerts triggered in the physical training control room, providing a useful training tool.
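
Architecturally, this kind of bidirectional link can be modeled as a shared event bus that both the control room simulator and the twin publish to and subscribe on. A minimal sketch (the topics and payloads are invented; the actual integration went through the plant's simulation software):

```python
from collections import defaultdict
from typing import Callable, DefaultDict, Dict, List

Handler = Callable[[Dict[str, str]], None]

class SimulationBus:
    """Two-way event bridge between the control room simulator and the twin."""
    def __init__(self) -> None:
        self._subs: DefaultDict[str, List[Handler]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Handler) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, event: Dict[str, str]) -> None:
        for handler in self._subs[topic]:
            handler(event)

bus = SimulationBus()
# The twin reacts to operator actions in the physical control room...
bus.subscribe("control_room/valve_closed", lambda e: print("twin: animate valve", e["tag"]))
# ...and the control room raises alarms when a virtual maintenance step goes wrong.
bus.subscribe("twin/maintenance_error", lambda e: print("control room: alarm", e["code"]))

bus.publish("control_room/valve_closed", {"tag": "V-1021"})
bus.publish("twin/maintenance_error", {"code": "HI-PRESS"})
```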

The second application involved providing software tools for plant engineers to create custom applications using the digital twin, for training or other purposes. A set of software tools allowed users to create applications by importing or altering parts of the digital twin, triggering events, and tasking the user to do a certain action or view certain components.

The UI for this tool was designed to be usable by employees with minimal software development experience. Tasks to perform, events that trigger at certain moments, and pop-up messages and dialogs are represented as colored nodes in the UI. For example, an emergency response training exercise could be created by selecting nodes that represent the following actions:

  • Dialog prompt node — prompts user to enter control room

  • Location node — control room location

  • Fire alarm node — fire alarm sounds and lights turn on

  • Location node — exit door location

By wiring these nodes sequentially, a user running this scenario is first prompted to enter the control room. After the user reaches the control room, the fire alarm sounds, and lights are triggered. The exercise ends when the user reaches the correct exit door. Instructors can play back training sessions with students for further discussion.
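
In code, that wiring amounts to stepping through an ordered list of typed nodes. A toy version of the fire-drill scenario above (the node kinds and the print-based "engine" are illustrative only):

```python
from dataclasses import dataclass
from typing import Iterator, List

@dataclass
class ScenarioNode:
    kind: str     # "dialog", "location", or "event"
    detail: str

def run_scenario(nodes: List[ScenarioNode]) -> Iterator[str]:
    """Step through wired nodes in order; a real engine would block each
    location node until the trainee actually reaches that spot."""
    for node in nodes:
        yield f"{node.kind}: {node.detail}"

fire_drill = [
    ScenarioNode("dialog",   "Proceed to the control room."),
    ScenarioNode("location", "control room reached"),
    ScenarioNode("event",    "fire alarm sounds; warning lights on"),
    ScenarioNode("location", "exit door reached; exercise complete"),
]
for step in run_scenario(fire_drill):
    print(step)
```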

In the next phase of this project, the digital twin will be connected to real-time data from the plant so actual data from the plant can be used to further improve the digital twin. Successful completion of this phase will demonstrate the possibility of using the digital twin actively in operations, such as allowing virtual control of select components of the power plant.

In Progress: A Naval Ship Digital Twin

A NATO member country’s navy sought a digital solution to display assets (e.g., naval ships) in a digital format it could use for training, engineering, and operational activities. The project involved creating a tool to support the conversion and management of assets, processes, documentation, and applications into interconnected digital objects, addressing requirements for all lifecycle activities. The tool would also be used during design reviews.

The first phase of the project involves creating a digital twin of a single piece of equipment; the second phase involves creating a digital twin of a naval ship.

The navy provided engineering CAD files for a piece of naval ship equipment in the form of .dgn files. The equipment has roughly 100,000 polygons and consists of between 100 and 300 objects. The data for the ship has more than 1 million polygons and approximately 4,000 objects, each with 30 key metadata attributes. The navy is performing its own LiDAR scanning using Leica RTC360 scanners.

The ingestion process for converting this data into the format used in the digital twin framework was nearly identical to the one used for the nuclear power plant, so those pipelines could be reused with only small alterations. Because the navy intends to perform the ingestion process itself in the future, the UI was improved for ease of use.

Station IX was used to visualize the digital twin for naval leaders, training groups, and design review teams, all of whom can view the digital assets in a highly immersive way.

Two types of digital twin applications are being created for the navy: training and engineering operations. For ship-familiarization training, users will be able to walk through various parts of the ship, click on items to see more information about them, and extend bridges/move cranes to better understand how they work.

For engineering design reviews, teams will be able to view all or part of the digital twins using 3D visualization in Station IX, move and manipulate assets, and view data associated with these assets. They can also compare design drawings to physical parts scanned using LiDAR. In the near future, engineers will be able to edit parts and add metadata and notes to 3D assets; this will become a regular part of design-review meetings.

The Power of Digital Twins Has Not Been Fully Realized

Both projects described in this article demonstrate the power and utility of digital twins, especially when created, stored, and used within a common framework. Digital twins provide immersive, realistic visuals for an assortment of applications, and they can be combined with powerful simulation and control systems as well as real-world data.

However, these applications only scratch the surface of digital twins’ potential. For example, a real-world nuclear control room ingests 5,000 variables and displays this data to plant operators via meters, gauges, and flashing lights. Operators turn switches, knobs, and valves to control the reactor and its related components.

An intermediary that records the inputs and outputs to the control room could send that data to a digital twin so the twin reacts in real time to what the operators are doing and what the reactor is doing.

If an accident occurred that could cause radiation to be released somewhere in the plant, an operator could quickly step into a virtual environment to search for the radiation leak, determine the affected areas, count the number of nearby workers, and measure the amount of radiation in various areas of the plant. Operators could immediately contact workers to provide detailed, accurate information about the situation.

Taking this scenario a step further, the intermediary could circumvent the control room completely, taking direct control of the plant (or controlling a ship remotely) using a digital twin.

References

1 Kappler, Elizabeth, Rosemarie Figueroa Jacinto, and Steve Arndt. “Evaluation of Visual Acuity and Perceptual Field of View Using the Varjo XR-3 Headset in a Virtual Environment.” Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 66, No. 1, October 2022.

2 Radel, Jason C., Victor Belanger-Garnier, and Marc P. Hegedus. “Virtual Image Determination for Mirrored Surfaces.” Optics Express, Vol. 26, No. 3, 2018.

About The Author
Jason Radel
Jason Radel is Chief Scientist for Imagine 4D, a Montreal, Canada–based company that provides solutions for 3D data creation, collection, and visualization, including the Station IX immersive visual system. He has more than 10 years' experience developing new display technologies, including optical systems used for nuclear safety training, flight training, and military simulation and training. Dr. Radel earned a master of science degree and…