Adapting Tactical Fire-Control
Live Training: The Limits of Today
“Train as you fight, fight as you train” is a mantra that has circulated in the live training community for 30+ years. Soldiers’ training should be 100% representative of their actions in the fight; whatever cannot be adequately simulated in training is unlikely to be performed adequately in battle.
Thirty years later, we are still missing the mark. Live force-on-force training, the most realistic and battle-specific training available to soldiers today, is not an accurate simulation of “the fight,” because it does not exercise all the different weapons at the military’s disposal. It includes a reasonable simulation of small-arms engagements using laser-based technologies. However, indirect fire and call-for-fire activities, which are inherently impossible to represent using laser technology, are lost entirely in live force-on-force training.
With current advancements in artificial intelligence, augmented reality, and head-mounted display systems, such as the Integrated Visual Augmentation System (IVAS), there is renewed hope that incorporating more advanced weapons into live training may be possible, and may be right around the corner. However, the successful development and implementation of force-on-force training for weapons such as grenade launchers, mortars, and artillery is a more complex task.
There must be an adequate way to realistically represent all of these weapons during the exercise, including kill/no-kill outcomes, blast effects, unencumbered weapon usage, and accurate assessment of where virtual rounds land in the training environment.
A Problem in Search of a Solution
The incorporation of indirect fire weapons has several stumbling blocks to overcome, but perhaps the most substantial is the ability to predict, with ballistic accuracy, where rounds fired virtually within the exercise will land. Weapons such as mortars and artillery engage at long range and can reach targets miles away. Minor errors in determining the exact direction of fire can lead to significant errors in the computed landing location of virtual rounds, and therefore to improper training.
Over a decade ago, Inertial Labs Inc. was involved in some of the US Army’s earliest efforts to investigate potential methods of integrating indirect fire and counter-defilade weapons into live training. Prototype and demonstration efforts offered the Army a glimpse of possible ways to incorporate augmented and virtual reality into live training specifically for indirect fire weapons. These systems typically paired a fully tracked display technology with a pointing device mounted on the weapon, which together provided the ability to display virtual entities, i.e., blast effects, in the real world.
During these efforts, one thing became abundantly clear: the technology for determining barrel azimuth and elevation for these weapons did not exist. Traditional inertial sensor systems, known as attitude and heading reference systems (AHRS), comprising gyroscopes, accelerometers, and magnetometers, were the center of the initial steps. If the magnetic sensors can be calibrated on the different weapons, they can provide an azimuth reference with respect to true north. Accelerometers give the reference relative to gravity, and gyroscopes provide real-time, high-frequency tracking of orientation changes to follow the weapon through movements.
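To make the AHRS concept concrete, the sketch below shows how a tilt-compensated azimuth can be computed from one accelerometer sample and one magnetometer sample. It is a minimal illustration of the principle, not the WOM’s actual algorithm; the body-frame convention and the declination handling are assumptions for illustration.

```python
import math

def tilt_compensated_azimuth(accel, mag, declination_deg=0.0):
    """Estimate azimuth (deg, clockwise from true north) from one
    accelerometer and one magnetometer sample, both in a body frame
    with x forward, y right, z down (accelerometer reads roughly
    (0, 0, -g) when level). Units cancel, so raw counts work too."""
    ax, ay, az = accel
    # Roll and pitch from the gravity vector (valid at rest / low dynamics).
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, -az)
    mx, my, mz = mag
    # De-rotate the magnetic measurement into the locally level frame.
    xh = (mx * math.cos(pitch) + my * math.sin(pitch) * math.sin(roll)
          - mz * math.sin(pitch) * math.cos(roll))
    yh = my * math.cos(roll) + mz * math.sin(roll)
    heading = math.degrees(math.atan2(-yh, xh))
    # Declination converts magnetic north to true north.
    return (heading + declination_deg) % 360.0
```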
Photos of the initial prototype system, the Weapon Orientation Module (WOM), being tested on various weapons are shown in Figure 1 below.
Although the WOM showed promise, achieving accuracies better than 1 deg on mortar systems and 0.5 deg on small arms, the downfall of any system relying solely on traditional inertial approaches was ultimately the reliance on magnetic sensing and magnetic calibration. Magnetic calibration, even in the simplest case, requires manipulating the weapon in ways that are impractical for soldiers in the field. Furthermore, for weapons that use blank fire, our testing determined that the magnetic interference from the weapon changes after firing, causing the system to lose accuracy during operation.
Figures 2 and 3 below provide graphical results from blank-fire tests performed on M4 and M240 weapons. The results show the magnetic field data from shot to shot, including single-shot and burst examples. Before and after each firing, the weapon was stabilized on the target to check the measured azimuth against truth for each given target.
From these data sets, it is evident that firing the weapon changes its magnetic interference. Each example shows a relatively significant initial magnetic field shift, hundreds of nanoteslas and up to almost 2,000 nanoteslas in the most severe case, that appears to stabilize over subsequent fires. This shift in magnetic interference causes a change in azimuth determination of 0.5 deg, and over 1 deg in the most severe case. As a result, it was determined that magnetic solutions, although readily available, small, and low-cost, were not viable options for fielding. Such technology could be reconsidered if advances provide auto-calibration or auto-correction of magnetic-based errors.
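For intuition on why nanotesla-scale shifts matter, a back-of-the-envelope sketch: to first order, only the component of an uncompensated disturbance perpendicular to the horizontal geomagnetic field corrupts heading, so the error scales with that ratio. The field strength used below is a typical mid-latitude value, an assumption here rather than a figure from the tests.

```python
import math

def azimuth_error_deg(disturbance_cross_nT, horizontal_field_nT=20000.0):
    """Approximate heading error from an uncompensated hard-iron shift of
    disturbance_cross_nT perpendicular to the horizontal geomagnetic field
    (~20,000 nT is a typical mid-latitude value, assumed here)."""
    return math.degrees(math.atan2(disturbance_cross_nT, horizontal_field_nT))

# e.g. a 350 nT cross-axis shift already yields roughly 1 deg of azimuth error:
print(round(azimuth_error_deg(350.0), 2))
```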
In addition to pure inertial solutions, Inertial Labs investigated the potential of gyro-compass, celestial, and dual-antenna GNSS solutions through prototype and test efforts. All were found to be impractical options, whether due to SWAP-C-related issues or performance-related issues.
The final effort involved incorporating optical tracking techniques, initially developed for ground robotics applications, that utilize simultaneous localization and mapping (SLAM). By using visual tracking, the system mitigates the risks caused by changing magnetic environments: the optical solution replaces the magnetic sensors entirely during regular operation. Additionally, with this added azimuth reference, the visual solution enables auto-calibration and auto-correction of magnetic errors.
Enter the Optical Inertial Fused Weapon Orientation Module (OptoWOM)
The resulting development culminated in a Kalman filter, shown in Figure 4 below, that integrates optical-based orientation into the system solution. This design replaced the filter’s ongoing reliance on magnetometers with optical azimuth determination: initial azimuth is still determined using the magnetic sensors during the system’s initial alignment, but during general operation the system relies predominantly on the optical solution for ongoing azimuth correction of the filter.
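A minimal single-state sketch of the fusion concept follows: the gyro propagates azimuth at high rate, and the optical solution periodically corrects it, taking the role the magnetometer plays in a conventional AHRS filter. The real OptoWOM filter carries many more states; the noise values here are placeholders.

```python
class AzimuthFuser:
    """1-D Kalman filter on azimuth: gyro prediction, optical correction."""

    def __init__(self, az0_deg, p0=1.0, q_gyro=1e-4, r_optical=0.05):
        self.az = az0_deg        # azimuth estimate (deg)
        self.p = p0              # estimate variance (deg^2)
        self.q = q_gyro          # gyro random-walk noise per second
        self.r = r_optical       # optical measurement variance

    def predict(self, gyro_rate_dps, dt):
        """High-rate propagation from the z-axis gyro (deg/s)."""
        self.az = (self.az + gyro_rate_dps * dt) % 360.0
        self.p += self.q * dt

    def update_optical(self, az_optical_deg):
        """Lower-rate correction from the optical azimuth solution."""
        # Wrap the innovation into [-180, 180) before applying the gain.
        innovation = (az_optical_deg - self.az + 180.0) % 360.0 - 180.0
        k = self.p / (self.p + self.r)   # Kalman gain
        self.az = (self.az + k * innovation) % 360.0
        self.p *= (1.0 - k)
```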
Tests conducted on a prototype OptoWOM, fabricated under an SBIR Phase III effort for live training, showed the ability to track barrel orientation to within 0.3 deg (5 mils) in azimuth and 0.1 deg (2 mils) in elevation, and led to initial interest from a surprising customer. While the live training community was essentially tabling its efforts to develop solutions for indirect fire weapons in live force-on-force training, the tactical community was busy looking for new solutions to a different problem: man-portable fire control. They were actively pursuing new technologies that could act as pointing devices in fire-control solutions for dismounted mortar systems, the 81mm and 60mm mortars.
WULF
Engineers within Picatinny Arsenal who had previously worked on the development of the Mortar Fire Control System – Dismounted (MFCS-D) for the 120mm mortar had the vision to create a new digital fire control solution that would be low-cost and man-portable, allowing use on smaller mortar systems such as the 81mm and 60mm mortars. After initial efforts centered on a more traditional inertial sensor approach, not dissimilar in technology to the WOM device, they, too, concluded that conventional inertial sensor approaches were inadequate. Gyro-compass solutions, such as those used in MFCS-D, were impractical for the smaller guns and not man-portable from a SWAP-C standpoint. Celestial technology was not reliable enough under many conditions. Finally, GNSS solutions had one fatal flaw: they rely on the availability of GNSS, which cannot be guaranteed in battle.
Upon discovering the work that had started within the Army Research Laboratory – Human Research and Engineering Directorate (ARL-HRED) under the SBIR Phase III efforts, the final puzzle piece for a man-portable dismounted fire control system appeared to be in place, and the Weaponized Universal Lightweight Fire Control (WULF) was born.
WULF consists of three major components: the Gun Computer, the Weapon Pointing Device (WPD, the OptoWOM), and the System Battery. The Gun Computer provides the gunner with the information needed to direct the movement of the gun tube to engage an enemy target, showing the required changes in azimuth and elevation in mils. These numbers descend toward zero as the gunner moves the tube toward the required pointing vector. Once the final required orientation is reached, the gunner is notified that the weapon is “laid” (on target), and firing can commence. The system relies on data from the Fire Direction Center regarding the target location and the required gun orientation to engage it, and it uses the WPD, mounted on the dovetail mount of the bipod, to track the gun tube orientation and direct the gunner appropriately.
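The gun-laying loop just described reduces to simple arithmetic on the 6400-mil circle. The sketch below illustrates it with stubbed FDC commands and WPD readings; the lay tolerance is an assumed value for illustration, not a WULF specification.

```python
MILS_PER_CIRCLE = 6400  # NATO mil convention

def remaining_lay(commanded, measured):
    """Shortest signed correction (mils) from measured to commanded azimuth."""
    delta = (commanded - measured) % MILS_PER_CIRCLE
    if delta > MILS_PER_CIRCLE / 2:
        delta -= MILS_PER_CIRCLE
    return delta

def lay_status(cmd_az, cmd_el, wpd_az, wpd_el, tolerance_mils=1.0):
    """Deltas the Gun Computer would display, plus a laid/not-laid flag."""
    d_az = remaining_lay(cmd_az, wpd_az)
    d_el = cmd_el - wpd_el
    laid = abs(d_az) <= tolerance_mils and abs(d_el) <= tolerance_mils
    return d_az, d_el, laid

# As the gunner traverses and elevates the tube, both numbers descend
# toward zero; once both are inside tolerance, the weapon is laid.
print(lay_status(cmd_az=3205.0, cmd_el=1100.0, wpd_az=3198.5, wpd_el=1096.0))
```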
Tests conducted with the WULF system by engineers at Picatinny Arsenal, using their Bore Elevation and Azimuth Measurement System (BEAMS), confirmed the system’s ability to maintain accuracy to within 3.25 mils in azimuth and 2 mils in elevation using WPD prototypes completed within the SBIR Phase III efforts.
Continued Developments and Algorithmic Metamorphosis
The initial developments of the optical system relied heavily on the feature-based techniques of simultaneous localization and mapping (SLAM) that are widely used in robotics applications today. By collecting images from its onboard cameras, the system turns each image it receives into a series of uniquely identifiable feature points.
Once stored, these visual references are used to determine the current system orientation: the current images (and their corresponding unique features) are compared against the stored reference features. Once reliably matched, the system can calculate the orientation shift from the reference features to the current ones and provide these data to the Kalman filter.
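Conceptually, this matching step resembles standard machine-vision pose recovery. The sketch below uses off-the-shelf OpenCV primitives (ORB features, brute-force matching, essential-matrix pose recovery) to estimate the rotation between a stored reference image and the current one. The actual OptoWOM pipeline and its descriptors are not public, so treat this purely as an illustration of the concept.

```python
import cv2
import numpy as np

def orientation_shift(ref_img, cur_img, K):
    """Estimate the rotation between a stored reference image and the
    current camera image. K is the 3x3 camera intrinsic matrix."""
    orb = cv2.ORB_create(nfeatures=1500)
    kp1, des1 = orb.detectAndCompute(ref_img, None)
    kp2, des2 = orb.detectAndCompute(cur_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC rejects bad matches; E encodes the relative camera motion.
    # (The essential matrix assumes some translation between views; a
    # pure-rotation case would call for a homography instead.)
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, _, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    # Heading change about the vertical axis, assuming a level camera.
    return np.degrees(np.arctan2(R[1, 0], R[0, 0]))
```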
Figure 8 above provides an example of the images received by the two onboard cameras and how the system turns them, using SLAM techniques, into a 3D map of the features in the local environment. These machine vision maps can create links between the different reference feature sets collected and adjust them over time to keep the final error bounded, eliminating the error drift typically associated with inertial-only solutions.
Although initial test efforts showed these optical map creation and adjustment methods to provide generally accurate results (typically within 3 mils), advancements in micro-electro-mechanical systems (MEMS) gyroscopes led to a breakthrough in the more recent prototype systems. For small movements of the system, today’s MEMS gyroscopes can determine changes in orientation more accurately than the optical solution alone, because of the parallax the visual solution incurs from its mounting on the bipod.
In previous detailed tests, a single movement of the gun tube could generate errors of up to 3 mils due to this parallax, increasing the risk of exceeding error limits, especially early in use. In the map-based system, these errors would reduce over time and usage but could produce unacceptable initial errors. In contrast, with state-of-the-art MEMS gyroscopes, single movements of the gun tube have been found to generate errors of at most 1 mil per movement. Thus, to keep the errors minimized, we adjusted the algorithm.
We modified the filter into an intelligent one that understands when the gyro orientation should be weighted more heavily and when the optical orientation should be. Furthermore, the embedded mapping system was adjusted so that, rather than building a series of linked reference feature sets based solely on optical data, it creates unique reference sets that merge gyroscopic and visual data. These retain the luxury of a bounded, non-growing error source over time, but without the inherent errors seen in optical-only reference creation caused by system parallax.
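A conceptual sketch of that weighting logic follows: trust the gyro through individual tube movements, where optical parallax dominates, and trust the optical solution once the weapon settles, where gyro drift dominates. The blending rule and thresholds are illustrative assumptions, not the fielded algorithm.

```python
def blend_azimuth(az_gyro, az_optical, rate_dps, time_since_motion_s):
    """Weight the two azimuth sources (deg) based on motion state."""
    if abs(rate_dps) > 0.5 or time_since_motion_s < 1.0:
        w_optical = 0.02   # moving: parallax corrupts optics, lean on gyro
    else:
        w_optical = 0.20   # settled: optics bound the gyro's drift
    # Wrap the disagreement into [-180, 180) before blending.
    innovation = (az_optical - az_gyro + 180.0) % 360.0 - 180.0
    return (az_gyro + w_optical * innovation) % 360.0
```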
Current testing of this method has shown the ability to operate reliably within a bounded overall system error of 3 mils in azimuth. Compare that to the original system, which, although rarely, was at times found to incur as much as a 3-mil error from a single barrel movement. This was a significant shift in approach that has greatly benefitted the system’s performance.
One area of significant improvement resulting from this algorithmic shift is operation behind defilade. Defilade refers to the use of a structure or terrain feature to protect the mortar system; Figure 5 below shows a typical defilade configuration. The challenge for the optical system here is that the visible features are all very near the visual system, exacerbating the effects of parallax error. With the new algorithm’s “smart filter” operation, there is no difference in performance regardless of the optical environment.
Adaptation of WULF to Live Training
Due to the successes in the ongoing development of the WULF system in the tactical environment, and the relative lack of success in finding any practical alternative that meets the live force-on-force training community’s needs for indirect fire weapons, the US Army has now begun something quite rare. Communications between engineers in the live training community and those in the tactical community have concluded with an effort to adapt the solution developed for tactical fire control into a training device for the live training community.
Currently, engineers at Picatinny Arsenal are undertaking an effort to complete a training solution for both mortar systems and the Mk-19 grenade launcher, based mainly on the WULF system. However, instead of taking data from a Fire Direction Center (FDC) and advising the gunner on how to move the gun tube to engage an enemy, in the training scenario the system allows the gunner to use current aiming technologies to aim the weapon precisely as they do today. Using a unique mount that holds both the mortar’s current aiming sight and the WPD simultaneously, the WPD provides the gun tube azimuth and elevation, and a ballistic computer running the NATO Armaments Ballistic Kernel then calculates the round impact point at the moment of fire.
In a live force-on-force training exercise, the gunner will use the mortar weapon as defined in the mortar tactics, techniques, and procedures (TTPs). The aim points (azimuth and weapon elevation) are calculated by the FDC and verbally called out to the gunner, just as in the tactical environment. The gunner and assistant gunner use the M67 sight unit to aim at the FDC-provided aim points. The mission data (round type, charge, and fuse) are entered into the “training computer.” The assistant gunner then pretends to drop a round, and the gunner presses “Shot” on the computer. In the system’s initial implementation, an electronic simulation round could automate all the actions the training computer performs: all information on round type, charge, and fuse would be programmed into the virtual round, making it possible to build the firing action into the round itself.
The training system then calculates where the round would have impacted, based on the azimuth and elevation of the weapon system and the round data, and creates an accurate fly-out of the round trajectory. This trajectory determines the round impact point using topographic maps. The impact point and radius of effect can then be used by multiple training systems to signal a hit/kill based on where the weapon was actually aimed, not just where it should have been aimed.
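A deliberately simplified fly-out sketch appears below: a point-mass round with quadratic drag, stepped forward until its altitude drops below the terrain surface. The fielded system uses the NATO Armaments Ballistic Kernel and real topographic maps; the drag constant, step size, and terrain-lookup function here are placeholder assumptions.

```python
import math

def fly_out(az_mils, el_mils, muzzle_velocity, terrain_height,
            drag_k=0.0001, dt=0.05, g=9.81):
    """Integrate a trajectory; terrain_height(x, y) returns ground
    elevation (m) from a topographic map. Returns the (x, y) impact
    point in meters east/north of the gun, or None if no impact."""
    az = az_mils * 2 * math.pi / 6400     # NATO mils to radians
    el = el_mils * 2 * math.pi / 6400
    vx = muzzle_velocity * math.cos(el) * math.sin(az)   # east
    vy = muzzle_velocity * math.cos(el) * math.cos(az)   # north
    vz = muzzle_velocity * math.sin(el)                  # up
    x = y = 0.0
    z = terrain_height(0.0, 0.0)
    t = 0.0
    while t < 300.0:                      # cap flight time for safety
        speed = math.sqrt(vx * vx + vy * vy + vz * vz)
        vx -= drag_k * speed * vx * dt    # quadratic drag deceleration
        vy -= drag_k * speed * vy * dt
        vz -= (g + drag_k * speed * vz) * dt
        x += vx * dt; y += vy * dt; z += vz * dt; t += dt
        if vz < 0 and z <= terrain_height(x, y):
            return x, y                   # impact point on the map
    return None
```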
Likewise, for the Mk-19, the WPD would provide the gun azimuth and elevation in a similar fashion. In the case of the Mk-19, however, the system would also be able to present virtual blast effects in the gunner’s view, delivered via either a gun-mounted display or a head-mounted display system, whichever proves more desirable.
The significance of finding a practical solution to the problem of proper weapon orientation tracking for indirect fire weapons in live training cannot be overstated. Accurate tracking of the actual tube azimuth is the only way to properly implement these weapons into the exercise. Moreover, by using technologies developed for tactical systems, the solution inherently supports including those future tactical systems in live training, requiring no new development or new technology once those systems are fielded.
A Brief Look Into the Future
Although initial efforts have focused on the need for indirect fire solutions, the same technology lends itself well to solving the future problem of direct fire and small arms weapons. Advancements in camera technologies, onboard computing, and MEMS sensors show tremendous promise toward a similar system that could be gun-mounted on small arms, taking the place of current laser-based systems.
However, the challenges for small arms extend beyond those of typical indirect fire systems. The dynamic movements of dismounted soldiers and quick engagements will require the developers of such systems to consider new solutions. Strategies that include object detection and recognition technologies and rapid 3D mapping will likely need to be a significant part of any effort to solve that problem.
Today’s state-of-the-art UAV payloads using lidar, for example, show great promise in creating highly accurate and detailed maps of local environments. A completed 3D map offers an abundance of visual references that the optical system can identify, along with accurate north references for weapon orientation tracking.
With ongoing advancements in optical technologies, this added capability within OptoWOM is becoming a reality. As such, the idea of the eBullet for all weapons in live training becomes more and more realizable.
References
[1] Healy, Melissa. "In Face of Death – What Makes Soldiers Disregard Instinct? Training: Reflex, loyalty and hatred of the enemy can be cultivated to ensure that GIs fight instead of flee." Los Angeles Times, Feb 26, 1991.
[2] Freedberg, Sydney J. Jr. "eBullet Brings Richer Realism to Army Training: No More Laser Tag." www.breakingdefense.com, Nov 30, 2020.
[3] Lopez, Ed. "The Road to Providing a Faster, More Accurate Mortar Firing System." www.army.mil, May 10, 2017.
[4] Calloway, Audra. "Picatinny Provides Soldiers with Quicker, Safer Mortar Fire Control System." www.army.mil, Dec 2, 2011.
[5] Pinto, Robert P. "Bore Elevation and Azimuth Measurement System (BEAMS)." Joint Armaments Conference, Exhibition and Firing Demonstration Proceedings, May 2011.