PNT and Sensor Fusion

Inertial Labs’ Expertise in PNT and Sensor Fusion Paves Way for Level 5 Autonomy

The dream of fully autonomous vehicles seamlessly navigating our roads, devoid of human intervention, is closer than ever. Achieving this vision of Level 5 Autonomy, where vehicles operate under all conditions without human oversight, requires combining cutting-edge technology with deep domain expertise. Central to this evolution is a profound understanding and application of Positioning, Navigation, and Timing (PNT) and Sensor Fusion. Inertial Labs’ unparalleled expertise in PNT and Sensor Fusion sets new benchmarks in this dynamic landscape, paving the way for the next era of automotive excellence.

 

Autonomous Vehicles

Autonomous vehicles have long been viewed as the logical next monumental breakthrough in engineering. Depicted throughout Hollywood and analyzed in countless journals, autonomous vehicles are one of this decade’s most highly scrutinized potential breakthroughs. Six levels of autonomy represent a progressive pathway to level 5 – full autonomy. This raises the questions: How long until we reach complete autonomy? What level are we at now? And how is it accomplished? First, let’s get a clear picture of each level of autonomy.

Levels of Autonomy

Level 0

This level represents a complete lack of autonomy – manually controlled vehicles. The human is responsible for the “dynamic driving task,” though a few systems can be in place to aid the driver. The “dynamic driving task” refers to the real-time operational and tactical functions required to operate a vehicle in on-road traffic. Systems that do not “drive” the car, such as an emergency braking system or standard cruise control, do not qualify as autonomy and are considered level 0 systems. Many vehicles on the road today operate at level 0.

 

Level 1

 

The lowest level of automation, level 1 autonomous vehicles feature a single automated system that provides driver assistance. Adaptive cruise control, where the vehicle is automatically kept at a safe distance behind the car ahead, is a popular example of level 1 autonomy. These systems vary but typically use some combination of LiDAR, radar, and cameras to automatically accelerate or brake and keep the vehicle a safe distance from the vehicle ahead. Many newer car models implementing this system qualify as level 1 autonomous vehicles.

 

Level 2

 

Level 2 automation means the vehicle has an advanced driver assistance system (ADAS) that can control the vehicle’s steering and acceleration or deceleration. Using a human-machine interface, an ADAS improves the driver’s ability to react to dangers on the road through early warning and automation systems. An ADAS utilizes high-quality sensors, cameras, and LiDAR to provide 360-degree imagery, 3D object resolution, and real-time data. Other ADAS examples include anti-lock brakes, forward collision warning, lane departure warning, and traction control. Some currently implemented examples of level 2 autonomy include the Tesla Autopilot and Cadillac Super Cruise systems.

Level 3

Vehicles that have environmental detection capabilities and can make informed decisions are considered level 3 autonomous. While having these features, these vehicles still require human intervention if they are unable to execute a task. The Audi A8L was set to hit the road with level 3 autonomous technology. Its Traffic Jam Pilot combines a LiDAR scanner with advanced sensor fusion, processing power, and built-in redundancies. That said, global regulators had yet to agree on an approval process for level 3 vehicles, forcing Audi to abandon its level 3 hopes. Level 3 remained out of reach until the Honda Legend received the world’s first approval for a level 3 autonomous vehicle in Japan with its own Traffic Jam Pilot automated driving equipment.

 

Level 4

 

The main improvement between levels 3 and 4 is that level 4 vehicles can intervene if things go wrong or there is a system failure. With this ability, these cars do not require human intervention in most situations. However, the human still has the option to override the vehicle manually. In the future, level 4 vehicles could be used for ride-sharing and public transportation.

 

Level 5

 

The hottest topic in engineering, level 5 vehicles have full driving automation and do not require human attention. These vehicles will not have typical driving components such as steering wheels or pedals. While level 5 vehicles are being tested, they have yet to become available to the general public.

Recently, the timetable for level 5 autonomy has become murky, with many automakers backing off their claims of having level 5 vehicles available between 2018 and 2025. With very few cars approved for even level 3 autonomy, there is still room for innovation, and it will likely take a decade before level 5 automation is achieved.

 

Sensor Fusion in Autonomous Vehicles

 

Sensor Fusion

Sensor fusion is utilized in systems that output estimated data based on the raw data from multiple sensors. It is the ability to bring together numerous sensor inputs to produce a single result that is more accurate than any of the individual inputs alone. A popular example of sensor fusion is the Kalman filter. A Kalman filter utilizes a series of observed measurements over time, filtering out naturally occurring statistical noise and other inaccuracies, and produces a weighted estimate that is more accurate than data from any one sensor alone. Check out our white paper here to learn more about sensor fusion and the Kalman filter.
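To make the idea concrete, here is a minimal sketch of a one-dimensional Kalman filter that smooths a stream of noisy position measurements. The constant-position model, the noise levels, and all numbers are illustrative assumptions, not a description of any particular Inertial Labs implementation.

```python
# Minimal 1-D Kalman filter sketch: estimate a (roughly constant) position
# from noisy range measurements. All values are illustrative only.

import numpy as np

def kalman_1d(measurements, meas_var, process_var, x0=0.0, p0=1.0):
    """Return filtered position estimates for a sequence of noisy measurements."""
    x, p = x0, p0                      # state estimate and its variance
    estimates = []
    for z in measurements:
        # Predict: the state is assumed nearly constant; uncertainty grows slightly.
        p = p + process_var
        # Update: weight the new measurement by the Kalman gain.
        k = p / (p + meas_var)         # gain approaches 1 when the sensor is trusted
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

# Example: noisy readings of a true position of 10 m.
rng = np.random.default_rng(0)
noisy = 10.0 + rng.normal(0.0, 0.5, size=50)
print(kalman_1d(noisy, meas_var=0.25, process_var=1e-4)[-1])  # close to 10 m
```

Because each update weights the prediction and the measurement by their uncertainties, the filtered estimate settles closer to the true value than the raw readings do.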

Sensor Fusion in Autonomy


Advanced Driver Assistance Systems (ADAS)

Many vehicles with ADAS utilize information from sensors such as radar, optical cameras, LiDAR, or ultrasound. Sensor fusion can combine data from these sensors to provide a clearer view of the vehicle’s environment and is integral in advancing these systems. A typical example of sensor fusion in ADAS is information fusion between a front camera and radar. Alone, both sensors have issues in environmental detection. A camera has problems in conditions such as rain, fog, and sun glare, but it is a reliable source of color recognition. Radar helps detect object distance but is not good at recognizing features like road markings. Sensor fusion between a camera and radar can be found in Adaptive Cruise Control (ACC) and Autonomous Emergency Braking (AEB). These are a couple of ADAS examples found in many newer car models.
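As a rough illustration of why combining the two sensors helps, the sketch below fuses a camera range estimate with a radar range estimate by inverse-variance weighting. The sensor variances and ranges are assumed values, chosen only to show that the fused estimate has lower variance than either input.

```python
# Illustrative (not production) fusion of a camera and a radar range estimate
# by inverse-variance weighting; the sensor variances here are assumed values.

def fuse_ranges(r_camera, var_camera, r_radar, var_radar):
    """Combine two independent range estimates into one lower-variance estimate."""
    w_cam = 1.0 / var_camera
    w_rad = 1.0 / var_radar
    fused_range = (w_cam * r_camera + w_rad * r_radar) / (w_cam + w_rad)
    fused_var = 1.0 / (w_cam + w_rad)   # always smaller than either input variance
    return fused_range, fused_var

# Radar measures range precisely; a monocular camera estimate is noisier.
print(fuse_ranges(r_camera=24.0, var_camera=4.0, r_radar=25.2, var_radar=0.25))
# -> (about 25.1 m, about 0.24 m^2)
```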

Autonomous Vehicles

As noted earlier, a significant advantage of sensor fusion in the frame of autonomous vehicles is that the combined data from various sensors can overcome the environmental shortcomings of individual sensors. As a result, this reduces false negatives and false positives while increasing the overall performance and reliability of the system. Sensor fusion is highlighted most in the path-planning aspect of autonomous vehicles. Here, sensor readings are integrated to provide a precise analysis of the vehicle’s state and to predict the trajectories of surrounding objects. The fused readings allow the car to perceive its environment more accurately, as the noise variance of a fused measurement is smaller than that of the individual sensor readings.
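One place this shows up is in predicting where surrounding objects will be a short time ahead. The sketch below propagates a fused position-and-velocity estimate of a neighboring car under a simple constant-velocity assumption; the state values, time step, and horizon are hypothetical.

```python
# Sketch of constant-velocity trajectory prediction for a tracked object,
# using a fused state estimate (position and velocity); numbers are made up.

import numpy as np

def predict_trajectory(pos, vel, dt=0.1, steps=20):
    """Propagate a 2-D position forward under a constant-velocity assumption."""
    pos = np.asarray(pos, dtype=float)
    vel = np.asarray(vel, dtype=float)
    times = np.arange(1, steps + 1) * dt
    return pos + np.outer(times, vel)   # one predicted (x, y) per time step

# Fused estimate of a neighboring car: 15 m ahead, drifting left at 0.3 m/s.
path = predict_trajectory(pos=[15.0, 0.0], vel=[8.0, 0.3])
print(path[-1])   # predicted position 2 s ahead: [31.0, 0.6]
```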

Inertial Labs’ Involvement in Autonomous Vehicles

 

Inertial Labs INS-D in Autonomous Navigation

 

The Inertial Labs INS-D is an Inertial Navigation System (INS) that uses data from gyroscopes, accelerometers, fluxgate magnetometers, a dual-antenna GNSS receiver, and a barometer. This data is fed into an onboard sensor fusion filter to provide accurate position, velocity, heading, pitch, and roll of the vehicle on which it is mounted. The INS-D, fused with aiding data from LiDAR, optical cameras, or radar, can provide the data required for object detection or simultaneous localization and mapping (SLAM) algorithms. For example, the INS-D, combined with LiDAR and optical camera data, can produce georeferenced and time-stamped data for vehicle navigation that is accurate to within a few centimeters. Georeferenced data is tied to a geographic coordinate system, so each point has a precise location in physical space, which is crucial for mapping a vehicle’s environment.
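The core georeferencing step can be sketched as a frame transformation: rotate each LiDAR return from the vehicle body frame into a local level frame using the INS attitude, then offset it by the INS position. The rotation convention, frames, and numbers below are assumptions for illustration, not the INS-D’s internal processing.

```python
# Illustrative georeferencing step: rotate a LiDAR point from the vehicle body
# frame into a local North-East-Down (NED) frame using the INS attitude, then
# offset by the INS position. Angles, offsets, and frames are assumed values.

import numpy as np

def rotation_ned_from_body(roll, pitch, yaw):
    """Z-Y-X (yaw-pitch-roll) rotation matrix, angles in radians."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    return np.array([
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr               ],
    ])

def georeference(point_body, ins_position_ned, roll, pitch, yaw):
    """Map a body-frame LiDAR return to NED coordinates around the INS origin."""
    R = rotation_ned_from_body(roll, pitch, yaw)
    return ins_position_ned + R @ np.asarray(point_body)

# A return 10 m ahead of the sensor while the vehicle heads due east (yaw 90 deg).
print(georeference([10.0, 0.0, 0.0], np.zeros(3), 0.0, 0.0, np.deg2rad(90.0)))
# -> approximately [0, 10, 0]: 10 m east of the vehicle
```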

Additionally, sensor fusion with the optical camera allows this data to be colorized: the camera images are projected over the georeferenced LiDAR data so that each point is colorized to reflect its color in the real world. This becomes especially important for object detection and feature recognition, such as identifying road lines.
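A simplified version of that projection step is sketched below: a LiDAR point already expressed in the camera frame is pushed through a pinhole intrinsic matrix, and the image color at the resulting pixel is attached to the point. The intrinsics, the placeholder image, and the point coordinates are all assumed values.

```python
# Sketch of point-cloud colorization: project a LiDAR point (already expressed
# in the camera frame) through a pinhole intrinsic matrix and sample the image
# color at that pixel. The intrinsics and image are placeholder values.

import numpy as np

K = np.array([[800.0,   0.0, 320.0],    # fx,  0, cx  (assumed intrinsics)
              [  0.0, 800.0, 240.0],    #  0, fy, cy
              [  0.0,   0.0,   1.0]])

def colorize(point_cam, image, intrinsics=K):
    """Return the RGB color under the projection of a camera-frame 3-D point."""
    x, y, z = point_cam
    if z <= 0:
        return None                      # point is behind the camera
    u, v, _ = (intrinsics @ np.array([x, y, z])) / z
    col, row = int(round(u)), int(round(v))
    h, w = image.shape[:2]
    if 0 <= row < h and 0 <= col < w:
        return image[row, col]           # RGB value assigned to the point
    return None                          # projects outside the image

# A dummy 480x640 image and a point 5 m in front of, slightly right of, the camera.
image = np.zeros((480, 640, 3), dtype=np.uint8)
print(colorize([0.5, 0.0, 5.0], image))  # samples pixel (u=400, v=240)
```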

Real World Applications


Virginia Tech

Inertial Labs has partnered with the Autonomous Systems and Intelligent Machines (ASIM) lab in the Mechanical Engineering department of Virginia Tech to research Connected and Autonomous Vehicles (CAV). Students use the INS-D and other sensors to convert a hybrid sedan into an autonomous car. The ASIM lab was established to research autonomy from a dynamic systems and intelligent controls perspective. The lab has developed intelligent machines such as autonomous or semi-autonomous robots and vehicles through sensor fusion techniques, connectivity through communications, and advanced learning algorithms.

Robotics Plus

 

Robotics Plus has developed a line of unmanned ground vehicles (UGV) for precision agriculture applications. Inertial Labs’ Inertial Navigation System, fused with optical aiding data, can produce real-time georeferenced data that, when fed into a robust algorithm, allows the UGV to be aware of its surroundings. Designed in response to the demand to mechanize orchard and horticultural tasks, these UGVs are continuously being developed for a variety of applications in varying environments. Robotics Plus was founded to solve the increasingly pertinent agricultural challenges of labor shortages, crop sustainability, pollination gaps, and yield security. Inertial Labs is proud to work with Robotics Plus, an award-winning innovative company, in its continued goals of improving grower experience and yield security.

 

Pliant Offshore

 

Pliant Offshore is an expert in offshore measurement and control combined with 3D software technology. Inertial Labs has worked with Pliant Offshore to help it produce products that can withstand harsh environments and yield precise results. Inertial Labs’ inertial technology is used on its unmanned surface vessels (USV) to track and trace assets, review asset history, and monitor wear, usage, and lifetime. USVs can use Inertial Labs’ Motion Reference Units (MRU) fused with a multi-beam echo sounder (MBES) to inspect assets with accurate, georeferenced data and monitor these assets over time. With a shared mission of developing and improving products to maximize time efficiency, cost-effectiveness, and safety for end users, Inertial Labs is excited to continue working with Pliant Offshore.
