Introduction

Accurate and reliable global positioning of moving vehicles is of great importance across many domains. Here, Global Navigation Satellite Systems (GNSS) like GPS and Galileo provide ways to locate vehicles worldwide. However, there are applications where the GNSS’s update rate, precision, stability or reliability is not sufficient, or where other positioning systems, like optical ones, are available and need to be integrated to achieve the best possible quality of the estimated location.

LPNAV is a software system for global vehicle positioning, movement estimation and orientation stabilization which supports multiple application domains. At the core of this system is an algorithm which combines sensor data from different sources into a consistent and accurate estimate of a vehicle’s position, velocity and trajectory. This process is called sensor fusion. Accurate modelling of the vehicle’s dynamics and steering system allows the vehicle’s movement parameters to be predicted when sensor data is unavailable or unreliable.

Figure 1 shows an overview of the LPNAV system with its various sensor data sources and the output data it computes. An additional Post Processing step improves the visual stability of these output values for VR and AR applications.

Multiple heterogeneous sensors provide measurement data related to the vehicle’s state. The LP-Research LPMS-IG1 sensor combines a high-quality, high-rate Inertial Measurement Unit (IMU) and a GNSS receiver in one compact package. It serves as the backbone of the LPNAV system, as it provides a global position using the GPS, Galileo and GLONASS satellite constellations. Furthermore, information on the velocity and driving direction can be extracted from the satellite navigation systems by comparing multiple measurements.

The LPMS-IG1’s high-rate angular velocity and linear acceleration measurements are used to track the vehicle dynamics and to predict the future global position of the vehicle, even if no GNSS measurement is available.

These core measurements can be enriched by additional data sources available in a specific application scenario. If the vehicle’s CAN bus can be accessed, the current wheel velocity and the steering angle of the wheels can be read out. This information can improve the movement prediction if no updated global position measurement is available. Additional data sources like optical marker-based tracking systems and Lidar can also be used to improve the estimation of the vehicle system state.

Application Areas

LPNAV can be used in the following application areas:

Indoor / outdoor positioning and dead-reckoning during signal unavailability

Due to poor satellite reception in indoor areas, other methods like optical or radio beacon systems are used to locate vehicles. In many scenarios, not all areas of a building can be properly covered by these indoor tracking systems, for example in between storage racks in warehouses. Here, LPNAV can be used to combine the indoor tracking information with a vehicle’s CAN bus data to provide continuous, high-frequency tracking for vehicles throughout the facility.

In-car virtual reality or augmented reality

Vehicle-based virtual and augmented reality applications place high requirements on the visual stability and latency of the position and orientation estimation (in short: tracking) of all components involved. On the one hand, this concerns the user’s VR or AR headset and additional tracked objects inside the car. On the other hand, if the VR or AR content should also integrate with external objects like traffic signs or street markings, the car needs to be localized in global space, and a stable and reliable orientation solution needs to be computed for the car which can accommodate the car’s movement and compensate for tilting of the suspension. Furthermore, these position and orientation estimates need to be provided at a high update rate of 100 Hz or more to give a good visual impression when viewed in AR and VR headsets. Here, LPNAV can combine low-frequency (~1 Hz) GNSS and CAN bus measurements with high-rate (> 100 Hz) IMU measurements to provide a high-quality global position and orientation solution at rates of 100 Hz.

Data Sources

The base measurements for LPNAV are provided by the LP-Research LPMS-IG1 combined IMU and GNSS unit. Additional data sources can be used to improve the result of the sensor fusion:

·         by providing direct measurements of values which can otherwise only be inferred.

For example, the vehicle velocity can be computed from multiple GNSS measurements, but can also be read directly from the vehicle’s CAN bus.

·         by providing measurements which cannot be inferred from other values.

For example, the vehicle’s wheel steering direction can be read from the CAN bus and, when known, is used by LPNAV to improve the position prediction.

Coordinate Systems and Unit Conventions

Units and Uncertainty

If not otherwise stated, LPNAV uses the SI base units: meters, seconds etc. All angles are given in radians. All uncertainties are given as the one-sigma width under the assumption that the errors follow a Gaussian distribution.

Coordinate Systems

LPNAV can use both global positions and local positions as input and output of the sensor fusion. Internally, all computations are performed using local positions. Depending on the use case in which LPNAV is employed, it may be easiest to use only global positions as input and output, only local positions, or a combination of both.

·         Global Coordinate System

This type of representation can express locations on the whole globe within one coordinate system. LPNAV uses the World Geodetic System 84 (WGS 84), which approximates the earth as an ellipsoid and uniquely expresses a three-dimensional location on this ellipsoid as latitude, longitude and height above the ellipsoid. The same coordinate system is used by GPS and popular online maps like Google Maps and Openstreetmap.org. Therefore, global positions from LPNAV can be used directly with these services and vice versa.

·         Local Coordinate System

The local position is a representation which uses one global position as its reference frame and origin. This transformation assumes that the earth is locally flat. Therefore, all local coordinates used should be sufficiently close to the global reference point for this assumption to hold. The local coordinate system is Cartesian, which makes it much easier to compute distances and directions between two points. It is left-handed, with Z+ pointing towards the north pole and Y+ pointing upwards. The origin of the local coordinate system is chosen by the fusion algorithm during the initialization stage. In the common case, the position (0,0,0) in the local coordinate system is the position where the vehicle was located when the fusion algorithm was started. A small conversion sketch follows after this list.

·         In-Vehicle Coordinate System

For all positions which are fixed to the vehicle frame, the in-vehicle coordinate system is used. Its origin is at the center of the front axle, right between the two front wheels. The coordinate system is right-handed; the negative X axis points in the direction of travel and the positive Y axis points to the right side of the vehicle.
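To illustrate the relationship between global and local positions, the following minimal C sketch converts a WGS 84 position to the local frame using the flat-earth approximation. It treats the earth as a sphere for simplicity (LPNAV itself uses the WGS 84 ellipsoid), and it assumes X+ points east, which follows from the left-handed convention with Z+ north and Y+ up stated above:

#include <math.h>

/* Mean earth radius in meters; a sphere approximation is sufficient
 * for the flat-earth conversion close to the reference point. */
#define EARTH_RADIUS_M 6371000.0

typedef struct {
    double x; /* east (assumed; the text fixes only Z+ north and Y+ up) */
    double y; /* up */
    double z; /* north */
} LocalPosition;

/* Convert a WGS 84 position (latitude/longitude in radians, height in
 * meters) to the local left-handed frame around the reference origin
 * (lat0, lon0, h0). Only valid close to the reference point. */
LocalPosition global_to_local(double lat, double lon, double h,
                              double lat0, double lon0, double h0)
{
    LocalPosition p;
    p.z = (lat - lat0) * EARTH_RADIUS_M;             /* north offset */
    p.x = (lon - lon0) * EARTH_RADIUS_M * cos(lat0); /* east offset */
    p.y = h - h0;                                    /* up offset */
    return p;
}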

Orientation

The orientation of the vehicle is represented by a quaternion. With zero rotation around the Y axis (the vehicle’s yaw axis), the vehicle points straight towards the north pole; with a rotation of pi/2, it points towards the east. The rotation around the vehicle’s roll axis is represented by a rotation around the X axis, where a rotation of pi/2 means the vehicle is tilted 90° to the right. Rotations around the vehicle’s pitch axis are represented by rotations around the Z axis.
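This convention can be made concrete with a short sketch. Assuming the common (w, x, y, z) component order for quaternions, a pure heading rotation is constructed as:

#include <math.h>

typedef struct { double w, x, y, z; } Quaternion;

/* Heading (yaw) quaternion for a rotation of theta radians around the
 * Y axis: theta = 0 points the vehicle north, theta = pi/2 east. */
Quaternion heading_quaternion(double theta)
{
    Quaternion q = { cos(theta / 2.0), 0.0, sin(theta / 2.0), 0.0 };
    return q;
}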

Vehicle Model

The vehicle model is a mathematical approximation of the behavior of a moving and steering vehicle and is used at multiple stages in the processing of measurement values and in the sensor fusion process, for example to predict the vehicle’s movement from the steering geometry and to project wheel velocity measurements to a common reference point.

For all these purposes, one vehicle model is used. This steering model is based on the Ackermann steering geometry and assumes that only the front wheels are steerable. The main assumption of the Ackermann model is that, while steering around a corner, the rotation axes of all wheel hubs meet in one point. This point Pr is the rotation center of the car, and the distances from it to the individual wheels are the curve radii.

In the Ackermann model, each wheel travels on a different curve radius R around the rotation point Pr, and the steerable wheels each have their own steering angle ϕfl and ϕfr. Two important vehicle parameters influencing the driving dynamics are the track width Lt and the wheelbase Lwb.

If the vehicle provides only one steering angle measurement ϕf, and not individual measurements for each steerable wheel, the steering angle is assumed to be measured at a virtual wheel located at the center of the front axle.

To compute the curve radius of this virtual front center wheel, the following formula can be used:
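Rf = Lwb / sin(ϕf)

This is the standard Ackermann relation: the rotation center Pr lies on the extension of the rear axle, at a distance Lwb / tan(ϕf) from the center of the rear axle, so the distance from Pr to the virtual front center wheel is Lwb / sin(ϕf).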

Because the angular velocity when driving a curve is the same at all four wheels and at the virtual front center wheel, the wheel velocities measured at the four wheels individually can be projected to one position (for example, the virtual front center wheel) to perform the sensor fusion at a single point.
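This projection can be sketched as follows; the helper below is an illustration, not part of the LPNAV API, and assumes the curve radii have already been computed from the Ackermann geometry above:

/* Project a velocity measured at one wheel to the virtual front center
 * wheel. All wheels share the same angular velocity omega = v / R
 * around Pr, so the velocities scale with the curve radii. For straight
 * driving the radii are infinite and the velocities are equal. */
double project_wheel_velocity(double v_wheel, double r_wheel,
                              double r_front_center)
{
    return v_wheel * (r_front_center / r_wheel);
}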

Available Data Sources

The GPS fusion algorithm can process sensor values from multiple data sources. The data sources can be configured dynamically and added to or removed from the configuration on demand. A data source can provide multiple measurements. For example, a GPS can be used to measure the global position, orientation and velocity of a moving vehicle. If a better velocity measurement is available in the system (for example from wheel encoders), the GPS velocity measurement should be disabled because its precision cannot compete with the wheel encoders.

The default configuration of the data sources selects only the measurements which are of high quality and cannot be provided by any other data source.

LPMS-IG1 Data Source

This data source reads out all data provided by the LPMS-IG1 combined Inertial Measurement Unit and GNSS sensor. The IG1 is connected via a USB cable to the host PC. Furthermore, an external GNSS antenna is connected and installed at a place with good sky visibility, preferably the car roof.

Provided Measurements

Name of the data source in the configuration file and in the FusionManager.addDataSource() call: Ig1

Configuration Parameters

Parameter Name | Type | Default | Description
sensor_name | String | - no default - | The name of the IG1 to connect to. The sensor name can be configured with commissioning tools; the software only connects to a sensor with the exact same name.
retry_count | Integer | 10 | The number of times an attempt is made to connect to the sensor.
connect_remote | String | - no default - | If this value holds a URL (for example “tcp://localhost:9982”), the data of a remote IG1 will be streamed over the network to this sensor fusion instance.
use_gps | Boolean | true | If true, the IG1 GPS will be used in the sensor fusion.
use_gps_orientation | Boolean | true | If true, the car heading information provided by the IG1 GPS will be used in the sensor fusion.
use_imu | Boolean | true | If true, the acceleration and turn velocities measured by the IG1 will be used in the sensor fusion.
use_pressure | Boolean | false | If true, the measured ambient pressure will be used to estimate the height of the vehicle.
pressure_sigma | Float | 100 | The error estimate used for the pressure measurement, in Pascal.
imu_position | In-Vehicle Position | (0,0,0) | Location of the IG1 in the vehicle coordinate frame.
imu_orientation | In-Vehicle Orientation Quaternion | (1,0,0,0) | Orientation of the IG1 in the vehicle coordinate frame. In the default orientation, the IG1’s and the vehicle’s coordinate axes coincide and the IG1’s connector cables point towards the back of the car.
reference_to_optical_quat | Orientation Quaternion | - no default - | Rotation transform between the IMU frame and the optical tracking frame of an optical tracking marker installed on the IG1 case. This value is determined as part of the LPVR calibration procedure (same name in LPVR) and can be plugged in here directly. If this value is set, reference_orientation_quat must also be set and imu_orientation must not be set.
reference_orientation_quat | Orientation Quaternion | - no default - | Rotation of an optical tracking marker installed on the IG1 case. This value is determined as part of the LPVR calibration procedure (same name in LPVR) and can be plugged in here directly. If this value is set, reference_to_optical_quat must also be set and imu_orientation must not be set.
gnss_antenna_position | In-Vehicle Position | (0,0,0) | The position of the GNSS antenna connected to the IG1.

LpSlam Data Source

This data source reads and processes the video stream from a mono or stereo camera system to create a map of the environment and to determine the position and orientation of the vehicle within this environment.

Provided Measurements

Configuration Parameters

Parameter Name | Type | Default | Description
camera_angle | Float | 0 | Up or down tilt angle of the camera in degrees. The sign is negative if the camera is tilted up and positive if it is looking down. If the camera looks straight along the vehicle’s forward direction, this angle is 0.
camera_orientation | Orientation Quaternion | (1,0,0,0) | Can be used instead of the camera angle if the camera is rotated in more axes than just the tilt angle.
camera_position | In-Vehicle Position | (0,0,0) | Position of the camera in relation to the vehicle’s fixed coordinate origin.
connect_remote | String | - no default - | If this value holds a URL (for example “tcp://localhost:9982”), the data of a remote LpSlam data source will be streamed over the network to this sensor fusion instance.
use_position | Boolean | true | If true, the local position provided by the SLAM algorithm will be used by the sensor fusion.
use_orientation | Boolean | true | If true, the orientation measurement will be used by the sensor fusion.
config_file | String | - empty - | Path to the config file which contains the camera calibration and SLAM parameters.
forward_fusion_state | Boolean | false | If true, the current sensor fusion result will also be made available to and used by the SLAM algorithm.
write_logfile | Boolean | false | If true, a logfile named “Lpslam.log” will be written with information about the SLAM processing.
log_verbose | Boolean | false | If true, the logfile will contain more verbose logging output.

Installation and Calibration of the System

The most common setup includes the LPMS-IG1 sensor for global positioning and high-precision inertial measurement, and a source of vehicle velocity, for example a CAN bus adapter.

IG1 installation in Vehicle

The LPMS-IG1 should be installed at a place where it can be firmly mounted to the vehicle’s frame. In our experience, mounting the sensor to the B-pillar of a car, on either the driver or co-driver side, works well. At this location the sensor can be easily mounted and accessed. The ideal installation method is to screw the sensor to the vehicle’s plastic panels so that no excess wobble occurs.

The installation position of the IG1 must now be measured in the in-vehicle coordinate system. This can be done either via an optical tracking system or by using reference points known from the vehicle’s CAD drawings. The point which must be measured is at the center of the IG1 case in the X and Y coordinates and at the bottom base plate in the Z coordinate.

Furthermore, the correct orientation of the sensor in the vehicle must be determined. Here, two conventions can be used:

The first option is to give the rotation of the IG1 sensor as a quaternion, using the configuration option imu_orientation of the IG1 data source. For example, a quaternion with W=-0.5, X=-0.5, Y=0.5, Z=0.5 can be used if the IG1’s case is installed with the base plate towards the left side of the car frame and the connection cable pointing straight down.

The second option is to reuse the orientation parameters which were derived using an optical tracking system during the calibration of an LPVR installation. In this case, the two quaternions named reference_orientation_quat and reference_to_optical_quat need to be set.

The IG1’s data connector must now be connected to the host computer via the included USB cable.

The GNSS antenna is best placed on the vehicle roof, and the cable can be threaded through the seals of a door. Then, the antenna cable can be connected to the GNSS antenna connector of the IG1. If the procedure to calibrate the IG1’s internal GNSS sensor is completed (see below), the position of the GNSS antenna does not need to be measured, because it is automatically determined as part of the calibration procedure.

Satellite data download (only for GPS variant)

If the IG1 sensor has not been used before or has not been used for some months, the GNSS satellite data has either not been downloaded yet or is outdated. In this case, the sensor will provide a GNSS fix, but the errors listed in the status of the IG1 sensor will be larger than 3 meters. The sensor then needs to be connected to a GNSS antenna which has an unobstructed view of the sky. It will take around 1 to 2 hours to download the orbit information from enough satellites for a high-precision GNSS fix.

The sensor fusion algorithm should be running during this time; when the sensor fusion is stopped, the downloaded satellite data is persisted on the sensor.

Auto-calibration of IG1-internal GNSS sensor (only for GPS variant)

The IG1’s internal GNSS chip needs to be calibrated after it has been installed in the vehicle. This procedure is automatic and only required once after the installation. It can only be performed while the vehicle is driving straight sections and around corners. On average, the calibration takes around 10 minutes to complete; driving at velocities up to 50 km/h with repeated starting and stopping is required.

To start the calibration procedure, the fusion algorithm must be executed, either using the Unity plugin, the command line tool or the standalone application. You can monitor the IG1 sensor status string to check when the calibration procedure is complete. If the GNSS sensor is not yet calibrated, the status will report “3d fix”. When the calibration is complete, the status will change to “Gnss + Imu combined fix”. Once the calibration is complete, stop the fusion algorithm; this will store the computed calibration on the chip.

Important note: If the IG1 is removed from the car and/or installed in another place, this calibration procedure needs to be repeated.

Web Application Programming Interface

If the webserver component of the LPNAV system is started, multiple HTTP endpoints are available to query the system status, configure its components or provide additional measurements to the system.

The base address is formed from the host name and port number; the name of the endpoint is then appended. For example, if the LPNAV webserver is hosted on the address 192.168.1.5 and port 9091, the base address would be:

http://192.168.1.5:9091/

Now the name of the endpoint (for example “lpglobalfusion/sensorvalues”) is appended to form the full URL to call:

http://192.168.1.5:9091/lpglobalfusion/sensorvalues

Information Endpoints

lpglobalfusion/sensorvalues

Provides a JSON result which contains the most up-to-date values of all connected sensors.

lpglobalfusion/sensorconfig

Provides a JSON result which contains information on the configuration of all connected sensors.

lpglobalfusion/status

Provides a JSON result which contains information on the current system status and the most recent result of the position and orientation sensor fusion.
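As an illustration, the status endpoint can be polled from a C program using libcurl. This is a minimal sketch: the host address is taken from the example above, and the use of libcurl is an assumption about the client tooling, not a requirement of LPNAV. Link with -lcurl.

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    /* Host and port as configured for the LPNAV webserver. */
    curl_easy_setopt(curl, CURLOPT_URL,
                     "http://192.168.1.5:9091/lpglobalfusion/status");

    /* libcurl writes the JSON response body to stdout by default. */
    CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}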

Measurement Endpoints

Measurement values can be provided to the LPNAV system via HTTP POST calls. The parameters for this call are transferred as part of the query string.

lpglobalfusion/set_one_wheel_odometry

Provide the wheel velocity of a one-wheeled vehicle to the sensor fusion.

Parameter Name | Description
sigma | uncertainty of the wheel velocity measurement as one sigma of the standard error, in meters/second
wheel | wheel velocity in meters/second

lpglobalfusion/set_two_wheel_odometry

Provide the wheel velocities of a two-wheeled vehicle to the sensor fusion.

Parameter Name | Description
sigma | uncertainty of the wheel velocity measurements as one sigma of the standard error, in meters/second
left_wheel | left wheel velocity in meters/second
right_wheel | right wheel velocity in meters/second

lpglobalfusion/set_four_wheel_odometry

Provide the wheel velocities of a four-wheeled vehicle to the sensor fusion.

Parameter Name | Description
sigma | uncertainty of the wheel velocity measurements as one sigma of the standard error, in meters/second
left_front_wheel | left front wheel velocity in meters/second
right_front_wheel | right front wheel velocity in meters/second
left_rear_wheel | left rear wheel velocity in meters/second
right_rear_wheel | right rear wheel velocity in meters/second

Example

Here is a complete example of how odometry data for a two-wheeled vehicle can be transferred with Python:

from urllib.request import urlopen
from urllib.parse import urlencode

# fill with the last measurements from the wheel odometry
# unit is meter / second
odometry_dict = {
    'sigma': 0.02,
    'left_wheel': 0.4,
    'right_wheel': 0.5}
qstr = urlencode(odometry_dict)

# forward the measurement to LPNAV with a POST call
# (passing a non-None, empty body makes urlopen issue a POST request)
with urlopen("http://localhost:9091/lpglobalfusion/set_two_wheel_odometry?" + qstr,
             data=b'') as response:
    response.read()

C-Library Application Programming Interface

A C-API is provided to integrate the functionality into your own applications. This API allows you to load an existing configuration file, further configure the system, start the sensor fusion and query the current position and orientation of the vehicle.

The example folder provided with the LpGlobalFusion release contains a complete example program showing the API usage. In the following, the most relevant API calls are described.

Library and Fusion Lifecycle

lpgfFusionManager

This function must be called to create an instance of the library and returns a pointer which needs to be provided to all further API calls. This function only needs to be called once during the lifetime of your application.

lpgfFusionManagerReadConfigurationFile

This function reads the system configuration, describing the connected sensors and the sensor fusion setup, from a text file. Within this call, all data sources are created; the function returns false if a sensor could not be properly connected.

This method should be called before the sensor fusion is started.

lpgfFusionManagerStart

This method is called to start the data acquisition from the connected sensors. The sensor processing runs in a background thread, and this function call returns as soon as all sensors have been started.

After a call to this function, the most recent system state can be acquired with a call to the lpgfFusionManagerGetStateEstimate method.

lpgfFusionManagerStop

Stops the processing of sensor data. After a call to this function, no updated state estimates will be computed any more. lpgfFusionManagerStart can be called to start the data acquisition and sensor fusion again.

lpgfDestroyFusionManager

This function is called to release all resources held by the library. After this call, no resources associated with the instance of this library can be used any more.

Access to Fusion State

Once lpgfFusionManagerStart has been called, the fusion of the connected sensors’ data will occur in the background at the rate at which the sensors provide their values.

lpgfFusionManagerGetStateEstimate

The most recent result of the sensor fusion can be queried via a call to this function. It returns the lpgf_state_estimate data structure which contains the global and local position of the vehicle as well as its orientation.
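The typical call sequence can be sketched as follows. This is a hypothetical outline: the header name, the exact signatures and the handle type are assumptions, and the example program shipped with the release shows the actual API.

#include <stdbool.h>
#include <stdio.h>
#include "lpgf.h" /* assumed header name */

int main(void)
{
    /* Create the library instance once; the returned pointer is
     * passed to all further API calls. */
    void *fm = lpgfFusionManager();

    /* Create all data sources from the configuration file; returns
     * false if a sensor could not be connected. */
    if (!lpgfFusionManagerReadConfigurationFile(fm, "config.txt")) {
        fprintf(stderr, "could not connect all configured sensors\n");
        lpgfDestroyFusionManager(fm);
        return 1;
    }

    /* Start acquisition; processing runs in a background thread. */
    lpgfFusionManagerStart(fm);

    /* Query the most recent fusion result. */
    lpgf_state_estimate state = lpgfFusionManagerGetStateEstimate(fm);
    (void)state; /* contains global/local position and orientation */

    lpgfFusionManagerStop(fm);
    lpgfDestroyFusionManager(fm);
    return 0;
}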

Providing Odometry Measurements

Wheel odometry measurements to improve the sensor fusion quality can either be provided via a connected data source (for example a CAN bus encoder) or via API calls. When integrating into a larger existing system, it may be more convenient to use API calls instead of using a dedicated data source.

Wheel odometry information for idealized one-wheeled vehicles, two-wheeled vehicles or vehicles with four wheels can be provided. The correct vehicle model needs to be selected and configured in the library for the respective API call. So if a two-wheeled vehicle model is configured, only the API call lpgfFusionManagerAddTwoWheelOdometry may be used.

lpgfFusionManagerAddOneWheelOdometry

Use this function to provide a one-wheel odometry measurement to the sensor fusion.

lpgfFusionManagerAddTwoWheelOdometry

Use this function to provide a two-wheel odometry measurement to the sensor fusion.

lpgfFusionManagerAddFourWheelOdometry

Use this function to provide a four-wheel odometry measurement to the sensor fusion.
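A sketch of forwarding a two-wheel measurement follows; the parameter list is an assumption, modeled after the matching web API endpoint (one-sigma uncertainty plus per-wheel velocities in meters/second).

/* Hypothetical signature, assumed to mirror the web API endpoint. */
void forward_two_wheel_odometry(void *fm, double left, double right)
{
    double sigma = 0.02; /* one-sigma velocity uncertainty in m/s */
    lpgfFusionManagerAddTwoWheelOdometry(fm, sigma, left, right);
}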

Control of Mapping Functionality

If a camera- or lidar-based mapping functionality should be used together with the sensor fusion library, these function calls allow controlling the process of map creation and where the map is loaded from and stored to.

Except for lpgfMappingGetStatus, all function calls configuring the mapping behavior need to be made before the sensor fusion is started, i.e. before the call to lpgfFusionManagerStart.

lpgfMappingSetFilename

This function call sets the filename from which a previously created map is loaded and to which an updated map is stored after the sensor fusion is stopped. If the mapping mode is enabled (see the next function call), either a new map is created and stored if no previous map file is found, or an existing map file is updated with the additional information gathered during this run of the sensor fusion.

If the mapping mode is disabled (see the next function call), the map will be loaded and used to localize the vehicle in the environment, but the map will not be updated or saved when the library is stopped.

lpgfMappingSetMode

Enables or disables the mapping feature. If mapping is enabled, new content is added to the internal map, which allows the vehicle to be localized in new environments. For environments where no previous map is available, the mapping mode must be enabled to create an initial map of the relevant areas of the environment.

If the mapping mode is disabled, a pre-existing map (see lpgfMappingSetFilename) needs to be loaded, which the vehicle can then use to determine its orientation and position within this map.

lpgfMappingGetStatus

Returns a data structure which contains information about the performance (frames per second and frame processing time) and whether localization using the mapped data is active.
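Putting the mapping calls together, a typical start-up with mapping enabled could look as follows. This is a hypothetical sketch: the exact signatures and the boolean mode flag are assumptions, and the map file name is a placeholder.

/* Configure mapping before starting the fusion. */
void start_with_mapping(void *fm)
{
    lpgfMappingSetFilename(fm, "site.map"); /* placeholder file name */
    lpgfMappingSetMode(fm, true);           /* create or extend the map */
    lpgfFusionManagerStart(fm);
}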

Appendix

Open Source Licenses

See the folder licenses in the release archive for details on the conditions of the used Open Source Licenses.