LPVR-AIR Manual

Releases

Check the latest release notes here: LPVR-AIR Releases

Introduction

The purpose of LPVR-AIR is to wirelessly stream image data from a SteamVR application such as Autodesk VRED to a wireless HMD like the Meta Quest or VIVE Focus. LP-Research's FusionHub software, in combination with the open-source application ALVR, fulfills this purpose well.

ALVR by default uses the internal inside-out tracking of the Meta Quest for pose calculation. LPVR-AIR replaces the Quest’s native inside-out tracking with combined IMU and ART / OptiTrack outside-in tracking to allow simultaneous, spatially synchronized operation of several HMDs in large tracking volumes.

To make the tracking functionality of FusionHub available to standalone augmented and virtual reality headsets, it can be integrated with Android-compatible OpenXR HMDs. This works via a customized version of the ALVR open source project. ALVR allows streaming image data wirelessly from a host computer and interfaces to 3D content engines through SteamVR. While the original ALVR client was built to work on Meta Quest HMDs, ALVR works in principle on any OpenXR compatible headset.

We use a thin client library to receive IMU data from the HMD API, pass it to FusionHub, process it there and then re-inject the information into the video pipeline of the headset. Depending on the type of HMD this happens within the ALVR client’s standard interface or in a separate hardware-specific API layer.

[Image: image-20241121-213343.png]

System components

Applications

The following applications need to be started on the head mounted display and the host computer. They should all be included in the installation package that you received from us. We discuss the order in which to start these applications and their expected status output below.

On the headset:

ALVR client (alvr_client_openxr.apk; name can vary by release)

  • Receives image data from the ALVR server

  • Sends pose information to the ALVR server

On the host computer:

FusionHub GUI (FusionHubUI.exe)

  • Connects to the FusionHub server

  • Configures FusionHub

ALVR server (ALVR Dashboard.exe)

  • Communicates with the FusionHub server

  • Receives pose information from the ALVR client

  • Sends pose information to SteamVR

  • Sends image data to the ALVR client

  • Receives image information from SteamVR

FusionHub (FusionHub.exe)

  • Calculates the sensor fusion between IMU and optical data

Starter script (Start-LPVR-AIR.bat)

  • Starts all server-side applications above

LPVR-AIR is copy-protected by a USB dongle that needs to be inserted when the application is started.

Starting LPVR-AIR

Installation

Install the ALVR client APK on the headset using a side-loading tool like SideQuest. In the case of a Meta Quest HMD this requires you to put the HMD into developer mode. The steps for putting the HMD into developer mode are described here: Device Setup

In case you’re using a VIVE Focus 3 headset you need to do something similar as described here: How Do I Put The Focus Into Developer Mode? - Developer Resources

ALVR requires SteamVR to be set up on the host computer. If you haven’t installed it on your computer yet, please refer to the instructions here: SteamVR

By default SteamVR is started via Steam, therefore Steam has to run in order to start SteamVR.

It is however possible to start SteamVR directly from its binary directory by setting the STEAMVR_BIN_DIR environment variable. LPVR-AIR automatically detects if this variable is set and will attempt to start SteamVR from there. A typical path for STEAMVR_BIN_DIR would be C:\Program Files (x86)\Steam\steamapps\common\SteamVR\bin\win64.
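
As a minimal sketch of this mechanism (not the actual logic of the starter script), the following Python snippet starts SteamVR directly when STEAMVR_BIN_DIR is set. vrstartup.exe is SteamVR's usual launcher executable in bin\win64; adjust the name if your installation differs:

import os
import subprocess

# Start SteamVR directly from its binary directory if STEAMVR_BIN_DIR is set,
# otherwise fall back to starting it via Steam.
bin_dir = os.environ.get("STEAMVR_BIN_DIR")
if bin_dir:
    subprocess.Popen([os.path.join(bin_dir, "vrstartup.exe")], cwd=bin_dir)
else:
    print("STEAMVR_BIN_DIR not set; start SteamVR via Steam instead.")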

Make sure the guardian (i.e. the automatic tracking boundary detection for the Quest’s internal tracking) is turned off in the developer settings of the HMD. For the VIVE Focus, use the equivalent setting in the Focus’ configuration.

Start-up

  1. Run alvr_client_android/alvr_client_android.apk on your headset

  2. Start your optical tracking system (ART DTrack or Optitrack Motive)

  3. Run Start-LPVR-AIR.bat

Notes:

  1. Make sure the copy protection dongle is inserted into your computer

  2. To upload an APK to your HMD you might need to activate its developer mode

  3. In the FusionHub configuration script make sure to correctly configure your optical tracking system. The script can be accessed through the FusionHub GUI.

  4. Make sure to use a pre-configured HMD optical target provided by ART or Optitrack

Once streaming starts, you should see the SteamVR default environment through the headset. Check if the nIMU counter in the FusionHub GUI is increasing. If both nOptical and nIMU are increasing then the communication between ALVR, optical tracking and FusionHub is working.

Using the FusionHub GUI

  • Connect the GUI to FusionHub from the start screen of the FusionHub GUI. Note that running the GUI is optional; FusionHub works normally without it.

[Image: image-20241113-090313.png]
  • Once connected to the FusionHub instance on the headset, select the base configuration to see the current configuration of FusionHub:

[Image: image-20241104-091040.png]
  • Adjust parameter blocks as needed. Refer to the description of FusionHub BASE for configuration options:

[Image: image-20241104-091524.png]
  • Note the following input and output ports, which are hard-coded in the ALVR FusionHub API layer. They are already set correctly in the default configuration file installed with FusionHub, so there is usually no need to change them (see the connection sketch at the end of this section).

tcp://*:8799 (output): fused pose data

tcp://localhost:8898 (input): IMU data

  • If it isn’t running yet, start and configure your optical tracking system. Once optical data is streamed to FusionHub, the nOptical counter in the GUI should be increasing.

  • Default configuration script with optical input defined for ART DTrack:

{ "LicenseInfo": { "LicenseKey": "", "ResponseKey": "" }, "settings": { "websocketDataOutputRate": 20 }, "sinks": { "VRPN": { "settings": { "inputEndpoints": [ "inproc://optical_data_source_1" ], "settings": { "deviceName": "FusionHub", "port": 3883, "tracker0": "HMD" } } }, "fusion": { "dataEndpoint": "tcp://*:8799", "inputEndpoints": [ "inproc://optical_data_source_1", "tcp://localhost:8898" ], "settings": { "Autocalibration": { "minAgeS": 60, "nSamplesForAutocalibration": 1500, "nSamplesForSteady": 256, "noiseRmsLimit": 0.02, "steadyThresholdAverage": 0.2, "steadyThresholdRms": 1 }, "Intercalibration": {}, "MotionDetection": { "omegaLimit": 3, "positionSampleInterval": 1000, "rotationFilterAlpha": 0.9, "timeToUnknown": 500 }, "SensorFusion": { "alignment": { "w": 0.990892966476337, "x": 0.13458639604387848, "y": 0.0005637732357904688, "z": 0.004160907038605602 }, "orientationWeight": 0.005, "predictionIntervalMs": 10, "sggPointsEachSide": 5, "sggPolynomialOrder": 5, "tiltCorrection": null, "yawWeight": 0.01 }, "runIntercalibration": false }, "type": "ImuOpticalFusion" } }, "sources": { "optical": { "settings": { "bodyIDs": [ 1 ], "endpoints": [ "inproc://optical_data_source_1" ], "port": 5000 }, "type": "DTrack" } } }

Press the buttons ‘Set’ and ‘Save’ after changing the configuration script to make your changes active. It might take 1-2 seconds for FusionHub to reset.

In case you happen to enter an invalid configuration, FusionHub might not restart correctly. If you would like to reset your settings, just re-install the FusionHub APK.

Once the configuration is correct, you’ll most likely not have to touch the script again in the foreseeable future.
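
The endpoint URIs above follow ZeroMQ conventions. As a rough sketch, assuming the fused pose output on tcp://*:8799 is a ZeroMQ PUB socket (the wire format of the messages is not documented here), an external tool could tap into the pose stream like this:

import zmq

# Subscribe to FusionHub's fused pose output endpoint. Assumption: PUB/SUB
# pattern; since the payload format is unspecified here, only the raw
# message size is printed.
ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://localhost:8799")
sub.setsockopt(zmq.SUBSCRIBE, b"")  # no topic filter, receive everything

while True:
    msg = sub.recv()
    print(f"received {len(msg)} bytes of fused pose data")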

Optical tracking systems

Optical marker target setup

LPVR-AIR uses the OpenVR coordinate frame convention: a right-handed coordinate system with +x pointing right, +y pointing up and -z pointing forward from the user’s point of view.

From version 3.3, LPVR-AIR contains a tool for cross-calibration between the HMD’s inside-out tracking and ART / OptiTrack. Find this tool in the Calibration folder of the LPVR-AIR deployment archive.

This is a tool for calibration of an ART or OptiTrack tracking body relative to SteamVR tracking (Lighthouses or inside-out tracking of a standalone HMD). It requires an HMD to be simultaneously tracked by ART / OptiTrack and SteamVR.

In order to receive native (i.e. inside-out) tracking poses from SteamVR, establish a streaming connection to your HMD using the Oculus app. Download it from here. Once SteamVR is connected to your HMD, run one of the following tools, depending on which optical tracking system you’re using.

Optitrack

Tool filename: lh_optitrack_calibration.exe
After it starts, the tool guides you through the calibration process. At the end it outputs a CSV file for the calibrated body, which can be loaded into OptiTrack via the file menu. Before loading the file, please be sure to disable the uncalibrated body, as otherwise a bug in OptiTrack is sometimes triggered that prevents further executions of the calibration tool.

ART

Tool filename: lh_art_calibration.exe
The tool will guide you through the complete calibration process and upload the tracking body directly to the camera system.

Configuration options

Advanced Realtime Tracking (ART) DTrack

FusionHub works with all ART tracking systems, based on their DTrack tracking software.

"optical": { "settings": { "port": 5000, "bodyIDs": [ 1 ], "endpoints": [ "inproc://optical_data_source_0" ], "objectNameMapping": { "1": "hMD" } }, "type": "DTrack" }

Adjust the body ID of the HMD as configured in DTrack.

Optitrack

FusionHub works with all Optitrack tracking systems based on their Motive tracking software.

"optical": { "type": "Optitrack", "settings": { "connectionType": "Multicast", "localAddress": "192.168.0.99", "remoteAddress": "192.168.0.100", "bodyIDs": [ 1 ], "endpoints": [ "inproc://optical_data_source_0" ], "objectNameMapping": { "1": "QuestPro" } } }

Adjust the body ID of the HMD as configured in Motive.

Using LPVR-AIR on a motion platform

FusionHub has the ability to use an additional platform IMU to compensate for the motion of a simulator platform or vehicle. It combines data from both IMUs to calculate poses relative to the moving platform.

Differential IMU node

The differential IMU node allows compensating the movement of, e.g., the motion platform of a simulator using an IMU attached to the simulator base. This is achieved by transforming the output of the reference IMU into the same coordinate system as the headset IMU and calculating the difference between the two.
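
Conceptually the compensation boils down to a quaternion difference. The following Python sketch illustrates the principle only (it is not FusionHub's actual implementation) and assumes both orientations have already been expressed in a common frame via referenceOrientationQuat:

# Quaternions as (w, x, y, z) tuples; the head pose relative to the moving
# platform is the quaternion difference q_rel = conj(q_platform) * q_hmd.

def q_mul(a, b):
    # Hamilton product of two quaternions.
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def q_conj(q):
    # Conjugate, i.e. the inverse for unit quaternions.
    w, x, y, z = q
    return (w, -x, -y, -z)

def platform_relative(q_hmd, q_platform):
    # Rotation of the head relative to the platform.
    return q_mul(q_conj(q_platform), q_hmd)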

The signal flow of the IMU-optical fusion in connection with the differential IMU node is shown in the block diagrams below. Please note that these diagrams were originally created for LPVR-CAD and LPVR-DUO and therefore contain the imuToEyeQuat parameter, which isn’t available in FusionHub. The DefaultCombiner in the diagram represents the basic IMU-optical fusion, while the DifferentialCombiner stands for the differential IMU node.

The configuration block for the differential IMU node and adjusted fusion block looks like this:

"differentialImu": { "dataEndpoint": "inproc://differential_imu_output", "inputEndpoints": [ "tcp://localhost:8898", "inproc://imu_data_source_0", "tcp://localhost:8799" ], "settings": { "referenceOrientationQuat": { "w": 1, "x": -1, "y": 1, "z": 1 } } }, "fusion": { "dataEndpoint": "tcp://*:8799", "inputEndpoints": [ "inproc://optical_data_source_0", "inproc://differential_imu_output" ], "settings": { "Autocalibration": { "minAgeS": 60, "nSamplesForAutocalibration": 1500, "nSamplesForSteady": 256, "noiseRmsLimit": 0.02, "steadyThresholdAverage": 0.2, "steadyThresholdRms": 1 }, "Intercalibration": {}, "MotionDetection": { "omegaLimit": 3, "positionSampleInterval": 1000, "rotationFilterAlpha": 0.9, "timeToUnknown": 500 }, "SensorFusion": { "alignment": { "w": 1.0, "x": 0.0, "y": 0.0, "z": 0.0 }, "orientationWeight": 0.005, "predictionIntervalMs": 10, "sggPointsEachSide": 5, "sggPolynomialOrder": 5, "tiltCorrection": null, "yawWeight": 0.01 }, "runIntercalibration": false }, "type": "ImuOpticalFusion" }

See an explanation of the configuration parameters of the differential IMU node below:

  • dataEndpoint: the output endpoint to which the differential IMU data is forwarded. Default: inproc://differential_imu_output

  • inputEndpoints: the input endpoints of the node. Two IMU inputs and the looped-in output of the IMU-optical fusion node are required. Default: "inproc://imu_data_source_0", "inproc://imu_data_source_1", "inproc://fusion_output"

  • referenceOrientationQuat: the orientation of the reference IMU target in the optical coordinate system. Default: "w": 1.0, "x": -1.0, "y": 1.0, "z": 1.0

  • referenceToOpticalQuat: the transformation from the reference IMU’s local coordinate system to the optical frame. Default: "w": 1.0, "x": 0.0, "y": 0.0, "z": 0.0

Calibration of platform IMU

Option 1 (Recommended) - IMU fixed to optical tracking bar

In case you’re using an ART SmartTrack 3 system, it makes sense to attach the platform IMU directly to the camera unit. As this puts the IMU into a known reference frame relative to the optical coordinate system, no further calibration of the relationship between the two is needed. Please note that correct adjustment of the HMD’s optical tracking body relative to its IMU is still required. The image below shows how to attach an LPMS-IG1 on top of a SmartTrack 3. The corresponding referenceOrientationQuat for this configuration is w=1.0, x=-1.0, y=1.0, z=1.0.
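
As a side note, this quaternion is given in unnormalized form. Assuming FusionHub normalizes it internally, it corresponds to the unit quaternion

(1, -1, 1, 1) / ||(1, -1, 1, 1)|| = (0.5, -0.5, 0.5, 0.5)

in (w, x, y, z) order, i.e. a 120° rotation around the axis (-1, 1, 1)/√3.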

Option 2 - Platform IMU-optical system intercalibration

  1. Attach an optical target to the platform IMU. Any target shape is fine; the target could look like the one displayed below.

[Image: image-20240823-102305.png]
  2. Create an intercal-config.json file for FusionHub that runs the intercalibration. This file will look similar to the following script:

    { "sinks": { "imuOpticalIntercalibration": { "inputEndpoints": [ "inproc://optical_data_source", "inproc://imu_data_source" ] } }, "sources": { "imu": [ { "outEndpoint": "inproc://imu_data_source", "settings": { "id": "referenceImu", "name": "ig1pcan9003c0028" }, "type": "OpenZen" } ], "optical": { "settings": { "bodyIDs": [ 1 ], "endpoints": [ "inproc://optical_data_source" ], "port": 5000 }, "type": "DTrack" } } }

This code block defines two sources, an IMU source and an optical source. The output from both sources is piped into the intercalibration node. Make sure to adjust the IMU and optical node parameters to the devices you are using as input, in this case ART tracking and an LPMS-IG1 IMU.

Save this configuration file under a separate file name such as intercal-config.json and load it into FusionHub by calling FusionHub.exe -c intercal-config.json.

  3. FusionHub will output the status of the intercalibration to the command line. To perform the calibration, slowly rotate the IMU with the optical target attached. The intercalibration samples 50 poses and then outputs a referenceToOpticalQuat. Write this quaternion into your original configuration file as part of the differential IMU node.

[Image: image-20240823-133029.png]
  4. Leave FusionHub running and fix the IMU in the location where you’d like to keep it permanently. Make sure that the camera system can still see the optical marker attached to the IMU after it is fixed in place.

  5. From the command line output, retrieve the optical quaternion and use it in your original config.json as referenceOrientationQuat in the differential IMU node. You can now remove the marker target from the IMU. Leave the IMU in its place.

  6. Quit FusionHub, double-check your modified config.json and run FusionHub again. Platform motion compensation should now work correctly.

You can verify correct operation of the motion compensation by following the steps below. This requires that you have an HMD with a working rendering pipeline connected to FusionHub.

  1. Get on the platform and put on the HMD. Look straight ahead.

  2. Rotate the platform around its yaw, pitch and roll axes.

  3. While the user keeps their head steady, the 3D image displayed in the HMD should remain stationary.

To optimize performance of in-car tracking, switch the internal tracking of the Meta Quest 3 to 3-DOF, i.e. switch the native pose tracking off.

External object tracking

In addition to delivering high-quality mixed reality and precise wireless headset tracking, LPVR-AIR seamlessly integrates controllers tracked by the HMD’s inside-out system with objects tracked via optical targets in the outside-in tracking frame, all within a unified global frame. The video above shows this unique capability in action.

When combined with our LPVR-CAD software, LPVR-AIR enables the tracking of any number of rigid bodies within the outside-in tracking volume. This provides an intuitive solution for tracking objects such as vehicle doors, steering wheels, or other cockpit components. Outside-in optical markers are lightweight, cost-effective, and require no power supply. With camera-based outside-in tracking, all objects within the tracking volume remain continuously tracked, regardless of whether the user is looking at them. They can be positioned with millimeter accuracy and function reliably under any lighting conditions, from bright daylight to dark studio environments.

See documentation about how to set up LPVR-CAD here: LPVR-CAD for SteamVR System Setup

When setting up LPVR-CAD in connection with LPVR-AIR, adjusting the headset related settings in LPVR-CAD isn’t needed. Uncheck the HMD active checkbox in the LPVR-CAD settings.

The integration with LPVR-CAD is still work in progress; handle with care. We will release an updated version of LPVR-CAD for this purpose soon.

Network setup

Router

In order to establish high-bandwidth communication between the host and the HMD, we recommend setting up a 5 GHz or, for optimum performance, a 6 GHz (WIFI 6E) router. In some environments changing the internal channel setup of the router might increase performance. Some experimentation might be needed to find the perfect setting for your system.

Network topology

We recommend setting up a simple network structure to minimize potential error sources in the installation process, as shown in the image below.

Network performance

  • A WIFI 6E connection is recommended to achieve optimum performance:

[Image: image-20241104-093508.png]
  • Transmission speeds are expected to be around 2 Gbps for a stable 6 GHz connection.

[Image: image-20241104-093624.png]
  • A typical ALVR performance graph is shown below. Overall latencies in good environments should be between 70 and 90ms.

[Image: image-20250223-045716.png]
  • In SteamVR check Advanced Frame Timing for performance problems:

[Image: image-20241104-093913.png]
  • In case of sufficient rendering performance, the advanced frame timing window should look like the output below:

[Image: image-20241104-094147.png]

Optimizing LPVR-AIR

Frame timing

In order to avoid synchronization issues between the HMD’s displays and the optical tracking data, ideally set the sampling frequency of the optical tracking to a multiple of the display frequency of the HMD, e.g. 90 Hz display frequency and 180 Hz optical tracking frequency. The HMD display frequency can be adjusted in the settings menu of ALVR.
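
As a trivial illustration of this rule of thumb (the supported rates below are hypothetical; check which rates your camera system actually offers), the optical rate can be chosen as the smallest supported integer multiple of the display rate:

# Pick the smallest supported optical tracking rate that is an integer
# multiple of the HMD display rate; fall back to the highest supported rate.
def optical_rate(display_hz, supported_rates):
    multiples = [r for r in supported_rates if r % display_hz == 0]
    return min(multiples) if multiples else max(supported_rates)

print(optical_rate(90, [120, 180, 240]))  # -> 180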

WIFI environment quality

LPVR-AIR transmits images from the server to the HMD through a regular WIFI connection. Usually the 5 GHz band is used; in the optimum case we switch to WIFI 6E, i.e. the 6 GHz band. In environments without much WIFI interference, i.e. other devices using the same WIFI bands, this works very well. Crowded WIFI environments limit the bandwidth of the WIFI transmission, which can lead to unpredictable loss of image and tracking quality. Examples of crowded spaces are public locations such as exhibition halls. Beware!

Optical tracking parsing latency

Due to limited WIFI bandwidth and computing power limitations on the HMD, pose information streamed from the optical tracking system is parsed on the HMD with some delay. This latency is mostly compensated by pose prediction. The compensation isn’t perfect, so try to optimize the performance of your optical tracking system as much as possible.
