LPVR-AIR Manual
Table of contents
- 2 Introduction
- 3 System components
- 3.1 Applications
- 4 Starting LPVR-AIR
- 4.1 Installation
- 4.2 Start-up
- 5 Using the FusionHub GUI
- 6 Optical tracking systems
- 6.1 Optical marker target setup
- 6.1.1 ART DTrack
- 6.1.2 Optitrack Motive
- 6.2 Configuration options
- 6.2.1 Advanced Realtime Tracking (ART) DTrack
- 6.2.2 Optitrack
- 7 Using LPVR-AIR on a Motion Platform
- 8 Network setup
- 8.1 Router
- 8.2 Network topography
- 8.3 Network performance
- 9 Challenges and limitations of using Android-based wireless HMDs
Introduction
The purpose of LPVR-AIR is to wirelessly stream image data from a SteamVR application such as Autodesk VRED to a wireless HMD like the Meta Quest or VIVE Focus. LP-Research's FusionHub software in combination with the open-source application ALVR fulfills this purpose well.
ALVR by default uses the internal inside-out tracking of Meta Quest for pose calculation. LPVR-AIR exchanges the Quest’s native inside-out tracking with combined IMU and ART / Optitrack outside-in tracking to allow simultaneous, spatially synchronized operation of several HMDs in large tracking volumes.
To make the tracking functionality of FusionHub available to standalone augmented and virtual reality headsets, it can be integrated with Android-compatible OpenXR HMDs. This works via a customized version of the ALVR open source project. ALVR allows streaming image data wirelessly from a host computer and interfaces to 3D content engines through SteamVR. While the original ALVR client was built to work on Meta Quest HMDs, ALVR works in principle on any OpenXR compatible headset.
We use a thin client library to receive IMU data from the HMD API, pass it to FusionHub, process it there and then re-inject the information into the video pipeline of the headset. Depending on the type of HMD this happens within the ALVR client’s standard interface or in a separate hardware-specific API layer.
System components
Applications
The following applications need to be started on the head mounted display and the host computer. They should all be included in the installation package that you received from us. We will discuss the order of starting these applications and what their status output should be below.
On the headset:
Application | Purpose | Name (can vary by release) |
---|---|---|
ALVR client | Receives the image stream from the host and forwards IMU data to FusionHub | alvr_client_android.apk |
On the host computer:
Application | Purpose | Name |
---|---|---|
FusionHub GUI | Configuration and monitoring front-end for FusionHub (optional at runtime) | |
ALVR server | Streams the SteamVR image output to the headset | |
FusionHub | Fuses IMU and optical tracking data into HMD poses | FusionHub.exe |
Starter script | Starts the host-side applications in the correct order | Start-LPVR-AIR.bat |
LPVR-AIR is copy-protected by a USB dongle that needs to be inserted when the application is started.
Starting LPVR-AIR
Installation
Install the ALVR client APK on the headset using a side-loading tool like SideQuest. In case of a Meta Quest HMD this requires putting the HMD into developer mode. The steps for doing so are described here: Device Setup
In case you’re using a VIVE Focus 3 headset you need to do something similar as described here: https://developer.vive.com/resources/hardware-guides/vive-focus-specs-user-guide/how-do-i-put-focus-developer-mode/
ALVR requires SteamVR to be set up on the host computer. If you haven’t installed it on your computer yet, please refer to the instructions here: https://store.steampowered.com/app/250820/SteamVR/
By default SteamVR is started via Steam, therefore Steam has to run in order to start SteamVR. It is however possible to start SteamVR directly from its binary directory by setting the STEAMVR_BIN_DIR environment variable. LPVR-AIR automatically detects if this variable is set and will then attempt to start SteamVR directly. A typical value for STEAMVR_BIN_DIR would be C:\Program Files (x86)\Steam\steamapps\common\SteamVR\bin\win64.
Make sure the guardian (i.e. the automatic tracking boundary detection of the Quest’s internal tracking) is turned off in the developer settings of the HMD. For the VIVE Focus use the equivalent setting in the Focus' configuration.
Start-up
1. Run alvr_client_android/alvr_client_android.apk on your headset
2. Start your optical tracking system (ART DTrack or Optitrack Motive)
3. Run Start-LPVR-AIR.bat
Notes:
- Make sure the copy protection dongle is inserted into your computer
- To upload an APK to your HMD you might need to activate its developer mode
- In the FusionHub configuration script make sure to correctly configure your optical tracking system. The script can be accessed through the FusionHub GUI.
- Make sure to use a pre-configured HMD optical target provided by ART or Optitrack
Once streaming starts, you should see the SteamVR default environment through the headset. Check if the nIMU counter in the FusionHub GUI is increasing. If both nOptical and nIMU are increasing then the communication between ALVR, optical tracking and FusionHub is working.
Using the FusionHub GUI
Connect the GUI to FusionHub from the start screen of the FusionHub GUI. Note that running the GUI is optional; FusionHub works normally without the GUI running.
Once connected to the FusionHub instance on the headset, select Base configuration to see the current configuration of FusionHub:
Adjust parameter blocks as needed. Refer to the description of FusionHub BASE for configuration options:
Note the following input and output ports that are hard-coded in the ALVR FusionHub API layer. These are already correctly set in the default configuration file installed with FusionHub, so usually there is no need to change them.
Endpoint | Direction | Purpose |
---|---|---|
tcp://*:8799 | Output | Fused pose data |
tcp://localhost:8898 | Input | IMU data |
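These endpoints appear in the fusion block of the default configuration script shown further below (excerpt):

"fusion": {
  "dataEndpoint": "tcp://*:8799",
  "inputEndpoints": [
    "inproc://optical_data_source_1",
    "tcp://localhost:8898"
  ]
}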
If it’s not running yet, make sure to start and configure your optical tracking system. Once optical data is streamed to FusionHub, the nOptical counter in the GUI should be increasing.
Default configuration script with optical input defined for ART DTrack:
{
  "LicenseInfo": {
    "LicenseKey": "",
    "ResponseKey": ""
  },
  "settings": {
    "websocketDataOutputRate": 20
  },
  "sinks": {
    "VRPN": {
      "settings": {
        "inputEndpoints": [
          "inproc://optical_data_source_1"
        ],
        "settings": {
          "deviceName": "FusionHub",
          "port": 3883,
          "tracker0": "HMD"
        }
      }
    },
    "fusion": {
      "dataEndpoint": "tcp://*:8799",
      "inputEndpoints": [
        "inproc://optical_data_source_1",
        "tcp://localhost:8898"
      ],
      "settings": {
        "Autocalibration": {
          "minAgeS": 60,
          "nSamplesForAutocalibration": 1500,
          "nSamplesForSteady": 256,
          "noiseRmsLimit": 0.02,
          "steadyThresholdAverage": 0.2,
          "steadyThresholdRms": 1
        },
        "Intercalibration": {},
        "MotionDetection": {
          "omegaLimit": 3,
          "positionSampleInterval": 1000,
          "rotationFilterAlpha": 0.9,
          "timeToUnknown": 500
        },
        "SensorFusion": {
          "alignment": {
            "w": 0.990892966476337,
            "x": 0.13458639604387848,
            "y": 0.0005637732357904688,
            "z": 0.004160907038605602
          },
          "orientationWeight": 0.005,
          "predictionIntervalMs": 10,
          "sggPointsEachSide": 5,
          "sggPolynomialOrder": 5,
          "tiltCorrection": null,
          "yawWeight": 0.01
        },
        "runIntercalibration": false
      },
      "type": "ImuOpticalFusion"
    }
  },
  "sources": {
    "optical": {
      "settings": {
        "bodyIDs": [
          1
        ],
        "endpoints": [
          "inproc://optical_data_source_1"
        ],
        "port": 5000
      },
      "type": "DTrack"
    }
  }
}
Press the buttons ‘Set’ and ‘Save’ after changing the configuration script to make your changes active. It might take 1-2 seconds for FusionHub to reset.
In case you happen to enter an invalid configuration, FusionHub might not restart correctly. If you would like to reset your settings, just re-install the FusionHub APK.
Once the configuration is correct, you’ll most likely not have to touch the script again in the foreseeable future.
Optical tracking systems
Optical marker target setup
LPVR-AIR uses the OpenVR coordinate frame convention as shown below:
Make sure that your optical target is configured with the correct coordinate frame; in the OpenVR convention, x points right, y points up, and -z points forward in a right-handed frame. Check this even if you are using a pre-defined optical tracking body.
ART DTrack
See this guide for further information about how to configure DTrack. Also, refer to this page for more information on ART.
Optitrack Motive
Refer to this page for Optitrack setup from the LPVR documentation.
Configuration options
Advanced Realtime Tracking (ART) DTrack
FusionHub works with all ART tracking systems based on their DTrack tracking software.
"optical": {
"settings": {
"port": 5000,
"bodyIDs": [
1
],
"endpoints": [
"inproc://optical_data_source_0"
],
"objectNameMapping": {
"1": "hMD"
}
},
"type": "DTrack"
}
Adjust the body ID of the HMD as configured in DTrack.
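For example, if DTrack reports the HMD as body 3 (a hypothetical ID used here purely for illustration), the relevant fields would change as follows:

"bodyIDs": [
  3
],
"objectNameMapping": {
  "3": "HMD"
}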
Optitrack
FusionHub works with all Optitrack tracking systems based on their Motive tracking software.
"optical": {
"type": "Optitrack",
"settings": {
"connectionType": "Multicast",
"localAddress": "192.168.0.99",
"remoteAddress": "192.168.0.100",
"bodyIDs": [
1
],
"endpoints": [
"inproc://optical_data_source_0"
],
"objectNameMapping": {
"1": "QuestPro"
}
}
}
Adjust the body ID of the HMD as configured in Motive.
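Likewise, adjust the connection addresses to match your network. Typically localAddress is the network interface of the machine receiving the tracking data and remoteAddress is the machine running Motive; check this against your release's documentation. The addresses below are placeholders for illustration:

"connectionType": "Multicast",
"localAddress": "192.168.1.50",
"remoteAddress": "192.168.1.10"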
Using LPVR-AIR on a Motion Platform
FusionHub has the ability to use an additional platform IMU to compensate the motion of a simulator platform or vehicle. It combines data from both IMUs to calculate poses relative to a moving platform.
Differential IMU Node
The differential IMU node allows compensating for the movement of e.g. the motion platform of a simulator using an IMU attached to the simulator base. This is achieved by transforming the output of the reference IMU into the same coordinate system as the headset IMU and calculating the difference between the two.
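Conceptually the node computes something like the following (a simplified sketch of the idea, not the literal implementation). With $q_{head}$ the orientation measured on the headset, $q_{ref}$ the orientation measured by the platform IMU, and $q_{r2o}$ the referenceToOpticalQuat transform described below, the platform-relative orientation is

$$ q_{rel} = \left( q_{r2o} \otimes q_{ref} \right)^{-1} \otimes q_{head} $$

so that rotations common to the platform and the head cancel out and only head motion relative to the platform remains.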
The signal flow of the IMU-optical fusion in connection with the differential IMU node is shown in the block diagrams below. Please note that these diagrams were originally created for LPVR-CAD and LPVR-DUO and therefore contain the imuToEyeQuat parameter, which isn’t available in FusionHub. The DefaultCombiner in the diagrams represents the basic IMU-optical fusion, while the DifferentialCombiner stands for the differential IMU node.
The configuration block for the differential IMU node and the adjusted fusion block looks like this:
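The exact script shipped with your release may differ; the sketch below shows only the differential IMU node, assembled from the parameters documented in the table that follows. The node key differentialImu, the second IMU input endpoint and the fusion loop-back endpoint tcp://localhost:8799 (the output of the fusion node in the default configuration) are assumptions for illustration:

"differentialImu": {
  "dataEndpoint": "inproc://differential_imu_output",
  "inputEndpoints": [
    "inproc://imu_data_source_0",
    "inproc://imu_data_source_1",
    "tcp://localhost:8799"
  ],
  "settings": {
    "referenceOrientationQuat": {
      "w": 1.0, "x": -1.0, "y": 1.0, "z": 1.0
    },
    "referenceToOpticalQuat": {
      "w": 1.0, "x": 0.0, "y": 0.0, "z": 0.0
    }
  }
}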
See an explanation of the configuration parameters of the differential IMU node below:
Parameter name | Description | Default value |
---|---|---|
dataEndpoint | The data output endpoint to which the differential IMU data is forwarded | inproc://differential_imu_output |
inputEndpoints | The input endpoints of the node. Two IMU inputs and looping in the output of the IMU-optical fusion node are required. | "inproc://imu_data_source_0", … |
referenceOrientationQuat | The orientation of the reference IMU target in the optical coordinate system | "w": 1.0, "x": -1.0, "y": 1.0, "z": 1.0 |
referenceToOpticalQuat | Transformation from the reference IMU local coordinate system to the optical frame | "w": 1.0, "x": 0.0, "y": 0.0, "z": 0.0 |
Calibration of Platform IMU
Option 1 (Recommended) - IMU Fixed to Optical Tracking Bar
In case you’re using an ART Smarttrack 3 system it makes sense to attach the platform IMU directly to the camera unit. As this puts the IMU into a known reference frame relative to the optical coordinate system, a further calibration of the relationship between the two isn’t needed. Please note that correct adjustment of the optical tracking body of the HMD relative to its IMU is still needed. The image below shows how to attach an LPMS-IG1 on top of an SMT 3. The corresponding referenceOrientationQuat for this configuration is w=1.0, x=-1.0, y=1.0, z=1.0.
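In the differential IMU node configuration this value is entered like this (field name as in the parameter table above):

"referenceOrientationQuat": {
  "w": 1.0,
  "x": -1.0,
  "y": 1.0,
  "z": 1.0
}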
Option 2 - Platform IMU-Optical System Intercalibration
Attach an optical target to the platform IMU. Any target shape is fine; it could look like the one displayed below.
Create an intercal-config.json file for FusionHub that runs the intercalibration. This file will look similar to the following script:
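If your installation package contains an intercalibration example configuration, prefer that file. As a rough sketch of the structure, where the IMU source type OpenZen, the node name intercalibration and its type name Intercalibration are assumptions to be checked against the FusionHub manual, and body ID 2 is a placeholder for the target attached to the platform IMU:

{
  "sources": {
    "imu": {
      "type": "OpenZen",
      "settings": {
        "endpoints": [
          "inproc://imu_data_source_0"
        ]
      }
    },
    "optical": {
      "type": "DTrack",
      "settings": {
        "port": 5000,
        "bodyIDs": [
          2
        ],
        "endpoints": [
          "inproc://optical_data_source_0"
        ]
      }
    }
  },
  "sinks": {
    "intercalibration": {
      "type": "Intercalibration",
      "settings": {
        "inputEndpoints": [
          "inproc://imu_data_source_0",
          "inproc://optical_data_source_0"
        ]
      }
    }
  }
}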
This code block defines two sources, an IMU source and an optical source. The output from both sources is piped into the intercalibration node. Make sure to adjust the IMU and optical node parameters to the devices you are using as input, in this case ART tracking and an LPMS-IG1 IMU.
Save this configuration file under a separate file name such as intercal-config.json and load it into FusionHub by calling FusionHub.exe -c intercal-config.json.
1. FusionHub outputs the status of the intercalibration to the command line. To perform the calibration, slowly rotate the IMU with the optical target attached. The intercalibration samples 50 poses until it outputs a referenceToOpticalQuat. Write this quaternion into your original configuration file as part of the differential IMU node.
2. Leave FusionHub running and fix the IMU in the location where you’d like to keep it permanently. Make sure that the camera system can still see the optical marker attached to the IMU after it is fixed in place.
3. From the command line, retrieve the optical quaternion output and use it in your original config.json as referenceOrientationQuat in the differential IMU node. You can now remove the marker target from the IMU; leave the IMU in its place.
4. Quit FusionHub, double-check your modified config.json and run FusionHub again. Platform motion compensation should now work correctly.
You can verify correct operation of the motion compensation by following the steps below. This requires that you have an HMD with a working rendering pipeline connected to FusionHub.
1. Mount the platform and put on the HMD. Look straight ahead.
2. Rotate the platform around the yaw, pitch and roll axes.
3. While the user keeps their head steady, the 3D image displayed in the HMD should remain stationary.
Network setup
Router
In order to establish high-bandwidth communication between the host and the HMD we recommend setting up a 5 GHz or, for optimum performance, a 6 GHz (WIFI 6E) router. In some environments changing the internal channel setup of the router might increase performance. Some experimentation might be needed to find the best settings for your system.
Network topography
We recommend setting up a simple network structure to minimize potential error sources in the installation process, as shown in the image below.
Network performance
A WIFI 6E connection is recommended to achieve optimum performance:
Transmission speeds are expected to be around 2 Gbps for a stable 6 GHz connection.
A typical ALVR performance graph is shown below. Overall latencies in good environments should be between 70 and 90 ms.
In SteamVR check Advanced Frame Timing for performance problems:
If rendering performance is sufficient, the Advanced Frame Timing window should look like the output below:
Challenges and limitations of using Android-based wireless HMDs
WIFI environment quality
LPVR-AIR transmits images from the server to the HMD through a regular WIFI connection. Usually the 5 GHz band is used; in the optimum case we switch to WIFI 6E. In environments without much WIFI interference, i.e. few other devices using the same WIFI bands, this works very well. Crowded WIFI environments limit the bandwidth of the WIFI transmission, which can lead to unpredictable loss of image and tracking quality. Typical examples are public locations such as exhibition halls, so be especially careful in such environments.
Unfortunately the Meta Quest firmware doesn't allow using a wired Ethernet connection, which would mitigate this issue and provide a quick fix in urgent situations. Here too, a clean solution is prevented by the inflexibility of Meta's software, and there are currently no alternative wireless HMDs on the market that allow the modifications we would need for an optimum implementation.
In an ideal setup with several HMDs in use, each HMD uses a separate WIFI 6E channel, with the selected channels spaced as far apart as possible.
Optical tracking parsing latency
Due to limited WIFI bandwidth and computing power limitations on the HMD, pose information streamed from the optical tracking system is parsed on the HMD with a significant delay. So far we have not found a way to reduce this delay. As described in the sensor fusion section, we added input from the native inside-out tracking to the fusion in order to compensate for this latency.
References
- ALVR open source project: https://github.com/alvr-org/ALVR (stream VR games from your PC to your headset via Wi-Fi)
- FusionHub documentation: FusionHub Manual