Transfer old wiki content

This commit is contained in:
Łukasz Mariański 2022-12-16 17:36:16 +01:00
parent 570a0d95e3
commit 41ace0b92d
No known key found for this signature in database
GPG Key ID: 6F3ED3F76565673B
25 changed files with 1518 additions and 0 deletions

# ALVR Checklist (wiki/ALVR-Checklist.md)
## Hardware Requirements
* [ ] Intel Core i5-4590/AMD FX 8350 equivalent or better
* [ ] At least 4 GB of RAM
* [ ] NVIDIA GeForce GTX 970, AMD Radeon R9 290 equivalent or better
* [ ] A 5 GHz router/access point, or a PC that can create its own 5 GHz hotspot
## Network settings
* [ ] My PC has a wired connection to the router/access point
* [ ] The access point is placed in sight of my designated playspace without any obstructions
* [ ] I'm using the 5 GHz band of the router/access point
* [ ] No one else is using the router/access point
* [ ] I'm the only user of the 5 GHz channel of the router/access point; no one else in the vicinity is using the same channel
* [ ] The 5 GHz and 2.4 GHz bands of the access point have different SSIDs, to prevent switching to 2.4 GHz
## Software settings
* [ ] I have the latest Windows 10 updates
* [ ] I have a recent version of SteamVR
## Troubleshooting
* [ ] The firewall settings were successfully applied during the setup of ALVR
* [ ] I did not change the network settings since the installation of ALVR (Private/Public/Work)
* [ ] I did not move the installation folder of ALVR since the setup
* [ ] The path to the folder of ALVR does not contain any non-Latin characters or accents (ツ Л Ö ...)

# ALVR v14 and Above
This page explains two methods to connect the PC and headset remotely: port-forwarding and ZeroTier. The primary purpose of this is connecting the headset to a cloud PC (like ShadowPC).
## Port-forwarding
Port-forwarding lets you connect devices that are behind different NATs, i.e. on different local networks. You need administrator access to your router. This method has the best streaming performance.
**IMPORTANT**: ALVR does not use end-to-end encryption of the stream data. If you use this method, be aware that the connection is vulnerable to "Man In The Middle" attacks.
1. Take note of the public IP of your headset. You can use the online tool [WhatIsMyIP](https://www.whatismyip.com/).
2. Inside your router web interface or app, add a port-forwarding rule for your headset. You need to specify the ports 9943 and 9944 for both TCP and UDP.
3. Connect to the remote PC and open ALVR. In the Connection tab press `Add client manually`. Fill in a name for your headset (any name works), the hostname (shown on the welcome screen when you open the ALVR app on your headset), and the remote IP of the headset (the IP you got in step 1), then press `Add client`.
You can now use ALVR to connect to your remote PC.
**Note**: The public IP can change often. Every time you want to use ALVR you need to check that your current public IP is the same as the last time. If the IP changed, you can update it using the "Configure client" interface, accessed with the `Configure` button next to your headset name on the server.
## ZeroTier
[ZeroTier](https://www.zerotier.com/) is a tunneling software that makes remote devices connect to each other as if they are in the same local network.
Comparing this to the port-forwarding method:
Pros:
* Does not require access to the router interface.
* You don't need to update the public IP often on the server.
* The connection is encrypted.
Cons:
* The streaming performance is worse. You may experience more glitches and loss of quality in the image and audio.
### Requirements
- [ZeroTier](https://www.zerotier.com/) for your PC
- ZeroTier APK for your Quest (you can find it online)
- SideQuest or some other method to install the ZeroTier APK onto your headset
### Installation
Use the "Install APK" function of SideQuest to install the ZeroTier APK on your Quest, and also download and install ZeroTier on your PC. After you've installed ZeroTier, follow ZeroTier's official [Getting Started](https://zerotier.atlassian.net/wiki/spaces/SD/pages/8454145/Getting+Started+with+ZeroTier) guide to set up a network for ALVR. Join the network on both the Quest and the PC. On the Quest, make sure the network is enabled by switching on the slider next to the network in the list in the ZeroTier app (you may be prompted to allow ZeroTier to create a VPN connection).
After both your PC and your Quest are connected to the same ZeroTier network, we'll need to manually add your Quest to the ALVR dashboard. To do so, we'll need to find your Quest's ZeroTier IP. There are two ways to do this:
- Go to the ZeroTier network page, find your Quest under "Members", and copy the managed IP from there.
- Or, in the ZeroTier app on your Quest, click on the network you created. The IP is under the "Managed IPs" section at the bottom.
The IP should look something like `192.168.143.195`. If there's a `/` at the end with a couple of numbers following it, remove them along with the slash.
Next, we'll need to add the Quest to the ALVR dashboard. On your headset, launch ALVR. Then, on the ALVR dashboard on your PC, click the "Add Client Manually" button, provide a name and hostname (you can get the hostname from the "trust" screen of ALVR on your Quest), then put in the IP address that we got from ZeroTier.
At this point, you should be ready to go. Have fun in VR!
### Troubleshooting
- If you can't get your Quest to connect to ALVR, and are stuck on the "Trust" screen, try to ping your Quest's managed IP address (the one we got earlier). If it says "no route to host" or something similar, your Quest can't see your PC. Try running through the steps above to make sure you didn't miss anything.
## Tailscale
An alternative to ZeroTier with practically the same setup procedure. This could have better latency, depending on your distance to the datacenter.
https://tailscale.com/
# ALVR v11 and Below
ALVR version Experimental v7 or newer is recommended for this configuration.
This configuration is **NOT** supported in ALVR v12. The latest release to still support this is v11.
To run ALVR client and ALVR server on separate networks (broadcast domains) the following things must be done:
1. UDP ports 9943 and 9944 of ALVR server must be accessible from Oculus Quest device (i.e. firewall openings must be made to allow Oculus Quest to connect to ALVR server UDP ports 9943 and 9944).
1. Oculus Quest must be connected to computer and command-line `adb shell am startservice -n "com.polygraphene.alvr/.ChangeSettings" --es "targetServers" "10.10.10.10"` must be run in Command Prompt to specify IP address of ALVR server (`10.10.10.10` must be substituted with IP address of ALVR server; the long line is a single command-line).
1. The next time the ALVR client is started, it should try to connect to the specified ALVR server. The ALVR server should display the client in the _Server_ tab (the same way local-network clients are displayed).
ALVR does **NOT** provide any kind of tunnel, NAT traversal etc. UDP ports 9943 and 9944 of ALVR server (VR gaming PC) must be accessible from ALVR client (Oculus Quest) otherwise this won't work.
**Important notes on security!**
* ALVR protocol does not have any encryption or authentication (apart from ALVR client IP address shown in ALVR server and the requirement to click _Connect_ on ALVR server).
* It is recommended to run ALVR via encrypted tunnel (VPN) over the internet. In case VPN is not an option, access to ALVR server (UDP ports 9943 and 9944) should be restricted by Windows Firewall (only connections from known IP addresses of ALVR clients should be allowed) and ALVR server should not be left running unattended.
* **Warning!** SteamVR allows controlling the desktop from the VR headset (i.e. a **malicious ALVR client could take over the PC**).
* As the license states ALVR IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND (see the file `LICENSE` in this GitHub repository for legal text/definition). You are on your own (especially if you run ALVR over the Internet without VPN).

ALVR can be built on Windows and Linux.
# Windows
Preferred IDE (optional): Visual Studio Code with rust-analyzer extension
### Prerequisites
* [Chocolatey](https://chocolatey.org/install)
* [rustup](https://rustup.rs/). Alternatively install with: `choco install rustup.install`
## Server
**Note: These instructions are for the master branch.**
On the repository root execute:
```
cargo xtask prepare-deps --platform windows
cargo xtask build-server --release
```
ALVR server will be built into `build/alvr_server_windows/`.
To compile with software encoding support execute:
```
cargo xtask build-server --release --gpl
```
This will download and use FFmpeg binaries that are GPL licensed.
## Client
* [Android Studio](https://developer.android.com/studio)
* Latest NDK (currently 25.1.8937393)
* Environment variable `JAVA_HOME` set to `C:\Program Files\Android\Android Studio\jre`
* Environment variable `ANDROID_SDK_ROOT` set to `%LOCALAPPDATA%\Android\Sdk`
* Environment variable `ANDROID_NDK_HOME` set to `%LOCALAPPDATA%\Android\Sdk\ndk\25.1.8937393`
On the repository root execute:
```
cargo xtask prepare-deps --platform android
cargo xtask build-client --release
```
ALVR client will be built into `build/alvr_client_<platform>/`.
# Linux
**Note: Linux builds of ALVR are still experimental!**
## Supported GPU Configurations
* AMD using radv is known to work, with hardware encoding
* AMD using amdvlk does not work
* NVIDIA using proprietary driver works, with hardware encoding.
* Intel is untested
## Packaged Builds
#### Deb and RPM Distributions
The build script located at `packaging/alvr_build_linux.sh` allows building of client and server together or independently, along with installation of any necessary dependencies if requested. This script will respect existing git repositories; if you would like a specific release, simply clone this repository at the release tag you need, then run the script in the directory above the repository.
#### Note:
* Fedora **client** builds are not recommended as they may potentially pollute the system Rust install; better support for this will be added later
* Releases prior to the merge of [PR 786](https://github.com/alvr-org/ALVR/pull/786) will not function due to a lack of required packaging files
* This script is designed to request superuser permissions only when necessary; do not run it as root
#### Usage:
```
Usage: alvr_build_linux.sh ACTION
Description: Script to prepare the system and build ALVR package(s)
Arguments:
ACTIONS
all Prepare and build ALVR client and server
client Prepare and build ALVR client
server Prepare and build ALVR server
FLAGS
--build-only Only build ALVR package(s)
--prep-only Only prepare system for ALVR package build
```
#### Example:
```bash
git clone https://github.com/alvr-org/ALVR.git
./ALVR/packaging/alvr_build_linux.sh all
```
## Server
### Dependencies
You need [rustup](https://rustup.rs/) and the following platform specific dependencies:
* **Arch**
```bash
sudo pacman -Syu clang curl nasm pkgconf yasm vulkan-headers libva-mesa-driver unzip
```
* **Gentoo**
* `media-video/ffmpeg >= 4.4 [encode libdrm vulkan vaapi]`
* `sys-libs/libunwind`
* `dev-lang/rust >= 1.51`
* **Nix(OS)**
Use the `shell.nix` in `packaging/nix`.
* **Ubuntu / Pop!_OS 20.04**
```bash
sudo apt install build-essential pkg-config libclang-dev libssl-dev libasound2-dev libjack-dev libgtk-3-dev libvulkan-dev libunwind-dev gcc-8 g++-8 yasm nasm curl libx264-dev libx265-dev libxcb-render0-dev libxcb-shape0-dev libxcb-xfixes0-dev libspeechd-dev libxkbcommon-dev libdrm-dev
```
### Building
Bundled version:
```bash
cargo xtask prepare-deps --platform linux --gpl [--no-nvidia]
cargo xtask build-server --release --gpl
```
To use the system ffmpeg install (you need an ffmpeg version with Vulkan support), run:
```bash
cargo xtask build-server --release
```
## Client
### Dependencies
* **Arch**
```bash
sudo pacman -Syu git unzip rustup cargo jre11-openjdk-headless jdk8-openjdk clang python libxtst fontconfig lib32-gcc-libs lib32-glibc libxrender
```
- Android SDK (can be installed using [android-sdk](https://aur.archlinux.org/packages/android-sdk)<sup>AUR</sup>)
```bash
sudo ${ANDROID_HOME}/tools/bin/sdkmanager "patcher;v4" "ndk;22.1.7171670" "cmake;3.10.2.4988404" "platforms;android-31" "build-tools;32.0.0"
```
### Building
```bash
cargo xtask prepare-deps --platform android
cargo xtask build-client --release
```
### Docker
You can also build the client using Docker: https://gist.github.com/m00nwtchr/fae4424ff6cda5772bf624a08005e43e
### Troubleshooting
On some distributions, Steam Native runs ALVR a little better. To get Steam Native on Ubuntu run it with:
```bash
env STEAM_RUNTIME=0 steam
```
On Arch Linux, you can also get all the required libraries by downloading the `steam-native-runtime` package from the multilib repository
```bash
sudo pacman -S steam-native-runtime
```
Some dependencies might then be missing; to check, run:
```bash
cd ~/.steam/root/ubuntu12_32
file * | grep ELF | cut -d: -f1 | LD_LIBRARY_PATH=. xargs ldd | grep 'not found' | sort | uniq
```
Some dependencies have to be fixed manually. For example, instead of forcing a downgrade to libffi version 6 (which could downgrade a bunch of system packages), you can create a symlink instead (requires testing):
```bash
cd /lib/i386-linux-gnu
ln -s libffi.so.7 libffi.so.6
```
and
```bash
cd /lib/x86_64-linux-gnu
ln -s libffi.so.7 libffi.so.6
```
A few dependencies are distro-controlled. You can attempt to import the packages at your own risk, perhaps using alien or some forced import commands, but this is neither recommended (it turns your system into a hybrid dependency mess) nor supported!

# PC
- A high-end PC is a requirement; ALVR is not a cheap alternative to a PCVR HMD
- ALVR resolution configuration and SteamVR multi-sampling may be used to influence quality in favor of performance or vice-versa
- Frequent dropped frames can cause a poor experience on ALVR; this can be verified using a tool such as [OVR Advanced Settings](https://github.com/OpenVR-Advanced-Settings/OpenVR-AdvancedSettings)
- Higher bit-rates will cause higher latency
- Ensure all relevant software is up to date; especially graphics and network drivers
- A good starting point is 100% resolution, a 30 Mbps bitrate, and a 200 kB buffer. With this configuration it should be butter smooth with almost no lag or packet loss; packet loss seen at this point is likely a result of network issues
# Network
- A wired connection from the PC to the network is **strongly recommended**
- A modern mid to high-end router and / or access point supporting at least 802.11AC (ideally 802.11AX) with regularly updated firmware is recommended
## Wireless
### General WiFi configuration best practices:
- Any device that can be wired should be; each wireless device slows down the overall wireless network
- Devices should have the fewest obstructions and be as close to the access point or router as possible
- Any other wireless networks (ex: a printer's default wireless network) should be disabled; each network slows others down
- Any devices that do not need high speeds but support them (ex: a thermostat) should use 2.4 GHz; often middle and higher end access points and routers support methods to "force" clients to use 2.4 GHz, and some can even perform this automatically based on signal strength and connection speed
- Only WiFi revisions which are necessary should be enabled; older standards such as 802.11B, 802.11G, and to a lesser extent, 802.11N, will slow down all clients
- Devices that require high speeds should use:
  - 5 GHz only
  - The newest WiFi specifications (802.11AX, followed by 802.11AC)
  - In most environments, the largest channel width possible (160 MHz for 802.11AX, 80 MHz in practice for 802.11AC) (**note: some vendors do not set this to the maximum by default**)
  - The lowest utilization, followed by the lowest channel number (sub-frequency) possible
- **Manually selecting channels should only be done in places with extreme noise, or on older, lower quality, or ISP provided access points or routers** ; modern mid to high-end routers and access points should optimize their channels fairly well, and as a result of other routers and clients "channel hopping", static settings are often less optimal
- If a specific WiFi channel range is absolutely necessary, use a WiFi scanning tool on a phone or PC to determine the least used channels; mid to high-end access points and routers may provide an interface for this as well, however, this sometimes causes a disconnect when scanning
- **Manually selecting wifi signal strength should only be done in places with extreme noise**; modern routers and access points do this well, and it is a complex task
- If a specific transmit power is necessary, keep in mind that stronger is not always better; as transmit power increases, distortion may increase (leading to *lower* speeds), battery drain of clients may increase (due to the higher power requested by the access point or router), and issues with sticky clients (devices which stay connected to WiFi even with bad signal) may appear
- If you have a significant number of devices, some routers and access points support features such as airtime fairness, which help to limit the amount of airtime slower clients take, improving the performance of higher speed clients
### Things to keep in mind when configuring a wireless network and devices:
- All devices on the same frequency impact each other (**including other WiFi networks on the same channel**) because only one device can transmit or receive data at a time, meaning that:
- If one device utilizes WiFi heavily it will impact the latency and throughput of all other clients
- If a slow device is connected, it can still take a significant amount of "airtime" (time for that dedicated client to transmit / receive data to the access point or router), even though it does so at a slower rate than other clients
- Each connected device requires additional time, regardless of whether it is actively in use (and often devices send small amounts of data when idle for things such as NTP and DHCP)
- WiFi is [half duplex](https://en.wikipedia.org/wiki/Duplex_(telecommunications)#Half_duplex) by nature of it being radio frequency, meaning data can only ever be transmitted **or** received on the same frequency, not both at the same time; twisted pair (copper ethernet cable) is full duplex
- Wireless frequency bands (ex: 2.4 GHz, 5 GHz) have separate channels that can be statically assigned if needed, but **these are not mutually exclusive, meaning the channels overlap significantly and interfere with each other**
- Different regions of the world support different channels (sub-frequencies); devices sold in these regions are generally locked to those channels (ex: in the US, 2.4 GHz channels 12 - 13 are low power only, and channel 14 is military and EMS use only)
- Different wireless devices support different frequencies, standards, speeds, and features; using these to your advantage is key to getting best performance
## Routing / Switching / Firewalling / General Info
- Ideally client and server should live on the same logical (layer 2) network and subnet; this allows for no routing overhead, and the correct function of client discovery via mDNS
- Twisted pair (normal copper ethernet cables) should never be run alongside power cables; this can cause signal noise and result in frame loss and lowered auto-negotiation speeds
- High quality CAT5E or higher (ideally CAT6A or CAT7) cabling should be used for modern networks
- In some cases firewall, anti-virus, anti-malware, or EDR (endpoint detection and response) software may interfere with network traffic; Windows Defender and Sophos Endpoint Protection are reported to work without issue
- Pause frames should be disabled where possible, as these introduce additional latency and buffering
***
Someone did a few blog-posts on some of the points:
https://imaginevr.home.blog/author/imaginevrresearch/
Some points came from [FingrMastr](https://github.com/FingrMastr)

Controller tracking will always be difficult. There are many factors that influence the latency and the motion prediction. It's not a constant like "100 ms"; it depends on your movements and even on the movement of the headset.
There are many parameters that influence the movement that can be changed:
- Tracking is currently async to the rendering and running at 3*72=216Hz
- Movement prediction is set to 0 to get the latest tracking info -> no prediction on the quest
- Tracking info is sent to SteamVR
- Tracking info is fed into SteamVR with an offset of 10ms to enable SteamVR pose prediction
- The tracking point on the Quest is different from the point on the Rift S. Angular acceleration and linear acceleration of the controller needed to be transformed to the new reference frame.
There is a trade-off between fast but wobbly, overshooting controllers and controllers that have a certain latency. For me, the current settings are perfectly playable for games like Skyrim, Fallout or Arizona Sunshine. Games like Beat Saber might be an issue.
You can change the 10ms offset for SteamVR in the "Other" tab of ALVR (Controller Pose Offset).
The parameter defines how old the data that is fed into SteamVR is, and controls the SteamVR pose prediction. Set it to 0 to disable all prediction.
The default is 0.01 = 10 ms. It's the amount of time I needed to be able to swing my sword in Skyrim without it feeling weird. It's very possible that this value depends on the game/user; that's why it's exposed in the control panel, and you can change it on the fly.

# Fixed Foveated Rendering (FFR)
## What is it, why do I need it
In short: the human eye can only see sharply in a very small area (the fovea). That's why we move our eyes constantly, to get the feeling that our whole view is a sharp image.
The idea of foveated rendering is to render only the small portion of the screen we are looking at in the highest resolution, and the other parts at a lower resolution. This massively increases performance without any noticeable visual impact.
But it has one drawback: you need to track the movement of the eyes. While there are already headsets out there that have eye tracking, the Quest does not have it.
That's why Oculus uses Fixed Foveated Rendering. There is some research showing that, if you assume the user looks at the center of the screen, some parts of the image are more important than others ([Oculus](https://developer.oculus.com/documentation/mobilesdk/latest/concepts/mobile-ffr/)). Many games on the Quest use this to improve rendering performance.
## FFR in ALVR
That's not how ALVR is using it :P
We don't have any influence on how games get rendered, we only get the final rendered image that should be displayed.
What ALVR normally does is:
- takes that image
- encodes it as a video with the resolution you set at the video tab
- transmits it to the Quest
- displays the video
With FFR:
- takes the image
- projects the image, keeping the center area at the resolution you set at the video tab, reducing the resolution at the outer regions
- encodes it as a video with the new, lower overall resolution
- transmits the video to the Quest
- reprojects the video to the original size
- displays the image
There are two implementations of FFR by [zarik5](https://github.com/zarik5)
- warp: Uses a radial warping of the video where the center stays the same resolution and the outer regions get "squished" to reduce resolution
- slices: Slices the image into parts (center, left/right, top/bottom) and encodes the outer slices with lower resolution. This method produces a much sharper image.
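The warp idea can be sketched as a toy 1-D function (this is an illustration of the general shape only, with a made-up compression curve, not ALVR's actual warp math):

```rust
/// Toy radial compression: near-identity at the center, increasingly
/// compressed toward the edges. Larger `strength` squishes the outer
/// regions more. This is NOT ALVR's real warp function, just the idea.
fn warp(r: f32, strength: f32) -> f32 {
    r / (1.0 + strength * r)
}

fn main() {
    // The center keeps its density; the edge loses the most.
    for r in [0.1_f32, 0.5, 1.0] {
        println!("r = {r:.1} -> warped = {:.3}", warp(r, 2.0));
    }
}
```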
**Advantages**: Lower resolution results in faster encoding and decoding of the video, which decreases overall latency. The same bitrate at a lower resolution results in higher image quality. On slow networks, the bitrate can be reduced while keeping the same quality as without FFR.
**Drawbacks**: The warp method results in a slightly blurry image overall. You can compensate for this by setting the initial video resolution to 125%.
Slicing will result in a noticeable border where the high-res center ends.
Increasing the foveation strength will result in visual artifacts with both methods.
## Configuration
That depends on your perception. You should try different settings going from strength = 2 up to 5 for both methods.
The higher you go, the more visual artifacts you will see at the edges of the screen.
For the slice method, you can also set a center offset. This moves the high res center up or down to accommodate games that have more interaction in the upper or lower part of the screen.
[wikipedia](https://en.wikipedia.org/wiki/Foveated_rendering)

Current bindings for ALVR 14.0.0
===
There's no trackpad/joystick position mapping.
Oculus Rift S
---
|Action|w/ handtracking pinch|w/o pinch|
|-|-|-|
|Trigger|index + thumb pinch|index finger curl|
|Joystick press|thumb press towards palm|thumb press towards palm|
|Grip press|middle, ring, pinky fingers curl|middle, ring, pinky fingers curl|
|A|right ring + thumb pinch| N/A |
|B|right middle + thumb pinch| N/A |
|X|left ring + thumb pinch| N/A |
|Y|left middle + thumb pinch| N/A |
|Menu button|left pinky + thumb pinch| N/A |
Valve Index
---
|Action|w/ handtracking pinch|w/o pinch|
|-|-|-|
|Trigger|index finger curl|index finger curl|
|Grip press|middle, ring, pinky fingers curl|middle, ring, pinky fingers curl|
|Trackpad press|thumb press towards palm|thumb press towards palm|
|A|middle + thumb pinch| N/A |
|B|index + thumb pinch| N/A |
|System button|ring + thumb pinch| N/A |
HTC Vive
---
|Action|w/ handtracking pinch|w/o pinch|
|-|-|-|
|Trigger|index finger curl|index finger curl|
|Grip press|middle, ring, pinky fingers curl|middle, ring, pinky fingers curl|
|Trackpad press|thumb press towards palm|thumb press towards palm|
|Menu button|middle + thumb pinch| N/A |
|System button|ring + thumb pinch| N/A |

# How ALVR works
This document details some technologies used by ALVR.
If you have any doubt about what is (or isn't) written in here you can contact @zarik5, preferably on Discord.
**Note: At the time of writing, not all features listed here are implemented**
## Architecture
### The built application
ALVR is made of two applications: the server and client. The server is installed on the PC and the client is installed on the headset. While the client is a single APK, the server is made of three parts: the launcher, the driver and the dashboard. The launcher (`ALVR Launcher.exe`) is the single executable found at the root of the server app installation. The driver is located in `bin/win64/` and named `driver_alvr_server.dll`. The dashboard is a collection of files located in `dashboard/`.
The launcher sets up the PC environment and then opens SteamVR, which loads the ALVR driver. The driver is responsible for loading the dashboard and connecting to the client.
### Programming languages
ALVR is written in multiple languages: Rust, C, C++, Java, HTML, Javascript, HLSL, GLSL. C++ is the most prevalent language in the codebase, but Rust plays the most important role: it is used as glue, and more and more code is being rewritten in Rust.
Rust is a systems programming language focused on memory safety and ease of use. It is as performant as C++, but code written in it is less likely to be affected by bugs at runtime. A feature of Rust that is extensively used by ALVR is enums, which correspond to tagged unions in C++. Rust's enums are a data type that can store different kinds of data, but only one kind can be accessed at a time. For example, the type `Result` can contain either an `Ok` value or an `Err` value, but not both. Together with pattern matching, this is the foundation of error management in Rust applications.
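As a short illustration of the enum-plus-pattern-matching style described here (generic Rust, not ALVR code; the port value is just an example):

```rust
// `Result` is an enum: it holds either Ok(value) or Err(error), never both.
fn parse_port(s: &str) -> Result<u16, String> {
    match s.parse::<u16>() {
        Ok(port) => Ok(port),
        Err(e) => Err(format!("invalid port '{s}': {e}")),
    }
}

fn main() {
    // Pattern matching forces the caller to handle both variants explicitly.
    match parse_port("9943") {
        Ok(port) => println!("port = {port}"),
        Err(msg) => eprintln!("{msg}"),
    }
}
```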
C++ and Java code in ALVR is legacy code inherited from the developer @polygraphene; it is almost unmaintained and is gradually being replaced by Rust. HTML and Javascript are used to write the dashboard.
### Source code organization
* `alvr/`: This is where most of the code resides. Each subfolder is a Rust crate ("crate" means a code library or executable).
* `alvr/client/`: Crate that builds the client application. `alvr/client/android/` is the Android Studio project that builds the final APK.
* `alvr/common/`: Code shared by both client and server. It contains code for settings generation, networking, audio and logging.
* `alvr/launcher/`: This crate builds the launcher executable.
* `alvr/server/`: This crate builds the driver DLL. `alvr/server/cpp/` contains the legacy code.
* `alvr/settings-schema/` and `alvr/settings-schema-derive/`: Utilities for settings code generation.
* `alvr/xtask/`: Build utilities. The code contained in this crate does not actually end up in the final ALVR applications.
* `server_release_template/`: Contains every file for ALVR server that does not require a build pass. This includes the dashboard.
* `wix/`: WiX project used to create the ALVR installer on Windows.
## Logging and error management
In ALVR codebase, logging is split into interface and implementation. The interface is defined in `alvr/common/src/logging.rs`, the implementations are defined in `alvr/server/src/logging_backend.rs` and `alvr/client/src/logging_backend.rs`.
ALVR logging system is based on the crate [log](https://crates.io/crates/log). `log` is already very powerful on its own, since the macros `error!`, `warn!`, `info!`, `debug!` and `trace!` can collect messages, file and line number of the invocation. But I needed something more that can reduce boilerplate when doing error management (*Disclaimer: I know that there are tens of already established error management crates but I wanted to have something even more opinionated and custom fitted*).
ALVR defines some macros and functions to ease error management. The base type used for error management is `StrResult<T>` that is an alias for `Result<T, String>`. Read more about Rust's Result type [here](https://doc.rust-lang.org/std/result/).
`trace_err!` is a macro that takes a generic result as input and converts it into a `StrResult`. It does not support custom error messages and should be used only to wrap `Result` types when an error is actually unlikely. This way we avoid calling `.unwrap()`, which would make the program crash directly. In case of error, the `Err` value is converted to a string and prefixed with the current source code path and line number.
`trace_none!` works similarly to `trace_err!` but accepts an `Option` as argument. `None` is mapped to an `Err` with no inner error message (because there is none).
`fmt_e!` is a macro to create a `StrResult<T>` from a hand-specified error message. The result is always `Err`.
When chaining `trace_err!` from one function to another, a stack trace is formed. Unlike with other error management crates, I can decide at which point in the stack to insert trace information, to make error messages more concise.
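A minimal sketch of the pattern described above (illustrative only; this is not ALVR's actual macro implementation):

```rust
type StrResult<T> = Result<T, String>;

// Stand-in for `trace_err!`: stringify the inner error and prefix it with
// the source location, so chained calls build up a readable trace.
macro_rules! trace_err {
    ($res:expr) => {
        $res.map_err(|e| format!("At {}:{}: {}", file!(), line!(), e))
    };
}

fn read_number(s: &str) -> StrResult<i32> {
    // The inner parse error is stringified and tagged with file/line.
    trace_err!(s.parse::<i32>())
}

fn main() {
    match read_number("42") {
        Ok(n) => println!("parsed {n}"),
        Err(msg) => eprintln!("{msg}"),
    }
}
```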
To show an error (if present) the function `show_err` is defined. It shows an error popup if supported by the OS (currently only on Windows) and the message is also forwarded to `error!`.
Other similar functions are defined: `show_e` shows an error unconditionally, `show_err_blocking` blocks the current thread until the popup is closed, `show_warn` opens a warning popup. More similar functions are in `alvr/common/src/logging.rs`.
### The messaging system
The communication between the driver and the dashboard uses two methods. The dashboard can interrogate the server through an HTTP API. The server can notify the dashboard through logging: the server uses the function `log_id` to log a `LogId` instance (as JSON text). All log lines are sent to the dashboard through a websocket; the dashboard registers all log lines, searches for the log ID structures they contain, and reacts accordingly.
While log IDs can contain any (serializable) type of data, it is preferred to use them only as notifications. Any type of data needed by the dashboard that should be persistent is stored in the session structure (more on this later), and the dashboard can request it any time.
## The launcher
The launcher is the entry point for the server application. It first checks that SteamVR is installed and set up properly, then launches it.
The launcher requires `%LOCALAPPDATA%/openvr/` to contain a valid UTF-8 formatted file `openvrpaths.vrpath`. This file is crucial because it contains the path of the installation folder of SteamVR, the paths of the current registered drivers and the path of the Steam `config/` folder.
### The bootstrap lifecycle
1. The launcher is opened. First `openvrpaths.vrpath` is checked to exist and to be valid.
2. From `openvrpaths.vrpath`, the list of registered drivers is obtained. If the current instance of ALVR is registered do nothing. Otherwise stash all driver paths to a file `alvr_drivers_paths_backup.txt` in `%TEMP%` and register the current ALVR path.
3. SteamVR is killed and then launched using the URI `steam://rungameid/250820`.
4. The launcher tries to GET `http://127.0.0.1:8082` until success.
5. The launcher closes itself.
6. Once the driver loads, `alvr_drivers_paths_backup.txt` is restored into `openvrpaths.vrpath`.
### Other launcher functions
The launcher has the button `Reset drivers and retry` that attempts to fix the current ALVR installation. It works as follows:
1. SteamVR is killed.
2. `openvrpaths.vrpath` is deleted and ALVR add-on is unblocked (in `steam/config/steamvr.vrsettings`).
3. SteamVR is launched and then killed again after a timeout. This is done to recreate the file `openvrpaths.vrpath`.
4. The current ALVR path is registered and SteamVR is launched again.
The launcher can also be launched in "restart" mode, which is headless (no window is visible). This is invoked by the driver to bootstrap a SteamVR restart (the driver cannot restart itself because it is a DLL loaded by SteamVR).
## Settings generation and data storage
A common programming paradigm is a strict separation between UI and background logic. This generally helps with maintainability, but for settings management it becomes a burden, because every change to the settings structure on the backend requires a manual update of the UI. ALVR solves this by relying heavily on code generation.
### Code generation on the backend (Rust)
On ALVR, settings are defined in one and only one place: `alvr/common/src/data/settings.rs`. Rust structures and enums are used to construct a tree-like representation of the settings. Structs and enums are decorated with the derive macro `SettingsSchema`, which handles the backend side of the code generation.
While the hand-defined structs and enums represent the concrete realization of a particular settings configuration, `SettingsSchema` generates two other settings representations, namely the schema and the "default" representation (aka session settings).
The schema representation defines the structure and metadata of the settings (not the concrete values). While the arrangement and position of the fields are inferred from the structure definitions themselves, fields can also be decorated with metadata needed by the user interface, like `advanced`, `min`/`max`/`step`, `gui` type, etc.
The second generated representation is the "default" representation. This representation has a dual purpose: it is used to define the default values of the settings (used in turn by the schema generation step) and to store the settings values on disk (`session.json`).
But why not use the original hand-defined structures to store the settings on disk? Because enums (which are tagged unions) create branching.
This branching is desired behavior. Take the `Controllers` setting in the Headset tab as an example: unchecking it means that *right now* you don't care about any other controller-related settings. But if we stored this on disk using the original settings representation, all modifications to controller-related settings would be lost, even though you may want to recover them *later*.
To solve this problem, the default/session representation transforms every enum into a struct where every branch becomes a field, so all branches coexist at once, even unused ones.
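A hand-written sketch of this transformation (in the real code both representations are generated by the `SettingsSchema` macro; the names here are hypothetical):

```rust
// Concrete settings representation: choosing a variant discards the others.
#[derive(PartialEq, Debug)]
enum ControllersMode {
    Disabled,
    Enabled { haptics: bool },
}

// Session ("default") representation: the active variant is stored as a tag,
// and the data of every branch coexists, so unused branches are not lost.
struct ControllersModeSession {
    variant: String,
    enabled_haptics: bool,
}

// Collapsing the session form back into a concrete value.
fn to_concrete(session: &ControllersModeSession) -> ControllersMode {
    match session.variant.as_str() {
        "Enabled" => ControllersMode::Enabled { haptics: session.enabled_haptics },
        _ => ControllersMode::Disabled,
    }
}
```

Switching `variant` back to `"Enabled"` recovers the previously stored `enabled_haptics` value, which is exactly the recovery behavior described above.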
### Code generation on the frontend (Javascript)
One of the main jobs of the dashboard is to let the user interact with settings. The dashboard gets the schema from the driver and uses it to generate the user interface. The schema has every kind of data that the UI needs except for translations which are defined in `server_release_template/dashboard/js/app/nls`. This is because this type of metadata would obscure the original settings definition if it was defined inline, due to the large amount of text. The schema is also used to interpret the session data loaded from the server.
### The schema representation
While the original structs and enums that define settings are named, the schema representation loses the type names; it is based on a single base enum `SchemaNode` that can be nested. `SchemaNode` defines the following variants:
* `Section`: This is translated from `struct`s and struct-like `enum` variants data. It contains a list of named fields, that can be set to `advanced`. In the UI it is represented by a collapsible group of settings controls. The top level section is treated specially and it generates the tabs (Video, Audio, etc).
* `Choice`: This is translated from `enum`s. Each variant can have zero or one child. In the UI this is represented by a stateful button group. Only the content of the active branch is displayed.
* `Switch`: This is generated by the special struct `Switch`. This node type is used when it makes sense for a setting to be "turned off", while also having some specialized associated settings that apply only in the "on" state. In the UI this is similar to `Section` but also has a checkbox. In the future this should be graphically changed to a switch.
* `Boolean`: translated from `bool`.
* `Integer`/`Float`: Translated from integer and floating point types. They accept the metadata `min`, `max`, `step`, and `gui`. `gui` can be `textBox`, `upDown` or `slider`. Only certain combinations of `min`/`max`/`step`/`gui` are valid.
* `Text`: Translated from `String`. In the UI this is a simple textbox.
* `Array`: Translated from Rust arrays. In the UI this is represented similarly to `Section`s, with the index as the field name. In the future this should be changed to look more like a table.
There are also currently unused node types:
* `Optional`: This is translated from `Option`. Similarly to `Switch`, this is generated from an enum that has one variant with data and one that doesn't. The reason behind the distinction is about the intention/meaning of the setting. Optional settings can either be "set" or "default". "Default" does not mean that the setting is set to a fixed default value, it means that ALVR can dynamically decide the value or let some other independent source decide the value, that ALVR might not even be aware of.
* `Vector` and `Dictionary`: Translated from `Vec<T>` and `Vec<(String, T)>` respectively. These types are unimplemented in the UI. They should represent a variable-sized collection of values.
### The session
Settings (in the session settings representation) are stored inside `session.json`, together with other session data. The session structure is defined in `alvr/common/src/data/session.rs`. The session supports extrapolation, that is, the recovery of data when the structure of `session.json` does not match the schema. This often happens during a server version update. Extrapolation is also used when the dashboard requests saving the settings, where the payload can be a preset, that is, a deliberately truncated session file.
## The connection lifecycle
The code responsible for the connection lifecycle is located in `alvr/client/src/connection.rs` and `alvr/server/src/connection.rs`.
The connection lifecycle can be divided into 3 steps: discovery, connection handshake and streaming.
During multiple connection steps, the client behaves like a server and the server behaves like a client. This is because of the balance of responsibility between the two peers: the client becomes a portal into the PC, which can contain sensitive data. For this reason the server has to trust the client before initiating the connection.
### Discovery
The ALVR discovery protocol has initial support for a cryptographic handshake, but it is currently unused.
When ALVR is launched for the first time on the headset, a hostname, certificate and secret are generated. The client then broadcasts its hostname, certificate and ALVR version (`ClientHandshakePacket`). The server has a looping task that listens for these packets and registers the client entry, saving hostname and certificate, if the client version is compatible.
If the client is visible and trusted on the server side, the connection handshake begins.
### Connection handshake
The client listens for incoming TCP connections with the `ControlSocket` from the server. Once connected the client sends its headset specifications (`HeadsetInfoPacket`). The server then combines this data with the settings to create the configuration used for streaming (`ClientConfigPacket`) that is sent to the client. In particular, this last packet contains the dashboard URL, so the client can access the server dashboard. If this streaming configuration is found to invalidate the current ALVR OpenVR driver initialization settings (`OpenvrConfig` inside the session), SteamVR is restarted.
After this, if everything went right, the client discovery task is terminated, and after the server sends the control message `StartStream` the two peers are considered connected, but the procedure is not concluded. The next step is the setup of streams with `StreamSocket`.
### Streaming
The streams created from `StreamSocket` (audio, video, tracking, etc) are encapsulated in async loops that are all awaited concurrently. One of these loops is the receiving end of the `ControlSocket`.
While streaming, the server only sends the control message `KeepAlive` periodically. The client can send `PlayspaceSync` (when the view is recentered), `RequestIDR` (in case of packet loss), and `KeepAlive`.
### Disconnection
When the control socket encounters an error while sending or receiving a packet (for example a `KeepAlive`), the connection pipeline is interrupted and all looping tasks are canceled. A destructor callback (guard) is then run for objects or tasks that do not directly support cancellation.
## The streaming socket
`StreamSocket` is an abstraction layer over multiple network protocols. It currently supports UDP and TCP, and it is designed to also support QUIC without a big API refactor. The `StreamSocket` API is inspired by the QUIC protocol, where multiple streams can be multiplexed over the same socket.
Why not use one socket per stream? For UDP, separate sockets have no particular advantage: the maximum transmission speed is still determined by the physical network controller and router. For TCP, multiple concurrent open sockets are even disadvantageous. TCP adjusts its transmission speed based on periodic network tests, so multiple TCP sockets compete with each other for the available bandwidth, potentially resulting in unbalanced and unpredictable bandwidth allocation between them. A single multiplexed socket solves this by moving the bandwidth allocation problem to the application side.
### Packet layout
A packet is laid out as follows:
| Stream ID | Packet index | Header | Raw buffer |
| :-------: | :----------: | :------: | :--------: |
| 1 byte | 8 bytes | variable | variable |
The packet index is relative to a single stream. It is used to detect packet loss.
Both header and raw buffer can have variable size, even from one packet to the other in the same stream. The header is serialized and deserialized using [bincode](https://github.com/servo/bincode) and so the header size can be obtained deterministically.
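A minimal sketch of assembling a packet with this layout (header serialization is reduced to raw bytes here; ALVR uses bincode, and the names are illustrative, not ALVR's actual socket code):

```rust
// Assemble: | stream id (1 byte) | packet index (8 bytes) | header | payload |
fn build_packet(stream_id: u8, index: u64, header: &[u8], payload: &[u8]) -> Vec<u8> {
    let mut packet = Vec::with_capacity(1 + 8 + header.len() + payload.len());
    packet.push(stream_id);
    // Per-stream packet index, used by the receiver to detect packet loss.
    packet.extend_from_slice(&index.to_le_bytes());
    packet.extend_from_slice(header);
    packet.extend_from_slice(payload);
    packet
}
```

Because bincode can compute the serialized header size deterministically, the receiver knows where the header ends and the raw buffer begins without an explicit length field.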
### Throttling buffer
A throttling buffer is a traffic shaping tool to avoid packet bursts, that often lead to packet loss.
If the throttling buffer is enabled, the packets are fragmented/recombined into buffers of a predefined size. The size should be set according to the supported MTU of the current network configuration, to avoid undetected packet fragmentation at the IP layer.
The current implementation is similar to the leaky bucket algorithm, but it uses some statistical machinery (`EventTiming` in fixed latency mode, with the latency set to 0) to dynamically determine the optimal time interval between packets, such that the "bucket" does not overflow and the latency remains minimal.
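The pacing side of such a scheme reduces to computing the interval between fragments; a minimal sketch assuming a known target bitrate and fragment size (not ALVR's actual code, which tunes this dynamically):

```rust
// Time to wait between fragments so that, on average, the link is fed at
// `bitrate_bps` and fragments are never sent in a burst.
fn send_interval(fragment_bytes: u32, bitrate_bps: u32) -> std::time::Duration {
    std::time::Duration::from_secs_f64(fragment_bytes as f64 * 8.0 / bitrate_bps as f64)
}
```

For example, 1400-byte fragments at 112 kbps would be paced 100 ms apart.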
## Event timing
`EventTiming` is a general purpose mathematical tool used to manage timing for cyclical processes. Some "enqueue" and "dequeue" events are registered and `EventTiming` outputs some timing hints to minimize the queuing time for the next events.
Currently, `EventTiming` is used for the stream socket throttling buffer and audio implementations, but it will be also used for video frame timing (to reduce latency and jitter), total video latency estimation (to reduce the black pull and positional lag), controller timing and maybe also controller jitter.
`EventTiming` supports two operation modes: fixed latency and automatic latency.
### Fixed latency mode
In fixed latency mode, `EventTiming` calculates the average latency between corresponding enqueue and dequeue events.
Todo
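The averaging described above can be sketched as follows (timestamps in seconds; a simplified illustration, not the actual `EventTiming` implementation):

```rust
// Average latency over matched (enqueue, dequeue) timestamp pairs.
fn average_latency(pairs: &[(f64, f64)]) -> f64 {
    pairs.iter().map(|(enqueue, dequeue)| dequeue - enqueue).sum::<f64>() / pairs.len() as f64
}
```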
### Automatic latency mode
Todo
## Motion-to-photon pipeline
Todo
## Foveated encoding
Foveated encoding is a technique where frame images are individually compressed in a way that the human eye barely detects the compression. Particularly, the center of the image is kept at original resolution, and the rest is compressed. In practice, first the frames are re-rendered on the server with the outskirts of the frame "squished". The image is then transmitted to the client and then it gets re-expanded by using an inverse procedure.
But why does this work? The human eye has increased acuity in the center of the field of vision (the fovea) with respect to the periphery.
Foveated encoding should not be confused with foveated rendering, where the image is rendered at a lower resolution in certain spots to begin with. Foveated encoding will NOT lower your GPU usage, only the network usage.
Currently ALVR does not directly support foveated encoding in the strict sense; instead it uses *fixed* foveated encoding. In a traditional foveated encoding application the eyes are tracked, so that only what is directly looked at is rendered at higher resolution. But currently none of the headsets supported by ALVR support eye tracking. For this reason, ALVR does foveated encoding by pretending the user is looking straight at the center of the image, which most of the time is true.
Here are explained three foveated encoding algorithms.
### Warp
Developed by @zarik5. This algorithm applies an image compression that most closely matches the actual acuity curve of the human eye. It compresses the image radially (with an ellipse as the base) from a chosen spot in the image, with a chosen monotonic function. The algorithm makes heavy use of derivatives and inverse functions. It is implemented as a chain of shaders (small pieces of code that run on the GPU for performance reasons). You can explore an interactive demo at [this link](https://www.shadertoy.com/view/3l2GRR).
This algorithm is actually NOT used by ALVR. It used to be, but it got replaced by the "slices" method. The warp method has a fatal flaw: the pixel alignment is not respected. This causes resampling that makes the image look blurry.
### Slices
Developed by @zarik5. This is the algorithm currently used by ALVR for foveated encoding. The frame is cut into 9 rectangles (with 2 vertical and 2 horizontal cuts). Each rectangle is rendered at a different compression level: the center rectangle is uncompressed, the top/bottom/left/right rectangles are compressed 2x, and the corner rectangles are compressed 4x. These cuts are actually virtual (mathematical) cuts, executed all at once in a single shader pass. All slices are neatly packed to form a new rectangular image. You can explore an interactive demo at [this link](https://www.shadertoy.com/view/WddGz8).
This algorithm is much simpler than the warp method, but it is still quite complex. The implementation takes pixel alignment into account and uses some margins in the rectangles to avoid color bleeding. Like the warp algorithm, the slices method was designed to support eye tracking once it becomes available in consumer hardware.
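A back-of-the-envelope pixel budget for this packing (assuming a centered `wc`×`hc` uncompressed rectangle inside a `w`×`h` frame, ignoring margins and rounding):

```rust
// Total pixels in the packed image: center at 1x, edges at 2x, corners at 4x.
fn packed_pixels(w: u32, h: u32, wc: u32, hc: u32) -> u32 {
    let center = wc * hc;
    let vert_edges = wc * (h - hc) / 2;    // top + bottom, 2x compressed
    let horiz_edges = (w - wc) * hc / 2;   // left + right, 2x compressed
    let corners = (w - wc) * (h - hc) / 4; // four corners, 4x compressed
    center + vert_edges + horiz_edges + corners
}
```

With a center rectangle covering half of each axis, the packed image holds 9/16 of the original pixels.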
### Axis-Aligned Distorted Transfer (AADT)
This algorithm was developed by Oculus for the Oculus Link implementation. It is simpler than the other two methods and the end result looks better, but it has less compression power. Like the slices algorithm, the image is cut into 9 rectangles and each rectangle is compressed independently, but the top and bottom rectangles are compressed only vertically, and the left and right ones only horizontally. This type of compression lends itself well to images rendered for VR headsets, since it works in the same direction as (and not against) the distortion needed for lens correction.
It is planned to replace the slices method with AADT in the future.
## Audio
Todo
---------------------------
Document written by @zarik5

wiki/Installation.md Normal file
# Basic installation
PC side:
* Install SteamVR, **launch it once** then close it. This is to make sure it sets the environment correctly for ALVR.
* Go to the latest release [download page](https://github.com/alvr-org/ALVR/releases/latest). In the "Assets" section at the bottom download the ALVR Installer.
* Run the installer. If prompted, allow the execution in the SmartScreen popup. You need to give administrator permissions to install ALVR. For best compatibility do not change the installation folder.
* Once the installation is finished, launch ALVR. You are greeted by a setup wizard. Follow the setup to apply the firewall rules and presets.
**If you have problems launching ALVR, follow the guide below to use the portable version**
Headset side:
* Install SideQuest on your PC and enable developer mode on the headset. You can follow [this guide](https://sidequestvr.com/setup-howto).
* Connect your headset to SideQuest. If you have an Oculus Quest 1/2, download the ALVR app [here](https://sidequestvr.com/app/9); if you have an Oculus Go, download it [here](https://sidequestvr.com/app/2658)
### Usage
* Launch ALVR on your headset. While the headset screen is on, click `Trust` next to the client entry (on the PC) to start streaming.
* You can change settings on the PC in the `Settings` tab. Most settings require a SteamVR restart to be applied; use the dedicated button in the bottom right corner.
For any problem visit the [troubleshooting page](https://github.com/alvr-org/ALVR/wiki/Troubleshooting).
# Advanced installation
## Portable version
There is also a portable version for the PC that requires more manual steps to make it work.
* Install SteamVR and launch it once.
* Download `alvr_server_windows.zip` from the latest release [download page](https://github.com/alvr-org/ALVR/releases/latest).
* Unzip into a path that contains only ASCII characters and has edit permissions without administrator rights.
## Nightly
If you want to get new features early or you want to help with testing you can install a nightly version.
Download the latest nightly server [here](https://github.com/alvr-org/ALVR-nightly/releases/latest). Download the latest nightly client from Sidequest ([Quest version](https://sidequestvr.com/app/2281), [Go version](https://sidequestvr.com/app/2580)).
Since nightly releases can be unstable, for maximum compatibility always use matching versions for PC and headset. They are updated once a day.
## Microphone streaming
To use the microphone you need to install the [VB-CABLE driver](https://vb-audio.com/Cable/). Set "CABLE Output" as the default microphone. Then you can enable the microphone in the ALVR settings; leave "Virtual microphone input" at Default.
## Connect headset and PC on separate networks
Check out the guide [here](https://github.com/alvr-org/ALVR/wiki/ALVR-client-and-server-on-separate-networks).
## Use ALVR together with third-party drivers
By default ALVR disables other SteamVR drivers before starting. Among these drivers there is [Driver4VR](https://www.driver4vr.com/) for full body tracking. ALVR disables these drivers to maximize compatibility with every PC setup. You can disable this behavior by manually registering the ALVR driver. Go to the `Installation` tab and click on `Register ALVR driver`. The next time you launch ALVR you will be able to use the other drivers concurrently.
## Launch ALVR together with SteamVR
You can skip the ALVR Launcher and open ALVR automatically together with SteamVR. Open ALVR, go to the `Installation` tab and click on `Register ALVR driver`.
## Use a browser different than Chrome
ALVR requires a Chromium based browser to correctly display the dashboard. Chrome and Edge work out of the box, but Edge has a few bugs that make ALVR behave weirdly. If you want to use other Chromium based browsers like Brave or Vivaldi you have to add an environment variable `ALCRO_BROWSER_PATH` pointing to the path of the browser executable (for example `C:\Program Files\Vivaldi\Application\vivaldi.exe`). Unfortunately Firefox is not supported.
## Connect headset and PC via a USB Cable
Check out the guide [here](https://github.com/alvr-org/ALVR/wiki/Use-ALVR-through-a-USB-connection).
# Linux
Unless you are using a nightly version, make sure all audio streaming options are disabled.
## Arch Linux
* Install `rustup` and a rust toolchain, if you don't have it: <https://wiki.archlinux.org/title/Rust#Arch_Linux_package>.
* Install [alvr](https://aur.archlinux.org/packages/alvr)<sup>AUR</sup> (recommended), [alvr-nightly](https://aur.archlinux.org/packages/alvr-nightly)<sup>AUR</sup>, or [alvr-git](https://aur.archlinux.org/packages/alvr-git)<sup>AUR</sup>
* Install SteamVR, **launch it once** then close it.
* Run `alvr_launcher` or ALVR from your DE's application launcher.
## Other
* Install FFmpeg with VAAPI/NVENC + DRM + Vulkan + x264/x265 support. You can use this [ppa:savoury1/ffmpeg5](https://launchpad.net/~savoury1/+archive/ubuntu/ffmpeg5) under Ubuntu, or download `alvr_server_portable.tar.gz` which has ffmpeg bundled.
* Install SteamVR, **launch it once** then close it.
* Download `alvr_server_linux(_portable).tar.gz` from the release [download page](https://github.com/alvr-org/ALVR/releases/latest).
* Run `bin/alvr_launcher`
If the correct version of FFmpeg is not installed system-wide, a common problem is the server crashing or failing to show images on the headset, because SteamVR loads the wrong version of FFmpeg.
## Audio Setup
* Until next major release (`v19`), you must use the nightly version of ALVR.
* For PipeWire, install `pipewire-alsa` and `pipewire-pulse`
* Install `pavucontrol` and `pactl`
Note: PipeWire's PulseAudio emulation is only used to make use of PulseAudio tools.
### Game Audio
* Enable Game Audio in ALVR dashboard.
* Select `pipewire` or `pulse` as the device.
* Connect with headset and wait until streaming starts.
* In `pavucontrol` set the device ALVR is recording from to "Monitor of \<your audio output\>". You might have to set "Show:" to "All Streams" for it to show up.
* Any audio should now be played on the headset, optionally you can mute the audio output.
### Microphone
* Run: `pactl load-module module-null-sink sink_name=VirtMain` (this has to be run again every time you restart/relog)
* Enable microphone streaming in ALVR dashboard.
* Connect with headset and wait until streaming starts.
* In `pavucontrol` set ALVR Playback to "VirtMain"
* Set "Monitor of VirtMain" as your microphone.

**Warning:** This page is very outdated, see [Building From Source](https://github.com/alvr-org/ALVR/wiki/Building-From-Source) instead.
## 2022-01-04
An experimental NVENC fork has successfully been created by [Toxblh](https://github.com/Toxblh), helping fix one of the larger bottlenecks on NVIDIA GPUs. [Pull Request here](https://github.com/alvr-org/ALVR/pull/906)
## 2021-05-18
No special build steps are required for users who can acquire the correct ffmpeg version, read more [here](https://github.com/alvr-org/ALVR/wiki/Build-from-source#linux-experimental-build).
## 2021-04-22
The PR in the last log was followed by [#604](https://github.com/alvr-org/ALVR/pull/604), and this new PR was merged into the main branch. Build instructions remain the same, but the `vrenv.sh` patching is no longer needed.
## 2021-04-01
A [PR](https://github.com/alvr-org/ALVR/pull/569) has been made integrating Xytovl's Vulkan layer into the main ALVR tree. It doesn't actually stream video yet, but it provides a solid base for future work and is compatible with NVIDIA GPUs.
After you've checked the PR's branch out and [built the server](https://github.com/alvr-org/ALVR/wiki/Build-from-source#build-server), you can build and install the Vulkan layer like this:
```
cd alvr/server/cpp/tools/vulkan-layer
mkdir build && cd build
cmake ..
make -j
```
Add this line: `source "$(cat $XDG_RUNTIME_DIR/alvr_dir.txt | rev | cut -d'/' -f3- | rev)/alvr/server/cpp/tools/vulkan-layer/layer/vrenv.sh"` **before** the last one (`exec "$@"`) to `/path/to/your/SteamLibrary/steamapps/common/SteamVR/bin/vrenv.sh`.
## 2021-03-15
Xytovl's branch has been merged into the main repository. The build steps are unchanged.
Work has started towards a new frame capturing method using a Vulkan debug layer.
## 2021-03-10
An experimental branch is available at https://github.com/xytovl/ALVR/tree/linux-port-openvr with many limitations
### Adopted solution
We use SteamVR direct rendering mode on a fake screen, and capture the output of the screen. Current implementation only works for AMD (and probably Intel) open source drivers.
### Limitations
- audio streaming is not working
- foveated encoding is not implemented
- requires superuser access for setup
- mostly untested
- requires a free port on the graphic card
- TCP streaming seems not to be working
- position is stuttering
- only supports open source drivers
### Setup
See [build from source](Build-from-source)
## Usage
Run `build/alvr_server_linux/ALVR Launcher`
On first setup, SteamVR will probably show the VR display on your screen, with the configuration window. If you have dual screen, you can move the configuration window to a visible area (with Alt + drag on most desktop environments).
In the setup, deactivate audio streaming, switch connection to UDP, and deactivate foveated encoding.
On the headset, launch the application, then click trust on the configuration window, which will quit.
The headset says that the server will restart, but it will not. You must relaunch it manually.
Once everything has restarted, you should be able to get the stream on the headset.
## 2021-01-15
The development road has been defined, but we are not completely sure everything will work.
* We can try to extract frames from the VR game using a custom Vulkan validation layer. Examples are:
* Vulkan tools screenshot: https://github.com/LunarG/VulkanTools/blob/master/layersvt/screenshot.cpp
* RenderDoc: https://github.com/baldurk/renderdoc
* For the compositor (layering, color correction and foveated rendering) we are going to use Vulkan as the underlying API. We can use the backend agnostic library gfx-hal, that supports Vulkan and DirectX. Reference: https://github.com/gfx-rs/gfx
* For the encoder we can use FFmpeg. FFmpeg's hardware acceleration API supports passing pointers to GPU memory buffers directly. FFmpeg supports various acceleration APIs (hardware agnostic or not) but to minimize the effort we can go with Vulkan for Linux and DirectX 11 for Windows. Reference: https://ffmpeg.org/doxygen/trunk/hwcontext_8h.html
* For audio we are going to use the Rust library CPAL, which is an audio backend abstraction layer. We can switch (maybe even at runtime) between ALSA and JACK. CPAL supports also Windows (WASAPI, ASIO), Android (OpenSL, AAudio), Web (Emscripten) and even macOS (Core Audio) if we need that in the future. Reference: https://github.com/RustAudio/cpal
## Earlier
We cannot find a way of obtaining the frames rendered by the VR game from SteamVR. The OpenVR API exposes methods to do this but they don't work on Linux (at least we were not able to make them work). The two methods to obtain frames with OpenVR are by implementing the interfaces `IVRVirtualDisplay` and `IVRDriverDirectModeComponent`. On Windows, ALVR uses `IVRDriverDirectModeComponent`. On Linux, `IVRVirtualDisplay` crashes on Nvidia GPUs and does nothing on AMD. Similarly `IVRDriverDirectModeComponent` does not work on Linux. We tried to get help from Valve through multiple channels but we were not successful.
References:
* OpenVR driver header: https://github.com/ValveSoftware/openvr/blob/master/headers/openvr_driver.h
* Main OpenVR issue tracker: https://github.com/ValveSoftware/openvr/issues
* Virtual display sample issue tracker: https://github.com/ValveSoftware/virtual_display/issues
* Linux SteamVR issue tracker: https://github.com/ValveSoftware/steam-for-linux/issues

While most games do work without any problems, some work only partially or not at all. This includes:
- headset not found
- warped image
- controller not tracking
- buttons not working
- ...
Most of the time it's the game's overly specific initialization for a particular headset that breaks it.
For example, Vivecraft broke because ALVR reported the headset manufacturer as "Oculus driver 1.38.0" and not as "Oculus".
In general, this is rather bad practice, as all relevant data can be accessed through SteamVR and the game should not make assumptions based on the manufacturer of the HMD. There are many different fields that a game could require to run.
Nonetheless, we want to play and support those games.
The problem is that we don't own all games. This is an open source project without any funding; we cannot buy games just to fix a bug. In the case of Vivecraft, one user (thanks @Avencore) was generous enough to gift us a copy and the bug could be fixed.
There are no guarantees! Neither on the time it will take nor on whether the bug will ever be fixed! Please contact us before buying anything.

wiki/Other-resources.md Normal file
* Hand tracking OSC for VRChat with ALVR support: https://github.com/A3yuu/FingerTruckerOSC

# Why?
The Quest can display a resolution close to 4K. Rendering a game, then encoding and decoding these kinds of resolutions is very taxing on both the PC and the Quest, so usually a lower resolution image is displayed on the Quest.
Ideally the output of such an upscaled image should match the screen's pixels 1:1, but because of the Asynchronous Timewarp step this is not possible on the Quest: OVR only accepts undistorted frames.
Currently ALVR does no upscaling prior to the image being mapped to an OpenGL texture. This texture gets interpolated to match the screen pixels by OVR. For this process, video resolutions above 100% use bilinear interpolation and resolutions below 100% use nearest neighbor.
There's a lot of good info on this topic in this issue: https://github.com/alvr-org/ALVR/issues/39
# Lanczos resampling
This traditional upscaling method seems like a good step up from basic bilinear interpolation and is relatively light on GPU resources.
A GPL 2 implementation of a Lanczos shader can be found here: https://github.com/obsproject/obs-studio/blob/6943d9a973aa3dc935b39f99d06f4540ea79da61/libobs/data/lanczos_scale.effect
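For reference, the Lanczos kernel itself is compact enough to sketch directly (window parameter `a`, typically 2 or 3):

```rust
// Lanczos kernel: sinc(x) * sinc(x/a) inside the window, 0 outside.
fn lanczos(x: f64, a: f64) -> f64 {
    if x == 0.0 {
        1.0
    } else if x.abs() < a {
        let pi_x = std::f64::consts::PI * x;
        a * pi_x.sin() * (pi_x / a).sin() / (pi_x * pi_x)
    } else {
        0.0
    }
}
```

Each resampled pixel is then a kernel-weighted sum of the nearest `2a` source pixels per axis, which is why the method is heavier than bilinear but still cheap enough for real-time use.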
# Neural net image super resolution
I did some basic investigations on the feasibility of using AI upscalers to get even better results than traditional signal processing methods.
## Hardware acceleration on the XR2
There seem to be 3 paths towards getting fast NNs running on the Quest's SoC.
There is the [Qualcomm Neural Processing SDK](https://developer.qualcomm.com/software/qualcomm-neural-processing-sdk/tools), which automatically detects what the capabilities of the system are and picks the right hardware to run the NN on (GPU, DSP, AI accelerator).
The [TensorFlow Lite NNAPI delegate](https://www.tensorflow.org/lite/performance/nnapi) relies on hardware and driver support for the Android Neural Networks API.
Then there is also the [TensorFlow Lite Hexagon delegate](https://www.tensorflow.org/lite/performance/hexagon_delegate) which specifically targets the Snapdragon DSP.
I only tested an example image super-resolution app from the [tensorflow repository](https://github.com/tensorflow/examples/tree/master/lite/examples/super_resolution) in CPU and generic GPU (OpenCL) accelerated modes. Even upscaling tiny 50x50 images took around 500 ms with this. Even though better hardware acceleration could improve this, I do not expect 100x improvements. The only hope for NN super-resolution to be viable would be to find a significantly faster neural net, which leads us into the next topic.
## Existing neural nets
A well established real-time upscaler is [Anime4K](https://github.com/bloc97/Anime4K/). It states that it can achieve 1080p to 2160p upscaling in 3ms on a Vega64 GPU. A [rough estimate](https://uploadvr.com/oculus-quest-2-benchmarks/) puts the Quest 2 at a 10x performance disadvantage compared to such high end desktop GPUs. It doesn't seem entirely impossible to get this to work with some optimizations and lowering of the upscaling quality, but there is more bad news. Anime4K has a rather bad Peak signal-to-noise ratio (PSNR). It can get away with this because the stylized look anime is quite forgiving in being heavily filtered.
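For reference, PSNR is derived from the mean squared error between the original and reconstructed images; a minimal sketch, assuming flat lists of 8-bit pixel values:

```python
import math

def psnr(original, upscaled, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(original, upscaled)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher is better; lossy upscalers like Anime4K trade PSNR for speed and stylization.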
For an upscaler with a better PSNR there are many options, but very few can run in real time. The smallest neural net that I could find is [SubPixel-BackProjection](https://github.com/supratikbanerjee/SubPixel-BackProjection_SuperResolution). It gets nice results, but in my testing it took 3 seconds to upscale from 720p to 1080p with CUDA acceleration, way out of the ballpark for the XR2 chip.
So in conclusion, it does not seem like there is enough performance to squeeze out of the XR2 to do real-time NN upscaling at such high resolutions. We will more likely get better results out of classical techniques.

# Roadmap
This post will continue to evolve during ALVR development.
## Long-term goal
Create a universal bridge between XR devices.
## What is coming next
* OpenXR client
* **Purpose**: support other Android standalone headsets, improve latency on the Oculus Quest
* **Status**: in development
* Compositor rewrite
* **Purpose**: add Linux support for FFR and color correction, preparation for sliced encoding
* **Status**: exploration phase
* Encoder rewrite
* **Purpose**: support any OS and hardware with a single API, using [Vulkan video extensions](https://www.khronos.org/blog/an-introduction-to-vulkan-video)
* **Status**: blocked by adoption by AMD and Intel, landing of the feature on stable Nvidia drivers
* Dashboard rewrite
* **Purpose**: improved settings flexibility and better maintainability
* **Status**: paused, no roadblocks
* **What is done**: translation infrastructure, experiments with [iced](https://github.com/iced-rs/iced) UI library
Due to the low development capacity, no ETA can be provided. New releases will not have a regular cadence and they do not have scheduled features.

# Settings guide
This guide lists all settings supported by ALVR and explains what they are, how they work and when to change them.
The user interface divides the settings into basic and advanced settings. To enable advanced settings you have to click `Show advanced options` in the top right corner in the Settings tab. Usually you should not touch advanced settings unless you know exactly what you are doing.
Under the hood, basic settings work by modifying some advanced settings.
In this document, settings in **Basic Settings** describe settings that are visible in basic mode, **Advanced Settings** describe settings that are visible only in advanced mode. Some basic settings are also visible in advanced mode.
**Document updated for ALVR v15.1.0**
------------------------------------------
## Basic Video Settings
### Video resolution
* Percentage of the native resolution of the headset to be used for encoding the video to be transmitted to the headset.
* Setting anything higher than 100% can slightly improve visual quality, but at the cost of severely worse network performance.
* Setting anything lower than 100% can improve latency and stutters and remove encoder errors (especially with the Quest 2), at the cost of worse visual quality.
### Refresh rate
* Choice between the frame rates supported by the Quest headset's screen.
* If a refresh rate is not supported on the headset (like 90Hz on the Quest 1), the closest supported refresh rate is picked and a warning will appear.
### Video codec
* Algorithm used to encode the video stream to be transmitted to the headset, where it is decoded. h264 (AVC) and h265 (HEVC) are two codecs that are generally supported by recent GPUs.
* Some older GPUs don't support h264 or HEVC, and you may get an error message (and a SteamVR crash).
* In this case try switching the codec or try lowering the video resolution.
### Video Bitrate
* Bitrate used for video streaming. A higher bitrate can increase image quality, if your network setup supports it. If you experience glitches and freezes of the image, you should lower this.
### Foveated encoding
* This is an algorithm used to reduce network usage without sacrificing the image quality too much. You can read more at [this page](How-ALVR-works#foveated-encoding).
### Foveated encoding / Strength
* Higher value means the foveation effect is more pronounced, but also more flickering artifacts.
### Foveated encoding / Vertical offset
* Move the central high resolution rectangle higher or lower.
### Color correction
* Color correction can help to get a more clear image.
### Color correction / Brightness
* This setting produces a shift in the pixel color values. 1 means the image is completely white, -1 means completely black.
### Color correction / Contrast
* Contrast regulates the distance of color channels from gray. -1 means completely gray.
### Color correction / Saturation
* Saturation regulates the vividness of the image. -1 means the image is black and white.
### Color correction / Gamma
* Gamma is a parameter to regulate the density distribution of brightness levels. You can use this parameter to get deeper blacks.
### Color correction / Sharpening
* Values greater than 0 create an embossing effect around elements on screen. This can make text easier to read. Values lower than 0 make the image fuzzier.
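How the color correction parameters above act on a pixel can be sketched roughly as follows (an illustration of the idea only, not ALVR's actual shader; channels are floats in `[0, 1]`):

```python
def color_correct(rgb, brightness=0.0, contrast=0.0, saturation=0.0, gamma=1.0):
    """Illustrative per-pixel color correction on an (r, g, b) tuple."""
    # Brightness: uniform shift (+1 -> fully white, -1 -> fully black).
    r, g, b = (c + brightness for c in rgb)
    # Contrast: scale the distance from mid gray (-1 -> completely gray).
    r, g, b = (0.5 + (c - 0.5) * (1.0 + contrast) for c in (r, g, b))
    # Saturation: scale the distance from the pixel's luminance (-1 -> grayscale).
    lum = 0.299 * r + 0.587 * g + 0.114 * b
    r, g, b = (lum + (c - lum) * (1.0 + saturation) for c in (r, g, b))
    # Gamma: nonlinear brightness response; higher values deepen blacks.
    return tuple(min(1.0, max(0.0, c)) ** gamma for c in (r, g, b))
```

Sharpening is omitted since it is a neighborhood operation (it needs surrounding pixels), not a per-pixel one.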
## Advanced Video Settings
### GPU index
* Zero-based index of the GPU. For correct compatibility with SteamVR, this must always be set to 0. If you want to change the primary GPU used by SteamVR you have to use the control panel provided by your GPU vendor.
### Video encoding resolution base
* This corresponds to `Video resolution`, but it gives the choice of specifying the resolution by relative scale or absolute value. Absolute width and height values might not respect the native aspect ratio of the headset screen.
### Preferred game rendering resolution
* This is reported by ALVR to SteamVR as the screen resolution of the virtual headset.
* SteamVR usually automatically chooses the game rendering resolution based on the available GPU resources, so most of the time only the aspect ratio matters.
### Custom refresh rate
* Same as `Refresh rate` but the value can be directly typed.
### Request real-time decoder priority
* Flag used by the android decoder.
### Use 10-bit encoder
* Encode the video stream with 10 bits for the luma channel. This is primarily useful for reducing color banding. This flag works only for Nvidia graphics cards.
### Seconds from V-Sync to photons
* This is a timing variable needed by SteamVR drivers. It is not actually correlated to any real display v-sync.
### Foveated encoding / Shape
* Aspect ratio of the central high resolution rectangle. A value greater than 1 produces a rectangle wider than tall.
***
## Basic audio settings
### Stream game audio
* Play sounds from the PC on the headset.
### Stream game audio / Select audio device
* Audio device used to record the output audio to be sent to the headset. You should keep this set to "Default", which uses the current default output audio device on Windows. You can change the default audio device by going to the system tray in the bottom right, clicking on the speaker icon and then on the audio device name.
* The device selected by ALVR is reported as the virtual headset speaker on SteamVR, so for best compatibility please set "Audio output device" to "Headset" in the SteamVR settings.
* If your speakers don't work with ALVR, try selecting another device. If none works, install the Oculus Runtime, then select `Headphones (Oculus Virtual Audio Device)`.
### Stream game audio / Mute when streaming
* Mute the selected audio device on the PC. Only the physical device connected to the PC is muted; the streamed audio is unaffected.
### Stream game audio / Configuration / Buffering
* Mean queue time interval for audio samples. Increase this value if you hear audio stutters.
* Audio samples are not played immediately when they are sent from the PC to the headset. They need to be played back with high timing accuracy to avoid audio distortion, and generally playback cannot be stopped without causing audible clicks or pops, but when streaming, samples can arrive too early or too late.
* For this reason we need some amount of latency (on top of the transport latency) to keep a sample queue. This queue should be big enough that it never runs out and never overflows; if the queue underflows or overflows, playback is disrupted.
* This setting controls the mean time a sample stays in the buffering queue. Because of network jitter, the actual queue time will be in the interval `[0; 2 * buffering]`
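A toy simulation can illustrate how jitter moves the actual queue time around the configured mean (the 10 ms frame size and 20 ms jitter below are made-up values for illustration):

```python
import random

def simulate_queue(buffering_ms=50.0, jitter_ms=20.0, frame_ms=10.0, n=2000, seed=0):
    """Toy model: frames of `frame_ms` audio arrive with random network jitter
    while playback drains the queue in exact real time.  Returns the
    (min, mean, max) queue time observed across all frame arrivals."""
    rng = random.Random(seed)
    depths = []
    arrived = 0.0  # total audio received so far, in ms
    for i in range(n):
        # Ideal arrival time plus jitter (never before the stream started).
        t = max(0.0, i * frame_ms + rng.uniform(-jitter_ms, jitter_ms))
        # Queue depth = initial buffer + audio received - audio played back.
        depths.append(buffering_ms + arrived - t)
        arrived += frame_ms
    return min(depths), sum(depths) / len(depths), max(depths)
```

With jitter up to the buffering value itself, the observed depth swings over roughly `[0; 2 * buffering]` while the mean stays at the configured value.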
### Stream headset microphone
* Enable the microphone on the headset and send its audio to the PC. You need to install [VB-CABLE Virtual Audio Device](https://vb-audio.com/Cable/) to be able to stream the microphone.
### Stream headset microphone / Select virtual microphone input
* This is the output audio device used to replay the microphone input captured on the headset. You cannot set this the same as the game audio device. When set to `Default`, ALVR searches for `Cable Input`.
### Stream headset microphone / Select virtual microphone output
* This is the other end of the virtual microphone cable. If you have VB-CABLE installed, leave this to default.
* This setting is used only to set up the SteamVR microphone device. To make it effective you need to leave "Audio input device" set to "Headset" in the SteamVR settings.
## Advanced Audio Settings
### Stream game audio / Audio device
* Output audio device. It can be selected as default, by name and by index.
* While basic settings do not allow selecting microphones as the game audio device, you can do so by selecting "by name" and writing out the name of the device (it can be just a part of the full name; case does not matter).
### Stream game audio / Configuration / Batch ms
* Time interval used to calculate the size of the batch of audio samples to be processed on one go.
* Lower values reduce latency (marginally), but they can put stress on the Quest when processing audio, which could cause audio artifacts or even crashes.
* In the current implementation, this setting also controls the duration of fade-in/outs used for pop reduction in case of disruptions (lag, packet loss, packets out of order). A value too low can render the pop reduction algorithm less effective.
### Stream headset microphone / Virtual microphone input
* Virtual microphone input device. It can be selected as default, by name and by index. It's preferred to use the basic setting.
### Stream headset microphone / Configuration
* Analogous to `Stream game audio / Configuration`.
***
## Basic Headset Settings
### Headset emulation mode
* SteamVR needs some information about the hardware connected to the PC to stream to.
* Using ALVR, you don't directly connect the headset to the PC, so we can choose to emulate a headset different from the real one. You can choose between `Oculus Rift S`, `HTC Vive` and `Oculus Quest 2` (via Oculus Link). Some SteamVR games don't support the Rift S or the Quest, so if you encounter any problem you should try switching to `HTC Vive`.
* Currently this setting has a visual bug where `Oculus Quest 2` is always selected after a restart. The actual setting is not reverted. It will be fixed after a dashboard rewrite.
### Force 3DOF
* Discard positional tracking data from the headset. In the game, the head will be stuck in place even if you move it in real life.
### Controllers
* Enable controllers. This currently has effect only for the Quest headset.
### Controllers / Controller emulation mode
* This is similar to `Headset emulation mode` but for the controllers. Usually they should match.
* `"No handtracking pinch"` means that pinch gestures are not registered. A "pinch" is the gesture of touching the tip of the thumb with the tip of any other finger in the same hand. Each pinch gesture is mapped to a different controller button in-game. Handtracking is enabled automatically when the controllers are disabled, so if you don't want to accidentally make button presses you should select no handtracking pinch.
* Currently handtracking does not support the thumbstick for movement.
### Controllers / Tracking speed
* Regulates the strength of controller pose prediction. `Normal` means that the controllers will lag behind but the movement will be smooth, `Medium` and `Fast` makes the controller more reactive but also more jittery. `Oculus prediction` uses another prediction algorithm and corresponds to `Fast`.
* Why does ALVR need to predict the controller pose? ALVR needs to deal with many sources of latency (Wifi, video encoding, decoding, rendering, etc). Latency causes everything to lag behind. Controller pose is one of the things affected the worst by latency. We cannot predict the future but we can use an algorithm to estimate the controller pose.
* This algorithm looks at how the controller moved a few instants ago and tries to continue the movement. This can work decently for low latency and slow movements, since the controller velocity remains almost constant. But fast movements (where the controllers are accelerated back and forth) cause the controllers to jitter, because the acceleration was not taken into account (acceleration is fundamentally unpredictable).
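The idea amounts to simple linear extrapolation; a minimal sketch (illustrative only, ALVR's actual predictor differs in details):

```python
def predict_position(prev_pos, curr_pos, dt, latency, strength=1.0):
    """Estimate velocity from the last two pose samples and project it
    `latency` seconds ahead.  `strength` scales the prediction
    (0 = raw pose, 1 = full latency compensation)."""
    velocity = tuple((c - p) / dt for c, p in zip(curr_pos, prev_pos))
    # Constant-velocity assumption: accurate for slow movements, but any
    # acceleration during `latency` shows up as jitter or overshoot.
    return tuple(c + v * latency * strength for c, v in zip(curr_pos, velocity))
```

A higher effective `latency * strength` product makes the pose more reactive but amplifies velocity-estimate noise, which is exactly the `Normal`/`Medium`/`Fast` trade-off.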
### Controllers / Haptics intensity
* Regulate the haptics (vibration) intensity. 0 means the haptics are disabled.
### Tracking space
* The tracking space is the type of anchor used to make the virtual and real world match.
* `Local` means that the anchor between the virtual and real worlds is movable: if you press and hold the Oculus button the world will rotate and translate depending on your position and heading at that moment.
* `Stage` means that the real and virtual worlds are permanently anchored: if you press and hold the Oculus button nothing will happen. If you close the game and reopen it you will be exactly where you left off in the game if you didn't move.
* `Local` is preferred for seated games and `Stage` is preferred for room scale games with real space walking.
## Advanced Headset Settings
### Universe ID
* This is a parameter needed by SteamVR to decide how to store the Chaperone boundary settings.
### { mode Idx | Serial Number | ... | Registered device type }
* These are settings needed by SteamVR to correctly set the headset emulation mode. You should use `Headset emulation mode` instead.
### Tracking frame offset
* This is a signed integer used as offset when choosing the head tracking data to assign to a certain frame returned by SteamVR.
### Head position offset
* This should be used as last resort if you can't fix the floor height or Chaperone boundary centering by other means.
### Controllers / { Mode Idx | Tracking system name | ... | Input profile path }
* These are settings needed by SteamVR to correctly set the controller emulation mode. You should use `Controller emulation mode` instead.
### Controllers / Pose time offset
* This is the latency offset value used by `Tracking speed`. You can set this value manually to have more control over the controller tracking prediction.
### Controllers / Client-side prediction
* This corresponds to `Oculus prediction`.
### Controllers / Position offset
* Position offset used to match the virtual controller position with the real controller position. This is needed because of a long-standing bug of SteamVR.
### Controllers / Rotation offset
* Rotation offset used to match the virtual controller rotation with the real controller rotation. This is needed because of a long-standing bug of SteamVR.
### Controllers / Extra latency mode
* This should normally be left off.
* If enabled, it may cause the headset position to be incorrect.
***
## Basic Connection Settings
### Stream protocol
* A network protocol is a procedure and set of rules used for communication between devices connected in a network.
* You can choose between UDP, Throttled UDP and TCP socket protocols:
* UDP has the lowest latency but works best at very low bitrates (<30 Mbps). Higher bitrates cause packet loss and stutter.
  * Throttled UDP is an experimental reimplementation of the previous socket. It works best at medium bitrates (~100 Mbps). At low bitrates it could have excessive delay, and at higher bitrates it has the same problems as UDP.
  * TCP works well at any bitrate up to 250 Mbps. It has the highest latency (but still lower than previous ALVR versions). This is the new default.
### Aggressive keyframe resend
* When checked, the encoder is allowed to resend keyframes faster with a timeout of 5ms.
* Usually video codecs compress the video stream by sending only what changed in the image to reduce network usage. This means that most frames actually contain incomplete information, that is completed by information retrieved by previous frames. This is why in case of packet loss the image becomes glitchy and blocky.
* A keyframe (also known as an IDR frame) is a special packet that contains a whole video frame. No previous information is needed to reconstruct it. Because of this, IDR frames are really heavy and should be sent only when needed, otherwise they would completely hog the network.
## Advanced Connection Settings
### Trust clients automatically
* If you check this, clients will connect automatically without the need to trust them. This is a security risk, and it is off by default.
### Web server port
* The IP port used to connect to the dashboard. If this is changed, the launcher will stop working.
### Streaming port
* Port used for streaming (server to client, client to server).
### On connect script
* Specify a command to be run when the server and headset connect. The environment variable `ACTION` will be set to the string `connect`.
### On disconnect script
* Specify a command to be run when the server and headset disconnect. The environment variable `ACTION` will be set to the string `disconnect`.
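As an example, a single hypothetical hook script could branch on the `ACTION` variable (the reactions shown are placeholders; substitute whatever you need):

```python
#!/usr/bin/env python3
# Hypothetical hook script: ALVR sets the ACTION environment variable to
# "connect" or "disconnect" before invoking the configured command.
import os

def handle_event(action: str) -> str:
    # Placeholder reactions; replace with real commands (audio switch, etc.).
    if action == "connect":
        return "headset connected: e.g. switch audio profile"
    if action == "disconnect":
        return "headset disconnected: e.g. restore audio profile"
    return "unknown action: " + action

if __name__ == "__main__":
    print(handle_event(os.environ.get("ACTION", "")))
```

The same script can then be set as both the on-connect and on-disconnect command.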
### Enable FEC
* FEC stands for Forward Error Correction. It is an algorithm used by the video streaming pipeline.
* This setting MUST NOT be set to false. Support for disabling this feature is incomplete and will likely cause a crash.
***
## Basic Extras
### Theme
* Theme used for the dashboard. `System` can switch between light and dark mode depending on your system preference.
### Client dark mode
* Simple color invert for the loading room/lobby in the headset. This is applied only after a sleep-wake cycle of the headset.
### Confirm revert
* Show a confirmation dialog before reverting a setting to the default value.
### Confirm SteamVR restart
* Show a confirmation dialog before restarting SteamVR. When SteamVR restarts, the VR game that was running gets closed and any unsaved progress is lost.
### Prompt before update
* If disabled, when an update is available it is installed immediately without asking. Updates are checked only at startup.
### Update channel
* The update channel is a setting that controls what kind of update to receive.
* `No updates` disables updates
* `Stable` is to receive stable updates
* `Beta` is to receive pre-release updates that had only limited testing
* `Nightly` are completely untested releases that may not work at all, but you get the latest features before anyone else
### Log to disk
* Save the file `session.txt` at the root of the ALVR installation folder.
* This is useful to get debug information when a crash happens. By default this is disabled, because this file keeps growing as long as ALVR is open, until the whole hard drive is filled.
## Advanced Extras
### Notification level
* Select what kind of notification should be displayed in the bottom left corner of the dashboard. Each level contains all levels with higher severity.
### Exclude notifications without ID
* This is a legacy setting. It should be set to false for now.

This page has been moved [here](https://github.com/alvr-org/ALVR/wiki/Troubleshooting).

Troubleshooting (for ALVR 14.0.0 and later)
===
First off, please make sure to carefully read the [installation](https://github.com/alvr-org/ALVR/wiki/Installation) and [usage](https://github.com/alvr-org/ALVR/wiki/Usage) instructions.
The first thing to try is to delete the file `settings.json` located in the ALVR installation folder on the PC. This resets everything to default. If it doesn't work, try reinstalling ALVR.
Keep in mind that sometimes a restart of ALVR/SteamVR/PC/Headset will be enough to solve some problems.
Having trouble getting ALVR to work?
---
[I'm having trouble starting ALVR.](#trouble-starting-alvr)
[ALVR starts fine, but says X error.](#alvr-starts-fine-but)
[ALVR starts fine and doesn't show any error, but it doesn't see (or connect to) my headset.](#alvr-cant-see-my-headset)
If you need more help, come to our [Discord](https://discord.gg/KbKk3UM) and ask in the #help channel. When asking for help, please describe the issue, if you're getting an error message, copy it, and tell us what you already tried to fix it.
Trouble starting ALVR
===
`ALVR Launcher.exe` needs Chrome, Chromium or Edge to be installed in order to work. Chrome is preferred because Edge can cause problems after updating ALVR. If you're using Windows 10, make sure your Windows is up to date. Windows 10 version 1803 should have Edge preinstalled. Otherwise you will need to install either Chrome or Chromium yourself.
If you run Windows 7 you need to use [this workaround](https://github.com/alvr-org/ALVR/issues/1090#issuecomment-1155288370).
ALVR starts launching, but gets stuck on "ALVR is not responding..."
===
With ALVR versions >= 14.2, some antivirus software can prevent ALVR from launching SteamVR. Try disabling any antivirus other than Windows Defender (McAfee, Norton, etc.), reboot, then try again. If the issue persists, make sure you don't have an instance of ALVR or SteamVR running in the background (check in Task Manager). If you continue having issues, hop in the [ALVR Discord server](https://discord.gg/KbKk3UM), and we'll do our best to help you get it sorted out.
ALVR starts fine, but...
===
This section has some advice for when ALVR shows an error (or sometimes warning) pop-up. This could be either a yellow pop-up in the setup window (`ALVR Launcher.exe`) or a separate pop-up when you connect with a headset.
[WARN] clientFoundInvalid
---
If you get a warning pop-up inside the `ALVR Launcher.exe` window saying `clientFoundInvalid`, make sure the version of ALVR you installed on your headset is compatible with the version you're trying to run on your PC.
The latest release can be found [here](https://github.com/alvr-org/ALVR/releases/latest) and contains both the `alvr_client.apk` file for your headset and the `alvr_server_windows.zip` archive with the application for your PC.
The version of ALVR available on the SideQuest store is compatible with the latest release on GitHub (the previous link). Keep in mind that the version on SideQuest might take us a while to update after a new version is released on GitHub.
Failed to initialize CEncoder.
---
ALVR currently needs a recent AMD or Nvidia GPU to run, since it utilizes hardware video encoding (see [requirements](https://github.com/alvr-org/ALVR#requirements)). If you get an error saying something like
```
Failed to initialize CEncoder. All VideoEncoder are not available. VCE: AMF Error 1. g_AMFFactory.Init(), NVENC: NvEnc NvEncoderD3D11 failed. Code=1 NvEncoder::LoadNvEncApi : NVENC library file is not found. Please ensure NV driver is installed at c:\src\alvr\alvr_server\nvencoder.cpp:70
```
and you have up-to-date GPU drivers, then your graphics card isn't supported. If you're using a laptop with a powerful enough discrete GPU, you _might_ be able to get ALVR to work by forcing SteamVR to use it in either Windows settings, or the Nvidia control panel.
If you have a compatible GPU, you're most likely seeing a different error after either `VCE:` or `NVENC:` than above. In that case, try using a different video codec in ALVR settings. You can also try lowering your video resolution setting.
Failed to start audio capture
---
![Failed to start audio capture](images/ALVR-audio-crash.png)
This error can show up when connecting your headset, when SteamVR gets started. Make sure the audio device you have selected in ALVR settings isn't disabled, it should be the device you usually use for games (speakers/headphones). ALVR does not create its own audio device.
You can see if you have an "enable audio enhancements" option on your sound device in Windows settings and if so, make sure it's disabled.
ALVR can't see my headset
===
Here is some advice for issues that can come up even though you don't see any error popup from ALVR.
ALVR on the headset stuck on `Searching for server...`
---
This issue can have multiple causes. It is likely that the issue is with the PC ALVR application. See below for more specific issues.
ALVR client list is empty
---
![Empty ALVR client list](images/ALVRexe-no-clients.png)
Check that the PC app and the headset app run on the latest version of ALVR. At the time of writing, the latest version is v14.1.0. If your version is v2.3.1 or v2.4.0-alpha5 then you downloaded ALVR from the wrong link. The correct link is https://github.com/alvr-org/ALVR.
Make sure ALVR is running both on the PC and on the headset. To be visible in the client list, ALVR on the headset sends broadcast packets which the PC application listens for. These can be blocked by your firewall or possibly your router; if both the headset and PC are connected wirelessly, having AP isolation enabled on the router will also cause this.
To fix this, you can try the following:
1. Ping the headset to check it's reachable from the PC - you can do this by opening CMD and typing `ping <headset IP>` without "<>" (you can find the headset's IP in the top left corner of SideQuest) - if ping fails, check that both PC and headset are connected to the same network
1. You can also try disabling your firewall for testing, but you shouldn't leave it disabled to use ALVR
1. Open ports 9943 and 9944 on your firewall
If pinging works but you still don't see the client on the server app, then headset and PC might be on separate subnets. To solve this you can add the client manually.
In the Connection tab press `Add client manually`. Fill in the fields with a name for your headset (you can use the name you want), the hostname (you can read it in the welcome screen in your headset when you open the ALVR app), the IP of the headset and then press `Add client`.
SteamVR says "headset not detected"
---
![SteamVR headset not detected](images/SteamVR-headset-not-detected.png)
This message means that the ALVR SteamVR driver isn't loading properly when SteamVR starts. Check that SteamVR isn't blocking ALVR (see SteamVR settings, enable advanced settings and check `Startup / Shutdown -> Manage Add-ons`).
![SteamVR add-ons](images/SteamVR-add-ons.png)
If you're still getting this message (or otherwise not getting a headset icon in the SteamVR window), a SteamVR log (vrserver.txt) will have some information on why the driver isn't loading. You can find it where you installed Steam, in `Steam\logs\vrserver.txt`.
#### Some lines to look for and tips for them:
`Unable to load driver alvr_server because of error VRInitError_Init_FileNotFound(103). Skipping.` - This usually means a library that ALVR needs is missing. Make sure you followed installation instructions carefully, installed the latest Visual C++ Redistributable x64 package and no files are missing where you extracted ALVR (especially in the bin\win64 directory).
`Skipping duplicate external driver alvr_server` - This line means another ALVR driver is registered. Go to the installation tab in ALVR and remove all drivers.
`Skipping external driver X:\path\to\your\alvr_server_windows because it is not a directory` - This can happen if you put ALVR in a OneDrive (or a similar service) directory or the path to ALVR contains characters not in UTF-8. Try putting ALVR elsewhere, preferably so that the path to ALVR contains only ASCII characters.
If you have trouble looking through the logs, none of the tips work, or don't apply to you, feel free to ask on our [Discord](https://discord.gg/KbKk3UM) in the #help channel (you may be asked to post the log there).
ALVR sees the headset, SteamVR shows headset icon
---
![SteamVR waiting...](images/SteamVR-waiting.png)
This is a situation where you have ALVR open on both headset and PC, you can see the headset in the client list and trust it. ALVR then starts SteamVR automatically when you try connecting and SteamVR shows an icon for the headset (and controllers).
First make sure that SteamVR (more specifically, vrserver.exe) is allowed incoming connections (UDP, port 9944) in your firewall. You can also try disabling your firewall for testing, but you shouldn't keep it disabled to use ALVR.
You can try restarting ALVR on both the headset and the PC. On the headset, when connecting, you should see the view lagging behind when you turn your head (it drops below 1 fps), this means the headset is getting a response from the server when connecting and is waiting for the video stream to start. If you get no lag in the headset, response from the PC isn't reaching the headset.

# Using ALVR through a USB connection
## ALVR Configuration
### ALVR Client (Headset):
* WiFi must be turned on and connected, otherwise ALVR will not search for the server.
### ALVR Server (PC):
* If your headset is detected, click "Trust." Click "Configure" and add the IP address `127.0.0.1`. Remove the other IP address.
* If your headset is not detected, click "Add client manually" and use the IP address `127.0.0.1`. Use the hostname displayed on your headset screen.
* Turn off client discovery in Settings > Connection.
* Switch the connection streaming protocol to TCP in Settings > Connection.
## Letting your PC communicate with your HMD
The Quest and Go HMDs are Android devices, so we can use [Android Debug Bridge](https://developer.android.com/studio/command-line/adb) (ADB) commands to tell the HMDs to look for data over USB, as well as Wi-Fi, using port forwarding.
You can accomplish this with some pre-made applications/scripts (just below), or run the commands manually with [SideQuest](https://sidequestvr.com/setup-howto).
If you haven't already, connect a USB cable from your PC to your headset. USB 2.0 will work fine but 3.0 and higher is best. Make sure to authorize the computer in your headset.
### Option 1 - Dedicated ADB Applications
The following programs wrap and simplify the process of running manual ADB commands; the first two will also automatically reconnect the headset if the USB connection is interrupted.
* [**ADBForwarder**](https://github.com/AtlasTheProto/ADBForwarder)
* Easy to use
* Downloads ADB for you
* Cross-platform (Windows & Linux with Mono)
* [**Python Script**](https://gist.github.com/Bad-At-Usernames/684784f42cbb69e22688a21173ec263d)
* Lightweight and simple
* Requires [Python 3](https://www.python.org/downloads/) and [PyWin32](https://pypi.org/project/pywin32/)
* Requires [ADB Platform Tools](https://developer.android.com/studio/releases/platform-tools) to be in the same directory as `main.py`
* Just extract `platform-tools` to your desktop and place `main.py` in that folder; it should work when you run the script
* [**Batch Script**](https://gist.github.com/AtlasTheProto/1f03c3aeac70c4af5b4f2fcd9b9273c0)
* Requires [ADB Platform Tools](https://developer.android.com/studio/releases/platform-tools), edit the path in line 2 to point to the directory where you extracted `platform-tools`
* Needs to be run every time you (re)connect your headset
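All of the tools above reduce to the same loop: poll `adb devices`, then re-apply the two port forwards for each connected headset. A minimal sketch of that loop (assuming `adb` from platform-tools is on your PATH; the function names are illustrative and not taken from any of the linked scripts):

```python
import subprocess
import time

ADB = "adb"            # assumes platform-tools is on your PATH
PORTS = (9943, 9944)   # the two TCP ports ALVR uses

def parse_adb_devices(output: str) -> list[str]:
    """Extract serials of ready devices from `adb devices` output."""
    serials = []
    for line in output.splitlines()[1:]:  # first line is the header
        parts = line.split()
        if len(parts) == 2 and parts[1] == "device":
            serials.append(parts[0])
    return serials

def forward_ports(serial: str) -> None:
    """Re-apply both TCP forwards for one device (idempotent)."""
    for port in PORTS:
        subprocess.run([ADB, "-s", serial, "forward",
                        f"tcp:{port}", f"tcp:{port}"], check=True)

def run_forwarder(poll_seconds: float = 5.0) -> None:
    """Poll for devices forever, re-forwarding after any reconnect."""
    while True:
        result = subprocess.run([ADB, "devices"],
                                capture_output=True, text=True)
        for serial in parse_adb_devices(result.stdout):
            forward_ports(serial)
        time.sleep(poll_seconds)

# run_forwarder()  # uncomment to start the loop
```

Re-running `adb forward` on every poll is harmless and is what makes the auto-reconnect behavior work after the cable is unplugged and replugged.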
### Option 2 - [SideQuest](https://sidequestvr.com/setup-howto):
* Ensure SideQuest is running, and the headset has authorized the USB connection to the PC
* Open the 'Run ADB Commands' menu in SideQuest (top-right, box with an arrow inside it)
* Click 'Custom Command' and run these ADB commands:
* `adb forward tcp:9943 tcp:9943`
* `adb forward tcp:9944 tcp:9944`
* These commands will need to be run every time you (re)connect your headset.
* Keep SideQuest opened until you want to close the connection.
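The SideQuest steps above are just a front end for plain ADB; from a terminal, the same setup looks like this (assuming `adb` from platform-tools is on your PATH):

```shell
adb devices                    # headset must show as "device", not "unauthorized"
adb forward tcp:9943 tcp:9943  # the two ports ALVR uses
adb forward tcp:9944 tcp:9944
adb forward --list             # verify both forwards are active
```

As with SideQuest, these forwards are lost when the headset is unplugged and must be re-run after reconnecting.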
***
Once you are finished, the headset should establish a connection over USB.

wiki/_Sidebar.md Normal file
**Start here**
* [Installation](https://github.com/alvr-org/ALVR/wiki/Installation)
* [Settings guide](https://github.com/alvr-org/ALVR/wiki/Settings-guide)
* [Hand tracking controller bindings](https://github.com/alvr-org/ALVR/wiki/Hand-tracking-controller-bindings)
* [Other resources](https://github.com/alvr-org/ALVR/wiki/Other-resources)
***
**Configuration**
* [Information and Recommendations](https://github.com/alvr-org/ALVR/wiki/Configuration-Information-and-Recommendations)
* [ALVR client and server on separate networks](https://github.com/alvr-org/ALVR/wiki/ALVR-client-and-server-on-separate-networks)
* [Fixed Foveated Rendering (FFR)](https://github.com/alvr-org/ALVR/wiki/Fixed-Foveated-Rendering-(FFR))
* [ALVR wired setup (ALVR over USB)](https://github.com/alvr-org/ALVR/wiki/Use-ALVR-through-a-USB-connection)
***
**Troubleshooting**
* [Issues running and connecting to ALVR](https://github.com/alvr-org/ALVR/wiki/Troubleshooting)
* [ALVR Checklist before posting a new Issue](https://github.com/alvr-org/ALVR/wiki/ALVR-Checklist)
* [Controller latency](https://github.com/alvr-org/ALVR/wiki/Controller-latency)
* [My game is not working properly! Help](https://github.com/alvr-org/ALVR/wiki/My-game-is-not-working-properly!-Help!)
***
**Development**
* [Roadmap](https://github.com/alvr-org/ALVR/wiki/Roadmap)
* [Building From Source](https://github.com/alvr-org/ALVR/wiki/Building-From-Source)
* [How ALVR works](https://github.com/alvr-org/ALVR/wiki/How-ALVR-works)
* [Linux support](https://github.com/alvr-org/ALVR/wiki/Linux-Support-development-progress)
* [Real time video upscaling experiments](https://github.com/alvr-org/ALVR/wiki/Real-time-video-upscaling-experiments)
