Hmm, I see. Yeah, this sounds like a good idea indeed.
Hmm, why? I thought the orientation would be purely driven by the headset and the user could just interact with 3D objects in the scene (e.g. dragging windows around).
Once you add other 3D objects in the scene, a depth buffer becomes really necessary. There's no point in getting a depth buffer from clients if we're going to throw it away.
Under X11/Wayland, we probably want to open a window with the 3D scene and get input from there. Under KMS/libinput, we can just grab events as usual.
We don't yet have a depth buffer for our scene. That means visibility depends on the order in which we draw objects instead of their positions in 3D space. See the OpenXR demo for a way to set one up.
We don't actually need to do that.
- OpenXR extension: https://github.com/KhronosGroup/OpenXR-Docs/pull/40
- Monado patch: https://gitlab.freedesktop.org/monado/monado/merge_requests/153
Draft is available on this branch: https://github.com/KhronosGroup/OpenXR-Docs/compare/master...emersion:egl-enable