We don't yet have a depth buffer for our scene. That means rendering depends on the order in which we draw instead of the positions of the objects in 3D space. See the OpenXR demo for a way to do that.
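To make the order-dependence concrete, here's a minimal sketch (ours, not the compositor's actual code) of painting two overlapping quads into a 1D pixel row, once without a depth buffer and once with one:

```python
def draw(buf, depth_buf, span, color, z, use_depth=False):
    """Paint `color` over pixel indices in `span` at depth `z`."""
    for x in span:
        if use_depth:
            if z < depth_buf[x]:          # closer fragment wins
                depth_buf[x] = z
                buf[x] = color
        else:
            buf[x] = color                # last draw wins, whatever its depth

WIDTH = 8
# Quad A (near, z=0.2) covers pixels 0..4; quad B (far, z=0.8) covers 2..7.

# No depth buffer: drawing B after A lets the farther quad overdraw the nearer one.
buf1 = [None] * WIDTH
draw(buf1, None, range(0, 5), "A", 0.2)
draw(buf1, None, range(2, 8), "B", 0.8)

# With a depth buffer: the result is the same regardless of draw order.
buf2 = [None] * WIDTH
depth = [float("inf")] * WIDTH
draw(buf2, depth, range(0, 5), "A", 0.2, use_depth=True)
draw(buf2, depth, range(2, 8), "B", 0.8, use_depth=True)

print(buf1)  # ['A', 'A', 'B', 'B', 'B', 'B', 'B', 'B'] -- order-dependent
print(buf2)  # ['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B'] -- depth-correct
```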
After more thinking I'm -1 on this idea. I like having windows use a logical z-order and skipping any z-fighting problems. The user will likely never want to have two windows partially occluding each other, which would pretty much render both windows unusable.
Once you add other 3D objects in the scene, a depth buffer becomes really necessary. There's no point in getting a depth buffer from clients if we're going to throw it away.
Agreed, but we'll have to do something unusual to get the desired behavior for 2D clients.
We discussed this in more detail over IRC. Some notes:
Possible solution: render the 2D surfaces into their own framebuffers, in two passes. First a depth-only pass, then a color pass with depth testing disabled, so the surfaces of one window stack by logical z-order without z-fighting. Then composite that buffer into the 3D scene with depth testing enabled.
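A hedged sketch of that two-pass idea (names and structure are ours, purely illustrative): a window's surfaces all sit at one plane depth; pass 1 fills the window's depth buffer with that plane depth wherever any surface covers, pass 2 paints colors in stacking order with depth testing off, and the final composite into the scene tests against scene depth.

```python
WIDTH = 8
INF = float("inf")

def render_window(surfaces, plane_z):
    """Pass 1: depth-only coverage at plane_z.
    Pass 2: color in logical stacking order, depth test disabled."""
    color = [None] * WIDTH
    depth = [INF] * WIDTH
    for span, _ in surfaces:              # pass 1: write plane depth
        for x in span:
            depth[x] = plane_z
    for span, c in surfaces:              # pass 2: last-drawn surface wins
        for x in span:
            color[x] = c
    return color, depth

def composite(scene_color, scene_depth, win_color, win_depth):
    """Composite the window buffer into the scene with depth testing on."""
    for x in range(WIDTH):
        if win_color[x] is not None and win_depth[x] < scene_depth[x]:
            scene_color[x] = win_color[x]
            scene_depth[x] = win_depth[x]

# Scene: a 3D object at z=0.3 covering pixels 4..7.
scene_color = [None] * WIDTH
scene_depth = [INF] * WIDTH
for x in range(4, 8):
    scene_color[x], scene_depth[x] = "obj", 0.3

# Window at plane z=0.5: a base surface on 0..5 plus a popup on 2..4
# stacked above it. The popup never z-fights the base; the nearer 3D
# object still occludes the window where they overlap.
win_color, win_depth = render_window([(range(0, 6), "base"),
                                      (range(2, 5), "popup")], 0.5)
composite(scene_color, scene_depth, win_color, win_depth)
print(scene_color)
```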
Patch on the list for exporting depth buffers from clients. Still need to handle taking them into account while rendering.