Do I understand correctly that you show the same image for both eyes? It should be possible to achieve the same effect with a single plane and a custom shader. Or, alternatively, you could transform it in `onBeforeRender` for each camera. That way you might not even have to care whether you're in VR mode or not.
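To illustrate that second option: a mesh's `onBeforeRender` is called once per eye camera when three.js renders through an ArrayCamera, so you can adjust the object per eye there. Here's a minimal sketch of my own (not taken from your code, and assuming a side-by-side stereo texture as the use case); it relies on three.js' WebXRManager enabling layer 1 on the left eye camera and layer 2 on the right eye camera:

```js
// Hypothetical example: one plane showing a side-by-side stereo texture.
const texture = new THREE.TextureLoader().load('stereo-side-by-side.jpg'); // placeholder asset
texture.repeat.set(0.5, 1); // each eye only sees half of the image

const plane = new THREE.Mesh(
  new THREE.PlaneGeometry(2, 1),
  new THREE.MeshBasicMaterial({ map: texture })
);

plane.onBeforeRender = (renderer, scene, camera) => {
  // In VR this runs twice per frame, once with each eye camera.
  // WebXRManager enables layer 2 only on the right eye camera.
  const isRightEye = (camera.layers.mask & (1 << 2)) !== 0;
  texture.offset.x = isRightEye ? 0.5 : 0;
};
```

Outside VR there's only the single default camera, so the check is simply false and the plane shows the left half, which is why you don't really have to care about the mode.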
Coincidentally, for an experiment I also needed to calculate the IPD in A-Frame. I did this by extracting the positions from the left and right eye cameras. Here's the approach (slightly adjusted to also trigger on the `enter-vr` event like in your sample):
```js
// Inside a component's init(), so `this.el` refers to the component's entity.
document.addEventListener('enter-vr', e => {
  this.el.sceneEl.object3D.onBeforeRender = (renderer, scene, camera, currentRenderTarget) => {
    // While presenting, the scene is rendered with an ArrayCamera whose
    // sub-cameras are the left and right eye cameras.
    const cameras = camera.cameras;
    if (cameras && cameras.length >= 2) {
      const leftPos = new THREE.Vector3();
      const rightPos = new THREE.Vector3();
      leftPos.setFromMatrixPosition(cameras[0].matrix);
      rightPos.setFromMatrixPosition(cameras[1].matrix);
      console.log('IPD', leftPos.distanceTo(rightPos));
    }
    // Remove the handler again (scene === this.el.sceneEl.object3D) so it only logs once.
    delete scene.onBeforeRender;
  };
});
```
Obviously, if you already use `onBeforeRender` of the scene for something else, the above won't work as-is. But simply setting a flag and performing this logic on the next `render()` of the component achieves effectively the same result.
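For completeness, a rough sketch of that flag-based variant (untested; the component name `ipd-logger` is just a placeholder, and I've folded the check into a shared `onBeforeRender` here, but the component's own per-frame handler works the same way):

```js
// Hypothetical component: set a flag on enter-vr, then measure the IPD from a
// shared scene onBeforeRender without removing that hook afterwards.
AFRAME.registerComponent('ipd-logger', {
  init: function () {
    this.measureIPD = false;
    this.el.sceneEl.addEventListener('enter-vr', () => { this.measureIPD = true; });

    this.el.sceneEl.object3D.onBeforeRender = (renderer, scene, camera) => {
      // ...any other per-frame work you already do here...

      if (this.measureIPD && camera.cameras && camera.cameras.length >= 2) {
        const leftPos = new THREE.Vector3().setFromMatrixPosition(camera.cameras[0].matrix);
        const rightPos = new THREE.Vector3().setFromMatrixPosition(camera.cameras[1].matrix);
        console.log('IPD', leftPos.distanceTo(rightPos));
        this.measureIPD = false; // only log once per enter-vr
      }
    };
  }
});
```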