Neat monoscopic effect hack
-
I've been working on and off on a short narrative experience that I wanted to get experimental with, and it ended up taking the form of intermittently projecting a monoscopic image that takes up your whole field of view (you can maybe tell from this already what kind of experience I'm making as far as comfort is concerned, heh). Anyhow, I feel like sharing, so here you go!
What I ended up doing is creating two planes with the same image projected onto them, then figuring out the user's IPD and positioning the second plane that exact distance away from the first. Below is the code I wrote in A-Frame to achieve this (minus some experience-specific functions), though I'm sure it can be easily recreated in other frameworks after accounting for potential differences in camera implementations (A-Frame, for example, considers the left eye to be at the origin, hence me having to offset the right eye's plane).
```js
AFRAME.registerComponent('overlay', {
  schema: {
    // A-Frame's schema type for booleans is 'boolean'
    rightEye: { type: 'boolean', default: false }
  },

  init: function () {
    document.addEventListener('enter-vr', e => {
      if (!this.data.rightEye) return;

      // Need to set a short delay so we have a frame object to grab the session from
      setTimeout(() => {
        // Obtain the viewer pose so we can calculate the distance between both eyes
        const scene = e.detail.target;
        const xrSession = scene.frame.session;
        const refSpace = scene.systems['tracked-controls-webxr'].referenceSpace;

        xrSession.requestAnimationFrame((_, frame) => {
          const [leftEye, rightEye] = frame.getViewerPose(refSpace).views;
          const distance = this.calcDistance(leftEye.transform.position, rightEye.transform.position);
          // Round to 3 decimal places; this gives the IPD in meters
          const ipd = Math.round((distance + Number.EPSILON) * 1000) / 1000;
          console.log(ipd);
          this.el.object3D.position.x = ipd;
        });
      }, 100);
    });
  },

  calcDistance(leftPosition, rightPosition) {
    // Euclidean distance in 3D space between both eyes
    const dx = rightPosition.x - leftPosition.x;
    const dy = rightPosition.y - leftPosition.y;
    const dz = rightPosition.z - leftPosition.z;
    return Math.sqrt((dx ** 2) + (dy ** 2) + (dz ** 2));
  }
});
```
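The distance and rounding math in the component can be tried on its own, outside A-Frame and WebXR. Here's a minimal sketch with invented eye positions (the 64 mm separation is just a sample value, not real headset output):

```javascript
// Standalone version of the component's calcDistance and rounding logic.
// The eye positions below are made-up sample values, not WebXR data.
function calcDistance(leftPosition, rightPosition) {
  const dx = rightPosition.x - leftPosition.x;
  const dy = rightPosition.y - leftPosition.y;
  const dz = rightPosition.z - leftPosition.z;
  return Math.sqrt((dx ** 2) + (dy ** 2) + (dz ** 2));
}

const leftEye = { x: 0, y: 1.6, z: 0 };
const rightEye = { x: 0.064, y: 1.6, z: 0 };

const distance = calcDistance(leftEye, rightEye);
// Round to 3 decimal places -> IPD in meters
const ipd = Math.round((distance + Number.EPSILON) * 1000) / 1000;
console.log(ipd); // 0.064
```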
-
Do I understand correctly that you show the same image for both eyes? It should be possible to achieve the same with a single plane and a custom shader. Or, alternatively, you could transform it in `onBeforeRender` for each camera. That way you might not even have to care whether you're in VR mode or not.

Coincidentally, for an experiment I also needed to calculate the IPD in A-Frame. I did this by extracting the positions from the left and right eyes' cameras. Here's the approach (slightly adjusted to also trigger on the `enter-vr` event like in your sample):

```js
document.addEventListener('enter-vr', e => {
  this.el.sceneEl.object3D.onBeforeRender = (renderer, scene, camera, currentRenderTarget) => {
    // In VR mode the camera is an ArrayCamera holding one sub-camera per eye
    const cameras = camera.cameras;
    if (cameras && cameras.length >= 2) {
      const leftPos = new THREE.Vector3();
      const rightPos = new THREE.Vector3();
      leftPos.setFromMatrixPosition(cameras[0].matrix);
      rightPos.setFromMatrixPosition(cameras[1].matrix);
      console.log('IPD', leftPos.distanceTo(rightPos));
    }
    // One-shot: remove the handler after the first render
    delete scene.onBeforeRender;
  };
});
```
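As an aside, the two three.js calls boil down to very little: `setFromMatrixPosition` reads the translation out of a column-major 4×4 matrix, and `distanceTo` is the same Euclidean distance as in the component above. A dependency-free sketch (the matrix values are made up):

```javascript
// What setFromMatrixPosition + distanceTo do under the hood, without three.js.
// THREE.Matrix4 stores its elements column-major, so the translation sits at
// indices 12, 13 and 14. The matrices below are invented sample values.
function positionFromMatrix(elements) {
  return [elements[12], elements[13], elements[14]];
}

function distance(a, b) {
  return Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
}

// Two identity matrices that only differ in their x translation
const leftMatrix  = [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  -0.0315, 1.6, 0, 1];
const rightMatrix = [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,   0.0315, 1.6, 0, 1];

const ipd = distance(positionFromMatrix(leftMatrix), positionFromMatrix(rightMatrix));
console.log(ipd); // 0.063
```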
Obviously, if you use the scene's `onBeforeRender` for something else, the above won't work. But simply setting a flag and performing this logic on the component's next `render()` achieves effectively the same.
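That flag idea can be sketched framework-free. All the names below are hypothetical stand-ins, not A-Frame API: the event handler only raises a flag, and the measurement runs on the next per-frame call.

```javascript
// Hedged sketch of the suggested flag pattern. The event handler sets a
// flag; the actual measurement happens on the next render/tick invocation.
const state = { measurePending: false, ipd: null };

function onEnterVR() {
  state.measurePending = true;
}

// Imagine this being called once per frame by the framework
function render(measureIPD) {
  if (!state.measurePending) return;
  state.measurePending = false;
  state.ipd = measureIPD();
}

onEnterVR();
render(() => 0.063); // stand-in for the real camera-based measurement
console.log(state.ipd); // 0.063
```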