I’ve been meaning to write a post about the near future of VR, because I feel like a lot of people don’t realize how good it’s going to be, or how soon. But I guess it doesn’t need a post of its own. It can be boiled down to:
Reaching maximum visual acuity is very achievable via foveated rendering: the optimization of rendering in full detail only the patch of the scene the user is actually looking at (a toy sketch of this follows the list).
No mouse will be needed. That same prospect, foveated rendering, incentivizes building in eye tracking, and even external eye trackers that don’t sit right next to your eye can already provide a faster kind of hands-free mouse, accurate enough for most legitimate demands. For other tasks, perhaps some form of hand tracking could make up the difference.
Field of view (the angle covered, i.e. how much peripheral vision you get) has already been maxed out by Pimax.
Further ahead, there just won’t be much of a difference between the optical properties of VR and reality, if the focal length of the screen can be made dynamically adjustable to resolve the vergence-accommodation conflict.
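To make claim 1 concrete, here’s a minimal sketch of the compositing step in Python. The renderer is faked with a precomputed frame, and every number in it (the downscale factor, the fovea size, a 1440×1600-ish eye buffer) is an illustrative assumption, not a measurement from any real headset:

```python
import numpy as np

def foveate(scene, gaze_xy, fovea_radius, scale=4):
    """Composite a frame that's full detail only around the gaze point.
    `scene` stands in for what a renderer would produce at full resolution;
    a real engine would instead run a cheap low-res pass for the periphery
    and a cropped full-res pass for the fovea."""
    h, w = scene.shape[:2]
    # Peripheral pass: 1/scale resolution, upscaled back (nearest-neighbour).
    low = scene[::scale, ::scale]
    frame = np.repeat(np.repeat(low, scale, axis=0), scale, axis=1)[:h, :w]
    # Foveal pass: paste the full-detail patch around the gaze point.
    gx, gy = gaze_xy
    x0, x1 = max(0, gx - fovea_radius), min(w, gx + fovea_radius)
    y0, y1 = max(0, gy - fovea_radius), min(h, gy + fovea_radius)
    frame[y0:y1, x0:x1] = scene[y0:y1, x0:x1]
    # Rough cost model: full-detail foveal pixels plus the low-res periphery,
    # compared against naively shading every pixel.
    shaded = (x1 - x0) * (y1 - y0) + (h // scale) * (w // scale)
    print(f"shading work: {shaded / (h * w):.0%} of a full-resolution frame")
    return frame

# One eye buffer, gaze near the centre, a 400px-wide foveal patch:
foveate(np.random.rand(1600, 1440), gaze_xy=(720, 800), fovea_radius=200)
```

With these numbers only about 13% of the pixels get full-detail shading, a roughly 7x saving in the best case; as the edit note below says, real pipelines tend to land closer to 4x.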
(Edit: Claim 1 is weaker than I initially thought; foveated rendering often only provides gains of about 4x. But it isn’t important, we’ll get very good visual acuity from compression and upscaling algorithms, or just, idk, regular advancements in rendering hardware. Claim 2 is at about P = 0.15 for me now; eye tracking seems to have inherently limited accuracy, because the human eye isn’t consciously, controllably precise about where it’s pointing. It could be fine for very large UI elements, or for switching focus between different windows (the gaze-pointer sketch below shows the usual workaround), but it can’t substitute for a mouse. I’d hope we’d just design UIs to be less mouse-oriented, but that’s not likely. Claims 3 and 4 still hold.)
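Since the revised claim 2 leans on large targets, here’s a toy gaze-pointer: exponential smoothing to tame tracker jitter, plus dwell-to-click. The GazePointer class and all of its numbers are invented for illustration; this is not any real eye tracker’s API:

```python
import math

class GazePointer:
    """Toy gaze-driven pointer: smooth the noisy gaze signal into a cursor,
    and fire a "click" when the cursor dwells inside a small circle.
    All parameters are invented for illustration."""

    def __init__(self, smoothing=0.85, dwell_ms=600.0, dwell_radius_px=40.0):
        self.smoothing = smoothing              # higher = steadier but laggier
        self.dwell_ms = dwell_ms                # hold the gaze this long to click
        self.dwell_radius_px = dwell_radius_px  # only viable for large targets
        self.x = self.y = None
        self.dwell_origin = None
        self.dwell_elapsed = 0.0

    def update(self, raw_x, raw_y, dt_ms):
        # Exponentially smooth the raw sample into a cursor position.
        if self.x is None:
            self.x, self.y = raw_x, raw_y
        a = self.smoothing
        self.x = a * self.x + (1 - a) * raw_x
        self.y = a * self.y + (1 - a) * raw_y
        # Dwell detection: reset the timer whenever the cursor escapes
        # the circle around where the dwell started.
        if self.dwell_origin is None:
            self.dwell_origin = (self.x, self.y)
        ox, oy = self.dwell_origin
        if math.hypot(self.x - ox, self.y - oy) > self.dwell_radius_px:
            self.dwell_origin, self.dwell_elapsed = (self.x, self.y), 0.0
        else:
            self.dwell_elapsed += dt_ms
        clicked = self.dwell_elapsed >= self.dwell_ms
        if clicked:
            self.dwell_elapsed = 0.0  # require a fresh dwell for the next click
        return (self.x, self.y), clicked

# Feed it one raw tracker sample per frame (~60 Hz here):
pointer = GazePointer()
(cx, cy), clicked = pointer.update(raw_x=640.0, raw_y=360.0, dt_ms=16.0)
```

The jitter is the whole problem: measured gaze wanders something like half a degree even when you feel like you’re staring at a fixed point, which is why this only works when dwell_radius_px is generous, i.e. for big UI elements.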