Working in Virtual Reality: A Review
Three days ago, I started experimenting with working in Virtual Reality. I’m quite impressed. My guess is that it’s not good for most people yet, but that 1 to 10% of people reading this would gain a 2 to 20% increase in computer productivity from a VR working setup. The upper end is for people who get distracted easily or have a difficult time with SAD.
This feels like the most radical experiment I’ve run on my setup so far, so I’m quite happy with how it’s worked out. I used to dream of similar setups, and it’s really cool that the technology is basically there. I’ve given demos to a few people in my house who hadn’t spent much time with VR, and their responses varied from fairly impressed to incredibly impressed.
I’m fairly convinced that there’s an extremely promising future for work in VR. The VR ecosystem seems to be improving much more quickly than the alternatives. It strikes me as surprisingly possible that within 2 to 5 years, VR work setups will be the generally recommended work setups, at least for “people in the know.”[1] This could both lead to direct improvements and lead the way for radical rethinkings of what work setups are possible.
My Setup
My specific setup is an Oculus Quest 2 ($300), a 2016 MacBook Pro, and the application Immersed VR. Immersed works over WiFi. My router is around 15 feet away from my headset, and my computer is connected directly to the router via Ethernet. In the app I use two “monitors”; I downscale a 4K monitor to 2048x1280 and use a side monitor of 1920x1080. It’s suggested to keep resolutions rather low, both because the Quest 2 itself doesn’t have a high resolution (1832×1920 per eye) and because higher resolution means higher latency. You can have up to five “virtual” monitors with Immersed, but I prefer one or two big ones.
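As a rough sanity check on why low virtual-monitor resolutions make sense, here is a back-of-the-envelope pixels-per-degree calculation. The per-eye field-of-view figure is an assumption (it varies with lens spacing and how it’s measured); the panel resolution is the Quest 2’s published spec.

```python
# Rough pixels-per-degree (PPD) estimate for the Quest 2.
horizontal_pixels = 1832   # Quest 2 panel width per eye (published spec)
horizontal_fov_deg = 89    # assumed per-eye horizontal FOV (approximate)

ppd = horizontal_pixels / horizontal_fov_deg
print(round(ppd, 1))  # ~20.6 PPD

# "Retina" acuity is often quoted near 60 PPD, so virtual monitors
# much wider than ~2048 px mostly waste bandwidth at this headset's
# resolution -- which matches the downscaling advice above.
```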
I think I used this setup for around 5 working hours on Wednesday, 6 on Thursday, and maybe 2 so far today (but it’s still early). It didn’t seem to get particularly tiring over that time.
I’ve been getting latency of around 5ms to 15ms, but every minute or so there are frustrating 1–5 second hiccups. It’s possible to watch videos, but I have seen large drops in frame rate from time to time. They have instructions for using WiFi Direct to make things smoother. I’ve ordered the necessary module (around $25) and should be getting it shortly.
I’m not sure how long I’m going to continue using it. I find the Quest a bit uncomfortable to wear for long periods and sometimes a bit tiring for my eyes. I’m going to continue tinkering to try to make it better.
Benefits
Focus
I have a roommate now and find visual stimuli distracting. I’m also in a room that’s a bit of a mess. I like having a lot of things (a lot of small experiments), and that makes it difficult to have a clean workspace.
VR setups can isolate away everything that’s not the monitors. There’s an option to see a keyboard, but I don’t use it (I recommend spending the effort to not need it). There’s a handful of decent virtual room options; on Immersed, several prominently feature space and space travel.
Light / SAD
LessWrong now has a full tag on lighting, with 6 popular posts on the topic. I’ve been considering setting up a system myself.
I’m not sure how best to measure the amount of light experienced in VR vs. the sun, but things seem relatively bright to me with the Quest. VR headsets use curved lenses and a dark environment to focus the display’s light on your eyes, unlike regular monitors, which are meant to be visible from any angle. So even a relatively small VR screen can deliver more lumens to your eyes than something much larger. I recently purchased a 350-nit 4K monitor and found that it hasn’t been quite bright enough for some parts of the day. With the VR headset, I often turn the monitor brightness down.
The only thread I could find on the topic was this one on Reddit, but it doesn’t seem that great to me. I found this beginning of a scientific study on “VR for Seasonal Affective Disorder”, but no completed version. I’d hypothesize that living mostly in VR would have some significant benefits for some people with significant SAD (if you’re in VR, how does it even matter what the season is?), though I could imagine that it has some downsides too.
Ergonomics
VR headsets can be a bit heavy, but besides that they can be highly ergonomic. In virtual environments you can configure screens to be anywhere you want them. I have a decent monitor arm that I still find somewhat suboptimal: I often have a hard time bringing my monitor exactly where I want it, so I move my neck to compensate (a bad idea!). It can also be fairly shaky when my desk is in standing mode. In VR I can easily position and reposition my monitors exactly where I want them, at the sizes I want them. It’s great.
I’ve previously thought about trying to work while lying down, when my back was particularly sore. There are some intense $6k+ setups for this, and jury-rigged solutions can be quite awkward. With a VR headset you would still need some solution for positioning the keyboard, but the monitor issue is dramatically simplified. I tried reading a bit while lying down and it worked fine.
Portability
One of the worst things about monitors is that they are a pain to transport. They’re quite large and heavy, and I’ve had a run of bad luck moving them without causing at least some considerable damage. The way things are going, with a VR headset you could have a stellar setup anywhere at all, which is unheard of. Maybe outdoor setups on warm days would be possible, though of course you’d have to replace the visuals with some similar or superior theme on your device. (You’d still get the sounds, scent, and breeze.) Perhaps at some point laptops will forgo their screens, or maybe all the hardware will live in the headset and you’d carry a separate keyboard-and-mouse combo.
Coworking
I haven’t tried this yet, but apparently, you can cowork with Immersed. I believe you get the benefit(?) of being able to see the screens of other coworkers. The options are quite configurable depending on the program.
Coworking in VR has the obvious benefit of allowing people to live anywhere, but also the obvious cost of not being able to see people’s faces. Immersed has a “digital webcam” feature that presents an avatar of you in a format accessible to online video chats in Google Meet and similar. It’s neat but fairly basic.
Facebook has an impressive demo of Photorealistic Face Tracked Avatars, but I imagine it won’t be released for a while.
Negatives
Resolution & Latency
As mentioned, the resolution is rather poor compared to modern monitors. The latency is significantly worse, though WiFi Direct should help, and Windows setups with direct connections should be fine. This seems quite bad for high-bandwidth tasks like video editing or video games, but usable for typing and a lot of coding.
Discomfort
VR headsets are still a bit uncomfortable to wear for long periods. I imagine this will improve a lot over time; some future prototypes look a lot like sunglasses. Apple is apparently getting into the space, so I imagine their take will be particularly lightweight.
Facebook (Quest only)
The Quest 2 requires a Facebook login, and the operating system is heavily integrated with Facebook. To share a screenshot of my in-game setup, I actually had to post it to my Facebook wall, then copy and paste that image. In general, the on-headset OS is usable but quite basic.
Other Discussions
There are a few neat videos of people showing off their VR office setups:
This one is a nice overview of Immersed, though it’s about a year old.
This video shows off the Immersed webcam feature.
This one shows off Virtual Desktop with a wired connection.
Facebook is working on “Infinite Office” which seems interesting but isn’t yet available. It at least demonstrates their optimism and dedication to the area. It’s pretty easy for me to imagine it being better than Immersed after it launches.
Here’s a discussion of someone who didn’t find working in VR particularly usable, in part because they needed to see the keyboard and apparently had a lot of in-person distractions.
The Immersed Blog is interesting, though short and biased. They claim that their team works 8+ hours a day in VR, and point out that some users have apparently reported using VR to effectively live in different time zones.
There’s an Immersed Discord and it has most of the discussion I’ve seen from actual users. The setup is highly biased to favor positive messages, but there is a long list of very enthusiastic users. Generally, people are most positive about the focus benefits and the use of extra monitors. There seems to be almost no discussion from users who have used it for collaboration; most have used it solo.
Conclusion
Working in VR is clearly in its “early days”, but it’s definitely happening. There seem to be at least dozens of people working full-time in VR at this point, most having started in the last ~2 years. The technology is already quite inexpensive and usable. The advantages going forward are numerous and significant.
I’d expect the VR headsets coming out this next year to continue to get better, so waiting a while is a safe option. But I suggest keeping an eye out and planning accordingly. If you’ve been thinking about buying a fancy monitor setup or SAD light setup, you might want to reevaluate.
[1] By this, I mean what I and many smart startups would recommend. Often very good ideas take a long time to become popular. Popularity seems harder to predict than quality.
I’ve been meaning to do a post about the near future of VR because I feel like a lot of people don’t believe how good it will be, and how soon. But I guess maybe it doesn’t need a post of its own. It can be boiled down to:
Reaching maximum levels of visual acuity is very achievable via foveated rendering: the optimization of only rendering the patch of the scene that the user is actually looking at in full detail.
No mouse will be needed. The prospect of foveated rendering incentivizes building in eye tracking, and even external eye trackers that aren’t right next to your eye can already provide a faster kind of hands-free mouse, accurate enough for most legitimate demands. For other tasks, perhaps some form of hand tracking could make up the difference.
Field of view (the angular extent of peripheral vision) has already been maxed out by Pimax.
Further ahead, there just won’t be much of a difference between the optical properties of VR and reality, if the focal length of the screen can be made dynamically adjustable to resolve vergence conflict.
(Edit: Claim 1 is weaker than I initially thought; foveated rendering often only provides gains of about 4x. But it isn’t important: we’ll get very good visual acuity from compression and upscaling algorithms, or just from regular advancements in rendering hardware. Claim 2 is at about P = 0.15 for me now; eye tracking seems to have inherently limited accuracy, because the human eye isn’t consciously controllable with much precision about where it’s pointing. It could be fine for very large UI elements, or for switching focus between windows, but it can’t substitute for a mouse. I’d hope we’d just design UIs to be less mouse-oriented, but that’s not likely. Claims 3 and 4 still hold.)
Thanks for this post! Interesting to learn about the current state of things.
It does seem true (and funny) to me that the #1 thing in physical reality I and millions of others would like to experience in Virtual Reality is our computer screens.
I expect it, humorlessly. In a lot of ways, computer screens can’t be improved upon that much:
The third dimension is unlikely to turn out to be useful when most of the work we do is already neglecting to use the color dimensions. Hopefully it’ll be useful to people who work with 3d objects (3d modellers) though.
I’d predict operating systems will at least start presenting larger virtual screens, but even that has practical limits. Once a screen is wide enough, it requires you to physically turn your head to see things. Right now you can just hit a switch-workspace or show-overview keybinding for that kind of thing, which is faster. Working while looking to the side is not ergonomic, and it would be hard to get the OS to consistently put stuff at the sides that’s occasionally worth looking at but never worth looking at for long enough to get uncomfortable.
Caveat: Turning your body to look around at different stuff would probably be healthy, and intuitive, so we might hope for some hip new VR-optimized standing desks with keyboards that can be yawed around to different angles. Optimally, keyboards would be mounted on a fairly long robot arm that lets you just move and position it in 3d space anywhere in a room.
Still seems kinda gimmicky on net, but who knows, might be nice.
Good point.
One thing I noticed is that HTML could really be optimized for 3D viewing. Right now computer screens are totally flat, but with VR you could take advantage of the extra dimension. In general I’d be quite curious about 3D web pages; it seems like there’s a lot of innovation to be done. My quick hunch is that it won’t radically change UX (things would have to stay accessible to people with one eye, for instance, and it’s convenient for users not to need to adjust the third dimension, as they would with a 3D mouse), but I imagine it could still lead to a bunch of UI changes.
Big Screen allows you to watch 3D movies, which is pretty cool (though they charge a fair bit for them).
https://www.reddit.com/r/bigscreen/comments/ck4xrc/where_can_i_get_3d_movies_and_play_them_in/
Curated. I feel like humanity is on the cusp of “we’re living the future now”, and posts that explore the details of that, blazing the trail for others, are pretty exciting.
I’ve been using an Oculus Quest 2 in this exact setup for a few days. I think a lot of what’s said here is pretty accurate. My list of pros:
Anywhere. I can put my laptop down anywhere, put a headset on and start working with four screens. TBH I’m still sat at my desk most of the time but I’m considering whether I still need a desk.
Why stop at three monitors?
The Oculus headphones sound really nice. How do they make sounds sound as though they’re coming from behind you with two speakers that are both in front of your ears? I don’t understand this.
Lots of people mention the lack of distraction. I work in a room on my own so it’s not a big issue for me anyway. When I have a distraction, it’s my kids running in, so I feel kind of bad ignoring them because I’ve got a headset on.
Cons:
Getting into and out of it is pretty clunky. I write software so I want to use a keyboard (and occasionally mouse); there’s no way I’m going to write code by pointing the Oculus controllers at a virtual keyboard. To get into it, you start the agent, put your headset on, pick up the controllers, start the Immersed app, sort out the monitor layout, put your controllers down (make sure you cleared a decent space for this before you put the headset on) then find your keyboard. A bit more thought going into how well it remembers screen layouts and a voice-activation way of launching apps on the Oculus would go a long way to making this better.
You know how every time you plug in an external screen you have to tell Windows where it’s physically placed? That, multiplied by about a million times. Each time you want to do this, you have to pick up the Oculus controllers. The grid-snapping system for placing screens seems to have annoying quirks.
WiFi. My router is not in the same room. This adds enough latency (and unpredictable latency) to make it unusable. It has a WiFi direct thingy; for some reason, the option to use this is not always available in the agent software. I don’t know why. I’ll have to sort out another WiFi AP (been meaning to do this anyway to improve coverage) but you’ve just lost Pro #1.
Linux. I prefer a Linux desktop. All of my software work is on Linux. The agent claims to work on Linux but just… doesn’t. You run Wayland? Sorry. You have a 4k screen? Sorry (actually this worked sometimes, which is nearly worse). You want virtual desktops like the blurb says you can have on Linux? Sorry. You can display any physical monitors you have connected as desktops in VR, but you’ve just lost Pros #1 and #2. I’m trying out working in WSL2 on Windows but it’s not comfortable for me.
Resolution. There’s a reason we have big, 4k screens. The Oculus’ resolution is hard to give a feel for, because it varies across the display surface. The thing you’re looking at has a much higher dot pitch than things on the periphery of your vision. So the thing you’re looking at is good enough, but you have to get used to turning your head constantly to look at things; glancing at a second screen out the side of your eye does not produce a good result. This might be just something you get used to.
Update:
A month later and I’m still using it instead of my physical screens. It must be okay.
I’ve switched back to Linux and am using a combination of dummy HDMI dongles and a USB-C dock to give me three screens. It works okay, but is annoying. There’s some work going on to make virtual screens work better but it’s spread throughout the display stack so don’t expect it to happen any time soon (unless you’re on Intel hardware, in which case you can create virtual screens fairly easily).
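For anyone curious about the Intel route mentioned above: one way to get virtual screens without dummy dongles is `xrandr --setmonitor`, which carves a virtual monitor out of the existing framebuffer. This is a minimal sketch, not the commenter's exact setup; the monitor name and geometry are illustrative, and real connector names vary by machine.

```shell
# List current outputs and monitors to find real connector names
# (e.g. eDP-1, HDMI-1) and see what already exists.
xrandr --listmonitors

# Define a virtual monitor: NAME WIDTH/WIDTH-MM x HEIGHT/HEIGHT-MM + X + Y,
# backed by "none" (i.e. an area of the framebuffer with no physical output).
xrandr --setmonitor VIRTUAL1 1920/510x1080/290+1920+0 none

# Remove the virtual monitor when you're done.
xrandr --delmonitor VIRTUAL1
```

Whether an application like the Immersed agent picks these monitors up depends on how it enumerates displays, so treat this as a starting point for experimentation.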
The annoyances with Windows were mainly Windows annoyances. The ones that really got to me were:
The WiFi Direct thing in Immersed is only available if your laptop is connected to a WiFi network. I used a wired connection most of the time so this was a considerable annoyance. It was usually easier to just open settings and start the Windows hotspot (which is what Immersed was doing under the covers anyway).
The Windows WiFi hotspot occasionally just… switches off. Not that occasionally, actually. This happened at least once per day.
To make my workflow work well, I switched to Windows 11 to get WSLg. Windows 11 is… buggy (or was—big update released this week but I haven’t tried it). Didn’t wake from sleep reliably, numerous lockups/freezes, the WSLg display server would occasionally freeze or crash with no very good way to recover it.
Windows makes an utter dog’s breakfast of remembering which screens are open and where they should be placed. Several times a day I had to rearrange all the screens.
My Linux setup is pretty stable. I still need to figure out how to start the agent as part of the display manager startup, so that I can put the headset on before I log in. I don’t think this is possible without making the agent a command-line process, though.
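A user-level systemd unit can at least automate starting the agent once a graphical session exists (though, as the comment notes, not before login). This is a hypothetical sketch; the binary path and unit name are placeholders to adjust for your install.

```ini
# ~/.config/systemd/user/immersed-agent.service
# Hypothetical example; replace ExecStart with your actual agent path.
[Unit]
Description=Immersed streaming agent
After=graphical-session.target
PartOf=graphical-session.target

[Service]
ExecStart=%h/.local/bin/Immersed-x86_64.AppImage
Restart=on-failure

[Install]
WantedBy=graphical-session.target
```

Enable it with `systemctl --user enable --now immersed-agent.service`; it will then start and stop with the graphical session.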
As a fellow Linux user who can’t bear the thought of returning to Windows and feels claustrophobic in the thought of being constrained by Apple, thanks for the comments on your Linux-Immersed-Q2 experience.
I have a couple of questions. First, are you using X rather than Wayland, given your first comment? Are you using i3wm or GNOME (or something else)? If i3wm, have you noticed any issues (bugs, friction, etc.) specific to it?
Second, given you have three virtual screens, does this mean you have three physical monitors lying around that you plug the dummy dongles into? I don’t understand what the USB-C dock is for.
Thanks!
I’ve been reading / light working in virtual reality for ~2 hours/day for a few months. My setup is a bit different than Ozzie’s, so I thought I’d share that here.
I use a Windows gaming laptop with the Oculus Quest 2, connected via Virtual Desktop instead of Immersed. Virtual Desktop has far lower latency and slightly higher resolution than Immersed, but only shows you one screen at a time. The latency is low enough to be practically unnoticeable (smaller than latency differences between similar computers: https://danluu.com/input-lag/). A few months ago it was Windows-only, but it has since added Mac support. I don’t think the GPU on my laptop is important for my setup.
Pros
The main reason I use Virtual Desktop is to get out of my chair, either lying down or standing up. When I have a stomachache, headache, or soreness, I get a significant productivity benefit from being able to work lying down. By default my VR productivity is a bit lower than normal, but more enjoyable.
Cons
The biggest downside I’ve found is that VR definitely gives me myopia, due to vergence problems. Whenever I take off the headset, my distance vision is blurry for a few hours until my eyes adjust to focusing at different distances. I suspect this gradually leads to long-term myopia if not offset by vision exercises or something, but will be solved in future headsets. I’ve tried placing my virtual screen at the focal distance of my headset, 1.3 meters, but that didn’t help.
Virtual Desktop takes many seconds to connect, and sometimes needs to be restarted, which is a significant flow-disruptor.
Overall, I’ll continue to use VR the same as I do now, and hopefully up my VR time/other screen time when better headsets come out.
Thanks for sharing. I think that the Windows options seem pretty strictly superior to the Apple options at this point.
I’ve used Virtual Desktop with the Quest, but it doesn’t allow a wired connection on a Mac. With my WiFi setup, it was pretty slow.
Sorry to hear about the myopia! I didn’t really notice that. I hope future units improve here.
I’d also be interested in seeing a report on the various alternatives to Immersed, and how they stack up against each other.
I’d be interested too. My impression is that Immersed is the only option that allows for computer screen input on Mac and Linux machines. Windows has more options.
Hopefully with the increasing popularity of VR devices there will be more competition coming in.
That said, I would note that Immersed was mostly fine for me. The main frustrations were the lack of resolution and the fact that it seemed to tire my eyes a bit. I’m not sure how much better Immersed could realistically be in ways that would get me to use it more as a solo user at this point.
Quick update:
I’ve found that wearing reading glasses while working on my regular computer provides some of the benefits listed above. They effectively make my screen a bit larger and help me get into a “work context”.
I’m sure that many readers of this already wear glasses, but this was fairly new to me, so it caught me a bit by surprise.
I got a pair of cheap (~$10) +0.75 readers that also reduce blue light. I’m not sure if the blue-light part does much, but the magnification helps a bit.
(I’m still waiting for a Big Sur WiFi Direct driver to use the Quest 2 again.)
I have used Immersed for screensharing movies in VR. (We’re both mac users, so nothing other than Immersed is possible currently.) It’s a bit finicky but you can definitely make it work. I had to lower the resolution of the shared screen until the streaming framerate rose to acceptable levels, but then it was generally great.
The main advantage of this over the more typical approach with a video call is that you can get more of a sense of ‘presence’—you can’t see the other person’s face, but an avatar can in some ways feel more expressive (you can see their head movements and hand movements.)
However, at least when we tried this, the Immersed screensharing worked much better than the Immersed multiplayer avatars, so we ended up going back to video chat. I expect it’s improved since then, though. (The BigScreen avatars are super cool, an incredibly strong sense of presence, but we can’t screenshare on BigScreen because that feature is Windows-only. Very frustrating.)
(Disclosure: I made a small investment in Immersed, because I think it’s super cool.)
Some updates:
I’m now using it a bit here and there, but I changed rooms and the connection isn’t as good, so it’s much more painful to use.
There’s a new VR headset being made specifically for Linux, which looks very neat: https://simulavr.com/
Here’s a much more in-depth blog post by someone who’s been doing this for many hours: https://blog.immersed.team/working-from-orbit-39bf95a6d385
That’s very interesting.
I’d be concerned about the potential impact of prolonged usage of a VR headset for many hours per day on eye health. (Of course, I’m not at all an expert in this area.)
Me too. Long-term impacts in general can be tricky to study.
I imagine that there are a whole bunch of parameters to play with in VR. There are different technologies for the headsets, and within each, you have options for brightness and similar. My guess is that theoretically it could be as good as or better than many regular monitor setups, but I’m not sure how long it will take to find that.
As long as most of what you’re looking at is at the headset’s natural focal distance (resolving vergence conflict) and the pixel density is high enough, maybe it will be fine.
It’s possible that something bad happens if you don’t refocus your lenses very often, and it seems likely to me that it may be a long long time before VR that can present multiple focal lengths starts getting cheap. There might not be a lot of enthusiastic demand for it. Maybe there will be though. Gamers will demand every achievable kind of realism, even this weird silly stuff like realistic depth of field simulation. Once that happens I can’t imagine what differences to reality would be left for the eye to complain about.
I tried out Immersed on the Quest 1 and found it surprisingly close, but not quite good enough. I plan to try it on the Quest 2 someday soon.
I think VR Headsets are a bit heavy/uncomfortable to wear for extended periods of time so I don’t know that I could actually really work in it extensively, but in general I’ve been surprised at how good it was.
I’m also quite interested to see people specifically look into VR and SAD.
I think the Quest 2 is a fair bit better, though I’ve only used the Quest 1 for around 40 minutes. The resolution and screen-door effect are much improved.
I think good straps can help with the physical comfort, but agreed it’s an issue.
This is a great post! Some typos/nitpicks:
Not sure what this sentence means.
This link appears to go to the wrong place.
Ah, thanks!
It was written in VR; I think it will take some time to get used to proofreading and such in it :)
Hi Ozzie! Any news on the adapter, and how you kept using Immersed? :-)
Hi!
I made the mistake of upgrading my computer to Big Sur, which has problems with WiFi Direct. I also changed rooms in the house, so now my WiFi signal isn’t quite as good. This makes a very noticeable difference.
I still use this for writing here and there, but mostly I’m waiting for wifi direct support and/or better setups to come out. I’m keeping a close eye on developments.
Ah, so the latency without wifidirect is too bad for regular use?
I read that if you disable SIP, you can get wifidirect back up and running. That’s not good security practice, though.
The latency is too bad for my particular setup. It depends on how good the wifi connection is in your room.
Yea, I’ve looked into the SIP workaround, but am reluctant to implement it now. I’m hoping they just make drivers for Big Sur, but it’s taking more time than I’d like.
Thanks! I might give it a go and use Ethernet <-> Mac <-> Quest 2. I think you could do iPhone via bluetooth <-> Mac <-> Quest 2 – I’ll have to test that out. If you wanna know how it goes, feel free to reply in a week or so!
Good luck!
To be clear, my Mac is connected via ethernet (but there’s no way to connect the headset with a wire). I’m really not sure how the iPhone would work, or if they support it.
I believe things are much nicer for Windows computers.
I’m hoping Apple releases their own soon, though rumors have it that the upcoming unit will be very expensive ($1k to $3k).
Okay, so I got it to work! Basically you just do what it says on here: https://blog.immersed.team/wi-fi-direct-8ec23c74fdab
And then connect the Quest 2 to the new network your mac is broadcasting :-)
My WiFi is good enough to not need it, but I’m sure I’ll need it when I’m out of town.
Quick update:
Immersed now supports a BETA for “USB Mode”. I just tried it with one cable, and it worked really well, until it cut out a few minutes in. I’m getting a different USB-C cable that they recommend. In general I’m optimistic.
(That said, there are of course better headsets/setups that are coming out, too)
https://immersed.zendesk.com/hc/en-us/articles/14823473330957-USB-C-Mode-BETA-
Related:
Another question about your time in here. Were you using the quest 2 with the default stretchy strap, or the elite strap addon (or a 3rd party strap)?
The default strap. It’s not that great, but for me, tolerable. I’m giving it a few months before upgrading, as I’m hoping more straps will be available. (the Oculus ones are sold out)
Question about this software: Does it have to be in a building? Can I instead put my floating screens in a tranquil forest, or floating in the clouds, or next to a hoard of treasure in a cave?
There are a few options with the $15/month Immersed package. No forest, but there is one above the clouds, and one in a cave (no treasure, though). With the free package you just get a few 360° photos to choose from (no depth).
Other apps have more options, but they only support Windows generally.
It’s concerning how accurate Facebook’s face tracking seems to be versus how unrealistic it feels. They’re doing the best they can. They’re doing a really good job. I can’t explicitly point out any flaws. It still doesn’t feel right at all :(((
Still probably a big step up from not being able to see people at all though.
I guess to me it didn’t seem too bad. I’ve found that talking to people with simple avatars in VR and similar seems surprisingly fine, I’d imagine that in practice you’d get used to this. That said, I also imagine the technology will continue to improve. Deepfakes are getting quite realistic.
The hyperlink is missing.
Fixed, thanks! It was a small error in how the url was typed.
Cool experiment! I’m definitely not attracted to working in VR, but from your post it seems like it could help some people focus more. Even if I don’t plan to use VR, I’m curious about having a setting where you can write longhand (which is a lot of what I do when working on research).