Softspace harnesses spatial computing to help thinkers and makers better see, understand, and develop the ideas that form the heart of their creative projects.
This was the first in a series of prototypes we’re releasing as part of our open development process to build the augmented reality version of Softspace.
Help us invent a powerful new kind of tool for thought by 🧑🏽‍🔬 trying these prototypes, 🐦 following us on Twitter, and 👯‍♀️ joining the Softspace Discord.
I released SoftspaceAR Prototype01 a month ago. As a quick reminder, it’s a passthrough AR app that lets you explore a 3D force-directed graph of a Dropbox folder’s contents.
I launched Prototype01 on the same day I announced the larger project of prototyping my way toward a new, augmented reality version of Softspace. At the time, I didn’t dig too deeply into the ideas and experiments embodied by this first research demo.
Since then, I’ve released Prototype02 (which is all about writing) and started work on Prototype03 (which is all about spatial layouts that are also ordinal; more on that soon).
Before this first prototype becomes too much of a distant memory, I wanted to review the important lessons I drew from it, and highlight some of the feedback I’ve gotten from you—our intrepid test users!
Key Conclusions
From developing, building, and testing Prototype01, I was able to draw three main conclusions that are relevant to the future design of SoftspaceAR.
1. The Quest 2’s passthrough is good enough.
Passthrough AR is augmented reality that shows you the physical world via video, instead of through a transparent display (“see-through” AR?).
The Quest 2’s passthrough AR mode is a retrofitted feature. It piggybacks on the inside-out tracking system’s cameras, so the positioning relative to your eyes isn’t optimal, and the video from the infrared cameras is low-res and lacks color.
It turns out that despite its limitations, the Quest 2’s passthrough is more than good enough to build a productivity app on. It adequately solves several key problems:
Even muddy grayscale passthrough eliminates the sense of unease and vulnerability that arises when you can’t see what’s happening around you, which is a major hurdle to VR usage for many people (myself included).
Although the visual quality is pretty rough, the latency and refresh rate of the passthrough video are very good, which means it doesn’t cause nausea. This is likely a benefit inherited from the requirements of the inside-out tracking system.
The camera positioning only becomes an issue for things that are closer than about a foot from your face, which means manipulating most things within arm’s length (like a nice Bluetooth keyboard) is totally fine.
This means that even the Quest’s crappy passthrough lets you work in the headset comfortably, happily, and productively for longer stretches of time—and we know that future headsets will only improve on this experience (e.g. Project Cambria, Apple’s first headset).
2. The Quest 2’s hand tracking is good enough, and getting better.
Because of Oculus’ focus on gaming, hand controllers have long been the primary input device. They offer the precision and tactility that fast-paced, action-packed games need.
However, controllers get in the way when you need to use a keyboard, and there’s something that feels... off about doing work while holding what feels vaguely like a plastic gun in each hand.
Enter hand-tracking.
The Quest 2’s hand-tracking capabilities are pretty magical, especially considering that it’s running on—once again—grainy grayscale video from the inside-out tracking system. (ML FTW!) Even so, there’s noticeable latency, pose precision and confidence are not great, and tracking breaks easily if you move your hands too quickly or the camera can’t see them clearly.
Prototype01 showed me that these limitations are real and can be frustrating, but also that 1) they can be largely mitigated in code (e.g. smoothing), and 2) they’re well worth putting up with for the incredible experience of manipulating virtual objects in the air with your bare hands.
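To give a concrete sense of what “mitigated in code” can look like, here’s a minimal sketch of frame-rate-independent exponential smoothing applied to a tracked hand pose in Unity C#. It’s illustrative only, not the actual SoftspaceAR code: the component and field names are made up, and `rawHandAnchor` stands in for whatever transform your hand-tracking provider updates each frame.

```csharp
using UnityEngine;

// Hypothetical example: exponentially smooths a jittery tracked-hand pose.
// `rawHandAnchor` represents the raw tracked transform supplied by your
// hand-tracking provider; it is not a specific Oculus API object.
public class SmoothedHandAnchor : MonoBehaviour
{
    [SerializeField] private Transform rawHandAnchor;     // raw, jittery tracked pose
    [SerializeField] private float smoothingSpeed = 20f;  // higher = snappier, lower = smoother

    private void LateUpdate()
    {
        if (rawHandAnchor == null) return;

        // Frame-rate-independent blend factor in [0, 1].
        float t = 1f - Mathf.Exp(-smoothingSpeed * Time.deltaTime);

        // Ease this object's pose toward the raw tracked pose.
        transform.position = Vector3.Lerp(transform.position, rawHandAnchor.position, t);
        transform.rotation = Quaternion.Slerp(transform.rotation, rawHandAnchor.rotation, t);
    }
}
```

Smoothing like this trades a little added latency for a lot less jitter; tuning `smoothingSpeed` is mostly a matter of feel.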
Oculus has been steadily improving the Quest 2’s hand-tracking via software updates (presumably using better-trained ML models), and I expect this trend to continue. Indeed, Oculus released Hand Tracking 2.0, with big upgrades to confidence and partial-occlusion pose estimation, right after I released Prototype01.
3. Working with ~1000 objects at once is totally feasible.
One of the core tenets of Softspace is that working with ideas spatially will have big benefits over looking at them through a small rectangular window. However, loading and displaying content (text, images, metadata, etc.) in a 3D context presents novel technical and design challenges.
One of these is simply scale: because of perspective, a spatial UI makes it possible for you to see many more things at once than on (even a very large) 2D display. Loading, managing, and rendering that many objects could prove difficult for the underclocked XR2 chip on the Quest 2.
Prototype01 gave me a lot of comfort that even for scenes with ~1000 items, load times and performance would be workable. (I took 1000 as shorthand for “many”, because it’s well beyond the number of objects you can really hold in your head at once.)
✅ Loading this many items from the Dropbox API happened quickly enough that there was always new stuff for you to look at by the time you finished visually scanning what had already loaded—analogous to not having to wait for the bottom of a webpage to load in.
✅ The CPU and GPU were able to load and render this many textured objects at 90fps quite comfortably, in large part because of extensive optimizations I had developed for SoftspaceVR.
✅ On top of rendering everything, Prototype01 also runs an unoptimized force-directed graph simulation on two worker threads. Thanks to the wonderful Unity Jobs system and Burst compiler, these calculations take ~6ms for 1000 bodies, fitting comfortably in the 11ms frame-time budget at 90fps. (A rough sketch of this kind of job follows below.)
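For the curious, here’s what a Burst-compiled job for the repulsion part of such a simulation can look like. This is a sketch, not the Prototype01 source: `RepulsionJob`, its fields, and the scheduling constants are assumptions for the example.

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using Unity.Mathematics;

// Illustrative sketch only: a Burst-compiled job that accumulates simple
// pairwise repulsion forces for ~1000 graph nodes. A full force-directed
// simulation would also apply spring forces along edges and integrate
// velocities; names and constants here are made up for the example.
[BurstCompile]
public struct RepulsionJob : IJobParallelFor
{
    [ReadOnly] public NativeArray<float3> positions;
    public NativeArray<float3> forces;   // one slot per node; written only at `index`
    public float repulsionStrength;

    public void Execute(int index)
    {
        float3 p = positions[index];
        float3 force = float3.zero;

        // O(n^2) all-pairs repulsion: still cheap for ~1000 bodies under Burst.
        for (int j = 0; j < positions.Length; j++)
        {
            if (j == index) continue;
            float3 delta = p - positions[j];
            float distSq = math.max(math.lengthsq(delta), 0.0001f); // avoid divide-by-zero
            force += repulsionStrength * delta / distSq;
        }

        forces[index] = force;
    }
}

// Scheduling, e.g. from a MonoBehaviour's Update():
//   var job = new RepulsionJob {
//       positions = positions, forces = forces, repulsionStrength = 0.05f
//   };
//   JobHandle handle = job.Schedule(positions.Length, 64);
//   handle.Complete(); // or complete later in the frame to overlap other work
```

The point isn’t the exact numbers, just that even a naive all-pairs loop like this stays well within budget at this scale once it’s Burst-compiled and spread across worker threads.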
Having the performance headroom to load, animate, and render on the order of a thousand objects in 3D space opens up powerful possibilities for new and better ways to work with your knowledge graphs.
User Feedback
Many of you also took the time to download and try out Prototype01—thank you! 🙏🏽
Here are some of the key pieces of feedback I received:
The force-directed graph was a visually interesting way to explore Dropbox, but the fact that everything was positioned automatically and somewhat randomly limited its usefulness. Manual layouts are slower to build, but more meaningful. Taking time to figure out where everything goes is part of the work.
A lot of people wanted to be able to add, delete, move, and otherwise make changes to the files and folder structure.
People also wanted to link items as a way of denoting a connection between two ideas, as opposed to representing a folder structure relationship.
The low reliability of the tracked-hand interactions tripped people up—I’ll continue to work on this. I’ve already upgraded both Prototype01 and 02 to the new Hand Tracking package, which should help greatly with this issue.
Nobody mentioned any problems with the passthrough, which I take to be a general consensus that it is indeed “good enough”.
Thanks again to everybody who downloaded and tried Prototype01, and thanks to all for reading this update!
I’ll be getting back to work on Prototype03 now. More on that soon.
Best,
Yiliu