Solving VR/AR typing & input issues with UX & HCI methods


I’ve received a lot of good feedback (on LinkedIn) following the first input method in this blog post, but also a lot of questions and interesting discussions (in Facebook groups like XR Devs Israel & StartUX) about the second method (the holographic AR keyboard), with people, including some of my colleagues, wondering: “why do we need it when we can simply use our own keyboards?”

I think this is a great question, because I myself tried to “bring” my real keyboard into VR even before Facebook (Oculus) made it official with their “Infinite Office”, and I’m happy about it (I don’t have to “hack” my Quest 2 to achieve this anymore). So, in this short blog post I’ll try to explain why we need other input methods for VR & AR, and how we can use some out-of-the-box thinking and prototypes to try and solve some of these issues…

I think that after about 50 years of keyboard-based (and proxy-device) interaction with computers, we need to look far into the future and think about better ways to interact with future AR/XR glasses, without carrying BT keyboards, VR controllers, and other accessories just to have a comfortable way to type text (especially when we are on the go). And while Google, Microsoft, Facebook, and Apple are working on some inspiring ways to solve some of these issues and challenges, we have to remember that not every patent they own is going to translate into an actual product…

So, as a UX designer and researcher who is passionate about emerging tech, and who loves to prototype and solve problems with existing tools and open-source projects… I can only learn, research, explore, prototype, and share my experiences, hoping it will help the industry or other tech enthusiasts in any way. Innovation doesn’t have to come from secret research labs behind closed doors with secretive patents. We the people can contribute to the process too; that’s what I love about FOSS and the open-source mindset. I came to this conclusion after filing a patent for my HoverSense technology, which adds haptics and hover detection to touchscreens and foldable devices (phones, tablets, and XR), but I think innovation and technology move faster when people collaborate and share ideas, especially when it comes to new technologies such as VR/AR and Spatial Computing. Prototyping & MVPs can help us “move fast and break things” in order to test (and learn) new methods and to solve some serious issues.

Trying to improve the input & typing experience for VR/AR (XR)

1. Using my smartphone as an input device for my XR Glasses:


Because hand gestures are not always precise or comfortable for typing, I realized that the smartphone can act as a trackpad and a keyboard for my VR/AR glasses. And it actually makes sense, since most AR/VR glasses already lean on the smartphone, either for a companion app (Oculus Quest 2) or even as the computing unit itself (the upcoming AppleGlass?).

So why not leverage this connectivity for better (and familiar) input methods, instead of waving our hands in the air or carrying dedicated controllers?! After all, the upcoming AR/XR glasses won’t kill the smartphone anytime soon, so until there’s a BCI solution, we can and should use both of them for specific scenarios. It could even be useful for interacting with the HUD elements of AR glasses (Google Glass, etc.) in front of us, without raising our hands. If only we had HoverSense-like haptics built into all touchscreens. But unlike HS, this time I’m not gonna patent it! 😉
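To make the idea a bit more concrete, here’s a rough Python sketch of what a phone-to-headset input bridge could look like. This is not my actual setup, just a hypothetical protocol for illustration: the phone streams normalized touch coordinates and keystrokes as small JSON events over UDP, and the headset side maps them to a pointer and text input. The port number and event fields are made-up assumptions:

```python
# Hypothetical phone -> headset input bridge (a sketch, not my real setup).
# The "phone" side sends small JSON events over UDP; the "headset" side maps
# them to pointer moves and key presses. Run the receiver first, then the sender.
import json
import socket

EVENT_PORT = 9999  # assumed port, pick anything free on your network

def send_event(sock, headset_ip, event):
    """Phone side: serialize one input event and fire it at the headset."""
    sock.sendto(json.dumps(event).encode("utf-8"), (headset_ip, EVENT_PORT))

def phone_demo(headset_ip="127.0.0.1"):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Trackpad mode: x/y normalized to 0..1 so any screen size works.
    send_event(sock, headset_ip, {"type": "touch_move", "x": 0.42, "y": 0.77})
    # Keyboard mode: the phone's soft keyboard doubles as the XR keyboard.
    send_event(sock, headset_ip, {"type": "key", "char": "h"})
    send_event(sock, headset_ip, {"type": "key", "char": "i"})

def headset_loop():
    """Headset side: receive events and hand them to the UI layer."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", EVENT_PORT))
    while True:
        data, _addr = sock.recvfrom(1024)
        event = json.loads(data)
        if event["type"] == "touch_move":
            print(f"move cursor to ({event['x']:.2f}, {event['y']:.2f})")
        elif event["type"] == "key":
            print(f"type character: {event['char']}")
```

Normalizing the touch coordinates to 0..1 keeps the protocol independent of both the phone’s screen size and the headset’s virtual display resolution.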

Let’s face it, the Oculus Quest 2 & HoloLens 2 have great hand-tracking capabilities, but typing in VR & AR by waving our hands in the air is not ideal in terms of UX & HCI (and a bit awkward)! So, using a smartphone as a keyboard is a great solution, until we see some BCI or holographic solutions. By holographic keyboard I mean an AR keyboard projected onto surfaces. It could be a decent typing experience, as you can see in the next section.

2. Using Projected Holographic AR Keyboard (Futuristic Concept):


In the future, we’ll interact with XR (VR/AR) and Spatial Computing devices without using a physical keyboard & mouse, so I wanted to come up with another HCI (human-computer interaction) method that doesn’t rely on my previous method of a smartphone as a keyboard & trackpad. At least until BCI (brain-machine interface) tech such as Neuralink becomes available to the masses. In my opinion, that’s not going to happen anytime soon (even Facebook dropped its BCI research and is exploring new directions), so until then we’ll have to find different, productive ways to input text without carrying a keyboard and mouse everywhere. Especially since Facebook (Oculus) and Apple are already working on slim, well-designed, normal-looking AR glasses; Facebook even has a partnership with Ray-Ban & Luxottica. Most people won’t carry a keyboard and mouse with their normal-looking AR glasses to the nearby coffee shop just to answer some emails and WhatsApp messages. Not to mention, nobody will do so when they are going on vacation…

As for my Holographic Keyboard concept – *this time it’s not a real execution of my idea; it’s a quick UX prototype, until I figure out the right way to develop it in Unity or other tools. But once again, I believe it could be a decent input method for the VR/AR glasses of the future, without the need to carry a keyboard and mouse or other peripherals!
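To give a feeling for how it could work under the hood, here’s a quick Python sketch of the core hit-testing logic, assuming a hand-tracking API already gives us the fingertip position expressed in the projected keyboard’s plane. The layout, key size, and press threshold are all placeholder assumptions, not a real implementation:

```python
# Sketch of the core logic behind a projected AR keyboard: given a fingertip
# position in the keyboard plane (from hand tracking), decide which key, if
# any, was pressed. Layout and thresholds are made-up placeholder values.
from dataclasses import dataclass

KEY_SIZE = 0.018      # assumed key width/height in meters
PRESS_DEPTH = 0.005   # fingertip this close to the surface counts as a press

@dataclass
class Key:
    char: str
    x: float  # key center in keyboard-plane coordinates (meters)
    y: float

def build_row(chars, y):
    """Lay one row of keys along x, centered at 0."""
    offset = -(len(chars) - 1) * KEY_SIZE / 2
    return [Key(c, offset + i * KEY_SIZE, y) for i, c in enumerate(chars)]

LAYOUT = build_row("qwertyuiop", 0.0) + build_row("asdfghjkl", -KEY_SIZE)

def hit_test(tip_x, tip_y, tip_height):
    """Return the pressed key's char, or None if no press."""
    if tip_height > PRESS_DEPTH:  # finger still hovering above the surface
        return None
    for key in LAYOUT:
        if abs(tip_x - key.x) < KEY_SIZE / 2 and abs(tip_y - key.y) < KEY_SIZE / 2:
            return key.char
    return None

# e.g. hit_test(0.009, 0.0, 0.002) -> 'y' (the fingertip touched the surface
# inside the 'y' key's rectangle)
```

The interesting UX question is the press threshold: without haptics, deciding when a hovering finger becomes a “press” is exactly where false positives come from, which is why the next section matters.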

Vision? Nope! The Future is Now…

But here’s a demo I made to show how I use my smartphone as an input device to get some work done on my Oculus Quest 2. Notice how I can easily work with Office 365, Google Docs, the Figma UX/UI design tool, and other productivity 2D Android apps, which I can run as AR and MR apps on top of OculusOS (because it’s based on Android). And remember, this video is not a concept. It’s 100% real! If you are wondering how I managed to record a color AR Passthrough video on my Oculus Quest 2, check out my tutorial.

My vision: Interact with holograms the way you want: Hand tracking, Smartphone, BCI, or even a regular BT keyboard!

Although there’s no haptic feedback (yet), the typing experience could be improved by using machine vision and AI, just like we already do for hand tracking. But please let me know your opinion on this one, and if you have any quick tips or recommendations on how to develop or implement it on my Quest 2, I’ll be happy to hear them.
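For example (a minimal sketch, not a real implementation): even without haptics, a simple statistical decoder could clean up sloppy mid-air or on-surface taps by scoring candidate words against where each tap actually landed. The tiny vocabulary below is a stand-in for a real language model, and the key coordinates are illustrative only:

```python
# Sketch of how AI could clean up sloppy taps: score candidate words by how
# close each tap landed to each candidate letter's key center, then pick the
# best match. Real systems would use a proper language model; the vocabulary
# here is a stand-in.
import math

# Rough QWERTY key centers in "key units" (col, row) - illustrative only.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEY_POS = {c: (col + row * 0.5, row) for row, line in enumerate(ROWS)
           for col, c in enumerate(line)}

VOCAB = ["hello", "help", "hold", "world", "would"]  # stand-in language model

def word_cost(word, taps):
    """Sum of distances between each tap and the intended key center."""
    if len(word) != len(taps):
        return math.inf
    return sum(math.dist(KEY_POS[c], tap) for c, tap in zip(word, taps))

def decode(taps):
    """Pick the vocabulary word that best explains the noisy tap sequence."""
    return min(VOCAB, key=lambda w: word_cost(w, taps))

# Taps that roughly trace h-e-l-l-o but drift by up to half a key:
noisy = [(5.8, 1.2), (2.3, 0.1), (8.2, 0.8), (8.4, 1.1), (8.3, 0.2)]
print(decode(noisy))  # -> "hello"
```

This is the same spirit as swipe-keyboard autocorrect: let the model absorb the imprecision that the missing haptics would otherwise cause.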
