
Desk Portal: Input Method for Mixed Reality

During the development of Immersive Space to Think, participants gave feedback on the text-input methods used in conjunction with our virtual reality implementations. These included a Wizard of Oz setup in which an experimenter typed for them, and a visualization of a keyboard on a tracked desk that gave visual feedback to accompany the passive haptics of the physical keyboard. The latter drew significant negative feedback, and in response I developed a mixed reality solution currently called Desk Portal.

The approach uses a tracked, wheeled desk. A “portal” to the real world is opened over the desktop, leveraging the forward-facing cameras on a head-mounted display to let users see and interact with physical controls and tools on the desk. The tracked desk also makes it possible to attach virtual buttons within easy reach of the user. In the studies that follow, participants used this portal to interact with small tools and to set down their controllers more easily.
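
As a rough illustration of the core idea, the sketch below anchors the portal region and a virtual button to the tracked desk's pose. The function names and data layout here are hypothetical, not the actual implementation; a real version would live inside a game engine and composite pass-through video within the portal quad.

```python
import numpy as np

def pose_to_matrix(position, rotation):
    """Build a 4x4 world-from-desk transform from a tracked pose.
    `rotation` is a 3x3 rotation matrix reported by the tracker."""
    m = np.eye(4)
    m[:3, :3] = rotation
    m[:3, 3] = position
    return m

def desk_portal_corners(world_from_desk, width, depth):
    """World-space corners of the desk-top portal quad: the region
    where pass-through video is composited instead of the virtual
    environment."""
    half_w, half_d = width / 2.0, depth / 2.0
    local = np.array([  # corners in desk-local space, homogeneous
        [-half_w, 0.0, -half_d, 1.0],
        [ half_w, 0.0, -half_d, 1.0],
        [ half_w, 0.0,  half_d, 1.0],
        [-half_w, 0.0,  half_d, 1.0],
    ])
    return (world_from_desk @ local.T).T[:, :3]

def attach_virtual_button(world_from_desk, desk_local_offset):
    """Place a virtual button at a fixed desk-local offset, so it
    follows the desk as it is wheeled around the tracked space."""
    p = np.append(desk_local_offset, 1.0)
    return (world_from_desk @ p)[:3]

# Example: desk at 1 m height, identity orientation, 1.2 m x 0.6 m top.
world_from_desk = pose_to_matrix(np.array([0.0, 1.0, 0.0]), np.eye(3))
corners = desk_portal_corners(world_from_desk, width=1.2, depth=0.6)
button = attach_virtual_button(world_from_desk, np.array([0.4, 0.0, -0.2]))
```

Because both the portal quad and the buttons are expressed in desk-local coordinates, they stay registered to the physical desktop as the desk moves, which is what lets users reach real tools and virtual controls in the same place.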

We designed and performed an experiment to evaluate this technique’s impact on typing, presence, and user experience. The study was run by my colleague Alexander Giovannelli. We evaluated the design based on the speed at which a user could switch between interaction tasks and typing tasks, and our paper write-up was accepted at ISMAR 2022. The paper can be accessed here, and its abstract follows:

For touch typists, using a physical keyboard ensures optimal text entry task performance in immersive virtual environments. However, successful typing depends on the user’s ability to accurately position their hands on the keyboard after performing other, non-keyboard tasks. Finding the correct hand position depends on sensory feedback, including visual information. We designed and conducted a user study where we investigated the impact of visual representations of the keyboard and users’ hands on the time required to place hands on the homing bars of a keyboard after performing other tasks. We found that this keyboard homing time decreased as the fidelity of visual representations of the keyboard and hands increased, with a video pass-through condition providing the best performance. We discuss additional impacts of visual representations of a user’s hands and the keyboard on typing performance and user experience in virtual reality.
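
The central measure in this study is keyboard homing time: how long it takes to place both hands back on the homing bars after a non-keyboard task. As a rough illustration of how such a metric can be computed from logged timestamps, here is a minimal sketch; the `Trial` fields and function names are hypothetical, not the actual study instrumentation.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    task_end: float     # timestamp: participant finishes the non-keyboard task
    hands_homed: float  # timestamp: both hands detected on the homing bars

def homing_time(trial: Trial) -> float:
    """Keyboard homing time: the interval between finishing a
    non-keyboard task and resting both hands on the homing bars."""
    return trial.hands_homed - trial.task_end

def mean_homing_time(trials: list[Trial]) -> float:
    """Average homing time across the trials of one condition."""
    times = [homing_time(t) for t in trials]
    return sum(times) / len(times)

# Example: two trials in one visual-representation condition.
trials = [Trial(task_end=12.4, hands_homed=13.9),
          Trial(task_end=30.1, hands_homed=31.2)]
print(f"mean homing time: {mean_homing_time(trials):.2f} s")
```

Comparing this mean across conditions with different visual representations of the hands and keyboard is what supports the finding above that higher-fidelity representations, and video pass-through in particular, reduce homing time.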