Making Mixed Reality a Thing That Designers Do and Users Want

As a researcher, I can see that mixed reality has the potential to fundamentally change the way designers create interfaces and users interact with information. As a user of current AR/VR applications, however, even I find it difficult to see that potential. We are still far away from that future where mixed reality will be mainstream, available and accessible to anyone; where it will be ubiquitous, available anytime, anywhere, at the snap of a finger; where everybody will be able to "play," be it as consumer or producer; and where there are lots of flavors to choose from, or we can create entirely new experiences simply by combining existing ones in new, more powerful ways.

So what does it take to make that future of mixed reality a reality?

In this talk, I will focus on three aspects: 1) the limitations of current methods for creating mixed reality experiences, and how I think methods need to bridge the gap between physical and digital prototyping to accelerate design; 2) the trouble I have with existing AR/VR tools, including immersive authoring systems, and how I think tools need to change to enable new workflows and empower less technical designers; and 3) the issues I see with the current proliferation of dedicated, standalone AR/VR devices, platforms, and applications, and the power and flexibility that could instead come from allowing users to combine multiple devices and adapt existing interfaces for AR/VR depending on their context, task, and preference. In many of my examples, I will stress the importance of the web as an open platform and the role immersive web technologies play in my research to help illustrate my vision of mixed reality interfaces.


Michael Nebeling

University of Michigan, USA

Michael Nebeling (http://michael-nebeling.de) is an Assistant Professor at the University of Michigan, where he leads the Information Interaction Lab (https://mi2lab.com). His lab investigates new techniques, tools, and technologies that enable users to interact with information in more natural and powerful ways, and that also make it easier for designers to create more usable and effective interfaces. His earlier work established principles, methods, and tools for the design, development, and evaluation of multi-device and cross-device user interfaces. In a second thread of research, he demonstrated how crowdsourcing can be used to create adaptive interfaces, to write a paper from a smartwatch, or to design gesture recognizers. The vision behind his more recent work is to make the creation of augmented and virtual reality interfaces as easy and flexible as paper prototyping. His work has received eight Best Paper Awards and Honorable Mentions at the premier HCI conferences. He regularly serves on the program committees of the ACM CHI, UIST, and EICS conferences. He received a 2018 Disney Research Faculty Award and a Mozilla Research Award. He joined Michigan in 2016 after completing a postdoc in the HCI Institute at Carnegie Mellon University and a PhD in the Department of Computer Science at ETH Zurich.