Making Mixed Reality A Thing That Designers Do and Users Want

As a researcher, I can see that mixed reality has the potential to fundamentally change the way designers create interfaces and users interact with information. As a user of current AR/VR applications, however, even I find that potential difficult to see. We are still far away from that future where mixed reality is mainstream, available and accessible to anyone. Where mixed reality is ubiquitous, available anytime, anywhere, at the snap of a finger. Where everybody is able to "play," be it as consumer or producer. Where there are many flavors to choose from, and we can create entirely new experiences simply by combining existing ones in new, more powerful ways.

So what does it take to make that future of mixed reality a reality?

In this talk, I will focus on three aspects: 1) the limitations of current methods for creating mixed reality experiences, and how I think methods need to bridge the gap between physical and digital prototyping to accelerate design; 2) the trouble I have with existing AR/VR tools, including immersive authoring systems, and how I think tools need to change to enable new workflows and empower less technical designers; and 3) the issues I see with the current proliferation of dedicated, standalone AR/VR devices, platforms, and applications, and the power and flexibility that could instead come from letting users combine multiple devices and adapt existing interfaces for AR/VR depending on their context, task, and preferences. In many of my examples, I will stress the importance of the web as an open platform and the role that immersive web technologies play in my research, to help illustrate my vision of mixed reality interfaces.


Michael Nebeling

University of Michigan, USA

Michael Nebeling (http://michael-nebeling.de) is an Assistant Professor at the University of Michigan, where he leads the Information Interaction Lab (https://mi2lab.com). His lab investigates new techniques, tools, and technologies that enable users to interact with information in more natural and powerful ways, and that make it easier for designers to create more usable and effective interfaces. His earlier work established principles, methods, and tools for the design, development, and evaluation of multi-device and cross-device user interfaces. In a second thread of research, he demonstrated how crowdsourcing can be used to create adaptive interfaces, to write a paper from a smartwatch, or to design gesture recognizers. The vision behind his more recent work is to make the creation of augmented and virtual reality interfaces as easy and flexible as paper prototyping. His work has received eight Best Paper Awards and Honorable Mentions at the premier HCI conferences. He regularly serves on the program committees of the ACM CHI, UIST, and EICS conferences. He received a 2018 Disney Research Faculty Award and a Mozilla Research Award. He joined Michigan in 2016 after completing a postdoc at the HCI Institute at Carnegie Mellon University and a PhD in the Department of Computer Science at ETH Zurich.


A Lived Lab: Reflections on (More than) A Decade of Engagement with a Community

Access to technology and digital services is commonly assessed as a binary question, both by researchers and by society at large. People see eye trackers as having solved communication for people with motor impairments, and screen readers as having solved access to mobile devices for blind people. In this talk, I will try to dissect deeper layers of accessibility, beyond physical access, using my own misconceptions and research agenda over time as examples. This includes presenting not only the different research projects we have worked on in the last few years but also the different research methodologies we applied to gain a deeper understanding of the needs of blind people and of the impact of our own interventions.

With this talk, I have three parallel goals: 1) to present our work on mobile accessibility for blind people, from novel text input techniques to human-powered assistance; 2) to break stereotypes and call for deeper research agendas and methodologies; and 3) to argue for embedded research and an active role for participants, aiming towards the democratization of technology.

This talk builds on more than 12 years of prototype design and in-the-wild deployments of mobile technologies within a community of blind people.


Tiago Guerreiro

University of Lisbon, Portugal

Tiago Guerreiro (https://tjvguerreiro.github.io) is an Assistant Professor at the University of Lisbon and a researcher at LASIGE (https://www.lasige.di.fc.ul.pt/). He is an HCI researcher focused on improving access to computing technologies for people with different abilities, and on re-designing interactions and workflows for pervasive healthcare. In parallel, he is concerned with how people in general are able to secure their data, particularly from non-sophisticated insiders. He pursues this research with a strong user- and data-driven approach, anchored in deploying and assessing technological artifacts in the wild. He is proud to have been collaborating with institutions for blind people for over 12 years, with weekly engagements and with prototypes in use for periods of over 8 years.
He has received awards for 10+ papers, including at CHI, ASSETS, SOUPS, and MHCI. He is Editor-in-Chief of ACM Transactions on Accessible Computing, was General Chair of ACM ASSETS 2020, and has served in several roles at CHI (SC Chair, AC), ASSETS, and W4A (GC and PC Chair), among others. He has participated in 10+ EU projects, is an expert evaluator for H2020 and ERA PerMed EU calls, and is currently an invited expert supporting the European Commission in implementing the Web Accessibility Directive.