Robots that communicate what they're doing in AR are more efficient (and more pleasant) to work with
Most robots are designed to do work. As such, not a lot of time, effort, or money is spent on making them able to communicate with humans, because they're usually just doing their own thing. This is starting to change, though, as robots become versatile enough that it's reasonable to have humans working with them more directly, and it's becoming more important that those humans have some idea of what the robot is up to.
Some robots manage this with sounds, or lights, or screens with faces on them, but there are many systems for which hardware modifications like that aren't a good option. In one of the best papers from all of HRI 2018 (seriously, they won a best paper award), roboticists from the University of Colorado Boulder explore how using augmented reality to help robots communicate with humans can make the bots feel safer, more efficient, and more like part of a collaborative team.
When watching a drone (or many other kinds of robots), it's not at all obvious what they're going to do next. It's frequently not at all obvious what they're doing in the moment, either. If they're not moving, for example, is that because they're planning motion? Or because they want to move but something's in their way? Or maybe they're just waiting around on purpose? Who knows! But if you want to walk around them, or do other tasks in the same area, you really need to understand what the robot is going to do.
For humans, this happens almost without thinking, because we're inherently good at communicating with other humans. It's not just that we talk to each other; we can also predict with quite good accuracy what other people are likely to do. We can't do this naturally with robots, but we can cheat a little bit, by taking advantage of alternate interfaces to see what a robot will do next. This has been done before with screens, but the CU Boulder researchers are testing whether augmented reality (via semi-transparent projections overlaid on a user's vision with a Microsoft HoloLens) could be both more intuitive and more effective.
The primary goal of the AR in these contexts is conveying motion intent by displaying what the robot is going to do in the near future. The researchers experimented with several different ways of communicating this:
NavPoints: Displays the robot's planned flight p...