Connecting people via networked robotics with Leila Takayama

We were honoured to have A/Prof Leila Takayama from the University of California, Santa Cruz as our keynote speaker this year at the Networked Society Symposium. Her engaging talk focused on the relationships between robots and humans.

Leila shared many funny, fascinating stories from her research into human-robot interaction that highlight not only some interesting elements of robot design but also some insights into the human condition that are revealed through our interactions with robots and computers. It's a great listen.

Listen in...

Episode released: 14 November 2018

Recorded and Produced by: Kate Murray

  • Full Transcript

    KATE MURRAY: Hi and welcome to Networked Society Stories. I'm your host, Kate Murray. Today we have something for you that's a little bit special: we are talking robots. But not just robots and how they work but also what we need to know about the human condition in order to build robots and what the building of robots teaches us about ourselves. It is fascinating.

    Our speaker is Leila Takayama. She was our keynote speaker at our 2018 Networked Society Symposium. She is a human-robot interaction (HRI) and human-computer interaction (HCI) researcher, focused on experimental social science approaches to studying how people interact through and with computers and robots. Leila is currently an associate professor of Psychology at the University of California, Santa Cruz. She's also an entrepreneur: a founder of Hoku Labs, which also focuses on HRI and HCI research. Leila was also a senior researcher at Google X, which, of course, she can't tell us anything about, but I'm sure it was a fascinating time.

    What she does talk about is her time at Willow Garage. Leila shares many insightful stories about the processes of designing robots and working with robots. She talks not only about what worked out but also about the places where they went wrong: the places where they came up with what you might think is an intuitive solution, an obvious answer to how robots should act, and yet it didn't quite work out in the field. She'll explain why. These are great stories, very rich and informative.

    Leila's talk overall reveals as much about the human condition, about our needs and foibles, as it does about robot design. At the Symposium this keynote was really well received; it was so fascinating, and I hope you enjoy it as much as we did on the day.

    We will list links to all the references that Leila makes in her talk on our website, networkedsociety.unimelb.edu.au. It's also a great place to find out more about our events, our research and what we're up to. Get involved. We are also on Twitter at @MelbNSI. We look forward to hearing from you. And now here is Leila Takayama with her keynote speech, 'Connecting People via Networked Robotics'.

    LEILA TAKAYAMA: All right. Thanks for that kind introduction. I'm going to just give you a little bit more about my background so you understand where I'm coming from. Then we are going to dive into a bunch of data and talk about how people actually interact with robots, which tends to be very different from how we think we are going to interact with robots. Just a bit of background. I did a bunch of school in California and interned at some places that were pretty amazing in terms of the history of human-computer interaction and how we made computing more personal and more usable for everyday people, not just people with PhDs in Computer Science.

    I stumbled into an amazing job at a little tiny startup that was doing open source robotics. I'm going to share a lot of those stories with you here today, about the research that we got to do there together. I also have done volunteer work for the World Economic Forum and worked at Google X. Actually, the work that I did there got me a trip to Australia, or a couple of trips to Australia, which were pretty great. When I got the invitation to come down here, I was like, "Oh, heck yeah. I'm totally there." Thank you for having me today. I'm now at the University of California, Santa Cruz, where we do research.

    Today I'm going to talk about robots and a lot of people argue about what the heck is a robot anyway, right? I would actually argue, because I have a broad view of robotics, that we interact with robots every day already. If you take the traditional definition of robotics, being that they sense, they plan and they act, they have to act in the real world, then you could actually say that your thermostat is a robot. It's detecting the temperature in the room. It knows what temperature you want it to be. It makes the decision about how to get the temperature there and then it makes a change in your physical environment. Most of my robotics friends would hate me for saying that but I'm saying it anyway.
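    That sense-plan-act loop is simple enough to sketch in a few lines of code. Below is a minimal, hypothetical thermostat written as a robot; the sensor and actuator functions are simulated stand-ins for real hardware, not any particular product's API.

    ```python
    import random
    import time

    TARGET_TEMP_C = 21.0
    DEADBAND_C = 0.5  # hysteresis so the heater doesn't toggle rapidly


    def read_temperature() -> float:
        # Sense: stand-in for a real temperature sensor; simulated here.
        return 20.0 + random.uniform(-2.0, 2.0)


    def set_heater(on: bool) -> None:
        # Act: stand-in for an actuator that changes the physical environment.
        print("heater", "on" if on else "off")


    def control_loop(iterations: int = 5) -> None:
        for _ in range(iterations):
            temp = read_temperature()              # sense the room
            if temp < TARGET_TEMP_C - DEADBAND_C:  # plan: decide how to hit the target
                set_heater(True)                   # act on the physical world
            elif temp > TARGET_TEMP_C + DEADBAND_C:
                set_heater(False)
            time.sleep(1)


    if __name__ == "__main__":
        control_loop()
    ```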

    Also, if you drive a car and it has an automatic transmission or it's got cruise control, you are already using bits of automation today. When you go to an ATM, an Automatic Teller Machine, to get cash, that thing … It used to be something that we would do with a human bank teller and now we do it with machines. I really feel like these robots are already here. It's not like there's some distant future where all of a sudden we are going to have Rosie the robot running all over the place. We have little bits of Rosie already here now. There are a lot more robot-y robotic products out there today. This is one of my favourites here, which is a window washing robot that sticks together by magnets.

    We don't have to send people up to the top of those high rises to clean windows and you can do it more frequently than you would normally. Also, dirt sucking robots here. Guys who will read the news to you. This is Clocky, the alarm clock, who, if you hit that snooze button when he goes off in the morning, rolls away from you and then starts beeping again so you have to get out of bed to go shut him up. I think there are some really interesting things that can be done using today's automation in order to help us reach our own goals, like getting out of bed at the time you are supposed to in the morning. These are robots that live in my house right now.

    One of them is Suckster. One of them cleans my kitty's stuff, which is great. Another one is a lawnmower. I call him Billy because he's like a billy goat and he cuts the grass much more often than I normally would. It may not be the same as how it would be if a person was doing it, but it is taking care of tasks that I honestly don't like doing. It's the dirty, the dangerous and the dull, those kinds of tasks that we tend to put robots towards today. As I mentioned, in human-computer interaction, which is really my home community, there was a period of time when we were transitioning from mainframe computers, where you had to have a PhD in Computer Science to use them, to the personal computer, where we tried to make them usable by everybody doing everyday things, and we weren't really sure what those computers were supposed to do.

    They got lucky with figuring out that spreadsheets were a cool thing. I think right now in robotics, we are at that same awkward adolescent stage. We've got robots that can only be used by people with PhDs in Computer Science or Robotics, but nobody else can really use them for doing anything interesting yet. What if we could? What if we instead could have robots that any one of us could use, that we could put in the hands of, say, designers or architects or people who are not only interested in robots for the sake of being robots? This is a wooden robot made by Ken Salisbury's lab at Stanford University called PR1, Personal Robot 1.

    This is the metal version of that, which costs a lot more, called PR2, Personal Robot 2, that we worked on at Willow Garage. A lot of my experience came from trying to develop that thing, which I would argue is not a very useful robot. It was about half a million dollars, weighed 450 pounds, had 32 degrees of freedom and was just a mobile manipulation platform for robotics research. Those are fancy ways of saying it was a big thing that rolled around and grabbed stuff. What was it for? It was a little unclear. It was a platform for experimenting upon and, quite frankly, it did a lot of things not very well, but we learned a lot from using it about what you would actually build if you were trying to get a task done in an appropriate way.

    Willow Garage was this place in California that no longer exists. We were working really hard towards building an open source robotics community, more like the Linux of robotics as opposed to, say, the Microsoft Windows of robotics. That was the real driver here. That still lives on today in the Open Robotics group, but a lot of the research was on this guy. We used this as a way to convince people that you should share your code. You should work on a common platform so that we can build on each other's work. Instead of demoing by YouTube video and calling that science, you should open source your code and share it with the community so that they can build on it and do it better.

    We can share the progress. These guys were running around our office all the time back in 2011. When I joined, it was 2010. This robot was state of the art and it had spent many, many, many months doing this: trying to open doors, which is this really easy thing for humans to do but really hard for a robot. Here … This is how the robot sees the world. It's thinking about finding the door, "Oh there, I found the door." Now it's going to look for the handle, which is shiny and that's a little scary. It's going to do some motion planning to figure out how to grab that handle, and maybe that door is going to be unlocked and it will get lucky so that it can get that thing done, and it worked. Thank God.

    If you are in robotics, you know that this was probably take 500 and that all the ones before failed for all kinds of reasons. If you looked at this door a few weeks after this, that door jamb was just beaten up. All the paint was chipped off of it. The door was all dented. The robot's gripper pads were broken. It was a mess, but we learned from it. We figured out how to better put all that software together to make that capability possible. Robotics is hard and that's fine, but my problem personally was that I had to live and work near this robot. My office was over there and the coffee machine was over there and I need coffee to get my work done. I'm fueled by coffee, so I would walk in front of the robot to get my coffee.

    Most of the time that was okay, but every once in a while, I'm walking in front of the robot and the software engineers would be like, "Stop messing up the point cloud. You know how long we've been taking to get that data? Now we're going to start all over again." I got screamed at just randomly, as far as I was concerned. I couldn't tell if it was looking for the door or if it was just sitting there charging or if it was off. It looked the same, no matter what it was doing. There was no way for me, as a normal human being who doesn't have my nose in a computer terminal, to know that it was trying to do anything. What we did was we went and got some help from people who could do this better than we could.

    Doug Dooley, who is the second author here, is a character animator from Pixar Animation Studios. If anyone can breathe life and readability into things that otherwise would make no sense, it's them. Doug did this. He said, "Okay, here's what your robot does right now. It sits there and stares and it totally ignores the fact that there are humans in its environment. Even if it is aware of them, even if it can perceive them, it ignores you and then it does this task. It's very functional. It does its job. What if instead you could show a little bit of what it's thinking about doing? Acknowledge the existence of others in the environment and then do your task. Does showing that forethought help at all?" That's the empirical question.

    The other thing that robots are really good at is knowing their state. They know they have a goal. They know if they've succeeded or failed at it. Usually, they succeed and just don't care. They just move on. What if instead it showed a little bit of a reaction to that success? There's a bit of a lift there. But of course, robots usually fail, because robotics is hard, and so most of the time our robots fail at a task, give up and just move on to the next task. But I think … Our hypothesis was maybe if we showed a little bit of reaction to that failure, that would help to make it more readable, more understandable, maybe even more relatable.

    Even though the robot fails, at least it feels bad about it, or expresses feeling bad about it. Like I said, the empirical question was: does it matter? It's cute, but does it help? It was not physically safe to run this study on the real robot, because we could have smacked people and that would have been bad because this robot was big and heavy, but we could run the study online. We ran it with Mechanical Turkers and we asked them questions like, what's going on in this video here? If you are with that person in the environment, what would you do? How would you rate this robot in terms of a bunch of semantic differential adjectives? We showed them a set of different tasks. We don't specifically care about opening doors. We also care about things like, how would you serve a drink to a person?

    How would you usher people into a conference space like this? How would you try to get help from people to get plugged in? Because that capability also took a long time to develop. We looked across a bunch of tasks and we ran a between-participants study, so each person was in one of these experiment conditions. A quarter of them saw no forethought and no reaction shown. We did a two-by-two study here. The robot either succeeded or failed at the task, and it either showed forethought or not, and it either showed a reaction to its success or failure or not. Because these were online studies, they are pretty quick to run. Ideally, you could run them in person as a follow-up.
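    As a rough sketch of what that two-by-two between-participants assignment looks like in practice (a minimal illustration with hypothetical condition labels, not the study's actual materials):

    ```python
    import itertools
    import random

    # The two manipulated variables described above: the robot either shows
    # forethought or not, and either shows a reaction to its outcome or not.
    # (Task success vs. failure was also varied across the videos shown.)
    FORETHOUGHT = ("forethought", "no_forethought")
    REACTION = ("reaction", "no_reaction")

    # The four cells of the 2x2 design; each participant sees exactly one.
    CONDITIONS = list(itertools.product(FORETHOUGHT, REACTION))


    def assign_participants(participant_ids):
        """Randomly assign each participant to one cell, keeping cells balanced."""
        ids = list(participant_ids)
        random.shuffle(ids)
        # Round-robin over the shuffled order gives roughly equal cell sizes.
        return {pid: CONDITIONS[i % len(CONDITIONS)] for i, pid in enumerate(ids)}


    print(assign_participants(range(8)))
    ```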

    Our hypothesis was that if you show forethought, thinking about opening that door before you open it, people would feel more positively towards the robot. If you showed a reaction to success or failure, that should help too. What happened? We got good results here, which are not surprising, but the things that we really cared about were approachability and appeal. Remember, this is a 450 pound piece of metal in your house. Feeling like you are safe and comfortable around it matters. The one that Doug, our character animator, cared about was appeal, and this is really important for character design.

    If you think about, like, Cruella de Vil, she's the bad guy in the movies and you are not supposed to like her, but she is appealing. She's interesting. She's a character that you care about and want to follow and see what her story becomes, and that's what that dimension taps into. This is the cooler one. If the robot actually functionally succeeded at its task, people felt like it was more competent than if it failed. That's a no duh. Keep an eye on the difference between those two means there. If your robot fails at the task but at least looks like it feels bad about failing, you get about the same bump in perceived competence, which is interesting because it didn't do the job, but it at least showed you that it understood that it messed up and it expressed something like an emotion that showed it felt bad about doing badly on it.

    I think the lessons here are, basically, that it can help to show forethought and a reaction to success or failure in terms of attitudes towards these kinds of systems. But more importantly, a lot of people today work on robots where they just emote. They sit there and they are like, "I'm happy. I'm sad. I'm angry." It's just emotion in a vacuum. I think we could actually use those emotional expressions to be goal specific, to make them functional, because actually emotion is about approach and avoidance: moving towards things that are good for us, staying away from things that are bad for us. It's very basic.

    If robots can express the things that they are leaning towards and the things that they are leaning away from, that can help us, as normal human beings who aren't sticking our noses in computer terminals, to understand what these robots are doing and what's going on in there. We can make better sense of them, like when you walk up to a dog. If you are familiar with dogs, you know how to read their body language. How can we make it like that with these systems too? Today, a lot of folks who actually spun out of Willow Garage are working on other systems like hotel delivery robots. If you come to the States and you stop by a hotel and you forgot your toothbrush, you call the front desk and tell them, "Hey, I forgot my toothbrush. Do you guys have any on hand?"

    That's actually nobody's job in the hotel, to run a toothbrush up to your room, not even the bellhops. They do luggage. Often, it takes a while for that toothbrush to get to you because nobody really has time for it. They have normal jobs that they are getting paid to do. This robot will come down to the front desk, the toothbrush gets put in its bin and then it runs it up to your room. It will knock on your door or call you on the phone and it will give you the toothbrush by popping its lid open. At the end of the interaction it will ask, "How's your day going so far?"

    You can give it a one to five star rating. If you give it a five star rating, it does a happy dance. People love the happy dance. If it doesn't get a five star rating, it will actually call the GM and let them know that this customer is not so happy and that they should follow up. What's funny is people like the happy dance so much that they'll often pretend to forget stuff and keep calling those robots so that they can do more happy dances and take selfies with the robot. Sometimes they get a little in the way of the robot. They love it so much they want to hug it, and then it can't navigate when kids are hugging it. I think there's something here about showing some kind of expression about caring.

    It may not inherently actually care, but it can show you what it approaches versus avoids, at least. All right. That's our robot-y robot. The rest of the stories I want to tell you are more like the networked society vision. This happened completely by accident, as a total side project that we weren't supposed to be working on, but we did it anyway because there's so much power in connecting people. This was my coworker, Dallas. Dallas lived in Indiana because he likes Indiana, his family is there and they don't want to move to California because it's just too expensive. We get that. Dallas was a voice in a box on the table.

    You probably have coworkers who look like this too. This is okay. It's not great, but it's okay. For Dallas, it became a real problem because he was one of our star electrical engineers on the team. Our other star electrical engineers, who were in California, had this way of making decisions together, which is they'd scream at each other until somebody won and then they made a decision and moved on. When you are screaming at each other and you are screaming on the phone, sometimes someone might accidentally hang up on you: we don't get to hear the rest of your argument, so we are going to make our decision and then move on.

    Dallas didn't have a lot of agency or a lot of say in what was happening on his team. Sometimes he'd fly out to California just to get in their faces and yell at them in person, but that's not very effective. Of course, they are engineers, so what they did was put Skype on a laptop on a cart, with the power cable and the Mac, and this made it harder to hang up on him. You might feel a little bit more guilty about doing that because he's looking at you, but it's not so hard to forget to bring him to the meeting, or to have that impromptu conversation at the little kitchen without him there because it just happened.

    The conversation just became the place where they made the decision, and it wasn't a formal meeting that was pre-scheduled where Dallas was supposed to be there, and so he was still getting left out of a lot of decisions. The reason I had to tell you about the big robot was this. One weekend Dallas … He makes battle bots, those robots that kill each other, and he decided to steal some body parts from our PR2. Those are $6,000 casters. This is Dallas. He came into work like this on a Monday morning for the standup meeting. It's like Skype on wheels, where you can roll yourself around instead of just being stuck on the laptop. Now, he didn't have to wait for someone to push him on the cart to get somewhere. He could just get there by himself, and he can block that doorway until you answer the question that he emailed you, that you've been ignoring for the last week.

    He has this presence now that's different than it was when he was just the voice in the box on the table. Honestly, the first time I saw this thing I thought it was super nerdy and it was not going to last and it was just silly, but I was wrong. Over many months, Dallas actually became my coworker. He became a friend, somebody I cared about. We got to meet his kids because of the time zone differences. The kids would actually log on and drive his robot around in the afternoons too because they were back from school. We got to know him as a person, which actually makes all the difference in the world for remote collaboration.

    Not just for socializing but for work too. Then the question was, are we just crazy? Do we only like this because we like robots, or is there something really there? Our team built 25 more of these prototype systems. You'll notice a lot of 80/20, for the MEs in the room. These are not the most efficiently built systems. They mostly had big lead acid batteries in the base, where you just cannot predict how much power you've got left, but they worked well enough for us to try out this idea. My job was to figure out, are we crazy? Would other people actually use it? We fielded these systems across a whole bunch of companies in the San Francisco Bay Area.

    We tried to get not just techie companies; we did try to get people or companies who had remote collaboration issues. A lot of them had persistent Skype systems that they'd shut down, but they had tried it, which means they felt that pain point and were trying to find solutions for it but hadn't found one just yet. I will mention we were not the first ones to invent these. Eric Paulos was making these in the 1990s at Berkeley and published work there. Eric was telling me, "That was really cool. It works, but the networks just weren't good enough." You could do that for about 15 minutes and then you would saturate the entire network and shut down Wi-Fi for the entire computer science building.

    You couldn't really run any long-term study using that stuff. There are other folks who have been doing that in Canada. This is Wayne Gretzky's robot that they use for hospitalized kids so that they could go to class, but again, those were really short-term uses. I think it's only because of the increased bandwidth and reliability of wireless networks that these things are now becoming real products and real companies. Now you'll see these systems used in places like hospitals and classrooms and office places. They are not super pervasive just yet, but they are doable and they are something that a lot of places are just experimenting with right now. There are a lot of important design decisions to make when you are trying to represent a person from someplace else.

    Ideally, you are not even going to think about the robot at all. It should just feel like you're talking to Dallas. It's like when you call home and you're chatting with, say, grandma. It should just feel like you're talking to grandma; you shouldn't be thinking about the device that's in your hand. It should just be a connection with the person and not about the machine itself. If you catch me saying these words, I just want you to know what I mean when I say them. If I say, "Local," I mean the people who are local to the physical device. They are usually the hub, the place where most people live and work. If I say, "Remote pilot," I mean the other person, who's remote, who's roboting into the meetings. What happened? This is at the very beginning of a very long set of field studies that we did.

    This was after the first few months of deploying the robots, and we were looking here at how remote collaboration teams would make use of these, or not make use of these, and why. Who would adopt them? What kinds of social norms would develop around these systems? We used a contextual inquiry approach to understanding what's going to happen, very open-ended. It usually took about three to four weeks for all the novelty effects to finally just go away, and so it takes a while to run studies like these. The remote pilots were located all around the world, even though their robot bodies were in the San Francisco Bay Area.

    The only reason for that was because the prototypes would break a lot, and so we had to be close enough to be able to go there and fix them. Of course, as you know, time zones are still an issue, and so everybody's favourite sysadmin, who was in Singapore and was using the robots to come into California, kept logging in at two in the morning, California time. We had done brown bag talks at all of these companies so that people would know what we were doing. What is this thing rolling around in my space? We had not done the brown bag for the cleaning crew, who were actually there at 2:00 in the morning, and so we ended up freaking out a whole bunch of the staff who worked in that office.

    Some of them quit because they were so upset about it, which was a shame. Now we've learned you've got to really brief everybody, everybody, not just the people who are the direct stakeholders here. What did they do? People mostly did the same things via robot that they do in person. They talk to each other. They share ideas. They socialize. I think what's more interesting than what they did was actually where they did it. If you look here, this is a very formal communication space. We have very standardized norms about how we should interact here. I do most of the talking in the beginning and then we do a bit of talking together at the end, but really the places where the robots got used were more informal communication settings.

    Those hallway conversations where the real decisions get made, or where people say, "Oh my God, can you believe what he just said at that meeting? That was ridiculous. We are totally not doing that." You need to be part of those conversations if you are going to get work done in these kinds of work settings. Also, if you think about where you would normally see a really big video conferencing system, it tends to be in a boardroom or meeting room. Again, those are formal communication spaces that are really different from where these ones seem to shine. Bonnie Nardi has done really interesting work for a long time on video-mediated communication and computer-mediated communication. She keeps finding these three trends in terms of things that people use these systems for.

    I think they are the same for these robots. I'm going to go through each one in turn. The first one is just showing up. Getting your butt in that seat to sit with your team and talk about your project shows that you are committed to your team. You are committed to the mission. This was actually Mike Beltzner, who was the director for Firefox at the time. Mike lived in Toronto, but his entire team was in Mountain View, California. He would normally come to this all-hands meeting via the video camera in the back, but he decided to try to come via robot instead, and that made him much more visible to his team.

    He went through the trouble of trying to be a little more physically present, as opposed to just being one of the anonymous people inside of that camera. Now they can see him too and pounce on him after the all-hands to ask him questions about how things are going with the project. This is not perfect. Actually, the woman who is sitting behind him can't see because his big head is blocking the way. There are lots of workarounds that you have to do to make it work. There's a cost, not only a benefit, but for that particular team it worked pretty well. He actually had a situation where somebody had broken the build a few seconds before release, the night before.

    He had to talk with that person, but it's not something you want to do by saying, "Hey, let's set up a meeting," because you are going to freak out that employee. What he did was a bunch of drive-bys to see when that guy was available, when he took off his big headphones and showed that he was ready to talk. He was able to time his approach to have that difficult conversation with that software engineer on the team in a more effective way than one that would totally scare him. They talked about, "Why did you check in that code right then? That was not part of the feature list. Don't do that again please." That worked out better via robot than it would have via all the other communication channels that they had.

    The other one I mentioned was that getting people's attention can be really hard when you are remote. Sending a whole bunch of emails often doesn't work. When you can come in via robot, now you can pounce on people just like we do in person. One of the tricky things about these, though, was you could actually hear them coming, and so whenever Dallas needed to talk to me about something and I really didn't want to talk to him, I could actually hear him coming and run away. I would leave the building because I knew he couldn't get through the doors. There are workarounds for this, just like you can hear someone's footsteps coming down the hallway.

    It's comparable but if they do catch you, they will keep your attention until they are done talking to you. In this way, you are more present and more in people's faces literally in the workspace. I think the most … The one that sounds the most touchy-feely but is actually the most important is this one. Getting to know your coworkers as human beings, as people that you care about, building that rapport is what matters more than all the other things. Here, the guys would often play pool at the end of the workday. This was a pretty … Not a great pool table and not necessarily great players. The guys here are there via robot. They can't play pool because they don't have robot arms but they can totally heckle the guys who are playing pool and oh my, they did.

    They get to be part of the team, making fun of their teammates and joking around with them after work, just as they would if … Well, almost as they would if they were there in person. This ended up turning into a spinout that was called … is called Suitable Technologies. They make the Beam. This is what the user interface looked like at the time that they left it. It's just like driving a video game. You drag your body around via the GUI. You could use arrow keys if you wanted to. We had to put a lot of mirrors in this space because a lot of people would forget what they look like there. This is what you look like via robot, which is something that's really easy to forget.

    You feel like you are anonymous but really you are not. You are on display. That was one of the design rules that we built into these systems. Every once in a while, we'd have a glitch where the screen would turn off and when the screen is off and you are rolling around with cameras in people's faces, you are creepy. That makes you a surveillance device. Our design rule became if you see me, I see you. This is a main reason why most of those persistent Skype systems were shut off. People felt like they were being watched but that they couldn't watch back. If you don't show video and show your face, we would actually shut down your connection to the robot because that's just not fair.
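    As a minimal sketch, that "if you see me, I see you" rule is essentially a symmetry check on the connection. The names, fields and grace period below are hypothetical, not Suitable Technologies' actual implementation.

    ```python
    from dataclasses import dataclass


    @dataclass
    class Session:
        pilot_video_on: bool   # is the remote pilot streaming their face?
        screen_working: bool   # is the robot's screen actually displaying it?
        connected: bool = True


    GRACE_PERIOD_S = 10.0  # tolerate brief glitches before cutting the link


    def enforce_symmetry(session: Session, face_hidden_for_s: float) -> Session:
        """Design rule: if the locals can't see the pilot, the pilot can't stay on."""
        showing_face = session.pilot_video_on and session.screen_working
        if not showing_face and face_hidden_for_s > GRACE_PERIOD_S:
            # Shut down the connection rather than allow anonymous lurking.
            session.connected = False
        return session


    # A pilot who hides their video gets disconnected once the grace period passes.
    s = Session(pilot_video_on=False, screen_working=True)
    print(enforce_symmetry(s, face_hidden_for_s=12.0).connected)  # False
    ```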

    It's putting people, the remote people, in a very powerful position if they get to be anonymous, so they don't get to be anonymous here. It's a communication device, and so we wanted, as much as possible, for it to be clear and transparent about who is on the other end and what they can see. You may have seen this before, so I'm going to nip this one in the bud because I always get this question. People will say, "You just built Shelbot." If you look at Shelbot, that's our robot. The guys at The Big Bang Theory are very smart. They saw a bunch of our dinky little YouTube videos that nobody else watched and they got super excited and actually invited our team to go down there to film this episode, which is great because Kurt and Dallas, the two guys who built this thing, love The Big Bang Theory.

    Kurt even played The Big Bang Theory theme song as his wedding march down the aisle. He loves it that much. For him, this was like heaven on earth. They shot this whole episode actually a little bit in the dark, because the screen was so dim that the film cameras couldn't catch it, but it was worth it for that episode. If you've seen Shelbot, that's actually one of our old prototypes. Steve Wozniak was the special guest on this episode, so he actually signed the back of one of their heads, which tickled everybody pink, of course. Also, we've seen people use this for things that we were not intending. If you were Edward Snowden, you'd have some reasons to not be in the United States.

    You can just send your robot instead and still talk on the TED stage without getting arrested, because who cares if they arrest your robot body? At least it's not really you. Again, these are not things that we meant for these to be used for, but it's interesting to see how people adopt it anyway and what kinds of use cases appear out of that.

    KATE MURRAY: You are listening to Associate Professor Leila Takayama from the University of California, Santa Cruz, talking as the keynote speaker at the Networked Society Symposium at the University of Melbourne in October 2018. Her talk is titled 'Connecting People via Networked Robotics'. You can find more about this talk and all the links to references Leila makes on our website networkedsociety.unimelb.edu.au. And now, back to Leila.

    LEILA TAKAYAMA: Our basic methodology has been just try it. Put it out in the field, some prototype that barely works or works well enough and do a bunch of exploratory field studies. That's what I just showed you now. The next set of studies I'm going to show you are much more controlled. These are like, okay we got this insight. We saw this problem but what are we going to do about it? How do we design the system better in a way that's going to better support people and actually do those comparisons so we can make design decisions about how to build it better next?

    We ran controlled studies, eventually built a new prototype and fielded that one, and then rinse and repeat. I'm going to give you just a few examples of those controlled user studies because they were very humbling to us in terms of learning what better could look like with these systems. Of course, if you're in the social sciences there's this notion of ingroupiness. If someone is an ingroup member, they tend to be a teammate. They are someone you feel close to and they exist in contrast to outgroup members. Really, I think what we are trying to do with these remote presence systems is to make remote collaborators feel more like ingroup members.

    You want them to be part of the team more than they already are. It seems like an obvious design goal. We thought we had an obvious solution, but we got a bit of a surprise on this one. There's this thing that we know from the Computers-Are-Social-Actors paradigm. My PhD advisor, Cliff Nass, did a lot of work on how people actually respond very socially to communication technologies. Even though we deny it and we wish we didn't and we think we are rational beings, we are realising that we are not. We fall back on our old brains when it comes to interacting with more autonomous kinds of systems.

    He did these studies where he found that if you put a blue frame around your computer screen and you put a blue wristband on someone's wrist, they are nicer to that computer. They spend more time with it. They give it better ratings than if you put a red border around that screen and a blue wristband around your wrist. It's how sports teams work. We figured, okay what if we let people decorate the robots with the same colours that they are on? It seems like something people would do. Actually, people love decorating these. You can't stop them from decorating them so you might as well use it for something. We let people decorate them and then saw what happened in a collaborative decision-making exercise.

    If you were a participant in this study, you'd be randomly assigned to one of two conditions. In one of those conditions, you might be told to go ahead and decorate the robot, and we'll give you a bunch of one colour of things. In the other condition, you don't get to do that. You just start interacting, and that's how the new systems get used. Then you do a collaborative decision-making task, talk with the person a bit and then fill out a questionnaire. This is again a between-participants study. We gender-balanced across the conditions. We had decorating the robot versus not decorating the robot. We also used one other independent variable, which we know makes a difference in ingroup behaviour.

    We told people, "In this collaborative decision-making task, we are going to score you together as a team." That's the interdependent scoring but the other half of the people, we told them, "You are going to have a collaborative decision-making task and then you are going to decide what you really think at the end of the study and we are going to grade you as an individual." That's the independent scoring and that makes a difference for ingroup versus outgroup kinds of behaviour. We hypothesised that interdependent scoring would make people more ingroupie and we did find that people would disclose more information about themselves to their partner if they were in the interdependent condition.

    They also ascribed fewer animal-like emotions to their partner, which is a pretty strong measure of how ingroupie you feel people are with you. There are animal-like emotions and then more nuanced human-like emotions, like angst. We tend to ascribe animal-like emotions to outgroup members, as opposed to ingroup members. They also liked the other remote pilot more. That's a no duh. This is the thing that we were really wondering about: if we let them decorate it, will it help? We were really hoping that it would. We figured with decorating, you should get more positive responses. The answer is nope and nope, in a statistically significant way.

    If you decorated the robot, you actually felt like you cooperated less with your partner, and beyond that you really didn't want to talk with them after the end of the study. It just backfired in our faces; that was not the direction that we predicted. We were like, "What happened?" We did some follow-up interviews with all the participants at the end of the study where we disclosed, "This is what we are studying. Here is what we thought was going to happen. It doesn't look like that happened. What's going on?" The interesting thing that we got out of that qualitative data was they said, "Well, when I decorated the robot, I really liked the robot, but who's this person logging into my robot?"

    They felt like their robot was being invaded by this random human out there. I don't like that person. What are they doing in my machine? They were feeling more attached to the robot but not to the person, which was not the point. We wanted them to feel more connected to the person, not the machine. Even though Computers-Are-Social-Actors still held, and people still got more connected to the machine when they decorated it, it backfired in this context of telepresence, where we don't care about the machine. We care about the interpersonal connection between the people. We just finished running a follow-up study on this with some partners at USC, where we let both the remote pilots and the locals decorate the robots together.

    Hopefully, that should work better, because then you are both invested in the decorating of the system and it should feel more like the remote pilot was also part of that process and was part of your team. We'll see. One other thing that we heard a lot during our field studies was, "My grandma would love this." Keep in mind, these are 20-somethings in Silicon Valley. We were like, "Well, maybe grandma would love it. We actually do want to know what she would think about it." Usually, the use case that they would propose to us was, "You should totally put this in retirement homes so that we can visit grandma via robot when we can't visit her in person."

    There's good intention there, for sure, but there are also a lot of potentially big red flags. What we decided to do was go and talk to grandma and grandpa themselves. We brought in 12 folks from the local community who were still living independently, were retired and were thinking about maybe moving into retirement communities, but maybe not. Keep in mind, these are more independently living folks in Silicon Valley, so they are probably Silver Surfers who are very comfortable with tech. They were amazing. Their stories were so good and I'll just share a few of them with you here, but we've got so many more. We brought them into our lab to actually try the robots out. If you ask a person off the street, "Would you like to use this robot?" they'd say, "Sure." Or they'd be like, "Oh my God, no. They are going to take over the world. They are terrible."

    They are making those decisions based upon sci-fi and the media, which are not necessarily the best sources of information for what robots are actually like or what they can actually do. We brought them in and had them drive the robots around themselves, and then get visited by people and via the other robot too, so that they had firsthand experience with it. We let them play with it as long as they wanted, and then we did the interview, so that at least their opinions would be grounded in the reality of the system today. Who did they want to connect with? It was not the grandkids.

    I mean, grandkids sort of, but really they wanted to hang out with their friends. That was really important to this particular community of people because they are like, "I don't want my adult children logging into my living room whenever the heck they want. That's weird and an invasion of my privacy. I want to talk to them when I want to talk to them, and when they call, sometimes I don't pick up the phone. Sometimes I do, but it's up to me." They didn't like that the kids could just barge in anytime they wanted by turning on the robot system. We thought that reducing social isolation was going to be the biggest benefit that they would see in the system.

    Actually, they just wanted to see each other and, really, couldn't you just do that via Skype? This is where we started questioning, do you really need a robot for this or are there other potential solutions out there? I think there are other solutions, but it was interesting the way that they prioritised these sets of benefits that you might see for using these kinds of systems. The other thing that we were also wrong about was the things they worried about. I, at least, was really worried about safety because these are rolling pieces of metal in your home, potentially. Safety was not the number one issue that they talked about.

    The number one was etiquette. We have spent decades developing a set of social norms about how you answer a phone, how you open a phone call, how you close a phone call, how you move the conversation along in these mediated communication settings. With the robots, it's awkward. Someone just … Boop, they appear and they start rolling around your space awkwardly, often running into things, and then they talk with you. Especially at the end of the conversation, usually what happens is the person will either forget that they are via robot and they just hang up the phone, which then leaves a dead robot body in your space that you've got to go put away.

    Or they spend many, many, many minutes trying to find the charging station, driving backwards, trying to say goodbye while driving away, running into more stuff and eventually hanging up. It's just weird. Here we ended up trying to develop more guidelines for etiquette for using these systems. If someone hangs up when the robot is not on the charging station, we actually pop up a little message that says, "Hey, that was rude. You should probably put your robot body away. You don't want to do that to locals." The coolest thing that we got out of these interviews, I think, was that we were totally wrong about where these systems should be used.

    Remember, the use case that we had gone in with was having people visit grandma and grandpa in their home. Really, what the folks in our study were talking about was, "I want to get out. I am tired of being here all the time. It's really hard for me to get around. I really want to go hang out with my friends and family outside of this place." Number one was just things like going to a baseball game and being able to heckle the players from the stands alongside their family. Or going to the orchestra or the symphony, or whatever rock concert, with their friends and family, so that when they hang out during intermission, they can still talk and schmooze the way that they would if they were there in person.

    It's not that they don't do this already. It's that it would be easier to do it and sometimes it is too hard to get there in person. Really, you need to think about what's the alternative to doing this? Maybe they can't go to that castle in France because the flight would be too tough to do but this will be better than not being able to go at all, bringing a system like this along. This flipped the use case on its head. We've been talking about putting these robots in people's homes and actually, it turns out the more useful place to put these is out in the world where people want to go visit but it's too hard to get to. Nowadays we see a lot more of these.

    They get put in places like conference centers, music halls and sports stadiums, where people want to go but it may be too difficult for them to get there at that particular time. That was another humbling study where we learned not only what we were wrong about but also new directions for where we could push that design. More for supporting friends, not just family. Giving people more control over who they let into their space and when, and the plausible deniability of being able to say, "Oh, we didn't hear you. Sorry. Next time. Try calling back later." The etiquette was a really big deal, so we started building in more tutorials for when people are onboarding, so that they can better understand how you might use these in a more polite way that doesn't get in the way of the locals.

    All right. This is the last one. I mentioned the running into things. I don't know if you remember what it's like to learn how to drive a car. It's not easy. It takes many, many years. Learning how to drive a robot body is just as hard. At one of our field sites, I remember we had this one guy who really meant well, but he really screwed up. He logged into the robot and what he didn't know was that his camera, his little eyeball, was turned 90 degrees to the left. He thought that he was looking forward, but his point of view was off by 90 degrees. He started trying to drive to his boss's office for his management mentoring meeting and, because his eyeball was crooked, he went zigzagging through this big open floor space, getting super in the way, running into one woman's desk and knocking her stuff over.

    She got really mad at him, and he was laughing really, really loudly because he was embarrassed, not realizing that his volume was cranked all the way up. In a big room full of software engineering consultants with clients sitting right there, that is terrible. It was an awful user experience. He made a lot of enemies that day and it wasn't his fault. The system was just too hard to drive. That was really on us, not on him, and so we felt bad about it and went back to the office. We were like, "We've got to make this easier to do." One of the ideas, of course, was, why don't we add some assistance? Don't let him run into her desk. Even if I, as the robot operator, am trying to drive into a chair, instead of letting me drive into the chair, our laser range finder says, "Nope, there's a chair there, so we are going to plan a path around the chair instead of through it."
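    The assistance she is describing amounts to filtering the operator's command through an obstacle check before it reaches the wheels. Here is a toy sketch of that idea, with the laser range finder reduced to a single distance reading and the path planning simplified to just stopping; all names and thresholds are hypothetical.

    ```python
    SAFE_DISTANCE_M = 0.5  # closest we let the robot get to an obstacle ahead


    def assisted_drive(operator_cmd_mps: float, obstacle_distance_m: float) -> float:
        """Return the forward velocity actually sent to the base.

        operator_cmd_mps: requested forward velocity in m/s (positive = forward).
        obstacle_distance_m: nearest obstacle ahead, from the laser range finder.
        """
        if operator_cmd_mps > 0 and obstacle_distance_m < SAFE_DISTANCE_M:
            # Override: don't drive into the chair, whatever the operator asked for.
            # A real system would plan a path around the obstacle instead.
            return 0.0
        return operator_cmd_mps


    print(assisted_drive(0.8, obstacle_distance_m=0.3))  # 0.0: assistance overrides
    print(assisted_drive(0.8, obstacle_distance_m=2.0))  # 0.8: command passes through
    ```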

    Even if the operator makes a mistake, the system can override them to help them do the right thing instead of disrupting their coworkers. We thought this was a good idea. It mostly is. We set them up in the lab where they had a typical office space. There's a desk and some chairs and some other stuff, and we told them, "Go ahead and practice this obstacle course for a while. Let us know when you are ready to try it out. Then we are going to ask you to do this as quickly and accurately as you can. Get through the course and hit nothing, if possible." Half of the people were given autonomous assistance; we turned our lidar on. And half of the people were not given the autonomous assistance, which was just how the system normally was.

    If they wanted to hit things, they could hit things. The other dimension that we needed to worry about, or thought we might need to worry about, was this personality dimension. If you've heard of locus of control, this comes from the '60s and '70s in social psychology. If you have a strong external locus of control, it means basically you believe in fate. You just roll with the punches and you take things as they come. If you have a strong internal locus of control, it means I am the master of my own destiny. When I succeed, it's because I'm great. When I fail, it's because I'm terrible.

    You take more of an internal stance in terms of control and responsibility, and people tend to fall somewhere on the spectrum here. We split them into two groups after giving them a standardized measure for locus of control, just to see if it mattered. We expected that when we gave them assistance, they should hit fewer things and drive the course faster. Here's the surprise for us: they did hit fewer things, but they took longer to get through the obstacle course, which was not what we were hoping for. That was very, very statistically significant, so we had to dig a little deeper.

    We had been worried about this locus of control thing. We figured that if people need to be in more control, they might have trouble with letting go of control. Indeed, if you split the data this way, people who have a strong internal locus of control took longer to finish the obstacle course than the people who scored lower. More specifically, that tall yellow bar is statistically significantly different from the other three. What's happening here is that if you try to override, with autonomous assistance, a person who doesn't want your assistance, you get into trouble. They are going to fight the autonomous help, because if I have a strong internal locus of control and I want to hit that chair, I'm going to hit that chair.

    If you don't let me hit that chair, I'm going to keep trying to hit that chair. This personality dimension can really matter when it comes to the task performance of people using bits of autonomy as they are driving these cars and robots. Things are now adding bits of autonomy that supposedly try to help us do better, but sometimes it can backfire depending on who that person is and where they fall on this particular dimension. It may also happen with other dimensions too. We came across this also in another project that's called Robots for Humanity, and this was started by Henry Evans, the guy sitting right here. Henry had a stroke, now more than 10 years ago, that left him quadriplegic and mute. Henry saw a robot, the big robot, on CNN one day and just gave us a call and was like, "Hey, I live nearby. I want to try your robot." We said, "Okay, sure." Of course, going in and talking with Henry, we were asking him things like, "Well, what are your pain points?

    What are things you think you'd want to do with the robot?" That way we could see if we could develop those capabilities in the PR2 and try them out with him. His answer was really interesting. Henry said, "I already have caregivers. My wife and my family take care of pretty much everything and we've got professional medical staff around all the time. I don't need another agent doing stuff for me. What I really need is to do it myself." He's one of the strong internal locus of control guys. He's like, "I'm tired of people doing stuff for me. I don't need a robot doing stuff for me. It's my turn to do things on my own."

    For him, what we ended up doing was building user interfaces that he could control the robot through. Here, this was his number one use case. He said, "You don't realize this because you can use your arms and hands, but your nose gets itchy about 80 times a day and you can just scratch it. I can't. It drives me crazy and I'm not going to ask the nurse to come and scratch my nose because that's so small and annoying, but it gets really annoying when you can't scratch your nose." He wanted a nose scratcher. We 3D printed this little nose scratcher end effector, and Henry scratched his own nose using the user interface.

    Remember, here we could have tried to program the robot to do it autonomously, but that's not what he wanted, because of the sense of human dignity, wanting to get it done yourself. The other thing that he did, that scared us so badly, was he said, "Jane, my wife …" She's amazing, by the way. Jane would shave Henry, but Henry used to actually shave himself much closer than she does. She's just trying to be careful because she doesn't want to cut him, but he always complained about it, like, "She doesn't do it right." He's like, "I'm going to shave myself with the robot," which … Oh my goodness, giving sharp blades to a robot next to someone's face.

    Henry … We didn't … We gave him an electric razor, not a sharp, straight blade, and he did it. He shaved himself. The robot, of course, had to have some force feedback so that it wouldn't push all the way through his face. It was a little bit uneven, I think, but obviously he was having a good time and felt good about getting this done. He's actually a really good robot operator and, through many years of physical therapy, has gained a little bit of control back in his left hand. He can do even more now with the interface. But that was a fun and terrifying demonstration of looking at how you would build interfaces to enable people to do things for themselves, as opposed to just building robots that do stuff for you.

    Because you've already got people to do that. I do want to mention, none of this work was stuff that I did by myself. These are a bunch of big collaborations that we had with a lot of good friends and colleagues, both inside of Willow Garage and outside, from a bunch of different organizations. My job is usually to identify the problem and figure out, who's the right team to help work on this thing? Then we do it together, especially with those big field studies. We ended up needing quite a few hands on deck for collecting all that field study data together. It's been a lot of fun. I think this is a really rich research area to explore more.

    Even though I know everybody is excited about autonomous robots running around and doing stuff, and that is exciting, I think the more near-term future is actually in the telepresence space, with people teleoperating robots to do things themselves. We can add bits of autonomy over time until maybe one day we get to Rosie, but it's going to be a while before we get there. Before that, what are we going to do today that actually provides value to people that we care about, in ways that they care about too?

    These are just some of the questions that I'm working on now at UC Santa Cruz, really trying to figure out … There are a lot of social problems out there and a lot of people want to throw robots at those problems. I think we should go in with a healthy degree of skepticism, maybe disillusionment, about what robots can really do, but I do think there are things in robotics that are ready for harvesting now that might make people's quality of life better in some way. With that, I just want to say thank you. Thanks for your attention.

    KATE MURRAY: Well, thank you for joining us for Associate Professor Leila Takayama's talk, 'Connecting People via Networked Robotics'. This was the keynote talk at the University of Melbourne's Networked Society Symposium 2018. I hope you found it as fascinating as I did. You can visit our website networkedsociety.unimelb.edu.au to find links to Leila Takayama's work and all the references she made in the talk. You can also learn more about the Networked Society Institute at the University of Melbourne and hear more great speeches from thought leaders, interviews and articles; there is a lot happening on there. You can find us on Twitter @MelbNSI. We'd love to hear from you. My name is Kate Murray, thanks for listening.

About A/Professor Leila Takayama

Takayama is a Human-Computer Interaction (HCI) and Human-Robot Interaction (HRI) researcher with expertise in experimental social science approaches to studying how people interact through and with computers and robots. She is currently an acting associate professor of Psychology at the University of California, Santa Cruz. She is also a founder and researcher at Hoku Labs and was previously a senior user experience researcher at Google X.

About the Networked Society Symposium

The Networked Society Symposium is an annual flagship event showcasing the breadth of research occurring at the University of Melbourne's Networked Society Institute. It is held at the Melbourne School of Design on the University of Melbourne Parkville campus and is a free event, open to all.

It provides an opportunity to connect, engage and debate the matters essential to the networked society. The symposium also provides networking opportunities with researchers, industry, policy makers and the community. The day includes research presentations, keynote speakers, interactive demonstrations, panel discussions, and plenty of catered breaks in between.

Click through for details from Networked Society Symposium 2018 (NSS18), Networked Society Symposium 2017 (NSS17) and Networked Society Symposium 2016 (NSS2016).

More Information

Kate Murray