Human-centered robotics: Challenges of the near future
30 March, 2021

  

For many years, science fiction has shown us visions of future worlds in which robots and other intelligent systems coexist and collaborate with humans in close proximity. Today we see the first autonomous vehicles on the roads and the first service robots inside our homes. Fiction, it seems, is finally getting closer to reality.

 

However, to achieve a beneficial coexistence between humans and robots, the interaction between them must be properly considered in the development of autonomous systems, ensuring that it is safe, intuitive and effective.

This was the topic of a webinar organized by the CCG - the first in a series called “InteracTalk” that focuses on issues linked to human-machine interaction in different application contexts.

The discussion included the participation of professors Estela Bicho, Alexandra Fernandes and Dominic Noy.

Together, the panel offered a broad perspective on the research challenges facing the field of human-robot interaction, the importance of understanding users' needs, and what can be expected in terms of future developments.

 

Safe, intuitive and effective interaction

Combining safety, intuitiveness and effectiveness in human-robot interaction is not an easy task. Traditionally, robots have been used mainly in repetitive, high-precision industrial tasks. They are generally considered efficient and safe, but this safety almost always depends on their being isolated in closed work cells, away from contact with humans. This enforced distance makes close-proximity interaction difficult to conceptualize. Some manufacturers are now producing robots certified specifically for close-range interaction with humans. Typically, these robots have sensors and algorithms that immobilize them immediately when physical contact is detected, to prevent the robot from hurting the human. However, these robots are essentially reactive and therefore limited in terms of interaction. A true interaction between two agents, as we conceive it between humans, requires that each be capable of perceiving the other, anticipating its actions and acting in a complementary way, ensuring joint action and preventing conflicting movements.
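
As a rough illustration, the reactive behaviour described above can be sketched as a simple monitoring loop. The robot API names (read_joint_torques, expected_joint_torques, emergency_stop) and the threshold value below are hypothetical assumptions for illustration, not taken from any specific manufacturer.

```python
# Minimal sketch of a reactive safety stop, assuming a hypothetical robot API.
CONTACT_TORQUE_THRESHOLD = 5.0  # Nm; illustrative value, tuned per robot in practice


def monitor_contact(robot):
    """Stop the robot as soon as measured torques suggest unexpected physical contact."""
    while robot.is_running():
        measured = robot.read_joint_torques()        # hypothetical sensor call
        expected = robot.expected_joint_torques()    # torques predicted by the motion model
        # A large gap between measured and expected torque is interpreted as contact.
        if any(abs(m - e) > CONTACT_TORQUE_THRESHOLD for m, e in zip(measured, expected)):
            robot.emergency_stop()                   # purely reactive: stop, do not re-plan
            break
```

A scheme like this only reacts to contact after it happens; it does not perceive or anticipate the human, which is why such robots remain limited as interaction partners.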

 

There is, however, a growing number of exceptions, with several attempts to develop social robots for close interaction and cooperation with humans. These tend to be more sophisticated, with advanced sensing and complex artificial intelligence algorithms, and are optimized to perform specific, restricted tasks. However, when they interact with real users outside the laboratory, their perceived usefulness drops: either they are not considered useful, or the added complexity of the interaction is not offset by gains in task efficiency. This may happen because the technology is not yet mature enough, but also because users' needs are not properly taken into account during development.

Autonomous driving systems offer a different field of application for similar concerns. Considering the interaction between AVs and pedestrians, a safe and efficient interaction can be defined as one in which both the pedestrian and the vehicle reach their destination, passing each other with minimal interference, preferably without either having to stop. For the pedestrian, an intuitive interaction is one in which they can predict the vehicle's actions using the same type of cues they would use in an interaction with a conventional vehicle. Although more sophisticated solutions can be devised, simple solutions can sometimes be found by focusing on the user's needs.

 

Communication

When we think about communication between humans, it is language that comes to mind first. We might be tempted to think that an ideal robot designed for close interaction should be equipped with verbal communication capabilities. However, human-to-human communication is largely nonverbal. When we interact with each other we exchange cues such as gestures and actions, facial expressions or gaze. By interpreting these cues, framed in the right context, we can infer the intentions of our “partner”. For a robot that interacts with a human, mimicking these cues may be easier if it has a close-to-human shape and moves like a human. Behavioural and neuro-cognitive studies appear to support this hypothesis. On the other hand, science fiction gives us several examples of robots that, despite having non-human shapes and being unable to communicate verbally, are incredibly expressive and can easily convey intentions and even emotions through sounds, lights or simple movement. We must also bear in mind that the humanoid form must be used with care, so as not to induce an overestimation of the robot's capabilities, which could result in disappointment and negatively affect acceptance.

In the interaction between AVs and pedestrians, research shows that a pedestrian's decision to cross a road in front of an approaching vehicle is mostly based on implicit visual cues, especially the vehicle's acceleration pattern. From these, the pedestrian infers the intention of the driver behind the wheel. Direct communication between pedestrian and driver only comes into play in more ambiguous situations. AVs are driverless, and the best way to replace this communication is still an active research topic. Dedicated human-machine interfaces, commonly called “External Human-Machine Interfaces” (eHMIs), are now being considered, but they raise questions about the best way to transmit messages that are easily and universally understood by pedestrians. Alternatively, or in addition, it seems reasonable to consider that AVs should intentionally exhibit movement patterns similar to those of human drivers, which are already familiar to pedestrians. This can facilitate the recognition of intentions and encourage acceptance.
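
As a rough illustration of how such implicit cues can be read, the pedestrian's decision can be caricatured as a simple rule over the vehicle's observed deceleration and time-to-arrival. The class, function and threshold values below are hypothetical, chosen only for illustration; they are not a model used by any of the speakers or their organizations.

```python
# Toy decision rule illustrating the implicit cues pedestrians rely on.
from dataclasses import dataclass


@dataclass
class VehicleObservation:
    distance_m: float   # distance from the vehicle to the crossing point
    speed_mps: float    # current speed (m/s)
    accel_mps2: float   # estimated acceleration (m/s^2); negative means braking


def seems_to_yield(obs: VehicleObservation,
                   braking_threshold: float = -1.0,
                   safe_gap_s: float = 4.0) -> bool:
    """Return True if the observed motion pattern suggests the vehicle is yielding."""
    # Clear, sustained braking is the strongest implicit cue of an intention to yield.
    if obs.accel_mps2 <= braking_threshold:
        return True
    # Otherwise, only a long time-to-arrival leaves enough margin to cross safely.
    time_to_arrival = obs.distance_m / max(obs.speed_mps, 0.1)
    return time_to_arrival > safe_gap_s
```

An eHMI, or a deliberately human-like braking profile, is essentially a way of making cues like these easier for pedestrians to read.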

 

A multidisciplinary challenge

Human-robot interaction cannot be approached solely from the engineering side, as the multidisciplinary panel can attest. Psychology, for example, plays an essential role. On the one hand, understanding the user and their needs maximizes usability and acceptance. On the other hand, robotics and autonomous driving applications rely heavily on machine learning algorithms. Annotating the databases on which these algorithms depend and understanding the causes of possible biases and artifacts are challenges where psychology can be particularly useful. Furthermore, the use of models inspired by neuro-cognitive and behavioural studies, in combination with machine learning, could be a way to provide robots with advanced cognitive capabilities, reducing over-specialization (a recurring problem in the field of machine learning) and improving the generalization of the resulting models.

 

Ethics and security

“Any technology that can be used for good can also be used for evil”, and a reality in which robots are part of our lives will undoubtedly confront us with important questions about ethics and safety.

Cybersecurity will be an important aspect. The more power and responsibility we cede to robots, the more important it will be for developers to minimize the risk of bad actors interfering with these machines. On the other hand, intelligence and connectivity are not the same thing, and it may make sense to develop systems and algorithms that rely less on cloud computing, also as a way to minimize the risk of interference.

Regulation capable of dealing with the specificities of robots designed to interact closely with us will have to be developed. Current standards are aimed at robots that behave deterministically, in restricted contexts. Social robots will “live” in more complex and dynamic environments, and the algorithms that control them are often based on machine learning and dynamic computing processes, which can result in non-deterministic behaviours. New standards will have to be developed that can deal with these behaviours without being so restrictive as to limit the usefulness of robots.

Appropriate processes will also be needed to deal with accidents involving humans and machines. Autonomous driving is at the forefront of this issue today, as the first traffic accidents involving AVs and humans occur. Evaluating decisions made by opaque algorithms and assigning responsibilities are challenges faced by regulatory authorities, which sometimes lack human resources with the knowledge and skills to evaluate these events.

It is also important to carefully manage expectations. Overestimating the capabilities of AVs can lead to dangerous situations, as several examples of misuse of Tesla vehicles attest. Users will need to be educated to understand that AVs have limitations and may not always be able to prevent accidents, especially in situations of incorrect use.


 

What the future holds

It is difficult to predict what the future of human-robot interaction, collaborative robotics or autonomous vehicles will be. History is full of examples of technologies that ended up having very different uses and impacts than initially imagined, and the same could happen with social robotics. The COVID-19 pandemic and self-imposed physical distancing are a good example: an important concern today, but very difficult to imagine just over a year ago. It has nonetheless enhanced the role of human-robot teams as a way of maintaining productivity in collaborative industrial tasks while minimizing social contact. Another example related to the pandemic is the growing demand for delivery services, which has motivated a shift of autonomous-driving research investment from transporting people to transporting goods.

Humans are also expected to adapt their behaviours and environments to robots if they see an advantage in doing so. This is already happening to some extent with Roomba vacuum cleaners and similar devices, with users making changes to their homes that allow these robots to work more efficiently. The same will likely happen with other robots, which will also end up shaping our environments.

Ultimately, we don't know what the future holds for human-robot interaction. Maybe we will have robots walking or flying beside us, robot taxis, or courier robots making deliveries. Maybe they will talk, or simply make sounds and flash lights. Maybe they will have a humanoid shape, or something very different, closer to what we see in films like Wall-E. Maybe it will be a combination of all of this. We do not know. But it is reasonable to assume that, as with other technologies, human needs will define what that future will be, even if it is a future that not even science fiction can yet predict.


By Emanuel Sousa - HTIR Development Coordinator


Estela Bicho is a robotics researcher at the University of Minho, with extensive experience in developing neuro-inspired control architectures for human-robot collaboration;

Alexandra Fernandes is an experimental psychologist and senior researcher in the field of human factors at the Institute for Energy Technology (Norway) where she studies human-machine and human-robot interaction;

Dominic Noy is a psychologist and data scientist at Humanizing Autonomy (United Kingdom) where he develops algorithms for predicting pedestrian behavior, to be applied in autonomous vehicles (AVs).


 

CATEGORIES

HMI, Human Factors