The project C1 investigates spatial alignment and
complementary alignment of gestures between a human and an artificial agent who interact in a
shared space. The mental representation of this interaction space facilitates the coordination
of parallel actions. In the artificial intelligence part the consequences of spatial
perspective taking are explored in a virtual setup where both partners jointly move and
manipulate virtual objects. The cognitive robotics part focuses on perception and gesture
control issues in a physical setup where a human and a robotic receptionist operate on the
same map between them.
[Figure: Interaction Space during Human-Robot-Interaction on a …]
When two persons are closely facing each other, the
interaction space is formed by the overlapping of the two persons’ peripersonal spaces (the
space immediately surrounding the body). We assume that the mental representation of the
interaction space needs to be aligned in both partners in order to ensure a smooth cooperation
between them. A principal goal of the project is to enhance artificial agents by an awareness
of interaction space. Therefore, a spatial representation for the peripersonal and
interpersonal action space is developed and methods are devised for dynamic alignment of
interpersonal space representations. So far, significant progress has been made in two
robotic scenarios: towards establishing and maintaining interaction spaces (using a
proxemics model for a mobile robot and for a humanoid torso robot), towards body coding
for peripersonal spaces (virtual agent) and perception of gestures (physical agent), and
towards an integrated robotic receptionist scenario that will provide a basis for further work.
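The interplay of proxemics and overlapping peripersonal spaces described above can be sketched in a few lines. The zone thresholds below follow Hall's commonly cited proxemic distances and the reach radius is a placeholder value, not the project's calibrated parameters:

```python
import math

# Approximate upper bounds of Hall's proxemic zones in metres
# (illustrative values, not the project's calibrated model).
ZONES = [("intimate", 0.45), ("personal", 1.2), ("social", 3.6)]

def proxemic_zone(agent_xy, partner_xy):
    """Classify the partner's distance into a proxemic zone."""
    d = math.hypot(partner_xy[0] - agent_xy[0],
                   partner_xy[1] - agent_xy[1])
    for name, upper in ZONES:
        if d <= upper:
            return name
    return "public"

def in_interaction_space(agent_xy, partner_xy, reach=0.8):
    """An interaction space exists where the peripersonal spaces overlap,
    here approximated as two intersecting reach spheres."""
    d = math.hypot(partner_xy[0] - agent_xy[0],
                   partner_xy[1] - agent_xy[1])
    return d <= 2 * reach
```

A robot could use such a classification to trigger social signalling when the partner enters the personal zone, or to activate interaction-space alignment once the reach spheres overlap.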
[Figure: Peripersonal Space during an interaction with the virtual agent]
The project will proceed towards assuming less pre-knowledge
about the partner and setup, and towards modelling more dynamic changes in interaction spaces.
This will be studied along two main threads that complement each other.
In the Artificial Intelligence part, we adopt the Interactive Alignment Model by assuming that alignment occurs when interlocutors share the same mental representation of interaction space. Consequently, interlocutors who share the same spatial representation of their shared environment are expected to be more successful in cooperation tasks than interlocutors whose spatial representations differ. To develop a model of interaction space which comprises alignment of spatial representations, we will (i) investigate spatial perspective taking for artificial agents as an unconscious mechanism comprising embodiment effects, (ii) develop methods to infer the partner's spatial perspective under changing body orientation and position, (iii) investigate and develop a representation of interpersonal action space, and (iv) model adequate behaviour strategies based on concepts of proxemics to support successful and natural interaction with artificial agents.
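Inferring the partner's spatial perspective, as in point (ii), amounts to re-expressing positions from the agent's egocentric frame in the partner's egocentric frame. A minimal 2D sketch, assuming the partner's pose (position and heading) in the agent's frame is known; the function name and pose convention are illustrative:

```python
import math

def to_partner_frame(point, partner_pose):
    """Re-express a 2D point, given in the agent's egocentric frame,
    in the partner's egocentric frame.

    partner_pose = (x, y, theta): the partner's position and heading
    angle, both measured in the agent's frame.
    """
    px, py, theta = partner_pose
    # Translate so the partner is at the origin, then rotate by -theta
    # so the partner's heading becomes the new x-axis.
    dx, dy = point[0] - px, point[1] - py
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return (cos_t * dx + sin_t * dy,
            -sin_t * dx + cos_t * dy)
```

For example, with the partner standing 2 m in front of the agent and facing it (theta = pi), an object 1 m in front of the agent lies 1 m in front of the partner as well; the transform recovers exactly that.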
Project members working on this part: Ipke Wachsmuth, Nhung Nguyen
In the Cognitive Robotics part the loop between the perception of gestures and the production of gestures will be closed. Thus, the control of gesture generation needs to respect the past and current activity of the partner (also on a social level). The system needs to decide (i) if the performance of a gesture is possible, (ii) if the dynamics of the gesture needs to be changed (slow down, wait), or (iii) if the gesture should be performed differently (in kinematics and dynamics). In order to keep the mutual representation of the interaction space aligned, social signalling will be realized based on an adapted model of proxemics. These skills will be enabled and mediated by an appropriate representation of the interaction space. The improvements will be incorporated into the demonstration setup of the robotic receptionist, which should increase the usability and acceptance of the demonstration system.
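The three-way decision described above can be sketched as a simple rule. The predicate names and the priority ordering are assumptions for illustration, not the project's actual controller:

```python
from enum import Enum

class GestureDecision(Enum):
    PERFORM = "perform as planned"          # case (i): gesture is possible
    ADAPT_TIMING = "slow down or wait"      # case (ii): change the dynamics
    REPLAN = "change kinematics/dynamics"   # case (iii): perform differently
    SUPPRESS = "do not gesture now"         # no safe option available

def decide_gesture(path_clear, partner_active, overlaps_partner_space):
    """Toy decision rule over the partner's current activity.

    path_clear: planned trajectory avoids the partner's current posture.
    partner_active: the partner is currently gesturing in the shared space.
    overlaps_partner_space: the planned gesture volume intersects the
    partner's peripersonal space.
    """
    if not path_clear:
        return GestureDecision.SUPPRESS
    if partner_active and overlaps_partner_space:
        # Yield temporally: wait until the partner's gesture finishes.
        return GestureDecision.ADAPT_TIMING
    if overlaps_partner_space:
        # Yield spatially: reshape the gesture around the partner's space.
        return GestureDecision.REPLAN
    return GestureDecision.PERFORM
```

In an integrated system, the inputs to such a rule would come from the perception side of the closed loop, so that gesture production respects the partner's past and current activity.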
Project members working on this part: Sven Wachsmuth, Patrick Holthaus