Our robots, namely Infanoid and Keepon, are not only research platforms on which we implement human communicative development but also tools for psychological experiments investigating how humans, especially children, interact with them. We observe how children respond to the robots' social actions, such as gazing and pointing, and how they spontaneously perform actions such as showing or giving an object to the robots. It is worth noting that we can control the complexity of the robots' behavior to match our research objectives and the children's developmental stages.
We observed a number of children (from 6 months to 9 years of age) interacting with Infanoid. In these observations, the robot ran in Automatic Mode, in which it alternated between eye-contact and joint attention with pointing. When necessary, a remote operator adjusted the robot's attention (e.g. the direction of its gaze, face, or body). First, each child was seated alone in front of the robot. About 3-4 minutes later, the child's mother came in and sat next to the child. Interaction continued until the child got tired or bored; on average, each child interacted with the robot for about 30 minutes.
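For readers curious about the control flow, the following is a minimal sketch of what such an Automatic Mode might look like. It is only an illustration under stated assumptions: the Robot class and all of its method names are hypothetical stand-ins, not the actual Infanoid software.

```python
# Minimal sketch of an "Automatic Mode" that alternates between eye-contact
# and joint attention with pointing, with an optional operator override.
# The Robot interface below is a hypothetical stand-in, not the Infanoid API.

import random
import time

class Robot:
    """Hypothetical stand-in for the robot's attention/motor interface."""
    def find_face(self):            # detected child's face, or None
        return "child_face"
    def find_salient_object(self):  # a nearby toy, or None
        return "toy"
    def orient_to(self, target):    # turn gaze/face/body toward the target
        print(f"orienting to {target}")
    def point_at(self, target):     # extend an arm toward the target
        print(f"pointing at {target}")

def automatic_mode(robot, operator_target=None, period_s=3.0, steps=10):
    """Alternate between eye-contact and joint attention with pointing.
    operator_target lets a remote operator override the robot's attention."""
    for _ in range(steps):
        if operator_target is not None:
            robot.orient_to(operator_target)      # operator adjustment
        elif random.random() < 0.5:
            face = robot.find_face()
            if face:
                robot.orient_to(face)             # eye-contact with the child
        else:
            toy = robot.find_salient_object()
            if toy:
                robot.orient_to(toy)              # joint attention toward a toy
                robot.point_at(toy)               # accompanied by pointing
        time.sleep(period_s)

if __name__ == "__main__":
    automatic_mode(Robot(), steps=3, period_s=0.5)
```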
In these observations, most of the children (especially those between 3 and 6 years of age) showed the following changes in their interaction.
- Neophobia phase
When the child interacted with the robot alone (for the first 3-4 minutes), he or she looked seriously into the robot's eyes. Whether the robot produced a mutual gaze or an averted gaze, the child's eyes stayed locked onto the robot's. The child then showed embarrassment, not knowing how to deal with this strange, moving thing.
- Exploration phase
Next, using his or her mother as a secure base, the child started exploring how the robot changed its attention and posture in response to various actions, such as showing toys to the robot or poking it. When the child elicited an interesting response from the robot, he or she often made referential looks and comments to his or her mother. Through this exploration, the child would find that the robot was an autonomous agent that showed attention and emotion.
- Interaction phase
The child then gradually got into social interaction, pointing at the toys or giving toys to the robot by placing them in the robot's hands. Verbal interaction also started: the child asked questions (e.g. "Which one do you want?", while showing two toys) and asked the robot to do things (e.g. "Grasp it like this!", while demonstrating how to handle a toy). The child seemed to attribute mental states, such as intention and emotion, to this social being.
The children's recognition of Infanoid changed dynamically: first as an unknown, ambiguous "moving thing", then as an "autonomous agent" with attentiveness and responsiveness, and finally as a "social being" worth involving in social interaction, including verbal interaction. In most cases, these dramatic changes occurred within the first 10 minutes.
Note: Part of this observation and analysis was done in collaboration with Nobuyuki Kawai (Nagoya University), Daisuke Kosugi (Shizuoka Institute of Science and Technology), and Yoshio Yano (Kyoto University of Education).
We also observed a number of infants in three age groups, namely 0-year-olds (from 6 months of age), 1-year-olds, and over-2-year-olds, interacting with Keepon together with their mothers. The robot ran in Manual Mode, in which a remote operator manually controlled the robot's attentive and emotional expressions with the help of images taken by the on-board and off-board cameras. The robot usually alternated between eye-contact (with the infant or the mother) and joint attention (to the toys in the environment). When the infant showed any meaningful response (touching, pointing, etc.), the robot made eye-contact and showed positive emotion by rocking and bobbing its body. Interaction continued until the infant got tired or bored; on average, each interaction lasted about 10 minutes.
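As a rough illustration of this teleoperated control flow, the sketch below captures the contingency rule described above: the operator steers Keepon's attention, and a meaningful response from the infant triggers eye-contact plus a positive emotional display. The Keepon class, event names, and method names are hypothetical, not the actual Keepon teleoperation software.

```python
# Minimal sketch of a "Manual Mode" step: follow the operator's attention
# command unless the infant produces a meaningful response, in which case
# make eye-contact and show positive emotion (rocking and bobbing).
# All names here are hypothetical stand-ins for the real software.

class Keepon:
    """Hypothetical stand-in for Keepon's attentive/emotional body."""
    def look_at(self, target):
        print(f"looking at {target}")
    def rock(self):
        print("rocking side to side")   # emotional expression
    def bob(self):
        print("bobbing up and down")    # emotional expression

def manual_mode_step(robot, operator_command, infant_event=None):
    """One control step driven by the remote operator's command."""
    if infant_event in ("touch", "point", "show_toy"):
        robot.look_at("infant")   # eye-contact with the infant
        robot.rock()              # positive emotional response
        robot.bob()
    else:
        # Otherwise alternate attention as the operator directs:
        # eye-contact (infant/mother) or joint attention (a toy).
        robot.look_at(operator_command)

if __name__ == "__main__":
    keepon = Keepon()
    manual_mode_step(keepon, operator_command="mother")
    manual_mode_step(keepon, operator_command="toy", infant_event="touch")
```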
In these observations, infants in each age group showed different styles of interaction.
- 0-year-olds
Interaction was dominated by tactile exploration using the hands and mouth. The infant did not pay much attention to the robot's attentive expressions, but showed positive responses (e.g. laughing) to its emotional expressions, especially bobbing.
Video: Keepon Stage 1 (MPEG 3.6MB)
- 1-year-olds
The infant showed not only tactile exploration but also awareness of the robot's attentive state, sometimes following the robot's attention. Some of the infants mimicked the robot's emotional expressions by rocking and bobbing their own bodies.
Video: Keepon Stage 2 (MPEG 5.4MB)
- Over-2-year-olds
First, the infant carefully observed the robot's behavior and how the caregiver interacted with it. Soon the infant started social exploration, for example by showing toys to the robot. Soothing behavior, such as stroking the robot's head, and verbal interaction (such as asking questions) were also frequently observed.
Video: Keepon Stage 3 (MPEG 4.9MB)
As described above, the three age groups showed quite different styles of interaction: first with a "moving thing" that invites tactile exploration, then with an "autonomous agent" to enjoy a contingency-detection game with, and finally with a "social being" to play and talk with.
Note: Part of this observation and analysis was done in collaboration with Daisuke Kosugi (Shizuoka Institute of Science and Technology) and Chizuko Murai (Kyoto University).