Talk
*** Unfortunately, this talk has been cancelled
because of a flight delay. ***
-
Date: Friday, July 15th, 2005.
-
Time: 14:00-15:20
-
Venue: Multimedia Conference Room (next to TV Conf Room)
-
Speaker: Christopher G. Prince (Department of Computer Science,
University of Minnesota Duluth)
-
Title: Towards robotic models of infants:
Ongoing emergence and audio-visual sensory integration
-
Abstract:
We have been following a three-part approach to epigenetic
robotics. First, we are formulating a theory related to the
developmental growth by robots of various psychological
abilities. Second, we are constructing sensory-oriented
computational models of infants' audio-visual integration
skills. Third, we are conducting empirical research with infants
regarding audio-visual integration. Our vision is to gradually
formulate this theory in terms realizable in robots, to gradually
build up our robot systems to reach towards the theory, and,
throughout, to incorporate comparisons with and findings from
developmental psychology. The theory focuses on the idea of
ongoing emergence (Prince et al., to appear; see also Prince,
2001), which refers to the continuous development, integration,
and incorporation of new skills. Ongoing emergence commonly occurs
in human infants (e.g., in walking, word learning, and visual
object skills) and differs markedly from accomplishments to date
in epigenetic robotics. In particular, the continuous acquisition
of robotic skills, and the general incorporation of new robotic
skills with existing robotic skills in the repertoire of an
individual robot are not well established. Our sensory-oriented
computational modeling research follows from the observation that
young infants learn better in certain situations involving
contingent stimuli. For example, 7-month-old infants can learn
word-object mappings better when provided with synchronized
speech-visual signals (Gogate & Bahrick, 1998, 2001). In this
regard, we have been constructing models of infant audio-visual
synchrony detection (Prince & Hollich, 2005; Prince et al.,
2004). These models qualitatively and quantitatively detect
audio-visual synchrony using an algorithm that computes Gaussian
mutual information between two input channels (Hershey & Movellan,
2000; a brief illustrative sketch follows this abstract). While the
tasks performed by model and infant are not yet
the same, the model compares favorably to aspects of the published
infant literature. In a new study, we have also found that 7- to
8-month-old infants, who looked at face motion from two people but
heard speech audio from only one, have moment-by-moment looking
behavior that correlates significantly even with this basic model
(r = .30, p < .001; Hollich et al., 2004). Our present research
goals include creating a framework for sensory-oriented modeling
to enable more flexible creation and use of models, using this
framework to create models of infants' audio-visual learning based
on synchrony detection, and determining how contingency detection
may contribute to processes of ongoing emergence.
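-
For reference, the core of the Hershey & Movellan (2000) computation
mentioned above can be sketched compactly. Assuming the audio and visual
features are jointly Gaussian, the mutual information between two scalar
channels reduces to -0.5 * log(1 - rho^2), where rho is their correlation
over a short temporal window. The Python sketch below uses illustrative
feature choices and an assumed window length; it is not the speaker's
implementation.

# Illustrative sketch (not the speaker's code): Gaussian mutual
# information between an audio feature stream and a visual feature
# stream, in the spirit of Hershey & Movellan (2000). Feature extraction
# and window size are assumptions made for this example.
import numpy as np

def gaussian_mutual_information(audio, visual, window=30):
    """Sliding-window Gaussian MI between two 1-D feature sequences.

    For jointly Gaussian scalar variables, I(A; V) = -0.5 * log(1 - rho^2),
    where rho is the Pearson correlation estimated within each window.
    """
    audio = np.asarray(audio, dtype=float)
    visual = np.asarray(visual, dtype=float)
    assert audio.shape == visual.shape
    mi = np.zeros(len(audio) - window + 1)
    for t in range(len(mi)):
        a = audio[t:t + window]
        v = visual[t:t + window]
        rho = np.corrcoef(a, v)[0, 1]
        rho = np.clip(rho, -0.999999, 0.999999)  # guard against log(0)
        mi[t] = -0.5 * np.log(1.0 - rho ** 2)
    return mi

# Toy usage: a visual signal that noisily follows the audio envelope
# yields higher MI than an unrelated one, which is the basis for
# synchrony detection.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    audio = np.abs(np.sin(np.linspace(0, 20, 600))) + 0.1 * rng.normal(size=600)
    synced = audio + 0.2 * rng.normal(size=600)
    unsynced = rng.normal(size=600)
    print("synced MI mean:  ", gaussian_mutual_information(audio, synced).mean())
    print("unsynced MI mean:", gaussian_mutual_information(audio, unsynced).mean())

Hershey & Movellan (2000) apply essentially this quantity per image pixel
against the audio signal to localize the sound source; the scalar form
above is the simplest instance of the same idea.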