Dr. Sajit Rao



G. Metta and G. Sandini

Strategies of visuo-motor coordination (abstract only).

The study of sensori-motor coordination in artificial systems has mainly been carried out by trying to implement skill levels comparable to those of adult humans (for example, the control of robot heads or visually guided reaching and manipulation). The traditional approach has often been a "divide and conquer" process which, on the one hand, makes the overall problem more tractable but, on the other, leads to the difficulty of integrating independently developed sub-systems.

In contrast, in humans and animals adult performance is achieved through the simultaneous development of sensory, motor and cognitive abilities, on the basis of the high degree of adaptability that characterizes biological systems. Following this approach, we started to investigate which control architectures and sensori-motor coordination strategies may be best suited to implement the kind of adaptability shown by biological systems, and particularly the evolution of sensori-motor skills during their initial developmental phases (for example, how a system can evolve from the purely reflexive behavior that characterizes newborn sensori-motor coordination to the complex voluntary control already present a few months after birth).

To investigate these issues we built a sensorized head-arm robot, consisting of a head equipped with two cameras and two microphones, a robot arm, and a force-torque sensor at the wrist, and decided to address the problem of controlling visually guided reaching. In order to define the initial state of the "baby" robot, we are now investigating the implementation of reflexes (sensory-triggered motions), which are the movements that appear to be present soon after birth. An additional constraint, in line with our long-term goals, is to implement these reflexive behaviors so that they can adapt to changes in sensory accuracy as well as motor controllability.

Sensori-motor coordination can be described as a process transforming sensory information into motor commands. The framework we propose is based on the approximation of vector fields as a common language of the visual and motor systems. Visually, for example, dynamic information can be described as a velocity field on the retina (optical flow), which can be used directly to control the timing of motor acts (e.g. the act of grasping an approaching ball). Motorically, investigation of the spinal cord by Bizzi et al. has shown that movements can be generated as a combination of motor primitives modeled as force fields. The advantage is that, instead of controlling each muscle individually, the motor system operates on muscle synergies and combines them to produce the end-point force field defining, at each instant of time, the "equilibrium point" toward which the end-point is moving.
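The combination of force-field primitives can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the robot's controller: each primitive is a spring-like field pulling toward its own equilibrium, and a weighted combination of two primitives yields an equilibrium point between them. All names, stiffnesses and activation values are invented for the example.

```python
import numpy as np

# Hypothetical sketch of force-field motor primitives: primitive i pulls
# the end-point toward its own equilibrium q_i with stiffness k_i, i.e.
# F_i(x) = k_i * (q_i - x).  Parameters are illustrative only.

def primitive(q, k):
    """Return a spring-like force field with equilibrium q and stiffness k."""
    q = np.asarray(q, dtype=float)
    return lambda x: k * (q - np.asarray(x, dtype=float))

def combined_field(primitives, activations):
    """Linear combination of force-field primitives (a muscle synergy)."""
    return lambda x: sum(c * f(x) for c, f in zip(activations, primitives))

def equilibrium(primitives, activations, x0=(0.0, 0.0), steps=200, dt=0.05):
    """Follow the combined field until it vanishes: the equilibrium point."""
    field = combined_field(primitives, activations)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * field(x)
    return x

# Two primitives pulling toward different points; co-activating them
# yields an equilibrium between the two, weighted by the activations.
f1 = primitive([1.0, 0.0], k=1.0)
f2 = primitive([0.0, 1.0], k=1.0)
eq = equilibrium([f1, f2], activations=[0.5, 0.5])
```

With equal activations the combined field is a single spring toward the average of the two equilibria, which is the sense in which the synergy, not the individual "muscles", defines the end-point behavior.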

Within this framework we implemented some basic behaviors such as gravity compensation and moving the arm into the field of view or to a given point in the image plane. The commonality between these experiments is that each module is implemented as an approximation of a vector field using radial basis function techniques.
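A vector field approximated with radial basis functions can be sketched as follows. This is a generic least-squares Gaussian-RBF fit of an invented rotational field, standing in for the modules described above; the centers, kernel width and target field are assumptions for illustration.

```python
import numpy as np

# Minimal sketch: approximate a 2-D vector field from samples using
# Gaussian radial basis functions and linear least squares.

def rbf_features(X, centers, sigma=0.5):
    """Gaussian RBF activations for each sample in X (n x d)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_field(X, V, centers, sigma=0.5, reg=1e-6):
    """Least-squares RBF weights mapping positions X to field vectors V."""
    Phi = rbf_features(X, centers, sigma)
    return np.linalg.solve(Phi.T @ Phi + reg * np.eye(len(centers)), Phi.T @ V)

def eval_field(X, centers, W, sigma=0.5):
    return rbf_features(X, centers, sigma) @ W

# Illustrative target field: a rotation, v(x, y) = (-y, x).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
V = np.stack([-X[:, 1], X[:, 0]], axis=1)

# RBF centers on a 6x6 grid over the workspace.
g = np.linspace(-1, 1, 6)
centers = np.stack(np.meshgrid(g, g), -1).reshape(-1, 2)

W = fit_field(X, V, centers)
V_hat = eval_field(X, centers, W)
```

Each behavioral module would replace the rotational field with its own sampled field (forces, velocities, activations) but reuse the same approximation machinery.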

For example, in the case of gravity compensation, the system learns the configuration-dependent gravity field by gathering information from the points where the arm is at rest.
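The key observation is that at rest the applied torque exactly balances gravity, so every resting configuration is one training sample of the gravity field. A hedged toy version, using a one-link arm with torque m*g*l*cos(theta) in place of the real robot (all physical parameters invented), might look like:

```python
import numpy as np

# Toy gravity-compensation learning: resting samples (angle, holding
# torque) are fitted with Gaussian RBFs; the learned field then predicts
# the compensating torque at new configurations.  M, G, L are invented.

M, G, L = 1.0, 9.81, 0.3                      # mass, gravity, link length

def gravity_torque(theta):
    return M * G * L * np.cos(theta)

def rbf(theta, centers, sigma=0.4):
    return np.exp(-(theta[:, None] - centers[None, :]) ** 2 / (2 * sigma ** 2))

# Resting configurations and the torques that held the arm there.
thetas = np.linspace(-np.pi / 2, np.pi / 2, 30)
torques = gravity_torque(thetas)

centers = np.linspace(-np.pi / 2, np.pi / 2, 10)
Phi = rbf(thetas, centers)
w, *_ = np.linalg.lstsq(Phi, torques, rcond=None)

# Predicted compensating torque at a configuration not in the samples.
theta_new = np.array([0.3])
predicted = rbf(theta_new, centers) @ w
```

No dynamic model is identified here; the field is learned purely from static equilibria, which is what makes resting points sufficient as training data.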

Following the same reasoning, visually triggered arm movements are initiated through mappings between image position vectors and the controllers' activation values. In this case too, the algorithm learns from examples extracted at resting points.
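The image-to-activation mapping can be illustrated with a simple regression sketch. The linear "true" relation below is invented purely to generate training pairs; the real mapping would be learned from the robot's own resting-point examples, and could use the same RBF machinery as the other modules.

```python
import numpy as np

# Illustrative visually triggered reaching: learn a map from image
# position vectors to controller activation values from example pairs
# collected at resting points.  A_true is a made-up generating relation.

rng = np.random.default_rng(1)
A_true = np.array([[0.5, -0.2], [0.1, 0.8]])   # hypothetical image->activation map

# Example pairs gathered at resting points (image position, activation).
img = rng.uniform(-1, 1, size=(50, 2))
act = img @ A_true.T

# Fit the mapping by least squares on the examples.
X, *_ = np.linalg.lstsq(img, act, rcond=None)
A_hat = X.T

# A new visual target then yields activation values for the controllers.
target = np.array([0.4, -0.6])
activation = A_hat @ target
```

Once fitted, a target appearing at a given image position directly triggers the corresponding activations, with no inverse-kinematics computation in the loop.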