Modeling a social interaction environment for a baby agent, with the aim of improving AI in developmental robotics

There is still a long way to go before we can create an artificial intelligence agent that performs versatile tasks as effectively as a human being. This would require accumulating and studying a large dataset of information, and even that might not be enough. For now, only task-specific agents show remarkable performance, sometimes exceeding that of a human.

It is well known that a human child learns numerous tasks over a relatively short period of time. In technical terms, all these tasks are performed sequentially and learned using universal algorithms. In machine learning, however, developing a single agent that incorporates all such tasks is a daunting challenge: it is likely to consume an ample amount of time and, overall, to be an extremely costly process. To partially address these issues, researchers develop computerized environments that provide a realistic experience for the agent to learn from.

Evaluation experiments. (a) Paper rod experiment to evaluate unity perception [18]. (b) Paper rod experiment simulation in SEDRo. Image credit: Courtesy of the researchers / arXiv:2012.14842

A recent research paper published on arXiv.org builds on the fact that a baby learns by interacting with the surrounding environment. This interaction begins at birth and supports the cognitive development of a child, including language learning.

Several simulated robot environments and games have been developed and studied over the years, but none of them provides an experience approximating what an infant goes through during the first year of life. With this in mind, the Simulated Environment for Developmental Robotics (SEDRo) was designed, with the aim of creating a generalized artificial intelligence model of a baby agent.

Different stages of infant social interaction are simulated according to the infant's age, and each incremental stage builds on the results of the previous one. All this is done using a mother agent named 'Motherese' that interacts with the child. The Unity 3D game engine is used to implement SEDRo.

Proposed Environment

SEDRo is developed to provide a minimal environment covering what a baby experiences from the fetus stage until 12 months after birth. Its key components are the baby agent, the surrounding environment, and a caretaker, in this case the 'Motherese' AI agent. The simulated surroundings contain a variety of objects, such as furniture and toys, for the baby agent to interact with. Four developmental stages, i.e., fetus, immobile, crawling, and walking, are observed across two environments (fetus and after-birth). The machine learning model encounters new and unique capabilities and features in each stage.

1. The agent

The agent's body is modeled on a human child's body and, as mentioned previously, supports various stages of development (crawling, walking, grasping food, etc.) that can be simulated and analyzed over time. The body is articulated with 64 degrees of freedom.

– Vision

Two eyes forming a binocular system have been developed for the agent. Each eye has horizontal, vertical, and focal degrees of freedom, and carries two cameras to replicate the central and peripheral vision that humans have. An optional camera on the head generates a combined visual perception. A nearsighted focusing effect is also implemented, since an infant cannot focus beyond arm's length.
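The arm's-length focus limit can be sketched as a simple distance-dependent sharpness function. This is a minimal illustration of the idea; the reach value and decay rate below are assumptions, not figures from the paper:

```python
ARM_LENGTH_M = 0.3  # assumed infant reach in metres (illustrative)

def focus_sharpness(distance_m: float) -> float:
    """Toy nearsighted-focus model: objects within arm's length are
    perfectly sharp (1.0); sharpness decays towards 0.0 beyond it."""
    if distance_m <= ARM_LENGTH_M:
        return 1.0
    # hyperbolic falloff with distance past the reach limit
    return 1.0 / (1.0 + (distance_m - ARM_LENGTH_M) * 4.0)
```

A renderer could use this value to drive a depth-of-field blur, so distant objects contribute little usable visual detail to the agent's observations.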

– Tactile sensitivity

About 2110 touch sensors are placed across the agent's body, with sensor density varying by region; most of the sensors are concentrated in the head. A sensor outputs "1" when touched and "0" otherwise. A sparse status vector holding all sensor states is sent as part of the observations.
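Assembling that binary vector could look roughly like this. The sensor count comes from the article; the function and its interface are hypothetical:

```python
NUM_TOUCH_SENSORS = 2110  # sensor count reported for the SEDRo agent

def touch_observation(touched_ids):
    """Return the binary status vector: 1 where a sensor is touched,
    0 elsewhere. The result is sparse, since only a handful of sensors
    fire at any given instant."""
    obs = [0] * NUM_TOUCH_SENSORS
    for i in touched_ids:
        obs[i] = 1
    return obs
```

For example, `touch_observation({3, 42})` yields a 2110-element vector with exactly two ones.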

– Proprioception

Current joint positions, together with visual information, are used to learn the association between spatial locations and body-part movements. 469 values ranging from -1 to 1 are added to the agent's observations. Joint velocities and angular velocities are also included to help the agent understand its body movements.
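A minimal sketch of how per-joint readings might be squashed into the [-1, 1] observation range. The function names, joint limits, and speed cap here are illustrative assumptions, not the paper's actual encoding:

```python
def normalize(value, lo, hi):
    """Linearly map value from [lo, hi] to [-1, 1], the range the
    observation vector uses."""
    return 2.0 * (value - lo) / (hi - lo) - 1.0

def joint_observation(angle_deg, angle_limits, velocity, max_speed):
    """Hypothetical per-joint feature pair: normalized angle plus
    velocity clamped to [-1, 1]."""
    return [
        normalize(angle_deg, *angle_limits),
        max(-1.0, min(1.0, velocity / max_speed)),
    ]
```

Concatenating such pairs (plus angular velocities) across all joints would produce a fixed-length proprioceptive vector like the 469-value one described above.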

– Interoception

The food level within the stomach is also observed. Over time this level falls, and once it drops below a certain threshold, the baby cries. The mother agent then comes into action and feeds the baby AI agent, raising its satiety level.
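This satiety loop can be sketched as a tiny state machine. The decay rate, crying threshold, and feeding amount below are illustrative values, not figures from the paper:

```python
class Interoception:
    """Minimal sketch of the hunger/feeding cycle described above."""

    def __init__(self, level=1.0, decay=0.3, cry_threshold=0.5):
        self.level = level              # current satiety in [0, 1]
        self.decay = decay              # food consumed per tick
        self.cry_threshold = cry_threshold

    def step(self):
        """One simulation tick: the food level falls.
        Returns True when the level is low enough that the baby cries."""
        self.level = max(0.0, self.level - self.decay)
        return self.level < self.cry_threshold

    def feed(self, amount=0.8):
        """The mother agent responds to crying by feeding the baby,
        which raises satiety (capped at full)."""
        self.level = min(1.0, self.level + amount)
```

In a full loop, the mother agent would watch for the crying signal from `step()` and call `feed()` in response.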

2. Modeling ‘Motherese’

The mother character is included in the scenario to take care of the baby's needs, which include social interaction.

– Mother agent

To develop the child agent's intelligence, its interaction with the mother agent is essential. The mother agent is built from a pre-defined behavior library derived from analyzing real-life mother-child interaction, using pre-recorded motion-capture (mocap) animations of realistic interactions. To keep the task tractable, only the first 12 months of a child's life are covered, so no open-ended back-and-forth interactions are required. All scenarios were built manually during the research.

– Interaction with baby

The foremost scenario of mother-child interaction is feeding the baby. The supervising AI agent feeds the baby at pre-defined time intervals and also whenever the food level falls too low. While walking around the environment, the mother can avoid obstacles and move towards the baby to feed it.

Infant-directed speech (IDS) is another key aspect of the mother character. The mother interacts with the child using short words while nodding at the baby or moving her arms. Since sound cannot be added directly to the observations, the researchers used a one-hot encoded vector of length 26 to represent one English character at each time frame.
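The per-frame character encoding could look like the following sketch of the general one-hot idea; the actual SEDRo interface may differ:

```python
import string

def one_hot_char(ch):
    """One-hot vector of length 26 for an English letter, mirroring
    the per-frame speech encoding described above."""
    vec = [0] * 26
    vec[string.ascii_lowercase.index(ch.lower())] = 1
    return vec
```

A word such as "ball" would then be streamed to the agent one vector per time frame, e.g. `[one_hot_char(c) for c in "ball"]`.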

For joint attention, different objects are held in front of the baby while the mother describes them and looks at them. At a later stage of development, the description of an object is provided only when the baby tries to touch or grab it.

Evaluation of Development

The creators of SEDRo devised various experiments to evaluate and track the development of the child agent. One such experiment involves the movement of a rod partially occluded by a box: a 3-month-old baby perceives it as two separate rods, while an older infant sees them as a single piece. This test probes the unity perception of the simulated babies.

Final words

The research described above is currently in progress. The researchers hope to improve the model by adding new modes of interaction between the AI agents.

Source: arXiv.org
