The breakthrough could ‘play an algorithmic role’ in the development of AGI
Human intelligence depends heavily on acquiring knowledge from other humans, accumulated over time as part of our cultural evolution. This type of social learning, known in the literature as cultural transmission, enables us to imitate actions and behaviours in real time. But can AI develop social learning skills in the same way?
Imitation learning has long been a training approach for artificial intelligence, in which algorithms observe humans completing a task and then try to mimic them. But AI tools usually need multiple demonstrations and exposure to vast amounts of data to successfully copy their trainer.
Now, a groundbreaking study by DeepMind researchers claims that AI agents can also demonstrate social learning skills in real time, by imitating a human in novel contexts “without using any pre-collected human data.”
Specifically, the team focused on a particular form of cultural transmission, known as observational learning or (few-shot) imitation, which refers to the copying of body movement.
DeepMind ran its experiment in a simulated environment called GoalCycle3D, a virtual world with uneven terrain, footpaths, and obstacles, which the AI agents had to navigate.
To help the AI learn, the researchers used reinforcement learning. For those unfamiliar with B.F. Skinner's work on operant conditioning, this method is based on offering rewards for behaviours that lead to the desired result — in this case, finding the correct course.
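To give a flavour of that reward loop, here is a minimal sketch of tabular Q-learning, a textbook reinforcement learning algorithm. This is an illustration only, not DeepMind's actual setup: the toy "course" is a hypothetical five-cell path where only the final cell yields a reward, and the agent learns which direction to step from each cell.

```python
import random

# Toy course: five cells in a row; the agent starts at cell 0,
# and only reaching the last cell earns a reward.
N_STATES = 5
ACTIONS = [-1, 1]          # step left or step right
GOAL = N_STATES - 1

# Q-table: the agent's running estimate of how good each action is in each state.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

random.seed(0)
for episode in range(200):
    state = 0
    while state != GOAL:
        # Explore occasionally; otherwise take the best-known action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0   # reward only at the goal
        # Standard Q-learning update: nudge the estimate toward
        # reward plus the discounted value of the best next action.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# After training, the greedy policy steps right from every cell toward the goal.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(policy)
```

Because the reward arrives only at the goal, the discounted update gradually propagates its value backwards along the course, which is the same basic principle, at a vastly smaller scale, as rewarding an agent for finding the correct route.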
The full study is published in the journal Nature Communications.