
Cause-and-effect approach could help machines become better learners

Image: Three stick-figure representations of a cheetah, each one from a different camera angle
Published: 15 July 2020

For human beings, the ability to generalize – to extract broad principles from our experiences of the world and use these principles to help us make decisions in new situations – is an essential skill for navigating everyday life. But for those working in the field of artificial intelligence, getting machines to generalize in this way has been a notoriously difficult challenge.

“A lot of research on generalization and reinforcement learning has shown that [software] agents completely fail at it,” says Amy Zhang, a doctoral student at McGill University.

McGill researchers make headway on tough problem

Zhang and her colleagues have taken an important step toward solving this problem by using the concept of causal inference to develop a new method for machines to learn to perform image-based tasks. In a paper presented at the 37th International Conference on Machine Learning in July 2020, the research team demonstrated that machines can learn more effectively and efficiently by focusing only on those elements of a scene that are relevant to successfully performing a given task.

In the context of reinforcement learning – where machines aim to identify which actions to take to achieve positive outcomes or rewards – causal inference is about recognizing relationships between actions and rewards that do not change. Zhang refers to a hypothetical self-driving vehicle as an example: “Even if other things change in the world, such as lighting or positioning, that causal relationship is always going to be there: if you see a stop sign, you’re supposed to stop.”
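
To make this concrete, here is a minimal, hypothetical sketch of the invariance idea in Python/PyTorch – not the authors’ implementation, and the names Encoder, invariance_loss, reward_head and batches_per_env are illustrative assumptions. One encoder is trained across several versions of the same task (say, different lighting or camera positions) so that a single reward predictor works in all of them; features whose link to the reward changes between versions are useless to the shared predictor, so the encoder learns to drop them.

```python
# Minimal, hypothetical sketch of the invariance idea (not the authors' code):
# train one encoder across several versions of the same task so that a single
# reward predictor works in all of them. Features whose relationship to the
# reward changes between versions cannot help the shared predictor, so the
# encoder is pushed to keep only the invariant, task-relevant ones.
import torch
from torch import nn

class Encoder(nn.Module):
    """Maps a flattened observation to a small latent state."""
    def __init__(self, obs_dim: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, obs):
        return self.net(obs)

def invariance_loss(encoder, reward_head, batches_per_env):
    """Average reward-prediction error over all training environments.

    batches_per_env: list of (obs, reward) tensor pairs, one per environment.
    """
    total = 0.0
    for obs, reward in batches_per_env:
        pred = reward_head(encoder(obs)).squeeze(-1)
        total = total + nn.functional.mse_loss(pred, reward)
    return total / len(batches_per_env)

# Illustrative usage with random data standing in for two environments.
encoder = Encoder(obs_dim=64, latent_dim=8)
reward_head = nn.Linear(8, 1)
envs = [(torch.rand(32, 64), torch.rand(32)) for _ in range(2)]
print(invariance_loss(encoder, reward_head, envs).item())
```

The paper’s actual method, invariant causal prediction for Block MDPs, identifies the invariant state components more formally; the shared-predictor constraint above is only meant to capture the intuition Zhang describes.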

While the researchers’ experiments were considerably less complex than an autonomous driving scenario, the underlying principle they demonstrated is exciting. For the first time, they showed a machine could apply a lesson learned from past experience to a new situation it had never seen before.

“Current reinforcement learning methods can’t do that,” Zhang says. “If the environment doesn’t look exactly the same as something the machine has seen before, it won’t be able to generalize.”

A faster, more efficient way to learn

A causal inference-based approach, the research showed, can reduce the amount of information a machine needs to store and process to make effective decisions. That has practical implications for any form of artificial intelligence working with images. In robotics, for instance, cameras are a cheap and effective way to gather large amounts of information about the environment, but reinforcement learning from images has so far been limited to methods that capture and compress every detail of a scene, requiring millions of processing steps. If a machine can identify which features of an image are relevant to the task it has been built to perform, it is free to discard a vast amount of unnecessary information.
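
As a rough illustration of the scale of that compression – a hypothetical sketch, not the paper’s architecture – a small convolutional encoder can squeeze an 84-by-84-pixel camera frame of roughly 21,000 values down to a 50-number state for the policy to act on; everything the encoding does not keep is simply discarded.

```python
# Hypothetical illustration (not the paper's architecture): a convolutional
# encoder compresses a raw 84x84 RGB camera frame into a 50-dimensional state.
# Only this compact code would be stored and passed to the policy.
import torch
from torch import nn

class PixelEncoder(nn.Module):
    def __init__(self, latent_dim: int = 50):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2), nn.ReLU(),   # 84x84 -> 41x41
            nn.Conv2d(32, 32, kernel_size=3, stride=2), nn.ReLU(),  # 41x41 -> 20x20
            nn.Conv2d(32, 32, kernel_size=3, stride=2), nn.ReLU(),  # 20x20 -> 9x9
            nn.Flatten(),
        )
        self.fc = nn.Linear(32 * 9 * 9, latent_dim)

    def forward(self, image):
        return self.fc(self.conv(image))

frame = torch.rand(1, 3, 84, 84)           # one raw camera frame (21,168 values)
state = PixelEncoder()(frame)              # compact, task-relevant state (50 values)
print(frame.numel(), "->", state.numel())  # 21168 -> 50
```

In the causal-inference approach described here, the training signal would additionally push such an encoding to keep only the features that matter for the task’s reward, rather than every detail needed to reconstruct the image.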

“If you can imagine a robot that’s trained to unload a dishwasher,” Zhang says, “everyone has a different dishwasher, different dishes, kitchen layouts are different – but the task is the same. And so being able to understand which parts are the same – that the robot is getting reward for taking objects out of the dishwasher and putting them away, even if everything looks slightly different – that’s the information that you want to capture.”

Image caption: Machine learning breakthrough. By using causal inference, the research showed that an AI system needed only to learn to recognize this stick-figure representation of a cheetah from two different camera angles to be able to recognize it in positions or under lighting conditions it had never seen before.


About the paper

“Invariant Causal Prediction for Block MDPs” by Amy Zhang, Clare Lyle, et al., was presented at the 37th International Conference on Machine Learning on 14 July 2020.

