Artificial Imagination

Time for one of those posts where I just juxtapose two things I came across in the same week that, by happenstance, are worth pairing together for the Thinking Face Emoji of it all.

Andrey Vyshedskiy, for The Conversation on imagination making us human, outlines the foundational processes of memory. “The next building block,” he writes, “is the capability to construct a ‘memory’ that hasn’t really happened.”

The difference between voluntary imagination and involuntary imagination is analogous to the difference between voluntary muscle control and muscle spasm. Voluntary muscle control allows people to deliberately combine muscle movements. Spasm occurs spontaneously and cannot be controlled.

Similarly, voluntary imagination allows people to deliberately combine thoughts. When asked to mentally combine two identical isosceles right triangles along their long edges, or hypotenuses, you envision a square. When asked to mentally cut a round pizza with two perpendicular lines, you visualize four identical slices.

Neil Savage, for Nature on AI needing to understand consequences, highlights an obvious current limitation. “Anything beyond prediction requires some sort of causal understanding,” he writes. “If you want to plan something, if you want to find the best policy, you need some sort of causal reasoning module.”

Imagining whether an outcome would have been better or worse if we’d taken a different action is an important way that humans learn. Bhattacharya says it would be useful to imbue AI with a similar capacity for what is known as ‘counterfactual regret’. The machine could run scenarios on the basis of choices it didn’t make and quantify whether it would have been better off making a different one. Some scientists have already used counterfactual regret to help a computer improve its poker playing.
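The poker example points at counterfactual regret minimization, the family of algorithms behind the strong poker bots. Here’s a minimal sketch of its core loop, regret matching, in a toy rock-paper-scissors game; the biased opponent and every name in it are my own illustration, not anything from the Nature piece:

```python
import random

ACTIONS = ["rock", "paper", "scissors"]

def payoff(mine, theirs):
    """+1 win, 0 tie, -1 loss for the learner."""
    beats = {("rock", "scissors"), ("paper", "rock"), ("scissors", "paper")}
    return 0 if mine == theirs else (1 if (mine, theirs) in beats else -1)

def current_strategy(regrets):
    """Regret matching: weight each action by its accumulated positive regret."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1 / 3] * 3

regrets = [0.0] * 3       # cumulative counterfactual regret per action
strategy_sum = [0.0] * 3  # running sum of strategies; its average converges

for _ in range(50_000):
    strategy = current_strategy(regrets)
    strategy_sum = [s + p for s, p in zip(strategy_sum, strategy)]

    mine = random.choices(ACTIONS, weights=strategy)[0]
    theirs = random.choices(ACTIONS, weights=[0.5, 0.25, 0.25])[0]  # rock-heavy opponent

    # The counterfactual step: replay the round with every action we *didn't*
    # take and record how much better (or worse) each would have done.
    actual = payoff(mine, theirs)
    for i, alt in enumerate(ACTIONS):
        regrets[i] += payoff(alt, theirs) - actual

total = sum(strategy_sum)
print({a: round(s / total, 3) for a, s in zip(ACTIONS, strategy_sum)})
# Against a rock-heavy opponent, the average strategy drifts toward "paper".
```

Each round the learner replays the hand with the choices it didn’t make and lets the accumulated regret steer future play, which is exactly the “would I have been better off choosing differently?” loop the article describes.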

The ability to imagine different scenarios could also help to overcome some of the limitations of existing AI, such as the difficulty of reacting to rare events. By definition, Bengio says, rare events show up only sparsely, if at all, in the data that a system is trained on, so the AI can’t learn about them. A person driving a car can imagine an occurrence they’ve never seen, such as a small plane landing on the road, and use their understanding of how things work to devise potential strategies to deal with that specific eventuality. A self-driving car without the capability for causal reasoning, however, could at best default to a generic response for an object in the road. By using counterfactuals to learn rules for how things work, cars could be better prepared for rare events. Working from causal rules rather than a list of previous examples ultimately makes the system more versatile.
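To make Bengio’s point a little more concrete, here’s a toy sketch of what “working from causal rules” buys you, again entirely my own construction rather than anything from the article: encode the mechanism once (braking changes stopping distance), and you can evaluate a counterfactual for a scenario that never appeared in any training set:

```python
def stopping_distance(speed_mps, braked):
    """Mechanism, not data: reaction distance plus braking distance."""
    reaction = speed_mps * 1.0           # ~1 second of reaction time
    decel = 8.0 if braked else 1.0       # deceleration in m/s^2, toy numbers
    return reaction + speed_mps**2 / (2 * decel)

def outcome(speed_mps, obstacle_m, braked):
    return "stops in time" if stopping_distance(speed_mps, braked) < obstacle_m else "collision"

# Observed world: 20 m/s, an unprecedented obstacle (say, a small plane)
# 70 meters ahead, and the car did not brake.
factual = outcome(20, 70, braked=False)

# Counterfactual world: hold everything fixed, flip only the action.
counterfactual = outcome(20, 70, braked=True)

print(factual, "->", counterfactual)  # collision -> stops in time
```

Because the model is a rule rather than a lookup over past examples, the small-plane-on-the-road case is no harder than any other; the novel scenario just gets plugged into the same mechanism.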