An AI-powered tool developed at Stanford University can be used to understand the emotional intent behind great works of visual art.
Advances in artificial intelligence (AI) over the years have made it a foundational technology in autonomous vehicles and security systems. Now, a team of researchers at Stanford University is teaching computers to recognise not just what objects are in an image, but also how those images make people feel.
The team has trained an algorithm to recognise the emotional intent behind great works of art like Vincent van Gogh’s Starry Night and James McNeill Whistler’s Whistler’s Mother.
“The ability will be key to making AI not just more intelligent, but more human,” a researcher said in the study titled ‘ArtEmis: Affective Language for Visual Art’.
The team built a database of 81,000 WikiArt paintings and over 400,000 written responses from 6,500 people indicating how they felt about a painting, along with their reasons for choosing a particular emotion. The team used the responses to train the AI to generate emotional responses to visual art and justify those emotions in language.
The algorithm classified each work into one of eight emotional categories, including awe, amusement, sadness and fear. It then explained in written text what it is in the image that justifies the emotion.
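The classification step described above can be sketched in a few lines. The snippet below is a minimal illustration, not the researchers' actual code: it assumes a trained image model that outputs one raw score per emotion category, and simply converts those scores into a probability distribution and picks the most likely label. The eight category names are taken from the ArtEmis paper; the example scores are invented for demonstration.

```python
import math

# Eight emotion categories used in the ArtEmis dataset (the article
# names awe, amusement, sadness and fear; the rest are from the paper).
EMOTIONS = ["amusement", "awe", "contentment", "excitement",
            "anger", "disgust", "fear", "sadness"]

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify_emotion(logits):
    """Map a model's raw per-category scores to the top emotion label."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return EMOTIONS[best], probs[best]

# Hypothetical scores an image model might produce for a painting;
# index 1 ("awe") carries the highest score here.
label, prob = classify_emotion([0.1, 2.3, 0.4, 0.5, -1.0, -0.7, 0.0, 0.2])
print(label)  # awe
```

In the full system, a second component (a captioning-style language model) would then generate the written justification for the chosen emotion; that part is omitted here.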
The model is said to interpret any form of art, including still lifes, portraits and abstract works. It also takes into account the subjectivity of art, meaning that not everyone feels the same way about a piece of work, the team noted.
The tool can be used by artists, especially graphic designers, to evaluate whether their work is having the desired impact.