What are affordances?
J.J. Gibson famously defined affordances in his 1979 book “The Ecological Approach to Visual Perception”:
“The affordances of the environment are what it offers the animal, what it provides or furnishes, either for good or ill. The verb to afford is found in the dictionary, but the noun affordance is not. I have made it up. I mean by it something that refers to both the environment and the animal in a way that no existing term does. It implies the complementarity of the animal and the environment…”
Our affordance learning scenario
- Robotic arm attached to a table surface.
- Monocular & stereo camera systems observe the work area.
- Arm interacts with various household objects & cameras record image data.
- System must learn object affordances by associating object properties, e.g. shape, with the resulting behaviour the objects exhibit during interaction.
Main Idea
The main idea behind our affordance learning framework is illustrated in the above figure. During experimental trials, when an object is placed in the working environment, both intensity and range images of the object are gathered by the camera systems. An arm action is then performed on the object and its resulting behaviour is recorded to video. Various object property features are derived from the intensity and range images of the objects, forming an object property feature space, while various result features are derived from the videos of the objects, generating a result feature space. The main task of the affordance learning algorithm, as defined in our framework, is to identify significant clusters in the result space and associate these clusters with data in the object property space. This allows the affordances of novel objects to be broadly classified in terms of result space clusters: their object property features are observed and used as input to a classifier trained by the affordance learning algorithm.
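The two-stage pipeline described above can be sketched as follows. This is a minimal illustrative sketch, not the system's actual implementation: the feature dimensions, the synthetic data, the use of k-means for result-space clustering, and the 1-nearest-neighbour classifier are all assumptions chosen to make the structure of the algorithm concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two feature spaces gathered per trial:
# object property features (e.g. shape descriptors from the range images)
# and result features (e.g. motion statistics from the interaction videos).
n_trials = 60
object_props = rng.normal(size=(n_trials, 4))
# Make the result features depend on one property (say, a curvature-like
# descriptor), so behaviours such as "rolls" vs. "slides" form two clusters.
result_feats = np.column_stack([
    np.sign(object_props[:, 0]) * 2.0 + 0.1 * rng.normal(size=n_trials),
    0.1 * rng.normal(size=n_trials),
])

def kmeans(X, k, iters=50):
    """Plain Lloyd's algorithm: assign points to the nearest centre,
    then recompute each centre as the mean of its assigned points."""
    # Initialise centres at the extremes of the first coordinate so the
    # two blobs each receive one starting centre.
    centres = X[[X[:, 0].argmin(), X[:, 0].argmax()]].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return labels, centres

# Step 1: identify significant clusters in the result space.
result_labels, _ = kmeans(result_feats, k=2)

# Step 2: associate result-space clusters with object-property data via a
# classifier from property features to cluster labels (1-nearest-neighbour).
def predict_cluster(props):
    dists = np.linalg.norm(object_props - props, axis=1)
    return result_labels[dists.argmin()]

# A novel object's affordance class is then predicted from its observed
# property features alone, with no new interaction required.
novel = object_props[5] + 1e-3  # hypothetical novel object, near trial 5
predicted = predict_cluster(novel)
```

The key design point the sketch captures is that behaviour labels are never hand-annotated: the clustering of the result space supplies the class labels that the object-property classifier is trained on.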