embodied simulation and activity theory

July 4, 2007

Anatole pointed out that there are some aspects of cognition related to awareness, caused by mirror neurons, that Activity Theory might consider.

The basic ideas of Activity Theory relate ‘people who want to reach some goals’ to the ‘mediating tools’ they use to realise the activities necessary to ‘reach the goal’. Mediating tools can be cognitive (e.g. language, gestures, the content of narrative artifacts or pictures) or material artifacts (tools, objects, etc.).

This article suggests cognitive relationships between perception, motor action/language, and goal.

What I find interesting is that the paper differentiates between two kinds of neurons: some react to action possibilities (affordances) and some react to observed actions. The latter have been related to social cognition.

My understanding of affordances sees them as emergent constraints in the activity system, dependent on various system components and interactions. However, affordances can also be embedded in mediating tools, either through the cultural use of language or through culturally defined activity potentials objectified in artifacts and tools. Which part of the mirroring system is responsible for merely defining the affordances of objects for certain actions, and which part requires actually seeing others perform the actions of interest? Both are part of the activity system’s functioning.

It is not clearly articulated in the paper, but ‘context’ and ‘culture’ play a major role in the activation of mirror neurons. Activity Theory explains several aspects of activity systems that could be related to mirroring.

mirror context

Actions embedded in contexts, compared with the other two conditions, yielded a significant signal increase (Iacoboni et al., 2005).

For example, scaffolding (which is based on comprehending each other’s goals) and the interpersonal zone of proximal development would be well explained by mirroring systems.

My question is: how does mirroring happen (if it happens at all) in mediated spaces where we cannot see the others, but can see the consequences of their actions in software, articulated responses, etc.?

Does mirroring get activated if we read a book?

Also, how does mirroring influence certain kinds of learning in authentic settings?

Does mirroring influence how our value systems develop and what kinds of actions we perceive as relevant (e.g. environmentalism)? And does it also shape what we selectively become aware of when a situation is multi-perspective (dilemmas)?



Susan Hurley and Nick Chater (eds.)
Perspectives on Imitation, Volume 1: Mechanisms of Imitation and Imitation in Animals
MIT Press
Chapter 2: Understanding Others: Imitation, Language, Empathy
Marco Iacoboni

Minimal neural architecture for imitation: This architecture comprises a brain region that codes an early visual description of the action to be imitated, a second region that codes the detailed motor specification of the action to be copied, and a third region that codes the goal of the imitated action.
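Just to make this three-component architecture concrete for myself, here is a toy sketch of it as a pipeline (purely illustrative; the function names, dictionary keys, and data are my own assumptions, not anything taken from the chapter):

```python
# Toy sketch of the minimal imitation architecture: a visual description
# of the observed action, a motor specification for copying it, and a
# representation of its goal. Illustrative only, not a biological model.

def visual_description(observed):
    # Component 1: early visual coding of the action to be imitated
    return {"effector": observed["effector"], "movement": observed["movement"]}

def motor_specification(visual):
    # Component 2: detailed motor plan for copying the observed movement
    return f"move {visual['effector']}: {visual['movement']}"

def goal_coding(observed):
    # Component 3: what the imitated action is for
    return observed["goal"]

def imitate(observed):
    v = visual_description(observed)
    return {"motor_plan": motor_specification(v), "goal": goal_coding(observed)}

plan = imitate({"effector": "hand", "movement": "grasp", "goal": "hold cup"})
print(plan)  # {'motor_plan': 'move hand: grasp', 'goal': 'hold cup'}
```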

Neural mechanisms implementing imitation are also used for other forms of human communication, such as language. Functional similarities between the structure of actions and the structure of language as it unfolds during conversation reinforce this notion. We come to understand others via imitation, and imitation shares functional mechanisms with language and empathy.

The two types of neurons in macaques are called canonical and mirror. Both types fire when the monkey executes goal-directed actions such as grasping, holding, tearing, and manipulating. Some of these neurons fire for a precision grip, when the monkey grasps small objects like a raisin, and others fire for a whole-hand grasp, when the monkey grasps bigger objects such as an apple. When it comes to their visual properties, canonical neurons that fire when the monkey grasps a small object with a precision grip also respond to the sight of small objects graspable with a precision grip, but not to the sight of bigger objects graspable with, say, a whole-hand grip. Note that these visual responses are obtained when the monkey does not reach for and grasp the object; the simple sight of the object is sufficient to activate canonical neurons.

In other words, canonical neurons seem to be coding the affordance of an object, the pragmatic aspect of how-to-grab-that-thing, rather than its semantic content.

In contrast, mirror neurons do not fire at the sight of an object but at the sight of a whole action. A mirror neuron will fire at the sight of another individual grasping an object, but not at the sight of the object alone, and not at the sight of a pantomimed grasp in the absence of the object.
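To fix the contrast in my own head, the firing conditions of the two neuron types can be caricatured as simple predicates over a stimulus (a toy model of the description above, not a biological claim; the dictionary keys are my own invention):

```python
# Toy predicates for when each neuron type fires, per the description above.
# A stimulus is a dict with optional keys: "object" (an object is in view)
# and "observed_action" (someone is seen acting on it).

def canonical_fires(stimulus):
    # Canonical neurons code affordances: the mere sight of a graspable
    # object is sufficient; no observed action is required.
    return stimulus.get("object") == "graspable"

def mirror_fires(stimulus):
    # Mirror neurons need the whole action: an agent grasping a present
    # object. The object alone, or a pantomime without the object,
    # is not enough.
    return (stimulus.get("object") == "graspable"
            and stimulus.get("observed_action") == "grasp")

print(canonical_fires({"object": "graspable"}))                           # True
print(mirror_fires({"object": "graspable"}))                              # False
print(mirror_fires({"object": "graspable", "observed_action": "grasp"}))  # True
print(mirror_fires({"observed_action": "grasp"}))                         # False (pantomime)
```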

There was a convincing anatomical correspondence between the areas identified in the human brain as having mirror properties, and the macaque mirror areas.

The study shows a modulation of activity in inferior frontal mirror areas during imitation of goal-oriented action, with greater activity during goal-oriented imitation compared to non-goal-oriented imitation (Koski et al., 2002).

It has been shown that mirror neurons in the macaque fire not only at the sight of an action, but also at the sound of an action (e.g., breaking a peanut) in the dark (Kohler et al., 2002). These data suggest two things: first, mirror neurons have the auditory access necessary to implement speech perception; second, they enable a multimodal representation of action that is not linked to the visual channel only. This may facilitate learning of speech sounds via imitation.

How does one go from a relatively simple action recognition system to the complex symbolic levels reached by human language?

A type of answer (very vague, admittedly) to this question, provided by others elsewhere (Rizzolatti & Arbib, 1998), is that ‘gestures may be a primitive form of grammar’. The problem with both question and answer is that they accept a view of language as a phenomenon that can be essentially reduced to formal constructs such as grammar.

A salient feature of typical conversations that is ignored by traditional linguists is turn-taking. The average transition space from one speaker to another is less than 0.2 seconds, and longer pauses are immediately perceived as violations of temporal norms, even by young children. Eye gaze, body torque, rhythm attunement, and simultaneous gesture are part of a social interaction (rather than a “software program”, as classical cognitivism advocates). This interaction is critically dependent on the motor system’s facility for temporal orientation and sequence organization and, I propose, also dependent on (and plausibly even deriving from) the action recognition or mirror system.


There are some other important aspects from other sources.

The hypothesis is that the motor system, through its mirror neurons, is involved in perceiving speech, and that through evolution the “motor resonance” generated by the mirror neurons has been diverted from its original function to serve the needs of language.
Intentional communication requires one individual who is transmitting information and a second who is paying attention to receive it.


Neural mechanisms mediating between the multi-level personal experience we entertain of our lived body, and the implicit certainties we simultaneously hold about others:
Such personal and body-related experiential knowledge enables us to understand the actions performed by others, and to directly decode the emotions and sensations they experience.

A common functional mechanism, embodied simulation, is the basis of both body awareness and basic forms of social understanding:
– unconscious modeling of our acting body in space
– our awareness of the lived body and of the objects that the world contains


The fact that mirror-neuron activity is impaired in autistic children has fueled speculation about the importance of mirror neurons for social cognition.


This system of neurons allows human (and primate) brains to perform their highest tasks, including learning and imitating. The mirror neuron system allows us to create an image of the internal state of another’s mind.

Vittorio Gallese (2000), ‘The Inner Sense of Action: Agency and Motor Representations’, Journal of Consciousness Studies.

When describing correlations between neurons and behaviour we are forced to select a foundational perspective defining the broader context in which our investigation is supposed to be framed.

My personal view of this ‘broader context’ is that brain functions can be accounted for only by considering the dynamic interplay that occurs between the biological agent as a whole, and the ‘external world’ (see also Jarvilehto, 1998).

Any attempt to characterize brain functions as the outcome of encoding devices whose final product is a symbolic ‘language’ totally remote from the acting body is bound to fail.

Ungerleider and Mishkin (1982) have influentially proposed that the dorsal pathway should function to analyze the spatial relationships among objects, while the ventral pathway should code their identity. This model posits that vision is ‘implemented’ along two parallel routes: the where and what pathways.

From the early nineties this model has been questioned by an equally influential— and partly alternative—one (Milner and Goodale 1995; see also Gallese et al., 1999 for a critical discussion of it). In Milner and Goodale’s view (1995) the dorsal pathway is involved in the sensorimotor ‘on-line’ control of action (the where becomes how), while the ventral pathway is maintained (pretty much in accord with Ungerleider and Mishkin) to be the privileged site for the semantic description of objects.

Both models, although with substantial differences, posit a strict dichotomy between regions of the brain supposed to control the doing of things, and other ones supposed to know what things really are.

In the next sections I will address the relationship between action and perception quite differently from the tenets of classical cognitivism and neuroscience. This perspective will show the impossibility of drawing a sharp line between acting and perceiving. Furthermore, this account of sensorimotor processes will enable us to formulate some new hypotheses about how our brain is capable of re-presenting the world as phenomenally experienced.

The notion of representation needs to be freed from its abstract connotation — typical of the representational–computational account of the mind—and has to be relocated within a naturalistic perspective.
This new account of representation stresses its pre-conceptual and pre-linguistic roots. What does it precisely mean to define representation in control terms? It means to underline its relational—and therefore intentional—character.

The achievement of different goals turns those very same movements into different actions. What relationship exists between the motor system, movements, and actions? Until not so many years ago the motor system was conceived of as a mere movement controller. However, recent neurophysiological findings convey a totally different picture: the motor system controls actions.

It is more plausible to postulate that the objects whose observation triggers the neurons’ response are analyzed in relational terms. Object observation, even within a behavioural context not specifically requiring an active interaction on the side of the observer, determines the activation of the motor program that would be required were the observer actively interacting with the object. To observe objects is therefore equivalent to automatically evoking the most suitable motor program required to interact with them.

What I am proposing here is that to be phenomenally conscious of the meaning of a given object depends also on the unconscious simulation of actions directed to that object.

In humans, the development of language allows a new way of categorizing objects by means of their naming. By receiving a verbal description of an object one can infer its category without the need of acting on it. However, ‘to receive a verbal description of an object’, if one looks closer at it, could still be a way of experiencing this object, by involving the internal simulation of an action directed to that object.

What then really constitutes the meaning of an observed and internally represented object? A purely pictorial description of its shape, size and colour features, or rather also its intentional value? The pictorial description only gains its full, interesting meaning by being transiently bound to an individual first-person perspective on the level of conscious experience, by becoming the object-component of a much bigger, comprehensive picture.

Peri-personal space is by definition a motor space, its outer limits being defined by the working space of different body effectors such as the head or the arms. In fact, what is relevant to the neurons of these brain sectors is the location, with respect to the body, of ‘something’ that will become the target of a purposeful action. Again, we see that even space is inherently, intrinsically dependent on the dynamic relationship between agent and environment. Even more suggestive that this perspective is right are the data by Iriki and co-workers (1996).




  1. About peripersonal space, you may find these two articles very interesting:

    J Cogn Neurosci. 2000 May;12(3):415-20.

    When far becomes near: remapping of space by tool use.

    Berti A, Frassinetti F.

    Dipartimento di Psicologia, Universita di Torino, Italy.

    Far (extrapersonal) and near (peripersonal) spaces are behaviorally defined as the space outside the hand-reaching distance and the space within the hand-reaching distance. Animal and human studies have confirmed this distinction, showing that space is not homogeneously represented in the brain. In this paper we demonstrate that the coding of space as “far” and “near” is not only determined by the hand-reaching distance, but it is also dependent on how the brain represents the extension of the body space. We will show that when the cerebral representation of body space is extended to include objects or tools used by the subject, space previously mapped as far can be remapped as near. Patient P.P., after a right hemisphere stroke, showed a dissociation between near and far spaces in the manifestation of neglect. Indeed, in a line bisection task, neglect was apparent in near space, but not in far space when bisection in the far space was performed with a projection lightpen. However, when in the far space bisection was performed with a stick, used by the patient to reach the line, neglect appeared and was as severe as neglect in the near space. An artificial extension of the patient’s body (the stick) caused a remapping of far space as near space.

    Neuroimage. 2001 Jul;14(1 Pt 2):S98-102.
    Coding of far and near space in neglect patients.

    Berti A, Smania N, Allport A.

    Dipartimento di Psicologia, Universita’ di Torino, Torino, Italy. berti@psych.unito.it

    Far (extrapersonal) and near (peripersonal) spaces are behaviorally defined as the space outside arm-reaching distance and the space within arm-reaching distance. Animal and human studies have shown that this behavioral distinction corresponds in the brain to a composite neural architecture for space representation. In this paper we discuss how the activation of the neural correlates of far and near space can be modulated by the use of tools that change the effective spatial relationship between the agent’s body and the target object. When subjects reach for a far object with a tool, it is possible to show that far space is remapped as near. We shall also argue that space remapping may not occur when far space is reached by walking instead of using a tool. Copyright 2001 Academic Press.

  2. I can see some interesting points in it:

    ..when the cerebral representation of body space is extended to include objects or tools used by the subject, space previously mapped as far can be remapped as near.

    If we perform mediated activity (activity with tools), might the same remapping happen for us?

    Is the representation of extrapersonal and peripersonal space in the brain causing differences in activity perception?

    Most likely… especially considering that activity-motivation and action-goal relationships have been described as intertwined and as emerging from the use of mediating tools to realise objectives. Thus, as soon as I have an objective and grasp for a certain tool, I will re-map objects from extrapersonal space as tools in peripersonal space. This must be related to affordances.

    Some kind of comprehension of the tools’ anticipated affordances has to take place before I pick an object from extrapersonal space. I can imagine that if objects were interpreted within the frame of the subject’s intentions (which seems quite similar to embodied simulation), this would bring them into peripersonal space, where one can simulate the intentionality of the object (in one’s own context).


  3. […] Iacoboni’s studies about mirror neuron function divergence seem to support Norman’s assumptions. […]
