Archive for November, 2007


From ecologically defined affordances within activity systems to pedagogical pattern ontologies?

November 25, 2007

There is a recent position paper (2007) about merging two web cultures.

The Two Cultures: Mashing up Web 2.0 and the Semantic Web
by Anupriya Ankolekar, Markus Krötzsch, Thanh Tran, Denny Vrandecic

This paper describes how social software could serve both as a collaborative data-collection tool and as a personalized data-retrieval tool if merged with semantic ontologies connected to databases.

Why I find it interesting:
– I support the emergent nature of affordances within an activity system as a result of users' interaction with the environment (objects, other users, and their meanings).
– Thus, I believe we can talk of affordance perspectives within certain activity patterns, and of the ontospace of affordances within e-learning pedagogical settings.
– I have thought that besides folksonomies, which define the meanings of socially constructed content, we lack user-defined activity potentialities (affordances) of social tools.
– I am pushed to constrain affordances under pedagogical pattern ontologies (similar to IMS LD) so that certain semantic web technologies could be used for developing new distributed learning tools (e.g. mashup tools that determine widgets based on user-defined affordances for their anticipated activity systems, or distributed tools that use affordances for coupling pedagogical activities with suitable tools for these activities).

An activity structure (or “activity”) is a digital schema-based representation that describes the properties of a business activity (such as organizing a conference) and that semantically relates it to the people, artifacts, tools, and events involved in carrying out the business activity (Moody et al., 2006).

– I am seeking a technology or solution for HOW ontologies can be created on top of user-defined tags without killing the ecological variability within each ontological perspective.

A summary of the ideas I found in the position paper:

In the new vision of the Semantic Web, humans and machines share semantic data across the world. The Semantic Web uses machine-readable data formats that are the basis for semantic technologies.
Yet, it is necessary to incorporate semantics into applications in ways that allow more intuitive usage.
There needs to be more understanding of the “human-semantics interaction” aspects of how people approach semantically rich applications, and of ways to ease people into working with the semantic models underlying their software and tools.

The new Semantic Web would make use of simple, Web 2.0-style collaborative construction and evaluation of ontologies.

Incorporating the creation of semantic data into the interfaces of blogs, forums, online directories, etc. can turn them into semantic data sources. The integration of the two web cultures would make it possible for a large number of people to author small amounts of semantic data (e.g. FOAF). Future semantic search engines will provide services beyond the mere display of data, and will successfully employ complex processing tasks.
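To make "small amounts of semantic data" concrete, here is a minimal stand-in for FOAF-style triples in plain Python. Real FOAF data would be RDF (e.g. Turtle or RDF/XML) with URIs for people and properties; the names and tuples here are made-up illustrations of the idea that many tiny contributions become machine-queryable.

```python
# Illustrative stand-in for FOAF-style triples: (subject, predicate, object).
# Real FOAF uses RDF with URIs; these short names are invented for clarity.
triples = [
    ("alice", "foaf:name", "Alice"),
    ("alice", "foaf:knows", "bob"),
    ("bob", "foaf:name", "Bob"),
    ("bob", "foaf:knows", "carol"),
    ("carol", "foaf:name", "Carol"),
]

def who_knows(person):
    """Return everyone `person` directly foaf:knows."""
    return [o for s, p, o in triples if s == person and p == "foaf:knows"]

# A machine can answer this without reading any prose:
friends_of_alice = who_knows("alice")
```

The point of the sketch is only that once such triples exist, a search engine can traverse them (e.g. follow `foaf:knows` links) instead of scraping free text.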

Since the exchange process requires a shared common understanding of the involved data, differences in the ontologies need to be aligned and reconciled, and reliable mapping systems must be developed.

On the Semantic Web, collaboratively constructed ontological data must be transformed, merged, and collected to enable later reuse. Data must be mapped to a common terminology/format that can be further processed. Semantic technologies advertise the use of common data formats that are universal across application domains, and hence greatly facilitate the construction of mashups. Aggregators will play an important role in the emerging Semantic Web, especially as ontologies become more numerous and filtering methods become more complex.
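As a toy illustration of the "mapped to a common terminology" step, one might align two communities' tag vocabularies like this. The tags and the mapping table are invented for illustration; real ontology alignment involves far more than a lookup table, but the shape of the operation is the same.

```python
# Hypothetical mapping of community-specific tags to a shared terminology.
mapping = {
    "weblog": "blog", "blogi": "blog",
    "pic": "image", "photo": "image",
}

def align(tags):
    """Translate community tags into the common terminology,
    leaving unmapped tags untouched so nothing is silently lost."""
    return [mapping.get(t, t) for t in tags]

# Two communities' data merged after alignment:
merged = set(align(["weblog", "photo"])) | set(align(["blogi", "pic", "map"]))
```

Note that the unmapped tag ("map") survives the merge; a reliable mapping system has to decide what to do with such residue rather than drop it.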

Here are some other relevant links explaining how collaborative tagging and ontologies could be merged. Interestingly, the ideas seem not to have matured much. I didn't find clear solutions for building an ontology on top of user-defined affordances. There seem to be more ideas about how to integrate different ontologies.

Ontology or Folksonomy (2007)

The process of developing a tag data ontology forces us to identify the kinds of ontological assumptions made by various sources of tag data, and to specify a vocabulary for stating those assumptions.


In cases where social tagging is sufficient, ontologies may simply be overkill. But there are many, many cases where social tagging simply does not, and cannot, have the semantic rigour that is needed.

Imagine a folksonomy combined with an ontology — a “folktology.” In a folktology, users could instantly propose or modify ontological classes and properties in the same manner that they do with tags in tagging systems. The most popular ontological constructs (the most-instantiated classes, or slots on classes, for example) would “rise to the top” and self-amplify, while the less-instantiated ones would “fall to the bottom” over time. In this way an emergent, self-organizing, and self-pruning ontology could emerge within a community. Such a system would have the ease and adaptability of a folksonomy plus the semantic richness and formal structure of an ontology.
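The "rise to the top" dynamic can be sketched in a few lines. The class names and the pruning threshold are invented; a real folktology would track classes, slots, and provenance rather than bare counts, but the self-amplifying/self-pruning mechanism is just popularity ranking over instantiations.

```python
from collections import Counter

# Toy "folktology": users instantiate proposed classes; counts decide
# which constructs rise to the top and which fall to the bottom.
instantiations = Counter()

def instantiate(cls):
    instantiations[cls] += 1

# Simulated community activity (invented class names):
for cls in ["Event", "Event", "Event", "Talk", "Talk", "Happening"]:
    instantiate(cls)

def top(n=2):
    """Most-instantiated classes 'rise to the top'."""
    return [cls for cls, _ in instantiations.most_common(n)]

def prune(min_uses=2):
    """Rarely instantiated constructs are dropped from the ontology."""
    return {c: n for c, n in instantiations.items() if n >= min_uses}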

Soft ontologies

Soft ontology, as proposed in computer science circles by Aviles et al. (2003), is a definition of a domain in terms of a flexible set of ontological dimensions. Unlike standard ontologies, the approach allows the number of its constitutive concepts to increase or decrease dynamically, any subset of the ontology to be taken into account at a time, and the order, mutual weight, or priority of its dimensions to vary in a graded manner so as to allow different ontological perspectives. Where conventional ontologies describe or interpret the conceptualization of a domain from a prioritized perspective, the soft ontology approach transfers the task of interpretation to the observer, user or learner, depending on the context.
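The flexible-dimensions idea can be sketched with a weighted dictionary of dimensions. The dimensions and weights below are invented for illustration, not taken from Aviles et al.; the point is that each observer derives their own graded, partial view instead of inheriting one fixed conceptualization.

```python
# A soft ontology as a flexible, weighted set of dimensions (invented).
soft_ontology = {"habitat": 0.9, "diet": 0.6, "season": 0.3}

def perspective(ontology, weights):
    """Derive an observer-specific view: reweight some dimensions,
    drop the ones the observer sets to zero."""
    view = dict(ontology)
    view.update(weights)
    return {d: w for d, w in view.items() if w > 0}

# One observer boosts 'season' and ignores 'diet' entirely:
birder = perspective(soft_ontology, {"season": 0.8, "diet": 0})
```

Two observers applying different weight sets to the same underlying dimensions get different ontological perspectives, which is exactly the interpretation-shifting the paragraph describes.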


Gruber defined ontology as a “formal specification of a conceptualization.”
There are at least 40 terms or concepts across these various disciplines, most related to Web and general knowledge content, that have organizational or classificatory aspects that — loosely defined — could be called an “ontology” framework or approach.

It is not unrealistic to also seek “naturalness” in the organization of other knowledge domains, to seek “naturalness” in the organization of their underlying ontologies. Like natural systems in biology, this naturalness should emerge from the shared understandings and perceptions of the domain’s participants.

The practice within the ontology community is to characterize ontologies by “levels”, specifically upper, middle and lower levels. Most of the content in upper levels is more akin to broad, abstract relations or concepts than to “generic common knowledge.” Almost all of them have both a hierarchical and a networked structure, though their actual subject structure relating to concrete things is generally pretty weak.

“Binding layers” are a comparatively newer concept. Leading upper-level ontologies propose their own binding protocols to their “lower” domains, but that approach takes place within the construct of the parent upper ontology and language. Such designs are not yet generalized solutions.

Ontologies can be combined using federated approaches. An important goal in any federated approach is to achieve interoperability at the data or instance level without unacceptable loss of information or corruption of the semantics.

A different, “looser” approach, but one which also grew out of the topic map community, is the idea of “subject maps.” A subject proxy is a computer representation of a subject that can be implemented as an object, must have an identity, and must be addressable (this point provides the URI connector to RDF). Each contributing schema thus defines its own subjects, with the mappings becoming meta-objects. These, in turn, would benefit from having some accepted subject reference schema (not specifically addressed by the proponents) to reduce the breadth of the ultimate mapped proxy “space.”


Embodiment of abstract concepts

November 18, 2007

This weekend Anatole Fuksas visited Tallinn and we had great meetings and brainstorming with him and our new media prof. Mauri Kaipainen and his partner Pia Tikka from narrative film studies. Tomorrow Anatole is going to give a talk about ‘Storytelling and Hybrid Ecologies in the Age of Social Networking and Locative Media’ at KERG seminar in Tallinn University.

During these weekend days there were some talks and some events that enabled me to get a better understanding of how the embodiment of abstract concepts could be explained.

We were talking at Mauri’s place about whether concepts like ‘photosynthesis’ could be easily embodied and activated directly through the sensory-motor path, without the activation of symbolic schemata processing in between, as mirror-matching theories suggest happens when we look at, hear, or read something from the environment that we have embodied before and that relates to our intentional framework.

The problem with abstract concepts is that they are often invisible and thus not so easily embodied. We can think of various kinds of reasons why some abstract concepts cannot be directly embodied: they are at the micro-level, so we have no sensory-motor experience with them (genes), or complex at the supra-macro level (why the seasons emerge due to astronomical causes); they may have emergence patterns that make them visible only as time-related phenomena (like evolution); or they may look totally different and have several visible and invisible emergence patterns (like boiling, with a start and end at the visible level, with various emotional, perceptual, and motor-neuron activations, and Brownian motion at the molecular level, with the endless chaotic movement of molecules at higher speed).

Previously, Anatole, with his philology background, has suggested that photosynthesis for him relates to something green that is synthesized… and I argued with him that in many languages the concept is actually borrowed from foreign languages, and it does not activate anything familiar word-wise, besides maybe thinking of the synthesis of photos, which sounds similar but doesn’t lead us down the right path of embodiment.

Pia suggested that for her, knowledge of photosynthesis, as a learned process, always emerges as a sensory-motor sequence of actions when she hears the word. Basically, if I have learned what photosynthesis means, I have embodied the sequence of actions through my neural sensory-motor path, and I can reactivate them when the keyword ‘photosynthesis’ is heard.
Ok, this seems quite likely… but there is still the problem of how I created the pattern of sensory-motor activations that will later become related to the abstract word.

Today we went to see some nice medieval artifacts at Niguliste church. There Anatole initiated a talk about exemplars (e.g. the exemplar of the feeling of guilt in the altar paintings), which were supposed to create emotional and motor-action correlates in believers.

The idea of exemplars as the carriers of the whole complex system of abstract concepts reminded me of the talks about photosynthesis. The way we create an abstract concept is by enabling the person to embody the whole sensory-motor action system using some analogical exemplars of this concept. We need to model something that evokes similar affordances for action potentialities.

Basically, the concept of photosynthesis can be triggered a bit differently by many exemplars, which evoke different action potentialities and their emotional correlates, and understanding and precisely knowing what photosynthesis is presupposes having more than one embodied experience of these different exemplars.

So, we could teach an abstract concept by embodying sensory-motor actions from some known situations that are modeled for us in relation to the new abstract word.
Well, a bit trivial? Analogy-based teaching is mapping old sensory-motor experiences to new objects and processes.

As suggested by Gentner and Gentner, the success of an analogy-based teaching method depends on student knowledge of the base domain (i.e. prior knowledge), and student acceptance of the analogy.

Going deeper into what embodied concepts are, diSessa’s theory of knowledge as phenomenological primitives or p-prims (e.g. closer means warmer), which are activated like fields by coordination classes, seems very similar to affordance emergence (field-like, perspectival) and embodied concepts (sensory-motor experiences, phenomenological primitives).

p-prims can be understood as simple abstractions from common experiences. They are phenomenological in that they are responses to experienced and observed phenomena. They are linked to, and cued by, those phenomena rather than being general or abstract. They are primitive in the sense that they generally are self-evident and need no explanation; they simply happen.
Coordination classes are internally coherent networks of primitives and readout strategies.

When teaching an abstract concept, we will try to activate a field of phenomenological primitives the learner has previously activated as sensory-motor paths leading to perceiving those phenomena, embodying them as concepts. Then we try to tag the new abstract symbol (eg. photosynthesis) to these embodiments and relate these sensory-motor experiences to the new elements from the environment. It is about migrating the emergence of affordances (action potentialities) from one system to another new system.

It is known that the activation of wrong phenomenological primitives (e.g. warmer means closer) can cause altered understandings of processes (e.g. the sun moves closer to the earth and then we get summer, and when it moves further away it gets colder and we have winter).

Teaching abstract concepts through embodied simulations has been one of the suggested methods in science (e.g. playing the solar system or molecules) at the primary level. For example, my friend Pirkko Hyvonen has studied PLEs, Playful Learning Environments, where students learn mathematics and other subjects outdoors by playing and embodying concepts.

Since it is known that embodiment can also be visually triggered (for example, a great paper Anatole showed me on embodying movements in art), models can be used for understanding or embodying concepts.

For example, Uri Wilensky’s studies of participatory simulations, where kids learn difficult phenomena by programming them at different emergence levels in the NetLogo programming language, which helped them to understand the phenomena better, can be explained as embodiment: moving the LOGO turtles and perceiving their actions at the individual level (one molecule in the water) and at the system level (boiling water as a system).
Another nice example is participatory simulations with PDAs.
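The NetLogo idea of perceiving the same phenomenon at the individual and the system level can be sketched in plain Python. The Molecule class, the heating amount, and the "mean speed as boiling" reading are my own illustrative stand-ins, not Wilensky's actual model.

```python
import random

# A NetLogo-flavoured sketch: each "turtle" is one molecule doing a
# random walk (the micro level a learner can follow one agent through);
# "boiling" only shows up at the system level, as rising mean speed.
random.seed(0)

class Molecule:
    def __init__(self):
        self.x, self.y, self.speed = 0.0, 0.0, 1.0

    def step(self):
        # Micro level: one molecule's chaotic movement.
        self.x += random.uniform(-self.speed, self.speed)
        self.y += random.uniform(-self.speed, self.speed)

def heat(molecules, amount):
    for m in molecules:
        m.speed += amount

def mean_speed(molecules):
    # Macro level: the system-wide pattern perceived as "boiling".
    return sum(m.speed for m in molecules) / len(molecules)

water = [Molecule() for _ in range(100)]
heat(water, 1.5)
for m in water:
    m.step()
```

A learner can "be" one molecule (follow `step()`) and then zoom out to `mean_speed(water)`, which is exactly the two emergence levels the participatory simulations exploit.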

So, the important thing in studying abstract concepts is that we can do something with the model elements in order to embody action and emotion potentialities as sensory-motor paths?


expectations to new social learning tools

November 13, 2007

Social software is generally recognized as tools whose development is highly dependent on users‘ mutual interaction mediated by these tools, involving group processes such as discussion, mutual advice or favors, and play (Shirky, 2002).

Any activity is always mediated by the tools that we create in the process of actualizing certain affordances in our goal-directed and enculturated actions – when making something from the environment into our own, or when bringing something of our own ideas into the environment. More than in earlier times, current social tools are the creation of communities. While the artifacts and meanings created and distributed with social software obtain, in the process of use, community-defined folksonomical dimensions, the activities that are performed and evolve in these systems as a result of community interactions have remained implicit, and are not well observable for the users of social software. Social software still lacks the means to make the activity potentialities of tools, and the activity patterns that emerge in communities, more observable. What we basically lack are soft-ontologically defined constraints/possibilities of actions, determined by the communities who use social tools.

When using social software for learning in institutional courses, but also for personal self-directed learning with other learners on the Web, explicit socially defined action potentialities within activity systems would enhance the selection of communal tools for common objectives. Some recent developments, such as the Friend of a Friend (FOAF) technology, which aims at creating a Web of machine-readable pages describing people, the links between them, and the things they create and do, seem to promise that action-based automated search for learning partners would soon become possible. The best practice of tool use for certain learning activities would thus be disseminated, giving valuable input to others and narrowing down their choice of appropriate tools for particular learning goals. For example, it is suggested that super-peer networks would enable learners to observe, record and share their activity practices with artifacts through networks (Clematis et al., 2007). If FOAF and similar specifications could express personal action potentialities with certain social software, their communities and artifact types, as we described earlier, the decision processes in constructing collaborative landscapes for learning purposes could be supported by technological means.

Tools that support the construction of group landscapes from distributed personal tools play an important role in the application of the new Learning Environment Design model. The new generation of aggregation and mashup tools is anticipated to support the construction of distributed personal and group learning landscapes using the affordance-based activity system model. The mashup of a learning environment from distributed feeds would be realised by considering, on the one hand, the anticipated affordances for action and personal activity preferences, which may be described with FOAF-like scripts, and on the other hand, the socially defined action potentialities of tools, which would enable the mashup tools to automatically select a suitable set of widgets for certain learners or groups. In these mashup tools learners would retain full control over the selection of feeds – they can ignore or close some tools and even add new ones. Such user activity can, in turn, be used to update the semantic models, refining the activity-tool relations and improving the tool recommendations.
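A minimal sketch of how such affordance-based widget selection might work. The tool names, affordance labels, and the overlap-scoring rule are all invented for illustration; no real mashup tool's algorithm is being described.

```python
# Socially defined action potentialities of tools (invented labels).
tools = {
    "weblog": {"reflecting", "commenting"},
    "wiki": {"co-writing", "commenting"},
    "bookmarking": {"collecting", "tagging"},
}

def recommend(anticipated, tools, k=2):
    """Rank tools by the overlap between the learner's anticipated
    affordances and each tool's socially defined action potentialities,
    returning the k best matches as the proposed widget set."""
    scored = sorted(
        tools,
        key=lambda t: len(tools[t] & anticipated),
        reverse=True,
    )
    return scored[:k]

# A learner (or FOAF-like profile) anticipating collaborative writing:
landscape = recommend({"commenting", "co-writing"}, tools)
```

Closing or adding a tool would simply feed back into the `tools` descriptions, which corresponds to the model-updating step mentioned above.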

A critical factor in the effective use of distributed social landscapes, and of scaffolding in such systems, is the possibility to monitor the use of landscape elements and the information flows between them in the course of action. New developments in social software systems already make it possible to visualise the folksonomy-based meaning-building dimensions in communities (see Klerkx & Duval, 2007). What is still needed is the visualisation of activities and learning landscapes for the learners. This may be realised by visualising the mashed learning landscapes as affordance-based activity systems in which the distributed social tools would also convey the socially defined activity potentials. Certainly, this would not indicate which of the available activity potentialities were put into action. To understand this, interaction within specific social tools, and the content of the feeds between tools, must be analyzed (e.g. which regulatory, social or content-creation types of action potentialities were put into action). But that seems an even more complicated issue.

Joint learning situations would also involve the use of asynchronous or synchronous interaction tools when working with artifacts. Some tools, like Gabbly chat, can now be easily integrated with different webpages, social software applications and mashup tools. Yet the development of tools that keep the interrelations between the talked-about content and the productive actions made on an artifact should enhance learning in distributed landscapes. The future of using distributed social software elements for self-directed and collaborative learning purposes lies in selectively mashing the evidence from different activities, e.g. weblog posts and commentaries with certain tags, artifacts purposefully created and stored in different repositories, wiki contributions, discourse logs, etc. In these places (hubs) where our distributed knowledge meets again, we propagate ourselves as the connectors between communities. If we mix our distributed self with the knowledge of our community members (like in the micro-blogging feeds of Jaiku), these mashed feeds may work as triggers for learning. They enable us to access knowledge community-wise and transfer it to other community spaces.


An ecological approach in inquiry learning environments

November 9, 2007

Some ideas from the paper I am trying to write. I am especially grateful to Anatole Fuksas for triggering me to think about embodied concepts rather than training for knowledge and competences in inquiry systems. This new approach seems well in accordance with my previous ideas of such systems as emergent semiotic ones, in which the learners perceptually create translation borders between the artifacts in inquiry steps. The new idea relates well to this translation part, where learners with perceptual translation problems are unsuccessful in performing certain actions of the inquiry process.


Recent findings in neuroscience make it possible to consider the interrelations of the components of learning environments, inquiry actions and knowledge construction, uniting all these into one ecologically defined perceptual-action system.

In traditional sensorimotor schemes of information processing, an action is often seen as a late step caused by stimulus processing (Prinz, 1997). This means that depending on the input information from the environment (e.g. learning materials and the problem statement) and the learners‘ previous knowledge, inquiry actions are planned to solve the problem (Hommel et al., 2001) (see fig. 1).

The traditional view of information processing has assumed that people constantly process mediated representations of information from the outside environment, and information retrieved from long-term memory, in their working memory, in order to construct dynamic mental models that mediate their awareness of themselves and of phenomena, and trigger action performance.

Figure 1. Inquiry learning environment ecology.

Hommel (2003), however, assumes that action control for all behavioral acts is ecologically delegated to the environment – when planning actions in terms of anticipated goals, the sensory-motor assemblies needed to reach the goal are simultaneously and selectively activated in the environment, and bound together into a coherent whole that serves as an action plan, facilitating the execution of goal-directed actions through the interaction between the environment and its embodied sensory-motor activations.

This idea could be translated into what happens in the learning environment: the learner has previous experiences with similar actions and situation elements, and this enables them to anticipate certain action goals and their sensory-motor correlates in the learning environment, which in turn constrain and guide learners to embody certain sensory-motor activity patterns and perform appropriate inquiry actions in the system. Goals and subsequent actions are thereby not sequentially deduced from input information and previous knowledge; they emerge ecologically from the coupling between anticipated goal-directed action potentialities and the features perceived in the environment as affordances for these actions.

Discoveries in cognitive science and neuroscience about the functioning of mirror-neuron systems (Gallese et al., 1996) claim that cognition is embodied by grounding knowledge directly in sensory-motor experiences, without the mediation of symbolic representations (Pecher & Zwaan, 2005). We perceptually activate certain multimodal action-potentialities of embodied symbols to mediate our purposeful and goal-directed actions (see Gallese & Lakoff, 2005). These embodied dimensionalities of symbols are activations of neural representations located in the sensory-motor areas of the brain.

The embodied view of concepts as activity patterns makes learning in authentic contexts even more meaningful – when activating information about objects with which we have had direct emotional and action-related experiences, the same neural areas are involved as when activating the sensory-motor circuits of the brain while performing actions mediated by those objects (Gallese and Lakoff, 2005).

From the ecological viewpoint, complex multi-representational learning environments are built on the supposition that people should construct knowledge and inquiry competences in the process of moving from authentic and perceptually known narrative or visual settings, through inquiry actions, to abstract narrative or visual settings in which the objects and events are highly abstract and have no direct perceptual correlates in the sensory-motor system. When planning inquiry actions, the various artifacts embedded in the learning environment provide action potentials that the learner can embody. In the sequential or iterative process of inquiry, the perceptually embodied concepts related to the problem will be coded through inquiry procedures into different semiotic registers (Duval, 2000), and tied to arbitrary theoretical semantic knowledge.


That is, so far, the abstract of my new ideas on complex multi-representational systems. I intend to use some example cases to show how the wrong selection of affordances in the narrative and visual artifacts of a learning environment shaped the inquiry actions taken with the narratives.

In one paper we have collected evidence of changes in the awareness of learning objects’ affordances in a complex inquiry system, which could be used as evidence of the learning environment as an ecologically defined system.


distributed self

November 5, 2007

One of the phenomena of web 2.0 is keeping a distributed self.

We all invade various spaces: weblogs, twitter, jaiku, flickr, youtube, social bookmarking spaces etc.
What these distributed spaces enable us to do is to keep our personality in multiple places at the same time and to vary our presence in different modalities.

The result of keeping a distributed self is an increased likelihood that my external knowledge, my artifacts, my meanings and my activity patterns will be noticed, modified and duplicated.

Keeping distributed self keeps us in touch with different communities.

Being simultaneously in different communities enables us to bring information across the borders of the communities, initiating semiosis and enabling us to constantly create new knowledge.

The maintenance of distributed self has also become external – we tend to feed together our distributed spaces into aggregators or weblogs in order to feel as a whole and observe our external presence. In these places (hubs) where our distributed knowledge meets again, we propagate ourselves as the connectors between the communities.

Can we create in these spaces as well? If we mix our distributed self with the knowledge of our community members (like in microblogging feeds of Jaiku or Twitter), these mashed feeds may work as triggers for writing new blog entries. They enable us to access knowledge community-wise and transfer it to our other community spaces.

The social media starfish is a good representation of our distributed self. Another idea of digital and distributed self is here.

There is also an article by Stanton Wortham about the distributed self – The Heterogeneously Distributed Self,
Journal of Constructivist Psychology, Volume 12, Issue 2, March 1999, pages 153–172.

The article explains that heterogeneous distribution can be applied to the self.
The self is heterogeneously distributed because a coherent self emerges from the interconnection of structures of diverse sorts, which together facilitate the experience and manifestation of a coherent identity.

Performative account of self: the self emerges when a person repeatedly adopts characteristic positions, with respect to others and within recognizable cultural patterns, in everyday social action (Butler, 1990).

The author suggests locating self in several different types of structures, including performative, psychological and other patterns.

For example, the author writes about how our past and present selves interact in autobiographical narratives.

This makes me think: if we reflect in a weblog, do we also talk with our past and present selves in order to create some coherence?


ecology of hybrid social web

November 1, 2007

The rising social web and its rapid transformation into a hybrid environment that integrates virtual and real spaces have given birth to new activities:

self-management of personal mediation spaces constructed by orchestrating distributed sets of web-based and mobile tools;
self-propagation of one’s presence, and self-positioning into the multi-perspective hybrid places evoked by merging virtual and real spaces, through creating personal external meaning-spaces and geo-tagging personal meanings as action potentialities to hybrid locations;
self-localization in the hybrid space through tagging, feeds, and mashup technologies for obtaining awareness of people, their meaning perspectives and their activities;
self-identification and alignment with virtual communities and their spatial perspectives through detection, participation in, and playful variation of their activity patterns, and the connective uptake and translation of meanings.

Together, these activities establish the dynamic ecology of the hybrid social web as an activity system. It consists of external spaces with objects, which people need to activate as embodied concepts in the neural circuits of the sensory-motor area of the brain. Embodiment happens by intentionally evoking anticipated affordances related to previously experienced or culturally defined action potentialities and their emotional correlates.

Embodying objects in space as embodied concepts turns them, for persons, into places with embedded meanings, which serve as mediating tools for activities. People propagate their activity patterns in spaces as meanings attached to artifacts, which they externalise through mediating tools. Each artifact, when interpreted in a space, constrains the dimensions of the space for the person: it contains action potentialities (affordances) that will be created and embodied by a new person, and which start constraining the space, the actions in the space, and the emotions related to this space. We can see these artifact-action-triggered affordances as a sort of ecological activation, or even an instruction for the user about how the environment can be used.

In order to perceive certain activity potentials of other people in a space, people need to be intentionally on the same wavelength and embody similar (or potentially competing) action potentialities and their emotional correlates (affordances). Self-identification of spaces into places enables the person to locate himself, propagate his identity, and distinguish it from other identities, thereby creating an ecological niche to inhabit. Continuous self-localization with respect to other space perspectives and their inhabitants, and potential adjustment to their places, serves community formation, which is ecologically important for defending communal places.

The ecological social web is in dynamic change because the embodiment of action potentials is never exactly the same across individuals, which brings in variation. Within communities this variation is low, resulting in similar perception of places, uptake of meanings, and participation in common activity patterns. As different communities embody different perspectives on spaces, potential borders arise in understanding meanings and in noticing afforded activity patterns. Thus the social web as an ecosystem obtains structural complexity – certain communities may simultaneously inhabit the same space while defining it as different places. The uptake of meanings from another community in a jointly inhabited space may also happen; such meanings will be embodied in different intentional frames, causing novel activity patterns to emerge.


We may walk in town past the previous location of the Bronze Soldier monument. Depending on our alignment to a certain cultural-ideological group, we may embody certain emotions (fear/anguish/pride) and maybe some motor actions, like (not) going there. If we are inhabitants of hybrid social spaces, we may be tempted to take a picture of this place and upload it to Flickr, geotagging it on the map of Tallinn. We may also comment on our experiences with the location in a post on our weblog and drag the feed of the Flickr image into the weblog. Let's suppose many people do the same thing. They can also see the other images tagged to the place, maybe some from the time when the soldier was still there, or some from hot days in Tallinn. They reflect their different meanings and related action potentials in the narratives of their weblogs.

Someone else studying the event will find the different weblogs and images and needs to detect what the action potentials of these people were; if he is able to detect some commonalities in meanings, he may also embody some action potentials. These depend on the cultural and activity background of this person (e.g. whether he is a citizen of Moscow or of New York). He will comment on the posts and take other actions, presumably sending some liberty fighters to Tallinn or deciding not to take a tourist trip to Tallinn. We can also imagine a certain software that enables people to geotag their images or meanings directly to the Bronze Soldier location and view the meanings on the spot. This would create the potential for different communities to embody different action potentials, and also the possibility of developing novel activity patterns – for example, narratives of the place, grounding of what happened, finding compromises between cultures, etc. We can say then that the previous Bronze Soldier location becomes a space with meanings that serves as a mediating device for understanding and participating in activities.
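The imagined software above could be sketched minimally as geotagged artifacts that carry community-specific meanings and affordance tags, queried by location. The following Python sketch is purely illustrative: the classes, field names and the example meanings are my own assumptions, not an existing system, and the coordinates of the Tõnismägi site are approximate.

```python
from dataclasses import dataclass, field
from math import radians, sin, cos, asin, sqrt

@dataclass
class Meaning:
    community: str    # cultural-ideological group attaching the meaning
    text: str         # narrative or comment about the place
    affordances: list # anticipated action potentials, as free-form tags

@dataclass
class GeoArtifact:
    lat: float
    lon: float
    meanings: list = field(default_factory=list)

def distance_km(a_lat, a_lon, b_lat, b_lon):
    """Haversine great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(b_lat - a_lat), radians(b_lon - a_lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a_lat)) * cos(radians(b_lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def meanings_at(artifacts, lat, lon, radius_km=0.1):
    """Collect all community meanings attached within a radius of a spot."""
    return [m for art in artifacts
              if distance_km(art.lat, art.lon, lat, lon) <= radius_km
              for m in art.meanings]

# Hypothetical example: the former monument site with two communities'
# contrasting meanings and affordances attached to the same coordinates.
site = GeoArtifact(59.431, 24.742)
site.meanings.append(Meaning("community A", "a place of mourning", ["visit", "commemorate"]))
site.meanings.append(Meaning("community B", "a contested symbol", ["avoid", "debate"]))

for m in meanings_at([site], 59.431, 24.742):
    print(m.community, "->", m.affordances)
```

Viewing the meanings "on the spot" then reduces to a radius query around the viewer's position; the interesting design question is that the same coordinates return different affordances depending on which community's meanings the viewer takes up.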


Embodied theory of concepts

November 1, 2007

From Claudia Scorolli and Anna M. Borghi, “Sentence comprehension and action: Effector specific modulation of the motor system”

Cognitive science and neuroscience have recently claimed that cognition is embodied: knowledge is grounded in sensorimotor experiences, and there is a deep unity among perception, action and cognition (Pecher and Zwaan, 2005).

The traditional understanding of an indirect connection between concepts, as symbols of something real, and the perceptual experiences we gain through our sensory-motor systems has been called into question.
The new, embodied view of concepts considers perceptual symbols to be neural representations located in the sensory-motor areas of the brain. This means concepts are not arbitrary symbols; rather, a concept consists of the reactivation of the same neural activation pattern that is present when we perceive the objects or entities it refers to and when we interact with them.

Object attributes are thought to be stored near the same modality-specific neural areas that are active when objects are being experienced (Martin, Ungerleider and Haxby, 2001).

Symbols, according to the embodied view, are not amodal, but multimodal – for example, they refer both to the tactile experience of caressing a dog as well as the auditory experience of hearing a dog bark (Barsalou, 1999; Gallese and Lakoff, 2005).

Concepts make direct use of sensory-motor circuits of the brain (Gallese and Lakoff, 2005).
The same neural areas are involved when forming motor imagery and when activating information on objects, particularly on tools.

Pecher, D., and Zwaan, R.A., 2005, Grounding Cognition: The Role of Perception and Action in Memory, Language, and Thinking, Cambridge University Press.

As a researcher with a background in the natural sciences, I wonder what happens if the symbol we read is not a first-order symbol directly related to an object (real dog – the word “dog”), but a description of micro- or macro-level objects and phenomena (e.g. cells, genes, evolution), or of phenomena that are highly abstract in nature (photosynthesis). How do we embody these concepts?

Or what if we read not a scientific narrative but scientific images (graphs etc.) or a formal scientific symbol language (mathematical or physical formulas, chemical reactions)? Presumably we need to process such arbitrary concepts somehow and link them to embodied concepts at some moment in order to grasp the thing?

If we think of scientific narratives (papers, books), we can say that they have reduced action potentialities, or few embodied concepts, in them. The difficulty for the reader of such texts is to perform cognitive symbol processing: to internally relate abstract concepts, with their limited affordances for action potentialities, to the embodied concepts that are grounded in real-world perception. Presumably only then can we understand the text.

