
Affordance theory in games and simulations

June 23, 2007

One of my students and colleagues, Kairit Tammets, who is analyzing the results of our affordance-based course activity patterns and learning landscapes, has bookmarked in Delicious an interesting paper about Affordance Theory.

What I find interesting in this simulation-related application of affordance theory is that they also couple the environment's affordances with the subjects' action goals, taking into account that each subject perceives differently.

What I find questionable, especially if we transfer the whole idea to real people and distributed learning environments, is their somewhat limited treatment of the objects in the environment and of the dynamic changes in subjects' perception. If we see the environment as an activity system, the subjects in it also start constraining each other's perception. There are interpersonal affordances, e.g. the rules and roles subjects take on when acting together, the joint goals they develop, etc. (the article tries to solve this with conceptual objects). Person-to-person affordances, which in turn constrain the objects' affordances for those persons, are equally important in such systems (the article uses dynamic states of objects, which would have to integrate the dynamic states of perception as well). Secondly, the subjects in the simulation do not seem to learn: their set of perception rules remains constant. In real life this is not the case.

From

Affordance Theory for Improving the Rapid Generation, Composability, and Reusability of Synthetic Agents and Objects

Jason B. Cornwell, Kevin O’Brien, Barry G. Silverman, Jozsef A. Toth

In a simulation that takes social dynamics into account, individual points of view are significantly different and agents act with less than perfect knowledge of the world. In order to begin to capture the subtleties of social interaction or simulate human emotionality, agents must act based on their own unique socio-cultural background and personal experience. Each agent contains a unique semantic markup of the world describing every perceived object in terms of the agent’s own cultural and emotional history. To add a new object to that world, each and every agent would need to be revised to include this object and the actions available as a result of its presence into their individual semantic markup. With a simulation containing more than just a few agents, such a solution is untenable.

Affordance Theory offers an elegant solution to this problem. If the semantic markup of the objects in the environment is contained within and broadcast by the objects themselves rather than the agents perceiving them, then agents and objects can be added independently. A simulation developer adding a new agent type to the system need not worry about what agents or objects are already instantiated. The objects in the simulation will broadcast their affordances, or the actions that they afford to the agent in combination with some measure of the anticipated results of those actions, to any new agent, allowing it to manipulate them with no a priori knowledge of that object whatsoever.
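To make the broadcasting idea concrete, here is a minimal sketch in Python. All class and action names are my own hypothetical illustrations, not from the paper: objects carry their own semantic markup and broadcast it, so an agent can act on an object it has never seen before, and new object types can be added without revising any agent.

```python
class SimObject:
    """An object that broadcasts the actions it affords."""
    def __init__(self, name, affordances):
        self.name = name
        # Map of action name -> anticipated result of performing it.
        self.affordances = affordances

    def broadcast(self):
        """Return the afforded actions together with anticipated results."""
        return dict(self.affordances)


class Agent:
    """An agent with no a priori knowledge of any object type."""
    def __init__(self, name):
        self.name = name

    def perceive(self, objects):
        # Collect every afforded action from every object in view.
        options = {}
        for obj in objects:
            for action, result in obj.broadcast().items():
                options[(obj.name, action)] = result
        return options


# A new object type is added without touching any agent code.
door = SimObject("door", {"open": "passage revealed", "knock": "noise made"})
radio = SimObject("radio", {"switch_on": "music plays"})
scout = Agent("scout")
print(scout.perceive([door, radio]))
```

The key design point is the inversion of responsibility: the markup lives in the object, so the number of revisions needed when adding an object is zero rather than one per agent.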

Affordances cannot be uniform for all agents. Each agent must still have a unique view of the objects in its environment. The affordance approach offers two possibilities for introducing individual differences in perception. The first is to have multiple perceptual types for each object, accompanied by perception rules that determine which type is active for any given agent.

The second possibility is to provide some mechanism in each agent that will automatically modify or interpret the affordance according to some property internal to the agent. For example, an agent system might be devised that categorized actions in terms of certain central goals.

The best approach is a union of these two possibilities. Each perceivable object (including agents) should contain a variety of perceptual types representing fundamentally different perceptions of that object. Concurrently, each agent should contain a system for interpreting the actions afforded by each object according to its own properties.
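The union of the two mechanisms might be sketched as follows; the culture-based perception rule and the goal weights are hypothetical stand-ins for whatever properties a real agent model would use. Each object holds several perceptual types, a perception rule selects the active one for a given agent, and the agent then interprets the afforded actions against its own goal priorities.

```python
class PerceivableObject:
    def __init__(self, name, perceptual_types):
        self.name = name
        # perceptual type name -> {action: goal that action serves}
        self.perceptual_types = perceptual_types

    def perceived_by(self, agent):
        # Perception rule: the agent's culture selects the active type,
        # falling back to a default perception.
        return self.perceptual_types.get(agent.culture,
                                         self.perceptual_types["default"])


class GoalDrivenAgent:
    def __init__(self, culture, goal_weights):
        self.culture = culture
        self.goal_weights = goal_weights  # goal -> importance to this agent

    def evaluate(self, obj):
        # Interpret the afforded actions according to internal goal weights
        # and pick the action serving the most important goal.
        actions = obj.perceived_by(self)
        return max(actions, key=lambda a: self.goal_weights.get(actions[a], 0))


shrine = PerceivableObject("shrine", {
    "default": {"ignore": "efficiency"},
    "pilgrim": {"pray": "piety", "ignore": "efficiency"},
})
pilgrim = GoalDrivenAgent("pilgrim", {"piety": 10, "efficiency": 1})
tourist = GoalDrivenAgent("tourist", {"piety": 0, "efficiency": 5})
print(pilgrim.evaluate(shrine))  # pray
print(tourist.evaluate(shrine))  # ignore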

Generating a complete list of agents and objects is crucial, as an agent will only be able to perform an action if that action is made available to it by an object in the scenario. The inventory should therefore contain not only physical objects but also composite and conceptual objects (e.g. orders that can be followed), as called for by the specific demands of the scenario. If objects can be in different physical states over the course of the scenario, and those different states afford completely different opportunities for action, then they might be represented by multiple objects that replace each other when the state changes.
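The object-replacement idea can be sketched briefly; the registry and the door objects below are my own hypothetical illustration of it. A state change is modeled not by mutating an object but by swapping it for another object that affords a different set of actions.

```python
class WorldObject:
    def __init__(self, name, affordances, transitions):
        self.name = name
        self.affordances = affordances  # action -> anticipated result
        self.transitions = transitions  # action -> name of replacement object


REGISTRY = {}

def register(obj):
    REGISTRY[obj.name] = obj
    return obj

# Two objects standing in for two states of the same door.
closed = register(WorldObject("door_closed", {"open": "passage"},
                              {"open": "door_open"}))
opened = register(WorldObject("door_open", {"close": "barrier"},
                              {"close": "door_closed"}))

def act(obj, action):
    """Perform an action; return the object that replaces obj afterwards."""
    return REGISTRY[obj.transitions.get(action, obj.name)]

door = closed
door = act(door, "open")
print(door.name)                # door_open
print(list(door.affordances))   # only 'close' is now afforded
```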

Each simulated object should contain a set of perceptual types that describe the variety of perceptions of that object available to other agents.

Each perceptual type for any given object should offer a set of possible actions and the results of those actions anticipated by the agent perceiving that object. Each of the anticipated results should be described in terms of goals.

An agent's decisions are driven by internal Markov chains that represent every possible state of the world as far as the agent is concerned. Rather than build these mental models on a per-agent basis, we allow them to be generated at runtime based on the objects present in the environment at the time. This allows us to build agents that can respond to very complex situations without having to painstakingly design those complexities into their Markov chains from the start.

Step 1: Generate a list of all agents, objects, and events involved in the scenario

Step 2: For each agent type, develop a complete Markov chain describing the agent’s possible states and valid state transitions

Step 3: Design unique concern trees that correspond to each agent’s Markov chain

Step 4: Implement the execution of all events in code along with forced state transitions
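The steps above can be sketched minimally in Python; the "civilian" agent type, its states, and the "explosion" event are hypothetical examples of mine (Step 3's concern trees are omitted for brevity). The chain lists each state's valid successors, and events are implemented as forced transitions that bypass the chain's normal edges.

```python
import random

# Steps 1-2: an agent type with its states and valid state transitions.
CIVILIAN_CHAIN = {
    "idle":     ["idle", "walking"],
    "walking":  ["idle", "walking", "shopping"],
    "shopping": ["walking"],
    "fleeing":  ["fleeing", "idle"],  # reachable only via a forced event
}

# Step 4: events force transitions regardless of the chain's normal edges.
EVENTS = {"explosion": "fleeing"}


class Civilian:
    def __init__(self, rng=None):
        self.state = "idle"
        self.rng = rng or random.Random(0)  # seeded for reproducibility

    def step(self):
        # Normal operation: move to any valid successor state.
        self.state = self.rng.choice(CIVILIAN_CHAIN[self.state])

    def on_event(self, event):
        # Forced state transition triggered by a scenario event.
        if event in EVENTS:
            self.state = EVENTS[event]


c = Civilian()
for _ in range(3):
    c.step()
c.on_event("explosion")
print(c.state)  # fleeing
```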

Affordance Theory would probably be overkill for most cellular automata or other artificial life simulations. For multi-agent systems that simulate the cognition and/or emotionality of individual agents, and that aspire to a high degree of reuse and rapid composability, however, Affordance Theory is practically a requirement.

