[Learning virtual routes: what does verbal coding do in working memory?]

Can J Exp Psychol. 2015 Mar;69(1):104-14. doi: 10.1037/cep0000039.
[Article in French]

Abstract

Two experiments were run to further our understanding of the role of verbal and visuospatial encoding in the construction of a spatial model from visual input. In Experiment 1, a dual-task paradigm was applied to young adults who learned a route in a virtual environment and then performed a series of nonverbal tasks to assess spatial knowledge. Results indicated that landmark knowledge, as assessed by the visual recognition of landmarks, was not impaired by any of the concurrent tasks. Route knowledge, assessed by the recognition of directions, was impaired by both a tapping task and a concurrent articulation task. Interestingly, this pattern was modulated when no landmarks were available to perform the direction task. A second experiment was designed to explore the role of verbal coding in the construction of landmark and route knowledge. A lexical-decision task was used as a verbal-semantic dual task, and a tone-decision task as a nonsemantic auditory task. Results show that these new concurrent tasks impaired landmark knowledge and route knowledge differently. The results can be interpreted as showing that the coding of route knowledge may be grounded both in a coding of the sequence of events and in a semantic coding of information. These findings also point to some limits of Baddeley's working memory model.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Acoustic Stimulation
  • Adolescent
  • Adult
  • Analysis of Variance
  • Female
  • Humans
  • Judgment
  • Male
  • Memory, Short-Term / physiology*
  • Middle Aged
  • Photic Stimulation
  • Reaction Time
  • Recognition, Psychology
  • Space Perception / physiology*
  • User-Computer Interface*
  • Verbal Learning / physiology*
  • Young Adult