Cope, Bill and Kalantzis, Mary. "A Grammar of Multimodality"
Recent, widespread technological advances have expanded our representational modes of communication, creating new literacies through combinations of textual, visual, and aural elements. Cope and Kalantzis argue that a new “pedagogy of multiliteracies” is needed to incorporate these new means of communication, one that encompasses written language, oral language, and visual, audio, tactile, gestural, and spatial representation.
Each of these modes shares communication capabilities with the others, but their purposes are not entirely “parallel.” In other words, while different modes may portray the same idea, the meaning or focus of that idea may shift depending on the mode. This shifting between modes to represent the same idea is called “synaesthesia.”
To illustrate this, Cope explains the differences between written and visual/artistic representation. By its very nature, writing sequences events along a timeline and thus favors narrative purposes. In contrast, image arranges elements in space and thus favors purposes of display. Written language is open to a wide variety of visualizations; image, by contrast, requires the viewer to fill in the details of time, causation, purpose, and effect from the visual elements already present. While reading may require some imaginative filling-in, image does not impose strict, linear rules of interpretation and thus gives the viewer more power than written modes of communication do.
Many individuals have a preference for a particular mode: they feel most comfortable working in it, or it comes naturally to them. Additionally, synaesthesia is an invaluable learning tool, as it allows us to explore ideas and concepts through often unfamiliar platforms. This is why it’s important that we develop a pedagogy that allows students to work across multimodal platforms.
Although webpages employ written text, the integration of navigation bars, images, captions, and lists allows the user to experience the page more as he or she would an image. These visual elements simplify the writing on the page while creating an increasingly “complex multimodality.” Written language is not “going away,” however; it is changing as it becomes influenced by and connected with other modes of expression.
Cope then compares the different modes of meaning (representational, social, organizational, contextual, ideological) with the different modes of expression (linguistic, visual, spatial, gestural, aural) in order to draw attention to similarities and differences among their combinations. He then presents a series of tables that ask questions about the different modes of meaning and provide instances of how each mode of meaning might be expressed, followed by a specific example.
Lenses of ways of making meaning:
- Linguistic: implied meaning via written text.
- Visual: meaning through manipulation of perspective, vectors, abstraction, focus, arrangement (cohesion), and inclusion/omission.
- Spatial: linguistic and visual representations can signify a “who” or a “what,” but spatial representation signifies only a “what.” Space can be used to emphasize differences in power, price, social standing, distance, form, and intended routes/interactions.
- Gestural: expressions, clothing, hand movements/positions, emotion, body language.
- Audio: tone, tempo, and recordings of sounds specific to a particular setting or mood.
In short, multimodal learning is “all modes of meaning working together.” Cope gives the example of linguistic meaning combining with the gestural, aural, and spatial. Although different modes are often congruous or parallel, they have different effects, and the user selects a mode based on the desired emphasis.
Representing meaning multimodally is inherent to human nature, yet recently we have placed too heavy an emphasis on linguistic modes of representation.