
Multimodal Essay Definition And Example

Technology in Pedagogy, No. 10, August 2012
Written by Kiruthika Ragupathi

Recent pedagogical movements have demonstrated the value of mastering multiple literacies, asking students to become knowledgeable not only in their analysis of the written word, but also in other forms of visual media ranging from advertisements to photojournalism to cinema. However, while approaches to literacy have become increasingly “multimodal”, student outputs have remained largely “unimodal”, with the written word being privileged for its ability to convey a level of complexity supposedly outside the purview of other communication forms.

Research indicates that students who incorporate multimodal forms and approaches into their learning are better engaged with the content than those who employ traditional approaches, thereby enhancing their thinking and learning process. It is possible for students to convey critically engaged ideas through the use of multimodal forms, says Dr Jasmine Nadua Trice, a lecturer in the Ideas and Exposition programme, a multidisciplinary critical thinking and writing programme at the National University of Singapore. Her background in Film and Media studies, with a PhD in Communication and Culture, and her interest in teaching film studies, public speaking and film production led her to try out the use of multimodal communications in her modules.

In this session, Dr Trice shared her experience teaching a General Education Module (GEM) that employs multimodal communications, focusing not on the technology but on the content (emergent media) and, more importantly, on the multimodal forms that the assignments took. Using her class as a case study, she examined the potential usefulness of multimodal communications for undergraduate-level criticism, asking what kinds of critical pedagogies such an approach to student inquiry might enable.

Multimodal communications: An overview

Dr Trice provided a brief overview of multimodal forms of communication and highlighted some examples of scholarly work that inspired the proposal of a new course.

Multimodal communication is a form of communication that uses a combination of written, audio and visual forms to convey an idea, and it works in tandem with media literacy movements. Gunther R. Kress, a Professor of Semiotics & Education at the University of London, points out that “in this ‘new media age’ the screen has replaced the book as the dominant medium of communication and this dramatic change has made image, rather than writing, the center of communication.” Multimodal literacy is therefore an established field, and it is apparent that it is possible to understand critical ideas and academic analysis through multimodal forms in an undergraduate classroom.

Multimodal scholarship

In recent years, multimodal scholarship has been strongest in the fields of media studies and the digital humanities. Such scholarship sometimes takes the form of web-based or interactive texts, and at other times uses video essays or screencasts.

Dr Trice showcased some examples of scholarly work that were of particular interest to her:

  • Vectors, Journal of Culture and Technology in a Dynamic Vernacular, USC: “Vectors is realized in multimedia, melding form and content to enact a second-order examination of the mediation of everyday life.”
  • International Journal of Learning and Media, MIT: “Rich media contributions representing key research findings that exceed the boundaries of the printed page.”
  • Kairos,  A Journal of Rhetoric, Technology, & Pedagogy: “publish scholarship that examines digital and multimodal composing practices, promoting work that enacts its scholarly argument through rhetorical and innovative uses of new media.”
  • Alliance for Networking Visual Culture: “creates scholarly contexts for the use of digital media in film, media and visual studies.”

She then highlighted the examples that she would use in the module – those that her students needed to read or watch. The examples helped students visualize complex concepts and emphasized that creativity is the most important factor in using multimodal communications effectively.

  • Alexandra Juhasz, Learning from YouTube (MIT Press, 2010):
    Learning from YouTube, the first video-book to be published, investigates its questions through a series of more than 200 texts and videos, also known as “texteos.” This video-book, an example of the web-based or interactive text mode, integrates news clips based on the interviews Dr Juhasz had with CNN, her book, and the assignments created by her students when she taught the module “Learning from YouTube”. Students in her class used YouTube as the medium for their assignments.

Dr Trice highlighted that when she tried using this as an example in her module, the students were disconcerted by the format, and a steep learning curve was required for them to use such interactive text. Hence, she said, this reading was dropped in the current semester.

  • Richard Langley, “American Un-Frontiers: Universality and Apocalypse Blockbusters”: 
    This video essay showcases how visual elements and text can be used to underscore the idea that the author is getting across. It integrates icons, text and archival footage in interesting ways, using a screencast method that presents the video essay in a linear fashion. (http://vimeo.com/32288942)
  • David Gauntlett, “Making is Connecting” (www.artlab.org.uk / www.theory.org.uk)
    An example of what one can do with basic screencast software (http://www.youtube.com/watch?v=nF4OBfVQmCI&feature=relmfu)
    This example highlights: 
    • the tone used in the video, which becomes much more casual when an oral voice-over is added (the casual tone does not make the video any less professional but, more importantly, shows how the tone has to match the medium);
    • the use of basic information design for the presentation of ideas;
    • the provision of context for what is being discussed;
    • a literature review that cites the secondary sources used; and
    • the visualization of quotations from others.
  • Images from graphic design books, e.g., visualizing content – Europe: Corriette Schoenaerts’ fashion spread on countries and borders, in Robert Klanten et al., eds., Data Flow: Visualizing Information in Graphic Design (Berlin: Gestalten, 2010), 189; Christoph Niemann, Sleep Agony Chart, in the same volume, 107; and C.G.P. Grey, “The True Cost of the Royal Family Explained”

Dr Trice emphasized to her class that the content skills, conceptual skills and practical skills learned in the module would be integrated in producing the assignments. She reassured her students that they did not need a high level of technical ability or skill to do well in the module; what mattered most was creativity.

Proposal for a new module using multimodal communications

A workshop at CDTL that introduced her to the screencasting options available at NUS (Camtasia Relay and Ink2Go), together with her background in Film and Media studies and her interest in teaching film studies, public speaking and film production, inspired her to propose the new module on “Emergent Media and Multimodal Communications”. This enabled her to combine all her interests and to explore a more productive approach to teaching. The module was developed to provide students with a broad understanding of transitional media and culture not only through engagement with module content, but also through developing written, oral, and visual communication strategies.

To get her students to understand and better appreciate the use of multimodality in the module, she introduced the idea of multimodality to her students by probing them on:

  • what the idea of “modality” entails,
  • what the idea of “multimodality” entails,
  • how multimodal communication differs from unimodal communication, and
  • how the written word is still the dominant mode employed in most of the University assignments.

She briefed her class on how things would be different in this module, where they (her students) would produce assignments employing different forms of modality. On the first day of class, the students were also encouraged to contemplate what they would gain and/or lose when moving from the written mode to a multimodal approach to critical ideas. Students were then asked to reflect upon whether it was possible for them to convey critical ideas and academic analysis through multimodal forms. She emphasized critical thinking and asked whether it was possible to convey ideas that are critically engaged and analytically rigorous using images, audio and the written or spoken word.

The module had three units, each culminating in an assignment that required students to use one or more of the written, oral, or visual communicative modes. The assignment tasks were designed to cultivate a practical comprehension of media by allowing students to convey ideas about class content using multiple forms of communication, both residual and emergent. The tasks enabled her students to:

  • combine video, still images, audio, and text to convey complex, academic investigations in a clear and creative manner, and
  • convey critical ideas in an unconventional form.

However, she also emphasized that the main focus of the assignments was on thinking about ideas and about the video essays created with screencasts, not on the technology itself.

The first assignment was a multimodal essay posted on the class Facebook page, with peers providing reviews and comments on the essay. The second assignment involved the use of screencast videos, and the final assignment was a group oral presentation.

Assignment 1 – Multimodal Essay:

The multimodal essay assignment was not about testing multimedia skills but about the use of the visual elements of the essay. Students were advised against using pre-made or readily available templates, as it was important to create something original that is visually and aesthetically compelling. The output was a 575–600 word multimodal essay. Three quarters of the grade was given for the content analysis and for how students visualized the theoretical concepts (75% for analysis; 15% for multimodal aspects; and 10% for writing style and structure). All students were required to post their assignments on a class Facebook page (the assignments themselves were uploaded to Scribd, an online PDF environment), accompanied by an explanation of why they used a certain approach. This allowed students to justify the visual process/approach they took. Their classmates were then required to comment on and critique their peers’ work. Dr Trice felt that this was extremely helpful for understanding students’ thought processes, particularly when the execution was difficult to interpret.

The common approach was an evolution narrative (e.g., from the book to the iPad). One student used a newspaper format and provided a wider context with the use of news splashes. She also shared some samples of her students’ work.

Assignment 2 – Screencast/ Video essay

Students created screencast videos or video essays, each a clip of about six minutes. Again, the grading focused on the analysis, with 75% of the marks assigned for content and 25% for the multimodal aspects. Dr Trice briefed the class and showed samples of how the students’ video essays should draw on multimodal scholarship and information design: the use of videos, moving and still images, and slides; editing and juxtaposition; voice-over narration; the use of on-screen text and symbols; and the use of music.

Students produced a variety of video essays: videos with no voice-over that relied on text-heavy slides, videos with interesting use of on-screen text, good visualization of core ideas, and visuals inspired by the RSA Animate series.

Assignment 3: Group oral presentation

The focus of the oral presentations was on: visual aids employed in the presentation; audiences and informative strategies; the vocal and physical modes of delivery; and on preparing for questions.

Assessment/Grading criteria

Overall, the assignments were assessed based on the following grading criteria:

Analysis (75%)

  • Demonstrates a clear understanding of class readings
  • Assesses and applies these ideas to other authors or to the student’s own thoughts & examples
  • Clearly organized, with an introduction, transitions, and a conclusion
  • Flows smoothly, building the analysis with each section

Multimodal aspects (25%) (composition, visual components, editing + transitions, voice)

  • Demonstrates an understanding of the multimodal principles studied in class
  • Uses these principles in creative and compelling ways to support the overall analysis
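
As a rough illustration of how a weighted rubric of this kind translates into a final mark, the short Python sketch below shows one way the weighting could be applied. It is not part of the original session: the component names and sample scores are hypothetical, and only the 75%/25% split is taken from the criteria above.

    # Minimal sketch: combining weighted rubric scores into a final mark.
    # The 75/25 weighting follows the criteria above; the names and scores are hypothetical.
    WEIGHTS = {"analysis": 0.75, "multimodal": 0.25}

    def final_mark(scores):
        """Combine per-criterion scores (each out of 100) using the rubric weights."""
        return sum(WEIGHTS[criterion] * score for criterion, score in scores.items())

    # Example: strong analysis (85/100), weaker multimodal execution (60/100)
    print(final_mark({"analysis": 85, "multimodal": 60}))  # 78.75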

Pedagogical potentials of multimodal communications

  1. Enriches and empowers student learning. Providing learners an opportunity to create a shared representation of language – in textual, visual and auditory forms – proves to be cognitively and pedagogically valuable. The use of multimodal communication in their assignments helps students transfer ideas from writing into multiple ways of communicating, offering them greater opportunities for meaning making. It helps them convey their ideas in critically engaged and analytically rigorous ways. With changed and changing forms of communication, the use of multimodality in assignments will enable students to enter the workplace confident of their own potential.
  2. Engages peers and promotes reflection. The multimodal components provide a greater opportunity for students to engage with their peers, as they allow students to present their arguments in multiple ways through written, spoken, and visual texts. When students view, comment on and critique their peers’ work, it aids reflection after the assignment task and promotes overall learning. These activities appeal to students’ interests and motivate them to be engaged learners.
  3. Enhances writing and communication skills. Making the multimodal essays, video essays and screencasts helped students hone their writing and communication skills.

Reflections and future directions

Dr Trice reflected upon the planning of the assignments and indicated that she would change the way she ran the oral presentation assignment and would consider using other criteria for assessing multimodal forms, based on the work of Ball (2012). Ball (2012) identifies items that need to be considered when assessing such multimodal assignments, which could also be used by students when developing their assignments and while peer reviewing others’ work. Some items to consider include: (i) the project’s structural or formal elements must serve its conceptual core; (ii) the design decisions made must be deliberative, controlled, and defensible; (iii) the project should have distinguishable and significant goals that differ from what could be achieved on paper; (iv) the design should enact the argument; and, importantly, students should have thought of a visual metaphor for the argument.

Summary of Feedback/ Suggestions from the Discussion

Dr Trice welcomed ideas and ways that participants had employed multimodality in their classrooms. A lively discussion followed, and participants discussed:

  • To what degree is it possible for undergraduates to convey critical analysis through multimodal forms?
  • How can class content and/or assignments be developed to allow students to employ a multimodal approach?
  • How should the grading criteria be designed to effectively assess such multimodal forms of assignments?

There were other participants who had used such multimodal forms of assignments in their modules. They agreed with Dr Trice that students’ technical skill was never a problem, as it was “super easy” to edit and create movies (e.g., with Final Cut Pro, Windows Movie Maker or Adobe Premiere). They also pointed out that when students start working in groups, they tend to help each other. One participant felt that once a student’s work is uploaded and a high bar is set, all the other students try to outdo one another, and in the process also teach each other.

Another participant indicated that in his module, students were free to choose their own platform based on what they were comfortable with – YouTube, video, multimodal or written essay. Based on his experience, students submitting written essays tended to go deeper in their analysis. He used assessment criteria that awarded 60% for content and 40% for presenting ideas, and he had two different sets of criteria for the multimodal form and the written essay. However, he found it difficult to follow two standards and felt that this might not be fair.

Q & A Session

Listed below are some questions from the subsequent Q & A session:

Q:  Did you have lessons that taught students the necessary technical skills for creating such assignments?
JT: The drag-and-drop editing needed to create these videos does not require a background in technology. I created a tutorial using a screencast – options include Camtasia, Ink2Go and iMovie. Students could also meet with me for consultations if they needed help. Only the most enthusiastic students used the consultation sessions, and even then the focus was largely on content.
Q:  Do students with technical skills/technology background have an edge/advantage over the others?
JT: This was something I put a considerable amount of thought into, and it was the reason the grading criteria place greater emphasis on the content rather than on the technology. I also had students prepare preliminary sketches and discuss them with their peers before submitting the assignment. I also got them to work in groups and discuss what they were planning to do, and I think that helped as well.
Q:  How do you measure if this new method is more effective than your old method?
JT: I don’t think it is very different from writing an essay; it is pretty similar structurally and in terms of the ideas students get across, particularly if they are doing a voice-over. It is interesting for collaboration, and students find it easier to watch a peer’s video than to read a peer’s essay. It is also interesting for public dissemination – making the materials available beyond the classroom. So it might be good to have a website in addition to the Facebook page. In terms of whether it is better for critical thinking, I think a lot of it is the same as writing essays, but this form is more novel, and students like it for its novelty. I also felt that students were seeing each other’s work and benefitting from it. I required them to comment on at least two of their peers’ works, but students generally went beyond that and commented on more, since the work sits in their social space and appears on their Facebook timelines. I also discussed with them how to provide constructive critique and how they could improve their comments.
Q:  Do you spend time to talk to students about copyright and plagiarism (fair use of information)?
JT: During my first lesson, I talk to them about the fair use of information. I told them to keep the links to their essays private, as I was not sure whether the references made were appropriate. I did not spend too much time on that aspect, as the presentations were not made public, and since this was mainly for educational purposes, I guess it is fair use. I also informed them that whatever they used could not be pre-fabricated and that the components they made had to be original. I also gave them open-source and Creative Commons sites from which they could get images, photos and music. Students also needed to provide a page with their references and links. Personally, I would spend more time on this when I teach the course again.
Q:  If we want to incorporate this type of teaching, as a teacher what skills do I need to have?
JT: CDTL’s workshops on tools like Screencast, Breeze, Ink2Go and Movie Maker would be a good starting point. I also researched and explored online what one can do with a screencast. It is also important to gather a lot of examples, showcase them to the students, and engage them in discussion during class. Most of these applications are intuitive.
Q:  Have you wondered how different a traditional essay by the same student would be? What implicit and explicit assumptions are more pronounced in multimodal forms versus a traditional essay? Are students prone to making assumptions when making a video, due to the addition of music, tonality, songs, etc.? How do you contrast the two?
JT: That’s right; rather than explicitly spelling out exactly what they are trying to get across, they present some kind of multimodal image-and-sound combination and expect it to do the work, so the meaning is more ambiguous. But this is something I have not looked at very rigorously and should. One of the things I try to do is to include plenty of opportunities for students to discuss their design elements, and they need to include justifications for their decisions in terms of the multimodal form. And since for most of them this was a first attempt, most were reading scripts that probably came from an essay format. There are probably ways to study whether there is a difference between a traditional essay and a video essay, and that is definitely something I would like to work on in the future.

References

Cheryl E. Ball (2012). “Assessing Scholarly Multimedia: A Rhetorical Genre Studies Approach,” Technical Communication Quarterly, 21: 61–77.

 


In its most basic sense, multimodality is a theory of communication and social semiotics. Multimodality describes communication practices in terms of the textual, aural, linguistic, spatial, and visual resources - or modes - used to compose messages.[1] Where media are concerned, multimodality is the use of several modes (media) to create a single artifact. The collection of these modes, or elements, contributes to how multimodality affects different rhetorical situations, or opportunities for increasing an audience's reception of an idea or concept. Everything from the placement of images to the organization of the content creates meaning. This is the result of a shift from isolated text being relied on as the primary source of communication, to the image being utilized more frequently in the digital age.[2] While multimodality as an area of academic study did not gain traction until the twentieth century, all communication, literacy, and composing practices are and always have been multimodal.[3]

Definition

Although discussions of multimodality involve medium and mode, these two terms are not synonymous.

Gunther Kress's scholarship on multimodality is canonical in writing studies, and he defines mode in two ways. In the first, a mode “is a socially and culturally shaped resource for making meaning. Image, writing, layout, speech, moving images are examples of different modes.”[4] In the second, “semiotic modes, similarly, are shaped by both the intrinsic characteristics and potentialities of the medium and by the requirements, histories and values of societies and their cultures.” [5]

Thus, every mode has a different modal resource, which is historically and culturally situated and which breaks it down into its parts, because “each has distinct potentials [and limitations] for meaning.”[6] For example, breaking writing down into its modal resources yields syntactic, grammatical, lexical, and graphic resources. Graphic resources can be broken down into font size, type, etc. These resources are not deterministic, however. In Kress’s theory, “mode is meaningful: it is shaped by and carries the ‘deep’ ontological and historical/social orientations of a society and its cultures with it into every sign. Mode names the material resources shaped in often long histories of social endeavor.”[7] Modes shape and are shaped by the systems in which they participate. Modes may aggregate into multimodal ensembles, shaped over time into familiar cultural forms, a good example being film, which combines visual modes, modes of dramatic action and speech, music and other sounds. Multimodal work in this field includes van Leeuwen;[8] Bateman and Schmidt;[9] and Burn and Parker's theory of the kineikonic mode.[10]

A medium is the substance in which meaning is realized and through which it becomes available to others. Mediums include video, image, text, audio, etc. Socially, medium includes semiotic, sociocultural, and technological practices such as film, newspaper, a billboard, radio, television, theater, a classroom, etc. Multimodality makes use of the electronic medium by creating digital modes with the interlacing of image, writing, layout, speech, and video. Mediums have become modes of delivery that take the current and future contexts into consideration.

Because multimodality is continually evolving from a solely print-based to a screen-based presentation, the speaker and audience relationship evolves as well. Due to the growing presence of digital media over the last decade, the central mode of representation is no longer just text; recently, the use of imagery has become more prominent. In its current use for Internet and network-based composition, the term “multimodality” has become even more prevalent, applying to various forms of text such as fine art, literature, social media and advertising. An important related term to multimodality is multiliteracy, which is the comprehension of different modes in communication – not only to read text, but also to read other modes such as sound and image. Whether and how a message is understood is accredited to multiliteracy.

History

Multimodality has developed as a theory throughout the history of writing. The idea of multimodality has been studied since the 4th century BC, when classical rhetoricians alluded to it with their emphasis on voice, gesture, and expressions in public speaking.[11][12] However, the term was not defined with significance until the 20th century. During this time, an exponential rise in technology created many new modes of presentation. Since then, multimodality has become standard in the 21st century, applying to various network-based forms such as art, literature, social media and advertising. The monomodality, or singular mode, which used to define the presentation of text on a page has been replaced with more complex and integrated layouts. John A. Bateman says in his book Multimodality and Genre, “Nowadays… text is just one strand in a complex presentational form that seamlessly incorporates visual aspect ‘around,’ and sometimes even instead of, the text itself.”[13] Multimodality has quickly become “the normal state of human communication.”[3]

Expressionism

During the 1960s and 1970s, many writers looked to photography, film, and audiotape recordings in order to discover new ideas about composing.[14] This led to a resurgence of a focus on the sensory and on self-illustration, known as expressionism. Expressionist ways of thinking encouraged writers to find their voice outside of language by placing it in a visual, oral, spatial, or temporal medium.[15] Donald Murray, who is often linked to expressionist methods of teaching writing, once said, “As writers it is important that we move out from that which is within us to what we see, feel, hear, smell, and taste of the world around us. A writer is always making use of experience.” Murray instructed his writing students to “see themselves as cameras” by writing down every single visual observation they made for one hour.[16] Expressionist thought emphasized personal growth, and linked the art of writing with all visual art by calling both a type of composition. Also, by making writing the result of a sensory experience, expressionists defined writing as a multisensory experience, and asked for it to have the freedom to be composed across all modes, tailored for all five senses.

Cognitive developments

During the 1970s and 1980s, multimodality was further developed through cognitive research about learning. Jason Palmeri cites researchers such as James Berlin and Joseph Harris as being important to this development; Berlin and Harris studied alphabetic writing and how its composition compared to art, music, and other forms of creativity.[17] Their research had a cognitive approach which studied how writers thought about and planned their writing process. James Berlin declared that the process of composing writing could be directly compared to that of designing images and sound.[18] Furthermore, Joseph Harris pointed out that alphabetic writing is the result of multimodal cognition. Writers often conceptualize their work by non-alphabetic means, through visual imagery, music, and kinesthetic feelings.[19] This idea was reflected in the popular research of Neil D. Fleming, whose work is more commonly known as the neuro-linguistic learning styles. Fleming’s three styles of auditory, kinesthetic, and visual learning helped to explain the modes in which people were best able to learn, create, and interpret meaning. Other researchers such as Linda Flower and John R. Hayes theorized that alphabetic writing, though it is a principal modality, sometimes could not convey the non-alphabetic ideas a writer wished to express.[20]

Introduction of the Internet

In the 1990s, multimodality grew in scope with the release of the Internet, personal computers, and other digital technologies. The literacy of the emerging generation changed, becoming accustomed to text circulated in pieces, informally, and across multiple mediums of image, color, and sound. The change represented a fundamental shift in how writing was presented: from print-based to screen-based.[21] Literacy evolved so that students arrived in classrooms being knowledgeable in video, graphics, and computer skills, but not alphabetic writing. Educators had to change their teaching practices to include multimodal lessons in order to help students achieve success in writing for the new millennium.

Audience

Every text has its own defined audience, and makes rhetorical decisions to improve the audience’s reception of that same text. In this same manner, multimodality has evolved to become a sophisticated way to appeal to a text’s audience. Relying upon the canons of rhetoric in a different way than before, multimodal texts have the ability to address a larger, yet more focused, intended audience. Multimodality does more than solicit an audience; the effects of multimodality are embedded in an audience’s semiotic, generic and technological understanding.

Psychological effects

The appearance of multimodality, at its most basic level, can change the way an audience perceives information. The most basic understanding of language comes via semiotics – the association between words and symbols. A multimodal text changes its semiotic effect by placing words with preconceived meanings in a new context, whether that context is audio, visual, or digital. This in turn creates a new, foundationally different meaning for an audience. Bezemer and Kress, two scholars on multimodality and semiotics, argue that students understand information differently when text is delivered in conjunction with a secondary medium, such as image or sound, than when it is presented in alphanumeric format only. This is due to it drawing a viewer’s attention to “both the originating site and the site of recontextualization”.[22] Meaning is moved from one medium to the next, which requires the audience to redefine their semiotic connections. Recontextualizing an original text within other mediums creates a different sense of understanding for the audience, and this new type of learning can be controlled by the types of media used.

Multimodality also can be used to associate a text with a specific argumentative purpose, e.g., to state facts, make a definition, cause a value judgment, or make a policy decision. Jeanne Fahnestock and Marie Secor, professors at the University of Maryland and the Pennsylvania State University, labeled the fulfillment of these purposes stases.[23] A text’s stasis can be altered by multimodality, especially when several mediums are juxtaposed to create an individualized experience or meaning. For example, an argument that mainly defines a concept is understood as arguing in the stasis of definition; however, it can also be assigned a stasis of value if the way the definition is delivered equips writers to evaluate a concept, or judge whether something is good or bad. If the text is interactive, the audience is facilitated to create their own meaning from the perspective the multimodal text provides. By emphasizing different stases through the use of different modes, writers are able to further engage their audience in creating comprehension.

Genre effects

Multimodality also obscures an audience’s concept of genre by creating gray areas out of what was once black and white. Carolyn R. Miller, a distinguished professor of rhetoric and technical communication at North Carolina State University, observed in her genre analysis of the Weblog how genre shifted with the invention of blogs, stating that “there is strong agreement on the central features that make a blog a blog.” Miller defines blogs on the basis of their reverse chronology, frequent updating, and combination of links with personal commentary.[24] However, the central features of blogs are obscured when considering multimodal texts. Some features are absent, such as the ability for posts to be independent of each other, while others are present. This creates a situation where the genre of multimodal texts is impossible to define; rather, the genre is dynamic, evolutionary and ever-changing.

The delivery of new texts has radically changed along with technological influence. Composition now consists of the anticipation of future remediation. Writers think about the type of audience a text will be written for, and anticipate how that text might be reformed in the future. Jim Ridolfo coined the term rhetorical velocity to explain a conscious concern for the distance, speed, time, and travel it will take for a third party to rewrite an original composition.[25] The use of recomposition allows for an audience to be involved in a public conversation, adding their own intentionality to the original product. This new method of editing and remediation is attributed to the evolution of digital text and publication, giving technology an important role in writing and composition.

Technological effects

Multimodality has evolved along with technology. This evolution has created a new concept of writing, a collaborative context keeping the reader and writer in relationship. The concept of reading is different with the influence of technology due to the desire for a quick transmission of information. In reference to the influence of multimodality on genre and technology, Professor Anne Frances Wysocki expands on how reading as an action has changed in part because of technology reform: “These various technologies offer perspectives for considering and changing approaches we have inherited to composing and interpreting pages....”.[26] Along with the interconnectedness of media, computer-based technologies are designed to make new texts possible, influencing rhetorical delivery and audience.

Education

Multimodality in the 21st century has caused educational institutions to consider changing the forms of even the traditional aspects of classroom education. According to Hassett and Curwood, authors of Theories and Practices of Multimodal Education, “Print represents only one mode of communication…” and, with a rise in digital and Internet literacy, other modes are needed, from visual texts to digital e-books. Other changes occur by integrating music and video with lesson plans during early childhood education; however, such measures are seen as augmenting and increasing literacy for educational communities by introducing new forms, rather than replacing literacy values. In several children’s books released today, written language is no longer the most important factor in producing these books. According to Miller and McVee, authors of Multimodal Composing in Classrooms, “These new literacies do not set aside traditional literacies. Students still need to know how to read and write, but new literacies are integrated.”[27] The main purpose of early childhood education is still evident, but the form in which it is now presented to children is different. Some learning outcomes include – but are not limited to – reading, writing, and language skills. For example, Multimodal Composing in Classrooms, written by Miller and McVee, advocates storyboarding as an assignment and says that it “is multimodal in that there is the visual and audio of a film clip, the drawing representation of the film’s images, as well as the written analysis of the scene.” According to Roger Essley in his article, “Visual Tools for Differentiating Reading & Writing Instruction,” storyboarding is “the origin of all written language.”[28] Teachers are able to use storyboards for visual organization to strengthen writing skills and help the writing process. They can be used in brainstorming sessions, problem solving, planning, and much more.

The choice to integrate multimodal forms in the classroom is not accepted unproblematically by everyone in educational communities. In Charles Bazerman’s essay, “The Case for Writing Studies as a Major Discipline,” he states that “multimodal compositions are seen in the classrooms across the country but there seems to be a lack of support from the institutions to advance what the instructors are bringing into the classroom.” The idea of learning has changed over the years and now, some argue, must adapt to the personal and affective needs of new students. According to Kress and Bezemer, “education has to accommodate to ‘life-long’, ‘life-wide’ learning, that is, learning at all times, by those who demand that their interests be taken with utmost seriousness, in all sites, in all phases of professional and personal life.”[27] As a result, some administrations may be reluctant to institute wide-spread change without assurance of results. In order for classroom communities to be legitimately multimodal, all members of the community must share expectations about what can be done through integration, requiring a “shift in many educators’ thinking about what constitutes literacy teaching and learning in a world no longer bounded only by print text.”[29]

Multiliteracy

The shift from page-based text found in print to screen-based text found on the Internet is causing a redefinition of literacy.[30] While text and image may exist separately, digitally, or in print, their combination gives birth to new forms of literacy and new ways to communicate. Text, whether it is academic, social, or for entertainment purposes, can be accessed in a variety of different ways and edited by several individuals on the Internet. The spoken and written word are not obsolete, but they are no longer the only way to communicate and interpret messages.[30] With the continual growth of new media and the adaptation of old media, there are now numerous mediums to use when communicating.[31] Many mediums can be used separately and individually. Combining and repurposing one to another has contributed to the evolution of different literacies.

Multiliteracy is the concept of understanding information through various methods of communication. With the growth of technology, there are more ways to communicate a message to the world or individuals. Literacies change to incorporate new ways of communication, stemming from new advances or approaches in communication tools, such as text messaging, social media, and blogs.[32] These new methods consist of more than just text or the written word. Things like audio, video, pictures, and animation can now be simultaneously incorporated into communication.[32]

Communication is spread across a medium using different modes, like a blog post accompanied with images and an embedded video. These modes all work to construct meaning through this concept of multimodality. With the introduction of these modes comes the notion of transforming the message. This transformation is accomplished by taking the message of one mode and displaying it in or with another, such as taking a text and incorporating it into a video.[33] However, the message may have been transformed or changed as it goes from one medium to the next. The video could now act as a supplement to the text, much like special features on a DVD, or it could become a piece that reiterates or supports the text, just in a different format. This reshaping of information from one mode to another is known as transduction.[30] As information changes from one mode to the next, how that message is comprehended is attributed to multiliteracy, as the text is understood across a variety of different means.

A key purpose for multiliteracies is to engage the diverse perspectives of students, facilitating progressively broadened and multicultural groups.[34] Another function of multiliteracies is helping the shift of content design from primarily the instructor's responsibility to a more cooperative effort between teacher and learner.[34] Students are able to have a more proactive role in their learning and are in a position to consciously evaluate how their lessons may impact others. Such extrinsic thought permits an evolution of the content and context of lessons, advancing the idea of teaching (and learning) relevant material.[34]

Classroom literacy

Multimodality in classrooms has brought about the need for a new definition of literacy. According to Gunther Kress, a popular theorist of multimodality, literacy, when defined, usually refers to the combination of letters and words to make messages and meaning and can often be attached to other words in order to express knowledge of the separate fields, such as visual- or computer-literacy. However, as multimodality becomes more common, not only in classrooms, but in work and social environments, the definition of literacy extends beyond the classroom and beyond traditional texts. Instead of referring only to reading and alphabetic writing, or being extended to other fields, literacy and its definition now encompass multiple modes. It has become more than just reading and writing, and now includes visual, technological, and social uses among others.[35]

As classroom technologies become more prolific, so do multimodal assignments. Students in the 21st century have more options for communicating digitally, be it texting, blogging, or through social media.[36] This rise in computer-controlled communication has required classes to become multimodal in order to teach students the skills required in the 21st-century work environment.[37] However, in the classroom setting, multimodality is more than just combining multiple technologies, but rather creating meaning through the integration of multiple modes. Students are learning through a combination of these modes, including sound, gestures, speech, images and text. For example, in digital components of lessons, there are often pictures, videos, and sound bites as well as the text to help students grasp a better understanding of the subject. Multimodality also requires that teachers move beyond teaching with just text, as the printed word is only one of many modes students must learn and use.[38]

The application of visual literacy in the English classroom can be traced back to 1946, when the instructor’s edition of the popular Dick and Jane elementary reader series suggested teaching students to "read pictures as well as words" (p. 15).[citation needed] During the 1960s, a couple of reports issued by NCTE suggested using television and other mass media such as newspapers, magazines, radio, motion pictures, and comic books in the English classroom. The situation is similar in postsecondary writing instruction. Since 1972, visual elements have been incorporated into some popular twentieth-century college writing textbooks like James McCrimmon’s Writing with a Purpose. However, as in the English classroom, visual media or visual images were mainly used as prompts for students’ writing assignments.[39]

Another type of visuals-related writing task is visual analysis, especially advertising analysis, which began in the 1940s and has been prevalent in postsecondary writing instruction for at least 50 years. This pedagogical practice of visual analysis did not focus on how visuals, including images, layout, or graphics, are combined or organized to make meanings.[39]

Then, through the following years, the application of visuals in the composition classroom has been continually explored, and the emphasis has shifted to the visual features—margins, page layout, font, and size—of composition and its relationship to graphic design, web pages, and digital texts, which involve images, layout, color, font, and arrangements of hyperlinks. In line with the New London Group, George (2002) argues that both visual and verbal elements are crucial in multimodal designs.[39]

Acknowledging the importance of both language and visuals in communication and meaning making, Shipka (2005) further advocates for a multimodal task-based framework in which students are encouraged to use diverse modes and materials—print texts, digital media, videotaped performances, old photographs—and any combinations of them in composing their digital/multimodal texts. Meanwhile, students are provided with opportunities to deliver, receive, and circulate their digital products. In so doing, students can understand how systems of delivery, reception, and circulation interrelate with the production of their work.[40] 

Multimodal communities

Multimodality has significance within varying communities, such as the private, public, educational, and social communities. Because of multimodality, the private domain is evolving into a public domain in which certain communities function. Because social environments and multimodality mutually influence each other, each community is evolving in its own way.

Cultural multimodality

Based on these representations, communities decide through social interaction how modes are commonly understood. In the same way, these assumptions and determinations of the way multimodality functions can actually create new cultural and social identities. For example, Bezemer and Kress define modes as “socially and culturally shaped resource[s] for making meaning.” According to Bezemer, “In order for something to ‘be a mode,’ there needs to be a shared cultural sense within a community of a set of resources and how these can be organized to realize meaning.”[41] Cultures that pull from different or similar resources of knowledge, understanding, and representations will communicate through different or similar modes.[22] Signs, for instance, are visual modes of communication determined by our daily necessities.

In her dissertation, Elizabeth J. Fleitz, a PhD in English with a concentration in Rhetoric and Writing from Bowling Green State University, argues that the cookbook, which she describes as inherently multimodal, is an important feminist rhetorical text.[42] According to Fleitz, women were able to form relationships with other women through communicating in socially acceptable literature like cookbooks; “As long as the woman fulfills her gender role, little attention is paid to the increasing amount of power she gains in both the private and public spheres.” Women who would have been committed to staying at home could become published authors, gaining a voice in a phallogocentric society without being viewed as threats. Women revised and adapted different modes of writing to fit their own needs. According to Cinthia Gannett, author of "Gender and the Journal," diary writing, which evolved from men’s journal writing, has “integrate[d] and confirm[ed] women’s perceptions of domestic, social, and spiritual life, and invoke a sense of self.”[43] It is these methods of remediation that characterize women’s literature as multimodal. The recipes inside the cookbooks also qualify as multimodal. Recipes delivered through any medium, whether a cookbook or a blog, can be considered multimodal because of the “interaction between body, experience, knowledge, and memory, multimodal literacies” that all relate to one another to create our understanding of the recipe. Recipe exchanging is an opportunity for networking and social interaction. According to Fleitz, “This interaction is undeniably multimodal, as this network “makes do” with alternative forms of communication outside dominant discursive methods, in order to further and promote women’s social and political goals.” Cookbooks are only a singular example of the capacity of multimodality to build community identities, but they aptly demonstrate the nuanced aspects of multimodality. Multimodality does not just encompass tangible components, such as text, images, sound, etc.; it also draws from experiences, prior knowledge, and cultural understanding.

Another change that has occurred due to the shift from the private environment to the public is audience construction.[44] In the privacy of the home, the family generally targets a specific audience: family members or friends. Once the photographs become public, an entirely new audience is addressed. As Pauwels notes, “the audience may be ignored, warned and offered apologies for the trivial content, directly addressed relating to personal stories, or greeted as highly appreciated publics that need to be entertained and invited to provide feedback."[44]

Communication in business

In the business sector, multimodality creates opportunities for both internal and external improvements in efficiency. Similar to shifts in education to utilize both textual and visual learning elements, multimodality allows businesses to have better communication. According to Vala Afshar, this transition first started to occur in the 1980s as "technology had become an essential part of business." This level of communication has amplified with the integration of digital media and tools during the 21st century.[45]

Internally, businesses use multimodal platforms for analytical and systemic purposes, among others. Through multimodality, a company enhances its productivity as well as creating transparency for management. Improved employee performance from these practices can correlate with ongoing interactive training and intuitive digital tools.[46]

Multimodality is used externally to increase customer satisfaction by providing multiple platforms during one interaction. With the popularity of text, chat and social media during the 21st century, most businesses attempt to promote cross-channel engagement. Businesses aim to improve the customer experience and solve any potential issue or inquiry quickly. A company's goal with external multimodality centers around better communication in real time to make customer service more efficient.[47]

Social multimodality

One shift caused by multi-literate environments is that private-sphere texts are being made more public. The private sphere is described as an environment in which people have a sense of personal authority and are distanced from institutions, such as the government. The family and home are considered to be a part of the private sphere. Family photographs are an example of multimodality in this sphere. Families take pictures (sometimes captioning them) and compile them in albums that are generally meant to be displayed to other family members or audiences that the family allows. These once private albums are entering the public environment of the Internet more often due to the rapid development and adoption of technology.[44]

According to Luc Pauwels, a professor of communication studies at the University of Antwerp, Belgium, “the multimedia context of the Web provides private image makers and storytellers with an increasingly flexible medium for the construction and dissemination of fact and fiction about their lives.”[44] These relatively new website platforms allow families to manipulate photographs and add text, sound, and other design elements.[44] By using these various modes, families can construct a story of their lives that is presented to a potentially universal audience. Pauwels states that “digitized (and possibly digitally ‘adjusted’) family snapshots...may reveal more about the immaterial side of family culture: the values, beliefs, and aspirations of a group of people.”[44] This immaterial side of the family is better demonstrated through the use of multimodality on the Web because certain events and photographs can take precedence over others based on how they are organized on the site,[44] and other visual or audio components can aid in evoking a message.

Similar to the evolution of family photography into the digital family album is the evolution of the diary into the personal weblog. As North Carolina State University professors Carolyn Miller and Dawn Shepherd state, “the weblog phenomenon raises a number of rhetorical issues,… [such as] the peculiar intersection of the public and private that weblogs seem to invite."[24] Bloggers have the opportunity to communicate personal material in a public space, using words, images, sounds, etc. As described in the example above, people can create narratives of their lives in this expanding public community. Miller and Shepherd say that “validation increasingly comes through mediation, that is, from the access and attention and intensification that media provide."[24] Bloggers can create a “real” experience for their audience(s) because of the immediacy of the Internet. A “real” experience refers to “perspectival reality, anchored in the personality of the blogger."[24]

Digital applications

Information is presented through the design of digital media, engaging with multimedia to offer a multimodal principle of composition. Standard words and pictures can be presented as moving images and speech in order to enhance the meaning of words. Joddy Murray wrote in "Composing Multimodality" that both discursive rhetoric and non-discursive rhetoric should be examined in order to see the modes and media used to create such composition. Murray also includes the benefits of multimodality, which lends itself to “acknowledge and build into our writing processes the importance of emotions in textual production, consumption, and distribution; encourage digital literacy as well as nondigital literacy in textual practice.”[2] Murray shows a new way of thinking about composition, allowing images to be “sensuous and emotional” symbols of what they do represent, not focusing so much on the “conceptual and abstract.”

Murray writes in his article, through the use of Richard Lanham’s The Electronic Word: Democracy, Technology, and the Arts, how “discursive text is in the center of everything we do," going on to say how students coexist in a world that “includes blogs, podcasts, modular community web spaces, cell phone messaging…”, urging for students to be taught how to compose with rhetorical minds in these new, and not-so-new, texts. Cultural changes, Lanham suggests, refocus writing theory towards the image, demonstrating how there is a change in alphabet-to-icon ratios in electronic writing. One prime example can be seen in the Apple iPhone, in which “emojis” appear as icons on a separate keyboard to convey what words would once have delivered.[48] Another example is Prezi. Often likened to Microsoft PowerPoint, Prezi is a cloud-based presentation application that allows users to create text, embed video, and make visually aesthetic projects. Prezi’s presentations zoom the eye in, out, up and down to create a multi-dimensional appeal. Users also utilize different media within this medium that is itself unique.

Accessing the audience

In the public sphere, multimedia popularly refers to the use of graphics in ads, animation and sound in commercials, and the areas where these overlap. One thought behind this use of multimedia is that, through technology, a larger audience can be reached across different technological mediums, and in some cases, as reported in 2010 by the Kaiser Family Foundation, this can “help drive increased consumption”.[citation needed] This is a drastic change from five years earlier: “8–18 year olds devote an average of 7 hours and 38 minutes to using media across a typical day (more than 53 hours a week).”[citation needed] With the possibility of multi-platform social media and digital advertising campaigns come new regulations from the Federal Trade Commission (FTC) on how advertisers can communicate with their consumers via social networks.[49] Because multimodal tools are often tied to social networks, these fair-practice rules shape how companies engage consumers there. Fashion houses such as Burberry (Burberry Group PLC) and Lacoste (Lacoste S.A.) engage their consumers via the popular blogging site Tumblr; Publix Supermarkets, Inc. and Jeep engage their consumers via Twitter; celebrities and athletic teams such as Selena Gomez and the Miami Heat engage their audiences via Facebook fan pages. These entities are not limited to a single medium; each maintains a presence across a variety of platforms.

Advertising

Multimedia advertising uses animation and graphic design to sell products or services, and it takes various forms: video, online advertising, DVDs, CDs, and so on. These outlets give companies the ability to grow their customer base, making multimedia advertising a necessary part of marketing products and services. Online advertising, for instance, is a newer use of multimedia in advertising that benefits both online companies and traditional corporations. New technologies have driven an evolution of multimedia in advertising and a shift away from traditional techniques, and multimedia advertising has become significantly more important to how effectively companies market and sell. Corporate advertising rests on the idea that “Companies are likely to appeal to a broader audience and increase sales through search engine optimization, extensive keyword research, and strategic linking.”[50] The concept behind an advertising platform can span multiple mediums yet, at its core, remain centered on the same scheme.

Coca-Cola ran an overarching “Open Happiness” campaign across multiple media platforms, including print ads,[51] web ads, and television commercials.[52] The purpose of this central campaign was to communicate a common message over multiple platforms, so that a reiterated message would further encourage the audience to buy in. The strength of such multimedia campaigns, like the Coca-Cola “Happiness” campaign,[52] is that they employ all available mediums, any of which could prove successful with a different audience member.

Social media

Social media and digital platforms are ubiquitous in everyday life.[53] These platforms do not operate solely on their original makeup; they incorporate media from other technologies and tools to add dimensions to what can be created on the platform itself. These added modal features create a more interactive experience for the user.

Prior to Web 2.0’s emergence, most websites listed information with little to no communication with the reader.[54] With Web 2.0, social media and digital platforms have become part of everyday life and work, used by businesses, law offices advertising their services, and many others. Digital platforms combine their own mediums with other technologies and tools to further enhance and improve what can be created on the platform.[55]

Hashtags (#topic) and user tags (@username) make use of metadata in order to track “trending” topics and to alert users when their name is used in a post on a social media site. Used by various social media websites, most notably Twitter and Facebook, these features add internal linkage between users and themes.[56][57][58] The characteristics of a multimodal feature can be seen in Facebook’s status update option, which combines the affordances of personal blogs, Twitter, instant messaging, and texting in a single feature. As of 2013 the status update prompt asks, “What’s on your mind?”, a change from the 2007 prompt, “What are you doing right now?”; Facebook made the change to give users greater flexibility.[59] This multimodal feature allows a user to add text, video, images, and links, and to tag other users. Twitter’s microblogging platform, which limits a single message to 140 characters, lets users link to other users and websites and attach pictures. This new medium affects the literacy practices of the current generation by condensing the conversational context of the internet into fewer characters while encapsulating several media.
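
To illustrate how such tags can act as machine-readable metadata, the following minimal Python sketch (a hypothetical illustration, not any platform’s actual implementation; the regular expressions and function names are assumptions) extracts hashtags and user mentions from post text so they could be counted for “trending” lists or used to notify tagged users.

    import re
    from collections import Counter

    # Hypothetical patterns for illustration only; real platforms apply more
    # elaborate tokenization rules to hashtags and mentions.
    HASHTAG_RE = re.compile(r"#(\w+)")
    MENTION_RE = re.compile(r"@(\w+)")

    def extract_tags(post_text):
        """Return (hashtags, mentions) found in a single post, lowercased."""
        hashtags = [tag.lower() for tag in HASHTAG_RE.findall(post_text)]
        mentions = [name.lower() for name in MENTION_RE.findall(post_text)]
        return hashtags, mentions

    def trending(posts, top_n=3):
        """Count hashtag frequency across posts to approximate a 'trending' list."""
        counts = Counter()
        for post in posts:
            tags, _ = extract_tags(post)
            counts.update(tags)
        return counts.most_common(top_n)

    if __name__ == "__main__":
        posts = [
            "Watching the game with @alex #MiamiHeat #NBA",
            "Halftime show was great #MiamiHeat",
            "New campaign launches today #OpenHappiness",
        ]
        print(extract_tags(posts[0]))  # (['miamiheat', 'nba'], ['alex'])
        print(trending(posts))         # [('miamiheat', 2), ('nba', 1), ('openhappiness', 1)]

In this sketch, counting hashtag frequency approximates how a platform might surface trending topics, while the extracted mentions approximate the data needed to alert a tagged user.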

Another example is the blog, a term coined in 1999 as a contraction of “web log”; the foundations of blogging are often attributed to various people in the mid-to-late 1990s. Within the realm of blogging, videos, images, and other media are often added to otherwise text-only entries in order to generate a more multifaceted read.[60]

Gaming

One current digital application of multimodality in the field of education has been developed by James Gee through his approach to effective learning through video games. Gee contends that schools, workplaces, families, and academic researchers have a great deal to learn about learning from good computer and video games, which embody a “whole set of fundamentally sound learning principles” that can be applied in many other domains, for instance when teaching science in schools.[61]

Storytelling

Another application of multimodality is digital film-making, sometimes referred to as ‘digital storytelling’. A digital story is defined as a short film that incorporates digital images, video, and audio in order to create a personally meaningful narrative. Through this practice, people act as film-makers, using multimodal forms of representation to design, create, and share their life stories or learning stories with a specific audience, commonly through online platforms. Digital storytelling, as a digital literacy practice, is commonly used in educational settings. It is also used in mainstream media, as shown by the increasing number of projects that encourage members of online communities to create and share their digital stories.[62]


Notes

  1. Murray, Joddy (2013). "Composing Multimodality". In Lutkewitte, Claire (ed.). Multimodal Composition: A Critical Sourcebook. Boston: Bedford/St. Martin's.
  2. Lutkewitte, Claire (2013). Multimodal Composition: A Critical Sourcebook. Boston: Bedford/St. Martin's. ISBN 978-1457615498.
  3. Kress, Gunther (2010). Multimodality: A Social Semiotic Approach to Contemporary Communication. New York: Routledge. ISBN 0415320607.
  4. Kress, Gunther (2010). Multimodality: A Social Semiotic Approach to Contemporary Communication. New York: Routledge. p. 79. ISBN 0415320607.
  5. Kress, Gunther; van Leeuwen, Theo (1996). Reading Images: The Grammar of Visual Design. London: Routledge. p. 35. ISBN 0415105994.
  6. Kress, Gunther (2010). Multimodality: A Social Semiotic Approach to Contemporary Communication. New York: Routledge. p. 1. ISBN 0415320607.
  7. Kress, Gunther (2010). Multimodality: A Social Semiotic Approach to Contemporary Communication. New York: Routledge. p. 114. ISBN 0415320607.
  8. van Leeuwen, Theo (1999). Speech, Music, Sound. London: Palgrave Macmillan.
  9. Bateman, John; Schmidt, Karl-Heinrich (2011). Multimodal Film Analysis: How Films Mean. London: Routledge.
  10. Burn, Andrew; Parker, David (2003). Analysing Media Texts. London: Continuum.
  11. Wysocki, Anne Frances (2002). Teaching Writing with Computers: An Introduction (3rd ed.). Boston: Houghton Mifflin. pp. 182–201. ISBN 9780618115266.
  12. Welch, Kathleen E. (1999). Electric Rhetoric: Classical Rhetoric, Oralism, and a New Literacy. Cambridge, MA: MIT Press. ISBN 0262232022.
  13. Bateman, John A. (2008). Multimodality and Genre: A Foundation for the Systematic Analysis of Multimodal Documents. New York: Palgrave Macmillan. ISBN 0230302343.
  14. Williamson, Richard (1971). "The Case for Filmmaking as English Composition". College Composition and Communication: 131–136.
  15. Palmeri, Jason (2007). "Multimodality and Composition Studies, 1960–Present": 45.
  16. Palmeri, Jason (2007). "Multimodality and Composition Studies, 1960–Present": 31.
  17. Palmeri, Jason (2007). "Multimodality and Composition Studies, 1960–Present": 90.
  18. Berlin, James A. (1982). "Contemporary Composition: The Major Pedagogical Theories". College English. 44 (8): 765–777. doi:10.2307/377329.
  19. Harris, Joseph (1997). A Teaching Subject: Composition Since 1966. Upper Saddle River, NJ: Prentice Hall. ISBN 0135158001.
  20. Flower, Linda; Hayes, John R. (1984). "Images, Plans, and Prose: The Representation of Meaning in Writing". Written Communication. 1 (1): 120–160. doi:10.1177/0741088384001001006.
  21. Kress, Gunther (2003). Literacy in the New Media Age. London: Routledge. ISBN 978-0415253567.
  22. Bezemer, Jeff; Kress, Gunther (April 2008). "Writing in Multimodal Texts: A Social Semiotic Account of Designs for Learning". Written Communication. 25 (2): 166–195. doi:10.1177/0741088307313177.
  23. Fahnestock, Jeanne; Secor, Marie (October 1988). "The Stases in Scientific and Literary Argument". Written Communication: 427–443.
  24. Miller, Carolyn R.; Shepherd, Dawn (2004). "Blogging as Social Action: A Genre Analysis of the Weblog". In Gurak, Laura J.; Antonijevic, Smiljana; Johnston, Laurie; Ratliff, Clancy; Reyman, Jessica (eds.). Into the Blogosphere: Rhetoric, Community, and Culture of Weblogs.
  25. Ridolfo, Jim; DeVoss, Danielle Nicole. "Composing for Recomposition: Rhetorical Velocity and Delivery". Kairos 13.2. Retrieved 25 April 2013.
  26. Wysocki, Anne Frances (2002). Teaching Writing with Computers: An Introduction (3rd ed.). Boston: Houghton Mifflin. ISBN 9780618115266.
  27. McVee, Mary B.; Miller, Suzanne M. (2012). Multimodal Composing in Classrooms: Learning and Teaching for the Digital World (1st ed.). New York: Routledge. ISBN 0415897475.
  28. Essley, Roger. "What are Storyboards". Retrieved 19 April 2013.
  29. "Performance Learning Systems". Retrieved 19 April 2013.
  30. Kress, Gunther (2003). "The Futures of Literacy". Literacy in the New Media Age. Routledge. p. 1. ISBN 0-203-29923-X.
  31. Kress, Gunther (2003). "Going into a Different World". Literacy in the New Media Age. Routledge. p. 21. ISBN 0-203-29923-X.
  32. Selfe, Richard; Selfe, Cynthia (2008). "Convince Me! Valuing Multimodal Literacies and Composing Public Service Announcements". Theory Into Practice. 47 (2): 84. doi:10.1080/00405840801992223.
  33. Kress, Gunther (2003). "Literacy and Multimodality: A Theoretical Framework". Literacy in the New Media Age. Routledge. p. 36. ISBN 0-203-29923-X.