Assessing Multimodal Texts Part 2

What we typically understand to be “literacy” has changed in recent years, owing to the growing presence of digital technologies and online practices. However, how teachers approach literacy, and particularly what is measured in literacy assessments, has not kept pace with these changes (Cope, Kalantzis, McCarthey, Vojak, & Kline, 2011). Effective assessment in today’s world needs to take place in spaces where students can demonstrate their knowledge and understanding through a variety of modes.

With regard to writing in particular, Cope et al. (2011) identify six principles for effective assessment. They claim that meaningful assessment should:

  • be situated in a knowledge-making practice
  • draw explicitly on social cognition
  • measure metacognition
  • address multimodal texts
  • be “for learning”, not just “of learning”
  • be ubiquitous

Additionally, Cope et al. (2011) propose six technology-mediated processes for the assessment of writing:

  • Natural Language Analytics
  • Corpus Comparison
  • In-text Network-Mediated Feedback
  • Rubric-Based Review and Rating
  • Semantic Web Processing
  • Survey Psychometrics

Each of these is outlined in the table below. I personally find the chart a little overwhelming and not very teacher-friendly; I would like to see a simpler version, written in a more coherent and concise manner. I would also like to see how it could be put into practice in the classroom.

Cope, B., Kalantzis, M., McCarthey, S., Vojak, C., & Kline, S. (2011). Technology-mediated writing assessments: Principles and processes. Computers and Composition, 28(2), 79–96.

The New Assessment: A Matrix of Principles and Practices for Writing Assessment

Natural Language Analytics
  • Situated in knowledge-making practice: Responds to a text’s specific features with just enough information, just in time.
  • Social cognition: The environment becomes more socially and contextually intelligent as annotations are collected.
  • Metacognition: Reviewer annotations and queries prompt writers to think metacognitively about the writing and its contents.
  • Multimodal texts: Reads tags, captions, labels, and descriptions of image, video, and audio.
  • For/of learning: Assists the writer and provides data on their learning progress.
  • Ubiquitous: From the writer’s point of view, substantially automated, instant responses regarding textual specifics.

Corpus Comparison
  • Situated in knowledge-making practice: On-the-fly comparison of same-discipline, same-level texts.
  • Social cognition: The environment becomes more accurate as more texts are collected and aligned by discipline, subject content, and learning level.
  • Metacognition: The writer is given generalizations from the corpus comparison and the opportunity to rewrite in response to them.
  • Multimodal texts: Reads text ancillary to multimedia objects.
  • For/of learning: Provides the writer with a peer comparison of writing quality relative to cohorts, including an opportunity to rewrite and reapply the comparison.
  • Ubiquitous: Automated response on the overall text in relation to standards and to equivalent levels and content areas.

In-text Network-Mediated Feedback
  • Situated in knowledge-making practice: Immediate feedback on written work within a knowledge-producing community of practice.
  • Social cognition: Synchronous or asynchronous person-to-person conversations around textual specifics.
  • Metacognition: A parallel conversation speaks metacognitively about the text’s form and contents.
  • Multimodal texts: Dialogical feedback on non-textual multimedia contents.
  • For/of learning: Specific feedback and quantification of plus/minus evaluations.
  • Ubiquitous: Participants need not be in the same place for the around-text dialogue to happen.

Rubric-Based Review and Rating
  • Situated in knowledge-making practice: Establishes an overall frame of reference for the knowledge work.
  • Social cognition: Review and rating by self, peers, and invited critical friends, creating a social culture of constructive evaluation.
  • Metacognition: An explicitly defined frame of abstract outcome criteria set against a performance scale.
  • Multimodal texts: Review and rating of purely multimedia objects as well as written and multimodal texts.
  • For/of learning: Access to the rubric before, during, and after the task, with the option to rework the piece in response to reviewer comments on the rubric’s criteria.
  • Ubiquitous: Asynchronous, web-accessible review and rating.

Semantic Web Processing
  • Situated in knowledge-making practice: A subject-specific schema for mapping a knowledge domain.
  • Social cognition: Collaborative construction of concept maps.
  • Metacognition: Conscious markup of structure and semantics.
  • Multimodal texts: Tagging of images, videos, etc., using concept schemas.
  • For/of learning: Machine and person feedback on the application of a concept map to a task.
  • Ubiquitous: Asynchronous, web-accessible semantic mapping and markup.

Survey Psychometrics
  • Situated in knowledge-making practice: Task-embedded quizzes, surveys, and item-based tests.
  • Social cognition: Surveys can measure stance, attitudes, and perspectives.
  • Metacognition: Surveys can address underlying knowledge and understandings.
  • Multimodal texts: Addresses knowledge acquired from multimodal work.
  • For/of learning: Surveys given before, during, and after a task track what students already know and what knowledge they acquire.
  • Ubiquitous: Can be taken anywhere, anytime, for example when a task is completed.