Materials Evaluation

On the surface, materials evaluation is a straightforward exercise in determining how effective materials will be in the teaching context, but scratch a little deeper and it becomes a quagmire of subjectivity and beliefs that seem impossible to escape. We also come back to the issue of relationships and power between stakeholders. Tomlinson and Masuhara (2018, p. 52) noted that “the prime users of commercially produced materials are learners; their prime buyers are administrators”. Administrators in this sense could be teachers, but also directors of studies, senior teachers, or ministerial appointees, among others. We can also add teachers as users of materials, especially in the classroom context. Tomlinson and Masuhara (ibid.) found that “in a few institutions the classroom teachers selected the coursebooks and that in no institutions were the textbooks selected by learners”.

This layer between materials and learners/teachers places responsibility on those choosing materials to make the optimal choice for the intended users. It also brings into closer scrutiny how we evaluate materials and how our beliefs and understanding of language learning shape the tools we use for evaluation. The very act of evaluating materials can bring new insights and understanding about what teachers want from materials (Mishan & Timmis, 2015, p. 72) and highlight our beliefs regarding which aspects of materials are important.

During this week’s task evaluating materials, Tomlinson and Masuhara’s framework (Tomlinson & Masuhara, 2018, pp. 69-71) was our preferred catalyst. There were strong requests for criteria focusing on the authenticity of listening materials, which brought up differing opinions on the nativeness principle versus the intelligibility principle (Levis, 2005, p. 370), and on English as a lingua franca. One side argued that authenticity is important, but this led to attempts to define authenticity, and “it is impossible to engage in a meaningful debate over the pros and cons of authenticity until we agree on what” authenticity is (Gilmore, 2007, p. 98). Non-native speakers with accented English are just as authentic as native speakers with various regional accents, and evaluating materials purely on accent content, without insight into the breadth of accents represented, would leave two very different recordings assessed identically. We then dug further into issues such as the authenticity of studio-recorded audio (or of any material used in the classroom, for that matter) versus comprehensibility, which raises semantic problems in defining comprehensibility and the question of whether graded language is needed: Loschky’s findings suggest modified input does not “facilitate comprehension relative to non-modification of input” (1994, p. 315). These issues were almost impossible to navigate in our short two-hour workshop, and with the need to press on, we found that separating materials analysis from materials evaluation was equally challenging.

The literature distinguishes between evaluation and analysis (Tomlinson & Masuhara, 2018, p. 56), and highlights the pitfalls of mixing the two as well as the effect this has on weighting, which was another headache for our team. Analysis of materials is typified by factual questions (Mishan & Timmis, 2015, p. 88) that generate definite answers, such as “how many chapters does it contain?” or “does it provide speaking opportunities?”. This type of question was put forward as effective by a member of our team because, in their experience, it was simple to use, but, as noted by Tomlinson and Masuhara (2018, p. 55), such questions are open to bias from the author or can be interpreted from different perspectives, which leads evaluators to different scores and therefore different results. In our example, it is not clear what could be deemed a good number of chapters, because the number of chapters alone is no basis for evaluating materials. In the content-specific criteria of our evaluation tool, you can see that we included a question about task-based learning. It suggests we are looking for task-based learning materials, and that task-based learning is preferable to other approaches, although that was not one of the underlying beliefs of our group. Despite these pitfalls being flagged in our reading, we added these questions to our evaluation tool, and this promptly led to disagreement about how binary options could be represented on our cline of 1 to 5. That in turn raised the question of how to weight different criteria, because a criterion about guidance notes for teachers is not of equal importance to one about the cognitive challenge of the materials.
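To make the weighting question concrete, the short sketch below shows one way criterion scores on a 1 to 5 cline could be combined into a weighted average. It is a minimal, hypothetical illustration only: the criterion names, weights, and scores are invented and do not represent our actual tool or the scores we gave.

```python
# Hypothetical sketch of weighted criterion scoring on a 1-5 cline.
# Criterion names, weights, and scores are invented for illustration
# and are not taken from our actual evaluation tool.

def weighted_score(scores, weights):
    """Combine 1-5 criterion scores into a single weighted average."""
    total_weight = sum(weights[criterion] for criterion in scores)
    weighted_sum = sum(scores[criterion] * weights[criterion] for criterion in scores)
    return weighted_sum / total_weight

# Example: "guidance notes for teachers" deliberately counts for less
# than "cognitive challenge of materials".
weights = {
    "guidance notes for teachers": 1,
    "cognitive challenge of materials": 3,
    "authenticity of listening texts": 2,
}
scores = {
    "guidance notes for teachers": 4,
    "cognitive challenge of materials": 2,
    "authenticity of listening texts": 3,
}

print(round(weighted_score(scores, weights), 2))  # 2.67
```

An unweighted tool like ours is the special case where every weight is 1, which is exactly how categories with fewer criteria end up counting for less by accident rather than by design.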

Evaluation Tool

The evaluation tool we used to evaluate English Unlimited B1+

Ultimately, our evaluation tool did not deliberately weight the criteria: the Teacher’s Book category carries less weight simply because we could not think of more criteria for it. The universal criteria are not the most important category, but they are weighted as though they were, as a result of not giving ourselves time to think about and discuss weighting. When we applied the criteria, we came across questions that proved impossible to evaluate, but time pressure and our own experiences of teaching left us at an impasse, so we did not rewrite or delete them.

Time and complexity have been highlighted (Tomlinson & Masuhara, 2018, p. 61; Mishan & Timmis, 2015, p. 97) as factors that affect how teachers use evaluation tools. Teachers are busy teaching and cannot be expected to go through a protracted pre-evaluation process in addition to evaluating the materials themselves. Materials selected for an institution will be used by teachers of all levels of experience, and novice teachers will be disadvantaged by evaluation tools that can only be understood by experienced teachers, which will therefore return unreliable data. This is before we even take into account evaluation by learners (ibid., p. 98), an area we had not considered. The situations in which evaluation tools are actually used are far removed from those described in the research, and this is most notable in how the different responsibilities overlap.

Theresa Clementson, one of the authors and the editor of the coursebook we evaluated (Rea et al., 2013), advised us to be careful about the overlap between what teachers should do in the classroom and what materials are designed to do. This issue came up when evaluating materials against long-term learning goals, and against engagement and motivation (ibid., p. 91). I would argue that long-term learning goals are not within the remit of materials creators and lie firmly in the hands of learners and their teachers. There are clear limits to what materials can realistically deliver, and this is further muddled by the decision-making processes behind how the coursebook was put together. It was surprising to discover that the editor does not have authority over all aspects of the book, and that the publisher and its marketing team wield considerable influence over visual media and layout. This information shed light on a question we had grappled with throughout the evaluation task: the discontinuity of the visuals used within a single unit, with a mix of cartoons and photographs that were integral to some activities but totally irrelevant to others.

We noted that these issues were probably born of spending too little time preparing our framework and exploring our beliefs before creating the criteria for evaluation, and that with more time we could have produced a more objective evaluation tool that had itself been evaluated. However, our situation mirrored that of many teaching centres and schools around the world, where materials need to be evaluated and ready in time for new classes.

References

Gilmore, A., 2007. Authentic materials and authenticity in foreign language learning. Language Teaching, 40(2), pp. 97-118.

Levis, J. M., 2005. Changing Contexts and Shifting Paradigms in Pronunciation Teaching. TESOL Quarterly, 39(3), pp. 369-377.

Loschky, L., 1994. Comprehensible Input and Second Language Acquisition: What is the Relationship? Studies in Second Language Acquisition, 16(3), pp. 303-323.

Mishan, F. & Timmis, I., 2015. Materials Development for TESOL. Epub ed. Edinburgh: Edinburgh University Press.

Rea, D. et al., 2013. English Unlimited B1+. s.l.: Cambridge University Press.

Tomlinson, B. & Masuhara, H., 2018. The Complete Guide to the Theory and Practice of Materials Development for Language Learning. Hoboken, New Jersey: John Wiley & Sons.
