Week 3

Materials Evaluation

The deeper I delve into almost any aspect of ELT, the more I unearth dispute, controversy and unanswered questions. The hope that this MA might provide a clear, well-defined picture of my field is diminishing. The reality is that most areas require thorough investigation, research and deliberation. And, this week, some creation.

The coursebook

At the beginning of today’s seminar, a discussion on the use of coursebooks almost developed into a fiery debate. I have often used coursebooks in my teaching, finding them to be helpful guides – they remove some of the mental load that comes with the profession and they halve my planning time, all without restricting my creativity. Coursebooks were especially key in my first year as a novice teacher. It is true that I have been frustrated by the repetitiveness of some activities and simply perplexed by the inclusion of others, but I have always preferred to endure these annoyances because, for me, the pros outweigh the cons. I was surprised, then, that so many of my fellow teachers strongly disliked coursebooks, viewing them as passive dictators in the language classroom.

“Much of the language teaching which occurs throughout the world today could not take place without extensive use of commercial materials.”

(Richards, 2001, p. 251)

Despite their controversial status, coursebooks are a resource used by many of the estimated 1.5 billion English language learners worldwide. It is undeniable, then, that coursebooks will form a central part of any discussion of teaching materials, and they are worthy of some extra attention here.

Too much choice

Often, the task of choosing a coursebook is removed from teachers’ control, especially at the important stages of needs analysis and deciding on objectives (McGrath, 2013). However, if the teacher does have a say, how do they know which book to choose? Just as materials development benefits from a systematic approach, so does materials evaluation. McGrath (2013), Tomlinson and Masuhara (2018) and Mishan and Timmis (2015) all advocate the use of an evaluation checklist. Our task for this week was to collaborate with our classmates and make one of our own.

The Checklist

My checklist begins with a set of basic information: the name, the author(s), the publisher, the publication date, and so on. I have also included the price in this section, as I felt that it could (unfortunately) be a deciding factor for some institutions. The checklist then proceeds in two stages, the first being an external analysis, a content analysis and a ‘flick test’ (Matthews, 1985, as cited in McGrath, 2013); the idea is that if a coursebook fails to pass this initial stage, it is discarded.

The second stage is a subjective evaluation. We decided on a grouped approach, splitting our questions into three sections and using Tomlinson and Masuhara’s (2018) three initial divisions to structure them: universal, local and medium. We agreed that the number of questions per section should remain fairly similar to Tomlinson and Masuhara’s model. Our evaluative questions are heavily influenced by our own whiteboard of principles, made in last week’s session, and we also drew inspiration from suggestions made by McGrath (2013), Tomlinson and Masuhara (2018) and Mishan and Timmis (2015). Our process went as follows: having seen a relevant evaluation parameter in our individual readings or on our whiteboard, we would pose it to the group. If we all agreed it was important for us, we would rephrase the idea to better suit our needs (and avoid plagiarism) and then enter it into the appropriate section.

We decided on a five-point scale for evaluation, with 1 being the lowest, most negative opinion and 5 the highest, most positive. Having read a chapter of McGrath’s Teaching Materials and the Roles of EFL/ESL Teachers (2013) for my part of the jigsaw reading, I campaigned for the checklist to include subtotals for each section, so as to allow easy comparison with other coursebook evaluations. Similarly, the perspectives from other teachers reported in McGrath’s book encouraged the addition of a ‘comments’ column to help us justify our evaluations. In my personal copy of the checklist, I then changed all of our questions to declarative statements to avoid the yes/no dichotomy, on the basis that the scale would evaluate to what extent each statement was true.
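For the curious, here is a minimal sketch of the scoring logic in Python. It is purely illustrative: the sections mirror our three divisions, but the statements are only loosely based on our principles, and the scores and comments are invented rather than taken from our actual checklist.

    # A hypothetical sketch (not the real checklist): each section holds
    # declarative statements scored 1-5 plus an optional comment, and
    # per-section subtotals make it easy to compare one coursebook with another.

    checklist = {
        "universal": [
            {"statement": "The activities are engaging and motivating", "score": 4,
             "comment": "Varied tasks, but some repetition"},
            {"statement": "The materials are easily adaptable", "score": 3, "comment": ""},
        ],
        "local": [
            {"statement": "The topics suit my teaching context", "score": 2,
             "comment": "Several topics feel culturally distant"},
        ],
        "medium": [
            {"statement": "The activities use authentic language", "score": 2,
             "comment": "Scripted audio, few hedges or hesitations"},
        ],
    }

    # Print each section's subtotal out of its maximum, then an overall total.
    for section, items in checklist.items():
        subtotal = sum(item["score"] for item in items)
        print(f"{section}: {subtotal} / {len(items) * 5}")

    print("overall:", sum(i["score"] for items in checklist.values() for i in items))

Running this prints a subtotal against each section’s maximum plus an overall total – exactly the kind of at-a-glance comparison the subtotals were meant to make possible.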

The evaluation

Having proudly polished my Official Evaluation Checklist, I then set about the second part of this week’s tasks. We were to use the checklist to evaluate one unit from a real coursebook and see how it fared. The coursebook was English Unlimited B1+ Special Edition and we were provided with copies of the students’ book, teachers’ book, workbook, contents page and audio transcripts from unit 4.

When evaluating the unit, we began with the coursebook, then moved on to the teachers’ book and compared the two. Next, we looked at the workbook and any extra materials. This was the same method that the experienced teacher reported in Johnson et al. (2008) used to conduct their evaluation.

[Images: my completed evaluation checklist (three pages), also available as a downloadable PDF.]

Evaluating the evaluation

Whilst sharing our ideas in class, some interesting issues surfaced regarding our checklist designs. We were pointed towards Tomlinson and Masuhara’s (2018) evaluation considerations to guide us.

Overall, we found our checklist to be helpful and easy to follow. It did indeed make us think about how, why and what we were evaluating in a more structured manner. We found the comments column particularly beneficial, as it allowed us to quickly recall which specific areas had affected a score. It also made us reconsider what might otherwise have been a very low score made through snap judgement – when you have to do more than just circle a number, it makes you pause for thought. This combination of quantitative and qualitative data collection worked well in our evaluation. Lastly, the universal/local distinction was extremely helpful when applying the checklist to our specific contexts.

Looking critically at our checklist, there is a need to define some of the vaguer statements, which could be open to too much personal opinion. This is meant to be a subjective evaluation, but only within reason. For example, despite being at the top of my list of principles, ‘the activities are engaging and motivating’ actually meant different things to everyone in our group, causing some confusion whilst evaluating. It also became clear that some statements were more important than others and that a weighted checklist might provide a more accurate evaluation (a rough sketch of what weighting could look like follows the two statements below). However, this would add another level of subjectivity: deciding which statements are worth more would be a lengthy process, demanding valuable time that most teachers cannot spare. Personally, I also believe a few of our statements are not entirely free from dogma: for the group, authenticity was a high priority; however, authentic materials have not actually been proven to increase language acquisition (Mishan, 2005). We attempted to phrase two statements in a way that kept dogma to a minimum but still expressed our views.

  • The activities use authentic language – in that listening activities need not be native-level but should include hedges, hesitations, etc. to make them realistic.
  • The activities give opportunities for authentic communication – excluding games and warmers, students should not be subjected to activities that are so contrived they’re very unlikely to be experienced in real life.
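Returning to the weighting idea mentioned above, here is a rough illustration, again in Python, of how a weighted total can tell a different story from a raw one. The statements, scores and weights are made up for the example and are not taken from our checklist.

    # Hypothetical numbers only: the same raw scores give a different picture
    # once each statement carries a weight reflecting how much it matters to us.

    scores  = {"engaging and motivating": 4, "authentic language": 2, "easily adaptable": 5}
    weights = {"engaging and motivating": 3, "authentic language": 1, "easily adaptable": 2}

    raw_total      = sum(scores.values())                          # 11 out of 15
    weighted_total = sum(scores[s] * weights[s] for s in scores)   # 4*3 + 2*1 + 5*2 = 24
    weighted_max   = 5 * sum(weights.values())                     # 30

    print(raw_total, weighted_total, f"{weighted_total / weighted_max:.0%}")   # -> 11 24 80%

Here the raw score is 11 out of 15 (roughly 73%), but because ‘engaging and motivating’ is weighted heavily, the weighted total comes out at 24 out of 30 (80%) – hence the extra layer of subjectivity in deciding the weights themselves.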

Upon discussion with the other groups, we realised that we had forgotten to include any requirement for student-centredness. I would justify omitting this important requirement by the fact that we included statements to evaluate my top two principles: being engaging and motivating, and being easily adaptable. As a result, student-centredness seemingly felt like a given, and we therefore did not write it down. Although this might also be considered a vague, subjective concept, we could have been specific by including a statement such as ‘student talking time is a priority’.

Last but not least, our course leader highlighted that, for subjective data collection, it is advisable to employ an even-numbered evaluation scale to prevent too many ‘middle’ answers. Here is a downloadable, blank Word document of my updated evaluation checklist.

In the end

Creating our own evaluation checklist was a long, thought-provoking process. I believe this actually highlights the importance of having a checklist in the first place. If you can’t breeze through the process whilst working in a supportive team with multiple opportunities to revise your ideas, how can it possibly all be done mentally, in two minutes before class? Furthermore, if you are not an educator, the decision will likely be based on administrative factors such as price, design and the number of add-ons. These days, everyone wants more bang for their buck, but the decision is then not very student-centred. Not only do I now believe that it is essential to use a checklist, I also believe that the process of making my own has been extremely valuable. Choosing a random checklist from a book in the library would only result in inappropriate evaluation parameters for my context (Roberts, 1996). Sheldon (1988) puts it nicely:

“We can be committed only to checklists… that we have a hand in developing, and which have evolved from specific selection priorities.”

(Sheldon, 1988, p. 242)

The argument could be made that all evaluations are subjective (Johnson et al., 2008; Tomlinson and Masuhara, 2018; Sheldon, 1988) and that such irrationality has no place in coursebook evaluation (Roberts, 1996). Ultimately, though, it is difficult to remove all subjectivity from the process. Therefore, we might just have to settle for a framework of objectivity that helps to prevent our subjective judgements from getting the better of us.

References

Johnson, K., Kim, M., Ya-Fang, L., Nava, A., Perkins, D., Smith, M. S., Soler-Canela, O. and Lu, W. (2008) ‘A step forward: investigating expertise in materials evaluation’, ELT Journal, 62(2), pp. 157-163.

McGrath, I. (2013) Teaching Materials and the Roles of EFL/ESL Teachers: Practice and Theory. London: Bloomsbury, pp. 52-59, 105-125.

Mishan, F. (2005) Designing Authenticity into Language Learning Materials. Bristol: Intellect Books.

Mishan, F. and Timmis, I. (2015) Materials Development for TESOL. Edinburgh: Edinburgh University Press, pp. 56-67.

Richards, J. (2001) Curriculum Development in Language Teaching. Cambridge: Cambridge University Press, p. 251.

Roberts, J. T. (1996) ‘Demystifying materials evaluation’, System, 24(3), pp. 375-389.

Sheldon, L. (1988) ‘Evaluating ELT textbooks and materials’, ELT Journal, 42(4), pp. 237-246.

Tomlinson, B. and Masuhara, H. (2018) The Complete Guide to the Theory and Practice of Materials Development for Language Learning. Hoboken, NJ: John Wiley & Sons, pp. 52-82.
