MATERIALS EVALUATION
The task for the third week was evaluating a unit in the advanced-level general English coursebook “face2face”:
At first, I thought this evaluation exercise would not be too difficult because we had already established a set of teaching and learning principles in class, and the rest would just be applying them to a specific unit. Not until my group had its first meeting did I realise how wrong my assumptions were and how much more work was required. Sitting in the same class and being assigned the same readings gave me the false sense of confidence that all members of my group would see the task in the same way and agree on the procedures. However, probably due to time constraints, each member could only approach the task from the perspective of a certain author and therefore formed a different image of how the evaluation was going to be conducted. Fortunately, in the end, we agreed on the evaluation steps and completed the task. In this post, I will describe in detail all the steps we went through – from choosing an approach, to creating a checklist, to the evaluation itself – and finally reflect on how I could have done the task better.
Choosing an approach
First and foremost, it is important to draw an overall picture of the evaluation process. According to McGrath (2013), there are three stages in the evaluation cycle: pre-use, while-use, and post-use. As none of us had used the “face2face” book in a real class before, “pre-use” evaluation was the only option. However, at that stage, evaluators can only predict the potential effectiveness of the materials, without a chance to receive feedback from users. Consequently, a systematic approach to evaluation appeared more appropriate, as it might balance out the predictive nature of our task, and thus we decided to create a checklist for evaluation. The following seven steps from Tomlinson (2013) were adopted to create that checklist:
As can be seen, what we did in class seemed to fit nicely into the first three steps, so the next step would be to create a “profile” of the teaching and learning context surrounding the given materials. The idea of a “profile” resembles what McGrath (2013) refers to as “materials analysis” and “context analysis”. While “evaluation” involves value judgments of the teaching materials, “analysis” is seen as more “descriptive” and usually takes place before the evaluation stage. As a result, we chose to analyse “face2face” before evaluating it, adopting Littlejohn’s (2011) model:
However, as we looked more closely into what Littlejohn meant by a complete materials analysis, we realised that there was not enough time for this task, and thus restricted this stage to a brief introduction of the elements presented in the books.
Besides the analysis of the materials, we also tried to analyse the context in which the books were expected to be used. According to Tomlinson (2013), the evaluation of materials can be “context-free”, “context-influenced” or “context-dependent”. Since we only had the general target learners in mind – young adult and adult learners at an advanced level – the choice was to look at the evaluation from a “context-influenced” perspective.
Creating a checklist
After agreeing on the procedures, we continued by compiling a list of universal criteria from the principles generated previously in class. In addition, to organise the list of criteria in a logical way, we adopted the sub-categories from Tomlinson (2013: 40-43). What we did not expect at first was that finalising the list of principles would take a great amount of time. Many of the principles overlapped, and some were too vague in meaning. As a result, we had to read some phrases again and again and rewrite them in a uniform style. Finally, with data from groups A, B, and C, the following table was created:
Looking at this table, we realised that some categories did not have any criteria while others had too many. Thus, we merged some sub-categories and also came up with new ones. Then, more criteria were developed, and all of them were turned into questions in the following final checklist:
With the finalised checklist, individual evaluation processes were conducted and the final results were as follows:
Reflection
Firstly, I underestimated how difficult it is to work in a group on an evaluation task. So much time was wasted arguing about terminology and its meanings when we should have spent more time evaluating the materials. In retrospect, I could have been more patient and tried harder to explain to the other teachers what I was talking about, so that we could reach a mutual understanding.
Next, with the checklist, we considered assigning lower weightings to criteria that seemed less important than others. However, due to time constraints, the final decision was to give every criterion the same weighting. Nevertheless, in my opinion, the fact that some categories have more criteria than others could be treated as a form of weighting in itself.
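The point that criterion counts act as implicit weights can be made concrete with a small sketch. The category names and ratings below are invented for illustration only – they are not our actual checklist or results:

```python
# Hypothetical ratings (1-5) for criteria grouped by category.
ratings = {
    "Content":     [4, 3, 5, 4],   # four criteria
    "Methodology": [3, 4],         # two criteria
    "Layout":      [5],            # one criterion
}

# With an identical weight for every criterion, a category's influence on
# the overall score is proportional to how many criteria it contains.
total = sum(r for crits in ratings.values() for r in crits)
max_total = sum(5 * len(crits) for crits in ratings.values())

for category, crits in ratings.items():
    share = 5 * len(crits) / max_total
    print(f"{category}: {len(crits)} criteria -> {share:.0%} of the maximum score")

print(f"Overall: {total}/{max_total}")
```

Here “Content”, with four criteria, accounts for over half of the maximum score even though no criterion was explicitly weighted more heavily than another.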
Lastly, regarding the evaluation stage, I think more time should have been spent on comparing the results and explaining how we arrived at the ratings. While the checklist method appears systematic and objective, the ratings themselves still depend on each teacher’s judgment, and everyone seemed to have a different interpretation of the criteria we had listed beforehand. This lack of agreement might have stemmed from the fact that the wording in the checklist was not transparent enough, and also from the fact that giving a reasonable value judgment has always been difficult.
In conclusion, after this week’s pre-seminar task, I have learned that evaluating materials can be an extremely long and arduous process. The involvement of other team members can further complicate the task if everyone is not on the same page and does not understand each other well enough. With this experience, I will be more careful in the future when choosing teaching materials and when encouraging people to revise the materials they have created.
References
Littlejohn, A. (2011) The analysis of language teaching materials: Inside the Trojan Horse. In B. Tomlinson (ed.) 2011a: 179–211.
McGrath, I. (2013) Teaching Materials and the Roles of EFL/ESL Teachers: Practice and Theory. London: Bloomsbury.
Tomlinson, B. (2013) Developing Materials for Language Teaching (2nd edn.). London: Bloomsbury Academic.
This is an excellent post and pretty much exactly what is required. Yes, materials evaluation is hard, and collaborating with others to identify what to evaluate and how, and then evaluating the materials, is exceptionally hard. You will, I know, already have realised that this is precisely why I wanted you to do this task and work on it together. Teaching is a collaborative undertaking, and there are different and contradictory views about how it should be done and what materials should be used; all this forces us to find the most effective compromise that we can. Not easy.
I see no reason to push you to do more work on this post. You could have said something about the results of your evaluation of ‘face2face’, you could have commented on Theresa’s visit. However, this isn’t essential and you may already have done this elsewhere on your blog. – Paul
Evaluation can definitely be time-consuming and frustrating at times, especially when attempting to do so with peers. In a way, evaluation is like teaching itself: so many approaches, methodologies, theories and countertheories have been put forward and all of them seem well-grounded, but eventually one has to make certain choices and adopt a principled approach. The matter gets more complicated when one attempts to do a quantitative evaluation due to the weighting and rating issues you mention.
I agree with you that the predictive nature of pre-use evaluation isn’t the most accurate representation of the effectiveness or success of a coursebook unless we look into it after or during its use, a point I make in my own post as well. Nevertheless, pre-use evaluation does give you an idea.
Besides some sporadic handouts, I haven’t properly used face2face either so it’d be interesting to see how our pre-use evaluation and analysis correlate with our whilst- and post-use evaluations.
Our group took so much time on this task (about 8 to 10 hours contact time in the library plus additional personal time), but those meetings were worth every minute.
I regret that I never got to see Grace’s presentation, which she’d been up all night getting onto slides (my rather laissez-faire handwritten-note approach was a luxury I could only afford having used CUP books for teaching in Asia), but I appreciated your compiling everything into this one PowerPoint, and your presentation skills and attention to detail are an asset to any group.
The journey we underwent in deciding upon principles was such a learning curve, and I think we really discovered a lot about the role of the author and publisher and the compromises they make.