Assessment and the Language Teacher: Trends and Transitions

Writer(s): 
Geoff Brindley, National Centre for English Language Teaching and Research, Macquarie University, Australia

 

In recent years, assessment policies and practices in language teaching programs worldwide have been changing in a number of ways, both at system and classroom level. In line with political and economic pressure to demonstrate 'efficiency and effectiveness', educational institutions are increasingly being called on to provide much more explicit information on program activities and outcomes. In some countries, this has resulted in a proliferation of standardized tests of various kinds. In others, educational authorities have introduced outcomes-based approaches which use teacher-conducted assessments as a basis for reporting learners' progress and achievement against system-wide or national standards (Brindley, 1997). Teachers are thus finding themselves in the position of having to develop tools and procedures for monitoring, recording, and assessing learners' progress and achievement in the classroom on a more systematic and formal basis.

At this stage, however, relatively little is known about how teachers are dealing with these new demands. What I want to do in this paper, therefore, is to consider some of the effects of the changing assessment landscape on language teachers' role.

Assessment skills required by teachers

In order to carry out formal assessments which can meet minimum standards of validity and reliability, teachers need a wide range of skills. These include:

  • Observing, interpreting, and documenting learners' use of language
  • Designing classroom tests and assessment tasks
  • Analysing test results
  • Providing diagnostic feedback to learners
  • Evaluating the quality of tests and assessment tasks
  • Evaluating the quality of learners' language performances according to rating scales
  • Writing evaluative reports for program administrators

However, is it realistic to expect that teachers should possess these skills, given that assessment can by no means be considered a 'core' component of language teacher training courses? In this regard, Doherty, Mangubhai, and Shearer (1996), discussing the introduction of a new national assessment and reporting framework for adult literacy in Australia, comment:

Respondents who had a moderate level of assessment training were not convinced that their training equipped them for their assessment duties as delineated in explicit curricula.

Time demands of increased assessment

One of the most frequent findings to emerge from accounts of the development of outcomes-based approaches is that any assessment and reporting system that relies on teacher-conducted assessment and observation is extremely time-consuming. A commonly voiced concern is that constant assessment can erode teaching time. Cumming (1997), describing Grade 9 teachers' use of language standards in Ontario, identifies these sorts of pressures, typified by the following comment from a teacher:

There are so many tests coming from on high that I have no idea what they are. We are doing so many tests and all we find is that the patients are very sick. But what do we do, how do we get the patient well? To do that, we have to have time to teach.

Echoing this theme, Barrs (1992) reports that one of the main problems in implementing the detailed observational recording system used with the Primary Language Record in the United Kingdom was the sheer amount of time necessary to document many student performances on an ongoing basis: ". . . it does seem to be the case that it takes a full school year to 'learn the forms', to internalise the ways of observing that they encapsulate and to see the full value of this kind of recording" (p. 56).

What can be done?

Since teachers are the people responsible for implementing assessment "on the ground", it is important to ensure that they have the opportunity to acquire the skills they need to conduct high-quality assessments through appropriately targeted professional development. Some of their needs can be addressed by enrolment in formal degree courses or through attendance at in-service workshops. However, this theoretical knowledge needs to be supplemented by on-the-job experience in developing and using assessment tools.

Teachers can also enhance their assessment skills in a variety of other ways. These include moderation sessions, in which teachers come together on a regular basis to discuss performance standards or criteria using samples of students' work. Such sessions provide an opportunity for teachers to become familiar with typical or "benchmark" performances representing different levels of ability, thus helping them to improve the consistency of their judgements. At the same time, the close focus on features of language learning and use which accompanies the discussion of learner performances serves a valuable professional development function.

Another way of developing assessment expertise and at the same time improving the quality of standardized tests is through collaborative test development projects in which practitioners and professional testers work together (Shohamy, 1992). The involvement of teachers in developing specifications, item writing and trialling can help to ensure that test content is consonant with current teaching practices, thus increasing the likelihood that the test will have beneficial washback on teaching.

Providing institutional support

If teachers are to assume greater responsibility for assessment, they require sufficient time and resources to do the job properly. In some cases, however, policy makers and program administrators may need to be convinced of this, since they may not be aware of how time-consuming assessment can be, particularly when it involves the construction of formal tests. One way to demonstrate the impact of increased assessment duties on teachers' day-to-day work is to pilot the new tests or assessments over a reasonable length of time, documenting the kinds of assessment-related tasks performed by teachers and how long they take. Gunn (1995), for example, did this and found that carrying out systematic criterion-referenced assessment in an adult ESL class took up over 20% of class time! Once it becomes clear what demands a new form of assessment makes on teachers' time, an argument can be made, if necessary, for a corresponding reduction in teaching loads or the injection of extra resources.

When considering the issue of institutional support, it is important to bear in mind that the introduction of a new assessment policy or practice is no different from introducing a new curriculum or textbook: it is an exercise in educational change which requires careful planning and ongoing management (Fullan and Stiegelbauer, 1991). If the proposed change is clearly going to have major effects on people's work practices, it may be necessary to designate a person with specific responsibilities for assessment who can help people at all levels of the system work through the implications of the change. These responsibilities might include, among other things, communication with management and teachers, identification of teacher training needs, conduct of moderation sessions, co-ordination of test development, and the collection of assessment information for reporting purposes.

Conclusion

In this paper I have tried to sketch out some of the consequences that an expanded assessment role would have for language teachers. Experience indicates that they are prepared to undertake such a role if they perceive clear benefits for learners (Shohamy, 1992). However, given the limited assessment training with which many teachers enter the profession, it would be unrealistic to expect that this could happen without a considerable investment of resources, both in professional development and in institutional support. As more and more educational systems move in the direction of increased accountability and place greater assessment demands on teachers, it will become clearer to what extent these resources will be forthcoming.

References

  • Barrs, M. (1992). The Primary Language Record: What we are learning in the UK. In C. Bouffler (Ed.), Literacy evaluation: Issues and practicalities. Sydney: Primary English Teaching Association.
  • Brindley, G. (1997). Assessment and reporting in second language programs: Purposes, problems and pitfalls. In E. S. L. Li & G. James (Eds.), Testing and evaluation in second language education. Hong Kong: Language Centre, Hong Kong University of Science and Technology.
  • Cumming, A. (1997, March). Grade 9 teachers' use of standards. Paper presented at the Colloquium on Implementation of Language Standards, American Association of Applied Linguistics Conference, Orlando, Florida.
  • Doherty, C., Mangubhai, F., & Shearer, J. (1996). Assessment as an ongoing feature of the learning environment. In J. J. Cumming & C. E. van Kraayenoord (Eds.), Adult literacy and numeracy: Assessing change. Melbourne: Language Australia.
  • Fullan, M., & Stiegelbauer, S. (1991). The new meaning of educational change. London: Cassell.
  • Gunn, M. (1995). Criterion-based assessment: A classroom teacher's perspective. In G. Brindley (Ed.), Language assessment in action. Sydney: National Centre for English Language Teaching and Research, Macquarie University.
  • Shohamy, E. (1992). New modes of assessment: The connection between testing and learning. In E. Shohamy & R. Walton (Eds.), Language assessment for feedback: Testing and other strategies. Dubuque, Iowa: Kendall Hunt Publishing Company.

Geoff Brindley's workshop is sponsored by The Australian International Education Foundation (AIEF).