Formative Assessment in Ghana

Every day in Ghana we were woken at 6 am precisely – not by an alarm clock (we never needed one) but by the combination of sunrise, chickens crowing, and the neighbour’s radio. After breakfast we would walk downhill on a dirt track to Oblogo School, passing through the recycling yard and waving to the workers busy breaking up plastic chairs. At the school the children would be lining up to hand over the small amount of cash for that day’s teaching, and we would go up to the Omega Schools office on the first floor. We would work through till 4pm and then climb back up the hill to our flat, with local children calling out ‘Obruni!’ (white man) as we passed. Back home we would first jump in the shower and then relax on the balcony to watch the sunset, drink in hand. We would speculate about the probability of a power cut, hoping that tonight we might be able to cook in the light and maybe watch TV.

This was our life for a year (2011-12) while my wife and I were working as volunteers with a group of schools in Ghana. I was mainly developing formative assessment systems; Sandie was helping with this as well as quality assuring their teaching materials. There were a lot of positives for us – the people were lovely, the weather was warm and at weekends we could go to the local beach. The work itself, however, could be challenging, mainly due to problems with the lack of infrastructure and experienced staff.

Formative assessment

Formative assessment (also known as ‘assessment for learning’) is based on the idea that testing is not valuable unless it leads to improved teaching and learning. Omega Schools already had end-of-term tests, the results of which were reported to parents. When we arrived they were in the process of introducing mid-term tests in English, maths and science, using multiple choice questions. My task was to develop a system to make good use of these results to provide high-quality feedback to teachers and school managers (the term used for headteachers) and help to promote pupils’ learning.

All the tests were developed by subject specialists who were not assessment experts. Our first task was to improve the quality of those assessments, so that the results became more meaningful. As well as checking the questions and providing feedback to the subject specialists, we ran training courses in the principles of test development. Our colleagues showed great interest in these sessions, but some of the questions we saw later made us doubt that they had fully understood the points we had made.

Providing feedback

The main focus of our work, however, was collecting students’ results and providing detailed feedback to teachers and school managers in a way which was comprehensible and useful to them.

Feedback for teachers

For teachers we produced three different reports, based purely on the children’s test scores in each subject and not comparing their results to those from other schools. In the first report we showed the scores for each pupil in each subject, and whether they were significantly above (↑) or below (↓) the class average. Pupils above average in all three subjects were marked as ‘needing extra challenge’ and those below average in all three as ‘needing extra help’. Children who were above average in two subjects and below in the other were also identified, so that the reason for their poor results in the third subject could be explored.

Figure 1: Example of Teacher Feedback 1

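For readers curious about the mechanics, the logic behind this first report can be sketched in a few lines of Python (pandas). The pupil names, scores and the threshold used here for ‘significantly’ above or below the class average are invented purely for illustration, not the exact rules used in the real reports.

    import pandas as pd

    # Hypothetical class results: one row per pupil, one column per subject.
    scores = pd.DataFrame({
        "pupil":   ["Pupil A", "Pupil B", "Pupil C"],
        "english": [72, 45, 60],
        "maths":   [68, 40, 75],
        "science": [70, 38, 52],
    })
    subjects = ["english", "maths", "science"]

    # Flag each pupil as above (+1), below (-1) or around (0) the class average
    # in each subject; 'significantly' is taken here as more than half a
    # standard deviation from the mean - an illustrative choice only.
    flags = {}
    for subj in subjects:
        mean, sd = scores[subj].mean(), scores[subj].std()
        flags[subj] = scores[subj].apply(
            lambda x, m=mean, s=sd: 1 if x > m + 0.5 * s else (-1 if x < m - 0.5 * s else 0)
        )
    flags = pd.DataFrame(flags)

    # Categorise pupils for the teacher's attention.
    def category(row):
        if (row == 1).all():
            return "needs extra challenge"
        if (row == -1).all():
            return "needs extra help"
        if (row == 1).sum() == 2 and (row == -1).sum() == 1:
            return "explore the weak subject"
        return ""

    scores["category"] = flags.apply(category, axis=1)
    print(scores[["pupil", "category"]])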

In the second teacher report, we gave a table showing each child’s result on each item – correct (C), wrong (X), or unexpectedly wrong (?). The question mark was based on a model predicting item results from total test score: if a pupil got a wrong answer when the model predicted they should have got it right, this was flagged with a question mark. The idea was that the teacher should explore why the pupil got the unexpected result, and whether there was a problem with their understanding of that particular aspect of the subject.

Figure 2: Example of Teacher Feedback 2

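The details of the model behind the question marks are not spelled out above. One plausible way to implement the same idea is a logistic regression of item correctness on total score, as in the rough Python sketch below; the simulated data, the use of scikit-learn and the 0.7 probability cut-off are all illustrative assumptions rather than a description of the actual system.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Simulate plausible item-level data: 1 = correct, 0 = wrong.
    rng = np.random.default_rng(0)
    ability = rng.normal(0, 1, size=30)                # 30 pupils
    difficulty = np.linspace(-1.5, 1.5, 10)            # 10 items
    p = 1 / (1 + np.exp(-(ability[:, None] - difficulty[None, :])))
    responses = pd.DataFrame((rng.random((30, 10)) < p).astype(int),
                             columns=[f"q{i + 1}" for i in range(10)])
    total = responses.sum(axis=1)

    def flag_item(item, total, threshold=0.7):
        # '?' = wrong answer although a logistic model of correctness on total
        # score gave the pupil a high predicted chance of getting it right.
        # The 0.7 cut-off is purely illustrative.
        if item.nunique() < 2:                         # everyone right, or everyone wrong
            return pd.Series(np.where(item == 1, "C", "X"), index=item.index)
        model = LogisticRegression().fit(total.to_frame(), item)
        p_right = model.predict_proba(total.to_frame())[:, 1]
        return pd.Series(np.where(item == 1, "C",
                         np.where(p_right > threshold, "?", "X")),
                         index=item.index)

    report2 = responses.apply(lambda col: flag_item(col, total))
    print(report2.head())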

The third teacher report showed the proportion of pupils in the class getting each item right, with a brief description of the item content. The intention was to give teachers an idea of which items, and hence areas of the curriculum, caused the most problems for their pupils and might require further work.

Figure 3: Example of Teacher Feedback 3

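This third report is essentially an item-facility table. A minimal sketch of the calculation, again with invented data and item descriptions, might look like this:

    import numpy as np
    import pandas as pd

    # Hypothetical 0/1 item responses: rows are pupils, columns are items.
    rng = np.random.default_rng(1)
    responses = pd.DataFrame(rng.integers(0, 2, size=(30, 5)),
                             columns=[f"q{i + 1}" for i in range(5)])

    # Proportion of the class getting each item right, alongside a short
    # (invented) description of the curriculum content each item covers.
    facility = responses.mean().rename("proportion_correct")
    content = pd.Series({f"q{i + 1}": f"curriculum area {i + 1}" for i in range(5)},
                        name="content")
    report3 = pd.concat([content, facility], axis=1).sort_values("proportion_correct")
    print(report3)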

Feedback for school managers

To produce feedback for school managers, we created ‘Omega scores’ for each subject, which were standardised to a mean of 50 and standard deviation of 10 across all pupils. Reports showing average Omega scores for each year group and subject, and comparing them with the overall average, enabled school managers to see how their results stacked up against other schools’.

Figure 4: Example of School Manager Feedback

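The Omega score itself is simply a standardised scale score: omega = 50 + 10 * (score - mean) / SD, computed across all pupils taking that subject’s test. Here is a minimal Python sketch of the standardisation and the school-level averages; the school names and scores are invented, and in practice the standardisation would presumably also be done separately for each year group’s test.

    import pandas as pd

    # Hypothetical pupil-level results from a few schools.
    results = pd.DataFrame({
        "school":  ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
        "year":    [4, 4, 5, 4, 4, 5, 4, 4, 5],
        "subject": ["maths"] * 9,
        "score":   [12, 18, 15, 9, 14, 11, 20, 7, 16],
    })

    # Omega score: raw score standardised to mean 50, SD 10 within each subject.
    def omega(x):
        return 50 + 10 * (x - x.mean()) / x.std()

    results["omega"] = results.groupby("subject")["score"].transform(omega)

    # Average Omega score per school, year group and subject for the manager report.
    manager_report = results.groupby(["school", "year", "subject"])["omega"].mean().round(1)
    print(manager_report)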

We even produced a ‘league table’ plot of all schools’ results in ascending order, with a large arrow pointing to their own school – this became known to us as the ‘Hand of God’ plot.

Figure 5: Example of ‘Hand of God’ Plot

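A plot of this kind is easy to reproduce. The sketch below uses matplotlib with invented school names and averages, and is only meant to show the general shape of the ‘Hand of God’ plot, not the original graphic.

    import matplotlib.pyplot as plt
    import numpy as np

    # Invented average Omega scores for a small chain of schools, in ascending order.
    rng = np.random.default_rng(2)
    averages = np.sort(rng.normal(50, 5, size=10))
    labels = [f"School {i}" for i in range(1, 11)]   # illustrative names
    own = 6                                          # position of the school receiving the report

    fig, ax = plt.subplots()
    ax.bar(range(len(averages)), averages)
    ax.set_ylim(0, averages.max() + 12)              # leave room for the arrow
    ax.annotate("Your school",
                xy=(own, averages[own]),
                xytext=(own, averages.max() + 8),
                ha="center",
                arrowprops=dict(arrowstyle="->", lw=2))
    ax.set_xticks(range(len(averages)))
    ax.set_xticklabels(labels, rotation=45, ha="right")
    ax.set_ylabel("Average Omega score")
    ax.set_title("All schools in ascending order of average score")
    fig.tight_layout()
    plt.show()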

The limitation of this feedback will be immediately obvious. All of the scores computed were relative, and had to be, because we had no objective standard against which to measure pupils, classes or schools. Nevertheless, the feedback enabled school managers to see the strengths and weaknesses of particular classes in certain subjects, and (in theory at least) helped them identify teachers who needed support.

Use of feedback

Formative assessment needs two elements to work successfully: high-quality, relevant and timely feedback, and intelligent use by teachers to improve and focus their teaching. We believe the feedback we produced was both relevant and high quality – it was certainly more detailed and specific than anything we have seen elsewhere. It was as timely as we could make it, given that three stages were involved:

  • Collecting the pupil scores from schools
  • Analysing the data
  • Printing and collating the feedback.

Data analysis was relatively quick, but collecting the scores from schools was a slow and surprisingly complicated process. Printing and collating the feedback also took time, as each individual teacher received a bound booklet containing all of the reports relevant to him/herself, as well as a simply worded explanation of what the tables meant, and how the information could be used to direct pupils’ learning.

We did not rely entirely on written explanations. We ran training sessions for teachers, and for school managers, explaining the use and purpose of the feedback, and answering any questions they might have. Further, while the system was being developed, teachers and managers were interviewed and questioned about the usefulness of the feedback; some reports were dropped or modified on the basis of their comments. In general interviewees seemed pleased with the feedback and claimed that they found it helpful; but many struggled when asked to give specific examples of how it had been used.

Problems encountered

In operating the system, several problems were encountered, some of which have already been alluded to above.

  • It was very hard to get data recorded and entered that was good enough to be used without a great deal of checking and recoding. Getting school staff to use consistent pupil identifiers was a nightmare, not helped by pupils’ names being spelled differently each time, and often having first and second names transposed (one crude way of matching such names is sketched just after this list).
  • It was emphasised many times (by ourselves, and by the senior managers of Omega Schools) that the purpose of the feedback was to help teachers to teach their classes, and individual pupils, more effectively. Nevertheless, many teachers remained convinced that the real purpose was to judge their work, and that they would be in trouble if their pupils did not obtain the highest scores. (As a result, there was some evidence of maladministration – I developed statistical methods of detection, but that is another story.)
  • Before we left I trained staff in running the system we had set up, but unfortunately they all left shortly afterwards. This highlights another issue in developing countries – staff mobility, allied with a critical lack of suitably skilled personnel to operate sophisticated systems.
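On the first point above: one crude trick for matching inconsistently written names (not necessarily what we actually did) is to build a key that ignores case, punctuation and the order of the name parts, as in this small Python sketch. It does nothing about genuine misspellings, which still needed checking by hand.

    import re

    def name_key(raw):
        # Lower-case, drop punctuation, and sort the name parts so that
        # 'Mensah Kofi' and 'Kofi Mensah' produce the same key.
        parts = re.sub(r"[^a-z ]", " ", raw.lower()).split()
        return " ".join(sorted(parts))

    print(name_key("Kofi Mensah") == name_key("MENSAH, Kofi"))   # True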

Ideas for the future

When we joined Omega Schools in 2011, there were only ten schools in the chain, but the number expanded rapidly. By spring 2014 (when I did some analysis for them from home) it had increased to nearly 40. Although a larger number of schools/pupils may enhance the value of the analysis, it becomes impracticable to run a system like that described above. The process of collecting data from schools, and producing feedback booklets, becomes far too cumbersome, expensive and time consuming.

Ultimately, I believe the only feasible way to get accurate and timely formative data in this environment is to maximise the use of IT: pupils need to do the assessments on screen, the data needs to be sent to head office electronically, and the results communicated to teachers and managers in the same way. However, this would require a large investment in tablets or equivalent to allow all children to do the tests in a reasonable period of time. Given that, a high-quality electronic data collection, analysis and feedback system could be developed to provide formative information for teachers in such schools.

We really enjoyed our year in Ghana, and met some wonderful people. I am convinced there is potential for good formative assessment systems to help improve teaching and learning in schools like these.
