
A commitment from

The Children’s Plan

A School Report Card: Prospectus

Contents (paragraph numbers)

Introduction: 1–4
Links with Ofsted: 5–8
Overall score: 9–24
Performance categories: 25–31
Scoring: 32–35
Year-on-year comparisons: 36–37
Contextualisation: 38–40
 - Background information about a school: 41
 - Contextualised performance information: 42–48
Performance indicators: 49–50
Pupil attainment: 51–53
 - Primary: 54–59
 - Secondary: 60–61
 - Weighting the indicators: 62–64
Pupil progress: 65
 - Progress measures in English and mathematics: 66–68
 - Value added: 69–72
 - Contextualising pupil progress: 73–77
 - Confidence intervals: 78–80
 - Baselining contextual value added: 81
 - Pilot options: 82–83
Pupil wellbeing: 84–98
Parents’ and pupils’ perceptions: 99–101
Breaking the link between disadvantage and low attainment: 102–106
 - How to measure the gap: 107–115
 - Methodology for attainment: 116–119
Special educational needs and disability: 120–121
Partnership working: 122–124
Coverage: 125–127
Publication: 128–133
 - Changes to other reporting of school information: 134–136
Next steps: 137–139
Standardising the indicators: Annex

Introduction

1.

Chapter 4 of the White Paper: Your child, your schools, our future: building a 21st century schools system, published alongside this document, sets out the Government’s plans for the school accountability system and the position of the School Report Card within it. Those plans are summarised in the box below. The full text of the White Paper can be found at www.dcsf.gov.uk/21stcenturyschoolssystem.

2.

This document, which has been produced jointly by the Department for Children, Schools and Families (DCSF) and Ofsted, sets out our early decisions on the overall shape of the School Report Card and how we will now take forward work on its detailed design. The new School Report Card, to be introduced from 2011, will provide our key statement on the outcomes we expect from schools, and the balance of priorities between them, ensuring more intelligent accountability across schools’ full range of responsibilities. It will report on outcomes across the breadth of school performance: pupil attainment, progress, and wellbeing; a school’s success in reducing the impact of disadvantage; and parents’ and pupils’ views of the school and the support they are receiving. We will also consider how to place each school’s outcomes in context, so that fair comparisons can be made between the performance of schools with different intakes and challenges. This is vital so that all schools, regardless of background or intake, have the same opportunity to perform well on the School Report Card. The recent report of the Expert Group on Assessment, in calling for the earliest possible introduction of the School Report Card, also recommended that it should replace the existing Achievement and Attainment Tables as the focus of public accountability for schools. It will therefore supersede the Achievement and Attainment Tables as the central source of externally-verified, objective information on the outcomes achieved by schools. That will not mean a reduction in the information publicly available about schools’ performance. The detailed performance data used to prepare the School Report Card will continue to be published. Where further data is collected by the Government, it will also, wherever appropriate, continue to be available to the public.

It is important that the school accountability and school improvement systems have a clear and agreed understanding of what constitutes good school performance. We will continue to work closely with Ofsted on all decisions on the design and content of the School Report Card. We believe that an overall score or rating on the School Report Card is the way to provide clarity on what constitutes good overall performance for use throughout the system and by parents, and help to ensure recognition for the full range of outcomes achieved by schools. We recognise that the way any overall score is constructed would be critical to its success with schools, the educational community, parents and the public. We will therefore consult further on the categories that will be used and the indicators that will underpin those categories.

3.

Following the Secretary of State’s announcement last year of his intention to introduce a School Report Card, an initial consultation paper on the general principles that should govern its design and publication was published on 8 December 2008. The significant majority of respondents to that consultation, which closed on 3 March 2009, supported the need for a School Report Card. Further outcomes of that consultation are summarised throughout this document. A full account of the consultation responses can be found at http://www.dcsf.gov.uk/consultations.

4.

Given the central position we see for the School Report Card (and the Framework for Excellence for post-16 provision in schools), both in reporting on schools’ performance and, complementing Ofsted’s inspections, in underpinning schools’ accountability, it is vital that it is tested thoroughly. We therefore intend to pilot the proposals in this document over the next two years, starting from September 2009. Paragraphs 137 – 139 set out our broad pilot timetable. The results of the pilot work will be fully evaluated with all stakeholders before final proposals for the School Report Card are agreed.

Links with Ofsted

5.

A majority of respondents to our earlier consultation agreed that a common set of performance indicators should be used for both the School Report Card and Ofsted’s risk assessment, and that the latest Ofsted judgement should be shown on the School Report Card. Half of respondents agreed that the School Report Card should take the place of Ofsted’s proposed health check report.

6.

The School Report Card and inspection by Ofsted are complementary and different elements of the accountability framework for schools, and their future development needs to be co-ordinated. For example, the School Report Card and guidance for inspectors should reflect the same view of the relative importance of different outcomes. To reinforce this point, we continue to believe that the latest Ofsted judgement should be shown prominently on the School Report Card, but it should not contribute to the calculation of the Report Card’s scores.

7.

Our intention is that the indicators that underpin the School Report Card will form the core of the process of risk assessment that Ofsted will use to select schools for inspection. This congruence, particularly with regard to the indicators of pupil wellbeing and the satisfaction of parents and carers, will be developed over time, as individual indicators are piloted and evaluated. In the short term, Ofsted will use the selection process developed for the launch of the new inspection arrangements in September 2009. In determining whether schools should be inspected, Ofsted will not only use indicators, but also consider other information of a qualitative nature (for example, concerns reported by the local authority), which would not be included in the School Report Card.

8.

We intend that the School Report Card, alongside school inspection, will become central to the accountability framework and will be used by all – the school, its parents, its Governors, its School Improvement Partner and Ofsted – to inform school evaluation and become a trigger for intervention, based on a mutual understanding of the school’s performance across a broad range of outcomes. The Department for Children, Schools and Families and Ofsted have jointly produced this document and will continue to work closely together to develop the School Report Card, so that it can fulfil its role in providing a summary of the school’s performance that can meet the needs of all.

Overall score

9.

Whether the School Report Card should include an overall score or rating of each school’s performance, pulling together all the information provided on the School Report Card into a single judgement, was the most controversial aspect of the consultation.

10.

Respondents to the initial consultation were asked:

(i) if the School Report Card should include:
●● an overall score
●● an overall rating
●● both
●● neither
●● other

(ii) if an overall score is adopted, do you agree that this should be based on performance in all categories included on the School Report Card?

(iii) if an overall score is not adopted, how should we ensure that public attention is focused on a balanced measure of school performance, taking account of the whole range of school achievements?

11.

Of the 307 respondents to the first question, 57% (including most of the professional associations) thought that the School Report Card should contain neither an overall score nor an overall rating. Of those who opposed the adoption of an overall score or rating, the reasons given included that it would be too crude and simplistic to provide a balanced view of a school’s work, and that parents would be unable to judge a school’s effectiveness from an overall score or rating.

12.

Of the 258 respondents to the second question, 54% thought that an overall score or rating should be based on performance in all of the categories included in the School Report Card in order to give a more rounded picture of a school’s performance. The suggested alternatives to calculating an overall score or rating included using the Ofsted grade for overall effectiveness instead, or including a narrative report that summarised a school’s performance.

13.

An overall score or rating on the School Report Card would not be the same as the grade for overall effectiveness in an Ofsted inspection report. Rather, they would be complementary, but different, assessments of a school’s work. The overall score on the School Report Card would be calculated from a set of quantifiable indicators, and would provide a balanced picture of the measurable outcomes achieved by the school for its pupils. The Ofsted grade for overall effectiveness is a holistic judgement, reached in the light of a series of judgements about key aspects of the school’s work, including the quality of its provision (especially the quality of teaching and its impact on learning), the effectiveness of leadership and management, and the school’s capacity to improve. The judgements in Ofsted inspection reports take account of, but are not determined by, a range of performance data on which an overall score on the School Report Card would be based. In making their evaluations, however, and the recommendations that follow from them, inspectors also use first-hand evidence derived from observation of teaching and learning; discussions with pupils, staff, parents and the school’s partners; scrutiny of pupils’ work, school documentation and parental questionnaires; and direct experience of the school’s ethos and culture.

14.

As set out in the Government’s recent response to the Expert Group on Assessment, we continue to believe that the inclusion of an overall – or summary – score is of great importance if the School Report Card is to deliver the improvements that we want. The inclusion of – and process of arriving at – an overall score is an important step in ensuring that there is clarity and transparency over priorities across the different performance categories for schools.

15.

We fully recognise the range and complexity of what schools are expected to do, and therefore of the outcomes for pupils that the School Report Card will need to cover. Properly capturing and reflecting that range is a prime motivation for the introduction of a School Report Card, compared to our current arrangements for reporting on schools’ performance. But the range and complexity cannot become an excuse for obscuring, or failing to come to a view about, a school’s performance across the piece – particularly if schools are to have a clear understanding of the standards that are expected of them and the consequences of their performance.

16.

To provide a simple example, without an overall score it is unclear what message the School Report Card is giving about the performance of a school where pupils’ attainment is strong, but progress is no better than average, compared to a school which is providing excellent support for pupils’ progress but where academic attainment is unexceptional. We believe that, to be successful in its overall objectives, it is vital that the School Report Card should be clear about what is excellent, good and poor overall performance. The simplified illustration below demonstrates this:

[Illustration: example report cards for three schools, A, B and C, each showing results for Pupil Progress (how well pupils have moved on in their learning), Pupil Attainment (pupils’ academic achievement), Pupil Wellbeing (pupils’ health, safety, enjoyment, and wider opportunities), Pupils’ Perceptions, Parents’ Perceptions and Narrowing Gaps in Pupil Performance, posing the question: how do these schools compare?]

17.

We do not underestimate the challenges in developing a robust, credible and accepted means to draw school performance data together into an overall score. Doing so, however, would be a powerful means of communicating the full range of schools’ performance indicators in a balanced way, and the relative priorities attached to the different aspects of schools’ work in a manner that reflects the vision set out in the White Paper. This is particularly important to underpin the broader approach to school accountability and improvement that the White Paper establishes.

18.

As well as clarity over priorities for schools, inclusion of an overall score would provide a clear indication to stakeholders of the overall outcomes achieved by the school. Schools can be successful in many ways, and for different things, and different schools’ Report Cards will demonstrate different profiles of relative strengths and weaknesses – it will be relatively rare for a school to be uniformly strong or uniformly weak. If the School Report Card does not include an explicit overall score, it effectively leaves users of the Report Card with no indication of the relative priorities against which schools will be judged. This is, in effect, the current situation with the Achievement and Attainment Tables, which are criticised for providing a range of data on schools’ (predominantly academic) performance without explaining which indicators are most important.

19.

Experience shows that this frequently leads not to the establishment of a balanced view of a school’s overall performance, based on the range of data available, but to the choice of a single aspect of schools’ performance as summing up their full contribution – e.g. the proportion of 11 year-olds achieving level 4 or above in both English and mathematics; or 15 year-olds gaining five good GCSE passes, including English and mathematics. The School Report Card will not be successful if attention continues to be paid only to these traditional indicators. An overall score would establish the clear importance of reaching a rounded understanding of each school’s performance, rather than one drawn simplistically from a narrow range of indicators, or a single indicator.

20.

Of course, parents and other stakeholders will rightly have different views about what – for them – constitutes good outcomes for a school. Different parents will be looking for different strengths, reflecting the specific interests, aptitudes and needs of their children. By reporting all the underpinning performance data on the School Report Card, different users will still be able to look at the particular aspects of performance that interest them most – identifying areas of a school’s work that are particular strengths; or areas in a strong school that continue to need improvement. The inclusion of an overall score will, however, allow that detailed consideration to take place in the context of a general understanding of the school’s overall performance.

21.

For the reasons above, we strongly favour the inclusion of an overall score on the School Report Card. However, we recognise that it is difficult to reach final conclusions in the absence of a clear proposal on how the overall score would be derived; that can only be finalised once much of the detailed work on the individual indicators and performance categories for the School Report Card has progressed. For example, we need to pilot indicators and engage stakeholders in further discussion about the relative weightings of the performance categories and the indicators within them so that, if finally adopted, the overall score and rating command wide acceptance in schools and with the general public. While we proceed with the presumption that the School Report Card will incorporate an overall score, we will return to make a final decision once the detailed work has been concluded. There will be full consultation about these matters before a final decision is taken.

22.

There may, of course, be occasions when the overall rating on the School Report Card and the inspection grade for overall effectiveness are not providing the same message, and there will be good reasons for this. The difference may arise because the school’s performance has changed between the date of the information used to prepare the School Report Card and the date of the last inspection; or it may reflect some of the different aspects of a school’s performance which are identified through an inspection, but not through the indicators used in the Report Card.

23.

The following paragraphs set out our proposed approach over the life of the pilot to developing the different performance categories that will contribute to the overall score. Paragraphs 137 – 139 set out our broad pilot timetable. Because we will be adopting a staged approach to developing the School Report Card, we will not be able to test the calculation of an overall score until the second pilot year.

24.

The folded insert to this document shows what the front cover and second page of the School Report Card might look like. This is an illustration intended only as an aid to discussion during consultation. The final design of the School Report Card – including its subsequent pages – will be developed throughout the two year pilot phase.

Performance categories

25.

The initial consultation stressed the need for the School Report Card to reflect a wider set of outcomes for children, compared to those currently available, in a simple and accessible way. Consultation respondents were asked to indicate whether they agreed with the inclusion of the following broad performance categories:
●● Attainment
●● Pupil Progress
●● Wider Outcomes
●● Narrowing Gaps
●● Parents’ Views
●● Pupils’ Views
●● Parents’ and Pupils’ Views combined into a single Users’ Views category
●● Parents’ and Pupils’ Views combined within a Wider Outcomes category
●● Any other categories

26.

The strongest agreement was with the inclusion of Pupil Progress (70% of the 318 respondents to this question) and Attainment (63%), followed by Wider Outcomes (58%), Narrowing Gaps (45%), Pupils’ Views (42%) and Parents’ Views (40%). The proportion of respondents who thought that parents’ and pupils’ views should be combined within the Wider Outcomes category, or in a single Users’ Views category, was small. A further 17 categories were suggested by respondents. These included attendance, partnership working, behaviour and views of the school workforce. The first three of these will be covered by the School Report Card, though not necessarily as separate performance categories. We will consider the possibility of including the Views of the School Workforce as a performance category during the pilot phase. There is at present no school-by-school data available to the Department on workforce views, so in the pilot we will need first to examine whether robust data could be collected.

27.

We consider that each of the six performance categories identified in the initial consultation is important if the School Report Card is to give a more balanced picture of a school’s achievements. Pupils’ attainment is clearly a key indicator of how well a school is doing but needs to be complemented with information about their progress. A school’s contribution to its pupils’ wider development, and its work with disadvantaged and vulnerable groups, adds more important detail. This is further enhanced by information on the views of its pupils and their parents.

28.

It is also important that the School Report Card is accessible, and there is a danger that including too many categories (each with a separate judgement on a school’s performance in that category) would detract from, rather than improve, its accessibility. However, given the importance of each of the categories, and with the inclusion of an overall score bringing the various judgements together, we believe that the School Report Card can best provide clear information across the range of schools’ work by the inclusion of the full range of categories proposed.

29.

A further issue around accessibility that has been raised during the consultation is whether the names of some of these categories – in particular Wider Outcomes and Narrowing Gaps – are sufficiently clear to parents. It is of the highest importance that what the School Report Card is reporting on should be immediately clear to all its users. We therefore propose to test with a wide range of parents and other stakeholders how we refer to the individual categories. In the rest of this document, we refer to Pupil Wellbeing, rather than Wider Outcomes, and to Narrowing Gaps in Pupil Performance, although final decisions will only be taken later. Also, we refer to Parents’ Perceptions and Pupils’ Perceptions rather than to their “views”, to make clearer that we are intending to capture under these performance categories their satisfaction with a school’s provision, such as its ethos, curriculum and range of additional activities, and to distinguish this from measures of the extent to which they believe a school is contributing to pupil wellbeing (see paragraphs 84 – 101).

30.

In light of the above, we believe that the performance categories that should be included in the School Report Card are:
●● Pupil Progress
●● Pupil Attainment
●● Pupil Wellbeing
●● Pupils’ Perceptions
●● Parents’ Perceptions
●● Narrowing Gaps in Pupil Performance

31.

It is our intention that the scores attained in these categories would be used to calculate an overall score for the school. If, following further consultation, a decision is reached to add a category of Views of the School Workforce, further consideration would need to be given to whether it should also contribute to an overall score.
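Purely as an illustration of how the category scores might be combined into an overall score, the sketch below computes a weighted average across the six categories. The weights and the 0–100 scale are invented placeholders; the real weightings are exactly what the pilot and further consultation would determine.

# Illustrative only: the category weights and 0-100 scale below are hypothetical
# placeholders, not proposed values.

CATEGORY_WEIGHTS = {
    "pupil_progress": 0.25,
    "pupil_attainment": 0.25,
    "pupil_wellbeing": 0.15,
    "pupils_perceptions": 0.10,
    "parents_perceptions": 0.10,
    "narrowing_gaps": 0.15,
}

def overall_score(category_scores: dict) -> float:
    """Weighted average of per-category scores (each assumed to be on a 0-100 scale)."""
    total_weight = sum(CATEGORY_WEIGHTS.values())
    return sum(CATEGORY_WEIGHTS[c] * category_scores[c] for c in CATEGORY_WEIGHTS) / total_weight

example = {
    "pupil_progress": 72, "pupil_attainment": 55, "pupil_wellbeing": 68,
    "pupils_perceptions": 80, "parents_perceptions": 75, "narrowing_gaps": 60,
}
print(round(overall_score(example), 1))   # 66.5 for this invented school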

Scoring

32.

In the initial consultation, respondents were asked whether they agreed that each performance category should have:
●● a numerical score
●● an assigned rating
●● both a numerical score and an assigned rating
●● none of the above
●● other

33.

Of the 294 respondents to this question, just over a half agreed that each performance category should have a numerical score and/or an assigned rating. Among those opposed, mainly from primary schools, there was concern about using a range of information from varied contexts to arrive at scores or ratings. Alternative suggestions included using Ofsted inspection grades or narrative reports.

34.

While we recognise the concerns expressed by respondents, particularly from the primary sector, we agree with the majority of respondents that the School Report Card should represent schools’ performance within each category. If the data relating to the individual indicators used in each category is not brought together in this way, the School Report Card will not be able to fulfil its purpose of providing schools, parents and stakeholders with a clear picture of how schools are to be held to account. Without scores or ratings for each category, the School Report Card – while providing a better account of the range of outcomes to which schools contribute – would not help to provide clarity about a school’s overall performance in each category, and would leave schools vulnerable to being held to account in different ways, for different outcomes, by different stakeholders.

35.

We believe that the production of both scores and ratings for each performance category is legitimate and feasible, will enhance the value of the School Report Card for parents and other stakeholders, and through the indicators and weightings used, will signal clearly to schools the priority attached to the different elements of performance within each category. We therefore intend to proceed on the basis that there will be both a score and a rating for each performance category.

Year-on-year comparisons

36.

A majority of respondents to our earlier consultation agreed that the School Report Card should include information about changes in the school’s performance since the previous year and over the past three years.

37.

We agree that it is important to show how a school’s performance has changed over time, and it is our intention to show that change for each of the performance categories and the overall score. This might be shown on the front cover of the School Report Card by an arrow symbol indicating the trend since the previous year, or previous three years. Fuller information might then be made available on a subsequent page of the School Report Card. We will consider the range of that information, and the complexity and presentational issues of including it, during the pilot phase, with a view to testing options in the second year of piloting from September 2010.

Contextualisation

38.

In the initial consultation, two questions were asked about contextualisation:
●● Do you think that information about the school’s context should be provided as a separate item on the School Report Card?
●● Do you think that the indicators that underpin the scores for attainment, progress and wider outcomes should be “contextualised”?

39.

Of the 282 responses to the first question, the majority (66%) agreed. A number of those who agreed stressed that the school should be able to provide information and an explanatory commentary about its own context. Among those who disagreed, a small proportion (5%) felt that the inclusion of contextual information about schools serving areas of deprivation could be detrimental to the school and insulting to its community, and 6% thought that the context should be recognised and integrated into the calculation of scores rather than reported separately.

40.

Of the 257 responses to the second question, the majority (59%) thought that all the indicators should be contextualised, and 22% thought that some should be. Respondents felt that contextualised indicators would be fairer to all schools. The minority of respondents who expressed reservations about contextualisation were concerned about the potential for confusion, the impossibility of capturing all relevant elements of the context, and the possibility that it could allow some schools to justify low standards.

Background information about a school

41.

We agree that some basic information about a school should be shown on the School Report Card – for example, whether a school has a unit for pupils with special educational needs. Respondents also told us they would like the ability to provide information about the particular characteristics of their school – its ethos statement, for example. We will explore both these options during the first year of piloting from September 2009.

Contextualised performance information

42.

We are clear that every school – regardless of the local circumstances in which it operates – must have an equal opportunity of achieving a good score on the School Report Card. We should ensure that any school which is serving its pupils well has its efforts recognised in the School Report Card, while taking care not to provide any excuse for poor performance.

43.

However, we also have to recognise the importance of absolute outcomes for children and young people. Their life chances are at stake: future employers require minimum qualifications and will not make allowances for the context of the school that a prospective employee attended. Every child, regardless of their background, deserves the best chance to succeed.

44.

If performance in every category were contextualised, the importance of absolute outcomes for all children and young people would not be reflected sufficiently in the scores and ratings on the School Report Card. We believe that absolute attainment must be clear on the face of the Report Card. We propose, therefore, that the indicators of Pupil Attainment, and the resulting score for this performance category, should not be contextualised in any way.

45.

Indicators of Pupil Progress are by their very nature contextualised to a degree, because they reflect the progress made from pupils’ varying starting points. We believe that the Pupil Progress category should be the means through which we account for the context of the pupil intake when measuring a school’s performance in academic outcomes.

46.

We will consider carefully the method used to contextualise Pupil Progress and which characteristics should be taken into account. The options that might be considered during the pilot phase are discussed in detail in the section on Pupil Progress (see paragraphs 73 – 77).

47.

In order to ensure that every school, regardless of its intake, has a fair chance of achieving a good score on the School Report Card, we must get right the balance of the weighting given to Pupil Progress – which will be contextualised – and Pupil Attainment – which will not. While we believe this is another compelling argument in favour of including an overall score on the School Report Card, we recognise that this proposal must be thoroughly tested in the pilot phase. Nevertheless, without an overall score, a means of contextualisation might become overly complex and difficult to interpret for parents and the general public.

48.

Once we have a sufficiently complete and robust set of national survey data we will consider the need, and technical options, for contextualisation of indicators of Pupil Wellbeing and indicators of Parents’ Perceptions and Pupils’ Perceptions. We should be clear, however, that it would not be appropriate to contextualise all wellbeing indicators, for example, those relating to pupil safety.

Performance indicators

49.

In the initial consultation, we concentrated on the general principles for the School Report Card. We did not recommend specific performance indicators that might be used to measure the performance categories, but proposed eight principles to which indicators should conform. Of the 273 respondents to this question, a majority agreed that the performance indicators used should be:
●● Relevant
●● Robust
●● Outcome/output focused
●● Responsive
●● Differentiating
●● Stable
●● Complete and inclusive
●● Timely

50.

The following paragraphs explain the performance indicators we propose to test over the two pilot years to calculate outcomes in each of the performance categories. When developing the indicators which contribute to these categories, we will be mindful of existing data collections and the impact on burdens on schools. Minimising the time and effort spent on collecting and checking data will be a central topic of evaluation throughout the pilot phase. In line with our qualifications strategy, we expect that as the School Report Card is introduced, it will reward achievement in the qualifications which will form part of the main national suites of qualifications: GCSEs and A levels, Diplomas, and the Foundation Learning Tier (of lower level qualifications). As part of this, achievement in Functional Skills qualifications will be rewarded. During piloting, while new qualifications are still in the process of being made available nationally, it will be necessary to continue to recognise the wider range of qualifications currently approved for teaching in schools.

Pupil attainment

51.

Pupil attainment is universally recognised as a fundamentally important outcome of schools’ work. Whatever their backgrounds, the knowledge, understanding and skills that pupils acquire at school affect their life-chances. Therefore, although we recognise that the levels of attainment that pupils achieve reflect a school’s context as well as its effectiveness, we consider it vital to include their attainment as a performance category.

52.

In identifying indicators for this performance category, we want to strike the right balance between indicators relating to key thresholds, which are widely understood and used, and the achievement of which is crucial in preparing pupils for further study and productive lives, and indicators that give a more holistic picture across the attainment of all pupils in a particular school, and do not cause disproportionate emphasis to be given to those pupils closest to the threshold.

53.

In the first year of piloting starting from September 2009, we will necessarily have to begin by testing primary and secondary school indicators that are familiar to schools and other stakeholders, and for which we currently have data. That means that the indicators used will be based on Key Stage 2 test and Key Stage 4 examination results. Consequently, the first year of piloting will have to be restricted to those schools that have results in those tests and examinations – essentially mainstream junior schools, all through primary schools and secondary schools (including Academies). See also paragraphs 125 – 127 on Coverage.

Primary

54.

Ministers recently accepted a recommendation by the Expert Group on Assessment to end tests in science at Key Stage 2. The data on primary school attainment available for use on the School Report Card will therefore be Key Stage 2 test outcomes in English and mathematics, and the remaining question is whether the attainment category for primary schools should be based on separate indicators for tests in English and mathematics, or whether it should be based on an indicator that combines them.

55.

We no longer require primary schools to set targets separately for English and mathematics (but only to set a target for the proportion achieving both together) because we believe it is vitally important that pupils leave primary school with the necessary skills in both literacy and numeracy to be able to succeed in secondary school. However, continuing to present separate performance indicators for English and mathematics in the School Report Card will show whether there are any specific strengths or weaknesses in the teaching of those individual subjects at a school.

56.

We believe that there is a minimum set of Key Stage 2 indicators which should be considered for use in calculating a score for the Pupil Attainment category for primary schools. There is also an extended set of indicators which could provide a broader picture of a primary school’s performance and help parents and the general public to understand better how well it is doing. The lists below set out which indicators we believe to be in the minimum and extended sets. We propose to test different combinations of these in the first year of piloting from September 2009.

Minimum set of Key Stage 2 indicators:
●● Average point score per pupil in both English and mathematics – recognises the attainment of all pupils, but it is not transparent or easily understood by parents and the general public
●● Percentage of pupils achieving Level 4 or above in both English and mathematics – the standard expected of most pupils, which allows for successful progression to secondary education and beyond
●● Percentage of pupils achieving Level 4 or above separately in English and in mathematics – provides information about a school’s strengths and weaknesses in individual subjects: e.g. a low percentage of pupils achieving Level 4 in English and mathematics could mask strong performance in mathematics with weak performance in English

Extended set of Key Stage 2 indicators:
●● Percentage of pupils achieving Level 5 in both English and mathematics – an indicator of excellence which recognises a school’s work with more able children
●● Percentage of pupils achieving Level 5 separately in English and in mathematics – provides further information about a school’s strengths and weaknesses in individual subjects
●● Percentage of pupils achieving Level 3 in both English and mathematics – an inclusive measure reflecting a school’s work with lower attaining pupils

57.

Ofsted’s inspection judgements on the standards of attainment reached by pupils at a school are based on more than the past published attainment data that will be available for the School Report Card. They draw on other evidence including a school’s own data available to inspectors at the time of inspection. We do not, therefore, expect to see a precise correlation between a school’s performance in the attainment data used for the School Report Card and Ofsted’s inspection judgements on attainment.

58.

It is useful, however, to examine the overall correlation between potential School Report Card indicators of attainment and Ofsted’s judgements. If a particular set of potential indicators for the Report Card’s attainment category has no, or a very weak, relationship to Ofsted’s judgements, it would imply that it is a poor choice for use in the School Report Card. It is important to stress that, for the reasons set out above, it should not be our aim to maximise the correlation between the School Report Card attainment indicators and Ofsted’s judgements. The purpose of examining the correlation is to provide an additional check, through the pilot phase, on the overall validity of individual, or combinations of, indicators for use on the School Report Card.

59.

With this in mind, we have carried out some preliminary statistical analysis to measure the relationship between Key Stage 2 indicators and Ofsted’s inspection judgements on standards of attainment. Of the full set of potential indicators identified above, the average point score per pupil in English and mathematics had the strongest correlation to Ofsted’s judgements. A number of further options were modelled using different combinations of the Key Stage 2 indicators. The inclusion of additional indicators did not significantly change the level of predictive accuracy, which is to be expected given the correlation between the available indicators.
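The kind of preliminary analysis described in paragraphs 58 and 59 can be sketched as a rank correlation between each candidate indicator and the (ordinal) Ofsted attainment judgement. The sketch below uses Spearman’s correlation on invented school-level data; it illustrates the check, not the Department’s actual analysis.

# Illustrative check of how strongly candidate Key Stage 2 indicators correlate
# with Ofsted's ordinal attainment judgement.  All data below is invented.
from scipy.stats import spearmanr

schools = [
    # (average point score, % level 4+ in English & maths, Ofsted attainment grade 1=outstanding .. 4=inadequate)
    (29.1, 84, 1),
    (27.5, 75, 2),
    (26.0, 63, 3),
    (28.2, 79, 2),
    (24.8, 51, 4),
    (27.9, 71, 2),
]

aps = [s[0] for s in schools]
pct_l4 = [s[1] for s in schools]
ofsted = [s[2] for s in schools]

for name, indicator in [("average point score", aps), ("% level 4+ in E&M", pct_l4)]:
    # A strongly negative rho is the "good" outcome here, because grade 1 is the best grade.
    rho, p = spearmanr(indicator, ofsted)
    print(f"{name}: Spearman rho = {rho:.2f} (p = {p:.2f})")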

Secondary

60.

Unlike primary schools, the available attainment indicators for secondary schools are more numerous and cover results in a wider range of subjects. However, as with primary schools, we believe that there is a minimum set of Key Stage 4 indicators which should be considered for use in calculating a score for the Pupil Attainment category for secondary schools. There is also an extended set of indicators which could provide a broader picture of a secondary school’s performance and help parents and the general public to understand better how well it is doing. The lists below set out which indicators we believe to be in the minimum and extended sets. We propose to test different combinations of these in the first year of piloting starting from September 2009.

Minimum set of Key Stage 4 indicators:
●● Average point score per pupil “capped” at the best eight GCSEs (or equivalent) – recognises the attainment of all pupils, but it is not transparent or easily understood by parents or the general public
●● Percentage of pupils achieving Level 2 (i.e. five or more GCSEs at grades A*-C or equivalent) – the standard expected of most pupils, which allows for successful progression to post-16 learning and beyond
●● Percentage of pupils achieving English and mathematics GCSEs at grades A*-C – evidence shows that pupils with English and mathematics GCSEs at grades A*-C are most likely to succeed post-16 and at Level 3
●● Percentage of pupils achieving functional English and mathematics at Level 2 – essential life skills for success in learning and employment

Extended set of Key Stage 4 indicators:
●● Percentage of pupils achieving Level 2 including English and mathematics GCSE – this could be used in place of the second and third Key Stage 4 indicators above
●● Percentage of pupils achieving three or more GCSEs (or equivalent) at grades A*-A – an indicator of excellence which recognises a school’s work with the most able children
●● Percentage of pupils achieving functional English and mathematics at Level 1 – a measure reflecting a school’s work with lower attaining pupils in these essential life skills
●● Percentage of pupils achieving Level 1 (i.e. five or more GCSEs at grades A*-G or equivalent) – an inclusive measure reflecting a school’s work with lower attaining pupils
●● Percentage of pupils achieving at least one Entry Level qualification – an inclusive measure reflecting a school’s work with lower attaining pupils and identifying where pupils reach age 15 with no qualifications
●● Percentage of pupils achieving two GCSEs at grades A*-C in science – reflects the Government’s educational priorities
●● Percentage of pupils achieving a GCSE at grades A*-C in a modern foreign language – reflects the Government’s educational priorities

61.

As with primary schools, we have carried out some preliminary statistical analysis to measure the relationship between Key Stage 4 indicators and Ofsted’s inspection judgements on standards of attainment. Of the full set of potential indicators identified above, the percentage of pupils achieving five or more GCSEs at grades A*-C (or equivalent) including English and mathematics GCSEs had the strongest correlation to Ofsted’s judgements. A number of further options were modelled using different combinations of the Key Stage 4 indicators. The inclusion of additional indicators did not significantly increase the level of predictive accuracy, which is to be expected given the correlation between the available indicators.

Weighting the indicators

62.

There is an obvious tension between keeping the number of indicators to a manageable minimum and recognising the full range of a school’s priorities. Various combinations of indicators will be tested in the first year of piloting and the merits of each discussed fully with stakeholders before a decision is reached on the combination of indicators we believe should be used to measure attainment in the School Report Card.

63.

The most important element of the final composition of the attainment category and the impact of the School Report Card in assessing a school’s performance will be the weighting attached to each indicator. A range of weighting options and the need for performance “floors” within those options will be explored during the pilot stage. For example, if we were to continue to reflect current school improvement priorities, we could decide that the percentage of pupils achieving Level 2 including English and mathematics GCSE would be deemed twice as important as any other indicator and weight it accordingly, while also stipulating that no school with fewer than 30% of its pupils achieving that “threshold” could be awarded a good grade for attainment on the School Report Card.
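The weighting and “floor” idea in the example above can be sketched as follows. The indicator names, weights, thresholds and grade labels are hypothetical illustrations, not proposals; standardisation of the underlying indicators is discussed separately below and in the Annex.

# Rough sketch of the weighting-plus-floor idea in paragraph 63.  All weights,
# thresholds and grade labels are hypothetical illustrations.

def attainment_score(indicators: dict, weights: dict) -> float:
    """Weighted average of the indicator values (all expressed as percentages here)."""
    total = sum(weights.values())
    return sum(weights[k] * indicators[k] for k in weights) / total

def attainment_grade(indicators: dict, weights: dict) -> str:
    score = attainment_score(indicators, weights)
    # Floor rule: a school below the 30% "Level 2 including English and maths"
    # threshold cannot receive a good grade, whatever the weighted score says.
    if indicators["pct_level2_inc_en_ma"] < 30:
        return "not good (below 30% floor)"
    return "good" if score >= 60 else "satisfactory"

# Level 2 including English and maths is weighted twice as heavily as the others.
weights = {"pct_level2_inc_en_ma": 2.0, "pct_level2": 1.0, "pct_functional_l2": 1.0}
school = {"pct_level2_inc_en_ma": 28, "pct_level2": 55, "pct_functional_l2": 61}
print(attainment_grade(school, weights))   # the 30% floor applies: 28% < 30%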

64.

Finally, our statistical modelling demonstrated that combining different “raw” attainment results to derive an attainment category score is undesirable because the indicators are measured on different scales. For example, some indicators are expressed as percentages, others as point scores. To overcome this, the different indicators will be “standardised” in the pilot so that they are all measured on a comparable basis. Standardisation is discussed in more detail in the Annex to this document. While adding a level of complexity, standardisation does bring the benefit of being able to set a baseline for future years to show the extent to which schools’ attainment has changed over time.
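A minimal sketch of the standardisation step, assuming the common z-score approach in which each indicator is expressed relative to a national mean and standard deviation so that percentages and point scores sit on a comparable scale. The national figures shown are invented, and the Annex, not this sketch, defines the method actually proposed.

# Minimal sketch of standardising indicators measured on different scales
# (percentages vs. point scores) before combining them.  The national means
# and standard deviations are invented for illustration.

NATIONAL_STATS = {                       # indicator: (national mean, national std dev)
    "pct_level2_inc_en_ma": (47.0, 15.0),
    "capped_average_point_score": (330.0, 40.0),
}

def standardise(indicator: str, value: float) -> float:
    """Return the z-score of a school's value against the national distribution."""
    mean, sd = NATIONAL_STATS[indicator]
    return (value - mean) / sd

school = {"pct_level2_inc_en_ma": 55.0, "capped_average_point_score": 345.0}
z_scores = {k: standardise(k, v) for k, v in school.items()}
combined = sum(z_scores.values()) / len(z_scores)   # equal weights, purely for simplicity
print(z_scores, round(combined, 2))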

Pupil progress

65.

Pupil Progress, as a performance category, sits alongside pupil attainment and complements it. It is this category that recognises a school’s efforts in enabling its pupils to make gains in knowledge, understanding and skills over time, from their different starting points. Recognising the progress made by pupils given their varying starting points is the most important step in setting their attainment in context. There are different ways of measuring progress and therefore different indicators that could be used to reflect and report on it. They include:
●● progress measures
●● value added
●● contextual value added

Progress measures in English and mathematics

66.

Progress measures show the proportion of pupils who achieve or exceed the expected amount of progress in English and mathematics over time. In primary schools, most pupils are expected to make two levels of progress in English and mathematics between the end of Key Stage 1 and the end of Key Stage 2. This is based on an expectation that pupils achieve at least Level 2 in their Key Stage 1 assessments in Year 2, and at least Level 4 in their Key Stage 2 tests in Year 6. It follows, therefore, that in order to make the expected progress from the end of Key Stage 1 to the end of Key Stage 2, pupils must achieve the following results:

Key Stage 1 result    Key Stage 2 result
Level 1               Level 3 or higher
Level 2               Level 4 or higher
Level 3               Level 5

67.

In a similar way, pupils in secondary schools who achieved Level 4 at the end of Key Stage 2 are expected to achieve at least a grade C in their English and mathematics GCSEs by Year 11 – or the end of Key Stage 4. It follows, therefore, that in order to make the expected progress from the end of Key Stage 2 to the end of Key Stage 4, pupils must achieve the following results:

Key Stage 2 result    Key Stage 4 GCSE result
Level 2               Grade E or higher
Level 3               Grade D or higher
Level 4               Grade C or higher
Level 5               Grade B or higher

68.

Because schools set targets based on expected progress in English and mathematics1, it is important that progress measures are considered for the School Report Card. Progress measures are designed to eliminate any low expectations of lower attaining pupils and give all pupils an equal chance of success. They have the advantage of being relatively straightforward and easy to understand, relating to the actual progress made by individual pupils. However, they have the disadvantages associated with “threshold” measures in that they can focus attention on pupils near to the threshold to the potential detriment of others, and they may not include sufficient stretch for high attaining pupils.
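The threshold mappings in the two tables above translate directly into a calculation of the proportion of pupils making the expected progress. The sketch below encodes those mappings; the pupil data is invented and the handling of edge cases (for example, pupils with no Key Stage 1 result) is deliberately simplified.

# Sketch of a "proportion making expected progress" measure, using the
# threshold mappings set out in the two tables above.  Pupil data is invented.

KS1_TO_KS2_EXPECTED = {1: 3, 2: 4, 3: 5}                  # KS1 level -> minimum KS2 level
KS2_TO_GCSE_EXPECTED = {2: "E", 3: "D", 4: "C", 5: "B"}   # KS2 level -> minimum GCSE grade
GRADE_ORDER = ["G", "F", "E", "D", "C", "B", "A", "A*"]   # GCSE grades, low to high

def primary_expected_progress(pupils):
    """pupils: list of (KS1 level, KS2 level). Returns the proportion meeting the expectation."""
    met = sum(1 for ks1, ks2 in pupils if ks2 >= KS1_TO_KS2_EXPECTED.get(ks1, ks2 + 1))
    return met / len(pupils)

def secondary_expected_progress(pupils):
    """pupils: list of (KS2 level, GCSE grade). Returns the proportion meeting the expectation."""
    met = 0
    for ks2, grade in pupils:
        target = KS2_TO_GCSE_EXPECTED.get(ks2)
        if target is not None and GRADE_ORDER.index(grade) >= GRADE_ORDER.index(target):
            met += 1
    return met / len(pupils)

print(primary_expected_progress([(1, 3), (2, 4), (2, 3), (3, 5)]))            # 0.75
print(secondary_expected_progress([(4, "C"), (4, "B"), (3, "E"), (5, "B")]))  # 0.75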

Value added (VA)

69.

Value added measures are a familiar way of measuring progress. They enable a comparison of the progress made by pupils between the end of one Key Stage and another, compared with other pupils who have the same or similar prior attainment.

70.

The methodology for value added groups all pupils at the end of a particular Key Stage – nationally – based on their prior attainment at the end of the previous Key Stage, regardless of their personal characteristics or circumstances. Each pupil’s attainment in the later Key Stage is then compared with the average of their peers. Those who achieved higher results than the average of their peers are said to have made more progress than average. Those who achieved lower results than the average of their peers are said to have made less progress than average. A school’s value added score is the average of its individual pupils’ value added scores.
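The value added method described above can be sketched directly: pupils are grouped nationally by prior attainment, each pupil’s result is compared with the national average for that group, and the school’s score is the mean of those pupil-level differences. All figures below are invented and the attainment bands are simplified.

# Sketch of the value added method in paragraph 70: compare each pupil's outcome
# with the national average for pupils with the same prior attainment, then
# average the differences for the school.  All figures are invented.
from collections import defaultdict
from statistics import mean

# National data: (prior attainment band, outcome points) for every pupil nationally.
national = [(3, 28), (3, 30), (3, 27), (4, 33), (4, 35), (4, 34), (5, 39), (5, 41)]

# National average outcome per prior-attainment band.
by_band = defaultdict(list)
for band, outcome in national:
    by_band[band].append(outcome)
band_average = {band: mean(outcomes) for band, outcomes in by_band.items()}

def school_value_added(school_pupils):
    """school_pupils: list of (prior attainment band, outcome points) for one school."""
    deviations = [outcome - band_average[band] for band, outcome in school_pupils]
    return mean(deviations)

print(round(school_value_added([(3, 31), (4, 36), (5, 40)]), 2))  # positive = above-average progress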

71.

Value added measures are more holistic than progress measures, but are less transparent. They also cause some to worry that recognition of the slower progress made by some lower attaining pupils can contribute to entrenching lower expectations, or to excusing an insufficient focus on providing additional support to pupils who have fallen behind or are at risk of doing so. However, for secondary schools value added measures have the significant advantage of recognising the full range of pupil achievement across all GCSEs and equivalent qualifications.

72.

We believe, therefore, that the Pupil Progress category on the School Report Card for primary and secondary schools should include progress measures in English and mathematics, and a measure of value added, because they complement one another. The following paragraphs discuss how these indicators of progress should be contextualised.

1 Primary schools have set targets based on two levels of progress from Key Stage 1 to Key Stage 2 since 2007. Secondary schools will set targets based on expected progress from Key Stage 2 to Key Stage 4 from Autumn 2009.

Contextualising pupil progress

73.

As discussed in paragraphs 42 – 48, we believe that when measuring a school’s performance based on its academic outcomes, the Pupil Progress category should be the means through which we take account of context. The purpose of contextualising the indicators in this category would be to enable fair comparisons to be made between the performance of schools with different intakes and facing different challenges. There are essentially two possible approaches to this.

74.

The first approach is to produce indicators based on comparisons of a school’s performance with that of a group of other schools whose context or pupil intakes are similar – referred to as a “family of schools” or “statistical neighbours”. Groups of schools could be put together in different ways. For example, one approach might compare a school’s performance with that of other schools in a given local area. Another approach might compare a school’s performance with that of other schools across a wider area, but with similar pupil intakes. While both these approaches are useful for self-evaluation and improvement purposes, allowing similar schools to come together to share their experience and good practice, they are less suitable as a means of holding schools to account. One of the drawbacks of these types of comparison – especially if they are restricted to a small geographical area – is that the comparator group is very narrow. In the extreme, a group could comprise the worst performing schools on all measures so that, within that narrow comparator group, a school is judged the “best” – even though it would be the “worst” in any other group drawn from a wider area. For this reason we have rejected the use of “families of schools” or “statistical neighbours” for the School Report Card.

75.

The second approach is to produce indicators based on comparisons of the performance of individual pupils with that of other pupils nationally with the same characteristics. This is the approach used in producing the contextual value added (CVA) indicators currently used by Ofsted and in the Department’s Achievement and Attainment Tables. CVA measures not only take into account a pupil’s prior attainment, but also other contextual factors, known to have an effect on their progress, that are outside a school’s control – for example, their gender, degree of deprivation, Special Educational Needs, first language, ethnicity, and whether they have recently moved school. As with value added, a school’s CVA score is the average of its individual pupils’ CVA scores.

76.

One of the drawbacks of CVA is that the process of producing a school’s score is complex and difficult to explain to a lay audience. CVA scores can have wide confidence intervals, and therefore may not sufficiently differentiate one school from another. Further, CVA can sometimes be seen as excusing the lower attainment of some groups of pupils, or implying that lower expectations of these groups are acceptable.

77.

Nevertheless, CVA is widely recognised as the fairest method of contextualising pupil progress, because it is based on individual pupil characteristics and their prior attainment, and therefore not prone to the biases that can be created by comparisons based on school-level similarities. We, therefore, believe that some form of CVA is the best means of contextualising Pupil Progress. However, in light of the concerns about CVA, we will review the factors that should be taken into account and the methodology used to calculate scores during the pilot phase.

Confidence intervals

78.

CVA scores are commonly published with “confidence intervals”. Confidence intervals are a statistical means of showing the range within which readers can be confident that a school’s CVA score represents its “true” effectiveness. They are directly associated with the number of pupils at a school included in its CVA calculation, with smaller numbers resulting in wider confidence intervals – because there is less evidence on which to judge a school’s effectiveness.

79.

Confidence intervals also determine whether a school’s CVA score can reasonably be said to be above, below, or not significantly different from average. To illustrate this point, the chart below gives four examples of CVA scores, where the score is shown by the “dot” and the confidence interval by its vertical “whiskers”. In the left-hand example, the lower limit of the confidence interval is above the national average. This represents a school where pupils made, on average, significantly more progress than pupils nationally. In the right-hand example, the upper limit of the confidence interval is below the national average. This represents a school where pupils made, on average, significantly less progress than pupils nationally. In the middle two examples, the upper and lower limits of the confidence intervals straddle the national average. They represent schools where CVA scores are not significantly different from the national average.

80.

In the pilot, we will explore the limited amount of differentiation that the current CVA methodology can afford. We will review whether it is appropriate to use a school’s CVA score without reference to its confidence interval. We will also consider whether, as an alternative, it would be sufficient to simply categorise a school as either above, below or not significantly different from average – otherwise referred to as its CVA “significance state”.

[Chart: four example CVA scores with confidence intervals – one significantly above average, two not significantly different from average, and one significantly below average.]
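A sketch of the “significance state” idea in paragraphs 78 to 80: the confidence interval narrows as the number of pupils grows, and a school is classed as above, below or not significantly different from the national average according to whether the whole interval sits above, below or straddles it. The interval here uses the standard normal approximation (score plus or minus 1.96 standard errors); the published CVA methodology may compute its intervals differently, so this is illustrative only.

# Sketch of the CVA "significance state" idea (paragraphs 78-80).  The interval
# is the usual normal approximation (score +/- 1.96 * sd / sqrt(n)); the
# published CVA methodology may calculate its intervals differently.
import math

def significance_state(school_score, pupil_sd, n_pupils, national_average=0.0):
    half_width = 1.96 * pupil_sd / math.sqrt(n_pupils)
    lower, upper = school_score - half_width, school_score + half_width
    if lower > national_average:
        return "significantly above average"
    if upper < national_average:
        return "significantly below average"
    return "not significantly different from average"

# Same school-level score, different cohort sizes: the smaller cohort gives a
# wider interval and therefore a less definite conclusion.
print(significance_state(0.8, pupil_sd=6.0, n_pupils=400))  # significantly above average
print(significance_state(0.8, pupil_sd=6.0, n_pupils=30))   # not significantly different from average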

Baselining Contextual Value Added

81.

An issue with the current CVA model is that results are relative – the indicator shows performance in relation to this year’s national average for the group of schools or pupils used for comparison. We believe that the possibility should be explored of establishing a baseline for CVA that, for a given period, would not be re-calculated annually, enabling schools to demonstrate absolute progress rather than progress in relation to other schools. We therefore intend to test a modified CVA model during the pilot phase, in which pupil progress in the 2009 test and examination results is measured against a baseline set using the known impact on progress established in 2008.
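One reading of the baselining proposal is that the contextual model is fitted once on the 2008 cohort, its coefficients are frozen, and 2009 pupils are then scored against that fixed expectation rather than against a re-estimated national average. The sketch below uses a deliberately tiny invented model to show the idea; it is not the Department’s CVA specification.

# Sketch of baselined CVA (paragraph 81): coefficients estimated from the 2008
# cohort are frozen, and 2009 pupils are scored against that fixed baseline.
# The model and its coefficients are invented and far simpler than real CVA.

BASELINE_2008 = {                # frozen coefficients from the (hypothetical) 2008 model
    "intercept": 10.0,
    "per_prior_point": 1.4,      # expected outcome points per point of prior attainment
    "deprivation_adjust": -2.0,  # adjustment for pupils flagged as deprived
}

def expected_outcome(prior_points, deprived):
    b = BASELINE_2008
    return b["intercept"] + b["per_prior_point"] * prior_points + (b["deprivation_adjust"] if deprived else 0.0)

def baselined_cva(pupils_2009):
    """pupils_2009: list of (prior points, deprived flag, actual outcome points)."""
    residuals = [actual - expected_outcome(prior, deprived)
                 for prior, deprived, actual in pupils_2009]
    return sum(residuals) / len(residuals)

# Positive scores show progress above the fixed 2008 expectation, so genuine
# year-on-year improvement is visible rather than being re-averaged away.
print(round(baselined_cva([(15, False, 33.0), (12, True, 27.5), (18, False, 36.0)]), 2))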

Pilot options

82.

We intend to take forward development work on a range of options, and to pilot them thoroughly before making decisions about the approach to contextualisation to be adopted in the School Report Card. We will test two combinations of indicators:

Primary school indicators:
●● Proportion of pupils making the expected progress from Key Stage 1 to Key Stage 2 in English
●● Proportion of pupils making the expected progress from Key Stage 1 to Key Stage 2 in mathematics
●● Baselined CVA score
●● Baselined CVA significance state

Secondary school indicators:
●● Proportion of pupils making the expected progress from Key Stage 2 English to GCSE English
●● Proportion of pupils making the expected progress from Key Stage 2 mathematics to GCSE mathematics
●● Baselined CVA score
●● Baselined CVA significance state

83.

As with the Pupil Attainment category, we will also use the pilot to explore various options for weighting the relative importance of each indicator within the Pupil Progress category.


Pupil wellbeing

84.

Schools make a difference to pupil wellbeing, and the inclusion of pupil wellbeing as a performance category in the School Report Card will formalise what has long been an under-recognised feature of the work of most schools. Schools help children and young people grow up healthy, confident and motivated, well equipped to fashion successful lives and keen to contribute positively to their communities.

85.

As the White Paper makes clear, these wider outcomes are important in their own right, and they underpin pupil attainment and progress. Pupils who feel unsafe, whose health is poor and who have negative attitudes may not achieve as well as they could, while for those who thrive, the possibilities are boundless.

86.

Increasingly, schools are recognising both their responsibility (and, since September 2007, their statutory duty) to promote pupil wellbeing, and the wide range of ways in which they can do so. As they develop extended services and build effective local partnerships with other agencies which work with children, young people and families, the potential of schools to have a beneficial impact on their pupils’ lives is becoming even greater, and their role as significant partners within the local Children’s Trust is increasingly realised.

87.

Identifying the right measures to reflect schools’ contribution to children’s wellbeing is not straightforward. In recent years, the five Every Child Matters outcomes2 have provided a widely accepted framework for discussing pupil wellbeing. The National Indicator Set used to evaluate the performance of local authorities is organised around the five outcomes, but most of its measures are not available for individual schools. Even if they were, the aspects of the outcomes to which they relate are often not those for which it would be fair to hold schools directly to account. Schools can affect the outcomes, but many other factors influence them as well. What should be measured, where possible, is the school’s contribution to pupil wellbeing.

88.

There is a range of quantitative and qualitative indicators that might be used to measure a school’s contribution to pupil wellbeing. Ofsted’s and the Department’s response to the joint consultation on Indicators of a school’s contribution to wellbeing, published at the same time as this document,3 sets out the possibility of using the quantitative measures in the Table below, which are currently available at national level and for most schools. Perception surveys of parents and pupils could yield further qualitative measures, also set out in the Table below.

2 The five Every Child Matters outcomes are: be healthy, stay safe, enjoy and achieve, make a positive contribution, and achieve economic wellbeing.
3 The response to the consultation on Indicators of a school’s contribution to wellbeing can be found at www.ofsted.gov.uk


Quantitative Wellbeing Indicators

School measures of:

●● attendance and persistent absence
●● permanent exclusions
●● post-16 progression
●● pupils provided with at least two hours per week of high quality PE and sport
●● the uptake of school lunches

Qualitative Wellbeing Indicators

The extent to which a school:

●● promotes healthy eating
●● promotes exercise and a healthy lifestyle and (for younger children) play
●● discourages smoking, consumption of alcohol and use of illegal drugs and other harmful substances
●● gives good guidance on relationships and sexual health
●● helps pupils to manage their feelings and be resilient
●● promotes equality and counteracts discrimination
●● provides a good range of additional activities
●● gives pupils good opportunities to contribute to the local community
●● supports pupils to make choices that will help them progress towards a chosen career/further study

The extent to which pupils:

●● feel safe
●● experience bullying
●● know whom to approach if they have a concern
●● enjoy school
●● are making good progress
●● feel listened to
●● are able to influence decisions in the school

89.

Ofsted will use the quantitative indicators in this table in school inspections from September 2009. These indicators provide one source of evidence for aspects of wellbeing. They will be used alongside a wide range of other evidence gathered through inspection to help inspectors reach their judgements. For example, in judging how well pupils adopt healthy lifestyles, inspectors will want to discuss with the school the proportion of pupils who take part in at least two hours per week of high quality sport. This indicator itself will not be the single determinant of the judgement about the extent to which pupils adopt healthy lifestyles, as inspectors will also consider other evidence. Ofsted is planning to make national benchmark data available on the quantitative indicators for use in the conversations between schools and inspectors.

90.

At the moment these quantitative indicators provide the only proxy data we have for a school’s contribution to pupil wellbeing. With regard to the School Report Card, these indicators might provide a partial insight into some aspects of wellbeing – for example, the take-up of school lunches could be an indication of the contribution a school is making to promoting healthy eating. Permanent exclusions data on the other hand is likely to cover too few pupils to be viable for use in the School Report Card – especially in primary schools. And we need to be very wary of the risk of creating perverse incentives around behaviour policy and exclusions decisions through the School Report Card. Permanent, and fixed term, exclusions are the right response in certain situations, and the School Report Card should not create a disincentive for their proper use. While it makes sense for this information to be considered during inspection, when the wider context of decisions to exclude can be taken into account, we do not believe that it would be appropriate to incorporate it on the School Report Card. More generally, the use of these quantitative indicators in the School Report Card to systematically derive a category score for Pupil Wellbeing will need very careful consideration.

91.

Attendance and persistent absence indicators have been in use in the accountability system for some years through publication in the Achievement and Attainment Tables and use in Ofsted inspections. We see no need to develop these performance indicators further and will consider their contribution to the School Report Card once we have a view on the complete set of indicators which are robust enough for use in the Pupil Wellbeing category.

92.

An important aspect of a school’s contribution to a pupil’s long-term wellbeing and success later in life is how well it prepares, encourages and supports all its young people to take their learning to the next level and continue to progress post-16. This is becoming increasingly important if young people are to benefit fully from the reforms we are making to education and training for 14-19 year olds. We therefore want to focus schools’ attention on preparing their young people not only for success at 16 but also for success at 19, and to recognise and reward those schools that do this well.

93.

Newly developed 16-19 progression measures show the proportion of pupils completing Year 11 at a school who went on to participate in learning and achieve further qualification levels by age 19. We are making 16-19 progression data available to all mainstream maintained secondary schools this September to support their self-evaluation. As the data will initially only be available to individual schools for self-evaluation purposes, Ofsted will also use information about pupils at a school who did not stay in education, employment or training (NEET). Together these measures can be used to help schools, with School Improvement Partners and inspectors, to evaluate their contribution to pupil wellbeing and future success. We will work with schools in the first year of piloting from September 2009 to test the robustness of 16-19 progression measures for use as indicators of Pupil Wellbeing in the School Report Card.


94.

Ofsted’s experience over the coming year will inform us whether or not the absence, sport or school lunch indicators provide a useful source of evidence for helping to reach judgements on aspects of wellbeing. If the indicators prove to be a valuable source of evidence, we will investigate whether such school-level data would be appropriate for use in the School Report Card, recognising that the use of the indicator on the School Report Card would be different to its use in the inspection process.

95.

Qualitative indicators of Pupil Wellbeing (and a school’s contribution to it) could be based on data from surveys of parents’ and pupils’ views. At present, there is no nationally comparable set of perception survey data – many schools commission surveys of parents’ and pupils’ views of different kinds from a range of providers. Ofsted’s and the Department’s joint consultation on Indicators of a school’s contribution to wellbeing recommended that schools’ own surveys could ask additionally for parents’ and pupils’ views on the aspects of their provision set out in the Table above. Ofsted proposes to trial the use of parent and pupil surveys from autumn 2009. At the same time, we will investigate how the data can be used to derive indicators of Pupil Wellbeing for the School Report Card.

96.

In the second year of piloting from September 2010, we will consider how all available wellbeing indicators can be used to derive a category score for Pupil Wellbeing and test their use.

97.

Further to this, we will explore during the pilot phase the possibility of developing a measure of the quality of the extended services provided through a school.

98.

Finally, in addition to developing performance indicators that will contribute to a school’s score for the Pupil Wellbeing category, we will include on the School Report Card the most recent Ofsted judgement on the behaviour of learners. This follows a recommendation by Sir Alan Steer, in the concluding report of his review of pupil behaviour published in April 2009.


Parents’ and pupils’ perceptions

99.

Parents’ Perceptions and Pupils’ Perceptions have been proposed as separate performance categories. The satisfaction of both pupils and their parents or carers with a school’s work is both an indicator and a vital element of the school’s performance. We propose, therefore, to incorporate scores of the satisfaction of these two groups of users, separately, in the School Report Card. We propose that these scores and ratings are underpinned by indicators derived from surveys of the views of pupils and their parents or carers.

100.

Although they overlap, it is important to distinguish between the two kinds of indicators that surveys of parents and pupils could yield. As already discussed at paragraph 88, surveys can provide important indicators of pupil wellbeing that can be used alongside the quantifiable indicators derived from other sources. Surveys can also yield indicators about parents’ and pupils’ perceptions about a school’s provision more generally, for example their satisfaction with a school’s:

●● direction and ethos
●● teaching
●● curriculum
●● extra-curricular activities
●● guidance and support

101.

Currently, as for indicators of wellbeing, there is no national set of data that will provide indicators of satisfaction for individual schools or nationally. As with indicators of wellbeing, Ofsted intends to trial the use of parent and pupil surveys over the coming year to develop comparable data, from which indicators of satisfaction could be produced. At the same time, we will investigate how the data can be used to derive indicators of Parents’ Perceptions and Pupils’ Perceptions for the School Report Card and pilot their use in the second year of piloting from September 2010.


Breaking the link between disadvantage and low attainment

102.

All schools aim to deliver a personalised education which meets the needs of all their pupils and enables them to achieve their maximum potential, both academically and in wider outcomes. Our accountability system needs to measure how successful each school is in this endeavour. The School Report Card must reflect schools’ successes in improving the attainment of disadvantaged pupils alongside, and not at the expense of, their peers. Under our proposals, no school will gain credit for narrowing the gap by “levelling down”.

103.

A pupil’s prior attainment is a key indicator of their future success. But, while every child is an individual, national data show that certain groups systematically under-perform in relation to their peers, and that their circumstances have an effect on their achievement independent of their prior attainment. Ethnic background, level of household income, and whether a child has special educational needs or is looked after by a local authority are all factors which correlate closely with under-performance at school. Therefore, within the overall context of personalisation, we need the accountability system to provide schools with positive incentives to identify and monitor the progress of these groups of children, to ensure they buck the historical trend and do not fall behind their peers.

104.

Of course, not all disadvantaged children achieve poor results. On the contrary, many do well at school and go on to lead successful and fulfilled lives. But even in schools where pupils do well, there can be unacceptably large gaps in performance between particular groups of students. For example, pupils eligible for free school meals (FSM) on average make less progress between Key Stages than their more affluent peers at any given level of prior attainment. In order to overcome historic patterns of under-performance, teachers need to set equally high expectations for every pupil based on their potential, and not be content with achievement in line with national trends if a pupil can achieve more.

105.

Reducing variance between pupil outcomes is an issue within every school, but data shows that breaking the link between disadvantage and low attainment is a particularly high priority. The gap between the attainment of pupils eligible for FSM and their peers is 28 percentage points at Key Stage 4 and 20 percentage points at Key Stage 2 – a larger gap than for any ethnic minority group, and at least twice as large as the gap between boys and girls. It is an urgent priority to narrow this gap, which has hitherto been largely invisible at pupil level, so that circumstances at birth have less influence on pupils’ future life-chances. Ultimately, our aim must be that all children and young people should be able to succeed and achieve their full potential, regardless of their background, and that lack of equity is no longer a feature of English education.

106.

In due course, when we have established a suitable dataset on pupil wellbeing, we will consider whether that too should be covered by the Narrowing Gaps in Pupil Performance category.

How to measure the gap

107.

The indicators we propose to use in the pilot to measure the Narrowing Gaps in Pupil Performance category are designed to specifically address under-performance correlated with poverty or ethnicity, based on established Key Stage 2 and 4 attainment thresholds. Narrowing Gaps in Pupil Performance will be a supplementary category which, alongside the Pupil Progress category, will measure how well a school enables every child to do well.

108.

During the first year of piloting from September 2009, we will test the design and use of new indicators aimed at narrowing the gap in attainment. At the same time, we will consider with stakeholders whether new measures should be included in the Pupil Progress category to specifically address “gaps” – for example, checks on any variance in the amount of progress made between pupils at different starting points. Or we might decide that a measure of gaps in progress would be a suitable addition to the Narrowing Gaps in Pupil Performance category.

109.

The starting definition of a disadvantaged pupil included in the Narrowing Gaps in Pupil Performance category will be the same as that used in Regulations that came into force on 31 December 2008 requiring local authorities to set targets for eight “under-performing groups”. These groups are:

●● Black Caribbean
●● White/Black Caribbean
●● Black African and White/Black African4
●● Black Other
●● Pakistani
●● White Other
●● Gypsy, Roma and Traveller of Irish heritage5
●● Children eligible for free school meals

110.

While children who are eligible for FSM fall within the target setting Regulations, we recognise that eligibility for FSM is not a perfect measure of pupils who are affected by income deprivation. We therefore propose to test an alternative measure that combines eligibility for FSM with the Income Deprivation Affecting Children Index (IDACI), which is a postcode-based deprivation indicator currently used in the calculation of Contextual Value Added in the Achievement and Attainment Tables and by Ofsted.

4 This is a new group combining the standard Black African and Mixed White and Black African groups into one.
5 This is a new group combining the standard Gypsy/Roma and Traveller of Irish Heritage groups into one.

111.

Because other groups of children – such as those in care – are also recognised as being affected by disadvantage, we will periodically review which groups should be included in this performance category. In particular, the Department is working on ways to improve the data that we collect on looked-after children. While data are unlikely to be available in time for the first year of piloting, we will review their inclusion in time for the second year of piloting from September 2010.

112.

Some schools have no such disadvantaged pupils, while in other schools the number of disadvantaged pupils will be too small to provide a reliable measure of their attainment as a group. In order that these schools are not penalised in the School Report Card, the Narrowing Gaps in Pupil Performance category will not apply universally. Instead, we propose that the School Report Card use a “credit system”. Those schools that have sufficient numbers of disadvantaged pupils will gain credit depending on the extent to which they achieve continual improvement for all pupils, while at the same time narrowing the attainment gap between their disadvantaged pupils and their more advantaged peers.

113.

During the pilot phase, we will also consider carefully whether penalties would be appropriate in any instances where gaps in attainment widen. Though intuitively this might make sense, there is a real risk that doing so might unfairly penalise schools because of a change in their intake, rather than their performance. Conversely, a credit-only system might help change behaviour and lead to fairer access to all schools for children from disadvantaged backgrounds.

114.

Separately identifying each disadvantaged group is attractive in that it enables the attainment of individual groups to be considered in their own right. However, having eight separate groups would mean that the numbers in many schools – especially in primary schools – would be too small to be reliably reported. For this reason, we are minded to calculate an aggregate measure for primary schools that merges all disadvantaged pupils into a single group. This approach is likely to ensure that there is a critical mass in a greater number of schools, enabling the results to be published and, additionally, avoiding any double counting of pupils who fall into more than one group (e.g. a black Caribbean pupil who is eligible for free school meals).

115.

In secondary schools, where the number of disadvantaged pupils is sufficiently large, we will test in the pilot phase separate identification of income deprived pupils. Additionally, if any one of the minority ethnic groups is sufficiently large we will consider separately identifying that group too – subject to exploring ways to prevent double-counting.


Methodology for attainment

116.

The attainment indicators we propose to test for the Narrowing Gaps in Pupil Performance category will be the same as those set out in the Local Authority target setting Regulations, namely:

●● at Key Stage 2, the proportion achieving Level 4 or above in both English and mathematics
●● at Key Stage 4, the proportion achieving 5 or more GCSEs at grades A*-C (or the equivalent) including English and mathematics

117.

The attainment “gap” that will be measured is the difference between the proportion of disadvantaged pupils and the proportion of other pupils at a school who reach the expected attainment levels set out above. Whether an attainment gap has been “narrowed” will be based on how much, and the rate at which, those differences have changed from one year to the next. Measurement of the Narrowing Gaps in Pupil Performance category is designed to reward a school where:

●● the attainment of both its disadvantaged pupils and their peers has increased above the previous year’s results; and
●● the attainment of its disadvantaged pupils has increased at a faster rate than that of their peers.

We envisage that the way in which schools will gain credit will fall into four broad scenarios, set out below – all of which will be tested in the pilot.

118.

We recognise that there are a number of circumstances where the School Report Card needs to acknowledge the absolute level of attainment or strong year-on-year improvement in the results of disadvantaged pupils at a school. While not an exhaustive list, these circumstances include where:

●● the attainment of a school’s disadvantaged pupils already exceeds that of their more advantaged pupils
●● the attainment of a school’s disadvantaged pupils shows very strong year-on-year gains but the attainment of their peers has slipped.

We will use the pilot to identify any other circumstances that need to be recognised alongside these.

119.

During the pilot we will test the following broad scenarios. For illustrative purposes, the scenarios are based on the Key Stage 2 results for primary schools, using the attainment of pupils eligible for free school meals and those not eligible for free school meals as the illustrative national averages. All the principles apply equally to using the Key Stage 4 indicator for secondary schools. The value of the credits shown in the scenarios is also illustrative: they are designed to give a sense of how the credit might vary across the four scenarios and within each scenario. The exact level of credit will be determined later in the pilot phase.


SCENARIO A: the attainment of disadvantaged pupils starts below that of their peers

Credit will be given where:

➔ the attainment of both disadvantaged pupils and their peers is above the level of the previous year’s results, and
➔ the attainment of disadvantaged pupils increased at a faster rate than that of their peers in the school.

Overall, this would mean that the school’s headline attainment rises and the gap between the disadvantaged pupils and their peers closes. This is the ideal scenario and will attract the greatest credit, up to a maximum of 20 points (see below).

% attaining Level 4 or above in both English and mathematics at KS2

              Disadvantaged pupils        Peer pupils
              2007        2008            2007        2008        Credit?
National      51%         54%             75%         76%
School 1      45%         47%             76%         77%         Yes
School 2      45%         47%             76%         79%         No

CREDITS: In School 1, the gap between the attainment of disadvantaged pupils and their peers has narrowed – the gap was 31 percentage points in 2007 and is one percentage point lower in 2008. Across all schools we would expect the amount by which the gap narrows to vary from school to school. To reflect this, schools will be split into quartiles (i.e. into four groups with 25% of schools in each group) based on the amount by which the gap narrowed. The bottom quartile (the schools whose gap narrowed by the smallest amount) will have their overall score increased by, say, 5 points; the next quartile group will have their overall score increased by 10 points; and so on up to the top quartile of schools (where the gap narrowed by the most), which will have their overall score increased by 20 points.

No credit would be given to School 2. The attainment of its disadvantaged pupils did increase by two percentage points (from 45% in 2007 to 47% in 2008), but this improvement was lower than that of the other pupils in the school, whose attainment increased by three percentage points (from 76% to 79%).
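To make the mechanics concrete, the sketch below (purely illustrative, not the Department’s algorithm; the field names and data structures are assumptions) applies Scenario A’s two qualifying conditions and then awards the illustrative 5/10/15/20 points by quartile of gap narrowing.

```python
# Illustrative sketch of the Scenario A credit described above: check the two
# qualifying conditions, then split qualifying schools into quartiles by how
# much their gap narrowed and award the illustrative 5/10/15/20 points.

def qualifies_scenario_a(s):
    """Both groups improve year on year and disadvantaged pupils improve faster."""
    dis_change = s["dis_2008"] - s["dis_2007"]
    peer_change = s["peer_2008"] - s["peer_2007"]
    return dis_change > 0 and peer_change > 0 and dis_change > peer_change

def gap_narrowing(s):
    """Percentage points by which the disadvantaged/peer gap closed between years."""
    return (s["peer_2007"] - s["dis_2007"]) - (s["peer_2008"] - s["dis_2008"])

def scenario_a_credits(schools, points=(5, 10, 15, 20)):
    """Quartile credit for qualifying schools (the split is meaningful with many schools)."""
    qualifying = sorted((s for s in schools if qualifies_scenario_a(s)), key=gap_narrowing)
    n = len(qualifying)
    return {s["name"]: points[min(3, rank * 4 // n)] for rank, s in enumerate(qualifying)}

# Schools 1 and 2 from the Scenario A table above:
schools = [
    {"name": "School 1", "dis_2007": 45, "dis_2008": 47, "peer_2007": 76, "peer_2008": 77},
    {"name": "School 2", "dis_2007": 45, "dis_2008": 47, "peer_2007": 76, "peer_2008": 79},
]
print(scenario_a_credits(schools))  # {'School 1': 5}; School 2 does not qualify
```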


SCENARIO B: the attainment of disadvantaged pupils starts below that of their peers

Credit will be given where:

➔ the attainment of disadvantaged pupils is above the level of the previous year; and
➔ the rate of improvement of disadvantaged pupils in the school is greater than the national rate of improvement for disadvantaged pupils; and
➔ the attainment of their peers falls slightly but remains above the national average.

While the ultimate outcome of this scenario is that the gap has narrowed, it has done so in an undesirable way, so the credit available will be limited to a maximum of four points (see below).

% attaining Level 4 or above in both English and mathematics at KS2

              Disadvantaged pupils        Peer pupils
              2007        2008            2007                 2008                 Credit?
National      51%         54%             75%                  76%
School 3      45%         49%             78%                  77%                  Yes
School 4      45%         46%             77%                  76%                  No
School 5      45%         45%             77%                  77%                  No
School 6      45%         44%             77%                  76%                  No
School 7      45%         47%             77%                  73%                  No
School 8      45%         49%             Insufficient pupils  Insufficient pupils  Yes

CREDITS: In this scenario the attainment gap would have narrowed, but in an undesirable way, so it would be inappropriate to follow the same approach as Scenario A and give credit based on narrowing the gap. Instead, this scenario gives credit based on the change from last year in the proportion of disadvantaged pupils who attained Level 4 or above. Specifically, schools would only get credit where the rate of change of their disadvantaged pupils’ attainment is greater than the national rate of change for disadvantaged pupils.

School 3 and School 4 illustrate this scenario: in both schools the attainment gap narrows partly because the attainment of the peer pupils fell. The top line of the table gives the national average attainment and shows that the attainment of disadvantaged pupils nationally increased from 51% to 54%, so the national rate of change is a three percentage point increase. By comparison, the rate of change of disadvantaged pupils’ attainment in School 3 is four percentage points (up from 45% to 49%); as that is greater than the national rate of change, School 3 would get a credit. Conversely, the rate of change of disadvantaged pupils’ attainment in School 4 is only one percentage point (up from 45% to 46%); as that is below the national rate of change, no credit would be awarded to School 4.

As illustrated by School 8, this scenario also covers the circumstance where there is a sufficient number of disadvantaged pupils but an insufficient number of peers to make a comparison. Here it would be inappropriate to give credit based on narrowing the gap, so again the change from last year in the proportion of disadvantaged pupils who attained Level 4 or above is used. The rate of change of disadvantaged pupils’ attainment in School 8 is four percentage points (up from 45% to 49%); as that is greater than the national rate of change of three percentage points, School 8 would get a credit.

To calculate the amount of credit, the schools will be split into quartiles: the bottom quartile (those schools where the rate of change in the attainment of disadvantaged pupils is just above the national average) will have their overall score increased by one point; the next quartile will have their overall score increased by two points; up to the top quartile (those schools where the rate of change in disadvantaged pupils’ attainment was the fastest), which will have their score increased by four points.
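The differing Scenario B rule – a school only gains credit where the change in its disadvantaged pupils’ attainment exceeds the national rate of change – can be sketched in the same illustrative way (again, this is not the Department’s algorithm; the data structure is an assumption).

```python
# Illustrative sketch of the Scenario B qualifying rule described above.
# Qualifying schools would then be split into quartiles nationally and awarded
# the illustrative 1/2/3/4 points.

NATIONAL_CHANGE = 54 - 51  # national disadvantaged attainment rose from 51% to 54%

def qualifies_scenario_b(school):
    """True where disadvantaged pupils improved faster than the national rate."""
    return (school["dis_2008"] - school["dis_2007"]) > NATIONAL_CHANGE

# Schools 3, 4 and 8 from the Scenario B table:
schools = [
    {"name": "School 3", "dis_2007": 45, "dis_2008": 49},
    {"name": "School 4", "dis_2007": 45, "dis_2008": 46},
    {"name": "School 8", "dis_2007": 45, "dis_2008": 49},
]
for s in schools:
    print(s["name"], "gains credit" if qualifies_scenario_b(s) else "no credit")
# School 3 gains credit; School 4 no credit; School 8 gains credit
```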


SCENARIO C: Where the attainment of disadvantaged pupils is already above that of their peers and above the national average, the school will not be penalised for increasing the attainment of the more advantaged pupils, provided the attainment of the disadvantaged pupils increases too and remains above that of their peers.

Credit will be given where:

➔ the attainment of both disadvantaged pupils and their peers is above the level of the previous year’s results; and
➔ the attainment of disadvantaged pupils in the school remains above the national average level of attainment for disadvantaged pupils; and
➔ the attainment of disadvantaged pupils remains above the attainment of their peers in the school.

This scenario is aimed at capturing schools whose disadvantaged pupils are already high performing; a maximum of 10 credit points will be available.

% attaining Level 4 or above in both English and mathematics at KS2

              Disadvantaged pupils        Peer pupils
              2007        2008            2007                 2008                 Credit?
National      51%         54%             75%                  76%
School 9      56%         57%             45%                  48%                  Yes
School 10     56%         57%             54%                  58%                  No
School 11     56%         56%             45%                  48%                  No
School 12     56%         55%             45%                  48%                  No
School 13     56%         57%             45%                  44%                  No
School 14     56%         57%             Insufficient pupils  Insufficient pupils  Yes

Under this scenario the attainment of disadvantaged pupils already exceeds that of their peers, so the credit again is not based on the gap. Following the same principles as Scenario B, this scenario gives credit based on the change from last year in the proportion of disadvantaged pupils who attained Level 4 or above. As the attainment of disadvantaged pupils is above the national average, any improvement will gain credit.

In the table, School 10 does not receive a credit. In 2007 the attainment of disadvantaged pupils was above that of their peers, and while the attainment of both groups increased, the attainment of the peer group was above that of the disadvantaged pupils in 2008.

The schools will be split into quartiles: the bottom quartile (those schools where the rate of change in the attainment of disadvantaged pupils was smallest) will have their overall score increased by two and a half points; the next quartile will have their overall score increased by five points; up to the top quartile (those schools where the rate of change in disadvantaged pupils’ attainment was the fastest), which will have their score increased by 10 points.

SCENARIO D: Where the attainment of disadvantaged pupils is already above that of their peers but below the national average.

Credit will be given where:

➔ the attainment of both disadvantaged pupils and their peers is above the level of the previous year’s results; and

➔ the rate of improvement of disadvantaged pupils in the school is greater than the national rate of improvement for disadvantaged pupils; and

➔ the attainment of disadvantaged pupils remains above the attainment of their peers in the school.

This scenario is aimed at rewarding schools with lower than average attainment where disadvantaged pupils’ attainment has shown strong improvement; a maximum of 10 credit points will be available.

% attaining Level 4 or above in both English and mathematics at KS2

              Disadvantaged pupils        Peer pupils
              2007        2008            2007                 2008                 Credit?
National      51%         54%             75%                  76%
School 15     36%         39%             35%                  40%                  No
School 16     36%         37%             34%                  36%                  No
School 17     35%         39%             32%                  35%                  Yes
School 18     35%         39%             Insufficient pupils  Insufficient pupils  Yes

Exactly as for Scenario C, the attainment of disadvantaged pupils already exceeds that of their peers, and credit is based on the change from last year in the proportion of disadvantaged pupils who attained Level 4 or above. The calculation of the credit is the same as for Scenario B but with higher credit values.

To calculate the amount of credit, the schools will be split into quartiles: the bottom quartile (those schools where the rate of change in the attainment of disadvantaged pupils is just above the national average) will have their overall score increased by two and a half points; the next quartile will have their overall score increased by five points; up to the top quartile (those schools where the rate of change in disadvantaged pupils’ attainment was the fastest), which will have their score increased by 10 points.


Special educational needs and disability

120.

With 20% of all children identified as having a special educational need (SEN), all mainstream schools will be concerned with achievement for this group, particularly as outcomes are well below those of children without SEN.  Traditionally, children with SEN and disabilities have suffered from low expectations and their parents are less likely to be satisfied than other parents with their engagement with schools.  We are committed to the School Report Card containing a measure which reflects schools’ success in securing positive outcomes for children with SEN, as well as reflecting the views of pupils with SEN and their parents.

121.

One approach would be to separate out the results for children with SEN, and disabled children in future, and publish a score for each performance category of the School Report Card. However, we would not want a measure that incentivised schools to over- or under-identify children with SEN as a means of influencing their scores. And the current variation between different localities in the assessment and statementing of pupils with SEN makes comparisons based on attainment and progress more complicated. This points towards using measures which focus on the progress of the lowest achievers, the majority of whom are identified as having SEN; considering how the Narrowing Gaps in Pupil Performance category could be used; and separating out satisfaction results for pupils with SEN and disability and for their parents, and comparing them to those of other pupils and parents. We will consider the most appropriate ways to reflect schools’ outcomes for this group of children in the School Report Card during the first year of piloting, starting from September 2009, and test the options in the second year of piloting, starting from September 2010.


Partnership working

122.

A majority of respondents to our earlier consultation agreed that the School Report Card should include information about the school’s contribution to its local partnerships. Those who agreed felt that Every Child Matters and 14-19 reforms had raised the profile of partnership working, which is integral to the vision of the White Paper. Inclusion of a reference to partnership working in the School Report Card would make this an aspect of their work for which schools would be held accountable and would ensure that they give high priority to improving links with parents, employers and the local community, other agencies and other schools, colleges and other providers.

123.

As part of its school inspection framework, Ofsted will introduce a revised partnership grade from September 2009. The partnership grade will assess the effectiveness of a school’s partnership working in promoting better outcomes for its pupils – giving schools increased recognition for their partnership work.

124.

We are committed to recognising partnership working as part of the proposed new School Report Card. As the detail develops, we will consult further on whether this should be through a separate indicator for partnership working and whether or not this should be based on the Ofsted judgement on the impact of partnership working. We will also explore in the pilot phase how the School Report Card (or elements of the School Report Card) could be aggregated to recognise outcomes for formal partnerships.


Coverage

125.

Of the 285 respondents on this issue, 67% agreed that the School Report Card should cover all maintained schools, including special schools, Pupil Referral Units and alternative provision in due course. Our intention is to introduce School Report Cards in the first instance for all mainstream primary and secondary schools (including Academies), and then to use the lessons learned to refine and develop School Report Cards for special schools, Pupil Referral Units and alternative provision.

126.

It will be important to ensure that the School Report Card can properly reflect the quality of a school’s early years or sixth form provision, where it exists. We will work over the pilot phase to align the School Report Card and the Framework for Excellence (FfE) where appropriate, and to consider how the School Report Card can reflect results from the FfE for school sixth forms.

127.

Local authorities set targets to increase the number of children achieving a good level of development in the Early Years Foundation Stage and to narrow the gap between the lowest achieving 20 percent of pupils and the rest. But there are no school-level targets for early years’ provision. For those schools with early years provision, we will consider how to represent the effectiveness of that provision in the School Report Card during the pilot phase.


Publication

128.

Our earlier consultation asked whether respondents agreed that the School Report Card should be published annually, and whether the results of Ofsted inspections should be incorporated into School Report Cards as soon as they are available.

129.

Most (70%) of the 284 respondents to the first question agreed that the School Report Card should be published annually; their comments indicated that this would be often enough to inform parents promptly but not so frequent as to make it unmanageable. There was a range of views about when the School Report Card should be published, and whether there should be a single national publication date. Several respondents (8%) felt that the School Report Card should be updated as necessary during the year, while others (5%) thought that this would place an undue burden on schools.

130.

The great majority (75%) of the respondents to the second question agreed that the results of Ofsted inspections should be incorporated into School Report Cards as soon as they were available. It was noted that this would be essential as Ofsted inspection reports were public documents and might conflict with the information in the School Report Card if the latter were not updated. Those who disagreed included two professional associations opposed to the inclusion of information from Ofsted inspection reports in the School Report Card.

131.

We have noted the outcomes of the earlier consultation and reaffirm our proposal that the School Report Card should be published at least annually, and that the results of more recent Ofsted inspections should be incorporated as soon as they are available. Because it makes most sense to build on existing data collection processes developed for the Achievement and Attainment Tables, the School Report Card will be compiled by the Department for Children, Schools and Families. To enable schools to publish their own School Report Card, we propose to provide them with an electronic copy which they can publish locally – alongside national publication of all schools’ Report Cards by the Department.

132.

We will delay a decision about the interim updating of schools’ scores and ratings until the pilot phase has been completed, when we will have a better understanding of the implications of data availability and production times.

133.

The School Report Card will only be properly able to reflect the full range of schools’ responsibilities – beyond the academic – if appropriate data is available. The response to Ofsted’s and the Department for Children, Schools and Families’ joint consultation on Indicators of a school’s contribution to wellbeing sets out a range of data that schools should be collecting and using to evaluate their contribution to pupil wellbeing. We anticipate that the School Report Card will draw on this data, including the views of parents and pupils gathered through perception surveys, although we would only expect to use some of the indicators identified in the consultation. To ensure that the School Report Card can reflect this important information, we intend to legislate to ensure that schools have a duty to report such data.

Changes to other reporting of school information

134.

In our earlier consultation, respondents were asked whether they agreed that the requirement on schools to complete the School Profile should be ended. Over four fifths (84%) agreed. With the introduction of the School Report Card, the legal requirement on schools to produce a School Profile will be removed.

135.

The School Report Card will also supersede the Achievement and Attainment Tables as the central source of externally verified, objective information on the outcomes achieved by schools. That will not mean a reduction in the information publicly available about schools’ performance. All the detailed performance data used to prepare the School Report Card will continue to be published, so that users can understand how the School Report Card has been prepared and so that they can see a school’s outcomes in specific areas of interest. Where further data are collected by the Government, this will also, where appropriate, be made available to the public; and, in line with the Government’s wider commitment to making data on all public services available, we will explore how we can make it easy for parents to access data that reflects their individual interests and concerns.

136.

The School Report Card, however, will be the principal tool used for accountability, ensuring that a school will be held to account for its overall performance across the full range of its responsibilities.


Next steps

137.

We will engage with all stakeholders through two years of piloting beginning in September 2009. At the core of the pilot will be a substantial sample of schools which will work with us in ensuring the underlying systems produce timely, accurate data and will contribute to the development of the information that underpins the School Report Card and how it is presented. That in turn, will provide tangible outputs which will be used to engage other stakeholders. The results of the pilot will be published at regular intervals throughout the two years.

138.

In the first year of piloting we will

●● develop the performance indicators which might be used in the Pupil Attainment, Pupil Progress, and Narrowing Gaps in Pupil Performance categories, and explore the weightings to be used to produce a score for each of those categories
●● test the robustness of the 16-19 progression measures for use in the Pupil Wellbeing category
●● consider which background information about a school should be included on the Report Card and the possibility of a free format field to be completed by the school to demonstrate unique aspects – e.g. the school ethos statement
●● test options for contextualising the School Report Card information
●● begin to develop design features – including how the top level information and underpinning data will be presented on the School Report Card, and how the School Report Card will link to other sources of information
●● continue to consider the need for a single overall grade and how it might be constructed

139.

In the second year of piloting we will

●● build on lessons learnt in the first year – agreeing and refining methodologies, and improving systems for data collection
●● when survey data becomes available, develop the indicators which might be used for Pupil Wellbeing, and for Parents’ Perceptions and Pupils’ Perceptions – including exploring the weightings to produce a score for each of those categories
●● build the public website, based on lessons learnt in the first year
●● continue to consider the need for a single overall grade and (with a full dataset now available) test how it might be constructed
●● test options for reflecting a school’s work with children with Special Educational Needs and disability
●● pilot a means of showing a school’s performance over time
●● confirm our arrangements for publication of the School Report Card alongside the Framework for Excellence


Annex

Standardising the indicators

1.

Having identified the combinations of indicators we are minded to pilot, the next step of the process is to combine the indicators within each combination to form an overall category score. The statistical modelling to examine how indicators related to Ofsted’s judgements on school standards demonstrated that combining different “raw” attainment results to derive a category score is undesirable because the indicators are measured on different scales. The problem is illustrated below using an example basket of indicators for a secondary school.

                                                                School A   School B   School C   School D
Percentage of pupils achieving 5 or more GCSEs at grades
A*-C (or equivalent) including English and mathematics                99         35         21         47
Percentage of pupils achieving at least one entry level
qualification                                                        100         98         97         99
Percentage of pupils achieving Functional Skills at Level 2           99         35         24         50
Average point score per pupil (capped)                               410        300        270        320
Percentage of pupils achieving 2 or more GCSEs at grades
A*-C (or the equivalent) in science                                  100         36          2         50
Percentage of pupils achieving a GCSE at grade A*-G (or the
equivalent) in a modern foreign language                              90         27         80         60
Overall score                                                        898        531        494        626

2.

The simplest and most accessible way to create the overall attainment category score is to add together the raw scores for each indicator, which is how the “overall score” has been generated in the table above.

3.

The table clearly shows, however, that the capped average point score (APS) is on a different scale to the other indicators. The five other indicators are percentages, so have a maximum of one hundred, whereas the capped APS has a maximum of 464 (the APS is capped at the “best 8” qualifications; if all of these were A*, which has a value of 58 points, this gives a maximum of 8 x 58 = 464 points). The APS dominates the overall score: even though all the measures are equally weighted, the capped APS has a disproportionately large impact on the overall score. For example, for School B, School C and School D, the APS accounts for over half of their overall score.
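As a quick, purely illustrative check of this point, the short sketch below recomputes the share of the overall score contributed by the capped APS for School B and School C, using the values from the example basket above (the dictionary keys are shorthand labels, not official indicator names).

```python
# Illustrative check: when raw indicator values are simply added, the capped APS
# dominates the overall score. Values are taken from the example basket table.

schools = {
    "School B": {"5+ A*-C incl E&M": 35, "entry level": 98, "functional skills": 35,
                 "capped APS": 300, "science": 36, "MFL": 27},
    "School C": {"5+ A*-C incl E&M": 21, "entry level": 97, "functional skills": 24,
                 "capped APS": 270, "science": 2, "MFL": 80},
}
for name, indicators in schools.items():
    total = sum(indicators.values())
    share = indicators["capped APS"] / total
    print(f"{name}: overall score {total}, capped APS share {share:.0%}")
# School B: overall score 531, capped APS share 56%
# School C: overall score 494, capped APS share 55%
```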

4.

The second problem centres on indicators such as the percentage achieving at least one entry level qualification (ELQ). Most pupils in most secondary schools achieve at least one ELQ – the national average in 2008 was 98.3% and nine out of ten schools achieved within the range 95.7% to 100% – this means that most schools will broadly score 100. The distribution is illustrated in the chart below:

[Chart: distribution across schools of the percentage of pupils achieving any qualification, showing frequency against percentage (approximately 85% to 100%). Mean = 98.81, standard deviation = 1.64, N = 3,092.]

5.

Given the tightness of the range, small differences in indicators such as the percentage achieving at least one ELQ can be significant, but such differences will have almost no impact on the overall score. A second problem with these indicators is that, like the capped APS, they have a disproportionately large effect on the overall score. For example, in the example basket above, the APS and ELQ indicators account for three-quarters of the overall score for School B and School C. This substantially lessens the effect of the other indicators in the combination.

6.

The issue of scale could be overcome by converting the capped APS into what would in effect be a percentage by dividing the APS by the maximum possible value of 464. Alternatively the other indicators could be assigned weights, say multiplying them by at least four, to put them broadly on the same scale as the APS. Neither of these two approaches, however, addresses the issues caused by having some indicators with a very compact distribution. The approach that overcomes both the issues of scale and distribution is to “standardise” all the indicators. This common statistical approach makes all the indicators comparable by putting them on the same basis.

7.

In its simplest form, standardisation involves subtracting the average (mean) and then dividing by a measure of the variation (standard deviation). Simple standardisation, however, would leave many indicators for many schools negative which we believe is presentationally unacceptable. To overcome this, the indicators are additionally transformed by multiplying by 10 and adding 100 (which means the indicators have a mean of 100 and standard deviation of 10). Schools will be familiar with this approach which is used to calculate age standardised scores.
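Written as a formula (our notation, not the Department’s), for an indicator value x with national mean m and standard deviation s:

\[
\text{standardised value} = \frac{x - m}{s} \times 10 + 100
\]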

8.

Three examples below demonstrate how standardisation works:

●● percentage achieving 5+ A*-C including English and mathematics, which for illustrative purposes has a mean of 47 and a standard deviation of 20
●● capped APS which, again for illustrative purposes, has a mean of 310 and standard deviation of 40
●● percentage achieving one ELQ, with a mean of 98.8 and standard deviation of 1.5.

                                                                   School X    School Y
Percentage of pupils achieving 5 or more GCSEs at grades
A*-C (or the equivalent) including English and mathematics              57          25
Subtract mean of 47                                                     10         -22
Divide by standard deviation of 20                                     0.5        -1.1
Multiply by 10                                                           5         -11
Add 100                                                                105          89
Standardised percentage achieving 5 or more GCSEs at grades
A*-C (or the equivalent) including English and mathematics             105          89

Average point score per pupil (capped)                                 318         278
Subtract mean of 310                                                     8         -32
Divide by standard deviation of 40                                     0.2        -0.8
Multiply by 10                                                           2          -8
Add 100                                                                102          92
Standardised average point score                                       102          92

Percentage achieving at least one Entry Level qualification            100          97
Subtract mean of 98.8                                                  1.2        -1.8
Divide by standard deviation of 1.5                                    0.8        -1.2
Multiply by 10                                                           8         -12
Add 100                                                                108          88
Standardised percentage achieving at least one Entry Level
qualification                                                          108          88
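The calculation in the table can be reproduced in a few lines of code (illustrative only; the means and standard deviations are the illustrative values from paragraph 8).

```python
# Illustrative reproduction of the standardisation worked through in the table above.

def standardise(value, mean, sd):
    """Subtract the mean, divide by the standard deviation, multiply by 10, add 100."""
    return (value - mean) / sd * 10 + 100

print(round(standardise(57, 47, 20), 1), round(standardise(25, 47, 20), 1))        # 105.0 89.0  (5+ A*-C)
print(round(standardise(318, 310, 40), 1), round(standardise(278, 310, 40), 1))    # 102.0 92.0  (capped APS)
print(round(standardise(100, 98.8, 1.5), 1), round(standardise(97, 98.8, 1.5), 1)) # 108.0 88.0  (ELQ)
```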


9.

The key element to note about standardisation is how it has affected the difference between the results of School X and School Y for the latter two indicators:

●● On the raw capped APS, School X has 318 points and School Y has 278 points, so School X is 40 points ahead (which in this example is equivalent to one “old” standard deviation). When standardised, the APS of School X is 102, which is 10 points higher than School Y (whose standardised APS is 92) – but the difference is still one “new” standard deviation.
●● On ELQ, School X has 100% compared to School Y’s 97%, so School X is three percentage points ahead (which in this example is two “old” standard deviations). When standardised, the ELQ measure for School X is 108, whereas it is 88 for School Y, which puts School X 20 points ahead of School Y. Again, the difference is still two “new” standard deviations.

10.

This example demonstrates the power of standardisation. Using raw results alone suggested that the achievement of School X and School Y on the ELQ indicator was broadly the same. This simple comparison hides the fact that the range of achievement for this indicator is very narrow and hence small differences can be significant. This was revealed through standardisation which had the effect of accentuating the difference.

11.

Conversely, with the capped APS the difference of 40 suggested the gap was substantial, but given the different order of magnitude standardisation had the effect of diminishing the difference.

12.

While recognising that standardisation adds complexity, it does bring every single measure onto the same basis and so enables robust comparison of the indicators. The result of standardisation also has the feel of a contextual value added measure, in that anything above 100 is above the national average and, conversely, a standardised value below 100 is below the national average. There are three further benefits, which are explained below.

13.

First, the standardised indicators within the combination can be added together to calculate the overall category score without the drawback of being on a different scale. If there were seven indicators in a particular combination, the average would be around 700. If there were 11 indicators in the basket then the average would be around 1,100. To ensure comparability across baskets we would divide by the number of indicators in the combination to produce an overall score with an average of 100 which would maintain the feel of a CVA score.


14.

Taking the indicators from the table on the previous page, the table below gives simple worked examples of how an overall category score for attainment could be calculated using either two or three indicators:

                                                                   School X    School Y
Standardised percentage achieving 5 or more GCSEs at grades
A*-C including English and mathematics                                 105          89
Standardised average point score                                       102          92
Overall score (with average 200)                                       207         181
Using two indicators, so divide by two to give
Overall score (with average 100)                                     103.5        90.5

Standardised percentage achieving 5 or more GCSEs at grades
A*-C including English and mathematics                                 105          89
Standardised average point score                                       102          92
Standardised percentage achieving at least one Entry Level
qualification                                                          108          88
Overall score (with average 300)                                       315         269
Using three indicators, so divide by three to give
Overall score (with average 100)                                       105        89.7
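The same worked figures can be reproduced directly (illustrative only): the standardised indicators are summed and divided by the number of indicators so that the overall category score keeps an average of around 100.

```python
# Illustrative reproduction of the worked examples in the table above.

def category_score(standardised_indicators):
    """Average the standardised indicators to keep the overall score on the 100-mean scale."""
    return sum(standardised_indicators) / len(standardised_indicators)

print(round(category_score([105, 102]), 1))       # School X, two indicators   -> 103.5
print(round(category_score([89, 92]), 1))         # School Y, two indicators   -> 90.5
print(round(category_score([105, 102, 108]), 1))  # School X, three indicators -> 105.0
print(round(category_score([89, 92, 88]), 1))     # School Y, three indicators -> 89.7
```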

15.

The second advantage is that the standardised indicators can be weighted together, if necessary, with weights that have an educational or policy meaning. Where indicators are on different scales, one facet of a weight has to make the indicators broadly comparable while a second facet signals the educational or policy importance; these two facets could in some instances work against each other and in others magnify each other, making the rationale for any particular weight obscure. As standardisation makes the indicators comparable, any subsequent weights to reflect an educational or policy priority will be transparent.

16.

The third advantage is that in subsequent years the mean and standard deviations from the first year could be retained. Keeping the values from the first year constant creates a baseline, so in subsequent years the overall scores would reflect year-on-year improvement.


You can download this publication or order copies online at www.teachernet.gov.uk/publications (search using ref: DCSF-00664-2009).

Copies of this publication can be obtained from:
DCSF Publications, PO Box 5050, Sherwood Park, Annesley, Nottingham NG15 0DJ
Tel: 0845 60 222 60  Fax: 0845 60 333 60  Textphone: 0845 60 555 60
Please quote ref: 00664-2009DOM-EN

ISBN: 978-1-84775-472-1
D16(8256)/0609/33

© Crown copyright 2009
Extracts from this document may be reproduced for non-commercial research, education or training purposes on the condition that the source is acknowledged. For any other use please contact [email protected]
