Monitoring and evaluating curriculum implementation:

Final evaluation report on the implementation of the New Zealand Curriculum 2008-2009

Publication Details

This report presents findings from a national evaluation of the implementation of The New Zealand Curriculum. The project sought to establish a national picture of implementation progress in English-medium schools in the first two years following the curriculum's launch in November 2007.

Author(s): Dr Claire Sinnema, The University of Auckland. Report prepared for the Ministry of Education.

Date Published: March 2011


Section 2: Methodology

Data for this evaluation were gathered through a range of methods, including web-based and paper surveys (each administered at two time points), and from a series of key informants (focus groups and email interviews). A mixed methods approach (Greene, Benjamin, & Goodyear, 2001) was used in response to the scope of the evaluation, the complexity of curriculum implementation processes, and the desire to both describe and explain implementation progress. The surveys included both quantitative and qualitative items, and the key informant sessions focused on the qualitative questions. The various methods were integrated and allowed to interact (Caracelli & Greene, 1997) using a fully mixed concurrent dominant status design based on Leech and Onwuegbuzie's (2007) typology of mixed methods designs. For example, the design of the first web survey was informed by early focus groups and email interviews. Similarly, guiding questions for subsequent focus groups were informed by findings from the surveys, and survey data were used as prompts for rich descriptions and examples in some focus groups.

A utilisation-focused evaluation approach (Patton, 1997) was used, in which all phases of the evaluation, from design through to reporting, focused on the intended use of the evaluation findings by the intended users. Those users, including policy makers at the Ministry of Education and a range of others (teacher educators, school leaders, curriculum advisory groups, and school support providers), were involved in the development of the research instruments and were informed of interim findings during the evaluation process. The Program Evaluation Standards (The Joint Committee on Standards for Educational Evaluation, 2007) for feasibility, propriety, accuracy, and utility were incorporated throughout the evaluation.

Research questions

The main research questions focused on gaining a national picture of implementation progress:

Research Question 1: What progress was made in the first two years of implementation of The New Zealand Curriculum?
Research Question 2: What factors explain the degree of progress in implementing The New Zealand Curriculum?

Sub-questions

The sub-questions, which are referred to explicitly in the findings section of this report, were as follows:

  1. To what extent do schools and teachers feel confident about, or challenged by, the implementation of The New Zealand Curriculum?
  2. What progress is being made in schools and by leaders in implementing school-wide curriculum design?
  3. What progress is being made by schools and leaders in implementing the purposes and key understandings of The New Zealand Curriculum?
  4. How have the materials, resources and programmes supported schools and teachers to make changes?
  5. What alternative or further supports do schools and teachers feel they need to effectively implement The New Zealand Curriculum?

The theoretical framework

The evaluation was guided by a theoretical framework comprising four elements: support encounters, receptivity, understanding and practice (SERUP), as shown in Figure 2.

Figure 2: SERUP Framework

Support encounters

Data were gathered on the extent to which educators had encountered various kinds of support (including people within and beyond their schools, print publications and online material) and how valuable and high quality they perceived those supports to be.

Receptivity

Data were gathered about the extent to which educators value the curriculum, their confidence in implementing it in their own context, and the degree to which they perceive implementation to be feasible.

Understanding

Attention was also given to how educators understand key elements of the new curriculum. The New Zealand Curriculum places, for instance, significantly greater emphasis than its predecessor on effective pedagogy, Teaching as Inquiry and the development of key competencies. It also affords schools substantially more flexibility through its emphasis on locally designed curricula, and sets out principles and values that are to be reflected in teaching and learning programmes. The evaluation tools not only asked practitioners to report how much, and what, they were implementing in relation to these aspects, but also examined the curriculum understandings underlying those reports. Of key interest was the extent to which those understandings aligned with curriculum intentions as expressed by experts who had been closely involved in designing the curriculum.

Practice

An overriding rationale for the curriculum change was the pursuit of improved teaching and learning. It was critical, therefore, to also gather data on teaching practices in response to the new curriculum. Of particular interest was the extent to which practices that reflect the intentions of the new curriculum were becoming evident in both leaders' and teachers' work. Practice items were not designed to measure adherence to the use of particular strategies, sequences, materials, or duration stipulations, as is often the case in evaluations concerned with "fidelity of implementation" (O'Donnell, 2008). Rather, they were designed to measure the nature of the practices teachers were emphasising in their work with students: more generic practices deemed to be indicators of curriculum intentions being realised.

Samples and data sources

More than 5000 educators took part in the series of evaluation activities between the beginning of 2008 and the end of 2009, as summarised in Figure 3.

Figure 3: Overview of data sources and samples

During the early months of 2008, focus groups and email interviews with curriculum experts were held to inform the design of the evaluation. Ministry of Education officials helped to identify the curriculum experts, who were mostly members of curriculum advisory and writing groups. Fifty-eight experts responded to questions about critical understandings, the shifts required by the new curriculum, and potential misunderstandings. Their responses were used to clarify key areas of focus for the evaluation.

In August 2008, a comprehensive web survey was carried out. Principals from a stratified random sample of 1210 schools were sent an email invitation to take part in the web survey, and to forward the invitation to all of their teachers. There were 579 respondents to that survey from 230 (19%) of the 1210 schools invited to participate. The 579 respondents represented 13% of the teachers in the participating schools. The original evaluation design was to carry out three web surveys. The low response rate, however, prompted a decision to design two additional paper surveys (to be administered towards the end of 2008 and 2009) and to administer just one further web survey (towards the end of 2009).

In October 2008, principals from a stratified random sample of 593 schools (no schools from the web survey sample were included) were sent an invitation by mail inviting them, and all of their teachers, to complete the enclosed paper surveys. The paper survey was a short, two-sided, single-page instrument comprising 84 curriculum implementation items and five demographic items. Over the following month there were 2578 responses from teachers and principals in 221 (37%) of the 593 schools invited to participate. The 2578 respondents to the 2008 paper survey represented 41% of the teachers in the participating schools.

In October 2009, the second administration of both paper and web surveys took place. The second web survey focused on open-ended understanding items, and duplicated the series of receptivity items from the paper survey. Email invitations were sent to principals in 1191 schools (the same sample as in 2008, excluding schools that had requested not to be invited again). Once again, principals were asked to forward the email invitation to all teachers in their school. There were 604 responses from educators in 345 (29%) of the invited schools. The 604 responses to the 2009 web survey represented 8% of the teachers in the participating schools.

The 2009 paper survey was identical to the first paper survey, with the addition of two support encounter items and one support quality item. It was sent to the principals of the same 593 schools with a request for them to again extend the invitation to all teachers in their school. In the following month, responses were received from 1800 educators from 176 schools. The 1800 responses represented 36% of teachers in the participating schools.

The surveys were complemented by a series of 26 focus groups involving 247 participants from across a range of school types and roles.

Sample representativeness

The random samples of invited schools for both web and paper surveys were drawn from a stratified sampling frame constructed around units of school type, region, and decile (see footnote 1). The achieved samples (actual respondents) for each of the surveys were compared to the total population of teachers (in English-medium state and state-integrated schools), and these comparisons are presented below.
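
A brief sketch can illustrate the stratification step. The Python fragment below is illustrative only: the column names (school_type, region, decile_band) and the sampling fraction are assumptions for the example, and the evaluation's actual frame was constructed from Ministry of Education school records rather than with this code.

    import pandas as pd

    def draw_stratified_sample(schools: pd.DataFrame,
                               fraction: float,
                               seed: int = 42) -> pd.DataFrame:
        # One stratum per (school type, region, decile band) combination;
        # the same sampling fraction is applied within every stratum, so
        # the invited sample mirrors the population on all three units.
        return (schools
                .groupby(["school_type", "region", "decile_band"])
                .sample(frac=fraction, random_state=seed))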

School type

Respondents to all surveys were similar to the teacher population in terms of the school types they work in (composite, primary, secondary and special). The match between participating teachers and the population of teachers in English-medium schools was close for both paper surveys, and slightly less close for the web surveys, as shown in Figure 4.

Figure 4: Comparison of survey achieved samples and teacher population by school type

School region

Similarly, for all of the surveys, the proportion of respondents from each of the six regions (organised to match the regional delivery of school support services) closely matched the regional distribution of the total population of teachers.

Figure 5: Comparison of survey achieved samples and teacher population by region

School decile

The socio-economic status-based decile rating of schools was also a consideration in the sampling frame. For all surveys, a slightly higher proportion of teachers from mid (4–7) and high (8–10) decile schools responded relative to the proportion of teachers in mid and high decile schools in the population. The proportion of teachers in low (1–3) decile schools was, therefore, slightly lower in the achieved samples than in the population. The match was closest in the 2009 paper survey.

Figure 6: Comparison of survey achieved samples and teacher population by decile

Instruments

Paper surveys

The paper survey was designed to capture key aspects of the theoretical framework in a short, easy-to-complete format on two sides of one A4 sheet (see Appendix 1). It repeated some items from the first web survey for the purposes of comparison, and re-framed other aspects of the framework, with an emphasis on items relating to practice. The survey instrument was identical at both time points (with the exception of two additional items in 2009), and provided the key source of comparison in relation to the question of progress over the first two years of implementation.

The paper survey asked respondents to indicate: demographic information (school type, school region, school level, socio-economic status rating); the frequency with which various implementation supports (including people, web and print) had been encountered (8 items on a 4-point Likert scale); the quality of implementation support (4 items on a 6-point Semantic Differential scale); their general views of the curriculum and of the degree of change required (2 items on a 6-point Semantic Differential scale); the extent to which a range of key curriculum-related practices were evident in their practice (22 items on a 4-point Likert scale); and the extent to which they had made changes to a range of day-to-day practices (7 items on a 6-point Likert scale).

The 6-point change scale drew on a number of sources, including Hall and Hord's (2006) Concerns-Based Adoption Model. They identified, verified, and operationally defined eight different levels of use of a new innovation. These levels are used to determine the extent and quality of change being implemented in classrooms (Hall & Hord, 1987, 2006; Hall & Loucks, 1975, 1977, 1981). They range from non-use, through orientation, preparation, mechanical, routine, refinement and integration, and finally to renewal (a stage in which the use of an innovation is re-evaluated and modifications/improvements are sought). The change scale also drew on a model of change developed through studies of intentional change in relation to addictive behaviours (J. O. Prochaska & DiClemente, 1983; J. O. Prochaska, DiClemente, & Norcross, 1992). The model, described by Prochaska and others as 'transtheoretical', outlines six stages of intentional change: pre-contemplation, contemplation, preparation, action, maintenance and termination. The model, which has been tested widely in studies of individuals and addictive behaviours, has also been found to hold for organisations (J. M. Prochaska, 2000) and has been used in education settings (Evers, Prochaska, Van Marter, Johnson, & Prochaska, 2007). It was adapted for the purposes of this evaluation to outline a theory of curriculum implementation (see Figure 7).

Figure 7: Theory of curriculum implementation

Web survey 2009

The second web survey (administered in 2009) repeated some items from the first web survey and the first paper survey for the purposes of mode comparison, and re-framed other aspects of the framework, with an emphasis on items relating to understanding. It asked respondents to indicate: demographic information; the extent to which a lack of various supports was considered a barrier to curriculum implementation (9 items on a 6-point Likert scale); and their receptiveness to, and views of, the curriculum (10 items on a 6-point Semantic Differential scale). Respondents were also asked to indicate which of the curriculum principles (Treaty of Waitangi, Cultural diversity, Inclusion, Learning to learn, Community engagement, Coherence, Future focus) are emphasised most and least in the school's curriculum, and to give examples of those principles in practice. The 2009 web survey also included a series of open-ended understanding items and a space for general comment. The open-ended understanding items were designed to elicit examples from practice that indicate the key understandings respondents bring to curriculum elements. These items used projective device questions in which respondents completed unfinished sentences:

  1. The best example I have seen of teaching for key competencies is....
  2. Teaching as Inquiry in the NZC requires teachers to....
  3. Values in the NZC requires teachers to....
  4. Key competencies in the NZC requires teachers to....
  5. The most significant change in practice in my school or class in response to the NZC is....

Web survey 2008

The first web survey, administered in 2008, was developed for teachers, and slightly revised to ensure the school leaders' version was relevant to the role of principal.

It included sections for:

  • indicating frequency of engagement with various supports—print, people and web
  • rating the value/quality of curriculum implementation supports—print, people and web
  • space for open-ended responses/views about implementation support
  • rating barriers to curriculum implementation (in terms of support provision)
  • suggesting priorities for implementation support
  • rating views about the NZC
  • rating confidence for implementing the NZC and its various elements
  • rating barriers to curriculum implementation (in terms of own capacity)
  • indicating the accuracy of statements about key aspects of the curriculum, and space for comment
  • indicating the extent of change in unit planning for the learning areas
  • indicating the extent/nature of discussion about the NZC
  • indicating the response to key competencies (in planning, discussions with staff/students)
  • indicating the nature of implementation goals
  • indicating the extent to which coherence across sectors is a focus
  • indicating the nature of provision of Te Reo Māori me ōna Tikanga
  • indicating the focus, if any, of school review
  • indicating the extent of Teaching as Inquiry practices
  • open-ended responses to questions asking respondents to explain various key aspects of the curriculum.

Feedback from key stakeholders (in particular the Ministry of Education) and from pilot respondents was used to refine the survey instrument during the design phase.

Focus groups

In addition to the surveys, 26 focus groups were held (using selective sampling), exploring the views of 247 teachers, teacher educators and principals. Email interviews (James, 2007), using convenience sampling, were also employed as a means of gathering rich qualitative data from participants in remote locations.

In the first year of the research, the focus group interviews were based around a broad semi-structured schedule that prompted discussion about the quantity and quality of support encountered, views of the curriculum, and understandings about the key shifts required. Focus group composition in the first year was typically based around subject area interests. Participants were selected not on the basis of being representative of the wider population, but on the basis that they would have contributions to make related to the implementation of the curriculum. The emphasis in the second year of the research shifted to key themes emerging from the 2008 survey data. For example, themes of primary/secondary differences, principal/teacher differences, and issues in understanding and implementing partnerships and the Teaching as Inquiry elements of the curriculum were foci for the interviews, and influenced the composition of the groups. In these focus groups, findings from survey data were used as prompts for discussion. As well as being a useful stimulus for discussion, focus group participants' reactions to survey findings provided insight into possible explanations and examples underlying the quantitative data. The number of participants in each focus group varied, but was typically between five and eight, enabling a range of ideas and perspectives to be raised. An invitation to respond to questions in an email interview was also extended (via an Education Gazette advertisement), and 12 educators who were unable to attend focus groups in person participated in that way.

Analysis

Paper survey 2008–2009

Quantitative analysis of the categorical and rating-scale items in the survey was carried out using SPSS. Analyses included factor analysis, multivariate analysis of variance (MANOVA), effect-size calculations and stepwise linear regression. Focus group transcripts, email interview responses and open-ended survey items were coded qualitatively (using Excel and NVivo), and frequency counts were calculated for selected items.

Factor analysis

Factor analysis was used to uncover the dimensions of the support and receptivity items and the practice items. This involved an exploratory factor analysis using the maximum likelihood extraction method, followed by oblimin rotation. A 5-factor solution for the support and receptivity items (excluding one item, miserly-generous) and a 9-factor solution for the practice items were selected, since those factor structures matched between the Time 1 and Time 2 data. This enabled comparisons to be made across the two time points. (An illustrative code sketch of this extraction and reliability procedure follows the factor descriptions below.) The factors, as outlined in the pattern matrices in Appendix 2, Appendix 3 and Appendix 4, were:

Support quantity: internal is a 4-item factor (α=0.73), measuring the quantity of encounters with support available within schools, including respondents' own colleagues, Ministry of Education-provided publications and website material, and the curriculum document itself.

Support quantity: external is a 4-item factor (α=0.66), measuring the quantity of encounters with support sourced externally to the school including state-funded advisors, private consultants, facilitators from other initiatives and colleagues from other schools.

Support quality is a 4-item factor (α=0.93), measuring respondents' views of how productive, relevant, stimulating and sound the quality of support provision was that they experienced.

Regard is a 3-item factor (α=0.80), measuring the extent to which respondents view the curriculum positively in terms of being flexible, practical, and better than the previous curriculum.

Confidence is a 4-item factor (α=0.73), measuring the extent to which respondents feel confident about implementing the curriculum (confidence and ease of implementation) and consider it to be feasible (reasonable workload and uncomplicated).

Key competency: pedagogical is a 2-item factor (α=0.72), measuring the integration of key competencies across learning areas and fostering of students' dispositions to recognise when and how to use key competencies.

Key competency: disciplinary is a 4-item factor (α=0.61), measuring competencies relating to thinking, the use of language, symbols and texts, and the application of knowledge to meaningful real-world contexts.

Key competency: situated inter-/intra-personal is a 3-item factor (α=0.79), measuring educators' focus on competencies of relating to others, managing self and participating and contributing.

Values is a 4-item factor (α=0.80), measuring the emphasis on teaching that encourages curriculum values, develops values exploration skills and knowledge about the nature of values, and attends simultaneously to knowledge, attitudes and values during learning.

Student agency is a 2-item factor (α=0.71), measuring the emphasis on enabling students to participate in decisions about what and how they learn and how they are assessed.

Parent involvement is a 3-item factor (α=0.80), measuring the emphasis on parents and community members being consulted on teaching and learning matters, and taking part in teaching and learning both at home and at school.

Teaching as Inquiry is a 5-item factor (α=0.75), measuring the extent to which an inquiry-oriented approach is taken—being responsive to evidence about students' needs, abilities and response to teaching; drawing on both colleagues' experience and published research to inform changes to practice; and collecting and analysing data about student response to teaching.

Change to classroom practice is a 5-item factor (α=0.91), measuring the degree of change to planning, approaches/activities, resources, content/topics/themes and the role of students in class.

Change to reporting is a 2-item factor (α=0.94), measuring the degree of change to the content and manner of reporting to parents.

A single-factor solution across all of the practice items was also extracted, to determine whether a single practice factor would be suitable for use in subsequent linear regression (23 items [excluding those rated on the change scale], α=0.92).
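
To make the reported procedure concrete, the fragment below sketches the same steps in Python (the evaluation itself used SPSS): maximum likelihood extraction with oblimin rotation, here via the third-party factor_analyzer package, followed by a Cronbach's alpha calculation of the kind reported for each factor. The response data frame is a hypothetical stand-in, and the item names and grouping of items into a factor are assumptions for the example.

    import numpy as np
    import pandas as pd
    from factor_analyzer import FactorAnalyzer  # third-party package

    # Hypothetical stand-in for the survey responses: 22 practice items
    # on a 4-point scale from 500 respondents.
    rng = np.random.default_rng(0)
    items = pd.DataFrame(rng.integers(1, 5, size=(500, 22)),
                         columns=[f"practice_{i + 1}" for i in range(22)])

    # Exploratory factor analysis: maximum likelihood extraction,
    # followed by oblimin rotation, retaining nine factors.
    fa = FactorAnalyzer(n_factors=9, method="ml", rotation="oblimin")
    fa.fit(items)
    pattern_matrix = pd.DataFrame(fa.loadings_, index=items.columns)

    def cronbach_alpha(factor_items: pd.DataFrame) -> float:
        # Internal consistency (alpha) of the items loading on one factor.
        k = factor_items.shape[1]
        item_vars = factor_items.var(axis=0, ddof=1)
        total_var = factor_items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # e.g. alpha for a hypothetical five-item factor
    print(round(cronbach_alpha(items.iloc[:, :5]), 2))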

Comparisons between groups

A MANOVA test was conducted to look for between-subjects effects across a range of groupings: 2008 and 2009 respondents; those from different school levels (primary and secondary); those from schools of differing deciles (low, mid and high); those with different roles in schools (teachers and principals); and those reporting experience of low and high quality support. For these multivariate analyses the regard and confidence factors were included as dependent variables alongside all of the practice factors, since it was deemed important to establish between-group differences on all of these variables. Regard and confidence were subsequently treated as independent variables in other analyses described later.
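
A minimal sketch of such a test, assuming entirely hypothetical factor scores and using the statsmodels MANOVA implementation rather than the SPSS procedure actually employed:

    import numpy as np
    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA

    # Hypothetical factor scores for respondents from the two survey years.
    rng = np.random.default_rng(1)
    scores = pd.DataFrame({
        "year": np.repeat(["2008", "2009"], 200),
        "regard": rng.normal(size=400),
        "confidence": rng.normal(size=400),
        "inquiry": rng.normal(size=400),
    })

    # Multivariate test of between-subjects effects of survey year on
    # all dependent variables simultaneously.
    result = MANOVA.from_formula("regard + confidence + inquiry ~ year",
                                 data=scores)
    print(result.mv_test())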

Effect sizes (Cohen's d) were calculated for group mean differences on all of the factors (where those differences were found to be statistically significant) to determine the magnitude of each difference. Effect sizes made it possible to compare the relative magnitude of differences across, for instance, each of the factors relating to practice, and to determine relative progress across multiple curriculum aspects.
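
The report does not state which variant of Cohen's d was used; the sketch below assumes the common pooled-standard-deviation formulation.

    import numpy as np

    def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
        # Standardised mean difference between two groups, dividing by
        # the pooled standard deviation (classic Cohen's d).
        n_a, n_b = len(a), len(b)
        pooled_var = ((n_a - 1) * a.var(ddof=1)
                      + (n_b - 1) * b.var(ddof=1)) / (n_a + n_b - 2)
        return (a.mean() - b.mean()) / np.sqrt(pooled_var)

    # e.g. cohens_d(factor_scores_2009, factor_scores_2008) for one
    # practice factor (hypothetical variable names).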

Regression

A series of stepwise linear regressions was carried out in order to describe the relationship between one predicted (dependent) variable and a selection of predictor (independent) variables. In the first instance, the single practice factor was treated as the dependent variable, and all of the support and receptivity factors were included as independent variables. The strongest predictor from each analysis (which was confidence in the first regression, where practice was the dependent variable) was then treated as the dependent variable in the subsequent analysis.
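
The evaluation used SPSS's stepwise procedure, which can both add and remove predictors at each step. The simplified sketch below illustrates only the forward (entry-only) half of that logic, with the entry threshold p_enter as an assumed parameter.

    import pandas as pd
    import statsmodels.api as sm

    def forward_stepwise(y: pd.Series,
                         candidates: pd.DataFrame,
                         p_enter: float = 0.05) -> list:
        # Greedy forward selection: at each step, fit a model adding each
        # remaining candidate in turn, keep the one with the smallest
        # p-value, and stop when none falls below the entry threshold.
        selected, remaining = [], list(candidates.columns)
        while remaining:
            pvals = {}
            for name in remaining:
                X = sm.add_constant(candidates[selected + [name]])
                pvals[name] = sm.OLS(y, X).fit().pvalues[name]
            best = min(pvals, key=pvals.get)
            if pvals[best] >= p_enter:
                break
            selected.append(best)
            remaining.remove(best)
        return selected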

Qualitative analysis

Thematic analyses

Focus groups were recorded, transcribed and entered into NVivo. The development of categories for the qualitative analysis occurred both inductively and deductively. The SERUP framework provided the basis for the deductive categories, or confirmatory thematic (a priori) analysis. These categories included, for example, support encounters (positive, negative, quantity, quality, source); receptivity (positive, negative, value, feasibility, confidence); and understanding and practice (all curriculum elements). There was also attention to exploratory (a posteriori) analysis. This inductive coding allowed additional themes to be noted, and previously coded data were cross-checked as additional categories emerged. Web survey comments were also coded, mainly inductively, and frequency counts were calculated for each category.
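
The frequency counts themselves are straightforward; as a minimal illustration (with entirely hypothetical comments and category labels, since the actual coding was done in NVivo and Excel):

    import pandas as pd

    # One row per coded comment, with the (a priori or emergent)
    # category assigned during coding.
    coded = pd.DataFrame({
        "comment": ["...", "...", "...", "...", "..."],
        "category": ["support quality", "confidence", "support quality",
                     "key competencies", "support quality"],
    })

    # Frequency of each category across the coded comments.
    print(coded["category"].value_counts())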

The curriculum context: Features of The New Zealand Curriculum

Several important characteristics of the NZC had implications for evaluating its implementation:

  • The New Zealand Curriculum provides direction and guidance for teaching and learning for every curriculum area in New Zealand's state and state-integrated schools.
  • Schools' local curricula are required to provide teaching and learning programmes that are based on The New Zealand Curriculum learning area statements, are underpinned by its principles, and address the stated values, key competencies, and achievement objectives.
  • The New Zealand Curriculum emphasises flexibility and school autonomy. Rather than being a prescription, it focuses on school-based curriculum design: the requirement for schools to design local curricula that are responsive to the needs of their particular students and communities.
  • The New Zealand Curriculum, unlike its predecessor, places substantial emphasis on pedagogy: it provides not only direction for teaching and learning in terms of the outcomes students should achieve, but also guidance on teaching and learning processes, indicating how students and other stakeholders should experience teaching and learning.
  • The New Zealand Curriculum is one of two documents that comprise the national curriculum: The New Zealand Curriculum (English-medium) and Te Marautanga o Aotearoa (Māori-medium). Te Marautanga o Aotearoa, the partner document to the NZC, was developed for Māori-medium settings (immersion levels 1 and 2); however, all New Zealand schools can make use of it. It is not a translation of the NZC, and was developed from Māori philosophies and principles.

Footnote

  1. A school's decile indicates the extent to which it draws its students from low socio-economic communities. Decile 1 schools are the 10% of schools with the highest proportion of students from low socio-economic communities. Decile 10 schools are the 10% of schools with the lowest proportion of these students.
