TIMSS 1994: Mathematics and science literacy in the final year of schooling
Publication Details
TIMSS-94/95 was the first in a cycle of studies designed to measure trends in mathematics and science achievement at the middle primary level (Year 5), the lower secondary level (Year 9), and the final year of schooling. This is the fifth major report from the 94/95 study.
Author(s): Research and International Section, Ministry of Education
Date Published: June 1998
Summary
This report details the background to the development of the concept and measurement of mathematics and science literacy. It describes the overall mathematics and science literacy of New Zealand students who intended leaving secondary school at the end of 1995, including the relative performance of female and male students; students from the four main ethnic groupings; students for whom English is a second language; and other sub-groups of students of interest to the education community.
Context of the Mathematics and Science Literacy Study
Parts of this chapter have been reprinted (with minor amendments), with permission, from Assessing Science and Mathematics Literacy, by Graham Orpwood and Robert A. Garden, published by Pacific Educational Press, 1998.
The Third International Mathematics and Science Study (TIMSS) is the most comprehensive study ever undertaken to assess and compare international student achievement in mathematics and science. TIMSS represents the continuation of a series of studies sponsored by the International Association for the Evaluation of Educational Achievement (IEA), inasmuch as it involves surveys of student achievement in mathematics and science. Similar, earlier, IEA studies included the Second International Mathematics Study (SIMS) and the Second International Science Study (SISS). TIMSS does, however, comprise at least one aspect that is unique in the IEA tradition: the inclusion of an achievement survey in mathematics and science literacy (MSL) administered to students in their final year of secondary schooling. This report outlines the methodology and background of the MSL study, and presents key results from a New Zealand perspective. More complete international results of the literacy study have been published in Mullis et al (1998).
The overall TIMSS study design defines three populations of students whose achievement in mathematics and science has been surveyed. Detailed definitions of TIMSS populations are provided in Robitaille and Garden (1996); Martin and Kelly (1996); and, for population 3 particularly, in Mullis (1998).
Population 1 consisted of students in the pair of adjacent class levels that contain the most students who were 9 years old at the time of testing. Population 2 consisted of students in the pair of adjacent class levels that contain the most students who were 13 years old at the time of testing. Population 3 consisted of students in their final year of secondary school, regardless of the type of programme in which they were enrolled. The literacy study concerns those students in population 3 only.
TIMSS Research Questions
The research questions for TIMSS, together with the conceptual framework that guided the study and its design, can be found in Robitaille and Maxwell (1996). The four general research questions, from which more specific research questions derive, are as follows:
- The Intended Curriculum: How do countries vary in the intended learning goals for mathematics and science; and what characteristics of educational systems, schools and students influence the development of those goals?
- The Implemented Curriculum: What opportunities are provided for students to learn mathematics and science; how do instructional practices in mathematics and science vary among nations; and what factors influence these variations?
- The Attained Curriculum: What mathematics and science concepts, processes, and attitudes have the students learned; and what factors are linked to students' opportunity to learn?
- Relationships Between Curricula and Social and Educational Contexts: How are the intended, the implemented, and the attained curriculum related with respect to the contexts of education, the arrangements for teaching and learning, and the outcomes of the educational process?
The Mathematics and Science Literacy Study
The most significant difference between the mathematics and science literacy study and both the other components of TIMSS and other IEA studies is that the MSL study is not an attempt to measure what has been taught and learned in a given year of schooling or to a given age-group of students. Instead, it is a study of the residue of mathematics and science learning that final-year students, who are on the point of leaving school and entering the workforce or post-secondary education, have retained regardless of their current areas of study. These students may have studied mathematics and science in their final years of school or they may not have; they may regard themselves as specialists in mathematics and science, in other subjects, or in none; they may be entering occupations or further education related to mathematics and science, or they may have no intention of doing so. Nonetheless, all of them have studied mathematics and science at some time during their school careers and all of them are entering a world increasingly affected by science and technology. The role of the literacy study within TIMSS, therefore, is to ask whether the mathematics and science that school leavers have been taught is still remembered and can be applied to the challenges of life beyond school.
Of the four principal research questions of TIMSS, the literacy study focuses most strongly on Question 3, which relates to student achievement. In particular, as the account of the development of the MSL concept within TIMSS in Chapter 2 shows, it is the residue of conceptual learning in mathematics and science that the MSL study is most directly concerned with, together with students' ability to reason with, and apply, that conceptual learning in a social context. Question 4, concerning the relationships between curricula and social and educational contexts, is also important in the context of the literacy study. Adapted to the nature of the MSL study, Question 4 enquires into the relationship between attainment in mathematics and science literacy and the overall features of each nation's provisions for teaching and learning those subjects. Of particular interest will be the relationship between a nation's population 3¹ specialist students' attainment in their specialty subjects, and the same nation's MSL attainment.
The literacy study is less specifically concerned with the other two research questions, although they informed its development. The MSL study could not be based on a specific intended curriculum, Question 1, since not all students surveyed were still enrolled in mathematics or science courses. However, the data collected for population 2 and through the curriculum analysis study did allow estimations of the intended curriculum for earlier stages of population 3 students' education. Similarly, the instructional practices that Question 2 examines are not vital to the MSL study, although population 2 data for each country can reveal the background instruction in mathematics and science experienced in earlier years by population 3 students.
The answers to the questions posed by the MSL study, that is, what mathematics and science do school-leavers retain, and can they apply what they have learned in those subjects, are of interest not only to researchers and teachers. In many countries, significant economic and social policy debates are premised on the importance of mathematics and science education to the population at large.
The first planning meeting for the mathematics and science literacy study was held in Paris in April 1991. The report of that meeting declared:
In some countries, political and business leaders are concerned that too few students are learning enough mathematics and science in school. In particular, they believe that too many students are leaving secondary school without the mathematics and science needed to staff the factories and laboratories that will make the country economically competitive. Few could say just how many graduates are needed with just what capabilities, but political leaders the world over constantly receive advice that the education system is failing their country.
In addition to providing able students to specialized university courses, schools are expected to produce secondary graduates who can cope with technology on the job (and learn quickly about its changes), care for the environment, and live and work safely. To do this (the reasoning goes) as many students as possible should be learning quite a lot of secondary mathematics and science. (McLean & Wolfe, 1991).
Beliefs about the relationship between school mathematics and science programmes, and national and personal economic well-being are strongly entrenched in policy literature, although little empirical evidence exists to support or refute them. The TIMSS literacy study was designed to contribute some answers to questions about what students retain from mathematics and science education as they leave school for the world beyond.
Subjects of the Literacy Testing: TIMSS Population 3
Since schools vary so greatly in structure and type across the countries participating in population 3, the definition of this population was difficult to apply in a uniform fashion. Each country defined its own population 3 and described in detail the basis for its definition. Considerable effort was made to ensure that, in each country, as high a proportion of the relevant age cohort as possible was included in the target population. In some countries, where a variety of secondary school types exist, it was necessary to include students from vocational as well as from general or academic schools. Several reviews by TIMSS sampling staff, and meetings with country representatives, were required to maximise comparability between national populations.
The population 3 study also comprised surveys of achievement in advanced mathematics and physics. For this purpose, sub-populations of students were defined as those who were taking (or had taken) courses in advanced mathematics and/or physics. The precise definitions were also developed in each jurisdiction and then compared across countries to ensure reasonable levels of parallelism. Since mathematics and physics programmes, and the proportion of students taking them, vary significantly throughout the world, the establishment of comparable definitions was also a complex process.
The sampling procedure involved several steps. Each country drew a sample of schools and identified all the population 3 students within the selected schools as belonging to one of four categories:
- students who were neither mathematics nor physics specialists;
- students who were mathematics but not physics specialists;
- students who were physics but not mathematics specialists; and
- students who were both mathematics and physics specialists.
Estimates of achievement in the literacy component of the test were obtained for each of the four groups, with those in the first group receiving only the literacy test. Members of the specialist sub-populations also received parts of the literacy test as part of a complex rotation design that is described more fully in Adams and Gonzalez (1996). Because of this complex design, with several sub-populations of population 3 being identified, estimates of mathematics and science literacy could be obtained for the overall population and, with somewhat reduced precision, for each of the sub-populations.
In the New Zealand sample these four groups were identified, but this report quotes literacy estimates only for the whole population, for the population of students taking advanced mathematics, and for the population taking advanced physics.
The sampling for population 3¹ differed from that for populations 1 and 2 in one key way: in those populations, intact classes of students were surveyed, enabling links between student achievement and school and classroom variables to be identified. Since the MSL test was administered to students, many of whom were not taking mathematics or science courses in their final year of secondary school, collection of information concerning classroom variables was inappropriate. The range of contextual information linked to each student's achievement survey was thus limited to the information collected through the background questionnaires administered to each student.
International Administration
TIMSS was directed by Albert E. Beaton at the TIMSS International Study Centre at Boston College, Massachusetts, United States, from which the study was managed. Some aspects of international coordination, including coordination of the development of the mathematics tests, were carried out at a TIMSS centre at the University of British Columbia, Canada, while coordination of the development of the science tests was done from the University of Toronto, Canada. Items were developed at a number of centres, including the Australian Council for Educational Research (ACER), Australia, the University of Oslo, Norway, and the major testing agencies ETS and SRA in the United States. Development of an international sampling design, and monitoring of its implementation in TIMSS countries, was the responsibility of Statistics Canada in Ottawa. Initial data processing was carried out at the IEA Data Processing Centre in Hamburg. The test data were then sent to ACER in Melbourne for advanced analyses.
Modern methods of communication and data transfer made this level of international cooperation possible without loss of time. E-mail and fax also facilitated the intensive communication between international centres, and between these centres and the national centres, at various phases of TIMSS, and allowed debate over aspects of the study to be carried out 'at a distance'.
International Reports
Eight reports detailing international results for TIMSS, and two reports of a more technical nature, have been published. In addition, an encyclopedia covering the contexts for education and the features of school mathematics and science curricula in TIMSS countries, and four monographs treating aspects of the study, have been published. These publications are listed under a separate heading in the reference section.
New Zealand Participation
TIMSS was administered in New Zealand by the IEA Unit (now the Comparative Education Research Unit) within the Research and International Section (now the Research Division) of the Ministry of Education. The IEA Unit was guided by a National Advisory Committee (see Appendix 1). This report details results for New Zealand in the MSL component of TIMSS. It contains some international comparisons where these are of interest, but for comprehensive international results the international report (Mullis et al, 1998) should be consulted.
Full participation in TIMSS at population 1 (standards 2 and 3) and population 2 (forms 2 and 3) levels promised to stretch the resources of the national centre in terms of both finance and personnel, so a decision was made to limit participation at population 3 (final year of schooling) level to the MSL study. The National Advisory Committee was of the view that investigating the general levels of mathematics and science literacy amongst school leavers would yield information of greater value than would be obtained by testing mathematics and physics specialists. It is also the case that in SIMS the performance of New Zealand form 7 mathematics specialists had been found to be satisfactory when compared with that of their counterparts in other participating countries (Garden & Irving, 1987).
New Zealand Reports
This report follows earlier publication of New Zealand reports presenting results for population 1 mathematics and science, population 2 mathematics, population 2 science, and the performance assessment component of TIMSS. These publications are listed under a separate heading in the bibliography section.
Sampling
Student Attrition
Identifying the population of students in their final year of schooling, drawing a sample, and administering tests and questionnaires to students in the sample posed a considerable challenge in most systems. In some countries, such as the Netherlands and Austria, students in their final year of school may be in any one of several different types of school, or well-defined tracks within school ranging from those students aspiring to university education to those bound directly from school to the labour market. A common pattern in Europe is to have 'academic' and 'vocational' tracks, sometimes in the same school and sometimes in different schools.
A second difficulty in achieving a representative sample is experienced in those countries where there is not a single terminal grade level from which nearly all students leave school. In New Zealand, as in some other systems, students may leave school before reaching, or completing, the highest class level (ie form 7) in order to pursue a full-time education in another educational institution, or they may simply 'drop out'. Even in those systems in which there is a 'graduation' year, drop-out rates prior to that year can be substantial.
The New Zealand sampling procedure involved selection of schools with probability proportional to the size of the Year 13 roll, followed by random selection from all Year 13 (form 7) students, together with those Year 12 (form 6) students who were not planning to return to school in 1996. Adult students, exchange students from overseas, recent immigrants, and foreign fee-paying students who had less than four years of secondary schooling in New Zealand were excluded. (Sampling details are available from the CER Unit in the Ministry of Education.)
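The first stage of that procedure, selection with probability proportional to size, can be sketched as follows. This is a minimal illustration of systematic PPS selection with hypothetical school names and roll sizes; the actual TIMSS stratification and selection software are not shown.

```python
import random

def pps_sample(schools, n, rng):
    """Systematic PPS selection: schools with larger Year 13 rolls
    have proportionally higher chances of selection. `schools` is a
    list of (name, roll) pairs."""
    total = sum(roll for _, roll in schools)
    step = total / n                 # sampling interval
    point = rng.uniform(0, step)     # random start within first interval
    chosen, upper, idx = [], 0.0, -1
    for i in range(n):
        target = point + i * step
        while upper < target:        # walk along the cumulative roll
            idx += 1
            upper += schools[idx][1]
        chosen.append(schools[idx][0])
    return chosen

# Hypothetical rolls: school B, with the largest roll, is the most
# likely to be drawn.
schools = [("A", 15), ("B", 120), ("C", 45)]
print(pps_sample(schools, 2, random.Random(1)))
```

Note that a school whose roll exceeds the sampling interval can be selected more than once; production sampling designs handle such "certainty" schools separately.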
It had been expected that by including leavers from both form 6 and form 7, more than 80 percent of the age cohort would have been captured, but it was later found that in 1995, by the time the TIMSS tests were administered in August, only 70 percent of the age cohort was still in the school system.
The differing problems facing the various national centres resulted in considerable variation from country to country in the proportion of the age cohort covered, making international comparisons of achievement subject to careful interpretation.
Coverage
To aid interpretation of cross-national comparisons in literacy measures, a TIMSS Coverage Index (TCI) was calculated for each country. The TCI is simply the proportion of students in the age cohort who formed the TIMSS population 3. The TCI is thus an indicator of how representative population 3 was of the age cohort of school leavers in each country, but it is not informative about whether there is bias, or the direction of bias where it exists. In New Zealand it is likely that, on average, the mathematics and science literacy of those who had left school before, or during, the sixth form year would be weaker than that of the remaining students. This is probably true for other countries, but the relationship between size of TCI and score bias would certainly not be linear. Coverage indices for participating countries are included in Figure 3.1, Chapter 3.
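The index itself is a simple ratio; a one-line sketch follows, with hypothetical cohort figures rather than actual TIMSS counts.

```python
def coverage_index(population3, age_cohort):
    """TIMSS Coverage Index (TCI): the proportion of the age cohort
    that formed the defined population 3."""
    return population3 / age_cohort

# Hypothetical: 38,500 students in population 3 drawn from an
# age cohort of 55,000.
print(f"TCI = {coverage_index(38_500, 55_000):.0%}")  # TCI = 70%
```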
Source of Bias
Bias due to less than full coverage of the age cohort may be substantial for some countries, but for most participants bias due to inadequate sampling of their defined populations will be more important. The sampling design that national centres were required to adhere to in TIMSS was intended to ensure that samples were fully representative of population 3 in each country, but two factors worked against fulfilment of this ideal. Firstly, for various reasons, not every selected school was able to participate in TIMSS. When a selected school was unable to participate, efforts were made to replace it with another school having similar characteristics (such as location, size, and school type), but in this process randomness was lost. The second factor was that, inevitably, some selected students were absent from the test session, and not all of these could be included in follow-up test sessions. Provided losses due to these two factors were not too great, it was still possible to obtain good population estimates for test scores and other variables by weighting adjustments, but losses in some countries were considerable.
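The weighting adjustments mentioned above compensate for non-response by inflating the design weights of the students who did respond. A minimal sketch with hypothetical numbers follows; the actual TIMSS weighting scheme, with its adjustment classes for schools and students, is considerably more elaborate.

```python
def nonresponse_adjusted_weight(design_weight, n_selected, n_responded):
    """Scale a respondent's design weight by the inverse of the
    response rate in its adjustment class, so that respondents also
    'stand in' for the non-respondents they resemble."""
    return design_weight * (n_selected / n_responded)

# Hypothetical: 100 students selected in an adjustment class, 85 tested;
# each respondent's weight of 12.0 is inflated by 100/85.
w = nonresponse_adjusted_weight(12.0, 100, 85)
print(round(w, 2))  # 14.12
```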
Criteria for samples to be deemed adequate were strict in TIMSS. At least 85 percent of schools selected needed to take part, and at least 85 percent of selected students from these schools had to respond to study tests and questionnaires. The New Zealand National Centre was one of the seven that met the criteria. Tables for the MSL measures in Chapters 3 and 4 indicate which countries met the sampling criteria, and the extent to which other countries failed to do so. Notes on national samples can be found in Appendix 2, with more detailed accounts in Mullis et al (1998). National samples in which substantial numbers of schools and/or students were not represented are likely to have mean test scores which are biased upwards with respect to the populations they are drawn from, but this assumption cannot be tested.
Between-Country Comparisons
In comparing the results for one country with those of another, then, it is important to refer to the sampling category for each country, and to take possible bias into account. The four categories are:
- Countries which satisfied the sampling criteria.
- Countries not satisfying the sample participation rates.
- Countries with unapproved student sampling.
- Countries with unapproved sampling procedures and low participation rates.
Where TCIs are comparable, New Zealand results can be compared confidently with those of countries in category 1, and with reasonable confidence with countries in category 2. Countries in categories 3 and 4 do not have good enough samples to justify comparison with New Zealand on the literacy measures. For example, of the three countries that performed significantly better than New Zealand on the combined mathematics and science literacy scale, only Sweden met the sampling guidelines (Mullis et al, 1998). The other two countries, the Netherlands and Denmark, were in category 4 above, having deviated from the approved sampling procedures as well as having low participation rates.
Six countries had mean scores not significantly different (at the 95% level) from New Zealand's, but only one, Switzerland, is in category 1, and Iceland has a very low TCI (55%). Thus, of the higher scoring countries, it can be said with confidence that the school system in Sweden produces higher levels of MSL than the New Zealand system, and that the Swiss and New Zealand means are not significantly different. Norway, France, Australia, and Canada, the other countries with mean scores not significantly different from New Zealand's, are in category 2, but satisfactory TCIs and sampling procedures (even where participation is not high) allow meaningful comparisons to be made.
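The "not significantly different (at the 95% level)" judgments rest on comparing the difference between two country means with the standard error of that difference. The sketch below uses illustrative means and standard errors, not published TIMSS values, and treats the two samples as independent; TIMSS itself used jackknife standard errors, so the published tables should be consulted for actual comparisons.

```python
import math

def means_differ(mean_a, se_a, mean_b, se_b, z=1.96):
    """True if two country means differ at roughly the 95% level,
    assuming independent samples: |difference| must exceed z times
    the standard error of the difference."""
    se_diff = math.sqrt(se_a ** 2 + se_b ** 2)
    return abs(mean_a - mean_b) > z * se_diff

# Hypothetical means and standard errors:
print(means_differ(559, 4.4, 525, 4.8))  # True  (34-point gap)
print(means_differ(531, 5.0, 525, 5.0))  # False (6-point gap)
```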
Loss of ability to compare New Zealand results with those of the few countries with poor samples or TCIs is unfortunate, but for the purposes of judging whether levels of mathematics and science literacy of New Zealand school leavers are satisfactory, not too serious. Comparisons with several countries with similar educational contexts are possible. Just as importantly, the nature of literacy tests allows informed people to make item by item judgments as to whether the proportion of students answering correctly meets the reasonable expectations of mathematics and science educators, and of society.
Literacy Tests
The bases for the blueprints for construction of the TIMSS tests were the Curriculum Frameworks for mathematics and science developed for the Curriculum Analysis component of the study (Robitaille et al, 1993). The Curriculum Frameworks were three-dimensional, each incorporating a subject-matter content aspect, which listed topics and sub-topics for the respective subjects; a performance expectation aspect, listing the various kinds of performances or behaviours that teaching and testing are expected to elicit; and a perspectives aspect, for use in the analysis of curricular documents. This last aspect covered elements of textbooks and curriculum guides that dealt with such things as promotion of desirable attitudes to mathematics or science, encouragement of under-represented groups, and 'habits of mind', but did not figure in test construction.
The Curriculum Frameworks were based on intended curricula of the countries taking part, but the process for determining the numbers and types of items selected for various parts of the framework in making up the tests ensured that the tests were much more in line with implemented curricula, ie what teachers actually taught. The often substantial disparity between intended and implemented curricula (Livingstone, 1986; Travers & Westbury, 1989) makes the latter a more reliable predictor of what students will know and be able to do.
Unlike the tests for population 1, population 2, and for population 3 advanced mathematics and physics, the MSL test did not cover all of the mathematics or science content students could be expected to have covered at the relevant class level, but only that considered to fall within the general notion of 'literacy'. As will be seen from Chapter 2, this notion was not well-defined, and, to some extent, the TIMSS concept of literacy is defined by the test. (See Chapters 3 and 4).
Although the concept dimension for the test was restricted, this was not the case for the performance expectation dimension. Being literate in mathematics and science necessitates knowing certain content and being able to perform certain routine procedures automatically so that the knowledge and skills can be applied to such things as the solution of everyday problems; understanding and correctly interpreting news reports incorporating quantitative measures, statistics, and science-related issues; and communicating intelligently with other people about these matters.
Quality Control and Quality Assurance
The willingness of selected schools to participate in the MSL component of TIMSS, and the ability of selected students to attend test sessions, resulted in variability of sample quality; these were factors over which TIMSS management had no control, beyond advising national centres on how to maximise participation. Quality control over all aspects of the study that were the responsibility of the project management was, however, comprehensive and rigorous.
During all stages of the study there was ready communication between national and international centres. This, complemented by regular meetings of national research coordinators, ensured rapid feedback on, and involvement in, planning and development from all participating countries. Detailed manuals were provided for national coordinators, school coordinators, test administrators, sampling experts, data coders, and data entry people. Translations of instruments were checked to ensure that their intent and meaning did not deviate from that of the original. Training sessions were held around the world for Curriculum Analysis, and for coding and scoring questionnaires and tests. It is a tribute to these procedures designed to maximise comparability of results, that both within-country and between-country reliability checks revealed very high levels of consistency.
As a quality assurance measure, independent monitors investigated administration of the study in each country. These quality assurance monitors interviewed the national research coordinators and school coordinators; checked national tests and questionnaires; observed tests and questionnaires being administered to students; and reported to the International Study Centre on these matters. No problems were found with administration of TIMSS in New Zealand. Thus no reservations need be held about the quality of New Zealand data. For further details refer to Martin and Mullis (1996).
Structure of this Report
One may hypothesise that a country whose educational system is organised towards the production of high-calibre specialists may not also be providing the more general population with mathematics and science literacy. The literacy study aimed to examine what mathematics and science all school-leavers, not just specialists, take with them into the world, and how they are able to use and apply their knowledge. How this general purpose, based on planning discussions that took place in 1991, was translated into the specific tests administered in 1995 as part of TIMSS is a rather long journey with many detours, the highlights of which are described in Chapter 2 of this report. The end of the journey, the literacy tests themselves, are described in detail in Chapter 3 (Mathematics) and Chapter 4 (Science). In Chapters 5 and 6, information collected from students and from principals of participating schools is examined, and the relationship of a selection of variables to measures of mathematics and science literacy explored. Finally, Chapter 7 presents an overview of the findings, identifies areas of strength and weakness, and suggests areas which could merit the attention of policy makers.
Footnote
- Participation in the population 3 study was optional. The following 21 countries participated fully in the MSL component of the population 3 study: Australia, Austria, Canada, Cyprus, Czech Republic, Denmark, France, Germany, Hungary, Iceland, Italy, Lithuania, Netherlands, New Zealand, Norway, Russian Federation, Slovenia, South Africa, Sweden, Switzerland, and United States.