PISA and PIAAC: Similarities and Differences [webinar recording and slides]

Presented by: Patrick Bussière, Director, Skills Development Research Division, Policy Research Directorate, Employment and Social Development Canada

Last Fall, the Organization for Economic Co-operation and Development (OECD) released the results of the 2012 Programme for the International Assessment of Adult Competencies (PIAAC) as well as results from the Programme for International Student Assessment (PISA).  While PISA has been measuring the skills of 15-year-olds in mathematics, reading, and science in an ever-growing number of participating countries since 2000, PIAAC was undertaken for the first time in 2012 and measured literacy, numeracy, and problem solving in technology-rich environments among individuals aged 16-65.  These two instruments provide us with the most complete picture of skills in Canada and other countries that we have ever had.  This webinar highlighted the new information that is now available, compared and contrasted the two studies, and linked the results to issues of importance to Canada.

Patrick Bussière co-leads Canada’s PIAAC team on behalf of Employment and Social Development Canada in collaboration with CMEC.  He has presented previous webinars on PIAAC for The Centre and has also presented on PIAAC at Fall Institute 2013 and Fall Institute 2011.



Questions and Answers


Correction and explanation: PISA and "paper-based tests on a screen," and comparing PISA levels across major and minor domains

Q. A couple of corrections on PISA: computer-based assessment in PISA began in 2009 with the digital reading assessment (an international option).  Digital reading from 2009 and computer-based reading and mathematics (2012) are not simply "paper-based tests on a screen".  For example, the digital reading assessment used a set of digital texts (web pages, email, blogs) that students had to navigate to find solutions.  Levels in PISA are used for major AND minor domains in each administration.

A. This participant was correct in pointing out that proficiency levels were generated for the minor domains in PISA 2012.  However, in its own analyses, Canada opted out of using these measures.  These proficiency levels were estimated based on results from prior PISA assessments in which the domains were treated as major.  We were not satisfied with the procedure of simply extrapolating proficiency levels from one PISA cycle to another.  Canada therefore decided not to use this methodology in our analyses.

Explaining differences in scores across countries; can this help Canada improve skills?

Q. Could you please comment on the fact that the scores of Norway and Sweden in PISA mathematics were lower than the OECD average, yet the PIAAC numeracy scores of these countries are better than the average?  If this is true, what interventions do they use?  From the Canadian point of view, our PISA results are better than the OECD average, yet in PIAAC we are just close to the average.  Why is there a drop in the overall skill level?

Do we need to study what interventions countries are using to achieve a higher skill level in PIAAC than in PISA?  That would be key to improving skills in Canada.

A. The webinar presentation was based on two separate reports, one for PIAAC and one for PISA.  As such, they did not attempt to explain results such as those identified for Norway and Sweden.  To date, no official analysis has been performed to link the two databases.  This has been identified as an area of interest by multiple parties, including ESDC and the OECD.  Understanding the interventions used in particular countries would be key to understanding the differences between PISA and PIAAC results.

Purpose of the final two graphics regarding the declining performance of Canadian youth

Q. How useful are the last two graphics that you presented, which juxtaposed rankings of countries in Numeracy for all ages, or Numeracy for 16-24 year-olds, from PIAAC 2012 against rankings of the same countries in Mathematics Literacy for PISA 2012?  My concerns are as follows.  First, the two surveys measure different constructs.  Second, the orderings are based on rankings, not on absolute scores, and do not allow for sampling (or other) errors of estimation.  Why not just compare the 16-24 age group with the whole sample for PIAAC?

A. The two graphics near the end of the presentation were not meant to serve an analytical purpose.  They were simple illustrations linking the two data sources, intended to highlight the lower performance of Canadian youth.  Rankings were used instead of actual scores simply because the latter are not comparable across the two surveys.  Further analysis is needed to understand the declining performance of Canadian youth.
