Research Literacy Assessment Instruments in Social Sciences and Education: A Scoping Review*
Solmaz Aghazadeh1 & Mohammad Reza Anani Sarab2
Shahid Beheshti University, Tehran, Iran
Contact: aghazadeh.solmaz@yahoo.com, anani@sbu.ac.ir

* This is a refereed article.
Received: 31 October, 2022.
Accepted: 11 January, 2023.
Published: 23 July, 2024.
Correspondent: Mohammad Reza Anani Sarab

This is an open-access article distributed under the terms of a CC BY-NC-SA 4.0 license
Abstract: Educational research plays a crucial role in the advancement of knowledge in the teaching profession. In fact, raising quality standards in education necessitates teachers’ engagement in different levels and dimensions of research. To this end, improving teachers’ research literacy is an essential first step, which also entails assessing the current state of teachers’ research literacy using reliable and valid instruments. The present study provides a comprehensive review of the available instruments for assessing teachers’ research literacy, along with a description of their characteristics. It also reviews the results of using these instruments to measure the research literacy of various populations. Based on the findings, teachers’ research literacy is below the expected average level, reflecting the need for a change in policies and planning to improve it. Moreover, the results of this scoping review indicate that the instruments developed for measuring research literacy reflect the expectations of only the instructional contexts in which they were developed and may not capture the full dimensions of research literacy. Furthermore, each of the developed instruments comes with its own strengths and weaknesses in assessing teachers’ research skills. Therefore, it is suggested that research literacy instruments be selected based on the research skills expected of teachers in their institutional contexts and with full awareness of their limitations. To enhance the strengths and minimize the weaknesses, it is also suggested that sections from the current research literacy assessment instruments be selectively combined to assess teachers’ research abilities.

Keywords: research engagement, teachers’ research literacy, research literacy assessment instruments, scoping review




Introduction

Evidence-based practice (i.e., practice based on systematic research evidence) has become an integral part of educational research in recent years and is considered the foundation of teachers’ continuous professional development (Groß Ophoff, Wolf, et al., 2017b). Previously, teachers were considered individuals who had to follow a set of teaching methods provided by professionals. With the advent of the teacher-as-researcher movement (Stenhouse, 1975), however, teachers were encouraged to critique, rethink, and reconstruct their educational beliefs by drawing on their research skills while probing their practice. Such a research-based approach extends teachers’ role beyond transmitting information to mediating the process of knowledge construction through evidence-based practice. This role extension empowers teachers and helps them constantly analyze their beliefs, ideas, and knowledge (Cordingley, 2015). From this perspective, teachers are viewed as educational leaders who use the available evidence to solve the problems they face (Cochran-Smith & Lytle, 1999). Teachers who are familiar with research are better equipped to handle the challenges they face daily: they stay up to date on the latest educational methods and are better able to select and implement effective teaching approaches, all of which ultimately contributes to improving the quality of education. Teachers’ research engagement is thus essential to obtaining valid, research-supported information and making informed decisions, which can expand their professional knowledge and enhance their problem-solving abilities.

Despite the critical role of research in improving the quality of educational processes, teachers are often reluctant to engage in research. As noted by Borg (2009) and Nassaji (2012), a lack of knowledge of research methodology and poor research skills are among the main factors that prevent teachers from engaging in research. Familiarity with the principles of the major research paradigms and the ability to use a variety of data collection and analysis instruments and procedures are essential to effective research engagement, and research literacy is a prerequisite for acquiring such skills. As described by Shank and Brown (2007), research literacy is the ability to understand and access scientific information purposefully and to apply research evidence to educational decision-making. Similarly, Groß Ophoff et al. (2015) consider research literacy an integral part of education, as it informs teachers’ professional decision-making. Research-literate teachers can thus critically analyze research evidence and apply practical research strategies to educational processes (Wilson et al., 2013).

For teachers to engage in research effectively, it is therefore necessary to evaluate their research literacy and identify the dimensions that need more attention. Although research plays a critical role in improving the quality of education and teachers' research literacy is the basis for their engagement in research, assessing teachers’ research literacy has received little attention. Consequently, the present study is aimed at identifying and evaluating the instruments developed to measure teachers' research literacy. It also reviews the results of the application of the developed instruments in measuring the research literacy of populations in various contexts.

Research Methodology

In the present study, a scoping review was conducted to examine research literacy instruments. According to Tullock and Ortega (2017), the primary purpose of a scoping review is to inform research agendas in the study of a research problem for which emerging knowledge has begun to accumulate. Scoping reviews are seen as a particularly suitable choice when the goal is to “map rapidly the key concepts underpinning a research area and the main sources and types of evidence available” (Arksey & O'Malley, 2005, p. 21). Since research literacy can be considered an emerging field, the present study examines the scope and nature of studies focused on developing instruments for measuring research literacy, using the framework proposed by Arksey and O'Malley, which consists of five main stages: “1) Identifying the research question; 2) Identifying relevant studies; 3) Study selection; 4) Charting the data; and 5) Collating, summarizing, and reporting the results” (p. 22).

Stage 1: Identifying the research questions

As this scoping review seeks to identify and evaluate research literacy instruments and to present the key results from studies using these instruments, it addresses the following research questions:

  1. What instruments have been developed for measuring research literacy?
  2. What are the characteristics of the research literacy instruments developed in different research studies?
  3. What populations have been the targets of the instruments developed to measure research literacy?

Stage 2: Identifying relevant studies

At this stage, the keywords (research literacy, literacy and research, literacy for research, research skills, research competence, research ability, scale, questionnaire, measure, tool, instrument, inventory, and survey) were used separately and in combination, joined by the Boolean operators AND and OR, to search the titles, abstracts, and full texts indexed by search engines and databases including Google Scholar, Web of Science, Scopus, Science Direct, ProQuest, and ERIC. The search was limited to the period from 2000 to 2022. Details of the search strategy for research literacy instruments are presented in Table 1.

Table 1: Search strategy for research literacy instruments
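To make the combination logic concrete, the following sketch assembles Boolean search strings of the kind described above. The keyword lists are taken from the paragraph above; the generic AND/OR syntax is a simplifying assumption, since each database (e.g., Scopus, ERIC) uses its own field tags and operators, so this is illustrative rather than the study's actual query.

```python
# A minimal sketch of how the search strings in Table 1 could be assembled.
# The construct and instrument terms come from the article; the syntax
# shown is generic and would need adapting to each database.

construct_terms = [
    "research literacy", "literacy and research", "literacy for research",
    "research skills", "research competence", "research ability",
]
instrument_terms = [
    "scale", "questionnaire", "measure", "tool",
    "instrument", "inventory", "survey",
]

construct_clause = " OR ".join(f'"{t}"' for t in construct_terms)
instrument_clause = " OR ".join(f'"{t}"' for t in instrument_terms)

# Combine the two keyword families with AND, as described in Stage 2.
query = f"({construct_clause}) AND ({instrument_clause})"
print(query)
```

Running the sketch prints a single combined string of the form ("research literacy" OR ...) AND ("scale" OR ...), which can then be adapted to each database's advanced search syntax.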

Stage 3: Study selection

In the third stage, the criteria of study inclusion and exclusion were determined according to the research questions and the objectives of the present scoping review, as presented below. Those studies that met the following inclusion criteria were selected; otherwise, they were excluded.

Inclusion criteria

The studies were included based on the following criteria: published in English between 2000 and 2022, in the form of articles in reputable, peer-reviewed journals, in the field of social sciences and education, with a main focus on the development and validation of research literacy instruments, and with the latest and most complete version of the proposed instrument available. We selected 2000 as the start of the period because the first book-length treatment of research literacy was published in 2007 (Shank & Brown, 2007), and we assumed that studies on research literacy would not predate this publication by more than five to seven years.

Exclusion criteria

To remove studies that were not relevant to the purposes of the current scoping review, we relied on the following exclusion criteria: studies published before 2000, in languages other than English, in any form other than articles (e.g., books, dissertations/theses, conference presentations, letters to the editor), in journals lacking credible peer review, in fields not related to social sciences and education (e.g., health, medicine, computer science), or without access to the latest and most complete version of the proposed instruments. We decided not to extend the study selection to disciplines other than education and social sciences because we assumed that research is discipline-based in nature. The selection was also limited to articles because they constitute a peer-reviewed form of publication intended to disseminate the latest empirical research results.

In the first step of the selection process, we examined the titles and excluded the articles that did not meet our inclusion criteria. In the next step, we reviewed the abstracts and full texts and excluded the articles that met the exclusion criteria or were duplicates. Figure 1 presents the details of selecting and screening the reviewed articles according to the PRISMA-ScR flowchart (Tricco et al., 2018), which is recommended for scoping reviews.

Figure 1: Study selection and screening process

Stage 4: Charting the data

After completing the initial list of selected articles, a data extraction form was used to code the articles based on the research objectives. The form included the following sections: study title, authors’ names, publication year, country, study design, study method, participants, data collection instruments, and study results. As a starting point, both authors piloted the form by extracting the data of five articles from the pool of selected articles. After the shortcomings of the initial version were addressed and its ambiguities resolved, the form was used to extract and chart the data of all the selected articles, including those used for piloting. To ensure the reliability of the coding, inter-coder agreement between the two coders was calculated using Cohen's kappa coefficient (see Shultz & Whitney, 2005), which was found to be 0.73.
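As a reminder of how this statistic works (a standard formula, not material from the study), the sketch below computes Cohen's kappa for two coders; the articles and code labels are invented for illustration.

```python
# Minimal sketch of Cohen's kappa for two coders assigning categorical
# codes to the same set of articles. The codes below are hypothetical.
from collections import Counter

coder_a = ["survey", "test", "test", "survey", "test", "mixed", "test", "survey"]
coder_b = ["survey", "test", "mixed", "survey", "test", "mixed", "survey", "survey"]

n = len(coder_a)
# Observed agreement: proportion of items both coders coded identically.
p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Chance agreement: sum over codes of the product of each coder's
# marginal proportions for that code.
counts_a, counts_b = Counter(coder_a), Counter(coder_b)
p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2

# Kappa corrects observed agreement for chance agreement.
kappa = (p_o - p_e) / (1 - p_e)
print(round(kappa, 2))
```

Because kappa discounts the agreement two coders would reach by chance given their marginal code frequencies, a value of 0.73 indicates agreement well above chance.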

Stage 5: Collating, summarizing, and reporting the results

In the analysis phase, we collated and summarized the data extracted from the selected research articles in the form of tables (Table 2 and the Appendix). The results of the analysis, as summarized in these tables, are presented in two sections: a) Research Literacy Assessment Instruments, and b) The Populations Studied Using the Research Literacy Assessment Instruments.

Results

Searching the databases using the keywords yielded a total of 334 records, 298 of which were excluded from further analysis due to duplication or the absence of one or more of the inclusion criteria after the articles’ titles, abstracts, and full texts had been reviewed. Thus, 36 full-text articles reached the analysis stage of the scoping review. Among these, six focused on developing and validating research literacy assessment instruments, and 30 examined populations with which these instruments were used. The characteristics of the selected articles are presented in the Appendix.

Research literacy assessment instruments

One of the first studies operationalizing research literacy was conducted by Meerah et al. (2011), who developed a scale to identify the research skills of Ph.D. students in Malaysia. The researchers identified five components: research capacity, reflection, problem-solving, communication, and research methodology skills. These components were operationalized in the form of a self-report questionnaire with 39 items. The researchers validated the questionnaire through a pilot study and a peer review, and its reliability was reported as 0.85. Second in this line of research is the study by Groß Ophoff et al. (2015), which developed a test to assess research literacy along three dimensions: information literacy, statistical literacy, and evidence-based reasoning. The test consisted of 29 scenario-based items with both multiple-choice and true/false questions. After ensuring the content validity of the test, the researchers examined the factor structure of the research literacy construct and proposed a three-factor model corresponding to these components. The reliability of the test was reported to be 0.90. Gess et al. (2018) designed a test to examine students’ research abilities at Humboldt-Universität zu Berlin. It consisted of 27 stand-alone multiple-choice items. The major components of the test were research methods, research methodology, and research procedures, areas that correspond to the stages of research (identifying research problems, planning research projects, and analyzing and interpreting data). The researchers conducted confirmatory factor analysis to validate the test, and its reliability was reported as 0.70.

Similarly, Yusof et al. (2018) developed a test with three main components (information literacy, research methodology knowledge, and statistical literacy) to determine the research literacy of education postgraduate research students in five universities in Malaysia. The test consisted of 40 stand-alone multiple-choice items. To validate the test, the researchers used principal component analysis, and the test's reliability was reported to be 0.85. Focusing on the quantitative aspect of research literacy, Zaker and Nosratinia (2021) developed and validated a self-report questionnaire with Iranian undergraduate and postgraduate students of English language and literature to assess the research literacy of English language teachers. The questionnaire, which included 50 items on a six-point Likert scale (from 1 = strongly agree to 6 = strongly disagree), focused on four dimensions of quantitative research literacy: developing research topic knowledge, research design knowledge, procedural knowledge, and data analysis knowledge. The researchers validated the questionnaire through exploratory and confirmatory factor analyses, and its reliability was found to be 0.90. Anani Sarab and Aghazadeh (in press) also designed a test to identify the strengths and weaknesses of the research literacy of English language teachers in public schools and language institutes. The test covered three main components (information literacy, data literacy, and evidence-based reasoning) and contained 62 scenario-based items presented as a combination of multiple-choice, true/false, matching, and sequencing formats. The test was validated by conducting two pilot studies and calculating the items’ psychometric properties (i.e., difficulty and discrimination indices). Then, to identify the factor structure of the construct of research literacy, exploratory factor analysis was performed, which confirmed a three-factor model comprising the previously mentioned components. The characteristics of each of the reviewed research literacy assessment instruments are summarized in Table 2.
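For readers unfamiliar with the psychometric properties mentioned above, the sketch below computes classical item difficulty and an upper-lower discrimination index from a scored response matrix. The data are invented, and the upper-lower method is one common variant; it is not necessarily the exact procedure used in the reviewed study.

```python
# Minimal sketch: classical item difficulty and discrimination indices
# for a dichotomously scored test (1 = correct, 0 = incorrect).
# The response matrix below is invented for illustration.
import numpy as np

# rows = test takers, columns = items
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 0, 1, 1],
])

# Difficulty (p-value): proportion of test takers answering each item correctly.
difficulty = responses.mean(axis=0)

# Discrimination: difference in item p-values between the top and bottom
# halves of test takers ranked by total score (the upper-lower method).
totals = responses.sum(axis=1)
order = np.argsort(totals)
half = len(order) // 2
lower, upper = responses[order[:half]], responses[order[-half:]]
discrimination = upper.mean(axis=0) - lower.mean(axis=0)

print("difficulty:", difficulty.round(2))
print("discrimination:", discrimination.round(2))
```

An item answered correctly by nearly everyone (difficulty near 1) or separating high and low scorers poorly (discrimination near 0) would typically be revised or dropped during piloting.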

Table 2: Characteristics of research literacy instruments

As shown in Table 2, of the six instruments developed for measuring research literacy, two were designed as self-report questionnaires and four as tests. The self-report questionnaires used descriptive statements on a Likert scale, while the tests used various multiple-choice, true/false, matching, and sequencing items. In most of the instruments designed as tests, only stand-alone multiple-choice questions were used to assess respondents' research literacy. However, two of the reviewed studies used scenario-based items that presented a hypothetical situation with related items in the form of multiple-choice questions combined with other formats. In addition, the research literacy components of the instruments have been labelled differently. They can be categorized as follows: research methods and design (research methodology skills, knowledge of research methods, research design); procedures of data collection and analysis (procedures, procedural and data analysis knowledge, statistical literacy); research information skills (problem-solving, developing research topic, information literacy); and the relevant reasoning skills (research capacity, reflection, evidence-based reasoning). The number of items in these instruments ranged from 27 to 62. In some instruments, the number of items allocated to each component was not evenly distributed. For example, in the test proposed by Groß Ophoff et al. (2015), more items were devoted to information literacy and statistical literacy, which aim at identifying sources of evidence and analyzing the evidence to answer research questions. Evidence-based reasoning, which includes the ability to reason based on the evidence gathered and depends on researchers' knowledge and methodological skills, had a smaller share of the test content than the other two components. Similarly, in the test developed by Yusof et al. (2018), more items are dedicated to statistical literacy and research methodology knowledge than to information literacy. It is worth noting that validity and reliability evidence (e.g., exploratory/confirmatory factor analyses, reliability coefficients) was reported for all the reviewed instruments.

Populations studied using the research literacy assessment instruments

Ismail and Meerah (2012) used the Research Skills Questionnaire to compare the research skills of Malaysian Ph.D. students studying in Malaysian universities and in universities abroad. To this end, the researchers compared the participants’ answers in five dimensions: research capacity, reflection, problem-solving, communication, and research methodology. According to the findings, there was no significant difference between domestic and international students in their research skills. However, the researchers acknowledged a need to strengthen the Ph.D. students’ research skills by encouraging them to participate in specialized workshops on research methodology.

After completing the validation of the Educational Research Literacy Test, Groß Ophoff et al. (2015) used the instrument to assess Educational Science students’ progress after participating in two research methodology courses in early education and teacher training degree programs in Germany. The results of the cross-sectional and longitudinal comparisons indicated a significant difference in the research literacy of the participants between the beginning and the end of the semester. In other words, at the end of the semester, the participants were more proficient in all the components of educational research literacy. In a similar study, Groß Ophoff et al. (2017a) used the same test to examine the difference in research literacy between students in two different education degree programs (teacher training versus educational studies) across six universities in Germany. The results revealed that the students of the teacher training program showed higher proficiency in educational research literacy (ERL); overall competency in ERL also differed across the universities. Subsequently, Schladitz et al. (2017) conducted a study with the aim of delving deeper into the factor structure of the Educational Research Literacy Test with three competing models: unidimensional, three-dimensional, and bifactor models. They also checked the results of the model comparison relative to the scoring of omitted and not-reached responses. The results showed that the construct of research literacy consists of one dominant factor and three secondary factors. It was further shown that handling omitted and not-reached responses as missing information made no difference to the results of the model comparison. Finally, Groß Ophoff and Egger (2021) investigated whether the Educational Research Literacy Test, developed and validated in the context of German teacher training university programs, could be applied to other national contexts, in this case the teacher training programs at Austrian universities. The results showed that the test can validly be used to assess the educational research literacy of teacher training students in both countries.

Gess et al. (2018) investigated the validity of score interpretations of a measure of social-scientific research literacy. They used the test with four groups (3rd- and 4th-semester undergraduates and 1st- and 3rd-semester graduate students) majoring in four disciplines: sociology, political science, educational studies, and psychology. The validity of the interpretation of the test results was investigated by comparing groups with different levels of competency. They further studied the incremental development of research competency from the undergraduate 3rd semester to the graduate 3rd semester. The results showed that research competency scores were related to progress, as graduates outperformed undergraduates. The incremental progress was also confirmed, as higher levels of competency were acquired as students moved from lower to higher degree programs. In another validation study, Gess et al. (2017) focused on the specific range of research competencies in the social sciences and used the Research Competency Test to assess the research literacy of social science undergraduate students studying in German universities using differential item functioning. The researchers came to the same conclusion that, despite the many similarities found across the different disciplines of the social sciences in terms of research competencies, there were differences between disciplines that could be attributed to their differential focus on quantitative or qualitative methods.

After reviewing the reliability and validity of the Research Literacy Test, Yusof et al. (2019) used the validated test to measure the research literacy level of postgraduate students in five universities in Malaysia. Based on the findings, while the postgraduate students’ information literacy level was above average, their statistical literacy, research methodology knowledge, and overall research literacy levels were below average. In another study, Yusof et al. (2021) used the Research Literacy Test to assess the statistical literacy of graduate students studying at Malaysian universities and reached similar results. They reported that the statistical literacy of graduate students was below the average level and that the most challenging items for students were those related to hypothesis testing. The researchers concluded that faculty should take steps to improve students’ skills in research methodology and encourage them to improve their research literacy and productivity by participating more actively in various research-related activities.

Finally, Zaker et al. (2019) used the Quantitative Research Literacy Questionnaire (QRL) to assess the impact of training in the critical appraisal of published research on all aspects of the quantitative research literacy of master’s degree (MA) students registered in a Teaching English as a Foreign Language (TEFL) research course at a university in Iran. To do so, the researchers divided the MA TEFL students into experimental and control groups. The experimental group received materials related to the critical appraisal of published research, while the control group received standard training based on the research methods and design topics listed in the course syllabus. The QRL was used as the pre-test and post-test, and the results showed a significant difference between the experimental and control groups. In other words, the researchers reported that teaching the critical appraisal of published research significantly impacted the teachers’ research literacy. In a similar study, Zaker et al. (2020) asked senior undergraduate students majoring in English language teaching (ELT) at a university in Iran to answer the same questionnaire (QRL) before and after participating in a course on the critical appraisal of published research papers in order to evaluate its effect on their quantitative data analysis knowledge. The researchers reported that participation in the course had a significant impact on the students’ progress in analyzing and interpreting quantitative data and examining hypotheses.

Based on the findings, in most studies the developed instruments were used to measure the research literacy of graduate and undergraduate students as the target population; only two studies assessed the research literacy level of teachers. In addition, in most studies, the research literacy assessment instruments were used as diagnostic tools to assess the participants’ standing on the various components of the construct of research literacy. In some studies, these instruments were used as a criterion for measuring participants’ progress before and after training courses related to research methodology, with the primary purpose of measuring the effectiveness of such courses. Overall, most of the reviewed studies reported that the research literacy of the participants was below the average level and that planning and measures needed to be taken to improve it.

Discussion

With regard to the instruments developed for measuring research literacy, the findings provide evidence that, in some studies, researchers have not provided any justifications regarding the lack of distinction between knowledge and skills or their exclusive focus on certain knowledge-based components. However, using knowledge-based and skill-based components in a single instrument requires identifying the differences between these two aspects (i.e., knowledge and skill) and clarifying the reason for selecting certain components. Another important finding concerning the components of research literacy instruments is that in some instruments, the number of questions allocated for each component is not evenly distributed, and a significant number of questions are dedicated to the statistical literacy component of the construct of research literacy. In addition, most instruments have focused on the quantitative research paradigm, so the assessment of qualitative and mixed-methods research paradigms has received less attention. These findings are in line with the prevailing research pattern called the standard research perspective. As Borg (2007) rightly observes, this pattern results from the mindset that quantification, measurement, statistical analyses, and generalizability of research results are essential components of research. In fact, such a view reflects a minimalist attitude towards research, which ultimately leads to ignoring certain research paradigms (McDonough & McDonough, 1990). A trace of this perspective can be observed in the development of research literacy assessment instruments. This trend can negatively impact the validity of the developed instruments as their reduced content cannot adequately cover all aspects of the construct of research literacy.

Regarding the scales used in the instruments, according to the findings reported in Table 2, two instruments used self-report scales. However, the use of such scales has been criticized due to the low correlations between subjective and objective criteria for measuring facets of competency (Norris et al., 2003). It should be acknowledged that, in self-report instruments, respondents tend to report what they claim to know rather than what they actually know (Boynton & Greenhalgh, 2004). In addition, when reporting what they know, respondents may assess their abilities in comparison with their peers. Thus, the reliability of data obtained through self-report instruments can be influenced by external factors such as respondents’ personalities (John & Robins, 1994) or their propensity for social desirability (Ziegler & Bühner, 2009). Given these limitations, self-report instruments may not provide a clear picture of respondents’ actual research abilities and may therefore have less validity and reliability than tests. As for the tests reported in the present scoping review, almost all the studies reported acceptable levels of reliability; however, the construct validity evidence provided varies across the studies. The Educational Research Literacy Test, developed and validated by Groß Ophoff and colleagues in a range of studies, is one of the scales for which construct evidence from the comparison of competing factor structure models is available. The factor structure evidence for this scale is complemented by evidence for the structural aspect of the construct showing the relevance of the construct-scoring criteria and scoring rubrics (see Messick, 1995). A second scale for which construct validity evidence has been provided is the measure of social-scientific research competency developed and validated by Gess et al. (2017) and further validated in a subsequent study published a year later (Gess et al., 2018). These two studies provided construct validity evidence by investigating group differences and the incremental progress of the construct, as reflected in students’ movement from lower to higher degree programs. This indicates the need for closer attention on the part of scale developers and users to the evidence provided for the construct validity of the scale in focus.

With regard to the type of questions used in the reviewed instruments, the findings indicate that most of the available test-based instruments used only stand-alone multiple-choice questions. However, it must be acknowledged that stand-alone multiple-choice items limit the assessment of higher levels of cognitive skills (Parker et al., 2012). In fact, limiting the questions to stand-alone multiple-choice items, which do not allow respondents to go beyond merely identifying the best answer, may provide insufficient information to fully diagnose their research literacy levels. To overcome this limitation, multiple-choice questions can be used in combination with other formats (e.g., true/false, matching, and sequencing questions). Scenario-based items, in which a hypothetical situation is presented along with related questions, can best be used to elicit not only the knowledge base of research literacy but also the ability to apply it in problem-based situations: respondents are expected to evaluate the scenario and indicate what they would do if they were actually confronted with such a situation in the research process (Banuls & Salmeron, 2007). Thus, compared to multiple-choice questions, which provide limited information about respondents' mastery of the subject (Hubbard et al., 2017), scenario-based items provide a clearer picture and a more precise assessment of their research literacy (Oostrom et al., 2016).
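For illustration only (an invented example, not taken from any of the reviewed instruments), a scenario-based item might present the following situation: “A teacher notices that her students’ reading scores dropped after she introduced a new textbook and wants to know whether the textbook caused the drop.” Respondents could then answer a multiple-choice question on which research design best addresses her question, a true/false item on whether a single class’s scores warrant a causal claim, and a sequencing task ordering the steps she should take, so that both knowledge and its application are elicited within one situation.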

Conclusion

The objective of the current scoping review was to identify and evaluate the instruments that have been developed to assess research literacy in education and the social sciences, and to review the results of applying these instruments to the research literacy of various populations across different contexts. According to the results, the research literacy of test takers is below the average level, which highlights the need to rethink research methodology training curricula and policies and to create opportunities to promote the target groups’ research literacy, including that of university students, who are potential future teachers. Furthermore, the results suggest that the available instruments may not reflect the full dimensions of research literacy and have several limitations: an excessive focus on certain components of the construct at the expense of others, reliance on subjective formats rather than objective ones (e.g., self-reports rather than tests), a focus on knowledge-based assessment that ignores the practical, skill-based dimensions of research literacy, an overemphasis on the quantitative research paradigm, and a lack of coverage of the qualitative and mixed-methods paradigms.

It is important to point out that the instruments developed to measure research literacy differ in characteristics based on the instructional context in which they were developed, and thus, they will provide different results on respondents’ research literacy. Such differences might not allow those responsible for educational decisions to reach comprehensive conclusions. As the developed instruments have their own strengths and weaknesses in measuring teachers' research literacy, it is recommended that they be selected with full awareness of their advantages and disadvantages and by taking into consideration the research skills expected from teachers in their institutional contexts. Finally, in order to obtain a clearer picture of the current state of research literacy among teachers, as long as the test structure is flexible enough and author permission is secured, one can selectively combine sections taken from the available research literacy assessment instruments to make a new test consistent with the contextual demands.

In this study, the review of research literacy instruments and of their application to target populations was conducted using only research papers published in English; future reviews could address this limitation by including articles in other languages. It should also be noted that the instruments were evaluated here through the scoping review method alone. Other data collection methods and instruments should be used so that the opinions of other stakeholder groups, including teacher educators, teachers, teacher trainees, and university students, can also be considered regarding the current status of their research literacy and the areas in which they need further support and training. Finally, the effects of factors such as field of study, academic degree, and institutional research culture on the different dimensions of research literacy are among the issues that require systematic investigation in future research.

References

Anani Sarab, M. R., & Aghazadeh, S. (in press). English language teachers’ research literacy test: Construction and validation. Journal of Language Horizons.

Arksey, H., & O'Malley, L. (2005). Scoping studies: Towards a methodological framework. International Journal of Social Research Methodology, 8(1), 19-32. https://doi.org/10.1080/1364557032000119616

Arreman, I. E. (2008). The process of finding a shape: Stabilizing new research structures in Swedish teacher education, 2000–2007. European Educational Research Journal, 7(2), 157–175. https://doi.org/10.2304/eerj.2008.7.2.157

Banuls, V. A., & Salmeron, J. L. (2007). A scenario-based assessment model—SBAM. Technological Forecasting and Social Change, 74(6), 750-762. https://doi.org/10.1016/j.techfore.2006.05.015

Borg, S. (2007). Research engagement in English language teaching. Teaching and Teacher Education, 23(5), 731-747. https://doi.org/10.1016/j.tate.2006.03.012

Borg, S. (2009). English language teachers’ conceptions of research. Applied Linguistics, 30(3), 358–388. https://doi.org/10.1093/applin/amp007

Borg, S., & Alshumaimeri, Y. (2012). University teacher educators’ research engagement: Perspectives from Saudi Arabia. Teaching and Teacher Education, 28(3), 347-356. https://doi.org/10.1016/j.tate.2011.10.011

Borg, S., & Liu, Y. (D.). (2013). Chinese college English teachers' research engagement. TESOL Quarterly, 47(2), 270-299. https://doi.org/10.1002/tesq.56

Boynton, P. M., & Greenhalgh, T. (2004). Hands-on guide to questionnaire research: Selecting, designing, and developing your questionnaire. BMJ: British Medical Journal, 328(7451), 1312–1315. https://doi.org/10.1136/bmj.328.7451.1312

Cochran-Smith, M., & Lytle, S. L. (1999). The teacher research movement: A decade later. Educational Researcher, 28(7), 15-25. https://doi.org/10.3102/0013189x028007015

Constable, H., & Boyd, P. (2021). Master teachers: The constrained role of research literacy. In P. Boyd, A. Szplit, & Z. Zbróg (Eds.), Developing teachers' research literacy: International perspectives (pp. 243-269). Wydawnictwo Libron.

Cordingley, P. (2015). The contribution of research to teachers’ professional learning and development. Oxford Review of Education, 41(2), 234–252. https://doi.org/10.1080/03054985.2015.1020105

Gess, C., Geiger, C., & Ziegler, M. (2018). Social-scientific research competency: Validation of test score interpretations for evaluative purposes in higher education. European Journal of Psychological Assessment, 35(5), 737–750. https://doi.org/10.1027/1015-5759/a000451

Gess, C., Wessels, I., & Blömeke, S. (2017). Domain-specificity of research competencies in the social sciences: Evidence from differential item functioning. Journal for Educational Research Online, 9(2), 11-36. https://doi.org/10.25656/01:14895

Griffoen, D. M. E. (2013). Research in higher professional education: A staff perspective [Unpublished doctoral dissertation]. University of Amsterdam. https://hdl.handle.net/11245/1.394559

Groß Ophoff, J., & Egger, C. (2021). Assessment of German and Austrian students' educational research literacy: Validation of a competency test based on cross-national comparisons. Studia Paedagogica, 26(4), 27-45. https://doi.org/10.5817/sp2021-4-2

Groß Ophoff, J., Schladitz, S., Leuders, J., Leuders, T., & Wirtz, M. A. (2015). Assessing the development of educational research literacy: The effect of courses on research methods in studies of educational science. Peabody Journal of Education, 90(4), 560–573. https://doi.org/10.1080/0161956x.2015.1068085

Groß Ophoff, J., Schladitz, S., & Wirtz, M. (2017a). Differences in research literacy in educational science depending on study program and university. In Proceedings of the 3rd International Conference on Higher Education Advances (pp. 1193-1202). Editorial Universitat Politècnica de València. https://doi.org/10.4995/head17.2017.5556

Groß Ophoff, J., Wolf, R., Schladitz, S., & Wirtz, M. (2017b). Assessment of educational research literacy in higher education: Construct validation of the factorial structure of an assessment instrument comparing different treatments of omitted responses. Journal for Educational Research Online, 9(2), 37-68. https://doi.org/10.25656/01:14896

Gutman, M., & Genser, L. (2017). How pre-service teachers internalize the link between research literacy and pedagogy. Educational Media International, 54(1), 63-76. https://doi.org/10.1080/09523987.2017.1324353

Haberfellner, C., & Fenzl, T. (2017). The utility value of research evidence for educational practice from the perspective of preservice student teachers in Austria: A qualitative exploratory study. Journal for Educational Research Online, 9(2), 69-87. https://doi.org/10.25656/01:14897

Harel, M., & Sela, O. (2011). “Like looking through a magnifying glass”: Teachers researching their work. Studies in Education, 5, 102–132. http://www.jstor.org/stable/23981572

Hubbard, J. K., Potts, M. A., & Couch, B. A. (2017). How question types reveal student thinking: An experimental comparison of multiple-true-false and free-response formats. CBE Life Sciences Education, 16(2). https://doi.org/10.1187/cbe.16-12-0339

Ismail, R., & Meerah, T. S. M. (2012). Evaluating the research competencies of doctoral students. Procedia-Social and Behavioral Sciences, 59, 244-247. https://doi.org/10.1016/j.sbspro.2012.09.271

John, O. P., & Robins, R. W. (1994). Accuracy and bias in self- perception: Individual differences in self-enhancement and the role of narcissism. Journal of Personality and Social Psychology, 66, 206-219. https://doi.org/10.1037/0022-3514.66.1.206

Kuo, Y.-H., & Chiu, J. (2009). Mentoring undergraduates in their research proposal writing: EFL students in Taiwan. The Mentor: An Academic Advising Journal, 11, 1-8. https://doi.org/10.26209/mj1161540

Liu, Y., & Borg, S. (2014). Tensions in teachers’ conceptions of research: Insights from college English teaching in China. Chinese Journal of Applied Linguistics, 37(3), 273-291. https://doi.org/10.1515/cjal-2014-0018

McDonough, J., & McDonough, S. (1990). What’s the use of research? ELT Journal, 44(2), 102–109. https://doi.org/10.1093/eltj/44.2.102

Meerah, T. S. M., Osman, K., Zakaria, E., Ikhsan, Z. H., Krish, P., Lian, D. K. C., & Mahmod, D. (2011). Developing an instrument to measure research skills. Procedia–Social and Behavioral Sciences, 60, 630–636. https://doi.org/10.1016/j.sbspro.2012.09.434

Meijer, P. C., Oolbekkink, H. W., Meirink, J. A., & Lockhorst, D. (2013). Teacher research in secondary education: Effects on teachers’ professional and school development, and issues of quality. International Journal of Educational Research, 57, 39–50. https://doi.org/10.1016/j.ijer.2012.10.005

Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons’ responses and performances as scientific inquiry into score meaning. American Psychologist, 50(9), 741–749. https://doi.org/10.1037/0003-066X.50.9.741

Nassaji, H. (2012). The relationship between SLA research and language pedagogy: Teachers' perspectives. Language Teaching Research, 16(3), 337–365.

Norris, S. P., Phillips, L. M., & Korpan, C. A. (2003). University students’ interpretation of media reports of science and its relationship to background knowledge, interest, and reading difficulty. Public Understanding of Science, 12(2), 123–145. https://doi.org/10.1177/09636625030122001

Oostrom, J. K., Melchers, K. G., Ingold, P. V., & Kleinmann, M. (2016). Why do situational interviews predict performance? Is it saying how you would behave or knowing how you should behave? Journal of Business and Psychology, 31, 279–291. https://doi.org/10.1007/s10869-015-9410-0

Parker, J. M., Anderson, C. W., Heidemann, M., Merrill, J., Merritt, B., Richmond, G., & Urban-Lurain, M. (2012). Exploring undergraduates’ understanding of photosynthesis using diagnostic question clusters. CBE-Life Sciences Education, 11(1), 47–57. https://doi.org/10.1187/cbe.11-07-0054

Schladitz, S., Groß Ophoff, J., & Wirtz, M. (2015). Konstruktvalidierung eines Tests zur Messung bildungswissenschaftlicher Forschungskompetenz [Construct validation of a test to measure educational research competence]. In S. Blömeke & O. Zlatkin-Troitschanskaia (Eds.), Kompetenzen von Studierenden, (pp. 167–184). Beltz.

Shank, G., & Brown, L. (2007). Exploring educational research literacy. Routledge.

Shultz, K. S., & Whitney, J. W. (2005). Measurement theory in action. Sage.

Stenhouse, L. (1975). An introduction to curriculum research and development. Heinemann.

Tack, H., & Vanderlinde, R. (2016). Measuring teacher educators’ researcherly disposition: Item development and scale construction. Vocations and Learning, 9(1), 43–62. https://doi.org/10.1007/s12186-016-9148-5

Tindowen, D. J., Guzman, J., & Macanang, D. (2019). Teachers’ conception and difficulties in doing action research. Universal Journal of Educational Research, 7(8), 1787–1794. https://doi.org/10.13189/ujer.2019.070817

Toom, A., Kynäslahti, H., Krokfors, L., Jyrhämä, R., Byman, R., Stenberg, K., Maaranen, K., & Kansanen, P. (2010). Experiences of a research-based approach to teacher education: Suggestions for future policies. European Journal of Education, 45(2), 331–344. https://doi.org/10.1111/j.1465-3435.2010.01432.x

Tricco, A. C., Lillie, E., Zarin, W., O'Brien, K. K., Colquhoun, H., Levac, D., Moher, D., Peters, M. D. J., Horsley, T., Weeks, L., Hempel, S., Akl, E. A., Chang, C., McGowan, J., Stewart, L., Hartling, L., Aldcroft, A., Wilson, M. G., Garritty, C., & Straus, S. E. (2018). PRISMA extension for scoping reviews (PRISMA-ScR): Checklist and explanation. Annals of Internal Medicine, 169(7), 467-473. https://doi.org/10.7326/m18-0850

Tullock, B., & Ortega, L. (2017). Fluency and multilingualism in study abroad: Lessons from a scoping review. System, 71, 7-21. https://doi.org/10.1016/j.system.2017.09.019

Ulla, M. B. (2018). Benefits and challenges of doing research: Experiences from Philippine public-school teachers. Issues in Educational Research, 28(3), 797–810. https://www.iier.org.au/iier28/ulla.pdf

Westbroek, H., Jongejan, H. W., & de Vries, B. (2021). Research literacy in initial teacher education: Supporting the development of personal theories. In P. Boyd, A. Szplit, & Z. Zbróg (Eds.), Developing teachers' research literacy: International perspectives (pp. 113-135). Wydawnictwo Libron.

Williams, D., & Coles, L. (2007). Teachers’ approaches to finding and using research evidence: An information literacy perspective. Educational Research, 49(2), 185–206. https://doi.org/10.1080/00131880701369719

Wilson, A., Åkerlind, B., Walsh, B., Stevens, B., Turner, B., & Shield, A. (2013). Making professionalism meaningful to students in higher education. Studies in Higher Education, 38(8), 1222–1238. https://doi.org/10.1080/03075079.2013.833035

Xerri, D. (2018). The use of interviews and focus groups in teacher research. The Clearing House: A Journal of Educational Strategies, Issues and Ideas, 91(3), 140–146. https://doi.org/10.1080/00098655.2018.1436820

Yamin-Ali, J., & Sambucharan, M. (2021). Action research journals: A window into how student teachers build research literacy. In P. Boyd, A. Szplit, & Z. Zbróg (Eds.), Developing teachers' research literacy: International perspectives (pp. 137-163). Wydawnictwo Libron.

Yusof, I. J., Latif, A. A., Derasid, N. A. C., & Jani, M. D. M. (2018). Measuring research literacy: Development of research literacy test. The Turkish Online Journal of Design, Art and Communication (TOJDAC), 8, 1649–1655. https://doi.org/10.7456/1080SSE/221

Yusof, I. J., Latif, A. A., Derasid, N. A. C., & Jani, M. D. M. (2019). Research literacy level of education postgraduate research students using Rasch measurement model. International Journal of Recent Technology and Engineering (IJRTE), 8, 791-796. https://doi.org/10.35940/ijrte.c1242.1083s219

Yusof, I. J., Latif, A. A., & Supie, H. S. M. (2021). Assessing statistical literacy level of postgraduate education research students in Malaysian research universities. Turkish Journal of Computer and Mathematics Education (TURCOMAT), 12(5), 1318-1324. https://doi.org/10.17762/turcomat.v12i5.1800

Zaker, A., & Nosratinia, M. (2021). Development and validation of a quantitative research literacy questionnaire. International Journal of Foreign Language Teaching and Research, 9(37), 11-30. https://doi.org/10.52547/jfl.9.37.11

Zaker, A., Nosratinia, M., Birjandi, P., & Yazdanimoghaddam, M. (2019). The impact of implementing critical appraisal on EFL teachers' quantitative research literacy. Journal of Language and Translation, 9(2), 141-155. https://dorl.net/dor/20.1001.1.20088590.2019.9.2.10.0

Zaker, A., Nosratinia, M., Birjandi, P., & Yazdani Moghaddam, M. (2020). The impact of implementing critical appraisal on EFL teachers’ data analysis knowledge. International Journal of Foreign Language Teaching and Research, 8(29), 59-74. https://journals.iau.ir/article_667913.html

Ziegler, M., & Bühner, M. (2009). Modeling socially desirable responding and its effects. Educational and Psychological Measurement, 69(4), 548–565. https://doi.org/10.1177/0013164408324469

