Introduction
“I never knew anybody…who found life simple. I think a life or a time looks simple when you leave out the details.” ― Ursula K. Le Guin, The Birthday of the World and Other Stories.
Whether the teaching and learning of writing focuses on errors, audience, purpose, genre, or something else, the process remains complex and emergent for both the native speaker and the English language learner. “An emergentist perspective [is] one that sees linguistic signs not as autonomous objects…, either social or psychological, but as contextualized products of the integration of various activities by… individuals in particular communicative situations” (Larsen-Freeman, 2006, p. 594). The author recognizes seven assumptions underlying complex, dynamic systems: 1) that language is not fixed; 2) that interlanguage and target language (L2) never completely converge; 3) that language performance is not divided into discrete stages; 4) that no single performance in any one subsystem can totally account for linguistic progress; 5) that language is both cognitive and social; 6) that language development does not progress in a consistent manner; and 7) that although variations in individual developmental paths exist, certain patterns will emerge.
The fourth assumption is of particular interest when it comes to learning and receiving teacher feedback that supports the writing skill. Not only should English language learners receive timely feedback when they need it most, but they should also receive feedback on different aspects of writing: accuracy, complexity, and fluency (Larsen-Freeman, 2006). Thus, assessing the writing skill of English language learners can extend beyond a focus on syntactic, morphological, and lexical errors to include writing complexity and fluency as well.
Accuracy
Developing the writing skill emerges from the development of other communicative skill sets. What makes the writing skill so important is its role in connecting the writer with the reader and the impact that listening, speaking, reading, vocabulary, grammar, and discourse strategies have on effective communication (Mugableh & Khreisat, 2019). When it comes to assessing written texts, most language instructors tend to focus on unfocused or comprehensive grammatical corrective feedback (CF) (Guénette, 2012; Lee, 2008). A focus on grammatical errors tends to concentrate on syntax and morphology. To be more specific, several studies on written errors (Ababneh, 2017; Hamed, 2018; Hussen, 2015; Navas Brenes, 2017) have indicated that learners of English from different mother tongues have difficulty with spelling, punctuation, prepositions, articles, tenses, word order, and word choice, among other areas.
If CF is to be comprehensive and grammatical – yet useful – then awareness and prioritization of the types of written errors that English language learners commit will help language instructors employ more efficient assessments in the language-learning classroom. Sermsook et al. (2017) related the types of written errors produced by English language learners from Thailand to the sources of those errors. They ranked the frequency of different error types in two categories: 1) errors at the sentential level and 2) errors at the word level. The most common errors at the sentential level included verb tense, subject-verb agreement, and sentence fragments, while the most common at the word level included articles, nouns, and pronouns. The authors found that most of the errors, regardless of type, came from one source: interlingual interference. In contrast, Jamil et al. (2016) researched postgraduate Pakistani English language learners and divided the types of written errors into six categories: 1) the use of incorrect forms of verbs, 2) the use of present tense instead of past tense, 3) the use of past tense instead of present tense, 4) spelling errors, 5) inappropriate usage of vocabulary, and 6) subject-verb agreement. The authors recognized that unfocused grammatical CF is still a common teaching practice. Their error analysis further revealed that, although there is some overlap in the types of errors being committed, some error types are linked to the prior learning experiences English language learners have had with the target language.
In addition to categorizing written errors at the sentence and word levels, they may also be characterized metalinguistically. When comparing phonological, morphological, and syntactic awareness, for instance, the latter two were found to explain more variance in the writing competence of Chinese English language learners (Sun et al., 2018). Mekala and Ponmani (2017) ranked morphological and syntactic errors together based on frequency, which provided context when employing appropriate forms of written corrective feedback: prepositions (22% of total errors), verb tense (15%), articles (13%), concord (12%), and spelling (9%). Conversely, Saavedra Jeldres and Campos Espinoza (2018) acknowledged that although an unfocused and comprehensive approach to written corrective feedback can include up to 15 different types of errors, they instead targeted five linguistic errors based on grammar (i.e., use of the indefinite article, subject-verb agreement, and subject omission) and writing mechanics (i.e., capitalization and spelling). Whether at the sentence, word, morphological, or syntactic level, an error analysis of accuracy becomes a useful precursor when making pedagogical decisions about instruction and assessment. To provide a richer educational context around the accuracy of an English language learner’s text, fluency and complexity should also be taken into consideration.
Another way to frame metalinguistic awareness, that is, “awareness of the forms, structure, and other aspects of a language” (Richards & Schmidt, 2002, p. 329), when analyzing written errors is by pairing lexicon with grammar. To see whether computer-mediated communication improves the writing skill of learners of Spanish, González-Bueno and Pérez (2000) analyzed grammatical and lexical accuracy and quantity of language, concluding that electronic mail in foreign language writing had no real advantage over the paper-and-pencil version of dialogue journals and compositions. Gorozhankina and Bourne (2014) grouped grammatical and lexical accuracy to analyze the quality of translated texts (i.e., brochures) in the tourism industry and found that lexical accuracy had a larger impact on the quality of the text than grammatical accuracy. Within a classroom context, written corrective feedback had a significant positive effect on the writing skill based on both grammatical and lexical accuracy (Al-Hazzani & Altalhab, 2018). Thus, a more holistic approach to grammatical and lexical accuracy in both the classroom setting and in real-world contexts can better inform one’s practice on how written texts are being interpreted.
A final way to measure accuracy is by chunking text into T-units. Accuracy can then be expressed as the ratio of total errors to total T-units (E/T) (Wolfe-Quintero et al., 1998). Hunt (1966) defined T-units as “minimal terminable units” that include “exactly one main clause plus whatever subordinate clauses are attached to that main clause” (p. 737). So, in addition to analyzing accuracy metalinguistically, accuracy can also be expressed as a ratio, representing one’s development at a particular time, or can show growth by comparing different accuracy ratios over time.
Fluency & Grammatical Complexity
As T-units are useful in measuring accuracy, they are also helpful when measuring fluency and grammatical complexity. Fluency reflects how comfortable an English language learner is when producing written text, which is best measured by “… counting the number, length, or rate of production unit… or [more specifically] T-units” (Wolfe-Quintero et al., 1998, p. 14). Hence, a higher words-per-T-unit ratio would suggest a more competent English writer (Larsen-Freeman, 1978).
T-units are not only used to determine fluency of the written skill, but also to analyze grammatical complexity. Complexity refers to the “organization... [and] syntactic patterning” the writer uses to convey an idea (Foster & Skehan, 1996, p. 303). Thus, measuring clauses, sentences, and T-units as production units becomes the best way to determine grammatical complexity based on how varied and sophisticated the text happens to be (e.g., the clause-to-T-unit ratio (CT); Wolfe-Quintero et al., 1998).
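The three T-unit-based measures discussed above reduce to simple ratios once a text has been segmented. A minimal sketch follows; the counts are hypothetical and would, in practice, come from a manual segmentation of a learner essay.

```python
# Sketch of the three T-unit-based measures (Wolfe-Quintero et al., 1998).
# The counts below are hypothetical, standing in for a manually
# segmented learner text.

def tunit_ratios(errors: int, words: int, clauses: int, t_units: int) -> dict:
    """Return accuracy (E/T), fluency (W/T), and complexity (C/T) ratios."""
    return {
        "accuracy_E_T": errors / t_units,     # errors per T-unit
        "fluency_W_T": words / t_units,       # words per T-unit
        "complexity_C_T": clauses / t_units,  # clauses per T-unit
    }

# Hypothetical essay: 12 errors, 180 words, 22 clauses, 15 T-units
ratios = tunit_ratios(errors=12, words=180, clauses=22, t_units=15)
print(ratios)  # accuracy 0.8, fluency 12.0, complexity ~1.47
```

Lower E/T and higher W/T and C/T values would, on these measures, suggest a more accurate, fluent, and grammatically complex text.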
Of the few studies that have evaluated accuracy alongside grammatical complexity, Lahuerta (2017) sought to determine whether corrective feedback has an effect on the two. The participants were two groups of Spanish EFL learners: undergraduate students enrolled in the Degree in Modern Languages and their Literatures at a university in the north of Spain. Group A was formed by 34 advanced students, and group B was made up of 66 upper-intermediate students. Participants were asked to choose a topic and write a composition of between 300 and 350 words. The written texts were scored along the following parameters: grammatical complexity, accuracy, and surface errors. The researcher found significant differences between advanced and upper-intermediate students both in complexity and in accuracy.
In another related study, Mubarak (2013) included two experimental groups and a control group and evaluated their development by applying pre-, post-, and delayed post-tests aimed at investigating the effectiveness of direct and indirect corrective feedback. The researcher found that “even though the students had improved during the course of the experiment, neither type of corrective feedback had a significant effect on the accuracy, grammatical complexity, or lexical complexity of their writing, and that there was no difference in the effectiveness between the first type of feedback compared to the second” (p. ii).
Research Questions
To determine accuracy, fluency, and grammatical complexity, this descriptive-analysis study sought to collect and analyze quantitative data to answer the following research questions:
- How accurate are English language learner writers at an intermediate level?
- How fluent are English language learner writers at an intermediate level?
- How grammatically complex are the texts of English language learners at an intermediate level?
Method
Participants
The participants of this study included 31 Spanish-speaking Mexican English language learners who were enrolled in an English skills development course divided equally between writing and speaking. The course is designed for English language learners at an intermediate level (i.e., B1-B2 according to the Common European Framework) and is taken during the first year of a bachelor’s degree program in English language teaching (Council of Europe, 2001). Nineteen participants identified as female while 12 identified as male, with an average age of 20.75, ranging from 19 to 25.
The four-year bachelor’s degree program in English language teaching includes a propaedeutic year (henceforth Prope) for learners who enter the university with a TOEFL score of less than 480; it is designed to help the English language learner achieve a B1 level by offering 30 hours per week of courses delivered in English. The Prope year consists of two semesters of dedicated courses in grammar (five hours per week), reading (five hours per week), writing (five hours per week), listening and speaking (10 hours per week), and English culture (five hours per week). Vocabulary development is integrated throughout all propaedeutic courses. Of the 31 participants, five of the 12 males (42%) did not take Prope, while two of the 19 females (11%) did not take Prope.
Instruments and Data Collection
The participants were shown an image of two women sitting next to each other, seen from behind, with the woman on the left resting her arm around the one on the right, who was leaning against her. The participants were instructed that they would have 50 minutes to write an essay on the topic of friendship. Participants were free to choose the audience, purpose, first, second, or third person, and overall organization of ideas. The researchers monitored the writing of the essays and collected all 31 essays at the end of the time allotment. All participants signed an informed consent form at the beginning, agreeing to take part in the study and accepting that their participation or non-participation would in no way affect their grade and that their names would not be revealed.
Data Analysis
Each essay was copied and then analyzed separately by two researchers according to the error correction code list (see Table 1). The inter-rater reliability was 92%, and cases of disagreement were discussed until a resolution was reached to determine each type of error (i.e., morphological, syntactic, or lexical). The web-based, cross-platform software package Dedoose (https://www.dedoose.com) was used to upload the texts and to record and analyze the different types of errors.
Table 1: Error correction code list
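The 92% inter-rater reliability figure can be read as simple percent agreement between the two coders. A minimal sketch under that assumption follows; the code labels are illustrative, not the study's actual code list.

```python
# Minimal sketch of inter-rater reliability as simple percent agreement,
# one common way to arrive at a figure like the 92% reported above.
# The label lists below are illustrative, not the study's data.

def percent_agreement(rater_a: list, rater_b: list) -> float:
    """Share of items that both raters coded identically."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Both raters must code the same set of items")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Hypothetical codes for five errors: morphological, syntactic, lexical
codes_a = ["morph", "syn", "lex", "morph", "syn"]
codes_b = ["morph", "syn", "lex", "lex", "syn"]
agreement = percent_agreement(codes_a, codes_b)
print(agreement)  # 0.8, i.e., 80% agreement on this toy sample
```

Disagreements (here, the fourth item) are exactly the cases the researchers would discuss until reaching a resolution.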
Once error types were verified, the two researchers checked the word counts and T-units of each of the 31 essays, first separately, then coming together to verify frequency counts. Because the main objective of this study was to concentrate on errors of accuracy, complexity, and fluency, errors in punctuation (except comma splices and run-on sentences) and spelling were not considered. Comma splice and run-on errors were categorized as syntactic errors.
Results and Discussion
Accuracy
The average number of errors across all participants was 29.07, with an average of 32.17 among males and 27.11 among females. In the 31 essays analyzed, 901 total errors were found, of which 222 (24.6%) were classified as syntactic errors (see Table 2). One of the differences between these findings and those of Sermsook et al. (2017) concerned the errors made in word order and fragments. In Sermsook et al. (2017), word order and fragment errors made up 1.69% and 7.7% of total errors respectively, whereas in this study they reached 2.77% and 2.44% respectively.
Table 2: Students’ syntactic errors
The morphological errors committed by the participants totaled 428, or 47.5% of total errors, and included the following: word form, verb tense, article, preposition, and agreement (see Table 3). Verb tense errors at the sentential level in Sermsook et al. (2017) made up only 3.38% of the total errors, whereas in this study they accounted for 10.1%. Interestingly, in Mekala and Ponmani (2017), prepositions were the most common type of error (22.07%), compared to only 7.5% in this study and only 5.07% in Sermsook et al. (2017).
Table 3: Students’ morphological errors
The lexical errors totaled 251, or 27.9% of the total errors, and included only word choice and wrong word (see Table 4). Word choice as an error at the word level was considerably higher in this study (15.7%) when compared to 3.72% in Sermsook et al. (2017) and only 0.69% in Mekala and Ponmani (2017). Averaging the percentage of total errors across the error types within each category (syntax, morphology, and lexicon), lexical errors averaged 13.95% per error type (i.e., word choice at 15.7% and wrong word at 12.2%), compared to 8.84% for morphological errors and 4.92% for syntactic errors.
Table 4: Students’ lexical errors
When comparing the central tendency of syntactic, morphological, and lexical errors, the average number of morphological errors (13.81) is greater than the corresponding averages of syntactic (7.16) and lexical (8.09) errors. On the other hand, there is a greater dispersion in the number of syntactic errors that students commit, which includes three outliers (see Figure 1); no outliers were observed for morphological and lexical errors. The standard deviations were 7.33, 6.37, and 5.52 for syntactic, morphological, and lexical errors respectively.
Figure 1: Central tendency of error types
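The category means above follow directly from the category totals divided by the 31 participants, which offers a quick arithmetic check of the reported figures:

```python
# Quick check: each category mean is the category's error total
# divided by the 31 participants (totals taken from Tables 2-4).
participants = 31
totals = {"syntactic": 222, "morphological": 428, "lexical": 251}

means = {cat: round(n / participants, 2) for cat, n in totals.items()}
print(means)
# syntactic 7.16 and morphological 13.81 match the reported means;
# 251/31 ≈ 8.097, which rounds to 8.1 (reported as 8.09).
```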
When comparing written errors based on gender, males had a higher average for both syntactic and lexical errors: an average of 9.5 syntactic errors and 9.03 lexical errors, compared to females, who averaged 5.68 and 7.47 respectively (see Figure 2). Participants who took a year of Prope had a lower average of syntactic errors than those who did not, but a higher average of morphological and lexical errors. Although the difference in average lexical errors between the two groups was slight, when it came to morphological errors, those who took Prope averaged 14.56 errors compared to only 10.67 for those who did not.
Figure 2: Average errors based on gender and Prope
Accuracy, Grammatical Complexity, and Fluency Ratios
While accuracy was expressed above in terms of syntax, morphology, and lexicon, a separate accuracy ratio can be compared with fluency and grammatical complexity ratios to provide further context. In this study, the total number of T-units across all of the written essays was 1,039, the total number of clauses was 1,572, and the total word count was 11,501. Based on these totals, the following ratios were found: 1) an accuracy ratio of .87 (total errors divided by total T-units), 2) a grammatical complexity ratio of 1.51 (total clauses divided by total T-units), and 3) a fluency ratio of 11 (total words divided by total T-units).
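The three ratios reported above can be reproduced directly from the study's totals:

```python
# Reproducing the study's three ratios from its reported totals:
# 901 errors, 1,572 clauses, 11,501 words, 1,039 T-units.
errors, clauses, words, t_units = 901, 1572, 11501, 1039

accuracy = errors / t_units     # E/T ≈ 0.87 errors per T-unit
complexity = clauses / t_units  # C/T ≈ 1.51 clauses per T-unit
fluency = words / t_units       # W/T ≈ 11.07 words per T-unit

print(round(accuracy, 2), round(complexity, 2), round(fluency, 2))
```

The fluency value of roughly 11.07 words per T-unit is consistent with the rounded figure of 11 reported in the text.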
Conclusions
Knowing what to focus on when providing written feedback to English language learners is simple yet complex. An emergent approach that includes accuracy, fluency, and complexity provides a more comprehensive lens when designing instruction and assessment around the writing skill – one of the most demanding skills English language learners face. More than expressing accuracy as a single ratio, an awareness of an English language learner’s syntax, morphology, and lexicon informs the teacher practitioner about where instruction and assessment might have the biggest impact on learning outcomes. Thus, the following key implications emerge from this study.
Pedagogy that limits feedback to accuracy alone ignores key activities that can also promote fluency and complexity. Timed writing activities that bring about awareness of words per minute and words per sentence or clause can add perspective to student writing and development. Regarding complexity, an awareness of independent and dependent clauses, and how they are situated throughout a text, can bring a written discourse approach into one’s teaching practice.
From a metalinguistic perspective, the accuracy of written texts can be categorized in terms of syntax, morphology, and lexicon, and these categories can differ based on the culture and academic levels of the English language learners when compared to the literature. When providing unfocused or comprehensive written corrective feedback, knowing which error types and categories occur most frequently can help inform the instructor about which errors have the biggest impact on the group overall. This study shows that activities designed only to improve word choice would have a different impact on written development than activities solely structured to improve errors in run-on sentences, for instance. At the same time, this study also shows a pedagogical need to design lessons that integrate written syntax, morphology, and lexicon, so that various entry points into the development of the writing skill can be explored.
One of the limitations of this study was that since participants were not instructed to focus on any particular type of writing (e.g., narrative, expository, etc.), the types of written texts varied, which could have had an impact on the accuracy, fluency and complexity of the texts. Also, this study excludes other key aspects of developing a text: genre, audience, and purpose. Further research is needed to see how the details of accuracy (i.e., syntactic, morphological, and lexical errors), fluency, and complexity change over time, and how specific profiles of English language learners and their prior exposure to the target language (e.g., English) influence accuracy, fluency, and complexity.
References
Ababneh, I. (2017). Analysis of written English: The case of female university students in Saudi Arabia. International Journal of Social Science Studies, 5(4), 1-5. https://doi.org/10.11114/ijsss.v5i4.2264
Al-Hazzani, N., & Altalhab, S. (2018). Can explicit written corrective feedback develop grammatical and lexical accuracy of Saudi EFL learners? International Journal of Education and Literacy Studies, 6(4), 16–24. https://doi.org/10.7575/aiac.ijels.v.6n.4p.16
Council of Europe. (2001). Common European framework of reference for languages: Learning, teaching, assessment. Cambridge University Press.
Foster, P., & Skehan, P. (1996). The influence of planning and task type on second language performance. Studies in Second Language Acquisition, 18(3), 299–323. https://doi.org/10.1017/S0272263100015047
González-Bueno, M., & Pérez, L. C. (2000). Electronic mail in foreign language writing: A study of grammatical and lexical accuracy, and quantity of language. Foreign Language Annals, 33(2), 189–198. https://doi.org/10.1111/j.1944-9720.2000.tb00911.x
Gorozhankina, T., & Bourne, J. (2014). La corrección léxica y gramatical: Dos parámetros de calidad en la traducción turística al ruso [Lexical and grammatical correctness: Two quality parameters in tourist translation into Russian]. Tonos Digital, 27, 1–21. http://hdl.handle.net/10481/37631
Guénette, D. (2012). The pedagogy of error correction: Surviving the written corrective feedback challenge. TESL Canada Journal, 30(1), 117–126. https://doi.org/10.18806/tesl.v30i1.1129
Hamed, M. (2018). Common linguistic errors among non-English major Libyan students writing. Arab World English Journal, 9(3), 219-232. https://dx.doi.org/10.24093/awej/vol9no3.15
Hunt, K. W. (1966). Recent measures in syntactic development. Elementary English, 43(7), 732–739. https://www.jstor.org/stable/41386067
Hussen, M. (2015). Assessing students’ paragraph writing problems: The case of Bedeno Secondary School, grade 10 English class in focus [Unpublished master’s thesis], Haramaya University.
Jamil, S., Majoka, M. I., & Kamran, U. (2016). Analyzing common errors in English composition at postgraduate level in Khyber Pakhtunkhwa (Pakistan). Bulletin of Education & Research, 38(2), 53–67. http://pu.edu.pk/images/journal/ier/PDF-FILES/4_38_2_16.pdf
Lahuerta, A. C. (2017). Study of accuracy and grammatical complexity in EFL writing. International Journal of English Studies, 18(1), 71-89. https://doi.org/10.6018/ijes/2018/1/258971
Larsen-Freeman, D. (1978). An ESL index of development. TESOL Quarterly, 12(4), 439–448. https://doi.org/10.2307/3586142
Larsen-Freeman, D. (2006). The emergence of complexity, fluency, and accuracy in the oral and written production of five Chinese learners of English. Applied Linguistics, 27(4), 590–619. https://doi.org/10.1093/applin/aml029
Lee, I. (2008). Understanding teachers’ written feedback practices in Hong Kong secondary classrooms. Journal of Second Language Writing, 17(2), 69–85. https://doi.org/10.1016/j.jslw.2007.10.00
Mekala, S., & Ponmani, M. (2017). The impact of direct written corrective feedback on low proficiency ESL learners' writing ability. The IUP Journal of Soft Skills, 11(4), 23–54.
Mubarak, M. (2013). Corrective feedback in L2 writing: A study of practices and effectiveness in the Bahrain context [Unpublished doctoral dissertation]. University of Sheffield. https://etheses.whiterose.ac.uk/4129/1/PhD_MMubarak_2013.pdf
Mugableh, A. I., & Khreisat, M. N. (2019). Employing TBL and 3PS learning approaches to improve writing skill among Saudi EFL students in Jouf University. International Journal of Linguistics, Literature and Translation (IJLLT), 2(1), 217–229. http://files.eric.ed.gov/fulltext/ED593452.pdf
Navas Brenes, C. A. (2017). Observing students’ syntactic errors and the perceptions toward writing in the composition course LM-1235. Káñina, Revista de Artes y Letras, 41(1), 109-130. https://revistas.ucr.ac.cr/index.php/kanina/article/view/28839/30221
Richards, J. C., Schmidt, R., Platt, H., & Schmidt, M. (2002). Longman dictionary of language teaching & applied linguistics (3rd ed.). Pearson.
Saavedra Jeldres, P. A., & Campos Espinoza, M. (2018). Combining the strategies of using focused written corrective feedback: A study with upper-elementary Chilean EFL learners. Colombian Applied Linguistics Journal, 20(1), 79–90. https://doi.org/10.14483/22487085.12332
Sermsook, K., Liamnimitr, J., & Pochakorn, R. (2017). An analysis of errors in written English sentences: A case study of Thai EFL students. English Language Teaching, 10(3), 101–110. https://doi.org/10.5539/elt.v10n3p101
Sun, B., Hu, G., & Curdt-Christiansen, X. L. (2018). Metalinguistic contribution to writing competence: A study of monolingual children in China and bilingual children in Singapore. Reading and Writing, 31, 1499–1523. https://doi.org/10.1007/s11145-018-9846-5
Wolfe-Quintero, K., Inagaki, S., & Kim, H.-Y. (1998). Second language development in writing: Measures of fluency, accuracy & complexity. University of Hawaii Press, Second Language Teaching & Curriculum Center.