Poster Session
Evaluating the Performance of the Hugging Chat Assistant as a Grammar Checker for EFL Learners: A Comparison with Grammarly
Hsin Yueh Yang
With the rise of artificial intelligence technologies, large language models (LLMs) and natural language processing (NLP) systems have enabled a variety of AI assistants. In particular, AI assistants tailored to learners' needs hold great promise as grammar checkers that improve learners' writing ability and reduce teachers' workload in checking grammatical errors. However, the validation of AI-based grammar checkers requires further research. This study evaluates the effectiveness of the Hugging Chat Assistant, powered by Meta Llama 3, as a grammar checker for English as a Foreign Language (EFL) learners. The researcher developed custom prompts for the Hugging Chat Assistant to detect grammatical errors, following principles from existing studies, and compared its performance with that of Grammarly. A dataset of 100 grammatically incorrect sentences, targeting five common error types made by intermediate learners, was used to assess the tools' precision and recall. The findings suggest that the Hugging Chat Assistant outperforms Grammarly in both accuracy and sensitivity in detecting grammatical errors. Additionally, its ability to provide contextualized and personalized explanations was evaluated positively by English-major college students. However, a questionnaire revealed that EFL learners hold mixed opinions about the two checkers' user interfaces and the feedback they provide on writing tasks. While the Hugging Chat Assistant demonstrates significant potential, further research with a larger dataset and longitudinal studies is needed to validate these findings, track user experiences, and assess these tools' potential impact on EFL learners' writing development.
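As a note on the evaluation metrics, and assuming the study follows the standard definitions for error detection (TP = correctly flagged errors, FP = spurious flags, FN = missed errors), precision and recall are computed as:

\[
\text{Precision} = \frac{TP}{TP + FP}, \qquad \text{Recall} = \frac{TP}{TP + FN}
\]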
Analysis of the Role and Efficacy of Google Gemini in Assisting the English Essay Writing Process: Cases of High School Students in Taiwan
Yu Shan Sung
Taiwan administers the English General Scholastic Ability Test (GSAT), a high-stakes college entrance exam, to measure students' learning outcomes. The GSAT consists of a reading section and a writing section, the latter requiring students to write a short essay in English on a given topic. Since the reading section occupies a large portion of the exam, high school English teachers usually focus more on teaching grammar, vocabulary, and reading skills and less on the instruction and practice of essay writing. Moreover, the heavy workload of teaching writing, limited class time, and large class sizes may reduce instructional effectiveness. To alleviate this problem, tapping into artificial intelligence (AI) to enhance learners' writing proficiency is worth considering. This study therefore analyzes the role and efficacy of an AI tool, Google Gemini, in assisting Taiwanese students with the English essay writing required by the GSAT. A mixed-methods design was adopted: a pretest-posttest experiment examined improvement in students' English essays after using Gemini, and interviews captured participants' perceptions of Gemini's strengths and drawbacks. Posttest results showed significant improvements in essay structure and grammar, whereas minimal change was found in content. In the interview data, Gemini's ability to generate ideas and improve essay organization was acknowledged. However, participants expressed frustration with the tool's instability, time-consuming interactions, and overwhelming information, leading to mixed perceptions of its overall utility.