Author
PhD in Linguistics, English Language Department, King Faisal University, Hofuf, Kingdom of Saudi Arabia
[email protected], [email protected]
ORCID: https://orcid.org/0000-0003-4089-1920
Abstract
This study investigates the ability of Large Language Models (LLMs) to process complex syntactic phenomena, including relative clauses, wh-movement, and center-embedding. By analyzing examples drawn from the linguistic literature, the study highlights both the strengths and limitations of LLMs in handling syntax. The results reveal that while LLMs exhibit competence in simpler syntactic constructions, they struggle with deeper hierarchical dependencies and abstract syntactic constraints. The study underscores the need to integrate explicit syntactic principles into LLM architectures to bridge the gap between surface-level fluency and generative linguistic competence.