Foreign language anxiety significantly hinders students’ academic performance, confidence, and motivation, posing barriers to effective language acquisition. This study leverages RoBERTa and ALBERT, two Transformer-based language models, to detect anxiety in 324 text samples from foreign language learners, comprising 241 “Anxiety” and 83 “No-Anxiety” instances. Class imbalance was mitigated using Random Oversampling, and data quality was improved through preprocessing: text cleaning, case folding, tokenization, lemmatization, and stop word removal. Hyperparameter optimization via Optuna, over a search space spanning learning rate (1e-5 to 5e-5), batch size (4, 8, 16), and epochs (3 to 10), enabled both models to reach 88% accuracy, surpassing the 82% accuracy of a prior GRU model. RoBERTa achieved a weighted-average precision of 0.90, recall of 0.88, and F1-score of 0.86, while ALBERT recorded 0.88 on all three metrics. ALBERT also demonstrated greater training stability, with validation loss decreasing steadily from 0.6 to 0.4, whereas RoBERTa showed signs of overfitting, with a persistent gap between training loss (below 0.1) and validation loss (around 0.3). These findings highlight the potential of RoBERTa and ALBERT for real-time anxiety detection and for AI-driven systems that foster inclusive learning environments. Addressing overfitting and refining class-imbalance strategies could further enhance their applicability, promoting both language proficiency and psychological well-being in diverse educational settings.
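
As a rough illustration of the data-preparation pipeline summarized above, the sketch below chains the stated preprocessing steps (text cleaning, case folding, tokenization, lemmatization, stop word removal) with Random Oversampling. The file name, column names (`text`, `label`), and the use of NLTK for tokenization and lemmatization are assumptions for illustration, not details taken from the study.

```python
import re

import nltk
import pandas as pd
from imblearn.over_sampling import RandomOverSampler
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

nltk.download("punkt")
nltk.download("wordnet")
nltk.download("stopwords")

lemmatizer = WordNetLemmatizer()
stop_words = set(stopwords.words("english"))

def preprocess(text: str) -> str:
    # Text cleaning: strip non-alphabetic characters, then case folding.
    text = re.sub(r"[^a-zA-Z\s]", " ", text).lower()
    # Tokenization, stop word removal, and lemmatization.
    tokens = [lemmatizer.lemmatize(t) for t in word_tokenize(text)
              if t not in stop_words]
    return " ".join(tokens)

# Hypothetical dataset layout: a 'text' column and a binary 'label' column
# (1 = Anxiety, 0 = No-Anxiety), 241 vs. 83 instances as reported.
df = pd.read_csv("anxiety_texts.csv")  # assumed file name
df["text"] = df["text"].apply(preprocess)

# Random Oversampling duplicates minority-class samples until both classes
# are equally represented.
ros = RandomOverSampler(random_state=42)
X_res, y_res = ros.fit_resample(df[["text"]], df["label"])
```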
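
Similarly, a minimal sketch of the Optuna search over the stated space (learning rate 1e-5 to 5e-5, batch size {4, 8, 16}, epochs 3 to 10), using the Hugging Face `Trainer.hyperparameter_search` API with the Optuna backend. The checkpoint names, sequence length, 80/20 split built from the oversampled data above, and the trial budget are assumptions; the abstract does not specify them.

```python
import optuna
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "roberta-base"  # swap in "albert-base-v2" for the ALBERT run

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

# Assumed: build train/validation splits from the oversampled data above.
ds = Dataset.from_dict(
    {"text": X_res["text"].tolist(), "label": y_res.tolist()}
).map(tokenize, batched=True).train_test_split(test_size=0.2, seed=42)

def model_init():
    # Fresh weights per trial so every configuration starts from the same point.
    return AutoModelForSequenceClassification.from_pretrained(
        MODEL_NAME, num_labels=2)

def hp_space(trial: optuna.Trial) -> dict:
    # Search space reported in the study.
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-5, 5e-5, log=True),
        "per_device_train_batch_size": trial.suggest_categorical(
            "per_device_train_batch_size", [4, 8, 16]),
        "num_train_epochs": trial.suggest_int("num_train_epochs", 3, 10),
    }

trainer = Trainer(
    model_init=model_init,
    args=TrainingArguments(output_dir="hp_search"),
    train_dataset=ds["train"],
    eval_dataset=ds["test"],
)

# Without a compute_metrics function, the default objective is the
# evaluation loss, so the search direction is "minimize".
best = trainer.hyperparameter_search(
    hp_space=hp_space,
    backend="optuna",
    direction="minimize",
    n_trials=20,  # assumed trial budget
)
print(best.hyperparameters)
```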