Sentiment analysis in Persian texts poses a persistent challenge in the field of natural
language processing (NLP) due to the unique linguistic features and structural
complexities of the language. Existing methods for sentiment analysis often demand
substantial computational resources because of their reliance on complex models
with numerous parameters. Despite this, these methods frequently fail to achieve
satisfactory levels of accuracy and generalization. Additionally, their performance
deteriorates when confronted with unconventional or noisy data, limiting their
effectiveness in real-world applications. This research proposes a novel transformer-based
model tailored for sentiment analysis in Persian texts. By reducing the
number of model layers and employing adversarial training techniques, the proposed
approach significantly enhances performance. The reduced parameter count not only
improves computational efficiency but also yields higher accuracy and F1-scores
than existing approaches. Experimental evaluations show
that the proposed model attains an accuracy of 96.84% and an F1-score of 96.83%
on the “Taghche” dataset, and an accuracy of 90.72% and an F1-score of 90.69% on
the “Snappfood” dataset. These results highlight the model’s suitability for
building lightweight, fast, and effective sentiment analysis systems for
practical and wide-ranging applications.
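
To make the two ingredients named above concrete, the sketch below pairs a deliberately shallow transformer encoder with adversarial training realised as FGM-style perturbation of the embedding weights. It is a minimal illustration only: the layer count, model dimensions, attack choice (FGM), and hyper-parameters are assumptions for exposition and do not reproduce the paper's exact architecture or training configuration.

```python
import torch
import torch.nn as nn

class ShallowSentimentTransformer(nn.Module):
    """Reduced-depth transformer classifier (e.g. 4 encoder layers instead of 12)."""
    def __init__(self, vocab_size=40000, d_model=256, n_heads=4, n_layers=4, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, token_ids):
        hidden = self.encoder(self.embed(token_ids))      # (batch, seq, d_model)
        return self.classifier(hidden.mean(dim=1))        # mean-pool, then classify

def train_step_fgm(model, optimizer, token_ids, labels, epsilon=1.0):
    """One training step with FGM-style adversarial perturbation of the embeddings."""
    criterion = nn.CrossEntropyLoss()
    optimizer.zero_grad()

    # 1) Clean forward/backward pass accumulates gradients.
    clean_loss = criterion(model(token_ids), labels)
    clean_loss.backward()

    # 2) Perturb the embedding weights along their gradient direction.
    emb = model.embed.weight
    backup = emb.data.clone()
    grad_norm = emb.grad.norm()
    if grad_norm > 0:
        emb.data.add_(epsilon * emb.grad / grad_norm)

    # 3) Adversarial forward/backward pass adds robustness gradients.
    adv_loss = criterion(model(token_ids), labels)
    adv_loss.backward()

    # 4) Restore the original embeddings, then update all parameters once.
    emb.data.copy_(backup)
    optimizer.step()
    return clean_loss.item(), adv_loss.item()

# Illustrative usage with random data (real runs would use tokenized Persian text):
# model = ShallowSentimentTransformer()
# optim = torch.optim.AdamW(model.parameters(), lr=2e-5)
# ids = torch.randint(0, 40000, (8, 64)); labels = torch.randint(0, 2, (8,))
# train_step_fgm(model, optim, ids, labels)
```

The shallow encoder keeps the parameter count (and hence compute) low, while the extra adversarial pass trades a modest increase in training cost for robustness to noisy or unconventional inputs, the failure mode of existing methods noted above.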