Classifying functional requirements (FRs) in software requirements classification (SRC) is a difficult task due to class imbalance, fine-grained subcategories, and semantic complexity. Existing Machine Learning (ML) and Deep Learning (DL) models often rely on handcrafted features or overlook contextual meaning. This work presents a novel hybrid ensemble framework that fine-tunes three pre-trained transformers (BERT, DistilBERT, and RoBERTa) and combines them using two mechanisms: (1) an Attention-Based Fusion Mechanism that dynamically selects the most contextually relevant transformer for each instance, and (2) an Accuracy-Per-Class Weighted Ensemble that assigns weights based on per-class validation accuracy. Evaluated on multiple datasets, the approach outperforms single-transformer and DL baselines (CNN, LSTM, BiLSTM, and GRU) by a statistically significant margin (p < 0.001), achieving 95% accuracy and a 0.94 F1-score. To the best of our knowledge, this is the first study to combine attention fusion with transformer-based ensembles for SRC, setting a new benchmark for the task.
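As a rough illustration of the second mechanism, the accuracy-per-class weighted ensemble could be sketched as below. The array shapes, the per-class normalization of accuracies into weights, and the toy numbers are assumptions made for illustration, not the paper's exact formulation.

```python
import numpy as np

def per_class_weighted_ensemble(probs, class_acc):
    """Fuse per-model class probabilities using per-class validation accuracy.

    probs:     (n_models, n_samples, n_classes) softmax outputs (assumed shape)
    class_acc: (n_models, n_classes) validation accuracy of each model on
               each class -- the weighting signal described in the abstract
    """
    # Normalize accuracies so that, for each class, weights sum to 1 across models
    weights = class_acc / class_acc.sum(axis=0, keepdims=True)  # (n_models, n_classes)
    # Scale each model's probability for class c by its weight on class c
    weighted = probs * weights[:, None, :]  # broadcast over the samples axis
    fused = weighted.sum(axis=0)            # (n_samples, n_classes)
    return fused.argmax(axis=1)

# Toy example: three "transformer" outputs for 2 samples and 2 classes
probs = np.array([
    [[0.6, 0.4], [0.8, 0.2]],   # model A (e.g. BERT)
    [[0.2, 0.8], [0.8, 0.2]],   # model B (e.g. DistilBERT)
    [[0.5, 0.5], [0.2, 0.8]],   # model C (e.g. RoBERTa)
])
class_acc = np.array([
    [0.9, 0.5],   # model A: strong on class 0, weak on class 1
    [0.4, 0.9],   # model B: weak on class 0, strong on class 1
    [0.7, 0.6],   # model C: moderate on both
])
preds = per_class_weighted_ensemble(probs, class_acc)
print(preds)  # → [1 0]
```

With these toy weights, model B's confident vote dominates class 1 on the first sample, while models A and B jointly decide class 0 on the second, showing how per-class weights let different models "own" different classes.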