Handcrafted-Guided, Bias-Aware Cross-Attention for Token-Level Pre-Pooling Fusion in Breast Histopathology
International Journal of Electronics and Communication Engineering
© 2026 by SSRG - IJECE Journal
Volume 13, Issue 3
Year of Publication: 2026
| Authors : Pattupogula Subramanyam, Gurumurthy Hari Krishnan |
How to Cite?
Pattupogula Subramanyam, Gurumurthy Hari Krishnan, "Handcrafted-Guided, Bias-Aware Cross-Attention for Token-Level Pre-Pooling Fusion in Breast Histopathology," SSRG International Journal of Electronics and Communication Engineering, vol. 13, no. 3, pp. 185-192, 2026. Crossref, https://doi.org/10.14445/23488549/IJECE-V13I3P115
Abstract:
Histopathological evaluation is the most accurate means of diagnosing breast cancer; however, current automated computer-assisted diagnostic systems degrade under variability in colour staining and under changes in magnification during image capture. Most deep learning methods rely on convolutional or transformer-based representations and perform feature fusion only after spatial pooling, which suppresses the colour-morphology interactions needed to correctly separate normal from abnormal tissue. To counter these limitations, a novel Handcrafted-guided Cross-Attention (HCA) module is proposed that fuses, before pooling, compact handcrafted colour-statistic tokens with deep CNN-generated spatial feature tokens. A learnable attention-bias prior lets the module encode persistent colour-region affinities, so that the handcrafted tokens act as first-class queries over the spatial tokens while the design remains lightweight and backbone-agnostic. The framework was validated on the publicly available BreaKHis breast cancer histopathology dataset at all four magnification factors (40×, 100×, 200×, 400×), using patient-level splits to prevent data leakage between the training and test sets; performance was assessed with Accuracy, Macro-F1, Macro-AUC and Cohen's Kappa to measure both discriminative ability and agreement beyond chance. Experiments show that the proposed HCA module reaches an accuracy of 96.87%, Macro-F1 of 96.87%, Macro-AUC of 0.9956 and Cohen's κ of 0.9373, outperforming both cross-attention without bias and self-attention baselines.
Notably, the proposed technique substantially reduces the misclassification of critical benign lesions as malignant while retaining high sensitivity for the malignant class.
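To make the fusion mechanism concrete, the following is a minimal NumPy sketch of cross-attention in which handcrafted tokens serve as queries over spatial tokens and a learnable bias matrix is added to the attention logits. This is an illustration of the general bias-aware cross-attention pattern, not the paper's implementation; the token counts, dimensions and the function name `bias_aware_cross_attention` are assumptions for the example.

```python
import numpy as np

def bias_aware_cross_attention(hand_tokens, spatial_tokens, bias):
    """Cross-attention with handcrafted tokens as queries.

    hand_tokens:    (M, d) handcrafted colour-statistic tokens (queries)
    spatial_tokens: (N, d) CNN feature-map tokens (keys and values)
    bias:           (M, N) learnable attention-bias prior added to the logits
    Returns:        (M, d) fused tokens, one per handcrafted query
    """
    d = hand_tokens.shape[1]
    logits = hand_tokens @ spatial_tokens.T / np.sqrt(d) + bias  # (M, N)
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ spatial_tokens

rng = np.random.default_rng(0)
M, N, d = 4, 16, 8                       # 4 handcrafted tokens, 16 spatial tokens
hand = rng.normal(size=(M, d))
spatial = rng.normal(size=(N, d))
bias = np.zeros((M, N))                  # learnable prior, zero-initialised
fused = bias_aware_cross_attention(hand, spatial, bias)
print(fused.shape)                       # (4, 8)
```

In a trained model the bias matrix would be a learned parameter; a strongly positive entry steers a given handcrafted query toward a given spatial region regardless of the content-based similarity term.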
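The evaluation metrics named above can all be derived from a confusion matrix. The sketch below computes Accuracy, Macro-F1 and Cohen's κ (observed agreement corrected for chance agreement) in plain NumPy; it is a generic illustration of the metrics, not the authors' evaluation code, and the toy labels are invented for the example.

```python
import numpy as np

def classification_metrics(y_true, y_pred, n_classes=2):
    """Accuracy, Macro-F1 and Cohen's kappa from integer label arrays."""
    cm = np.zeros((n_classes, n_classes), dtype=float)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                      # rows: true class, cols: predicted
    n = cm.sum()
    acc = np.trace(cm) / n
    f1s = []
    for c in range(n_classes):             # per-class F1, then unweighted mean
        tp = cm[c, c]
        prec = tp / cm[:, c].sum() if cm[:, c].sum() else 0.0
        rec = tp / cm[c, :].sum() if cm[c, :].sum() else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    macro_f1 = float(np.mean(f1s))
    p_o = acc                              # observed agreement
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
    kappa = (p_o - p_e) / (1 - p_e)
    return acc, macro_f1, kappa

y_true = np.array([0, 0, 1, 1, 1, 0])      # toy benign/malignant labels
y_pred = np.array([0, 1, 1, 1, 0, 0])
acc, macro_f1, kappa = classification_metrics(y_true, y_pred)
print(acc, macro_f1, kappa)                # 0.667 0.667 0.333 (approx.)
```

Reporting κ alongside accuracy matters on class-imbalanced histopathology datasets such as BreaKHis, since κ discounts the agreement a trivial majority-class predictor would achieve by chance.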
Keywords:
BreakHis, Breast Cancer Histopathology, Cross-Attention Fusion, Learnable Priors, Token-Level Pre-Pooling.
