Enhancing Sentence Prediction through Bidirectional Long Short-Term Memory Networks
International Journal of Electronics and Communication Engineering
© 2026 by SSRG - IJECE Journal
Volume 13, Issue 3
Year of Publication: 2026
Authors: Madheswari Kanmani, Shreenidhi H S, Ancy Mergin, Iwin Thanakumar Joseph S, V Vivek
How to Cite?
Madheswari Kanmani, Shreenidhi H S, Ancy Mergin, Iwin Thanakumar Joseph S, V Vivek, "Enhancing Sentence Prediction through Bidirectional Long Short-Term Memory Networks," SSRG International Journal of Electronics and Communication Engineering, vol. 13, no. 3, pp. 292-300, 2026. Crossref, https://doi.org/10.14445/23488549/IJECE-V13I3P123
Abstract:
Sentence autocompletion is an essential component of intelligent writing systems, yet n-gram models and unidirectional recurrent neural networks fail to capture long-range and bidirectional contextual dependencies, which degrades semantic coherence and contextual accuracy. This paper presents a Bidirectional Long Short-Term Memory (BiLSTM) based model for contextually coherent sentence completion, particularly on medium-length text sequences. A preprocessing pipeline of text cleaning, tokenization, and structured n-gram sequence generation over medium-length article titles produces the training samples. The proposed architecture comprises an embedding layer that maps words to dense vector representations, a BiLSTM layer that learns forward and backward contextual relations between words, and a Softmax output layer that assigns a probability to each candidate next word. Early stopping and model checkpointing improve generalization and prevent overfitting during training, while temperature-controlled sampling at inference balances prediction coherence against lexical diversity. The contributions of this work are a context-optimized BiLSTM architecture tailored to medium-length title datasets and adaptive temperature-based inference within a lightweight training pipeline. Experimental results showing improved fluency and contextual relevance demonstrate the effectiveness of bidirectional recurrent models in intelligent text-suggestion systems.
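To make the described pipeline concrete, the following is a minimal sketch of the kind of preprocessing and model the abstract outlines, written with TensorFlow/Keras. The toy title corpus, layer sizes, and training settings are illustrative assumptions, not the authors' reported configuration.

import numpy as np
import tensorflow as tf

# Toy stand-in for the medium-length title corpus (illustrative only).
titles = [
    "enhancing sentence prediction through bidirectional networks",
    "deep learning approaches for next word prediction in text",
]

# Word-level vocabulary; index 0 is reserved for padding.
words = sorted({w for t in titles for w in t.split()})
word_to_id = {w: i + 1 for i, w in enumerate(words)}
vocab_size = len(word_to_id) + 1

# N-gram sequence generation: every prefix of length >= 2 becomes one
# training sample whose final token is the next-word target.
sequences = []
for t in titles:
    ids = [word_to_id[w] for w in t.split()]
    for i in range(2, len(ids) + 1):
        sequences.append(ids[:i])

# Pre-pad to a common length, then split into inputs and targets.
max_len = max(len(s) for s in sequences)
padded = np.array([[0] * (max_len - len(s)) + s for s in sequences])
X, y = padded[:, :-1], padded[:, -1]

# Embedding -> BiLSTM -> Softmax over the vocabulary.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 64),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128)),
    tf.keras.layers.Dense(vocab_size, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Early stopping and checkpointing, as mentioned in the abstract.
callbacks = [
    tf.keras.callbacks.EarlyStopping(monitor="loss", patience=3),
    tf.keras.callbacks.ModelCheckpoint("best_model.keras",
                                       monitor="loss", save_best_only=True),
]
model.fit(X, y, epochs=50, verbose=0, callbacks=callbacks)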
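For inference, the temperature-controlled sampling the abstract mentions can be sketched as follows, assuming only a trained model's softmax output vector; the temperature values shown are illustrative. Temperatures below 1.0 sharpen the distribution toward coherent, high-probability words, while values above 1.0 flatten it toward greater lexical diversity.

import numpy as np

def sample_next_word(probs, temperature=0.8):
    # Rescale log-probabilities by the temperature, renormalize with a
    # numerically stable softmax, then sample a word index.
    logits = np.log(np.asarray(probs, dtype=float) + 1e-9) / temperature
    scaled = np.exp(logits - logits.max())
    scaled /= scaled.sum()
    return int(np.random.choice(len(scaled), p=scaled))

# Example on a toy three-word distribution.
probs = [0.6, 0.3, 0.1]
print(sample_next_word(probs, temperature=0.5))  # usually index 0
print(sample_next_word(probs, temperature=1.5))  # indices 1 and 2 appear more often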
Keywords:
Next-word prediction, Tokenization, N-gram sequence, Embedding layer, Temperature-controlled sampling, Natural Language Processing (NLP).
