
Original scientific paper

https://doi.org/10.17559/TV-20231120001119

An Experimental Study on Improved Sequence-to-Sequence Model in Machine Translation

Yuan-shuai Lan (ORCID: orcid.org/0009-0001-3289-8519); School of Electronic Information Engineering, Geely University of China, Section 2, Chengjian Avenue, Eastern New Area, Jianyang, Chengdu, Sichuan, China *
Chuan Li
Xueqin Meng
Tao Zheng
Mincong Tang

* Corresponding author.


Full text: English PDF, 1.311 Kb

Pages: 940-947



Abstract

This paper presents the N-Seq2Seq model for improving machine translation quality and efficiency. The core innovations include streamlined attention mechanisms for focusing on crucial details, word-level tokenization to preserve meaning, text candidate frames to accelerate prediction, and relative positional encoding to reinforce word associations. Comparative analyses on English-Chinese datasets demonstrate improvements of approximately 4 BLEU points over the baseline Seq2Seq model and 2 BLEU points over the Transformer model. Moreover, the N-Seq2Seq model reduces average inference time by 60% and 43%, respectively. These techniques improve contextual modeling, filter out non-essential information, and accelerate inference. Importantly, the model achieves higher accuracy with low overhead, making it feasible to deploy in mobile applications, while the Chinese-centric design can also be quickly adapted to other languages.
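To make the relative positional encoding mentioned above concrete, the sketch below adds a learned relative-position bias to the attention logits before the softmax, so that each query-key pair is scored partly by how far apart the two tokens are. This is only an illustrative reading of the technique named in the abstract, not the authors' exact formulation; the class name, the clipping distance, and the PyTorch framing are assumptions.

import torch
import torch.nn as nn

class RelativePositionBias(nn.Module):
    # Learned bias per (clipped relative distance, attention head),
    # added to attention logits to encode how far apart tokens are.
    def __init__(self, num_heads: int, max_distance: int = 128):
        super().__init__()
        self.max_distance = max_distance
        self.bias = nn.Embedding(2 * max_distance + 1, num_heads)

    def forward(self, q_len: int, k_len: int) -> torch.Tensor:
        # Relative offset j - i for every (query i, key j) pair.
        rel = torch.arange(k_len)[None, :] - torch.arange(q_len)[:, None]
        rel = rel.clamp(-self.max_distance, self.max_distance) + self.max_distance
        # (q_len, k_len, num_heads) -> (num_heads, q_len, k_len)
        return self.bias(rel).permute(2, 0, 1)

# Usage: add the bias to the raw attention scores before the softmax.
scores = torch.randn(8, 10, 10)                      # (heads, q_len, k_len)
scores = scores + RelativePositionBias(num_heads=8)(10, 10)
attn = scores.softmax(dim=-1)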

Keywords

BLEU evaluation; machine translation; N-Seq2Seq model; Seq2Seq model; WoBert model

Hrčak ID:

330559

URI:

https://hrcak.srce.hr/330559

Publication date:

1.5.2025.
