Call for Papers -- Parallel and Distributed Computing Architectures for Large-Scale Natural Language Processing Tasks

2024-08-23

Many language processing tasks must evaluate an enormous, and ever-growing, volume of textual data within a reasonable amount of time. This pressure has driven a fundamental change in how natural language processing systems handle large-scale information. Existing investigations indicate that there is considerable room for improving language processing performance through parallel architectures, and that large clusters with many processing nodes can help achieve better results.

A language can be described as a collection of symbols and conventions combined to convey or transmit information. Because not every user is proficient in the language specific to a given technology, natural language processing serves users who lack the time to master new languages. Designing an architecture that merges distributed and parallel computing while remaining easy for trained professionals to use is a difficult task: the framework must make computationally demanding processes and advanced computing tools simple to access, and must also help identify the right data set for each operation. Natural language processing, a subfield of artificial intelligence, handles a wide range of complex and demanding language-related tasks, including text synthesis, machine translation, and question answering. In NLP, models, methods, and algorithms are designed and implemented to address real-world problems in language understanding. More recently, neural network models have been applied to natural language data with even more encouraging results, and this special issue also aims to bring natural language researchers up to speed with neural network architectures from the perspective of natural language processing.

The scalable distributed language processing design presented in this special issue leverages Storm to integrate NLP modules into a processing cascade that performs the linguistic analysis of documents. Scalability requires systems that can execute distributed programs in parallel across large machine clusters. By combining several NLP modules into a single processing chain, the design described here makes it feasible to deploy the language-processing components concurrently on a cluster of computers in a distributed environment. The NLP modules are otherwise unrestricted: they need only produce and consume linguistic annotations in a predetermined format.
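To make the cascade idea concrete, the following is a minimal, hypothetical sketch of such a processing chain using Python's standard `multiprocessing` module rather than Storm: each stage runs in its own process, consumes annotated documents from an input queue, applies one (toy) NLP module, and forwards the result. The stage functions `tokenize` and `count_tokens`, and the dictionary-based annotation format, are illustrative assumptions, not part of the architecture described above.

```python
from multiprocessing import Process, Queue

# Toy NLP modules; a real deployment would plug in actual components
# (taggers, parsers, named-entity recognizers, ...) that read and write
# annotations in an agreed-upon format.
def tokenize(doc):
    doc["tokens"] = doc["text"].split()
    return doc

def count_tokens(doc):
    doc["n_tokens"] = len(doc["tokens"])
    return doc

SENTINEL = None  # marks the end of the document stream

def stage(func, inbox, outbox):
    """Worker process: apply one NLP module to each incoming document."""
    while True:
        doc = inbox.get()
        if doc is SENTINEL:
            outbox.put(SENTINEL)  # propagate shutdown downstream
            break
        outbox.put(func(doc))

def run_cascade(docs, funcs):
    """Wire the modules into a cascade of processes linked by queues."""
    queues = [Queue() for _ in range(len(funcs) + 1)]
    workers = [Process(target=stage, args=(f, queues[i], queues[i + 1]))
               for i, f in enumerate(funcs)]
    for w in workers:
        w.start()
    for doc in docs:
        queues[0].put(doc)
    queues[0].put(SENTINEL)
    results = []
    while True:
        doc = queues[-1].get()
        if doc is SENTINEL:
            break
        results.append(doc)
    for w in workers:
        w.join()
    return results

if __name__ == "__main__":
    docs = [{"text": "parallel NLP at scale"}, {"text": "distributed processing"}]
    out = run_cascade(docs, [tokenize, count_tokens])
    print([d["n_tokens"] for d in out])  # prints [4, 2]
```

In a framework such as Storm, the queues and worker processes would be replaced by the platform's own stream abstractions, which additionally spread each stage across many cluster nodes.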

 

Guest Editors:
Prof. Norma Binti Alias (MGE), Faculty of Science, Universiti Teknologi Malaysia, Johor, Malaysia
Prof. Fiza Zafar (CGE), Centre for Advanced Studies in Pure and Applied Mathematics, Bahauddin Zakariya University, Multan, Pakistan
Prof. Fairouz Tchier (CGE), Mathematics Department, College of Science, King Saud University, Saudi Arabia

 

Important Dates:
Submission Deadline: February 28, 2025
Notification to Author: April 30, 2025
Revised Version Submission: June 15, 2025
Final Acceptance: August 25, 2025