Call for Papers -- Advancing Automated Visual Inspection in Radiography with Generative Adversarial Networks
A generative adversarial network (GAN) is a deep learning architecture in which two neural networks are trained against each other on a given dataset in order to generate convincing new data. GANs can produce realistic, high-quality images and are therefore useful for generating diverse data samples with which to train machine learning models; compared with other classes of generative models, they often converge faster and are comparatively simple to set up. GANs are a form of unsupervised machine learning. They are, however, notoriously difficult to train: mode collapse and non-convergence are the two main failure modes, and modifying the network architecture to obtain a more powerful model is one practical way to mitigate both problems. A GAN consists of two components, a Generator and a Discriminator. The Generator, much like a counterfeiter, produces fake samples modeled on the real data and tries to trick the Discriminator into accepting them as genuine, while the Discriminator tries to expose the generated samples as fraudulent.
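The adversarial game between the two networks can be sketched with the standard (non-saturating) GAN losses. The sketch below is illustrative only: the discriminator outputs are hypothetical hand-picked probabilities, not values from a trained model.

```python
import math

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy for the discriminator: push D(real) toward 1
    and D(fake) toward 0."""
    eps = 1e-12  # guard against log(0)
    terms = [math.log(r + eps) + math.log(1.0 - f + eps)
             for r, f in zip(d_real, d_fake)]
    return -sum(terms) / len(terms)

def generator_loss(d_fake):
    """Non-saturating generator loss: push D(fake) toward 1, i.e. make
    the fakes look real to the discriminator."""
    eps = 1e-12
    return -sum(math.log(f + eps) for f in d_fake) / len(d_fake)

# Hypothetical discriminator outputs (probability each sample is real).
d_real = [0.9, 0.8, 0.95]   # discriminator is confident on real samples
d_fake = [0.1, 0.2, 0.05]   # and correctly rejects the fakes

print(round(discriminator_loss(d_real, d_fake), 4))  # low: D is winning
print(round(generator_loss(d_fake), 4))              # high: G must improve
```

Training alternates between minimizing these two losses, which is exactly the thief-versus-detective dynamic described above: as the generator's samples fool the discriminator more often, its loss falls and the discriminator's rises.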
In the canonical example, the GAN's final output is a series of images that resemble handwritten digits. More generally, a GAN can be expected to output any value that a neural network is capable of producing: a number, an image, or many other kinds of variables. The generative adversarial framework is a powerful tool for a variety of image and video synthesis problems, enabling visual content to be synthesized either unconditionally or conditioned on an input. One use of GANs for data augmentation included in the proposed framework is image-to-image translation: the generative network learns the mapping between source and target images while the loss function is computed to improve the quality of the generated target image. It has also been shown that, much as real-world cameras mark the photographs they capture with traces of their photo-response non-uniformity pattern, each GAN leaves a unique fingerprint on the images it creates. In this framework the discriminator is a feed-forward neural network with five layers: an input layer, three dense hidden layers, and an output layer.
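The five-layer feed-forward discriminator described above can be sketched as a plain stack of dense layers ending in a sigmoid. The layer sizes, activations, and random initialization below are illustrative assumptions, not the configuration used in the proposed framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky_relu(z):
    return np.where(z > 0, z, 0.2 * z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed layer widths: 784 inputs (e.g. a flattened 28x28 patch),
# three dense hidden layers, one output unit.
sizes = [784, 256, 128, 64, 1]
params = [(rng.normal(0.0, 0.02, (m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

def discriminator(x):
    """Forward pass: three LeakyReLU dense layers, then a sigmoid output
    giving the probability that each input is a real image."""
    for w, b in params[:-1]:
        x = leaky_relu(x @ w + b)
    w, b = params[-1]
    return sigmoid(x @ w + b)

batch = rng.normal(size=(4, 784))  # four fake "image" vectors
probs = discriminator(batch)
print(probs.shape)  # one real/fake probability per input: (4, 1)
```

With untrained random weights the outputs hover near 0.5; training drives them toward 1 for real radiographs and 0 for generated ones.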
The discriminator network is a classifier, and so differs slightly from the generator network. For classification tasks, a GAN offers greater theoretical value than autoencoder-style algorithms in encoding single-class traits, and the use of a discriminative model within the GAN process is its most obvious advantage for such tasks. GANs are, however, sensitive to hyperparameters that affect model quality and performance, such as the learning rate, the iteration count, and the network design. Representation learning is not the real purpose of GANs; their goal is to produce highly realistic, real-world-like data, which is crucial for applications in which training, testing, or simulation requires synthetic data. Articles are invited that explore advancing automated visual inspection in radiography with generative adversarial networks. Case studies and practitioner perspectives are also welcome.
Suggested research and application topics of interest include, but are not limited to:
- Generative adversarial network for panoramic radiography image quality refinement.
- Enhancing the repeatability and discriminative capability of radiomic features with generative adversarial networks.
- Accurate, high-resolution lateral cephalometric radiography produced using quality assessments and a progressively growing generative adversarial network.
- Employing generative adversarial networks to create ultrasound images indistinguishable from real images.
- Sophisticated deep learning methods for automatic identification and classification of femoral neck fractures.
- Generative adversarial network-based automated ultrasound elastography generation.
- Applications of generative adversarial networks in biomedical informatics.
- Automatic glaucoma identification using texture features and generative adversarial networks.
- Generative adversarial network applications for computer assisted diagnostics.
- Automating visual inspection and image interpretation in contemporary manufacturing and healthcare environments.
- Unsupervised adversarial learning applied to radiographic testing of aero-engine turbine blades.
- Effective anomaly identification for breast ultrasound imaging using generative adversarial networks.
Guest Editors:
Dr. Sabari Nathan, Senior AI Engineer, Couger Inc., Tokyo 150-0001, Japan
Prof. Sasithradevi A, Centre for Advanced Data Science, Vellore Institute of Technology, Tamil Nadu, India
Dr. Adeline Sneha J, Asia Pacific University of Technology and Innovation, Kuala Lumpur, Malaysia
Tentative Timeline:
Submission Deadline: September 30, 2024
Notification to Authors: November 30, 2024
Revised Papers Deadline: January 25, 2025
Final Notification: March 25, 2025