A Resource-Efficient Model for Deep Kernel Learning

Authors

  • Luisa D'Amore, Department of Mathematics and Applications, University of Naples Federico II, 80126 Napoli, Italy

Keywords

Parallel machine learning, parallel and distributed deep learning, GPU parallelism, domain decomposition, problem and model reduction

Abstract

As the Hughes phenomenon illustrates, the major challenges encountered in computations with learning models stem from the scale of their complexity, i.e. the so-called curse of dimensionality. Various approaches exist for accelerating learning computations with minimal loss of accuracy, ranging from model-level to implementation-level techniques. The former are rarely used in their basic form, perhaps because the theoretical understanding of model decomposition approaches, and with it the ability to develop mathematical improvements, has lagged behind. We describe a model-level decomposition approach that combines the decomposition of the operators with the decomposition of the network, and we perform a feasibility analysis of the resulting algorithm in terms of both its accuracy and its scalability.
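The abstract does not give the algorithmic details, but the flavour of a model-level decomposition of a kernel learning problem can be sketched. The Python fragment below is a minimal illustration under assumed choices (an RBF kernel, kernel ridge regression as the local solver, and plain averaging as the recombination rule); it is not the paper's method. It partitions the training set into subdomains, solves a small local kernel system on each so that the global Gram matrix is never formed, and averages the local predictors.

    import numpy as np

    def rbf_kernel(A, B, gamma=1.0):
        # Gaussian (RBF) kernel matrix between the rows of A and B.
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)

    def fit_local(X, y, lam=1e-3):
        # Local kernel ridge regression: solve (K + lam*I) alpha = y
        # on one subdomain only, so K stays small.
        K = rbf_kernel(X, X)
        return X, np.linalg.solve(K + lam * np.eye(len(X)), y)

    def predict_local(model, Xq):
        X, alpha = model
        return rbf_kernel(Xq, X) @ alpha

    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(600, 2))
    y = np.sin(3 * X[:, 0]) * np.cos(3 * X[:, 1])

    # Decompose the training set into p subdomains: the global 600x600
    # Gram matrix is replaced by p independent 150x150 local ones.
    p = 4
    parts = np.array_split(rng.permutation(len(X)), p)
    models = [fit_local(X[idx], y[idx]) for idx in parts]

    # Recombine by averaging the local predictors (one simple choice;
    # the paper's recombination scheme may differ).
    Xq = rng.uniform(-1.0, 1.0, size=(200, 2))
    y_true = np.sin(3 * Xq[:, 0]) * np.cos(3 * Xq[:, 1])
    y_hat = np.mean([predict_local(m, Xq) for m in models], axis=0)
    print("RMSE:", np.sqrt(np.mean((y_hat - y_true) ** 2)))

Replacing one full n-by-n kernel system with p independent (n/p)-by-(n/p) systems reduces both the memory footprint and the cubic solve cost, and the local solves are embarrassingly parallel; this is the kind of problem reduction that decomposition approaches aim at.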

Published

2025-02-28

How to Cite

D’Amore, L. (2025). A Resource-Efficient Model for Deep Kernel Learning. Computing and Informatics, 44(1). Retrieved from http://147.213.75.17/ojs/index.php/cai/article/view/7011
