Resource-Efficient Model for Deep Kernel Learning
DOI:
https://doi.org/10.31577/cai_2025_1_1

Keywords:
Parallel machine learning, parallel and distributed deep learning, GPU parallelism, domain decomposition, problem and model reduction

Abstract
According to the Hughes phenomenon, the major challenges in computations with learning models stem from the scale of complexity, e.g. the so-called curse of dimensionality. Approaches to accelerating learning computations range from the model level to the implementation level. The former is rarely used in its basic form, perhaps because of the deeper theoretical understanding of the underlying mathematics it requires. We describe a model-level decomposition approach that combines the decomposition of the objective function with the decomposition of the data. We perform a feasibility analysis of the resulting algorithm, both in terms of accuracy and in terms of scalability.
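To make the idea of combined model- and data-level decomposition concrete, the following is a minimal illustrative sketch, not the paper's algorithm: a kernel ridge regression objective is decomposed across data blocks, each block is solved independently (and could run on a separate GPU), and the local predictors are averaged. All function names, the RBF kernel choice, and the parameters (`lam`, `gamma`, `n_blocks`) are assumptions made for demonstration.

```python
# Hedged sketch of data decomposition for a kernel learning objective.
# Not the method of the cited paper; an assumed, simplified illustration.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between row sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def local_krr_fit(X, y, lam=1e-2, gamma=1.0):
    # Solve the local subproblem (K + lam*I) alpha = y on one data block.
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return X, alpha

def decomposed_fit(X, y, n_blocks=4, lam=1e-2, gamma=1.0):
    # Split the training set into blocks; each block yields an independent
    # local model, so the fits are embarrassingly parallel across devices.
    blocks = np.array_split(np.arange(len(X)), n_blocks)
    return [local_krr_fit(X[b], y[b], lam, gamma) for b in blocks]

def predict(models, Xq, gamma=1.0):
    # Recombine: average the predictions of the local kernel models.
    preds = [rbf_kernel(Xq, Xb, gamma) @ alpha for Xb, alpha in models]
    return np.mean(preds, axis=0)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0])
models = decomposed_fit(X, y, n_blocks=4)
mse = float(np.mean((predict(models, X) - y) ** 2))
print(mse)
```

Each local solve touches only an `n/n_blocks`-sized kernel matrix, so memory and time per worker shrink roughly quadratically and cubically, respectively, which is the usual motivation for decomposing kernel methods.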
Downloads
Download data is not yet available.
Published
2025-02-28
How to Cite
D’Amore, L. (2025). Resource-Efficient Model for Deep Kernel Learning. Computing and Informatics, 44(1), 1–25. https://doi.org/10.31577/cai_2025_1_1
Issue
Section
Articles