A Resource-Efficient Model for Deep Kernel Learning
Keywords:
Parallel machine learning, parallel and distributed deep learning, GPU parallelism, domain decomposition, problem and model reduction

Abstract
According to the Hughes phenomenon, the major challenges in computations with learning models stem from the scale of complexity, e.g., the so-called curse of dimensionality. Various approaches exist for accelerating learning computations with minimal loss of accuracy, ranging from model-level to implementation-level techniques. The former is rarely used in its basic form, perhaps because the theoretical understanding of model decomposition approaches, and hence the ability to develop mathematical improvements on them, has lagged behind. We describe a model-level decomposition approach that combines a decomposition of the operators with a decomposition of the network. We perform a feasibility analysis of the resulting algorithm in terms of both its accuracy and its scalability.
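To make the two ingredients named in the abstract concrete, the following Python sketch illustrates, in a generic and simplified form, a deep kernel (a base kernel applied to features from a learned map) whose Gram matrix is assembled block by block over a partition of the data, the basic structure that a domain-decomposition approach exploits. This is a toy illustration under our own assumptions, not the paper's algorithm; the feature map `g`, the helper `deep_kernel_blocks`, and the partition `parts` are all hypothetical names.

```python
import numpy as np

# Minimal sketch (illustrative only, not the paper's method): a deep kernel
# k(x, x') = k_rbf(g(x), g(x')), with the kernel matrix assembled block-wise
# over a partition of the sample indices. Independent blocks are what a
# domain-decomposed, parallel implementation would distribute across workers.

rng = np.random.default_rng(0)

def g(X, W1, W2):
    """Toy two-layer feature map standing in for the deep network."""
    return np.tanh(np.tanh(X @ W1) @ W2)

def rbf(A, B, lengthscale=1.0):
    """Base RBF kernel evaluated on the learned features."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def deep_kernel_blocks(X, parts, W1, W2):
    """Assemble K block by block over a partition of the index set.

    Each block K[I, J] = rbf(g(X[I]), g(X[J])) depends only on the two
    subdomains I and J, so blocks can be computed independently."""
    Z = g(X, W1, W2)                      # one shared feature pass
    n = X.shape[0]
    K = np.empty((n, n))
    for I in parts:
        for J in parts:
            K[np.ix_(I, J)] = rbf(Z[I], Z[J])
    return K

n, d, h = 200, 8, 16
X = rng.standard_normal((n, d))
W1 = rng.standard_normal((d, h)) / np.sqrt(d)
W2 = rng.standard_normal((h, h)) / np.sqrt(h)
parts = np.array_split(np.arange(n), 4)   # four "subdomains" of the indices
K = deep_kernel_blocks(X, parts, W1, W2)
assert np.allclose(K, K.T)                # Gram matrix is symmetric
```

In this toy setting the per-block cost scales with the product of the two subdomain sizes, so partitioning the index set bounds the size of any single kernel evaluation, which is one way a model-level decomposition can reduce memory pressure before any implementation-level parallelism is applied.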