Wenzel, Tizian; Marchetti, Francesco; Perracchione, Emma (2024). Data-Driven Kernel Designs for Optimized Greedy Schemes: A Machine Learning Perspective. SIAM Journal on Scientific Computing, 46(1), pp. 101-126. ISSN 1064-8275. DOI: 10.1137/23m1551201
Title: Data-Driven Kernel Designs for Optimized Greedy Schemes: A Machine Learning Perspective
Author: Perracchione, Emma
Year: 2024
Abstract
Thanks to their easy implementation via radial basis functions (RBFs), meshfree kernel methods have proved to be an effective tool for, e.g., scattered data interpolation, PDE collocation, and classification and regression tasks. Their accuracy typically depends on a length-scale hyperparameter, which is often tuned via cross-validation schemes. Here we leverage approaches and tools from the machine learning community to introduce two-layered kernel machines, which generalize the classical RBF approaches that rely on a single hyperparameter. Indeed, the proposed learning strategy returns a kernel that is optimized not only in the Euclidean directions but that further incorporates kernel rotations. The kernel optimization is shown to be robust by using recently improved calculations of cross-validation scores. Finally, the use of greedy approaches, and specifically of the vectorial kernel orthogonal greedy algorithm (VKOGA), allows us to construct an optimized basis that adapts to the data. Beyond a rigorous analysis of the convergence of the so-constructed two-layered (2L)-KOGA, its benefits are highlighted on both synthetic and real benchmark datasets.
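The abstract's central construction, a kernel whose single length scale is replaced by a full linear map combining scalings and rotations, can be illustrated with a short sketch. The following Python snippet is a minimal illustration under our own assumptions, not the authors' implementation or the VKOGA package: it evaluates a Gaussian kernel on inputs mapped by a matrix A, scores candidate maps with a Rippa-style leave-one-out cross-validation residual, and runs a toy f-greedy center selection as a stand-in for the greedy basis construction. All function names and the toy data are hypothetical.

```python
# Minimal, self-contained sketch (NOT the authors' code or the VKOGA package) of the
# two ideas summarized in the abstract, under our own simplifying assumptions:
#   1) a "two-layered" Gaussian kernel k(x, y) = exp(-||A x - A y||^2), where the
#      matrix A encodes axis-wise scalings and rotations (A = eps * I recovers the
#      classical single length-scale RBF);
#   2) a toy f-greedy selection of centers, standing in for a VKOGA-type greedy
#      construction of a data-adapted basis.
# Function names, the LOO-CV scoring, and the toy data are illustrative only.

import numpy as np


def two_layer_gaussian(X, Y, A):
    """Gaussian kernel on linearly transformed inputs: k(x, y) = exp(-||A x - A y||^2)."""
    XA, YA = X @ A.T, Y @ A.T
    sq = np.sum(XA**2, axis=1)[:, None] + np.sum(YA**2, axis=1)[None, :] - 2.0 * XA @ YA.T
    return np.exp(-np.maximum(sq, 0.0))


def loo_cv_score(X, f, A, reg=1e-10):
    """Leave-one-out residual norm via a Rippa-style formula: e_i = c_i / (K^{-1})_{ii}."""
    K = two_layer_gaussian(X, X, A) + reg * np.eye(len(X))
    Kinv = np.linalg.inv(K)
    coeffs = Kinv @ f
    return np.linalg.norm(coeffs / np.diag(Kinv))


def f_greedy_centers(X, f, A, n_centers=10, reg=1e-10):
    """Toy f-greedy loop: repeatedly add the point where the current residual is largest."""
    idx = [int(np.argmax(np.abs(f)))]
    for _ in range(n_centers - 1):
        Kcc = two_layer_gaussian(X[idx], X[idx], A) + reg * np.eye(len(idx))
        coeffs = np.linalg.solve(Kcc, f[idx])
        residual = f - two_layer_gaussian(X, X[idx], A) @ coeffs
        idx.append(int(np.argmax(np.abs(residual))))
    return idx


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(200, 2))
    f = np.sin(3.0 * (X[:, 0] + 2.0 * X[:, 1]))      # anisotropic toy target

    # Classical single-hyperparameter kernel: A = eps * I.
    score_iso = loo_cv_score(X, f, 2.0 * np.eye(2))

    # "Two-layered" kernel: rotation followed by axis-wise scalings.
    theta = np.pi / 3.0
    R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    score_2l = loo_cv_score(X, f, np.diag([4.0, 1.0]) @ R)

    print(f"LOO-CV score, isotropic kernel:   {score_iso:.3e}")
    print(f"LOO-CV score, transformed kernel: {score_2l:.3e}")
    print("f-greedy centers:", f_greedy_centers(X, f, np.diag([4.0, 1.0]) @ R, n_centers=8))
```

Setting A to a multiple of the identity recovers the classical single-hyperparameter RBF, so the two-layered construction strictly generalizes it; the actual VKOGA package and the paper's 2L-KOGA training procedure differ from this sketch.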
File | Type | License | Size | Format
---|---|---|---|---
23m1551201.pdf (open access) | 2a Post-print editorial version / Version of Record | Public - All rights reserved | 1.15 MB | Adobe PDF
2301.08047.pdf (not available) | 1. Preprint / submitted version (pre-review) | Non-public - Private/restricted access | 837.84 kB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11583/2986644