Generalization of Parameter Selection of SVM and LS-SVM for Regression

A Support Vector Machine (SVM) for regression is a popular machine learning model that aims to solve nonlinear function approximation problems wherein explicit model equations are difficult to formulate. The performance of an SVM depends largely on the selection of its parameters. Choosing between an SVM that solves an optimization problem with inequality constraints and one that minimizes the least squares of errors (LS-SVM) adds to the complexity. Various methods have been proposed for tuning parameters, but no study has compared the SVM and LS-SVM side by side on a large real-world dataset, which can be problematic for existing parameter tuning methods. We investigated both the SVM and LS-SVM with an artificial dataset and a dataset of more than 200,000 points used for the reconstruction of the global surface ocean CO2 concentration. The results reveal that: (1) the two models are most sensitive to the parameter of the kernel function, which lies in a narrow range for scaled input data; (2) the optimal values of the other parameters do not change much across datasets; and (3) the LS-SVM performs better than the SVM in general. The LS-SVM is recommended, as it has fewer parameters to be tuned and yields a smaller bias. Nevertheless, the SVM has the advantages of consuming fewer computational resources and requiring less training time. The results also suggest initial parameter guesses for applying the models.
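
The following minimal sketch (a plausible illustration, not the authors' code) shows the kind of parameter search discussed above for an RBF-kernel SVM regressor on scaled inputs, using scikit-learn. The toy dataset, the parameter grids, and the use of kernel ridge regression as a least-squares stand-in for the LS-SVM are all assumptions made for this example.

```python
# Hypothetical sketch of tuning an RBF-kernel SVM regressor on scaled inputs.
# Dataset, parameter ranges, and model choices are illustrative assumptions,
# not the configuration used in the paper.
import numpy as np
from sklearn.svm import SVR
from sklearn.kernel_ridge import KernelRidge
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 2))                       # toy inputs
y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + 0.05 * rng.standard_normal(500)

# Scale inputs first; the abstract notes that the optimal kernel parameter
# lies in a narrow range once the input data are scaled.
svm_model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))

param_grid = {
    "svr__gamma": np.logspace(-2, 1, 8),   # kernel parameter: the most sensitive one
    "svr__C": [1, 10, 100],                # regularization strength
    "svr__epsilon": [0.01, 0.1],           # epsilon-insensitive tube width
}
search = GridSearchCV(svm_model, param_grid, cv=5,
                      scoring="neg_mean_squared_error")
search.fit(X, y)
print("Best SVM parameters:", search.best_params_)

# Kernel ridge regression is a closely related least-squares formulation
# (used here only as an approximate stand-in for the LS-SVM).
ls_model = make_pipeline(
    StandardScaler(),
    KernelRidge(kernel="rbf", alpha=0.1,
                gamma=search.best_params_["svr__gamma"]),
)
ls_model.fit(X, y)
```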

Keyword(s)

support vector machine for regression, SVM, LS-SVM, machine learning, parameter optimization, global ocean CO2

How to cite
Zeng J, Tan ZH, Matsunaga T, Shirai T (2019). Generalization of Parameter Selection of SVM and LS-SVM for Regression. Machine Learning and Knowledge Extraction, 1(2), 745-755. https://doi.org/10.3390/make1020043, https://archimer.ifremer.fr/doc/00676/78774/