
Global Rademacher Complexity Bounds: From Slow to Fast Convergence Rates

Oneto, Luca
2015-01-01

Abstract

Previous works in the literature have shown that performance estimates of learning procedures can be characterized by a convergence rate ranging between O(n^(−1)) (fast rate) and O(n^(−1/2)) (slow rate). Deriving such results requires some assumptions on the problem; moreover, even when convergence is fast, the constants characterizing the bounds are often quite loose. In this work, we prove new Rademacher complexity (RC) based bounds, which do not require any additional assumptions to achieve a fast convergence rate O(n^(−1)) in the optimistic case and a slow rate O(n^(−1/2)) in the general case. At the same time, they are characterized by smaller constants than other state-of-the-art fast-converging RC alternatives in the literature. The results proposed in this work are obtained by exploiting Talagrand's fundamental work on concentration inequalities for product measures and empirical processes. Finally, we extend the results to the semi-supervised learning framework, showing how additional unlabeled samples improve the tightness of the derived bound.
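
For context, the classical global RC bound exhibits the slow rate directly. A minimal sketch, stated for a loss class F with values in [0, 1] (a standard result in the literature, e.g. Mohri et al., Foundations of Machine Learning; the improved bounds proved in the paper itself are not reproduced here): with probability at least 1 − δ over the draw of the n samples,

\forall f \in \mathcal{F}: \quad L(f) \le \hat{L}_n(f) + 2\,\hat{R}_n(\mathcal{F}) + 3\sqrt{\frac{\ln(2/\delta)}{2n}}, \qquad \hat{R}_n(\mathcal{F}) = \mathbb{E}_{\boldsymbol{\sigma}}\!\left[\sup_{f \in \mathcal{F}} \frac{1}{n}\sum_{i=1}^{n} \sigma_i f(z_i)\right],

where the σ_i are independent uniform {−1, +1} (Rademacher) variables. The last term is the O(n^(−1/2)) slow rate; the bounds proved in this work sharpen it to O(n^(−1)) in the optimistic (small empirical error) regime without additional assumptions.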
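A minimal numerical sketch of the quantity these bounds are built on: a Monte Carlo estimate of the empirical RC of a finite function class from its outputs on the observed sample. The function name, the finite-class setup, and all parameter values are illustrative assumptions, not taken from the paper.

import numpy as np

def empirical_rademacher(outputs, n_draws=2000, seed=0):
    """Monte Carlo estimate of R_hat(F) = E_sigma[sup_f (1/n) sum_i sigma_i f(z_i)]
    for a finite class given as an (n_functions, n_samples) array of outputs."""
    rng = np.random.default_rng(seed)
    n = outputs.shape[1]
    sups = np.empty(n_draws)
    for k in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)   # Rademacher signs
        sups[k] = np.max(outputs @ sigma) / n     # sup over the finite class
    return float(sups.mean())

# Example: 50 random binary classifiers evaluated on n = 200 points; by
# Massart's finite-class lemma the estimate is bounded by sqrt(2*ln(50)/200).
outputs = np.random.default_rng(1).choice([0.0, 1.0], size=(50, 200))
print(empirical_rademacher(outputs))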
Oneto, Luca; Ghio, Alessandro; Ridella, Sandro; Anguita, Davide
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/994514

Citations
  • Scopus: 20