On the Trade-off Between Equivalence Constraints and Labels

Speaker
Liat Ein-Dor - IBM Research
Date
21/06/2015, 10:30 - 11:30
Place
Department Seminar Room, Building 216 Room 201
Abstract

Supervised learning is based predominantly on labeled examples, which are often expensive and scarce. An alternative form of supervision is equivalence constraints (EC), i.e., pairs of examples known to belong to the same or to different classes, while the class labels themselves remain unknown. Equivalence constraints are often easier and cheaper to obtain, but the theoretical underpinnings of their learning utility relative to labels are still lacking. In this work we develop a novel framework for analyzing the learning utility of equivalence constraints. Specifically, we extend the statistical-mechanics Perceptron capacity calculations, used thus far only for labeled data, to supervised learning from equivalence constraints. We then derive generalization bounds for training with equivalence constraints, using a link between Perceptron capacity and Rademacher complexity. We prove that for large sample sizes, a sample with EC supervision becomes as powerful as a fully labeled sample of the same size. We also prove that this result holds even when the examples in the constraints are highly correlated.
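To make EC supervision concrete, here is a minimal sketch (an illustration, not the statistical-mechanics framework analyzed in the talk): for a linear classifier w, a constraint on a pair (x_a, x_b) with relation s = +1 (same class) or s = -1 (different class) requires sign(w·x_a)·sign(w·x_b) = s, equivalently s(w·x_a)(w·x_b) > 0, so a perceptron-style update can be applied whenever this product margin is violated. The data generation, the update rule, and all names below are illustrative assumptions, not the speaker's method.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic linearly separable data; the true labels stay hidden from the learner.
    d, n = 20, 500
    w_true = rng.standard_normal(d)
    X = rng.standard_normal((n, d))
    y = np.sign(X @ w_true)

    # Equivalence constraints: random pairs (a, b) with relation s = y_a * y_b.
    # The learner observes only s (+1 = same class, -1 = different), never y itself.
    pairs = rng.integers(0, n, size=(2000, 2))
    s = y[pairs[:, 0]] * y[pairs[:, 1]]

    # Perceptron-style updates enforcing the product margin s * (w.x_a) * (w.x_b) > 0.
    w = rng.standard_normal(d)
    lr = 0.01
    for _ in range(20):
        for (a, b), rel in zip(pairs, s):
            ma, mb = w @ X[a], w @ X[b]
            if rel * ma * mb <= 0:  # constraint violated: adjust both margins jointly
                w += lr * rel * (mb * X[a] + ma * X[b])

    # EC supervision determines w only up to a global sign flip,
    # so score agreement with the hidden labels up to that sign.
    acc = np.mean(np.sign(X @ w) == y)
    print(f"agreement with hidden labels (up to sign): {max(acc, 1 - acc):.3f}")

Note that the final lines score agreement only up to a global label flip: equivalence constraints carry no information about which class is "positive", so at best they identify the separating direction itself.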

Last Updated Date: 10/08/2015