Learning and counting
A map $Y\colon X \rightarrow \{+1,-1\}$ over a finite set $X \subset \mathbb{R}^d$ is (linearly) separable if there exists $w \in \mathbb{R}^d$ such that $\operatorname{sign}(w \cdot x) = Y(x)$ for all $x \in X$. When $w$ has unit Euclidean norm, $\|w\| = 1$, the number $\operatorname{marg}(w,Y) = \min_{x \in X} Y(x)\, w \cdot x$ is the margin of $w$ with respect to $Y$. The number $\operatorname{marg}(Y) = \sup_{w \in \mathbb{R}^d,\ \|w\|=1} \operatorname{marg}(w,Y)$ is the margin of $Y$. We call $Y$ an $\varepsilon$-partition if its margin is at least $\varepsilon$.
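For a quick illustration of these definitions (a small example of ours, not part of the original statement): take $X = \{(1,0), (-1,0)\} \subset \mathbb{R}^2$ with $Y((1,0)) = +1$ and $Y((-1,0)) = -1$. The unit vector $w = (1,0)$ satisfies $\operatorname{sign}(w \cdot x) = Y(x)$ for both points, and $\operatorname{marg}(w,Y) = \min\{(+1)(1),\ (-1)(-1)\} = 1$. Since $|w \cdot x| \le \|w\|\,\|x\| = 1$ for every unit $w$, in fact $\operatorname{marg}(Y) = 1$, so $Y$ is an $\varepsilon$-partition for every $\varepsilon \le 1$.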
Given a set $X$ of size $n$, how many $\varepsilon$-partitions exist?
To answer this question, we will use the classical perceptron algorithm (the first neural algorithm). We will also use the perceptron to prove a "sparse" version of Farkas' lemma and derive generalization bounds using sample compression schemes. If time permits, we will also discuss some open combinatorial problems in learning theory.
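To make the algorithmic ingredient concrete, here is a minimal Python sketch of the classical perceptron (a standard textbook formulation written for illustration, not code from the talk; the function name and stopping rule are our own choices). Starting from $w = 0$, it adds $Y(x)\, x$ whenever a point is misclassified; by Novikoff's mistake bound, on an $\varepsilon$-partition it makes at most $(R/\varepsilon)^2$ updates, where $R = \max_{x \in X} \|x\|$.

import numpy as np

def perceptron(X, y, max_passes=1000):
    """Return a vector w with sign(w . x) == y for every row x of X,
    assuming the labelled set (X, y) is linearly separable."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(max_passes):
        mistakes = 0
        for x, label in zip(X, y):
            if label * np.dot(w, x) <= 0:   # misclassified (or on the boundary)
                w = w + label * x           # perceptron update
                mistakes += 1
        if mistakes == 0:                   # a full pass with no mistakes: w separates X
            return w
    raise RuntimeError("no separator found within max_passes")

# Usage on the two-point example above:
X = np.array([[1.0, 0.0], [-1.0, 0.0]])
y = np.array([+1, -1])
w = perceptron(X, y)
print(np.sign(X @ w))   # matches the labels [ 1. -1.]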