Sam Boosalis

Algorithm

Results

Difficult v. Random Negatives

"Implement a strategy for iteratively re-training a classifier with mined hard negatives and compare this to a baseline with random negatives."

Shared Params

Evaluation

Stages=1 (HOG, linear kernel)

Stages=2 (HOG, linear kernel)

Evaluation

Linear v. Nonlinear kernels

"Utilize and compare linear and non-linear classifiers. Linear classifiers can be trained with large amounts of data, but they may not be expressive enough to actually benefit from that training data. Non-linear classifiers can represent complex decision boundaries, but are limited in the amount of training data they can use at once."

linear kernel, HOG

Params

Evaluation

nonlinear kernel, HOG

Params

Evaluation

Start Scale (a parameter tuning of my own)

Shared Params

Evaluation

Start Scale=3 (SIFT, linear kernel)

Start Scale=2 (SIFT, linear kernel)
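My reading of "start scale" (an assumption, since the report does not define it) is the first level of the detector's image pyramid that gets scanned: a higher start scale skips the finest levels, trading recall on small detections for speed. A minimal sketch:

```python
# A guess at what 'start scale' could control: the first image-pyramid
# level that is processed. Hypothetical helper, for illustration only.
import numpy as np

def pyramid(image, n_levels=6, factor=0.75, start_scale=1):
    """Yield (level, rescaled image), skipping levels below start_scale."""
    for level in range(1, n_levels + 1):
        scale = factor ** (level - 1)
        h = int(image.shape[0] * scale)
        w = int(image.shape[1] * scale)
        if level >= start_scale:
            # Nearest-neighbor resize, to keep the sketch dependency-free.
            rows = (np.arange(h) / scale).astype(int)
            cols = (np.arange(w) / scale).astype(int)
            yield level, image[rows][:, cols]

img = np.zeros((480, 640))
for level, im in pyramid(img, start_scale=3):   # start_scale=3 skips levels 1-2
    print(level, im.shape)
```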

Lambda, SVM with an RBF kernel (a parameter tuning of my own)

Shared Params

Evaluation

Lambda=1 (HOG, nonlinear kernel)

Lambda=0.01 (HOG, nonlinear kernel)
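For context on what lambda controls: assuming the usual SVM objective in which lambda weights the regularizer, a smaller lambda means weaker regularization and a more flexible fit. scikit-learn's SVC exposes C rather than lambda, with roughly C ≈ 1 / (lambda · n); the sketch below compares the two settings above under that assumed mapping, on synthetic data.

```python
# How lambda trades off regularization in an RBF SVM. The C = 1/(lambda*n)
# conversion is an assumption about the solver's objective, not a fact
# from the report.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))
y = (X[:, 0] * X[:, 1] > 0).astype(int)          # nonlinear target

n = len(X)
for lam in (1.0, 0.01):                          # the two settings compared above
    clf = SVC(kernel="rbf", C=1.0 / (lam * n))
    acc = cross_val_score(clf, X, y, cv=3).mean()
    print(f"lambda={lam}: cv accuracy={acc:.3f}")
```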

Discussion

Mining Hard Negatives

We see a threefold increase in accuracy from just a single round of hard-negative mining.

Sophisticated Features

After getting dismal results from normalized raw pixels, I switched to SIFT and HOG representations of the images. This alone dramatically improved performance.
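For reference, a HOG descriptor takes only a few lines to compute. The sketch below uses skimage.feature.hog (an assumption, since the report does not name a library), turning a grayscale patch into a fixed-length vector of local gradient statistics.

```python
# HOG feature extraction for one patch; the patch here is random noise,
# standing in for a normalized grayscale training example.
import numpy as np
from skimage.feature import hog

patch = np.random.default_rng(0).random((64, 64))   # stand-in grayscale patch

features = hog(
    patch,
    orientations=9,             # gradient orientation bins per cell
    pixels_per_cell=(8, 8),
    cells_per_block=(2, 2),     # blocks used for local contrast normalization
)
print(features.shape)           # one fixed-length vector per patch
```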

Accuracy

I got accuracies between 23% and 63%. With an AP over 63% (seen above), my final classifier was a nonlinear SVM with an RBF kernel, a HOG feature representation, and a lambda of 0.01. It was trained on 1000 positive and negative training examples.

lambda and stages

Here there is an interplay between two parameters: 'lambda' and 'stages'. Lambda determines whether the SVM overfits or underfits, while stages determines how difficult the SVM's training data is: each additional stage of mining yields harder negatives. The two are quite interdependent, and thus should not be tuned independently.
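One way to respect that dependence is to search the two parameters jointly over a small grid, scoring each (lambda, stages) pair on held-out data. The sketch below does this with synthetic data and a toy mining loop; the helper and the grid values are illustrative, not the report's actual protocol.

```python
# Joint grid search over (lambda, stages), since harder mined negatives
# shift which lambda fits best. Synthetic data; train_mined is a toy helper.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
pos = rng.normal(1.0, 1.0, size=(500, 64))
neg_pool = rng.normal(-1.0, 1.0, size=(5000, 64))
val_X = rng.normal(0.0, 1.5, size=(400, 64))
val_y = np.sign(val_X.mean(axis=1))              # held-out labels

def train_mined(lam, stages, n_neg=500):
    """Train with `stages` rounds of hard-negative mining at weight `lam`."""
    neg = neg_pool[rng.choice(len(neg_pool), n_neg, replace=False)]
    for _ in range(stages):
        X = np.vstack([pos, neg])
        y = np.hstack([np.ones(len(pos)), -np.ones(len(neg))])
        clf = LinearSVC(C=1.0 / (lam * len(X))).fit(X, y)
        neg = neg_pool[np.argsort(clf.decision_function(neg_pool))[-n_neg:]]
    return clf

best = max(
    ((lam, s) for lam in (1.0, 0.1, 0.01) for s in (1, 2)),
    key=lambda p: train_mined(*p).score(val_X, val_y),
)
print("best (lambda, stages):", best)
```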

nonlinear SVMs

The nonlinear SVM is best tuned with a small (<1) lambda.

Musing