Randomized Optimization
Class | Instructor | Date | Language | TA'ed | Code |
---|---|---|---|---|---|
CS 4641 Machine Learning | Charles Isbell | Spring 2014 | Python | No | Code N/A |
Research Projects (CMA) | Karen Liu | Spring 2014-Spring 2015 | C++ | No | Code N/A |
This project was probably my favorite of the Machine Learning class assignments, and the one that has had the biggest impact on my subsequent projects and research. I implemented Randomized Hill Climbing, Simulated Annealing, a Genetic Algorithm, and the MIMIC algorithm, and used each to find the weights of a neural net (instead of using backpropagation) to classify data sets I had already classified in a previous assignment, so that I could compare the performance of the algorithms. I also had to run these algorithms on three "optimization" problems (Optimized Knapsack, Travelling Salesman, and Continuous Peaks) to see which performed best. A result I found interesting was that, even though these optimization problems are all essentially re-phrasings of Optimized 3-SAT (and so, in effect, re-phrasings of each other), different algorithms performed markedly better on different problems. In short, the phrasing of "the question" determined which algorithm did best for these problems.
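Of the four algorithms, Simulated Annealing is the easiest to sketch. Below is a minimal, self-contained Python sketch of Simulated Annealing on a Continuous Peaks-style bitstring fitness function; the specific threshold `t`, bonus `r`, cooling schedule, and problem size are illustrative assumptions, not the assignment's actual settings:

```python
import math
import random

def continuous_peaks(bits, t=10, r=50):
    """Continuous Peaks-style fitness: the longest run of 0s or 1s,
    plus a bonus r when both runs exceed the threshold t.
    (t and r are illustrative values, not the assignment's settings.)"""
    def longest_run(b, v):
        best = cur = 0
        for x in b:
            cur = cur + 1 if x == v else 0
            best = max(best, cur)
        return best
    zeros, ones = longest_run(bits, 0), longest_run(bits, 1)
    return max(zeros, ones) + (r if zeros > t and ones > t else 0)

def simulated_annealing(fitness, n=60, temp=10.0, cooling=0.999,
                        iters=20000, seed=0):
    """Single-bit-flip Simulated Annealing over length-n bitstrings."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n)]
    score = fitness(state)
    best, best_score = state[:], score
    for _ in range(iters):
        i = rng.randrange(n)
        state[i] ^= 1                       # propose: flip one random bit
        new_score = fitness(state)
        delta = new_score - score
        if delta >= 0 or rng.random() < math.exp(delta / temp):
            score = new_score               # accept improvement or uphill move
            if score > best_score:
                best, best_score = state[:], score
        else:
            state[i] ^= 1                   # reject: undo the flip
        temp = max(temp * cooling, 1e-3)    # geometric cooling with a floor
    return best, best_score
```

The temperature controls how willingly the search accepts downhill moves: early on it wanders like random search, and as `temp` decays it degrades gracefully into randomized hill climbing.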
For Professor Liu I implemented numerous CMA-ES-based optimizers for various projects. When I am tackling a uni-modal randomized search/optimization problem, I usually try CMA-ES right after I try Simulated Annealing.
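Full CMA-ES adapts an entire covariance matrix for its mutation distribution, which takes considerably more code. As a simplified stand-in that conveys the same evolution-strategy idea on uni-modal continuous problems, here is a (1+1)-ES with the classic 1/5th-success-rule step-size adaptation (the objective, starting point, and constants are illustrative assumptions, and this is not the CMA-ES implementation used in the projects above):

```python
import math
import random

def one_plus_one_es(f, x0, sigma=1.0, iters=2000, seed=0):
    """(1+1) evolution strategy with 1/5th-success-rule step-size control.
    A simplified stand-in for CMA-ES: it adapts a single global step size
    rather than a full covariance matrix."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        # Sample one Gaussian offspring around the current parent.
        y = [xi + sigma * rng.gauss(0, 1) for xi in x]
        fy = f(y)
        if fy <= fx:
            x, fx = y, fy
            sigma *= math.exp(1 / 3)    # success: widen the search
        else:
            sigma *= math.exp(-1 / 12)  # failure: narrow the search
    return x, fx

# Usage: minimize a simple uni-modal (sphere) objective.
x, fx = one_plus_one_es(lambda v: sum(t * t for t in v), [5.0, -3.0, 2.0])
```

The two step-size factors are chosen so that the step size stays constant when exactly one in five offspring succeeds, shrinking the search radius as the optimizer closes in on a uni-modal optimum.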