On 18 Oct 2016, I gave a talk at Austin ACM SIGKDD on the k-nearest neighbors algorithm. Topics included some machine learning theory (approximation vs. generalization, VC dimension), the algorithm itself, a proof of the algorithm's performance guarantees, and some practical concerns around choosing k.
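For anyone who missed the talk, here's a minimal sketch of the algorithm in Python: classify a point by majority vote among its k nearest training examples. The function name, toy data, and the use of Euclidean distance are my illustrative choices here, not necessarily what I showed on the slides.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k):
    """Classify x by majority vote among its k nearest training points."""
    # Euclidean distance from x to every training point
    dists = np.linalg.norm(X_train - x, axis=1)
    # Indices of the k closest training points
    nearest = np.argsort(dists)[:k]
    # Majority vote among the neighbors' labels
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy data: two small clusters in the plane
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
y = np.array([0, 0, 0, 1, 1, 1])

print(knn_predict(X, y, np.array([0.15, 0.15]), k=3))  # -> 0
print(knn_predict(X, y, np.array([0.95, 1.05]), k=3))  # -> 1
```

On the "choosing k" front: an odd k avoids ties in binary classification, small k tends toward low bias and high variance, and in practice k is usually picked by cross-validation.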
Some other topics that I would probably include next time are similarity functions, high-dimensional spaces, and categorical data.
You can find the slides here. Note that the talk probably won't make much sense from the slides alone, as they were mostly visual aids for what I was saying out loud. If you've got any questions, feel free to email me; I'd love to chat!
Thanks to everyone who came to watch; I appreciated hearing your feedback!