In machine learning, generalisation is the aim and overfitting is the bane; but avoiding the latter does not guarantee the former. Of particular importance in some applications of machine learning is the “sanity” of the models learnt. In this talk Bob Sturm discusses one discipline in which model sanity is essential — machine music listening — and how hundreds of research publications may have unknowingly built, tuned, tested, compared and advertised “horses” instead of solutions. The true cautionary tale of the horse-genius Clever Hans provides the most apt illustration, but also points to ways forward.
YOU MAY ALSO LIKE:
- Brian Sletten's Data Science with Python Workshop (in London on 18th - 20th November 2019)
- Fast Track to Machine Learning with Louis Dorard (in London on 2nd - 4th December 2019)
- Practical ML 2020 (in London on 2nd - 3rd July 2020)
- A Guide to the Market Promise of Automagic AI-Enabled Detection and Response (in London on 29th October 2019)
- Keynote by Naoki Takezoe on Revisit Dependency Injection in Scala and Introduction to Airframe (in London on 25th November 2019)
- Abstract Data Types In The Region Of Abysmal Pain, And How To Navigate Them (SkillsCast recorded in September 2019)
- Using Kubeflow Pipelines for building machine learning pipelines (SkillsCast recorded in September 2019)
Clever Hans, Clever Algorithms: Are your machine learnings learning what you think?
Since the beginning of 2015, Bob Sturm has been a lecturer in the School of Electronic Engineering and Computer Science at Queen Mary University of London. He specialises in audio and music signal processing, machine listening, and evaluation.