SkillsCast

Making good predictions from many bad ones

5th December 2016 in London at CodeNode

There are 2 other SkillsCasts available from Progressive F# Tutorials 2016

SkillsCast coming soon.

Making bad predictions is pretty easy. But what if you could find a way to take many simple and mediocre prediction models, and combine them into a meta-model that works better than the sum of its individual parts? This question is the focus of what Machine Learning people call Ensemble Methods.
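To make the idea concrete, here is a minimal sketch of an ensemble, written in F#. This is not the tutorial's code: the wine features, rules of thumb, and thresholds are invented for illustration. Each individual model is a crude guess from a single feature; the meta-model simply takes a majority vote.

```fsharp
// A wine described by a few (illustrative) features.
type Wine = { Alcohol: float; Acidity: float; Sulphates: float }

// Three mediocre models: each judges a wine from one feature.
// The thresholds are made up for the sake of the example.
let byAlcohol w = w.Alcohol > 10.5
let byAcidity w = w.Acidity < 0.6
let bySulphates w = w.Sulphates > 0.65

// The meta-model: combine any list of models by majority vote.
let ensemble (models: (Wine -> bool) list) wine =
    let votes = models |> List.filter (fun m -> m wine) |> List.length
    votes * 2 > List.length models

let isGood = ensemble [ byAlcohol; byAcidity; bySulphates ]
```

Even when each rule of thumb is wrong fairly often, the majority vote can be wrong less often, provided the individual models make different mistakes.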

In this session, you will explore one of these techniques (boosting) and attempt to predict whether a bottle of wine is good or terrible. You will start from scratch, with extremely basic building blocks, and progressively combine them into more powerful prediction models, adjusting at each step to correct the mistakes made so far.
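One natural "extremely basic building block" for boosting is a decision stump: a one-level decision tree that compares a single feature against a threshold and predicts one of two labels. The sketch below is an assumption about the shape such a block might take, not the tutorial's actual code; the feature index and threshold are purely illustrative.

```fsharp
// A decision stump: predict +1.0 or -1.0 by comparing one feature
// of an observation (an array of floats) against a threshold.
let stump featureIndex threshold (observation: float[]) =
    if observation.[featureIndex] > threshold then 1.0 else -1.0

// Example: a stump splitting on feature 0 at threshold 10.5.
let predict = stump 0 10.5
```

On its own a stump is a very weak predictor, which is exactly the point: boosting builds a strong model by training many such stumps in sequence, each one focusing on the examples the previous ones got wrong.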

Along the way, you will see some of the features that make F# a wonderful language for data exploration and algorithm design, on a real dataset. And once you are done, you'll be able to impress your friends and colleagues with fancy words such as Decision Stumps or Gradient Boosting.

This session is beginner-friendly; no prior knowledge of F# or Machine Learning is required.


About the Speaker

Mathias Brandewinder

Mathias Brandewinder has been writing software in C# for about 10 years, and loving every minute of it, except maybe for a few release days. He is an F# MVP, the author of "Machine Learning Projects for .NET Developers" (Apress), enjoys arguing about code and how to make it better, and gets very excited when discussing TDD or F#.
