Are you interested in learning more about stacking your machine learning models? Come and join us for this month's InfiniteConf Bytes with Marios Michailidis!
Stacking (or stacked generalization) is a technique that allows the data scientist to combine many different machine learning models in order to make better predictions. This technique has been used to win many machine learning competitions, for example on Kaggle.
This talk will present the basic elements of stacking and a generalised framework that uses it, called StackNet. StackNet is a computational, scalable and analytical framework that resembles a feedforward neural network and uses stacking across multiple levels to improve the accuracy of predictions. StackNet will be demonstrated through practical examples, with tips on how to build even stronger models.
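To give a flavour of the idea ahead of the talk, here is a minimal sketch of two-level stacking with scikit-learn. This is an illustration of the general technique, not the StackNet implementation itself; the dataset, base models and meta-model are all arbitrary choices for the example.

```python
# Minimal two-level stacking sketch (illustrative, not StackNet itself).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.metrics import accuracy_score

# Synthetic binary classification data for demonstration.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Level 0: two different base models.
base_models = [
    LogisticRegression(max_iter=1000),
    DecisionTreeClassifier(max_depth=5, random_state=0),
]

# Out-of-fold predictions on the training set, so the meta-model is
# never trained on predictions a base model made for rows it saw.
train_meta = np.column_stack([
    cross_val_predict(m, X_train, y_train, cv=5, method="predict_proba")[:, 1]
    for m in base_models
])

# Refit each base model on the full training set to score the test set.
test_meta = np.column_stack([
    m.fit(X_train, y_train).predict_proba(X_test)[:, 1]
    for m in base_models
])

# Level 1: a simple meta-model combines the base predictions.
meta_model = LogisticRegression()
meta_model.fit(train_meta, y_train)
acc = accuracy_score(y_test, meta_model.predict(test_meta))
print(f"stacked accuracy: {acc:.3f}")
```

Frameworks like StackNet generalise this pattern to many models and many levels, with each level's predictions feeding the next, much like activations flowing through a feedforward network.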
YOU MAY ALSO LIKE:
- Brian Sletten's Data Science with R Workshop (in London on 2nd - 4th July 2018)
- Infiniteconf 2018 - The conference on Big Data and AI (in London on 5th - 6th July 2018)
- Fast Track to Machine Learning with Louis Dorard (in London on 3rd - 5th September 2018)
- Real-time Data Engineering in the Cloud (in London on 24th - 25th September 2018)