Are you interested in learning more about stacking your machine learning models? Come and join us for this month's InfiniteConf Bytes with Marios Michailidis!
Stacking (or stacked generalization) is a technique that lets a data scientist combine many different machine learning models to make better predictions. It has been used to win many machine learning competitions, notably on Kaggle.
This talk will present the basic elements of stacking and a generalised framework built on it called StackNet. StackNet is a computational, scalable and analytical framework that resembles a feedforward neural network and applies stacking at multiple levels to improve prediction accuracy. StackNet will be demonstrated through practical examples, with tips on how to make even stronger models.
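To make the idea concrete, here is a minimal two-level stacking sketch using scikit-learn (an illustrative example, not StackNet itself, and the choice of base and meta models is arbitrary): out-of-fold predictions from the base models become the input features for a meta-model.

```python
# Minimal stacking sketch (illustrative, not StackNet):
# base models' out-of-fold predictions feed a level-2 meta-model.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base_models = [RandomForestClassifier(random_state=0),
               KNeighborsClassifier()]

# Level-1 features: out-of-fold probabilities from each base model,
# so the meta-model never sees predictions made on a model's own
# training rows (this is what prevents leakage in stacking).
train_meta = np.column_stack([
    cross_val_predict(m, X_train, y_train, cv=5,
                      method="predict_proba")[:, 1]
    for m in base_models
])

# Refit each base model on the full training set to score the test set.
test_meta = np.column_stack([
    m.fit(X_train, y_train).predict_proba(X_test)[:, 1]
    for m in base_models
])

# Level-2 meta-model learns how to blend the base models' outputs.
meta_model = LogisticRegression().fit(train_meta, y_train)
print("stacked accuracy:", meta_model.score(test_meta, y_test))
```

StackNet generalises this pattern to many layers, like a feedforward network whose "neurons" are whole models.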
YOU MAY ALSO LIKE:
- Fast Track to Machine Learning with Louis Dorard (in London on 21st - 23rd May 2018)
- Brian Sletten's Data Science with R Workshop (in London on 2nd - 4th July 2018)
- Infiniteconf 2018 - The conference on Big Data and Fast Data (in London on 5th - 6th July 2018)
- Blockchain by Brian Sletten (in London on 9th - 10th July 2018)