Are you interested in learning more about stacking your machine learning models? Join us for this month's InfiniteConf Bytes with Marios Michailidis!
Stacking (or stacked generalization) is a technique that lets a data scientist combine many different machine learning models to make better predictions. It has been used to win many machine learning competitions, notably on Kaggle.
This talk will present the basic elements of stacking and a generalised framework built around it called StackNet. StackNet is a computational, scalable and analytical framework that resembles a feedforward neural network and applies stacking at multiple levels to improve prediction accuracy. StackNet will be demonstrated through practical examples, with tips on how to build even stronger models.
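As a small taste of the idea (this sketch is illustrative and not taken from the talk): a two-level stack can be built with scikit-learn's `StackingClassifier`, where out-of-fold predictions from the base learners become the input features of a meta-learner.

```python
# Minimal two-level stacking sketch (illustrative; dataset and model
# choices here are assumptions, not material from the talk).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Level 0: diverse base learners whose cross-validated predictions
# are fed to the next level.
base_learners = [
    ("rf", RandomForestClassifier(n_estimators=50, random_state=42)),
    ("lr", LogisticRegression(max_iter=1000)),
]

# Level 1: a meta-learner that combines the base predictions.
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,  # out-of-fold predictions avoid leaking training labels
)
stack.fit(X_train, y_train)
print("held-out accuracy:", round(stack.score(X_test, y_test), 3))
```

StackNet generalises this pattern to arbitrarily many levels, in the spirit of a feedforward network whose "neurons" are whole models.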