SkillsCast

Distributed deep learning

16th December 2015 in London at CodeNode

This SkillsCast was filmed at Machine Learning, Deep Learning, SystemML & Apache Spark

Jan’s talk will explore the latest developments of the Muvr project. After a brief introduction to the nature of distributed systems (from distributed databases, through distributed state, to distributed computation), you will learn how to use Akka Cluster and Akka Persistence to implement such distributed systems. Once the data is safely stored in a journal, it is important to be able to perform deeper analysis. To do so, Jan will show you how to access the data in the journal from a distributed computation program running in Apache Spark.
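To make the journal idea concrete, here is a minimal sketch of event sourcing, the pattern behind Akka Persistence: state is never overwritten; every change is appended to a journal, and the current state is recovered by replaying the stored events. This is a plain-Python illustration of the concept, not the Akka API, and the deposit/withdraw events are a hypothetical example domain.

```python
# Append-only event log; Akka Persistence would store this durably
# (and Spark would later read it back for batch analysis).
journal = []

def persist(event):
    # Record the event; the state change is implied by the event itself.
    journal.append(event)

def replay(events):
    # Rebuild the current state by folding over the stored events.
    balance = 0
    for kind, amount in events:
        if kind == "deposit":
            balance += amount
        elif kind == "withdraw":
            balance -= amount
    return balance

persist(("deposit", 100))
persist(("withdraw", 30))
print(replay(journal))  # -> 70
```

Because the journal is the source of truth, any number of downstream readers (a recovering actor, or a Spark job scanning the whole journal) can derive state or analytics from the same event stream.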

With the mechanics in place, Jan will explain how (deep) neural networks can be trained, with surprisingly little effort, to recognize patterns in the ingested data.

You will learn the advantages and traps of designing distributed domains, data and computation: systems that may become the next generation of financial systems, bringing elasticity, resilience and responsiveness. You will look in particular at systems that consume data from IoT / wearables, and that perform both immediate and batch analyses.


About the Speaker

Jan Machacek

Jan Machacek is a passionate technologist with hands-on experience of the practical aspects of software delivery (architecture, quality, CI, CD), the project management approaches (applying the principles of agile project management), and mentoring and motivating engineering & business teams.
