
Lessons learned while implementing my own distributed deep learning framework on Spark

TensorFlow, Caffe, and Chainer are well-known general-purpose deep learning frameworks. Implementing your own framework, however, is often considered difficult because of the mathematics involved and framework-specific notions such as computation graphs and automatic differentiation (autograd). On top of that, making such a framework scalable requires even harder work. Apache Spark is a general-purpose distributed data processing engine written in Scala. In this session, I will talk about what I learned while implementing my own distributed deep learning framework (dllib) on that platform.
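As a rough illustration of the computation-graph and autograd notions mentioned above, the sketch below builds a tiny graph of scalar operations and runs reverse-mode differentiation over it. It is purely illustrative: the names (`Node`, `backward`) are hypothetical and unrelated to dllib's actual API, and Python is used here only for brevity (dllib itself runs on Spark/Scala).

```python
# Minimal reverse-mode autograd over a scalar computation graph.
# Hypothetical names for illustration only; not dllib's API.

class Node:
    def __init__(self, value, parents=()):
        self.value = value
        # Each parent is stored with the local gradient d(self)/d(parent).
        self.parents = list(parents)
        self.grad = 0.0

    def __add__(self, other):
        return Node(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Node(self.value * other.value,
                    [(self, other.value), (other, self.value)])

def backward(output):
    # Topologically order the graph, then apply the chain rule in reverse,
    # accumulating each parent's gradient contribution.
    order, seen = [], set()
    def visit(node):
        if id(node) not in seen:
            seen.add(id(node))
            for parent, _ in node.parents:
                visit(parent)
            order.append(node)
    visit(output)
    output.grad = 1.0
    for node in reversed(order):
        for parent, local_grad in node.parents:
            parent.grad += node.grad * local_grad

x = Node(2.0)
y = Node(3.0)
z = x * y + x      # z = x*y + x, so dz/dx = y + 1 and dz/dy = x
backward(z)
```

A distributed framework then has to decide how to shard such graphs and their gradient updates across Spark executors, which is where most of the hard work lies.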

Session length
40 minutes
Language of the presentation
Japanese
Target audience
Intermediate: Requires a basic knowledge of the area
Who is your session intended for
People who are interested in distributed systems
People who have used Apache Spark before
Speaker
Kai Sasaki (Software Engineer, Treasure Data)
