When validating data with Spark, or reading and writing it to Kafka topics, the go-to solution is to write a Scala case class or a Java Bean. But what if you had only 5 developers, 10,000+ data structures and just a few months to ship your project? Let me show you how the power of hylomorphisms combined with expressive schemas allowed us to write code that validates and transforms data from dozens of tables across hundreds of data sources, and to ship our project on time and on budget.
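To give a flavour of the recursion-scheme idea the talk is built on, here is a minimal sketch of a hylomorphism over a tiny schema language. All names (SchemaF, Desc, the string-rendering algebra) are illustrative assumptions, not the actual code from the project:

```scala
// Pattern functor for a tiny schema language: a value is either a
// primitive type name or a struct of named fields.
sealed trait SchemaF[+A]
case class PrimF(name: String)                   extends SchemaF[Nothing]
case class StructF[A](fields: List[(String, A)]) extends SchemaF[A]

object SchemaF {
  // Functor map for SchemaF.
  def map[A, B](fa: SchemaF[A])(f: A => B): SchemaF[B] = fa match {
    case PrimF(n)        => PrimF(n)
    case StructF(fields) => StructF(fields.map { case (k, a) => (k, f(a)) })
  }

  // hylo: unfold a seed with `coalg`, then fold the result with `alg`,
  // fused so the intermediate structure is never materialised.
  def hylo[A, B](coalg: A => SchemaF[A], alg: SchemaF[B] => B)(seed: A): B =
    alg(map(coalg(seed))(hylo(coalg, alg)))
}

object Demo extends App {
  // Hypothetical input format: a schema description as nested pairs.
  sealed trait Desc
  case class P(name: String)                 extends Desc
  case class S(fields: List[(String, Desc)]) extends Desc

  // Coalgebra: one layer of unfolding from the description.
  val coalg: Desc => SchemaF[Desc] = {
    case P(n)  => PrimF(n)
    case S(fs) => StructF(fs)
  }

  // Algebra: render one layer of schema as a DDL-like string.
  val alg: SchemaF[String] => String = {
    case PrimF(n)        => n
    case StructF(fields) => fields.map { case (k, t) => s"$k: $t" }.mkString("{ ", ", ", " }")
  }

  val schema = S(List("id" -> P("long"), "user" -> S(List("name" -> P("string")))))
  println(SchemaF.hylo(coalg, alg)(schema)) // { id: long, user: { name: string } }
}
```

Swapping in a different algebra (e.g. one that produces a Spark schema or a validator instead of a string) reuses the same traversal, which is the point the talk expands on.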
Session length
40 minutes
Language of the presentation
English
Target audience
Intermediate: Requires a basic knowledge of the area
Who is your session intended for
People who need to manipulate many different types of data safely
People interested in functional solutions to concrete problems
People curious about recursion schemes