Spark: java.lang.UnsupportedOperationException: No Encoder found for java.time.LocalDate

For a custom Dataset element type, you can use the Kryo serialization framework, as long as your data is actually serializable (i.e. it implements Serializable). Here is one example of using Kryo: Spark No Encoder found for in Map[String,].
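As a minimal sketch of the Kryo route (assuming Spark 2.x; the object and session names here are mine, for illustration), you register a Kryo-backed `Encoder` as an implicit and Spark picks it up when building the Dataset:

```scala
import java.time.LocalDate

import org.apache.spark.sql.{Encoder, Encoders, SparkSession}

object KryoLocalDateExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("kryo-localdate")
      .getOrCreate()

    // A Kryo-backed Encoder for a type Spark has no built-in encoder for.
    // Note: the Dataset column is stored as one opaque binary blob,
    // not as a typed date column you can filter on in SQL.
    implicit val localDateEncoder: Encoder[LocalDate] = Encoders.kryo[LocalDate]

    val ds = spark.createDataset(Seq(LocalDate.of(2017, 1, 1), LocalDate.of(2017, 6, 15)))
    println(ds.count())

    spark.stop()
  }
}
```

The trade-off to keep in mind is that `Encoders.kryo` serializes the whole object, so Spark SQL cannot see inside it; it is a pragmatic escape hatch rather than a first-class column type.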

Kryo is generally recommended since it is much faster, and it can handle most classes that are Java-serializable. You can certainly use Java's native serialization (ObjectOutputStream/ObjectInputStream) instead, but it is much slower.
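For comparison, the Java-native route is just an ObjectOutputStream/ObjectInputStream round trip. This sketch (helper names are mine) also demonstrates that LocalDate is indeed Serializable:

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}
import java.time.LocalDate

object JavaSerdeRoundTrip {
  // Serialize a value to bytes and read it back with Java native serialization.
  def roundTrip[T <: Serializable](value: T): T = {
    val buffer = new ByteArrayOutputStream()
    val out = new ObjectOutputStream(buffer)
    out.writeObject(value)
    out.close()
    val in = new ObjectInputStream(new ByteArrayInputStream(buffer.toByteArray))
    try in.readObject().asInstanceOf[T] finally in.close()
  }

  def main(args: Array[String]): Unit = {
    val date = LocalDate.of(2017, 3, 14)
    // The deserialized copy is equal to the original.
    println(roundTrip(date) == date)  // prints true
  }
}
```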

As the comments above note, Spark SQL ships with many useful Encoders under sqlContext.implicits._ (spark.implicits._ in Spark 2.x), but they don't cover every type, so you may have to plug in your own Encoder.
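To make the boundary concrete, here is a sketch (Spark 2.x, local session and names for illustration) of what the implicits do cover: case classes whose fields are supported types get an Encoder derived for free, whereas a LocalDate on its own does not.

```scala
import org.apache.spark.sql.SparkSession

object BuiltInEncodersExample {
  // A case class of primitive/String fields: spark.implicits._ derives
  // a product Encoder for it automatically.
  case class Event(id: Long, name: String)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("builtin-encoders")
      .getOrCreate()
    import spark.implicits._

    val ds = Seq(Event(1L, "a"), Event(2L, "b")).toDS()
    println(ds.map(_.name).collect().mkString(","))

    // By contrast, building a Dataset over java.time.LocalDate here would
    // fail with "No Encoder found for java.time.LocalDate" unless you
    // supply your own implicit Encoder (e.g. via Encoders.kryo).
    spark.stop()
  }
}
```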

As mentioned, your custom data has to be serializable; according to its Javadoc, java.time.LocalDate implements the Serializable interface, so you are definitely good here.