CDH 5.4.8 Release Notes
The following lists all Apache Spark JIRAs included in CDH 5.4.8 that are not
included in the Apache Spark base version 1.3.0. This file lists all changes
included in CDH 5.4.8; the patch for each change can be found in the
cloudera/patches directory in the release tarball.
Changes Not In Apache Spark 1.3.0
- [SPARK-6880] - Spark Shutdowns with NoSuchElementException when running parallel collect on cachedRDD
- [SPARK-8606] - Exceptions in RDD.getPreferredLocations() and getPartitions() should not be able to crash DAGScheduler
- [SPARK-6480] - histogram() bucket function is wrong in some simple edge cases
- [SPARK-7503] - Resources in .sparkStaging directory can't be cleaned up on error
- [SPARK-6954] - ExecutorAllocationManager can end up requesting a negative number of executors
- [SPARK-6299] - ClassNotFoundException in standalone mode when running groupByKey with class defined in REPL.
- [SPARK-7281] - No option for AM native library path in yarn-client mode.
- [SPARK-6868] - Container link broken on Spark UI Executors page when YARN is set to HTTPS_ONLY
- [SPARK-6506] - python support yarn cluster mode requires SPARK_HOME to be set
- [SPARK-6650] - ExecutorAllocationManager never stops
- [SPARK-6578] - Outbound channel in network library is not thread-safe, can lead to fetch failures
- [SPARK-6222] - [STREAMING] All data may not be recovered from WAL when driver is killed
- [SPARK-6325] - YarnAllocator crash with dynamic allocation on
- [SPARK-6300] - sc.addFile(path) does not support the relative path.
- [SPARK-7705] - Cleanup of .sparkStaging directory fails if application is killed
- [SPARK-5522] - Accelerate the History Server start
- [SPARK-6087] - Provide actionable exception if Kryo buffer is not large enough
- [SPARK-5983] - Don't respond to HTTP TRACE in HTTP-based UIs
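As context for SPARK-6087 above: when Kryo serialization overflows its buffer, the usual remedy is to enlarge the buffer limit in the Spark configuration. A minimal sketch of the relevant spark-defaults.conf entries, assuming Spark 1.3-era property names (later 1.x releases renamed these to spark.kryoserializer.buffer and spark.kryoserializer.buffer.max); the values shown are illustrative, not recommendations:

```
# spark-defaults.conf -- illustrative values, tune for your workload
# Spark 1.3-era property names (sizes given in MB)
spark.serializer                     org.apache.spark.serializer.KryoSerializer
spark.kryoserializer.buffer.max.mb   256
```

The same properties can be passed per-job with --conf on spark-submit instead of being set cluster-wide.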