“java.io.IOException: failed to uncompress the chunk” in Apache Spark

After I ran spark-submit on my YARN cluster with Spark 1.6.2:

the job failed, and the log reported:

Somebody on the internet said this might be caused by a compatibility problem between Spark 1.6.2 and Snappy. Therefore I added

to my spark-submit shell script to change the compression algorithm from Snappy to LZ4, and this time everything went fine.
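For reference, a sketch of what such a spark-submit invocation could look like. The `--master` value and jar name are placeholders; only the `--conf` line matters, and `spark.io.compression.codec` is the standard Spark property that selects the codec used for internal data such as shuffle output and RDD partitions:

```shell
# Hypothetical invocation; my_job.jar and the master URL are placeholders.
# spark.io.compression.codec switches the internal codec (snappy -> lz4).
spark-submit \
  --master yarn \
  --conf spark.io.compression.codec=lz4 \
  my_job.jar
```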

LZ4 is faster, but not better

In my project I need to compress small pieces of data without too heavy a performance penalty. Many people recommended LZ4 to me, since it is almost the fastest compression algorithm available at present.

Compression ratio is not the most important factor here, but it still needs to be evaluated. So I created a 4KB file containing normal text (taken from the README file of an open source project) and compressed it with LZ4 and GZIP.
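The measurement itself is just original size divided by compressed size. A minimal sketch using Python's standard-library gzip module (the LZ4 side would need a third-party binding, so only GZIP is shown, and the inline sample text is a stand-in for the README file):

```python
import gzip

def compression_ratio(data: bytes) -> float:
    """Original size divided by compressed size (higher is better)."""
    return len(data) / len(gzip.compress(data))

# Stand-in for the ~4 KB README used in the test; repeating one sentence
# overstates the ratio, so treat the printed number as illustrative only.
sample = b"Build the project with make, then install it with make install.\n" * 64
print(f"GZIP ratio: {compression_ratio(sample):.2f}")
```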

                   LZ4    GZIP
Compression Ratio  1.57   2.24

Hmm, looks like the compression ratio of LZ4 is not bad. But when I ran the test with this special content (adapted from here):

the result became interesting:

                   LZ4    GZIP
Compression Ratio  0.99   2.25

GZIP compressed this content as well as before, but LZ4 didn't compress it at all. So LZ4 fails on some special content (such as sequences of numbers), even though it is faster than almost any other compression algorithm.
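The reason is that LZ4 is a pure LZ77-style matcher with no entropy-coding stage: if the input has almost no repeated substrings, it emits it nearly verbatim (plus framing overhead, hence a ratio just below 1). DEFLATE, which GZIP uses, follows LZ77 with Huffman coding, so even match-free digit text with a 10-symbol alphabet shrinks to roughly 3.4 bits per byte. A rough illustration with Python's standard library, using random digits as a hypothetical stand-in for the article's special content (only the GZIP side is shown, since LZ4 needs a third-party package):

```python
import gzip
import random

# Stand-in for the "special content": uniformly random decimal digits,
# i.e. text with almost no repeated substrings for LZ77 to match.
random.seed(42)
digits = "".join(random.choice("0123456789") for _ in range(4096)).encode()

# GZIP's Huffman stage still compresses the 10-symbol alphabet well.
compressed = gzip.compress(digits)
print(f"GZIP ratio: {len(digits) / len(compressed):.2f}")
```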
Somebody may ask: why not use a specialized algorithm to compress numeric content? The answer is: in a real situation, we cannot know in advance what type of content our small data segments will hold.