Preface
Notes on the compression formats available when creating Hive tables with Spark.
Background
I was benchmarking the compression algorithms available for Hive tables stored as Parquet and ORC files. Using the Spark Thrift Server, I created tables via SQL statements and compared the compression ratio and query performance of gzip and snappy for Parquet, and snappy and zlib for ORC.
parquet
Append to the end of the CREATE TABLE statement:
STORED AS PARQUET
Parquet's default compression is snappy. To switch to another codec such as gzip, append to the end of the CREATE TABLE statement:
STORED AS PARQUET TBLPROPERTIES('parquet.compression'='GZIP')
To verify that the setting took effect, check the file names under the table's path:
snappy files end with: .snappy.parquet
gzip files end with: .gz.parquet
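Putting the pieces above together, a complete statement might look like the following sketch (the table and column names are made up for illustration; only the STORED AS and TBLPROPERTIES clauses matter):

```sql
-- Hypothetical table; the codec is set via the parquet.compression table property
CREATE TABLE test_parquet_gzip (
  id   BIGINT,
  name STRING
)
STORED AS PARQUET
TBLPROPERTIES ('parquet.compression' = 'GZIP');
```

After inserting some data, the files under the table's path should carry the .gz.parquet suffix.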
The codec can also be changed through a Spark configuration parameter:
--conf spark.sql.parquet.compression.codec=gzip
The definition in the Spark SQL source:
val PARQUET_COMPRESSION = buildConf("spark.sql.parquet.compression.codec")
  .doc("Sets the compression codec used when writing Parquet files. If either `compression` or " +
    "`parquet.compression` is specified in the table-specific options/properties, the " +
    "precedence would be `compression`, `parquet.compression`, " +
    "`spark.sql.parquet.compression.codec`. Acceptable values include: none, uncompressed, " +
    "snappy, gzip, lzo, brotli, lz4, zstd.")
  .version("1.1.1")
  .stringConf
  .transform(_.toLowerCase(Locale.ROOT))
  .checkValues(Set("none", "uncompressed", "snappy", "gzip", "lzo", "lz4", "brotli", "zstd"))
  .createWithDefault("snappy")
From this you can see that Parquet's default codec is snappy, and the accepted values are: "none", "uncompressed", "snappy", "gzip", "lzo", "lz4", "brotli", "zstd".
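Besides passing --conf when launching the Thrift Server, the same property from the source above can also be changed per session from the SQL client (e.g. beeline), which is convenient when comparing codecs in one sitting:

```sql
-- Session-level override; affects subsequent Parquet writes in this session
SET spark.sql.parquet.compression.codec=gzip;
```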
orc
Append to the end of the CREATE TABLE statement:
STORED AS ORC
ORC's default compression is also snappy. To switch to another codec such as zlib, append to the end of the CREATE TABLE statement:
STORED AS ORC TBLPROPERTIES('orc.compress'='zlib')
The equivalent Spark configuration parameter:
--conf spark.sql.orc.compression.codec=zlib
The definition in the Spark SQL source:
val ORC_COMPRESSION = buildConf("spark.sql.orc.compression.codec")
  .doc("Sets the compression codec used when writing ORC files. If either `compression` or " +
    "`orc.compress` is specified in the table-specific options/properties, the precedence " +
    "would be `compression`, `orc.compress`, `spark.sql.orc.compression.codec`." +
    "Acceptable values include: none, uncompressed, snappy, zlib, lzo.")
  .version("2.3.0")
  .stringConf
  .transform(_.toLowerCase(Locale.ROOT))
  .checkValues(Set("none", "uncompressed", "snappy", "zlib", "lzo"))
  .createWithDefault("snappy")
The accepted values are: "none", "uncompressed", "snappy", "zlib", "lzo".
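As with Parquet, a complete ORC statement might look like this sketch (again, the table and column names are hypothetical):

```sql
-- Hypothetical table; the codec is set via the orc.compress table property
CREATE TABLE test_orc_zlib (
  id   BIGINT,
  name STRING
)
STORED AS ORC
TBLPROPERTIES ('orc.compress' = 'zlib');
```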
Note the property keys: Parquet uses parquet.compression while ORC uses orc.compress. Don't mix them up. I initially wrote orc.compression, which silently had no effect, and I briefly assumed the codec couldn't be set through SQL at all.
From the source comment above ("the precedence would be `compression`, `orc.compress`, `spark.sql.orc.compression.codec`") you can see that parquet.compression and orc.compress take precedence over the Spark configuration parameter. As for the highest-priority `compression` key, I have not yet figured out how it is used.
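One guess, which I have not verified in these tests: since the source comment says `compression` comes from the "table-specific options/properties", it may be the data source option used with Spark's native CREATE TABLE ... USING syntax rather than a TBLPROPERTIES key, something like:

```sql
-- Unverified assumption: `compression` as an OPTIONS key in the USING syntax
CREATE TABLE test_orc_opt (id BIGINT, name STRING)
USING ORC
OPTIONS (compression 'zlib');
```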
json
Not compressed by default.
Available codecs: none, bzip2, gzip, lz4, snappy, deflate
text
Not compressed by default.
Available codecs: none, bzip2, gzip, lz4, snappy, deflate
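For these file-based formats the codec would again be passed as a data source option; a minimal sketch with a made-up table name, assuming the same OPTIONS mechanism as above:

```sql
-- Hypothetical table; gzip-compressed JSON output via the compression option
CREATE TABLE test_json_gzip (id BIGINT, name STRING)
USING JSON
OPTIONS (compression 'gzip');
```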
References
https://www.it610.com/article/1295563894860881920.htm