AWS Glue job - converting CSV to Parquet

I'm trying to use AWS Glue to convert a roughly 1.5 GB gzipped CSV to Parquet. The script below is the auto-generated Glue job for this task. It seems to take a very long time: I've waited several hours with 10 DPUs and never saw it finish or produce any output data.
I'm wondering whether anyone has experience converting 1.5 GB+ gzipped CSVs to Parquet - is there a better way to do this conversion?
I have terabytes of data to convert, so it's worrying that converting gigabytes takes this long.
My Glue job logs contain thousands of entries like:
18/03/02 20:20:20 DEBUG Client: 
client token: N/A
diagnostics: N/A
ApplicationMaster host: 172.31.58.225
ApplicationMaster RPC port: 0
queue: default
start time: 1520020335454
final status: UNDEFINED
tracking URL: http://ip-172-31-51-199.ec2.internal:20888/proxy/application_1520020149832_0001/
user: root

The auto-generated AWS Glue job code:

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
## @type: DataSource
## @args: [database = "test_datalake_db", table_name = "events2_2017_test", transformation_ctx = "datasource0"]
## @return: datasource0
## @inputs: []
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "test_datalake_db", table_name = "events2_2017_test", transformation_ctx = "datasource0")
## @type: ApplyMapping
## @args: [mapping = [("sys_vortex_id", "string", "sys_vortex_id", "string"), ("sys_app_id", "string", "sys_app_id", "string"), ("sys_pq_id", "string", "sys_pq_id", "string"), ("sys_ip_address", "string", "sys_ip_address", "string"), ("sys_submitted_at", "string", "sys_submitted_at", "string"), ("sys_received_at", "string", "sys_received_at", "string"), ("device_id_type", "string", "device_id_type", "string"), ("device_id", "string", "device_id", "string"), ("timezone", "string", "timezone", "string"), ("online", "string", "online", "string"), ("app_version", "string", "app_version", "string"), ("device_days", "string", "device_days", "string"), ("device_sessions", "string", "device_sessions", "string"), ("event_id", "string", "event_id", "string"), ("event_at", "string", "event_at", "string"), ("event_date", "string", "event_date", "string"), ("int1", "string", "int1", "string")], transformation_ctx = "applymapping1"]
## @return: applymapping1
## @inputs: [frame = datasource0]
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("sys_vortex_id", "string", "sys_vortex_id", "string"), ("sys_app_id", "string", "sys_app_id", "string"), ("sys_pq_id", "string", "sys_pq_id", "string"), ("sys_ip_address", "string", "sys_ip_address", "string"), ("sys_submitted_at", "string", "sys_submitted_at", "string"), ("sys_received_at", "string", "sys_received_at", "string"), ("device_id_type", "string", "device_id_type", "string"), ("device_id", "string", "device_id", "string"), ("timezone", "string", "timezone", "string"), ("online", "string", "online", "string"), ("app_version", "string", "app_version", "string"), ("device_days", "string", "device_days", "string"), ("device_sessions", "string", "device_sessions", "string"), ("event_id", "string", "event_id", "string"), ("event_at", "string", "event_at", "string"), ("event_date", "string", "event_date", "string"), ("int1", "string", "int1", "string")], transformation_ctx = "applymapping1")
## @type: ResolveChoice
## @args: [choice = "make_struct", transformation_ctx = "resolvechoice2"]
## @return: resolvechoice2
## @inputs: [frame = applymapping1]
resolvechoice2 = ResolveChoice.apply(frame = applymapping1, choice = "make_struct", transformation_ctx = "resolvechoice2")
## @type: DropNullFields
## @args: [transformation_ctx = "dropnullfields3"]
## @return: dropnullfields3
## @inputs: [frame = resolvechoice2]
dropnullfields3 = DropNullFields.apply(frame = resolvechoice2, transformation_ctx = "dropnullfields3")
## @type: DataSink
## @args: [connection_type = "s3", connection_options = {"path": "s3://devops-redshift*****/prd/parquet"}, format = "parquet", transformation_ctx = "datasink4"]
## @return: datasink4
## @inputs: [frame = dropnullfields3]
datasink4 = glueContext.write_dynamic_frame.from_options(frame = dropnullfields3, connection_type = "s3", connection_options = {"path": "s3://devops-redshift*****/prd/parquet"}, format = "parquet", transformation_ctx = "datasink4")
job.commit()

Have you tried converting a much smaller file? How long does that take? - Natalia
Again: gzip is not a splittable format. Most likely only one executor is doing the work. Try setting up a dev endpoint and share what's happening. - Natalia
It finished in 4.5 hours. I think you're onto something with the non-splittable gzip files. - Drew
Test with Spark in local mode on your desktop; that gives you a reference time for a single machine and a real filesystem. As Natalia noted, you can't split a .gz file, so there's no parallelism. - stevel
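
To illustrate the points raised in the comments, here is a minimal local-mode PySpark sketch (the paths are placeholders, not from the original post): a single gzipped CSV is read as exactly one partition because gzip is not splittable, and timing the write gives the single-machine reference number stevel suggests.

# minimal local-mode sketch with placeholder paths; not from the original post
import time
from pyspark.sql import SparkSession

spark = SparkSession.builder.master('local[*]').appName('gzip-csv-check').getOrCreate()

# a single gzipped CSV comes back as exactly one partition, since gzip is not splittable
df = spark.read.option('header', 'true').csv('/tmp/events.csv.gz')
print(df.rdd.getNumPartitions())  # expect 1

# rough single-machine reference time for the CSV -> Parquet conversion
start = time.time()
df.write.mode('overwrite').parquet('/tmp/events_parquet')
print('conversion took %.1f s' % (time.time() - start))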
1 Answer


Yes, I recently found that Spark DataFrames are a much faster option than Glue's DynamicFrames.

import sys
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

# boilerplate, generated code
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

# some job-specific variables
compression_type = 'snappy'  # 'snappy', 'gzip', or 'none'
source_path = 's3://source-bucket/part1=x/part2=y/'
destination_path = 's3://destination-bucket/part1=x/part2=y/'

# CSV to Parquet conversion
df = spark.read.option('delimiter', '|').option('header', 'true').csv(source_path)
df.write.mode('overwrite').format('parquet').option('compression', compression_type).save(destination_path)
job.commit()
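
One caveat worth adding (an editorial note, not part of the original answer): since the gzipped source is not splittable, df arrives as a single partition, so the Parquet write still runs as one task and produces one output file. A hedged tweak is to repartition before the write; the count below is only a placeholder to tune against your DPU/executor count, and it does add a shuffle.

# spread the single-partition gzipped input across tasks before writing
# (20 is a placeholder; pick a count that matches your executors / target file size)
df = df.repartition(20)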

I'll try this soon and mark it as the answer if it works. Thanks. - Drew
@Drew OK, don't keep us waiting. Did it work...? - gbeaven
@gbeaven Sorry, I can't say. We ended up writing the code with our own tooling instead of using Glue. - Drew
