Concatenating small files on Amazon S3


Is there a way to concatenate small files (each under 5MB) on Amazon S3? Multipart upload is not an option because the files are too small.

Pulling all of these files down just to concatenate them is not an efficient solution.

So, can anyone point me to an API that can do this?


Are the files already on S3? If not, couldn't you concatenate (or zip) them before uploading? - Adam Ocsvari
3 Answers


Amazon S3 does not provide a concatenate function. It is primarily an object storage service.

You would need some process that downloads the objects, combines them, and then uploads them again. The most efficient way to do this is to download the objects in parallel, to take full advantage of the available bandwidth, but that makes the code more complex.

I would recommend doing the processing "in the cloud" to avoid downloading the objects over the Internet. Running it on Amazon EC2 or AWS Lambda would be more efficient and less costly.
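For illustration only, here is a minimal sketch of that download-merge-upload approach with boto3, using a thread pool for the parallel downloads. The bucket name, prefix and output key are placeholders, not something taken from the original answer:

import boto3
from concurrent.futures import ThreadPoolExecutor

bucket = 'my-bucket'        # placeholder bucket
prefix = 'small-files/'     # placeholder prefix of the small objects
merged_key = 'merged.json'  # placeholder key for the merged result

s3 = boto3.client('s3')

def fetch(key):
    # download a single object into memory
    return s3.get_object(Bucket=bucket, Key=key)['Body'].read()

# list the small objects (only the first 1000 keys, for brevity)
keys = [obj['Key'] for obj in
        s3.list_objects_v2(Bucket=bucket, Prefix=prefix).get('Contents', [])]

# download in parallel to make better use of the available bandwidth
with ThreadPoolExecutor(max_workers=8) as pool:
    parts = list(pool.map(fetch, keys))

# concatenate locally and upload the result as a single object
s3.put_object(Bucket=bucket, Key=merged_key, Body=b''.join(parts))

Running the same script from EC2 or Lambda keeps the traffic inside AWS, as suggested above.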


Old comment, but this isn't entirely correct: you can keep a 5MB garbage object on S3 and do a multipart copy where part 1 = the 5MB garbage object and part 2 = the file you want to concatenate. Repeat this for every piece, then finally use a range copy to strip out the 5MB of garbage. - wwadge
Oh! That's unusual, but very cool! Using [Upload Part - Copy](http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html) to copy data from multiple files as if they were parts of the same file. Brilliant! - John Rotenstein
Regarding the Upload Part - Copy approach above: it works as long as all of your files are larger than 5MB (one of them may be smaller). You can treat them as parts of a multipart upload and let S3 concatenate them for you. - Namrata


Based on @wwadge's comment, I wrote a Python script.

It works around the 5MB limit by uploading a dummy object slightly larger than 5MB, then appending each small file to it in turn. At the end, the dummy part is stripped out of the merged file.

import boto3
import os

bucket_name = 'multipart-bucket'
merged_key = 'merged.json'
mini_file_0 = 'base_0.json'
mini_file_1 = 'base_1.json'
dummy_file = 'dummy_file'

s3_client = boto3.client('s3')
s3_resource = boto3.resource('s3')

# we need to have a garbage/dummy file with size > 5MB
# so we create and upload this
# this key will also be the key of final merged file
with open(dummy_file, 'wb') as f:
    # slightly > 5MB
    f.seek(1024 * 5200) 
    f.write(b'0')

with open(dummy_file, 'rb') as f:
    s3_client.upload_fileobj(f, bucket_name, merged_key)

os.remove(dummy_file)


# get the number of bytes of the garbage/dummy-file
# needed to strip out these garbage/dummy bytes from the final merged file
bytes_garbage = s3_resource.Object(bucket_name, merged_key).content_length

# for each small file you want to concat
# when this loop has finished, merged.json will contain
# (dummy bytes + base_0.json + base_1.json)
for key_mini_file in [mini_file_0, mini_file_1]: # include more files if you want

    # initiate multipart upload with merged.json object as target
    mpu = s3_client.create_multipart_upload(Bucket=bucket_name, Key=merged_key)
        
    part_responses = []
    # perform multipart copy where merged.json is the first part 
    # and the small file is the second part
    for n, copy_key in enumerate([merged_key, key_mini_file]):
        part_number = n + 1
        copy_response = s3_client.upload_part_copy(
            Bucket=bucket_name,
            CopySource={'Bucket': bucket_name, 'Key': copy_key},
            Key=merged_key,
            PartNumber=part_number,
            UploadId=mpu['UploadId']
        )

        part_responses.append(
            {'ETag':copy_response['CopyPartResult']['ETag'], 'PartNumber':part_number}
        )

    # complete the multipart upload
    # content of merged will now be merged.json + mini file
    response = s3_client.complete_multipart_upload(
        Bucket=bucket_name,
        Key=merged_key,
        MultipartUpload={'Parts': part_responses},
        UploadId=mpu['UploadId']
    )

# get the number of bytes from the final merged file
bytes_merged = s3_resource.Object(bucket_name, merged_key).content_length

# initiate a new multipart upload
mpu = s3_client.create_multipart_upload(Bucket=bucket_name, Key=merged_key)            
# do a single copy from the merged file specifying byte range where the 
# dummy/garbage bytes are excluded
response = s3_client.upload_part_copy(
    Bucket=bucket_name,
    CopySource={'Bucket': bucket_name, 'Key': merged_key},
    Key=merged_key,
    PartNumber=1,
    UploadId=mpu['UploadId'],
    CopySourceRange='bytes={}-{}'.format(bytes_garbage, bytes_merged-1)
)
# complete the multipart upload
# after this step the merged.json will contain (base_0.json + base_1.json)
response = s3_client.complete_multipart_upload(
    Bucket=bucket_name,
    Key=merged_key,
    MultipartUpload={'Parts': [
       {'ETag':response['CopyPartResult']['ETag'], 'PartNumber':1}
    ]},
    UploadId=mpu['UploadId']
)

If you already have an object larger than 5MB that you want to append smaller parts to, you can skip creating the dummy file and the final byte-range copy. Also, I have no idea how this performs on a very large number of very small files - in that case it is probably better to download each file, merge them locally and then upload.
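As a rough sketch of that simplified case, reusing s3_client and bucket_name from the script above (the keys 'big.json' and 'extra.json' are made up): when the existing object is already larger than 5MB, a single multipart copy is enough, with no dummy object and no final range copy.

existing_key = 'big.json'   # assumed to already exist on S3 and be > 5MB
small_key = 'extra.json'    # the small file to append

mpu = s3_client.create_multipart_upload(Bucket=bucket_name, Key=existing_key)
parts = []
for n, copy_key in enumerate([existing_key, small_key]):
    # part 1 = the existing large object, part 2 = the small file
    resp = s3_client.upload_part_copy(
        Bucket=bucket_name,
        CopySource={'Bucket': bucket_name, 'Key': copy_key},
        Key=existing_key,
        PartNumber=n + 1,
        UploadId=mpu['UploadId']
    )
    parts.append({'ETag': resp['CopyPartResult']['ETag'], 'PartNumber': n + 1})

# complete the upload; existing_key now contains (big.json + extra.json)
s3_client.complete_multipart_upload(
    Bucket=bucket_name,
    Key=existing_key,
    MultipartUpload={'Parts': parts},
    UploadId=mpu['UploadId']
)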


I can confirm that it does not perform well on a very large number of very small files, at least for my use case, which is processing small XML files. A Lambda function that downloads, merges, zips and uploads them was faster.
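For reference, a minimal sketch of that kind of Lambda function, assuming the bucket, prefix and target key arrive in the event payload (these names are assumptions, not from the comment):

import gzip
import boto3

s3 = boto3.client('s3')

def handler(event, context):
    bucket = event['bucket']          # placeholder, e.g. 'my-bucket'
    prefix = event['prefix']          # placeholder, e.g. 'small-xml/'
    merged_key = event['merged_key']  # placeholder, e.g. 'merged.xml.gz'

    # list the small objects (only the first 1000 keys, for brevity)
    keys = [obj['Key'] for obj in
            s3.list_objects_v2(Bucket=bucket, Prefix=prefix).get('Contents', [])]

    # download and concatenate the small files in memory
    merged = b''.join(
        s3.get_object(Bucket=bucket, Key=key)['Body'].read() for key in keys)

    # compress the result and upload it as a single object
    s3.put_object(Bucket=bucket, Key=merged_key,
                  Body=gzip.compress(merged), ContentEncoding='gzip')
    return {'merged_key': merged_key, 'source_objects': len(keys)}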

Edit: I didn't see the 5MB requirement. Because of that requirement, this approach will not work.
From https://ruby.awsblog.com/post/Tx2JE2CXGQGQ6A4/Efficient-Amazon-S3-Object-Concatenation-Using-the-AWS-SDK-for-Ruby:

While it is possible to download and re-upload the data to S3 through an EC2 instance, a more efficient approach is to instruct S3 to make an internal copy using the new copy_part API operation that was introduced into the SDK in version 1.10.0.

Code:
require 'rubygems'
require 'aws-sdk'

s3 = AWS::S3.new()
mybucket = s3.buckets['my-multipart']

# First, let's start the Multipart Upload
obj_aggregate = mybucket.objects['aggregate'].multipart_upload

# Then we will copy into the Multipart Upload all of the objects in a certain S3 directory.
mybucket.objects.with_prefix('parts/').each do |source_object|

  # Skip the directory object
  unless (source_object.key == 'parts/')
    # Note that this section is thread-safe and could greatly benefit from parallel execution.
    obj_aggregate.copy_part(source_object.bucket.name + '/' + source_object.key)
  end

end

obj_completed = obj_aggregate.complete()

# Generate a signed URL to enable a trusted browser to access the new object without authenticating.
puts obj_completed.url_for(:read)

Limitations (include, but are not limited to)

  • The minimum part size is 5 MB, except for the last part.
  • The maximum size of a completed multipart-upload object is 5 TB.
