How to Upload and Download Files from S3 Using Python (boto3)


Setting Up Your Environment

Before you can interact with AWS S3 using Python, you’ll need to install the boto3 library and configure your AWS credentials.

To install boto3, run the following command:

pip install boto3

You also need to configure AWS credentials. The easiest way is through the AWS CLI:

aws configure

You’ll be prompted for your AWS Access Key ID, Secret Access Key, default region, and output format. Alternatively, you can pass credentials explicitly when creating a boto3 session.
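
For example, a session can be created with explicit credentials and a region. The key values below are placeholders; in practice, environment variables or the shared credentials file are preferable to hard-coding keys:

import boto3

# Placeholder credentials; prefer environment variables or the shared
# credentials file over hard-coding keys in source code.
session = boto3.Session(
    aws_access_key_id='YOUR_ACCESS_KEY_ID',
    aws_secret_access_key='YOUR_SECRET_ACCESS_KEY',
    region_name='us-east-1'
)

# Clients created from this session use the credentials above.
s3_client = session.client('s3')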

Creating an S3 Client

To communicate with S3, you need to create a client object. This client is used to upload, download, list, and delete objects in your S3 buckets.

import boto3

s3_client = boto3.client('s3')

Uploading Files to S3

To upload a file to S3, use the upload_file method, which uploads a local file to the specified bucket under the given object key.

def upload_file_to_s3(file_name, bucket_name, object_name=None):
    if object_name is None:
        object_name = file_name
    try:
        s3_client.upload_file(file_name, bucket_name, object_name)
        print(f'{file_name} has been uploaded to {bucket_name}/{object_name}')
    except Exception as e:
        print(f'Error uploading file: {e}')

upload_file_to_s3('myfile.txt', 'my-bucket', 'myfile.txt')

This function uploads the local file myfile.txt to the bucket my-bucket under the object key myfile.txt.
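
upload_file also accepts an optional ExtraArgs dictionary for setting object properties at upload time. The snippet below is a minimal sketch; the content type and metadata values are only examples:

# The ContentType and Metadata values here are illustrative.
s3_client.upload_file(
    'myfile.txt',
    'my-bucket',
    'myfile.txt',
    ExtraArgs={
        'ContentType': 'text/plain',
        'Metadata': {'uploaded-by': 'example-script'}
    }
)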

Downloading Files from S3

To download a file from S3, use the download_file method, which retrieves an object from an S3 bucket and saves it to a local path.

def download_file_from_s3(bucket_name, object_name, file_name):
    try:
        s3_client.download_file(bucket_name, object_name, file_name)
        print(f'{object_name} has been downloaded to {file_name}')
    except Exception as e:
        print(f'Error downloading file: {e}')

download_file_from_s3('my-bucket', 'myfile.txt', 'downloaded_myfile.txt')

This code downloads the myfile.txt object from my-bucket and saves it locally as downloaded_myfile.txt.
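
If you would rather keep the object in memory instead of writing it to disk, download_fileobj can write into any file-like object. This is a small sketch using the same bucket and key as above:

import io

# Download the object into an in-memory buffer instead of a local file.
buffer = io.BytesIO()
s3_client.download_fileobj('my-bucket', 'myfile.txt', buffer)
content = buffer.getvalue()
print(f'Downloaded {len(content)} bytes into memory')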

Handling Large Files with Multipart Upload

When uploading large files, it helps to split the file into parts and upload the parts in parallel. boto3 provides this through the S3Transfer class and its TransferConfig: files above a configurable size threshold are sent as a multipart upload, which improves throughput and reliability. (The plain upload_file call shown earlier uses the same transfer machinery under the hood.)

from boto3.s3.transfer import S3Transfer, TransferConfig

def multipart_upload(file_name, bucket_name, object_name=None):
    if object_name is None:
        object_name = file_name
    try:
        # Files larger than multipart_threshold are split into parts and
        # uploaded concurrently; smaller files go up in a single request.
        config = TransferConfig(multipart_threshold=8 * 1024 * 1024)
        transfer = S3Transfer(s3_client, config)
        transfer.upload_file(file_name, bucket_name, object_name)
        print(f'{file_name} has been uploaded using multipart upload to {bucket_name}/{object_name}')
    except Exception as e:
        print(f'Error in multipart upload: {e}')

multipart_upload('large_file.txt', 'my-bucket', 'large_file.txt')

S3Transfer splits files above the configured threshold into parts and uploads them concurrently, and the transfer manager handles the part bookkeeping and concurrency automatically.
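
For long-running transfers it can also help to report progress. upload_file and download_file accept a Callback that is invoked with the number of bytes transferred in each chunk; the ProgressPercentage class below is a small illustrative helper, not part of boto3:

import os
import threading

class ProgressPercentage:
    # Illustrative callback that prints upload progress as a percentage.
    def __init__(self, file_name):
        self._file_name = file_name
        self._size = float(os.path.getsize(file_name))
        self._seen_so_far = 0
        self._lock = threading.Lock()

    def __call__(self, bytes_amount):
        # boto3 calls this from worker threads with the bytes sent per chunk.
        with self._lock:
            self._seen_so_far += bytes_amount
            percentage = (self._seen_so_far / self._size) * 100
            print(f'{self._file_name}: {percentage:.1f}%')

s3_client.upload_file(
    'large_file.txt', 'my-bucket', 'large_file.txt',
    Callback=ProgressPercentage('large_file.txt')
)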

Working with S3 Object Metadata

You can retrieve an S3 object’s metadata with the head_object method. This is useful for checking the object’s size, content type, or last-modified time without downloading it.

def get_object_metadata(bucket_name, object_name):
    try:
        response = s3_client.head_object(Bucket=bucket_name, Key=object_name)
        print(f'Metadata for {object_name}: {response}')
    except Exception as e:
        print(f'Error fetching metadata: {e}')

get_object_metadata('my-bucket', 'myfile.txt')

This function prints the full head_object response for myfile.txt in the my-bucket bucket, which includes fields such as ContentLength and LastModified.
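
Because the response is a plain dictionary, individual properties can be read directly. A short sketch of the fields most commonly needed:

response = s3_client.head_object(Bucket='my-bucket', Key='myfile.txt')
print(f"Size: {response['ContentLength']} bytes")
print(f"Last modified: {response['LastModified']}")
print(f"Content type: {response['ContentType']}")
print(f"Custom metadata: {response['Metadata']}")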

Listing Objects in an S3 Bucket

You may want to list the objects in a bucket. The list_objects_v2 method returns the contents of an S3 bucket, up to 1,000 keys per call.

def list_s3_objects(bucket_name):
    try:
        response = s3_client.list_objects_v2(Bucket=bucket_name)
        if 'Contents' in response:
            for item in response['Contents']:
                print(f'Object: {item["Key"]}, Size: {item["Size"]}')
        else:
            print('No objects found.')
    except Exception as e:
        print(f'Error listing objects: {e}')

list_s3_objects('my-bucket')

This function lists the objects in my-bucket with their keys and sizes. Because a single list_objects_v2 call returns at most 1,000 keys, larger buckets are easier to handle with a paginator, as sketched below.
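
A minimal sketch of the paginator approach:

def list_all_s3_objects(bucket_name):
    # get_paginator handles the ContinuationToken bookkeeping automatically.
    paginator = s3_client.get_paginator('list_objects_v2')
    for page in paginator.paginate(Bucket=bucket_name):
        for item in page.get('Contents', []):
            print(f'Object: {item["Key"]}, Size: {item["Size"]}')

list_all_s3_objects('my-bucket')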

Deleting Files from S3

If you need to delete an object from an S3 bucket, use the delete_object method.

def delete_file_from_s3(bucket_name, object_name):
    try:
        s3_client.delete_object(Bucket=bucket_name, Key=object_name)
        print(f'{object_name} has been deleted from {bucket_name}')
    except Exception as e:
        print(f'Error deleting file: {e}')

delete_file_from_s3('my-bucket', 'myfile.txt')

This function deletes the object myfile.txt from the my-bucket bucket.
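
To remove several objects in a single request, delete_objects accepts up to 1,000 keys at a time. The key names below are placeholders:

# The keys listed here are placeholders for objects you want to remove.
response = s3_client.delete_objects(
    Bucket='my-bucket',
    Delete={
        'Objects': [
            {'Key': 'old-report.txt'},
            {'Key': 'tmp/scratch-data.csv'},
        ]
    }
)
print(f"Deleted: {[d['Key'] for d in response.get('Deleted', [])]}")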

Error Handling

In the examples above, error handling uses broad try-except blocks. It’s important to catch and handle exceptions so that your application can keep running and report useful information when something goes wrong. Common errors include NoCredentialsError and PartialCredentialsError (from botocore.exceptions) and S3UploadFailedError (from boto3.exceptions).

By handling these exceptions, you can gracefully recover from issues like missing credentials or network problems, and provide helpful messages to users.
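
As a sketch, the upload helper above could catch these specific exceptions instead of a bare Exception; ClientError covers service-side errors such as a missing bucket or denied access:

from botocore.exceptions import ClientError, NoCredentialsError, PartialCredentialsError
from boto3.exceptions import S3UploadFailedError

def safe_upload(file_name, bucket_name, object_name):
    try:
        s3_client.upload_file(file_name, bucket_name, object_name)
    except (NoCredentialsError, PartialCredentialsError):
        print('AWS credentials are missing or incomplete.')
    except S3UploadFailedError as e:
        print(f'Upload failed: {e}')
    except ClientError as e:
        # e.response carries the service error code, e.g. "AccessDenied".
        print(f"S3 returned an error: {e.response['Error']['Code']}")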
