Uly.me

cloud engineer


s3

Python Boto3 S3 List Objects

February 27, 2022

How to list S3 objects using Python and Boto3.

#!/usr/bin/env python3
import boto3
from sys import argv

# The bucket name is passed as the first command-line argument.
bucket = argv[1]

s3 = boto3.resource('s3')
mybucket = s3.Bucket(bucket)

def list_buckets():
    print("List of Buckets:")
    for b in s3.buckets.all():
        print(b.name)

def list_files():
    print("Buckets: The objects in the \"" + bucket + "\" bucket are ...")
    for obj in mybucket.objects.all():
        print(obj.key)

def main():
    #list_buckets()
    list_files()

if __name__ == "__main__":
    main()

Running the command

$ python test.py bucket-name    # with the python interpreter
$ ./test.py bucket-name         # directly (requires chmod +x test.py)

Results

Buckets: The objects in the "bucket-name" bucket are ...
abc.txt
def.txt
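
The bucket.objects.all() loop above already pages through large buckets for you. If you prefer the lower-level client, a paginator does the same job; a minimal sketch, assuming the same bucket-name argument:

import boto3
from sys import argv

s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')

# Iterate over every page of results (up to 1,000 keys per page).
for page in paginator.paginate(Bucket=argv[1]):
    for obj in page.get('Contents', []):
        print(obj['Key'])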

Filed Under: Linux Tagged With: boto3, list, objects, python, s3

Terraform AWS S3

November 15, 2021

How to create an S3 bucket via Terraform.

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}
 
provider "aws" {
  profile = "default"
  region  = "us-east-1"
}
 
resource "aws_s3_bucket" "bucket" {
  bucket = "my-ulysses-bucket"
  acl    = "private"
 
  tags = {
    Name        = "My Ulysses bucket"
    Environment = "Dev"
  }
}
 
resource "aws_s3_bucket_public_access_block" "example" {
  bucket = aws_s3_bucket.bucket.id
  block_public_acls = true
  block_public_policy = true
  ignore_public_acls = true
  restrict_public_buckets = true
}
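
Once the configuration is applied, you can confirm the public access block from Boto3; a minimal sketch using the bucket name from the config above:

import boto3

s3 = boto3.client('s3')

# Check that the public access block created by Terraform is in place.
resp = s3.get_public_access_block(Bucket='my-ulysses-bucket')
print(resp['PublicAccessBlockConfiguration'])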

Filed Under: Linux Tagged With: aws, bucket, create, s3, terraform

AWS Set Profile

November 7, 2021

If you find yourself passing a named profile to every AWS CLI command, you can set it temporarily in your environment instead.

Listing a bucket with a named profile.

aws s3 ls --profile=yourprofile

If you set the AWS_PROFILE environment variable, you can then list a bucket without passing --profile.

export AWS_PROFILE=yourprofile
aws s3 ls

After you are done, you can reset it back to default.

export AWS_PROFILE=default
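
The same profile selection works in SDKs such as Boto3; a minimal sketch, assuming a profile named yourprofile exists in ~/.aws/credentials:

import boto3

# Select a named profile explicitly (equivalent to --profile or AWS_PROFILE).
session = boto3.session.Session(profile_name='yourprofile')
s3 = session.client('s3')

# List the buckets visible to that profile.
for b in s3.list_buckets()['Buckets']:
    print(b['Name'])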

Filed Under: Cloud Tagged With: aws, AWS_PROFILE, default, export, profile, s3, set

Copy S3 to GCS

October 13, 2021

You can use gsutil to copy directly from an S3 bucket to a GCS bucket.

gsutil can read from and write to an S3 bucket as long as it has access to your AWS credentials.

The -R option makes the copy recursive, and -m enables multi-threaded/multi-processing transfers.

gsutil -m cp -R s3://bucket/ gs://bucket

You can also use rsync instead of cp. Just be aware that rsync does not copy empty directory trees.

gsutil -m rsync -r s3://bucket/ gs://bucket

If you plan to use the -d option, use it with caution, since it deletes objects in the destination that are not in the source.

You may have to set the following defaults for your accounts.

export AWS_PROFILE=yourprofile
gcloud config set project your-project-id

This ensures that the default profile and project are used for AWS and GCP respectively.
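
If you would rather script the copy in Python instead of gsutil, here is a rough sketch that streams each object from S3 into GCS. The bucket names are placeholders, and it assumes boto3 and google-cloud-storage are installed with credentials configured for both clouds:

import boto3
from google.cloud import storage

s3 = boto3.client('s3')
gcs_bucket = storage.Client().bucket('my-gcs-bucket')

# Page through the S3 bucket and stream each object into GCS.
paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket='my-s3-bucket'):
    for obj in page.get('Contents', []):
        body = s3.get_object(Bucket='my-s3-bucket', Key=obj['Key'])['Body']
        gcs_bucket.blob(obj['Key']).upload_from_file(body)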

Filed Under: Cloud Tagged With: aws, cp, gcp, gcs, multi-processing, multi-threaded, rsync, s3

Copy S3 To Another Region

June 22, 2021

Buckets are regional, so to copy data to another region, the destination bucket needs to be created in the target region.

aws s3 sync s3://DOC-EXAMPLE-BUCKET-SOURCE s3://DOC-EXAMPLE-BUCKET-TARGET

Use the sync command; it only copies new or modified files.
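
A rough Boto3 sketch of the same idea is below; unlike sync, it copies every object rather than only new or modified ones, and the bucket names are the same placeholders:

import boto3

s3 = boto3.resource('s3')
src = s3.Bucket('DOC-EXAMPLE-BUCKET-SOURCE')

# Server-side copy of every object into the destination bucket.
for obj in src.objects.all():
    s3.meta.client.copy(
        {'Bucket': src.name, 'Key': obj.key},
        'DOC-EXAMPLE-BUCKET-TARGET',
        obj.key,
    )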

Filed Under: Cloud Tagged With: another, copy, region, s3

S3FS

May 6, 2021

s3fs allows Linux to mount S3 buckets as a file system.

Install s3fs.

sudo apt install s3fs

Set up credentials.

echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > /etc/.passwd-s3fs
chmod 600 /etc/.passwd-s3fs

Mount it.

s3fs bucketname /mountpoint -o passwd_file=/etc/.passwd-s3fs

Mount it automatically at boot via an /etc/fstab entry.

bucketname /mountpoint fuse.s3fs _netdev,allow_other,passwd_file=/etc/.passwd-s3fs,rw,uid=1000,gid=1000 0 0

An alternative fstab entry using the older s3fs# syntax.

s3fs#bucketname /mountpoint fuse _netdev,allow_other,use_cache=/root/cache,uid=1000,gid=1000,umask=022 0 0

Filed Under: Cloud Tagged With: bucket, file system, mount, s3, s3fs

AWS S3 Bucket Permission

February 16, 2021

I was getting this error when downloading a file from an S3 bucket.

fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden

It turns out to be a permissions issue. Use --acl bucket-owner-full-control.

# UPLOAD
aws s3 cp file.txt s3://bucket-name/dir/ --acl bucket-owner-full-control
upload: .\file.txt to s3://bucket-name/dir/file.txt
# DOWNLOAD
aws s3 cp s3://bucket-name/dir/file.txt . --acl bucket-owner-full-control
download: s3://bucket-name/dir/file.txt to .\file.txt

You need to do this for both the upload and the download.
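
For the upload side in Boto3, the equivalent is the ACL entry in ExtraArgs; a minimal sketch with placeholder names:

import boto3

s3 = boto3.client('s3')

# Upload and grant the bucket owner full control, same as --acl above.
s3.upload_file(
    'file.txt',
    'bucket-name',
    'dir/file.txt',
    ExtraArgs={'ACL': 'bucket-owner-full-control'},
)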

Filed Under: Cloud Tagged With: aws, bucket, permissions, s3

AWS S3 Make Object Public

October 21, 2020

Copy an object or file to the S3 bucket.

aws s3 cp filename.ext s3://bucketname/ --profile your-profile

To make it publicly available, run this command.

aws s3api put-object-acl \
--bucket bucket-name \
--key filename.ext \
--acl public-read \
--profile your-profile
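
The same thing from Boto3, assuming the profile and names used in the commands above:

import boto3

session = boto3.session.Session(profile_name='your-profile')
s3 = session.client('s3')

# Make a single object publicly readable, same as the s3api call above.
s3.put_object_acl(
    Bucket='bucket-name',
    Key='filename.ext',
    ACL='public-read',
)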

Filed Under: Cloud Tagged With: aws, cli, object, public, s3, s3api


Copyright © 2023