I’m also a little concerned that some of our buckets have ended up with PUBLIC permissions. I must say that this side of S3 is a fog for me, but perhaps these need to be refined?
AWS emailed me on 22 March 2021 to say:
We are writing to notify you that you have configured your S3 bucket(s) to be publicly accessible, and this may be a larger audience than you intended. By default, S3 buckets allow only the account owner to access the contents of a bucket; however, customers can configure S3 buckets to permit public access. Public buckets are accessible by anyone on the Internet, and content in them may be indexed by search engines.
We recommend enabling the S3 Block Public Access feature on buckets if public access is not required. S3 bucket permissions should never allow "Principal":"*" unless you intend to grant public access to your data. Additionally, S3 bucket ACLs should be appropriately scoped to prevent unintended access to “Authenticated Users” (anyone with an AWS account) or “Everyone” (anyone with Internet access) unless your use case requires it. For AWS’s definition of “Public Access,” please see The Meaning of “Public”.
The list of buckets which can be publicly accessed is below:
teachings-archive | eu-west-1
online-shedra | eu-west-1
You can ensure individual buckets, or all your buckets, prevent public access by turning on the S3 Block Public Access feature. This feature is free of charge and it only takes a minute to enable. For step-by-step instructions on setting up S3 Block Public Access via the S3 management console, see Jeff Barr’s blog, or check out the video tutorial on Block Public Access.
If you have a business need to maintain some level of public access, please see Overview of Managing Access for more in-depth instructions on managing access to your bucket to make sure you’ve permitted the correct level of access to your objects. If you would like more information about policy configuration in S3, please refer to Managing Access in Amazon S3, and S3 Security Best Practices.
We recommend that you make changes in accordance with your operational best practices.
If you believe you have received this message in error or if you require technical assistance, please open a support case.
Amazon Web Services
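If we do decide public access isn’t needed, Block Public Access can be switched on for a single bucket from the AWS CLI as well as from the console. A sketch, using one of the buckets listed in the email (and assuming the CLI is already configured with our credentials):

```shell
# Block all four forms of public access on the teachings-archive bucket
aws s3api put-public-access-block \
  --bucket teachings-archive \
  --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```

The same settings can also be applied account-wide from the S3 console if we conclude that none of our buckets should be public.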
This will become a ‘transit lounge’ or quarantine area for all incoming media. The current S3Bubble uploader (to be replaced by our own version) allows members to upload teaching media here, where it will remain until it is checked and moved to the new original-media-store bucket. Any future uploads that Rinpoche sends in using the DropShare app will also go here to await processing.
Simply storage of website backups of the whole Bodhicharya WordPress multi-site, generated by the BackupBuddy plugin. Backups happen automatically on a schedule. [No longer used]
All media files for the websites are also stored on AWS S3, in the bodhicharyawebsites bucket. These are called into place by the Offload S3 plugin, which is configurable in the main Network Admin area.
This is media relating to COURSES on the website. I would rather use a bucket named “courses”, but it’s not possible to rename this one as we are already using media inside it for the Bodhicharyavatara audio teachings. So it will probably remain as it is now.
This is the main long-term deep store of all original recordings: our “Glacier” storage bucket. It will hold both the historical teachings that Arne has (which we are now transcoding for long-term storage) and all future original recordings sent in (via the archivists-inbox).
It will use the same year-then-teaching folder structure that the teachings-archive uses. Everything in here will be set to lifecycle into Glacier storage after a set number of days (probably 30). Therefore I intend to upload our drive submission to Amazon directly into this bucket, so all the original teachings are ready to be stored in Glacier. After a big drive upload we will just move the transcoded web versions of the videos over to the teachings-archive for use on the website.
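The “lifecycle into Glacier” step above can be expressed as an S3 lifecycle configuration. A sketch, assuming the 30-day transition and that the rule covers the whole bucket (the rule ID is made up):

```json
{
  "Rules": [
    {
      "ID": "originals-to-glacier",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 30, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```

Saved as a file, this can be applied with `aws s3api put-bucket-lifecycle-configuration --bucket original-media-store --lifecycle-configuration file://lifecycle.json`, or the equivalent rule can be created in the S3 console under Management → Lifecycle rules.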
This contains all the transcoded media for use on the website. It is organised in year folders on the first level. Then, in folders within the year folders, are teaching folders, which are either made manually (via the S3 console) or made by the teachings post uploader (our custom plugin made by Johan). The media in here are delivered to the website using CloudFront.
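The year-then-teaching layout described above would look something like this (the folder and file names are illustrative, not actual):

```
teachings-archive/
├── 2020/
│   ├── spring-teaching/
│   │   ├── session-01.mp4
│   │   └── session-02.mp4
│   └── ...
└── 2021/
    └── ...
```

The original-media-store bucket mirrors this same structure, so an original recording and its transcoded web version sit at matching paths in the two buckets.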