github.com/orme292/s3packer
module · v1.5.0
Repository: https://github.com/orme292/s3packer.git
Documentation: pkg.go.dev

# README

s3packer - A configurable profile-based S3 backup and upload tool.

CLI for Linux/MacOS supports Amazon S3 | Oracle Cloud Object Storage | Linode (Akamai) Object Storage



Special thanks to JetBrains!
s3packer is developed with support from the JetBrains Open Source program.


About

s3packer is a configurable, YAML-based S3 storage upload and backup tool. YAML profiles tell s3packer what to upload, where to upload it, and how to name and tag the objects. Keeping a separate profile for each provider makes redundant backups easier. s3packer supports several providers: AWS, OCI (Oracle Cloud), and Linode (Akamai).


Add Support for new Providers

You can build support for a custom provider by using the s3packs/provider package interfaces. To implement your own provider (a rough skeleton is sketched below, after the list of provider examples):

  • Create a new package under s3packs/providers (e.g. s3packs/providers/azure). Use a simple name prefixed with s3, like "s3azure", as the package name.
  • Implement the operator and object interfaces (see: s3packs/provider/interfaces.go)
  • Implement the generator function interfaces (OperGenFunc, ObjectGenFunc)
  • Add the required configuration code
  • Add the new provider to the getProviderFunctions function in s3packs/main.go

See current provider code for implementation examples: aws, oci, linode
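
As a rough illustration of the steps above, here is a skeleton for a hypothetical provider package. The real operator and object method sets live in s3packs/provider/interfaces.go and will differ from this sketch; the interfaces, method names, and generator signatures below are illustrative assumptions, not the project's actual API.

// Package s3azure is a skeleton for a hypothetical custom provider.
// NOTE: Operator, Object, and the generator functions below are simplified
// stand-ins; see s3packs/provider/interfaces.go for the real contracts.
package s3azure

import "errors"

// Stand-in for the operator interface (bucket-level operations).
type Operator interface {
	BucketExists() (bool, error)
	CreateBucket() error
}

// Stand-in for the object interface (per-object operations).
type Object interface {
	Upload() error
}

type azureOperator struct {
	bucket string // plus whatever client handles and config the provider needs
}

func (o *azureOperator) BucketExists() (bool, error) { return false, errors.New("not implemented") }
func (o *azureOperator) CreateBucket() error         { return errors.New("not implemented") }

type azureObject struct {
	localPath string // the local file this object represents
}

func (obj *azureObject) Upload() error { return errors.New("not implemented") }

// Constructors in the spirit of OperGenFunc and ObjectGenFunc: functions that
// s3packer calls to build the operator and objects for a profile (the
// signatures here are assumptions).
func NewOperator(bucket string) (Operator, error) {
	return &azureOperator{bucket: bucket}, nil
}

func NewObject(localPath string) (Object, error) {
	return &azureObject{localPath: localPath}, nil
}

Once the package compiles, register its generator functions in getProviderFunctions in s3packs/main.go so that a profile's Provider.Use value can select it.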


Download

See the releases page...


Providers

s3packer supports AWS S3, Oracle Cloud Object Storage (OCI), and Linode (Akamai) Object Storage.

See the example profiles in the repository.


How to Use

To start a session with an existing profile, just type in the following command:

$ s3packer --profile="myprofile.yaml"

Creating a new Profile

s3packer can create a base profile to help get you started. To create one, use the --create flag:

$ s3packer --create="my-new-profile.yaml"
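
Taken together, a first run looks like this: create a base profile, fill in its fields, then start a session with it (the filename is just an example):

$ s3packer --create="my-new-profile.yaml"
$ s3packer --profile="my-new-profile.yaml"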

Setting up a Profile

s3packer profiles are written in YAML. To set one up, you only need to fill out a few fields.

Version

The profile format version:

Version: 6

Provider

Tell s3packer which service you're using

| PROVIDER | Acceptable Values | Required | Description |
|---|---|---|---|
| Use | aws, oci, linode | Y | Name of the provider you will be using |

Provider:
  Use: aws

Each provider has its own additional fields that need to be filled out.
SEE: docs/general_config.md

Bucket

Tell s3packer where the bucket is and whether to create it

| BUCKET | Acceptable Values | Default | Required | Description |
|---|---|---|---|---|
| Create | boolean | false | N | Whether s3packer should create the bucket if it is not found |
| Name | any string | | Y | The name of the bucket |
| Region | any string | | Y | The region that the bucket is or will be in, e.g. eu-north-1 |

Bucket:
  Create: true
  Name: "deep-freeze"
  Region: "eu-north-1"

Options

s3packer's configurable options

| OPTIONS | Acceptable Values | Default | Required | Description |
|---|---|---|---|---|
| MaxUploads | any integer | 1 | N | The number of simultaneous uploads; at least 1 |
| FollowSymlinks | boolean | false | N | Whether to follow symlinks under the dirs provided |
| WalkDirs | boolean | true | N | Whether s3packer will walk subdirectories of the dirs provided |
| OverwriteObjects | always, never | never | N | Whether to overwrite objects that already exist in the bucket |

MaxUploads considerations

Some providers can struggle with a high number of simultaneous uploads. Anywhere between 1 and 5 is generally safe; providers like AWS, however, have demonstrated the ability to handle up to 50 or even more.

It's important to note that large files can be broken up into many parts which are then simultaneously uploaded. Part count, part size, and the large file threshold values are not configured by s3packer, unless otherwise called out.

For example, if you specify a MaxUploads value of 5 and s3packer tries to upload 5 large files that are each split into 20 parts, there would be 100 simultaneous uploads happening. If you specify a MaxUploads value of 50 and there are 50 large files each split into 20 parts, you could potentially have as many as 1,000 simultaneous uploads.

Options:
  MaxUploads: 1
  FollowSymlinks: false
  WalkDirs: true
  OverwriteObjects: "never"

Objects

s3packer's configurable options for object naming and renaming

| OBJECTS | Acceptable Values | Default | Required | Description |
|---|---|---|---|---|
| NamingType | absolute, relative | relative | Y | The method s3packer uses to name the objects that it uploads |
| NamePrefix | any string | | N | A string that will be prefixed to the object's "file" name |
| PathPrefix | any string | | N | A path string that will be prefixed to the object's "file" name and "path" name |
| OmitRootDir | boolean | true | N | Whether the relative root of a provided directory will be omitted from the object's path name |

Objects:
  NamingType: absolute
  NamePrefix: backup-
  PathPrefix: /backups/april/2023
  OmitRootDir: true

NamingType
The default is relative.

  • relative: The key will be prepended with the relative path of the file on the local filesystem (individual files specified in the profile will always end up at the root of the bucket, after the PathPrefix and then the NamePrefix are applied).
  • absolute: The key will be prepended with the absolute path of the file on the local filesystem.

NamePrefix
This is blank by default. Any value you put here will be added before the filename when it's uploaded to S3. Using something like weekly- will add that string to any file you're uploading, like weekly-log.log or weekly-2021-01-01.log.

PathPrefix
This is blank by default. Any value put here will be added before the file path when it's uploaded to S3. If you use something like /backups/monthly, the file will be uploaded to /backups/monthly/your-file.txt.
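
Put together (an illustrative walk-through based on the descriptions above, not actual s3packer output): with NamingType: relative, PathPrefix: /backups/april/2023, and NamePrefix: backup-, a file at docs/notes/todo.txt (relative to the provided directory) would be stored under a key like /backups/april/2023/docs/notes/backup-todo.txt. With NamingType: absolute, the file's full local path would appear in the key instead.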


Files, Dirs

Tells s3packer what you want to upload. You can specify individual files, directories, or both, but you must specify at least one. When you specify a directory, whether s3packer traverses its subdirectories is controlled by the WalkDirs option.

| FILES | Required | Description |
|---|---|---|
| path | Y | The absolute path to the file that will be uploaded |

| DIRS | Required | Description |
|---|---|---|
| path | Y | The absolute path to the directory that will be uploaded |

Files:
  - "/Users/forrest/docs/job-application-lawn-mower.pdf"
  - "/Users/forrest/docs/dr-pepper-recipe.txt"
  - "/Users/jenny/letters/from-forrest.docx"
Dirs:
  - "/Users/forrest/docs/stocks/apple"
  - "/Users/jenny/docs/song_lyrics"

Tags

Add tags to each uploaded object (if supported by the provider)

| TAGS | Acceptable Values | Required | Description |
|---|---|---|---|
| Key | any value | N | A key:value tag pair; the value will be converted to a string |

Tags:
  Author: "Forrest Gump"
  Year: 1994

TagOptions

Options related to object tagging (dependent on whether the provider supports object tagging)

| TAGOPTIONS | Acceptable Values | Default | Required | Description |
|---|---|---|---|---|
| OriginPath | boolean | false | N | Whether s3packer will tag the object with the original absolute path of the file |
| ChecksumSHA256 | boolean | false | N | Whether s3packer will tag the object with the SHA256 checksum of the file as uploaded |

Tagging:
  OriginPath: true
  ChecksumSHA256: false

Note on Checksum Tagging
Some providers have checksum validation on objects to verify that uploads are completed correctly. This checksum is calculated separately from that process and is only for your future reference.


Logging

Options for logging output

| LOGGING | Acceptable Values | Default | Required | Description |
|---|---|---|---|---|
| Screen | boolean | true | N | Whether s3packer will run in pretty mode, with an AltScreen display |
| Level | 1, 2, 3, 4, 5 | 2 | N | The minimum severity a log message must have to be output to the console or file |
| Console | boolean | false | N | Whether log messages will be output to stdout (set to false if Screen is set to true) |
| File | boolean | false | N | Whether log output will be written to a file; output is structured in JSON format |
| Logfile | path | "/var/log/s3packer.log" | N | The name of the file that log output will be appended to |

Logging:
  Screen: false
  Level: 3
  Console: true
  File: true
  Logfile: "/var/log/backup.log"

Notes on Level
This is 2 (WARN) by default. The setting is by severity, with 1 being the least severe (INFO) and 5 the most severe (PANIC).
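
Putting it all together, a complete profile assembled from the example snippets above would look like the following (the values are the same illustrative ones used in each section; remember that each provider also has its own required fields, see docs/general_config.md):

Version: 6
Provider:
  Use: aws
Bucket:
  Create: true
  Name: "deep-freeze"
  Region: "eu-north-1"
Options:
  MaxUploads: 1
  FollowSymlinks: false
  WalkDirs: true
  OverwriteObjects: "never"
Objects:
  NamingType: relative
  NamePrefix: backup-
  PathPrefix: /backups/april/2023
  OmitRootDir: true
Files:
  - "/Users/forrest/docs/dr-pepper-recipe.txt"
Dirs:
  - "/Users/jenny/docs/song_lyrics"
Tags:
  Author: "Forrest Gump"
Tagging:
  OriginPath: true
  ChecksumSHA256: false
Logging:
  Screen: false
  Level: 3
  Console: true
  File: true
  Logfile: "/var/log/backup.log"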


Things to Keep in Mind...

Individual Files

If you’re uploading individual files, just remember that the prefix will be added to the start of the filenames and they’ll be uploaded right to the root of the bucket. Note that if you have multiple files with the same name (like if you have five ‘log.log’ files from different directories), they could be overwritten as you upload.

Directories

When you’re uploading directories, all the subdirectories and files will be uploaded to the bucket as well. Processing directories with a large number of files can take some time as the checksums are calculated.


Issues & Suggestions

If you run into any problems, errors, or have feature suggestions PLEASE feel free to open a new issue on GitHub.

