github.com/neutrinocorp/cloudsync
module | package
v0.0.1-alpha
Repository: https://github.com/neutrinocorp/cloudsync.git
Documentation: pkg.go.dev

# README

Neutrino CloudSync


Neutrino CloudSync is an open-source tool used to upload entire file folders from any host to any cloud.

How-To

Provision your own infrastructure

This repository contains fully customizable Terraform code (IaC) to deploy and provision live infrastructure in your own cloud account.

The code can be found under the deployments/terraform directory.

To run this code and provision your own infrastructure, you MUST have the Terraform CLI installed on your admin host machine (not on the actual nodes that will interact with the platform). Furthermore, an S3 bucket and a DynamoDB table are REQUIRED to persist Terraform state remotely (S3) and to provide a remote lock mechanism (DynamoDB), enabling collaboration between multiple developers and hence multiple development machines. If this functionality is not desired, remove the contents of main.tf's terraform block and leave it like this:

terraform {
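  # remote backend configuration removed; Terraform state is now stored locally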
}

Amazon Web Services

The following steps are specific to the Amazon Web Services (AWS) platform:

  • Go to deployments/terraform/workspaces/development.
  • Add a terraform.tfvars file with the following variables (replace with actual cloud account data):
aws_account = "0000"
aws_region = "us-east-N"
aws_access_key = "XXXX"
aws_secret_key = "XXXX"
  • OPTIONAL: Modify variables in the variables.tf file as desired to configure your infrastructure properties.
  • Run the Terraform command terraform plan and verify that a blob bucket and an encryption key will be created.
  • Run the Terraform command terraform apply and type yes after verifying that the blob bucket and the encryption key are the only resources to be created.
  • OPTIONAL: Go to the cloud console GUI (or use the cloud CLI) and verify that all resources were created with their proper configurations.

NOTE: At this time, the deployed infrastructure is tagged and named using the development stage. This can be changed through the Terraform files, more specifically in the main.tf file of the development workspace folder.

Upload Files

Just start an uploader instance using the command:

user@machine:~ make run directory=DIRECTORY_TO_SYNC

or

user@machine:~ go run ./cmd/uploader/main.go -d DIRECTORY_TO_SYNC

The uploader program has other flags not shown in the examples above. To read more about them, run:

user@machine:~ make help

or

user@machine:~ go run ./cmd/uploader/main.go -h
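For orientation, the sketch below shows how a CLI entrypoint like cmd/uploader might wire the -d flag shown above. The default value and everything past flag parsing are hypothetical, not the actual implementation.

package main

import (
	"flag"
	"log"
)

func main() {
	// -d mirrors the flag shown above; the default value is an assumption.
	dir := flag.String("d", ".", "directory to sync")
	flag.Parse()

	log.Printf("scheduling uploads for directory: %s", *dir)
	// The real uploader would build a Config, allocate a Scanner and
	// schedule upload jobs for the files found under *dir.
}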

# Packages

No description provided by the author
Package storage holds 3rd party driver implementations for blob (and potentially other) stores.

# Functions

ListenAndExecuteUploadJobs waits for and asynchronously executes object upload jobs received from internal queues (see the sketch after this list).
ListenForSysInterruption waits for a cancellation signal from an external agent (e.g. SIGINT or SIGTERM) and gracefully shuts down internal workers.
ListenUploadErrors waits for failed object upload jobs and performs actions on them.
NewConfig allocates a Config instance used by internal components to perform their processes.
NewScanner allocates a new Scanner instance which will use the specified Config.
SaveConfig stores the specified Config on the host's physical disk.
SaveConfigIfNotExists creates the path and/or Config file if not found.
ScheduleFileUploads traverses a directory tree rooted at Config.RootDirectory and schedules an upload job for each file found in all subdirectories (if ScannerConfig.DeepTraversing is set to true) or in the root directory only.
ShutdownUploadWorkers closes internal job queues and stores new configuration variables (if required).
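Taken together, these functions describe a scheduler/worker pipeline: upload jobs are pushed onto internal queues, workers drain them asynchronously, failures flow to an error listener, and a signal handler shuts everything down gracefully. The sketch below illustrates that pattern only; every identifier in it is hypothetical and none of it is the package's actual API.

package main

import (
	"context"
	"log"
	"os"
	"os/signal"
	"sync"
	"syscall"
)

// uploadJob is a stand-in for the package's internal job type.
type uploadJob struct{ key string }

func main() {
	// Cancel the context on SIGINT/SIGTERM, mirroring the role of
	// ListenForSysInterruption.
	ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
	defer stop()

	jobs := make(chan uploadJob)
	errs := make(chan error, 1)

	// Workers drain the queue asynchronously, mirroring the role of
	// ListenAndExecuteUploadJobs.
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for job := range jobs {
				// A real worker would stream the file to blob storage here.
				log.Printf("worker %d uploading %s", id, job.key)
			}
		}(i)
	}

	// Error listener, mirroring the role of ListenUploadErrors.
	go func() {
		for err := range errs {
			log.Printf("upload failed: %v", err)
		}
	}()

	// Scheduler pushes one job per file, mirroring the role of
	// ScheduleFileUploads; closing the queue mirrors ShutdownUploadWorkers.
	go func() {
		defer close(jobs)
		for _, key := range []string{"notes/a.txt", "notes/b.txt"} {
			select {
			case jobs <- uploadJob{key: key}:
			case <-ctx.Done():
				return
			}
		}
	}()

	wg.Wait()
	close(errs)
}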

# Variables

No description provided by the author
ErrFatalStorage non-recoverable error issued by the blob storage.

# Structs

CloudConfig remote infrastructure and services configuration.
Config main application configuration.
ErrFileUpload generic error generated from a blob upload job.
No description provided by the author
Object also known as a file; an information unit stored within a directory, composed of an io.Reader holding binary data (Data) and a Key (see the sketch after this list).
Scanner main component which reads and schedules upload jobs based on the files found in the directories specified in Config.
ScannerConfig Scanner configuration.
Stats contains counters used by internal processes to keep track of their operations.
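The description of Object names exactly two fields, Data (an io.Reader) and Key. A minimal sketch of that shape follows; the Key type (string) and the comments are assumptions, not the package's actual definition.

package cloudsync

import "io"

// Object sketch: the field names Data and Key come from the description
// above; the Key type is an assumption.
type Object struct {
	Key  string    // object key, e.g. the file's path relative to the root directory
	Data io.Reader // binary contents of the file
}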

# Interfaces

BlobStorage a unit of non-volatile binary large object (BLOB) persistence.
ReadSeekerAt a custom read buffer type used to perform memory-efficient allocations when reading large objects (sketched below).
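Judging by its name and purpose, ReadSeekerAt plausibly composes the standard library's io.ReadSeeker and io.ReaderAt, which lets an uploader re-read arbitrary byte ranges (e.g. for multipart uploads or retries) without buffering a whole file in memory. The sketch below is that assumption, not the package's actual definition; note that *os.File already satisfies such an interface.

package cloudsync

import "io"

// ReadSeekerAt sketch: a plausible composition inferred from the name;
// the package's actual definition may differ.
type ReadSeekerAt interface {
	io.ReadSeeker
	io.ReaderAt
}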