
Easy and fast file sharing from the command-line. This repository contains the server with everything you need to run your own instance. It currently supports Amazon S3 (s3), Google Drive (gdrive), Storj (storj), and the local file system (local) as storage providers.


This project repository has no relation with the hosted service, which is managed by a third party, so we cannot address any issue related to that service.

The hosted service is of unknown origin and has been reported as cloud malware.



Upload:

$ curl --upload-file ./hello.txt

Encrypt & upload:

$ cat /tmp/hello.txt | gpg -ac -o- | curl -X PUT --upload-file "-"

Download & decrypt:

$ curl <download URL> | gpg -o- > /tmp/hello.txt
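The encrypt-then-decrypt pipe above can be checked end to end without a server. A minimal local sketch, using openssl as a stand-in for gpg so the round trip is verifiable offline (file names and the passphrase are illustrative):

```shell
printf 'hello world\n' > /tmp/hello.txt

# encrypt (stand-in for: gpg -ac -o-)
openssl enc -aes-256-cbc -pbkdf2 -pass pass:secret \
  -in /tmp/hello.txt -out /tmp/hello.txt.enc

# decrypt (stand-in for: gpg -o-)
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:secret \
  -in /tmp/hello.txt.enc -out /tmp/hello.out

cat /tmp/hello.out
# prints: hello world
```

With a real instance, the encrypted stream is piped straight into curl instead of being written to a file, exactly as in the gpg example above.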

Upload to VirusTotal:

$ curl -X PUT --upload-file nhgbhhj


$ curl -X DELETE <X-Url-Delete Response Header URL>

Request Headers


$ curl --upload-file ./hello.txt -H "Max-Downloads: 1" # Limit the number of downloads


$ curl --upload-file ./hello.txt -H "Max-Days: 1" # Set the number of days before deletion
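The two request headers can be combined on a single upload, since curl accepts repeated -H flags (the instance URL is a placeholder):

```shell
$ curl --upload-file ./hello.txt -H "Max-Downloads: 1" -H "Max-Days: 1" <your instance URL>
```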

Response Headers


X-Url-Delete

The URL used to request the deletion of a file, returned as a response header.

curl -sD - --upload-file ./hello | grep 'X-Url-Delete'
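A small, locally runnable sketch of that flow: the function below pulls the X-Url-Delete value out of a captured header dump (the sample response and URL here are made up for illustration):

```shell
extract_delete_url() {
  # read a header dump on stdin, print the X-Url-Delete value
  grep -i '^x-url-delete:' | awk '{print $2}' | tr -d '\r'
}

# sample header dump; a real one comes from: curl -sD - --upload-file ./hello <instance>
headers='HTTP/2 200
x-url-delete: https://transfer.example.com/hello.txt/token123
content-type: text/plain'

delete_url=$(printf '%s\n' "$headers" | extract_delete_url)
echo "$delete_url"
# the file can then be removed with: curl -X DELETE "$delete_url"
```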


See good usage examples on

Create direct download link:

Inline file:


| Parameter | Description | Value | Env |
| --- | --- | --- | --- |
| listener | port to use for http | :80 | LISTENER |
| profile-listener | port to use for profiler | :6060 | PROFILE_LISTENER |
| force-https | redirect to https | false | FORCE_HTTPS |
| tls-listener | port to use for https | :443 | TLS_LISTENER |
| tls-listener-only | flag to enable tls listener only | | TLS_LISTENER_ONLY |
| tls-cert-file | path to tls certificate | | TLS_CERT_FILE |
| tls-private-key | path to tls private key | | TLS_PRIVATE_KEY |
| http-auth-user | user for basic http auth on upload | | HTTP_AUTH_USER |
| http-auth-pass | pass for basic http auth on upload | | HTTP_AUTH_PASS |
| ip-whitelist | comma separated list of ips allowed to connect to the service | | IP_WHITELIST |
| ip-blacklist | comma separated list of ips not allowed to connect to the service | | IP_BLACKLIST |
| temp-path | path to temp folder | system temp | TEMP_PATH |
| web-path | path to static web files (for development or custom front end) | | WEB_PATH |
| proxy-path | path prefix when service is run behind a proxy | | PROXY_PATH |
| proxy-port | port of the proxy when the service is run behind a proxy | | PROXY_PORT |
| ga-key | google analytics key for the front end | | GA_KEY |
| provider | which storage provider to use (s3, storj, gdrive or local) | | |
| uservoice-key | user voice key for the front end | | USERVOICE_KEY |
| aws-access-key | aws access key | | AWS_ACCESS_KEY |
| aws-secret-key | aws secret key | | AWS_SECRET_KEY |
| bucket | aws bucket | | BUCKET |
| s3-endpoint | custom S3 endpoint | | S3_ENDPOINT |
| s3-region | region of the s3 bucket | eu-west-1 | S3_REGION |
| s3-no-multipart | disables s3 multipart upload | false | S3_NO_MULTIPART |
| s3-path-style | forces path style URLs, required for MinIO | false | S3_PATH_STYLE |
| storj-access | access for the project | | STORJ_ACCESS |
| storj-bucket | bucket to use within the project | | STORJ_BUCKET |
| basedir | path storage for local/gdrive provider | | BASEDIR |
| gdrive-client-json-filepath | path to oauth client json config for gdrive provider | | GDRIVE_CLIENT_JSON_FILEPATH |
| gdrive-local-config-path | path to store local config cache for gdrive provider | | GDRIVE_LOCAL_CONFIG_PATH |
| gdrive-chunk-size | chunk size for gdrive upload in megabytes, must be lower than available memory | 8 MB | GDRIVE_CHUNK_SIZE |
| lets-encrypt-hosts | hosts to use for lets encrypt certificates (comma separated) | | HOSTS |
| log | path to log file | | LOG |
| cors-domains | comma separated list of domains for CORS, setting it enables CORS | | CORS_DOMAINS |
| clamav-host | host for clamav feature | | CLAMAV_HOST |
| rate-limit | requests per minute | | RATE_LIMIT |
| max-upload-size | max upload size in kilobytes | | MAX_UPLOAD_SIZE |
| purge-days | number of days after which uploads are purged automatically | | PURGE_DAYS |
| purge-interval | interval in hours to run the automatic purge for (not applicable to S3 and Storj) | | PURGE_INTERVAL |
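Most flags can also be supplied through the environment-variable names listed above. A sketch of a local setup (the binary name follows the build step below; values and paths are illustrative):

```shell
export LISTENER=:8080
export TEMP_PATH=/tmp/
export PURGE_DAYS=7
./transfersh --provider local --basedir /tmp/
```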

If you want to use TLS with Let's Encrypt certificates, set lets-encrypt-hosts to your domain, set tls-listener to :443 and enable force-https.

If you want to use TLS with your own certificates, set tls-listener to :443, force-https, tls-cert-file and tls-private-key.
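As concrete invocations, the two TLS setups look roughly like this (a sketch; the binary name follows the build step below, and the hostname and certificate paths are placeholders):

```shell
# Let's Encrypt
./transfersh --provider local --basedir /tmp/ \
  --lets-encrypt-hosts example.com --tls-listener :443 --force-https

# your own certificates
./transfersh --provider local --basedir /tmp/ \
  --tls-listener :443 --force-https \
  --tls-cert-file /etc/ssl/transfer.crt --tls-private-key /etc/ssl/transfer.key
```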


The build has switched to Go modules (GO111MODULE).

go run main.go --provider=local --listener :8080 --temp-path=/tmp/ --basedir=/tmp/


$ git clone
$ cd
$ go build -o transfersh main.go


For easy deployment, we’ve created a Docker container.

docker run --publish 8080:8080 dutchcoders/ --provider local --basedir /tmp/
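To persist uploads across container restarts, the basedir can be mounted from the host (a sketch; `<image>` stands for the project's published image, whose name appears truncated above, and the host path is illustrative):

```shell
docker run --publish 8080:8080 \
  --volume /srv/transfersh:/data \
  <image> --provider local --basedir /data/
```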

S3 Usage

To use an AWS S3 bucket, you just need to specify the following options:

  • provider
  • aws-access-key
  • aws-secret-key
  • bucket
  • s3-region

If you specify the s3-region, you don't need to set the endpoint URL, since the correct endpoint will be used automatically.

Custom S3 providers

To use a custom non-AWS S3 provider, you need to specify the endpoint as defined by your cloud provider.
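For an S3-compatible store such as MinIO, that means setting the endpoint and enabling path-style URLs (a sketch; the endpoint, credentials and bucket name are placeholders):

```shell
./transfersh --provider s3 \
  --aws-access-key <access key> --aws-secret-key <secret key> \
  --bucket <bucket> \
  --s3-endpoint http://minio.local:9000 --s3-path-style
```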

Storj Network Provider

To use the Storj network as a storage provider, you need to specify the following flags:

  • provider --provider storj
  • storj-access (either via flag or environment variable STORJ_ACCESS)
  • storj-bucket (either via flag or environment variable STORJ_BUCKET)

Creating Bucket and Scope

In preparation you need to create a scope (or copy it from the uplink configuration) and a bucket.

To get started, download the latest uplink from the release page:

After extracting, run uplink setup. The wizard asks for the satellite to use, the API key (which you can retrieve via the satellite UI), and an encryption key. Once uplink is set up, create the bucket with the following schema: uplink mb sj://<BUCKET>, where <BUCKET> is your desired bucket name.

Afterwards you can copy the scope out of the uplink configuration file and start the endpoint. For enhanced security, it is recommended to provide both the scope and the bucket name as environment variables.


export STORJ_BUCKET=transfersh
export STORJ_ACCESS=<SCOPE>
./transfersh --provider storj

Google Drive Usage

To use Google Drive, you need to specify the following options:

  • provider
  • gdrive-client-json-filepath
  • gdrive-local-config-path
  • basedir

Creating Gdrive Client Json

You need to create an OAuth client ID, download the credentials JSON file, and place it in a safe directory.

Usage example

go run main.go --provider gdrive --basedir /tmp/ --gdrive-client-json-filepath /[credential_dir] --gdrive-local-config-path [directory_to_save_config]


Contributions are welcome.


Remco Verhoef

Uvis Grinfelds


Andrea Spacca

Code and documentation copyright 2011-2018 Remco Verhoef. Code released under the MIT license.