# transfer.sh [![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/dutchcoders/transfer.sh?utm_source=badge&utm_medium=badge&utm_campaign=&utm_campaign=pr-badge&utm_content=badge) [![Go Report Card](https://goreportcard.com/badge/github.com/dutchcoders/transfer.sh)](https://goreportcard.com/report/github.com/dutchcoders/transfer.sh) [![Docker pulls](https://img.shields.io/docker/pulls/dutchcoders/transfer.sh.svg)](https://hub.docker.com/r/dutchcoders/transfer.sh/) [![Build Status](https://travis-ci.org/dutchcoders/transfer.sh.svg?branch=master)](https://travis-ci.org/dutchcoders/transfer.sh)

Easy and fast file sharing from the command-line. This repository contains the server with everything you need to create your own instance.

Transfer.sh currently supports the s3 (Amazon S3) and gdrive (Google Drive) storage providers, as well as the local file system (local).

## Disclaimer

This repository has no relation to the service at https://transfer.sh, which is managed by https://storj.io. We cannot address any issue related to that service.

## Usage

### Upload:

```bash
$ curl --upload-file ./hello.txt https://transfer.sh/hello.txt
```

### Encrypt & upload:

```bash
$ cat /tmp/hello.txt|gpg -ac -o-|curl -X PUT --upload-file "-" https://transfer.sh/test.txt
```

### Download & decrypt:

```bash
$ curl https://transfer.sh/1lDau/test.txt|gpg -o- > /tmp/hello.txt
```

### Upload to virustotal:

```bash
$ curl -X PUT --upload-file nhgbhhj https://transfer.sh/test.txt/virustotal
```

### Deleting

Send a DELETE request to the URL returned in the `X-Url-Delete` response header (see below):

```bash
$ curl -X DELETE <X-Url-Delete URL>
```

## Request Headers

### Max-Downloads

```bash
$ curl --upload-file ./hello.txt https://transfer.sh/hello.txt -H "Max-Downloads: 1" # Limit the number of downloads
```

### Max-Days

```bash
$ curl --upload-file ./hello.txt https://transfer.sh/hello.txt -H "Max-Days: 1" # Set the number of days before deletion
```
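Both request headers can be combined in a single upload; a minimal sketch (the limits shown are arbitrary):

```bash
# expire after one download or five days, whichever comes first
$ curl --upload-file ./hello.txt https://transfer.sh/hello.txt -H "Max-Downloads: 1" -H "Max-Days: 5"
```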
## Response Headers

### X-Url-Delete

The URL used to request the deletion of a file, returned as a response header.

```bash
curl -sD - --upload-file ./hello https://transfer.sh/hello.txt | grep 'X-Url-Delete'
X-Url-Delete: https://transfer.sh/hello.txt/BAYh0/hello.txt/PDw0NHPcqU
```

## Add alias to .bashrc or .zshrc

### Using curl

```bash
transfer() {
    curl --progress-bar --upload-file "$1" https://transfer.sh/$(basename "$1") | tee /dev/null;
    echo
}

alias transfer=transfer
```

### Using wget

```bash
transfer() {
    wget -t 1 -qO - --method=PUT --body-file="$1" --header="Content-Type: $(file -b --mime-type "$1")" https://transfer.sh/$(basename "$1");
    echo
}

alias transfer=transfer
```

## Add alias for fish-shell

### Using curl

```fish
function transfer --description 'Upload a file to transfer.sh'
    if [ $argv[1] ]
        # write output to a tmpfile because of the progress bar
        set -l tmpfile ( mktemp -t transferXXXXXX )
        curl --progress-bar --upload-file "$argv[1]" https://transfer.sh/(basename $argv[1]) >> $tmpfile
        cat $tmpfile
        command rm -f $tmpfile
    else
        echo 'usage: transfer FILE_TO_TRANSFER'
    end
end

funcsave transfer
```

### Using wget

```fish
function transfer --description 'Upload a file to transfer.sh'
    if [ $argv[1] ]
        wget -t 1 -qO - --method=PUT --body-file="$argv[1]" --header="Content-Type: (file -b --mime-type $argv[1])" https://transfer.sh/(basename $argv[1])
    else
        echo 'usage: transfer FILE_TO_TRANSFER'
    end
end

funcsave transfer
```

Now run it like this:

```bash
$ transfer test.txt
```

## Add alias on Windows

Put a file called `transfer.cmd` somewhere in your PATH with this inside it:

```cmd
@echo off
setlocal
:: use env vars to pass names to PS, to avoid escaping issues
set FN=%~nx1
set FULL=%1
powershell -noprofile -command "$(Invoke-Webrequest -Method put -Infile $Env:FULL https://transfer.sh/$Env:FN).Content"
```

## Link aliases

Create direct download link:

https://transfer.sh/1lDau/test.txt --> https://transfer.sh/get/1lDau/test.txt

Inline file:

https://transfer.sh/1lDau/test.txt --> https://transfer.sh/inline/1lDau/test.txt

## Usage

Parameter | Description | Value | Env
--- | --- | --- | ---
listener | port to use for http (:80) | |
profile-listener | port to use for profiler (:6060) | |
force-https | redirect to https | false |
tls-listener | port to use for https (:443) | |
tls-listener-only | flag to enable tls listener only | |
tls-cert-file | path to tls certificate | |
tls-private-key | path to tls private key | |
http-auth-user | user for basic http auth on upload | |
http-auth-pass | pass for basic http auth on upload | |
ip-whitelist | comma separated list of ips allowed to connect to the service | |
ip-blacklist | comma separated list of ips not allowed to connect to the service | |
temp-path | path to temp folder | system temp |
web-path | path to static web files (for development or custom front end) | |
proxy-path | path prefix when service is run behind a proxy | |
ga-key | google analytics key for the front end | |
uservoice-key | user voice key for the front end | |
provider | which storage provider to use | (s3, gdrive or local) |
aws-access-key | aws access key | | AWS_ACCESS_KEY
aws-secret-key | aws secret key | | AWS_SECRET_KEY
bucket | aws bucket | | BUCKET
s3-region | region of the s3 bucket | eu-west-1 | S3_REGION
s3-no-multipart | disables s3 multipart upload | false |
s3-path-style | forces path style URLs, required for Minio | false |
basedir | path storage for local/gdrive provider | |
gdrive-client-json-filepath | path to oauth client json config for gdrive provider | |
gdrive-local-config-path | path to store local transfer.sh config cache for gdrive provider | |
gdrive-chunk-size | chunk size for gdrive upload in megabytes, must be lower than available memory (8 MB) | |
lets-encrypt-hosts | hosts to use for lets encrypt certificates (comma separated) | |
log | path to log file | |

If you want to use TLS with Let's Encrypt certificates, set lets-encrypt-hosts to your domain, set tls-listener to :443 and enable force-https.

If you want to use TLS with your own certificates, set tls-listener to :443, force-https, tls-cert-file and tls-private-key.
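For example, the Let's Encrypt setup described above could be started like this (a minimal sketch; `transfer.example.com` is a placeholder domain and `transfersh` refers to the binary produced in the Build section below):

```bash
# placeholder domain; the lets-encrypt-hosts flag accepts a comma separated list
transfersh --provider local --basedir /tmp/ \
  --lets-encrypt-hosts transfer.example.com \
  --tls-listener :443 --force-https
```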
## Development

The project has switched to Go modules (GO111MODULE).

```bash
go run main.go --provider=local --listener :8080 --temp-path=/tmp/ --basedir=/tmp/
```

## Build

If on Go < 1.11:

```bash
go get -u -v ./...
```

```bash
go build -o transfersh main.go
```
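With Go 1.11 or newer, a module-aware build could look like this (a sketch; it assumes you are building from a fresh clone of this repository):

```bash
# clone the repository and build the binary with Go modules enabled
git clone https://github.com/dutchcoders/transfer.sh.git
cd transfer.sh
GO111MODULE=on go build -o transfersh main.go
```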
## Docker

For easy deployment, we've created a Docker container.

```bash
docker run --publish 8080:8080 dutchcoders/transfer.sh:latest --provider local --basedir /tmp/
```

Pass the parameters to the transfer.sh binary inside the container as *args*, not through Docker environment variables.

```bash
docker run -p 8080:8080 dutchcoders/transfer.sh:latest --provider s3 --http-auth-user my-username --http-auth-pass somepassword --aws-access-key $AWS_ACCESS_KEY_ID --aws-secret-key $AWS_SECRET_ACCESS_KEY --bucket $AWS_TRANSFERSH_BUCKET --s3-region $AWS_TRANSFERSH_BUCKET_REGION
```

## How to: Kubernetes Helm chart

```sh
# Kubernetes examples:

# run locally
kubectl run transfersh --restart=Never --image=dutchcoders/transfer.sh:latest -- --http-auth-user my-username --http-auth-pass somepassword --provider local --basedir=/tmp

# run with s3
kubectl run transfersh --restart=Never --image=dutchcoders/transfer.sh:latest -- --http-auth-user my-username --http-auth-pass somepassword --provider s3 --aws-access-key $AWS_ACCESS_KEY_ID --aws-secret-key $AWS_SECRET_ACCESS_KEY --bucket $AWS_TRANSFERSH_BUCKET --s3-region $AWS_TRANSFERSH_BUCKET_REGION

# Manual process of Helm chart deployment
# If your service will run behind nginx or another proxy, also update the proxy-path variable in deployment.yaml; by default it is blank.

# Example of manually creating the secrets needed for the deployment params, aligned with the Usage parameters:
# https://github.com/dutchcoders/transfer.sh#usage-1
kubectl create secret generic transfersh-secrets --from-literal=HTTP_AUTH_USER=$HTTP_AUTH_USER --from-literal=HTTP_AUTH_PASS=$HTTP_AUTH_PASS --from-literal=AWS_ACCESS_KEY=$AWS_ACCESS_KEY --from-literal=AWS_SECRET_KEY=$AWS_SECRET_KEY --from-literal=BUCKET=$BUCKET --from-literal=S3_REGION=$S3_REGION --from-literal=PROXY_PATH=$PROXY_PATH --from-literal=PROVIDER=$PROVIDER

cd charts/transfersh
helm install --debug --name=transfersh transfersh/
```

## S3 Usage

For usage with an AWS S3 bucket, you just need to specify the following options:

- provider
- aws-access-key
- aws-secret-key
- bucket
- s3-region

If you specify the s3-region, you don't need to set the endpoint URL; the correct endpoint will be used automatically.

### Custom S3 providers

To use a custom non-AWS S3 provider, you need to specify the endpoint as defined by your cloud provider.

## Contributions

Contributions are welcome.

## Creators

**Remco Verhoef**

**Uvis Grinfelds**

## Maintainer

**Andrea Spacca**

## Copyright and license

Code and documentation copyright 2011-2018 Remco Verhoef. Code released under [the MIT license](LICENSE).