transfer.sh

Easy and fast file sharing from the command-line. This code contains the server with everything you need to create your own instance.

Transfer.sh currently supports the s3 (Amazon S3), gdrive (Google Drive), and local (local file system) providers.

Disclaimer

This project repository has no relation to the service at https://transfer.sh, which is managed by https://storj.io. We are therefore unable to address any issues related to the service at https://transfer.sh.

Usage

Upload:

$ curl --upload-file ./hello.txt https://transfer.sh/hello.txt

Encrypt & upload:

$ cat /tmp/hello.txt|gpg -ac -o-|curl -X PUT --upload-file "-" https://transfer.sh/test.txt

Download & decrypt:

$ curl https://transfer.sh/1lDau/test.txt|gpg -o- > /tmp/hello.txt

Upload to VirusTotal:

$ curl -X PUT --upload-file nhgbhhj https://transfer.sh/test.txt/virustotal

Deleting

$ curl -X DELETE <X-Url-Delete Response Header URL>

Request Headers

Max-Downloads

$ curl --upload-file ./hello.txt https://transfer.sh/hello.txt -H "Max-Downloads: 1" # Limit the number of downloads

Max-Days

$ curl --upload-file ./hello.txt https://transfer.sh/hello.txt -H "Max-Days: 1" # Set the number of days before deletion
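
Both headers can be combined on a single upload, for example to allow one download within one day:

$ curl --upload-file ./hello.txt https://transfer.sh/hello.txt -H "Max-Downloads: 1" -H "Max-Days: 1"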

Response Headers

X-Url-Delete

The URL used to request the deletion of a file. Returned as a response header.

curl -sD - --upload-file ./hello https://transfer.sh/hello.txt | grep 'X-Url-Delete'
X-Url-Delete: https://transfer.sh/hello.txt/BAYh0/hello.txt/PDw0NHPcqU
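
As a convenience, the delete URL can be captured from the response headers and used directly. A minimal sketch (the grep/cut parsing below is just one possible way to extract the header, not part of the API):

# upload and keep only the X-Url-Delete header value
DELETE_URL=$(curl -sD - --upload-file ./hello.txt https://transfer.sh/hello.txt -o /dev/null | grep -i 'x-url-delete' | cut -d' ' -f2 | tr -d '\r')

# remove the uploaded file again
curl -X DELETE "$DELETE_URL"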

Add alias to .bashrc or .zshrc

Using curl

transfer() {
    curl --progress-bar --upload-file "$1" https://transfer.sh/$(basename "$1") | tee /dev/null;
    echo
}

alias transfer=transfer

Using wget

transfer() {
    wget -t 1 -qO - --method=PUT --body-file="$1" --header="Content-Type: $(file -b --mime-type "$1")" https://transfer.sh/$(basename "$1");
    echo
}

alias transfer=transfer

Add alias for fish-shell

Using curl

function transfer --description 'Upload a file to transfer.sh'
    if [ $argv[1] ]
        # write output to a tmpfile because of the progress bar
        set -l tmpfile ( mktemp -t transferXXXXXX )
        curl --progress-bar --upload-file "$argv[1]" https://transfer.sh/(basename $argv[1]) >> $tmpfile
        cat $tmpfile
        command rm -f $tmpfile
    else
        echo 'usage: transfer FILE_TO_TRANSFER'
    end
end

funcsave transfer

Using wget

function transfer --description 'Upload a file to transfer.sh'
    if [ $argv[1] ]
        wget -t 1 -qO - --method=PUT --body-file="$argv[1]" --header="Content-Type: "(file -b --mime-type "$argv[1]") https://transfer.sh/(basename $argv[1])
    else
        echo 'usage: transfer FILE_TO_TRANSFER'
    end
end

funcsave transfer

Now run it like this:

$ transfer test.txt

Add alias on Windows

Put a file called transfer.cmd somewhere in your PATH with this inside it:

@echo off
setlocal
:: use env vars to pass names to PS, to avoid escaping issues
set FN=%~nx1
set FULL=%1
powershell -noprofile -command "$(Invoke-Webrequest -Method put -Infile $Env:FULL https://transfer.sh/$Env:FN).Content"
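
With transfer.cmd on your PATH, an upload then looks like this (hello.txt is a placeholder file name):

transfer hello.txt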

Create direct download link:

https://transfer.sh/1lDau/test.txt --> https://transfer.sh/get/1lDau/test.txt

Inline file:

https://transfer.sh/1lDau/test.txt --> https://transfer.sh/inline/1lDau/test.txt

Usage

Parameter | Description | Value | Env
listener | port to use for http | :80
profile-listener | port to use for profiler | :6060
force-https | redirect to https | false
tls-listener | port to use for https | :443
tls-listener-only | flag to enable tls listener only
tls-cert-file | path to tls certificate
tls-private-key | path to tls private key
http-auth-user | user for basic http auth on upload
http-auth-pass | pass for basic http auth on upload
ip-whitelist | comma separated list of ips allowed to connect to the service
ip-blacklist | comma separated list of ips not allowed to connect to the service
temp-path | path to temp folder | system temp
web-path | path to static web files (for development or custom front end)
proxy-path | path prefix when service is run behind a proxy
ga-key | google analytics key for the front end
uservoice-key | user voice key for the front end
provider | which storage provider to use (s3, gdrive or local)
aws-access-key | aws access key | | AWS_ACCESS_KEY
aws-secret-key | aws secret key | | AWS_SECRET_KEY
bucket | aws bucket | | BUCKET
s3-region | region of the s3 bucket | eu-west-1 | S3_REGION
s3-no-multipart | disables s3 multipart upload | false
s3-path-style | forces path style URLs, required for Minio | false
basedir | path storage for local/gdrive provider
gdrive-client-json-filepath | path to oauth client json config for gdrive provider
gdrive-local-config-path | path to store local transfer.sh config cache for gdrive provider
gdrive-chunk-size | chunk size for gdrive upload in megabytes, must be lower than available memory | 8 MB
lets-encrypt-hosts | hosts to use for lets encrypt certificates (comma separated)
log | path to log file

If you want to use TLS with Let's Encrypt certificates, set lets-encrypt-hosts to your domain, set tls-listener to :443 and enable force-https.

If you want to use TLS with your own certificates, set tls-listener to :443, enable force-https, and set tls-cert-file and tls-private-key.
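
For example, a Let's Encrypt setup could look like this (a sketch only; the binary name follows the build step below and transfer.example.com is a placeholder domain):

./transfersh --provider local --basedir /tmp/ --lets-encrypt-hosts transfer.example.com --tls-listener :443 --force-https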

Development

The project has switched to Go modules (GO111MODULE).

go run main.go --provider=local --listener :8080 --temp-path=/tmp/ --basedir=/tmp/

Build

If on Go < 1.11:

go get -u -v ./...
go build -o transfersh main.go

Docker

For easy deployment, we've created a Docker container.

docker run --publish 8080:8080 dutchcoders/transfer.sh:latest --provider local --basedir /tmp/

Pass the parameters to the transfer.sh binary inside the container as arguments, not through Docker environment variables:

docker run -p 8080:8080 dutchcoders/transfer.sh:latest --provider s3 --http-auth-user my-username --http-auth-pass somepassword --aws-access-key $AWS_ACCESS_KEY_ID --aws-secret-key $AWS_SECRET_ACCESS_KEY --bucket $AWS_TRANSFERSH_BUCKET --s3-region $AWS_TRANSFERSH_BUCKET_REGION

Manually run inside a Kubernetes cluster

# run locally
kubectl run transfersh --restart=Never --image=dutchcoders/transfer.sh:latest -- --http-auth-user my-username --http-auth-pass somepassword --provider local --basedir=/tmp 

# run with s3
kubectl run transfersh --restart=Never --image=dutchcoders/transfer.sh:latest -- --http-auth-user my-username --http-auth-pass somepassword --provider s3 --aws-access-key $AWS_ACCESS_KEY_ID --aws-secret-key $AWS_SECRET_ACCESS_KEY --bucket $AWS_TRANSFERSH_BUCKET --s3-region $AWS_TRANSFERSH_BUCKET_REGION

# Example: manually create the secrets needed for deployment; the names align with the Usage parameters (https://github.com/dutchcoders/transfer.sh#usage-1)
kubectl create secret generic transfersh-secrets --from-literal=HTTP_AUTH_USER=$HTTP_AUTH_USER --from-literal=HTTP_AUTH_PASS=$HTTP_AUTH_PASS --from-literal=AWS_ACCESS_KEY=$AWS_ACCESS_KEY --from-literal=AWS_SECRET_KEY=$AWS_SECRET_KEY --from-literal=BUCKET=$BUCKET --from-literal=S3_REGION=$S3_REGION --from-literal=PROXY_PATH=$PROXY_PATH --from-literal=PROVIDER=$PROVIDER

TIPS

If your service runs behind nginx or any other proxy in your Kubernetes cluster, you must pass the proxy-path variable to avoid errors from the web front end; by default it is blank. Do not add a '/' prefix to the path. For example, if the routing piece of your Kubernetes ingress YAML looks like this:

...
spec:
  rules:
  - host: api.myhost.mysite.com
    http:
      paths:
      - backend:
          serviceName: transfersh
          servicePort: 80
        path: /filemanager
...

then the PROXY_PATH argument must be set to 'filemanager', not '/filemanager'.
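
With that ingress in place, the argument would be passed like this (a sketch using the local provider; paths are placeholders):

./transfersh --provider local --basedir /tmp/ --proxy-path filemanager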

Helm chart

cd charts/transfersh
helm install --debug --name=transfersh transfersh/

NOTE:

  • All variables are the same as the Usage parameters above, with the operations below applied to them.
  • Operations applied to the Usage params:
    • uppercasing them
    • replacing hyphens with underscores
  • Example: http-auth-user => HTTP_AUTH_USER, s3-region => S3_REGION
  • Every argument needed by the transfer.sh binary is passed as an environment variable in the deployment YAML, injected from secrets/configMaps at runtime.
  • The deployment fails if the secrets/configMaps selected in the values.yaml file are not available in your cluster.
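
If you keep custom overrides in a separate values file, it can be passed to helm in the usual way (my-values.yaml is a hypothetical file selecting your secrets/configMaps):

helm install --debug --name=transfersh -f my-values.yaml transfersh/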

S3 Usage

To use an AWS S3 bucket, you only need to specify the following options:

  • provider
  • aws-access-key
  • aws-secret-key
  • bucket
  • s3-region

If you specify the s3-region, you don't need to set the endpoint URL since the correct endpoint will be used automatically.
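
For example (a sketch; the bucket name and region are placeholders and the credentials are read from your environment):

./transfersh --provider s3 --aws-access-key $AWS_ACCESS_KEY_ID --aws-secret-key $AWS_SECRET_ACCESS_KEY --bucket my-transfersh-bucket --s3-region eu-west-1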

Custom S3 providers

To use a custom non-AWS S3 provider, you need to specify the endpoint as defined by your cloud provider.

Contributions

Contributions are welcome.

Creators

Remco Verhoef

Uvis Grinfelds

Maintainer

Andrea Spacca

Code and documentation copyright 2011-2018 Remco Verhoef. Code released under the MIT license.