
Set additional headers using --header-upload and --header-download #59

Closed
19 of 36 tasks
pquerna opened this issue May 10, 2015 · 44 comments

@pquerna

pquerna commented May 10, 2015

For example, if your s3 bucket is being served behind CloudFront, it is common to set Cache-Control: max-age=300,public to reduce the cache TTL, or to set Content-Encoding: gzip for pre-compressed files.

s3cmd has the --add-header parameter for this purpose.
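
For reference, with the --header-upload flag this issue eventually introduced (see the summary at the end of the thread), the examples above would look roughly like the following; the remote name, bucket and paths are placeholders:

  rclone copy /path/to/files s3:bucket --header-upload "Cache-Control: max-age=300,public"
  rclone copy /path/to/precompressed s3:bucket --header-upload "Content-Encoding: gzip"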

Backends which do and don't have support for --header-upload and --header-download

Backends which are crossed out are not HTTP-based and can't support the feature.

  • alias
  • amazonclouddrive
  • azureblob
  • b2
  • box
  • cache
  • chunker
  • crypt
  • drive
  • dropbox
  • fichier
  • ftp
  • googlecloudstorage
  • googlephotos
  • http
  • hubic
  • jottacloud
  • koofr
  • local
  • mailru
  • mega
  • memory
  • onedrive
  • opendrive
  • pcloud
  • premiumizeme
  • putio
  • qingstor
  • s3
  • sftp
  • sharefile
  • sugarsync
  • swift
  • union
  • webdav
  • yandex
@ncw
Member

ncw commented May 10, 2015

I see. Wouldn't be too hard! I'd probably make an --add-header flag which all the remote storage systems could use, not just s3.

Thanks for the suggestion.

@schickling

+1

@ncw ncw changed the title from "s3: Set additional headers" to "Set additional headers" on Sep 8, 2015
@ncw
Member

ncw commented Sep 8, 2015

NB: this additional header should only be set on the operation that actually uploads the file, not on all the other operations (e.g. listing directories).

@nodughere

Agreed.

The headers should be a dictionary.

Example rclone with Swift
rclone --add-header={'x-meta-object-my-favorite-car':'ford mustang', 'X-Delete-After': 3600} copy /home/doug/data/ SwiftCluster:container

As an example, this command would add metadata and allow objects to auto-delete in 1 hour.

Example rclone with S3
rclone --add-header={'x-amz-meta-my-favorite-car':'ford mustang'} copy /home/doug/data/ S3Cluster:bucket
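
As implemented later in the thread, the flags take repeated "Header: Value" strings rather than a dictionary, so the Swift example above would look roughly like the following (values taken from the example above; note that Swift's object metadata prefix is X-Object-Meta-, which is used here):

  rclone copy /home/doug/data/ SwiftCluster:container --header-upload "X-Object-Meta-My-Favorite-Car: ford mustang" --header-upload "X-Delete-After: 3600"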

@ncw ncw added this to the Soon milestone Feb 10, 2016
@wouterv

wouterv commented Feb 14, 2016

I have a suggestion for local file system copy actions:

What would help with backups is to have an additional header file next to the actual file, either as a key=value file or as a JSON file containing the metadata.

Those files could then be used when uploading to a cloud provider.

Perhaps with an extra option like --copy-headers=true --head-file-extension=.header or something, for copying both from and to a local file system.
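
Purely as an illustration of this suggestion (nothing like this exists in rclone; the .header extension comes from the hypothetical flag above), a sidecar file data.txt.header might contain either the key=value form:

  Cache-Control=max-age=300,public
  Content-Encoding=gzip

or the JSON form:

  {"Cache-Control": "max-age=300,public", "Content-Encoding": "gzip"}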

@treyd

treyd commented Jun 6, 2016

+1 for this request, although I would like the additional header to also be set on all requests. The reason is that there is some middleware in Swift that uses headers to control functionality, and it would be great if rclone could do this as well.

@ncw
Member

ncw commented Jun 6, 2016

I'm all in favour of a general purpose option to add headers... Anyone fancy sending in a PR?

@fortunto2

I'm sorry, but when is the implementation scheduled?

@ncw ncw modified the milestones: v1.37, Soon Feb 21, 2017
@ncw
Member

ncw commented Jun 20, 2017

Rclone now has all the machinery to make this relatively straightforward.

I would implement two flags which could be repeated

--upload-header "Header: String"
--download-header "Header: String"

These would be applied to uploads (specifically Object.Update or Fs.Put) or downloads (specifically Object.Open).

These would need to be applied as options (there is a general purpose HTTPOption already). A bit of work would need to be done in each remote to make this work. The options would be applied in the Copy primitive (and possibly elsewhere - this might need factoring).

@ncw ncw modified the milestones: v1.37, v1.38 Jul 19, 2017
@peixotorms

+1

@ncw ncw modified the milestones: v1.38, v1.39 Sep 30, 2017
@browny

browny commented Nov 16, 2017

+1

@ncw ncw modified the milestones: v1.39, v1.40 Jan 11, 2018
@ducktype

+1

@ncw ncw removed this from the v1.40 milestone Mar 19, 2018
@ncw
Member

ncw commented Apr 23, 2020

I have also implemented the --header flag which affects all transactions on all backends.

@bjg2

bjg2 commented Apr 30, 2020

Hey guys, I've just got to the point of testing/using this feature, and I'm having issues.

I'm running version rclone-v1.51.0-254-g74d9dabd-beta-windows-amd64 and I use the following command:

copyto SOURCE DEST --header-upload "Content-Disposition: attachment; filename='randomstvar.txt'"

On AWS S3 the headers work as expected (screenshot attached).

On Google Cloud Storage, not so much (screenshot attached).

Is there some bug with upload headers on GCS, or am I doing something wrong? This is very important for me at the moment, as I need to upload a bunch of data to GCS in a few days... :)

@ncw
Member

ncw commented May 1, 2020

s3 - good!

gcs... The docs are here: https://cloud.google.com/storage/docs/gsutil/addlhelp/WorkingWithObjectMetadata

I can see the PUT requests are OK with -vv --dump bodies but the headers don't get set...

Just tried a test with gsutil and it puts the headers inline in the object info :-( I'll have a go at fixing that in a bit.

@bjg2

bjg2 commented May 2, 2020

Thanks for looking into this!

Not to push you, but is it possible to fix this issue by Monday? On Monday I have to upload a few million files to GCS, and it would be great if I didn't have to track them down and change their content disposition afterwards...

ncw added a commit that referenced this issue May 2, 2020
Before this change we were setting the headers on the PUT request. However, this isn't where GCS needs them.

After this fix we set the headers in the object upload request itself.

This means that we only support a limited range of headers

- Cache-Control
- Content-Disposition
- Content-Encoding
- Content-Language
- Content-Type

It would be possible to support adding metadata also if there is demand.
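
With that fix in place, an upload of a pre-compressed file to GCS using the supported headers would look roughly like this (the remote name and paths are placeholders):

  rclone copyto file.txt.gz gcs:bucket/file.txt --header-upload "Content-Encoding: gzip" --header-upload "Cache-Control: max-age=300,public"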
@ncw
Member

ncw commented May 2, 2020

@bjg2 I've tracked down the problem here. Rclone needs to set the headers on the object data in the request, not on the request itself.

I've had a go at fixing this here - can you give it a try? It works in my tests, so if it works for you it should be good for your upload!

https://beta.rclone.org/branch/v1.51.0-265-gb40e997f-fix-59-gcs-headers-beta/ (uploaded in 15-30 mins)

@bjg2

bjg2 commented May 2, 2020

Both S3 and GCS work now! Thanks!


@Shareed2k
Contributor

What about custom headers in GCS?

ncw added a commit that referenced this issue May 6, 2020
Before this change we were setting the headers on the PUT request. However, this isn't where GCS needs them.

After this fix we set the headers in the object upload request itself.

This means that we only support a limited range of headers

- Cache-Control
- Content-Disposition
- Content-Encoding
- Content-Language
- Content-Type
- X-Goog-Meta-

Note the last of those is for setting custom metadata in the form
"X-Goog-Meta-Key: value".
ncw added a commit that referenced this issue May 6, 2020
@ncw
Member

ncw commented May 6, 2020

What about custom headers in GCS?

I've added support for custom metadata, using

x-goog-meta-key: value

Is that what you mean?

I don't think it is possible to set any of the other headers.

I've merged this to master now, which means it will be in the latest beta in 15-30 mins and released in v1.52.
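
A hedged example of that custom metadata support (the remote name and path are placeholders; the key and value are taken from the earlier Swift/S3 example):

  rclone copy /home/doug/data/ gcs:bucket --header-upload "X-Goog-Meta-My-Favorite-Car: ford mustang"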

@calebcase calebcase mentioned this issue May 8, 2020
@ncw ncw modified the milestones: v1.52, v1.53 May 29, 2020
ncw added a commit that referenced this issue Jun 5, 2020
Before this change we were setting the headers on the PUT
request for normal and multipart uploads. For normal uploads this caused the error

    403 Forbidden: There were headers present in the request which were not signed

After this fix we set the headers in the object upload request itself
as the s3 SDK expects.

This means that we only support a limited range of headers

- Cache-Control
- Content-Disposition
- Content-Encoding
- Content-Language
- Content-Type
- X-Amz-Tagging
- X-Amz-Meta-

Note the last of those is for setting custom metadata in the form
"X-Amz-Meta-Key: value".

This now works for multipart uploads and single part uploads

See also #59
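
For S3, the same pattern with custom metadata and object tagging would look roughly like the following (remote name from the earlier example; the tag key and value are purely illustrative):

  rclone copy /home/doug/data/ S3Cluster:bucket --header-upload "X-Amz-Meta-My-Favorite-Car: ford mustang" --header-upload "X-Amz-Tagging: key1=value1"
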
ncw added a commit that referenced this issue Jun 5, 2020
ncw added a commit that referenced this issue Jun 10, 2020
ncw added a commit that referenced this issue Jun 10, 2020
mawaya pushed a commit to mawaya/rclone that referenced this issue Jun 12, 2020
negative0 pushed a commit to negative0/rclone that referenced this issue Jul 16, 2020
@Cinerar

Cinerar commented Jul 28, 2020

Am I right that this isn't implemented for operations.CopyFile, but only for operations.Copy and operations.Rcat?

@ncw
Member

ncw commented Jul 29, 2020

Am I right that this isn't implemented for operations.CopyFile, but only for operations.Copy and operations.Rcat?

@Cinerar --header-upload and --header-download should work for copy, copyfile, cat and rcat

@Cinerar

Cinerar commented Jul 29, 2020

In my experiments I found that --header-upload is not set from the environment variable RCLONE_HEADER_UPLOAD (maybe I used the wrong one?), so I was forced to use

fs.Config.UploadHeaders = []*fs.HTTPOption{
	&fs.HTTPOption{Key: "Content-Disposition", Value: "attachment"},
}

and I confirm that it works for copyfile too.

@ncw
Member

ncw commented Sep 1, 2020

This is all finished and merged now

  --header stringArray            Set HTTP header for all transactions
  --header-download stringArray   Set HTTP header for download transactions
  --header-upload stringArray     Set HTTP header for upload transactions

Please open a new issue if there is a problem with it!

@ncw ncw closed this as completed Sep 1, 2020
@Manojnaik1

Hi, can anyone please guide me on how to upload multiple metadata headers?

@ncw
Member

ncw commented Jan 3, 2023

@Manojnaik1 repeat the flag as many times as necessary.
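
For example (the metadata keys and values here are purely illustrative, using the S3 metadata prefix from the commits above; the remote and path are placeholders, and the prefix differs by backend, e.g. X-Goog-Meta- for GCS):

  rclone copy /path remote:bucket --header-upload "X-Amz-Meta-Key1: value1" --header-upload "X-Amz-Meta-Key2: value2"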

@Manojnaik1

@ncw thanks for your reply, it's working.
