Set additional headers using --header-upload and --header-download #59
Comments
I see. Wouldn't be too hard! Thanks for the suggestion.
+1
NB: this additional header only wants to be set on the operation that actually uploads the file, not on all the other operations (e.g. listing directories).
Agreed. The headers should be a dictionary. An example with rclone and Swift would add metadata and allow objects to auto-delete in 1 hour; an equivalent example applies to S3.
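The commenter's original commands were not preserved in this thread. As a hedged sketch: OpenStack Swift supports object expiry via the `X-Delete-After` header (seconds until deletion) and custom metadata via the `X-Object-Meta-` prefix, so with the upload-header flag this issue eventually produced, the auto-delete-in-1-hour case might look like:

```shell
# Hedged sketch (bucket and metadata names are made up):
# X-Delete-After tells Swift to expire the object after the given
# number of seconds; X-Object-Meta-* sets custom Swift metadata.
rclone copy \
  --header-upload "X-Delete-After: 3600" \
  --header-upload "X-Object-Meta-Purpose: backup" \
  /path/to/files swift:mybucket
```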
I have a suggestion for local file system copy actions: what would help with backups is an additional header file next to the actual file (a key=value file, or a JSON file containing the metadata), and then using those files when uploading to a cloud provider. Perhaps with extra options like --copy-headers=true --head-file-extension=.header or something, for both copying from and to a local file system.
+1 for this request, although I would like the additional header to also be set on all requests. The reason is that some middleware in Swift uses headers to control functionality, and it would be great if rclone could drive this as well.
I'm all in favour of a general purpose option to add headers... Anyone fancy sending in a PR?
Sorry to ask, but when is the implementation scheduled?
rclone now has all the machinery to make this relatively straightforward. I would implement two flags which could be repeated: --upload-header "Header: String" (and a download equivalent). These would be applied to uploads (specifically Object.Update or Fs.Put) or downloads (specifically Object.Open). They would need to be applied as options (there is a general purpose HTTPOption already). A bit of work would need to be done in each remote to make this work. The options would be applied in the Copy primitive (and possibly elsewhere - this might need factoring).
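A sketch of the proposed usage, using the flag names as finally merged (--header-upload and --header-download, per the issue title) rather than the --upload-header spelling in the proposal above; remote and header values are illustrative:

```shell
# Apply a header only to the upload operation (Object.Update / Fs.Put):
rclone copy --header-upload "Cache-Control: no-cache" /local/dir remote:bucket

# Apply a header only to the download operation (Object.Open):
rclone copy --header-download "X-Custom-Header: value" remote:bucket /local/dir
```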
+1
+1
+1
I have also implemented the
S3 - good! GCS - the docs are here: https://cloud.google.com/storage/docs/gsutil/addlhelp/WorkingWithObjectMetadata Just tried a test with gsutil and it puts the headers inline in the object info :-( I'll have a go at fixing that in a bit.
Thanks for looking into this! Not to push you, but is it possible to fix this issue by Monday? On Monday I have to upload a few million files to GCS, and it would be great if I didn't have to track them down and change their content disposition afterwards...
Before this change we were setting the headers on the PUT request. However this isn't where GCS needs them. After this fix we set the headers in the object upload request itself. This means that we only support a limited range of headers: Cache-Control, Content-Disposition, Content-Encoding, Content-Language, Content-Type. It would be possible to support adding metadata also if there is demand.
@bjg2 I've tracked down the problem here. Rclone needs to set the headers on the object data in the request, not on the request itself. I've had a go at fixing this here - can you have a go? It works in my tests, so if it works for you it should be good for your upload! https://beta.rclone.org/branch/v1.51.0-265-gb40e997f-fix-59-gcs-headers-beta/ (uploaded in 15-30 mins)
What about custom headers in GCS?
Before this change we were setting the headers on the PUT request. However this isn't where GCS needs them. After this fix we set the headers in the object upload request itself. This means that we only support a limited range of headers: Cache-Control, Content-Disposition, Content-Encoding, Content-Language, Content-Type, X-Goog-Meta-. Note that the last of those is for setting custom metadata in the form "X-Goog-Meta-Key: value".
I've added support for custom metadata using the X-Goog-Meta- prefix.
Is that what you mean? I don't think it is possible to set any of the other headers. I've merged this to master now, which means it will be in the latest beta in 15-30 mins and released in v1.52.
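Based on the GCS headers listed above, a hedged example of setting both a supported fixed header and custom metadata on upload (bucket and metadata values are made up):

```shell
# Content-Disposition is one of the supported fixed headers;
# X-Goog-Meta-* sets custom GCS object metadata.
rclone copy \
  --header-upload "Content-Disposition: attachment" \
  --header-upload "X-Goog-Meta-Project: demo" \
  /local/dir gcs:my-bucket
```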
Before this change we were setting the headers on the PUT request for normal and multipart uploads. For normal uploads this caused the error: 403 Forbidden: There were headers present in the request which were not signed. After this fix we set the headers in the object upload request itself, as the S3 SDK expects. This means that we only support a limited range of headers: Cache-Control, Content-Disposition, Content-Encoding, Content-Language, Content-Type, X-Amz-Tagging, X-Amz-Meta-. Note that the last of those is for setting custom metadata in the form "X-Amz-Meta-Key: value". This now works for multipart uploads and single part uploads. See also #59
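Putting the S3 headers listed above into a single hedged example (remote name and values are illustrative; X-Amz-Tagging takes URL-encoded key=value pairs joined by `&`):

```shell
# Cache-Control is a supported fixed header, X-Amz-Meta-* is custom
# metadata, and X-Amz-Tagging sets S3 object tags.
rclone copy \
  --header-upload "Cache-Control: max-age=3600" \
  --header-upload "X-Amz-Meta-Owner: alice" \
  --header-upload "X-Amz-Tagging: env=prod&team=infra" \
  /local/dir s3:my-bucket
```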
Am I right that this isn't implemented for operations.CopyFile, but only for operations.Copy and operations.Rcat?
@Cinerar --header-upload and --header-download should work for operations.CopyFile as well.
In my experiments I found that --header-upload is not set from the environment variable RCLONE_HEADER_UPLOAD (maybe I used the wrong one?), so I was forced to use the flag on the command line, and I confirm that it works for CopyFile too.
This is all finished and merged now
Please open a new issue if there is a problem with it!
Hi, can anyone please guide me on how to upload multiple metadata headers?
@Manojnaik1 repeat the flag as many times as necessary.
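Repeating the flag, as suggested above, attaches several metadata headers to one upload. A hedged example with S3-style names (adjust the prefix for your backend; file, bucket, and values are made up):

```shell
# Each --header-upload adds one header; repeat it for each
# piece of custom metadata.
rclone copy \
  --header-upload "X-Amz-Meta-Author: manoj" \
  --header-upload "X-Amz-Meta-Version: 2" \
  file.txt s3:my-bucket/path
```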
@ncw thanks for your reply. It's working.
For example, if your S3 bucket is being served behind CloudFront, it is common to set Cache-Control: max-age=300,public to reduce the cache TTL, or to set Content-Encoding: gzip for pre-compressed files. s3cmd has the --add-header parameter for this purpose.
Backends which have and haven't got support for --header-upload and --header-download: the following backends are not HTTP based and can't support the feature: alias, cache, chunker, crypt, ftp, local, memory, sftp, union.
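The CloudFront scenario described above as a concrete hedged command (bucket name is illustrative; this assumes the files were gzipped before upload):

```shell
# Short public cache TTL for CloudFront, plus Content-Encoding for
# pre-compressed assets.
rclone copy \
  --header-upload "Cache-Control: max-age=300,public" \
  --header-upload "Content-Encoding: gzip" \
  ./site-assets s3:my-cdn-bucket
```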