feat: pre-signed URL for S3 storage #2855
Conversation
LGTM, thanks for your contribution.
I am really happy to have this PR, and I have tried this feature with the latest … I am, again, really happy to have this PR, and willing to provide all my help if necessary. Also, I am wondering if it is possible not to set the resource's ACL to 'public_read' once the pre-sign feature works?
Thanks for the feedback. Could you please help me to debug?
It's a bit hard for me to speak for Aliyun, but generally speaking, for S3-compatible storage you may mark the bucket as …
Of course, I am willing to help, and I have made a quick fix in #2860.
After examination, I find that in my case the resources' ExternalLink hostname is not part of the endpoint, and the … There is a double-layer ACL: the bucket level and the resource level. The bucket ACL can be private, but the …
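The heuristic being discussed here — deciding whether a resource's ExternalLink points at the configured S3 storage — could look roughly like the sketch below. The function name and matching rules are assumptions for illustration, not the actual memos code; note that a custom CDN or CNAME hostname (as in the Aliyun case above) would defeat exactly this kind of check.

```go
package main

import (
	"net/url"
	"strings"
)

// belongsToStorage reports whether an external link appears to point at the
// configured S3-compatible storage. Illustrative only: it accepts both
// virtual-host style (bucket.endpoint/key) and path style (endpoint/bucket/key),
// but cannot recognize custom domains mapped onto the bucket.
func belongsToStorage(link, endpointHost, bucket string) bool {
	u, err := url.Parse(link)
	if err != nil {
		return false
	}
	if u.Host == bucket+"."+endpointHost { // virtual-host style
		return true
	}
	if u.Host == endpointHost && strings.HasPrefix(u.Path, "/"+bucket+"/") { // path style
		return true
	}
	return false
}
```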
@ertuil I see... Thanks for the fix! I was thinking maybe it's time to redesign the storage layer 😅 to avoid so many heuristics. My idea is to extend/introduce a storage interface like this:

```go
type ResourceProvider interface {
	Delete(ctx context.Context, key string) error
	Upload(ctx context.Context, key string, payload io.Reader) error
	Download(ctx context.Context, key string) (io.ReadCloser, error)
}
```

The table should be extended by a field … In that case, the response would be proxied by the memos instance (ex: …).

Pros:
…

Cons:
…

The proposal contains plenty of serious changes, so I would like to hear your opinions, as well as get a blessing from @boojack 😄, before starting.
I'm curious about the decision to implement apiv2 in gRPC. Could you please shed some light on the notable benefits compared to the HTTP format used in v1?
Adds automatic background refresh of all external links that belong to the current blob (S3) storage. The feature is disabled by default in order to keep backward compatibility.
The background goroutine spawns once during startup and periodically signs and updates external links if those links belong to the current S3 storage.
fixes #1191
Original idea
The original idea was to sign external links on demand; however, with the current architecture that would require duplicated code in plenty of places. If done, the changes would be quite invasive and ultimately pointless: I believe the architecture will eventually be updated to provide a more scalable way for pluggable storage, for example an Upload/Download interface without a hard dependency on external links. There are stubs already, but I don't feel confident enough to change a significant part of the application architecture.