S3: fix CORS handler + GZIP handling #12855


Merged (1 commit), Jul 11, 2025
12 changes: 6 additions & 6 deletions localstack-core/localstack/services/s3/cors.py
@@ -97,12 +97,6 @@ def handle_cors(self, chain: HandlerChain, context: RequestContext, response: Re
         https://docs.aws.amazon.com/AmazonS3/latest/userguide/cors.html
         https://docs.aws.amazon.com/AmazonS3/latest/userguide/ManageCorsUsing.html
         """
-
-        # this is used with the new ASF S3 provider
-        # although, we could use it to pre-parse the request and set the context to move the service name parser
-        if config.DISABLE_CUSTOM_CORS_S3:
-            return
-
         request = context.request
         is_s3, bucket_name = self.pre_parse_s3_request(context.request)

@@ -111,8 +105,14 @@ def handle_cors(self, chain: HandlerChain, context: RequestContext, response: Re
             return

         # set the service so that the regular CORS enforcer knows it needs to ignore this request
+        # we always want to set the service early, because the `ContentDecoder` is very early in the chain and
+        # depends on S3
         context.service = self._service
+
+        if config.DISABLE_CUSTOM_CORS_S3:
Member:
thought: I'll just ramble a bit here, to make sure we are on the same page about the (performance) implications of this PR. Please correct me if any of this is wrong 😅

So this config is false by default. This means that in the vast majority of cases, we already executed the minimum S3 service detection, and for this (default) case, this PR has no implications.

However, for users who actively set DISABLE_CUSTOM_CORS_S3 in their configuration, this PR changes the behavior such that this simplified service-routing heuristic is now executed.

If that is the case, I would say that this change is absolutely reasonable, and the implications are pretty much minimal.

Contributor Author:
Exactly, that's it. Thanks for making it clearer 👌

+            # we do not apply S3 specific headers if this config flag is set
+            return

         is_options_request = request.method == "OPTIONS"

         def stop_options_chain():
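The ordering constraint the new comment describes — the `ContentDecoder` runs very early in the chain and must already know the request targets S3 before any early return — can be sketched with a toy handler chain. All names below are hypothetical illustrations, not LocalStack's actual handler classes:

```python
import gzip


class Context:
    """Minimal stand-in for a request context carrying the detected service."""

    def __init__(self):
        self.service = None


def s3_cors_pre_handler(context: Context, disable_custom_cors: bool) -> None:
    # the fix: record the service *before* any early return, so later
    # handlers (like the content decoder) can rely on it being set
    context.service = "s3"
    if disable_custom_cors:
        # skip only the S3-specific CORS headers, not the service detection
        return


def content_decoder(context: Context, body: bytes) -> bytes:
    # S3 must receive gzip bodies verbatim (ContentEncoding is object
    # metadata); for other services the chain transparently decompresses
    if context.service == "s3":
        return body
    return gzip.decompress(body)


ctx = Context()
s3_cors_pre_handler(ctx, disable_custom_cors=True)
payload = gzip.compress(b"1234567890")
assert content_decoder(ctx, payload) == payload  # stored as-is for S3
```

Had the handler returned before setting `context.service`, the decoder would have treated the S3 body like any other request and decompressed it, corrupting the stored object.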
36 changes: 35 additions & 1 deletion tests/aws/services/s3/test_s3_cors.py
@@ -1,3 +1,6 @@
+import gzip
+from io import BytesIO
+
 import pytest
 import requests
 import xmltodict
@@ -14,7 +17,7 @@
 from localstack.testing.config import TEST_AWS_ACCESS_KEY_ID
 from localstack.testing.pytest import markers
 from localstack.utils.aws.request_context import mock_aws_request_headers
-from localstack.utils.strings import short_uid
+from localstack.utils.strings import checksum_crc32, short_uid


 def _bucket_url_vhost(bucket_name: str, region: str = "", localstack_host: str = None) -> str:
@@ -791,3 +794,34 @@ def test_delete_cors(self, s3_bucket, snapshot, aws_client):
             aws_client.s3.get_bucket_cors(Bucket=s3_bucket)

         snapshot.match("get-cors-deleted", e.value.response)
+
+    @markers.aws.only_localstack
+    def test_s3_cors_disabled(self, s3_bucket, aws_client, monkeypatch):
+        monkeypatch.setattr(config, "DISABLE_CUSTOM_CORS_S3", True)
+        # the ContentDecoder handler depends on the S3 CORS/pre-process handler to determine the service name
+
+        data = "1234567890"
+        # write contents to memory rather than to a file
+        upload_file_object = BytesIO()
+        mtime = 1676569620  # hardcode the GZIP timestamp
+        with gzip.GzipFile(fileobj=upload_file_object, mode="w", mtime=mtime) as filestream:
+            filestream.write(data.encode("utf-8"))
+
+        raw_gzip_value = upload_file_object.getvalue()
+        checksum = checksum_crc32(raw_gzip_value)
+
+        # upload the gzip-encoded object
+        put_obj = aws_client.s3.put_object(
+            Bucket=s3_bucket,
+            Key="test.gz",
+            ContentEncoding="gzip",
+            Body=raw_gzip_value,
+        )
+        assert put_obj["ChecksumCRC32"] == checksum
+
+        get_obj = aws_client.s3.get_object(
+            Bucket=s3_bucket,
+            Key="test.gz",
+        )
+        assert get_obj["ContentEncoding"] == "gzip"
+        assert get_obj["Body"].read() == raw_gzip_value
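The test's checksum comparison works because a fixed gzip `mtime` makes the compressed body byte-for-byte reproducible. A standalone sketch of the same idea, assuming LocalStack's `checksum_crc32` helper returns the base64 encoding of the big-endian 4-byte CRC32 value (the encoding S3 uses for `ChecksumCRC32`):

```python
import base64
import gzip
import zlib
from io import BytesIO


def crc32_b64(data: bytes) -> str:
    # assumed equivalent of checksum_crc32: base64 of the big-endian CRC32
    return base64.b64encode(zlib.crc32(data).to_bytes(4, "big")).decode()


# a fixed mtime makes the gzip stream reproducible across runs, so the
# checksum computed locally matches the one reported for the uploaded body
buf = BytesIO()
with gzip.GzipFile(fileobj=buf, mode="w", mtime=1676569620) as f:
    f.write(b"1234567890")
raw = buf.getvalue()
checksum = crc32_b64(raw)
```

Without pinning `mtime`, `gzip` embeds the current time in the header, and two compressions of the same payload would produce different bytes and different checksums.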