tl;dr: don’t set gzip = true (see docs)
when using hugo deploy to publish a site to S3 and CloudFront in a typical setup.
With gzip = true, hugo will upload files that are already compressed
and set Content-Encoding: gzip on the objects;
e.g. index.html on S3 will be a binary gzip’d file.
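For context, the setting lives on a matcher in Hugo’s deployment config;
here’s a minimal sketch, with the target name, bucket, and pattern as placeholders:

[deployment]

[[deployment.targets]]
name = "production"
URL = "s3://my-bucket?region=us-east-1"

[[deployment.matchers]]
# Uploads matching files pre-compressed, with Content-Encoding: gzip
# set on the object. This is the behavior the post recommends against.
pattern = "^.+\\.(html|css|js|json|svg|xml)$"
gzip = true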
This will mostly work fine in the browser,
but has some surprising behavior with curl.
Browsers always send an Accept-Encoding: gzip, br, deflate, ... header
and are happy to receive gzip data in response.
curl, on the other hand, only sends such a header when passed the --compressed flag,
and without it will not transparently decompress the response
even when the server properly provides the Content-Encoding: gzip header.
Instead it shows you a warning about binary content if you try to send the result to a terminal:
> curl 'https://example.com/index.html'
Warning: Binary output can mess up your terminal. Use "--output -" to tell curl to output it to your terminal anyway, or consider "--output <FILE>" to save to a file.
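Passing --compressed makes curl advertise gzip support and decode the response,
so the same URL prints HTML as you’d expect
(example.com standing in for your CloudFront domain, output abridged):

> curl --compressed 'https://example.com/index.html'
<!doctype html>
<html lang="en">
...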
You don’t need to compress the files before uploading to serve them compressed.
CloudFront can transparently handle compression for you based on the client’s Accept-Encoding header
when you enable the Compress setting on the distribution’s cache behavior.
This also has the benefit of supporting other encodings like Brotli.
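You can see the negotiation in action with a quick header probe;
with Compress enabled, CloudFront only compresses when the request asks for it
(again an illustrative domain, headers abridged):

> curl -s -D - -o /dev/null -H 'Accept-Encoding: gzip' 'https://example.com/index.html'
HTTP/2 200
content-type: text/html
content-encoding: gzip
...
> curl -s -D - -o /dev/null 'https://example.com/index.html'
HTTP/2 200
content-type: text/html
...

Without the request header, the second response comes back uncompressed,
with no content-encoding header at all.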
Depending on what you’re trying to do,
it may not be “wrong” to compress files with gzip before placing them in S3
and just require clients to deal with it,
but serving both compressed and uncompressed responses is the norm on the Web,
so your site will be less surprising if you support both.