
Using s3cmd with RadosGW

Getting s3cmd wired up to RadosGW

I walked through setting up radosgw with a proxmox cluster a while back.
That was fun, and I learned a lot making it work.

Now I’d like to actually USE it, y’know?
So… what’s next??

🐺🔥⚗️

Well… what felt most sensible to me was to start by plumbing s3cmd up to work with it.

Pre-requisites

  • A functional RadosGW setup (if you still need an S3 user and keys, see the sketch after this list).
    … ✅ Check
  • The desire to use s3cmd
    … ✅ Check
  • s3cmd installed
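
If you’re missing the keys for that first prerequisite, you can mint a user on the RadosGW side. This is a minimal sketch, assuming you can run radosgw-admin on a cluster node with admin credentials; the uid and display name here are placeholders:

Creating a RadosGW S3 user
radosgw-admin user create --uid="s3test" --display-name="s3cmd test user"

The JSON that comes back includes an access_key and secret_key pair under "keys" — those are what we’ll feed to s3cmd below.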

Okay, great! Let’s start by installing s3cmd!

Installing s3cmd

Ubuntu

Installing s3cmd - Ubuntu
apt-get update
apt-get install s3cmd

MacOS

Installing s3cmd - MacOS
brew install s3cmd

Somewhat anticlimactic, huh?
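
Whichever route you took, a quick sanity check confirms s3cmd landed on your PATH (your version number will almost certainly differ):

Verifying the install
~$ s3cmd --version
s3cmd version 2.3.0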


Configuring s3cmd

s3cmd --configure -c /tmp/s3test.cfg
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key [key]: key
Secret Key [secret]: secret
Default Region [US]:

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.example.com]: s3.example.com

Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.example.com]: %(bucket)s.s3.example.com

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/local/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]:

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:

New settings:
  Access Key: key
  Secret Key: secret
  Default Region: US
  S3 Endpoint: s3.example.com
  DNS-style bucket+hostname:port template for accessing a bucket: %(bucket)s.s3.example.com
  Encryption password:
  Path to GPG program: /usr/local/bin/gpg
  Use HTTPS protocol: True
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] n

Save settings? [y/N] y
Configuration saved to '/tmp/s3test.cfg'
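
One note before we look at the result: we saved to /tmp/s3test.cfg, so every command needs -c /tmp/s3test.cfg until the config is promoted to ~/.s3cfg, the default location s3cmd reads when no -c flag is given:

Promoting the test config
# test against the throwaway config first
s3cmd -c /tmp/s3test.cfg ls
# happy? move it to the default location
cp /tmp/s3test.cfg ~/.s3cfg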

This is the config that gets generated:

Only a handful of these lines need to change; they’re called out after the example output below:

# cat /tmp/s3test.cfg
[default]
access_key = key
access_token =
add_encoding_exts =
add_headers =
bucket_location = US
ca_certs_file =
cache_file =
check_ssl_certificate = True
check_ssl_hostname = True
cloudfront_host = cloudfront.amazonaws.com
connection_max_age = 5
connection_pooling = True
content_disposition =
content_type =
default_mime_type = binary/octet-stream
delay_updates = False
delete_after = False
delete_after_fetch = False
delete_removed = False
dry_run = False
enable_multipart = True
encoding = UTF-8
encrypt = False
expiry_date =
expiry_days =
expiry_prefix =
follow_symlinks = False
force = False
get_continue = False
gpg_command = /usr/local/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase =
guess_mime_type = True
host_base = s3.example.com
host_bucket = %(bucket)s.s3.example.com
human_readable_sizes = False
invalidate_default_index_on_cf = False
invalidate_default_index_root_on_cf = True
invalidate_on_cf = False
kms_key =
limit = -1
limitrate = 0
list_allow_unordered = False
list_md5 = False
log_target_prefix =
long_listing = False
max_delete = -1
mime_type =
multipart_chunk_size_mb = 15
multipart_copy_chunk_size_mb = 1024
multipart_max_chunks = 10000
preserve_attrs = True
progress_meter = True
proxy_host =
proxy_port = 0
public_url_use_https = False
put_continue = False
recursive = False
recv_chunk = 65536
reduced_redundancy = False
requester_pays = False
restore_days = 1
restore_priority = Standard
secret_key = secret
send_chunk = 65536
server_side_encryption = False
signature_v2 = False
signurl_use_https = False
simpledb_host = sdb.amazonaws.com
skip_existing = False
socket_timeout = 300
ssl_client_cert_file =
ssl_client_key_file =
stats = False
stop_on_error = False
storage_class =
throttle_max = 100
upload_id =
urlencoding_mode = normal
use_http_expect = False
use_https = True
use_mime_magic = True
verbosity = WARNING
website_endpoint = http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
website_error =
website_index = index.html

Obviously there are a couple of changes to make to point s3cmd in the right direction.

Namely:

  • access_key
  • secret_key
  • host_base
  • host_bucket

As a matter of fact, a barebones .s3cfg file need only contain the following:

Minimally functional .s3cfg file

~$ cat ~/.s3cfg
[default]
access_key = key
secret_key = secret
host_base = s3.example.com
host_bucket = %(bucket)s.s3.example.com

Validation

Now that we have it set up, let’s make sure it works!

Cookin with gas!
~$ s3cmd  ls
2023-02-04 23:05  s3://testbucket
2023-02-05 00:01  s3://testbucket2
~$
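
Listing buckets proves authentication works; a quick round trip proves reads and writes do too. Here’s a sketch using a hypothetical bucket name (testbucket3) and a throwaway file:

Round-trip smoke test
~$ echo 'hello from radosgw' > /tmp/hello.txt
~$ s3cmd mb s3://testbucket3
Bucket 's3://testbucket3/' created
~$ s3cmd put /tmp/hello.txt s3://testbucket3/
~$ s3cmd ls s3://testbucket3
~$ s3cmd rb --force s3://testbucket3

mb makes the bucket, put uploads, and rb --force deletes the bucket along with anything left inside it.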

I’ll dive into setting up s3 websites in a future post… Toodles for now!
❤️🐺

🐺🔥⚗️
