S3sync is a free project written mainly in Ruby and shell.
Home page, wiki, forum, bug reports, etc: http://s3sync.net
This is a ruby program that easily transfers directories between a local directory and an S3 bucket:prefix. It behaves somewhat, but not precisely, like the rsync program. In particular, it shares rsync's peculiar behavior that trailing slashes on the source side are meaningful. See examples below.
One benefit over some other comparable tools is that s3sync goes out of its way to mirror the directory structure on S3. This means you don't need s3sync later in order to view your files on S3; you can just as easily use an S3 shell, a web browser (if you used the --public-read option), and so on. Note that s3sync is NOT necessarily going to be able to read files you uploaded via some other tool, including things uploaded with the old perl version! For best results, start fresh!
s3sync runs happily on Linux and probably other *ix systems, and also on Windows (except that the symlink and permissions management features don't do anything there). If you get it running somewhere interesting, let me know (see below).
s3sync is free, and license terms are included in all the source files. If you decide to make it better, or find bugs, please let me know.
The original inspiration for this tool is the perl script by the same name which was made by Thorsten von Eicken (and later updated by me). This ruby program does not share any components or logic from that utility; the only relation is that it performs a similar task.
Examples (using S3 bucket 'mybucket' and prefix 'pre'):

Put the local etc directory itself into S3:
   s3sync.rb -r /etc mybucket:pre
   (This will yield S3 keys named pre/etc/...)
Put the contents of the local /etc dir into S3, renaming the dir:
   s3sync.rb -r /etc/ mybucket:pre/etcbackup
   (This will yield S3 keys named pre/etcbackup/...)
Put the contents of the S3 "directory" etc into a local dir:
   s3sync.rb -r mybucket:pre/etc/ /root/etcrestore
   (This will yield local files at /root/etcrestore/...)
Put the contents of the S3 "directory" etc into a local dir named etc:
   s3sync.rb -r mybucket:pre/etc /root
   (This will yield local files at /root/etc/...)
Put S3 nodes under the key pre/etc/ into the local dir etcrestore, creating local dirs even if the S3 side lacks dir nodes:
   s3sync.rb -r --make-dirs mybucket:pre/etc/ /root/etcrestore
   (This will yield local files at /root/etcrestore/...)
You need a functioning Ruby (>=1.8.4) installation, as well as the OpenSSL ruby library (which may or may not come with your ruby).
How you get these items working on your system is really not any of my business, but you might find the following things helpful. If you're using Windows, the ruby site has a useful "one click installer" (although it takes more clicks than that, really). On debian (and ubuntu, and other debian-like things), there are apt packages available for both ruby and the OpenSSL library.
s3sync needs to know several interesting values to work right. It looks for them in the following environment variables -or- an s3config.yml file. In the yml case, the names need to be lowercase (see example file). The yml file is searched for in the following locations, in order:
   $S3CONF/s3config.yml
   $HOME/.s3conf/s3config.yml
   /etc/s3conf/s3config.yml
Required:
   AWS_ACCESS_KEY_ID
   AWS_SECRET_ACCESS_KEY
If you don't know what these are, then s3sync is probably not the
right tool for you to be starting out with.
Optional:
   AWS_S3_HOST - I don't see why the default would ever be wrong
   HTTP_PROXY_HOST, HTTP_PROXY_PORT, HTTP_PROXY_USER, HTTP_PROXY_PASSWORD - proxy settings
   SSL_CERT_DIR - where your Certificate Authority keys live; used for verification
   SSL_CERT_FILE - if you have just one PEM file for CA verification
   S3SYNC_RETRIES - how many HTTP errors to tolerate before exiting
   S3SYNC_WAITONERROR - how many seconds to wait after an HTTP error
   S3SYNC_MIME_TYPES_FILE - where your mime.types file lives
   S3SYNC_NATIVE_CHARSET - for example Windows-1252. Defaults to ISO-8859-1.
   AWS_CALLING_FORMAT - defaults to REGULAR
      REGULAR   # http://s3.amazonaws.com/bucket/key
      SUBDOMAIN # http://bucket.s3.amazonaws.com/key
      VANITY    # http://<vanity domain>/key, where the vanity domain resolves to S3
Important: for EU-located buckets you should set the calling format to SUBDOMAIN.
Important: for US buckets whose names contain capital letters or other nonconforming characters, set the calling format to REGULAR.
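For example, to select the subdomain calling format in the shell before a run (a minimal sketch; the bucket and prefix are the ones from the examples above):
   export AWS_CALLING_FORMAT=SUBDOMAIN
   s3sync.rb -r /etc mybucket:pre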
I use "envdir" from the daemontools package to set up my env variables easily: http://cr.yp.to/daemontools/envdir.html For example: envdir /root/s3sync/env /root/s3sync/s3sync.rb -etc etc etc I know there are other similar tools out there as well.
You can also just call it in a shell script where you have exported the vars first, such as:
   export AWS_ACCESS_KEY_ID=valueGoesHere
   ...
   s3sync.rb -etc etc etc
But by far the easiest (and newest) way to set this up is to put the name:value pairs in a file named s3config.yml and let the YAML parser pick them up. A .example file is shipped with the tar.gz to show what a yaml file looks like. Thanks to Alastair Brunton for this addition.
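For illustration, a minimal s3config.yml could look like this (note the lowercase names; the values are placeholders, and any of the optional settings above may be added the same way):
   aws_access_key_id: valueGoesHere
   aws_secret_access_key: valueGoesHere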
You can also use some combination of .yaml and environment variables, if you want. Go nuts.
For low-level S3 operations not encapsulated by the sync paradigm, try the companion utility s3cmd.rb. See README_s3cmd.txt.
s3sync lacks the special case code that would be needed in order to handle a source/dest that's a single file. This isn't one of the supported use cases so don't expect it to work. You can use the companion utility s3cmd.rb for single get/puts.
In S3 there's no actual concept of folders, just keys and nodes. So, every tool uses its own proprietary way of storing dir info (my scheme being the best naturally) and in general the methods are not compatible.
If you populate S3 by some means other than s3sync and then try to use s3sync to "get" the S3 stuff to a local filesystem, you will want to use the --make-dirs option. This causes the local dirs to be created even if there is no s3sync-compatible directory node info stored on the S3 side. In other words, local folders are conjured into existence whenever they are needed to make the "get" succeed.
s3sync's normal operation is to compare the file size and MD5 hash of each item to decide whether it needs syncing. On the S3 side, these hashes are stored and returned to us as the "ETag" of each item when the bucket is listed, so it's very easy. On the local side, the MD5 must be calculated by pushing every byte in the file through the MD5 algorithm. This is CPU and IO intensive!
Thus you can specify the option --no-md5. This will compare the upload time on S3 to the "last modified" time on the local item, and not do md5 calculations locally at all. This might cause more transfers than are absolutely necessary; for example, a file that is "touched" to a newer modified date without its contents changing will be transferred again. Conversely, if a file's contents are modified but the date is not updated, the sync will pass over it. Lastly, if your clock is very different from the one on the S3 servers, you may see unanticipated behavior.
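To make the trade-off concrete, here is a minimal Ruby sketch of the two comparison strategies described above. The method names and arguments are illustrative only, not s3sync's actual internals:

   require 'digest/md5'

   # MD5 mode: push every byte of the local file through MD5 and compare
   # the result to the ETag S3 returned in the bucket listing. Streaming
   # the whole file is what makes this CPU- and IO-intensive.
   def needs_sync_md5?(local_path, remote_etag, remote_size)
     return true if File.size(local_path) != remote_size
     md5 = Digest::MD5.new
     File.open(local_path, 'rb') do |f|
       while chunk = f.read(65536)
         md5.update(chunk)
       end
     end
     md5.hexdigest != remote_etag.delete('"')  # ETags arrive as quoted hex
   end

   # --no-md5 mode: just compare timestamps. Cheap, but fooled by files
   # that are touched without changing, or changed without being touched.
   def needs_sync_mtime?(local_path, remote_upload_time)
     File.mtime(local_path) > remote_upload_time
   end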
On my debian install I didn't find any root authority public keys. I installed some by running this shell archive: http://mirbsd.mirsolutions.de/cvs.cgi/src/etc/ssl.certs.shar (You have to click download, and then run it wherever you want the certs to be placed). I do not in any way assert that these certificates are good, comprehensive, moral, noble, or otherwise correct. But I am using them.
There is a debian package, ca-certificates; this is what I'm using now:
   apt-get install ca-certificates
and then use:
   SSL_CERT_DIR=/etc/ssl/certs
You used to be able to use just one certificate, but recently AWS has started using more than one CA.
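Putting that together, an SSL run could look like the following sketch (the cert dir is the one from the debian package above; bucket and prefix are from the earlier examples):
   export SSL_CERT_DIR=/etc/ssl/certs
   s3sync.rb -r --ssl /etc mybucket:pre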
Invoke by typing s3sync.rb and you should get a nice usage screen. Options can be specified in short or long form (except --delete, which has no short form).
ALWAYS TEST NEW COMMANDS using --dryrun(-n) if you want to see what will be affected before actually doing it. ESPECIALLY if you use --delete. Otherwise, do not be surprised if you misplace a '/' or two and end up deleting all your precious, precious files.
If you use the --public-read(-p) option, items sent to S3 will be ACL'd so that anonymous web users can download them, given the correct URL. This could be useful if you intend to publish directories of information for others to see. For example, I use s3sync to publish itself to its home on S3 via the following command:
   s3sync.rb -v -p publish/ ServEdge_pub:s3sync
where the files live in a local folder called "publish" and I wish them to be copied to the URL:
   http://s3.amazonaws.com/ServEdge_pub/s3sync/...
If you use --ssl(-s) then your connections with S3 will be encrypted. Otherwise your data will be sent in clear form, i.e. easy to intercept by malicious parties.
If you want to prune items from the destination side which are not found on the source side, you can use --delete. Always test this with -n first to make sure the command line you specify is not going to do something terrible to your cherished and irreplaceable data.
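For example, a delete-sync can be rehearsed and then run like this (a sketch reusing the etcbackup example from above):
   s3sync.rb -n -r --delete /etc/ mybucket:pre/etcbackup
   s3sync.rb -r --delete /etc/ mybucket:pre/etcbackup
The first, --dryrun invocation shows what would be transferred or deleted; the second one actually does it.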
The latest version of s3sync should normally be at: http://s3.amazonaws.com/ServEdge_pub/s3sync/s3sync.tar.gz and the Amazon S3 forums probably have a few threads going on it at any given time. I may not always see things posted to the threads, so if you want you can contact me at [email protected] too.
2006-09-29: Added support for --expires and --cache-control, e.g.:
   --expires="Thu, 01 Dec 2007 16:00:00 GMT"
   --cache-control="no-cache"
2007-02-19 Version 1.1.0 WARNING Lots of path-handling changes. PLEASE test safely before you just swap this in for your working 1.0.x version.
2007-06-02 Version 1.1.3 IMPORTANT! Pursuant to http://s3sync.net/forum/index.php?topic=49.0 , the tar.gz now expands into its own sub-directory named "s3sync" instead of dumping all the files into the current directory.
In the case of commands of the form:
   s3sync -r somedir somebucket:
the root directory node in S3 was being stored as "somedir/" instead of "somedir", which caused restores to mess up when you say:
   s3sync -r somebucket: restoredir
The fix to this, by coincidence, actually makes s3fox work even less well with s3sync. I really need to build my own xul+javascript S3 GUI some day.