Provided by: s3fs_1.79+git90-g8f11507-2_amd64


       S3FS - FUSE-based file system backed by Amazon S3


       s3fs bucket[:/path] mountpoint  [options]

       umount mountpoint

   utility mode (remove interrupted multipart upload objects)
       s3fs -u bucket


       s3fs  is  a  FUSE  filesystem  that  allows  you  to  mount an Amazon S3 bucket as a local
       filesystem. It stores files natively and transparently in S3  (i.e.,  you  can  use  other
       programs to access the same files).


       The s3fs password file has this format (use this format if you have only  one  set  of
       credentials):
            accessKeyId:secretAccessKey

       If you have more than one set of credentials, this syntax is also recognized:
            bucketName:accessKeyId:secretAccessKey

       Password files can be stored in two locations:
            /etc/passwd-s3fs     [0640]
            $HOME/.passwd-s3fs   [0600]
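       As a sketch, a per-user password file can be created as follows. The key values shown are
       dummy placeholders, not real credentials; substitute your own:

```shell
# Create a per-user s3fs password file.  The accessKeyId:secretAccessKey
# values here are dummy placeholders -- substitute your own credentials.
PASSWD_FILE="$HOME/.passwd-s3fs"
printf 'AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY\n' > "$PASSWD_FILE"
chmod 600 "$PASSWD_FILE"   # s3fs rejects password files readable by group/other
```

       The per-bucket form simply prefixes the bucket name on its own line, e.g.
       mybucket:accessKeyId:secretAccessKey.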


   general options
        -h   --help
               print help

             --version
               print version

       -f     FUSE foreground option - do not run as daemon.

       -s     FUSE singlethreaded option (disables multi-threaded operation)

   mount options
        All s3fs options must be given in the form "-o opt", where "opt" is:

               <option_name>=<option_value>

        -o default_acl (default="private")
               the default canned ACL to apply to all written S3 objects, e.g., "public-read".
               Any created or updated file will have this canned ACL applied.

       -o retries (default="2")
              number of times to retry a failed S3 transaction.

       -o use_cache (default="" which means disabled)
              local folder to use for local file cache.

       -o del_cache - delete local file cache
              delete local file cache when s3fs starts and exits.

        -o storage_class (default is standard)
               store objects with the specified storage class.  This option replaces  the  old
               use_rrs option.  Possible values: standard, standard_ia, and reduced_redundancy.

        -o use_rrs (default is disable)
               use Amazon's Reduced Redundancy Storage.  This option cannot be combined  with
               use_sse.  (use_rrs=1 is accepted for compatibility with old versions.)   This
               option has been replaced by the new storage_class option.

        -o use_sse (default is disable)
               Specify one of three types of Amazon Server-Side Encryption:  SSE-S3,  SSE-C  or
               SSE-KMS.  SSE-S3 uses Amazon S3-managed encryption keys, SSE-C uses  customer-
               provided encryption keys, and SSE-KMS uses a master key which you manage in AWS
               KMS.  Specifying "use_sse" or "use_sse=1" enables SSE-S3 (use_sse=1 is the old
               form of the parameter).  For SSE-C, specify  "use_sse=custom",  "use_sse=custom:
               <custom key file path>" or "use_sse=<custom key file path>" (a  bare  <custom
               key file path> is the old form of the parameter).  "c" can be used as  a  short
               form of "custom".  The custom key file must have 600  permissions.   The  file
               may contain several lines; each line is one SSE-C key.  The first line  in  the
               file is used as the customer-provided encryption key for uploading and changing
               headers, etc.  Any keys after the first line are used for  downloading  objects
               that were encrypted with a key other than the first one, so you can  keep  your
               whole SSE-C key history in the file.  If you specify "custom" ("c") without  a
               file path, you must supply the keys via  the  load_sse_c  option  or  the
               AWSSSECKEYS environment variable (AWSSSECKEYS holds SSE-C keys  separated  by
               ":").  This option determines the SSE type, so if you do not want  to  encrypt
               objects on upload but still need to decrypt encrypted objects on download,  use
               the load_sse_c option instead of this option.  For SSE-KMS,  specify  "use_sse=
               kmsid" or "use_sse=kmsid:<kms id>".  "k" can be used as  a  short  form  of
               "kmsid".  If you want to use SSE-KMS with your own <kms id> in  AWS  KMS,  set
               it after "kmsid:" (or "k:").  If you specify only "kmsid" ("k"), you must  set
               the AWSSSEKMSID environment variable to your <kms id>.  Be careful:  you  can
               not use a KMS id from a region other than your EC2 region.
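       As a sketch of SSE-C key-file handling (the file path is arbitrary and openssl is assumed
       to be available; the mount line is shown for reference only):

```shell
# Build an SSE-C key file: one base64-encoded 256-bit key per line, newest
# key first.  The first line encrypts uploads; later lines only decrypt.
KEYFILE="$HOME/.s3fs-sse-c.keys"
openssl rand -base64 32 > "$KEYFILE"
chmod 600 "$KEYFILE"   # s3fs requires 600 permissions on this file
# Mounting with the key file requires s3fs and valid credentials, e.g.:
#   s3fs mybucket /mnt/s3 -o use_sse=custom:"$KEYFILE"
```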

        -o load_sse_c - specify SSE-C keys
               Specify the path of a customer-provided encryption key file used for decrypting
               on download.  If you also use a customer-provided encryption key when uploading,
               specify it with "use_sse=custom".  The file may contain several lines; each line
               is one custom key, so you can keep your whole SSE-C key history in the file.
               The AWSSSECKEYS environment variable holds the same content as this file.

        -o passwd_file (default="")
               specify the path to the password file, which takes precedence over the passwords
               in $HOME/.passwd-s3fs and /etc/passwd-s3fs

        -o ahbe_conf (default="" which means disabled)
               This option specifies the path of a configuration file  that  defines  additional
               HTTP headers by file (object) extension.
                The configuration file format is:
                line         = [file suffix] HTTP-header [HTTP-values]
                file suffix  = file (object) suffix; if this field is empty, it means "*"  (all
                               objects)
                HTTP-header  = additional HTTP header name
                HTTP-values  = additional HTTP header value
                For example:
                .gz      Content-Encoding     gzip
                .Z       Content-Encoding     compress
                         X-S3FS-MYHTTPHEAD    myvalue
                A sample configuration file is provided in the "test" directory.  If  you  use
               this option to set the "Content-Encoding" HTTP header, please take care to
               follow RFC 2616.
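       The format above can be sketched as a small configuration file (the path /tmp/ahbe.conf
       is arbitrary; the mount line is shown for reference only):

```shell
# Write a minimal additional-header configuration matching the sample above.
cat > /tmp/ahbe.conf <<'EOF'
.gz      Content-Encoding     gzip
.Z       Content-Encoding     compress
         X-S3FS-MYHTTPHEAD    myvalue
EOF
# Used at mount time with: s3fs mybucket /mnt/s3 -o ahbe_conf=/tmp/ahbe.conf
```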

        -o public_bucket (default="" which means disabled)
               anonymously mount a public bucket when set to 1; this ignores the
               $HOME/.passwd-s3fs and /etc/passwd-s3fs files.

       -o connect_timeout (default="300" seconds)
              time to wait for connection before giving up.

       -o readwrite_timeout (default="60" seconds)
              time to wait between read/write activity before giving up.

       -o max_stat_cache_size (default="1000" entries (about 4MB))
              maximum number of entries in the stat cache

        -o stat_cache_expire (default is no expire)
               specify an expiry time (seconds) for entries in the stat cache

        -o enable_noobj_cache (default is disable)
               enable cache entries for objects that do not exist.  s3fs always has  to  check
               whether a file (or sub directory) exists under an object (path) when  executing
               a command, because s3fs recognizes directories that do not exist themselves but
               have files or sub directories under them.  This increases ListBucket  requests
               and hurts performance.  With this option, s3fs remembers in the stat cache that
               an object (file or directory) does not exist, which improves performance.

       -o no_check_certificate (by default this option is disabled)
              do not check ssl certificate.  server certificate  won't  be  checked  against  the
              available certificate authorities.

        -o nodnscache - disable dns cache.
               s3fs always uses the dns cache; this option disables it.

        -o nosscache - disable ssl session cache.
               s3fs always uses the ssl session cache; this option disables it.

        -o multireq_max (default="20")
               maximum number of parallel requests for listing objects.

        -o parallel_count (default="5")
               number of parallel requests for uploading big objects.   s3fs  uploads  large
               objects (over 20MB by default) via multipart post requests, sending  several
               requests in parallel.  This option limits the number of parallel requests s3fs
               issues at once.  Set this value according to your CPU and network  bandwidth.
               This option is related to the fd_page_size option and affects it.

        -o fd_page_size (default="52428800"(50MB))
               size of the internal management pages for each file descriptor.   For  delayed
               reading and writing, s3fs manages pages that are separated from the  object.
               Each page has a status indicating whether its data has been loaded yet.   This
               option should not be changed unless you have a performance problem.  This value
               is adjusted automatically from the parallel_count and multipart_size  values
               (fd_page_size = parallel_count * multipart_size).
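       The relationship between the defaults can be checked with simple shell arithmetic:

```shell
# fd_page_size defaults to parallel_count * multipart_size:
parallel_count=5
multipart_size=$((10 * 1024 * 1024))        # 10MB in bytes
echo $((parallel_count * multipart_size))   # prints 52428800 (the 50MB default)
```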

        -o multipart_size (default="10"(10MB))
               part size, in MB, for multipart upload requests.   The  default  size  is  10MB
               (10485760 bytes), which is also the minimum; specify a number of MB of  10  or
               more.  This option is related to the fd_page_size option and affects it.

        -o url (default="")
               sets the url to use to access Amazon S3. If you want to use HTTPS, then you can
               set url=https://s3.amazonaws.com.

        -o endpoint (default="us-east-1")
               sets the endpoint to use.  If this option is not specified, s3fs uses  the  "us-
               east-1" region as the default.  If s3fs cannot connect to the region  specified
               by this option, it fails to run.  But if you do not specify this option and can
               not connect with the default region, s3fs automatically retries  against  the
               other regions: s3fs can learn the correct region name because the  S3  server
               reports it in its error response.

        -o sigv2 (default is signature version 4)
               sets signing of AWS requests to use Signature Version 2.

        -o mp_umask (default is "0000")
               sets the umask for the mount point directory.  If the allow_other option is not
               set, s3fs allows access to the mount point only to the owner;  with  allow_other
               it allows access to all users by default.  If you set  allow_other  together
               with this option, you can control the permissions of the mount point with  this
               option, like umask.

       -o nomultipart - disable multipart uploads

        -o enable_content_md5 ( default is disable )
               verify non-multipart uploads with the Content-MD5 header.   When  enabled,  s3fs
               sends a "Content-MD5" header when uploading an object without multipart posting.
               This has some impact on s3fs performance when uploading small objects.  Because
               s3fs always checks MD5 when uploading large objects, this option does not affect
               large objects.

       -o iam_role ( default is no role )
              set the IAM Role that will supply the credentials from the instance meta-data.

        -o noxmlns - disable registering the xml name space.
               disable registering the xml name space for responses such  as  ListBucketResult
               and  ListVersionsResult.   The   default   name   space   is   looked   up   from
               "http://s3.amazonaws.com/doc/2006-03-01/".  This option should not normally be
               specified, because s3fs looks up the xmlns automatically since v1.66.

        -o nocopyapi - for other incompletely compatible object storage.
               For distributed object storage that is compatible with the  S3  API  but  lacks
               PUT with copy (the copy api).  If you set this option, s3fs does  not  use  PUT
               with "x-amz-copy-source" (the copy api).  Because traffic is increased 2-3 times
               by this option, we do not recommend it.

        -o norenameapi - for other incompletely compatible object storage.
               For distributed object storage that is compatible with the  S3  API  but  lacks
               PUT with copy (the copy api).  This option is a subset of the nocopyapi option:
               nocopyapi avoids the copy api for all commands (e.g. chmod, chown,  touch,  mv,
               etc.), while this option avoids it only for the rename command (e.g. mv).   If
               this option is specified together with nocopyapi, s3fs ignores it.

        -o use_path_request_style (use legacy API calling style)
               Enable compatibility with S3-like APIs which do not support  the  virtual-host
               request style, by using the older path request style.

        -o dbglevel (default="crit")
               Set  the  debug  message  level.   Set  the  value  to crit (critical), err
               (error), warn (warning), or info (information).  The default debug  level  is
               critical.  If s3fs is run with the "-d" option, the debug  level  is  set  to
               information.  When s3fs catches the signal SIGUSR2, the debug level is bumped
               up.

        -o curldbg - put curl debug messages
               Output debug messages from libcurl when this option is specified.


        Most of the generic mount options described in 'man mount' are supported  (ro,  rw,  suid,
        nosuid, dev, nodev, exec, noexec, atime, noatime, sync, async, dirsync).  Filesystems are
        mounted with '-onodev,nosuid' by default, which can only be  overridden  by  a  privileged
        user.
       There  are  many  FUSE specific mount options that can be specified. e.g. allow_other. See
       the FUSE README for the full set.


       Maximum file size=64GB (limited by s3fs, not Amazon).

       If enabled via the "use_cache" option, s3fs automatically maintains a local cache of files
       in  the  folder specified by use_cache. Whenever s3fs needs to read or write a file on S3,
       it first downloads the entire file locally  to  the  folder  specified  by  use_cache  and
       operates  on  it.  When fuse_release() is called, s3fs will re-upload the file to S3 if it
       has been changed. s3fs uses md5 checksums to minimize downloads from S3.

       The folder specified by use_cache is just a local cache. It can be deleted  at  any  time.
       s3fs rebuilds it on demand.
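       A minimal sketch of enabling the cache (the directory path is arbitrary; the mount line
       itself needs s3fs and valid credentials, so it is shown for reference only):

```shell
# Prepare a local cache directory for use_cache.  It is safe to delete at
# any time; s3fs rebuilds it on demand.
mkdir -p /tmp/s3fs-cache
#   s3fs mybucket /mnt/s3 -o use_cache=/tmp/s3fs-cache
```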

       Local file caching works by calculating and comparing md5 checksums (ETag HTTP header).

       s3fs  leverages  /etc/mime.types  to "guess" the "correct" content-type based on file name
       extension. This means that you can copy a website to S3 and serve it up directly  from  S3
       with correct content-types!


       Due  to  S3's  "eventual consistency" limitations, file creation can and will occasionally
       fail. Even after a successful create, subsequent reads can fail for an indeterminate time,
       even  after  one  or  more  successful  reads.  Create  and read enough files and you will
       eventually encounter this failure. This is not a flaw in s3fs and it is  not  something  a
       FUSE  wrapper  like  s3fs can work around. The retries option does not address this issue.
       Your application must either tolerate or compensate for these  failures,  for  example  by
       retrying creates or reads.


       s3fs has been written by Randy Rizun <>.