I've been playing with the minio project the past couple of nights to see what it offers. I repurposed my older NAS, which is 12 x 4TB of JBOD, a Xeon E3-1270 v3, and 32GB RAM. After a few hours of updating firmware and the BIOS, along with a fresh install of Ubuntu 20.04, the system was working very well. I have 10Gb NICs in both this NAS and my other ZFS NAS via direct connect.
The setup and configuration are mostly time-consuming due to the basic sysadmin tasks of partitioning and formatting each of the drives with xfs and setting up fstab to mount them. You can use any Linux filesystem; xfs was one recommendation offered for minio deployments, so I tried it. Most of this can be scripted, and it's all common Linux admin work, so nothing specifically noteworthy. I created a common minio directory with a specific data-drive directory for each HDD, then changed the ownership to my user account so minio could access them.
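Most of that drive prep scripts easily. A minimal sketch, assuming the twelve JBOD drives show up as /dev/sdb through /dev/sdm (verify with lsblk first); this version only prints the fstab entries for review, and you'd run mkfs.xfs per device and mount -a after appending them:

```shell
#!/bin/sh
# Sketch only: device names /dev/sdb../dev/sdm are assumptions; verify with lsblk.
# For each drive you would first run: mkfs.xfs -f /dev/sdX
# fstab_lines prints one /etc/fstab entry per drive so you can review before applying.
fstab_lines() {
    i=1
    for dev in sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm; do
        printf '/dev/%s  /mnt/minio/data%s  xfs  defaults,noatime  0  2\n' "$dev" "$i"
        i=$((i + 1))
    done
}

fstab_lines
```

After mounting, a single `chown -R youruser: /mnt/minio` covers the ownership change for all twelve data directories.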
The minio install was a breeze and was done via curl. The only complicated piece was setting up my own systemd service to make it start up on reboots, but even that was easy enough. When starting the minio process you pass in the mounted paths of all the drives you want this service to use. Since the data drives are common filesystems, you can see how minio writes to each data drive with its own directories and files; hypothetically you could store other local files in there if you wanted.
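A minimal sketch of such a systemd unit, with the user, credentials, binary path, and mount points all placeholders to adjust; minio's `{1...12}` ellipsis syntax expands to all twelve data paths so you don't have to list them individually:

```ini
# /etc/systemd/system/minio.service -- sketch; user, credentials, and paths are assumptions
[Unit]
Description=MinIO object storage
After=network-online.target
Wants=network-online.target

[Service]
User=minio-user
Group=minio-user
Environment="MINIO_ROOT_USER=admin" "MINIO_ROOT_PASSWORD=changeme"
# The {1...12} ellipsis expands to /mnt/minio/data1 .. /mnt/minio/data12
ExecStart=/usr/local/bin/minio server /mnt/minio/data{1...12}
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then the usual `systemctl daemon-reload` and `systemctl enable --now minio` makes it survive reboots.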
The minio web interface is pretty basic and straightforward. There aren't many admin operations you can perform from it, as those are focused on the mc command-line utility. You define and browse buckets of object storage through basic GUI buttons; the real details are in mc. Setting up the mc utility is also very easy. You would install it on any remote system you wish to manage files to/from the minio object storage, and the command-line usage for copying files feels much like familiar tools such as cp and ls.
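A sketch of that workflow, where the alias name `mynas`, the endpoint address, the credentials, and the bucket/file names are all assumptions for illustration:

```shell
# Register the server under an alias (hypothetical endpoint and credentials)
mc alias set mynas http://192.168.1.50:9000 admin changeme

mc mb mynas/photos                            # create a bucket
mc cp ~/Pictures/img001.jpg mynas/photos/     # copy a local file into it
mc ls mynas/photos                            # list bucket contents
mc cp --recursive mynas/photos/ ./restore/    # pull everything back out
```

Once the alias is set, remote buckets behave like just another path argument to mc's cp/ls/mirror-style subcommands.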
As a further test, I configured rclone on my ZFS NAS in order to use its sync features. Since minio is an S3-compatible object store, configuring rclone was pretty basic. One change I made when syncing files was adding --s3-chunk-size 250M --s3-upload-concurrency 4 --s3-disable-checksum, which increased the throughput quite a bit. Most backups copy at 330-370MB/sec in total bandwidth over my 10Gb link.
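For reference, a sketch of the rclone remote definition; the endpoint and credentials here are placeholders for your own deployment:

```ini
# ~/.config/rclone/rclone.conf -- endpoint and credentials are assumptions
[minio]
type = s3
provider = Minio
access_key_id = admin
secret_access_key = changeme
endpoint = http://192.168.1.50:9000
```

A sync then looks something like `rclone sync /tank/photos minio:photos --s3-chunk-size 250M --s3-upload-concurrency 4 --s3-disable-checksum`, with the larger chunk size and higher upload concurrency doing most of the work to fill the 10Gb pipe.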
Some pros are that it's really easy to browse my collections and create different buckets to play around with. It all looks like local files when using the mc command-line utility or even basic browsing via the GUI. Once I dig into the nuances of the features, I see options for bucket versioning and for changing parity levels to reduce the overhead of protecting my data.
Another fun benefit is browsing the buckets from my mobile device's web browser. I can very easily upload videos/pics to a bucket or even create a new one. It's also very easy to create a shareable URL, with an expiration, to send to someone.
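Those expiring links can also be minted from the command line with mc's share subcommand; the alias and object path below are assumptions carried over from a hypothetical `mynas` setup:

```shell
# Presigned download URL valid for 24 hours (alias and path are placeholders)
mc share download --expire 24h mynas/photos/vacation.mp4
```

The printed URL works for anyone who has it until the expiry passes, which is handy for one-off sharing without creating accounts.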
Some cons are that it isn't entirely clear what the remaining capacity is in a given system. Understandably, minio is designed to scale by deploying new instances on the same or other physical hardware, and it can even distribute data and parity among multiple physical systems if desired. Any HDD failures need to be caught with Linux utilities like smartmontools so you know when you need to repair or replace something.
I haven't yet tried popping out an HDD to see how it reacts, or what happens if/when I need to replace a drive and how it heals the parity (minio's equivalent of a resilver).
One other thing that would make this easier is setting up proper DNS with an actual domain name in my home lab. Then I could add TLS via Let's Encrypt and start addressing buckets by hostname instead of by IP address.