LizardFS is a distributed, scalable, fault-tolerant and highly available file system. It allows users to combine disk space located on many servers into a single name space, which is visible on Unix-like and Windows systems in the same way as other file systems. LizardFS keeps files safe by maintaining many replicas of all data, spread over the available servers. It can also be used to build affordable storage, because it runs without any problems on commodity hardware.

Disk and server failures are handled transparently, without downtime or loss of data. If storage requirements grow, an existing LizardFS installation can be scaled simply by adding new servers, at any time and without any downtime. The system automatically moves some data to the newly added servers, because it continuously balances disk usage across all connected nodes. Removing a server is as easy as adding one.

Unique features:

  • support for multiple datacenters and media types,
  • fast snapshots,
  • transparent trash bin,
  • QoS mechanisms,
  • quotas,
  • a comprehensive set of monitoring tools.

Together, these features make LizardFS suitable for a range of enterprise-class applications.
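As a quick sketch of how these features are driven from the client side, assuming a cluster mounted at /mnt/lizardfs (the paths below are illustrative), the standard LizardFS command-line tools look like this:

```shell
# Fast snapshot: creates a lazy, copy-on-write duplicate of a directory tree
lizardfs makesnapshot /mnt/lizardfs/projects /mnt/lizardfs/projects-snap

# Trash bin: keep deleted files recoverable for 7 days (time given in seconds)
lizardfs settrashtime 604800 /mnt/lizardfs/projects

# Replication: check how many copies of a file the cluster maintains
lizardfs getgoal /mnt/lizardfs/projects/report.dat
```

These commands can be run from any machine with the LizardFS client mounted; they require a live cluster, so treat the snippet as a usage sketch rather than a copy-paste script.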

Use cases

Our customers use LizardFS successfully in the following scenarios:

  • archive – with LTO Library/Drive support
  • storage for:
    • virtual machines (as an OpenStack backend or similar)
    • media/CCTV etc
    • render farms
    • backups
  • repositories
  • as a network share drive for Windows™ servers
  • DRC (Disaster Recovery Center)
  • HPC (High Performance Computing)



LizardFS helps you reduce:

  • downtime of your storage to 0
  • time required for maintenance tasks by 90%
  • TCO by up to 50%

Get even more added value:

  • exit vendor lock-in
  • choose components for high performance or low cost/TB
  • scalability of up to 1 Exabyte

Servers instead of an expensive Disk Array


Outdated solution – Disk Arrays

In the field of data storage, the most popular solution to date has been the Disk Array. However, this technology has turned out to be costly, generating large ongoing expenses for maintenance and scaling. Once the storage capacity of a Disk Array was full, the only way to store more data was to buy another costly “shelf” from the same manufacturer. When the possibility of adding further shelves was exhausted, one had to migrate all the data to a larger, usually much more expensive Disk Array, and was then left with an old, now redundant one.

Modern solution – Software Defined Storage LizardFS

LizardFS makes it possible to build storage on multiple servers, regardless of their manufacturer. Storage capacity is increased by adding additional servers with hard drives to the cluster. This ensures readiness for exponential growth of storage while using servers from your preferred vendor, significantly decreasing overall data storage costs.

Faster and safer data recovery – LizardFS vs RAID

If you value data accessibility and security, it is important to consider how long a hard disk recovery takes in case of failure, and what risks are associated with the recovery process.

Outdated solution – RAID groups – hard drive recovery time increases with the growth of storage capacity

RAID groups are a very common solution at present. Unfortunately, the time needed to rebuild a RAID group after a hard disk failure increases with the capacity of the disks. With today's popular 4 TB+ hard drives, the recovery of a RAID6 group can take between 2 and 20 hours. Moreover, there is a risk that another hard drive will fail during this process, which can lead to irreversible loss of the entire data set.
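The quoted 2-to-20-hour window follows from simple arithmetic: rebuild time is just disk capacity divided by rebuild throughput. As an illustration (the throughput figures below are assumed rebuild rates for an idle versus a heavily loaded array, not measurements):

```python
def rebuild_hours(disk_tb: float, throughput_mb_s: float) -> float:
    """Hours needed to rewrite one disk sequentially at a given throughput."""
    disk_mb = disk_tb * 1_000_000  # TB -> MB, decimal units as disks are sold
    return disk_mb / throughput_mb_s / 3600  # seconds -> hours

# A 4 TB disk: an idle array rebuilding at 500 MB/s vs a loaded one at 55 MB/s
print(round(rebuild_hours(4, 500), 1))  # -> 2.2 hours
print(round(rebuild_hours(4, 55), 1))   # -> 20.2 hours
```

Note that the whole failed disk must be rewritten regardless of how full it was, which is why larger disks stretch the rebuild window and the at-risk period with it.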

Modern solution – SDS LizardFS – hard drive recovery time decreases with the growth of storage capacity

With a distributed Software Defined Storage solution such as LizardFS, this process takes a completely different course. Recovery from a damaged or failed hard drive is done by replicating data from the other hard drives, since the system always keeps a previously set number of copies of the same data in different physical locations. Thanks to this distribution of data, each individual hard drive's workload is lightened during the recovery process, which makes the whole operation quicker and safer; thus, as storage capacity grows, hard drive recovery time decreases.
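This scaling effect can be sketched with a toy model (the 50 MB/s per-server background replication rate is an assumption chosen for illustration, not a LizardFS figure): the surviving servers re-replicate the lost chunks in parallel, so adding servers shortens recovery rather than lengthening it:

```python
def recovery_hours(lost_tb: float, servers: int, per_server_mb_s: float = 50) -> float:
    """Toy model: the surviving servers re-replicate lost chunks in parallel,
    each contributing a modest background replication rate."""
    total_mb_s = (servers - 1) * per_server_mb_s  # aggregate recovery bandwidth
    return lost_tb * 1_000_000 / total_mb_s / 3600

# The same 4 TB of lost data, recovered by clusters of growing size:
for n in (5, 11, 21):
    print(n, "servers:", round(recovery_hours(4.0, n), 2), "hours")
```

Under these assumptions a 5-node cluster needs roughly 5.6 hours while a 21-node cluster needs about 1.1, the opposite trend to a RAID rebuild, where one spare disk is the bottleneck.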





Distribution

Distribution means that all data is spread among multiple chunkservers. You can easily add or remove drives, or whole storage nodes. It is as simple as “plug and scale”.


Replication and Erasure Coding

In addition to plain replica goals (goal replication), LizardFS offers Erasure Coding, which saves disk space and allows parallel writes and reads to multiple chunkservers for increased performance.
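As a sketch of how this is configured, assuming LizardFS 3.x, where goals are defined server-side in mfsgoals.cfg (the goal names below are illustrative):

```
# /etc/mfs/mfsgoals.cfg on the master - "goal_id  name : definition"
1 1    : _           # one copy (no redundancy)
2 2    : _ _         # two copies, each on a different chunkserver
8 ec32 : $ec(3,2)    # erasure coding: 3 data parts + 2 parity parts
```

A goal is then applied per file or directory from a client, e.g. `lizardfs setgoal ec32 /mnt/lizardfs/archive`. With ec(3,2), any 2 of the 5 parts can be lost without losing data, at roughly 1.67x storage overhead instead of the 3x that three full copies would cost.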


Scalability

You can easily scale your LizardFS cluster up and down, both vertically and horizontally, by a single drive or a single node. You can have thousands of chunkservers and up to 1 Exabyte of data.
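In practice, scaling out is a per-node configuration step; a sketch assuming a Debian-style install (paths and service names may differ by distribution):

```
# /etc/mfs/mfshdd.cfg on the new chunkserver - one backing directory per disk:
/mnt/disk1
/mnt/disk2
```

After the chunkserver service is started on the node, the master registers the new capacity and begins rebalancing chunks onto it automatically. Removing capacity is the reverse: mark a disk for removal (in LizardFS this is done by prefixing its mfshdd.cfg entry with `*`) and let replication drain it before unplugging.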

Ease of Installation and Administration

If you have a good knowledge of Linux/UNIX systems, you shouldn't need more than 2 hours to get your cluster up and running (our current record is 28 minutes; let us know if you beat it).
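As a minimal sketch of a first install on Debian/Ubuntu (assuming the LizardFS packages are available in your configured repositories; package names follow the upstream Debian packaging):

```shell
# On the master node:
apt-get install lizardfs-master

# On each storage node (list its disks in /etc/mfs/mfshdd.cfg first):
apt-get install lizardfs-chunkserver

# On clients - mount the file system like any other:
apt-get install lizardfs-client
mfsmount -H mfsmaster /mnt/lizardfs
```

This assumes the master is reachable under the hostname `mfsmaster` (the traditional default); substitute your master's address, and consult the per-daemon configuration files under /etc/mfs before starting services in production.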


