Cleaning
Background
Cleaning is a table service employed by Hudi to reclaim space occupied by older versions of data and keep storage costs in check. Apache Hudi provides snapshot isolation between writers and readers by managing multiple versioned files with MVCC concurrency. These file versions provide history and enable time travel and rollbacks, but it is important to manage how much history you keep to balance your costs. The cleaning service plays a crucial role in managing the tradeoff between retaining a long history of data and the associated storage costs.
Hudi enables automatic cleaning by default: the cleaner is invoked immediately after each commit to delete older file slices. It's recommended to leave this enabled to ensure metadata and data storage growth is bounded. The cleaner can also be scheduled after every few commits instead of after every commit by configuring hoodie.clean.max.commits.
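For example, these cleaning settings can be passed as write options to the Spark datasource writer. A minimal sketch, using the config keys named above (the options dict and the commented-out writer call are illustrative; the writer boilerplate and table path are assumptions, not shown):

```python
# Cleaning-related Hudi write options, using the configs described above.
clean_opts = {
    "hoodie.clean.automatic": "true",  # automatic cleaning (the default)
    "hoodie.clean.max.commits": "4",   # schedule cleaning every 4 commits instead of every commit
}

# These options would be merged into the usual Hudi write, e.g.:
# df.write.format("hudi").options(**clean_opts).mode("append").save(base_path)
```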
Cleaning Retention Policies
When cleaning old files, you should be careful not to remove files that are being actively used by long running queries.
For Spark based engines, the cleaning policy is controlled by the following config:
| Config Name | Default | Description |
|---|---|---|
| hoodie.clean.policy | KEEP_LATEST_COMMITS | org.apache.hudi.common.model.HoodieCleaningPolicy: Cleaning policy to be used. Config Param: CLEANER_POLICY |
The corresponding config for Flink based engine is clean.policy.
Hudi cleaner currently supports the below cleaning policies to keep a certain number of commits or file versions:
- KEEP_LATEST_COMMITS: This is the default policy. It is a temporal cleaning policy that retains the file versions touched by the last X commits, so readers can look back at all the changes made within that window. Suppose a writer is ingesting data into a Hudi dataset every 30 minutes and the longest running query takes 5 hours to finish; then the user should retain at least the last 10 commits. With such a configuration, the oldest version of a file is kept on disk for at least 5 hours, thereby preventing the longest running query from failing at any point in time. Incremental cleaning is also possible using this policy. The number of commits to retain can be configured by hoodie.clean.commits.retained. The corresponding Flink related config is clean.retain_commits.
- KEEP_LATEST_FILE_VERSIONS: This policy keeps the latest N versions of each file, irrespective of time. It is useful when you know the maximum number of file versions you want to keep at any given time. To achieve the same protection for long running queries as the temporal policy, you would need to size N based on your data and write patterns. Alternatively, this policy is also useful if a user just wants to maintain a single latest version of each file. The number of file versions to retain can be configured by hoodie.clean.fileversions.retained. The corresponding Flink related config is clean.retain_file_versions.
- KEEP_LATEST_BY_HOURS: This policy cleans up based on hours. It is simple and useful when you know how long, in wall-clock time, you want to keep file versions around. File versions belonging to commits with commit times older than the configured number of hours are cleaned. The number of hours to retain can be configured by hoodie.clean.hours.retained. The corresponding Flink related config is clean.retain_hours.
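The KEEP_LATEST_COMMITS sizing described above can be sketched as a back-of-envelope calculation. The helper below is purely illustrative (the function name and signature are assumptions, not part of Hudi); it reproduces the example of a commit every 30 minutes and a longest running query of 5 hours:

```python
import math

def commits_to_retain(commit_interval_minutes: int, longest_query_hours: float) -> int:
    """How many commits to retain so the oldest file version outlives the
    longest running query. Illustrative helper, not a Hudi API."""
    return math.ceil(longest_query_hours * 60 / commit_interval_minutes)

# A commit every 30 minutes, longest query runs 5 hours:
# set hoodie.clean.commits.retained to at least this value.
retained = commits_to_retain(30, 5)  # -> 10
```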