Storage Policy

note

Storage policies are available in QuestDB Enterprise only.

A storage policy automates the lifecycle of table partitions. It defines when partitions are converted to Parquet, when native data is removed, and when local copies are dropped. This replaces the need for manual partition management or external scheduling.

info

Storage policies currently operate locally only. Parquet files are not automatically uploaded to object storage, and the DROP REMOTE clause is reserved syntax — it is rejected at SQL parse time with 'DROP REMOTE' is not supported yet. Accordingly, the drop_remote column in the storage_policies view is always blank in the current release; it is kept for forward compatibility. Object storage integration will be added in a future release.

Requirements

Storage policies require:

  • QuestDB Enterprise
  • A WAL-enabled table partitioned by time

How it works

A storage policy consists of up to four TTL settings. Each setting controls a stage in the partition lifecycle:

Setting       Description
TO PARQUET    Convert the partition from native binary format to Parquet
DROP NATIVE   Remove native binary files, keeping only the local Parquet copy
DROP LOCAL    Remove all local data (both native and Parquet)
DROP REMOTE   Reserved. Will remove the Parquet file from object storage when remote upload is supported

All settings are optional. Use only the ones relevant to your use case. All TTL values must be positive; 0 is rejected.

Partition lifecycle

As time passes, each partition progresses through the stages defined by the policy:

                   TO PARQUET          DROP NATIVE         DROP LOCAL
[Native] ──────────────┬───────────────────┬───────────────────┬──────
                       │                   │                   │
                       ▼                   ▼                   ▼
               Native + Parquet       Parquet only        Data removed
                   (local)              (local)
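The resulting state of a partition is a pure function of its age and the policy's TTLs. A minimal sketch of that mapping (Python; illustrative only — `lifecycle_stage` and the stage keys are not QuestDB APIs):

```python
from datetime import timedelta

def lifecycle_stage(partition_age: timedelta, policy: dict) -> str:
    """Return the storage state of a partition under a policy.

    `policy` maps stage names to TTLs (timedelta); unset stages are
    simply absent. Stages are checked from most to least aggressive,
    mirroring the ordering constraint
    TO PARQUET <= DROP NATIVE <= DROP LOCAL.
    """
    if "drop_local" in policy and partition_age > policy["drop_local"]:
        return "data removed"
    if "drop_native" in policy and partition_age > policy["drop_native"]:
        return "parquet only (local)"
    if "to_parquet" in policy and partition_age > policy["to_parquet"]:
        return "native + parquet (local)"
    return "native"

# Example policy: TO PARQUET 3d, DROP NATIVE 10d, DROP LOCAL 30d
policy = {
    "to_parquet": timedelta(days=3),
    "drop_native": timedelta(days=10),
    "drop_local": timedelta(days=30),
}
print(lifecycle_stage(timedelta(days=5), policy))   # native + parquet (local)
print(lifecycle_stage(timedelta(days=45), policy))  # data removed
```

Because each stage is checked independently, a partition simply stays in its current state when the next stage's TTL is unset.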

TTL evaluation

Storage policy TTLs follow the same evaluation rules as TTL. A partition becomes eligible for a lifecycle action when its entire time range falls outside the TTL window:

eligible when: partition_end_time < reference_time - TTL

This rule is applied independently for each stage's TTL. A partition can be eligible for TO PARQUET long before it is eligible for DROP NATIVE, DROP LOCAL, or (one day) DROP REMOTE. Each stage uses its own TTL in the formula above; the stages share only the reference time and the ordering constraint TO PARQUET <= DROP NATIVE <= DROP LOCAL <= DROP REMOTE.

The reference time is min(wall_clock_time, latest_timestamp) by default — the same formula used by TTL. The cairo.ttl.use.wall.clock setting applies to storage policies as well: setting it to false removes the wall-clock cap for both TTL and storage policy evaluation. See TTL § Reference time for the rationale and the data-loss hazard of disabling the cap.
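The rule and the reference-time cap can be illustrated with a short sketch (Python; `is_eligible` is a hypothetical helper, not a QuestDB API):

```python
from datetime import datetime, timedelta, timezone

def is_eligible(partition_end: datetime, ttl: timedelta,
                wall_clock: datetime, latest_ts: datetime,
                use_wall_clock: bool = True) -> bool:
    """Eligibility rule: partition_end_time < reference_time - TTL.

    reference_time is min(wall_clock, latest_timestamp) by default;
    with cairo.ttl.use.wall.clock=false the wall-clock cap is removed
    and only the latest ingested timestamp is used.
    """
    reference = min(wall_clock, latest_ts) if use_wall_clock else latest_ts
    return partition_end < reference - ttl

now = datetime(2025, 1, 31, tzinfo=timezone.utc)
latest = datetime(2025, 1, 20, tzinfo=timezone.utc)  # ingestion lags wall clock
part_end = datetime(2025, 1, 10, tzinfo=timezone.utc)

# reference = min(Jan 31, Jan 20) = Jan 20
# Jan 10 < Jan 20 - 3d  -> eligible for a 3-day stage (e.g. TO PARQUET)
print(is_eligible(part_end, timedelta(days=3), now, latest))   # True
# Jan 10 < Jan 20 - 15d is false -> not yet eligible for a 15-day stage
print(is_eligible(part_end, timedelta(days=15), now, latest))  # False
```

The same partition is eligible for one stage and not another, which is exactly how a partition sits in the "Parquet only" state while waiting for its DROP LOCAL TTL.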

QuestDB checks storage policies periodically (every 15 minutes by default) and processes eligible partitions automatically.

Storage policy vs TTL

Storage policies replace TTL in QuestDB Enterprise. If you are already familiar with TTL, this comparison is the quickest way to see what changes:

                     TTL                          Storage Policy
Availability         Open source                  Enterprise only
Action               Drops partitions entirely    Graduated lifecycle (convert, then drop)
Parquet conversion   No                           Yes (automatic local conversion)
Granularity          Single retention window      Up to four independent TTL stages

In QuestDB Enterprise, CREATE TABLE ... TTL and ALTER TABLE SET TTL are deprecated. Use storage policies instead:

-- Instead of:
-- ALTER TABLE trades SET TTL 30 DAYS;

-- Use:
ALTER TABLE trades SET STORAGE POLICY(DROP LOCAL 30d);
note

If a table already has a TTL set, you must clear it with ALTER TABLE SET TTL 0 before setting a storage policy. SET TTL 0 is the only SET TTL value Enterprise accepts; any non-zero value is rejected with TTL settings are deprecated, please, create a storage policy instead.

Setting a storage policy

At table creation

CREATE TABLE trades (
  ts TIMESTAMP,
  symbol SYMBOL,
  price DOUBLE
) TIMESTAMP(ts) PARTITION BY DAY
STORAGE POLICY(TO PARQUET 3d, DROP NATIVE 10d, DROP LOCAL 1M)
WAL;

On existing tables

ALTER TABLE trades SET STORAGE POLICY(
  TO PARQUET 3 DAYS,
  DROP NATIVE 10 DAYS,
  DROP LOCAL 1 MONTH
);

Only the specified settings are changed. Omitted settings remain unchanged.
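In other words, SET STORAGE POLICY merges into the existing policy rather than replacing it. A one-line model of that behavior (Python; illustrative only, not a QuestDB API):

```python
def apply_set_policy(current: dict, new_settings: dict) -> dict:
    """Model of SET STORAGE POLICY merge semantics: stages named in the
    statement overwrite their previous values; omitted stages keep theirs."""
    return {**current, **new_settings}

current = {"to_parquet": "3d", "drop_native": "10d", "drop_local": "1M"}
# ALTER TABLE trades SET STORAGE POLICY(TO PARQUET 1d);
print(apply_set_policy(current, {"to_parquet": "1d"}))
# {'to_parquet': '1d', 'drop_native': '10d', 'drop_local': '1M'}
```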

On materialized views

CREATE MATERIALIZED VIEW hourly_trades AS (
  SELECT ts, symbol, sum(price) total
  FROM trades
  SAMPLE BY 1h
) PARTITION BY DAY
STORAGE POLICY(TO PARQUET 7d, DROP NATIVE 14d);

ALTER MATERIALIZED VIEW hourly_trades SET STORAGE POLICY(TO PARQUET 7d);

For full syntax details, see ALTER TABLE SET STORAGE POLICY.

TTL duration format

Storage policy TTLs accept the same duration formats as TTL:

Unit     Long form              Short form
Hours    1 HOUR / 2 HOURS       1h
Days     1 DAY / 3 DAYS         1d / 3d
Weeks    1 WEEK / 2 WEEKS       1W / 2W
Months   1 MONTH / 6 MONTHS     1M / 6M
Years    1 YEAR / 2 YEARS       1Y / 2Y
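As an illustration of how the shorthand decomposes, here is a hypothetical parser (Python; `parse_ttl` is not a QuestDB API). It also shows why the storage_policies view renders day- and week-based TTLs in hours while months stay in months:

```python
import re

# Hours per fixed-length unit; months and years are calendar units
# and cannot be converted to a fixed number of hours.
HOURS = {"h": 1, "d": 24, "W": 7 * 24}
MONTHS = {"M": 1, "Y": 12}

def parse_ttl(short: str):
    """Parse short-form duration shorthand, e.g. '3d', '2W', '6M'.

    Returns ('hours', n) for hour/day/week units and ('months', n) for
    month/year units. Case matters: uppercase 'M' is months; the
    shorthand has no unit for minutes. 0 is rejected, as in QuestDB.
    """
    m = re.fullmatch(r"(\d+)([hdWMY])", short)
    if not m:
        raise ValueError(f"invalid TTL: {short!r}")
    n, unit = int(m.group(1)), m.group(2)
    if n <= 0:
        raise ValueError("TTL must be positive; 0 is rejected")
    if unit in HOURS:
        return ("hours", n * HOURS[unit])
    return ("months", n * MONTHS[unit])

print(parse_ttl("3d"))   # ('hours', 72)
print(parse_ttl("1W"))   # ('hours', 168)
print(parse_ttl("6M"))   # ('months', 6)
```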

Ordering constraint

TTL values must be in ascending order:

TO PARQUET <= DROP NATIVE <= DROP LOCAL <= DROP REMOTE

For example, you cannot drop native files before the Parquet conversion completes. All TTL values must be positive — 0 is rejected.
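The constraint can be checked mechanically: only the stages that are set participate, and each must be no larger than the next set stage. A sketch of that validation logic (Python; `validate_policy` is hypothetical, not QuestDB code):

```python
from datetime import timedelta

STAGE_ORDER = ["to_parquet", "drop_native", "drop_local", "drop_remote"]

def validate_policy(policy: dict) -> None:
    """Check positivity and the ascending-order constraint.

    Unset stages are skipped; only the stages present must be ordered
    TO PARQUET <= DROP NATIVE <= DROP LOCAL <= DROP REMOTE.
    """
    for name, ttl in policy.items():
        if ttl <= timedelta(0):
            raise ValueError(f"{name}: TTL must be positive")
    set_stages = [(s, policy[s]) for s in STAGE_ORDER if s in policy]
    for (a, ta), (b, tb) in zip(set_stages, set_stages[1:]):
        if ta > tb:
            raise ValueError(f"{a} TTL must be <= {b} TTL")

# Valid: gaps are fine, only relative order matters
validate_policy({"to_parquet": timedelta(days=3),
                 "drop_local": timedelta(days=30)})

# Invalid: native files would be dropped before Parquet conversion
try:
    validate_policy({"to_parquet": timedelta(days=10),
                     "drop_native": timedelta(days=3)})
except ValueError as e:
    print(e)  # reports the ordering violation
```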

Disabling and enabling

Temporarily suspend a storage policy without removing it:

ALTER TABLE trades DISABLE STORAGE POLICY;

Re-enable it later:

ALTER TABLE trades ENABLE STORAGE POLICY;

Both ENABLE and DISABLE require a policy to exist on the table; the statement returns an error otherwise.

Removing a storage policy

To permanently remove a storage policy from a table:

ALTER TABLE trades DROP STORAGE POLICY;

Checking storage policies

Query the storage_policies system view to see all active policies:

SELECT * FROM storage_policies;
table_dir_name   to_parquet   drop_native   drop_local   drop_remote   status   last_updated
trades~12        72h          240h          1m                         A        2025-01-15T10:30:00.000000Z
  • TTL values are rendered in just two units: h for hours and m for months. Hour-, day-, and week-based durations are normalized to hours when stored, so a 3 DAYS TTL appears as 72h and 1 WEEK appears as 168h. Month-based durations keep the lowercase m suffix — 1m in this view means one month, not one minute; QuestDB's duration shorthand has no unit for minutes
  • Status A means active; D means disabled (see Disabling and enabling)
  • Unset stages appear blank. drop_remote is always blank in the current release because DROP REMOTE is rejected at SQL parse time with 'DROP REMOTE' is not supported yet; the column is kept for forward compatibility

For the full column reference and types, see storage_policies.

Replication

Storage policy definitions are persisted in WAL-backed system tables, so the policy itself is replicated to every instance in the cluster. Enforcement runs independently on each instance — Parquet files are produced locally and are not replicated.

This means the primary and its replicas can temporarily disagree on which partitions have been converted to Parquet or dropped, depending on when each node's storage policy check interval last fired. The state converges as each instance processes its own queue. See Replication overview for details.

Configuration

Storage policy behavior can be tuned in server.conf. Time-based properties accept values with unit suffixes (e.g., 15m, 30s, 1h) or raw microsecond values:

Property                              Default            Description
storage.policy.check.interval         15m (15 min)       How often QuestDB scans for partitions to process
storage.policy.retry.interval         1m (1 min)         Retry interval for failed tasks
storage.policy.max.reschedule.count   20                 Maximum retries before abandoning a task
storage.policy.writer.wait.timeout    30s (30 sec)       Timeout for acquiring the table writer
storage.policy.worker.count           2                  Number of storage policy worker threads (0 disables the feature)
storage.policy.worker.affinity        -1 (no affinity)   CPU affinity for each worker thread (comma-separated list)
storage.policy.worker.sleep.timeout   100ms              Sleep duration when worker has no tasks

Permissions

Storage policy operations require specific permissions in QuestDB Enterprise:

Operation                Required permission
SET STORAGE POLICY       SET STORAGE POLICY
DROP STORAGE POLICY      REMOVE STORAGE POLICY
ENABLE STORAGE POLICY    ENABLE STORAGE POLICY
DISABLE STORAGE POLICY   DISABLE STORAGE POLICY

Grant permissions using standard RBAC syntax:

GRANT SET STORAGE POLICY ON trades TO analyst;
GRANT REMOVE STORAGE POLICY ON trades TO admin;

End-to-end example

A complete lifecycle on a single table, from creation through verification, modification, inspection of the generated DDL, temporary suspension, and permanent removal:

1. Create the table with a storage policy
CREATE TABLE trades (
  ts TIMESTAMP,
  symbol SYMBOL,
  price DOUBLE
) TIMESTAMP(ts) PARTITION BY DAY
STORAGE POLICY(TO PARQUET 3d, DROP NATIVE 10d, DROP LOCAL 1M)
WAL;
2. Verify via the system view
SELECT table_dir_name, to_parquet, drop_native, drop_local, status
FROM storage_policies
WHERE table_dir_name LIKE 'trades%';

table_dir_name   to_parquet   drop_native   drop_local   status
trades~12        72h          240h          1m           A
3. Modify one stage (others remain unchanged)
ALTER TABLE trades SET STORAGE POLICY(TO PARQUET 1d);
4. Inspect the current DDL
SHOW CREATE TABLE trades;

CREATE TABLE 'trades' (
  ts TIMESTAMP,
  symbol SYMBOL CAPACITY 256 CACHE,
  price DOUBLE
) timestamp(ts) PARTITION BY DAY
STORAGE POLICY(TO PARQUET 1 DAY, DROP NATIVE 10 DAYS, DROP LOCAL 1 MONTH) WAL;
5. Temporarily suspend the policy (e.g. during a backfill)
ALTER TABLE trades DISABLE STORAGE POLICY;
-- status in storage_policies changes to 'D'
6. Re-enable and, later, drop it for good
ALTER TABLE trades ENABLE STORAGE POLICY;
ALTER TABLE trades DROP STORAGE POLICY;
-- row disappears from storage_policies

Guidelines

Use case            Suggested policy                                Rationale
Real-time metrics   TO PARQUET 1d, DROP NATIVE 7d, DROP LOCAL 30d   Keep recent data fast, drop old data automatically
Trading data        TO PARQUET 7d, DROP NATIVE 30d                  Keep Parquet locally for long-term queries
IoT telemetry       TO PARQUET 1d, DROP NATIVE 3d, DROP LOCAL 90d   High volume, convert early to save disk; keep a brief native overlap for in-flight queries before dropping the native files
Aggregated views    TO PARQUET 30d                                  Low volume, keep locally in Parquet

Tips:

  • Start with TO PARQUET and DROP NATIVE to reduce local disk usage while keeping data queryable in Parquet format
  • Use DROP LOCAL with care as it permanently removes data from the local disk
  • TTL values should be significantly larger than the partition interval