QuestDB Enterprise quick start

QuestDB Enterprise offers the entire feature set of QuestDB open source, with premium additions.

This guide will walk you through a basic Enterprise setup.

Each production configuration is unique, but these examples will help inform your own choices.


Requirements
0. Secure the built-in admin
1. Set up TLS
2. Set up a database administrator
3. Create interactive user accounts
4. Ingest data, InfluxDB Line Protocol
5. Ingest data, Kafka Connect (optional)
6. Query data, PostgreSQL query
7. Set up replication
8. Enable compression
9. Configure a storage policy
10. Double-check kernel limits
Next steps
FAQ


Requirements

The following are required before you begin this guide:

Installation guide

Changes take place in your conf/server.conf file, the QuestDB Web Console, your application code, or a third-party tool.

Check the code snippet's title to see where the command is to be invoked.

If you run into any trouble, please contact us by email or visit the Community Forum.

0. Secure the built-in admin

QuestDB Enterprise provides a built-in administrator account.

By default, it has the login admin and the password quest.

Before you go any further, please change the default password!

Consider changing the name, too.

To change these values, swap your own in place of myadmin and my_very_secure_pwd:

server.conf - Securing built-in admin account
# the built-in admin's user name and password
acl.admin.user=myadmin
acl.admin.password=my_very_secure_pwd
Kubernetes deployments

In Kubernetes, you can read the password from a mounted secret file instead of hardcoding it. Set QDB_ACL_ADMIN_PASSWORD_FILE to the path of the mounted secret. See Secrets from files for details.
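
As a sketch, assuming the secret is mounted at /var/run/secrets/questdb (the path is an assumption; substitute wherever your secret volume is mounted):

Kubernetes - Reading the admin password from a mounted secret
# Environment variable on the QuestDB container; the file's contents become the password
QDB_ACL_ADMIN_PASSWORD_FILE=/var/run/secrets/questdb/admin-password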

We will optionally disable this built-in administrator account later.

For more on access control, see Role-Based Access Control.

1. Set up TLS

QuestDB supports TLS versions 1.2 and 1.3.

To configure TLS on all interfaces, set the following:

server.conf - Changing cert paths
tls.enabled=true
tls.cert.path=/path/to/certificate.pem
tls.private.key.path=/path/to/private_key

To hot-reload the certificate and private key after updating the files on disk, log in to the QuestDB Web Console, accessible by default at http://localhost:9000, using the built-in administrator credentials.

Then, execute:

Web Console - Reloading TLS
SELECT reload_tls();

TLS is now active.

For more details on TLS see the TLS operations documentation.

2. Set up a database administrator

The built-in admin is useful for initial setup, and thereafter as needed on a recovery basis.

A helpful practice is to create one administrator account through which to set up all other accounts.

Create a new database admin, choosing a name distinct from the built-in admin's:

Web Console - Creating an admin; use your own secure password!
CREATE USER dbadmin WITH PASSWORD 'xyz';
GRANT ALL TO dbadmin WITH GRANT OPTION;

For emphasis: Please choose a secure password!

After admin creation, we can now disable the built-in admin:

server.conf - Disabling the built-in admin
acl.admin.user.enabled=false

Can you keep it enabled? If it is secured, that is up to you. Consider the different roles involved: you may be setting up an Enterprise cluster as the infrastructure admin, in which case the built-in admin is your tool for infrastructure tasks. The admin we just created may belong to a different persona, one who sets up users and groups and dictates how data can enter and be queried.

However you break it down, remember that it can always be reactivated.
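
Should you need it again, re-enable it with the same flag:

server.conf - Re-enabling the built-in admin
acl.admin.user.enabled=true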

3. Create interactive user accounts

Now that you have an admin account, create interactive users.

Interactive users are those who will ingest into and query your database, and manipulate its data. They are different from administrators, like you, who delegate permissions.

Create and govern users through role-based access control and the curation of your access control list.

Interactive users may utilize the Web Console and/or PostgreSQL clients. It is common practice to set them up as read-only, but how you set up these users or groups is up to you.

Ingestion accounts are covered in the next section. Consider this first wave of users your "database consumers".

For permissions, the Web Console requires HTTP, and the PostgreSQL interface requires PGWIRE:

Web Console - Creating multiple users with differing permissions.
-- Read only user, can read all tables:
CREATE USER readonly WITH PASSWORD 'xyz';
GRANT HTTP, PGWIRE TO readonly;
GRANT SELECT ON ALL TABLES TO readonly;

-- User with all permissions on a specific table:
CREATE USER user1 WITH PASSWORD 'abc';
GRANT HTTP, PGWIRE TO user1;
GRANT ALL ON table1 TO user1;

-- User who can manage access to a specific table:
CREATE USER user2 WITH PASSWORD 'abc';
GRANT HTTP, PGWIRE TO user2;
GRANT ALL ON table2 TO user2 WITH GRANT OPTION;

Permission grants can be specific and fine-tuned.

List all applied permissions with the all_permissions() function.
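
For example, to list every permission currently in effect:

Web Console - Listing applied permissions
SELECT * FROM all_permissions();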

4. Ingest data, InfluxDB Line Protocol

The recommended method for high-throughput ingestion is InfluxDB Line Protocol (ILP) over HTTP.

We recommend using a service account for programmatic ingestion. Service accounts apply a cleaner set of access permissions and are less likely to be affected by day-to-day user management.

The process is:

  1. Create a service account and grant it permissions.
  2. Generate a REST token for the service account.
  3. Use this token in your client's connection string.

Step 1: Create the Service Account

First, run the following SQL in the Web Console. This creates a service account named ingest_ilp and grants it the necessary permissions to use HTTP endpoints and manage data.

Web Console - Setup a service account
CREATE SERVICE ACCOUNT ingest_ilp;
-- Grant permission to create tables and use HTTP endpoints
GRANT HTTP, CREATE TABLE TO ingest_ilp;
-- Grant permission to add columns and insert data
GRANT ADD COLUMN, INSERT ON ALL TABLES TO ingest_ilp;

-- OR, for more granular control:
-- GRANT ADD COLUMN, INSERT ON table1, table2 TO ingest_ilp;

Step 2: Generate an Authentication Token

Next, generate a REST API token for the service account. This token acts as a password, so you must store it securely.

Web Console - Generate a token for the ingest client
ALTER SERVICE ACCOUNT ingest_ilp CREATE TOKEN TYPE REST WITH TTL '3000d' REFRESH;

This command returns a token. Copy it immediately, as it's shown only once.

| name | token | expires_at | refresh |
| --- | --- | --- | --- |
| ingest_ilp | qt1KAsf1U9YbUVAX1H2IahXEE3-4qBcK-zx_jsZUzV9bLY | 2033-09-19T15:32:51.628453Z | true |

Step 3: Use the Token in Your Client

You can now use this token to authenticate your application. The following Java example shows how to use the client library by configuring it from a connection string. This is the recommended approach.

Java - Ingesting data via ILP
import io.questdb.client.Sender;
import java.time.temporal.ChronoUnit;

public class Ingest {
    public static void main(String[] args) {
        // Connect over HTTPS and authenticate with the REST token
        try (Sender sender = Sender.fromConfig("https::addr=localhost:9000;token=qt1KAsf1U9YbUVAX1H2IahXEE3-4qBcK-zx_jsZUzV9bLY;")) {
            sender.table("ilptest");
            sender.symbol("sym1", "symval1")
                    .doubleColumn("double1", 100.0)
                    .at(System.currentTimeMillis(), ChronoUnit.MILLIS);
        }
    }
}
A Note on TLS

The https:: prefix in the connection string tells the client to connect using TLS. By default, the client will verify the server's certificate. For local testing with self-signed certificates, you can disable this validation by adding tls_verify=unsafe_off; to the configuration string. This is not recommended for production.
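
As an example, a configuration string for a local test instance with a self-signed certificate might look like the following; <your-token> is a placeholder for the token generated earlier:

Java - Connection string for local testing only
https::addr=localhost:9000;token=<your-token>;tls_verify=unsafe_off;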

Connecting a client directly via ILP is the most common path.

However, you may also route data through a tool such as Kafka.

For more on ILP ingestion, see:

5. Ingest data, Kafka Connect (optional)

If you're not using Kafka, you can skip to section 6.

The official QuestDB Kafka Connect sink forwards messages from Kafka topics directly to your database using the ILP protocol. The setup process is straightforward:

  1. Create a dedicated service account in QuestDB.
  2. Generate an authentication token for the account.
  3. Configure the Kafka sink connector with your QuestDB address and the token.

Step 1: Create the Service Account

In the QuestDB Web Console, create a service account named kafka and grant it the permissions required to connect and write data.

Web Console - Create a Kafka service account
CREATE SERVICE ACCOUNT kafka;

-- Grant permissions to use HTTP, create tables, add new columns and insert data
GRANT HTTP, CREATE TABLE TO kafka;
GRANT ADD COLUMN, INSERT ON ALL TABLES TO kafka;

-- OR, for more granular control:
-- GRANT ADD COLUMN, INSERT ON table1, table2 TO kafka;

Step 2: Generate an Authentication Token

Next, generate a REST API token for the kafka service account. This token is a secret credential and should be treated like a password.

Web Console - Generate a token for the service account
-- Creates a token that is valid for 1 year (365 days)
ALTER SERVICE ACCOUNT kafka CREATE TOKEN TYPE REST WITH TTL '365d';

The command returns a token. Copy it immediately, as it will not be shown again.

| name | token | expires_at |
| --- | --- | --- |
| kafka | qt1KAsf1U9YbUVAX1H2IahXEE3-4qBcK-zx_jsZUzV9bLY | 2026-07-03T18:05:00.000000Z |

Save the token in a secure location!

Step 3: Configure the Kafka Connect Sink

Create a configuration file for the QuestDB sink connector. In the client.conf.string property, provide your QuestDB server address and paste the token you just generated.

questdb-sink.properties
# --- Connector Identity ---
name=QuestDBSinkConnector
connector.class=io.questdb.kafka.QuestDBSinkConnector
tasks.max=1

# --- Source Kafka Topic ---
topics=your_kafka_topic

# --- QuestDB Connection ---
# Use https:: if your QuestDB server has TLS enabled.
# Replace the placeholder with the token you generated.
client.conf.string=https::addr=localhost:9000;token=qt1KAsf1U9YbUVAX1H2IahXEE3-4qBcK-zx_jsZUzV9bLY;

# --- Optional: Data Mapping ---
# Use a field from the Kafka message key or value as a QuestDB symbol.
# symbol.columns=device_id

Once you deploy this configuration, the connector will start sending data from your Kafka topic to QuestDB. If you encounter any issues, check the logs for both your Kafka Connect worker and your QuestDB server for more details.
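
Deployment details depend on your Kafka Connect setup. As a minimal sketch against a stock Kafka distribution (paths are assumptions), a standalone Connect worker can load the properties file directly:

Shell - Loading the sink in a standalone Connect worker
bin/connect-standalone.sh config/connect-standalone.properties questdb-sink.properties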

See the QuestDB Kafka Connector documentation for more details on the configuration options and how to set up the connector.

6. Query data, PostgreSQL query

Now onto querying.

We will demonstrate programmatic querying via the PostgreSQL interface.

Again, we recommend a dedicated user or service account.

We will create a service account named "dashboard".

We assume this account backs Grafana or a similar data visualization application.

To set up the service account:

Web Console - Create a service account called 'dashboard' and grant permissions
CREATE SERVICE ACCOUNT dashboard WITH PASSWORD 'pwd';
GRANT PGWIRE TO dashboard;
GRANT SELECT ON ALL TABLES TO dashboard;

Using Java and JDBC, we can set up a client to query.

This time we provide a username and password instead of a token:

Java - Querying via JDBC
import java.sql.*;
import java.util.Properties;

public class App {
    public static void main(String[] args) throws SQLException {
        Properties properties = new Properties();
        properties.setProperty("user", "dashboard");
        properties.setProperty("password", "pwd");
        properties.setProperty("sslmode", "require");

        try (Connection connection = DriverManager.getConnection(
                "jdbc:postgresql://localhost:8812/qdb", properties);
             PreparedStatement preparedStatement = connection.prepareStatement(
                     "SELECT x, timestamp_sequence('2023-07-20', 1000000) FROM long_sequence(5);");
             ResultSet rs = preparedStatement.executeQuery()) {
            while (rs.next()) {
                System.out.println(rs.getLong(1));
                System.out.println(rs.getTimestamp(2));
            }
        }
    }
}

This covers the very basics of user creation and service accounts.

We have an ingest service account and a dashboard service account.

For more on querying, see:

For the full role-based access control docs, including group management, see the RBAC operations guide.

Next, we will enable Enterprise-specific features.

7. Set up replication

Replication consists of:

  • a primary database instance
  • an object storage
  • any number of replica instances

The primary instance uploads its Write Ahead Log (WAL) to the object storage, and the replica instances apply the same data to their tables by downloading and processing the WAL.

Full instructions can be found on the replication page; the key parts are:

  1. Set up the object storage: supported options are Azure Blob Storage, Amazon S3, or Network File Storage (NFS).
  2. Set up a primary node: alter the server.conf within the primary-to-be and create a snapshot of the database (a minimal configuration sketch follows this list).
  3. Set up a replica node: alter the server.conf in the replica(s)-to-be and perform "recovery" from the snapshot of the primary database. The snapshot provides a starting point for the instance, which will soon catch up with the primary node.
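
The following sketch is illustrative only: it assumes an S3 bucket, and the bucket, root, and region values are placeholders. Consult the replication page for the exact key names and options for your object store:

server.conf - Primary node (illustrative sketch)
replication.role=primary
replication.object.store=s3::bucket=my-bucket;root=my-db;region=us-east-1;

server.conf - Replica node (illustrative sketch)
replication.role=replica
replication.object.store=s3::bucket=my-bucket;root=my-db;region=us-east-1;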

8. Enable compression

Compression requires the Zettabyte File System (ZFS).

We'll assume Ubuntu and demonstrate the basic CLI commands you'd run on something like an AWS EC2 instance to enable ZFS:

Ubuntu - Install ZFS
sudo apt update
sudo apt install zfsutils-linux

To enable compression, create a ZPool with compression enabled:

Ubuntu - Enable compression
zpool create -m legacy -o feature@lz4_compress=enabled -o autoexpand=on -O compression=lz4 -t volume1 questdb-pool sdf

The exact commands depend on which version of ZFS you are running. Use the ZFS docs to customize your ZFS to meet your requirements.
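
To verify compression once the pool is in place, query its ZFS properties (use the pool name active on your system; if you created it with -t, the temporary name applies until re-import):

Ubuntu - Verify compression
zfs get compression questdb-pool    # should report lz4
zfs get compressratio questdb-pool  # observed compression ratio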

If you are running QuestDB Enterprise in Kubernetes, QuestDB offers a Container Storage Interface (CSI) Driver to create ZFS volumes in your cluster.

Please contact us to check whether your version and distribution of Kubernetes is supported.

For more on storage and compression, see Enable compression with ZFS.

9. Configure a storage policy

Storage policies are the Enterprise primitive for automated partition retention. A policy converts older partitions to Parquet, drops the native binary files, and eventually drops the Parquet files too — on a schedule you define. This supersedes plain TTL in Enterprise, where ALTER TABLE SET TTL with a non-zero value is rejected.

Migrating from TTL when upgrading from OSS

Tables and materialized views that were created in OSS keep their existing TTL setting after you upgrade to Enterprise — no data is lost at upgrade time. However, Enterprise rejects any new TTL changes (both CREATE ... TTL and ALTER ... SET TTL <non-zero>) with:

TTL settings are deprecated, please, create a storage policy instead

To move a legacy table or materialized view from TTL to a storage policy:

  1. Clear the existing TTL by setting it to 0. This is the only SET TTL value Enterprise accepts, and it is required before a storage policy can be attached:

    Clear the legacy TTL
    ALTER TABLE trades SET TTL 0;
    -- or, for a materialized view:
    ALTER MATERIALIZED VIEW trades_hourly SET TTL 0;
  2. Attach a storage policy that reproduces — and ideally extends — the retention the TTL used to provide. A policy lets you keep data in Parquet after you would previously have dropped it, so DROP LOCAL (or DROP NATIVE if you don't want Parquet at all) is the stage that replaces the old TTL horizon:

    Replace a 1-month TTL with an equivalent policy
    ALTER TABLE trades SET STORAGE POLICY(
        TO PARQUET 3 DAYS,
        DROP NATIVE 10 DAYS,
        DROP LOCAL 1 MONTH
    );

    If you want the policy to behave exactly like the old TTL (delete the partition outright after the same interval), use a single-stage policy — for example STORAGE POLICY(DROP NATIVE 1 MONTH) to match TTL 1 MONTH.

Do this for every table and materialized view you want to keep managed automatically. Tables without a storage policy retain their data indefinitely once their legacy TTL has been cleared.

Creating new tables with a storage policy

Attach a policy at table creation:

Web Console - Create a table with a storage policy
CREATE TABLE trades (
    ts TIMESTAMP,
    symbol SYMBOL,
    price DOUBLE
) TIMESTAMP(ts) PARTITION BY DAY
STORAGE POLICY(TO PARQUET 3d, DROP NATIVE 10d, DROP LOCAL 1M)
WAL;

Or attach a policy to an existing table:

Web Console - Set a storage policy on an existing table
ALTER TABLE trades SET STORAGE POLICY(
    TO PARQUET 3 DAYS,
    DROP NATIVE 10 DAYS,
    DROP LOCAL 1 MONTH
);

Check active policies via the storage_policies system view:

Web Console - List active storage policies
SELECT * FROM storage_policies;

For the full concept, including the stage model, timing, and replication behavior, see Storage Policy and ALTER TABLE SET STORAGE POLICY.

10. Double-check kernel limits

QuestDB works together with your server operating system to achieve maximum performance. Prior to putting your server under heavy loads, consider checking your kernel-based limitations.

Specifically, increase the limits for how many files can be opened by your OS and its users, and the maximum amount of virtual memory allowed. This helps QuestDB operate most effectively.
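
As an illustrative check on Linux (the limit names are standard; the example values are assumptions to tune for your workload):

Ubuntu - Inspect and raise kernel limits
# Current values
ulimit -n                 # max open files for this shell/user
sysctl fs.file-max        # system-wide open file limit
sysctl vm.max_map_count   # max memory-mapped areas per process

# Raise them (example values)
sudo sysctl -w fs.file-max=1048576
sudo sysctl -w vm.max_map_count=1048576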

Next steps

This guide has prepared you for reliable, production-ready usage of QuestDB Enterprise.

If you're new to QuestDB, consider checking out:

Otherwise, enjoy!

FAQ

General Setup and Configuration

Q: How do I change the default administrator password?

A: To change the default administrator password, update your server.conf file with the following lines, replacing myadmin and my_very_secure_pwd with your chosen administrator username and a secure password:

acl.admin.user=myadmin
acl.admin.password=my_very_secure_pwd

Q: What should I do if I encounter errors during the TLS setup process?

A: If you encounter errors during the TLS setup, ensure that the certificate and private key paths are correctly specified in your server.conf file. Also, verify that your certificates are valid and not expired. For further troubleshooting, consult the TLS operations documentation.

Security and Access Control

Q: Can I recover a lost service account token?

A: No, once a token is generated, it cannot be retrieved again. It is crucial to store it securely immediately upon creation. If lost, you will need to generate a new token for the service account.

Q: How do I securely manage service account tokens?

A: Securely managing service account tokens involves storing them in a safe location, such as a secure secrets management tool. Limit the distribution of these tokens and regularly rotate them to enhance security.

Ingestion and Querying

Q: What should I do if data ingestion via Kafka Connect fails?

A: If data ingestion via Kafka Connect fails, check your service account permissions and ensure the token used in Kafka's configuration matches the one generated for your service account. Also, verify your network settings and ensure there are no connectivity issues between Kafka and QuestDB.

Q: How can I troubleshoot issues with querying data using the PostgreSQL interface?

A: Ensure the service account or user has the correct permissions to query the tables of interest. Verify the connection string and authentication details used in your client application. For issues related to SSL, make sure the SSL mode is appropriately configured in your client connection settings.

Replication and Compression

Q: What steps should I take if replication is not working as expected?

A: Verify that the object storage is correctly set up and accessible by the primary instance. Ensure the server.conf settings for replication are correctly configured on both the primary and replica nodes. Check the logs for any errors related to replication and ensure there's network connectivity between all involved parties.

Q: Compression is enabled, but I'm not observing reduced storage usage. What could be the issue?

A: Ensure that the ZFS filesystem is correctly configured with compression enabled. Note that the compression ratio achieved varies significantly with the nature of your data; some data does not compress well. Review the ZFS compression statistics (for example, the compressratio property) to understand the level being achieved. If it is far outside the expected range, please contact us.