Sizing Guidelines

Evaluate the overall performance and capacity goals that you have for Couchbase, and use that information to determine the resources you’ll need in your deployment.

When you plan to deploy a Couchbase Server cluster, perhaps the most common (and important) question that comes up is: How many nodes do I need and what size do they need to be?

With the increasing number of Couchbase services and the flexibility of the Couchbase Data Platform, the answer to this question can be challenging. This guide aims to help you better size your deployment.

If you want detailed recommendations for your specific deployment, you can contact Couchbase Support. You can also find more sizing information on the Couchbase Blog.

The sizing recommendations and calculations discussed in this guide are based on an analysis of performance data and common use-cases.

General Considerations

The sizing of your Couchbase Server cluster is critical to its overall stability and performance. While there are some basic system requirements to run Couchbase Server, you still need to evaluate the overall performance and capacity requirements for your workload and dataset, and then map those requirements onto the hardware and resources you have available.

Your application wants the majority of reads to come out of the cache, and to have the I/O capacity to handle the writes. There needs to be enough capacity in all areas to support everything the system is doing while maintaining the required level of performance.

About Couchbase Server Resources

This guide discusses four types of resources that you should consider when sizing a Couchbase Server cluster node:

CPU

CPU refers to the number of cores and clock speed that are required to run your workload.

RAM

RAM is frequently one of the most crucial areas to size correctly. Cached documents allow the reads to be served at very low latency and consistently high throughput.

This resource represents the main memory that you allocate to Couchbase Server and should be determined by the following factors:

  • How much RAM is available beyond what the operating system and other applications need

  • How much of your data you want to keep in main memory

  • What latency you expect from key-value (KV), indexing, and query operations

Some components that require RAM are:

  • Memory-optimized Global Indexes, which enable in-memory index processing and index scans with the lowest latency.

  • Full Text Search (FTS)

Table 1. Minimum RAM Quota for Couchbase Server Components
Component | Minimum RAM
Data Service | 1024 MB
Index Service (Standard Global Secondary) | 256 MB
Index Service (Memory-Optimized) | 256 MB minimum; 1024 MB and above recommended
Search Service (Full-Text Search) | 256 MB minimum; 2048 MB and above recommended
Query Service | No RAM allocation is required for this service
Eventing Service | 256 MB
Analytics Service | 1024 MB
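
For illustration, per-service memory quotas can also be set programmatically. The following is a minimal sketch, assuming a node reachable at localhost:8091, placeholder administrator credentials, and the memoryQuota, indexMemoryQuota, and ftsMemoryQuota parameters of the cluster REST API; verify the exact endpoint and parameter names for your server version.

    # Sketch: set per-service RAM quotas (values in MB) through the cluster REST API.
    # Node address, credentials, and quota values are placeholders.
    import requests

    resp = requests.post(
        "http://localhost:8091/pools/default",
        auth=("Administrator", "password"),
        data={
            "memoryQuota": 1024,      # Data Service quota (MB)
            "indexMemoryQuota": 512,  # Index Service quota (MB)
            "ftsMemoryQuota": 512,    # Search Service quota (MB)
        },
    )
    resp.raise_for_status()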

Storage (disk space)

Requirements for your disk subsystem are:

  • Disk size — which refers to the amount of the disk storage space that is needed to hold your entire data set.

  • Disk I/O — which is a combination of your sustained read/write rate, the need for compacting the database files, and anything else that requires disk access.

To better support Couchbase Server, keep in mind the following:

  • Disk-space usage continues to grow as the fragmentation ratio climbs. To mitigate this, include enough buffer in your disk space to store all of the data (see the sketch after this list). Also, keep an eye on the fragmentation ratio in the Couchbase Web Console and trigger compaction when needed.

  • Solid State Drives (SSDs) are desired, but not required. An SSD will give much better performance than a Hard Disk Drive (HDD) when it comes to disk throughput and latency.
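
A minimal sketch of the disk-space buffer idea mentioned above, assuming append-only storage and a compaction trigger expressed as a fragmentation percentage (the 30% threshold and 200 GB dataset are hypothetical values used only for illustration):

    # Estimate the on-disk footprint just before compaction runs.
    # With append-only storage, stale data accumulates until compaction
    # reclaims it, so plan for data_size / (1 - fragmentation_threshold).
    def disk_space_needed(data_size_gb: float, fragmentation_threshold: float = 0.30) -> float:
        return data_size_gb / (1.0 - fragmentation_threshold)

    # Example: 200 GB of live data, compaction triggered at 30% fragmentation
    print(round(disk_space_needed(200.0), 1))  # ~285.7 GB of disk space needed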

Network

Enough network bandwidth is vital to the performance of Couchbase Server. A reliable high-speed network for intra-cluster and inter-cluster communications has a huge effect on overall performance and scalability of Couchbase Server.

Most deployments can achieve optimal performance with 1 Gbps interconnects, but some may need 10 Gbps.

Sizing Data Service Nodes

Data Service nodes handle data service operations, such as create/read/update/delete (CRUD).

It’s important to keep use-cases and application workloads in mind, since different workloads have different resource requirements. For example, if your working set needs to fit entirely in memory, you might need a large amount of RAM. On the other hand, if your application requires only 5% of the data in memory, you will need disks with enough space to store all of the data and enough speed for your read/write operations.

You can start sizing the Data Service nodes by answering the following questions:

  1. Is the application primarily (or even exclusively) using individual document access?

  2. Do you plan to use Views?

  3. Do you plan to use XDCR?

  4. What’s your working set size and what are your data operation throughput and latency requirements?

Answers to the above questions can help you better understand the capacity requirement of your cluster and provide a better estimation for sizing.

The following is an example use-case for sizing RAM:

Table 2. Input Variables for Sizing RAM
Input Variable | Value
documents_num | 1,000,000
ID_size | 100 bytes
value_size | 10,000 bytes
number_of_replicas | 1
working_set_percentage | 20%

Table 3. Constants for Sizing RAM
Constant | Value
Type of Storage | SSD
overhead_percentage | 25%
metadata_per_document | 56 bytes for Couchbase Server 2.1 and higher; 64 bytes for 2.0.x
high_water_mark | 85%

Based on the provided data, a rough sizing guideline formula would be:

Table 4. Guideline Formula for Sizing a Cluster
Variable | Calculation
no_of_copies | 1 + number_of_replicas
total_metadata | (documents_num) * (metadata_per_document + ID_size) * (no_of_copies)
total_dataset | (documents_num) * (value_size) * (no_of_copies)
working_set | total_dataset * (working_set_percentage)
Cluster RAM quota required | (total_metadata + working_set) * (1 + overhead_percentage) / (high_water_mark)
number_of_nodes | Cluster RAM quota required / per_node_ram_quota

Based on the above formula and the example inputs, the calculation works out as follows:

Table 5. Suggested Sizing Guideline
Variable | Calculation
no_of_copies | 1 (original) + 1 (replica) = 2
total_metadata | 1,000,000 * (100 + 56) * 2 = 312,000,000 bytes
total_dataset | 1,000,000 * 10,000 * 2 = 20,000,000,000 bytes
working_set | 20,000,000,000 * 0.2 = 4,000,000,000 bytes
Cluster RAM quota required | (312,000,000 + 4,000,000,000) * (1 + 0.25) / 0.85 = 6,341,176,470 bytes

This tells you that the RAM requirement for the whole cluster is approximately 6.4 GB (round up to 7 GB). Note that this amount is in addition to the RAM requirements for the operating system and any other software that runs on the cluster nodes.
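
The same calculation, expressed as a short Python sketch using the inputs from Tables 2 and 3 (all sizes in bytes; the 2 GiB per-node quota at the end is a hypothetical value used only to show the node-count step):

    import math

    # Inputs from Tables 2 and 3
    documents_num = 1_000_000
    id_size = 100                 # bytes
    value_size = 10_000           # bytes
    number_of_replicas = 1
    working_set_percentage = 0.20
    overhead_percentage = 0.25
    metadata_per_document = 56    # bytes (Couchbase Server 2.1 and higher)
    high_water_mark = 0.85

    # Guideline formula from Table 4
    no_of_copies = 1 + number_of_replicas
    total_metadata = documents_num * (metadata_per_document + id_size) * no_of_copies
    total_dataset = documents_num * value_size * no_of_copies
    working_set = total_dataset * working_set_percentage

    cluster_ram_quota = (total_metadata + working_set) * (1 + overhead_percentage) / high_water_mark
    print(f"Cluster RAM quota: {cluster_ram_quota / 1e9:.2f} GB")  # ~6.34 GB, round up to 7 GB

    # Hypothetical per-node Couchbase RAM quota of 2 GiB
    per_node_ram_quota = 2 * 1024**3
    print(math.ceil(cluster_ram_quota / per_node_ram_quota))       # 3 nodes in this example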

Sizing Index Service Nodes

A node running the Index Service must be sized properly to create and maintain secondary indexes and to perform index scans for N1QL queries.

As with nodes that run the Data Service, there is a set of questions you need to answer in order to understand your application’s needs:

  1. What is the length of the document key?

  2. Which fields need to be indexed?

  3. Will you be using simple or compound indexes?

  4. What is the minimum, maximum, or average value size of the index field?

  5. How many indexes do you need?

  6. How many documents need to be indexed?

  7. How often do you want compaction to run?

Answers to these questions can help you better understand the capacity requirement of your cluster, and provide a better estimation for sizing.

The following is an example use-case for sizing disk:

Table 6. Input Variables for Sizing Index Disk Usage
Input Variable | Value
docID | 20 bytes
Number of index fields | 1
Secondary index field size | 24 bytes
Number of documents to be indexed | 20M

When you calculate disk usage for the above test cases, there are a few factors you need to keep in mind:

  1. Compaction is disabled. This case illustrates the worst-case scenario for disk usage.

  2. Couchbase Server uses an append-only storage format. Therefore, actual disk usage will be larger than data size.

  3. Fragmentation affects disk usage: the higher the fragmentation, the more disk space you will need.

In this example, the index consumes approximately 6 GB of disk space.
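
To illustrate why the on-disk footprint is much larger than the raw entry size, here is a rough sketch that computes the raw index data from Table 6 and applies an assumed inflation multiplier for the append-only format and fragmentation; the multiplier is a hypothetical placeholder, not a measured constant.

    # Raw index entry: the indexed field value plus the document ID it points to.
    doc_id_size = 20            # bytes
    index_field_size = 24       # bytes
    documents_indexed = 20_000_000

    raw_index_bytes = (doc_id_size + index_field_size) * documents_indexed
    print(f"raw entries: {raw_index_bytes / 1e9:.2f} GB")  # ~0.88 GB

    # With compaction disabled, append-only writes and per-entry storage overhead
    # inflate the raw size several-fold; in this example the observed footprint
    # was about 6 GB. The multiplier below is an assumption for illustration only.
    assumed_inflation = 7
    print(f"estimated on disk: {raw_index_bytes * assumed_inflation / 1e9:.1f} GB")  # ~6.2 GB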

Sizing Query Service Nodes

A node that runs the Query Service executes queries for your application needs.

Since the Query Service doesn’t need to persist data to disk, its requirements for disk space and disk I/O are minimal. You only need to consider CPU and memory.

There are a few questions that will help size the cluster:

  1. What types of queries do you need to run?

  2. Do you need to run stale=ok or stale=false queries?

  3. Are the queries simple or complex (requiring JOINs, for example)?

  4. What are the throughput and latency requirements for your queries?

Different queries have different resource requirements. A simple query might return results within milliseconds while a complex query may require several seconds.

The following is an example use-case for sizing CPU:

Assume that you have a user profile store, which stores a user’s name and email address. You would like to query based on a user’s email address, and you create a secondary index on email. Now you would like to run a query that looks like this:

SELECT * FROM bucket WHERE email = "foo@gmail.com"

By default, N1QL uses stale=ok as its consistency model.

It was observed that this query fully utilized 24 cores to achieve an 80th-percentile latency of 5 ms against a bucket of 20M documents.
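
For reference, such a query might be issued from an application as in the sketch below. This assumes the Couchbase Python SDK 4.x (import paths differ in earlier SDK versions) and placeholder connection details and bucket name; NOT_BOUNDED scan consistency corresponds to the stale=ok behavior described above, while REQUEST_PLUS corresponds to stale=false.

    from couchbase.auth import PasswordAuthenticator
    from couchbase.cluster import Cluster
    from couchbase.options import ClusterOptions, QueryOptions
    from couchbase.n1ql import QueryScanConsistency

    # Placeholder connection details
    cluster = Cluster(
        "couchbase://localhost",
        ClusterOptions(PasswordAuthenticator("Administrator", "password")),
    )

    # NOT_BOUNDED does not wait for the index to absorb recent mutations
    # (the stale=ok behavior); REQUEST_PLUS would wait (stale=false).
    result = cluster.query(
        'SELECT * FROM `bucket` WHERE email = $email',
        QueryOptions(
            named_parameters={"email": "foo@gmail.com"},
            scan_consistency=QueryScanConsistency.NOT_BOUNDED,
        ),
    )
    for row in result.rows():
        print(row)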