RethinkDB Deployments allow developers to focus on what they do best (developing useful applications) and avoid the guesswork of capacity planning. Deployments run on fast SSDs, are fully replicated, and have access to large amounts of RAM. As an application’s data size increases, deployment resources scale fluidly to maintain consistent performance, and billing is based on actual usage.
RethinkDB works best with a reasonable ratio of RAM to data size and fast I/O. Increasing deployment server capacity traditionally requires a change in server size, replacement of I/O subsystems, and a coordinated operations process. Developers typically perform a precarious balancing act: trying to avoid unnecessary cost while hoping that their data growth won’t take them by surprise or cause outages.
In addition to elastic provisioning, RethinkDB Deployments include built-in daily backups, so you can be confident that your data will never be lost. Backups use the RethinkDB `dump` command to ensure minimal impact on your applications.
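The same `dump` tool can be run by hand whenever you want an ad-hoc export. A minimal sketch, assuming a hypothetical hostname and port (substitute your deployment's connection details):

```shell
# Hypothetical host and port -- use your deployment's actual driver endpoint.
# Produces a tar.gz archive of all tables without taking the cluster offline.
rethinkdb dump -c my-deployment.example.com:28015 -f rethinkdb-backup.tar.gz
```

The resulting archive can later be restored with `rethinkdb restore`.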
- Auto-scaling server stack that grows RAM, CPU, and I/O allocations as your RethinkDB data grows.
- 3-node clusters on extremely fast SSDs (solid-state drives).
- Guaranteed resources per deployment.
- No-cost backups, no matter how big your deployment grows.
See Compose Datacenter Availability for currently supported locations.
All RethinkDB Deployments are high-availability clusters: you get fully redundant architecture from the hardware up to the deployment itself. Each RethinkDB cluster has a master member that coordinates writes; the remaining members can take over as master in the event of a node failure. Data is spread across the cluster based on the replica and shard count (more info).
In addition to the 3-node RethinkDB cluster, we provide one HAProxy node to serve as a proxy and provide SSL support for the cluster. If your driver does not support SSL, you can still use an SSH capsule to create a tunnel to the cluster.
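The SSH-capsule approach boils down to standard local port forwarding. A sketch, assuming a hypothetical capsule hostname, user, and internal cluster address (yours will differ):

```shell
# Hypothetical capsule and internal addresses -- substitute your own.
# Forwards local port 28015 to the cluster's driver port through the capsule,
# without running a remote command (-N).
ssh -N -L 28015:10.0.0.5:28015 compose@ssh-capsule.example.com
```

Once the tunnel is up, non-SSL drivers can connect to `localhost:28015` as if the cluster were running locally.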
RethinkDB deployments are provisioned into isolated capsules with guaranteed resource allocations. Resources scale automatically based on the total disk use of the RethinkDB database: RAM is scaled at 1/10th of data size, and disk IOPS at 60x data size.
CPU and network capacity are allocated using weighted priorities based on disk usage with the ability to burst to more CPU and network when available.
The `cache-size` setting is adjusted based on overall data size to ensure stable operation and performance.
Example: a RethinkDB Deployment that is 10GB on disk will be assigned 1GB of RAM and 600 IOPS (300 write, 300 read).
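The scaling rules above are simple ratios, so they can be sketched in a few lines. This is an illustration of the arithmetic from this article, not an official Compose API:

```python
def allocation_for(data_gb):
    """Return (ram_gb, iops) for a deployment of the given on-disk size,
    using the ratios described above."""
    ram_gb = data_gb / 10   # RAM scales at 1/10th of data size
    iops = data_gb * 60     # disk IOPS scale at 60x data size
    return ram_gb, iops

ram, iops = allocation_for(10)
print(ram, iops)  # 1.0 GB of RAM, 600 IOPS (300 write, 300 read)
```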
As your RethinkDB Deployment scales, we charge based on the total resource allocation: $18/mo per resource increment. Each increment includes:
- Data and resources replicated across 2 servers
- 1GB of storage
- 0.1GB of RAM
- 60 disk IOPS
Reserving extra resources is not yet a self-service feature, but if you send us a support request we can scale your deployment beyond what our autoscaling algorithms would normally assign. Note that this increases the price of the service, since we bill based on the total resources assigned to a deployment.
Note: We will occasionally increase the RAM allotment on RethinkDB Deployments to ease the workload on our operations team; we do not charge for these increases.
We do not make these databases available in third-party marketplaces. However, as long as your PaaS partner operates in the same datacenter, Compose databases can be used with applications running on that PaaS provider.
If this article didn't solve things, summon a human and get some help!