At Compose, our deployments are provisioned into isolated capsules, each with a guaranteed resource allocation. You should never have to worry about it. That said, we're happy to explain in more detail.
Auto-scaling controls the resources available to your databases. The resource numbers given refer to each database node in a Compose deployment; for example, where there are two high availability members in a deployment and the deployment is scaled with 10GB of disk and 1GB of RAM, that means each member has 10GB of disk and 1GB of RAM.
As a general rule, resources are then scaled automatically based on the total disk storage use of a deployment. RAM usage is typically based on a ratio of provisioned data storage. IOPs are guaranteed to be sixty times the GB size of the provisioned data storage split evenly between read and write with the ability to burst higher when capacity allows. These resources are bundled as units.
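The IOPS guarantee above can be sketched as a small calculation (the 60-IOPS-per-GB figure and the even read/write split are from the text; the function name is ours):

```python
def guaranteed_iops(disk_gb):
    """IOPS are guaranteed at sixty times the provisioned disk size in GB,
    split evenly between read and write."""
    total = 60 * disk_gb
    return {"read": total // 2, "write": total // 2}

print(guaranteed_iops(10))  # 10GB of disk -> {'read': 300, 'write': 300}
```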
Auto-scaling is designed to respond to the short-to-medium term trends of your database. Every hour, each deployment is checked and, generally, if it is running short on resources, then more units are allocated to the deployment.
Scaling Down is not handled automatically
Auto-scaling does not scale down deployments where disk/memory usage has shrunk. The resources provisioned to your databases will remain for your future needs, or until you scale down your deployment manually.
For most databases, disk space will generally increase without restrictions with the system adding the appropriate number of units for the disk space consumed, with its associated RAM and IOPs, every hour. For Redis, etcd and RabbitMQ, as they are memory-centric, the amount of RAM consumed is monitored. When a Redis deployment has consumed over 95% of its RAM, an extra unit is added to increase the available RAM (and disk storage and IOPs).
There is one exception to the hourly autoscale: Elasticsearch. The system still checks every hour, but only autoscales every 24 hours. That means that if you are planning a RAM-heavy operation which will put a spike in your usual RAM usage, we recommend you manually scale your Elasticsearch deployment's resources up first, carry out the operation and then scale back down. Compose calculates billing on an hourly basis, so only the hours spent scaled up will be billed at the higher rate.
For MongoDB (with MMAPv1), Elasticsearch, PostgreSQL and RethinkDB, a unit is 1GB of disk storage. A 10:1 ratio is used to allocate RAM, so for each GB of disk, 102MB of RAM is allocated to the data capsules.
For MongoDB with WiredTiger, a unit is still 1GB of disk storage, but in this one case a 4:1 ratio is used to allocate RAM, so for each GB of disk, 256MB of RAM is allocated to the data capsules.
For Redis, a unit is 256MB of RAM. That is then mapped 1:1 to give 256MB of disk storage.
For etcd and RabbitMQ, a unit is 256MB of RAM. A 1:4 ratio of memory to disk storage means each unit brings 1GB of storage.
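The per-unit sizes above can be gathered into one sketch (all ratios are as stated in the text; the table layout and function name are ours):

```python
# Resources granted per unit, for each database type, in MB.
# Disk-centric databases: 1 unit = 1GB (1024MB) of disk plus its RAM share.
# Memory-centric databases: 1 unit = 256MB of RAM plus its disk share.
UNITS = {
    "mongodb_mmapv1":    {"disk_mb": 1024, "ram_mb": 102},  # 10:1 disk:RAM
    "elasticsearch":     {"disk_mb": 1024, "ram_mb": 102},
    "postgresql":        {"disk_mb": 1024, "ram_mb": 102},
    "rethinkdb":         {"disk_mb": 1024, "ram_mb": 102},
    "mongodb_wiredtiger": {"disk_mb": 1024, "ram_mb": 256},  # 4:1 disk:RAM
    "redis":             {"disk_mb": 256,  "ram_mb": 256},  # 1:1 RAM:disk
    "etcd":              {"disk_mb": 1024, "ram_mb": 256},  # 1:4 RAM:disk
    "rabbitmq":          {"disk_mb": 1024, "ram_mb": 256},
}

def allocation(db, units):
    """Total disk and RAM for a deployment holding `units` units."""
    u = UNITS[db]
    return {"disk_mb": u["disk_mb"] * units, "ram_mb": u["ram_mb"] * units}

print(allocation("redis", 4))  # {'disk_mb': 1024, 'ram_mb': 1024}
```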
For disk storage based deployments, exceeding the current allocated space will start the process of autoscaling up. That applies to MongoDB, Elasticsearch, PostgreSQL and RethinkDB.
Redis, etcd and RabbitMQ autoscale up, as they hit thresholds in memory consumption.
A MongoDB (with MMAPv1) deployment that is 10GB on disk will be using 10 units and therefore also be assigned 1GB of RAM and 600 IOPS (300 write, 300 read). If its actual usage goes above 10GB, the auto-scaling system will kick in and add a unit of resources, allowing up to 11GB of disk with nodes being assigned 1.122GB of RAM and 660 IOPS (330 write, 330 read).
A Redis deployment that has 1GB of RAM will be using 4 units and therefore be assigned 1GB of RAM and 60 IOPS (30 write, 30 read). If it uses over 972MB of RAM (95% of 1GB), the autoscaling system will kick in and add a unit of resources, allowing up to 1.25GB of RAM, 1.25GB of disk and 75 IOPS (approx. 37 read, 37 write). The next RAM threshold would be around 1216MB used.
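The arithmetic in the two examples above can be checked with a short, self-contained sketch (all figures come from the text; the variable names are ours):

```python
# MongoDB (MMAPv1): a unit is 1GB of disk with 102MB of RAM;
# IOPS are guaranteed at 60 per GB of provisioned disk.
units = 10
disk_gb = units * 1           # 10GB of disk
ram_mb = units * 102          # 1020MB, roughly 1GB of RAM
iops = 60 * disk_gb           # 600 IOPS, split 300 read / 300 write

# Redis: a unit is 256MB of RAM mapped 1:1 to 256MB of disk;
# a new unit is added once 95% of RAM is consumed.
redis_units = 4
redis_ram_mb = redis_units * 256          # 1024MB = 1GB of RAM
threshold_mb = int(redis_ram_mb * 0.95)   # ~972MB triggers the next unit

print(disk_gb, ram_mb, iops, redis_ram_mb, threshold_mb)
```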
Scaling is under your control if you want to override it to get a particular amount of RAM or disk. At the bottom of the overview screen for each database is a Scaling panel which offers you the chance to scale up by three different amounts – usually starting with 1.5x, 2.0x and 3.0x. These would add 50%, 100% or 200% of resource units to your current allocation. The costs associated with these scale-ups are displayed with each option and can be enacted by clicking the button associated with each.
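As a sketch of how those multipliers translate into units (the 1.5x, 2.0x and 3.0x options are from the text; the function name is ours):

```python
def scaled_units(current_units, multiplier):
    """Scaling by 1.5x, 2.0x or 3.0x adds 50%, 100% or 200% of the
    current resource units to the allocation."""
    return int(current_units * multiplier)

for m in (1.5, 2.0, 3.0):
    print(scaled_units(10, m))  # 10 units -> 15, 20, 30
```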
If you have scaled up but are clearly not using the resources allocated, the Scaling panel will also show that you are eligible to scale down to reduce your RAM and disk allocation.