Portal Management


Each Compose database deployment runs on its own private, encrypted virtual network. The only way traffic gets in or out is through one of the access portals on that deployment. Access portals are specialized nodes on the private network with an external IP address that manage the traffic from your applications into the private network. Two types of access portal are currently available: the TCP/Haproxy portal and the SSH portal.

Portal Management can be done on the Security page for any deployment. For example, an Elasticsearch deployment looks like this:


Listing of portals in the Security panel.

Elasticsearch supports TCP and SSH access portals, and each class of portal is displayed with its own list of portals with Remove buttons and an Add button at the bottom of the list. When no more of a particular class of portal can be added, the Add button is not displayed. Similarly, when the minimum for a particular class of portal has been reached, the Remove buttons are not displayed.

For deployments where both SSH and TCP/Haproxy portals are available, you can choose any combination of access that suits your business and application needs.

Portal configurations

This table reflects the currently available portal configurations for the Compose databases.

Database        TCP Portals           SSH Portals           Specialized Portals
MongoDB         No                    No                    Mongos routers
Elasticsearch   1 to 3 (Default 2)    0 to 3 (Default 0)    No
Redis           1 to 3 (Default 1)    0 to 3 (Default 0)    No
PostgreSQL      1 to 3 (Default 2)    No                    No
RethinkDB       No                    0 to 3 (Default 0)    RethinkDB proxy, 1 to 2 (Default 1)
RabbitMQ        1 to 3 (Default 2)    No                    No
Scylla¹         1 per node            1 for nodetool        No
MySQL²          1                     No                    No
etcd            1 to 3 (Default 2)    No                    No
  • ¹ ScyllaDB is a special case where nodes and portals are bound together in such a way that it is not possible to vary the number of portals.
  • ² MySQL in beta runs with a single TCP access portal.

TCP/Haproxy Portals

We use haproxy portals to support high-availability configurations on almost all of the database types. The portal tracks the leader of the cluster and redirects connections should the cluster leader become unavailable. The exception is ScyllaDB, where each node has its own portal, and your application has to handle fail-over between nodes. MongoDB and RethinkDB TCP portals are specialized configurations for those database types with knowledge of your configuration, member shards, and nodes; they are called Mongos routers and RethinkDB proxies, respectively.

Another use for the haproxy portals is SSL termination. When you connect to your database over SSL (or HTTPS, for the HTTP-based databases), all the traffic between your application and the portal is encrypted. The portal terminates the SSL connection, and the traffic continues on to the database nodes.

In MongoDB, Elasticsearch, RethinkDB, ScyllaDB and RabbitMQ deployments, handling the SSL through the portal allows us to update certificates and patch vulnerabilities without having to update or restart your databases.

PostgreSQL and MySQL only make use of the high-availability configurations that the haproxy portals offer, as their SSL connections terminate at the database layer.

Redis doesn't offer SSL support, but there is a haproxy portal for high availability. To ensure secure connections to your deployment, you need an SSH tunnel.

SSH Portals

For RethinkDB, Elasticsearch, and Redis deployments, you have the option of securing your deployment and databases through an SSH portal. Connections to your databases are made through an SSH tunnel using public/private key authentication. Any port on your local system can be mapped to any of the hosts and ports of your deployment for flexible integration into your application.

For Redis users, this is currently the only way to connect securely and encrypt traffic to a deployment, as Redis doesn't support SSL/TLS connections.
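As a sketch, opening a tunnel might look like the following. Every name here is a placeholder — the SSH portal user and host, the local port, and the internal host:port of the Redis member all come from your own deployment's overview page. The command is echoed rather than run so you can check it first:

```shell
# Hypothetical values -- substitute the SSH portal user/host and the
# internal host:port shown on your deployment's overview page.
SSH_PORTAL="user@portal.example.compose.io"
LOCAL_PORT=6379
REMOTE="10.0.0.12:6379"   # Redis member as seen inside the private network

# -N: open the tunnel without running a remote command
# -L: map LOCAL_PORT on this machine to REMOTE inside the private network
echo ssh -N -L "$LOCAL_PORT:$REMOTE" "$SSH_PORTAL"
```

Drop the leading `echo` to actually open the tunnel (it stays in the foreground); your application then connects to 127.0.0.1:6379 as if it were the database, for example with `redis-cli -p 6379`.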

Portal Scaling

Vertical Scaling

The work that the haproxy portal does relies mostly on the CPU and network for speed and concurrent connection handling. Each portal starts with 64MB of memory. Depending on your use case and your application's connection load, you may want to scale your portals to add memory in 64MB units.

If you have connection problems and suspect a lack of memory in the haproxy portal, check the logfiles for that haproxy (found on the Logfiles page of your deployment). A log entry will look like this:


Sample error from a haproxy log.

What you will want to look for is a 2-to-4 letter error code. In the log entry above, it is the --NI (which, unrelated to resources, is a cookie error). If a haproxy portal runs out of resources, that string will start with an 'R'. If you see an 'RC' in that string, then the connection to the server was prevented by a lack of resources. The complete reference of haproxy error codes and a full diagram of the log messages can be found in the haproxy documentation.
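To make that flag field concrete, here is a sketch that pulls the termination state out of a single illustrative log line. The line itself is made up, but it follows haproxy's default HTTP log layout; with that layout and a syslog prefix, the flags land in the 15th whitespace-separated field:

```shell
# Illustrative haproxy log line (made up, default HTTP log layout).
line='Feb  6 12:14:14 portal haproxy[14389]: 10.0.1.2:33317 [06/Feb/2017:12:14:14.655] http-in back/srv1 10/0/30/69/109 200 2750 - - --NI 1/1/1/1/0 0/0 "GET / HTTP/1.1"'

# The termination state is the flag field after the two captured-cookie
# fields; with this layout it is the 15th field.
state=$(printf '%s\n' "$line" | awk '{print $15}')

# A leading 'R' means the portal ran out of resources.
case $state in
  R*) echo "resource problem: $state" ;;
  *)  echo "no resource shortage (state $state)" ;;
esac
```

In a real investigation you would run the same extraction over the whole logfile and look for any state beginning with 'R'.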

SSH portals start with 64MB of memory. If you find that your connections exceed these resources, you may add additional memory in 64MB units. If you are troubleshooting connection errors from an application that connects through an SSH tunnel, check the ssh logs (available on the Logfiles page of your deployment) for errors such as "SSH_TUNNEL_ERROR_RESOURCE_SHORTAGE 4 (0x0004) Resource became unavailable while trying to connect".

Horizontal Scaling

Alternatively, you can add portals to your deployment. This gives you extra fail-over options; should one portal become unresponsive or time out, your application or driver can switch connections to another portal. An additional portal can also provide access to a partner or analytics tool. For example, a portal added to an Elasticsearch deployment could be used specifically for Kibana access.
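As a sketch of what client-side fail-over between portals might look like, the snippet below walks a list of portal addresses and picks the first one that answers. The hostnames and ports are hypothetical, and the `nc` probe is a stand-in for whatever reachability check your driver performs:

```shell
# Return 0 if a TCP connection can be made to host $1, port $2.
# nc here is a stand-in; many drivers do this probing for you.
probe_portal() {
  nc -z -w 2 "$1" "$2"
}

# Walk a list of host:port portal addresses and print the first
# one that answers; fail if none do.
first_reachable() {
  for hp in "$@"; do
    if probe_portal "${hp%:*}" "${hp#*:}"; then
      printf '%s\n' "$hp"
      return 0
    fi
  done
  return 1
}

# Hypothetical two-portal deployment:
# first_reachable portal1.example.compose.io:10123 portal2.example.compose.io:10124
```

Drivers that accept multiple hosts in their connection settings do this switching for you; the loop above is only the idea made explicit.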

More information, screenshots, and instructions for scaling portals on a specific deployment type can be found on the database-specific Resources and Scaling page.

Still Need Help?

If this article didn't solve things, summon a human and get some help