Benchmark

Illustrative Database and Cloud Benchmarks

The following interactive plots highlight everyday challenges that need to be addressed when operating distributed databases in the cloud. The outlined results can be achieved with our BaaS platform and ease your decisions by providing comprehensive insights into your cloud-hosted database.

Of course, the outlined challenges represent only a small subset of the evaluation dimensions supported by our BaaS platform.

Cloud Performance Comparison for Operating Databases

Cloud Performance: Can a private OpenStack challenge Amazon Web Services?

The adjoining plots show exemplary performance (throughput/s) measurements of a single-node Apache Cassandra instance operated on different VM types of a private OpenStack and of AWS EC2.

In this case, the results show higher database performance on the OpenStack resources than on the selected EC2 resources.

Of course, this statement cannot be generalized. Rather, it highlights the need to comprehensively benchmark potential cloud resources before operating databases on them.
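If you want to reproduce a comparison like this yourself, the following minimal Python sketch measures the write throughput of a single Cassandra node from a client; run it once per candidate VM type (OpenStack flavor or EC2 instance type). It is not our BaaS harness, and the host address, keyspace and operation count are illustrative assumptions.

```python
# Minimal sketch: measure single-node Cassandra write throughput from a client.
# Run once per candidate VM type. HOST, keyspace and NUM_OPS are assumptions.
import time
import uuid

from cassandra.cluster import Cluster

HOST = "10.0.0.5"   # hypothetical address of the Cassandra VM under test
NUM_OPS = 10_000    # illustrative operation count

cluster = Cluster([HOST])
session = cluster.connect()
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS bench
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("CREATE TABLE IF NOT EXISTS bench.kv (id uuid PRIMARY KEY, payload text)")

insert = session.prepare("INSERT INTO bench.kv (id, payload) VALUES (?, ?)")

start = time.time()
for _ in range(NUM_OPS):
    session.execute(insert, (uuid.uuid4(), "x" * 100))
elapsed = time.time() - start

print(f"{HOST}: {NUM_OPS / elapsed:.0f} ops/s")  # throughput/s as plotted above
cluster.shutdown()
```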


Database Scalability Benchmarking

Scalability: Does Cassandra really scale linearly?

The adjoining plot shows exemplary scalability measurements of Apache Cassandra clusters of different sizes operated on AWS EC2. The measurements are based on a fixed write-heavy workload.

In this case, the results show that Apache Cassandra scales with increasing cluster size, but only up to a cluster size of 7 nodes, as a cluster of this size already saturates the applied workload.

These exemplary results cover only a small part of database scalability benchmarking and highlight the need for in-depth scalability benchmarks that consider growing workload intensities, large-scale database clusters and multiple cloud resource types.
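The scalability check behind such a plot can be sketched in a few lines of Python: compare the measured throughput per cluster size against ideal linear scaling from the smallest cluster. The values below are placeholders, not our actual measurements.

```python
# Minimal sketch: check how close measured throughput comes to linear scaling.
measurements = {
    # cluster size -> measured throughput in ops/s (placeholder values)
    1: 10_000,
    3: 27_000,
    5: 43_000,
    7: 52_000,
    9: 52_500,  # no further gain: the fixed workload is already saturated
}

base_size, base_tp = min(measurements.items())
for size, tp in sorted(measurements.items()):
    ideal = base_tp * size / base_size  # perfect linear scaling from the smallest cluster
    efficiency = tp / ideal             # 1.0 means linear scalability
    print(f"{size} nodes: {tp:>6} ops/s, scaling efficiency {efficiency:.0%}")
```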


Database Capacity Planning

Capacity Planning: Using a sledgehammer to crack a nut?

The adjoining plot shows exemplary performance measurements of different Apache Cassandra and Couchbase clusters operated on a private OpenStack under a moderate write-heavy workload.

The results provide two key insights: (i) Couchbase provides significantly higher performance than Apache Cassandra; (ii) larger cluster sizes do not automatically provide higher performance if the applied workload does not saturate a single database instance.

Of course, these insights are only a snapshot of a specific evaluation scenario, and aspects such as the differing consistency models of Apache Cassandra and Couchbase need to be considered.
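A capacity-planning decision based on such measurements can be sketched as follows: keep only the clusters that satisfy the required workload and pick the cheapest one instead of the biggest. The throughput values and prices below are hypothetical and not taken from the plot.

```python
# Minimal capacity-planning sketch: pick the smallest cluster that satisfies
# the workload instead of the biggest one. All numbers are hypothetical.
candidates = [
    # (database, nodes, measured ops/s, monthly cost in EUR)
    ("Apache Cassandra", 3, 9_000, 300.0),
    ("Apache Cassandra", 5, 9_200, 500.0),  # barely faster: the workload does not saturate more nodes
    ("Couchbase", 3, 18_000, 300.0),
]

target_ops = 8_000  # required sustained write throughput of the application

suitable = [c for c in candidates if c[2] >= target_ops]
best = min(suitable, key=lambda c: c[3])  # cheapest sufficient cluster
print(f"Smallest sufficient setup: {best[0]} with {best[1]} nodes "
      f"({best[2]} ops/s for {best[3]:.0f} EUR/month)")
```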


Database-specific Benchmarking

Can I get high performance and strong consistency from my database?

The adjoining plot shows exemplary performance measurements of 3-node Apache Cassandra and Couchbase clusters operated on a private OpenStack under a write-heavy workload.

The results show that applying stronger consistency configurations decreases performance for both databases. Yet, the performance decrease for Couchbase is significantly larger (from ~15,000 ops/s to 170 ops/s) than for Apache Cassandra (from ~12,000 ops/s to 3,370 ops/s).

In consequence, comprehensive benchmarks of database runtime configurations are required to avoid performance surprises when changing them.
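For Apache Cassandra, the consistency configuration enters such a benchmark as the consistency level of the executed statements. The following minimal Python sketch (using the DataStax Python driver) runs the same prepared INSERT once with consistency level ONE and once with ALL; the Couchbase counterpart would be its durability settings, which are omitted here. Host, keyspace and operation count are illustrative assumptions.

```python
# Minimal sketch: benchmark the same write path under two Cassandra
# consistency levels (ONE vs. ALL) on a 3-node cluster with replication
# factor 3, so that ALL actually waits for all replicas.
import time
import uuid

from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster

cluster = Cluster(["10.0.0.5"])  # hypothetical contact point of the 3-node cluster
session = cluster.connect()
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS bench_rf3
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
""")
session.execute("CREATE TABLE IF NOT EXISTS bench_rf3.kv (id uuid PRIMARY KEY, payload text)")

insert = session.prepare("INSERT INTO bench_rf3.kv (id, payload) VALUES (?, ?)")

for name, level in (("ONE", ConsistencyLevel.ONE), ("ALL", ConsistencyLevel.ALL)):
    insert.consistency_level = level  # applied to all executions of the prepared statement
    start = time.time()
    for _ in range(5_000):
        session.execute(insert, (uuid.uuid4(), "x" * 100))
    ops = 5_000 / (time.time() - start)
    print(f"consistency {name}: {ops:.0f} ops/s")

cluster.shutdown()
```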


Unified Scoring

A Multi-Dimensional Result Visualization Approach

The adjoining plot shows the visualization approach of our unified score across database performance metrics (throughput, latency and variance) and monthly cloud resource costs for different Apache Cassandra clusters on AWS EC2.

Our unified score allows individual weighting of each metric and supports additional evaluation dimensions such as scalability, elasticity and availability.
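The idea behind the unified score can be sketched as follows: every metric is normalized across the compared clusters, individually weighted and summed into a single score. The weights and measurements below are illustrative assumptions and do not reflect the platform's exact scoring model or data.

```python
# Minimal sketch of a unified score: normalize each metric to [0, 1]
# (1 = best observed cluster), weight it individually and sum per cluster.
LOWER_IS_BETTER = {"latency_ms", "variance", "cost_per_month"}

def normalized(clusters):
    """Scale each metric to [0, 1] across clusters, where 1 is the best value."""
    metric_names = next(iter(clusters.values())).keys()
    result = {name: {} for name in clusters}
    for metric in metric_names:
        values = [m[metric] for m in clusters.values()]
        best = min(values) if metric in LOWER_IS_BETTER else max(values)
        for cluster, metrics in clusters.items():
            value = metrics[metric]
            result[cluster][metric] = best / value if metric in LOWER_IS_BETTER else value / best
    return result

def unified_score(norm_metrics, weights):
    return sum(weights[m] * v for m, v in norm_metrics.items())

weights = {"throughput": 0.4, "latency_ms": 0.3, "variance": 0.1, "cost_per_month": 0.2}

clusters = {  # placeholder measurements for hypothetical Cassandra clusters on EC2
    "3 x m4.large": {"throughput": 9_000,  "latency_ms": 12.0, "variance": 1.5, "cost_per_month": 300.0},
    "5 x m4.large": {"throughput": 14_000, "latency_ms": 10.0, "variance": 1.2, "cost_per_month": 500.0},
}

for cluster, metrics in normalized(clusters).items():
    print(f"{cluster}: unified score {unified_score(metrics, weights):.2f}")
```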

Is this a useful visualization? What do you think?


Are you interested in similar benchmarks for MongoDB, CockroachDB, ... or MS Azure, Telekom Cloud, ... ?