Cloud Benchmarking

Measuring the performance of cloud systems

Figure: Two identical cloud instances from AWS exhibit wildly different I/O performance

With the cloud, it is hard to know precisely what performance you are paying for. Even computing resources that are configured identically (and cost the same) may differ dramatically in actual performance when used.

Not only does your performance inherently depend on what other tenants are doing on the same servers; the underlying hardware itself may also vary from instance to instance or service to service.
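To make this variability concrete, here is a minimal sketch of the kind of sequential-write micro-benchmark one could run on two "identically" configured instances to compare their I/O performance. The file size, block size, and repetition count are illustrative choices, not parameters from our studies.

```python
import os
import tempfile
import time


def sequential_write_mbps(total_mb: int = 256, block_kb: int = 1024) -> float:
    """Write total_mb of data in block_kb chunks and return throughput in MB/s."""
    block = os.urandom(block_kb * 1024)
    blocks = (total_mb * 1024) // block_kb
    with tempfile.NamedTemporaryFile() as f:
        start = time.perf_counter()
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force data to disk so we measure the device, not the page cache
        elapsed = time.perf_counter() - start
    return total_mb / elapsed


if __name__ == "__main__":
    # Run the same benchmark several times; on two "identical" instances,
    # the reported throughput can differ substantially.
    for i in range(3):
        print(f"run {i + 1}: {sequential_write_mbps():.1f} MB/s")
```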

ICET-lab has conducted extensive benchmarking studies of real-life cloud providers, particularly Infrastructure-as-a-Service (IaaS) clouds such as AWS EC2 and Function-as-a-Service (serverless) environments such as AWS Lambda. In doing so, we have pioneered tools and methods that help customers better predict what to expect from the services they use.

For Infrastructure-as-a-Service clouds, we have compared the performance of three industry-leading services (Amazon EC2, Microsoft Azure, and IBM Softlayer), with a particular focus on how predictable performance is. We executed over 50,000 individual benchmark runs, making this the largest study of its kind to our knowledge (Leitner & Cito, 2016).
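One common way to quantify how predictable performance is across repeated runs is the coefficient of variation (standard deviation divided by mean). The sketch below computes it over illustrative, made-up samples; it is not data or code from the study itself.

```python
from statistics import mean, stdev


def coefficient_of_variation(samples: list[float]) -> float:
    """Relative dispersion of repeated benchmark results: stdev / mean."""
    return stdev(samples) / mean(samples)


# Illustrative throughput samples (MB/s) -- NOT data from the actual study.
results = {
    "provider_a": [101.2, 99.8, 100.5, 98.9, 101.0],
    "provider_b": [140.0, 90.3, 125.7, 70.1, 155.2],
}

for provider, samples in results.items():
    cv = coefficient_of_variation(samples)
    print(f"{provider}: mean={mean(samples):.1f} MB/s, CV={cv:.2%}")
```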

To enable this work, we have built a custom open-source cloud benchmarking toolkit dubbed Cloud Workbench (CWB) (Scheuner et al., 2014). CWB is flexible enough to support a wide range of benchmarking studies; for example, we used the same toolkit to study software microbenchmarking in different cloud environments (Laaber et al., 2019).

Figure: Comparison of the performance variation in different cloud providers
Figure: The web-based user interface of Cloud Workbench

For benchmarking serverless systems, we are a member of, and collaborate extensively with, the SPEC Research Group, a worldwide research collaboration under the umbrella of the Standard Performance Evaluation Corporation (SPEC). SPEC is the non-profit consortium behind most of the state-of-the-art benchmarks that computer systems are compared with.

We are currently working on a toolkit for benchmarking serverless applications, along with an empirical study of AWS Lambda performance. More details about this work will be added soon!
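Until then, the following sketch illustrates the basic measurement loop behind such a benchmark: synchronously invoke a function many times and record client-side latency. It assumes boto3 is configured with valid AWS credentials, the function name is a placeholder, and it deliberately ignores details such as separating cold from warm starts.

```python
import json
import time

import boto3  # AWS SDK for Python

client = boto3.client("lambda")


def invoke_latency_ms(function_name: str, payload: dict) -> float:
    """Invoke a Lambda function synchronously and return client-side latency in ms."""
    start = time.perf_counter()
    response = client.invoke(
        FunctionName=function_name,
        Payload=json.dumps(payload).encode("utf-8"),
    )
    elapsed = (time.perf_counter() - start) * 1000
    assert response["StatusCode"] == 200
    return elapsed


if __name__ == "__main__":
    # "my-benchmark-fn" is a placeholder; deploy your own function first.
    # Note: the first call typically includes a cold start, inflating its latency.
    latencies = [invoke_latency_ms("my-benchmark-fn", {"n": 1000}) for _ in range(20)]
    print(f"min={min(latencies):.1f} ms, max={max(latencies):.1f} ms")
```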

Contacts:

Dr. Joel Scheuner

Dr. Philipp Leitner


  1. Patterns in the Chaos - A Study of Performance Variation and Predictability in Public IaaS Clouds
    Philipp Leitner and Jürgen Cito
    ACM Transactions on Internet Technology, Apr 2016
  2. Cloud WorkBench - Infrastructure-as-Code Based Cloud Benchmarking
    Joel Scheuner, Philipp Leitner, Jürgen Cito, and Harald Gall
    In Proceedings of the 6th IEEE International Conference on Cloud Computing Technology and Science (CloudCom’14), Apr 2014
  3. Software microbenchmarking in the cloud. How bad is it really?
    Christoph Laaber, Joel Scheuner, and Philipp Leitner
    Empirical Software Engineering, Apr 2019