PerfKit Benchmarker

PerfKit Benchmarker is an open effort to define a canonical set of benchmarks to measure and compare cloud offerings. Please review the Licensing section before continuing.

Installation

Before you can run the PerfKit Benchmarker, you need to establish account credentials on the cloud provider you want to benchmark. For ProfitBricks, make sure you have signed up for an account.

You also need the software dependencies, which are mostly command line tools and credentials to access your accounts without a password. The following steps should help you get the CLI tool auth in place.

If you are running on Windows, you will need to install GitHub for Windows, since it includes tools like openssl and an ssh client. Alternatively, you can install Cygwin, which includes the same tools.

Install Python 2.7 and pip

If you are running on Windows, get the latest version of Python 2.7. This should have pip bundled with it. Make sure your PATH environment variable is set so that you can use both python and pip on the command line (you can have the installer do it for you if you select the correct option).

Most Linux distributions and recent Mac OS X versions already have Python 2.7 installed. If Python is not installed, you can likely install it using your distribution's package manager, or see the Python Download page.

If you need to install pip, see these instructions.

(Windows Only) Install GitHub for Windows

Instructions: https://windows.github.com/

Make sure that openssl/ssh/scp/ssh-keygen are on your path (you will need to update the PATH environment variable). The path to these commands should be:

C:\Users\<user>\AppData\Local\GitHub\PortableGit_<guid>\bin

Install PerfKit

Download PerfKit Benchmarker from GitHub. Support for ProfitBricks as a cloud provider was added in release 1.7.0, so please download release v1.7.0 or newer.

Install Dependencies

$ cd /path/to/PerfKitBenchmarker
$ sudo pip install -r requirements.txt

ProfitBricks Cloud Account Setup

Get started by running:

$ sudo pip install -r perfkitbenchmarker/providers/profitbricks/requirements.txt

PerfKit Benchmarker uses the Requests module to interact with ProfitBricks' Cloud API. HTTP Basic authentication is used to authorize access to the API. Please set this up as follows:

Create a configuration file containing the email address and password associated with your ProfitBricks account, separated by a colon.

Example:

$ less ~/.config/profitbricks-auth.cfg
email:password
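For example, the file can be created directly from the shell; the address and password below are placeholders for your own credentials, and restricting the file's permissions keeps them private:

```shell
# Create the credentials file (email:password on a single line).
# "user@example.com" and "secret" are placeholders, not real credentials.
mkdir -p ~/.config
printf '%s' 'user@example.com:secret' > ~/.config/profitbricks-auth.cfg
chmod 600 ~/.config/profitbricks-auth.cfg   # readable by you only
```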

The PerfKit Benchmarker will automatically base64 encode your credentials before making any calls to the Cloud API.
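You can reproduce the encoding yourself if you want to see what is sent: an HTTP Basic Authorization header is just `Basic ` followed by the base64 of `email:password`. The credentials below are the same placeholders as above:

```shell
# base64-encode placeholder credentials, as is done for an
# "Authorization: Basic <value>" header.
printf '%s' 'user@example.com:secret' | base64
# → dXNlckBleGFtcGxlLmNvbTpzZWNyZXQ=
```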

PerfKit Benchmarker uses the file location ~/.config/profitbricks-auth.cfg by default. You can use the --profitbricks_config flag to override the path.

Run a Single Benchmark

The following example shows how to run the iperf benchmark:

$ ./pkb.py --cloud=ProfitBricks --machine_type=Small --benchmarks=iperf

PerfKit Benchmarker will authenticate against the ProfitBricks Cloud API and provision a Virtual Data Center and the resources necessary to perform the benchmark run. The resources are automatically removed when the benchmark run completes. Detailed information about the run is written to a log file whose location is printed at the conclusion of the benchmark run.

Run Windows Benchmarks

You must be running on a Windows machine in order to run Windows benchmarks. Install all dependencies as above, and set TrustedHosts to accept all hosts so that you can open PowerShell sessions with the VMs. (Having both machines in each other's TrustedHosts lists is necessary but not sufficient to issue remote commands; valid credentials are still required.)

set-item wsman:\localhost\Client\TrustedHosts -value *

Now you can run Windows benchmarks by running with --os_type=windows. Windows has a different set of benchmarks than Linux does. They can be found under perfkitbenchmarker/windows_benchmarks/. The target VM OS is Windows Server 2012 R2.

Run All Standard Benchmarks

Run without the --benchmarks parameter and every benchmark in the standard set will run serially, which can take a couple of hours (equivalently, run with --benchmarks="standard_set"). Additionally, if you don't specify --cloud=..., all benchmarks run on the Google Cloud Platform by default.

Run All Benchmarks in a Named Set

Named sets are groupings of one or more benchmarks in the benchmarking directory. This feature allows parallel innovation of what is important to measure in the Cloud, and is defined by the set owner. For example, the GoogleSet is maintained by Google, whereas the StanfordSet is managed by Stanford. Once a quarter a meeting is held to review all the sets to determine which benchmarks should be promoted to the standard_set. The Standard Set is also reviewed to see if anything should be removed.

To run all benchmarks in a named set, specify the set name in the benchmarks parameter (e.g., --benchmarks="standard_set"). Sets can be combined with individual benchmarks or other named sets.

Useful Global Flags

The following are some common flags used when configuring PerfKit Benchmarker.

Flag              Notes
--help            See all flags.
--benchmarks      A comma-separated list of benchmarks or benchmark sets to run, such as --benchmarks=iperf,ping. To see the full list, run ./pkb.py --help.
--cloud           Cloud where the benchmarks are run. For ProfitBricks, use --cloud=ProfitBricks.
--machine_type    Type of machine to provision if pre-provisioned machines are not used. Most cloud providers accept the names of pre-defined provider-specific machine types.
--zone            Overrides the default zone. See the table below.
--data_disk_type  Type of disk to use. Names are provider-specific.

The default cloud is 'GCP'; override it with the --cloud flag. Each cloud has a default zone, which you can override with the --zone flag. The flag supports the same values that the corresponding cloud CLIs take:

Cloud name     Default zone   Notes
ProfitBricks   ZONE_1         Additional zones: ZONE_2

Example:

./pkb.py --cloud=ProfitBricks --zone=ZONE_2 --benchmarks=iperf,ping

Licensing

The following is important information regarding licensing of the benchmarks that PerfKit Benchmarker utilizes.

PerfKit Benchmarker provides wrappers and workload definitions around popular benchmark tools. It instantiates VMs on the Cloud provider of your choice, automatically installs benchmarks, and runs the workloads without user interaction.

Due to the level of automation, you will not see prompts for software installed as part of a benchmark run. Therefore you must accept the license of each benchmark individually, and take responsibility for using them, before you use the PerfKit Benchmarker.

In its current release these are the benchmarks that are executed:

Some of the benchmarks invoked require Java. You must also agree with the following license:

CoreMark setup cannot be automated. EEMBC requires users to agree to their terms and conditions, so PerfKit Benchmarker users must manually download the CoreMark tarball from the EEMBC website and save it under the perfkitbenchmarker/data folder (e.g. ~/PerfKitBenchmarker/perfkitbenchmarker/data/coremark_v1.0.tgz).

SPEC CPU2006 benchmark setup cannot be automated. SPEC requires that users purchase a license and agree to their terms and conditions. PerfKit Benchmarker users must manually download cpu2006-1.2.iso from the SPEC website, save it under the perfkitbenchmarker/data folder (e.g. ~/PerfKitBenchmarker/perfkitbenchmarker/data/cpu2006-1.2.iso), and also supply a runspec cfg file (e.g. ~/PerfKitBenchmarker/perfkitbenchmarker/data/linux64-x64-gcc47.cfg). Alternatively, PerfKit Benchmarker can accept a tar file that can be generated with the following steps:

  • Extract the contents of cpu2006-1.2.iso into a directory named cpu2006
  • Run cpu2006/install.sh
  • Copy the cfg file into cpu2006/config
  • Create a tar file containing the cpu2006 directory, and place it under the perfkitbenchmarker/data folder (e.g. ~/PerfKitBenchmarker/perfkitbenchmarker/data/cpu2006v1.2.tgz).

PerfKit Benchmarker will use the tar file if it is present. Otherwise, it will search for the iso and cfg files.
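The packaging steps above can be sketched as a short shell script. Because the SPEC media cannot be redistributed, this sketch uses a stand-in directory tree; with the real ISO you would first populate cpu2006/ by extracting cpu2006-1.2.iso and running cpu2006/install.sh, and the cfg filename here is only an example:

```shell
# Stand-in for the extracted and installed SPEC tree. With real media,
# extract cpu2006-1.2.iso into cpu2006/ and run cpu2006/install.sh first.
mkdir -p cpu2006/config

# Your runspec cfg goes into cpu2006/config (filename is an example).
touch cpu2006/config/linux64-x64-gcc47.cfg

# Create a tar containing the cpu2006 directory; place the result under
# the perfkitbenchmarker/data folder so PerfKit Benchmarker will find it.
tar czf cpu2006v1.2.tgz cpu2006
```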

Support

We have demonstrated how to install PerfKit Benchmarker and how to use it. The project itself contains additional information in its included README. The wiki also contains detailed information about the PerfKit Benchmarker project.

If you have questions or comments on using PerfKit Benchmarker with ProfitBricks resources, please post a question in the Community section of this site.

Issues can also be reported at GitHub PerfKit Benchmarker.