Running the CodeSign Protect Benchmark tool
The CodeSign Protect Benchmark tool measures the expected performance of CodeSign Protect in your environment. The tool can be configured to perform the tasks that are typically done by the CodeSign Protect Client and measure the duration of each task.
NOTE This tool is available only on Windows clients.
Venafi provides a default set of benchmark tasks that use CodeSign Protect’s signing backend and can be used to get a comparative measure of the deployed hardware. Additionally, you can create custom benchmark tasks to measure the load a system or a particular environment can tolerate. This enables you to measure resource utilization, identify any bottlenecks, and ensure optimal performance.
Running the default benchmarks
To run the default benchmark tests, follow these steps:
1. Import the test environments, users, and keys into your configuration.

   CodeSign Protect ships with an XML file that defines the objects required for the default benchmark test. Import it using the Trust Protection Platform TppTool utility from the command line on your Trust Protection Platform server:

   <InstallDir>\Platform>TppTool.exe -u <master-admin-username> -i "<InstallDir>\Utilities\CodeSign Protect\CodeSigningBenchmarkObjects.xml"
   NOTE: <InstallDir> is the location of your Trust Protection Platform installation (typically C:\Program Files\Venafi).

   After running this command, you are prompted for your master administrator password. If the configuration is imported successfully, TppTool responds with 'Schema updated successfully.'
   This creates the following objects:

   - Seven local users: benchmark-admin, benchmark-approver, benchmark-auditor, benchmark-certowner, benchmark-owner, benchmark-per-user1, and benchmark-user
   - One local user group, benchmark-users, which contains only benchmark-per-user1
   - Three CodeSign Protect projects (Benchmark Single, Benchmark Per-User, and Benchmark Keys) with the local benchmark user identities assigned:
     - The Benchmark Single project has the following environment types: Certificate, CSP, .Net, and GPG
     - The Benchmark Per-User project has one Certificate and one GPG per-user environment
     - The Benchmark Keys project has eight environments, one for each of the supported key types: ECCP256, ECCP384, ECCP521, ED25519, RSA1024, RSA2048, RSA3072, and RSA4096
2. Enable HSM access for CodeSignBenchmark.

   Open a browser, go to https://<yourserver>/aperture/codesign/projects/, and click the Benchmark Single project. On the project page, enter something into the description field and click Save. This automatically grants API access permissions to the user (imported in step 1). Repeat this step for the Benchmark Per-User project.
3. Generate the benchmark configuration file.

   The CodeSign Protect Benchmark tool takes its configuration from a JSON file. Generate the default configuration file by running:

   "<InstallDir>\Utilities\CodeSign Protect\CodeSignBenchmark" -default:<yourhost>

   Substitute <yourhost> with the hostname of the Trust Protection Platform server that is running the HSM backend.

   Running this command places a file named benchmark-config.json in <InstallDir>\Utilities\CodeSign Protect\BenchmarkResults, or in your user AppData folder.
The configuration is now in place to run the default benchmark tasks. To perform the tasks, run:

"<InstallDir>\Utilities\CodeSign Protect\CodeSignBenchmark"

The tool writes all results to the OutputDir configured in the JSON configuration file, by default <InstallDir>\Utilities\CodeSign Protect\BenchmarkResults.
NOTE For more accurate results, it is best to run the Benchmark tool from a CodeSign Protect client machine that will be performing signing operations. You can do so by copying the <InstallDir>\Utilities\CodeSign Protect folder somewhere onto your CodeSign Protect Windows client machine and running CodeSignBenchmark.exe.
Defining Tests in GUI
The CodeSignBenchmarkGUI.exe tool, which is installed by default under <InstallDir>\Utilities\CodeSign Protect\, enables you to create new benchmark suites and edit existing ones. To create a new suite, run the tool, click Set to configure the server, username, and password for OAuth, and start adding tasks.
Understanding the Data
- A configuration file contains a Suite of Tasks.
- A Task is a single command (for example, "Sign this hash") that is sent to the server. A single task can execute that command multiple times, controlled by the RepeatCount configuration value.
- Tasks are executed in parallel via multiple threads. The WorkCount configuration value determines how many threads run in parallel.
- To avoid outliers, the Work (that is, the threads performing the tasks) can be done multiple times. The RunCount configuration value determines how many times.
- To prevent effects such as initial cache priming on the server from skewing the results, the tool performs the Work once for warmup before measuring.
- The Benchmark tool measures the duration of a Task. If multiple threads are performing the task, the measurement represents the slowest thread. If multiple runs of the Work are done, the measured time is the average of all runs.
- All measurements are in milliseconds. The smaller a measurement, the better the performance.
- The total number of commands the server will see is RepeatCount * WorkCount * (RunCount + 1), where the +1 accounts for the warmup round.
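The formula above can be checked with a few lines of Python; the configuration values below are illustrative, not product defaults:

```python
# Illustrative configuration values, not product defaults.
repeat_count = 100  # commands sent per thread
work_count = 4      # threads running each task in parallel
run_count = 5       # measured runs of the Work

# Total commands the server sees, including one warmup round.
total_commands = repeat_count * work_count * (run_count + 1)
print(total_commands)  # 2400
```

With these values the server would process 2,400 commands over the course of the suite, so keep the formula in mind when sizing a benchmark against a production HSM backend.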
Select JSON Configuration Values
| Value | Description |
|---|---|
| RunCount | The number of times the work is performed and measured |
| RepeatCount | The number of times the command is sent per thread |
| WorkCount | The number of threads that simultaneously perform each task |
| Host | The FQDN or IP address of the HSM backend; must match the presented certificate if HTTPS is used |
| Username | The username presented to the authentication server to obtain an OAuth token |
| Password | The password presented to the authentication server to obtain an OAuth token |
| OutputDir | The directory in which results are stored |
| TestName | The base name for the benchmark suite being run |
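Putting the table together, a benchmark-config.json might look like the sketch below. The key names come from the table above, but the values and the overall file layout are assumptions; treat the file produced by the -default option as the authoritative template.

```json
{
  "Host": "tpp.example.com",
  "Username": "benchmark-user",
  "Password": "example-password",
  "RunCount": 5,
  "RepeatCount": 100,
  "WorkCount": 4,
  "OutputDir": "C:\\Program Files\\Venafi\\Utilities\\CodeSign Protect\\BenchmarkResults",
  "TestName": "Default Benchmark"
}
```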
Understanding the Output Files
The Benchmark tool will create multiple output files. All files are stored in CSV format so they can be easily imported into standard spreadsheet tools.
<TestName> yyyy-mm-dd hh.MM.csv
This file holds a record of every task run, along with the results and relevant configuration values. It serves as a record of the individual test run; however, it is not well suited for charting in a spreadsheet.
<TestName>.csv
This file is created if it does not already exist; otherwise, new results are appended on each subsequent run. It has a variable number of columns:

- The first column holds the date/time a benchmark suite was run.
- All subsequent columns represent a task that was run, with the value being the average number of milliseconds the task took to complete.

Because it is appended to on every run, this file serves as a cumulative record of the benchmark suite, which is particularly useful when graphing with a spreadsheet application.
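A file with the described layout can also be post-processed programmatically. In the sketch below, the header names, timestamps, and timings are invented for illustration; only the column structure (date/time first, then one average per task) comes from the description above:

```python
import csv
import io

# Invented sample matching the described layout: the first column is the
# date/time of the suite run, the remaining columns are per-task averages
# in milliseconds.
sample = """\
Timestamp,SignHash-RSA2048,SignHash-ECCP256
2024-01-10 09.15,412.5,198.2
2024-01-11 09.15,407.9,201.4
"""

rows = list(csv.reader(io.StringIO(sample)))
header, data = rows[0], rows[1:]

# Average each task column across all recorded suite runs.
for col, task in enumerate(header[1:], start=1):
    values = [float(row[col]) for row in data]
    print(task, sum(values) / len(values))
```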
<Task>.csv
This file is created if it does not already exist; otherwise, new results are appended on each subsequent run. It has two columns:

- The first column holds the date/time a benchmark suite was run.
- The second column holds the time in milliseconds the task took.

Each row represents a run of the individual task, which makes the file easy to graph with a spreadsheet application.
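Beyond graphing, the two-column per-task file lends itself to quick scripted checks. The timestamps and timings below are invented for illustration; only the two-column shape is taken from the description above:

```python
import csv
import io

# Invented sample matching the two-column layout: date/time of the
# suite run, then the task duration in milliseconds.
sample = """\
2024-01-10 09.15,412.5
2024-01-11 09.15,407.9
2024-01-12 09.15,399.1
"""

runs = [(ts, float(ms)) for ts, ms in csv.reader(io.StringIO(sample))]

# Measurements are in milliseconds, so lower is better; report the
# fastest recorded run of this task.
fastest = min(runs, key=lambda run: run[1])
print(fastest)  # ('2024-01-12 09.15', 399.1)
```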
<TestName>.png
If GnuPlot is installed on the system running CodeSignBenchmark, it is detected automatically, and in addition to the CSV result files, a plot of the test results over time is generated. The location of GnuPlot is configurable in the JSON configuration file via the GnuPlotPath setting. If GnuPlot is installed but plotting is not desired, set the GenerateGnuPlot JSON configuration option to false. GnuPlot can be obtained from http://gnuplot.sourceforge.net/download.html.