Copyright (C) 2022 Broadcom. All rights reserved.
The term "Broadcom" refers solely to the Broadcom Inc. corporate affiliate that
owns the software below. This work is licensed under the OpenAFC Project
License, a copy of which is included with this software program.
# afc_load_tool.py - script for AFC load test
## Table of contents
- Overview
- Requests, messages
- Prerequisites
- Population Database preparation
- Config file
- afc_load_tool.py
- Usage example
## Overview
afc_load_tool.py is a script that sends AFC Requests to the AFC Server in several parallel streams. It is designed to be used standalone.
Several instances of this script may be used simultaneously (e.g. from different locations); however, the script does not (yet?) provide orchestration or statistics aggregation for this use case.
The major AFC performance bottleneck is the AFC computation itself, so to prevent test results from being determined by this slowest component, the script can preload the response cache with (fake) results, so that a subsequent load test uses these fake results and is not constrained by the AFC computations' performance.
This script requires Python 3.6+. It is desirable to have the pyyaml module installed, but this requirement may be circumvented by converting the YAML config file to JSON (e.g. with the json_config subcommand or some online converter) and then using that JSON config file on a system without pyyaml installed.
## Requests, messages
It is important to distinguish an AFC Request Message from an AFC Request.
An AFC Request Message is a message sent to the AFC Server. It may contain several AFC Requests - more than one if sent from, say, an AFC Aggregator. afc_load_tool.py supports putting more than one request into a message (by means of the --batch command line parameter).
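The batching performed by --batch can be illustrated with a short sketch. Field names below follow the WFA AFC interface spec; the tool itself takes the exact message layout from the req_msg_pattern config section, so treat this as an approximation, not the tool's actual code:

```python
def make_request_message(requests):
    """Wrap one or more AFC Requests into a single AFC Request Message.

    Field names are as in the WFA AFC interface spec; the real tool
    builds messages from its req_msg_pattern config section."""
    return {
        "version": "1.4",  # assumed spec version, for illustration only
        "availableSpectrumInquiryRequests": requests,
    }

# A --batch 3 run puts three requests into one message:
msg = make_request_message([{"requestId": str(i)} for i in range(3)])
```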
Each AFC Request generated by afc_load_tool.py has AP coordinates and a Serial Number determined by its Request Index (an integer value). Thus it is important for the set of Request Indices (--idx_range command line parameter) used during a load test to be the same as (or a subset of) the set of Request Indices used during preload.
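How such a deterministic mapping may look can be sketched as follows. This is a hypothetical illustration (the function name, grid parameters, and serial format are invented); the tool's actual formulas are internal to it:

```python
def request_params(idx, min_lat=33.0, min_lon=-118.0, grid=1000, step=0.001):
    """Hypothetical sketch: derive a Serial Number and AP coordinates
    deterministically from a Request Index, so that the same --idx_range
    used for preload and for load yields the same requests."""
    serial = f"LOADTEST_{idx}"
    lat = min_lat + (idx // grid) * step  # row in a grid of `grid` columns
    lon = min_lon + (idx % grid) * step   # column within the row
    return serial, lat, lon

# The same index always produces the same request parameters:
assert request_params(42) == request_params(42)
```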
## Prerequisites
afc_load_tool.py is a Python 3 script, successfully tried on Python 3.7 (more is better, as always).
Besides, it has the following dependencies that are not part of the Python standard distribution:
- yaml module. Required. Typically it's already installed; if not, it can be installed with `pip install pyyaml` (this works even on Alpine)
- requests module. Optional (used for the --no_reconnect switch). Also typically already installed; if not, it can be installed with `pip install requests` (this works even on Alpine)
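The pyyaml fallback described above (YAML config where pyyaml is available, JSON config elsewhere) can be sketched like this; it's a minimal illustration, not the tool's actual loading code:

```python
import json

def load_config(filename):
    """Load a YAML config if pyyaml is available, otherwise fall back
    to JSON (which is how the tool can run without pyyaml installed)."""
    with open(filename, encoding="utf-8") as f:
        text = f.read()
    try:
        import yaml  # optional dependency
        return yaml.safe_load(text)
    except ImportError:
        return json.loads(text)  # JSON config converted beforehand
```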
## Population Database preparation
The load subcommand may distribute request locations proportionally to population (to test AFC Engine performance in noncached mode). It does this by means of the --population DATABASE_NAME switch, where DATABASE_NAME is the name of an SQLite3 file.
This SQLite file (the population database) is prepared from some GDAL-compatible population density 'image' file (e.g. for the USA, from the 2020 1x1 km population density map taken from WorldPop Hub. YMMV).
The population database is prepared from the source GDAL-compatible image file by means of the tools/geo_converters/make_population_db.py script. This script requires GDAL libraries and utilities to be installed, so most likely one would want to run it from a container. An image for the container may be built with tools/geo_converters/Dockerfile. To avoid muddling the explanation with file mappings, access rights, etc., the following text assumes that this script is executed without (or completely within) the container.
General format:
```
make_population_db.py [OPTIONS] SRC_IMAGE_FILE DST_DATABASE_FILE
```
Where options are:
| Option | Meaning |
|---|---|
| --resolution ARC_SECONDS | Resolution of the resulting database in latitude/longitude arc seconds. Default is 60, which, as of this writing, seems pretty adequate |
| --center_lon_180 | Better be set for countries (like the USA) that span both the eastern and western hemispheres. It should be determined automagically, but isn't, sorry |
| --overwrite | Overwrite target database file if it exists |
Example:
```
make_population_db.py --center_lon_180 usa_pd_2020_1km.tif usa_pd.sqlite3
```
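Population-proportional position selection, as later used by load --population, boils down to weighted sampling of grid cells. A sketch under stated assumptions (the population database schema is not documented here, so cells are shown as plain (lat, lon, density) tuples; the real tool reads them from the SQLite file):

```python
import random

def sample_positions(cells, n, rng=None):
    """Sketch of population-proportional sampling: pick grid cells with
    probability proportional to their population density.

    `cells` is a list of (lat, lon, density) tuples; the actual tool
    reads them from the SQLite population database (schema not shown)."""
    rng = rng or random.Random()
    positions = [(lat, lon) for lat, lon, _ in cells]
    weights = [density for _, _, density in cells]
    return rng.choices(positions, weights=weights, k=n)

cells = [(34.05, -118.24, 8000), (36.5, -117.0, 2)]  # dense city vs desert
picks = sample_positions(cells, 1000, random.Random(1))
# The densely populated cell dominates the sample
```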
## Config file
Some important constants used by the script are stored outside of it, in a config file. This is a YAML (or JSON) file, by default having the same name and location as the script, with a .yaml extension (on systems without the pyyaml Python module installed - with a .json extension).
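The default config path derivation described above can be sketched in a couple of lines (an illustration of the rule, not the tool's actual code):

```python
import os

def default_config_path(script_path, have_yaml=True):
    """Default config file: same directory and base name as the script,
    with a .yaml extension (.json on systems without pyyaml)."""
    base, _ = os.path.splitext(script_path)
    return base + (".yaml" if have_yaml else ".json")

assert default_config_path("/opt/afc/afc_load_tool.py") == "/opt/afc/afc_load_tool.yaml"
```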
The file content is, hopefully, self-descriptive, so here is just a brief overview of its sections.
| Section | Content |
|---|---|
| defaults | Default values for performance-related script parameters (index range, number of repetitions, etc. - see comments in the config file and the script help message) |
| region | Region for which requests will be generated: rectangle, grid density, Ruleset ID (AFC's regulatory domain identifier), Certification ID (AFC's manufacturer identifier - must be registered in RAT DB for used Ruleset ID) |
| req_msg_pattern | AFC Request Message pattern |
| resp_msg_pattern | AFC Response Message pattern |
| paths | Points of modifications in Request/Response Message patterns |
| channels_20mhz | List of 20 MHz channels, used to make preloaded AFC Responses unique |
| rest_api | (Semi)default URLs for REST APIs used by script |
It is possible to use several config files merged together - e.g. one full config file and several config files with default and/or regional parameters.
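Merging several config files can be sketched as a recursive dictionary merge, later files overriding earlier ones. The exact merge semantics of the tool may differ in details, and the ruleset values below are hypothetical placeholders:

```python
def merge_configs(base, override):
    """Sketch of config-file merging: dictionaries are merged
    recursively, with the later (override) file winning on conflicts.
    The tool's actual merge rules may differ in details."""
    result = dict(base)
    for key, value in override.items():
        if isinstance(result.get(key), dict) and isinstance(value, dict):
            result[key] = merge_configs(result[key], value)
        else:
            result[key] = value
    return result

# Hypothetical values, for illustration only:
full = {"defaults": {"parallel": 20, "batch": 1}, "region": {"ruleset": "US_RULESET"}}
brazil = {"region": {"ruleset": "BR_RULESET"}}
merged = merge_configs(full, brazil)
```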
This script is provided with a default config file in YAML format (as it is more human-readable and allows comments). On systems without the pyyaml Python module installed, JSON config files are to be used. Conversion may be performed with the json_config subcommand (which, of course, should be executed on a system with pyyaml installed) or with some online tool.
Since JSON is a subset of YAML (i.e. every valid JSON file is a valid YAML file), JSON config files may also be used on systems with pyyaml installed (however, note that the default config file name on such systems has a .yaml extension).
## afc_load_tool.py
General invocation format:
```
afc_load_tool.py [--config [+]FILENAME] ... SUBCOMMAND [PARAMETERS]
```
--config may be specified several times or not at all (in which case the default one is used). If a config file name is prefixed with +, it is used in addition to (not instead of) the default config. E.g.
```
afc_load_tool.py --config +brazil.yaml preload
```
might be used when brazil.yaml defines the region section for Brazil, whereas the rest of the config data comes from the default config file.
Without parameters the script prints the list of subcommands. Help on a specific subcommand may be printed by means of the help subcommand. E.g. help for the preload subcommand may be printed as follows:
```
afc_load_tool.py help preload
```
Note that, due to Python idiosyncrasies, --help only prints help on the --config parameter.
### preload subcommand
Preloads Rcache with fake responses for the given range of request indices.
This causes a subsequent load test to exercise only the performance of dispatcher+msghnd+rat_db+bulk_postgres, without drowning in the slowness of the AFC Engine.
Parameters:
| Parameter | Default in Config | Meaning |
|---|---|---|
| --idx_range FROM-TO | 0-1000000 | Range of request indices to preload to cache. Note that TO is the 'after-last' index, i.e. 0-1000000 means [0-999999] |
| --parallel N | 20 | Number of parallel streams (may speed up operation, and is entertaining in general) |
| --batch N | 1 | Number of requests per message (may speed up operation, and is entertaining in general) |
| --backoff SEC | 0.1 | Initial backoff (in seconds) if a request failed. Useful in wrestling with DDoS protection |
| --retries N | 5 | Number of retries to make if a request fails. Useful in wrestling with DDoS protection |
| --status_period N | 1000 | Print intermediate statistics once per this number of requests; 0 to never print them |
| --dry | | Don't actually send anything to servers. Useful to determine the overhead of this script itself |
| --comp_proj PROJ | | Docker Compose project name (part before the service name in container names), used to determine IPs of the services being used (rat_server and rcache for this subcommand) |
| --rat_server HOST[:PORT] | rat_server | IP address and, optionally, port of the rat_server service. If not specified, the rat_server service of the compose project specified by --comp_proj is used |
| --rcache HOST:PORT | rcache:8000 | IP address and port of the rcache service. If not specified, the rcache service of the compose project specified by --comp_proj is used |
| --protect_cache | | Protect rcache from invalidation (by the ULS downloader - not doing so may divert some requests to the AFC Engine during load and thus degrade performance). The cache may be unprotected later with the cache --unprotect subcommand |
| --no_reconnect | | Each sender process establishes a permanent connection to the Rcache service and sends Rcache update requests over it. Requires the requests Python module to be installed. Default is to establish a new connection for every update request sent |
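The --backoff/--retries interplay can be sketched as a retry loop. Doubling the delay after each failure is an assumption here ("initial backoff" suggests a growing delay, but the tool's actual growth policy is not documented in this text):

```python
import time

def send_with_retries(send, retries=5, backoff=0.1):
    """Sketch of the behavior behind --retries/--backoff: retry a failed
    request up to `retries` times, waiting `backoff` seconds before the
    first retry (doubling thereafter is an assumption)."""
    delay = backoff
    for attempt in range(retries + 1):
        try:
            return send()
        except Exception:
            if attempt == retries:
                raise  # retries exhausted
            time.sleep(delay)
            delay *= 2
```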
### load subcommand
Do the load test. Several processes simultaneously send AFC Request messages to dispatcher (front end) or msghnd (back end), waiting for responses.
Parameters:
| Parameter | Default in Config | Meaning |
|---|---|---|
| --idx_range FROM-TO | 0-1000000 | Range of request indices to use (supposedly they have been preloaded to rcache with the preload subcommand). Note that TO is the 'after-last' index, i.e. 0-1000000 means [0-999999] |
| --count N | 1000000 | Number of requests to send |
| --parallel N | 20 | Number of parallel streams |
| --batch N | 1 | Number of requests per message |
| --backoff SEC | 0.1 | Initial backoff (in seconds) if a request failed. Useful in wrestling with DDoS protection |
| --retries N | 5 | Number of retries to make if a request fails. Useful in wrestling with DDoS protection |
| --status_period N | 1000 | Print intermediate statistics once per this number of requests; 0 to never print them |
| --dry | | Don't actually send anything to servers. Useful to determine the overhead of this script itself |
| --comp_proj PROJ | | Docker Compose project name (part before the service name in container names), used to determine IPs of the services being used (msghnd for this subcommand) |
| --afc HOST[:PORT] | msghnd:8000 | IP address and, optionally, port of the service to send AFC Requests to. If neither this nor --localhost is specified, requests are sent to msghnd, determined by means of --comp_proj |
| --localhost [http\|https] | | Send AFC Requests to the http (default) or https port of the AFC server (dispatcher service) running on localhost within the project specified by --comp_proj |
| --no_reconnect | | Each sender process establishes a permanent connection to the target server and sends AFC Request messages over it. Requires the requests Python module to be installed. Default is to establish a new connection for every request sent (which is closer to real life, but slows things down and leads to a different set of artifacts) |
| --no_cache | | Force recomputation of each AFC Request |
| --req FIELD1=VALUE1[;FIELD2=VALUE2...] | | Modify AFC Request field(s). Top level is the request (not the message); the path to deep fields is dot-separated (e.g. location.elevation.height). Several semicolon-separated fields may be specified (don't forget to quote such a parameter) and/or this switch may be specified several times. A value may be numeric, a string, or a list (enclosed in [] and formatted per JSON rules) |
| --population POPULATION_DB_FILE | | Choose positions according to population density. POPULATION_DB_FILE is an SQLite3 file prepared with make_population_db.py (see the chapter on it). Note that use of this parameter may produce positions off shore, which the AFC Engine treats as incorrect - that's life... |
| --random | | Choose points randomly - according to the population database (if --population is specified) or within the region rectangle in the config file. Useful for AFC Engine (--no_cache) testing |
| --err_dir DIRECTORY | | Directory to write failed AFC Request messages to. By default errors are reported, but failed requests are not logged |
| --ramp_up SECONDS | | Gradually increase load (starting streams one by one instead of all at once) over this time |
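The FIELD=VALUE syntax of --req (dot-separated paths, semicolon-separated assignments, JSON-formatted values) can be sketched as follows; this is an illustration of the syntax described above, not the tool's actual parser:

```python
import json

def apply_req_overrides(request, spec):
    """Sketch of --req parsing: 'a.b.c=V' sets request['a']['b']['c']
    to V. Several semicolon-separated assignments are allowed; values
    are parsed per JSON rules where possible, else kept as strings."""
    for assignment in spec.split(";"):
        path, value = assignment.split("=", 1)
        try:
            value = json.loads(value)  # numbers, lists, quoted strings
        except json.JSONDecodeError:
            pass  # plain string value
        *keys, last = path.split(".")
        node = request
        for key in keys:
            node = node.setdefault(key, {})
        node[last] = value
    return request

req = apply_req_overrides(
    {}, "location.elevation.height=300;location.elevation.verticalUncertainty=30")
```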
### netload subcommand
Makes repeated GET requests on dispatcher, msghnd, or rcache service.
Note that the reporting unit for this subcommand is a message, whereas for the preload and load subcommands the unit is a request (a single message contains --batch requests).
Parameters:
| Parameter | Default in Config | Meaning |
|---|---|---|
| --count N | 1000000 | Number of requests to send |
| --parallel N | 20 | Number of parallel streams |
| --backoff SEC | 0.1 | Initial backoff (in seconds) if a request failed. Useful in wrestling with DDoS protection |
| --retries N | 5 | Number of retries to make if a request fails. Useful in wrestling with DDoS protection |
| --status_period N | 1000 | Print intermediate statistics once per this number of requests; 0 to never print them |
| --dry | | Don't actually send anything to servers. Useful to determine the overhead of this script itself |
| --comp_proj PROJ | | Docker Compose project name (part before the service name in container names), used to determine IPs of the services being used |
| --afc HOST[:PORT] | | IP address and, optionally, port of the target msghnd or dispatcher service. If neither this nor --localhost is specified, requests are sent to msghnd, determined by means of --comp_proj |
| --rcache HOST:PORT | rcache:8000 | IP address and port of the rcache service. If not specified, the rcache service of the compose project specified by --comp_proj is used |
| --localhost [http\|https] | | Access the AFC server (i.e. dispatcher service) using http (default) or https of the compose project specified by --comp_proj |
| --no_reconnect | | Each sender process establishes a permanent connection to the target server and sends requests over it. Requires the requests Python module to be installed. Default is to establish a new connection for every request sent (which is closer to real life, but slows things down and leads to a different set of artifacts) |
| --target dispatcher\|msghnd\|rcache | Guess attempt | What service to access. If not specified, an attempt to guess is made (dispatcher if --localhost, msghnd if --afc, rcache if --rcache) |
### cache subcommand
A small toolset for controlling rcache - a functional subset of rcache_tool.py, put here for better accessibility.
Parameters:
| Parameter | Default in Config | Meaning |
|---|---|---|
| --comp_proj PROJ | | Docker Compose project name (part before the service name in container names), used to determine IPs of the services being used |
| --rcache HOST:PORT | rcache:8000 | IP address and port of the rcache service. If not specified, the rcache service of the compose project specified by --comp_proj is used |
| --protect | | Protect rcache from invalidation (by the ULS downloader - not doing so may divert some requests to the AFC Engine during load and thus degrade performance) |
| --unprotect | | Remove Rcache protection from invalidation (normal mode of Rcache operation) |
| --invalidate | | Invalidate rcache. If one needs to force the AFC Engine, the --no_cache option of the load subcommand looks like a better alternative |
### afc_config subcommand
Modifies field(s) in the AFC Config for the current region that will be used in subsequent tests (makes sense for AFC Engine, i.e. no-cache, testing).
Parameters:
| Parameter | Meaning |
|---|---|
| --comp_proj PROJ | Docker Compose project name (part before service name in container names) to use to determine IPs of services being used |
| --rat_server HOST[:PORT] | IP address and, optionally, port of the rat_server service. If not specified, the rat_server service of the compose project specified by --comp_proj is used |
| FIELD1=VALUE1 [FIELD2=VALUE2...] | Fields to modify. For deep fields, the path is specified as a dot-separated sequence of keys (e.g. freqBands.0.startFreqMHz) |
### json_config subcommand
Converts the YAML config file to JSON format. This subcommand should be executed on a system with the pyyaml Python module installed; the resulting JSON file may then be used on systems without pyyaml. Of course, the conversion may also be performed with some online tool.
Invocation:
```
afc_load_tool.py [--config [+]FILENAME.yaml] ... json_config [JSON_FILE]
```
Here JSON_FILE, if specified, is the resulting file name. By default it has the same file name and directory as the script, but with a .json extension.
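What json_config effectively does can be sketched in a few lines (the function name is invented; the subcommand's actual implementation is inside the tool):

```python
import json

def yaml_to_json(yaml_path, json_path):
    """Sketch of YAML-to-JSON config conversion: read the YAML config
    (requires pyyaml) and write it back as JSON, which is then usable
    on systems where pyyaml is absent."""
    import yaml  # pyyaml must be installed for the conversion itself
    with open(yaml_path, encoding="utf-8") as f:
        config = yaml.safe_load(f)
    with open(json_path, "w", encoding="utf-8") as f:
        json.dump(config, f, indent=2)
```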
## Usage example
```
$ docker ps
CONTAINER ID IMAGE ... PORTS NAMES
...
acffe87ff12c public.ecr.aw... 0.0.0.0:374->80/tcp, :::374->80/tcp, 0.0.0.0:458->443/tcp, :::452->443/tcp foo_l2_dispatcher_1
bce6e9629ef1 110738915961.... 80/tcp, 443/tcp foo_l2_rat_server_1
7f76bdee65d8 110738915961.... 8000/tcp foo_l2_msghnd_1
152dd3a42ce7 110738915961.... foo_l2_worker_1
d102db21067d public.ecr.aw... foo_l2_cert_db_1
987115a71c95 public.ecr.aw... foo_l2_als_siphon_1
08e1ec3248d2 public.ecr.aw... foo_l2_rcache_1
803e9b266310 public.ecr.aw... 5432/tcp foo_l2_ratdb_1
7ba8ce112cae public.ecr.aw... 5432/tcp foo_l2_bulk_postgres_1
92fd56d4db75 public.ecr.aw... foo_l2_objst_1
a5c680bb32b6 rcache_tool:f... foo_l2_rcache_tool_1
871849a6e841 public.ecr.aw... 9092/tcp foo_l2_als_kafka_1
efea7249498f public.ecr.aw... foo_l2_uls_downloader_1
b3903f7c2d92 public.ecr.aw... 4369/tcp, 5671-5672/tcp, 15691-15692/tcp, 25672/tcp foo_l2_rmq_1
...
```
The important piece of information here is the Docker Compose project name - the common prefix of the container names: foo_l2.
Now, preload rcache using default parameters from the config file (these parameters will be printed) and protect it from invalidation:
```
$ afc_load_tool.py preload --comp_proj foo_l2 --protect_cache
```
This command uses the abovementioned compose project name foo_l2.
Now, do a load test with default parameters from the config file (these parameters will be printed):
- Send AFC Requests to the HTTP port of the AFC server on the current host (i.e. to the dispatcher service):
  ```
  $ afc_load_tool.py load --comp_proj foo_l2 --localhost
  ```
- Send AFC Requests to the msghnd service (bypassing the dispatcher service):
  ```
  $ afc_load_tool.py load --comp_proj foo_l2
  ```
- Send AFC Requests to the explicitly specified server foo.bar:
  ```
  $ afc_load_tool.py load --afc foo.bar:80
  ```
Upon completion of testing it is recommended to re-enable cache invalidation:
```
$ afc_load_tool.py cache --comp_proj foo_l2 --unprotect
```
Test AFC Server (dispatcher service) network performance (no preload or Rcache protection needed):
```
$ afc_load_tool.py netload --comp_proj foo_l2 --target dispatcher
```
Test AFC Engine (no preload or Rcache protection needed):
- Set Max Link Distance to 200 km:
  ```
  $ afc_load_tool.py afc_config --comp_proj foo_l2 maxLinkDistance=200
  ```
- Do population-density-based testing. No batching (to evaluate the performance of a single AFC), a small report interval (because the AFC Engine is slo-o-o-o-ow), multiple streams (to speed up statistics gathering):
  ```
  $ afc_load_tool.py load --random --population usa_pd.sqlite3 --comp_proj foo_l2 \
      --localhost --status_period 1 --batch 1 --parallel 10 --retries 0 --no_cache
  ```
  Note that this test will produce some positions with 'invalid coordinates' (AFC error code 103) - that's because some positions are generated off shore. This is a known problem and will not be dealt with.
- Ditto, but with a large uncertainty:
  ```
  $ afc_load_tool.py load --random --population usa_pd.sqlite3 --comp_proj foo_l2 \
      --localhost --status_period 1 --batch 1 --parallel 10 --retries 0 --no_cache \
      --req "location.elevation.verticalUncertainty=30;location.elevation.height=300" \
      --req "location.ellipse.minorAxis=150;location.ellipse.majorAxis=150"
  ```