Merge /home/greearb/btbits/x64_btbits/server/lf_scripts

This commit is contained in:
Ben Greear
2021-04-22 10:13:15 -07:00
141 changed files with 24521 additions and 9184 deletions

README.md

@@ -4,6 +4,16 @@ with LANforge systems. On your LANforge system, these scripts are
typically installed into `/home/lanforge/scripts`. The `LANforge/` sub directory holds
the perl modules (`.pm` files) that are common to the perl scripts.
## LANforge CLI Users Guide: https://www.candelatech.com/lfcli_ug.php ##
The LANforge CLI Users Guide is a good place to start for understanding the scripts.
## LANforge on-system CLI help and CLI command composer ##
LANforge provides on-system help and system queries; on a LANforge system, browse to
http://localhost:8080
The on-system CLI help and CLI command composer are at
http://localhost:8080/help
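Since the per-command documentation links used throughout this README follow a fixed pattern (lfcli_ug.php with the command name as the URL fragment), lookups can be scripted. A small Python sketch; the helper names are our own, not part of any LANforge library:

```python
# Build documentation URLs for a LANforge CLI command.
# The lfcli_ug.php#<command> anchor pattern matches the per-command links
# used throughout this README; the on-system help URL assumes the default
# GUI HTTP port 8080 mentioned above.

def user_guide_url(command):
    """URL of the CLI Users Guide section for a given CLI command."""
    return "https://www.candelatech.com/lfcli_ug.php#%s" % command

def on_system_help_url(host="localhost", port=8080):
    """URL of the on-system CLI help / command composer."""
    return "http://%s:%d/help" % (host, port)

print(user_guide_url("add_sta"))
print(on_system_help_url())
```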
### Commonly Used ###
The `lf_*.pl` scripts are typically more complete and general purpose
scripts, though some are ancient and very specific. In particular,
@@ -11,13 +21,13 @@ these scripts are more modern and may be a good place to start:
| Name | Purpose |
|------------------|-----------|
| `lf_associate_ap.pl` | LANforge server script for associating virtual stations to an arbitrary SSID |
| `lf_attenmod.pl` | Query and update CT70X programmable attenuators |
| `lf_firemod.pl` | Query and update connections (Layer 3) |
| `lf_icemod.pl` | Query and update WAN links and impairments |
| `lf_portmod.pl` | Query and update physical and virtual ports |
| `lf_tos_test.py` | Generate traffic at different QoS and report in spreadsheet |
| `lf_sniff.py` | Create packet capture files, especially OFDMA / AX captures |
The `lf_wifi_rest_example.pl` script shows how one might call the other scripts from
within a script.
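In the same spirit, a Python wrapper can shell out to the perl scripts with subprocess. A minimal sketch; the flags shown are placeholders, not verified lf_portmod.pl options:

```python
import subprocess

SCRIPTS_DIR = "/home/lanforge/scripts"  # default install location noted above

def build_cmd(script, *args):
    """Compose the argv for invoking one of the LANforge perl scripts."""
    return ["perl", "%s/%s" % (SCRIPTS_DIR, script)] + list(args)

def run_lf_script(script, *args):
    """Run the script and return its stdout; raises on a non-zero exit."""
    out = subprocess.run(build_cmd(script, *args),
                         capture_output=True, text=True, check=True)
    return out.stdout

# Argument names below are illustrative only:
print(build_cmd("lf_portmod.pl", "--manager", "localhost"))
```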
@@ -25,41 +35,96 @@ within a script.
### Examples and Documents ###
Read more examples in the [scripting LANforge](http://www.candelatech.com/lfcli_api_cookbook.php) cookbook.
## Python Scripts ##
When starting to use Python, please run the `update_dependencies.py` script located in `py-scripts` to install all necessary dependencies for this library.
### Python Scripts py-json/LANforge ###
Core communication files to LANforge
| Name | Purpose |
|------|---------|
| `add_dut.py` | Defined list of DUT keys; cli equivalent: add_dut https://www.candelatech.com/lfcli_ug.php#add_dut |
| `add_file_endp.py` | Add a File endpoint to the LANforge Manager. cli equivalent: add_file_endp https://www.candelatech.com/lfcli_ug.php#add_file_endp |
| `add_l4_endp.py` | Add a Layer 4-7 (HTTP, FTP, TELNET, ..) endpoint to the LANforge Manager. cli equivalent: add_l4_endp https://www.candelatech.com/lfcli_ug.php#add_l4_endp |
| `add_monitor.py` | Add a WIFI Monitor interface. These are useful for doing low-level wifi packet capturing. cli equivalent: add_monitor https://www.candelatech.com/lfcli_ug.php#add_monitor |
| `add_sta.py` | Add a WIFI Virtual Station (Virtual STA) interface. cli equivalent: add_sta https://www.candelatech.com/lfcli_ug.php#add_sta |
| `add_vap.py` | Add a WIFI Virtual Access Point (VAP) interface. cli equivalent: add_vap https://www.candelatech.com/lfcli_ug.php#add_vap |
| `lfcli_base.py` | json communication to LANforge |
| `LFRequest.py` | Class holds default settings for json requests to LANforge, see: https://gist.github.com/aleiphoenix/4159510|
| `LFUtils.py` | Defines useful common methods |
| `set_port.py` | This command allows you to modify attributes on an Ethernet port. These options includes the IP address, netmask, gateway address, MAC, MTU, and TX Queue Length. cli equivalent: set_port https://www.candelatech.com/lfcli_ug.php#set_port |
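As a rough picture of what `lfcli_base.py` and `LFRequest.py` wrap, a JSON GET against the port-8080 interface mentioned above can be done with the standard library alone. This is a sketch, not the library's API; the endpoint path in the usage comment is illustrative, so consult your LANforge's on-system help for real paths:

```python
import json
import urllib.request

def api_url(path, host="localhost", port=8080):
    """Compose a LANforge JSON API URL (default GUI HTTP port 8080)."""
    return "http://%s:%d%s" % (host, port, path)

def lf_json_get(path, **kw):
    """GET a JSON resource; the Accept header asks for JSON output."""
    req = urllib.request.Request(api_url(path, **kw),
                                 headers={"Accept": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Usage sketch (needs a reachable LANforge; the path is illustrative):
# stations = lf_json_get("/port/1/1/list")
print(api_url("/help"))
```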
### Python Scripts py-json/ ###
| Name | Purpose |
|------|---------|
|`create_wanlink.py` | Create and modify WAN links using the LANforge JSON API: http://www.candelatech.com/cookbook.php?vol=cli&book=JSON:+Managing+WANlinks+using+JSON+and+Python |
|`cv_commands.py` | Library file used to create a Chamber View scenario. Import this file as shown in create_chamberview.py to create a scenario|
|`cv_test_manager.py` | Library for Chamber View tests. It holds different commands to automate tests.|
|`cv_test_reports.py` | Class: lanforge_reports Pulls reports from LANforge|
|`dut_profile.py` | Class: DUTProfile (new_dut_profile) Use example: py-scripts/update_dut.py updates a Device Under Test (DUT) entry in the LANforge test scenario. A common reason to use this is to update MAC addresses in a DUT when you switch between different items of the same make/model of a DUT|
|`fio_endp_profile.py` | Class: FIOEndpProfile (new_fio_endp_profile) Use example: py-scripts/test_fileio.py will create stations or macvlans with matching fileio endpoints to generate and verify fileio related traffic|
|`gen_cxprofile.py` | Class: GenCXProfile (new_generic_endp_profile) Use example: test_generic.py will create stations and endpoints to generate traffic based on a command-line specified command type |
|`http_profile.py` | Class: HTTPProfile (new_http_profile) Use example: test_ipv4_l4_wifi.py will create stations and endpoints to generate and verify layer-4 upload traffic|
|`influx2.py` | Class: RecordInflux version 2.0 influx DB client|
|`influx.py` | Class: RecordInflux influx DB client|
|`l3_cxprofile.py` | Class: L3CXProfile (new_l3_cx_profile) Use example: test_ipv4_variable_time.py will create stations and endpoints to generate and verify layer-3 traffic|
|`l4_cxprofile.py` | Class: L4CXProfile (new_l4_cx_profile) Use example: test_ipv4_l4.py will create stations and endpoints to generate and verify layer-4 traffic |
|`mac_vlan_profile.py` | Class: MACVLANProfile (new_mvlan_profile) Use example: test_fileio.py will create stations or macvlans with matching fileio endpoints to generate and verify fileio related traffic. |
|`multicast_profile.py` | Class: MULTICASTProfile (new_multicast_profile) Use example: test_l3_longevity.py, where multicast profiles are created |
|`port_utils.py` | Class: PortUtils used to set the ftp or http port|
|`qvlan_profile.py` | Class: QVLANProfile (new_qvlan_profile) Use example: create_qvlan.py (802.1Q VLAN)|
|`realm.py` | Class: Realm. The Realm class is inherited by most python tests and itself inherits from LFCliBase. It contains the configurable components for LANforge, for example L3 / L4 cross connects and stations. http://www.candelatech.com/cookbook.php?vol=cli&book=Python_Create_Test_Scripts_With_the_Realm_Class|
|`station_profile.py` | Class: StationProfile (new_station_profile) Use example: most scripts create and use station profiles|
|`test_group_profile.py` | Class: TestGroupProfile (new_test_group_profile) Use example: test_fileio.py will create stations or macvlans with matching fileio endpoints to generate and verify fileio related traffic|
|`vap_profile.py` | Class: VAPProfile (new_vap_profile) Profile for creating virtual APs. Use example: create_vap.py |
|`wifi_monitor_profile.py` | Class: WifiMonitor (new_wifi_monitor_profile) Use example: tip_station_powersave.py This script uses filters from realm's PacketFilter class to filter pcap output for specific packets.|
|`wlan_theoretical_sta.py` | Class: abg11_calculator Standard script for the WLAN Capacity Calculator. Use example: wlan_capacitycalculator.py|
|`ws_generic_monitor.py` | Class: WS_Listener web socket listener. Use example: ws_generic_monitor_test.py uses ws_generic_monitor to monitor events triggered by scripts; while running, it will monitor the events triggered by test_ipv4_connection.py|
|`ws-sta-monitor.py` | Example of how to filter messages from the :8081 websocket |
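To illustrate the kind of filtering `ws-sta-monitor.py` performs on the :8081 websocket stream, here is a minimal sketch; the message field names ('name', 'event') are assumptions, not the real schema:

```python
import json

def station_events(raw_messages, keyword="sta"):
    """Yield decoded events whose 'name' field mentions a station.
    Field names ('name', 'event') are hypothetical; inspect real
    :8081 messages to see the actual schema."""
    for raw in raw_messages:
        try:
            msg = json.loads(raw)
        except json.JSONDecodeError:
            continue  # skip non-JSON frames
        if keyword in str(msg.get("name", "")):
            yield msg

sample = ['{"name": "sta0000", "event": "link-up"}',
          '{"name": "eth1", "event": "link-up"}',
          'not-json']
print(list(station_events(sample)))
```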
### Python Scripts py-scripts ###
Test scripts and helper scripts
| Name | Purpose |
|------|---------|
| `lf_tos_test.py` | Generate traffic at different QoS and report performance in a spreadsheet |
| `lf_sniff.py` | Create packet capture files, especially OFDMA /AX captures |
| `cicd_TipIntegration.py` | Facebook TIP infrastructure|
| `cicd_testrail.py` | TestRail API binding for Python 3 |
| `cicd_testrailAndInfraSetup.py` | Facebook TIP infrastructure |
| `lf_dfs_test.py` | Test dynamic frequency selection (DFS) between an AP connected to a controller and LANforge |
| `lf_snp_test.py` | Test scaling and performance (SNP): run various configurations and measure data rates |
| `lf_dut_sta_vap_test.py` | Load an existing scenario, start some layer 3 traffic, and test a Linux-based DUT that has an SSH server |
| `run_cv_scenario.py` | Set the LANforge to a BLANK database, then load the specified database and start a graphical report |
| `sta_connect2.py` | Create a station, run TCP and UDP traffic then verify traffic was received. Stations are cleaned up afterwards |
| `test_fileio.py` | Test FileIO traffic |
| `test_generic.py` | Test generic traffic using generic cross-connect and endpoint type |
| `test_ipv4_connection.py` | Test connections to a VAP of varying security types (WEP, WPA, WPA2, WPA3, Open) |
| `test_ipv4_l4.py` | Test layer 4 traffic using layer 4 cross-connect and endpoint type |
| `test_ipv4_l4_ftp_upload.py` | Test FTP upload traffic |
| `test_ipv4_l4_ftp_urls_per_ten.py` | Test the number of URLs per ten minutes in FTP traffic |
| `test_ipv4_l4_ftp_wifi.py` | Test FTP upload traffic wifi-wifi |
| `test_ipv4_l4_urls_per_ten.py` | Test URLs per ten minutes in layer 4 traffic |
| `test_ipv4_l4_wifi.py` | Test layer 4 upload traffic wifi-wifi |
| `test_ipv4_ttls.py` | Test connection to a TTLS system |
| `test_ipv4_variable_time.py` | Test connection and traffic on VAPs of varying security types (WEP, WPA, WPA2, WPA3, Open) |
| `test_ipv6_connection.py` | Test IPv6 connection to VAPs of varying security types (WEP, WPA, WPA2, WPA3, Open) |
| `test_ipv6_variable_time.py` | Test IPv6 connection and traffic on VAPs of varying security types (WEP, WPA, WPA2, WPA3, Open) |
| `test_l3_WAN_LAN.py` | Test traffic over a bridged NAT connection |
| `test_l3_longevity.py` | Create variable stations on multiple radios with configurable rates, PDU size, ToS, TCP and/or UDP traffic, upload and download, and attenuation |
| `test_l3_scenario_throughput.py` | Load an existing scenario, run simultaneous throughput over time, and generate a report and plot the graph |
| `test_l3_unicast_traffic_gen.py` | Generate unicast traffic over a list of stations|
| `tip_station_powersave.py` | Generate and test for powersave packets within traffic run over multiple stations |
## Perl and Shell Scripts ##
| Name | Purpose |
|------|---------|
@@ -68,20 +133,23 @@ Read more examples in the [scripting LANforge](http://www.candelatech.com/lfcli_
| `attenuator_series.pl` | Reads a CSV of attenuator settings and replays them to a CT70X programmable attenuator |
| `ftp-upload.pl` | Use this script to collect and upload station data to FTP site |
| `imix.pl` | packet loss survey tool |
| `lf_associate_ap.pl` | LANforge server script for associating virtual stations to a chosen SSID |
| `lf_attenmod.pl` | This program is used to modify the LANforge attenuator |
| `lf_auto_wifi_cap.pl` | This program is used to automatically run LANforge-GUI WiFi Capacity tests |
| `lf_cmc_macvlan.pl` | Stress test that sets up UDP and TCP traffic types and continuously starts and stops the connections |
| `lf_create_bcast.pl` | creates a L3 broadcast connection |
| `lf_cycle_wanlinks.pl` | example of how to call lf_icemod.pl from a script |
| `lf_endp_script.pl` | create a hunt script on a L3 connection endpoint |
| `lf_firemod.pl` | queries and modifies L3 connections |
| `lf_generic_ping.pl` | Generate a batch of Generic lfping endpoints |
| `lf_gui_cmd.pl` | Initiate a stress test |
| `lf_icemod.pl` | queries and modifies WANLink connections |
| `lf_ice.pl` | adds and configures wanlinks |
| `lf_l4_auth.pl` | example of scripting L4 http script with basic auth |
| `lf_l4_reset.sh` | reset any layer 4 connection that reaches 0 Mbps over last minute |
| `lf_log_parse.pl` | Convert the timestamp in LANforge logs (unix-time, in milliseconds) to a readable date |
| `lf_loop_traffic.sh` | Repeatedly start and stop a L3 connection |
| `lf_macvlan_l4.pl` | Set up connection types lf_udp and lf_tcp across one real port and many macvlan ports on two machines, then continuously start and stop the connections |
| `lf_mcast.bash` | Create a multicast L3 connection endpoint |
| `lf_monitor.pl` | Monitor L4 connections |
| `lf_nfs_io.pl` | Creates and runs NFS connections |
@@ -110,6 +178,9 @@ Read more examples in the [scripting LANforge](http://www.candelatech.com/lfcli_
| `wait_on_ports.pl` | waits on ports to have IP addresses, can up/down port to stimulate new DHCP lease |
| `wifi-roaming-times.pl` | parses `wpa_supplicant_log.wiphyX` file to determine roaming times |
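The conversion `lf_log_parse.pl` performs (LANforge log timestamps are unix-time in milliseconds) looks like this in Python:

```python
from datetime import datetime, timezone

def readable(ms):
    """Convert a LANforge log timestamp (unix epoch, milliseconds)
    to a human-readable UTC date string."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc).strftime(
        "%Y-%m-%d %H:%M:%S.%f")[:-3]  # trim microseconds to milliseconds

print(readable(1619049600000))  # → 2021-04-22 00:00:00.000
```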
### LANforge Monitoring ###
From the LANforge CLI on port 4001, run 'show_event' to see events from LANforge
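Port 4001 is a plain-text CLI socket, so 'show_event' can also be issued from a script. A minimal sketch, assuming newline-terminated commands and a single read of the reply (real sessions may need repeated reads and prompt parsing):

```python
import socket

def cli_command_bytes(cmd):
    """Frame a CLI command for the port-4001 socket (newline-terminated)."""
    return (cmd.strip() + "\n").encode("ascii")

def show_events(host="localhost", port=4001, bufsize=65536):
    """Send 'show_event' to the LANforge CLI and return one read's worth
    of the reply. Requires a running LANforge manager on port 4001."""
    with socket.create_connection((host, port), timeout=10) as s:
        s.sendall(cli_command_bytes("show_event"))
        return s.recv(bufsize).decode("utf-8", errors="replace")

print(cli_command_bytes("show_event"))
```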
### Compatibility ###
Scripts will be kept backwards and forwards compatible with LANforge
releases as much as possible.
@@ -121,7 +192,7 @@ one script to a separate directory is going to break its requirements.
### Requirements ###
The perl scripts require the following perl packages to be installed. Most of these
perl packages are available through your repository as `.deb` or `.rpm` packages.
| Perl Package | RPM | Required |
| -------------------|------------------|----------------|
@@ -161,5 +232,3 @@ Please contact support@candelatech.com if you have any questions.
_Thanks,
Ben_


@@ -5,6 +5,9 @@ use warnings;
use diagnostics;
use Carp;
use Data::Dumper;
use File::Temp qw(tempfile tempdir);
use Getopt::Long;
my $Q='"';
my $q="'";
my @idhunks = split(' ', `id`);
@@ -13,378 +16,501 @@ die ("Must be root to use this")
unless( $hunks[0] eq "uid=0(root)" );
@idhunks = undef;
@hunks = undef;
my $start_time = `date +%Y%m%d-%H%M%S`;
chomp($start_time);
my $do_help = 0;
my $do_automatic = ( -t STDIN ) ? 0 : 1; # test for terminal stdin
my $debug = $ENV{'DEBUG'};
my $usage = "$0 :
Use this to update /etc/hosts and /etc/httpd/http.conf for
LANforge server operations. By default this script will backup
your /etc/hosts file to /etc/.hosts.\$date and write a new copy
to /tmp/t_hosts_\$random. It will show you the difference between
the files and prompt you to continue. When you approve it will
copy /tmp/t_hosts_\$random to /etc/hosts.
-d --debug enable debug (or use \$ set DEBUG=1)
-a --auto automatic operation mode, no prompts
-h --help this message
";
GetOptions(
"h|help" => \$do_help,
"d|debug" => \$debug,
"a|auto|automatic" => \$do_automatic,
) || die($usage);
if ($do_help) {
print $usage;
exit(0);
}
sub syslogg {
my $msg = join('\n', @_);
$msg =~ s/\r*\n/ /;
`logger -t adjust_apache "$msg"`
}
sub err {
my $msg = "[error] ".join("\n", @_);
print STDERR $msg, "\n";
syslogg($msg) if ($do_automatic);
}
sub die_err {
my $msg = "[fatal] ".join("\n", @_);
syslogg($msg) if ($do_automatic);
die($msg);
}
sub warning {
my $msg = "[warning] ".join("\n", @_);
print STDOUT $msg, "\n";
syslogg($msg) if ($do_automatic);
}
sub info {
my $msg = "[inf] ".join("\n", @_);
print STDOUT $msg, "\n";
syslogg($msg) if ($do_automatic);
}
my $MgrHostname = `cat /etc/hostname`;
chomp($MgrHostname);
info("Will be setting hostname to $MgrHostname");
sleep 3 if ($debug);
my $config_v = "/home/lanforge/config.values";
# grab the config.values file
die_err("Unable to find $config_v" )
unless ( -f $config_v);
my @configv_lines = `cat $config_v`;
die_err("Probably too little data in config.values")
unless (@configv_lines > 5);
my %configv = ();
foreach my $line (@configv_lines) {
my ($key, $val) = $line =~ /^(\S+)\s+(.*)$/;
$configv{$key} = $val;
}
die_err("Unable to parse config.values")
unless ((keys %configv) > 5);
die_err("no mgt_dev in config.values")
unless defined $configv{'mgt_dev'};
info("LANforge config states mgt_dev $configv{'mgt_dev'}");
if ( ! -d "/sys/class/net/$configv{'mgt_dev'}") {
die_err( "Please run lfconfig again with your updated mgt_port value.");
}
my $ipline = `ip -o a show $configv{"mgt_dev"}`;
#print "IPLINE[$ipline]\n";
my ($ip) = $ipline =~ / inet ([0-9.]+)(\/\d+)? /g;
die_err("No ip found for mgt_dev; your config.values file is out of date: $!")
unless ((defined $ip) && ($ip ne ""));
print "ip: $ip\n" if ($debug);
# This must be kept in sync with similar code in lf_kinstall.
my $found_localhost = 0;
my $fname = "/etc/hosts";
my $backup = "/etc/.hosts.$start_time";
`cp $fname $backup`;
die_err("Unable to create backup of /etc/hosts at $backup") if ( ! -f $backup );
my ($fh, $editfile) = tempfile( "t_hosts_XXXX", DIR=>'/tmp', SUFFIX=>'.txt');
my @lines = `cat $fname`;
#open(FILE, ">$fname") or die "Couldn't open file: $fname for writing: $!\n\n";
my $foundit = 0;
my $i;
# chomp is way too simplistic if we need to weed out \r\n characters as well
#chomp(@lines);
for (my $i = 0; $i < @lines; $i++) {
($lines[$i]) = $lines[$i] =~ /^([^\r\n]+)\r?\n$/;
}
# we want to consolidate the $ip $hostname entry for MgrHostname
my @newlines = ();
my %more_hostnames = ();
my $new_entry = "$ip ";
#my $blank = 0;
#my $was_blank = 0;
my $counter = 0;
if ((exists $ENV{"DEBUG"}) && ($ENV{"DEBUG"} eq "1")) {
$debug = 1;
}
my %host_map = (
"localhost.localdomain" => "127.0.0.1",
"localhost" => "127.0.0.1",
"localhost4.localdomain4" => "127.0.0.1",
"localhost4" => "127.0.0.1",
"localhost.localdomain" => "::1",
"localhost" => "::1",
"localhost6.localdomain6" => "::1",
"localhost6" => "::1",
$MgrHostname => $ip,
"lanforge.localnet" => "192.168.1.101",
"lanforge.localdomain" => "192.168.1.101",
);
my %comment_map = ();
my %address_marker_map = ();
my %address_map = (
"127.0.0.1" => "localhost.localdomain localhost localhost4.localdomain4 localhost4",
"::1" => "localhost.localdomain localhost localhost6.localdomain6 localhost6",
$ip => $MgrHostname,
"192.168.1.101" => "lanforge.localnet lanforge.localdomain",
);
if ($debug) {
print Dumper(\%address_map);
print Dumper(\%host_map);
}
my $prevname = "";
my $previp = "";
for my $ln (@lines) {
next if (!(defined $ln));
# print "\nLN[$ln]\n" if ($debug);
next if ($ln =~ /^\s*$/);
next if ($ln =~ /^\s*#/);
next if ($ln =~ /LF-HOSTAME-NEXT/); # old typo
next if ($ln =~ /LF-HOSTNAME-NEXT/);
my $comment = undef;
# print "PARSING IPv4 ln[$ln]\n" if ($debug);
if ($ln =~ /#/) {
($comment) = $ln =~ /^[^#]+(#.*)$/;
($ln) = $ln =~ /^([^#]+)\s*#/;
print "line with comment becomes [$ln]\n" if ($debug);
}
@hunks = split(/\s+/, $ln);
my $middleip = 0;
my $counter2 = -1;
my $linehasip = 0;
my $lfhostname = 0;
if ((defined $comment) && ($comment ne "")) {
$comment_map{$hunks[0]} = $comment;
}
for my $hunk (@hunks) {
# print "\n HUNK",$counter2,"-:$hunk:- " if ($debug);
$counter2++;
next if ($hunk =~ /^localhost/);
next if ($hunk =~ /^lanforge-srv$/);
next if ($hunk =~ /^lanforge\.local(domain|net)$/);
next if ($hunk =~ /^extra6?-\d+/);
if ($hunk =~ /^\s*$/) {
next;
}
if ($hunk =~ /^$ip$/) {
$linehasip++;
$lfhostname++;
}
elsif ($hunk =~ /^$MgrHostname$/) {
$lfhostname++;
$prevname = $hunk;
}
$previp = "" if (!defined($previp));
if (($hunk =~ /^127\.0\.0\.1/)
|| ($hunk =~ /^192\.168\.1\.101/)
|| ($hunk =~ /^::1$/)) {
$previp = $hunk;
$linehasip++;
}
elsif ($hunk =~ /^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$/) {
$linehasip++;
# print " IP4($hunk)" if ($debug);
if ($counter2 > 0) { # we're not first item on line
$middleip++ if ($counter2 > 0);
# print "middle" if ($debug);
}
if (!(defined $address_map{$hunk})) {
$address_map{$hunk} = "";
}
# print "+IP4" if ($debug);
if (("" ne $prevname) && ($counter2 > 0)) {
# print " hunk($hunk)prev($prevname)" if ($debug);
$address_map{$hunk} .= " $prevname"
if ($address_map{$hunk} !~ /\s*$prevname\s*/);
# $host_map{$prevname} .= " $hunk";
if ($host_map{$prevname} !~ /\b$hunk\b/) {
$host_map{$prevname} .= " $hunk";
}
}
$previp = $hunk;
}
elsif (($hunk =~ /[G-Zg-z]+\.?/) || ($hunk =~ /^[^:A-Fa-f0-9]+/)) {
# print " notIP($hunk)" if ($debug);
$prevname = $hunk;
if ($middleip) {
# print " middle($previp)" if ($debug);
$address_map{$previp} .= " $hunk"
if ($address_map{$previp} !~ /\b$hunk\b/);
$prevname = $hunk;
if ($host_map{$prevname} !~ /\b$hunk\b/) {
$host_map{$prevname} .= " $previp";
}
}
elsif ($linehasip) {
# print " prev($previp) hunk($hunk)" if ($debug);
$address_map{$previp} .= " $hunk"
if ($address_map{$previp} !~ /\s*$hunk\s*/);
if ((defined $prevname) && (exists $host_map{$prevname}) && ($host_map{$prevname} !~ /\b$hunk\b/)) {
$host_map{$hunk} .= " $previp";
}
}
elsif ($lfhostname) {
$more_hostnames{$hunk} = 1;
if ($host_map{$prevname} !~ /\b$hunk\b/) {
$host_map{$hunk} .= " $previp";
}
}
else { # strange word
if ("" eq $previp) {
print " hunk($hunk) has no IP***" if ($debug);
$more_hostnames{$hunk} = 1;
}
elsif ($address_map{$previp} !~ /\s*$hunk\s*/) {
$address_map{$previp} .= " $hunk";
if ($host_map{$prevname} !~ /\b$hunk\b/) {
$host_map{$hunk} .= " $previp";
}
}
}
}
elsif (($hunk =~ /::/)
|| ($hunk =~ /[0-9A-Fa-f]+:/)) {
# print " hunk6($hunk)" if ($debug);
$linehasip++;
if (!(defined $address_map{$hunk})) {
$address_map{$hunk} = "";
}
$previp = $hunk;
}
elsif ($address_map{$previp} !~ /\s*$hunk\s*/) {
# is hostname and not an ip
$address_map{$previp} .= " $hunk";
if ($host_map{$prevname} !~ /\b$hunk\b/) {
$host_map{$hunk} .= " $previp";
}
}
}
}
} # ~foreach hunk
} # ~foreach line
if (($host_map{$MgrHostname} !~ /^\s*$/) && ($host_map{$MgrHostname} =~ /\S+\s+\S+/)) {
print("Multiple IPs for this hostname: " . $host_map{$MgrHostname} . "\n") if ($debug);
my @iphunks = split(/\s+/, $host_map{$MgrHostname});
print "Changing $MgrHostname to $ip; hostmap: <<$host_map{$MgrHostname}>> addrmap: <<$address_map{$ip}>>\n"
if ($debug);
$host_map{$MgrHostname} = $ip;
}
for my $name (sort keys %more_hostnames) {
$address_map{$ip} .= " $name";
print "updated address_map entry: $ip -> $address_map{$ip}\n" if ($debug);
}
# this might be premature
unshift(@newlines, "192.168.1.101 " . $address_map{"192.168.1.101"});
unshift(@newlines, "127.0.0.1 " . $address_map{"127.0.0.1"});
unshift(@newlines, "::1 " . $address_map{"::1"});
my %used_addresses = ();
delete($address_map{"192.168.1.101"});
$used_addresses{"192.168.1.101"} = 1;
delete($address_map{"127.0.0.1"});
$used_addresses{"127.0.0.1"} = 1;
delete($address_map{"::1"});
$used_addresses{"::1"} = 1;
if ($debug) {
print "# ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- -----\n";
print "\nAddress map\n";
print Dumper(\%address_map);
print "\nHost map\n";
print Dumper(\%host_map);
print "# ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- -----\n";
sleep 2;
}
# ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- -----
# we want to maintain the original line ordering as faithfully as possible
# ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- -----
for my $ln (@lines) {
$ln = "" if (!(defined $ln));
print "old[$ln]\n" if ($debug);
# if we are comments or blank lines, preserve them
next if ($ln =~ /LF-HOSTNAME-NEXT/);
next if ($ln =~ /\b$MgrHostname\b/); # skip our mgt hostname
next if ($ln =~ /^$host_map{$MgrHostname}\s+/); # line starts with present IP addr
if (($ln =~ /^\s*$/) || ($ln =~ /^\s*#/)) {
push(@newlines, $ln);
next;
}
@hunks = split(/\s+/, $ln);
if (exists $address_map{$hunks[0]}) {
if ((exists $address_marker_map{$hunks[0]})
|| (exists $used_addresses{$hunks[0]})) {
print "already printed $hunks[0]\n" if ($debug);
next;
}
my $comment = "";
if (exists $comment_map{$hunks[0]}) {
$comment = " $comment_map{$hunks[0]}";
}
push(@newlines, "$hunks[0] $address_map{$hunks[0]}$comment");
$address_marker_map{$hunks[0]} = 1;
next;
}
if (!(exists $used_addresses{$hunks[0]})) {
warning("untracked IP <<$hunks[0]>> Used addresses:");
print Dumper(\%address_marker_map) if ($debug);
print Dumper(\%used_addresses) if ($debug);
}
}
push(@newlines, "###-LF-HOSTNAME-NEXT-###");
push(@newlines, $ip . " " . $address_map{$ip});
if ($debug) {
print "# ----- new /etc/hosts ----- ----- ----- ----- ----- ----- ----- ----- ----- -----\n";
for my $ln (@newlines) {
print "$ln\n";
}
print "# ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- -----\n";
sleep 5;
}
# write to /tmp/t_hosts_$random
for my $ln (@newlines) {
print $fh "$ln\n";
}
close $fh;
my $wc_edit_file = `wc -l < $editfile`;
chomp($wc_edit_file);
my $wc_orig_file = `wc -l < $backup`;
chomp($wc_orig_file);
if ($wc_edit_file == 0) {
die_err("Abandoning $editfile, it was blank.");
exit(1);
}
my $there_are_diffs = `/bin/diff /etc/hosts $editfile > /dev/null && echo 0 || echo 1`;
chomp($there_are_diffs);
$there_are_diffs = int($there_are_diffs);
if (! $there_are_diffs) {
info("No difference in hosts file.");
sleep(1) if (!$do_automatic);
}
elsif (!$do_automatic) {
my $msg = "Original /etc/hosts file backed up to $backup\n"
. "The hosts file differs by " . ($wc_orig_file - $wc_edit_file) . " lines, at: $editfile\n"
. "Displaying difference...\n";
info($msg);
sleep(2);
my $diffcmd = "diff -y /etc/hosts $editfile";
if ( -x "/usr/bin/colordiff" ) {
$diffcmd = "colordiff -y /etc/hosts $editfile";
}
open(my $diff_in, "-|", $diffcmd);
my ($diff_out, $diff_file) = tempfile( "diff_hosts_XXXX", DIR=>"/tmp" );
my @diff_lines = <$diff_in>;
close($diff_in);
print $diff_out join("", @diff_lines);
close($diff_out);
system("/bin/less -Nr $diff_file");
print "/bin/less -dNr $diff_file\n" if ($debug);
# prompt to exit
print "Press Enter to continue, [ctrl-c] to quit >";
my $i = <STDIN>;
}
if ($there_are_diffs) {
warning("Line comparison: $backup\: $wc_orig_file, $editfile\: $wc_edit_file");
warning("Installing new hosts file from $editfile, backup at $backup");
system("cp $editfile /etc/hosts");
}
} # ~if found hosts file
my $local_crt = "";
my $local_key = "";
my $hostname_crt = "";
my $hostname_key = "";
# check for hostname shaped cert files
if (-f "/etc/pki/tls/certs/localhost.crt") {
$local_crt = "/etc/pki/tls/certs/localhost.crt";
}
if (-f "/etc/pki/tls/private/localhost.key") {
$local_key = "/etc/pki/tls/private/localhost.key";
}
if (-f "/etc/pki/tls/certs/$MgrHostname.crt") {
$hostname_crt = "/etc/pki/tls/certs/$MgrHostname.crt";
}
if (-f "/etc/pki/tls/private/$MgrHostname.key") {
$hostname_key = "/etc/pki/tls/private/$MgrHostname.key";
}
# grab the 0000-default.conf file
my @places_to_check = (
"/etc/apache2/apache2.conf",
"/etc/apache2/ports.conf",
"/etc/apache2/sites-available/000-default.conf",
"/etc/apache2/sites-available/0000-default.conf",
"/etc/httpd/conf/http.conf",
"/etc/httpd/conf/httpd.conf",
"/etc/httpd/conf.d/ssl.conf",
"/etc/httpd/conf.d/00-ServerName.conf",
);
foreach my $file (@places_to_check) {
if (-f $file) {
print "Checking $file...\n";
my @lines = `cat $file`;
chomp @lines;
# we want to match Listen 80$ or Listen 443 https$
# we want to replace with Listen lanforge-mgr:80$ or Listen lanforge-mgr:443 https$
@hunks = grep {/^\s*(Listen|SSLCertificate)/} @lines;
if (@hunks) {
my $edited = 0;
my @newlines = ();
@hunks = (@hunks, "\n");
print "Something to change in $file\n";
print "These lines are interesting:\n";
print join("\n", @hunks);
foreach my $confline (@lines) {
if ($confline =~ /^\s*Listen\s+(?:80|443) */) {
$confline =~ s/Listen /Listen ${MgrHostname}:/;
print "$confline\n";
}
elsif ($confline =~ /^\s*Listen\s+(?:[^:]+:(80|443)) */) {
$confline =~ s/Listen [^:]+:/Listen ${MgrHostname}:/;
print "$confline\n";
}
if ($confline =~ /^\s*SSLCertificateFile /) {
$confline = "SSLCertificateFile $hostname_crt" if ("" ne $hostname_crt);
}
if ($confline =~ /^\s*SSLCertificateKeyFile /) {
$confline = "SSLCertificateKeyFile $hostname_key" if ("" ne $hostname_key);
}
push @newlines, $confline;
$edited++ if ($confline =~ /# modified by lanforge/);
}
push(@newlines, "# modified by lanforge\n") if ($edited == 0);
my $fh;
die($!) unless open($fh, ">", $file);
print $fh join("\n", @newlines);
close $fh;
}
else {
print "Nothing looking like [Listen 80|443] in $file\n";
}
}
} # ~for places_to_check
if (-d "/etc/httpd/conf.d") {
die($!) unless open(FILE, ">", "/etc/httpd/conf.d/00-ServerName.conf");
print FILE "ServerName $MgrHostname\n";
#print FILE "Listen $MgrHostname:80\n";
#print FILE "Listen $MgrHostname:443\n";
close FILE;
}
#

466 ap_ctl.py Executable file
@@ -0,0 +1,466 @@
#!/usr/bin/python3
'''
LANforge 192.168.100.178
Controller at 192.168.100.112 admin/Cisco123
Controller is 192.1.0.10
AP is on serial port /dev/ttyUSB1 9600 8 n 1
make sure pexpect is installed:
$ sudo yum install python3-pexpect
You might need to install pexpect-serial using pip:
$ pip3 install pexpect-serial
$ sudo pip3 install pexpect-serial
./ap_ctl.py
'''
import sys
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit()
import logging
import time
from time import sleep
import argparse
import pexpect
import serial
from pexpect_serial import SerialSpawn
# pip install pexpect-serial (on Ubuntu)
# sudo pip install pexpect-serial (on Ubuntu for everyone)
default_host = "localhost"
default_ports = {
"serial": None,
"ssh": 22,
"telnet": 23
}
NL = "\n"
CR = "\r\n"
Q = '"'
A = "'"
FORMAT = '%(asctime)s %(name)s %(levelname)s: %(message)s'
band = "a"
logfile = "stdout"
# regex101.com ,
# this will be in the tx_power script
# ^\s+1\s+6\s+\S+\s+\S+\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)
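The comment above sketches the pattern intended for pulling the per-rate power columns out of `show controllers dot11Radio 1 powercfg` output. A minimal sketch of applying it; the sample row below is hypothetical, since the real column meanings come from the AP:

```python
import re

# Pattern from the comment above: anchor on the "1  6" prefix, skip two
# columns, then capture twelve whitespace-separated fields.
POWERCFG_RE = re.compile(
    r"^\s+1\s+6\s+\S+\s+\S+\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)"
    r"\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)"
)

# Hypothetical powercfg table row for illustration only.
row = "  1 6 T1 auto 20 19 18 17 16 15 14 13 12 11 10 9"
m = POWERCFG_RE.match(row)
if m:
    print(m.groups())  # twelve captured columns
```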
def usage():
print("$0 used to connect to a Cisco AP:")
print("-a|--prompt: AP prompt to expect")
print("-d|--dest: destination host")
print("-o|--port: destination port")
print("-u|--user: AP login name")
print("-p|--passwd: AP password")
print("-s|--scheme (serial|telnet|ssh): connect to AP via serial, ssh or telnet")
print("-t|--tty: serial port for accessing AP")
print("-l|--log file: log messages here")
print("-b|--baud: baud rate; lanforge: 115200, cisco: 9600")
print("-z|--action: action, e.g. powercfg, clear_log, show_log, cac_expiry_evt")
print("-h|--help")
# see https://stackoverflow.com/a/13306095/11014343
class FileAdapter(object):
def __init__(self, logger):
self.logger = logger
def write(self, data):
# NOTE: data can be a partial line, multiple lines
data = data.strip() # ignore leading/trailing whitespace
if data: # non-blank
self.logger.info(data)
def flush(self):
pass # leave it to logging to flush properly
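The FileAdapter above lets a `logging.Logger` stand in for the file-like object pexpect expects as its `logfile` attribute. Restated standalone (same class, plus a small demo) to show the behavior:

```python
import logging

class FileAdapter(object):
    """File-like shim that forwards write() calls to a logger."""
    def __init__(self, logger):
        self.logger = logger
    def write(self, data):
        # data can be a partial line or multiple lines; drop blank chunks
        data = data.strip()
        if data:
            self.logger.info(data)
    def flush(self):
        pass  # logging handles its own flushing

logging.basicConfig(format='%(levelname)s: %(message)s', level=logging.INFO)
adapter = FileAdapter(logging.getLogger("ap"))
adapter.write("show log\r\n")  # forwarded to the logger as one record
adapter.write("   ")           # whitespace-only chunks are dropped
```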
# Test command if lanforge connected ttyUSB0
# sudo ./ap_ctl.py -a lanforge -d 0 -o 0 -u "lanforge" -p "lanforge" -s "serial" -t "/dev/ttyUSB0"
# sample for lanforge 192.168.100.178
# sudo ./ap_ctl.py -a APA53.0E7B.EF9C -d 0 -o 0 -u "admin" -p "Admin123" -s "serial" -t "/dev/ttyUSB2" -z "show_log"
def main():
global logfile
AP_ESCAPE = "Escape character is '^]'."
AP_USERNAME = "Username:"
AP_PASSWORD = "Password:"
AP_EN = "en"
AP_MORE = "--More--"
AP_EXIT = "exit"
LF_PROMPT = "$"
CR = "\r\n"
parser = argparse.ArgumentParser(description="Cisco AP Control Script")
parser.add_argument("-a", "--prompt", type=str, help="ap prompt")
parser.add_argument("-d", "--dest", type=str, help="address of the AP 172.19.27.55")
parser.add_argument("-o", "--port", type=int, help="control port on the AP, 2008")
parser.add_argument("-u", "--user", type=str, help="credential login/username, admin")
parser.add_argument("-p", "--passwd", type=str, help="credential password Wnbulab@123")
parser.add_argument("-s", "--scheme", type=str, choices=["serial", "ssh", "telnet"], help="Connect via serial, ssh or telnet")
parser.add_argument("-t", "--tty", type=str, help="tty serial device for connecting to AP")
parser.add_argument("-l", "--log", type=str, help="logfile for messages, stdout means output to console",default="stdout")
parser.add_argument("-z", "--action", type=str, help="action to run, e.g. powercfg, clear_log, show_log, cac_expiry_evt")
parser.add_argument("-b", "--baud", type=str, help="baud rate; lanforge: 115200 cisco: 9600")
args = None
try:
args = parser.parse_args()
host = args.dest
scheme = args.scheme
port = args.port if args.port is not None else default_ports[scheme]
user = args.user
if (args.log != None):
logfile = args.log
except Exception as e:
logging.exception(e)
usage()
exit(2)
console_handler = logging.StreamHandler()
formatter = logging.Formatter(FORMAT)
logg = logging.getLogger(__name__)
logg.setLevel(logging.DEBUG)
file_handler = None
if (logfile is not None):
if (logfile != "stdout"):
file_handler = logging.FileHandler(logfile, "w")
file_handler.setLevel(logging.DEBUG)
file_handler.setFormatter(formatter)
logg.addHandler(file_handler)
logging.basicConfig(format=FORMAT, handlers=[file_handler])
else:
# stdout logging
logging.basicConfig(format=FORMAT, handlers=[console_handler])
egg = None # think "eggpect"
ser = None
try:
if (scheme == "serial"):
#eggspect = pexpect.fdpexpect.fdspan(telcon, logfile=sys.stdout.buffer)
ser = serial.Serial(args.tty, int(args.baud), timeout=5)
print("Created serial connection on %s, open: %s"%(args.tty, ser.is_open))
egg = SerialSpawn(ser)
egg.logfile = FileAdapter(logg)
time.sleep(1)
egg.sendline(CR)
time.sleep(1)
elif (scheme == "ssh"):
if (port is None):
port = 22
cmd = "ssh -p%d %s@%s"%(port, user, host)
logg.info("Spawn: "+cmd+NL)
egg = pexpect.spawn(cmd)
#egg.logfile_read = sys.stdout.buffer
egg.logfile = FileAdapter(logg)
elif (scheme == "telnet"):
if (port is None):
port = 23
cmd = "telnet {} {}".format(host, port)
logg.info("Spawn: "+cmd+NL)
egg = pexpect.spawn(cmd)
egg.logfile = FileAdapter(logg)
# Will login below as needed.
else:
usage()
exit(1)
except Exception as e:
logging.exception(e)
AP_PROMPT = "{}>".format(args.prompt)
AP_HASH = "{}#".format(args.prompt)
time.sleep(0.1)
logged_in = False
loop_count = 0
while (loop_count <= 8 and logged_in == False):
loop_count += 1
i = egg.expect_exact([AP_ESCAPE,AP_PROMPT,AP_HASH,AP_USERNAME,AP_PASSWORD,AP_MORE,LF_PROMPT,pexpect.TIMEOUT],timeout=5)
if i == 0:
logg.info("Expect: {} i: {} before: {} after: {}".format(AP_ESCAPE,i,egg.before,egg.after))
egg.sendline(CR) # Needed after Escape or should just do timeout and then a CR?
sleep(1)
if i == 1:
logg.info("Expect: {} i: {} before: {} after: {}".format(AP_PROMPT,i,egg.before,egg.after))
egg.sendline(AP_EN)
sleep(1)
j = egg.expect_exact([AP_PASSWORD,pexpect.TIMEOUT],timeout=5)
if j == 0:
logg.info("Expect: {} i: {} j: {} before: {} after: {}".format(AP_PASSWORD,i,j,egg.before,egg.after))
egg.sendline(args.passwd)
sleep(1)
k = egg.expect_exact([AP_HASH,pexpect.TIMEOUT],timeout=5)
if k == 0:
logg.info("Expect: {} i: {} j: {} k: {} before: {} after: {}".format(AP_HASH,i,j,k,egg.before,egg.after))
logged_in = True
if k == 1:
logg.info("Expect: {} i: {} j: {} k: {} before: {} after: {}".format("Timeout",i,j,k,egg.before,egg.after))
if j == 1:
logg.info("Expect: {} i: {} j: {} before: {} after: {}".format("Timeout",i,j,egg.before,egg.after))
if i == 2:
logg.info("Expect: {} i: {} before: {} after: {}".format(AP_HASH,i,egg.before,egg.after))
logged_in = True
sleep(1)
if i == 3:
logg.info("Expect: {} i: {} before: {} after: {}".format(AP_USERNAME,i,egg.before,egg.after))
egg.sendline(args.user)
sleep(1)
if i == 4:
logg.info("Expect: {} i: {} before: {} after: {}".format(AP_PASSWORD,i,egg.before,egg.after))
egg.sendline(args.passwd)
sleep(1)
if i == 5:
logg.info("Expect: {} i: {} before: {} after: {}".format(AP_MORE,i,egg.before,egg.after))
if (scheme == "serial"):
egg.sendline("r")
else:
egg.sendcontrol('c')
sleep(1)
# for Testing serial connection using Lanforge
if i == 6:
logg.info("Expect: {} i: {} before: {} after: {}".format(LF_PROMPT,i,egg.before.decode('utf-8', 'ignore'),egg.after.decode('utf-8', 'ignore')))
if (loop_count < 3):
egg.send("ls -lrt")
sleep(1)
if (loop_count > 4):
logged_in = True # basically a test mode using lanforge serial
if i == 7:
logg.info("Expect: {} i: {} before: {} after: {}".format("Timeout",i,egg.before,egg.after))
egg.sendline(CR)
sleep(1)
if (args.action == "powercfg"):
logg.info("execute: show controllers dot11Radio 1 powercfg | g T1")
egg.sendline('show controllers dot11Radio 1 powercfg | g T1')
egg.expect([pexpect.TIMEOUT], timeout=3) # do not delete this for it allows for subprocess to see output
print(egg.before.decode('utf-8', 'ignore')) # do not delete this for it allows for subprocess to see output
i = egg.expect_exact([AP_MORE,pexpect.TIMEOUT],timeout=5)
if i == 0:
egg.sendcontrol('c')
if i == 1:
logg.info("send cntl c anyway")
egg.sendcontrol('c')
elif (args.action == "clear_log"):
logg.info("execute: clear log")
egg.sendline('clear log')
sleep(0.4)
egg.sendline('show log')
egg.expect([pexpect.TIMEOUT], timeout=2) # do not delete this for it allows for subprocess to see output
print(egg.before.decode('utf-8', 'ignore')) # do not delete this for it allows for subprocess to see output
# allow for normal logout below
elif (args.action == "show_log"):
logg.info("execute: show log")
egg.sendline('show log')
sleep(0.4)
egg.expect([pexpect.TIMEOUT], timeout=2) # do not delete this for it allows for subprocess to see output
print(egg.before.decode('utf-8', 'ignore')) # do not delete this for it allows for subprocess to see output
i = egg.expect_exact([AP_MORE,pexpect.TIMEOUT],timeout=4)
if i == 0:
egg.sendline('r')
egg.expect([pexpect.TIMEOUT], timeout=4) # do not delete this for it allows for subprocess to see output
print(egg.before.decode('utf-8', 'ignore')) # do not delete this for it allows for subprocess to see output
if i == 1:
print(egg.before.decode('utf-8', 'ignore')) # do not delete this for it allows for subprocess to see output
# allow for normal logout below
# show log | g DOT11_DRV
# CAC_EXPIRY_EVT: CAC finished on DFS channel 52
elif (args.action == "cac_expiry_evt"):
logg.info("execute: show log | g CAC_EXPIRY_EVT")
egg.sendline('show log | g CAC_EXPIRY_EVT')
sleep(0.4)
egg.expect([pexpect.TIMEOUT], timeout=2) # do not delete this for it allows for subprocess to see output
print(egg.before.decode('utf-8', 'ignore')) # do not delete this for it allows for subprocess to see output
i = egg.expect_exact([AP_MORE,pexpect.TIMEOUT],timeout=4)
if i == 0:
egg.sendline('r')
egg.expect([pexpect.TIMEOUT], timeout=4) # do not delete this for it allows for subprocess to see output
print(egg.before.decode('utf-8', 'ignore')) # do not delete this for it allows for subprocess to see output
if i == 1:
print(egg.before.decode('utf-8', 'ignore')) # do not delete this for it allows for subprocess to see output
else: # no other command at this time so send the same power command
#logg.info("no action so execute: show controllers dot11Radio 1 powercfg | g T1")
logg.info("no action")
i = egg.expect_exact([AP_PROMPT,AP_HASH,pexpect.TIMEOUT],timeout=1)
if i == 0:
logg.info("received {} we are done send exit".format(AP_PROMPT))
egg.sendline(AP_EXIT)
if i == 1:
logg.info("received {} send exit".format(AP_HASH))
egg.sendline(AP_EXIT)
if i == 2:
logg.info("timed out waiting for {} or {}".format(AP_PROMPT,AP_HASH))
# ctlr.execute(cn_cmd)
''' NOTES for AP DFS
#############################
1. Do "show AP summary" on the controller to see the list of AP's connected.
2. Now, check the channel configured on the AP using the commend "show ap channel <AP-Name>"
3. Check for the current channel and Channel width for Slot id 1. See the output of this command.
4. Before making any changes, please connect at least 1 client to this AP in 5GHz radio.
Keep running the ping traffic to the default gateway of the AP.
4. Now, configure dfs channel for this AP with 20MHz as channel width.
6. After CAC Expiry, Client should connect back - Verify the pings are passing through or not.
Note time:
"show logging" in the AP will show the CAC timer details. You can grep for "DFS CAC timer enabled time 60" and "changed to DFS channel 52, running CAC for 60 seconds.
Wait for 60 sec and check for this log "CAC_EXPIRY_EVT: CAC finished on DFS channel 52"
[*07/07/2020 23:37:48.1460] changed to DFS channel 52, running CAC for 60 seconds.
[*07/07/2020 23:38:48.7240] CAC_EXPIRY_EVT: CAC finished on DFS channel 52
"make a note of the time and check the CAC timer expired in 60-61 seconds.
7. Now, trigger the radar on Channel 52. AP should move to another channel.
Also, When the radar is triggered, capture the CSA frames and verify the CSA count is set to 10 or not.
8. Now, verify the black-list time of the channel for this AP. : show ap auto-rf 802.11a <AP-Name>
In the controller, give the command "show ap auto-rf 802.11a <AP-Name>" under Radar information you should see the "Detected Channels and Blacklist Times" .
Black list time will be 1800 seconds which is 30 minutes.
Radar Information
DFS stats on serving radio................... 0
DFS stats on RHL radio....................... 0
DFS stats triggered.......................... 0
Cumulative stats on serving radio............ 0
Cumulative stats on RHL radio................ 0
Cumulative stats triggered................... 0
Detected Channels
Channel 100................................ 5 seconds ago
Blacklist Times
Channel 100................................ 1795 seconds remaining
(Cisco Controller) >show ap channel APA453.0E7B.CF9C
Slot Id ..................................... 0
802.11b/g Current Channel ..................... 11*
Allowed Channel List........................... 1,2,3,4,5,6,7,8,9,10,11
Slot Id ..................................... 1
802.11a Current Channel ....................... (36,40) 40MHz / Cap 160MHz
Allowed Channel List........................... 36,40,44,48,52,56,60,64,100,
........................... 104,108,112,116,120,124,128,
........................... 132,136,140,144,149,153,157,
........................... 161,165
###########################
Password: [*02/09/2021 14:30:04.2290] Radio [1] Admininstrative state ENABLED change to DISABLED
[*02/09/2021 14:30:04.2300] DOT11_DRV[1]: Stop Radio1
[*02/09/2021 14:30:04.2520] DOT11_DRV[1]: DFS CAC timer enabled time 60
[*02/09/2021 14:30:04.2740] DOT11_DRV[1]: DFS CAC timer enabled time 60
[*02/09/2021 14:30:04.2740] Stopped Radio 1
[*02/09/2021 14:30:36.2810] Radio [1] Admininstrative state DISABLED change to ENABLED
[*02/09/2021 14:30:36.3160] DOT11_DRV[1]: set_channel Channel set to 52/20 <<<<<< ????
[*02/09/2021 14:30:36.3390] wlc_ucode_download: wl0: Loading 129 MU ucode
[*02/09/2021 14:30:36.4420] wlc_ucode_download: wl0: Loading 129 MU ucode
[*02/09/2021 14:30:36.5440] wlc_ucode_download: wl0: Loading 129 MU ucode
[*02/09/2021 14:30:36.6490] DOT11_DRV[1]: DFS CAC timer enabled time 60 <<<<<< ????
[*02/09/2021 14:30:37.2100] wl0: wlc_iovar_ext: vap_amsdu_rx_max: BCME -23
[*02/09/2021 14:30:37.2100] wl: Unsupported
[*02/09/2021 14:30:37.2100] ERROR: return from vap_amsdu_rx_max was -45
[*02/09/2021 14:30:37.4100] wlc_ucode_download: wl0: Loading 129 MU ucode
[*02/09/2021 14:30:37.5040] DOT11_CFG[1]: Starting radio 1
[*02/09/2021 14:30:37.5050] DOT11_DRV[1]: Start Radio1 <<<<<<<<<
[*02/09/2021 14:30:37.5120] DOT11_DRV[1]: set_channel Channel set to 52/20
[*02/09/2021 14:30:37.5340] wlc_ucode_download: wl0: Loading 129 MU ucode
[*02/09/2021 14:30:37.6360] wlc_ucode_download: wl0: Loading 129 MU ucode
[*02/09/2021 14:30:37.7370] wlc_ucode_download: wl0: Loading 129 MU ucode
[*02/09/2021 14:30:37.8410] DOT11_DRV[1]: DFS CAC timer enabled time 60 <<<<<<<
[*02/09/2021 14:30:37.8650] wlc_ucode_download: wl0: Loading 129 MU ucode
[*02/09/2021 14:30:37.9800] changed to DFS channel 52, running CAC for 60 seconds. <<<<< Note use this one
[*02/09/2021 14:30:38.0020] Started Radio 1 <<<<< After start radio
[*02/09/2021 14:31:07.4650] wl0: wlc_iovar_ext: olpc_cal_force: BCME -16
[*02/09/2021 14:31:38.4210] CAC_EXPIRY_EVT: CAC finished on DFS channel 52 <<<<<< Start with this very unique CAC finished
[*02/09/2021 14:31:48.2850] chatter: client_ip_table :: ClientIPTable no client entry found, dropping packet 04:F0:21:F
# note lf_hackrf.py begins transmitting immediately... see if that is what is to happen?
[*02/09/2021 15:20:53.7470] wcp/dfs :: RadarDetection: radar detected <<<<< Radar detected
[*02/09/2021 15:20:53.7470] wcp/dfs :: RadarDetection: sending packet out to capwapd, slotId=1, msgLen=386, chanCnt=1 2
[*02/09/2021 15:20:53.7720] DOT11_DRV[1]: DFS CAC timer disabled time 0
[*02/09/2021 15:20:53.7780] Enabling Channel and channel width Switch Announcement on current channel
[*02/09/2021 15:20:53.7870] DOT11_DRV[1]: set_dfs Channel set to 36/20, CSA count 6 <<<<<<< Channel Set
[*02/09/2021 15:20:53.8530] DOT11_DRV[1]: DFS CAC timer enabled time 60
Trying another station
*02/09/2021 15:25:32.6130] Radio [1] Admininstrative state ENABLED change to DISABLED
[*02/09/2021 15:25:32.6450] DOT11_DRV[1]: Stop Radio1
[*02/09/2021 15:25:32.6590] Stopped Radio 1
[*02/09/2021 15:25:52.1700] wlc_ucode_download: wl0: Loading 129 MU ucode
[*02/09/2021 15:26:04.6640] Radio [1] Admininstrative state DISABLED change to ENABLED
[*02/09/2021 15:26:04.6850] DOT11_DRV[1]: set_channel Channel set to 36/20
[*02/09/2021 15:26:04.7070] wlc_ucode_download: wl0: Loading 129 MU ucode
[*02/09/2021 15:26:04.8090] wlc_ucode_download: wl0: Loading 129 MU ucode
[*02/09/2021 15:26:04.9090] wlc_ucode_download: wl0: Loading 129 MU ucode
[*02/09/2021 15:26:05.5620] wl0: wlc_iovar_ext: vap_amsdu_rx_max: BCME -23
[*02/09/2021 15:26:05.5620] wl: Unsupported
[*02/09/2021 15:26:05.5620] ERROR: return from vap_amsdu_rx_max was -45
[*02/09/2021 15:26:05.7600] wlc_ucode_download: wl0: Loading 129 MU ucode
[*02/09/2021 15:26:05.8530] DOT11_CFG[1]: Starting radio 1
[*02/09/2021 15:26:05.8540] DOT11_DRV[1]: Start Radio1
[*02/09/2021 15:26:05.8610] DOT11_DRV[1]: set_channel Channel set to 36/20
[*02/09/2021 15:26:05.8830] wlc_ucode_download: wl0: Loading 129 MU ucode
[*02/09/2021 15:26:05.9890] wlc_ucode_download: wl0: Loading 129 MU ucode
[*02/09/2021 15:26:06.0900] wlc_ucode_download: wl0: Loading 129 MU ucode
[*02/09/2021 15:26:06.2080] wlc_ucode_download: wl0: Loading 129 MU ucode
[*02/09/2021 15:26:06.5350] Started Radio 1
[*02/09/2021 15:26:15.9750] chatter: client_ip_table :: ClientIPTable no client entry found, dropping packet 04:F0:21:
Username: [*02/09/2021 15:33:49.8680] Radio [1] Admininstrative state ENABLED change to DISABLED
[*02/09/2021 15:33:49.9010] DOT11_DRV[1]: Stop Radio1
[*02/09/2021 15:33:49.9160] Stopped Radio 1
[*02/09/2021 15:34:14.4150] DOT11_DRV[1]: set_channel Channel set to 56/20
[*02/09/2021 15:34:14.4370] wlc_ucode_download: wl0: Loading 129 MU ucode
[*02/09/2021 15:34:14.5390] wlc_ucode_download: wl0: Loading 129 MU ucode
[*02/09/2021 15:34:14.6400] wlc_ucode_download: wl0: Loading 129 MU ucode
[*02/09/2021 15:34:14.7450] DOT11_DRV[1]: DFS CAC timer enabled time 60
[*02/09/2021 15:34:21.9160] Radio [1] Admininstrative state DISABLED change to ENABLED
[*02/09/2021 15:34:21.9370] DOT11_DRV[1]: set_channel Channel set to 56/20
[*02/09/2021 15:34:21.9590] wlc_ucode_download: wl0: Loading 129 MU ucode
[*02/09/2021 15:34:22.0610] wlc_ucode_download: wl0: Loading 129 MU ucode
[*02/09/2021 15:34:22.1610] wlc_ucode_download: wl0: Loading 129 MU ucode
[*02/09/2021 15:34:22.2650] DOT11_DRV[1]: DFS CAC timer enabled time 60
[*02/09/2021 15:34:22.8270] wl0: wlc_iovar_ext: vap_amsdu_rx_max: BCME -23
[*02/09/2021 15:34:22.8270] wl: Unsupported
[*02/09/2021 15:34:22.8270] ERROR: return from vap_amsdu_rx_max was -45
[*02/09/2021 15:34:23.0280] wlc_ucode_download: wl0: Loading 129 MU ucode
[*02/09/2021 15:34:23.1210] DOT11_CFG[1]: Starting radio 1
[*02/09/2021 15:34:23.1210] DOT11_DRV[1]: Start Radio1
[*02/09/2021 15:34:23.1280] DOT11_DRV[1]: set_channel Channel set to 56/20
[*02/09/2021 15:34:23.1510] wlc_ucode_download: wl0: Loading 129 MU ucode
[*02/09/2021 15:34:23.2520] wlc_ucode_download: wl0: Loading 129 MU ucode
[*02/09/2021 15:34:23.3520] wlc_ucode_download: wl0: Loading 129 MU ucode
[*02/09/2021 15:34:23.4560] DOT11_DRV[1]: DFS CAC timer enabled time 60
[*02/09/2021 15:34:23.4800] wlc_ucode_download: wl0: Loading 129 MU ucode
[*02/09/2021 15:34:23.5960] changed to DFS channel 56, running CAC for 60 seconds.
[*02/09/2021 15:34:23.6180] Started Radio 1
'''
if __name__ == '__main__':
main()
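The DFS notes above say to verify that the CAC timer expires 60-61 seconds after the "running CAC" log line. A hedged sketch (a hypothetical helper, not part of ap_ctl.py) that extracts the bracketed timestamps from the quoted AP log lines and computes the delta:

```python
import re
from datetime import datetime

# AP log lines quoted in the DFS notes above.
LOG = """\
[*07/07/2020 23:37:48.1460] changed to DFS channel 52, running CAC for 60 seconds.
[*07/07/2020 23:38:48.7240] CAC_EXPIRY_EVT: CAC finished on DFS channel 52
"""

def cac_duration(log_text):
    """Return seconds between 'running CAC' and CAC_EXPIRY_EVT, or None."""
    stamp = r"\[\*(\d\d/\d\d/\d{4} \d\d:\d\d:\d\d\.\d+)\]"
    start = re.search(stamp + r" changed to DFS channel \d+, running CAC", log_text)
    end = re.search(stamp + r" CAC_EXPIRY_EVT", log_text)
    if not (start and end):
        return None
    fmt = "%m/%d/%Y %H:%M:%S.%f"
    t0 = datetime.strptime(start.group(1), fmt)
    t1 = datetime.strptime(end.group(1), fmt)
    return (t1 - t0).total_seconds()

print(cac_duration(LOG))  # should land in the 60-61 second window
```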
check_large_files.sh
@@ -1,6 +1,13 @@
#!/bin/bash
# ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- #
# Check for large files and purge the ones requested #
# #
# The -a switch will automatically purge core files when there #
# is only 5GB of space left on filesystem. #
# #
# To install as a cron-job, add the following line to /etc/crontab: #
# 1 * * * * root /home/lanforge/scripts/check_large_files.sh -a 2>&1 | logger -t check_large_files
# #
# ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- #
# set -x
# set -e
@@ -11,6 +18,9 @@ show_menu=1
verbose=0
quiet=0
starting_dir="$PWD"
cleanup_size_mb=$(( 1024 * 5 ))
# do not name this file "core_x" because it will get removed
lf_core_log="/home/lanforge/found_cores_log.txt"
USAGE="$0 # Check for large files and purge many of the most inconsequential
-a # automatic: disable menu and clean automatically
@@ -44,7 +54,7 @@ note() {
echo "# $1"
}
function contains() {
if [[ x$1 = x ]] || [[ x$2 = x ]]; then
echo "contains wants ARRAY and ITEM arguments: if contains name joe; then... }$"
exit 1
@@ -67,12 +77,60 @@ function contains () {
return 1
}
function remove() {
if [[ x$1 = x ]] || [[ x$2 = x ]]; then
echo "remove wants ARRAY and ITEM arguments: if contains name joe; then... }$"
exit 1
fi
# these two lines below are important to not modify
local tmp="${1}[@]"
local array=( ${!tmp} )
# if [[ x$verbose = x1 ]]; then
# printf "contains array %s\n" "${array[@]}"
# fi
if (( ${#array[@]} < 1 )); then
return 1
fi
local item
for i in "${!array[@]}"; do
if [[ ${array[$i]} = "$2" ]]; then
unset 'array[i]'
debug "removed $2 from $1"
return 0
fi
done
return 1
}
function disk_space_below() {
if [[ x$1 = x ]] || [[ x$2 = x ]]; then
echo "disk_free: needs to know what filesystem, size in bytes to alarm on"
return
fi
local amount_left_mb=$(df -BM --output=avail "$1" | tail -1 | tr -d ' M')
if (( amount_left_mb < $2 )); then
debug "amount left ${amount_left_mb}MB lt ${2}MB"
return 0
fi
debug "amount left ${amount_left_mb}MB ge ${2}MB"
return 1
}
# ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- #
# ----- ----- M A I N ----- ----- #
# ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- #
#opts=""
opts="abcdhklmqrtv"
while getopts $opts opt; do
case "$opt" in
a)
verbose=0
if contains "selections" "v"; then
verbose=1
else
verbose=0
fi
quiet=1
selections+=($opt)
show_menu=0
@@ -180,8 +238,9 @@ kernel_to_relnum() {
#set -euxv
local hunks=()
# 1>&2 echo "KERNEL RELNUM:[$1]"
local my1="${1/*[^0-9]-/}" # Dang, this is not intuitive to a PCRE user
#1>&2 echo "KERNEL [$1] REGEX:[$my1]"
my1="${my1//\+/}"
if [[ $my1 =~ ^[^0-9] ]]; then
1>&2 echo "BAD SERIES: [$1]"
exit 1
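The intent of the shell `kernel_to_relnum()` hunk above (strip the `vmlinuz-` prefix and any `+` before comparing versions) is easier to see in Python; this is a simplified, hypothetical rendering, not the shipped code:

```python
import re

def kernel_to_relnum(name):
    """Pull the dotted version out of a kernel file name like
    'vmlinuz-5.11.4-ct44+' and return it as a sortable tuple.
    '+' is dropped first, mirroring the my1="${my1//\\+/}" line."""
    m = re.search(r'(\d+)\.(\d+)\.(\d+)', name.replace('+', ''))
    if m is None:
        raise ValueError("BAD SERIES: [%s]" % name)
    return tuple(int(g) for g in m.groups())
```

Tuples compare element-by-element, so sorting these values orders kernels numerically rather than lexically.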
@@ -216,7 +275,7 @@ clean_old_kernels() {
if (( ${#removable_packages[@]} > 0 )); then
for f in "${removable_packages[@]}"; do
echo "$f\*"
done | xargs /usr/bin/rpm -hve
done | xargs /usr/bin/rpm --nodeps -hve
fi
if (( ${#removable_kernels[@]} > 0 )); then
for f in "${removable_kernels[@]}"; do
@@ -238,16 +297,42 @@ clean_core_files() {
debug "No core files ?"
return 0
fi
#set -vux
local counter=0
if [ ! -f "$lf_core_log" ]; then
touch "$lf_core_log"
fi
date +"%Y-%m-%d-%H:%M.%S" >> $lf_core_log
for f in "${core_files[@]}"; do
echo -n "-"
rm -f "$f"
counter=$(( counter + 1 ))
if (( ($counter % 100) == 0 )); then
sleep 0.2
fi
file "$f" >> "$lf_core_log"
done
note "Recorded ${#core_files[@]} core files to $lf_core_log: "
tail -n $(( 1 + ${#core_files[@]} )) $lf_core_log
local do_delete=0
if contains "selections" "a"; then
disk_space_below / $cleanup_size_mb && do_delete=$(( $do_delete + 1 ))
disk_space_below /home $cleanup_size_mb && do_delete=$(( $do_delete + 1 ))
(( $do_delete > 0)) && note "disk space below $cleanup_size_mb, removing core files"
elif contains "selections" "c"; then
do_delete=1
note "core file cleaning selected"
fi
if (( $do_delete > 0 )); then
for f in "${core_files[@]}"; do
echo -n "-"
rm -f "$f" && remove "core_files" "$f"
counter=$(( counter + 1 ))
if (( ($counter % 100) == 0 )); then
sleep 0.2
fi
done
else
note "disk space above $cleanup_size_mb, not removing core files"
fi
#set +vux
echo ""
totals[c]=0
survey_core_files
}
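The core-file loop above throttles itself: it sleeps 0.2 s after every 100 removals so deletion does not saturate the disk. A Python sketch of the same pattern (function name and defaults are illustrative):

```python
import os
import tempfile
import time

def remove_throttled(paths, batch=100, pause=0.2):
    """Delete the given files, sleeping briefly after every `batch`
    removals so the disk is not saturated -- same shape as the
    clean_core_files() loop."""
    removed = 0
    for count, path in enumerate(paths, start=1):
        try:
            os.remove(path)
            removed += 1
        except FileNotFoundError:
            continue
        if count % batch == 0:
            time.sleep(pause)
    return removed

# Demonstrate on a handful of scratch files.
scratch = tempfile.mkdtemp()
cores = [os.path.join(scratch, "core_%d" % i) for i in range(5)]
for c in cores:
    open(c, "w").close()
deleted = remove_throttled(cores)
```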
clean_lf_downloads() {
@@ -311,10 +396,11 @@ clean_dnf_cache() {
}
clean_mnt_lf_files() {
note "clean mnt lf files WIP"
note "cleaning mnt lf files..."
if (( $verbose > 0 )); then
printf "%s\n" "${mnt_lf_files[@]}"
fi
rm -f "${mnt_lf_files[@]}"
}
compress_report_data() {
@@ -323,7 +409,7 @@ compress_report_data() {
while read f; do
(( $verbose > 0 )) && echo " compressing $f"
gzip -9 "$f"
done < <(find /home/lanforge -iname "*.csv")
done < <(find /home/lanforge/report-data /home/lanforge/html-reports -iname "*.csv")
}
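The hunk above narrows the `find` from all of /home/lanforge to just the report directories before gzipping CSVs. A Python sketch of the same operation (directory names are taken from the hunk; the helper name is illustrative):

```python
import glob
import gzip
import os
import shutil
import tempfile

def compress_csvs(*roots):
    """gzip -9 every .csv under the given directories, removing the
    original -- the same effect as the compress_report_data() loop."""
    compressed = []
    for root in roots:
        for path in glob.glob(os.path.join(root, "**", "*.csv"), recursive=True):
            with open(path, "rb") as src, \
                 gzip.open(path + ".gz", "wb", compresslevel=9) as dst:
                shutil.copyfileobj(src, dst)
            os.remove(path)
            compressed.append(path)
    return compressed

# Demonstrate on a scratch directory instead of /home/lanforge/report-data.
scratch = tempfile.mkdtemp()
sample = os.path.join(scratch, "report.csv")
with open(sample, "w") as f:
    f.write("ts,val\n1,2\n")
done = compress_csvs(scratch)
```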
clean_var_tmp() {
@@ -356,6 +442,7 @@ survey_kernel_files() {
unset kernel_sort_serial
unset pkg_sort_names
unset libmod_sort_names
declare -A kernel_sort_serial=()
declare -A kernel_sort_names=()
declare -A pkg_sort_names=()
declare -A libmod_sort_names=()
@@ -373,9 +460,10 @@ survey_kernel_files() {
local file
local fiile
for file in "${kernel_files[@]}"; do
# echo "kernel_file [$file]"
debug "kernel_file [$file]"
[[ $file == /boot/initramfs* ]] && continue
[[ $file == *.fc*.x86_64 ]] && continue
[[ $file = *initrd-plymouth.img ]] && continue
fiile=$( basename $file )
fiile=${fiile%.img}
@@ -525,6 +613,7 @@ core_files=()
survey_core_files() {
debug "Surveying core files"
cd /
#set -vux
mapfile -t core_files < <(ls /core* /home/lanforge/core* 2>/dev/null)
if [[ $verbose = 1 ]] && (( ${#core_files[@]} > 0 )); then
printf " %s\n" "${core_files[@]}" | head
@@ -532,6 +621,7 @@ survey_core_files() {
if (( ${#core_files[@]} > 0 )); then
totals[c]=$(du -hc "${core_files[@]}" | awk '/total/{print $1}')
fi
#set +vux
#set +x
[[ x${totals[c]} = x ]] && totals[c]=0
cd "$starting_dir"
@@ -582,7 +672,7 @@ mnt_lf_files=()
survey_mnt_lf_files() {
[ ! -d /mnt/lf ] && return 0
debug "Surveying mnt lf"
mapfile -t mnt_lf_files < <(find /mnt/lf -type f --one_filesystem 2>/dev/null)
mapfile -t mnt_lf_files < <(find /mnt/lf -xdev -type f 2>/dev/null)
totals[m]=$(du -xhc "${mnt_lf_files[@]}" 2>/dev/null | awk '/total/{print $1}')
[[ x${totals[m]} = x ]] && totals[m]=0
}
@@ -610,7 +700,7 @@ survey_report_data() {
cd /home/lanforge
# set -veux
local fsiz=0
local fnum=$( find -type f -a -name '*.csv' 2>/dev/null ||: | wc -l )
local fnum=$( find -type f -a -name '*.csv' 2>/dev/null | wc -l ||:)
# if (( $verbose > 0 )); then
# hr
# find -type f -a -name '*.csv' 2>/dev/null ||:
@@ -702,8 +792,8 @@ fi
# ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- #
if contains "selections" "a" ; then
note "Automatic deletion will include: "
printf "%s\n" "${selections[@]}"
# note "Automatic deletion will include: "
# printf "%s\n" "${selections[@]}"
debug "Doing automatic cleanup"
for z in "${selections[@]}"; do
debug "Will perform ${desc[$z]}"
@@ -716,12 +806,13 @@ fi
if (( ${#selections[@]} > 0 )) ; then
debug "Doing selected cleanup: "
printf " %s\n" "${selections[@]}"
sleep 1
# printf " %s\n" "${selections[@]}"
# sleep 1
for z in "${selections[@]}"; do
debug "Performing ${desc[$z]}"
${cleaners_map[$z]}
selections=("${selections[@]/$z}")
# selections=("${selections[@]/$z}")
remove selections "$z"
done
survey_areas
disk_usage_report

View File

@@ -1,3 +1,4 @@
#!/usr/bin/env python3
''' Author: Nikita Yadav
This script calculates the CPU and memory utilization of the system at runtime. The output is a graph on an HTML page; a second tab of the page shows the system logs.

View File

@@ -5,7 +5,7 @@
# This script sets up connections of types:
# lf, lf_udp, lf_tcp, custom_ether, custom_udp, and custom_tcp
# across 1 real port and manny macvlan ports on 2 machines.
# across 1 real port and many macvlan ports on 2 machines.
# It then continuously starts and stops the connections.
# Un-buffer output

View File

@@ -1,4 +1,12 @@
#!/usr/bin/perl -w
# This program creates a UDP broadcast connection
# Written by Candela Technologies Inc.
# Updated by:
#
#
use strict;
use warnings;
use Carp;

View File

@@ -80,6 +80,7 @@ def main():
parser.add_argument("--duration", type=float, help="Duration to sniff, in minutes")
parser.add_argument("--moni_flags", type=str, help="Monitor port flags, see LANforge CLI help for set_wifi_monitor. Default enables 160Mhz")
parser.add_argument("--upstreams", type=str, help="Upstream ports to sniff (1.eth1 ...)")
parser.add_argument("--moni_idx", type=str, help="Optional monitor number", default=None)
args = None
try:
@@ -213,11 +214,13 @@ def main():
"--show_port", "Port"], stdout=PIPE, stderr=PIPE);
pss = port_stats.stdout.decode('utf-8', 'ignore');
moni_idx = "0"
for line in pss.splitlines():
m = re.search('Port:\s+(.*)', line)
if (m != None):
moni_idx = m.group(1)
moni_idx = args.moni_idx
if args.moni_idx is None:
for line in pss.splitlines():
m = re.search('Port:\s+(.*)', line)
if (m != None):
moni_idx = m.group(1)
# Create monitor interface
mname = "moni%sa"%(moni_idx);
@@ -259,7 +262,7 @@ def main():
print("Starting sniffer on port %s.%s for %s seconds, saving to file %s.pcap on resource %s\n"%(r, m, dur, m, r))
subprocess.run(["./lf_portmod.pl", "--manager", lfmgr,
"--cli_cmd", "sniff_port 1 %s %s NA %s %s.pcap %i"%(r, m, sflags, m, int(dur))]);
"--cli_cmd", "sniff_port 1 %s %s NA %s %s.pcap %i"%(r, m, sflags, m, float(dur))]);
idx = idx + 1
# Start sniffing on all upstream ports
@@ -273,7 +276,7 @@ def main():
print("Starting sniffer on upstream port %s.%s for %s seconds, saving to file %s.pcap on resource %s\n"%(u_resource, u_name, dur, u_name, u_resource))
subprocess.run(["./lf_portmod.pl", "--manager", lfmgr,
"--cli_cmd", "sniff_port 1 %s %s NA %s %s.pcap %i"%(u_resource, u_name, sflags, u_name, int(dur))]);
"--cli_cmd", "sniff_port 1 %s %s NA %s %s.pcap %i"%(u_resource, u_name, sflags, u_name, float(dur))]);
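The `int(dur)` to `float(dur)` change above matters because `%i` truncates a float, so fractional durations survive the string parse instead of raising on `int("90.5")`. A sketch of the command composition (resource, port, and flag values are illustrative):

```python
def sniff_port_cmd(resource, port, flags, duration_sec):
    """Compose the sniff_port CLI command handed to lf_portmod.pl.
    '%i' truncates float(duration_sec) to whole seconds, which is
    why float() accepts strings like '90.5' where int() would not."""
    return "sniff_port 1 %s %s NA %s %s.pcap %i" % (
        resource, port, flags, port, float(duration_sec))

cmd = sniff_port_cmd("1", "moni0a", "0x2", "90.5")
```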
# ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- -----
if __name__ == '__main__':

File diff suppressed because it is too large

View File

@@ -0,0 +1,70 @@
#!/usr/bin/env python3
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Class holds default settings for json requests to Grafana -
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
import sys
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit()
import requests
import json
class GrafanaRequest:
def __init__(self,
_grafanajson_host,
_grafanajson_port,
_folderID=0,
_api_token=None,
_headers=dict(),
_overwrite='false',
debug_=False,
die_on_error_=False):
self.debug = debug_
self.die_on_error = die_on_error_
self.headers = _headers
self.headers['Authorization'] = 'Bearer ' + _api_token
self.headers['Content-Type'] = 'application/json'
self.grafanajson_url = "http://%s:%s" % (_grafanajson_host, _grafanajson_port)
self.data = dict()
self.data['overwrite'] = _overwrite
def create_bucket(self,
bucket_name=None):
# Create a bucket in Grafana
if bucket_name is not None:
pass
def list_dashboards(self):
url = self.grafanajson_url + '/api/search?folderIds=0&query=&starred=false'
return requests.get(url).text
def create_dashboard(self,
dashboard_name=None):
# use a local URL so repeated calls do not keep appending the path
grafanajson_url = self.grafanajson_url + "/api/dashboards/db"
datastore = dict()
dashboard = dict()
dashboard['id'] = None
dashboard['title'] = dashboard_name
dashboard['tags'] = ['templated']
dashboard['timezone'] = 'browser'
dashboard['schemaVersion'] = 6
dashboard['version'] = 0
datastore['dashboard'] = dashboard
datastore['overwrite'] = False
data = json.dumps(datastore, indent=4)
return requests.post(grafanajson_url, headers=self.headers, data=data, verify=False)
def delete_dashboard(self,
dashboard_uid=None):
# the Grafana HTTP API removes dashboards with the DELETE verb
grafanajson_url = self.grafanajson_url + "/api/dashboards/uid/" + dashboard_uid
return requests.delete(grafanajson_url, headers=self.headers, verify=False)
def create_dashboard_from_data(self,
json_file=None):
pass
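The payload `create_dashboard()` posts to `/api/dashboards/db` can be built and inspected without a Grafana server; a minimal sketch (the dashboard title is illustrative, field names follow the class above):

```python
import json

def dashboard_payload(title, overwrite=False):
    """Build the JSON body GrafanaRequest.create_dashboard() sends
    to Grafana's /api/dashboards/db endpoint."""
    return json.dumps({
        "dashboard": {
            "id": None,
            "title": title,
            "tags": ["templated"],
            "timezone": "browser",
            "schemaVersion": 6,
            "version": 0,
        },
        "overwrite": overwrite,
    }, indent=4)

body = dashboard_payload("wifi-capacity")
```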

View File

@@ -10,6 +10,7 @@ if sys.version_info[0] != 3:
import pprint
import urllib
import time
import traceback
from urllib import request
from urllib import error
from urllib import parse
@@ -179,11 +180,12 @@ class LFRequest:
try:
resp = request.urlopen(myrequest)
resp_data = resp.read().decode('utf-8')
if (debug):
if (debug and die_on_error_):
print("----- LFRequest::json_post:128 debug: --------------------------------------------")
print("URL: %s :%d "% (self.requested_url, resp.status))
LFUtils.debug_printer.pprint(resp.getheaders())
print("----- resp_data -------------------------------------------------")
if resp.status != 200:
LFUtils.debug_printer.pprint(resp.getheaders())
print("----- resp_data:128 -------------------------------------------------")
print(resp_data)
print("-------------------------------------------------")
responses.append(resp)
@@ -219,7 +221,8 @@ class LFRequest:
print("----- Response: --------------------------------------------------------")
LFUtils.debug_printer.pprint(responses[0].reason)
print("------------------------------------------------------------------------")
if die_on_error_ or (error.code != 404):
if die_on_error_:
traceback.print_stack(limit=15)
exit(1)
except urllib.error.URLError as uerror:
if show_error:
@@ -227,6 +230,7 @@ class LFRequest:
print("Reason: %s; URL: %s"%(uerror.reason, myrequest.get_full_url()))
print("------------------------------------------------------------------------")
if (die_on_error_ == True) or (self.die_on_error == True):
traceback.print_stack(limit=15)
exit(1)
return None
@@ -259,36 +263,34 @@ class LFRequest:
myrequest = request.Request(url=self.requested_url,
headers=self.default_headers,
method=method_)
myresponses = []
try:
myresponses.append(request.urlopen(myrequest))
return myresponses[0]
except urllib.error.HTTPError as error:
if debug:
print("----- LFRequest::get:181 HTTPError: --------------------------------------------")
print("<%s> HTTP %s: %s"%(myrequest.get_full_url(), error.code, error.reason, ))
if error.code != 404:
if error.code == 404:
print("HTTP 404: <%s>" % myrequest.get_full_url())
else:
print("----- LFRequest::get:181 HTTPError: --------------------------------------------")
print("<%s> HTTP %s: %s"%(myrequest.get_full_url(), error.code, error.reason, ))
print("Error: ", sys.exc_info()[0])
print("Request URL:", myrequest.get_full_url())
print("Request Content-type:", myrequest.get_header('Content-type'))
print("Request Accept:", myrequest.get_header('Accept'))
print("Request Data:")
print("E Request URL:", myrequest.get_full_url())
print("E Request Content-type:", myrequest.get_header('Content-type'))
print("E Request Accept:", myrequest.get_header('Accept'))
print("E Request Data:")
LFUtils.debug_printer.pprint(myrequest.data)
if error.headers:
if (error.code != 404) and error.headers:
# the HTTPError is of type HTTPMessage a subclass of email.message
# print(type(error.keys()))
for headername in sorted(error.headers.keys()):
print ("Response %s: %s "%(headername, error.headers.get(headername)))
if len(myresponses) > 0:
print ("H Response %s: %s "%(headername, error.headers.get(headername)))
if (error.code != 404) and (len(myresponses) > 0):
print("----- Response: --------------------------------------------------------")
LFUtils.debug_printer.pprint(myresponses[0].reason)
print("------------------------------------------------------------------------")
if die_on_error_ == True:
# print("--------------------------------------------- s.doe %s v doe %s ---------------------------" % (self.die_on_error, die_on_error_))
print("------------------------------------------------------------------------")
if (error.code != 404) and (die_on_error_ == True):
traceback.print_stack(limit=15)
exit(1)
except urllib.error.URLError as uerror:
if debug:
@@ -296,6 +298,7 @@ class LFRequest:
print("Reason: %s; URL: %s"%(uerror.reason, myrequest.get_full_url()))
print("------------------------------------------------------------------------")
if die_on_error_ == True:
traceback.print_stack(limit=15)
exit(1)
return None

View File

@@ -365,6 +365,43 @@ def port_list_to_alias_map(json_list, debug_=False):
return reverse_map
def list_to_alias_map(json_list=None, from_element=None, debug_=False):
reverse_map = {}
if (json_list is None) or (len(json_list) < 1):
if debug_:
print("list_to_alias_map: no json_list provided")
raise ValueError("list_to_alias_map: no json_list provided")
return reverse_map
if debug_:
pprint.pprint(("list_to_alias_map:json_list: ", json_list))
json_interfaces = json_list
if from_element in json_list:
json_interfaces = json_list[from_element]
for record in json_interfaces:
if debug_:
pprint.pprint(("list_to_alias_map: %s record:" % from_element, record))
if len(record.keys()) < 1:
if debug_:
print("list_to_alias_map: no record.keys")
continue
record_keys = record.keys()
k2 = ""
# we expect one key in record keys, but we can't expect [0] to be populated
json_entry = None
for k in record_keys:
k2 = k
json_entry = record[k]
# skip uninitialized port records
if k2.find("Unknown") >= 0:
continue
port_json = record[k2]
reverse_map[k2] = json_entry
if debug_:
pprint.pprint(("list_to_alias_map: reverse_map", reverse_map))
return reverse_map
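The shape `list_to_alias_map()` handles is a LANforge-style list of one-key dicts; a self-contained simplification of that helper (this sketch drops the debug plumbing):

```python
def list_to_alias_map(records, from_element=None):
    """Collapse a list of one-key dicts such as
    [{"1.1.sta0": {...}}, ...] into {alias: record}. Entries whose
    key contains 'Unknown' (uninitialized ports) are skipped."""
    if from_element is not None and isinstance(records, dict):
        records = records.get(from_element, [])
    reverse_map = {}
    for record in records:
        for key, value in record.items():
            if "Unknown" in key:
                continue
            reverse_map[key] = value
    return reverse_map

aliases = list_to_alias_map(
    {"interfaces": [{"1.1.sta0": {"alias": "sta0"}},
                    {"1.1.Unknown": {}}]},
    from_element="interfaces")
```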
def findPortEids(resource_id=1, base_url="http://localhost:8080", port_names=(), debug=False):
return find_port_eids(resource_id=resource_id, base_url=base_url, port_names=port_names, debug=debug)
@@ -445,30 +482,59 @@ def waitUntilPortsDisappear(base_url="http://localhost:8080", port_list=[], debu
wait_until_ports_disappear(base_url, port_list, debug)
def wait_until_ports_disappear(base_url="http://localhost:8080", port_list=[], debug=False):
print("Waiting until ports disappear...")
if (port_list is None) or (len(port_list) < 1):
if debug:
print("LFUtils: wait_until_ports_disappear: empty list, zipping back")
return
print("LFUtils: Waiting until %s ports disappear..." % len(port_list))
url = "/port/1"
if isinstance(port_list, list):
found_stations = port_list.copy()
else:
found_stations = [port_list]
temp_names_by_resource = {1:[]}
temp_query_by_resource = {1:""}
for port_eid in port_list:
eid = name_to_eid(port_eid)
# shelf = eid[0]
resource_id = eid[1]
if resource_id == 0:
continue
if resource_id not in temp_names_by_resource.keys():
temp_names_by_resource[resource_id] = []
port_name = eid[2]
temp_names_by_resource[resource_id].append(port_name)
temp_query_by_resource[resource_id] = "%s/%s/%s?fields=alias" % (url, resource_id, ",".join(temp_names_by_resource[resource_id]))
if debug:
pprint.pprint(("temp_query_by_resource", temp_query_by_resource))
while len(found_stations) > 0:
found_stations = []
for port_eid in port_list:
eid = name_to_eid(port_eid)
shelf = eid[0]
resource_id = eid[1]
port_name = eid[2]
check_url = "%s/%s/%s" % (url, resource_id, port_name)
for (resource, check_url) in temp_query_by_resource.items():
if debug:
print("checking:" + check_url)
lf_r = LFRequest.LFRequest(base_url, check_url)
json_response = lf_r.get_as_json(debug_=debug)
if (json_response != None):
found_stations.append(port_name)
pprint.pprint([
("base_url", base_url),
("check_url", check_url),
])
lf_r = LFRequest.LFRequest(base_url, check_url, debug_=debug)
json_response = lf_r.get_as_json(debug_=debug, die_on_error_=False)
if (json_response == None):
print("Request returned None")
else:
if debug:
pprint.pprint(("wait_until_ports_disappear json_response:", json_response))
if "interface" in json_response:
found_stations.append(json_response["interface"])
elif "interfaces" in json_response:
mapped_list = list_to_alias_map(json_response, from_element="interfaces", debug_=debug)
found_stations.extend(mapped_list.keys())
if debug:
pprint.pprint([("port_list", port_list),
("found_stations", found_stations)])
if len(found_stations) > 0:
sleep(1)
if debug:
pprint.pprint(("wait_until_ports_disappear found_stations:", found_stations))
sleep(1) # safety
return
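`wait_until_ports_disappear()` is a poll-until-empty loop: query, sleep, re-query while anything is still present. The generic shape, as a sketch (the timeout parameter is an addition for safety; the original polls indefinitely):

```python
import time

def wait_until_gone(check_remaining, interval=1.0, timeout=10.0):
    """Poll check_remaining() -- which returns the list of items
    still present -- until it comes back empty or timeout elapses."""
    deadline = time.monotonic() + timeout
    remaining = check_remaining()
    while remaining and time.monotonic() < deadline:
        time.sleep(interval)
        remaining = check_remaining()
    return remaining

# Simulate stations vanishing one per poll.
pending = ["1.1.sta0", "1.1.sta1"]

def poll_ports():
    if pending:
        pending.pop()  # pretend one port disappeared this round
    return list(pending)

leftover = wait_until_gone(poll_ports, interval=0.01)
```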
@@ -483,12 +549,14 @@ def waitUntilPortsAppear(base_url="http://localhost:8080", port_list=(), debug=F
"""
return wait_until_ports_appear(base_url, port_list, debug=debug)
def name_to_eid(input):
rv = [1, 1, ""]
def name_to_eid(input, non_port=False):
rv = [1, 1, "", ""]
info = []
if (input is None) or (input == ""):
raise ValueError("name_to_eid wants eid like 1.1.sta0 but given[%s]" % input)
if type(input) is not str:
raise ValueError("name_to_eid wants string formatted like '1.2.name', not a tuple or list or [%s]" % type(input))
info = input.split('.')
if len(info) == 1:
rv[2] = info[0]; # just port name
@@ -514,6 +582,15 @@ def name_to_eid(input):
rv[2] = info[1]+"."+info[2]
return rv
if non_port:
# Maybe attenuator or similar shelf.card.atten.index
rv[0] = int(info[0])
rv[1] = int(info[1])
rv[2] = int(info[2])
if (len(info) >= 4):
rv[3] = int(info[3])
return rv
if len(info) == 4: # shelf.resource.port-name.qvlan
rv[0] = int(info[0])
rv[1] = int(info[1])

View File

@@ -0,0 +1,21 @@
# Flags for the add_l4_endp command
HTTP_auth_flags = {
"BASIC" : 0x1, # Basic authentication
"DIGEST" : 0x2, # Digest (MD5) authentication
"GSSNEGOTIATE" : 0x4, # GSS authentication
"NTLM" : 0x8, # NTLM authentication
}
proxy_auth_type_flags = {
"BASIC" : 0x1, # 1 Basic authentication
"DIGEST" : 0x2, # 2 Digest (MD5) authentication
"GSSNEGOTIATE" : 0x4, # 4 GSS authentication
"NTLM" : 0x8, # 8 NTLM authentication
"USE_PROXY_CACHE" : 0x20, # 32 Use proxy cache
"USE_GZIP_COMPRESSION" : 0x40, # 64 Use gzip compression
"USE_DEFLATE_COMPRESSION" : 0x80, # 128 Use deflate compression
"INCLUDE_HEADERS" : 0x100, # 256 especially for IMAP
"BIND_DNS" : 0x200, # 512 Make DNS requests go out the endpoint's Port.
"USE_IPV6" : 0x400, # 1024 Resolve URL as IPv6. Will use IPv4 if not selected.
"DISABLE_PASV" : 0x800, # 2048 Disable FTP PASV option (will use PORT command)
"DISABLE_EPSV" : 0x1000, # 4096 Disable FTP EPSV option
}
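These maps are meant to be OR-ed together into the single numeric flags field the `add_l4_endp` command takes; a small sketch (the helper name is illustrative):

```python
HTTP_auth_flags = {
    "BASIC": 0x1,         # Basic authentication
    "DIGEST": 0x2,        # Digest (MD5) authentication
    "GSSNEGOTIATE": 0x4,  # GSS authentication
    "NTLM": 0x8,          # NTLM authentication
}

def flags_value(flag_map, names):
    """OR together the named flag bits for a CLI flags field."""
    value = 0
    for name in names:
        value |= flag_map[name]
    return value

combined = flags_value(HTTP_auth_flags, ["BASIC", "NTLM"])
```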

View File

@@ -6,24 +6,30 @@ import traceback
# Extend this class to use common set of debug and request features for your script
from pprint import pprint
import time
import random
import string
import datetime
import argparse
import LANforge.LFUtils
from LANforge.LFUtils import *
from LANforge import LFRequest
import LANforge.LFRequest
import csv
import pandas as pd
import os
class LFCliBase:
SHOULD_RUN = 0 # indicates normal operation
SHOULD_QUIT = 1 # indicates to quit loops, close files, send SIGQUIT to threads and return
SHOULD_HALT = 2 # indicates to quit loops, send SIGABRT to threads and exit
# do not use `super(LFCLiBase,self).__init__(self, host, port, _debug)
# that is py2 era syntax and will force self into the host variable, making you
# very confused.
def __init__(self, _lfjson_host, _lfjson_port,
_debug=False,
_halt_on_error=False,
_exit_on_error=False,
_exit_on_fail=False,
_local_realm=None,
@@ -46,7 +52,6 @@ class LFCliBase:
# print("LFCliBase._proxy_str: %s" % _proxy_str)
self.lfclient_url = "http://%s:%s" % (self.lfclient_host, self.lfclient_port)
self.test_results = []
self.halt_on_error = _halt_on_error
self.exit_on_error = _exit_on_error
self.exit_on_fail = _exit_on_fail
self.capture_signals = _capture_signal_list
@@ -204,12 +209,12 @@ class LFCliBase:
if debug_ and (response_json_list_ is not None):
pprint.pprint(response_json_list_)
except Exception as x:
if debug_ or self.halt_on_error or self.exit_on_error:
if debug_ or self.exit_on_error:
print("json_post posted to %s" % _req_url)
pprint.pprint(_data)
print("Exception %s:" % x)
traceback.print_exception(Exception, x, x.__traceback__, chain=True)
if self.halt_on_error or self.exit_on_error:
if self.exit_on_error:
exit(1)
return json_response
@@ -242,12 +247,12 @@ class LFCliBase:
if debug_ and (response_json_list_ is not None):
pprint.pprint(response_json_list_)
except Exception as x:
if debug_ or self.halt_on_error or self.exit_on_error:
if debug_ or self.exit_on_error:
print("json_put submitted to %s" % _req_url)
pprint.pprint(_data)
print("Exception %s:" % x)
traceback.print_exception(Exception, x, x.__traceback__, chain=True)
if self.halt_on_error or self.exit_on_error:
if self.exit_on_error:
exit(1)
return json_response
@@ -265,17 +270,17 @@ class LFCliBase:
proxies_=self.proxy,
debug_=debug_,
die_on_error_=self.exit_on_error)
json_response = lf_r.get_as_json(debug_=debug_, die_on_error_=self.halt_on_error)
json_response = lf_r.get_as_json(debug_=debug_, die_on_error_=False)
#debug_printer.pprint(json_response)
if (json_response is None) and debug_:
print("LFCliBase.json_get: no entity/response, probably status 404")
return None
except ValueError as ve:
if debug_ or self.halt_on_error or self.exit_on_error:
if debug_ or self.exit_on_error:
print("jsonGet asked for " + _req_url)
print("Exception %s:" % ve)
traceback.print_exception(ValueError, ve, ve.__traceback__, chain=True)
if self.halt_on_error or self.exit_on_error:
if self.exit_on_error:
sys.exit(1)
return json_response
@@ -292,17 +297,18 @@ class LFCliBase:
proxies_=self.proxy,
debug_=debug_,
die_on_error_=self.exit_on_error)
json_response = lf_r.json_delete(debug=debug_, die_on_error_=self.halt_on_error)
json_response = lf_r.json_delete(debug=debug_, die_on_error_=False)
print(json_response)
#debug_printer.pprint(json_response)
if (json_response is None) and debug_:
print("LFCliBase.json_delete: no entity/response, probably status 404")
return None
except ValueError as ve:
if debug_ or self.halt_on_error or self.exit_on_error:
if debug_ or self.exit_on_error:
print("json_delete asked for " + _req_url)
print("Exception %s:" % ve)
traceback.print_exception(ValueError, ve, ve.__traceback__, chain=True)
if self.halt_on_error or self.exit_on_error:
if self.exit_on_error:
sys.exit(1)
# print("----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ")
return json_response
@@ -342,11 +348,6 @@ class LFCliBase:
# print("lfcli_base error: %s" % exception)
pprint.pprint(exception)
traceback.print_exception(Exception, exception, exception.__traceback__, chain=True)
if self.halt_on_error:
print("halting on error")
sys.exit(1)
# else:
# print("continuing...")
def check_connect(self):
if self.debug:
@@ -362,6 +363,7 @@ class LFCliBase:
if duration >= 300:
print("Could not connect to LANforge GUI")
sys.exit(1)
#return ALL messages in list form
def get_result_list(self):
return self.test_results
@@ -384,11 +386,11 @@ class LFCliBase:
def get_pass_message(self):
pass_messages = self.get_passed_result_list()
return "\n".join(pass_messages)
def get_fail_message(self):
fail_messages = self.get_failed_result_list()
return "\n".join(fail_messages)
def get_all_message(self):
return "\n".join(self.test_results)
@@ -407,7 +409,7 @@ class LFCliBase:
return False
#EXIT script with a fail
def exit_fail(self,message="%d out of %d tests failed. Exiting script with script failure."):
def exit_fail(self, message="%d out of %d tests failed. Exiting script with script failure."):
total_len=len(self.get_result_list())
fail_len=len(self.get_failed_result_list())
print(message %(fail_len,total_len))
@@ -421,13 +423,18 @@ class LFCliBase:
if self.exit_on_fail:
sys.exit(1)
#EXIT script with a success
def exit_success(self,message="%d out of %d tests passed successfully. Exiting script with script success."):
num_total=len(self.get_result_list())
num_passing=len(self.get_passed_result_list())
print(message %(num_passing,num_total))
sys.exit(0)
def success(self,message="%d out of %d tests passed successfully."):
num_total=len(self.get_result_list())
num_passing=len(self.get_passed_result_list())
print(message %(num_passing,num_total))
# use this inside the class to log a pass result and print if wished.
def _pass(self, message, print_=False):
self.test_results.append(self.pass_pref + message)
@@ -451,6 +458,63 @@ class LFCliBase:
# pprint.pprint(self.proxy)
def logg2(self, level="debug", mesg=None):
if (mesg is None) or (mesg == ""):
return
print("[{level}]: {msg}".format(level=level, msg=mesg))
def logg(self,
level=None,
mesg=None,
filename=None,
scriptname=None):
if (mesg is None) or (mesg == "") or (level is None):
return
userhome=os.path.expanduser('~')
session = str(datetime.datetime.now().strftime("%Y-%m-%d-%H-h-%M-m-%S-s")).replace(':','-')
if filename == None:
try:
os.mkdir("%s/report-data/%s" % (userhome, session))
except OSError:
pass
filename = ("%s/report-data/%s/%s.log" % (userhome,session,scriptname))
import logging
logging.basicConfig(filename=filename, level=logging.DEBUG)
if level == "debug":
logging.debug(mesg)
elif level == "info":
logging.info(mesg)
elif level == "warning":
logging.warning(mesg)
elif level == "error":
logging.error(mesg)
@staticmethod
def parse_time(time_string):
if isinstance(time_string, str):
pattern = re.compile(r"^(\d+)(ms|[smhdw])$")
td = pattern.match(time_string)
if td is not None:
dur_time = int(td.group(1))
dur_measure = str(td.group(2))
if dur_measure == "d":
duration_time = datetime.timedelta(days=dur_time)
elif dur_measure == "h":
duration_time = datetime.timedelta(hours=dur_time)
elif dur_measure == "m":
duration_time = datetime.timedelta(minutes=dur_time)
elif dur_measure == "ms":
duration_time = datetime.timedelta(milliseconds=dur_time)
elif dur_measure == "w":
duration_time = datetime.timedelta(weeks=dur_time)
else:
duration_time = datetime.timedelta(seconds=dur_time)
else:
raise ValueError("Cannot compute time string provided: %s" % time_string)
else:
raise ValueError("time_string must be of type str. Type %s provided" % type(time_string))
return duration_time
# This style of Action subclass for argparse can't do much unless we incorporate
# our argparse as a member of LFCliBase. Then we can do something like automatically
# parse our proxy string without using _init_ arguments
@@ -461,7 +525,10 @@ class LFCliBase:
# zelf.adjust_proxy(values)
@staticmethod
def create_bare_argparse(prog=None, formatter_class=None, epilog=None, description=None):
def create_bare_argparse(prog=None,
formatter_class=argparse.RawTextHelpFormatter,
epilog=None,
description=None):
if (prog is not None) or (formatter_class is not None) or (epilog is not None) or (description is not None):
parser = argparse.ArgumentParser(prog=prog,
formatter_class=formatter_class,
@@ -474,7 +541,7 @@ class LFCliBase:
required = parser.add_argument_group('required arguments')
optional.add_argument('--mgr', help='hostname for where LANforge GUI is running', default='localhost')
optional.add_argument('--mgr_port', help='port LANforge GUI HTTP service is running on', default=8080)
optional.add_argument('--debug', help='Enable debugging', default=False, action="store_true")
optional.add_argument('--debug', '-d', help='Enable debugging', default=False, action="store_true")
optional.add_argument('--proxy', nargs='?', default=None, # action=ProxyAction,
help='Connection proxy like http://proxy.localnet:80 or https://user:pass@proxy.localnet:3128')
@@ -486,7 +553,9 @@ class LFCliBase:
def create_basic_argparse(prog=None,
formatter_class=None,
epilog=None,
description=None):
description=None,
more_optional=None,
more_required=None):
if (prog is not None) or (formatter_class is not None) or (epilog is not None) or (description is not None):
parser = argparse.ArgumentParser(prog=prog,
formatter_class=formatter_class,
@@ -496,6 +565,7 @@ class LFCliBase:
parser = argparse.ArgumentParser()
optional = parser.add_argument_group('optional arguments')
required = parser.add_argument_group('required arguments')
#Optional Args
optional.add_argument('--mgr', help='hostname for where LANforge GUI is running', default='localhost')
optional.add_argument('--mgr_port', help='port LANforge GUI HTTP service is running on', default=8080)
@@ -507,16 +577,35 @@ class LFCliBase:
optional.add_argument('--debug', help='Enable debugging', default=False, action="store_true")
optional.add_argument('--proxy', nargs='?', default=None,
help='Connection proxy like http://proxy.localnet:80 or https://user:pass@proxy.localnet:3128')
if more_optional is not None:
for x in more_optional:
if 'default' in x.keys():
optional.add_argument(x['name'], help=x['help'], default=x['default'])
else:
optional.add_argument(x['name'], help=x['help'])
#Required Args
required.add_argument('--radio', help='radio EID, e.g: 1.wiphy2')
required.add_argument('--security', help='WiFi Security protocol: < open | wep | wpa | wpa2 | wpa3 >')
required.add_argument('--security', help='WiFi Security protocol: < open | wep | wpa | wpa2 | wpa3 >', default="open")
required.add_argument('--ssid', help='WiFi SSID for script objects to associate to')
required.add_argument('--passwd', '--password' ,'--key', help='WiFi passphrase/password/key')
required.add_argument('--passwd', '--password' ,'--key', help='WiFi passphrase/password/key', default="[BLANK]")
if more_required is not None:
for x in more_required:
if 'default' in x.keys():
required.add_argument(x['name'], help=x['help'], default=x['default'])
else:
required.add_argument(x['name'], help=x['help'])
return parser
# use this function to add an event You can see these events when watching websocket_client at 8081 port
def add_event(self, message=None, event_id="new", name="custom", priority=1, debug_=False):
def add_event(self,
message=None,
event_id="new",
name="custom",
priority=1,
debug_=False):
data = {
"event_id": event_id,
"details": message,
@@ -525,6 +614,25 @@ class LFCliBase:
}
self.json_post("/cli-json/add_event", data, debug_=debug_)
def read_file(self, filename):
with open(filename, 'r') as fh:
return [line.split(',') for line in fh.readlines()]
#Function creates random characters made of letters
def random_chars(self, size, chars=None):
if chars is None:
chars = string.ascii_letters
return ''.join(random.choice(chars) for x in range(size))
def get_milliseconds(self, timestamp):
return (timestamp - datetime.datetime(1970,1,1)).total_seconds()*1000
def get_seconds(self, timestamp):
return (timestamp - datetime.datetime(1970,1,1)).total_seconds()
def replace_special_char(self, text):
return text.replace('+', ' ').replace('_', ' ').strip(' ')
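The epoch helpers above assume naive UTC datetimes; subtracting `datetime(1970, 1, 1)` yields a timedelta whose total seconds are the Unix timestamp. A standalone sketch:

```python
import datetime

def get_milliseconds(timestamp):
    """Milliseconds since the Unix epoch for a naive UTC datetime,
    matching LFCliBase.get_milliseconds()."""
    return (timestamp - datetime.datetime(1970, 1, 1)).total_seconds() * 1000

# One full day after the epoch: 24 * 3600 * 1000 ms.
ms = get_milliseconds(datetime.datetime(1970, 1, 2))
```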
Help_Mode = """Station WiFi modes: use the number value below:
auto : 0,
a : 1,
@@ -540,6 +648,119 @@ class LFCliBase:
bgnAC : 11,
abgnAX : 12,
bgnAX : 13
"""
"""
#================ Pandas Dataframe Functions ======================================
# Save a dataframe in the requested output format; the target path is derived
# from the CSV save_path by swapping the file extension.
def df_to_file(self, output_f=None, dataframe=None, save_path=None):
    if output_f.lower() == 'hdf':
        import tables
        dataframe.to_hdf(save_path.replace('csv', 'h5', 1), 'table', append=True)
    elif output_f.lower() == 'parquet':
        import pyarrow as pa
        dataframe.to_parquet(save_path.replace('csv', 'parquet', 1), engine='pyarrow')
    elif output_f.lower() == 'png':
        fig = dataframe.plot().get_figure()
        fig.savefig(save_path.replace('csv', 'png', 1))
    elif output_f.lower() == 'xlsx':
        dataframe.to_excel(save_path.replace('csv', 'xlsx', 1))
    elif output_f.lower() == 'json':
        dataframe.to_json(save_path.replace('csv', 'json', 1))
    elif output_f.lower() == 'stata':
        dataframe.to_stata(save_path.replace('csv', 'dta', 1))
    elif output_f.lower() == 'pickle':
        dataframe.to_pickle(save_path.replace('csv', 'pkl', 1))
    elif output_f.lower() == 'html':
        dataframe.to_html(save_path.replace('csv', 'html', 1))
# Load a file into a dataframe (currently only CSV is supported)
def file_to_df(self,file_name):
if file_name.split('.')[-1] == 'csv':
return pd.read_csv(file_name)
#only works for test_ipv4_variable_time at the moment
def compare_two_df(self,dataframe_one=None,dataframe_two=None):
#df one = current report
#df two = compared report
pd.set_option("display.max_rows", None, "display.max_columns", None)
#get all of common columns besides Timestamp, Timestamp milliseconds
common_cols = list(set(dataframe_one.columns).intersection(set(dataframe_two.columns)))
cols_to_remove = ['Timestamp milliseconds epoch','Timestamp','LANforge GUI Build: 5.4.3']
com_cols = [i for i in common_cols if i not in cols_to_remove]
#check if dataframes have the same endpoints
if sorted(dataframe_one.name.unique().tolist()) == sorted(dataframe_two.name.unique().tolist()):
endpoint_names = dataframe_one.name.unique().tolist()
if com_cols is not None:
dataframe_one = dataframe_one[[c for c in dataframe_one.columns if c in com_cols]]
dataframe_two = dataframe_two[[c for c in dataframe_two.columns if c in com_cols]]
dataframe_one = dataframe_one.loc[:, ~dataframe_one.columns.str.startswith('Script Name:')]
dataframe_two = dataframe_two.loc[:, ~dataframe_two.columns.str.startswith('Script Name:')]
lowest_duration=min(dataframe_one['Duration elapsed'].max(),dataframe_two['Duration elapsed'].max())
print("The max duration in the new dataframe will be... " + str(lowest_duration))
compared_values_dataframe = pd.DataFrame(columns=[col for col in com_cols if not col.startswith('Script Name:')])
cols = compared_values_dataframe.columns.tolist()
cols=sorted(cols, key=lambda L: (L.lower(), L))
compared_values_dataframe= compared_values_dataframe[cols]
print(compared_values_dataframe)
for duration_elapsed in range(lowest_duration):
for endpoint in endpoint_names:
#check if value has a space in it or is a str.
# if value as a space, only take value before space for calc, append that calculated value after space.
#if str. check if values match from 2 df's. if values do not match, write N/A
for_loop_df1 = dataframe_one.loc[(dataframe_one['name'] == endpoint) & (dataframe_one['Duration elapsed'] == duration_elapsed)]
for_loop_df2 = dataframe_two.loc[(dataframe_two['name'] == endpoint) & (dataframe_two['Duration elapsed'] == duration_elapsed)]
# print(for_loop_df1)
# print(for_loop_df2)
cols_to_loop = [i for i in com_cols if i not in ['Duration elapsed', 'Name', 'Script Name: test_ipv4_variable_time']]
cols_to_loop=sorted(cols_to_loop, key=lambda L: (L.lower(), L))
print(cols_to_loop)
row_to_append={}
row_to_append["Duration elapsed"] = duration_elapsed
for col in cols_to_loop:
print(col)
print(for_loop_df1)
#print(for_loop_df2)
print(for_loop_df1.at[0, col])
print(for_loop_df2.at[0, col])
if type(for_loop_df1.at[0, col]) == str and type(for_loop_df2.at[0, col]) == str:
if ' ' in for_loop_df1.at[0, col]:
#do subtraction
new_value = float(for_loop_df1.at[0, col].split(" ")[0]) - float(for_loop_df2.at[0, col].split(" ")[0])
#add on last half of string
new_value = str(new_value)+ for_loop_df2.at[0, col].split(" ")[1]
# print(new_value)
row_to_append[col] = new_value
else:
if for_loop_df1.at[0, col] != for_loop_df2.at[0, col]:
row_to_append[col] = 'NaN'
else:
row_to_append[col] = for_loop_df1.at[0,col]
elif isinstance(for_loop_df1.at[0, col], (int, float)) and isinstance(for_loop_df2.at[0, col], (int, float)):
new_value = for_loop_df1.at[0, col] - for_loop_df2.at[0, col]
row_to_append[col] = new_value
compared_values_dataframe = compared_values_dataframe.append(row_to_append, ignore_index=True,)
print(compared_values_dataframe)
#add col name to new df
print(dataframe_one)
print(dataframe_two)
print(compared_values_dataframe)
else:
    raise ValueError("Unable to execute report comparison: the files do not share enough columns.")
else:
    raise ValueError("The two files do not have the same endpoints. Please run the comparison with files that share endpoints.")
#take those columns and separate those columns from others in DF.
pass
#return compared_df
def append_df_to_file(self,dataframe, file_name):
pass
# ~class


@@ -10,6 +10,8 @@ Follow our [getting started cookbook](http://www.candelatech.com/cookbook.php?vo
to learn more about how to operate your LANforge client.
## Getting Started ##
The first step is to run `update_deps.py`, located in `py-scripts`. It installs all dependencies that lanforge-scripts needs on your system.
New automation tests and JSON client scripts should go in `../py-scripts`. This directory
is intended for utility and library scripts. To use this module, make sure your include path
captures this module by adding it to your `sys.path`. We recommend your scripts in `../py-scripts`
@@ -33,7 +35,7 @@ begin with these imports:
`import LANforge`
`from LANforge import LFUtils`
`from LANforge import LFRequest`
## create_sta.py ##
Please read through `create_sta.py` to see how you can
utilize the JSON API provided by the LANforge client. It
is possible to use similar commands to create virtual Access points.
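As a sketch of what such a request looks like (field values below are illustrative, and the `flags` value is an assumption rather than this script's exact code; consult the `add_sta` CLI documentation for real flag values), creating a station amounts to a JSON POST to `/cli-json/add_sta`:

```python
#!/usr/bin/env python3
# Sketch: build an add_sta payload like create_sta.py would POST to the
# LANforge client at /cli-json/add_sta. Field values are illustrative.

def make_add_sta_payload(resource, radio, sta_name, ssid, passwd):
    return {
        "shelf": 1,
        "resource": resource,
        "radio": radio,
        "sta_name": sta_name,
        "ssid": ssid,
        "key": passwd,
        "mode": 0,                 # 0 == auto (see the Help_Mode table in lfcli_base)
        "mac": "xx:xx:xx:xx:*:xx", # template: randomize one octet
        "flags": 1024,             # illustrative flag value, not authoritative
    }

payload = make_add_sta_payload(1, "wiphy0", "sta0000", "jedway", "jedway-passwd")
# An LFRequest would then POST this, roughly:
#   lf_r = LFRequest.LFRequest("http://localhost:8080/cli-json/add_sta")
#   lf_r.addPostData(payload)
#   lf_r.jsonPost()
print(payload["sta_name"])
```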
@@ -41,73 +43,79 @@ is possible to use similar commands to create virtual Access points.
Example that creates a WANlink
## generic_cx.py ##
Example that creates a cross connect
## realm.py ##
Module defining the Realm class. `Realm` is a toolbox class that also serves as a facade for finer-grained methods in LFUtils and LFRequest:
* `def __init__()`: our constructor
* `def wait_until_ports_appear()`: takes a list of ports and waits until they all appear in the list of existing stations
* `def wait_until_ports_disappear()`: takes a list of ports and waits until they all disappear from the list of existing stations
* `def rm_port()`: takes a string in eid format and attempts to remove it
* `def port_exists()`: takes a string in eid format and returns a boolean depending on if the port exists
* `def admin_up()`: takes a string in eid format and attempts to set it to admin up
* `def admin_down()`: takes a string in eid format and attempts to set it to admin down
* `def reset_port()`: takes a string in eid format and requests a port reset
* `def rm_cx()`: takes a cross connect name as a string and attempts to remove it from LANforge
* `def rm_endp()`: takes an endpoint name as a string and attempts to remove it from LANforge
* `def set_endp_tos()`: attempts to set the ToS of a specified endpoint name
* `def stop_cx()`: attempts to stop a cross connect with the given name
* `def cleanup_cxe_prefix()`: attempts to remove all existing cross connects and endpoints
* `def channel_freq()`: takes a channel and returns its corresponding frequency
* `def freq_channel()`: takes a frequency and returns its corresponding channel
* `def wait_while_building()`: checks for OK or BUSY when querying cli-json/cv+is_built
* `def load()`: loads a database from the GUI
* `def cx_list()`: request json list of cross connects
* `def waitUntilEndpsAppear()`: takes a list of endpoints and waits until they all appear in the list of existing endpoints
  *(deprecated; use `wait_until_endps_appear()` instead)*
* `def wait_until_endps_appear()`: takes a list of endpoints and waits until they all appear in the list of existing endpoints
* `def waitUntilCxsAppear()`: takes a list of cross connects and waits until they all appear in the list of existing cross connects
  *(deprecated; use `wait_until_cxs_appear()` instead)*
* `def wait_until_cxs_appear()`: takes a list of cross connects and waits until they all appear in the list of existing cross connects
* `def station_map()`: request a map of stations via `/port/list` and alter the list to a name-based map of only stations
* `def station_list()`: request a list of stations
* `def vap_list()`: request a list of virtual APs
* `def remove_vlan_by_eid()`: a way of deleting a port/station/vAP
* `def find_ports_like()`: returns a list of ports matching a string prefix, like:
* `sta\*` matches names starting with `sta`
* `sta10+` matches names with port numbers 10 or greater
* `sta[10..20]` matches a range of stations including the range sta10 -- sta20
* `def name_to_eid()`: takes a name like `1.1.eth1` and returns it split into an array `[1, 1, "eth1"]`
* `def parse_time()`: returns numeric seconds when given strings like `1d`, `2h`, `3m` or `4s`
* `def parse_link()`:
* `def wait_for_ip()`: takes a list of stations and waits until they all have an IP address. Default wait time is 360 seconds;
  can take -1 as the timeout argument to determine the timeout based on mean IP acquisition time
* `def get_curr_num_ips()`: returns the number of stations with an IP address
* `def duration_time_to_seconds()`: returns an integer for a time string converted to seconds
* `def remove_all_stations()`: attempts to remove all currently existing stations
* `def remove_all_endps()`: attempts to remove all currently existing endpoints
* `def remove_all_cxs()`: attempts to remove all currently existing cross connects
* `def new_station_profile()`: creates a blank station profile; configure station properties in this profile
  and then use its `create()` method to create a series of stations
* `def new_multicast_profile()`: creates a blank multicast profile, configure it then call `create()`
* `def new_wifi_monitor_profile()`: creates a blank wifi monitor profile, configure it then call `create()`
* `def new_l3_cx_profile()`: creates a blank Layer-3 profile; configure this connection profile and
  then use its `create()` method to create a series of endpoints and cross connects
* `def new_l4_cx_profile()`: creates a blank Layer-4 (http/ftp) profile, configure it then call `create()`
* `def new_generic_endp_profile()`: creates a blank Generic endpoint profile, configure it then call `create()`
* `def new_generic_cx_profile()`: creates a blank Generic connection profile (for lfping/iperf3/curl-post/speedtest.net),
  then configure and call `create()`
* class `L3CXProfile`: this class is the Layer-3 connection profile **unfinished**
* `__init__`: should be called by `Realm::new_l3_cx_profile()`
* `create()`: pass endpoint-type, side-a list, side-b list, and sleep_time between creating endpoints and connections
* Parameters for this profile include:
* prefix
* txbps
* class `L4CXProfile`: this class is the Layer-4 connection profile **unfinished**
* `__init__`: should be called by `Realm::new_l4_cx_profile()`
* `create()`: pass a list of ports to create endpoints on, note that resulting cross connects are prefixed with `CX_`
* Parameters for this profile include:
* url
* requests_per_ten: number of requests to make in ten minutes
* class `GenCXProfile`: this class is the Generic connection profile **unfinished**
* `__init__`: should be called by `Realm::new_gen_cx_profile()`
* `create()`: pass a list of ports to create connections on
* Parameters for this profile include:
* type: includes lfping, iperf3, speedtest, lfcurl or cmd
* dest: IP address of destination for command
* class `StationProfile`: configure instances of this class for creating series of ports
* `__init__`: should be called by `Realm::new_station_profile()`
* `use_wpa2()`: pass on=True,ssid=a,passwd,b to set station_command_param add_sta/ssid, add_sta_key
pass on=False,ssid=a to turn off command_flag add_sta/flags/wpa2_enable
* `set_command_param()`
* `set_command_flag()`
* `set_prefix()`
* `add_named_flags()`
* `create()`: you can use either an integer number of stations or a list of station names, if you want to create a
specific range of ports, create the names first and do not specify `num_stations`
* resource: the resource number for the radio
* radio: name of the radio, like 'wiphy0'
* num_stations: `value > 0` indicates creating station series `sta0000..sta$value`
* sta_names_: a list of station names to create, please use `LFUtils.port_name_series()`
* dry_run: True avoids posting commands
* debug:
* class `PacketFilter`: This class provides filters that can be used with tshark
  * `def get_filter_wlan_assoc_packets()`: This packet filter will look for wlan.fc.type_subtype<=3. It takes
    two arguments: `ap_mac` and `sta_mac`
  * `def get_filter_wlan_null_packets()`: This packet filter will look for wlan.fc.type_subtype==44. It takes
    two arguments: `ap_mac` and `sta_mac`
  * `def run_filter()`: This function will run the filter specified by the `filter` argument on the pcap
    file specified by the `pcap_file` argument. It redirects this output into a txt file in /tmp
    and returns the lines in that file as an array.
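To make a couple of the entries above concrete, here is a small, illustrative re-implementation (not the library code itself) of the behaviors described for `name_to_eid()` and `parse_time()`:

```python
def name_to_eid(name):
    # "1.1.eth1" -> [1, 1, "eth1"]; a bare port name defaults to shelf 1, resource 1
    parts = name.split('.')
    if len(parts) == 3:
        return [int(parts[0]), int(parts[1]), parts[2]]
    return [1, 1, name]

def parse_time(time_string):
    # "1d", "2h", "3m", "4s" -> numeric seconds
    units = {'d': 86400, 'h': 3600, 'm': 60, 's': 1}
    return int(time_string[:-1]) * units[time_string[-1]]

print(name_to_eid("1.1.eth1"))  # [1, 1, 'eth1']
print(parse_time("2h"))         # 7200
```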
## realm_test.py ##
Exercises realm.py
## show_ports.py ##
This simple example shows how to gather a digest of ports
@@ -115,7 +123,7 @@ This simple example shows how to gather a digest of ports
Example of how to use LFRequest to create a L4 endpoint
## wct-example.py ##
Example of using expect on port 3990 to operate a WiFi Capacity Test
## ws-sta-monitor.py ##
Websocket 8081 client that filters interesting station events from the lfclient websocket
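The general shape of such a client is: connect a websocket to port 8081 and keep only the station-related events. A minimal filtering sketch (the event key names here are assumptions for illustration, not the script's exact schema):

```python
import json

def is_station_event(raw_message):
    # Keep events whose port name looks like a wlan station; key name is illustrative.
    try:
        event = json.loads(raw_message)
    except ValueError:
        return False
    name = event.get("name", "")
    return name.startswith(("sta", "wlan"))

print(is_station_event('{"name": "sta0000", "event": "associated"}'))  # True
print(is_station_event('{"name": "eth1", "event": "link-up"}'))        # False
```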
@@ -172,4 +180,3 @@ This directory defines the LANforge module holding the following classes:
Have fun coding!
support@candelatech.com


@@ -1,2 +1,5 @@
from .LFRequest import LFRequest
from .LANforge import LFUtils
from .LANforge import LFRequest
from .LANforge import lfcli_base
from .LANforge.lfcli_base import LFCliBase

py-json/base_profile.py Normal file

@@ -0,0 +1,130 @@
#!/usr/bin/env python3
import re
import time
import pprint
import csv
import datetime
import random
import string
#from LANforge.lfcriteria import LFCriteria
class BaseProfile:
def __init__(self, local_realm, debug=False):
self.parent_realm = local_realm
#self.halt_on_error = False
self.exit_on_error = False
self.debug = debug or local_realm.debug
self.profiles = []
def json_get(self, _req_url, debug_=False):
    # forward the caller's debug flag instead of hard-coding False
    return self.parent_realm.json_get(_req_url, debug_=debug_)
def json_post(self, req_url=None, data=None, debug_=False, suppress_related_commands_=None):
return self.parent_realm.json_post(_req_url=req_url,
_data=data,
suppress_related_commands_=suppress_related_commands_,
debug_=debug_)
def parse_time(self, time_string):
return self.parent_realm.parse_time(time_string)
def stopping_cx(self, name):
return self.parent_realm.stop_cx(name)
def cleanup_cxe_prefix(self, prefix):
return self.parent_realm.cleanup_cxe_prefix(prefix)
def rm_cx(self, cx_name):
return self.parent_realm.rm_cx(cx_name)
def rm_endp(self, ename, debug_=False, suppress_related_commands_=True):
    self.parent_realm.rm_endp(ename, debug_=debug_, suppress_related_commands_=suppress_related_commands_)
def name_to_eid(self, eid):
return self.parent_realm.name_to_eid(eid)
def set_endp_tos(self, ename, _tos, debug_=False, suppress_related_commands_=True):
    return self.parent_realm.set_endp_tos(ename, _tos, debug_=debug_, suppress_related_commands_=suppress_related_commands_)
def wait_until_endps_appear(self, these_endp, debug=False):
    return self.parent_realm.wait_until_endps_appear(these_endp, debug=debug)
def wait_until_cxs_appear(self, these_cx, debug=False):
    return self.parent_realm.wait_until_cxs_appear(these_cx, debug=debug)
def logg(self, message=None, audit_list=None):
    if audit_list is None:
        self.parent_realm.logg(message)
        return
    for item in audit_list:
        if item is None:
            continue
        message += ("\n" + pprint.pformat(item, indent=4))
    self.parent_realm.logg(message)
def replace_special_char(self, text):
    return text.replace('+', ' ').replace('_', ' ').strip(' ')
# @deprecate me
def get_milliseconds(self, timestamp):
return (timestamp - datetime.datetime(1970,1,1)).total_seconds()*1000
# @deprecate me
def get_seconds(self, timestamp):
return (timestamp - datetime.datetime(1970,1,1)).total_seconds()
def read_file(self, filename):
filename = open(filename, 'r')
return [line.split(',') for line in filename.readlines()]
# Generate a string of random letters (or of a supplied character set)
def random_chars(self, size, chars=None):
if chars is None:
chars = string.ascii_letters
return ''.join(random.choice(chars) for x in range(size))
#--------------- create file path / find file path code - to be put into functions
# #Find file path to save data/csv to:
# if args.report_file is None:
# new_file_path = str(datetime.datetime.now().strftime("%Y-%m-%d-%H-h-%M-m-%S-s")).replace(':',
# '-') + '-test_ipv4_variable_time' # create path name
# try:
# path = os.path.join('/home/lanforge/report-data/', new_file_path)
# os.mkdir(path)
# except:
# curr_dir_path = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# path = os.path.join(curr_dir_path, new_file_path)
# os.mkdir(path)
# if args.output_format in ['csv', 'json', 'html', 'hdf','stata', 'pickle', 'pdf', 'png', 'parquet',
# 'xlsx']:
# report_f = str(path) + '/data.' + args.output_format
# output = args.output_format
# else:
# print('Not supporting this report format or cannot find report format provided. Defaulting to csv data file output type, naming it data.csv.')
# report_f = str(path) + '/data.csv'
# output = 'csv'
# else:
# report_f = args.report_file
# if args.output_format is None:
# output = str(args.report_file).split('.')[-1]
# else:
# output = args.output_format
# print("Saving final report data in ... " + report_f)
# compared_rept=None
# if args.compared_report:
# compared_report_format=args.compared_report.split('.')[-1]
# #if compared_report_format not in ['csv', 'json', 'dta', 'pkl','html','xlsx','parquet','h5']:
# if compared_report_format != 'csv':
# print(ValueError("Cannot process this file type. Please select a different file and re-run script."))
# exit(1)
# else:
# compared_rept=args.compared_report


@@ -1,8 +1,10 @@
#!/usr/bin/python3
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# example of how to create a WAN Link using JSON -
# -
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Create and modify WAN Links Using LANforge JSON AP : http://www.candelatech.com/cookbook.php?vol=cli&book=JSON:+Managing+WANlinks+using+JSON+and+Python
# Written by Candela Technologies Inc.
# Updated by:
import sys
import urllib
@@ -23,13 +25,16 @@ j_printer = pprint.PrettyPrinter(indent=2)
# typically you're using resource 1 in stand alone realm
resource_id = 1
def main():
base_url = "http://localhost:8080"
def main(base_url="http://localhost:8080"):
json_post = ""
json_response = ""
num_wanlinks = -1
# see if there are old wanlinks to remove
lf_r = LFRequest.LFRequest(base_url+"/wl/list")
print(lf_r.get_as_json())
port_a ="rd0a"
port_b ="rd1a"
try:
json_response = lf_r.getAsJson()
LFUtils.debug_printer.pprint(json_response)
@@ -69,7 +74,7 @@ def main():
'alias': 'wl_eg1-A',
'shelf': 1,
'resource': '1',
'port': 'eth3',
'port': port_a,
'latency': '75',
'max_rate': '128000',
'description': 'cookbook-example'
@@ -83,7 +88,7 @@ def main():
'alias': 'wl_eg1-B',
'shelf': 1,
'resource': '1',
'port': 'eth5',
'port': port_b,
'latency': '95',
'max_rate': '256000',
'description': 'cookbook-example'

py-json/cv_commands.py Normal file

@@ -0,0 +1,90 @@
"""
Note: This is a library file used to create a chamber view scenario.
import this file as showed in create_chamberview.py to create a scenario
"""
import time
# !/usr/bin/env python3
# ---- ---- ---- ---- LANforge Base Imports ---- ---- ---- ----
from LANforge.lfcli_base import LFCliBase
class chamberview(LFCliBase):
def __init__(self,
lfclient_host="localhost",
lfclient_port=8080,
):
super().__init__(_lfjson_host=lfclient_host,
_lfjson_port=lfclient_port)
#behaves same as chamberview manage scenario
def manage_cv_scenario(self,
scenario_name="Automation",
Resources="1.1",
Profile="STA-AC",
Amount="1",
DUT="DUT",
Dut_Radio="Radio-1" ,
Uses1="wiphy0",
Uses2="AUTO",
Traffic="http",
Freq="-1",
VLAN=""):
req_url = "/cli-json/add_text_blob"
text_blob = "profile_link" + " " + Resources + " " + Profile + " " + Amount + " " + "\'DUT:" + " " + DUT\
+ " " + Dut_Radio + "\' " + Traffic + " " + Uses1 + ","+Uses2 + " " + Freq + " " + VLAN
print(text_blob)
data = {
"type": "Network-Connectivity",
"name": scenario_name,
"text": text_blob
}
rsp = self.json_post(req_url, data)
time.sleep(2)
def show_changes(self,scenario_name):
req_url = "/cli-json/show_text_blob"
data = {
"type": "ALL",
"name": "ALL",
"brief": "brief"
}
rsp = self.json_post(req_url, data)
print(rsp)
print("scenario is pushed")
# These methods drive the Chamber View buttons
def apply_cv_scenario(self, cv_scenario):
    cmd = "cv apply '%s'" % cv_scenario  # apply the scenario
    self.run_cv_cmd(cmd)
    print("Apply scenario")
def build_cv_scenario(self):  # build the Chamber View scenario
    cmd = "cv build"
    self.run_cv_cmd(cmd)
    print("Build scenario")
def is_cv_build(self):  # check whether the scenario is built
    cmd = "cv is_build"
    self.run_cv_cmd(cmd)
def sync_cv(self):  # sync Chamber View
    cmd = "cv sync"
    self.run_cv_cmd(cmd)
def run_cv_cmd(self, command):  # send Chamber View commands
req_url = "/gui-json/cmd"
data = {
"cmd": command
}
rsp = self.json_post(req_url, data)
print(rsp)
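# Example (standalone sketch, not part of the class above): the text blob that
# manage_cv_scenario() composes has this shape; writing it as a helper makes
# the "profile_link" format easy to see. Argument values are illustrative.
def build_profile_link(resources, profile, amount, dut, dut_radio,
                       traffic, uses1, uses2, freq, vlan):
    return ("profile_link " + resources + " " + profile + " " + amount + " " +
            "'DUT: " + dut + " " + dut_radio + "' " + traffic + " " +
            uses1 + "," + uses2 + " " + freq + " " + vlan)

# build_profile_link("1.1", "STA-AC", "1", "DUT", "Radio-1",
#                    "http", "wiphy0", "AUTO", "-1", "")
# yields: profile_link 1.1 STA-AC 1 'DUT: DUT Radio-1' http wiphy0,AUTO -1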

py-json/cv_test_manager.py Normal file

@@ -0,0 +1,385 @@
"""
Note: This script is working as library for chamberview tests.
It holds different commands to automate test.
"""
import time
from LANforge.lfcli_base import LFCliBase
from realm import Realm
import json
from pprint import pprint
import argparse
from cv_test_reports import lanforge_reports as lf_rpt
from csv_to_influx import *
def cv_base_adjust_parser(args):
if args.test_rig != "":
# TODO: In future, can use TestRig once that GUI update has propagated
args.set.append(["Test Rig ID:", args.test_rig])
if args.influx_host is not None:
if not args.pull_report:
    print("Specified influx host without pull_report, will enable pull_report.")
args.pull_report = True
def cv_add_base_parser(parser):
parser.add_argument("-m", "--mgr", type=str, default="localhost",
help="address of the LANforge GUI machine (localhost is default)")
parser.add_argument("-o", "--port", type=int, default=8080,
help="IP Port the LANforge GUI is listening on (8080 is default)")
parser.add_argument("--lf_user", type=str, default="lanforge",
help="LANforge username to pull reports")
parser.add_argument("--lf_password", type=str, default="lanforge",
help="LANforge Password to pull reports")
parser.add_argument("-i", "--instance_name", type=str,
help="create test instance")
parser.add_argument("-c", "--config_name", type=str,
help="Config file name")
parser.add_argument("-r", "--pull_report", default=False, action='store_true',
help="pull reports from lanforge (by default: False)")
parser.add_argument("--load_old_cfg", default=False, action='store_true',
help="Should we first load defaults from previous run of the capacity test? Default is False")
parser.add_argument("--enable", action='append', nargs=1, default=[],
help="Specify options to enable (set cfg-file value to 1). See example raw text config for possible options. May be specified multiple times. Most tests are enabled by default, except: longterm")
parser.add_argument("--disable", action='append', nargs=1, default=[],
help="Specify options to disable (set value to 0). See example raw text config for possible options. May be specified multiple times.")
parser.add_argument("--set", action='append', nargs=2, default=[],
help="Specify options to set values based on their label in the GUI. Example: --set 'Basic Client Connectivity' 1 May be specified multiple times.")
parser.add_argument("--raw_line", action='append', nargs=1, default=[],
help="Specify lines of the raw config file. Example: --raw_line 'test_rig: Ferndale-01-Basic' See example raw text config for possible options. This is catch-all for any options not available to be specified elsewhere. May be specified multiple times.")
parser.add_argument("--raw_lines_file", default="",
help="Specify a file of raw lines to apply.")
# Reporting info
parser.add_argument("--test_rig", default="",
help="Specify the test rig info for reporting purposes, for instance: testbed-01")
influx_add_parser_args(parser) # csv_to_influx
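# Sketch (standalone illustration): cv_add_base_parser() above decorates an
# argparse parser, so parsing an empty argv yields the documented defaults.
# This demo re-implements a small subset of those arguments only.
import argparse

def _demo_base_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument("-m", "--mgr", type=str, default="localhost")
    parser.add_argument("-o", "--port", type=int, default=8080)
    parser.add_argument("-r", "--pull_report", default=False, action='store_true')
    parser.add_argument("--set", action='append', nargs=2, default=[])
    return parser

# _demo_base_parser().parse_args([]) -> mgr="localhost", port=8080, pull_report=False
# _demo_base_parser().parse_args(["--set", "Test Rig ID:", "tb-01"]).set
#     -> [["Test Rig ID:", "tb-01"]]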
class cv_test(Realm):
def __init__(self,
lfclient_host="localhost",
lfclient_port=8080,
):
super().__init__(lfclient_host=lfclient_host,
lfclient_port=lfclient_port)
self.report_dir=""
# Add a config line to a text blob. Will create new text blob
# if none exists already.
def create_test_config(self, config_name, blob_test_name, text):
req_url = "/cli-json/add_text_blob"
data = {
"type": "Plugin-Settings",
"name": str(blob_test_name + config_name),
"text": text
}
print("adding- " + text + " " + "to test config")
rsp = self.json_post(req_url, data)
# time.sleep(1)
# Tell LANforge GUI Chamber View to launch a test
def create_test(self, test_name, instance, load_old_cfg):
cmd = "cv create '{0}' '{1}' '{2}'".format(test_name, instance, load_old_cfg)
return self.run_cv_cmd(str(cmd))
# Tell LANforge chamber view to load a scenario.
def load_test_scenario(self, instance, scenario):
cmd = "cv load '{0}' '{1}'".format(instance, scenario)
self.run_cv_cmd(cmd)
#load test config for a chamber view test instance.
def load_test_config(self, test_config, instance):
cmd = "cv load '{0}' '{1}'".format(instance, test_config)
self.run_cv_cmd(cmd)
#start the test
def start_test(self, instance):
cmd = "cv click '%s' Start" % instance
return self.run_cv_cmd(cmd)
#close test
def close_test(self, instance):
cmd = "cv click '%s' 'Close'" % instance
self.run_cv_cmd(cmd)
#Cancel
def cancel_test(self, instance):
cmd = "cv click '%s' Cancel" % instance
self.run_cv_cmd(cmd)
# Send chamber view commands to the LANforge GUI
def run_cv_cmd(self, command):
response_json = []
req_url = "/gui-json/cmd"
data = {
    "cmd": command
}
rsp = self.json_post(req_url, data, debug_=False, response_json_list_=response_json)
return response_json
#For auto save report
def auto_save_report(self, instance):
cmd = "cv click %s 'Auto Save Report'" % instance
self.run_cv_cmd(cmd)
#To get the report location
def get_report_location(self, instance):
cmd = "cv get %s 'Report Location:'" % instance
location = self.run_cv_cmd(cmd)
return location
#To get if test is running or not
def get_is_running(self, instance):
cmd = "cv get %s 'StartStop'" % instance
val = self.run_cv_cmd(cmd)
#pprint(val)
return val[0]["LAST"]["response"] == 'StartStop::Stop'
#To save to html
def save_html(self, instance):
cmd = "cv click %s 'Save HTML'" % instance
self.run_cv_cmd(cmd)
#Check if test instance exists
def get_exists(self, instance):
cmd = "cv exists %s" % instance
val = self.run_cv_cmd(cmd)
#pprint(val)
return val[0]["LAST"]["response"] == 'YES'
#Check if chamberview is built
def get_cv_is_built(self):
cmd = "cv is_built"
val = self.run_cv_cmd(cmd)
#pprint(val)
rv = val[0]["LAST"]["response"] == 'YES'
print("is-built: ", rv)
return rv
#delete the test instance
def delete_instance(self, instance):
cmd = "cv delete %s" % instance
self.run_cv_cmd(cmd)
# It can take a while, some test rebuild the old scenario upon exit, for instance.
tries = 0
while (True):
if self.get_exists(instance):
print("Waiting %i/60 for test instance: %s to be deleted."%(tries, instance))
tries += 1
if (tries > 60):
break
time.sleep(1)
else:
break
# And make sure chamber-view is properly re-built
tries = 0
while (True):
if not self.get_cv_is_built():
print("Waiting %i/60 for Chamber-View to be built."%(tries))
tries += 1
if (tries > 60):
break
time.sleep(1)
else:
break
#Get port listing
def get_ports(self):
response = self.json_get("/ports/")
return response
def show_text_blob(self, config_name, blob_test_name, brief):
req_url = "/cli-json/show_text_blob"
data = {"type": "Plugin-Settings"}
if config_name and blob_test_name:
data["name"] = "%s%s"%(blob_test_name, config_name) # config name
else:
data["name"] = "ALL"
if brief:
data["brief"] = "brief"
return self.json_post(req_url, data)
def rm_text_blob(self, config_name, blob_test_name):
req_url = "/cli-json/rm_text_blob"
data = {
"type": "Plugin-Settings",
"name": str(blob_test_name + config_name), # config name
}
rsp = self.json_post(req_url, data)
def apply_cfg_options(self, cfg_options, enables, disables, raw_lines, raw_lines_file):
# Read in calibration data and whatever else.
if raw_lines_file != "":
with open(raw_lines_file) as fp:
line = fp.readline()
while line:
cfg_options.append(line)
line = fp.readline()
fp.close()
for en in enables:
cfg_options.append("%s: 1"%(en[0]))
for en in disables:
cfg_options.append("%s: 0"%(en[0]))
for r in raw_lines:
cfg_options.append(r[0])
def build_cfg(self, config_name, blob_test, cfg_options):
for value in cfg_options:
self.create_test_config(config_name, blob_test, value)
# Request GUI update its text blob listing.
self.show_text_blob(config_name, blob_test, False)
# Hack, not certain if the above show returns before the action has been completed
# or not, so we sleep here until we have better idea how to query if GUI knows about
# the text blob.
time.sleep(5)
# load_old_config is boolean
# test_name is specific to the type of test being launched (Dataplane, tr398, etc)
# ChamberViewFrame.java has list of supported test names.
# instance_name is per-test instance, it does not matter much, just use the same name
# throughout the entire run of the test.
# config_name what to call the text-blob that configures the test. Does not matter much
# since we (re)create it during the run.
# sets: Array of [key,value] pairs. The key is the widget name, typically the label
# before the entry field.
# pull_report: Boolean, should we download the report to current working directory.
# lf_host: LANforge machine running the GUI.
# lf_password: Password for LANforge machine running the GUI.
# cv_cmds: Array of raw chamber-view commands, such as "cv click 'button-name'"
# These (and the sets) are applied after the test is created and before it is started.
def create_and_run_test(self, load_old_cfg, test_name, instance_name, config_name, sets,
pull_report, lf_host, lf_user, lf_password, cv_cmds):
load_old = "false"
if load_old_cfg:
load_old = "true"
start_try = 0
while (True):
response = self.create_test(test_name, instance_name, load_old)
d1 = {k: v for e in response for (k, v) in e.items()}
if d1["LAST"]["response"] == "OK":
break
else:
start_try += 1
if start_try > 60:
print("ERROR: Could not start within 60 tries, aborting.")
exit(1)
time.sleep(1)
self.load_test_config(config_name, instance_name)
self.auto_save_report(instance_name)
for kv in sets:
            cmd = "cv set '%s' '%s' '%s'" % (instance_name, kv[0], kv[1])
            print("Running CV set command: ", cmd)
self.run_cv_cmd(cmd)
for cmd in cv_cmds:
print("Running CV command: ", cmd)
self.run_cv_cmd(cmd)
response = self.start_test(instance_name)
d1 = {k: v for e in response for (k, v) in e.items()}
        if "Could not find instance:" in d1["LAST"]["response"]:
            print("ERROR: start_test failed: ", d1["LAST"]["response"], "\n")
# pprint(response)
exit(1)
not_running = 0
while (True):
cmd = "cv get_and_close_dialog"
            dialog = self.run_cv_cmd(cmd)
if dialog[0]["LAST"]["response"] != "NO-DIALOG":
print("Popup Dialog:\n")
print(dialog[0]["LAST"]["response"])
check = self.get_report_location(instance_name)
location = json.dumps(check[0]["LAST"]["response"])
if location != "\"Report Location:::\"":
location = location.replace("Report Location:::", "")
location = location.strip("\"")
report = lf_rpt()
print(location)
                try:
                    if pull_report:
                        report.pull_reports(hostname=lf_host, username=lf_user, password=lf_password,
                                            report_location=location)
                        self.report_dir = location
                except Exception as e:
                    raise Exception("Could not pull report from %s: %s" % (lf_host, e))
break
            # Or, if the test stopped for some reason and could not generate a report:
if not self.get_is_running(instance_name):
print("Detected test is not running.")
not_running += 1
if not_running > 5:
                    break
time.sleep(1)
# Ensure test is closed and cleaned up
self.delete_instance(instance_name)
        # Clean up any remaining popups.
        cmd = "cv get_and_close_dialog"
        while True:
            dialog = self.run_cv_cmd(cmd)
if dialog[0]["LAST"]["response"] != "NO-DIALOG":
print("Popup Dialog:\n")
print(dialog[0]["LAST"]["response"])
else:
break
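The `sets` argument to `create_and_run_test` is a list of `[widget-label, value]` pairs; each pair is formatted into a `cv set` command against the test instance. A small sketch of the formatting (the instance name and widget labels here are hypothetical, not from any real test):

```python
# Hypothetical instance and widget labels, mirroring create_and_run_test's loop.
instance_name = "dataplane-instance"
sets = [["Duration", "30000"], ["Traffic Types", "UDP"]]
cv_set_cmds = ["cv set '%s' '%s' '%s'" % (instance_name, kv[0], kv[1]) for kv in sets]
for cmd in cv_set_cmds:
    print(cmd)
```

The labels must match the widget text in the Chamber View test dialog, so copying them verbatim from the GUI is the safest approach.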
# Takes cmd-line args struct or something that looks like it.
# See csv_to_influx.py::influx_add_parser_args for options, or --help.
def check_influx_kpi(self, args):
if self.report_dir == "":
# Nothing to report on.
print("Not submitting to influx, no report-dir.\n")
return
if args.influx_host is None:
# No influx configured, return.
print("Not submitting to influx, influx_host not configured.\n")
return
print("Creating influxdb connection.\n")
# lfjson_host would be if we are reading out of LANforge or some other REST
# source, which we are not. So dummy those out.
influxdb = RecordInflux(_lfjson_host = "",
_lfjson_port = "",
_influx_host = args.influx_host,
_influx_port = args.influx_port,
_influx_org = args.influx_org,
_influx_token = args.influx_token,
_influx_bucket = args.influx_bucket)
path = "%s/kpi.csv"%(self.report_dir)
print("Attempt to submit kpi: ", path)
csvtoinflux = CSVtoInflux(influxdb = influxdb,
target_csv = path,
_influx_tag = args.influx_tag)
print("Posting to influx...\n")
csvtoinflux.post_to_influx()
print("All done posting to influx.\n")


@@ -0,0 +1,14 @@
import paramiko
from scp import SCPClient
class lanforge_reports:
    def pull_reports(self, hostname="localhost", username="lanforge", password="lanforge",
                     report_location="/home/lanforge/html-reports/"):
ssh = paramiko.SSHClient()
ssh.load_system_host_keys()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        ssh.connect(hostname=hostname, username=username, password=password)
        with SCPClient(ssh.get_transport()) as scp:
            scp.get(report_location, recursive=True)
        ssh.close()
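`scp.get(..., recursive=True)` copies the remote report directory into the current working directory under its basename (SCPClient's default `local_path` is `.`). With a hypothetical report location:

```python
import os

# Hypothetical report path, as returned by the "Report Location:::" query above.
report_location = "/home/lanforge/html-reports/dataplane-2021-04-22-01-00"
# With recursive=True and the default local_path of ".", the directory lands here:
local_copy = os.path.join(".", os.path.basename(report_location.rstrip("/")))
print(local_copy)
```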


@@ -0,0 +1,39 @@
#!/usr/bin/env python3
"""
Library to Run Dataplane Test: Using lf_cv_base class
"""
from lf_cv_base import ChamberViewBase
class DataPlaneTest(ChamberViewBase):
def __init__(self, lfclient_host="localhost", lfclient_port=8080, debug_=False):
super().__init__(_lfjson_host=lfclient_host, _lfjson_port=lfclient_port, _debug=debug_)
self.set_config()
def set_config(self):
blob_data = """show_events: 1 show_log: 0 port_sorting: 0 kpi_id: Dataplane Pkt-Size bg: 0xE0ECF8 test_rig: show_scan: 1 auto_helper: 0 skip_2: 0 skip_5: 0 skip_5b: 1 skip_dual: 0 skip_tri: 1 selected_dut: TIP duration: 15000 traffic_port: 1.1.136 sta00500 upstream_port: 1.1.2 eth2 path_loss: 10 speed: 85% speed2: 0Kbps min_rssi_bound: -150 max_rssi_bound: 0 channels: AUTO modes: Auto pkts: 60;142;256;512;1024;MTU spatial_streams: AUTO security_options: AUTObandw_options: AUTO traffic_types: UDP;TCP directions: DUT Transmit;DUT Receive txo_preamble: OFDM txo_mcs: 0 CCK, OFDM, HT, VHT txo_retries: No Retry txo_sgi: OFF txo_txpower: 15 attenuator: 0 attenuator2: 0 attenuator_mod: 255 attenuator_mod2: 255 attenuations: 0..+50..950 attenuations2: 0..+50..950 chamber: 0 tt_deg: 0..+45..359 cust_pkt_sz: show_bar_labels: 1 show_prcnt_tput: 0 show_3s: 0 show_ll_graphs: 1 show_gp_graphs: 1 show_1m: 1 pause_iter: 0 show_realtime: 1 operator: mconn: 1 mpkt: 1000 tos: 0 loop_iterations: 1"""
self.add_text_blobs(type="Plugin-Settings", name="dataplane-test-latest-shivam", data=blob_data)
pass
def set_params(self):
pass
def run_test(self):
pass
def wait_until_test_finishes(self):
pass
def collect_reports(self):
pass
def main():
obj = DataPlaneTest(lfclient_host="localhost", lfclient_port=8080, debug_=True)
if __name__ == '__main__':
main()

py-json/dut_profile.py Normal file

@@ -0,0 +1,119 @@
#!/usr/bin/env python3
from LANforge.lfcli_base import LFCliBase
from LANforge import add_dut
from pprint import pprint
import time
import base64
class DUTProfile(LFCliBase):
def __init__(self, lfclient_host, lfclient_port, local_realm, debug_=False):
super().__init__(lfclient_host, lfclient_port, debug_, _halt_on_error=True, _local_realm=local_realm)
self.name = "NA"
self.flags = "NA"
self.img_file = "NA"
self.sw_version = "NA"
self.hw_version = "NA"
self.model_num = "NA"
self.serial_num = "NA"
self.serial_port = "NA"
self.wan_port = "NA"
self.lan_port = "NA"
self.ssid1 = "NA"
self.ssid2 = "NA"
self.ssid3 = "NA"
self.passwd1 = "NA"
self.passwd2 = "NA"
self.passwd3 = "NA"
self.mgt_ip = "NA"
self.api_id = "NA"
self.flags_mask = "NA"
self.antenna_count1 = "NA"
self.antenna_count2 = "NA"
self.antenna_count3 = "NA"
self.bssid1 = "NA"
self.bssid2 = "NA"
self.bssid3 = "NA"
self.top_left_x = "NA"
self.top_left_y = "NA"
self.eap_id = "NA"
self.flags = 0
self.flags_mask = 0
self.notes = []
self.append = []
def set_param(self, name, value):
if (name in self.__dict__):
self.__dict__[name] = value
def create(self, name=None, param_=None, flags=None, flags_mask=None, notes=None):
data = {}
if (name is not None) and (name != ""):
data["name"] = name
elif (self.name is not None) and (self.name != ""):
data["name"] = self.name
else:
raise ValueError("cannot create/update DUT record lacking a name")
        for param in add_dut.dut_params:
            if param.name in self.__dict__:
                if (self.__dict__[param.name] is not None) \
                        and (self.__dict__[param.name] != "NA"):
                    data[param.name] = self.__dict__[param.name]
            else:
                print("---------------------------------------------------------")
                pprint(param)
                print("---------------------------------------------------------")
                raise ValueError("parameter %s not in dut_profile" % param.name)
if (flags is not None) and (int(flags) > -1):
data["flags"] = flags
elif (self.flags is not None) and (self.flags > -1):
data["flags"] = self.flags
if (flags_mask is not None) and (int(flags_mask) > -1):
data["flags_mask"] = flags_mask
elif (self.flags_mask is not None) and (int(self.flags_mask) > -1):
data["flags_mask"] = self.flags_mask
url = "/cli-json/add_dut"
if self.debug:
print("---- DATA -----------------------------------------------")
pprint(data)
pprint(self.notes)
pprint(self.append)
print("---------------------------------------------------------")
self.json_post(url, data, debug_=self.debug)
if (self.notes is not None) and (len(self.notes) > 0):
self.json_post("/cli-json/add_dut_notes", {
"dut": self.name,
"text": "[BLANK]"
}, self.debug)
notebytes = None
for line in self.notes:
notebytes = base64.b64encode(line.encode('ascii'))
if self.debug:
print("------ NOTES ---------------------------------------------------")
pprint(self.notes)
pprint(str(notebytes))
print("---------------------------------------------------------")
self.json_post("/cli-json/add_dut_notes", {
"dut": self.name,
"text-64": notebytes.decode('ascii')
}, self.debug)
if (self.append is not None) and (len(self.append) > 0):
notebytes = None
for line in self.append:
notebytes = base64.b64encode(line.encode('ascii'))
if self.debug:
print("----- APPEND ----------------------------------------------------")
pprint(line)
pprint(str(notebytes))
print("---------------------------------------------------------")
self.json_post("/cli-json/add_dut_notes", {
"dut": self.name,
"text-64": notebytes.decode('ascii')
}, self.debug)
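Each DUT note line is posted as base64 via the `text-64` field, presumably so arbitrary note text survives CLI quoting and whitespace handling. A standalone round-trip of the encoding used in `create` above (the note text and DUT name are hypothetical):

```python
import base64

note = "Staging DUT, firmware 1.2.3"  # hypothetical note line
notebytes = base64.b64encode(note.encode("ascii"))
# Shape of the add_dut_notes payload built above (DUT name is hypothetical):
payload = {"dut": "my-dut", "text-64": notebytes.decode("ascii")}
# The receiving side decodes back to the original line:
decoded = base64.b64decode(payload["text-64"]).decode("ascii")
```

Note that the code first posts `"text": "[BLANK]"` to clear existing notes, then appends each encoded line.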

py-json/fio_endp_profile.py Normal file

@@ -0,0 +1,190 @@
#!/usr/bin/env python3
from LANforge.lfcli_base import LFCliBase
from pprint import pprint
import time
# Class: FIOEndpProfile(LFCliBase)
#
# Written by Candela Technologies Inc.
# Updated by:
#
class FIOEndpProfile(LFCliBase):
"""
Very often you will create the FileIO writer profile first so that it creates the data
that a reader profile will subsequently use.
"""
def __init__(self, lfclient_host, lfclient_port, local_realm, io_direction="write", debug_=False):
super().__init__(lfclient_host, lfclient_port, debug_)
self.local_realm = local_realm
self.fs_type = "fe_nfsv4"
self.min_rw_size = 128 * 1024
self.max_rw_size = 128 * 1024
self.min_file_size = 10 * 1024 * 1024
self.max_file_size = 10 * 1024 * 1024
self.min_read_rate_bps = 10 * 1000 * 1000
self.max_read_rate_bps = 10 * 1000 * 1000
self.min_write_rate_bps = 1000 * 1000 * 1000
self.max_write_rate_bps = 1000 * 1000 * 1000
self.file_num = 10 # number of files to write
        self.directory = None  # directory like /mnt/lf/$endp_name
        # This is the locally mounted directory presently used for writing. Set it
        # when running read tests simultaneously with write tests: if your writer
        # endpoints are named like wo_300GB_001, their Directory value defaults to
        # /mnt/lf/wo_300GB_001, while a reader endpoint named ro_300GB_001 would
        # default to /mnt/lf/ro_300GB_001; this overrides the reader's directory
        # to the writer's so it reads the files the writer created.
self.mount_dir = "AUTO"
self.server_mount = None # like cifs://10.0.0.1/bashful or 192.168.1.1:/var/tmp
self.mount_options = None
self.iscsi_vol = None
self.retry_timer_ms = 2000
self.io_direction = io_direction # read / write
self.quiesce_ms = 3000
self.pattern = "increasing"
self.file_prefix = "AUTO" # defaults to endp_name
self.cx_prefix = "wo_"
self.created_cx = {}
self.created_endp = []
def start_cx(self):
print("Starting CXs...")
for cx_name in self.created_cx.keys():
self.json_post("/cli-json/set_cx_state", {
"test_mgr": "default_tm",
"cx_name": self.created_cx[cx_name],
"cx_state": "RUNNING"
}, debug_=self.debug)
print(".", end='')
print("")
def stop_cx(self):
print("Stopping CXs...")
for cx_name in self.created_cx.keys():
self.json_post("/cli-json/set_cx_state", {
"test_mgr": "default_tm",
"cx_name": self.created_cx[cx_name],
"cx_state": "STOPPED"
}, debug_=self.debug)
print(".", end='')
print("")
def create_ro_profile(self):
ro_profile = self.local_realm.new_fio_endp_profile()
ro_profile.realm = self.local_realm
ro_profile.fs_type = self.fs_type
ro_profile.min_read_rate_bps = self.min_write_rate_bps
ro_profile.max_read_rate_bps = self.max_write_rate_bps
ro_profile.min_write_rate_bps = self.min_read_rate_bps
ro_profile.max_write_rate_bps = self.max_read_rate_bps
ro_profile.file_num = self.file_num
ro_profile.directory = self.directory
ro_profile.mount_dir = self.directory
ro_profile.server_mount = self.server_mount
ro_profile.mount_options = self.mount_options
ro_profile.iscsi_vol = self.iscsi_vol
ro_profile.retry_timer_ms = self.retry_timer_ms
ro_profile.io_direction = "read"
ro_profile.quiesce_ms = self.quiesce_ms
ro_profile.pattern = self.pattern
ro_profile.file_prefix = self.file_prefix
ro_profile.cx_prefix = "ro_"
return ro_profile
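`create_ro_profile` mirrors the writer's rate limits into the reader with read and write swapped, so the reader consumes at the rate the writer produced. A plain-dict sketch of that swap (the rates are hypothetical, in bits per second):

```python
# Writer-oriented profile rates (hypothetical values, bps).
wo = {"min_read_rate_bps": 10_000_000, "min_write_rate_bps": 1_000_000_000}
# Reader profile: read/write swapped, as in create_ro_profile above.
ro = {
    "min_read_rate_bps": wo["min_write_rate_bps"],
    "min_write_rate_bps": wo["min_read_rate_bps"],
}
```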
def cleanup(self):
print("Cleaning up cxs and endpoints")
if len(self.created_cx) != 0:
for cx_name in self.created_cx.keys():
req_url = "cli-json/rm_cx"
data = {
"test_mgr": "default_tm",
"cx_name": self.created_cx[cx_name]
}
self.json_post(req_url, data)
# pprint(data)
req_url = "cli-json/rm_endp"
data = {
"endp_name": cx_name
}
self.json_post(req_url, data)
# pprint(data)
def create(self, ports=[], connections_per_port=1, sleep_time=.5, debug_=False, suppress_related_commands_=None):
cx_post_data = []
for port_name in ports:
for num_connection in range(connections_per_port):
#
                eid = self.local_realm.name_to_eid(port_name)
                if len(eid) >= 3:
                    shelf = eid[0]
                    resource = eid[1]
                    name = eid[2]
                else:
                    raise ValueError("Unexpected name for port_name %s" % port_name)
if self.directory is None or self.server_mount is None or self.fs_type is None:
raise ValueError("directory [%s], server_mount [%s], and type [%s] must not be None" % (
self.directory, self.server_mount, self.fs_type))
endp_data = {
"alias": self.cx_prefix + name + "_" + str(num_connection) + "_fio",
"shelf": shelf,
"resource": resource,
"port": name,
"type": self.fs_type,
"min_read_rate": self.min_read_rate_bps,
"max_read_rate": self.max_read_rate_bps,
"min_write_rate": self.min_write_rate_bps,
"max_write_rate": self.max_write_rate_bps,
"directory": self.directory,
"server_mount": self.server_mount,
"mount_dir": self.mount_dir,
"prefix": self.file_prefix,
"payload_pattern": self.pattern,
}
                # A read endpoint reuses the prefix and directory created by the
                # matching write-only ("wo_") endpoint.
if self.io_direction == "read":
endp_data["prefix"] = "wo_" + name + "_" + str(num_connection) + "_fio"
endp_data["directory"] = "/mnt/lf/wo_" + name + "_" + str(num_connection) + "_fio"
url = "cli-json/add_file_endp"
self.local_realm.json_post(url, endp_data, debug_=False,
suppress_related_commands_=suppress_related_commands_)
time.sleep(sleep_time)
data = {
"name": self.cx_prefix + name + "_" + str(num_connection) + "_fio",
"io_direction": self.io_direction,
"num_files": 5
}
self.local_realm.json_post("cli-json/set_fe_info", data, debug_=debug_,
suppress_related_commands_=suppress_related_commands_)
self.local_realm.json_post("/cli-json/nc_show_endpoints", {"endpoint": "all"})
for port_name in ports:
for num_connection in range(connections_per_port):
shelf = self.local_realm.name_to_eid(port_name)[0]
resource = self.local_realm.name_to_eid(port_name)[1]
name = self.local_realm.name_to_eid(port_name)[2]
endp_data = {
"alias": "CX_" + self.cx_prefix + name + "_" + str(num_connection) + "_fio",
"test_mgr": "default_tm",
"tx_endp": self.cx_prefix + name + "_" + str(num_connection) + "_fio",
"rx_endp": "NA"
}
cx_post_data.append(endp_data)
self.created_cx[self.cx_prefix + name + "_" + str(
num_connection) + "_fio"] = "CX_" + self.cx_prefix + name + "_" + str(num_connection) + "_fio"
for cx_data in cx_post_data:
url = "/cli-json/add_cx"
self.local_realm.json_post(url, cx_data, debug_=debug_,
suppress_related_commands_=suppress_related_commands_)
time.sleep(sleep_time)
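The endpoint and cross-connect aliases built in `create` follow a fixed pattern: `<cx_prefix><port>_<n>_fio` for the endpoint, with a `CX_` prefix added for the cross-connect. A sketch of the naming (the port name is hypothetical):

```python
# Mirrors the alias construction in FIOEndpProfile.create above.
cx_prefix = "wo_"
name = "sta0000"   # hypothetical port name from name_to_eid
num_connection = 0
endp_alias = cx_prefix + name + "_" + str(num_connection) + "_fio"
cx_alias = "CX_" + endp_alias
```

This is also why read endpoints can reconstruct the writer's directory: they rebuild the same name with the `wo_` prefix.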

py-json/gen_cxprofile.py Normal file

@@ -0,0 +1,602 @@
#!/usr/bin/env python3
from LANforge.lfcli_base import LFCliBase
from pprint import pprint
import csv
import pandas as pd
import time
import datetime
import json
class GenCXProfile(LFCliBase):
def __init__(self, lfclient_host, lfclient_port, local_realm, debug_=False):
super().__init__(lfclient_host, lfclient_port, debug_)
self.lfclient_host = lfclient_host
self.lfclient_port = lfclient_port
self.lfclient_url = "http://%s:%s" % (lfclient_host, lfclient_port)
self.debug = debug_
self.type = "lfping"
self.dest = "127.0.0.1"
self.interval = 1
self.cmd = ""
self.local_realm = local_realm
self.name_prefix = "generic"
self.created_cx = []
self.created_endp = []
self.file_output = "/dev/null"
self.loop_count = 1
self.speedtest_min_dl = 0
self.speedtest_min_up = 0
self.speedtest_max_ping = 0
def parse_command(self, sta_name, gen_name):
if self.type == "lfping":
            if (self.dest is not None) and (self.dest != "") and (self.interval is not None) and (self.interval > 0):
self.cmd = "%s -i %s -I %s %s" % (self.type, self.interval, sta_name, self.dest)
# print(self.cmd)
else:
raise ValueError("Please ensure dest and interval have been set correctly")
elif self.type == "generic":
if self.cmd == "":
raise ValueError("Please ensure cmd has been set correctly")
elif self.type == "speedtest":
self.cmd = "vrf_exec.bash %s speedtest-cli --json --share" % (sta_name)
elif self.type == "iperf3" and self.dest is not None:
self.cmd = "iperf3 --forceflush --format k --precision 4 -c %s -t 60 --tos 0 -b 1K --bind_dev %s -i 1 " \
"--pidfile /tmp/lf_helper_iperf3_%s.pid" % (self.dest, sta_name, gen_name)
elif self.type == "iperf3_serv" and self.dest is not None:
self.cmd = "iperf3 --forceflush --format k --precision 4 -s --bind_dev %s -i 1 " \
"--pidfile /tmp/lf_helper_iperf3_%s.pid" % (sta_name, gen_name)
elif self.type == "lfcurl":
if self.file_output is not None:
self.cmd = "./scripts/lf_curl.sh -p %s -i AUTO -o %s -n %s -d %s" % \
(sta_name, self.file_output, self.loop_count, self.dest)
else:
raise ValueError("Please ensure file_output has been set correctly")
else:
raise ValueError("Unknown command type")
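For reference, here are the command strings `parse_command` produces for two common types, built with hypothetical station and destination values (the format strings match the method above):

```python
# lfping: -i interval, -I source interface (the station), then the destination.
interval, sta_name, dest = 1, "sta0000", "192.168.1.1"
ping_cmd = "%s -i %s -I %s %s" % ("lfping", interval, sta_name, dest)

# speedtest: run speedtest-cli inside the station's routing context via vrf_exec.bash.
speedtest_cmd = "vrf_exec.bash %s speedtest-cli --json --share" % (sta_name)
```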
def start_cx(self):
print("Starting CXs...")
# print(self.created_cx)
# print(self.created_endp)
for cx_name in self.created_cx:
self.json_post("/cli-json/set_cx_state", {
"test_mgr": "default_tm",
"cx_name": cx_name,
"cx_state": "RUNNING"
}, debug_=self.debug)
print(".", end='')
print("")
def stop_cx(self):
print("Stopping CXs...")
for cx_name in self.created_cx:
self.json_post("/cli-json/set_cx_state", {
"test_mgr": "default_tm",
"cx_name": cx_name,
"cx_state": "STOPPED"
}, debug_=self.debug)
print(".", end='')
print("")
def cleanup(self):
print("Cleaning up cxs and endpoints")
for cx_name in self.created_cx:
req_url = "cli-json/rm_cx"
data = {
"test_mgr": "default_tm",
"cx_name": cx_name
}
self.json_post(req_url, data)
for endp_name in self.created_endp:
req_url = "cli-json/rm_endp"
data = {
"endp_name": endp_name
}
self.json_post(req_url, data)
def set_flags(self, endp_name, flag_name, val):
data = {
"name": endp_name,
"flag": flag_name,
"val": val
}
self.json_post("cli-json/set_endp_flag", data, debug_=self.debug)
def set_cmd(self, endp_name, cmd):
data = {
"name": endp_name,
"command": cmd
}
self.json_post("cli-json/set_gen_cmd", data, debug_=self.debug)
def parse_command_gen(self, sta_name, dest):
if self.type == "lfping":
            if (self.dest is not None) and (self.dest != "") and (self.interval is not None) and (self.interval > 0):
self.cmd = "%s -i %s -I %s %s" % (self.type, self.interval, sta_name, dest)
# print(self.cmd)
else:
raise ValueError("Please ensure dest and interval have been set correctly")
elif self.type == "generic":
if self.cmd == "":
raise ValueError("Please ensure cmd has been set correctly")
elif self.type == "speedtest":
self.cmd = "vrf_exec.bash %s speedtest-cli --json --share" % (sta_name)
elif self.type == "iperf3" and self.dest is not None:
self.cmd = "iperf3 --forceflush --format k --precision 4 -c %s -t 60 --tos 0 -b 1K --bind_dev %s -i 1 " \
"--pidfile /tmp/lf_helper_iperf3_test.pid" % (self.dest, sta_name)
elif self.type == "lfcurl":
if self.file_output is not None:
self.cmd = "./scripts/lf_curl.sh -p %s -i AUTO -o %s -n %s -d %s" % \
(sta_name, self.file_output, self.loop_count, self.dest)
else:
raise ValueError("Please ensure file_output has been set correctly")
else:
raise ValueError("Unknown command type")
def create_gen(self, sta_port, dest, add, sleep_time=.5, debug_=False, suppress_related_commands_=None):
if self.debug:
debug_ = True
post_data = []
endp_tpls = []
if type(sta_port) == str:
if sta_port != "1.1.eth1":
count = 5
else:
count = 40
for i in range(0, count):
port_info = self.local_realm.name_to_eid(sta_port)
            shelf = port_info[0]
            resource = port_info[1]
name = port_info[2]
gen_name_a = "%s-%s" % (self.name_prefix, name) + "_" + str(i) + add
gen_name_b = "D_%s-%s" % (self.name_prefix, name) + "_" + str(i) + add
endp_tpls.append((shelf, resource, name, gen_name_a, gen_name_b))
print(endp_tpls)
elif type(sta_port) == list:
for port_name in sta_port:
for i in range(0, 5):
port_info = self.local_realm.name_to_eid(port_name)
                try:
                    shelf = port_info[0]
                    resource = port_info[1]
                    name = port_info[2]
                except IndexError:
                    raise ValueError("Unexpected name for port_name %s" % port_name)
# this naming convention follows what you see when you use
# lf_firemod.pl --action list_endp after creating a generic endpoint
gen_name_a = "%s-%s" % (self.name_prefix, name) + "_" + str(i) + add
gen_name_b = "D_%s-%s" % (self.name_prefix, name) + "_" + str(i) + add
endp_tpls.append((shelf, resource, name, gen_name_a, gen_name_b))
# exit(1)
print(endp_tpls)
for endp_tpl in endp_tpls:
shelf = endp_tpl[0]
resource = endp_tpl[1]
name = endp_tpl[2]
gen_name_a = endp_tpl[3]
# gen_name_b = endp_tpl[3]
# (self, alias=None, shelf=1, resource=1, port=None, type=None)
data = {
"alias": gen_name_a,
"shelf": shelf,
"resource": resource,
"port": name,
"type": "gen_generic"
}
            if self.debug:
                pprint(data)
self.json_post("cli-json/add_gen_endp", data, debug_=self.debug)
self.local_realm.json_post("/cli-json/nc_show_endpoints", {"endpoint": "all"})
time.sleep(sleep_time)
for endp_tpl in endp_tpls:
gen_name_a = endp_tpl[3]
gen_name_b = endp_tpl[4]
self.set_flags(gen_name_a, "ClearPortOnStart", 1)
time.sleep(sleep_time)
if type(dest) == str:
for endp_tpl in endp_tpls:
name = endp_tpl[2]
gen_name_a = endp_tpl[3]
# gen_name_b = endp_tpl[4]
self.parse_command_gen(name, dest)
self.set_cmd(gen_name_a, self.cmd)
time.sleep(sleep_time)
elif type(dest) == list:
mm = 0
for endp_tpl in endp_tpls:
name = endp_tpl[2]
gen_name_a = endp_tpl[3]
# gen_name_b = endp_tpl[4]
self.parse_command_gen(name, dest[mm])
self.set_cmd(gen_name_a, self.cmd)
mm = mm + 1
if mm == 8:
mm = 0
time.sleep(sleep_time)
j = 0
for endp_tpl in endp_tpls:
name = endp_tpl[2]
gen_name_a = endp_tpl[3]
gen_name_b = endp_tpl[4]
cx_name = "CX_%s-%s" % (self.name_prefix, name) + "_" + str(j) + add
j = j + 1
data = {
"alias": cx_name,
"test_mgr": "default_tm",
"tx_endp": gen_name_a,
"rx_endp": gen_name_b
}
post_data.append(data)
# self.created_cx = []
self.created_cx.append(cx_name)
# self.created_endp = []
self.created_endp.append(gen_name_a)
self.created_endp.append(gen_name_b)
time.sleep(sleep_time)
print(self.created_cx)
for data in post_data:
url = "/cli-json/add_cx"
            if self.debug:
                pprint(data)
self.local_realm.json_post(url, data, debug_=debug_, suppress_related_commands_=suppress_related_commands_)
time.sleep(2)
time.sleep(sleep_time)
for data in post_data:
self.local_realm.json_post("/cli-json/show_cx", {
"test_mgr": "default_tm",
"cross_connect": data["alias"]
})
time.sleep(sleep_time)
def create(self, ports=[], sleep_time=.5, debug_=False, suppress_related_commands_=None):
if self.debug:
debug_ = True
post_data = []
endp_tpls = []
for port_name in ports:
port_info = self.local_realm.name_to_eid(port_name)
            shelf = port_info[0]
            resource = port_info[1]
name = port_info[2]
# this naming convention follows what you see when you use
# lf_firemod.pl --action list_endp after creating a generic endpoint
gen_name_a = "%s-%s" % (self.name_prefix, name)
gen_name_b = "D_%s-%s" % (self.name_prefix, name)
endp_tpls.append((shelf, resource, name, gen_name_a, gen_name_b))
for endp_tpl in endp_tpls:
shelf = endp_tpl[0]
resource = endp_tpl[1]
name = endp_tpl[2]
gen_name_a = endp_tpl[3]
# gen_name_b = endp_tpl[3]
# (self, alias=None, shelf=1, resource=1, port=None, type=None)
data = {
"alias": gen_name_a,
"shelf": shelf,
"resource": resource,
"port": name,
"type": "gen_generic"
}
if self.debug:
pprint(data)
self.json_post("cli-json/add_gen_endp", data, debug_=self.debug)
self.local_realm.json_post("/cli-json/nc_show_endpoints", {"endpoint": "all"})
time.sleep(sleep_time)
for endp_tpl in endp_tpls:
gen_name_a = endp_tpl[3]
gen_name_b = endp_tpl[4]
self.set_flags(gen_name_a, "ClearPortOnStart", 1)
time.sleep(sleep_time)
for endp_tpl in endp_tpls:
name = endp_tpl[2]
gen_name_a = endp_tpl[3]
# gen_name_b = endp_tpl[4]
self.parse_command(name, gen_name_a)
self.set_cmd(gen_name_a, self.cmd)
time.sleep(sleep_time)
for endp_tpl in endp_tpls:
name = endp_tpl[2]
gen_name_a = endp_tpl[3]
gen_name_b = endp_tpl[4]
cx_name = "CX_%s-%s" % (self.name_prefix, name)
data = {
"alias": cx_name,
"test_mgr": "default_tm",
"tx_endp": gen_name_a,
"rx_endp": gen_name_b
}
post_data.append(data)
self.created_cx.append(cx_name)
self.created_endp.append(gen_name_a)
self.created_endp.append(gen_name_b)
time.sleep(sleep_time)
for data in post_data:
url = "/cli-json/add_cx"
if self.debug:
pprint(data)
self.local_realm.json_post(url, data, debug_=debug_, suppress_related_commands_=suppress_related_commands_)
time.sleep(2)
time.sleep(sleep_time)
for data in post_data:
self.local_realm.json_post("/cli-json/show_cx", {
"test_mgr": "default_tm",
"cross_connect": data["alias"]
})
time.sleep(sleep_time)
def choose_ping_command(self):
gen_results = self.json_get("generic/list?fields=name,last+results", debug_=self.debug)
if self.debug:
print(gen_results)
if gen_results['endpoints'] is not None:
for name in gen_results['endpoints']:
for k, v in name.items():
if v['name'] in self.created_endp and not v['name'].endswith('1'):
if v['last results'] != "" and "Unreachable" not in v['last results']:
return True, v['name']
else:
return False, v['name']
def choose_lfcurl_command(self):
gen_results = self.json_get("generic/list?fields=name,last+results", debug_=self.debug)
if self.debug:
print(gen_results)
if gen_results['endpoints'] is not None:
for name in gen_results['endpoints']:
for k, v in name.items():
if v['name'] != '':
results = v['last results'].split()
if 'Finished' in v['last results']:
if results[1][:-1] == results[2]:
return True, v['name']
else:
return False, v['name']
def choose_iperf3_command(self):
gen_results = self.json_get("generic/list?fields=name,last+results", debug_=self.debug)
        if gen_results['endpoints'] is not None:
            pprint(gen_results['endpoints'])
            # for name in gen_results['endpoints']:
            #     pprint(name.items)
            #     for k, v in name.items():
        exit(1)  # iperf3 result parsing is not implemented yet
def choose_speedtest_command(self):
gen_results = self.json_get("generic/list?fields=name,last+results", debug_=self.debug)
if gen_results['endpoints'] is not None:
for name in gen_results['endpoints']:
for k, v in name.items():
if v['last results'] is not None and v['name'] in self.created_endp and v['last results'] != '':
last_results = json.loads(v['last results'])
if last_results['download'] is None and last_results['upload'] is None and last_results['ping'] is None:
return False, v['name']
elif last_results['download'] >= self.speedtest_min_dl and \
last_results['upload'] >= self.speedtest_min_up and \
last_results['ping'] <= self.speedtest_max_ping:
return True, v['name']
def choose_generic_command(self):
gen_results = self.json_get("generic/list?fields=name,last+results", debug_=self.debug)
if (gen_results['endpoints'] is not None):
for name in gen_results['endpoints']:
for k, v in name.items():
if v['name'] in self.created_endp and not v['name'].endswith('1'):
if v['last results'] != "" and "not known" not in v['last results']:
return True, v['name']
else:
return False, v['name']
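Each `choose_*_command` helper returns either `None` (no matching endpoint with results yet) or a `(passed, endpoint_name)` tuple, which `monitor` unpacks below. A stub-driven sketch of that contract (the stub and endpoint name are illustrative, not the real helpers):

```python
def fake_choose_command(last_results):
    """Stub mimicking the (passed, name) contract of the choose_* helpers."""
    if last_results == "":
        return None  # no results reported yet
    # Pass unless the ping output reports an unreachable destination.
    return ("Unreachable" not in last_results, "generic-sta0000_0")  # hypothetical endpoint

result = fake_choose_command("64 bytes from 192.168.1.1: icmp_seq=1")
if result is not None:
    passed, endp_name = result
```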
def monitor(self,
duration_sec=60,
monitor_interval_ms=1,
sta_list=None,
generic_cols=None,
port_mgr_cols=None,
created_cx=None,
monitor=True,
report_file=None,
systeminfopath=None,
output_format=None,
script_name=None,
arguments=None,
compared_report=None,
debug=False):
try:
duration_sec = self.parse_time(duration_sec).seconds
        except Exception:
if (duration_sec is None) or (duration_sec <= 1):
raise ValueError("GenCXProfile::monitor wants duration_sec > 1 second")
        if duration_sec <= (monitor_interval_ms / 1000):
            raise ValueError("GenCXProfile::monitor wants duration_sec > monitor_interval")
if report_file is None:
raise ValueError("Monitor requires an output file to be defined")
if systeminfopath is None:
raise ValueError("Monitor requires a system info path to be defined")
if created_cx is None:
raise ValueError("Monitor needs a list of Layer 3 connections")
        if (monitor_interval_ms is None) or (monitor_interval_ms < 1):
            raise ValueError("GenCXProfile::monitor wants monitor_interval_ms >= 1")
if generic_cols is None:
raise ValueError("GenCXProfile::monitor wants a list of column names to monitor")
if output_format is not None:
if output_format.lower() != report_file.split('.')[-1]:
raise ValueError(
'Filename %s has an extension that does not match output format %s .' % (report_file, output_format))
else:
output_format = report_file.split('.')[-1]
# default save to csv first
if report_file.split('.')[-1] != 'csv':
report_file = report_file.replace(str(output_format), 'csv', 1)
print("Saving rolling data into..." + str(report_file))
# ================== Step 1, set column names and header row
generic_cols = [self.replace_special_char(x) for x in generic_cols]
generic_fields = ",".join(generic_cols)
default_cols = ['Timestamp', 'Timestamp milliseconds epoch', 'Timestamp seconds epoch', 'Duration elapsed']
default_cols.extend(generic_cols)
if port_mgr_cols is not None:
default_cols.extend(port_mgr_cols)
header_row = default_cols
# csvwriter.writerow([systeminfo['VersionInfo']['BuildVersion'], script_name, str(arguments)])
if port_mgr_cols is not None:
port_mgr_cols = [self.replace_special_char(x) for x in port_mgr_cols]
port_mgr_cols_labelled = []
for col_name in port_mgr_cols:
port_mgr_cols_labelled.append("port mgr - " + col_name)
port_mgr_fields = ",".join(port_mgr_cols)
header_row.extend(port_mgr_cols_labelled)
# create sys info file
systeminfo = self.json_get('/')
sysinfo = [str("LANforge GUI Build: " + systeminfo['VersionInfo']['BuildVersion']),
str("Script Name: " + script_name), str("Argument input: " + str(arguments))]
with open(systeminfopath, 'w') as filehandle:
for listitem in sysinfo:
filehandle.write('%s\n' % listitem)
# ================== Step 2, monitor columns
start_time = datetime.datetime.now()
end_time = start_time + datetime.timedelta(seconds=duration_sec)
passes = 0
expected_passes = 0
# instantiate csv file here, add specified column headers
csvfile = open(str(report_file), 'w')
csvwriter = csv.writer(csvfile, delimiter=",")
csvwriter.writerow(header_row)
# wait 10 seconds to get proper port data
time.sleep(10)
# for x in range(0,int(round(iterations,0))):
initial_starttime = datetime.datetime.now()
print("Starting Test...")
while datetime.datetime.now() < end_time:
passes = 0
expected_passes = 0
time.sleep(15)
result = False
cur_time = datetime.datetime.now()
if self.type == "lfping":
result = self.choose_ping_command()
elif self.type == "generic":
result = self.choose_generic_command()
elif self.type == "lfcurl":
result = self.choose_lfcurl_command()
elif self.type == "speedtest":
result = self.choose_speedtest_command()
elif self.type == "iperf3":
result = self.choose_iperf3_command()
else:
continue
expected_passes += 1
if result is not None:
if result[0]:
passes += 1
else:
self._fail("%s Failed to ping %s " % (result[1], self.dest))
break
time.sleep(1)
if passes == expected_passes:
self._pass("PASS: All tests passed")
t = datetime.datetime.now()
timestamp = t.strftime("%m/%d/%Y %I:%M:%S")
t_to_millisec_epoch = int(self.get_milliseconds(t))
t_to_sec_epoch = int(self.get_seconds(t))
time_elapsed = int(self.get_seconds(t)) - int(self.get_seconds(initial_starttime))
basecolumns = [timestamp, t_to_millisec_epoch, t_to_sec_epoch, time_elapsed]
generic_response = self.json_get("/generic/%s?fields=%s" % (created_cx, generic_fields))
if port_mgr_cols is not None:
port_mgr_response = self.json_get("/port/1/1/%s?fields=%s" % (sta_list, port_mgr_fields))
# get info from port manager with list of values from cx_a_side_list
if generic_response is None or "endpoints" not in generic_response:
print(generic_response)
raise ValueError("Cannot find columns requested to be searched. Exiting script, please retry.")
if debug:
print("Json generic_response from LANforge... " + str(generic_response))
if port_mgr_cols is not None:
if port_mgr_response is None or "interfaces" not in port_mgr_response:
print(port_mgr_response)
raise ValueError("Cannot find columns requested to be searched. Exiting script, please retry.")
if debug:
print("Json port_mgr_response from LANforge... " + str(port_mgr_response))
for endpoint in generic_response["endpoints"]: # each endpoint is a dictionary
endp_values = list(endpoint.values())[0]
temp_list = basecolumns.copy()  # copy so per-endpoint appends do not grow basecolumns
for columnname in header_row[len(basecolumns):]:
temp_list.append(endp_values[columnname])
if port_mgr_cols is not None:
for sta_name in sta_list:
for interface in port_mgr_response["interfaces"]:
if sta_name in list(interface.keys())[0]:
merge = endp_values.copy()
# rename keys (separate port mgr 'rx bytes' from generic 'rx bytes')
port_mgr_values_dict = list(interface.values())[0]
renamed_port_cols = {}
for key in port_mgr_values_dict.keys():
renamed_port_cols['port mgr - ' + key] = port_mgr_values_dict[key]
merge.update(renamed_port_cols)
for name in port_mgr_cols:
temp_list.append(merge[name])
csvwriter.writerow(temp_list)
time.sleep(monitor_interval_ms)
csvfile.close()
# comparison to last report / report inputted
if compared_report is not None:
compared_df = self.compare_two_df(dataframe_one=self.file_to_df(report_file),
dataframe_two=self.file_to_df(compared_report))
exit(1)
# append compared df to created one
if output_format.lower() != 'csv':
self.df_to_file(dataframe=pd.read_csv(report_file), output_f=output_format, save_path=report_file)
else:
if output_format.lower() != 'csv':
self.df_to_file(dataframe=pd.read_csv(report_file), output_f=output_format, save_path=report_file)
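The row-building pattern in the monitor loop above (timestamp base columns plus the requested endpoint fields, copied fresh for every row) can be sketched standalone; the endpoint payload below is a made-up sample of the JSON shape, not live LANforge data:

```python
import csv
import datetime
import io

# Made-up sample of the per-endpoint JSON shape returned by the LANforge API.
endpoints = [
    {"gen1": {"name": "gen1", "rx bytes": 1200}},
    {"gen2": {"name": "gen2", "rx bytes": 3400}},
]

header_row = ["Timestamp", "Timestamp seconds epoch", "name", "rx bytes"]
buf = io.StringIO()
writer = csv.writer(buf, delimiter=",")
writer.writerow(header_row)

t = datetime.datetime(2021, 4, 22, 10, 0, 0)
basecolumns = [t.strftime("%m/%d/%Y %I:%M:%S"), int(t.timestamp())]
for endpoint in endpoints:
    endp_values = list(endpoint.values())[0]
    row = basecolumns.copy()  # copy: appending to basecolumns itself would grow every later row
    for columnname in header_row[len(basecolumns):]:
        row.append(endp_values[columnname])
    writer.writerow(row)

lines = buf.getvalue().splitlines()
print(lines[1])
```

The `.copy()` is the important detail: aliasing `basecolumns` directly would make each CSV row carry every previous endpoint's fields.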

py-json/http_profile.py Normal file

@@ -0,0 +1,195 @@
#!/usr/bin/env python3
from LANforge.lfcli_base import LFCliBase
from port_utils import PortUtils
from pprint import pprint
import time
class HTTPProfile(LFCliBase):
def __init__(self, lfclient_host, lfclient_port, local_realm, debug_=False):
super().__init__(lfclient_host, lfclient_port, debug_)
self.lfclient_url = "http://%s:%s" % (lfclient_host, lfclient_port)
self.debug = debug_
self.requests_per_ten = 600
self.local_realm = local_realm
self.created_cx = {}
self.created_endp = []
self.ip_map = {}
self.direction = "dl"
self.dest = "/dev/null"
self.port_util = PortUtils(self.local_realm)
self.max_speed = 0  # 0 means unlimited
def check_errors(self, debug=False):
fields_list = ["!conn", "acc.+denied", "bad-proto", "bad-url", "other-err", "total-err", "rslv-p", "rslv-h",
"timeout", "nf+(4xx)", "http-r", "http-p", "http-t", "login-denied"]
endp_list = self.json_get("layer4/list?fields=%s" % ','.join(fields_list))
debug_info = {}
if endp_list is not None and endp_list.get('endpoint') is not None:
endp_list = endp_list['endpoint']
expected_passes = len(endp_list)
passes = len(endp_list)
for item in range(len(endp_list)):
for name, info in endp_list[item].items():
for field in fields_list:
if info[field.replace("+", " ")] > 0:
passes -= 1
debug_info[name] = {field: info[field.replace("+", " ")]}
if debug:
print(debug_info)
if passes == expected_passes:
return True
else:
print(list(debug_info), "endpoints in this list showed errors reaching their URLs")
return False
def start_cx(self):
print("Starting CXs...")
for cx_name in self.created_cx.keys():
self.json_post("/cli-json/set_cx_state", {
"test_mgr": "default_tm",
"cx_name": self.created_cx[cx_name],
"cx_state": "RUNNING"
}, debug_=self.debug)
print(".", end='')
print("")
def stop_cx(self):
print("Stopping CXs...")
for cx_name in self.created_cx.keys():
self.json_post("/cli-json/set_cx_state", {
"test_mgr": "default_tm",
"cx_name": self.created_cx[cx_name],
"cx_state": "STOPPED"
}, debug_=self.debug)
print(".", end='')
print("")
def cleanup(self):
print("Cleaning up cxs and endpoints")
if len(self.created_cx) != 0:
for cx_name in self.created_cx.keys():
req_url = "cli-json/rm_cx"
data = {
"test_mgr": "default_tm",
"cx_name": self.created_cx[cx_name]
}
self.json_post(req_url, data)
# pprint(data)
req_url = "cli-json/rm_endp"
data = {
"endp_name": cx_name
}
self.json_post(req_url, data)
# pprint(data)
def map_sta_ips(self, sta_list=None):
# avoid a mutable default argument; keep the JSON response in its own variable
for sta_eid in (sta_list or []):
eid = self.local_realm.name_to_eid(sta_eid)
response = self.json_get("/port/%s/%s/%s?fields=alias,ip" %
(eid[0], eid[1], eid[2]))
if response is not None and response.get('interface') is not None:
self.ip_map[response['interface']['alias']] = response['interface']['ip']
def create(self, ports=[], sleep_time=.5, debug_=False, suppress_related_commands_=None, http=False, ftp=False,
https=False, user=None, passwd=None, source=None, ftp_ip=None, upload_name=None, http_ip=None, https_ip=None):
cx_post_data = []
self.map_sta_ips(ports)
print("Create CXs...")
for i in range(len(list(self.ip_map))):
url = None
if i != len(list(self.ip_map)) - 1:
port_name = list(self.ip_map)[i]
ip_addr = self.ip_map[list(self.ip_map)[i + 1]]
else:
port_name = list(self.ip_map)[i]
ip_addr = self.ip_map[list(self.ip_map)[0]]
if (ip_addr is None) or (ip_addr == ""):
raise ValueError("HTTPProfile::create encountered blank ip/hostname")
shelf = self.local_realm.name_to_eid(port_name)[0]
resource = self.local_realm.name_to_eid(port_name)[1]
name = self.local_realm.name_to_eid(port_name)[2]
if upload_name is not None:
name = upload_name
if http:
if http_ip is not None:
self.port_util.set_http(port_name=name, resource=resource, on=True)
url = "%s http://%s %s" % (self.direction, http_ip, self.dest)
else:
self.port_util.set_http(port_name=name, resource=resource, on=True)
url = "%s http://%s/ %s" % (self.direction, ip_addr, self.dest)
if https:
if https_ip is not None:
self.port_util.set_http(port_name=name, resource=resource, on=True)
url = "%s https://%s %s" % (self.direction, https_ip, self.dest)
else:
self.port_util.set_http(port_name=name, resource=resource, on=True)
url = "%s https://%s/ %s" % (self.direction, ip_addr, self.dest)
if ftp:
self.port_util.set_ftp(port_name=name, resource=resource, on=True)
if user is not None and passwd is not None and source is not None:
if ftp_ip is not None:
ip_addr = ftp_ip
url = "%s ftp://%s:%s@%s%s %s" % (self.direction, user, passwd, ip_addr, source, self.dest)
print("###### url:{}".format(url))
else:
raise ValueError("user: %s, passwd: %s, and source: %s must all be set" % (user, passwd, source))
if not http and not ftp and not https:
raise ValueError("Please specify at least one of http, https, or ftp")
if (url is None) or (url == ""):
raise ValueError("HTTPProfile::create: url unset")
if upload_name is None:
endp_data = {
"alias": name + "_l4",
"shelf": shelf,
"resource": resource,
"port": name,
"type": "l4_generic",
"timeout": 10,
"url_rate": self.requests_per_ten,
"url": url,
"proxy_auth_type": 0x200
}
else:
endp_data = {
"alias": name + "_l4",
"shelf": shelf,
"resource": resource,
"port": ports[0],
"type": "l4_generic",
"timeout": 10,
"url_rate": self.requests_per_ten,
"url": url,
"ssl_cert_fname": "ca-bundle.crt",
"proxy_port": 0,
"max_speed": self.max_speed,
"proxy_auth_type": 0x200
}
url = "cli-json/add_l4_endp"
self.local_realm.json_post(url, endp_data, debug_=debug_,
suppress_related_commands_=suppress_related_commands_)
time.sleep(sleep_time)
endp_data = {
"alias": "CX_" + name + "_l4",
"test_mgr": "default_tm",
"tx_endp": name + "_l4",
"rx_endp": "NA"
}
cx_post_data.append(endp_data)
self.created_cx[name + "_l4"] = "CX_" + name + "_l4"
for cx_data in cx_post_data:
url = "/cli-json/add_cx"
self.local_realm.json_post(url, cx_data, debug_=debug_,
suppress_related_commands_=suppress_related_commands_)
time.sleep(sleep_time)
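For reference, the Layer-4 URL strings that `create()` composes follow the pattern `<direction> <url> <dest>`. A standalone sketch, where the addresses, credentials, and file path are made-up examples rather than live LANforge values:

```python
# Sketch of the Layer-4 URL strings HTTPProfile.create() composes.
direction = "dl"        # download
dest = "/dev/null"      # discard the fetched body

def http_url(ip_addr):
    # e.g. "dl http://10.40.0.1/ /dev/null" -- fetch the root page, discard it
    return "%s http://%s/ %s" % (direction, ip_addr, dest)

def ftp_url(user, passwd, ip_addr, source):
    # credentials ride inside the URL; source is the remote file path
    return "%s ftp://%s:%s@%s%s %s" % (direction, user, passwd, ip_addr, source, dest)

print(http_url("10.40.0.1"))
print(ftp_url("lanforge", "lanforge", "10.40.0.1", "/ftp_test.txt"))
```

This is why `create()` insists that `user`, `passwd`, and `source` are all set for FTP: the URL cannot be assembled with any of them missing.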

py-json/l3_cxprofile.py Normal file

@@ -0,0 +1,550 @@
#!/usr/bin/env python3
import pprint
from LANforge.lfcli_base import LFCliBase
import csv
import pandas as pd
import time
import datetime
class L3CXProfile(LFCliBase):
def __init__(self,
lfclient_host,
lfclient_port,
local_realm,
side_a_min_bps=None,
side_b_min_bps=None,
side_a_max_bps=0,
side_b_max_bps=0,
side_a_min_pdu=-1,
side_b_min_pdu=-1,
side_a_max_pdu=0,
side_b_max_pdu=0,
report_timer_=3000,
name_prefix_="Unset",
number_template_="00000",
mconn=0,
debug_=False):
"""
:param lfclient_host:
:param lfclient_port:
:param local_realm:
:param side_a_min_bps:
:param side_b_min_bps:
:param side_a_max_bps:
:param side_b_max_bps:
:param side_a_min_pdu:
:param side_b_min_pdu:
:param side_a_max_pdu:
:param side_b_max_pdu:
:param name_prefix_: prefix string for connection
:param number_template_: zero-padding width for connection numbers, possibly a starting integer with left padding
:param mconn: Multi-conn setting for this connection.
:param debug_:
"""
super().__init__(lfclient_host, lfclient_port, _debug=debug_)
self.debug = debug_
self.local_realm = local_realm
self.side_a_min_pdu = side_a_min_pdu
self.side_b_min_pdu = side_b_min_pdu
self.side_a_max_pdu = side_a_max_pdu
self.side_b_max_pdu = side_b_max_pdu
self.side_a_min_bps = side_a_min_bps
self.side_b_min_bps = side_b_min_bps
self.side_a_max_bps = side_a_max_bps
self.side_b_max_bps = side_b_max_bps
self.report_timer = report_timer_
self.created_cx = {}
self.created_endp = {}
self.name_prefix = name_prefix_
self.number_template = number_template_
self.mconn = mconn
def get_cx_count(self):
return len(self.created_cx.keys())
def get_cx_names(self):
return self.created_cx.keys()
def get_cx_report(self):
self.data = {}
for cx_name in self.get_cx_names():
self.data[cx_name] = self.json_get("/cx/" + cx_name).get(cx_name)
return self.data
def __get_rx_values(self):
cx_list = self.json_get("endp?fields=name,rx+bytes")
if self.debug:
print(self.created_cx.values())
print("==============\n", cx_list, "\n==============")
cx_rx_map = {}
for cx_name in cx_list['endpoint']:
if cx_name != 'uri' and cx_name != 'handler':
for item, value in cx_name.items():
for value_name, value_rx in value.items():
if value_name == 'rx bytes' and item in self.created_cx.values():
cx_rx_map[item] = value_rx
return cx_rx_map
def __compare_vals(self, old_list, new_list):
passes = 0
expected_passes = 0
if len(old_list) == len(new_list):
for item, value in old_list.items():
expected_passes += 1
if new_list[item] > old_list[item]:
passes += 1
if passes == expected_passes:
return True
else:
return False
else:
return False
def instantiate_file(self, file_name, file_format):
pass
def monitor(self,
duration_sec=60,
monitor_interval_ms=1,
sta_list=None,
layer3_cols=None,
port_mgr_cols=None,
created_cx=None,
monitor=True,
report_file=None,
systeminfopath=None,
output_format=None,
script_name=None,
arguments=None,
compared_report=None,
debug=False):
try:
duration_sec = self.parse_time(duration_sec).seconds
except Exception:
if (duration_sec is None) or (duration_sec <= 1):
raise ValueError("L3CXProfile::monitor wants duration_sec > 1 second")
if (duration_sec <= monitor_interval_ms):
raise ValueError("L3CXProfile::monitor wants duration_sec > monitor_interval")
if report_file is None:
raise ValueError("Monitor requires an output file to be defined")
if systeminfopath is None:
raise ValueError("Monitor requires a system info path to be defined")
if created_cx is None:
raise ValueError("Monitor needs a list of Layer 3 connections")
if (monitor_interval_ms is None) or (monitor_interval_ms < 1):
raise ValueError("L3CXProfile::monitor wants monitor_interval >= 1 second")
if layer3_cols is None:
raise ValueError("L3CXProfile::monitor wants a list of column names to monitor")
if output_format is not None:
if output_format.lower() != report_file.split('.')[-1]:
raise ValueError('Filename %s has an extension that does not match output format %s .' % (report_file, output_format))
else:
output_format = report_file.split('.')[-1]
# default save to csv first
if report_file.split('.')[-1] != 'csv':
report_file = report_file.replace(str(output_format), 'csv', 1)
print("Saving rolling data into..." + str(report_file))
# ================== Step 1, set column names and header row
layer3_cols = [self.replace_special_char(x) for x in layer3_cols]
layer3_fields = ",".join(layer3_cols)
default_cols = ['Timestamp', 'Timestamp milliseconds epoch', 'Timestamp seconds epoch', 'Duration elapsed']
default_cols.extend(layer3_cols)
if port_mgr_cols is not None:
default_cols.extend(port_mgr_cols)
header_row = default_cols
# csvwriter.writerow([systeminfo['VersionInfo']['BuildVersion'], script_name, str(arguments)])
if port_mgr_cols is not None:
port_mgr_cols = [self.replace_special_char(x) for x in port_mgr_cols]
port_mgr_cols_labelled = []
for col_name in port_mgr_cols:
port_mgr_cols_labelled.append("port mgr - " + col_name)
port_mgr_fields = ",".join(port_mgr_cols)
header_row.extend(port_mgr_cols_labelled)
# create sys info file
systeminfo = self.json_get('/')
sysinfo = [str("LANforge GUI Build: " + systeminfo['VersionInfo']['BuildVersion']),
str("Script Name: " + script_name), str("Argument input: " + str(arguments))]
with open(systeminfopath, 'w') as filehandle:
for listitem in sysinfo:
filehandle.write('%s\n' % listitem)
# ================== Step 2, monitor columns
start_time = datetime.datetime.now()
end_time = start_time + datetime.timedelta(seconds=duration_sec)
passes = 0
expected_passes = 0
old_cx_rx_values = self.__get_rx_values()
# instantiate csv file here, add specified column headers
csvfile = open(str(report_file), 'w')
csvwriter = csv.writer(csvfile, delimiter=",")
csvwriter.writerow(header_row)
# wait 10 seconds to get proper port data
time.sleep(10)
# for x in range(0, int(round(iterations, 0))):
initial_starttime = datetime.datetime.now()
while datetime.datetime.now() < end_time:
t = datetime.datetime.now()
timestamp = t.strftime("%m/%d/%Y %I:%M:%S")
t_to_millisec_epoch = int(self.get_milliseconds(t))
t_to_sec_epoch = int(self.get_seconds(t))
time_elapsed = int(self.get_seconds(t)) - int(self.get_seconds(initial_starttime))
basecolumns = [timestamp, t_to_millisec_epoch, t_to_sec_epoch, time_elapsed]
layer_3_response = self.json_get("/endp/%s?fields=%s" % (created_cx, layer3_fields))
if port_mgr_cols is not None:
port_mgr_response = self.json_get("/port/1/1/%s?fields=%s" % (sta_list, port_mgr_fields))
# get info from port manager with list of values from cx_a_side_list
if layer_3_response is None or "endpoint" not in layer_3_response:
print(layer_3_response)
raise ValueError("Cannot find columns requested to be searched. Exiting script, please retry.")
if debug:
print("Json layer_3_response from LANforge... " + str(layer_3_response))
if port_mgr_cols is not None:
if port_mgr_response is None or "interfaces" not in port_mgr_response:
print(port_mgr_response)
raise ValueError("Cannot find columns requested to be searched. Exiting script, please retry.")
if debug:
print("Json port_mgr_response from LANforge... " + str(port_mgr_response))
for endpoint in layer_3_response["endpoint"]:  # each endpoint is a dictionary
endp_values = list(endpoint.values())[0]
temp_list = basecolumns.copy()  # copy so per-endpoint appends do not grow basecolumns
for columnname in header_row[len(basecolumns):]:
temp_list.append(endp_values[columnname])
if port_mgr_cols is not None:
for sta_name in sta_list:
for interface in port_mgr_response["interfaces"]:
if sta_name in list(interface.keys())[0]:
merge = endp_values.copy()
# rename keys (separate port mgr 'rx bytes' from layer3 'rx bytes')
port_mgr_values_dict = list(interface.values())[0]
renamed_port_cols = {}
for key in port_mgr_values_dict.keys():
renamed_port_cols['port mgr - ' + key] = port_mgr_values_dict[key]
merge.update(renamed_port_cols)
for name in port_mgr_cols:
temp_list.append(merge[name])
csvwriter.writerow(temp_list)
new_cx_rx_values = self.__get_rx_values()
if debug:
print(old_cx_rx_values, new_cx_rx_values)
print("\n-----------------------------------")
print(t)
print("-----------------------------------\n")
expected_passes += 1
if self.__compare_vals(old_cx_rx_values, new_cx_rx_values):
passes += 1
else:
self.fail("FAIL: Not all stations increased traffic")
self.exit_fail()
old_cx_rx_values = new_cx_rx_values
time.sleep(monitor_interval_ms)
csvfile.close()
#comparison to last report / report inputted
if compared_report is not None:
compared_df = self.compare_two_df(dataframe_one=self.file_to_df(report_file), dataframe_two=self.file_to_df(compared_report))
exit(1)
#append compared df to created one
if output_format.lower() != 'csv':
self.df_to_file(dataframe=pd.read_csv(report_file), output_f=output_format, save_path=report_file)
else:
if output_format.lower() != 'csv':
self.df_to_file(dataframe=pd.read_csv(report_file), output_f=output_format, save_path=report_file)
def refresh_cx(self):
for cx_name in self.created_cx.keys():
self.json_post("/cli-json/show_cxe", {
"test_mgr": "ALL",
"cross_connect": cx_name
}, debug_=self.debug)
print(".", end='')
def start_cx(self):
print("Starting CXs...")
for cx_name in self.created_cx.keys():
if self.debug:
print("cx-name: %s" % (cx_name))
self.json_post("/cli-json/set_cx_state", {
"test_mgr": "default_tm",
"cx_name": cx_name,
"cx_state": "RUNNING"
}, debug_=self.debug)
if self.debug:
print(".", end='')
if self.debug:
print("")
def stop_cx(self):
print("Stopping CXs...")
for cx_name in self.created_cx.keys():
self.local_realm.stop_cx(cx_name)
print(".", end='')
print("")
def cleanup_prefix(self):
self.local_realm.cleanup_cxe_prefix(self.name_prefix)
def cleanup(self):
print("Cleaning up cxs and endpoints")
if len(self.created_cx) != 0:
for cx_name in self.created_cx.keys():
if self.debug:
print("Cleaning cx: %s"%(cx_name))
self.local_realm.rm_cx(cx_name)
for side in range(len(self.created_cx[cx_name])):
ename = self.created_cx[cx_name][side]
if self.debug:
print("Cleaning endpoint: %s"%(ename))
self.local_realm.rm_endp(self.created_cx[cx_name][side])
self.clean_cx_lists()
def clean_cx_lists(self):
# Clean out our local lists, this by itself does NOT remove anything from LANforge manager.
# but, if you are trying to modify existing connections, then clearing these arrays and
# re-calling 'create' will do the trick.
self.created_cx.clear()
self.created_endp.clear()
def create(self, endp_type, side_a, side_b, sleep_time=0.03, suppress_related_commands=None, debug_=False,
tos=None):
if self.debug:
debug_ = True
cx_post_data = []
timer_post_data = []
these_endp = []
these_cx = []
# print(self.side_a_min_rate, self.side_a_max_rate)
# print(self.side_b_min_rate, self.side_b_max_rate)
if (self.side_a_min_bps is None) \
or (self.side_a_max_bps is None) \
or (self.side_b_min_bps is None) \
or (self.side_b_max_bps is None):
raise ValueError(
"side_a_min_bps, side_a_max_bps, side_b_min_bps, and side_b_max_bps must all be set to a value")
if type(side_a) == list and type(side_b) != list:
side_b_info = self.local_realm.name_to_eid(side_b)
side_b_shelf = side_b_info[0]
side_b_resource = side_b_info[1]
for port_name in side_a:
side_a_info = self.local_realm.name_to_eid(port_name,debug=debug_)
side_a_shelf = side_a_info[0]
side_a_resource = side_a_info[1]
if port_name.find('.') < 0:
port_name = "%d.%s" % (side_a_info[1], port_name)
cx_name = "%s%s-%i" % (self.name_prefix, side_a_info[2], len(self.created_cx))
endp_a_name = cx_name + "-A"
endp_b_name = cx_name + "-B"
self.created_cx[cx_name] = [endp_a_name, endp_b_name]
self.created_endp[endp_a_name] = endp_a_name
self.created_endp[endp_b_name] = endp_b_name
these_cx.append(cx_name)
these_endp.append(endp_a_name)
these_endp.append(endp_b_name)
mconn_b = self.mconn
if mconn_b > 1:
mconn_b = 1
endp_side_a = {
"alias": endp_a_name,
"shelf": side_a_shelf,
"resource": side_a_resource,
"port": side_a_info[2],
"type": endp_type,
"min_rate": self.side_a_min_bps,
"max_rate": self.side_a_max_bps,
"min_pkt": self.side_a_min_pdu,
"max_pkt": self.side_a_max_pdu,
"ip_port": -1,
"multi_conn": self.mconn,
}
endp_side_b = {
"alias": endp_b_name,
"shelf": side_b_shelf,
"resource": side_b_resource,
"port": side_b_info[2],
"type": endp_type,
"min_rate": self.side_b_min_bps,
"max_rate": self.side_b_max_bps,
"min_pkt": self.side_b_min_pdu,
"max_pkt": self.side_b_max_pdu,
"ip_port": -1,
"multi_conn": mconn_b,
}
#print("1: endp-side-b: ", endp_side_b)
url = "/cli-json/add_endp"
self.local_realm.json_post(url, endp_side_a, debug_=debug_, suppress_related_commands_=suppress_related_commands)
self.local_realm.json_post(url, endp_side_b, debug_=debug_, suppress_related_commands_=suppress_related_commands)
#print("napping %f sec"%sleep_time)
time.sleep(sleep_time)
url = "cli-json/set_endp_flag"
data = {
"name": endp_a_name,
"flag": "AutoHelper",
"val": 1
}
self.local_realm.json_post(url, data, debug_=debug_, suppress_related_commands_=suppress_related_commands)
data["name"] = endp_b_name
self.local_realm.json_post(url, data, debug_=debug_, suppress_related_commands_=suppress_related_commands)
if (endp_type == "lf_udp") or (endp_type == "udp") or (endp_type == "lf_udp6") or (endp_type == "udp6"):
data["name"] = endp_a_name
data["flag"] = "UseAutoNAT"
self.local_realm.json_post(url, data, debug_=debug_, suppress_related_commands_=suppress_related_commands)
data["name"] = endp_b_name
self.local_realm.json_post(url, data, debug_=debug_, suppress_related_commands_=suppress_related_commands)
if tos is not None:
self.local_realm.set_endp_tos(endp_a_name, tos)
self.local_realm.set_endp_tos(endp_b_name, tos)
data = {
"alias": cx_name,
"test_mgr": "default_tm",
"tx_endp": endp_a_name,
"rx_endp": endp_b_name,
}
# pprint(data)
cx_post_data.append(data)
timer_post_data.append({
"test_mgr": "default_tm",
"cx_name": cx_name,
"milliseconds": self.report_timer
})
elif type(side_b) == list and type(side_a) != list:
side_a_info = self.local_realm.name_to_eid(side_a,debug=debug_)
side_a_shelf = side_a_info[0]
side_a_resource = side_a_info[1]
# side_a_name = side_a_info[2]
for port_name in side_b:
print(side_b)
side_b_info = self.local_realm.name_to_eid(port_name,debug=debug_)
side_b_shelf = side_b_info[0]
side_b_resource = side_b_info[1]
side_b_name = side_b_info[2]
cx_name = "%s%s-%i" % (self.name_prefix, port_name, len(self.created_cx))
endp_a_name = cx_name + "-A"
endp_b_name = cx_name + "-B"
self.created_cx[cx_name] = [endp_a_name, endp_b_name]
self.created_endp[endp_a_name] = endp_a_name
self.created_endp[endp_b_name] = endp_b_name
these_cx.append(cx_name)
these_endp.append(endp_a_name)
these_endp.append(endp_b_name)
mconn_b = self.mconn
if mconn_b > 1:
mconn_b = 1
endp_side_a = {
"alias": endp_a_name,
"shelf": side_a_shelf,
"resource": side_a_resource,
"port": side_a_info[2],
"type": endp_type,
"min_rate": self.side_a_min_bps,
"max_rate": self.side_a_max_bps,
"min_pkt": self.side_a_min_pdu,
"max_pkt": self.side_a_max_pdu,
"ip_port": -1,
"multi_conn": self.mconn,
}
endp_side_b = {
"alias": endp_b_name,
"shelf": side_b_shelf,
"resource": side_b_resource,
"port": side_b_info[2],
"type": endp_type,
"min_rate": self.side_b_min_bps,
"max_rate": self.side_b_max_bps,
"min_pkt": self.side_b_min_pdu,
"max_pkt": self.side_b_max_pdu,
"ip_port": -1,
"multi_conn": mconn_b,
}
#print("2: endp-side-b: ", endp_side_b)
url = "/cli-json/add_endp"
self.local_realm.json_post(url, endp_side_a, debug_=debug_, suppress_related_commands_=suppress_related_commands)
self.local_realm.json_post(url, endp_side_b, debug_=debug_, suppress_related_commands_=suppress_related_commands)
#print("napping %f sec" %sleep_time )
time.sleep(sleep_time)
url = "cli-json/set_endp_flag"
data = {
"name": endp_a_name,
"flag": "autohelper",
"val": 1
}
self.local_realm.json_post(url, data, debug_=debug_, suppress_related_commands_=suppress_related_commands)
url = "cli-json/set_endp_flag"
data = {
"name": endp_b_name,
"flag": "autohelper",
"val": 1
}
self.local_realm.json_post(url, data, debug_=debug_, suppress_related_commands_=suppress_related_commands)
#print("CXNAME451: %s" % cx_name)
data = {
"alias": cx_name,
"test_mgr": "default_tm",
"tx_endp": endp_a_name,
"rx_endp": endp_b_name,
}
cx_post_data.append(data)
timer_post_data.append({
"test_mgr": "default_tm",
"cx_name": cx_name,
"milliseconds": self.report_timer
})
else:
raise ValueError(
"side_a or side_b must be of type list but not both: side_a is type %s side_b is type %s" % (
type(side_a), type(side_b)))
print("wait_until_endps_appear these_endp: {} debug_ {}".format(these_endp,debug_))
self.local_realm.wait_until_endps_appear(these_endp, debug=debug_)
for data in cx_post_data:
url = "/cli-json/add_cx"
self.local_realm.json_post(url, data, debug_=debug_, suppress_related_commands_=suppress_related_commands)
time.sleep(0.01)
self.local_realm.wait_until_cxs_appear(these_cx, debug=debug_)
return these_cx, these_endp
def to_string(self):
pprint.pprint(self)
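The pass criterion that `__compare_vals()` applies, that every endpoint's rx-byte counter must strictly increase between polls, can be sketched as a simplified standalone equivalent (the counter values below are invented):

```python
# Simplified equivalent of L3CXProfile.__compare_vals(): the test passes only
# when every tracked endpoint's rx-byte counter strictly increased.
def compare_vals(old_map, new_map):
    if len(old_map) != len(new_map):
        return False  # an endpoint appeared or disappeared between polls
    return all(new_map[name] > old_map[name] for name in old_map)

old = {"cx1-A": 1000, "cx1-B": 2000}
print(compare_vals(old, {"cx1-A": 1500, "cx1-B": 2500}))  # every endpoint moved traffic
print(compare_vals(old, {"cx1-A": 1500, "cx1-B": 2000}))  # cx1-B stalled, so the poll fails
```

One stalled endpoint is enough to fail the whole poll, which is why `monitor()` calls `self.fail("FAIL: Not all stations increased traffic")` on the first False result.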

py-json/l3_cxprofile2.py Normal file

@@ -0,0 +1,678 @@
#!/usr/bin/env python3
import os
import re
import csv
import time
import base64
import random
import string
import datetime
from pprint import pprint
from lfdata import LFDataCollection
from base_profile import BaseProfile
class L3CXProfile2(BaseProfile):
def __init__(self,
lfclient_host,
lfclient_port,
local_realm,
side_a_min_bps=None,
side_b_min_bps=None,
side_a_max_bps=0,
side_b_max_bps=0,
side_a_min_pdu=-1,
side_b_min_pdu=-1,
side_a_max_pdu=0,
side_b_max_pdu=0,
report_timer_=3000,
name_prefix_="Unset",
number_template_="00000",
debug_=False):
"""
:param lfclient_host:
:param lfclient_port:
:param local_realm:
:param side_a_min_bps:
:param side_b_min_bps:
:param side_a_max_bps:
:param side_b_max_bps:
:param side_a_min_pdu:
:param side_b_min_pdu:
:param side_a_max_pdu:
:param side_b_max_pdu:
:param name_prefix_: prefix string for connection
:param number_template_: zero-padding width for connection numbers, possibly a starting integer with left padding
:param debug_:
"""
super().__init__(local_realm=local_realm,
debug=debug_)
self.side_a_min_pdu = side_a_min_pdu
self.side_b_min_pdu = side_b_min_pdu
self.side_a_max_pdu = side_a_max_pdu
self.side_b_max_pdu = side_b_max_pdu
self.side_a_min_bps = side_a_min_bps
self.side_b_min_bps = side_b_min_bps
self.side_a_max_bps = side_a_max_bps
self.side_b_max_bps = side_b_max_bps
self.report_timer = report_timer_
self.created_cx = {}
self.created_endp = {}
self.name_prefix = name_prefix_
self.number_template = number_template_
def get_cx_names(self):
return self.created_cx.keys()
def get_cx_report(self):
self.data = {}
for cx_name in self.get_cx_names():
self.data[cx_name] = self.json_get("/cx/" + cx_name).get(cx_name)
return self.data
def instantiate_file(self, file_name, file_format):
pass
############################################ transfer into lfcriteria.py
#get current rx values
def __get_rx_values(self):
cx_list = self.json_get("endp?fields=name,rx+bytes")
if self.debug:
print(self.created_cx.values())
print("==============\n", cx_list, "\n==============")
cx_rx_map = {}
for cx_name in cx_list['endpoint']:
if cx_name != 'uri' and cx_name != 'handler':
for item, value in cx_name.items():
for value_name, value_rx in value.items():
if value_name == 'rx bytes' and item in self.created_cx.values():
cx_rx_map[item] = value_rx
return cx_rx_map
#compare vals
def __compare_vals(self, old_list, new_list):
passes = 0
expected_passes = 0
if len(old_list) == len(new_list):
for item, value in old_list.items():
expected_passes += 1
if new_list[item] > old_list[item]:
passes += 1
if passes == expected_passes:
return True
else:
return False
else:
return False
############################################ transfer into lfcriteria.py
def refresh_cx(self):
for cx_name in self.created_cx.keys():
self.json_post("/cli-json/show_cxe", {
"test_mgr": "ALL",
"cross_connect": cx_name
}, debug_=self.debug)
print(".", end='')
def start_cx(self):
print("Starting CXs...")
for cx_name in self.created_cx.keys():
if self.debug:
print("cx-name: %s" % (cx_name))
self.json_post("/cli-json/set_cx_state", {
"test_mgr": "default_tm",
"cx_name": cx_name,
"cx_state": "RUNNING"
}, debug_=self.debug)
if self.debug:
print(".", end='')
if self.debug:
print("")
def stop_cx(self):
print("Stopping CXs...")
for cx_name in self.created_cx.keys():
self.local_realm.stop_cx(cx_name)
print(".", end='')
print("")
def cleanup_prefix(self):
self.local_realm.cleanup_cxe_prefix(self.name_prefix)
def cleanup(self):
print("Cleaning up cxs and endpoints")
if len(self.created_cx) != 0:
for cx_name in self.created_cx.keys():
if self.debug:
print("Cleaning cx: %s"%(cx_name))
self.local_realm.rm_cx(cx_name)
for side in range(len(self.created_cx[cx_name])):
ename = self.created_cx[cx_name][side]
if self.debug:
print("Cleaning endpoint: %s"%(ename))
self.local_realm.rm_endp(self.created_cx[cx_name][side])
# create_cx is adapted from create() above; it additionally takes a count to build multiple connections per port.
def create_cx(self, endp_type, side_a, side_b, count, sleep_time=0.03, suppress_related_commands=None, debug_=False,
tos=None):
if self.debug:
debug_ = True
cx_post_data = []
timer_post_data = []
these_endp = []
these_cx = []
# print(self.side_a_min_rate, self.side_a_max_rate)
# print(self.side_b_min_rate, self.side_b_max_rate)
if (self.side_a_min_bps is None) \
or (self.side_a_max_bps is None) \
or (self.side_b_min_bps is None) \
or (self.side_b_max_bps is None):
raise ValueError(
"side_a_min_bps, side_a_max_bps, side_b_min_bps, and side_b_max_bps must all be set to a value")
if type(side_a) != list and type(side_b) != list:
side_b_info = self.local_realm.name_to_eid(side_b)
side_b_shelf = side_b_info[0]
side_b_resource = side_b_info[1]
for i in range(count):
side_a_info = self.local_realm.name_to_eid(side_a, debug=debug_)
side_a_shelf = side_a_info[0]
side_a_resource = side_a_info[1]
if side_a.find('.') < 0:
port_name = "%d.%s" % (side_a_info[1], side_a)
cx_name = "%s%s-%i" % (self.name_prefix, side_a_info[2], len(self.created_cx)) + str(i)
endp_a_name = cx_name + "-A"
endp_b_name = cx_name + "-B"
self.created_cx[cx_name] = [endp_a_name, endp_b_name]
self.created_endp[endp_a_name] = endp_a_name
self.created_endp[endp_b_name] = endp_b_name
these_cx.append(cx_name)
these_endp.append(endp_a_name)
these_endp.append(endp_b_name)
endp_side_a = {
"alias": endp_a_name,
"shelf": side_a_shelf,
"resource": side_a_resource,
"port": side_a_info[2],
"type": endp_type,
"min_rate": self.side_a_min_bps,
"max_rate": self.side_a_max_bps,
"min_pkt": self.side_a_min_pdu,
"max_pkt": self.side_a_max_pdu,
"ip_port": -1
}
endp_side_b = {
"alias": endp_b_name,
"shelf": side_b_shelf,
"resource": side_b_resource,
"port": side_b_info[2],
"type": endp_type,
"min_rate": self.side_b_min_bps,
"max_rate": self.side_b_max_bps,
"min_pkt": self.side_b_min_pdu,
"max_pkt": self.side_b_max_pdu,
"ip_port": -1
}
url = "/cli-json/add_endp"
self.local_realm.json_post(url, endp_side_a, debug_=debug_,
suppress_related_commands_=suppress_related_commands)
self.local_realm.json_post(url, endp_side_b, debug_=debug_,
suppress_related_commands_=suppress_related_commands)
# print("napping %f sec"%sleep_time)
time.sleep(sleep_time)
url = "cli-json/set_endp_flag"
data = {
"name": endp_a_name,
"flag": "AutoHelper",
"val": 1
}
self.local_realm.json_post(url, data, debug_=debug_,
suppress_related_commands_=suppress_related_commands)
data["name"] = endp_b_name
self.local_realm.json_post(url, data, debug_=debug_,
suppress_related_commands_=suppress_related_commands)
if (endp_type == "lf_udp") or (endp_type == "udp") or (endp_type == "lf_udp6") or (endp_type == "udp6"):
data["name"] = endp_a_name
data["flag"] = "UseAutoNAT"
self.local_realm.json_post(url, data, debug_=debug_,
suppress_related_commands_=suppress_related_commands)
data["name"] = endp_b_name
self.local_realm.json_post(url, data, debug_=debug_,
suppress_related_commands_=suppress_related_commands)
if tos is not None:
self.local_realm.set_endp_tos(endp_a_name, tos)
self.local_realm.set_endp_tos(endp_b_name, tos)
data = {
"alias": cx_name,
"test_mgr": "default_tm",
"tx_endp": endp_a_name,
"rx_endp": endp_b_name,
}
# pprint(data)
cx_post_data.append(data)
timer_post_data.append({
"test_mgr": "default_tm",
"cx_name": cx_name,
"milliseconds": self.report_timer
})
elif type(side_b) == list and type(side_a) != list:
side_a_info = self.local_realm.name_to_eid(side_a, debug=debug_)
side_a_shelf = side_a_info[0]
side_a_resource = side_a_info[1]
# side_a_name = side_a_info[2]
for port_name in side_b:
for inc in range(count):
# print(side_b)
side_b_info = self.local_realm.name_to_eid(port_name, debug=debug_)
side_b_shelf = side_b_info[0]
side_b_resource = side_b_info[1]
side_b_name = side_b_info[2]
cx_name = "%s%s-%i" % (self.name_prefix, port_name, len(self.created_cx)) + str(inc)
endp_a_name = cx_name + "-A"
endp_b_name = cx_name + "-B"
self.created_cx[cx_name] = [endp_a_name, endp_b_name]
self.created_endp[endp_a_name] = endp_a_name
self.created_endp[endp_b_name] = endp_b_name
these_cx.append(cx_name)
these_endp.append(endp_a_name)
these_endp.append(endp_b_name)
endp_side_a = {
"alias": endp_a_name,
"shelf": side_a_shelf,
"resource": side_a_resource,
"port": side_a_info[2],
"type": endp_type,
"min_rate": self.side_a_min_bps,
"max_rate": self.side_a_max_bps,
"min_pkt": self.side_a_min_pdu,
"max_pkt": self.side_a_max_pdu,
"ip_port": -1
}
endp_side_b = {
"alias": endp_b_name,
"shelf": side_b_shelf,
"resource": side_b_resource,
"port": side_b_info[2],
"type": endp_type,
"min_rate": self.side_b_min_bps,
"max_rate": self.side_b_max_bps,
"min_pkt": self.side_b_min_pdu,
"max_pkt": self.side_b_max_pdu,
"ip_port": -1
}
url = "/cli-json/add_endp"
self.local_realm.json_post(url, endp_side_a, debug_=debug_,
suppress_related_commands_=suppress_related_commands)
self.local_realm.json_post(url, endp_side_b, debug_=debug_,
suppress_related_commands_=suppress_related_commands)
# print("napping %f sec" %sleep_time )
time.sleep(sleep_time)
url = "cli-json/set_endp_flag"
data = {
"name": endp_a_name,
"flag": "autohelper",
"val": 1
}
self.local_realm.json_post(url, data, debug_=debug_,
suppress_related_commands_=suppress_related_commands)
url = "cli-json/set_endp_flag"
data = {
"name": endp_b_name,
"flag": "autohelper",
"val": 1
}
self.local_realm.json_post(url, data, debug_=debug_,
suppress_related_commands_=suppress_related_commands)
# print("CXNAME451: %s" % cx_name)
data = {
"alias": cx_name,
"test_mgr": "default_tm",
"tx_endp": endp_a_name,
"rx_endp": endp_b_name,
}
cx_post_data.append(data)
timer_post_data.append({
"test_mgr": "default_tm",
"cx_name": cx_name,
"milliseconds": self.report_timer
})
else:
raise ValueError(
"side_a or side_b must be of type list but not both: side_a is type %s side_b is type %s" % (
type(side_a), type(side_b)))
print("wait_until_endps_appear these_endp: {} debug_ {}".format(these_endp, debug_))
self.local_realm.wait_until_endps_appear(these_endp, debug=debug_)
for data in cx_post_data:
url = "/cli-json/add_cx"
self.local_realm.json_post(url, data, debug_=debug_, suppress_related_commands_=suppress_related_commands)
time.sleep(0.01)
self.local_realm.wait_until_cxs_appear(these_cx, debug=debug_)
def create(self, endp_type, side_a, side_b, sleep_time=0.03, suppress_related_commands=None, debug_=False,
tos=None):
if self.debug:
debug_ = True
cx_post_data = []
timer_post_data = []
these_endp = []
these_cx = []
# print(self.side_a_min_rate, self.side_a_max_rate)
# print(self.side_b_min_rate, self.side_b_max_rate)
if (self.side_a_min_bps is None) \
or (self.side_a_max_bps is None) \
or (self.side_b_min_bps is None) \
or (self.side_b_max_bps is None):
raise ValueError(
"side_a_min_bps, side_a_max_bps, side_b_min_bps, and side_b_max_bps must all be set to a value")
if type(side_a) == list and type(side_b) != list:
side_b_info = self.local_realm.name_to_eid(side_b)
side_b_shelf = side_b_info[0]
side_b_resource = side_b_info[1]
for port_name in side_a:
side_a_info = self.local_realm.name_to_eid(port_name,debug=debug_)
side_a_shelf = side_a_info[0]
side_a_resource = side_a_info[1]
if port_name.find('.') < 0:
port_name = "%d.%s" % (side_a_info[1], port_name)
cx_name = "%s%s-%i" % (self.name_prefix, side_a_info[2], len(self.created_cx))
endp_a_name = cx_name + "-A"
endp_b_name = cx_name + "-B"
self.created_cx[cx_name] = [endp_a_name, endp_b_name]
self.created_endp[endp_a_name] = endp_a_name
self.created_endp[endp_b_name] = endp_b_name
these_cx.append(cx_name)
these_endp.append(endp_a_name)
these_endp.append(endp_b_name)
endp_side_a = {
"alias": endp_a_name,
"shelf": side_a_shelf,
"resource": side_a_resource,
"port": side_a_info[2],
"type": endp_type,
"min_rate": self.side_a_min_bps,
"max_rate": self.side_a_max_bps,
"min_pkt": self.side_a_min_pdu,
"max_pkt": self.side_a_max_pdu,
"ip_port": -1
}
endp_side_b = {
"alias": endp_b_name,
"shelf": side_b_shelf,
"resource": side_b_resource,
"port": side_b_info[2],
"type": endp_type,
"min_rate": self.side_b_min_bps,
"max_rate": self.side_b_max_bps,
"min_pkt": self.side_b_min_pdu,
"max_pkt": self.side_b_max_pdu,
"ip_port": -1
}
url = "/cli-json/add_endp"
self.local_realm.json_post(url, endp_side_a, debug_=debug_, suppress_related_commands_=suppress_related_commands)
self.local_realm.json_post(url, endp_side_b, debug_=debug_, suppress_related_commands_=suppress_related_commands)
#print("napping %f sec"%sleep_time)
time.sleep(sleep_time)
url = "cli-json/set_endp_flag"
data = {
"name": endp_a_name,
"flag": "AutoHelper",
"val": 1
}
self.local_realm.json_post(url, data, debug_=debug_, suppress_related_commands_=suppress_related_commands)
data["name"] = endp_b_name
self.local_realm.json_post(url, data, debug_=debug_, suppress_related_commands_=suppress_related_commands)
if (endp_type == "lf_udp") or (endp_type == "udp") or (endp_type == "lf_udp6") or (endp_type == "udp6"):
data["name"] = endp_a_name
data["flag"] = "UseAutoNAT"
self.local_realm.json_post(url, data, debug_=debug_, suppress_related_commands_=suppress_related_commands)
data["name"] = endp_b_name
self.local_realm.json_post(url, data, debug_=debug_, suppress_related_commands_=suppress_related_commands)
if tos is not None:
self.local_realm.set_endp_tos(endp_a_name, tos)
self.local_realm.set_endp_tos(endp_b_name, tos)
data = {
"alias": cx_name,
"test_mgr": "default_tm",
"tx_endp": endp_a_name,
"rx_endp": endp_b_name,
}
# pprint(data)
cx_post_data.append(data)
timer_post_data.append({
"test_mgr": "default_tm",
"cx_name": cx_name,
"milliseconds": self.report_timer
})
elif type(side_b) == list and type(side_a) != list:
side_a_info = self.local_realm.name_to_eid(side_a,debug=debug_)
side_a_shelf = side_a_info[0]
side_a_resource = side_a_info[1]
# side_a_name = side_a_info[2]
for port_name in side_b:
# print(side_b)
side_b_info = self.local_realm.name_to_eid(port_name,debug=debug_)
side_b_shelf = side_b_info[0]
side_b_resource = side_b_info[1]
side_b_name = side_b_info[2]
cx_name = "%s%s-%i" % (self.name_prefix, port_name, len(self.created_cx))
endp_a_name = cx_name + "-A"
endp_b_name = cx_name + "-B"
self.created_cx[cx_name] = [endp_a_name, endp_b_name]
self.created_endp[endp_a_name] = endp_a_name
self.created_endp[endp_b_name] = endp_b_name
these_cx.append(cx_name)
these_endp.append(endp_a_name)
these_endp.append(endp_b_name)
endp_side_a = {
"alias": endp_a_name,
"shelf": side_a_shelf,
"resource": side_a_resource,
"port": side_a_info[2],
"type": endp_type,
"min_rate": self.side_a_min_bps,
"max_rate": self.side_a_max_bps,
"min_pkt": self.side_a_min_pdu,
"max_pkt": self.side_a_max_pdu,
"ip_port": -1
}
endp_side_b = {
"alias": endp_b_name,
"shelf": side_b_shelf,
"resource": side_b_resource,
"port": side_b_info[2],
"type": endp_type,
"min_rate": self.side_b_min_bps,
"max_rate": self.side_b_max_bps,
"min_pkt": self.side_b_min_pdu,
"max_pkt": self.side_b_max_pdu,
"ip_port": -1
}
url = "/cli-json/add_endp"
self.local_realm.json_post(url, endp_side_a, debug_=debug_, suppress_related_commands_=suppress_related_commands)
self.local_realm.json_post(url, endp_side_b, debug_=debug_, suppress_related_commands_=suppress_related_commands)
#print("napping %f sec" %sleep_time )
time.sleep(sleep_time)
url = "cli-json/set_endp_flag"
data = {
"name": endp_a_name,
"flag": "autohelper",
"val": 1
}
self.local_realm.json_post(url, data, debug_=debug_, suppress_related_commands_=suppress_related_commands)
url = "cli-json/set_endp_flag"
data = {
"name": endp_b_name,
"flag": "autohelper",
"val": 1
}
self.local_realm.json_post(url, data, debug_=debug_, suppress_related_commands_=suppress_related_commands)
#print("CXNAME451: %s" % cx_name)
data = {
"alias": cx_name,
"test_mgr": "default_tm",
"tx_endp": endp_a_name,
"rx_endp": endp_b_name,
}
cx_post_data.append(data)
timer_post_data.append({
"test_mgr": "default_tm",
"cx_name": cx_name,
"milliseconds": self.report_timer
})
else:
raise ValueError(
"side_a or side_b must be of type list but not both: side_a is type %s side_b is type %s" % (
type(side_a), type(side_b)))
print("wait_until_endps_appear these_endp: {} debug_ {}".format(these_endp,debug_))
self.local_realm.wait_until_endps_appear(these_endp, debug=debug_)
for data in cx_post_data:
url = "/cli-json/add_cx"
self.local_realm.json_post(url, data, debug_=debug_, suppress_related_commands_=suppress_related_commands)
time.sleep(0.01)
self.local_realm.wait_until_cxs_appear(these_cx, debug=debug_)
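For reference, a minimal, self-contained sketch of the CLI-JSON payload shapes that create() posts for one cross-connect. The helper name `build_l3_payloads` and the example EID "1.1.sta0000" are hypothetical, not part of the library:

```python
# Sketch only: mirrors the add_endp / add_cx payload shape used by create()
# above. build_l3_payloads and the example EID are illustrative assumptions.
def build_l3_payloads(name_prefix, port_eid, endp_type, min_bps, max_bps):
    # An EID like "1.1.sta0000" splits into shelf, resource, and port name.
    shelf, resource, port = port_eid.split('.')
    cx_name = "%s%s-0" % (name_prefix, port)
    endp_a = {
        "alias": cx_name + "-A",
        "shelf": int(shelf),
        "resource": int(resource),
        "port": port,
        "type": endp_type,
        "min_rate": min_bps,
        "max_rate": max_bps,
        "ip_port": -1,
    }
    cx = {
        "alias": cx_name,
        "test_mgr": "default_tm",
        "tx_endp": cx_name + "-A",
        "rx_endp": cx_name + "-B",
    }
    return endp_a, cx
```

The A-side payload goes to /cli-json/add_endp (twice, once per side), and the cx dict goes to /cli-json/add_cx after the endpoints appear.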
def to_string(self):
pprint(self)
# temp transfer of functions from test script to class
def build(self):
self.create(endp_type="lf_udp", side_a=self.station_profile.station_names, side_b=self.upstream,
sleep_time=0)
def start(self):
self.start_cx()
def stop(self):
self.stop_cx()
# TODO: save these variables in the l3cx profile upon creation of the profile, and call them here
def monitor_record(self,
duration_sec=60,
monitor_interval_ms=1,
sta_list=None,
layer3_cols=None,
port_mgr_cols=None,
created_cx=None,
report_file=None,
output_format=None,
script_name=None,
arguments=None,
compared_report=None,
debug=False):
try:
duration_sec = self.parse_time(duration_sec).seconds
except Exception:
if (duration_sec is None) or (duration_sec <= 1):
raise ValueError("L3CXProfile::monitor wants duration_sec > 1 second")
if (duration_sec <= monitor_interval_ms):
raise ValueError("L3CXProfile::monitor wants duration_sec > monitor_interval")
if report_file is None:
raise ValueError("Monitor requires an output file to be defined")
if created_cx is None:
raise ValueError("Monitor needs a list of Layer 3 connections")
if (monitor_interval_ms is None) or (monitor_interval_ms < 1):
raise ValueError("L3CXProfile::monitor wants monitor_interval >= 1 second")
if layer3_cols is None:
raise ValueError("L3CXProfile::monitor wants a list of column names to monitor")
if output_format is not None:
if output_format.lower() != report_file.split('.')[-1]:
raise ValueError('Filename %s has an extension that does not match output format %s .' % (report_file, output_format))
else:
output_format = report_file.split('.')[-1]
# default: save to csv first
if report_file.split('.')[-1] != 'csv':
report_file = report_file.replace(str(output_format), 'csv', 1)
print("Saving rolling data into... " + str(report_file))
# add layer3 cols to header row
layer3_cols = [self.replace_special_char(x) for x in layer3_cols]
layer3_fields = ",".join(layer3_cols)
default_cols = ['Timestamp', 'Timestamp milliseconds epoch', 'Duration elapsed']
default_cols.extend(layer3_cols)
header_row = default_cols
# add port mgr columns to header row
if port_mgr_cols is not None:
port_mgr_cols = [self.replace_special_char(x) for x in port_mgr_cols]
port_mgr_cols_labelled = []
for col_name in port_mgr_cols:
port_mgr_cols_labelled.append("port mgr - " + col_name)
header_row.extend(port_mgr_cols_labelled)
# add sys info to header row
systeminfo = self.json_get('/')
header_row.extend([str("LANforge GUI Build: " + systeminfo['VersionInfo']['BuildVersion']),
str("Script Name: " + script_name),
str("Argument input: " + str(arguments))])
# cut "sta" off all "sta_names"
sta_list_edit = []
if sta_list is not None:
for sta in sta_list:
sta_list_edit.append(sta[4:])
sta_list = ",".join(sta_list_edit)
# instantiate csv file here, add specified column headers
csvfile = open(str(report_file), 'w')
csvwriter = csv.writer(csvfile, delimiter=",")
csvwriter.writerow(header_row)
# wait 10 seconds to get IPs
time.sleep(10)
start_time = datetime.datetime.now()
end_time = start_time + datetime.timedelta(seconds=duration_sec)
# create lf data object
lf_data_collection = LFDataCollection(local_realm=self.local_realm, debug=self.debug)
while datetime.datetime.now() < end_time:
port_mgr_fields = ",".join(port_mgr_cols) if port_mgr_cols is not None else None
csvwriter.writerow(lf_data_collection.monitor_interval(header_row_=header_row, start_time_=start_time, sta_list_=sta_list_edit, created_cx_=created_cx, layer3_fields_=layer3_fields, port_mgr_fields_=port_mgr_fields))
time.sleep(monitor_interval_ms)
csvfile.close()
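The monitor loop above appends one row per interval to a CSV file. A stripped-down, self-contained sketch of that rolling-row pattern (the column names and the fake samples are illustrative, not LANforge data):

```python
import csv
import datetime
import io

# Sketch of the rolling-row CSV pattern used by monitor_record() above.
# Writes a header, then one timestamped row per sample interval.
def roll_csv(samples, header):
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter=",")
    writer.writerow(header)
    start = datetime.datetime(2021, 4, 22)
    for offset_sec, rx_bytes in samples:
        t = start + datetime.timedelta(seconds=offset_sec)
        writer.writerow([t.strftime("%m/%d/%Y %I:%M:%S"), offset_sec, rx_bytes])
    return buf.getvalue()
```

In the real method the rows come from LFDataCollection.monitor_interval() and the file handle is a real file rather than a StringIO buffer.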
def pre_cleanup(self):
self.cleanup_prefix()
py-json/l4_cxprofile.py Normal file
@@ -0,0 +1,273 @@
#!/usr/bin/env python3
from LANforge.lfcli_base import LFCliBase
import pprint
from pprint import pprint
import requests
import pandas as pd
import time
import datetime
import ast
import csv
import os
class L4CXProfile(LFCliBase):
def __init__(self, lfclient_host, lfclient_port, local_realm, debug_=False):
super().__init__(lfclient_host, lfclient_port, debug_)
self.lfclient_url = "http://%s:%s" % (lfclient_host, lfclient_port)
self.debug = debug_
self.url = "http://localhost/"
self.requests_per_ten = 600
self.local_realm = local_realm
self.created_cx = {}
self.created_endp = []
self.lfclient_port = lfclient_port
self.lfclient_host = lfclient_host
def check_errors(self, debug=False):
fields_list = ["!conn", "acc.+denied", "bad-proto", "bad-url", "other-err", "total-err", "rslv-p", "rslv-h",
"timeout", "nf+(4xx)", "http-r", "http-p", "http-t", "login-denied"]
endp_list = self.json_get("layer4/list?fields=%s" % ','.join(fields_list))
debug_info = {}
if endp_list is not None and endp_list['endpoint'] is not None:
endp_list = endp_list['endpoint']
expected_passes = len(endp_list)
passes = len(endp_list)
for item in range(len(endp_list)):
for name, info in endp_list[item].items():
for field in fields_list:
if info[field.replace("+", " ")] > 0:
passes -= 1
debug_info[name] = {field: info[field.replace("+", " ")]}
if debug:
print(debug_info)
if passes == expected_passes:
return True
else:
print(list(debug_info), " Endps in this list showed errors getting to %s " % self.url)
return False
def start_cx(self):
print("Starting CXs...")
for cx_name in self.created_cx.keys():
self.json_post("/cli-json/set_cx_state", {
"test_mgr": "default_tm",
"cx_name": self.created_cx[cx_name],
"cx_state": "RUNNING"
}, debug_=self.debug)
print(".", end='')
print("")
def stop_cx(self):
print("Stopping CXs...")
for cx_name in self.created_cx.keys():
self.json_post("/cli-json/set_cx_state", {
"test_mgr": "default_tm",
"cx_name": self.created_cx[cx_name],
"cx_state": "STOPPED"
}, debug_=self.debug)
print(".", end='')
print("")
def check_request_rate(self):
endp_list = self.json_get("layer4/list?fields=urls/s")
expected_passes = 0
passes = 0
# TODO: this might raise a nameerror lower down
# if self.target_requests_per_ten is None:
# raise NameError("check request rate: missing self.target_requests_per_ten")
if endp_list is not None and endp_list['endpoint'] is not None:
endp_list = endp_list['endpoint']
for item in endp_list:
for name, info in item.items():
if name in self.created_cx.keys():
expected_passes += 1
if info['urls/s'] * self.requests_per_ten >= self.target_requests_per_ten * .9:
print(name, info['urls/s'], info['urls/s'] * self.requests_per_ten, self.target_requests_per_ten * .9)
passes += 1
return passes == expected_passes
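check_request_rate() passes an endpoint when its projected request total reaches 90% of the target. The arithmetic in isolation (the helper name is hypothetical; the 0.9 margin comes from the code above):

```python
# Sketch of the pass criterion used in check_request_rate() above:
# urls/s scaled by the ten-minute request budget must reach at least
# `margin` of the configured target. meets_rate is an illustrative name.
def meets_rate(urls_per_sec, requests_per_ten, target_per_ten, margin=0.9):
    return urls_per_sec * requests_per_ten >= target_per_ten * margin
```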
def cleanup(self):
print("Cleaning up cxs and endpoints")
if len(self.created_cx) != 0:
for cx_name in self.created_cx.keys():
req_url = "cli-json/rm_cx"
data = {
"test_mgr": "default_tm",
"cx_name": self.created_cx[cx_name]
}
self.json_post(req_url, data)
# pprint(data)
req_url = "cli-json/rm_endp"
data = {
"endp_name": cx_name
}
self.json_post(req_url, data)
# pprint(data)
def create(self, ports=[], sleep_time=.5, debug_=False, suppress_related_commands_=None):
cx_post_data = []
for port_name in ports:
eid = self.local_realm.name_to_eid(port_name)
print("port_name: {} len: {} eid: {}".format(port_name, len(eid), eid))
shelf = eid[0]
resource = eid[1]
name = eid[2]
endp_data = {
"alias": name + "_l4",
"shelf": shelf,
"resource": resource,
"port": name,
"type": "l4_generic",
"timeout": 10,
"url_rate": self.requests_per_ten,
"url": self.url,
"proxy_auth_type": 0x200
}
url = "cli-json/add_l4_endp"
self.local_realm.json_post(url, endp_data, debug_=debug_,
suppress_related_commands_=suppress_related_commands_)
time.sleep(sleep_time)
endp_data = {
"alias": "CX_" + name + "_l4",
"test_mgr": "default_tm",
"tx_endp": name + "_l4",
"rx_endp": "NA"
}
cx_post_data.append(endp_data)
self.created_cx[name + "_l4"] = "CX_" + name + "_l4"
for cx_data in cx_post_data:
url = "/cli-json/add_cx"
self.local_realm.json_post(url, cx_data, debug_=debug_,
suppress_related_commands_=suppress_related_commands_)
time.sleep(sleep_time)
def monitor(self,
duration_sec=60,
monitor_interval=1,
col_names=None,
created_cx=None,
monitor=True,
report_file=None,
output_format=None,
script_name=None,
arguments=None,
iterations=0,
debug=False):
try:
duration_sec = LFCliBase.parse_time(duration_sec).seconds
except Exception:
if (duration_sec is None) or (duration_sec <= 1):
raise ValueError("L4CXProfile::monitor wants duration_sec > 1 second")
if (duration_sec <= monitor_interval):
raise ValueError("L4CXProfile::monitor wants duration_sec > monitor_interval")
if report_file is None:
raise ValueError("Monitor requires an output file to be defined")
if created_cx is None:
raise ValueError("Monitor needs a list of Layer 4 connections")
if (monitor_interval is None) or (monitor_interval < 1):
raise ValueError("L4CXProfile::monitor wants monitor_interval >= 1 second")
if output_format is not None:
if output_format.lower() != report_file.split('.')[-1]:
raise ValueError('Filename %s does not match output format %s' % (report_file, output_format))
else:
output_format = report_file.split('.')[-1]
# Step 1 - Assign column names
if col_names is not None and len(col_names) > 0:
header_row = col_names
else:
header_row = list(list(self.json_get("/layer4/all")['endpoint'][0].values())[0].keys())
if debug:
print(header_row)
# Step 2 - Monitor columns
start_time = datetime.datetime.now()
end_time = start_time + datetime.timedelta(seconds=duration_sec)
sleep_interval = duration_sec // 5
if debug:
print("Sleep_interval is %s" % sleep_interval)
print("Start time is %s" % start_time)
print("End time is %s" % end_time)
value_map = dict()
passes = 0
expected_passes = 0
timestamps = []
for test in range(1+iterations):
while datetime.datetime.now() < end_time:
if col_names is None:
response = self.json_get("/layer4/all")
else:
fields = ",".join(col_names)
response = self.json_get("/layer4/%s?fields=%s" % (created_cx, fields))
if debug:
print(response)
if response is None:
print(response)
raise ValueError("Cannot find any endpoints")
if monitor:
if debug:
print(response)
time.sleep(sleep_interval)
t = datetime.datetime.now()
timestamps.append(t)
value_map[t] = response
expected_passes += 1
if self.check_errors(debug):
if self.check_request_rate():
passes += 1
else:
self._fail("FAIL: Request rate did not exceed 90% target rate")
self.exit_fail()
else:
self._fail("FAIL: Errors found getting to %s " % self.url)
self.exit_fail()
time.sleep(monitor_interval)
print(value_map)
#[further] post-processing data, after test completion
full_test_data_list = []
for test_timestamp, data in value_map.items():
#reduce the endpoint data to single dictionary of dictionaries
for datum in data["endpoint"]:
for endpoint_data in datum.values():
if debug:
print(endpoint_data)
endpoint_data["Timestamp"] = test_timestamp
full_test_data_list.append(endpoint_data)
header_row.append("Timestamp")
header_row.append('Timestamp milliseconds')
df = pd.DataFrame(full_test_data_list)
df["Timestamp milliseconds"] = [self.get_milliseconds(x) for x in df["Timestamp"]]
# round entire column
df["Timestamp milliseconds"] = df["Timestamp milliseconds"].astype(int)
df["Timestamp"] = df["Timestamp"].apply(lambda x: x.strftime("%m/%d/%Y %I:%M:%S"))
df = df[["Timestamp", "Timestamp milliseconds", *header_row[:-2]]]
# compare previous data to current data
systeminfo = ast.literal_eval(requests.get('http://' + str(self.lfclient_host) + ':' + str(self.lfclient_port)).text)
if output_format == 'hdf':
df.to_hdf(report_file, 'table', append=True)
if output_format == 'parquet':
df.to_parquet(report_file, engine='pyarrow')
if output_format == 'png':
fig = df.plot().get_figure()
fig.savefig(report_file)
if output_format.lower() in ['excel', 'xlsx'] or report_file.split('.')[-1] == 'xlsx':
df.to_excel(report_file, index=False)
if output_format == 'df':
return df
supported_formats = ['csv', 'json', 'stata', 'pickle', 'html']
for x in supported_formats:
if output_format.lower() == x or report_file.split('.')[-1] == x:
getattr(df, 'to_' + x)(report_file)
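Dispatching the DataFrame writer from the output format can be sketched without pandas; `DummyFrame` is a stand-in object with to_csv/to_json methods, not a real DataFrame:

```python
# Sketch: pick a to_<format> writer method by name from the output format,
# as the export step above does. DummyFrame and export() are illustrative
# stand-ins, not pandas or library code.
class DummyFrame:
    def __init__(self):
        self.written = []

    def to_csv(self, path):
        self.written.append(("csv", path))

    def to_json(self, path):
        self.written.append(("json", path))


def export(frame, report_file, output_format, supported=("csv", "json")):
    ext = report_file.split('.')[-1]
    for fmt in supported:
        if output_format.lower() == fmt or ext == fmt:
            # getattr avoids building a code string for exec()
            getattr(frame, "to_" + fmt)(report_file)
```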
py-json/lf_cv_base.py Normal file
@@ -0,0 +1,77 @@
#!/usr/bin/env python3
"""
Base Class to be used for Chamber View Tests
Methods:
1.) Add a CV Profile
2.) Remove a CV Profile
3.) Add a DUT
4.) Show a CV Profile
"""
from LANforge.lfcli_base import LFCliBase
class ChamberViewBase(LFCliBase):
def __init__(self, _lfjson_host="localhost", _lfjson_port=8080, _debug=False):
super().__init__(_lfjson_host=_lfjson_host, _lfjson_port=_lfjson_port, _debug=_debug)
def remove_text_blobs(self):
pass
def add_text_blobs(self, type="", name="", data="", debug=False):
data = {'type': type,
'name': name,
"text": data
}
self.json_post("/cli-json/add_text_blob/", data, debug_=debug)
def get_text_blob(self, type="", name="", debug=False):
data = {'type': type,
'name': name,
}
return self.json_post("/cli-json/show_text_blob/", data, debug_=debug)
def add_dut(self):
"""
//for DUT
/cli-json/add_dut
(
{
"name": Dut name which we want to give,
"flags": "4098",
"img_file" : "NONE",
"sw_version" : "[BLANK]",
"hw_version": "[BLANK]",
"model_num":"[BLANK]",
"serial_num":"[BLANK]",
"serial_port":"[BLANK]",
"wan_port":"[BLANK]",
"lan_port": "[BLANK]",
"ssid1": SSIDname1,
"passwd1": SSIDpassword1,
"ssid2": SSIDname2,
"passwd2": SSIDpassword2,
"ssid3":"[BLANK]",
"passwd3" :"[BLANK]",
"mgt_ip" : "0.0.0.0",
"api_id": "0",
"flags_mask" : "NA",
"antenna_count1" : "0",
"antenna_count2":"0",
"antenna_count3":"0",
"bssid1" : "00:00:00:00:00:00",
"bssid2" : "00:00:00:00:00:00",
"bssid3" : "00:00:00:00:00:00",
"top_left_x": "0",
"top_left_y": "0",
"eap_id": "[BLANK]",
}
)
"""
pass
py-json/lfdata.py Normal file
@@ -0,0 +1,102 @@
#!/usr/bin/env python3
import re
import time
import pprint
from pprint import pprint
import os
import datetime
import base64
import xlsxwriter
import pandas as pd
import requests
import ast
import csv
# LFData class actions:
# - Methods to collect data/store data (use from monitor instance) - used by Profile class.
# - file open/save
# - save row (rolling) - to CSV (standard)
# - headers
# - file to data-storage-type conversion and vice versa (e.g. dataframe (or datatable) to file type and vice versa)
# - other common util methods related to immediate data storage
# - include compression method
# - monitoring truncates every 5 mins and sends to report? --- need clarification. truncate file and rewrite to same file?
# - large data collection use NFS share to NAS.
# Websocket class actions:
#reading data from websockets
class LFDataCollection:
def __init__(self, local_realm, debug=False):
self.parent_realm = local_realm
self.exit_on_error = False
self.debug = debug or local_realm.debug
def json_get(self, _req_url, debug_=False):
return self.parent_realm.json_get(_req_url, debug_=debug_)
def check_json_validity(self, keyword=None, json_response=None):
if json_response is None:
raise ValueError("Cannot find columns requested to be searched in port manager. Exiting script, please retry.")
if keyword is not None and keyword not in json_response:
raise ValueError("Cannot find proper information from json. Please check your json request. Exiting script, please retry.")
def get_milliseconds(self,
timestamp):
return (timestamp - datetime.datetime(1970,1,1)).total_seconds()*1000
def get_seconds(self,
timestamp):
return (timestamp - datetime.datetime(1970,1,1)).total_seconds()
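get_milliseconds()/get_seconds() are plain epoch conversions; the same arithmetic as standalone functions:

```python
import datetime

# Standalone versions of the epoch helpers above: elapsed time since the
# Unix epoch, in milliseconds and in seconds.
def get_milliseconds(timestamp):
    return (timestamp - datetime.datetime(1970, 1, 1)).total_seconds() * 1000


def get_seconds(timestamp):
    return (timestamp - datetime.datetime(1970, 1, 1)).total_seconds()
```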
#only for ipv4_variable_time at the moment
def monitor_interval(self, header_row_= None,
start_time_= None, sta_list_= None,
created_cx_= None, layer3_fields_= None,
port_mgr_fields_= None):
# time calculations for while loop and writing to csv
t = datetime.datetime.now()
timestamp = t.strftime("%m/%d/%Y %I:%M:%S")
t_to_millisec_epoch = int(self.get_milliseconds(t))
time_elapsed = int(self.get_seconds(t)) - int(self.get_seconds(start_time_))
# get responses from json and check their validity
layer_3_response = self.json_get("/endp/%s?fields=%s" % (created_cx_, layer3_fields_), debug_=self.debug)
self.check_json_validity(keyword="endpoint", json_response=layer_3_response)
port_mgr_response = None
if port_mgr_fields_ is not None:
port_mgr_response = self.json_get("/port/1/1/%s?fields=%s" % (sta_list_, port_mgr_fields_), debug_=self.debug)
self.check_json_validity(keyword="interfaces", json_response=port_mgr_response)
# dict manipulation
temp_list = []
for endpoint in layer_3_response["endpoint"]:
if self.debug:
print("Current endpoint values list... ")
print(list(endpoint.values())[0])
temp_endp_values = list(endpoint.values())[0]  # dict
temp_list.extend([timestamp, t_to_millisec_epoch, time_elapsed])
current_sta = temp_endp_values['name']
merge = {}
if port_mgr_fields_ is not None:
for sta_name in sta_list_:
if sta_name in current_sta:
for interface in port_mgr_response["interfaces"]:
if sta_name in list(interface.keys())[0]:
merge = temp_endp_values.copy()
port_mgr_values_dict = list(interface.values())[0]
renamed_port_cols = {}
for key in port_mgr_values_dict.keys():
renamed_port_cols['port mgr - ' + key] = port_mgr_values_dict[key]
merge.update(renamed_port_cols)
for name in header_row_[3:-3]:
temp_list.append(merge[name])
return temp_list
#class WebSocket():
py-json/mac_vlan_profile.py Normal file
@@ -0,0 +1,194 @@
#!/usr/bin/env python3
from LANforge.lfcli_base import LFCliBase
from LANforge import LFRequest
from LANforge import LFUtils
from LANforge import set_port
import pprint
from pprint import pprint
import time
class MACVLANProfile(LFCliBase):
def __init__(self, lfclient_host, lfclient_port,
local_realm,
macvlan_parent="eth1",
num_macvlans=1,
admin_down=False,
dhcp=False,
debug_=False):
super().__init__(lfclient_host, lfclient_port, debug_)
self.local_realm = local_realm
self.num_macvlans = num_macvlans
self.macvlan_parent = macvlan_parent
self.resource = 1
self.shelf = 1
self.desired_macvlans = []
self.created_macvlans = []
self.dhcp = dhcp
self.netmask = None
self.first_ip_addr = None
self.gateway = None
self.ip_list = []
self.COMMANDS = ["set_port"]
self.desired_set_port_cmd_flags = []
self.desired_set_port_current_flags = [] # do not default down, "if_down"
self.desired_set_port_interest_flags = ["current_flags"] # do not default down, "ifdown"
self.set_port_data = {
"shelf": 1,
"resource": 1,
"port": None,
"current_flags": 0,
"interest": 0, # (0x2 + 0x4000 + 0x800000) # current, dhcp, down,
}
def add_named_flags(self, desired_list, command_ref):
if desired_list is None:
raise ValueError("addNamedFlags wants a list of desired flag names")
if len(desired_list) < 1:
print("addNamedFlags: empty desired list")
return 0
if (command_ref is None) or (len(command_ref) < 1):
raise ValueError("addNamedFlags wants a maps of flag values")
result = 0
for name in desired_list:
if (name is None) or (name == ""):
continue
if name not in command_ref:
if self.debug:
pprint(command_ref)
raise ValueError("flag %s not in map" % name)
result += command_ref[name]
return result
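add_named_flags() folds a list of flag names into the numeric bitmask that set_port expects. A toy demonstration with a hypothetical flag map (the real values live in LANforge/set_port.py, and these are not them):

```python
# TOY_FLAGS values are made-up for illustration; the real constants are
# defined in LANforge/set_port.py.
TOY_FLAGS = {"use_dhcp": 0x80000000, "if_down": 0x1}


def add_named_flags(desired, command_ref):
    # Sum the numeric value of each named flag to build the bitmask,
    # mirroring the method above.
    result = 0
    for name in desired:
        if not name:
            continue
        if name not in command_ref:
            raise ValueError("flag %s not in map" % name)
        result += command_ref[name]
    return result
```

Since each flag occupies a distinct bit, summing distinct names is equivalent to OR-ing them.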
def set_command_param(self, command_name, param_name, param_value):
# we have to check what the param name is
if (command_name is None) or (command_name == ""):
return
if (param_name is None) or (param_name == ""):
return
if command_name not in self.COMMANDS:
raise ValueError("Command name [%s] not defined in %s" % (command_name, self.COMMANDS))
# return
if command_name == "set_port":
self.set_port_data[param_name] = param_value
def set_command_flag(self, command_name, param_name, value):
# we have to check what the param name is
if (command_name is None) or (command_name == ""):
return
if (param_name is None) or (param_name == ""):
return
if command_name not in self.COMMANDS:
print("Command name [%s] not defined in %s" % (command_name, self.COMMANDS))
return
elif command_name == "set_port":
if (param_name not in set_port.set_port_current_flags) and (
param_name not in set_port.set_port_cmd_flags) and (
param_name not in set_port.set_port_interest_flags):
print("Parameter name [%s] not defined in set_port.py" % param_name)
if self.debug:
pprint(set_port.set_port_cmd_flags)
pprint(set_port.set_port_current_flags)
pprint(set_port.set_port_interest_flags)
return
if (param_name in set_port.set_port_cmd_flags):
if (value == 1) and (param_name not in self.desired_set_port_cmd_flags):
self.desired_set_port_cmd_flags.append(param_name)
elif (value == 0) and (param_name in self.desired_set_port_cmd_flags):
self.desired_set_port_cmd_flags.remove(param_name)
elif (param_name in set_port.set_port_current_flags):
if (value == 1) and (param_name not in self.desired_set_port_current_flags):
self.desired_set_port_current_flags.append(param_name)
elif (value == 0) and (param_name in self.desired_set_port_current_flags):
self.desired_set_port_current_flags.remove(param_name)
elif (param_name in set_port.set_port_interest_flags):
if (value == 1) and (param_name not in self.desired_set_port_interest_flags):
self.desired_set_port_interest_flags.append(param_name)
elif (value == 0) and (param_name in self.desired_set_port_interest_flags):
self.desired_set_port_interest_flags.remove(param_name)
else:
raise ValueError("Unknown param name: " + param_name)
def create(self, admin_down=False, debug=False, sleep_time=1):
print("Creating MACVLANs...")
req_url = "/cli-json/add_mvlan"
if not self.dhcp and self.first_ip_addr is not None and self.netmask is not None and self.gateway is not None:
self.desired_set_port_interest_flags.append("ip_address")
self.desired_set_port_interest_flags.append("ip_Mask")
self.desired_set_port_interest_flags.append("ip_gateway")
self.ip_list = LFUtils.gen_ip_series(ip_addr=self.first_ip_addr, netmask=self.netmask,
num_ips=self.num_macvlans)
if self.dhcp:
print("Using DHCP")
self.desired_set_port_current_flags.append("use_dhcp")
self.desired_set_port_interest_flags.append("dhcp")
self.set_port_data["current_flags"] = self.add_named_flags(self.desired_set_port_current_flags,
set_port.set_port_current_flags)
self.set_port_data["interest"] = self.add_named_flags(self.desired_set_port_interest_flags,
set_port.set_port_interest_flags)
set_port_r = LFRequest.LFRequest(self.lfclient_url + "/cli-json/set_port")
for i in range(len(self.desired_macvlans)):
data = {
"shelf": self.shelf,
"resource": self.resource,
"mac": "xx:xx:xx:*:*:xx",
"port": self.local_realm.name_to_eid(self.macvlan_parent)[2],
"index": int(self.desired_macvlans[i][self.desired_macvlans[i].index('#') + 1:]),
#"dhcp": self.dhcp,
"flags": None
}
if admin_down:
data["flags"] = 1
else:
data["flags"] = 0
self.created_macvlans.append("%s.%s.%s#%d" % (self.shelf, self.resource,
self.macvlan_parent, int(
self.desired_macvlans[i][self.desired_macvlans[i].index('#') + 1:])))
self.local_realm.json_post(req_url, data)
time.sleep(sleep_time)
LFUtils.wait_until_ports_appear(base_url=self.lfclient_url, port_list=self.created_macvlans)
print(self.created_macvlans)
time.sleep(5)
for i in range(len(self.created_macvlans)):
eid = self.local_realm.name_to_eid(self.created_macvlans[i])
name = eid[2]
self.set_port_data["port"] = name # for set_port calls.
if not self.dhcp and self.first_ip_addr is not None and self.netmask is not None \
and self.gateway is not None:
self.set_port_data["ip_addr"] = self.ip_list[i]
self.set_port_data["netmask"] = self.netmask
self.set_port_data["gateway"] = self.gateway
set_port_r.addPostData(self.set_port_data)
json_response = set_port_r.jsonPost(debug)
time.sleep(sleep_time)
def cleanup(self):
print("Cleaning up MACVLANs...")
print(self.created_macvlans)
for port_eid in self.created_macvlans:
self.local_realm.rm_port(port_eid, check_exists=True)
time.sleep(.02)
# And now see if they are gone
LFUtils.wait_until_ports_disappear(base_url=self.lfclient_url, port_list=self.created_macvlans)
def admin_up(self):
for macvlan in self.created_macvlans:
self.local_realm.admin_up(macvlan)
def admin_down(self):
for macvlan in self.created_macvlans:
self.local_realm.admin_down(macvlan)


@@ -0,0 +1,190 @@
#!/usr/bin/env python3
from LANforge.lfcli_base import LFCliBase
import pprint
from pprint import pprint
class MULTICASTProfile(LFCliBase):
def __init__(self, lfclient_host, lfclient_port, local_realm,
report_timer_=3000, name_prefix_="Unset", number_template_="00000", debug_=False):
"""
:param lfclient_host:
:param lfclient_port:
:param local_realm:
:param name_prefix_: prefix string for connection
        :param number_template_: how many zeros wide we pad; possibly a starting integer with left padding
:param debug_:
"""
super().__init__(lfclient_host, lfclient_port, debug_)
self.lfclient_url = "http://%s:%s" % (lfclient_host, lfclient_port)
self.debug = debug_
self.local_realm = local_realm
self.report_timer = report_timer_
self.created_mc = {}
self.name_prefix = name_prefix_
self.number_template = number_template_
    def clean_mc_lists(self):
        # Clean out our local lists. This by itself does NOT remove anything from the
        # LANforge manager, but if you are modifying existing connections, clearing
        # these lists and re-calling create() will do the trick.
        self.created_mc = {}
def get_mc_names(self):
return self.created_mc.keys()
def refresh_mc(self, debug_=False):
for endp_name in self.get_mc_names():
self.json_post("/cli-json/show_endpoints", {
"endpoint": endp_name
}, debug_=debug_)
def start_mc(self, suppress_related_commands=None, debug_=False):
if self.debug:
debug_ = True
for endp_name in self.get_mc_names():
print("Starting mcast endpoint: %s" % (endp_name))
json_data = {
"endp_name": endp_name
}
url = "cli-json/start_endp"
self.local_realm.json_post(url, json_data, debug_=debug_,
suppress_related_commands_=suppress_related_commands)
pass
def stop_mc(self, suppress_related_commands=None, debug_=False):
if self.debug:
debug_ = True
for endp_name in self.get_mc_names():
json_data = {
"endp_name": endp_name
}
url = "cli-json/stop_endp"
self.local_realm.json_post(url, json_data, debug_=debug_,
suppress_related_commands_=suppress_related_commands)
pass
def cleanup_prefix(self):
self.local_realm.cleanup_cxe_prefix(self.name_prefix)
def cleanup(self, suppress_related_commands=None, debug_ = False):
if self.debug:
debug_ = True
for endp_name in self.get_mc_names():
self.local_realm.rm_endp(endp_name, debug_=debug_, suppress_related_commands_=suppress_related_commands)
def create_mc_tx(self, endp_type, side_tx, mcast_group="224.9.9.9", mcast_dest_port=9999,
suppress_related_commands=None, debug_=False):
if self.debug:
debug_ = True
side_tx_info = self.local_realm.name_to_eid(side_tx)
side_tx_shelf = side_tx_info[0]
side_tx_resource = side_tx_info[1]
side_tx_port = side_tx_info[2]
side_tx_name = "%smtx-%s-%i" % (self.name_prefix, side_tx_port, len(self.created_mc))
json_data = []
# add_endp mcast-xmit-sta 1 1 side_tx mc_udp -1 NO 4000000 0 NO 1472 0 INCREASING NO 32 0 0
json_data = {
'alias': side_tx_name,
'shelf': side_tx_shelf,
'resource': side_tx_resource,
'port': side_tx_port,
'type': endp_type,
'ip_port': -1,
            'is_rate_bursty': 'NO',
            'min_rate': 256000,
'max_rate': 0,
'is_pkt_sz_random': 'NO',
'min_pkt': 1472,
'max_pkt': 0,
'payload_pattern': 'INCREASING',
'use_checksum': 'NO',
'ttl': 32,
'send_bad_crc_per_million': 0,
'multi_conn': 0
}
url = "/cli-json/add_endp"
self.local_realm.json_post(url, json_data, debug_=debug_, suppress_related_commands_=suppress_related_commands)
json_data = {
'name': side_tx_name,
'ttl': 32,
'mcast_group': mcast_group,
'mcast_dest_port': mcast_dest_port,
'rcv_mcast': 'No'
}
url = "cli-json/set_mc_endp"
self.local_realm.json_post(url, json_data, debug_=debug_, suppress_related_commands_=suppress_related_commands)
self.created_mc[side_tx_name] = side_tx_name
these_endp = [side_tx_name]
self.local_realm.wait_until_endps_appear(these_endp, debug=debug_)
def create_mc_rx(self, endp_type, side_rx, mcast_group="224.9.9.9", mcast_dest_port=9999,
suppress_related_commands=None, debug_=False):
if self.debug:
debug_ = True
these_endp = []
for port_name in side_rx:
side_rx_info = self.local_realm.name_to_eid(port_name)
side_rx_shelf = side_rx_info[0]
side_rx_resource = side_rx_info[1]
side_rx_port = side_rx_info[2]
side_rx_name = "%smrx-%s-%i" % (self.name_prefix, side_rx_port, len(self.created_mc))
# add_endp mcast-rcv-sta-001 1 1 sta0002 mc_udp 9999 NO 0 0 NO 1472 0 INCREASING NO 32 0 0
json_data = {
'alias': side_rx_name,
'shelf': side_rx_shelf,
'resource': side_rx_resource,
'port': side_rx_port,
'type': endp_type,
'ip_port': 9999,
'is_rate_bursty': 'NO',
'min_rate': 0,
'max_rate': 0,
'is_pkt_sz_random': 'NO',
'min_pkt': 1472,
'max_pkt': 0,
'payload_pattern': 'INCREASING',
'use_checksum': 'NO',
'ttl': 32,
'send_bad_crc_per_million': 0,
'multi_conn': 0
}
url = "cli-json/add_endp"
self.local_realm.json_post(url, json_data, debug_=debug_,
suppress_related_commands_=suppress_related_commands)
json_data = {
'name': side_rx_name,
'ttl': 32,
'mcast_group': mcast_group,
'mcast_dest_port': mcast_dest_port,
'rcv_mcast': 'Yes'
}
url = "cli-json/set_mc_endp"
self.local_realm.json_post(url, json_data, debug_=debug_,
suppress_related_commands_=suppress_related_commands)
self.created_mc[side_rx_name] = side_rx_name
these_endp.append(side_rx_name)
self.local_realm.wait_until_endps_appear(these_endp, debug=debug_)
    def to_string(self):
        # "from pprint import pprint" shadows the module name, so call the
        # function directly rather than pprint.pprint().
        pprint(self)
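The tx and rx sides above post nearly identical `set_mc_endp` payloads; only `rcv_mcast` differs. A standalone sketch of that shared payload (`mc_endp_config` is an illustrative helper, not part of this class):

```python
def mc_endp_config(name, rx, mcast_group="224.9.9.9", mcast_dest_port=9999, ttl=32):
    # Mirror the set_mc_endp payload posted by create_mc_tx / create_mc_rx;
    # the transmit side sets rcv_mcast to "No", the receive side to "Yes".
    return {
        "name": name,
        "ttl": ttl,
        "mcast_group": mcast_group,
        "mcast_dest_port": mcast_dest_port,
        "rcv_mcast": "Yes" if rx else "No",
    }

tx = mc_endp_config("mtx-eth1-0", rx=False)
rx = mc_endp_config("mrx-sta0000-0", rx=True)
```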


@@ -1,4 +1,7 @@
#!/usr/bin/env python3
"""
This script is outdated; please see py-scripts/test_ipv4_variable_time.py
"""
import sys
import pprint
from pprint import pprint


@@ -2,7 +2,8 @@
'''
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# -
# Example of how to operate a WCT instance using cli-socket -
# Example of how to operate a WCT instance using cli-socket. -
# This script is out-dated. Please refer to py-scripts/run_cv_scenario.py -
# -
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
make sure pexpect is installed:
@@ -156,4 +157,4 @@ if __name__ == '__main__':
####
####
####
####

py-json/port_utils.py (new file)

@@ -0,0 +1,45 @@
#!/usr/bin/env python3
class PortUtils:
def __init__(self, local_realm):
self.local_realm = local_realm
def set_ftp(self, port_name="", resource=1, on=False):
if port_name != "":
data = {
"shelf": 1,
"resource": resource,
"port": port_name,
"current_flags": 0,
"interest": 0
}
if on:
data["current_flags"] = 0x400000000000
data["interest"] = 0x10000000
else:
data["interest"] = 0x10000000
self.local_realm.json_post("cli-json/set_port", data)
else:
raise ValueError("Port name required")
def set_http(self, port_name="", resource=1, on=False):
if port_name != "":
data = {
"shelf": 1,
"resource": resource,
"port": port_name,
"current_flags": 0,
"interest": 0
}
if on:
data["current_flags"] = 0x200000000000
data["interest"] = 0x8000000
else:
data["interest"] = 0x8000000
self.local_realm.json_post("cli-json/set_port", data)
else:
raise ValueError("Port name required")
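Both methods above build the same `set_port` payload and differ only in which bit they carry: FTP uses current_flags `0x400000000000` with interest `0x10000000`, HTTP uses `0x200000000000` with interest `0x8000000`. A standalone sketch of that construction (`service_port_payload` is a hypothetical helper, not part of PortUtils):

```python
def service_port_payload(port_name, flag_bit, interest_bit, on, resource=1):
    # Build a cli-json/set_port payload that toggles one service flag.
    # "interest" marks which bit set_port should act on; "current_flags"
    # carries the bit itself, and is left at 0 when turning the service off.
    if not port_name:
        raise ValueError("Port name required")
    return {
        "shelf": 1,
        "resource": resource,
        "port": port_name,
        "current_flags": flag_bit if on else 0,
        "interest": interest_bit,
    }

ftp_on = service_port_payload("eth1", 0x400000000000, 0x10000000, on=True)
ftp_off = service_port_payload("eth1", 0x400000000000, 0x10000000, on=False)
```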

py-json/qvlan_profile.py (new file)

@@ -0,0 +1,169 @@
#!/usr/bin/env python3
from LANforge.lfcli_base import LFCliBase
from LANforge import LFRequest
from LANforge import LFUtils
from LANforge import set_port
import pprint
from pprint import pprint
import time
class QVLANProfile(LFCliBase):
def __init__(self, lfclient_host, lfclient_port,
local_realm,
qvlan_parent="eth1",
num_qvlans=1,
admin_down=False,
dhcp=False,
debug_=False):
super().__init__(lfclient_host, lfclient_port, debug_)
self.local_realm = local_realm
self.num_qvlans = num_qvlans
self.qvlan_parent = qvlan_parent
self.resource = 1
self.shelf = 1
self.desired_qvlans = []
self.created_qvlans = []
self.dhcp = dhcp
self.netmask = None
self.first_ip_addr = None
self.gateway = None
self.ip_list = []
self.COMMANDS = ["set_port"]
self.desired_set_port_cmd_flags = []
self.desired_set_port_current_flags = [] # do not default down, "if_down"
self.desired_set_port_interest_flags = ["current_flags"] # do not default down, "ifdown"
self.set_port_data = {
"shelf": 1,
"resource": 1,
"port": None
}
    def add_named_flags(self, desired_list, command_ref):
        if desired_list is None:
            raise ValueError("add_named_flags wants a list of desired flag names")
        if len(desired_list) < 1:
            print("add_named_flags: empty desired list")
            return 0
        if (command_ref is None) or (len(command_ref) < 1):
            raise ValueError("add_named_flags wants a map of flag values")
result = 0
for name in desired_list:
if (name is None) or (name == ""):
continue
if name not in command_ref:
if self.debug:
pprint(command_ref)
raise ValueError("flag %s not in map" % name)
result += command_ref[name]
return result
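The helper above folds a list of flag names into one integer by summing each name's bit value. A minimal standalone sketch of the same folding, with an illustrative flag map (the real tables live in LANforge/set_port.py):

```python
def fold_named_flags(desired_list, command_ref):
    # Sum the bit value of every requested flag name; unknown names are errors.
    result = 0
    for name in desired_list:
        if not name:
            continue
        if name not in command_ref:
            raise ValueError("flag %s not in map" % name)
        result += command_ref[name]
    return result

# Illustrative flag values only, not LANforge's actual bit assignments.
example_flags = {"if_down": 0x1, "use_dhcp": 0x80000000}
print(hex(fold_named_flags(["if_down", "use_dhcp"], example_flags)))  # 0x80000001
```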
def set_command_param(self, command_name, param_name, param_value):
# we have to check what the param name is
if (command_name is None) or (command_name == ""):
return
if (param_name is None) or (param_name == ""):
return
if command_name not in self.COMMANDS:
            raise ValueError("Command name [%s] not defined in %s" % (command_name, self.COMMANDS))
# return
if command_name == "set_port":
self.set_port_data[param_name] = param_value
def set_command_flag(self, command_name, param_name, value):
# we have to check what the param name is
if (command_name is None) or (command_name == ""):
return
if (param_name is None) or (param_name == ""):
return
if command_name not in self.COMMANDS:
            print("Command name [%s] not defined in %s" % (command_name, self.COMMANDS))
return
elif command_name == "set_port":
if (param_name not in set_port.set_port_current_flags) and (
param_name not in set_port.set_port_cmd_flags) and (
param_name not in set_port.set_port_interest_flags):
print("Parameter name [%s] not defined in set_port.py" % param_name)
if self.debug:
pprint(set_port.set_port_cmd_flags)
pprint(set_port.set_port_current_flags)
pprint(set_port.set_port_interest_flags)
return
if (param_name in set_port.set_port_cmd_flags):
if (value == 1) and (param_name not in self.desired_set_port_cmd_flags):
self.desired_set_port_cmd_flags.append(param_name)
elif value == 0:
self.desired_set_port_cmd_flags.remove(param_name)
elif (param_name in set_port.set_port_current_flags):
if (value == 1) and (param_name not in self.desired_set_port_current_flags):
self.desired_set_port_current_flags.append(param_name)
elif value == 0:
self.desired_set_port_current_flags.remove(param_name)
elif (param_name in set_port.set_port_interest_flags):
if (value == 1) and (param_name not in self.desired_set_port_interest_flags):
self.desired_set_port_interest_flags.append(param_name)
elif value == 0:
self.desired_set_port_interest_flags.remove(param_name)
else:
raise ValueError("Unknown param name: " + param_name)
def create(self, admin_down=False, debug=False, sleep_time=1):
print("Creating qvlans...")
req_url = "/cli-json/add_vlan"
if not self.dhcp and self.first_ip_addr is not None and self.netmask is not None and self.gateway is not None:
self.desired_set_port_interest_flags.append("ip_address")
self.desired_set_port_interest_flags.append("ip_Mask")
self.desired_set_port_interest_flags.append("ip_gateway")
self.ip_list = LFUtils.gen_ip_series(ip_addr=self.first_ip_addr, netmask=self.netmask,
num_ips=self.num_qvlans)
if self.dhcp:
print("Using DHCP")
self.desired_set_port_current_flags.append("use_dhcp")
self.desired_set_port_interest_flags.append("dhcp")
self.set_port_data["current_flags"] = self.add_named_flags(self.desired_set_port_current_flags,
set_port.set_port_current_flags)
self.set_port_data["interest"] = self.add_named_flags(self.desired_set_port_interest_flags,
set_port.set_port_interest_flags)
set_port_r = LFRequest.LFRequest(self.lfclient_url + "/cli-json/set_port")
for i in range(len(self.desired_qvlans)):
data = {
"shelf": self.shelf,
"resource": self.resource,
"port": self.local_realm.name_to_eid(self.qvlan_parent)[2],
"vid": i+1
}
self.created_qvlans.append("%s.%s.%s#%d" % (self.shelf, self.resource,
self.qvlan_parent, int(
self.desired_qvlans[i][self.desired_qvlans[i].index('#') + 1:])))
self.local_realm.json_post(req_url, data)
time.sleep(sleep_time)
print(self.created_qvlans)
def cleanup(self):
print("Cleaning up qvlans...")
print(self.created_qvlans)
for port_eid in self.created_qvlans:
self.local_realm.rm_port(port_eid, check_exists=True)
time.sleep(.02)
# And now see if they are gone
LFUtils.wait_until_ports_disappear(base_url=self.lfclient_url, port_list=self.created_qvlans)
def admin_up(self):
for qvlan in self.created_qvlans:
self.local_realm.admin_up(qvlan)
def admin_down(self):
for qvlan in self.created_qvlans:
self.local_realm.admin_down(qvlan)

(File diff suppressed because it is too large)

py-json/station_profile.py (new file)

@@ -0,0 +1,480 @@
#!/usr/bin/env python3
from LANforge.lfcli_base import LFCliBase
from LANforge import LFRequest
from LANforge import LFUtils
from LANforge import set_port
from LANforge import add_sta
import pprint
from pprint import pprint
import time
# use the station profile to set the combination of features you want on your stations
# once this combination is configured, build the stations with the build(resource, radio, number) call
# build() calls will fail if the station already exists. Please survey and clean your resource
# before calling build()
# survey = Realm.findStations(resource=1)
# Realm.removeStations(survey)
# profile = Realm.newStationProfile()
# profile.set...
# profile.build(resource, radio, 64)
#
class StationProfile:
def __init__(self, lfclient_url, local_realm,
ssid="NA",
ssid_pass="NA",
security="open",
number_template_="00000",
mode=0,
up=True,
resource=1,
shelf=1,
dhcp=True,
debug_=False,
use_ht160=False):
self.debug = debug_
self.lfclient_url = lfclient_url
self.ssid = ssid
self.ssid_pass = ssid_pass
self.mode = mode
self.up = up
self.resource = resource
self.shelf = shelf
self.dhcp = dhcp
self.security = security
self.local_realm = local_realm
self.use_ht160 = use_ht160
self.COMMANDS = ["add_sta", "set_port"]
self.desired_add_sta_flags = ["wpa2_enable", "80211u_enable", "create_admin_down"]
self.desired_add_sta_flags_mask = ["wpa2_enable", "80211u_enable", "create_admin_down"]
self.number_template = number_template_
self.station_names = [] # eids, these are created station names
self.add_sta_data = {
"shelf": 1,
"resource": 1,
"radio": None,
"sta_name": None,
"ssid": None,
"key": None,
"mode": 0,
"mac": "xx:xx:xx:xx:*:xx",
"flags": 0, # (0x400 + 0x20000 + 0x1000000000) # create admin down
}
self.desired_set_port_cmd_flags = []
self.desired_set_port_current_flags = ["if_down"]
self.desired_set_port_interest_flags = ["current_flags", "ifdown"]
if self.dhcp:
self.desired_set_port_current_flags.append("use_dhcp")
self.desired_set_port_interest_flags.append("dhcp")
self.set_port_data = {
"shelf": 1,
"resource": 1,
"port": None,
"current_flags": 0,
"interest": 0, # (0x2 + 0x4000 + 0x800000) # current, dhcp, down,
}
self.wifi_extra_data_modified = False
self.wifi_extra_data = {
"shelf": 1,
"resource": 1,
"port": None,
"key_mgmt": None,
"eap": None,
"hessid": None,
"identity": None,
"password": None,
"realm": None,
"domain": None
}
self.reset_port_extra_data = {
"shelf": 1,
"resource": 1,
"port": None,
"test_duration": 0,
"reset_port_enable": False,
"reset_port_time_min": 0,
"reset_port_time_max": 0,
"reset_port_timer_started": False,
"port_to_reset": 0,
"seconds_till_reset": 0
}
def set_wifi_extra(self, key_mgmt="WPA-EAP",
pairwise="CCMP TKIP",
group="CCMP TKIP",
psk="[BLANK]",
wep_key="[BLANK]", # wep key
ca_cert="[BLANK]",
eap="TTLS",
identity="testuser",
anonymous_identity="[BLANK]",
                             phase1="NA",  # outer auth
phase2="NA", # inner auth
passwd="testpasswd", # eap passphrase
pin="NA",
pac_file="NA",
private_key="NA",
pk_password="NA", # priv key password
hessid="00:00:00:00:00:01",
realm="localhost.localdomain",
client_cert="NA",
imsi="NA",
milenage="NA",
domain="localhost.localdomain",
roaming_consortium="NA",
venue_group="NA",
network_type="NA",
ipaddr_type_avail="NA",
network_auth_type="NA",
anqp_3gpp_cell_net="NA"
):
self.wifi_extra_data_modified = True
self.wifi_extra_data["key_mgmt"] = key_mgmt
self.wifi_extra_data["pairwise"] = pairwise
self.wifi_extra_data["group"] = group
self.wifi_extra_data["psk"] = psk
self.wifi_extra_data["key"] = wep_key
self.wifi_extra_data["ca_cert"] = ca_cert
self.wifi_extra_data["eap"] = eap
self.wifi_extra_data["identity"] = identity
self.wifi_extra_data["anonymous_identity"] = anonymous_identity
self.wifi_extra_data["phase1"] = phase1
self.wifi_extra_data["phase2"] = phase2
self.wifi_extra_data["password"] = passwd
self.wifi_extra_data["pin"] = pin
self.wifi_extra_data["pac_file"] = pac_file
self.wifi_extra_data["private_key"] = private_key
self.wifi_extra_data["pk_passwd"] = pk_password
self.wifi_extra_data["hessid"] = hessid
self.wifi_extra_data["realm"] = realm
self.wifi_extra_data["client_cert"] = client_cert
self.wifi_extra_data["imsi"] = imsi
self.wifi_extra_data["milenage"] = milenage
self.wifi_extra_data["domain"] = domain
self.wifi_extra_data["roaming_consortium"] = roaming_consortium
self.wifi_extra_data["venue_group"] = venue_group
self.wifi_extra_data["network_type"] = network_type
self.wifi_extra_data["ipaddr_type_avail"] = ipaddr_type_avail
self.wifi_extra_data["network_auth_type"] = network_auth_type
self.wifi_extra_data["anqp_3gpp_cell_net"] = anqp_3gpp_cell_net
def set_reset_extra(self, reset_port_enable=False, test_duration=0, reset_port_min_time=0, reset_port_max_time=0,
reset_port_timer_start=False, port_to_reset=0, time_till_reset=0):
self.reset_port_extra_data["reset_port_enable"] = reset_port_enable
self.reset_port_extra_data["test_duration"] = test_duration
self.reset_port_extra_data["reset_port_time_min"] = reset_port_min_time
self.reset_port_extra_data["reset_port_time_max"] = reset_port_max_time
def use_security(self, security_type, ssid=None, passwd=None):
types = {"wep": "wep_enable", "wpa": "wpa_enable", "wpa2": "wpa2_enable", "wpa3": "use-wpa3", "open": "[BLANK]"}
self.add_sta_data["ssid"] = ssid
if security_type in types.keys():
if (ssid is None) or (ssid == ""):
raise ValueError("use_security: %s requires ssid" % security_type)
if (passwd is None) or (passwd == ""):
raise ValueError("use_security: %s requires passphrase or [BLANK]" % security_type)
for name in types.values():
if name in self.desired_add_sta_flags and name in self.desired_add_sta_flags_mask:
self.desired_add_sta_flags.remove(name)
self.desired_add_sta_flags_mask.remove(name)
if security_type != "open":
self.desired_add_sta_flags.append(types[security_type])
# self.set_command_flag("add_sta", types[security_type], 1)
self.desired_add_sta_flags_mask.append(types[security_type])
else:
passwd = "[BLANK]"
self.set_command_param("add_sta", "ssid", ssid)
self.set_command_param("add_sta", "key", passwd)
# unset any other security flag before setting our present flags
if security_type == "wpa3":
self.set_command_param("add_sta", "ieee80211w", 2)
# self.add_sta_data["key"] = passwd
def set_command_param(self, command_name, param_name, param_value):
# we have to check what the param name is
if (command_name is None) or (command_name == ""):
return
if (param_name is None) or (param_name == ""):
return
if command_name not in self.COMMANDS:
            raise ValueError("Command name [%s] not defined in %s" % (command_name, self.COMMANDS))
# return
if command_name == "add_sta":
self.add_sta_data[param_name] = param_value
elif command_name == "set_port":
self.set_port_data[param_name] = param_value
def set_command_flag(self, command_name, param_name, value):
# we have to check what the param name is
if (command_name is None) or (command_name == ""):
return
if (param_name is None) or (param_name == ""):
return
if command_name not in self.COMMANDS:
            print("Command name [%s] not defined in %s" % (command_name, self.COMMANDS))
return
if command_name == "add_sta":
if (param_name not in add_sta.add_sta_flags) and (param_name not in add_sta.add_sta_modes):
print("Parameter name [%s] not defined in add_sta.py" % param_name)
if self.debug:
pprint(add_sta.add_sta_flags)
return
if (value == 1) and (param_name not in self.desired_add_sta_flags):
self.desired_add_sta_flags.append(param_name)
self.desired_add_sta_flags_mask.append(param_name)
elif value == 0:
self.desired_add_sta_flags.remove(param_name)
self.desired_add_sta_flags_mask.append(param_name)
elif command_name == "set_port":
if (param_name not in set_port.set_port_current_flags) and (
param_name not in set_port.set_port_cmd_flags) and (
param_name not in set_port.set_port_interest_flags):
print("Parameter name [%s] not defined in set_port.py" % param_name)
if self.debug:
pprint(set_port.set_port_cmd_flags)
pprint(set_port.set_port_current_flags)
pprint(set_port.set_port_interest_flags)
return
if (param_name in set_port.set_port_cmd_flags):
if (value == 1) and (param_name not in self.desired_set_port_cmd_flags):
self.desired_set_port_cmd_flags.append(param_name)
elif value == 0:
self.desired_set_port_cmd_flags.remove(param_name)
elif (param_name in set_port.set_port_current_flags):
if (value == 1) and (param_name not in self.desired_set_port_current_flags):
self.desired_set_port_current_flags.append(param_name)
elif value == 0:
self.desired_set_port_current_flags.remove(param_name)
elif (param_name in set_port.set_port_interest_flags):
if (value == 1) and (param_name not in self.desired_set_port_interest_flags):
self.desired_set_port_interest_flags.append(param_name)
elif value == 0:
self.desired_set_port_interest_flags.remove(param_name)
else:
raise ValueError("Unknown param name: " + param_name)
    # Use this to hint station names. Stations begin with 'sta'; a number
    # template of '0100' maps station n to 10100 + n with the leading digit
    # dropped, so station 900 becomes 'sta1000'.
def set_number_template(self, pref):
self.number_template = pref
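That numbering scheme can be sketched standalone: prefix the template with '1', add the station index, then drop the leading digit so the left zero-padding survives (`padded_station_name` is an illustrative helper, not the LFUtils implementation):

```python
def padded_station_name(prefix, index, number_template="00000"):
    # Template "0100" -> base 10100; 10100 + 900 == 11000, and
    # str(11000)[1:] == "1000", so station 900 becomes "sta1000",
    # while small indices keep their zero padding.
    base = int("1" + number_template)
    return "%s%s" % (prefix, str(base + index)[1:])

print(padded_station_name("sta", 900, "0100"))  # sta1000
print(padded_station_name("sta", 3))            # sta00003
```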
    def add_named_flags(self, desired_list, command_ref):
        if desired_list is None:
            raise ValueError("add_named_flags wants a list of desired flag names")
        if len(desired_list) < 1:
            print("add_named_flags: empty desired list")
            return 0
        if (command_ref is None) or (len(command_ref) < 1):
            raise ValueError("add_named_flags wants a map of flag values")
result = 0
for name in desired_list:
if (name is None) or (name == ""):
continue
if name not in command_ref:
if self.debug:
pprint(command_ref)
raise ValueError("flag %s not in map" % name)
result += command_ref[name]
return result
def admin_up(self):
for eid in self.station_names:
# print("3139: admin_up sta "+eid)
# time.sleep(2)
self.local_realm.admin_up(eid)
time.sleep(0.005)
def admin_down(self):
for sta_name in self.station_names:
self.local_realm.admin_down(sta_name)
def cleanup(self, desired_stations=None, delay=0.03, debug_=False):
print("Cleaning up stations")
if (desired_stations is None):
desired_stations = self.station_names
if len(desired_stations) < 1:
print("ERROR: StationProfile cleanup, list is empty")
return
# First, request remove on the list.
for port_eid in desired_stations:
self.local_realm.rm_port(port_eid, check_exists=True, debug_=debug_)
time.sleep(delay)
# And now see if they are gone
LFUtils.wait_until_ports_disappear(base_url=self.lfclient_url, port_list=desired_stations)
# Checks for errors in initialization values and creates specified number of stations using init parameters
def create(self, radio,
num_stations=0,
sta_names_=None,
dry_run=False,
up_=None,
debug=False,
suppress_related_commands_=True,
use_radius=False,
hs20_enable=False,
sleep_time=0.02):
if (radio is None) or (radio == ""):
raise ValueError("station_profile.create: will not create stations without radio")
radio_eid = self.local_realm.name_to_eid(radio)
radio_shelf = radio_eid[0]
radio_resource = radio_eid[1]
radio_port = radio_eid[2]
if self.use_ht160:
self.desired_add_sta_flags.append("ht160_enable")
self.desired_add_sta_flags_mask.append("ht160_enable")
if self.mode is not None:
self.add_sta_data["mode"] = self.mode
if use_radius:
self.desired_add_sta_flags.append("8021x_radius")
self.desired_add_sta_flags_mask.append("8021x_radius")
if hs20_enable:
self.desired_add_sta_flags.append("hs20_enable")
self.desired_add_sta_flags_mask.append("hs20_enable")
if up_ is not None:
self.up = up_
if (sta_names_ is None) and (num_stations == 0):
raise ValueError("StationProfile.create needs either num_stations= or sta_names_= specified")
if self.up:
if "create_admin_down" in self.desired_add_sta_flags:
del self.desired_add_sta_flags[self.desired_add_sta_flags.index("create_admin_down")]
elif "create_admin_down" not in self.desired_add_sta_flags:
self.desired_add_sta_flags.append("create_admin_down")
# create stations down, do set_port on them, then set stations up
self.add_sta_data["flags"] = self.add_named_flags(self.desired_add_sta_flags, add_sta.add_sta_flags)
self.add_sta_data["flags_mask"] = self.add_named_flags(self.desired_add_sta_flags_mask, add_sta.add_sta_flags)
self.add_sta_data["radio"] = radio_port
self.add_sta_data["resource"] = radio_resource
self.add_sta_data["shelf"] = radio_shelf
self.set_port_data["resource"] = radio_resource
self.set_port_data["shelf"] = radio_shelf
self.set_port_data["current_flags"] = self.add_named_flags(self.desired_set_port_current_flags,
set_port.set_port_current_flags)
self.set_port_data["interest"] = self.add_named_flags(self.desired_set_port_interest_flags,
set_port.set_port_interest_flags)
self.wifi_extra_data["resource"] = radio_resource
self.wifi_extra_data["shelf"] = radio_shelf
self.reset_port_extra_data["resource"] = radio_resource
self.reset_port_extra_data["shelf"] = radio_shelf
# these are unactivated LFRequest objects that we can modify and
# re-use inside a loop, reducing the number of object creations
add_sta_r = LFRequest.LFRequest(self.lfclient_url + "/cli-json/add_sta", debug_=debug)
set_port_r = LFRequest.LFRequest(self.lfclient_url + "/cli-json/set_port", debug_=debug)
wifi_extra_r = LFRequest.LFRequest(self.lfclient_url + "/cli-json/set_wifi_extra", debug_=debug)
my_sta_names = []
# add radio here
if (num_stations > 0) and (len(sta_names_) < 1):
# print("CREATING MORE STA NAMES == == == == == == == == == == == == == == == == == == == == == == == ==")
my_sta_names = LFUtils.portNameSeries("sta", 0, num_stations - 1, int("1" + self.number_template))
# print("CREATING MORE STA NAMES == == == == == == == == == == == == == == == == == == == == == == == ==")
else:
my_sta_names = sta_names_
if (len(my_sta_names) >= 15) or (suppress_related_commands_ == True):
self.add_sta_data["suppress_preexec_cli"] = "yes"
self.add_sta_data["suppress_preexec_method"] = 1
self.set_port_data["suppress_preexec_cli"] = "yes"
self.set_port_data["suppress_preexec_method"] = 1
num = 0
if debug:
print("== == Created STA names == == == == == == == == == == == == == == == == == == == == == == == ==")
pprint(self.station_names)
print("== == vs Pending STA names == ==")
pprint(my_sta_names)
print("== == == == == == == == == == == == == == == == == == == == == == == == == ==")
# track the names of stations in case we have stations added multiple times
finished_sta = []
for eidn in my_sta_names:
if eidn in self.station_names:
print("Station %s already created, skipping." % eidn)
continue
# print (" EIDN "+eidn);
if eidn in finished_sta:
# pprint(my_sta_names)
# raise ValueError("************ duplicate ****************** "+eidn)
if self.debug:
print("Station %s already created" % eidn)
continue
eid = self.local_realm.name_to_eid(eidn)
name = eid[2]
num += 1
self.add_sta_data["shelf"] = radio_shelf
self.add_sta_data["resource"] = radio_resource
self.add_sta_data["radio"] = radio_port
self.add_sta_data["sta_name"] = name # for create station calls
self.set_port_data["port"] = name # for set_port calls.
self.set_port_data["shelf"] = radio_shelf
self.set_port_data["resource"] = radio_resource
add_sta_r.addPostData(self.add_sta_data)
if debug:
print("- 3254 - %s- - - - - - - - - - - - - - - - - - " % eidn)
pprint(add_sta_r.requested_url)
pprint(add_sta_r.proxies)
pprint(self.add_sta_data)
print(self.set_port_data)
print("- ~3254 - - - - - - - - - - - - - - - - - - - ")
if dry_run:
print("dry run: not creating " + eidn)
continue
# print("- 3264 - ## %s ## add_sta_r.jsonPost - - - - - - - - - - - - - - - - - - "%eidn)
json_response = add_sta_r.jsonPost(debug=self.debug)
finished_sta.append(eidn)
# print("- ~3264 - %s - add_sta_r.jsonPost - - - - - - - - - - - - - - - - - - "%eidn)
time.sleep(0.01)
set_port_r.addPostData(self.set_port_data)
# print("- 3270 -- %s -- set_port_r.jsonPost - - - - - - - - - - - - - - - - - - "%eidn)
json_response = set_port_r.jsonPost(debug)
# print("- ~3270 - %s - set_port_r.jsonPost - - - - - - - - - - - - - - - - - - "%eidn)
time.sleep(0.01)
self.wifi_extra_data["resource"] = radio_resource
self.wifi_extra_data["port"] = name
if self.wifi_extra_data_modified:
wifi_extra_r.addPostData(self.wifi_extra_data)
json_response = wifi_extra_r.jsonPost(debug)
# append created stations to self.station_names
self.station_names.append("%s.%s.%s" % (radio_shelf, radio_resource, name))
time.sleep(sleep_time)
# print("- ~3287 - waitUntilPortsAppear - - - - - - - - - - - - - - - - - - "%eidn)
LFUtils.wait_until_ports_appear(self.lfclient_url, my_sta_names)
# and set ports up
if dry_run:
return
if (self.up):
self.admin_up()
# for sta_name in self.station_names:
# req = LFUtils.portUpRequest(resource, sta_name, debug_on=False)
# set_port_r.addPostData(req)
# json_response = set_port_r.jsonPost(debug)
# time.sleep(0.03)
if self.debug:
print("created %s stations" % num)
#

py-json/test_base.py (new file)

@@ -0,0 +1,70 @@
#!/usr/bin/env python3
from lfdata import LFDataCollection
#import lfreporting
class TestBase:
def __init__(self):
self.profiles = list()
def pre_clean_up(self):
if self.profiles:
for profile in self.profiles:
profile.precleanup()
def clean_up(self):
if self.profiles:
for profile in self.profiles:
profile.cleanup()
def start(self):
if self.profiles:
for profile in self.profiles:
profile.start()
def stop(self):
if self.profiles:
for profile in self.profiles:
profile.stop()
def build(self):
# - create station profile
# - create 2 criteria [ex: not down, continually_receiving] object (for ex)
# - station_profile.add_criteria([not_down, continually_receiving, etc_3])
# design - inversion of control
if self.profiles:
for profile in self.profiles:
profile.build()
def passes(self):
if self.profiles:
for profile in self.profiles:
profile.check_passes()
    def run_duration(self, monitor_enabled=False):
#here check if monitor is enabled or not, then run loop accordingly
self.check_for_halt()
if self.profiles:
if monitor_enabled:
for profile in self.profiles:
profile.monitor_record() #check for halt in monitor record?
for profile in self.profiles:
profile.grade()
if self.exit_on_fail:
if self.fails():
self.exit_fail()
self.check_for_quit()
    def report(self, enabled=False):
#here check if monitor is enabled or not, then run loop accordingly with lfreporting
pass
def begin(self):
self.pre_clean_up()
self.build()
self.start()
self.run_duration()
self.stop()
self.report()
self.clean_up()
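The `begin()` sequence above (pre_clean_up, build, start, run_duration, stop, report, clean_up) simply fans each step out to every registered profile. A minimal sketch of that dispatch pattern, using a hypothetical `DemoProfile` stand-in (not a LANforge class) to trace call order:

```python
# Sketch of the TestBase fan-out pattern: each lifecycle method forwards
# to every registered profile. DemoProfile only records the call order.
class DemoProfile:
    def __init__(self, log):
        self.log = log
    def precleanup(self): self.log.append("precleanup")
    def build(self): self.log.append("build")
    def start(self): self.log.append("start")
    def stop(self): self.log.append("stop")
    def cleanup(self): self.log.append("cleanup")

class MiniTestBase:
    def __init__(self, profiles):
        self.profiles = profiles
    def begin(self):
        # same ordering as TestBase.begin(), minus monitoring/reporting
        for p in self.profiles: p.precleanup()
        for p in self.profiles: p.build()
        for p in self.profiles: p.start()
        for p in self.profiles: p.stop()
        for p in self.profiles: p.cleanup()

log = []
MiniTestBase([DemoProfile(log)]).begin()
print(log)
```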


@@ -0,0 +1,82 @@
#!/usr/bin/env python3
from LANforge.lfcli_base import LFCliBase
import pprint
from pprint import pprint
import time
class TestGroupProfile(LFCliBase):
def __init__(self, lfclient_host, lfclient_port, local_realm, test_group_name=None, debug_=False):
super().__init__(lfclient_host, lfclient_port, debug_)
self.local_realm = local_realm
self.group_name = test_group_name
self.cx_list = []
def start_group(self):
if self.group_name is not None:
self.local_realm.json_post("/cli-json/start_group", {"name": self.group_name})
else:
raise ValueError("test_group name must be set.")
def quiesce_group(self):
if self.group_name is not None:
self.local_realm.json_post("/cli-json/quiesce_group", {"name": self.group_name})
else:
raise ValueError("test_group name must be set.")
def stop_group(self):
if self.group_name is not None:
self.local_realm.json_post("/cli-json/stop_group", {"name": self.group_name})
else:
raise ValueError("test_group name must be set.")
def create_group(self):
if self.group_name is not None:
self.local_realm.json_post("/cli-json/add_group", {"name": self.group_name})
else:
raise ValueError("test_group name must be set.")
def rm_group(self):
if self.group_name is not None:
self.local_realm.json_post("/cli-json/rm_group", {"name": self.group_name})
else:
raise ValueError("test_group name must be set.")
def add_cx(self, cx_name):
self.local_realm.json_post("/cli-json/add_tgcx", {"tgname": self.group_name, "cxname": cx_name})
def rm_cx(self, cx_name):
self.local_realm.json_post("/cli-json/rm_tgcx", {"tgname": self.group_name, "cxname": cx_name})
def check_group_exists(self):
test_groups = self.local_realm.json_get("/testgroups/all")
if test_groups is not None and "groups" in test_groups:
test_groups = test_groups["groups"]
for group in test_groups:
for k, v in group.items():
if v['name'] == self.group_name:
return True
else:
return False
def list_groups(self):
test_groups = self.local_realm.json_get("/testgroups/all")
tg_list = []
if test_groups is not None:
test_groups = test_groups["groups"]
for group in test_groups:
for k, v in group.items():
tg_list.append(v['name'])
return tg_list
def list_cxs(self):
test_groups = self.local_realm.json_get("/testgroups/all")
if test_groups is not None:
test_groups = test_groups["groups"]
for group in test_groups:
for k, v in group.items():
if v['name'] == self.group_name:
return v['cross connects']
else:
return []
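Each `TestGroupProfile` method is a thin wrapper that posts one CLI-JSON command (`add_group`, `add_tgcx`, `start_group`, ...) through the realm. A sketch of that flow against a fake realm that records posts instead of contacting a LANforge GUI; `FakeRealm` and `MiniGroup` are illustration-only stand-ins:

```python
# Records json_post calls rather than issuing HTTP requests.
class FakeRealm:
    def __init__(self):
        self.posts = []
    def json_post(self, uri, data):
        self.posts.append((uri, data))

class MiniGroup:
    # trimmed copy of the create/add_cx/start flow, for illustration only
    def __init__(self, realm, name):
        self.realm = realm
        self.name = name
    def create(self):
        self.realm.json_post("/cli-json/add_group", {"name": self.name})
    def add_cx(self, cx):
        self.realm.json_post("/cli-json/add_tgcx", {"tgname": self.name, "cxname": cx})
    def start(self):
        self.realm.json_post("/cli-json/start_group", {"name": self.name})

realm = FakeRealm()
g = MiniGroup(realm, "tg1")
g.create(); g.add_cx("cx-a"); g.start()
print([u for u, _ in realm.posts])
```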


@@ -9,6 +9,9 @@ Date :
"""
import sys
from pprint import pprint
from uuid import uuid1
if 'py-json' not in sys.path:
sys.path.append('../py-json')
from LANforge import LFUtils
@@ -282,21 +285,49 @@ class RuntimeUpdates():
f.close()
if __name__ == "__main__":
thread1 = ClientVisualization(lfclient_host="192.168.200.15", thread_id=1)
thread1.start()
for i in range(0, 100):
time.sleep(1)
#print(thread1.client_data)
thread1.stop()
# obj = RuntimeUpdates("1", {"test_status": 1, "data": "None"})
# for i in range(0, 10):
# time.sleep(3)
# print(i)
# obj.send_update({"test_status": i, "data": "None"})
# thread1 = ClientVisualization(lfclient_host="192.168.200.15", thread_id=1)
# thread1.start()
# for i in range(30):
# print(thread1.client_data)
# thread1.stop()
class StatusSession(LFCliBase):
def __init__(self, lfclient_host="localhost", lfclient_port=8080,
_deep_clean=False,
session_id="0",
_debug_on=False,
_exit_on_error=False,
_exit_on_fail=False):
super().__init__(lfclient_host, lfclient_port, _debug=_debug_on, _halt_on_error=_exit_on_error, _exit_on_fail=_exit_on_fail)
self.deep_clean = _deep_clean
self.session_id = session_id
self.json_put("/status-msg/" + self.session_id, {})
def update(self, key, message):
"""
Method to add new Message into a session
"""
self.json_post("/status-msg/" + self.session_id, {
"key": key,
"content-type": "text/plain",
"message": message
})
def read(self):
"""
Method to read all the messages for a particular session
"""
keys = []
for i in self.json_get("/status-msg/"+self.session_id)['messages']:
keys.append(i['key'])
json_uri = "/status-msg/"+self.session_id + "/"
for i in keys:
json_uri = json_uri + i + ","
return self.json_get(json_uri)['messages']
if __name__ == "__main__":
obj = StatusSession(lfclient_host="localhost", lfclient_port=8090, session_id="01_18_21_20_04_20")
print(obj.read())
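`read()` first lists the keys in a session, then fetches all messages in a single GET whose URI joins the keys with commas. A standalone sketch of that URI construction (the session id and keys here are made up; the original loop leaves a trailing comma, which `",".join` avoids):

```python
# Build the multi-key /status-msg/ read URI the way StatusSession.read() does.
session_id = "01_18_21_20_04_20"          # hypothetical session id
keys = ["setup", "run", "teardown"]       # hypothetical message keys
json_uri = "/status-msg/" + session_id + "/" + ",".join(keys)
print(json_uri)
```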

363
py-json/vap_profile.py Normal file

@@ -0,0 +1,363 @@
#!/usr/bin/env python3
from LANforge.lfcli_base import LFCliBase
from LANforge import LFRequest
from LANforge import add_vap
from LANforge import set_port
from LANforge import LFUtils
import pprint
from pprint import pprint
import time
class VAPProfile(LFCliBase):
def __init__(self, lfclient_host, lfclient_port, local_realm,
vap_name="",
ssid="NA",
ssid_pass="NA",
mode=0,
debug_=False):
super().__init__(_lfjson_host=lfclient_host, _lfjson_port=lfclient_port, _debug=debug_)
self.debug = debug_
# self.lfclient_url = lfclient_url # done in super()
self.ssid = ssid
self.ssid_pass = ssid_pass
self.mode = mode
self.local_realm = local_realm
self.vap_name = vap_name
self.COMMANDS = ["add_vap", "set_port"]
self.desired_add_vap_flags = ["wpa2_enable", "80211u_enable", "create_admin_down"]
self.desired_add_vap_flags_mask = ["wpa2_enable", "80211u_enable", "create_admin_down"]
self.add_vap_data = {
"shelf": 1,
"resource": 1,
"radio": None,
"ap_name": None,
"flags": 0,
"flags_mask": 0,
"mode": 0,
"ssid": None,
"key": None,
"mac": "xx:xx:xx:xx:*:xx"
}
self.desired_set_port_cmd_flags = []
self.desired_set_port_current_flags = ["if_down"]
self.desired_set_port_interest_flags = ["current_flags", "ifdown"]
self.set_port_data = {
"shelf": 1,
"resource": 1,
"port": None,
"current_flags": 0,
"interest": 0, # (0x2 + 0x4000 + 0x800000) # current, dhcp, down
}
self.wifi_extra_data_modified = False
self.wifi_extra_data = {
"shelf": 1,
"resource": 1,
"port": None,
"key_mgmt": None,
"eap": None,
"hessid": None,
"identity": None,
"password": None,
"realm": None,
"domain": None
}
def set_wifi_extra(self,
key_mgmt="WPA-EAP",
pairwise="DEFAULT",
group="DEFAULT",
psk="[BLANK]",
eap="TTLS",
identity="testuser",
passwd="testpasswd",
realm="localhost.localdomain",
domain="localhost.localdomain",
hessid="00:00:00:00:00:01"):
self.wifi_extra_data_modified = True
self.wifi_extra_data["key_mgmt"] = key_mgmt
self.wifi_extra_data["eap"] = eap
self.wifi_extra_data["identity"] = identity
self.wifi_extra_data["password"] = passwd
self.wifi_extra_data["realm"] = realm
self.wifi_extra_data["domain"] = domain
self.wifi_extra_data["hessid"] = hessid
def admin_up(self, resource):
set_port_r = LFRequest.LFRequest(self.lfclient_url, "/cli-json/set_port", debug_=self.debug)
req_json = LFUtils.portUpRequest(resource, None, debug_on=self.debug)
req_json["port"] = self.vap_name
set_port_r.addPostData(req_json)
json_response = set_port_r.jsonPost(self.debug)
time.sleep(0.03)
def admin_down(self, resource):
set_port_r = LFRequest.LFRequest(self.lfclient_url, "/cli-json/set_port", debug_=self.debug)
req_json = LFUtils.port_down_request(resource, None, debug_on=self.debug)
req_json["port"] = self.vap_name
set_port_r.addPostData(req_json)
json_response = set_port_r.jsonPost(self.debug)
time.sleep(0.03)
def use_security(self, security_type, ssid=None, passwd=None):
types = {"wep": "wep_enable", "wpa": "wpa_enable", "wpa2": "wpa2_enable", "wpa3": "use-wpa3", "open": "[BLANK]"}
self.add_vap_data["ssid"] = ssid
if security_type in types.keys():
if (ssid is None) or (ssid == ""):
raise ValueError("use_security: %s requires ssid" % security_type)
if (passwd is None) or (passwd == ""):
raise ValueError("use_security: %s requires passphrase or [BLANK]" % security_type)
for name in types.values():
if name in self.desired_add_vap_flags and name in self.desired_add_vap_flags_mask:
self.desired_add_vap_flags.remove(name)
self.desired_add_vap_flags_mask.remove(name)
if security_type != "open":
self.desired_add_vap_flags.append(types[security_type])
self.desired_add_vap_flags_mask.append(types[security_type])
else:
passwd = "[BLANK]"
self.set_command_param("add_vap", "ssid", ssid)
self.set_command_param("add_vap", "key", passwd)
# unset any other security flag before setting our present flags
if security_type == "wpa3":
self.set_command_param("add_vap", "ieee80211w", 2)
def set_command_flag(self, command_name, param_name, value):
# we have to check what the param name is
if (command_name is None) or (command_name == ""):
return
if (param_name is None) or (param_name == ""):
return
if command_name not in self.COMMANDS:
print("Command name [%s] not defined in %s" % (command_name, self.COMMANDS))
return
if command_name == "add_vap":
if (param_name not in add_vap.add_vap_flags):
print("Parameter name [%s] not defined in add_vap.py" % param_name)
if self.debug:
pprint(add_vap.add_vap_flags)
return
if (value == 1) and (param_name not in self.desired_add_vap_flags):
self.desired_add_vap_flags.append(param_name)
self.desired_add_vap_flags_mask.append(param_name)
elif value == 0:
self.desired_add_vap_flags.remove(param_name)
self.desired_add_vap_flags_mask.append(param_name)
elif command_name == "set_port":
if (param_name not in set_port.set_port_current_flags) and (
param_name not in set_port.set_port_cmd_flags) and (
param_name not in set_port.set_port_interest_flags):
print("Parameter name [%s] not defined in set_port.py" % param_name)
if self.debug:
pprint(set_port.set_port_cmd_flags)
pprint(set_port.set_port_current_flags)
pprint(set_port.set_port_interest_flags)
return
if param_name in set_port.set_port_cmd_flags:
if (value == 1) and (param_name not in self.desired_set_port_cmd_flags):
self.desired_set_port_cmd_flags.append(param_name)
elif value == 0:
self.desired_set_port_cmd_flags.remove(param_name)
elif param_name in set_port.set_port_current_flags:
if (value == 1) and (param_name not in self.desired_set_port_current_flags):
self.desired_set_port_current_flags.append(param_name)
elif value == 0:
self.desired_set_port_current_flags.remove(param_name)
elif param_name in set_port.set_port_interest_flags:
if (value == 1) and (param_name not in self.desired_set_port_interest_flags):
self.desired_set_port_interest_flags.append(param_name)
elif value == 0:
self.desired_set_port_interest_flags.remove(param_name)
else:
raise ValueError("Unknown param name: " + param_name)
def set_command_param(self, command_name, param_name, param_value):
# we have to check what the param name is
if (command_name is None) or (command_name == ""):
return
if (param_name is None) or (param_name == ""):
return
if command_name not in self.COMMANDS:
self.error("Command name [%s] not defined in %s" % (command_name, self.COMMANDS))
return
if command_name == "add_vap":
self.add_vap_data[param_name] = param_value
elif command_name == "set_port":
self.set_port_data[param_name] = param_value
def add_named_flags(self, desired_list, command_ref):
if desired_list is None:
raise ValueError("addNamedFlags wants a list of desired flag names")
if len(desired_list) < 1:
print("addNamedFlags: empty desired list")
return 0
if (command_ref is None) or (len(command_ref) < 1):
raise ValueError("addNamedFlags wants a maps of flag values")
result = 0
for name in desired_list:
if (name is None) or (name == ""):
continue
if name not in command_ref:
if self.debug:
pprint(command_ref)
raise ValueError("flag %s not in map" % name)
# print("add-named-flags: %s %i"%(name, command_ref[name]))
result |= command_ref[name]
return result
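`add_named_flags()` ORs together the numeric value of each named flag to produce the `flags`/`flags_mask` integers sent to the CLI. A self-contained sketch; the flag values below are made up for illustration (the real ones live in `LANforge/add_vap.py`):

```python
# Hypothetical subset of a flag map; not the real add_vap values.
add_vap_flags = {
    "wpa2_enable": 0x400,
    "80211u_enable": 0x20000,
    "create_admin_down": 0x1000000,
}

def add_named_flags(desired, flag_map):
    # OR each named flag's bit value into a single integer
    result = 0
    for name in desired:
        if name not in flag_map:
            raise ValueError("flag %s not in map" % name)
        result |= flag_map[name]
    return result

print(hex(add_named_flags(["wpa2_enable", "create_admin_down"], add_vap_flags)))
```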
def create(self, resource, radio, channel=None, up_=None, debug=False, use_ht40=True, use_ht80=True,
use_ht160=False,
suppress_related_commands_=True, use_radius=False, hs20_enable=False):
port_list = self.local_realm.json_get("port/1/1/list")
if port_list is not None:
port_list = port_list['interfaces']
for port in port_list:
for k, v in port.items():
if v['alias'] == self.vap_name:
self.local_realm.rm_port(k, check_exists=True)
if use_ht160:
self.desired_add_vap_flags.append("enable_80211d")
self.desired_add_vap_flags_mask.append("enable_80211d")
self.desired_add_vap_flags.append("80211h_enable")
self.desired_add_vap_flags_mask.append("80211h_enable")
self.desired_add_vap_flags.append("ht160_enable")
self.desired_add_vap_flags_mask.append("ht160_enable")
if not use_ht40:
self.desired_add_vap_flags.append("disable_ht40")
self.desired_add_vap_flags_mask.append("disable_ht40")
if not use_ht80:
self.desired_add_vap_flags.append("disable_ht80")
self.desired_add_vap_flags_mask.append("disable_ht80")
if use_radius:
self.desired_add_vap_flags.append("8021x_radius")
self.desired_add_vap_flags_mask.append("8021x_radius")
if hs20_enable:
self.desired_add_vap_flags.append("hs20_enable")
self.desired_add_vap_flags_mask.append("hs20_enable")
# print("MODE ========= ", self.mode)
jr = self.local_realm.json_get("/radiostatus/1/%s/%s?fields=channel,frequency,country" % (resource, radio),
debug_=self.debug)
if jr is None:
raise ValueError("No radio %s.%s found" % (resource, radio))
eid = "1.%s.%s" % (resource, radio)
frequency = 0
country = 0
if eid in jr:
country = jr[eid]["country"]
data = {
"shelf": 1,
"resource": resource,
"radio": radio,
"mode": self.mode, # "NA", #0 for AUTO or "NA"
"channel": channel,
"country": country,
"frequency": self.local_realm.channel_freq(channel_=channel)
}
self.local_realm.json_post("/cli-json/set_wifi_radio", _data=data)
if up_ is not None:
self.up = up_
if self.up:
if "create_admin_down" in self.desired_add_vap_flags:
del self.desired_add_vap_flags[self.desired_add_vap_flags.index("create_admin_down")]
elif "create_admin_down" not in self.desired_add_vap_flags:
self.desired_add_vap_flags.append("create_admin_down")
# create vaps down, do set_port on them, then set vaps up
self.add_vap_data["mode"] = self.mode
self.add_vap_data["flags"] = self.add_named_flags(self.desired_add_vap_flags, add_vap.add_vap_flags)
self.add_vap_data["flags_mask"] = self.add_named_flags(self.desired_add_vap_flags_mask, add_vap.add_vap_flags)
self.add_vap_data["radio"] = radio
self.add_vap_data["resource"] = resource
self.set_port_data["current_flags"] = self.add_named_flags(self.desired_set_port_current_flags,
set_port.set_port_current_flags)
self.set_port_data["interest"] = self.add_named_flags(self.desired_set_port_interest_flags,
set_port.set_port_interest_flags)
# these are unactivated LFRequest objects that we can modify and
# re-use inside a loop, reducing the number of object creations
add_vap_r = LFRequest.LFRequest(self.lfclient_url + "/cli-json/add_vap")
set_port_r = LFRequest.LFRequest(self.lfclient_url + "/cli-json/set_port")
wifi_extra_r = LFRequest.LFRequest(self.lfclient_url + "/cli-json/set_wifi_extra")
if suppress_related_commands_:
self.add_vap_data["suppress_preexec_cli"] = "yes"
self.add_vap_data["suppress_preexec_method"] = 1
self.set_port_data["suppress_preexec_cli"] = "yes"
self.set_port_data["suppress_preexec_method"] = 1
# pprint(self.station_names)
# exit(1)
self.set_port_data["port"] = self.vap_name
self.add_vap_data["ap_name"] = self.vap_name
add_vap_r.addPostData(self.add_vap_data)
if debug:
print("- 1502 - %s- - - - - - - - - - - - - - - - - - " % self.vap_name)
pprint(self.add_vap_data)
pprint(self.set_port_data)
pprint(add_vap_r)
print("- ~1502 - - - - - - - - - - - - - - - - - - - ")
json_response = add_vap_r.jsonPost(debug)
# time.sleep(0.03)
time.sleep(2)
set_port_r.addPostData(self.set_port_data)
json_response = set_port_r.jsonPost(debug)
time.sleep(0.03)
self.wifi_extra_data["resource"] = resource
self.wifi_extra_data["port"] = self.vap_name
if self.wifi_extra_data_modified:
wifi_extra_r.addPostData(self.wifi_extra_data)
json_response = wifi_extra_r.jsonPost(debug)
port_list = self.local_realm.json_get("port/1/1/list")
if port_list is not None:
port_list = port_list['interfaces']
for port in port_list:
for k, v in port.items():
if v['alias'] == 'br0':
self.local_realm.rm_port(k, check_exists=True)
time.sleep(5)
# create bridge
data = {
"shelf": 1,
"resource": resource,
"port": "br0",
"network_devs": "eth1,%s" % self.vap_name
}
self.local_realm.json_post("cli-json/add_br", data)
bridge_set_port = {
"shelf": 1,
"resource": 1,
"port": "br0",
"current_flags": 0x80000000,
"interest": 0x4000 # (0x2 + 0x4000 + 0x800000) # current, dhcp, down
}
self.local_realm.json_post("cli-json/set_port", bridge_set_port)
if (self.up):
self.admin_up(1)
def cleanup(self, resource, delay=0.03):
print("Cleaning up VAPs")
desired_ports = ["1.%s.%s" % (resource, self.vap_name), "1.%s.br0" % resource]
del_count = len(desired_ports)
# First, request remove on the list.
for port_eid in desired_ports:
self.local_realm.rm_port(port_eid, check_exists=True)
# And now see if they are gone
LFUtils.wait_until_ports_disappear(base_url=self.lfclient_url, port_list=desired_ports)

790
py-json/vr_profile2.py Normal file

@@ -0,0 +1,790 @@
import time
from pprint import pprint
from random import randint
from geometry import Rect, Group
from LANforge import LFUtils
from base_profile import BaseProfile
class VRProfile(BaseProfile):
Default_Margin = 15 # margin between routers and router connections
Default_VR_height = 250
Default_VR_width = 50
"""
Virtual Router profile
"""
def __init__(self,
local_realm,
debug=False):
super().__init__(local_realm=local_realm,
debug=debug)
self.vr_eid = None
self.vr_name = None
# self.created_rdds = []
self.cached_vrcx = {}
self.cached_routers = {}
# self.vrcx_data = {
# 'shelf': 1,
# 'resource': 1,
# 'vr-name': None,
# 'local_dev': None, # outer rdd
# 'remote_dev': None, # inner rdd
# "x": 200+ran,
# "y": 0,
# "width": 10,
# "height": 10,
# 'flags': 0,
# "subnets": None,
# "nexthop": None,
# "vrrp_ip": "0.0.0.0"
# }
#
# self.set_port_data = {
# "shelf": 1,
# "resource": 1,
# "port": None,
# "ip_addr": None,
# "netmask": None,
# "gateway": None
# }
"""
https://unihd-cag.github.io/simple-geometry/reference/rect.html
"""
def get_netsmith_bounds(self, resource=None, debug=False):
if (resource is None) or (resource < 1):
raise ValueError("get_netsmith_bounds wants resource id")
debug |= self.debug
occupied_area = self.get_occupied_area(resource=resource, debug=debug)
return Rect(x=0, y=0, height=occupied_area.height, width=occupied_area.width)
def get_all_vrcx_bounds(self, resource=None, debug=False):
"""
Computes bounds of all free vrcx ports but omits Virtual Routers
:param resource:
:param debug:
:return: rectangle encompassing all free vrcx ports or None
"""
if (resource is None) or (resource < 1):
raise ValueError("get_all_vrcx_bounds wants resource id")
vrcx_map = self.vrcx_list(resource=resource, debug=debug)
rect_list = []
for eid,item in vrcx_map.items():
rect_list.append(self.vr_to_rect(item))
if len(rect_list) < 1:
return None
bounding_group = Group()
for item in rect_list:
bounding_group.append(item)
bounding_group.update()
return Rect(x=bounding_group.x,
y=bounding_group.y,
width=bounding_group.width,
height=bounding_group.height)
def vr_eid_to_url(self, eid_str=None, debug=False):
debug |= self.debug
if (eid_str is None) or ("" == eid_str) or (eid_str.index(".") < 1):
raise ValueError("vr_eid_to_url cannot read eid[%s]" % eid_str)
hunks = eid_str.split(".")
if len(hunks) > 3:
return "/vr/1/%s/%s" % (hunks[1], hunks[2])
if len(hunks) > 2:
return "/vr/1/%s/%s" % (hunks[1], hunks[2])
return "/vr/1/%s/%s" % (hunks[0], hunks[1]) # probably a short eid
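`vr_eid_to_url()` maps an EID such as `1.2.rt0` (shelf.resource.name) to a `/vr/` URL, keeping only the resource and name hunks; a two-hunk "short" EID is treated as resource.name. A pure-function sketch of that parsing:

```python
# Trimmed copy of the EID-to-URL parsing, for illustration only.
def vr_eid_to_url(eid_str):
    hunks = eid_str.split(".")
    if len(hunks) > 2:
        # full eid: shelf.resource.name (extra hunks are ignored)
        return "/vr/1/%s/%s" % (hunks[1], hunks[2])
    return "/vr/1/%s/%s" % (hunks[0], hunks[1])  # short eid: resource.name

print(vr_eid_to_url("1.2.rt0"), vr_eid_to_url("2.rt0"))
```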
def vr_to_rect(self, vr_dict=None, debug=False):
debug |= self.debug
if vr_dict is None:
raise ValueError(__name__+": vr_dict should not be none")
if debug:
pprint(("vr_dict: ", vr_dict))
if "x" not in vr_dict:
if "eid" not in vr_dict:
raise ValueError("vr_to_rect: Unable to determine eid of rectangle to query")
router_url = self.vr_eid_to_url(vr_dict["eid"])
expanded_router_j = self.json_get(router_url, debug_=debug)
if expanded_router_j is None:
raise ValueError("vr_to_rect: unable to determine vr using url [%s]"%router_url)
vr_dict = expanded_router_j
return self.to_rect(x=int(vr_dict["x"]),
y=int(vr_dict["y"]),
width=int(vr_dict["width"]),
height=int(vr_dict["height"]))
def to_rect(self, x=0, y=0, width=10, height=10):
rect = Rect(x=int(x), y=int(y), width=int(width), height=int(height))
return rect
def get_occupied_area(self,
resource=1,
debug=False):
debug |= self.debug
if (resource is None) or (resource == 0) or ("" == resource):
raise ValueError("resource needs to be a number greater than 1")
router_map = self.router_list(resource=resource, debug=debug)
vrcx_map = self.vrcx_list(resource=resource, debug=debug)
rect_list = []
for eid,item in router_map.items():
rect_list.append(self.vr_to_rect(item))
for eid,item in vrcx_map.items():
rect_list.append(self.vr_to_rect(item))
if len(rect_list) < 1:
return None
bounding_group = Group()
for item in rect_list:
#if debug:
# pprint(("item:", item))
bounding_group.append(item)
bounding_group.update()
if debug:
pprint(("get_occupied_area: bounding_group:", bounding_group))
time.sleep(5)
return Rect(x=bounding_group.x,
y=bounding_group.y,
width=bounding_group.width,
height=bounding_group.height)
def vrcx_list(self, resource=None,
do_sync=False,
fields=["eid","x","y","height","width"],
debug=False):
"""
:param resource:
:param do_sync:
:param debug:
:return:
"""
debug |= self.debug
if (resource is None) or (resource == ""):
raise ValueError(__name__+ ": resource cannot be blank")
if do_sync or (self.cached_vrcx is None) or (len(self.cached_vrcx) < 1):
self.sync_netsmith(resource=resource, debug=debug)
fields_str = ",".join(fields)
if debug:
pprint([
("vrcx_list: fields", fields_str),
("fields_str", fields_str)
])
time.sleep(5)
list_of_vrcx = self.json_get("/vrcx/1/%s/list?fields=%s" % (resource, fields_str),
debug_=debug)
mapped_vrcx = LFUtils.list_to_alias_map(json_list=list_of_vrcx,
from_element="router-connections",
debug_=debug)
self.cached_vrcx = mapped_vrcx
return self.cached_vrcx
def router_list(self,
resource=None,
do_refresh=True,
fields=("eid", "x", "y", "height", "width"),
debug=False):
"""
Provides an updated list of routers, and caches the results to self.cached_routers.
Call this method again to update the cached list.
:param resource:
:param debug:
:return: list of routers provided by /vr/1/{resource}?fields=eid,x,y,height,width
"""
debug |= self.debug
fields_str = ",".join(fields)
if (resource is None) or (resource == ""):
raise ValueError(__name__+"; router_list needs valid resource parameter")
if do_refresh or (self.cached_routers is None) or (len(self.cached_routers) < 1):
list_of_routers = self.json_get("/vr/1/%s/list?fields=%s" % (resource, fields_str),
debug_=debug)
mapped_routers = LFUtils.list_to_alias_map(json_list=list_of_routers,
from_element="virtual-routers",
debug_=debug)
self.cached_routers = mapped_routers
if debug:
pprint(("cached_routers: ", self.cached_routers))
return self.cached_routers
def create_rdd(self,
resource=1,
ip_addr=None,
netmask=None,
gateway=None,
suppress_related_commands_=True,
debug_=False):
rdd_data = {
"shelf": 1,
"resource": resource,
"port": "rdd0",
"peer_ifname": "rdd1"
}
# print("creating rdd0")
self.json_post("/cli-json/add_rdd",
rdd_data,
)
rdd_data = {
"shelf": 1,
"resource": resource,
"port": "rdd1",
"peer_ifname": "rdd0"
}
# print("creating rdd1")
# self.json_post("/cli-json/add_rdd",
# rdd_data,
# suppress_related_commands_=suppress_related_commands_,
# debug_=debug_)
#
# self.set_port_data["port"] = "rdd0"
# self.set_port_data["ip_addr"] = gateway
# self.set_port_data["netmask"] = netmask
# self.set_port_data["gateway"] = gateway
# self.json_post("/cli-json/set_port",
# self.set_port_data,
# suppress_related_commands_=suppress_related_commands_,
# debug_=debug_)
#
# self.set_port_data["port"] = "rdd1"
# self.set_port_data["ip_addr"] = ip_addr
# self.set_port_data["netmask"] = netmask
# self.set_port_data["gateway"] = gateway
# self.json_post("/cli-json/set_port",
# self.set_port_data,
# suppress_related_commands_=suppress_related_commands_,
# debug_=debug_)
#
# self.created_rdds.append("rdd0")
# self.created_rdds.append("rdd1")
def create_vrcx(self,
resource=1,
local_dev=None,
remote_dev=None,
subnets=None,
nexthop=None,
flags=0,
suppress_related_commands_=True,
debug_=False):
if self.vr_name is None:
raise ValueError("vr_name must be set. Current name: %s" % self.vr_name)
vrcx_data = {}
vrcx_data["resource"] = resource
vrcx_data["vr_name"] = self.vr_name
vrcx_data["local_dev"] = local_dev
vrcx_data["remote_dev"] = remote_dev
vrcx_data["subnets"] = subnets
vrcx_data["nexthop"] = nexthop
vrcx_data["flags"] = flags
self.json_post("/cli-json/add_vrcx",
vrcx_data,
suppress_related_commands_=suppress_related_commands_,
debug_=debug_)
def find_position(self, eid=None, target_group=None, debug=False):
debug |= self.debug
"""
get rectangular coordinates of VR or VRCX
:param eid:
:param target_group:
:return:
"""
pass
def next_available_area(self,
go_right=True,
go_down=False,
debug=False,
height=Default_VR_height,
width=Default_VR_width):
"""
Returns a coordinate adjacent to the right or bottom of the presently occupied area with a 15px margin.
:param go_right: look to right
:param go_down: look to bottom
:param debug:
:return: rectangle that the next VR could occupy
"""
debug |= self.debug
# pprint(("used_vrcx_area:", used_vrcx_area))
# print("used x %s, y %s" % (used_vrcx_area.right+15, used_vrcx_area.top+15 ))
if not (go_right or go_down):
raise ValueError("Either go right or go down")
used_vrcx_area = self.get_occupied_area(resource=self.vr_eid[1], debug=debug)
next_area = None
if (go_right):
next_area = Rect(x=used_vrcx_area.right+15,
y=15,
width=50,
height=250)
elif (go_down):
next_area = Rect(x=15,
y=used_vrcx_area.bottom+15,
width=50,
height=250)
else:
raise ValueError("Unexpected positioning")
# pprint(("next_rh_area", next_area))
# print("next_rh_area: right %s, top %s" % (next_area.right, next_area.top ))
# print("next_rh_area: x %s, y %s" % (next_area.x, next_area.y ))
return next_area
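`next_available_area()` places the next 50x250 router box 15px to the right of (or below) the area already occupied on the Netsmith canvas. The geometry reduces to simple edge arithmetic, sketched here with a bare-bones rectangle instead of the `geometry` library's `Rect`:

```python
# Minimal rectangle with the .right/.bottom edges the placement math uses.
class MiniRect:
    def __init__(self, x, y, width, height):
        self.x, self.y, self.width, self.height = x, y, width, height
    @property
    def right(self):
        return self.x + self.width
    @property
    def bottom(self):
        return self.y + self.height

occupied = MiniRect(0, 0, 400, 300)          # hypothetical occupied area
next_right = MiniRect(occupied.right + 15, 15, 50, 250)  # 15px margin
print(next_right.x, next_right.y)
```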
def is_inside_virtual_router(self, resource=None, vrcx_rect=None, vr_eid=None, debug=False):
"""
:param resource: resource id
:param vrcx_rect: port rectangle, probably 10px x 10px
:param vr_eid: 'all' or router_eid, None is not acceptable
:param debug:
:return: True if area is inside listed virtual router(s)
"""
debug |= self.debug
if (resource is None) or (resource == 0) or ("" == resource):
raise ValueError("resource needs to be a number greater than 1")
if (vrcx_rect is None) or (not isinstance(vrcx_rect, Rect)):
raise ValueError("vrcx_rect needs to be a Rect")
router_list = self.router_list(resource=resource, debug=debug)
#router_list = self.json_get("/vr/1/%s/%s?fields=eid,x,y,height,width")
if (router_list is None) or (len(router_list) < 1):
return False
for router in router_list:
rect = self.vr_to_rect(router)
if (vr_eid == "all"):
if (vrcx_rect.is_inside_of(rect)):
return True
else:
if (vr_eid == router["eid"]) and (vrcx_rect.is_inside_of(rect)):
return True
return False
def find_cached_router(self, resource=0, router_name=None, debug=False):
debug |= self.debug
if (resource is None) or (resource == 0):
raise ValueError(__name__+": find_cached_router needs resource_id")
if (router_name is None) or (router_name == ""):
raise ValueError(__name__+": find_cached_router needs router_name")
temp_eid_str = "1.%s.1.65535.%s" % (resource, router_name)
if temp_eid_str in self.cached_routers.keys():
return self.cached_routers[temp_eid_str]
temp_eid_str = "1.%s." % resource
for router in self.cached_routers.keys():
if debug:
pprint(("cached_router: ", router))
if router.startswith(temp_eid_str) and router.endswith(router_name):
return self.cached_routers[router]
if self.exit_on_error:
raise ValueError("Unable to find cached router %s"%temp_eid_str)
# exit(1)
return None
def add_vrcx_to_router(self, vrcx_name=None, vr_eid=None, debug=False):
"""
This is the Java pseudocode:
def moveConnection:
found_router = findRouter(x, y)
if connection.getRouter() is None:
if found_router.addConnection():
free_vrxc.remove(connection)
connection.setPosition(x, y)
return
if found_router is not None:
router.remove(connection)
free_vrcx.add(connection)
else:
if found_router != router:
router.remove(connection)
found_router.add(connection)
connection.setPosition(x, y)
:param vrcx_name:
:param vr_eid:
:param debug:
:return: new coordinates tuple
"""
debug |= self.debug
if debug:
pprint([("move_vrcx: vr_eid:", vr_eid),
("vrcx_name:", vrcx_name),
("self.cached_routers, check vr_eid:", self.cached_routers)])
time.sleep(5)
if (vrcx_name is None) or (vrcx_name == ""):
raise ValueError(__name__+"empty vrcx_name")
if (vr_eid is None) or (vr_eid == ""):
raise ValueError(__name__+"empty vr_eid")
my_vrcx_name = vrcx_name
if (vrcx_name.index(".") > 0):
hunks = vrcx_name.split(".")
my_vrcx_name = hunks[-1]
if debug:
pprint([("move_vrcx: vr_eid:", vr_eid),
("vrcx_name:", my_vrcx_name),
("self.cached_routers, check vr_eid:", self.cached_routers)])
router_val = self.find_cached_router(resource=vr_eid[1], router_name=vr_eid[2])
if router_val is None:
self.router_list(resource=vr_eid[1], debug=debug)
router_val = self.find_cached_router(resource=vr_eid[1], router_name=vr_eid[2])
if router_val is None:
raise ValueError(__name__+": move_vrcx: No router matches %s"%vr_eid)
new_bounds = self.vr_to_rect(vr_dict=router_val, debug=self.debug)
new_location = self.vrcx_landing_spot(bounds=new_bounds, debug=debug)
self.json_post("/cli-json/add_vrcx", {
"shelf": 1,
"resource": vr_eid[1],
"vr_name": vr_eid[2],
"local_dev": my_vrcx_name,
"x": new_location[0],
"y": new_location[1],
}, debug_=debug)
if debug:
pprint([
("router_val", router_val),
("new_bounds", new_bounds),
("new_location", new_location),
("my_vrcx_name",my_vrcx_name),
("router_val",router_val)
])
return new_location
def move_vr(self, eid=None, go_right=True, go_down=False, upper_left_x=None, upper_left_y=None, debug=False):
"""
:param eid: virtual router EID
:param go_right: select next area to the right of things
:param go_down: select next area below all things
:param upper_left_x: integer value for specific x
:param upper_left_y: integer value for specific y
:return:
"""
debug |= self.debug
used_vrcx_area = self.get_occupied_area(resource=self.vr_eid[1], debug=debug)
def sync_netsmith(self, resource=0, delay=0.1, debug=False):
"""
This syncs the netsmith window. Doing a sync could destroy any move changes you just did.
:param resource:
:param delay:
:param debug:
:return:
"""
debug |= self.debug
if (resource is None) or (resource < 1):
raise ValueError("sync_netsmith: resource must be > 0")
self.json_post("/vr/1/%s/0" % resource, { "action": "sync" }, debug_=True)
time.sleep(delay)
def apply_netsmith(self, resource=0, delay=2, timeout=30, debug=False):
debug |= self.debug
if resource is None or resource < 1:
raise ValueError("refresh_netsmith: resource must be > 0")
self.json_post("/vr/1/%s/0" % resource, { "action":"apply" }, debug_=debug)
# now poll vrcx to check state
state = "UNSET"
cur_time = int(time.time())
end_time = cur_time + timeout # timeout and time.time() are both in seconds
while (cur_time < end_time) and (state != "OK"):
time.sleep(delay)
cur_time = int(time.time())
state = "UNSET"
connection_list = self.vrcx_list(resource=resource,
do_sync=True,
fields=["eid", "netsmith-state"],
debug=debug)
vrcx_list_keys = list(connection_list.keys())
if debug:
pprint([
("vrcx_list", connection_list),
("keys", vrcx_list_keys)])
time.sleep(5)
if (connection_list is not None) and (len(vrcx_list_keys) > 0):
if (vrcx_list_keys[0] is not None) and ("netsmith-state" in connection_list[vrcx_list_keys[0]]):
item = connection_list[vrcx_list_keys[0]]
if debug:
pprint(("item zero", item))
state = item["netsmith-state"]
else:
self.logg("apply_netsmith: no vrcx list?")
if (state != "UNSET"):
continue
vr_list = self.router_list(resource=resource,
fields=("eid", "netsmith-state"),
debug=debug)
if (vr_list is not None) and (len(vr_list) > 0):
if (vr_list[0] is not None) and ("netsmith-state" in vr_list[0]):
state = vr_list[0]["netsmith-state"]
else:
self.logg("apply_netsmith: no vr_list?")
return state
def refresh_netsmith(self, resource=0, delay=0.03, debug=False):
"""
This does not do a netsmith->Apply.
This does not do a netsmith sync. Doing a sync could destroy any move changes you just did.
This is VirtualRouterPanel.privDoUpdate:
for vr in virtual_routers:
vr.ensurePortsCreated()
for connection in free_router_connections:
connection.ensurePortsCreated()
for vr in virtual_routers:
... remove connections that are unbound
for vr in virtual_routers:
remove vr that cannot be found
for connections in vrcx:
remove connection not found or remove endpoint from free list
for router in virtual_routers:
update vr
for connection in free_connections:
update connection
apply_vr_cfg
show_card
show_vr
show_vrcx
:param resource:
:param delay:
:param debug:
:return:
"""
debug |= self.debug
if resource is None or resource < 1:
raise ValueError("refresh_netsmith: resource must be > 0")
self.json_post("/cli-json/apply_vr_cfg", {
"shelf": 1,
"resource": resource
}, debug_=debug, suppress_related_commands_=True)
self.json_post("/cli-json/show_resources", {
"shelf": 1,
"resource": resource
}, debug_=debug)
time.sleep(delay)
self.json_post("/cli-json/show_vr", {
"shelf": 1,
"resource": resource,
"router": "all"
}, debug_=debug)
self.json_post("/cli-json/show_vrcx", {
"shelf": 1,
"resource": resource,
"cx_name": "all"
}, debug_=debug)
time.sleep(delay * 2)
def create(self,
vr_name=None,
debug=False,
suppress_related_commands=True):
# Create vr
debug |= self.debug
if vr_name is None:
raise ValueError("vr_name must be set. Current name: %s" % vr_name)
self.vr_eid = self.parent_realm.name_to_eid(vr_name)
if debug:
pprint(("self.vr_eid:", self.vr_eid))
# determine a free area to place a router
next_area = self.next_available_area(go_right=True, debug=debug)
self.add_vr_data = {
"alias": self.vr_eid[2],
"shelf": 1,
"resource": self.vr_eid[1],
"x": int(next_area.x),
"y": 15,
"width": 50,
"height": 250,
"flags": 0
}
self.json_post("/cli-json/add_vr",
self.add_vr_data,
suppress_related_commands_=suppress_related_commands,
debug_=debug)
self.json_post("/cli-json/apply_vr_cfg", {
"shelf": 1,
"resource": self.vr_eid[1]
}, debug_=debug, suppress_related_commands_=suppress_related_commands)
time.sleep(1)
self.apply_netsmith(resource=self.vr_eid[1], debug=debug)
def wait_until_vrcx_appear(self, resource=0, name_list=None, timeout_sec=120, debug=False):
debug |= self.debug
if (name_list is None) or (len(name_list) < 1):
raise ValueError("wait_until_vrcx_appear wants a non-empty name list")
num_expected = len(name_list)
num_found = 0
cur_time = int(time.time())
end_time = cur_time + timeout_sec
sync_time = 10
while (num_found < num_expected) and (cur_time <= end_time):
time.sleep(1)
cur_time = int(time.time())
num_found = 0
response = self.json_get("/vrcx/1/%s/list" % resource)
if (response is None) or ("router-connections" not in response):
raise ValueError("unable to find router-connections for /vrcx/1/%s/list" % resource)
vrcx_list = LFUtils.list_to_alias_map(json_list=response, from_element='router-connections', debug_=debug)
num_found = len(vrcx_list)
if (num_found < 1):
self.logg("wait_until_vrcx_appear: zero vrcx in vrcx_list")
raise ValueError("zero router-connections for /vrcx/1/%s/list" % resource)
num_found = 0
for name in name_list:
name = "1.%s.%s" % (resource, name)
if name in vrcx_list:
num_found += 1
if num_found == len(name_list):
return True
# this should not be done yet
# self.refresh_netsmith(resource=resource, debug=debug)
if ((end_time - cur_time) % sync_time) == 0:
self.sync_netsmith(resource=resource, debug=debug)
time.sleep(1)
if (num_found > 0) and (num_found < num_expected):
self.refresh_netsmith(resource=resource, debug=debug)
if debug:
pprint([("response", response),
("list", vrcx_list),
("num_found", num_found),
("num_expected", num_expected)
])
self.logg("wait_until_vrcx_appear: timeout waiting for router-connections to appear")
return False
def remove_vr(self, eid=None,
refresh=True,
debug=False,
delay=0.05,
die_on_error=False,
suppress_related_commands=True):
if (eid is None) or (eid[1] is None) or (eid[2] is None):
self.logg("remove_vr: invalid eid: ", audit_list=[eid])
if (die_on_error):
raise ValueError("remove_vr: invalid eid")
return
data = {
"shelf": 1,
"resource": eid[1],
"router_name": eid[2]
}
self.json_post("/cli-json/rm_vr", data, debug_=debug, suppress_related_commands_=suppress_related_commands)
time.sleep(delay)
if (refresh):
self.json_post("/cli-json/nc_show_vr", {
"shelf": 1,
"resource": eid[1],
"router": "all"
}, debug_=debug, suppress_related_commands_=suppress_related_commands)
self.json_post("/cli-json/nc_show_vrcx", {
"shelf": 1,
"resource": eid[1],
"cx_name": "all"
}, debug_=debug, suppress_related_commands_=suppress_related_commands)
def cleanup(self, resource=0, vr_id=0, delay=0.3, debug=False):
debug |= self.debug
if self.vr_eid is None:
return
if resource == 0:
resource = self.vr_eid[1]
if vr_id == 0:
vr_id = self.vr_eid[2]
data = {
"shelf": 1,
"resource": resource,
"router_name": vr_id
}
self.json_post("/cli-json/rm_vr", data, debug_=debug, suppress_related_commands_=True)
time.sleep(delay)
self.refresh_netsmith(resource=resource, debug=debug)
def add_vrcx(self, vr_eid=None, connection_name_list=None, debug=False):
if (vr_eid is None) or (vr_eid == ""):
raise ValueError(__name__+": add_vrcx wants router EID")
existing_list = self.vrcx_list(resource=vr_eid[1], do_sync=True)
if debug:
pprint([
("vr_eid", vr_eid),
("connect_names", connection_name_list),
("existing_list", existing_list)
])
time.sleep(10)
edited_connection_list = []
if type(connection_name_list) == str:
edited_connection_list.append(connection_name_list)
else:
edited_connection_list = connection_name_list
if debug:
pprint(("my_list was:", edited_connection_list))
time.sleep(1)
# for vrcx_name in my_list:
edited_connection_list[:] = [x if x.startswith("1.") else "1.%s.%s" % (vr_eid[1], x) for x in edited_connection_list]
if debug:
pprint(("my list is now:", edited_connection_list))
# at this point move the vrcx into the vr
for vrcx_name in edited_connection_list:
print ("Looking for old coordinates of %s"%vrcx_name)
if debug:
pprint([("vrcx_name:", vrcx_name),
("existing_list", existing_list.get(vrcx_name))])
if existing_list.get(vrcx_name) is None:
if debug:
pprint(("existing_list:", existing_list))
raise ValueError("Is vrcx mis-named?")
old_coords = self.vr_to_rect( existing_list.get(vrcx_name))
if old_coords is None:
raise ValueError("old coordinates for vrcx disappeared")
new_coords = self.add_vrcx_to_router(vrcx_name=vrcx_name, vr_eid=vr_eid, debug=debug)
if debug:
print("coordinates were %s and will become %s "%(old_coords, new_coords))
def vrcx_landing_spot(self, bounds=None, debug=False):
"""
:param bounds: Rect we will select position within a 15px margin inside
:param debug:
:return: tuple (new_x, new_y) within bounds
"""
if (bounds is None):
raise ValueError(__name__+": missing bounds to land vrcx")
if not isinstance(bounds, Rect):
raise ValueError(__name__+": bounds not of type Rect")
pprint([("bounds.x", bounds.x),
("bounds.y", bounds.y),
("bounds.width", bounds.x+bounds.width),
("bounds.height", bounds.y+bounds.height)
])
new_x = randint(bounds.x+15, bounds.x+bounds.width-15)
new_y = randint(bounds.y+15, bounds.y+bounds.height-15)
return (new_x, new_y)
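The apply/poll pattern used by `apply_netsmith()` above (post an action, then repeatedly query a list endpoint until its `netsmith-state` field reads `OK` or a wall-clock timeout expires) can be sketched in isolation. This is a minimal illustration, not the library's API: `poll_until_state` and `FakeVrcx` are hypothetical names, and the fake stands in for the JSON queries.

```python
import time

def poll_until_state(fetch_state, target="OK", timeout_sec=30, delay=0.1):
    """Poll fetch_state() until it returns `target` or timeout_sec elapses.

    Mirrors the loop in apply_netsmith(): both sides of the comparison are
    wall-clock seconds, so the timeout behaves as expected.
    """
    end_time = time.time() + timeout_sec
    state = "UNSET"
    while (time.time() < end_time) and (state != target):
        state = fetch_state()
        if state != target:
            time.sleep(delay)
    return state

# Hypothetical stand-in for the vrcx query: reports "PENDING" twice, then "OK".
class FakeVrcx:
    def __init__(self):
        self.calls = 0
    def state(self):
        self.calls += 1
        return "OK" if self.calls > 2 else "PENDING"

fake = FakeVrcx()
print(poll_until_state(fake.state, delay=0))  # prints "OK"
```

A caller that never reaches the target state simply gets the last observed state back after the timeout, which is why `apply_netsmith()` returns `state` for the caller to inspect.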
###
###
###
@@ -0,0 +1,128 @@
#!/usr/bin/env python3
from LANforge.lfcli_base import LFCliBase
from LANforge import add_monitor
from LANforge.add_monitor import *
from LANforge import LFUtils
from pprint import pprint
import time
class WifiMonitor:
def __init__(self, lfclient_url, local_realm, up=True, debug_=False, resource_=1):
self.debug = debug_
self.lfclient_url = lfclient_url
self.up = up
self.local_realm = local_realm
self.monitor_name = None
self.resource = resource_
self.flag_names = []
self.flag_mask_names = []
self.flags_mask = add_monitor.default_flags_mask
self.aid = "NA" # used when sniffing /ax radios
self.bsssid = "00:00:00:00:00:00" # used when sniffing on /ax radios
def create(self, resource_=1, channel=None, radio_="wiphy0", name_="moni0"):
print("Creating monitor " + name_)
self.monitor_name = name_
computed_flags = 0
for flag_n in self.flag_names:
computed_flags += add_monitor.flags[flag_n]
# we want to query the existing country code of the radio
# there's no reason to change it but we get hollering from server
# if we don't provide a value for the parameter
jr = self.local_realm.json_get("/radiostatus/1/%s/%s?fields=channel,frequency,country" % (resource_, radio_),
debug_=self.debug)
if jr is None:
raise ValueError("No radio %s.%s found" % (resource_, radio_))
eid = "1.%s.%s" % (resource_, radio_)
#frequency = 0
country = 0
if eid in jr:
country = jr[eid]["country"]
data = {
"shelf": 1,
"resource": resource_,
"radio": radio_,
"mode": 0, # "NA", #0 for AUTO or "NA"
"channel": channel,
"country": country,
"frequency": self.local_realm.channel_freq(channel_=channel)
}
self.local_realm.json_post("/cli-json/set_wifi_radio", _data=data)
time.sleep(1)
self.local_realm.json_post("/cli-json/add_monitor", {
"shelf": 1,
"resource": resource_,
"radio": radio_,
"ap_name": self.monitor_name,
"flags": computed_flags,
"flags_mask": self.flags_mask
})
def set_flag(self, param_name, value):
if (param_name not in add_monitor.flags):
raise ValueError("Flag '%s' does not exist for add_monitor, consult add_monitor.py" % param_name)
if (value == 1) and (param_name not in self.flag_names):
self.flag_names.append(param_name)
elif (value == 0) and (param_name in self.flag_names):
self.flag_names.remove(param_name)
self.flags_mask |= add_monitor.flags[param_name]
def cleanup(self, resource_=1, desired_ports=None):
print("Cleaning up monitors")
if (desired_ports is None) or (len(desired_ports) < 1):
if (self.monitor_name is None) or (self.monitor_name == ""):
print("No monitor name set to delete")
return
LFUtils.removePort(resource=resource_,
port_name=self.monitor_name,
baseurl=self.lfclient_url,
debug=self.debug)
else:
names = ",".join(desired_ports)
existing_ports = self.local_realm.json_get("/port/1/%d/%s?fields=alias" % (resource_, names), debug_=False)
if (existing_ports is None) or (("interfaces" not in existing_ports) and ("interface" not in existing_ports)):
print("No monitor names found to delete")
return
if ("interfaces" in existing_ports):
for eid, info in existing_ports["interfaces"].items():
LFUtils.removePort(resource=resource_,
port_name=info["alias"],
baseurl=self.lfclient_url,
debug=self.debug)
if ("interface" in existing_ports):
for eid, info in existing_ports["interface"].items():
LFUtils.removePort(resource=resource_,
port_name=info["alias"],
baseurl=self.lfclient_url,
debug=self.debug)
def admin_up(self):
up_request = LFUtils.port_up_request(resource_id=self.resource, port_name=self.monitor_name)
self.local_realm.json_post("/cli-json/set_port", up_request)
def admin_down(self):
down_request = LFUtils.portDownRequest(resource_id=self.resource, port_name=self.monitor_name)
self.local_realm.json_post("/cli-json/set_port", down_request)
def start_sniff(self, capname=None, duration_sec=60):
if capname is None:
raise ValueError("Need a capture file name")
data = {
"shelf": 1,
"resource": 1,
"port": self.monitor_name,
"display": "NA",
"flags": 0x2,
"outfile": capname,
"duration": duration_sec
}
self.local_realm.json_post("/cli-json/sniff_port", _data=data)
py-json/wlan_test.py → py-json/wlan_theoretical_sta.py Normal file → Executable file
@@ -1,5 +1,10 @@
'''
Candela Technologies Inc.
Info : Standard Script for WLAN Capacity Calculator
Date :
Author : Anjali Rahamatkar
This Script has three classes :
1. abg11_calculator : It will take all the user input of an 802.11a/b/g station, calculate Intermediate values and Theoretical values.
2. n11_calculator : It will take all the user input of an 802.11n station, calculate Intermediate values and Theoretical values.
@@ -15,7 +20,8 @@ import json
# Class to take all user input (802.11a/b/g Standard)
class abg11_calculator:
class abg11_calculator():
def __init__(self, Traffic_Type, PHY_Bit_Rate, Encryption, QoS, MAC_Frame_802_11, Basic_Rate_Set, Preamble,
slot_name, Codec_Type, RTS_CTS_Handshake, CTS_to_self):
@@ -31,9 +37,77 @@ class abg11_calculator:
self.RTS_CTS_Handshake = RTS_CTS_Handshake
self.CTS_to_self = CTS_to_self
# This function is for calculate intermediate values and Theoretical values
def input_parameter(self):
@staticmethod
def create_argparse(prog=None, formatter_class=None, epilog=None, description=None):
if (prog is not None) or (formatter_class is not None) or (epilog is not None) or (description is not None):
ap = argparse.ArgumentParser(prog=prog,
formatter_class=formatter_class,
allow_abbrev=True,
epilog=epilog,
description=description)
else:
ap = argparse.ArgumentParser()
# Station : 11abg
ap.add_argument("-sta", "--station", help="Enter Station Name : [11abg,11n,11ac](by Default 11abg)")
ap.add_argument("-t", "--traffic", help="Enter the Traffic Type : [Data,Voice](by Default Data)")
ap.add_argument("-p", "--phy",
help="Enter the PHY Bit Rate of Data Flow : [1, 2, 5.5, 11, 6, 9, 12, 18, 24, 36, 48, 54](by Default 54)")
ap.add_argument("-e", "--encryption",
help="Enter the Encryption : [None, WEP , TKIP, CCMP](by Default None)")
ap.add_argument("-q", "--qos", help="Enter the QoS = : [No, Yes](by Default [No for 11abg] and [Yes for 11n])")
ap.add_argument("-m", "--mac",
help="Enter the 802.11 MAC Frame : [Any Value](by Default [106 for 11abg] and [1538 for 11n])")
ap.add_argument("-b", "--basic", nargs='+',
help="Enter the Basic Rate Set : [1,2, 5.5, 11, 6, 9, 12, 18, 24, 36, 48, 54]"
" (by Default [1 2 5.5 11 6 12] for 11abg, [6 12 24] for 11n/11ac])")
ap.add_argument("-pre", "--preamble", help="Enter Preamble value : [ Short, Long, N/A](by Default Short)")
ap.add_argument("-s", "--slot", help="Enter the Slot Time : [Short, Long, N/A](by Default Short)")
ap.add_argument("-co", "--codec", help="Enter the Codec Type (Voice Traffic): {[ G.711 , G.723 , G.729]"
"by Default G.723 for 11abg, G.711 for 11n} and"
"{['Mixed','Greenfield'] by Default Mixed for 11ac}")
ap.add_argument("-r", "--rts", help="Enter the RTS/CTS Handshake : [No, Yes](by Default No)")
ap.add_argument("-c", "--cts", help="Enter the CTS-to-self (protection) : [No, Yes](by Default No)")
# Station : 11n and 11ac
ap.add_argument("-d", "--data",
help="Enter the Data/Voice MCS Index : ['0','1','2','3','4','5','6','7','8','9','10',"
"'11','12','13','14','15','16','17','18','19','20','21','22','23','24','25','26',"
"'27','28','29','30','31']by Default 7")
ap.add_argument("-ch", "--channel",
help="Enter the Channel Bandwidth = : ['20','40'] by Default 40 for 11n and "
"['20','40','80'] by Default 80 for 11ac")
ap.add_argument("-gu", "--guard", help="Enter the Guard Interval = : ['400','800'] (by Default 400)")
ap.add_argument("-high", "--highest",
help="Enter the Highest Basic MCS = : ['0','1','2','3','4','5','6','7','8','9',"
"'10','11','12','13','14','15','16','17','18','19','20','21','22','23','24',"
"'25','26','27','28','29','30','31'](by Default 1)")
ap.add_argument("-pl", "--plcp",
help="Enter the PLCP Configuration = : ['Mixed','Greenfield'] (by Default Mixed) for 11n")
ap.add_argument("-ip", "--ip",
help="Enter the IP Packets per A-MSDU = : ['0','1','2','3','4','5','6','7','8','9',"
"'10','11','12','13','14','15','16','17','18','19','20'] (by Default 0)")
ap.add_argument("-mc", "--mc",
help="Enter the MAC Frames per A-MPDU = : ['0','1','2','3','4','5','6','7','8',"
"'9','10','11','12','13','14','15','16','17','18','19','20','21','22','23',"
"'24','25','26','27','28','29','30','31','32','33','34','35','36','37','38',"
"'39','40','41','42','43','44','45','46','47','48','49','50','51','52','53',"
"'54','55','56','57','58','59','60','61','62','63','64'](by Default [42 for 11n] and [64 for 11ac])")
ap.add_argument("-cw", "--cwin",
help="Enter the CWmin (leave alone for default) = : [Any Value] (by Default 15)")
ap.add_argument("-spa", "--spatial", help="Enter the Spatial Streams = [1,2,3,4] (by Default 4)")
ap.add_argument("-rc", "--rtscts", help="Enter the RTS/CTS Handshake and CTS-to-self "
" = ['No','Yes'] (by Default No for 11ac)")
return ap
def calculate(self):
PHY_Bit_Rate_float = float(self.PHY_Bit_Rate)
PHY_Bit_Rate_int = int(PHY_Bit_Rate_float)
@@ -419,7 +493,7 @@ class abg11_calculator:
Client_5 = Ttxframe_data + SIFS_value + Ttxframe + DIFS_value + RTS_CTS_Handshake_Overhead + CTS_to_self_Handshake + MeanBackoff_value / 20
Client_6 = Ttxframe_data + SIFS_value + Ttxframe + DIFS_value + RTS_CTS_Handshake_Overhead + CTS_to_self_Handshake + MeanBackoff_value / 50
Client_7 = Ttxframe_data + SIFS_value + Ttxframe + DIFS_value + RTS_CTS_Handshake_Overhead + CTS_to_self_Handshake + MeanBackoff_value / 100
Client_1_new = format(Client_1, '.2f')
self.Client_1_new = format(Client_1, '.2f')
Client_2_new = format(Client_2, '.4f')
Client_3_new = format(Client_3, '.4f')
Client_4_new = format(Client_4, '.4f')
@@ -430,7 +504,7 @@ class abg11_calculator:
# Max Frame Rate
Max_Frame_Rate_C1 = 1000000 / Client_1
Max_Frame_Rate_C1_round = round(Max_Frame_Rate_C1)
self.Max_Frame_Rate_C1_round = round(Max_Frame_Rate_C1)
Max_Frame_Rate_C2 = 1000000 / Client_2
Max_Frame_Rate_C2_round = round(Max_Frame_Rate_C2)
Max_Frame_Rate_C3 = 1000000 / Client_3
@@ -447,7 +521,7 @@ class abg11_calculator:
# Max. Offered Load (802.11)
Max_Offered_Load_C1 = Max_Frame_Rate_C1 * Nbits_value / 1000000
Max_Offered_Load_C1_new = format(Max_Offered_Load_C1, '.3f')
self.Max_Offered_Load_C1_new = format(Max_Offered_Load_C1, '.3f')
Max_Offered_Load_C2 = Max_Frame_Rate_C2 * Nbits_value / 1000000
Max_Offered_Load_C2_new = format(Max_Offered_Load_C2, '.3f')
Max_Offered_Load_C3 = Max_Frame_Rate_C3 * Nbits_value / 1000000
@@ -464,7 +538,7 @@ class abg11_calculator:
# Offered Load Per 802.11 Client
Offered_Load_Per_Client1 = Max_Offered_Load_C1 / 1
Offered_Load_Per_Client1_new = format(Offered_Load_Per_Client1, '.3f')
self.Offered_Load_Per_Client1_new = format(Offered_Load_Per_Client1, '.3f')
Offered_Load_Per_Client2 = Max_Offered_Load_C2 / 2
Offered_Load_Per_Client2_new = format(Offered_Load_Per_Client2, '.3f')
Offered_Load_Per_Client3 = Max_Offered_Load_C3 / 5
@@ -481,7 +555,7 @@ class abg11_calculator:
# Offered Load (802.3 Side)
Offered_Load_C1 = Max_Frame_Rate_C1 * Ethernet_MAC_Frame_int * 8 / 1000000
Offered_Load_C1_new = format(Offered_Load_C1, '.3f')
self.Offered_Load_C1_new = format(Offered_Load_C1, '.3f')
Offered_Load_C2 = Max_Frame_Rate_C2 * Ethernet_MAC_Frame_int * 8 / 1000000
Offered_Load_C2_new = format(Offered_Load_C2, '.3f')
Offered_Load_C3 = Max_Frame_Rate_C3 * Ethernet_MAC_Frame_int * 8 / 1000000
@@ -499,7 +573,7 @@ class abg11_calculator:
if ip == 1:
IP_Throughput_C1 = Max_Frame_Rate_C1 * ip_packet * 8 / 1000000
IP_Throughput_C1_new = format(IP_Throughput_C1, '.3f')
self.IP_Throughput_C1_new = format(IP_Throughput_C1, '.3f')
IP_Throughput_C2 = Max_Frame_Rate_C2 * ip_packet * 8 / 1000000
IP_Throughput_C2_new = format(IP_Throughput_C2, '.3f')
IP_Throughput_C3 = Max_Frame_Rate_C3 * ip_packet * 8 / 1000000
@@ -513,7 +587,7 @@ class abg11_calculator:
IP_Throughput_C7 = Max_Frame_Rate_C7 * ip_packet * 8 / 1000000
IP_Throughput_C7_new = format(IP_Throughput_C7, '.3f')
else:
IP_Throughput_C1_new = "N/A"
self.IP_Throughput_C1_new = "N/A"
IP_Throughput_C2_new = "N/A"
IP_Throughput_C3_new = "N/A"
IP_Throughput_C4_new = "N/A"
@@ -521,102 +595,104 @@ class abg11_calculator:
IP_Throughput_C6_new = "N/A"
IP_Throughput_C7_new = "N/A"
print("\n" + "******************Station : 11abgCalculator*****************************" + "\n")
print("Theoretical Maximum Offered Load" + "\n")
print("1 Client:")
All_theoretical_output = {'Packet Interval(usec)': Client_1_new, 'Max Frame Rate(fps)': Max_Frame_Rate_C1_round,
'Max. Offered Load (802.11)(Mb/s)': Max_Offered_Load_C1_new,
'Offered Load Per 802.11 Client(Mb/s)': Offered_Load_Per_Client1_new,
'Offered Load (802.3 Side)(Mb/s)': Offered_Load_C1_new,
'IP Throughput (802.11 -> 802.3)(Mb/s)': IP_Throughput_C1_new}
print(json.dumps(All_theoretical_output, indent=4))
Voice_Call = Max_Frame_Rate_C1 / Codec_Frame_rate
Voice_Call_value = round(Voice_Call)
if "Data" in self.Traffic_Type:
Maximum_Theoretical_R_value = "N/A"
self.Maximum_Theoretical_R_value = "N/A"
else:
if "G.711" in self.Codec_Type:
Maximum_Theoretical_R_value = 85.9
self.Maximum_Theoretical_R_value = 85.9
else:
if "G.723" in self.Codec_Type:
Maximum_Theoretical_R_value = 72.9
self.Maximum_Theoretical_R_value = 72.9
else:
if "G.729" in self.Codec_Type:
Maximum_Theoretical_R_value = 81.7
self.Maximum_Theoretical_R_value = 81.7
else:
Maximum_Theoretical_R_value = 93.2
self.Maximum_Theoretical_R_value = 93.2
if "Data" in self.Traffic_Type:
Estimated_MOS_Score = "N/A"
Maximum_Bidirectional = "N/A"
self.Estimated_MOS_Score = "N/A"
self.Maximum_Bidirectional_Voice_Calls = "N/A"
else:
if (Voice_Call_value <= 1):
Maximum_Bidirectional_Voice_Calls = Max_Frame_Rate_C1_round / Codec_Frame_rate
Maximum_Bidirectional_Voice_Calls1 = self.Max_Frame_Rate_C1_round / Codec_Frame_rate
elif (Voice_Call_value <= 2):
Maximum_Bidirectional_Voice_Calls = Max_Frame_Rate_C2_round / Codec_Frame_rate
Maximum_Bidirectional_Voice_Calls1 = Max_Frame_Rate_C2_round / Codec_Frame_rate
elif (Voice_Call_value <= 5):
Maximum_Bidirectional_Voice_Calls = Max_Frame_Rate_C3_round / Codec_Frame_rate
Maximum_Bidirectional_Voice_Calls1 = Max_Frame_Rate_C3_round / Codec_Frame_rate
elif (Voice_Call_value <= 10):
Maximum_Bidirectional_Voice_Calls = Max_Frame_Rate_C4_round / Codec_Frame_rate
Maximum_Bidirectional_Voice_Calls1 = Max_Frame_Rate_C4_round / Codec_Frame_rate
elif (Voice_Call_value <= 20):
Maximum_Bidirectional_Voice_Calls = Max_Frame_Rate_C5_round / Codec_Frame_rate
Maximum_Bidirectional_Voice_Calls1 = Max_Frame_Rate_C5_round / Codec_Frame_rate
elif (Voice_Call_value <= 50):
Maximum_Bidirectional_Voice_Calls = Max_Frame_Rate_C6_round / Codec_Frame_rate
Maximum_Bidirectional_Voice_Calls1 = Max_Frame_Rate_C6_round / Codec_Frame_rate
else:
Maximum_Bidirectional_Voice_Calls = Max_Frame_Rate_C7_round / Codec_Frame_rate
Maximum_Bidirectional = round(Maximum_Bidirectional_Voice_Calls, 2)
if Maximum_Theoretical_R_value < 0:
Estimated_MOS_Score = 1
Maximum_Bidirectional_Voice_Calls1 = Max_Frame_Rate_C7_round / Codec_Frame_rate
self.Maximum_Bidirectional_Voice_Calls = round(Maximum_Bidirectional_Voice_Calls1, 2)
if self.Maximum_Theoretical_R_value < 0:
self.Estimated_MOS_Score = 1
else:
if Maximum_Theoretical_R_value > 100:
Estimated_MOS_Score = 4.5
if self.Maximum_Theoretical_R_value > 100:
self.Estimated_MOS_Score = 4.5
else:
Estimated_MOS_Score_1 = 1 + 0.035 * Maximum_Theoretical_R_value + Maximum_Theoretical_R_value * (
Maximum_Theoretical_R_value - 60) * (
100 - Maximum_Theoretical_R_value) * 7 * 0.000001
Estimated_MOS_Score = round(Estimated_MOS_Score_1, 2)
Estimated_MOS_Score_1 = 1 + 0.035 * self.Maximum_Theoretical_R_value + self.Maximum_Theoretical_R_value * (
self.Maximum_Theoretical_R_value - 60) * (
100 - self.Maximum_Theoretical_R_value) * 7 * 0.000001
self.Estimated_MOS_Score = round(Estimated_MOS_Score_1, 2)
def get_result(self):
print("\n" + "******************Station : 11abgCalculator*****************************" + "\n")
print("Theoretical Maximum Offered Load" + "\n")
print("1 Client:")
All_theoretical_output = {'Packet Interval(usec)': self.Client_1_new, 'Max Frame Rate(fps)': self.Max_Frame_Rate_C1_round,
'Max. Offered Load (802.11)(Mb/s)': self.Max_Offered_Load_C1_new,
'Offered Load Per 802.11 Client(Mb/s)': self.Offered_Load_Per_Client1_new,
'Offered Load (802.3 Side)(Mb/s)': self.Offered_Load_C1_new,
'IP Throughput (802.11 -> 802.3)(Mb/s)': self.IP_Throughput_C1_new}
print(json.dumps(All_theoretical_output, indent=4))
print("\n" + "Theroretical Voice Call Capacity" + "\n")
All_theoretical_voice = {'Maximum Theoretical R-value': Maximum_Theoretical_R_value,
'Estimated MOS Score': Estimated_MOS_Score,
'Maximum Bidirectional Voice Calls(calls)': Maximum_Bidirectional}
All_theoretical_voice = {'Maximum Theoretical R-value': self.Maximum_Theoretical_R_value,
'Estimated MOS Score': self.Estimated_MOS_Score,
'Maximum Bidirectional Voice Calls(calls)': self.Maximum_Bidirectional_Voice_Calls}
print(json.dumps(All_theoretical_voice, indent=4))
##Class to take all user input (802.11n Standard)
class n11_calculator():
class n11_calculator(abg11_calculator):
def __init__(self, Traffic_Type, Data_Voice_MCS, Channel_Bandwidth, Guard_Interval_value, Highest_Basic_str,
Encryption, QoS,
IP_Packets_MSDU_str, MAC_Frames_per_A_MPDU_str, BSS_Basic_Rate, MAC_MPDU_Size_Data_Traffic,
Codec_Type_Voice_Traffic, PLCP, CWmin, RTS_CTS_Handshake, CTS_to_self_protection):
self.Traffic_Type = Traffic_Type
Codec_Type, PLCP, CWmin, RTS_CTS_Handshake, CTS_to_self,PHY_Bit_Rate=None,MAC_Frame_802_11=None,Basic_Rate_Set=None,Preamble=None,slot_name=None):
super().__init__(Traffic_Type, PHY_Bit_Rate, Encryption, QoS, MAC_Frame_802_11, Basic_Rate_Set, Preamble,
slot_name, Codec_Type, RTS_CTS_Handshake, CTS_to_self)
self.Data_Voice_MCS = Data_Voice_MCS
self.Channel_Bandwidth = Channel_Bandwidth
self.Guard_Interval_value = Guard_Interval_value
self.Highest_Basic_str = Highest_Basic_str
self.Encryption = Encryption
self.QoS = QoS
self.IP_Packets_MSDU_str = IP_Packets_MSDU_str
self.MAC_Frames_per_A_MPDU_str = MAC_Frames_per_A_MPDU_str
self.BSS_Basic_Rate = BSS_Basic_Rate
self.MAC_MPDU_Size_Data_Traffic = MAC_MPDU_Size_Data_Traffic
self.Codec_Type_Voice_Traffic = Codec_Type_Voice_Traffic
self.PLCP = PLCP
self.CWmin = CWmin
self.RTS_CTS_Handshake = RTS_CTS_Handshake
self.CTS_to_self_protection = CTS_to_self_protection
# This function is for calculate intermediate values and Theoretical values
def input_parameter(self):
def calculate(self):
global HT_data_temp
global temp_value
SIFS = 16.00
@@ -773,17 +849,17 @@ class n11_calculator():
Encrypt_Hdr = 16
# c36 Codec IP Packet Size
if "G.711" in self.Codec_Type_Voice_Traffic:
if "G.711" in self.Codec_Type:
Codec_IP_Packet_Size = 200
Codec_Frame_Rate = 100
else:
if "G.723" in self.Codec_Type_Voice_Traffic:
if "G.723" in self.Codec_Type:
Codec_IP_Packet_Size = 60
Codec_Frame_Rate = 67
else:
if "G.729" in self.Codec_Type_Voice_Traffic:
if "G.729" in self.Codec_Type:
Codec_IP_Packet_Size = 60
Codec_Frame_Rate = 100
@@ -1096,7 +1172,7 @@ class n11_calculator():
CTS_to_self_Handshake_Overhead = 0
else:
if "Yes" in self.CTS_to_self_protection:
if "Yes" in self.CTS_to_self:
if "20" in self.Channel_Bandwidth:
CTS_to_self_Handshake_Overhead = 20 + 4 * int((22 + 14 * 8 + 24 * 4 - 1) / (24 * 4)) + SIFS
@@ -1112,7 +1188,7 @@ class n11_calculator():
MAC_PPDU_Interval_1 = RTS_CTS_Handshake_Overhead + CTS_to_self_Handshake_Overhead + Ttxframe + Ack_Response_Overhead + BlockAck_Response_Overhead + DIFS + (
MeanBackoff / 1)
Client_1_new = format(MAC_PPDU_Interval_1, '.2f')
self.Client_1_new = format(MAC_PPDU_Interval_1, '.2f')
MAC_PPDU_Interval_2 = RTS_CTS_Handshake_Overhead + CTS_to_self_Handshake_Overhead + Ttxframe + Ack_Response_Overhead + BlockAck_Response_Overhead + DIFS + (
MeanBackoff / 2)
Client_2_new = format(MAC_PPDU_Interval_2, '.2f')
@@ -1135,7 +1211,7 @@ class n11_calculator():
# Max PPDU Rate
Max_PPDU_Rate_1 = 1000000 / MAC_PPDU_Interval_1
Client_8_new = format(Max_PPDU_Rate_1, '.2f')
self.Client_8_new = format(Max_PPDU_Rate_1, '.2f')
Max_PPDU_Rate_2 = 1000000 / MAC_PPDU_Interval_2
Client_9_new = format(Max_PPDU_Rate_2, '.2f')
Max_PPDU_Rate_3 = 1000000 / MAC_PPDU_Interval_3
@@ -1167,7 +1243,7 @@ class n11_calculator():
Max_MAC_MPDU_Rate_6 = Max_PPDU_Rate_6
Max_MAC_MPDU_Rate_7 = Max_PPDU_Rate_7
Client_15_new = round(Max_MAC_MPDU_Rate_1)
self.Client_15_new = round(Max_MAC_MPDU_Rate_1)
Client_16_new = round(Max_MAC_MPDU_Rate_2)
Client_17_new = round(Max_MAC_MPDU_Rate_3)
Client_18_new = round(Max_MAC_MPDU_Rate_4)
@@ -1195,7 +1271,7 @@ class n11_calculator():
Max_MAC_MSDU_Rate_6 = Max_MAC_MPDU_Rate_6
Max_MAC_MSDU_Rate_7 = Max_MAC_MPDU_Rate_7
Client_22_new = round(Max_MAC_MSDU_Rate_1)
self.Client_22_new = round(Max_MAC_MSDU_Rate_1)
Client_23_new = round(Max_MAC_MSDU_Rate_2)
Client_24_new = round(Max_MAC_MSDU_Rate_3)
Client_25_new = round(Max_MAC_MSDU_Rate_4)
@@ -1212,7 +1288,7 @@ class n11_calculator():
Max_802_11_MAC_Frame_Data_Rate_6 = Max_MAC_MPDU_Rate_6 * MAC_MPDU_Size * 8 / 1000000
Max_802_11_MAC_Frame_Data_Rate_7 = Max_MAC_MPDU_Rate_7 * MAC_MPDU_Size * 8 / 1000000
Client_29_new = format(Max_802_11_MAC_Frame_Data_Rate_1, '.3f')
self.Client_29_new = format(Max_802_11_MAC_Frame_Data_Rate_1, '.3f')
Client_30_new = format(Max_802_11_MAC_Frame_Data_Rate_2, '.3f')
Client_31_new = format(Max_802_11_MAC_Frame_Data_Rate_3, '.3f')
Client_32_new = format(Max_802_11_MAC_Frame_Data_Rate_4, '.3f')
@@ -1230,7 +1306,7 @@ class n11_calculator():
Max_802_11_MAC_Payload_Goodput_6 = MSDU * 8 * Max_MAC_MSDU_Rate_6 / 1000000
Max_802_11_MAC_Payload_Goodput_7 = MSDU * 8 * Max_MAC_MSDU_Rate_7 / 1000000
Client_36_new = format(Max_802_11_MAC_Payload_Goodput_1, '.3f')
self.Client_36_new = format(Max_802_11_MAC_Payload_Goodput_1, '.3f')
Client_37_new = format(Max_802_11_MAC_Payload_Goodput_2, '.3f')
Client_38_new = format(Max_802_11_MAC_Payload_Goodput_3, '.3f')
Client_39_new = format(Max_802_11_MAC_Payload_Goodput_4, '.3f')
@@ -1248,7 +1324,7 @@ class n11_calculator():
MAC_Goodput_Per_802_11_Client_6 = Max_802_11_MAC_Payload_Goodput_6 / 50
MAC_Goodput_Per_802_11_Client_7 = Max_802_11_MAC_Payload_Goodput_7 / 100
Client_43_new = format(MAC_Goodput_Per_802_11_Client_1, '.3f')
self.Client_43_new = format(MAC_Goodput_Per_802_11_Client_1, '.3f')
Client_44_new = format(MAC_Goodput_Per_802_11_Client_2, '.3f')
Client_45_new = format(MAC_Goodput_Per_802_11_Client_3, '.3f')
Client_46_new = format(MAC_Goodput_Per_802_11_Client_4, '.3f')
@@ -1268,7 +1344,7 @@ class n11_calculator():
Offered_Load_8023_Side_5 = Max_MAC_MSDU_Rate_5 * Ethernet_value * 8 / 1000000
Offered_Load_8023_Side_6 = Max_MAC_MSDU_Rate_6 * Ethernet_value * 8 / 1000000
Offered_Load_8023_Side_7 = Max_MAC_MSDU_Rate_7 * Ethernet_value * 8 / 1000000
Client_50_new = format(Offered_Load_8023_Side_1, '.3f')
self.Client_50_new = format(Offered_Load_8023_Side_1, '.3f')
Client_51_new = format(Offered_Load_8023_Side_2, '.3f')
Client_52_new = format(Offered_Load_8023_Side_3, '.3f')
Client_53_new = format(Offered_Load_8023_Side_4, '.3f')
@@ -1277,7 +1353,7 @@ class n11_calculator():
Client_56_new = format(Offered_Load_8023_Side_7, '.3f')
else:
Client_50_new = "N/A"
self.Client_50_new = "N/A"
Client_51_new = "N/A"
Client_52_new = "N/A"
Client_53_new = "N/A"
@@ -1294,7 +1370,7 @@ class n11_calculator():
IP_Goodput_802_11_8023_5 = Max_MAC_MSDU_Rate_5 * ip_1 * 8 / 1000000
IP_Goodput_802_11_8023_6 = Max_MAC_MSDU_Rate_6 * ip_1 * 8 / 1000000
IP_Goodput_802_11_8023_7 = Max_MAC_MSDU_Rate_7 * ip_1 * 8 / 1000000
Client_57_new = format(IP_Goodput_802_11_8023_1, '.3f')
self.Client_57_new = format(IP_Goodput_802_11_8023_1, '.3f')
Client_58_new = format(IP_Goodput_802_11_8023_2, '.3f')
Client_59_new = format(IP_Goodput_802_11_8023_3, '.3f')
Client_60_new = format(IP_Goodput_802_11_8023_4, '.3f')
@@ -1303,7 +1379,7 @@ class n11_calculator():
Client_63_new = format(IP_Goodput_802_11_8023_7, '.3f')
else:
Client_57_new = "N/A"
self.Client_57_new = "N/A"
Client_58_new = "N/A"
Client_59_new = "N/A"
Client_60_new = "N/A"
@@ -1315,31 +1391,31 @@ class n11_calculator():
# c53
if "Data" in self.Traffic_Type:
Maximum_Theoretical_R_value = "N/A"
Estimated_MOS_Score = "N/A"
self.Maximum_Theoretical_R_value = "N/A"
self.Estimated_MOS_Score = "N/A"
else:
if "G.711" in self.Codec_Type_Voice_Traffic:
Maximum_Theoretical_R_value = 85.9
if "G.711" in self.Codec_Type:
self.Maximum_Theoretical_R_value = 85.9
else:
if "G.723" in self.Codec_Type_Voice_Traffic:
Maximum_Theoretical_R_value = 72.9
if "G.723" in self.Codec_Type:
self.Maximum_Theoretical_R_value = 72.9
else:
if "G.729" in self.Codec_Type_Voice_Traffic:
Maximum_Theoretical_R_value = 81.7
if "G.729" in self.Codec_Type:
self.Maximum_Theoretical_R_value = 81.7
else:
Maximum_Theoretical_R_value = 93.2
self.Maximum_Theoretical_R_value = 93.2
if Maximum_Theoretical_R_value < 0:
Estimated_MOS_Score = 1
if self.Maximum_Theoretical_R_value < 0:
self.Estimated_MOS_Score = 1
else:
if Maximum_Theoretical_R_value > 100:
Estimated_MOS_Score = 4.5
if self.Maximum_Theoretical_R_value > 100:
self.Estimated_MOS_Score = 4.5
else:
Estimated_MOS_Score_1 = (
1 + 0.035 * Maximum_Theoretical_R_value + Maximum_Theoretical_R_value * (
Maximum_Theoretical_R_value - 60) * (
100 - Maximum_Theoretical_R_value) * 7 * 0.000001)
Estimated_MOS_Score = format(Estimated_MOS_Score_1, '.2f')
self.Estimated_MOS_Score_1 = (
1 + 0.035 * self.Maximum_Theoretical_R_value + self.Maximum_Theoretical_R_value * (
self.Maximum_Theoretical_R_value - 60) * (
100 - self.Maximum_Theoretical_R_value) * 7 * 0.000001)
self.Estimated_MOS_Score = format(self.Estimated_MOS_Score_1, '.2f')
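The R-value to MOS mapping computed above is the standard ITU-T G.107 E-model conversion (clamped to 1.0 below R=0 and 4.5 above R=100). A minimal standalone sketch of that formula, with a hypothetical function name:

```python
def r_to_mos(r):
    """Convert an E-model R-value to an estimated MOS score (ITU-T G.107)."""
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    # Same cubic mapping used in the calculator above
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

print(round(r_to_mos(93.2), 2))  # 4.41 -- the default R-value used above
```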
# Voice_Call_Range
try:
@@ -1376,58 +1452,57 @@ class n11_calculator():
pass
if "Data" in self.Traffic_Type:
Maximum_Bidirectional_Voice_Calls = "N/A"
self.Maximum_Bidirectional_Voice_Calls = "N/A"
else:
Maximum_Bidirectional_Voice_Calls = round(Maximum_Bidirectional, 2)
self.Maximum_Bidirectional_Voice_Calls = round(Maximum_Bidirectional, 2)
def get_result(self):
print("\n" + "******************Station : 11n Calculator*****************************" + "\n")
print("Theoretical Maximum Offered Load" + "\n")
print("1 Client:")
All_theoretical_output = {'MAC PPDU Interval(usec)': Client_1_new, 'Max PPDU Rate(fps)': Client_8_new,
'Max MAC MPDU Rate': Client_15_new,
'Max MAC MSDU Rate': Client_22_new,
'Max. 802.11 MAC Frame Data Rate(Mb/s)': Client_29_new,
'Max. 802.11 MAC Payload Goodput(Mb/s)': Client_36_new,
'MAC Goodput Per 802.11 Client(Mb/s)': Client_43_new,
'Offered Load (802.3 Side)(Mb/s)': Client_50_new,
'IP Goodput (802.11 -> 802.3)(Mb/s)': Client_57_new}
All_theoretical_output = {'MAC PPDU Interval(usec)': self.Client_1_new,
'Max PPDU Rate(fps)': self.Client_8_new,
'Max MAC MPDU Rate': self.Client_15_new,
'Max MAC MSDU Rate': self.Client_22_new,
'Max. 802.11 MAC Frame Data Rate(Mb/s)': self.Client_29_new,
'Max. 802.11 MAC Payload Goodput(Mb/s)': self.Client_36_new,
'MAC Goodput Per 802.11 Client(Mb/s)': self.Client_43_new,
'Offered Load (802.3 Side)(Mb/s)': self.Client_50_new,
'IP Goodput (802.11 -> 802.3)(Mb/s)': self.Client_57_new}
print(json.dumps(All_theoretical_output, indent=4))
print("\n" + "Theoretical Voice Call Capacity" + "\n")
All_theoretical_voice = {'Maximum Theoretical R-value': Maximum_Theoretical_R_value,
'Estimated MOS Score': Estimated_MOS_Score,
'Maximum Bidirectional Voice Calls(calls)': Maximum_Bidirectional_Voice_Calls}
All_theoretical_voice = {'Maximum Theoretical R-value': self.Maximum_Theoretical_R_value,
'Estimated MOS Score': self.Estimated_MOS_Score,
'Maximum Bidirectional Voice Calls(calls)': self.Maximum_Bidirectional_Voice_Calls}
print(json.dumps(All_theoretical_voice, indent=4))
## Class to take all user input (802.11ac Standard)
class ac11_calculator():
class ac11_calculator(n11_calculator):
def __init__(self, Traffic_Type, Data_Voice_MCS, spatial, Channel_Bandwidth, Guard_Interval_value,
Highest_Basic_str, Encryption, QoS,
Highest_Basic_str, Encryption, QoS, IP_Packets_MSDU_str, MAC_Frames_per_A_MPDU_str, BSS_Basic_Rate, MAC_MPDU_Size_Data_Traffic,
Codec_Type, CWmin, RTS_CTS, PLCP=None, RTS_CTS_Handshake=None, CTS_to_self=None):
super().__init__(Traffic_Type, Data_Voice_MCS, Channel_Bandwidth, Guard_Interval_value, Highest_Basic_str,
Encryption, QoS,
IP_Packets_MSDU_str, MAC_Frames_per_A_MPDU_str, BSS_Basic_Rate, MAC_MPDU_Size_Data_Traffic,
Codec_Type_Voice_Traffic, CWmin, RTS_CTS):
self.Traffic_Type = Traffic_Type
self.Data_Voice_MCS = Data_Voice_MCS
self.Channel_Bandwidth = Channel_Bandwidth
self.Guard_Interval_value = Guard_Interval_value
self.Highest_Basic_str = Highest_Basic_str
self.Encryption = Encryption
self.QoS = QoS
self.IP_Packets_MSDU_str = IP_Packets_MSDU_str
self.MAC_Frames_per_A_MPDU_str = MAC_Frames_per_A_MPDU_str
self.BSS_Basic_Rate = BSS_Basic_Rate
self.MAC_MPDU_Size_Data_Traffic = MAC_MPDU_Size_Data_Traffic
self.Codec_Type_Voice_Traffic = Codec_Type_Voice_Traffic
self.CWmin = CWmin
self.RTS_CTS = RTS_CTS
Codec_Type, PLCP, CWmin, RTS_CTS_Handshake, CTS_to_self)
self.spatial = spatial
self.RTS_CTS = RTS_CTS
# This function is for calculate intermediate values and Theoretical values
def input_parameter(self):
def calculate(self):
SIFS = 16.00
DIFS = 34.00
@@ -1572,16 +1647,16 @@ class ac11_calculator():
MeanBackoff = CWmin_leave_alone_for_default * Slot_Time / 2
IP_Packets_MSDU = int(self.IP_Packets_MSDU_str)
if "Mixed" in self.Codec_Type_Voice_Traffic:
if "Mixed" in self.Codec_Type:
plcp = 1
elif "Greenfield" in self.Codec_Type_Voice_Traffic:
elif "Greenfield" in self.Codec_Type:
plcp = 2
RTS_CTS_Handshake = 1
if "No" in self.RTS_CTS:
CTS_to_self_protection = 1
CTS_to_self = 1
elif "Yes" in self.RTS_CTS:
CTS_to_self_protection = 2
CTS_to_self = 2
# g24 QoS Hdr
@@ -1836,7 +1911,7 @@ class ac11_calculator():
if RTS_CTS_Handshake == 2:
CTS_to_self_Handshake_Overhead = 0
else:
if CTS_to_self_protection == 2:
if CTS_to_self == 2:
if "20" in self.Channel_Bandwidth:
CTS_to_self_Handshake_Overhead = 20 + 4 * int((22 + 14 * 8 + 24 * 4 - 1) / (24 * 4)) + SIFS
else:
@@ -1869,7 +1944,7 @@ class ac11_calculator():
MAC_PPDU_Interval_1 = RTS_CTS_Handshake_Overhead + CTS_to_self_Handshake_Overhead + Ttxframe + Ack_Response_Overhead + BlockAck_Response_Overhead + DIFS + (
MeanBackoff / 1)
Client_1_new = format(MAC_PPDU_Interval_1, '.2f')
self.Client_1_new = format(MAC_PPDU_Interval_1, '.2f')
MAC_PPDU_Interval_2 = RTS_CTS_Handshake_Overhead + CTS_to_self_Handshake_Overhead + Ttxframe + Ack_Response_Overhead + BlockAck_Response_Overhead + DIFS + (
MeanBackoff / 2)
Client_2_new = format(MAC_PPDU_Interval_2, '.2f')
@@ -1894,7 +1969,7 @@ class ac11_calculator():
# Max PPDU Rate
Max_PPDU_Rate_1 = 1000000 / MAC_PPDU_Interval_1
Client_8_new = format(Max_PPDU_Rate_1, '.2f')
self.Client_8_new = format(Max_PPDU_Rate_1, '.2f')
Max_PPDU_Rate_2 = 1000000 / MAC_PPDU_Interval_2
Client_9_new = format(Max_PPDU_Rate_2, '.2f')
Max_PPDU_Rate_3 = 1000000 / MAC_PPDU_Interval_3
@@ -1927,7 +2002,7 @@ class ac11_calculator():
Max_MAC_MPDU_Rate_6 = Max_PPDU_Rate_6
Max_MAC_MPDU_Rate_7 = Max_PPDU_Rate_7
Client_15_new = round(Max_MAC_MPDU_Rate_1)
self.Client_15_new = round(Max_MAC_MPDU_Rate_1)
Client_16_new = round(Max_MAC_MPDU_Rate_2)
Client_17_new = round(Max_MAC_MPDU_Rate_3)
Client_18_new = round(Max_MAC_MPDU_Rate_4)
@@ -1955,7 +2030,7 @@ class ac11_calculator():
Max_MAC_MSDU_Rate_6 = Max_MAC_MPDU_Rate_6
Max_MAC_MSDU_Rate_7 = Max_MAC_MPDU_Rate_7
Client_22_new = round(Max_MAC_MSDU_Rate_1)
self.Client_22_new = round(Max_MAC_MSDU_Rate_1)
Client_23_new = round(Max_MAC_MSDU_Rate_2)
Client_24_new = round(Max_MAC_MSDU_Rate_3)
Client_25_new = round(Max_MAC_MSDU_Rate_4)
@@ -1973,7 +2048,7 @@ class ac11_calculator():
Max_802_11_MAC_Frame_Data_Rate_6 = Max_MAC_MPDU_Rate_6 * MAC_MPDU_Size * 8 / 1000000
Max_802_11_MAC_Frame_Data_Rate_7 = Max_MAC_MPDU_Rate_7 * MAC_MPDU_Size * 8 / 1000000
Client_29_new = format(Max_802_11_MAC_Frame_Data_Rate_1, '.3f')
self.Client_29_new = format(Max_802_11_MAC_Frame_Data_Rate_1, '.3f')
Client_30_new = format(Max_802_11_MAC_Frame_Data_Rate_2, '.3f')
Client_31_new = format(Max_802_11_MAC_Frame_Data_Rate_3, '.3f')
Client_32_new = format(Max_802_11_MAC_Frame_Data_Rate_4, '.3f')
@@ -1991,7 +2066,7 @@ class ac11_calculator():
Max_802_11_MAC_Payload_Goodput_6 = MSDU * 8 * Max_MAC_MSDU_Rate_6 / 1000000
Max_802_11_MAC_Payload_Goodput_7 = MSDU * 8 * Max_MAC_MSDU_Rate_7 / 1000000
Client_36_new = format(Max_802_11_MAC_Payload_Goodput_1, '.3f')
self.Client_36_new = format(Max_802_11_MAC_Payload_Goodput_1, '.3f')
Client_37_new = format(Max_802_11_MAC_Payload_Goodput_2, '.3f')
Client_38_new = format(Max_802_11_MAC_Payload_Goodput_3, '.3f')
Client_39_new = format(Max_802_11_MAC_Payload_Goodput_4, '.3f')
@@ -2009,7 +2084,7 @@ class ac11_calculator():
MAC_Goodput_Per_802_11_Client_6 = Max_802_11_MAC_Payload_Goodput_6 / 50
MAC_Goodput_Per_802_11_Client_7 = Max_802_11_MAC_Payload_Goodput_7 / 100
Client_43_new = format(MAC_Goodput_Per_802_11_Client_1, '.3f')
self.Client_43_new = format(MAC_Goodput_Per_802_11_Client_1, '.3f')
Client_44_new = format(MAC_Goodput_Per_802_11_Client_2, '.3f')
Client_45_new = format(MAC_Goodput_Per_802_11_Client_3, '.3f')
Client_46_new = format(MAC_Goodput_Per_802_11_Client_4, '.3f')
@@ -2028,7 +2103,7 @@ class ac11_calculator():
Offered_Load_8023_Side_5 = Max_MAC_MSDU_Rate_5 * Ethernet_value * 8 / 1000000
Offered_Load_8023_Side_6 = Max_MAC_MSDU_Rate_6 * Ethernet_value * 8 / 1000000
Offered_Load_8023_Side_7 = Max_MAC_MSDU_Rate_7 * Ethernet_value * 8 / 1000000
Client_50_new = format(Offered_Load_8023_Side_1, '.3f')
self.Client_50_new = format(Offered_Load_8023_Side_1, '.3f')
Client_51_new = format(Offered_Load_8023_Side_2, '.3f')
Client_52_new = format(Offered_Load_8023_Side_3, '.3f')
Client_53_new = format(Offered_Load_8023_Side_4, '.3f')
@@ -2037,7 +2112,7 @@ class ac11_calculator():
Client_56_new = format(Offered_Load_8023_Side_7, '.3f')
else:
Client_50_new = "N/A"
self.Client_50_new = "N/A"
Client_51_new = "N/A"
Client_52_new = "N/A"
Client_53_new = "N/A"
@@ -2054,7 +2129,7 @@ class ac11_calculator():
IP_Goodput_802_11_8023_5 = Max_MAC_MSDU_Rate_5 * ip_1 * 8 / 1000000
IP_Goodput_802_11_8023_6 = Max_MAC_MSDU_Rate_6 * ip_1 * 8 / 1000000
IP_Goodput_802_11_8023_7 = Max_MAC_MSDU_Rate_7 * ip_1 * 8 / 1000000
Client_57_new = format(IP_Goodput_802_11_8023_1, '.3f')
self.Client_57_new = format(IP_Goodput_802_11_8023_1, '.3f')
Client_58_new = format(IP_Goodput_802_11_8023_2, '.3f')
Client_59_new = format(IP_Goodput_802_11_8023_3, '.3f')
Client_60_new = format(IP_Goodput_802_11_8023_4, '.3f')
@@ -2063,7 +2138,7 @@ class ac11_calculator():
Client_63_new = format(IP_Goodput_802_11_8023_7, '.3f')
else:
Client_57_new = "N/A"
self.Client_57_new = "N/A"
Client_58_new = "N/A"
Client_59_new = "N/A"
Client_60_new = "N/A"
@@ -2074,20 +2149,20 @@ class ac11_calculator():
# Theoretical Voice Call Capacity
if "Data" in self.Traffic_Type:
Maximum_Theoretical_R_value = "N/A"
Estimated_MOS_Score = "N/A"
self.Maximum_Theoretical_R_value = "N/A"
self.Estimated_MOS_Score = "N/A"
else:
Maximum_Theoretical_R_value = 85.9
if Maximum_Theoretical_R_value < 0:
Estimated_MOS_Score = 1
self.Maximum_Theoretical_R_value = 85.9
if self.Maximum_Theoretical_R_value < 0:
self.Estimated_MOS_Score = 1
else:
if Maximum_Theoretical_R_value > 100:
Estimated_MOS_Score = 4.5
if self.Maximum_Theoretical_R_value > 100:
self.Estimated_MOS_Score = 4.5
else:
Estimated_MOS_Score_1 = (1 + 0.035 * Maximum_Theoretical_R_value + Maximum_Theoretical_R_value * (
Maximum_Theoretical_R_value - 60) * (100 - Maximum_Theoretical_R_value) * 7 * 0.000001)
Estimated_MOS_Score = format(Estimated_MOS_Score_1, '.2f')
Estimated_MOS_Score_1 = (1 + 0.035 * self.Maximum_Theoretical_R_value + self.Maximum_Theoretical_R_value * (
self.Maximum_Theoretical_R_value - 60) * (100 - self.Maximum_Theoretical_R_value) * 7 * 0.000001)
self.Estimated_MOS_Score = format(Estimated_MOS_Score_1, '.2f')
# Voice_Call_Range
@@ -2126,26 +2201,29 @@ class ac11_calculator():
pass
if "Data" in self.Traffic_Type:
Maximum_Bidirectional_Voice_Calls = "N/A"
self.Maximum_Bidirectional_Voice_Calls = "N/A"
else:
Maximum_Bidirectional_Voice_Calls = round(Maximum_Bidirectional, 2)
self.Maximum_Bidirectional_Voice_Calls = round(Maximum_Bidirectional, 2)
def get_result(self):
print("\n" + "******************Station : 11ac Calculator*****************************" + "\n")
print("Theoretical Maximum Offered Load" + "\n")
print("1 Client:")
All_theoretical_output = {'MAC PPDU Interval(usec)': Client_1_new, 'Max PPDU Rate(fps)': Client_8_new,
'Max MAC MPDU Rate': Client_15_new,
'Max MAC MSDU Rate': Client_22_new,
'Max. 802.11 MAC Frame Data Rate(Mb/s)': Client_29_new,
'Max. 802.11 MAC Payload Goodput(Mb/s)': Client_36_new,
'MAC Goodput Per 802.11 Client(Mb/s)': Client_43_new,
'Offered Load (802.3 Side)(Mb/s)': Client_50_new,
'IP Goodput (802.11 -> 802.3)(Mb/s)': Client_57_new}
All_theoretical_output = {'MAC PPDU Interval(usec)': self.Client_1_new, 'Max PPDU Rate(fps)': self.Client_8_new,
'Max MAC MPDU Rate': self.Client_15_new,
'Max MAC MSDU Rate': self.Client_22_new,
'Max. 802.11 MAC Frame Data Rate(Mb/s)': self.Client_29_new,
'Max. 802.11 MAC Payload Goodput(Mb/s)': self.Client_36_new,
'MAC Goodput Per 802.11 Client(Mb/s)': self.Client_43_new,
'Offered Load (802.3 Side)(Mb/s)': self.Client_50_new,
'IP Goodput (802.11 -> 802.3)(Mb/s)': self.Client_57_new}
print(json.dumps(All_theoretical_output, indent=4))
print("\n" + "Theoretical Voice Call Capacity" + "\n")
All_theoretical_voice = {'Maximum Theoretical R-value': Maximum_Theoretical_R_value,
'Estimated MOS Score': Estimated_MOS_Score,
'Maximum Bidirectional Voice Calls(calls)': Maximum_Bidirectional_Voice_Calls}
print(json.dumps(All_theoretical_voice, indent=4))
All_theoretical_voice = {'Maximum Theoretical R-value': self.Maximum_Theoretical_R_value,
'Estimated MOS Score': self.Estimated_MOS_Score,
'Maximum Bidirectional Voice Calls(calls)': self.Maximum_Bidirectional_Voice_Calls}
print(json.dumps(All_theoretical_voice, indent=4))
@@ -16,6 +16,7 @@ if sys.version_info[0] != 3:
import argparse
import json
import logging
import pprint
import traceback
import time
from time import sleep
@@ -25,11 +26,7 @@ try:
import thread
except ImportError:
import _thread as thread
import pprint
import LANforge
from LANforge import LFRequest
from LANforge import LFUtils
from LANforge.LFUtils import NA
cre={
"phy": re.compile(r'^(1\.\d+):\s+(\S+)\s+\(phy', re.I),
@@ -69,29 +66,46 @@ rebank = {
"ifname" : re.compile("IFNAME=(\S+)")
}
websock = None
host = "localhost"
base_url = None
port = 8081
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def usage():
print("Example: %s --host 192.168.1.101 --port 8081\n" % __file__)
def main():
global websock
host = "localhost"
base_url = "ws://%s:8081"%host
resource_id = 1 # typically you're using resource 1 in stand alone realm
# global host
# global base_url
# resource_id = 1 # typically you're using resource 1 in stand alone realm
parser = argparse.ArgumentParser(description="test creating a station")
parser.add_argument("-m", "--host", type=str, help="json host to connect to")
parser.add_argument("-m", "--host", type=str, help="websocket host to connect to")
parser.add_argument("-p", "--port", type=str, help="websocket port")
args = None
host = "unset"
base_url = "unset"
try:
args = parser.parse_args()
if (args.host is not None):
host = args.host,
baseurl = base_url = "ws://%s:8081"%host
args = parser.parse_args()
if (args.host is None):
host = "localhost"
elif (type(args) is tuple) or (type(args) is list):
host = args.host[0]
else:
host = args.host
base_url = "ws://%s:%s" % (host, port)
except Exception as e:
logging.exception(e)
usage()
exit(2)
print("Exception: %s" % e)
logging.exception(e)
usage()
exit(2)
# open websocket
# print("Main: base_url: %s, host:%s, port:%s" % (base_url, host, port))
websock = start_websocket(base_url, websock)
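The host/port parsing above is easy to get wrong (note the earlier `host = args.host,` trailing comma, which silently creates a tuple). A compact sketch of the same URL-building logic, with a hypothetical helper name and assumed defaults:

```python
import argparse

def build_ws_url(argv):
    """Parse --host/--port and build the websocket URL (sketch; defaults assumed)."""
    parser = argparse.ArgumentParser(description="test creating a station")
    parser.add_argument("-m", "--host", type=str, default="localhost",
                        help="websocket host to connect to")
    parser.add_argument("-p", "--port", type=int, default=8081,
                        help="websocket port")
    args = parser.parse_args(argv)
    # No trailing comma after args.host -- that would turn it into a tuple
    return "ws://%s:%s" % (args.host, args.port)

print(build_ws_url(["--host", "192.168.1.101"]))  # ws://192.168.1.101:8081
```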
@@ -125,16 +139,16 @@ def sock_filter(wsock, text):
if (test in message["details"]):
return;
except KeyError:
print ("Message lacks key 'details'")
print("Message lacks key 'details'")
try:
if ("wifi-event" in message.keys()):
for test in ignore:
#print (" is ",test, " in ", message["wifi-event"])
# print (" is ",test, " in ", message["wifi-event"])
if (test in message["wifi-event"]):
return;
except KeyError:
print("Message lacks key 'wifi-event'" )
print("Message lacks key 'wifi-event'")
if (("time" in message.keys()) and ("timestamp" in message.keys())):
return
@@ -150,27 +164,28 @@ def sock_filter(wsock, text):
station_name = match_result.group(1)
if (message["is_alert"]):

print ("alert: ", message["details"])
#LFUtils.debug_printer.pprint(message)
print("alert: ", message["details"])
# LFUtils.debug_printer.pprint(message)
return
else:
#LFUtils.debug_printer.pprint(message)
# LFUtils.debug_printer.pprint(message)
if (" IP change from " in message["details"]):
if (" to 0.0.0.0" in message["details"]):
print ("e: %s.%s lost IP address",[resource,station_name])
print("e: %s.%s lost IP address", [resource, station_name])
else:
print ("e: %s.%s gained IP address",[resource,station_name])
print("e: %s.%s gained IP address", [resource, station_name])
if ("Link DOWN" in message["details"]):
return # duplicates alert
return # duplicates alert
print ("event: ", message["details"])
print("event: ", message["details"])
return
if ("wifi-event" in message.keys()):
if ("CTRL-EVENT-CONNECTED" in message["wifi-event"]):
# redundant
return
if (("CTRL-EVENT-CONNECTED - Connection to " in message["wifi-event"]) and (" complete" in message["wifi-event"])):
if (("CTRL-EVENT-CONNECTED - Connection to " in message["wifi-event"]) and (
" complete" in message["wifi-event"])):
return;
if ((": assoc " in message["wifi-event"]) and ("status: 0: Successful" in message["wifi-event"])):
return
@@ -178,61 +193,61 @@ def sock_filter(wsock, text):
try:
match_result = cre["phy"].match(message["wifi-event"])
if (match_result is not None):
#LFUtils.debug_printer.pprint(match_result)
#LFUtils.debug_printer.pprint(match_result.groups())
# LFUtils.debug_printer.pprint(match_result)
# LFUtils.debug_printer.pprint(match_result.groups())
resource = match_result.group(1)
station_name = match_result.group(2)
else:
match_result = cre["ifname"].match(message["wifi-event"])
#LFUtils.debug_printer.pprint(match_result)
#LFUtils.debug_printer.pprint(match_result.groups())
# LFUtils.debug_printer.pprint(match_result)
# LFUtils.debug_printer.pprint(match_result.groups())
if (match_result is not None):
resource = match_result.group(1)
station_name = match_result.group(2)
else:
print ("Is there some other combination??? :", message["wifi-event"])
print("Is there some other combination??? :", message["wifi-event"])
station_name = 'no-sta'
resource_name = 'no-resource'
print ("bleh!")
print("bleh!")
except Exception as ex2:
print ("No regex match:")
print("No regex match:")
print(repr(ex2))
traceback.print_exc()
sleep(1)
#print ("Determined station name: as %s.%s"%(resource, station_name))
# print ("Determined station name: as %s.%s"%(resource, station_name))
if ((": auth " in message["wifi-event"]) and ("status: 0: Successful" in message["wifi-event"])):
match_result = cre["auth"].match(message["wifi-event"])
if (match_result and match_result.groups()):
bssid = match_result.group(1)
print ("station %s.%s auth with %s"%(resource,station_name,bssid))
print("station %s.%s auth with %s" % (resource, station_name, bssid))
return
else:
print ("station %s.%s auth with ??"%(resource,station_name))
print("station %s.%s auth with ??" % (resource, station_name))
LFUtils.debug_printer.pprint(match_result)
if ("Associated with " in message["wifi-event"]):
match_result = cre["associated"].match(message["wifi-event"])
if (match_result and match_result.groups()):
bssid = match_result.group(1)
print ("station %s.%s assocated with %s"%(resource,station_name,bssid))
print("station %s.%s associated with %s" % (resource, station_name, bssid))
return
else:
print ("station %s.%s assocated with ??"%(resource,station_name))
print("station %s.%s associated with ??" % (resource, station_name))
LFUtils.debug_printer.pprint(match_result)
if (" - Connection to " in message["wifi-event"]):
match_result = cre["connected"].match(message["wifi-event"])
if (match_result and match_result.groups()):
bssid = match_result.group(1)
print ("station %s.%s connected to %s"%(resource,station_name,bssid))
print("station %s.%s connected to %s" % (resource, station_name, bssid))
return
else:
print ("station %s.%s connected to ??"%(resource,station_name))
print("station %s.%s connected to ??" % (resource, station_name))
LFUtils.debug_printer.pprint(match_result)
if ("disconnected" in message["wifi-event"]):
print ("Station %s.%s down"%(resource,station_name))
print("Station %s.%s down" % (resource, station_name))
return
if ("Trying to associate with " in message["wifi-event"]):
@@ -240,10 +255,10 @@ def sock_filter(wsock, text):
if (match_result and match_result.groups()):
bssid = match_result.group(1)
print ("station %s.%s associating with %s"%(resource,station_name,bssid))
print("station %s.%s associating with %s" % (resource, station_name, bssid))
return
else:
print ("station %s.%s associating with ??"%(resource,station_name))
print("station %s.%s associating with ??" % (resource, station_name))
LFUtils.debug_printer.pprint(match_result)
if ("Trying to authenticate" in message["wifi-event"]):
@@ -251,10 +266,10 @@ def sock_filter(wsock, text):
if (match_result and match_result.groups()):
bssid = match_result.group(1)
print ("station %s.%s authenticating with %s"%(resource,station_name,bssid))
print("station %s.%s authenticating with %s" % (resource, station_name, bssid))
return
else:
print ("station %s.%s authenticating with ??"%(resource,station_name))
print("station %s.%s authenticating with ??" % (resource, station_name))
LFUtils.debug_printer.pprint(match_result)
if ("Authenticated" in message["wifi-event"]):
@@ -262,67 +277,70 @@ def sock_filter(wsock, text):
LFUtils.debug_printer.pprint(match_result)
if (match_result and match_result.groups()):
bssid = match_result.group(1)
print ("station %s.%s authenticated with %s"%(resource,station_name,bssid))
print("station %s.%s authenticated with %s" % (resource, station_name, bssid))
else:
print ("station %s.%s authenticated with ??"%(resource,station_name))
print("station %s.%s authenticated with ??" % (resource, station_name))
print ("w: ", message["wifi-event"])
print("w: ", message["wifi-event"])
else:
print ("\nUnhandled: ")
print("\nUnhandled: ")
LFUtils.debug_printer.pprint(message)
except KeyError as kerr:
print ("# ----- Bad Key: ----- ----- ----- ----- ----- ----- ----- ----- ----- -----")
print ("input: ",text)
print (repr(kerr))
print("# ----- Bad Key: ----- ----- ----- ----- ----- ----- ----- ----- ----- -----")
print("input: ", text)
print(repr(kerr))
traceback.print_exc()
print ("# ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- -----")
print("# ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- -----")
sleep(1)
return
except json.JSONDecodeError as derr:
print ("# ----- Decode err: ----- ----- ----- ----- ----- ----- ----- ----- ----- -----")
print ("input: ",text)
print (repr(derr))
print("# ----- Decode err: ----- ----- ----- ----- ----- ----- ----- ----- ----- -----")
print("input: ", text)
print(repr(derr))
traceback.print_exc()
print ("# ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- -----")
print("# ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- -----")
sleep(1)
return
except Exception as ex:
print ("# ----- Exception: ----- ----- ----- ----- ----- ----- ----- ----- ----- -----")
print("# ----- Exception: ----- ----- ----- ----- ----- ----- ----- ----- ----- -----")
print(repr(ex))
print ("input: ",text)
print("input: ", text)
LFUtils.debug_printer.pprint(message)
traceback.print_exc()
print ("# ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- -----")
print("# ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- -----")
sleep(1)
return
# ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- -----
def m_error(wsock, err):
print ("# ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- -----\n")
print("# ----- Error: ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- -----\n")
LFUtils.debug_printer.pprint(err)
print ("# ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- -----\n")
print("# ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- -----\n")
# ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- -----
def m_open(wsock):
def run(*args):
time.sleep(0.1)
#ping = json.loads();
# ping = json.loads();
wsock.send('{"text":"ping"}')
thread.start_new_thread(run, ())
print ("started websocket client")
print("Connected...")
# ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- -----
def m_close(wsock):
LFUtils.debug_printer.pprint(wsock)
# ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- -----
def start_websocket(uri, websock):
#websocket.enableTrace(True)
websock = websocket.WebSocketApp(uri,
on_message = sock_filter,
on_error = m_error,
on_close = m_close)
on_message=sock_filter,
on_error=m_error,
on_close=m_close)
websock.on_open = m_open
websock.run_forever()
return websock
py-scripts/.gitignore (new file)
@@ -0,0 +1,2 @@
regression_test.txt
regression_test.rc
@@ -1,665 +0,0 @@
""" File under progress, not for testing
"""
import time
import threading
import os
import paramiko
from queue import Queue
from cx_time import IPv4Test
class DFS_TESTING:
def __init__(self):
pass
def set_dfs_channel_in_ap(self):
ssh = paramiko.SSHClient()  # create an SSH client object used to connect to the router
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) # automatically adds the missing host key
ssh.connect('192.168.200.190', port=22, username='root', password='Lanforge12345!xzsawq@!')
stdin, stdout, stderr = ssh.exec_command('conf_set system:wlanSettings:wlanSettingTable:wlan1:channel 52')
output = stdout.readlines()
print('\n'.join(output))
time.sleep(1)
exit(0)
def create_station_on_GUI(self,y1,y2):
global var1
self.y1 = y1
self.y2 = y2
cmd = "python3 sta_cx.py --mgr 192.168.200.13 --num_stations 1 --ssid TestAP95 --passwd lanforge --security wpa2 --radio wiphy0"
print("Current working directory: {0}".format(os.getcwd()))
os.chdir('/home/lanforge/lanforge-scripts/py-scripts')
print("Current working directory: {0}".format(os.getcwd()))
x = os.popen(cmd).read()
print("station created")
self.y1 = 'station created'
with open("data.txt", "w") as f:
f.write(x)
file = open("data.txt", "r")
for i in file:
if "channel associated is " in i:
my_list = list(i.split(" "))
print(my_list[3])
print(type(my_list[3]))
var1 = my_list[3]
print(var1)
var = var1.replace("\n", "")
if var in ("52", "56", "60", "64", "100", "104", "108", "112", "116", "120", "124", "128", "132", "136", "140"):
print('Station is on DFS Channel')
self.y2 = 'station is on DFS Channel'
else:
print('Station is on Non DFS channel')
self.y2 = 'Station is on Non DFS channel'
return (self.y1 , self.y2)
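The channel test above is a membership check against the 5 GHz DFS channel set; a reusable sketch of the same check (helper and constant names hypothetical):

```python
# 5 GHz channels subject to DFS, as enumerated in the check above
DFS_CHANNELS = {"52", "56", "60", "64", "100", "104", "108", "112",
                "116", "120", "124", "128", "132", "136", "140"}

def is_dfs_channel(channel):
    """Return True if the (string) channel number requires DFS."""
    return channel.strip() in DFS_CHANNELS

print(is_dfs_channel("52"))   # True
print(is_dfs_channel("36"))   # False
```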
''' ########### HACKRF ####################### '''
def generate_radar_at_ch52(self, r):
self.r = r
cmd = "sudo python lf_hackrf.py --pulse_width 1 --pulse_interval 1428 --pulse_count 18 --sweep_time 1000 --freq 5260000"
# print("Current working directory: {0}".format(os.getcwd()))
os.chdir('/usr/lib64/python2.7/site-packages/')
#print("Current working directory: {0}".format(os.getcwd()))
os.system(cmd)
self.r = "Radar detected"
return self.r
def generate_radar_at_ch56(self):
cmd = "sudo python lf_hackrf.py --pulse_width 1 --pulse_interval 1428 --pulse_count 18 --sweep_time 1000 --freq 5280000"
# print("Current working directory: {0}".format(os.getcwd()))
os.chdir('/usr/lib64/python2.7/site-packages/')
# print("Current working directory: {0}".format(os.getcwd()))
os.system(cmd)
def generate_radar_at_ch60(self):
cmd = "sudo python lf_hackrf.py --pulse_width 1 --pulse_interval 1428 --pulse_count 18 --sweep_time 1000 --freq 5300000"
# print("Current working directory: {0}".format(os.getcwd()))
os.chdir('/usr/lib64/python2.7/site-packages/')
# print("Current working directory: {0}".format(os.getcwd()))
os.system(cmd)
def generate_radar_at_ch64(self):
cmd = "sudo python lf_hackrf.py --pulse_width 1 --pulse_interval 1428 --pulse_count 18 --sweep_time 1000 --freq 5320000"
# print("Current working directory: {0}".format(os.getcwd()))
os.chdir('/usr/lib64/python2.7/site-packages/')
# print("Current working directory: {0}".format(os.getcwd()))
os.system(cmd)
def generate_radar_at_ch100(self,r):
self.r = r
cmd = "sudo python lf_hackrf.py --pulse_width 1 --pulse_interval 1428 --pulse_count 18 --sweep_time 1000 --freq 5500000"
# print("Current working directory: {0}".format(os.getcwd()))
os.chdir('/usr/lib64/python2.7/site-packages/')
# print("Current working directory: {0}".format(os.getcwd()))
os.system(cmd)
self.r = "Radar received"
return self.r
def generate_radar_at_ch104(self):
cmd = "sudo python lf_hackrf.py --pulse_width 1 --pulse_interval 1428 --pulse_count 18 --sweep_time 1000 --freq 5520000"
# print("Current working directory: {0}".format(os.getcwd()))
os.chdir('/usr/lib64/python2.7/site-packages/')
# print("Current working directory: {0}".format(os.getcwd()))
os.system(cmd)
def generate_radar_at_ch108(self):
cmd = "sudo python lf_hackrf.py --pulse_width 1 --pulse_interval 1428 --pulse_count 18 --sweep_time 1000 --freq 5540000"
# print("Current working directory: {0}".format(os.getcwd()))
os.chdir('/usr/lib64/python2.7/site-packages/')
# print("Current working directory: {0}".format(os.getcwd()))
os.system(cmd)
def generate_radar_at_ch112(self):
cmd = "sudo python lf_hackrf.py --pulse_width 1 --pulse_interval 1428 --pulse_count 18 --sweep_time 1000 --freq 5560000"
# print("Current working directory: {0}".format(os.getcwd()))
os.chdir('/usr/lib64/python2.7/site-packages/')
# print("Current working directory: {0}".format(os.getcwd()))
os.system(cmd)
def generate_radar_at_ch116(self):
cmd = "sudo python lf_hackrf.py --pulse_width 1 --pulse_interval 1428 --pulse_count 18 --sweep_time 1000 --freq 5580000"
# print("Current working directory: {0}".format(os.getcwd()))
os.chdir('/usr/lib64/python2.7/site-packages/')
# print("Current working directory: {0}".format(os.getcwd()))
os.system(cmd)
def generate_radar_at_ch120(self):
cmd = "sudo python lf_hackrf.py --pulse_width 1 --pulse_interval 1428 --pulse_count 18 --sweep_time 1000 --freq 5600000"
#print("Current working directory: {0}".format(os.getcwd()))
os.chdir('/usr/lib64/python2.7/site-packages/')
# print("Current working directory: {0}".format(os.getcwd()))
os.system(cmd)
def generate_radar_at_ch124(self):
cmd = "sudo python lf_hackrf.py --pulse_width 1 --pulse_interval 1428 --pulse_count 18 --sweep_time 1000 --freq 5620000"
# print("Current working directory: {0}".format(os.getcwd()))
os.chdir('/usr/lib64/python2.7/site-packages/')
# print("Current working directory: {0}".format(os.getcwd()))
os.system(cmd)
def generate_radar_at_ch128(self):
cmd = "sudo python lf_hackrf.py --pulse_width 1 --pulse_interval 1428 --pulse_count 18 --sweep_time 1000 --freq 5640000"
# print("Current working directory: {0}".format(os.getcwd()))
os.chdir('/usr/lib64/python2.7/site-packages/')
# print("Current working directory: {0}".format(os.getcwd()))
os.system(cmd)
def generate_radar_at_ch132(self):
cmd = "sudo python lf_hackrf.py --pulse_width 1 --pulse_interval 1428 --pulse_count 18 --sweep_time 1000 --freq 5660000"
# print("Current working directory: {0}".format(os.getcwd()))
os.chdir('/usr/lib64/python2.7/site-packages/')
# print("Current working directory: {0}".format(os.getcwd()))
os.system(cmd)
def generate_radar_at_ch136(self):
cmd = "sudo python lf_hackrf.py --pulse_width 1 --pulse_interval 1428 --pulse_count 18 --sweep_time 1000 --freq 5680000"
# print("Current working directory: {0}".format(os.getcwd()))
os.chdir('/usr/lib64/python2.7/site-packages/')
# print("Current working directory: {0}".format(os.getcwd()))
os.system(cmd)
def generate_radar_at_ch140(self):
cmd = "sudo python lf_hackrf.py --pulse_width 1 --pulse_interval 1428 --pulse_count 18 --sweep_time 1000 --freq 5700000"
# print("Current working directory: {0}".format(os.getcwd()))
os.chdir('/usr/lib64/python2.7/site-packages/')
# print("Current working directory: {0}".format(os.getcwd()))
os.system(cmd)
def hackrf_status_off(self):
cmd = "sudo python lf_hackrf.py --pulse_width 1 --pulse_interval 1428 --pulse_count 18 --sweep_time 1000 --freq 5220000"
# print("Current working directory: {0}".format(os.getcwd()))
os.chdir('/usr/lib64/python2.7/site-packages/')
# print("Current working directory: {0}".format(os.getcwd()))
os.system(cmd)
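The per-channel radar methods above differ only in the `--freq` value. A parameterized sketch (an assumption-level refactor, reusing the same `lf_hackrf.py` invocation and the channel-to-frequency values hard-coded in the methods above) could replace the repetition:

```python
import os

# DFS channel -> center frequency in kHz, as hard-coded in the methods above
DFS_CHANNEL_FREQ_KHZ = {
    128: 5640000,
    132: 5660000,
    136: 5680000,
    140: 5700000,
}

def hackrf_radar_cmd(freq_khz):
    """Build the lf_hackrf.py command line with the fixed pulse parameters used above."""
    return ("sudo python lf_hackrf.py --pulse_width 1 --pulse_interval 1428 "
            "--pulse_count 18 --sweep_time 1000 --freq %d" % freq_khz)

def generate_radar_at_channel(channel):
    """Run the radar generator for a DFS channel, mirroring the per-channel methods."""
    os.chdir('/usr/lib64/python2.7/site-packages/')
    os.system(hackrf_radar_cmd(DFS_CHANNEL_FREQ_KHZ[channel]))
```

With this helper, `generate_radar_at_ch128()` becomes `generate_radar_at_channel(128)`.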
def monitor_station_channel(self,m):
self.m = m
obj = IPv4Test(_host="192.168.200.13",
_port=8080,
_ssid="TestAP95",
_password="lanforge",
_security="wpa2",
_radio="wiphy0")
obj.cleanup(obj.sta_list)
obj.build()
obj.station_profile.admin_up()
obj.local_realm.wait_for_ip(obj.sta_list)
time.sleep(30)
var = obj.json_get("/port/1/1/sta0000?fields=channel")
var_1 = var['interface']['channel']
self.m = var_1
return self.m
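The channel lookup in monitor_station_channel() can be isolated into a small helper (a sketch; the response shape follows the `json_get("/port/1/1/sta0000?fields=channel")` call above):

```python
def channel_from_port_response(response):
    """Extract the 'channel' field from a /port/...?fields=channel response.

    Returns None if the response is missing or malformed instead of raising.
    """
    try:
        return response['interface']['channel']
    except (TypeError, KeyError):
        return None

# Example, mirroring the fields=channel query above:
channel_from_port_response({'interface': {'channel': '52'}})  # -> '52'
```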
def aps_radio_off(self):
pass
def aps_not_switch_automatically(self):
pass
def check_ap_channel_switching_time(self):
pass
def main():
dfs = DFS_TESTING()
que = Queue()
''' algorithm and sequence to be followed '''
print("Hackrf is ON")
print("press s --> enter --> q to stop hackrf")
dfs.hackrf_status_off()
print("Now hackrf is OFF")
# set channel on AP (Netgear)
threads_list = []
t1 = threading.Thread(target=lambda q, arg1, arg2: q.put(dfs.create_station_on_GUI(arg1, arg2)), args=(que, "", ""))
t1.start()
threads_list.append(t1)
t1.join()
# Check thread's return value
global my_var
result = que.get()
print("station creation result:", result)
my_var = result
list_1 = list(my_var)
print("my list", list_1)
if any("station is on DFS Channel" in s for s in list_1):
t2 = threading.Thread(target=lambda q, arg1: q.put(dfs.generate_radar_at_ch52(arg1)), args=(que, ""))
t2.start()
threads_list.append(t2)
t2.join()
x = que.get()
print("result", x)
else:
print("radar unreachable")
t3=threading.Thread(target=lambda q, arg1: q.put(dfs.monitor_station_channel(arg1)), args=(que, ""))
t3.start()
threads_list.append(t3)
t3.join()
y = que.get()
print("channel after radar is ", y)
if (y != "52"):
print("station is on Non DFS Channel")
else:
print("station is on DFS Channel")
"""t2 = threading.Thread(target=lambda q, arg1: q.put(dfs.generate_radar_at_ch52(arg1)), args=(que, ""))
t2.start()
threads_list.append(t2)
t2.join()"""
# Join all the threads
"""for t in threads_list:
t.join()"""
"""print("my var", my_var)
empty_list = []
list = empty_list.append(my_var)
print("list", list)"""
'''t2 = threading.Thread(target=dfs.generate_radar_at_ch100())
t2.start()
t2.join()
print("radar received")
t3 = threading.Thread(target=dfs.create_station_on_GUI())
t3.start()
t3.join()
print("station reassociated")'''
'''dfs.hackrf_status_off()
dfs.aps_radio_off()
dfs.aps_not_switch_automatically()
#generate radar and check for all dfs channels
dfs.check_ap_channel_switching_time()
#after testing turn off hackrf'''
if __name__ == '__main__':
main()
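The lambda-plus-Queue pattern used in main() to recover a thread's return value can be sketched generically:

```python
import threading
from queue import Queue

def run_in_thread(func, *args):
    """Run func(*args) on a worker thread and return its result via a Queue."""
    que = Queue()
    t = threading.Thread(target=lambda: que.put(func(*args)))
    t.start()
    t.join()          # wait for completion, as main() does with t1.join()
    return que.get()  # the value func returned

result = run_in_thread(lambda x: x * 2, 21)  # -> 42
```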

View File

@@ -1,7 +1,10 @@
# LANforge Python Scripts
This directory contains Python scripts useful for unit tests. They use the libraries in ../py-json. Please place new tests in this directory and, unless they are libraries, avoid adding Python scripts to ../py-json. Please read https://www.candelatech.com/cookbook/cli/json-python to learn how to use the LANforge client JSON API directly. Review http://www.candelatech.com/scripting_cookbook.php to understand more about scripts in general.
## Getting Started
The first step is to make sure all dependencies are installed on your system by running `update_deps.py` in this folder.
Please consider using the `LFCliBase` class as your script superclass. It will help you with a consistent set of JSON handling methods and pass and fail methods for recording test results. Below is a sample snippet that includes LFCliBase:
if 'py-json' not in sys.path:
@@ -12,11 +15,11 @@ Please consider using the `LFCliBase` class as your script superclass. It will h
from LANforge.LFUtils import *
import realm
from realm import Realm
class Eggzample(LFCliBase):
def __init__(self, lfclient_host, lfclient_port):
super().__init__(lfclient_host, lfclient_port, debug=True)
def main():
eggz = Eggzample("http://localhost", 8080)
frontpage_json = eggz.json_get("/")
@@ -25,7 +28,7 @@ Please consider using the `LFCliBase` class as your script superclass. It will h
"message": "hello world"
}
eggz.json_post("/cli-json/gossip", data, debug_=True)
if __name__ == "__main__":
main()
@@ -47,14 +50,14 @@ The above example will stimulate output on the LANforge client websocket `ws://l
* /stations: entities that are associated with your virtual access points (vAPs)
There are more URIs you can explore; these are the most useful ones.
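Those URIs can be fetched from the LANforge client's JSON API; a minimal sketch using only the standard library (the localhost:8080 defaults come from the option descriptions below; in scripts built on `LFCliBase` the equivalent call is `json_get`):

```python
import json
import urllib.request

def lf_url(uri, host="localhost", port=8080):
    """Build a LANforge client JSON API URL for a URI such as '/stations'."""
    return "http://%s:%d%s" % (host, port, uri)

def lf_json_get(uri, host="localhost", port=8080):
    """GET a URI from the LANforge client and decode the JSON body."""
    with urllib.request.urlopen(lf_url(uri, host, port)) as resp:
        return json.load(resp)
```

For example, `lf_url("/stations")` yields `http://localhost:8080/stations`.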
#### Scripts included are:
* `cicd_TipIntegration.py`: battery of TIP tests that include upgrading DUT and executing sta_connect script
* `cicd_testrail.py`:
* `function send_get`: Issues a GET request (read) against the API.
* `function send_post`: Issues a write against the API.
* `function __send_request`:
* `function get_project_id`: Gets the project ID using the project name
* `function get_run_id`: Gets the run ID using test name and project name
* `function update_testrail`: Update TestRail for a given run_id and case_id
@@ -74,7 +77,7 @@ There are more URIs you can explore, these are the more useful ones.
* `run_cv_scenario.py`:
* class `RunCvScenario`: imports the LFCliBase class.
* function `get_report_file_name`: returns report name
* function `build`: loads and sends the ports available?
* function `start`: /gui_cli takes commands keyed on 'cmd', and this function creates an array of commands
* `sta_connect.py`: This script creates a station, creates TCP and UDP traffic, runs it for a short amount of time,
and verifies whether traffic was sent and received. It also verifies that the station connected
@@ -88,7 +91,7 @@ There are more URIs you can explore, these are the more useful ones.
* function `remove_stations`: removes all stations
* function `num_associated`:
* function `clear_test_results`:
* function `run`:
* function `setup`:
* function `start`:
* function `stop`:
@@ -103,61 +106,61 @@ There are more URIs you can explore, these are the more useful ones.
* function `get_upstream_url`:
* function `compare_vals`: compares pre-test values to post-test values
* function `remove_stations`: removes all ports
* function `num_associated`:
* function `clear_test_results`
* function `setup`: verifies the upstream URL, creates stations with DHCP turned on, and creates endpoints, including UDP endpoints
* function `start`:
* function `stop`:
* function `cleanup`:
* function `main`:
* `sta_connect_example.py`: example of how to instantiate StaConnect and run the test
* `sta_connect_multi_example.py`: example of how to instantiate StaConnect, run the test, and create multiple OPEN stations, with
some stations using WPA2
* `stations_connected.py`: Contains examples of using realm to query stations and get specific information from them
* `test_ipv4_connection.py`: This script will create a variable number of stations that will attempt to connect to a chosen SSID using a provided password and security type.
The test is considered passed if all stations are able to associate and obtain IPv4 addresses
* class `IPv4Test`
* function `build`: This function will use the given parameters (Number of stations, SSID, password, and security type) to create a series of stations.
* function `start`: This function will admin-up the stations created in the build phase. It will then check all stations periodically for association and IP addresses.
This will continue until either the specified timeout has been reached or all stations obtain an IP address.
* function `stop`: This function will admin-down all stations once one of the ending criteria is met.
* function `cleanup`: This function will clean up all stations created during the test.
* command line options :
* `--mgr`: Specifies the hostname where LANforge is running. Defaults to http://localhost
* `--mgr_port`: Specifies the port to use when connecting to LANforge. Defaults to 8080
* `--ssid`: Specifies SSID to be used in the test
* `--password`: Specifies the password for the SSID to be used in the test
* `--security`: Specifies security type (WEP, WPA, WPA2, WPA3, Open) of SSID to be used in the test
* `--num_stations`: Specifies number of stations to create for the test
* `--radio`: Specifies the radio to be used in the test, e.g. wiphy0
* `--debug`: Turns on debug output for the test
* `--help`: Displays help output for the script
* `test_ipv6_connection.py`: This script will create a variable number of stations that will attempt to connect to a chosen SSID using a provided password and security type.
The test is considered passed if all stations are able to associate and obtain IPv6 addresses
* class `IPv6Test`
* function `build`: This function will use the given parameters (Number of stations, SSID, password, and security type) to create a series of stations.
* function `start`: This function will admin-up the stations created in the build phase. It will then check all stations periodically for association and IP addresses.
This will continue until either the specified timeout has been reached or all stations obtain an IP address.
* function `stop`: This function will admin-down all stations once one of the ending criteria is met.
* function `cleanup`: This function will clean up all stations created during the test.
* Command line options :
* `--mgr`: Specifies the hostname where LANforge is running. Defaults to http://localhost
* `--mgr_port`: Specifies the port to use when connecting to LANforge. Defaults to 8080
* `--ssid`: Specifies SSID to be used in the test
* `--password`: Specifies the password for the SSID to be used in the test
* `--security`: Specifies security type (WEP, WPA, WPA2, WPA3, Open) of SSID to be used in the test
* `--num_stations`: Specifies number of stations to create for the test
* `--radio`: Specifies the radio to be used in the test, e.g. wiphy0
* `--debug`: Turns on debug output for the test
* `--help`: Displays help output for the script
* `test_l3_unicast_traffic_gen.py`: This script will create stations, create traffic between the upstream port and the stations, and run traffic.
The traffic on the stations will be checked once per minute to verify that traffic is transmitted and received.
Test will exit on failure of not receiving traffic for one minute on any station.
* class `L3VariableTimeLongevity`
@@ -170,10 +173,10 @@ Test will exit on failure of not receiving traffic for one minute on any station
* `-d, --test_duration`: Determines the total length of the test. Consists of number followed by letter indicating length
10m would be 10 minutes or 3d would be 3 days. Available options for length are Day (d), Hour (h), Minute (m), or Second (s)
* `-t, --endp_type`: Specifies type of endpoint to be used in the test. Options are lf_udp, lf_udp6, lf_tcp, lf_tcp6
* `-u, --upstream_port`: This is the upstream port to be used for traffic. An upstream port is some data source on the wired LAN or WAN beyond the AP
* `-r, --radio`: This switch will determine the radio name, number of stations, ssid, and ssid password. Security type is fixed at WPA2.
Usage of this switch could look like: `--radio wiphy1 64 candelaTech-wpa2-x2048-5-3 candelaTech-wpa2-x2048-5-3`
* `test_ipv4_l4_urls_per_ten.py`: This script measures the number of URLs per ten minutes over Layer 4 traffic
* class `IPV4L4`
* function `build`: This function will create all stations and cross-connects to be used in the test
@@ -186,17 +189,17 @@ Test will exit on failure of not receiving traffic for one minute on any station
* `--mgr`: Specifies the hostname where LANforge is running. Defaults to http://localhost
* `--mgr_port`: Specifies the port to use when connecting to LANforge. Defaults to 8080
* `--ssid`: Specifies SSID to be used in the test
* `--password`: Specifies the password for the SSID to be used in the test
* `--security`: Specifies security type (WEP, WPA, WPA2, WPA3, Open) of SSID to be used in the test
* `--num_stations`: Specifies number of stations to create for the test
* `--radio`: Specifies the radio to be used in the test, e.g. wiphy0
* `--requests_per_ten`: Configures the number of requests per ten minutes
* `--num_tests`: Configures the number of tests to be run. Each test runs for ten minutes
* `--url`: Specifies the upload/download, address, and destination. Example: dl http://10.40.0.1 /dev/null
* `--target_per_ten`: Rate of target URLs per ten minutes. 90% of this value will be considered the threshold for a passed test.
* `--debug`: Turns on debug output for the test
* `--help`: Displays help output for the script
* `test_ipv4_l4_ftp_urls_per_ten.py`: This script measures the number of URLs per ten minutes over Layer 4 FTP traffic
* class `IPV4L4`
@@ -210,29 +213,29 @@ Test will exit on failure of not receiving traffic for one minute on any station
* `--mgr`: Specifies the hostname where LANforge is running. Defaults to http://localhost
* `--mgr_port`: Specifies the port to use when connecting to LANforge. Defaults to 8080
* `--ssid`: Specifies SSID to be used in the test
* `--password`: Specifies the password for the SSID to be used in the test
* `--security`: Specifies security type (WEP, WPA, WPA2, WPA3, Open) of SSID to be used in the test
* `--num_stations`: Specifies number of stations to create for the test
* `--radio`: Specifies the radio to be used in the test, e.g. wiphy0
* `--requests_per_ten`: Configures the number of requests per ten minutes
* `--num_tests`: Configures the number of tests to be run. Each test runs for ten minutes
* `--url`: Specifies the upload/download, address, and destination. Example: dl http://10.40.0.1 /dev/null
* `--target_per_ten`: Rate of target URLs per ten minutes. 90% of this value will be considered the threshold for a passed test.
* `--debug`: Turns on debug output for the test
* `--help`: Displays help output for the script
* `test_generic`:
* class `GenTest`: This script will create
* function `build`: This function will create the stations and cross-connects to be used during the test.
* function `start`: This function will start traffic and measure different values dependent on the command chosen.
Commands currently available for use: lfping, generic, and speedtest.
* function `stop`: This function will admin-down stations, stop traffic on cross-connects and cleanup any stations or cross-connects associated with the test.
* function `cleanup`: This function will remove any stations and cross-connects created during the test.
* Command line options:
* `--mgr`: Specifies the hostname where LANforge is running. Defaults to http://localhost
* `--mgr_port`: Specifies the port to use when connecting to LANforge. Defaults to 8080
* `--ssid`: Specifies SSID to be used in the test
* `--password`: Specifies the password for the SSID to be used in the test
* `--security`: Specifies security type (WEP, WPA, WPA2, WPA3, Open) of SSID to be used in the test
* `--num_stations`: Specifies number of stations to create for the test
* `--radio`: Specifies the radio to be used in the test, e.g. wiphy0
@@ -263,6 +266,3 @@ Test will exit on failure of not receiving traffic for one minute on any station
* class `VapStations`
* function `run`:
* function `main`:

Binary image file not shown (8.9 KiB).

BIN py-scripts/artifacts/banner.png (new executable file): binary image not shown (199 KiB).

View File

@@ -1,9 +1,7 @@
"""
Candela Technologies Inc.
Info : Standard Script for Connection Testing
Date :
Author : Shivam Thakur
Info : Standard Script for Connection Testing - Creates HTML and pdf report as a result (Used for web-console)
"""
@@ -22,21 +20,22 @@ import datetime
import time
import os
from test_utility import CreateHTML
from test_utility import RuntimeUpdates
# from test_utility import RuntimeUpdates
from test_utility import StatusMsg
import pdfkit
webconsole_dir = os.path.dirname(os.path.dirname(os.path.dirname(os.getcwd())))
class ConnectionTest(LFCliBase):
def __init__(self, lfclient_host="localhost", lfclient_port=8080, radio="wiphy1", sta_prefix="sta", start_id=0,
num_sta=2,
dut_ssid="lexusdut", dut_security="open", dut_passwd="[BLANK]", upstream="eth1", _test_update=None, name_prefix="L3Test",
dut_ssid="lexusdut", dut_security="open", dut_passwd="[BLANK]", upstream="eth1", name_prefix="L3Test",
session_id="Layer3Test", test_name="Client/s Connectivity Test", pass_criteria=20, _debug_on=False,
_exit_on_error=False, _exit_on_fail=False):
super().__init__(lfclient_host, lfclient_port, _debug=_debug_on, _halt_on_error=_exit_on_error,
_exit_on_fail=_exit_on_fail)
print("Test is about to start")
self.host = lfclient_host
self.port = lfclient_port
self.radio = radio
@@ -53,37 +52,34 @@ class ConnectionTest(LFCliBase):
self.session_id = session_id
self.test_name = test_name
self.test_duration = 1
self.test_update = _test_update
self.local_realm = realm.Realm(lfclient_host=self.host, lfclient_port=self.port)
self.station_profile = self.local_realm.new_station_profile()
self.pass_fail = ""
self.status_msg = StatusMsg(lfclient_host=self.host, lfclient_port=self.port, session_id=self.session_id)
station_list = []
for i in range(0, self.num_sta):
station_list.append(self.sta_prefix + str(i).zfill(4))
print(station_list)
self.station_data = dict.fromkeys(station_list)
for i in station_list:
self.station_data[i] = "None"
print(self.station_data)
self.test_update.send_update({"test_status": '1', "data": 'None', "data": [], "label": "Client Connectivity Time"})
try:
self.status_msg.update('1', {"data": 'Initializing...', "data": [], "label": "Client Connectivity Time"})
except:
pass
self.reports_path = webconsole_dir+"/reports/" + self.test_name + "_" + self.session_id + '/'
print(self.reports_path)
if not os.path.exists(self.reports_path):
os.makedirs(self.reports_path)
print("Test is Initialized")
self.station_list = LFUtils.portNameSeries(prefix_=self.sta_prefix, start_id_=self.sta_start_id,
end_id_=self.num_sta - 1, padding_number_=10000, radio=self.radio)
print(self.station_profile.station_names)
self.test_update.send_update({"test_status": '2', "data": 'None', "data": [], "label": "Client Connectivity Time"})
try:
self.status_msg.update('2', {"data": 'Initialized...', "data": [], "label": "Client Connectivity Time"})
except:
pass
def precleanup(self):
print("precleanup started")
sta_list = []
for i in self.local_realm.station_list():
if (list(i.keys())[0] == '1.1.wlan0'):
@@ -92,19 +88,19 @@ class ConnectionTest(LFCliBase):
pass
else:
sta_list.append(list(i.keys())[0])
print(sta_list)
for sta in sta_list:
self.local_realm.rm_port(sta, check_exists=True)
time.sleep(1)
LFUtils.wait_until_ports_disappear(base_url=self.lfclient_url, port_list=sta_list,
debug=self.debug)
print("precleanup done")
self.test_update.send_update({"test_status": '3', "data": 'None', "data": [], "label": "Client Connectivity Time"})
try:
self.status_msg.update('3', {"data": 'Building...', "data": [], "label": "Client Connectivity Time"})
except:
pass
def build(self):
print("Building Test Configuration")
self.station_profile.use_security(self.security, self.ssid, self.password)
self.station_profile.set_number_template("00")
self.station_profile.set_command_flag("add_sta", "create_admin_down", 1)
@@ -113,18 +109,21 @@ class ConnectionTest(LFCliBase):
self.station_profile.create(radio=self.radio, sta_names_=self.station_list, debug=self.debug)
self.local_realm.wait_until_ports_appear(sta_list=self.station_list)
self.update(status="build complete")
print("Test Build done")
self.test_update.send_update({"test_status": '4', "data": 'None', "data": [], "label": "Client Connectivity Time"})
try:
self.status_msg.update('4', {"data": 'Starting...', "data": [], "label": "Client Connectivity Time"})
except:
pass
def update(self, status="None"):
for i in self.station_list:
print(self.json_get("port/1/1/" + i + "/?fields=ip,ap,down"))
self.station_data[i.split(".")[2]] = \
self.json_get("port/1/1/" + i.split(".")[2] + "/?fields=ip,ap,down,phantom&cx%20time%20(us)")['interface']
self.test_update.send_update({"test_status": '5', "data": 'None', "data": [], "label": "Client Connectivity Time"})
try:
self.status_msg.update('5', {"data": 'None', "data": [], "label": "Client Connectivity Time"})
except:
pass
def start(self, print_pass=False, print_fail=False):
print("Test is starting")
def start(self):
self.station_profile.admin_up()
associated_map = {}
self.ip_map = {}
@@ -134,7 +133,7 @@ class ConnectionTest(LFCliBase):
for sta_name in self.station_profile.station_names:
sta_status = self.json_get("port/1/1/" + str(sta_name).split(".")[2] + "?fields=port,alias,ip,ap",
debug_=self.debug)
print(sta_status)
if (sta_status is None or sta_status['interface'] is None) or (sta_status['interface']['ap'] is None):
continue
if (len(sta_status['interface']['ap']) == 17) and (sta_status['interface']['ap'][-3] == ':'):
@@ -147,65 +146,50 @@ class ConnectionTest(LFCliBase):
else:
time.sleep(1)
if self.debug:
print("sta_list", len(self.station_profile.station_names), self.station_profile.station_names)
print("ip_map", len(self.ip_map), self.ip_map)
print("associated_map", len(associated_map), associated_map)
if (len(self.station_profile.station_names) == len(self.ip_map)) and (
len(self.station_profile.station_names) == len(associated_map)):
self._pass("PASS: All stations associated with IP", print_pass)
print("Test Passed")
#("Test Passed")
for sta_name in self.station_profile.station_names:
sta_status = self.json_get("port/1/1/" + str(sta_name).split(".")[2] + "?fields=cx%20time%20(us)",
debug_=self.debug)
print(sta_status)
#(sta_status)
while sta_status['interface']['cx time (us)'] == 0:
sta_status = self.json_get("port/1/1/" + str(sta_name).split(".")[2] + "?fields=cx%20time%20(us)",
debug_=self.debug)
print(sta_status)
# #(sta_status)
continue
cx_time[sta_name] = sta_status['interface']['cx time (us)']
else:
self._fail("FAIL: Not all stations able to associate/get IP", print_fail)
print("sta_list", self.station_profile.station_names)
print("ip_map", self.ip_map)
for sta_name in self.ip_map.keys():
sta_status = self.json_get("port/1/1/" + str(sta_name).split(".")[2] + "?fields=cx%20time%20(us)",
debug_=self.debug)
print(sta_status)
while sta_status['interface']['cx time (us)'] == 0:
sta_status = self.json_get("port/1/1/" + str(sta_name).split(".")[2] + "?fields=cx%20time%20(us)",
debug_=self.debug)
print(sta_status)
# #(sta_status)
continue
cx_time[sta_name] = sta_status['interface']['cx time (us)']
print("associated_map", associated_map)
print("Test Failed")
print(self.ip_map)
print(associated_map)
print("cx time:", cx_time)
self.test_result_data = []
self.keys = ["Client Name", "BSSID", "Channel", "Connection Time (ms)", "DHCP (ms)", "IPv4 Address", "MAC Address", "Mode", "Result"]
for sta_name in self.station_profile.station_names:
sta_status = self.json_get(
"port/1/1/" + str(sta_name).split(".")[2] + "?fields=alias,ap,channel,cx%20time%20(us),ip,mac,mode,dhcp%20(ms)",
debug_=self.debug)
print("station status:")
print(sta_status['interface'])
self.test_result_data.append(sta_status['interface'])
print(self.test_result_data)
offset = 0
self.chart_data = {}
for data in self.test_result_data:
if (data["cx time (us)"]/1000 <= self.pass_criteria) and (data["cx time (us)"]/1000 > 0):
self.chart_data[data['alias']] = data["cx time (us)"]/1000
if (int(data["cx time (us)"])/1000 <= self.pass_criteria) and (int(data["cx time (us)"])/1000 > 0):
self.chart_data[data['alias']] = float(data["cx time (us)"])/1000
data['Result'] = "PASS"
else:
self.chart_data[data['alias']] = data["cx time (us)"] / 1000
self.chart_data[data['alias']] = float(data["cx time (us)"]) / 1000
offset +=1
data['Result'] = "FAIL"
data["cx time (us)"] = str(data["cx time (us)"]/1000)+" / "+str(self.pass_criteria)+"ms"
data["cx time (us)"] = str(float(data["cx time (us)"])/1000)+" / "+str(self.pass_criteria)+"ms"
objective = 'The Client Connectivity Test is designed to test the performance of the Access Point. It reports the average connection time a station takes to connect to the Wi-Fi Access Point, along with pass/fail criteria and a detailed report for the client connection.'
@@ -223,26 +207,32 @@ class ConnectionTest(LFCliBase):
chart_params={"chart_head": "Client Connection Time", "xlabel": "Clients", "ylabel": "Connection Time"})
self.html.write(self.html_data.report)
self.html.close()
options = {
"enable-local-file-access": None
}
pdfkit.from_file(self.reports_path + self.test_name + "_" + self.session_id + ".html",
self.reports_path + self.test_name + "_" + self.session_id + '_report.pdf', options=options)
self.test_update.send_update({"test_status": '6', "data": 'None', "data": [], "label": "Client Connectivity Time"})
try:
self.status_msg.update('6', {"data": 'None', "data": [], "label": "Client Connectivity Time"})
except:
pass
def stop(self):
print("Stopping Test")
self.station_profile.admin_down()
LFUtils.wait_until_ports_admin_down(port_list=self.station_profile.station_names)
self.test_update.send_update({"test_status": '7', "data": 'None', "data": [], "label": "Client Connectivity Time"})
try:
self.status_msg.update('7', {"data": 'None', "data": [], "label": "Client Connectivity Time"})
except:
pass
def postcleanup(self):
self.station_profile.cleanup()
LFUtils.wait_until_ports_disappear(base_url=self.lfclient_url,
port_list=self.station_profile.station_names,
debug=self.debug)
print("Test Completed")
self.test_update.send_update({"test_status": '8', "data": 'None', "data": [], "label": "Client Connectivity Time"})
self.station_profile.cleanup(delay=1)
try:
self.status_msg.update('8', {"data": 'None', "data": [], "label": "Client Connectivity Time"})
except:
pass
def main():
# This has --mgr, --mgr_port and --debug
parser = LFCliBase.create_bare_argparse(prog="connection_test.py", formatter_class=argparse.RawTextHelpFormatter,
epilog="About This Script")
@@ -251,27 +241,36 @@ def main():
parser.add_argument('--passwd', help='--passwd of dut', default="[BLANK]")
parser.add_argument('--radio', help='--radio to use on LANforge', default="wiphy1")
parser.add_argument('--security', help='--security of dut', default="open")
parser.add_argument('--session_id', help='--session_id is for websocket', default="local")
parser.add_argument('--session_id', help='--session_id is for websocket', default=getSessionID())
parser.add_argument('--test_name', help='--test_name is for webconsole reports', default="Client Connectivity Test")
parser.add_argument('--num_clients', type=int, help='--num_sta is number of stations you want to create', default=2)
parser.add_argument('--pass_criteria', type=int, help='--pass_criteria is pass criteria for connection Time', default=50)
parser.add_argument('--pass_criteria', type=int, help='--pass_criteria is pass criteria for connection Time', default=300)
args = parser.parse_args()
# args.session_id = "local";
print(args)
update = RuntimeUpdates(args.session_id, {"test_status": '0', "data": 'None', "data": [], "label": "Client Connectivity Time"})
# Start Test
obj = ConnectionTest(lfclient_host="192.168.200.12", lfclient_port=args.mgr_port,
obj = ConnectionTest(lfclient_host=args.mgr, lfclient_port=args.mgr_port,
session_id=args.session_id, test_name=args.test_name,
dut_ssid=args.ssid, dut_passwd=args.passwd, dut_security=args.security,
num_sta=args.num_clients, radio=args.radio, pass_criteria=args.pass_criteria, _test_update=update)
num_sta=args.num_clients, radio=args.radio, pass_criteria=args.pass_criteria)
obj.precleanup()
obj.build()
obj.start()
obj.stop()
obj.postcleanup()
print(obj.chart_data)
update.send_update({"test_status": '10', "data": obj.chart_data, "label": ["Client Names","Client Connectivity Time (ms)"], "result": obj.pass_fail})
# #(obj.chart_data)
try:
obj.status_msg.update('10', {"data": 'done...', "data": [], "label": "Client Connectivity Time"})
except:
pass
for i in obj.status_msg.read()['messages']:
print(i)
def getSessionID():
x = datetime.datetime.now()
id = x.strftime("%x").replace("/","_")+"_"+x.strftime("%x") + "_" + x.strftime("%X").split(":")[0] + "_" + x.strftime("%X").split(":")[1] + "_" + x.strftime("%X").split(":")[2]+str(x).split(".")[1]
id = str(id).replace("/", "_").split("P")[0].replace(" ","")
return id
if __name__ == '__main__':
main()

py-scripts/create_bond.py (new executable file, 97 lines)

@@ -0,0 +1,97 @@
#!/usr/bin/env python3
"""create_bond.py Script to create a bond
This script can be used to create a bond; only one can be created at a time. Network devices must be specified
as a comma-separated list with no spaces.
Use './create_bond.py --help' to see command line usage and options
Copyright 2021 Candela Technologies Inc
License: Free to distribute and modify. LANforge systems must be licensed.
"""
import sys
import os
import argparse
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
if 'py-json' not in sys.path:
sys.path.append(os.path.join(os.path.abspath('..'), 'py-json'))
import LANforge
from LANforge.lfcli_base import LFCliBase
from LANforge import LFUtils
from realm import Realm
import time
import pprint
class CreateBond(LFCliBase):
def __init__(self, _network_dev_list=None,
_host=None,
_port=None,
_shelf=1,
_resource=1,
_debug_on=False):
super().__init__(_host, _port)
self.host = _host
self.shelf = _shelf
self.resource = _resource
self.timeout = 120
self.debug = _debug_on
self.network_dev_list = _network_dev_list
def build(self):
data = {
'shelf': self.shelf,
'resource': self.resource,
'port': 'bond0000',
'network_devs': self.network_dev_list
}
self.json_post("cli-json/add_bond", data)
time.sleep(3)
bond_set_port = {
"shelf": self.shelf,
"resource": self.resource,
"port": "bond0000",
"current_flags": 0x80000000,
"interest": 0x4000  # DHCP bit only; 0x2 (current_flags) and 0x800000 (up/down) could be OR'd in as well
}
self.json_post("cli-json/set_port", bond_set_port)
def main():
parser = LFCliBase.create_basic_argparse(
prog='create_bond.py',
formatter_class=argparse.RawTextHelpFormatter,
epilog='''\
Create bonds
''',
description='''\
create_bond.py
--------------------
Command example:
./create_bond.py
--network_dev_list eth0,eth1
--debug
''')
required = parser.add_argument_group('required arguments')
required.add_argument('--network_dev_list', help='list of network devices in the bond, must be comma separated '
'with no spaces', required=True)
args = parser.parse_args()
create_bond = CreateBond(_host=args.mgr,
_port=args.mgr_port,
_network_dev_list=args.network_dev_list,
_debug_on=args.debug
)
create_bond.build()
if __name__ == "__main__":
main()
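The `set_port` payload in `build()` drives the port through two bitmasks: `current_flags` carries the desired flag values, and `interest` tells `set_port` which bits to act on. A minimal sketch of how such masks compose, using the bit values named in the script's comments (the constant names here are illustrative, not LANforge API names):

```python
# Interest-mask bits; values taken from the comment in create_bond.py.
# The constant names are illustrative only.
INTEREST_CURRENT_FLAGS = 0x2       # act on the current_flags field
INTEREST_DHCP = 0x4000             # act on the DHCP setting
INTEREST_IFDOWN = 0x800000         # act on the up/down state


def build_interest(*bits):
    """OR together the interest bits that set_port should act on."""
    mask = 0
    for bit in bits:
        mask |= bit
    return mask


# The bond example selects only the DHCP bit:
assert build_interest(INTEREST_DHCP) == 0x4000
# Widening the mask to all three named bits:
assert build_interest(INTEREST_CURRENT_FLAGS, INTEREST_DHCP, INTEREST_IFDOWN) == 0x804002
```

Only the bits selected by `interest` are applied, so the same `current_flags` value can be sent repeatedly while touching different settings each time.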

View File

@@ -136,6 +136,7 @@ Command example:
target_device=args.target_device)
create_bridge.build()
print('Created %s bridges' % num_bridge)
if __name__ == "__main__":
main()

162
py-scripts/create_chamberview.py Executable file
View File

@@ -0,0 +1,162 @@
#!/usr/bin/env python3
"""
Script for creating a Chamber View scenario.
Run this script to set up / create a Chamber View scenario.
Note: to run this script, the LANforge GUI must be running with a CLI socket, e.g.:
cd LANforgeGUI_5.4.3 (adjust 5.4.3 to your GUI version)
pwd (output: /home/lanforge/LANforgeGUI_5.4.3)
./lfclient.bash -cli-socket 3990
Note: use a different scenario name for each run of this script;
if the same name is reused, the new lines are appended to that existing scenario.
Example of how to run this script:
create_chamberview.py -m "localhost" -o "8080" -cs "scenario_name"
--line "Resource=1.1 Profile=STA-AC Amount=1 Uses-1=wiphy0 Uses-2=AUTO Freq=-1
DUT=Test DUT_Radio=Radio-1 Traffic=http VLAN="
--line "Resource=1.1 Profile=upstream Amount=1 Uses-1=eth1 Uses-2=AUTO Freq=-1
DUT=Test DUT_Radio=Radio-1 Traffic=http VLAN="
Output:
At the end of this script you should see the scenario built with the given arguments.
To verify: open Chamber View -> Manage Scenario.
"""
import sys
import os
import argparse
import time
import re
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
if 'py-json' not in sys.path:
sys.path.append(os.path.join(os.path.abspath('..'), 'py-json'))
from cv_commands import chamberview as cv
def main():
global Resource, Amount, DUT, DUT_Radio, Profile, Uses1, Uses2, Traffic, Freq, VLAN
parser = argparse.ArgumentParser(
description="""
For Two line scenario use --line twice as shown in example, for multi line scenario
use --line argument to create multiple lines
\n
create_chamberview.py -m "localhost" -o "8080" -cs "scenario_name"
--line "Resource=1.1 Profile=STA-AC Amount=1 Uses-1=wiphy0 Uses-2=AUTO Freq=-1
DUT=Test DUT_Radio=Radio-1 Traffic=http VLAN="
--line "Resource=1.1 Profile=upstream Amount=1 Uses-1=eth1 Uses-2=AUTO Freq=-1
DUT=Test DUT_Radio=Radio-1 Traffic=http VLAN="
""")
parser.add_argument("-m", "--lfmgr", type=str,
help="address of the LANforge GUI machine (localhost is default)")
parser.add_argument("-o", "--port", type=int,
help="IP Port the LANforge GUI is listening on (8080 is default)")
parser.add_argument("-cs", "--create_scenario", "--create_lf_scenario", type=str,
help="name of scenario to be created")
parser.add_argument("-l", "--line", action='append', nargs='+', type=str, required=True,
help="line number")
args = parser.parse_args()
lfjson_host = "localhost"
lfjson_port = 8080
if args.lfmgr is not None:
lfjson_host = args.lfmgr
if args.port is not None:
lfjson_port = args.port
createCV = cv(lfjson_host, lfjson_port)  # create a chamberview object
scenario_name = args.create_scenario
line = args.line
Resource = "1.1"
Profile = "STA-AC"
Amount = "1"
DUT = "DUT"
DUT_Radio = "Radio-1"
Uses1 = "wiphy0"
Uses2 = "AUTO"
Traffic = "http"
Freq = "-1"
VLAN = ""
for i in range(len(line)):
if " " in line[i][0] or "," in line[i][0]:
# split on any run of spaces and/or commas
line[i][0] = re.split('[ ,]+', line[i][0].strip())
else:
print("Wrong arguments entered!")
exit(1)
for j in range(len(line[i][0])):
line[i][0][j] = line[i][0][j].split("=")
for k in range(len(line[i][0][j])):
name = line[i][0][j][k]
if str(name) == "Resource" or str(name) == "Res" or str(name) == "R":
Resource = line[i][0][j][k + 1]
elif str(name) == "Profile" or str(name) == "Prof" or str(name) == "P":
Profile = line[i][0][j][k + 1]
elif str(name) == "Amount" or str(name) == "Sta" or str(name) == "A":
Amount = line[i][0][j][k + 1]
elif str(name) == "Uses-1" or str(name) == "U1" or str(name) == "U-1":
Uses1 = line[i][0][j][k + 1]
elif str(name) == "Uses-2" or str(name) == "U2" or str(name) == "U-2":
Uses2 = line[i][0][j][k + 1]
elif str(name) == "Freq" or str(name) == "F":
Freq = line[i][0][j][k + 1]
elif str(name) == "DUT" or str(name) == "dut" or str(name) == "D":
DUT = line[i][0][j][k + 1]
elif str(name) == "DUT_Radio" or str(name) == "dr" or str(name) == "DR":
DUT_Radio = line[i][0][j][k + 1]
elif str(name) == "Traffic" or str(name) == "Traf" or str(name) == "T":
Traffic = line[i][0][j][k + 1]
elif str(name) == "VLAN" or str(name) == "Vlan" or str(name) == "V":
VLAN = line[i][0][j][k + 1]
else:
continue
createCV.manage_cv_scenario(scenario_name,
Resource,
Profile,
Amount,
DUT,
DUT_Radio,
Uses1,
Uses2,
Traffic,
Freq,
VLAN
)  # add this line to the scenario
createCV.sync_cv()  # Chamber View sync
time.sleep(2)
createCV.apply_cv_scenario(scenario_name)  # apply scenario
time.sleep(2)
createCV.sync_cv()
time.sleep(2)
createCV.apply_cv_scenario(scenario_name)  # apply scenario
time.sleep(2)
createCV.build_cv_scenario()  # build scenario
print("End")
if __name__ == "__main__":
main()
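The token-by-token `--line` parsing above can be sketched more compactly as a defaults dict plus an alias map. The defaults and key aliases below are mirrored from the script; `parse_line` itself is an illustrative helper, not part of the script:

```python
import re

# Defaults mirrored from create_chamberview.py.
DEFAULTS = {"Resource": "1.1", "Profile": "STA-AC", "Amount": "1", "DUT": "DUT",
            "DUT_Radio": "Radio-1", "Uses-1": "wiphy0", "Uses-2": "AUTO",
            "Traffic": "http", "Freq": "-1", "VLAN": ""}
# Short-form aliases mapped to their canonical keys.
ALIASES = {"Res": "Resource", "R": "Resource", "Prof": "Profile", "P": "Profile",
           "Sta": "Amount", "A": "Amount", "U1": "Uses-1", "U-1": "Uses-1",
           "U2": "Uses-2", "U-2": "Uses-2", "F": "Freq", "dut": "DUT", "D": "DUT",
           "dr": "DUT_Radio", "DR": "DUT_Radio", "Traf": "Traffic", "T": "Traffic",
           "Vlan": "VLAN", "V": "VLAN"}


def parse_line(spec):
    """Parse a 'Key=value Key=value' line (space- or comma-separated) into a dict."""
    fields = dict(DEFAULTS)
    for token in re.split(r'[ ,]+', spec.strip()):
        if "=" not in token:
            continue
        key, _, value = token.partition("=")
        fields[ALIASES.get(key, key)] = value
    return fields


print(parse_line("Resource=1.1 Profile=upstream Amount=1 Uses-1=eth1 VLAN="))
```

Unrecognized keys simply pass through, and any key not mentioned on the line keeps its default, matching the behavior of the elif chain.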

View File

@@ -1,11 +1,9 @@
#!/usr/bin/env python3
"""
This script will create a variable number of stations each with their own set of cross-connects and endpoints.
It will then create layer 3 traffic over a specified amount of time, testing for increased traffic at regular intervals.
This test will pass if all stations increase traffic over the full test duration.
This script will create a variable number of layer3 stations each with their own set of cross-connects and endpoints.
Use './test_ipv4_variable_time.py --help' to see command line usage and options
Use './create_l3.py --help' to see command line usage and options
"""
import sys
@@ -26,13 +24,14 @@ import time
import datetime
from realm import TestGroupProfile
class CreateL3(Realm):
def __init__(self,
ssid, security, password, sta_list, name_prefix, upstream, radio,
host="localhost", port=8080, mode = 0, ap=None,
host="localhost", port=8080, mode=0, ap=None,
side_a_min_rate=56, side_a_max_rate=0,
side_b_min_rate=56, side_b_max_rate=0,
number_template="00000", use_ht160=False,
number_template="00000", use_ht160=False,
_debug_on=False,
_exit_on_error=False,
_exit_on_fail=False):
@@ -45,8 +44,8 @@ class CreateL3(Realm):
self.security = security
self.password = password
self.radio = radio
self.mode= mode
self.ap=ap
self.mode = mode
self.ap = ap
self.number_template = number_template
self.debug = _debug_on
self.name_prefix = name_prefix
@@ -63,9 +62,8 @@ class CreateL3(Realm):
self.station_profile.mode = 9
self.station_profile.mode = mode
if self.ap is not None:
self.station_profile.set_command_param("add_sta", "ap",self.ap)
#self.station_list= LFUtils.portNameSeries(prefix_="sta", start_id_=0, end_id_=2, padding_number_=10000, radio='wiphy0') #Make radio a user defined variable from terminal.
self.station_profile.set_command_param("add_sta", "ap", self.ap)
# self.station_list= LFUtils.portNameSeries(prefix_="sta", start_id_=0, end_id_=2, padding_number_=10000, radio='wiphy0') #Make radio a user defined variable from terminal.
self.cx_profile.host = self.host
self.cx_profile.port = self.port
@@ -75,54 +73,16 @@ class CreateL3(Realm):
self.cx_profile.side_b_min_bps = side_b_min_rate
self.cx_profile.side_b_max_bps = side_b_max_rate
def __get_rx_values(self):
cx_list = self.json_get("endp?fields=name,rx+bytes", debug_=self.debug)
if self.debug:
print(self.cx_profile.created_cx.values())
print("==============\n", cx_list, "\n==============")
cx_rx_map = {}
for cx_name in cx_list['endpoint']:
if cx_name != 'uri' and cx_name != 'handler':
for item, value in cx_name.items():
for value_name, value_rx in value.items():
if value_name == 'rx bytes' and item in self.cx_profile.created_cx.values():
cx_rx_map[item] = value_rx
return cx_rx_map
def start(self, print_pass=False, print_fail=False):
self.station_profile.admin_up()
temp_stas = self.station_profile.station_names.copy()
if self.wait_for_ip(temp_stas):
self._pass("All stations got IPs")
else:
self._fail("Stations failed to get IPs")
self.exit_fail()
self.cx_profile.start_cx()
def stop(self):
self.cx_profile.stop_cx()
self.station_profile.admin_down()
def pre_cleanup(self):
self.cx_profile.cleanup_prefix()
for sta in self.sta_list:
self.rm_port(sta, check_exists=True)
def cleanup(self):
self.cx_profile.cleanup()
self.station_profile.cleanup()
LFUtils.wait_until_ports_disappear(base_url=self.lfclient_url,
port_list=self.station_profile.station_names,
debug=self.debug)
def build(self):
self.station_profile.use_security(self.security,
self.ssid,
self.password)
self.ssid,
self.password)
self.station_profile.set_number_template(self.number_template)
print("Creating stations")
self.station_profile.set_command_flag("add_sta", "create_admin_down", 1)
@@ -132,11 +92,12 @@ class CreateL3(Realm):
sta_names_=self.sta_list,
debug=self.debug)
self.cx_profile.create(endp_type="lf_udp",
side_a=self.station_profile.station_names,
side_b=self.upstream,
sleep_time=0)
side_a=self.station_profile.station_names,
side_b=self.upstream,
sleep_time=0)
self._pass("PASS: Station build finished")
def main():
parser = LFCliBase.create_basic_argparse(
prog='create_l3.py',
@@ -175,55 +136,59 @@ python3 ./test_ipv4_variable_time.py
--a_min 1000
--b_min 1000
--ap "00:0e:8e:78:e1:76"
--number_template 0000
--debug
''')
required_args=None
required_args = None
for group in parser._action_groups:
if group.title == "required arguments":
required_args=group
required_args = group
break
if required_args is not None:
required_args.add_argument('--a_min', help='--a_min bps rate minimum for side_a', default=256000)
required_args.add_argument('--b_min', help='--b_min bps rate minimum for side_b', default=256000)
optional_args=None
optional_args = None
for group in parser._action_groups:
if group.title == "optional arguments":
optional_args=group
optional_args = group
break
if optional_args is not None:
optional_args.add_argument('--mode',help='Used to force mode of stations')
optional_args.add_argument('--ap',help='Used to force a connection to a particular AP')
optional_args.add_argument('--mode', help='Used to force mode of stations')
optional_args.add_argument('--ap', help='Used to force a connection to a particular AP')
optional_args.add_argument('--number_template', help='Start the station numbering with a particular number. Default is 0000', default="0000")
args = parser.parse_args()
num_sta = 2
if (args.num_stations is not None) and (int(args.num_stations) > 0):
num_sta = int(args.num_stations)
station_list = LFUtils.portNameSeries(prefix_="sta", start_id_=0, end_id_=num_sta-1, padding_number_=10000, radio=args.radio)
station_list = LFUtils.portNameSeries(prefix_="sta", start_id_=0, end_id_=num_sta - 1, padding_number_=10000,
radio=args.radio)
ip_var_test = CreateL3(host=args.mgr,
port=args.mgr_port,
number_template="0000",
sta_list=station_list,
name_prefix="VT",
upstream=args.upstream_port,
ssid=args.ssid,
password=args.passwd,
radio=args.radio,
security=args.security,
use_ht160=False,
side_a_min_rate=args.a_min,
side_b_min_rate=args.b_min,
mode=args.mode,
ap=args.ap,
_debug_on=args.debug)
port=args.mgr_port,
number_template=str(args.number_template),
sta_list=station_list,
name_prefix="VT",
upstream=args.upstream_port,
ssid=args.ssid,
password=args.passwd,
radio=args.radio,
security=args.security,
use_ht160=False,
side_a_min_rate=args.a_min,
side_b_min_rate=args.b_min,
mode=args.mode,
ap=args.ap,
_debug_on=args.debug)
ip_var_test.pre_cleanup()
ip_var_test.build()
if not ip_var_test.passes():
print(ip_var_test.get_fail_message())
ip_var_test.exit_fail()
print('Created %s stations and connections' % num_sta)
if __name__ == "__main__":

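`LFUtils.portNameSeries` above turns a prefix, an ID range, and a padding number into zero-padded station names. A rough reimplementation of the observable behavior, assumed from the call sites (the real helper lives in `LANforge/LFUtils.py`):

```python
def port_name_series(prefix="sta", start_id=0, end_id=1, padding_number=10000):
    """Generate zero-padded port names such as sta0000, sta0001.

    padding_number controls the width: 10000 -> 4 digits, 100000 -> 5 digits.
    """
    width = len(str(padding_number)) - 1
    return ["%s%0*d" % (prefix, width, i) for i in range(start_id, end_id + 1)]


# Mirrors: LFUtils.portNameSeries(prefix_="sta", start_id_=0,
#                                 end_id_=num_sta - 1, padding_number_=10000, ...)
assert port_name_series("sta", 0, 1, 10000) == ["sta0000", "sta0001"]
# MACVLAN-style names use a wider padding_number:
assert port_name_series("eth1#", 0, 2, 100000) == ["eth1#00000", "eth1#00001", "eth1#00002"]
```

The fixed width keeps port names sortable and predictable regardless of how many stations are requested.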
View File

@@ -1,13 +1,11 @@
#!/usr/bin/env python3
"""
This script will create a variable number of stations each with their own set of cross-connects and endpoints.
It will then create layer 3 traffic over a specified amount of time, testing for increased traffic at regular intervals.
This test will pass if all stations increase traffic over the full test duration.
Use './test_ipv4_variable_time.py --help' to see command line usage and options
"""
This script will create a variable number of layer4 stations each with their own set of cross-connects and endpoints.
Use './create_l4.py --help' to see command line usage and options
"""
import sys
import os
@@ -75,38 +73,6 @@ class CreateL4(Realm):
self.cx_profile.side_b_min_bps = side_b_min_rate
self.cx_profile.side_b_max_bps = side_b_max_rate
def __get_rx_values(self):
cx_list = self.json_get("endp?fields=name,rx+bytes", debug_=self.debug)
if self.debug:
print(self.cx_profile.created_cx.values())
print("==============\n", cx_list, "\n==============")
cx_rx_map = {}
for cx_name in cx_list['endpoint']:
if cx_name != 'uri' and cx_name != 'handler':
for item, value in cx_name.items():
for value_name, value_rx in value.items():
if value_name == 'rx bytes' and item in self.cx_profile.created_cx.values():
cx_rx_map[item] = value_rx
return cx_rx_map
def start(self, print_pass=False, print_fail=False):
self.station_profile.admin_up()
temp_stas = self.station_profile.station_names.copy()
if self.wait_for_ip(temp_stas):
self._pass("All stations got IPs")
else:
self._fail("Stations failed to get IPs")
self.exit_fail()
self.cx_profile.start_cx()
def stop(self):
self.cx_profile.stop_cx()
self.station_profile.admin_down()
def cleanup(self):
self.cx_profile.cleanup()
self.station_profile.cleanup()
@@ -136,11 +102,11 @@ def main():
''',
description='''\
test_ipv4_variable_time.py:
create_l4.py:
--------------------
Generic command layout:
python3 ./test_ipv4_variable_time.py
python3 ./create_l4.py
--upstream_port eth1
--radio wiphy0
--num_stations 32
@@ -215,6 +181,8 @@ python3 ./test_ipv4_variable_time.py
print(ip_var_test.get_fail_message())
ip_var_test.exit_fail()
print('Created %s stations and connections' % num_sta)
if __name__ == "__main__":
main()

View File

@@ -1,325 +1,105 @@
#!/usr/bin/env python3
import sys
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
if 'py-json' not in sys.path:
sys.path.append('../py-json')
# import argparse
from LANforge.lfcli_base import LFCliBase
from LANforge.LFUtils import *
from LANforge import LFUtils
from LANforge import add_file_endp
from LANforge.add_file_endp import *
import argparse
from realm import Realm
import time
import datetime
import pprint
class FileIOTest(Realm):
def __init__(self, host, port, ssid, security, password,
number_template="00000",
class CreateMacVlan(Realm):
def __init__(self, host, port,
radio="wiphy0",
test_duration="5m",
upstream_port="eth1",
num_ports=1,
server_mount="10.40.0.1:/var/tmp/test",
macvlan_parent=None,
first_mvlan_ip=None,
netmask=None,
gateway=None,
dhcp=True,
use_macvlans=False,
use_test_groups=False,
write_only_test_group=None,
read_only_test_group=None,
port_list=[],
ip_list=None,
connections_per_port=1,
mode="both",
update_group_args={"name": None, "action": None, "cxs": None},
_debug_on=False,
_exit_on_error=False,
_exit_on_fail=False):
super().__init__(host, port)
self.host = host
self.port = port
self.radio = radio
self.upstream_port = upstream_port
self.ssid = ssid
self.security = security
self.password = password
self.number_template = number_template
self.test_duration = test_duration
self.port_list = []
self.connections_per_port = connections_per_port
self.use_macvlans = use_macvlans
self.mode = mode.lower()
self.ip_list = ip_list
self.netmask = netmask
self.gateway = gateway
self.dhcp = dhcp
if self.use_macvlans:
if macvlan_parent is not None:
self.macvlan_parent = macvlan_parent
self.port_list = port_list
else:
if macvlan_parent is not None:
self.macvlan_parent = macvlan_parent
self.port_list = port_list
self.use_test_groups = use_test_groups
if self.use_test_groups:
if self.mode == "write":
if write_only_test_group is not None:
self.write_only_test_group = write_only_test_group
else:
raise ValueError("--write_only_test_group must be used to set test group name")
if self.mode == "read":
if read_only_test_group is not None:
self.read_only_test_group = read_only_test_group
else:
raise ValueError("--read_only_test_group must be used to set test group name")
if self.mode == "both":
if write_only_test_group is not None and read_only_test_group is not None:
self.write_only_test_group = write_only_test_group
self.read_only_test_group = read_only_test_group
else:
raise ValueError("--write_only_test_group and --read_only_test_group "
"must be used to set test group names")
self.wo_profile = self.new_fio_endp_profile()
self.mvlan_profile = self.new_mvlan_profile()
if not self.use_macvlans and len(self.port_list) > 0:
self.station_profile = self.new_station_profile()
self.station_profile.lfclient_url = self.lfclient_url
self.station_profile.ssid = self.ssid
self.station_profile.ssid_pass = self.password
self.station_profile.security = self.security
self.station_profile.number_template_ = self.number_template
self.station_profile.mode = 0
self.wo_profile.server_mount = server_mount
self.wo_profile.num_connections_per_port = connections_per_port
self.ro_profile = self.wo_profile.create_ro_profile()
if self.use_macvlans:
self.mvlan_profile.num_macvlans = int(num_ports)
self.mvlan_profile.desired_macvlans = self.port_list
self.mvlan_profile.macvlan_parent = self.macvlan_parent
self.mvlan_profile.dhcp = dhcp
self.mvlan_profile.netmask = netmask
self.mvlan_profile.first_ip_addr = first_mvlan_ip
self.mvlan_profile.gateway = gateway
self.mvlan_profile.num_macvlans = int(num_ports)
self.mvlan_profile.desired_macvlans = self.port_list
self.mvlan_profile.macvlan_parent = self.macvlan_parent
self.mvlan_profile.dhcp = dhcp
self.mvlan_profile.netmask = netmask
self.mvlan_profile.first_ip_addr = first_mvlan_ip
self.mvlan_profile.gateway = gateway
self.created_ports = []
if self.use_test_groups:
if self.mode is not None:
if self.mode == "write":
self.wo_tg_profile = self.new_test_group_profile()
self.wo_tg_profile.group_name = self.write_only_test_group
elif self.mode == "read":
self.ro_tg_profile = self.new_test_group_profile()
self.ro_tg_profile.group_name = self.read_only_test_group
elif self.mode == "both":
self.wo_tg_profile = self.new_test_group_profile()
self.ro_tg_profile = self.new_test_group_profile()
self.wo_tg_profile.group_name = self.write_only_test_group
self.ro_tg_profile.group_name = self.read_only_test_group
else:
raise ValueError("Unknown mode given ", self.mode)
else:
raise ValueError("Mode ( read, write, or both ) must be specified")
if update_group_args is not None and update_group_args['name'] is not None:
temp_tg = self.new_test_group_profile()
temp_cxs = update_group_args['cxs'].split(',')
if update_group_args['action'] == "add":
temp_tg.group_name = update_group_args['name']
if not temp_tg.check_group_exists():
temp_tg.create_group()
for cx in temp_cxs:
if "CX_" not in cx:
cx = "CX_" + cx
temp_tg.add_cx(cx)
if update_group_args['action'] == "del":
temp_tg.group_name = update_group_args['name']
if temp_tg.check_group_exists():
for cx in temp_cxs:
temp_tg.rm_cx(cx)
time.sleep(5)
self.wo_tg_exists = False
self.ro_tg_exists = False
self.wo_tg_cx_exists = False
self.ro_tg_cx_exists = False
print("Checking for pre-existing test groups and cxs")
if self.use_test_groups:
if self.mode == "write":
if self.wo_tg_profile.check_group_exists():
self.wo_tg_exists = True
if len(self.wo_tg_profile.list_cxs()) > 0:
self.wo_tg_cx_exists = True
elif self.mode == "read":
if self.ro_tg_profile.check_group_exists():
self.ro_tg_exists = True
if len(self.ro_tg_profile.list_cxs()) > 0:
self.ro_tg_cx_exists = True
elif self.mode == "both":
if self.wo_tg_profile.check_group_exists():
self.wo_tg_exists = True
if len(self.wo_tg_profile.list_cxs()) > 0:
self.wo_tg_cx_exists = True
if self.ro_tg_profile.check_group_exists():
self.ro_tg_exists = True
if len(self.ro_tg_profile.list_cxs()) > 0:
self.ro_tg_cx_exists = True
def __compare_vals(self, val_list):
passes = 0
expected_passes = 0
# print(val_list)
for item in val_list:
expected_passes += 1
# print(item)
if item[0] == 'r':
# print("TEST", item,
# val_list[item]['read-bps'],
# self.ro_profile.min_read_rate_bps,
# val_list[item]['read-bps'] > self.ro_profile.min_read_rate_bps)
if val_list[item]['read-bps'] > self.wo_profile.min_read_rate_bps:
passes += 1
else:
# print("TEST", item,
# val_list[item]['write-bps'],
# self.wo_profile.min_write_rate_bps,
# val_list[item]['write-bps'] > self.wo_profile.min_write_rate_bps)
if val_list[item]['write-bps'] > self.wo_profile.min_write_rate_bps:
passes += 1
if passes == expected_passes:
return True
else:
return False
else:
return False
def __get_values(self):
time.sleep(3)
if self.mode == "write":
cx_list = self.json_get("fileio/%s?fields=write-bps,read-bps" % (
','.join(self.wo_profile.created_cx.keys())), debug_=self.debug)
elif self.mode == "read":
cx_list = self.json_get("fileio/%s?fields=write-bps,read-bps" % (
','.join(self.ro_profile.created_cx.keys())), debug_=self.debug)
else:
cx_list = self.json_get("fileio/%s,%s?fields=write-bps,read-bps" % (
','.join(self.wo_profile.created_cx.keys()),
','.join(self.ro_profile.created_cx.keys())), debug_=self.debug)
# print(cx_list)
# print("==============\n", cx_list, "\n==============")
cx_map = {}
# pprint.pprint(cx_list)
if cx_list is not None:
cx_list = cx_list['endpoint']
for i in cx_list:
for item, value in i.items():
# print(item, value)
cx_map[self.name_to_eid(item)[2]] = {"read-bps": value['read-bps'], "write-bps": value['write-bps']}
# print(cx_map)
return cx_map
def build(self):
# Build stations
if self.use_macvlans:
print("Creating MACVLANs")
self.mvlan_profile.create(admin_down=False, sleep_time=.5, debug=self.debug)
self._pass("PASS: MACVLAN build finished")
self.created_ports += self.mvlan_profile.created_macvlans
elif not self.use_macvlans and self.ip_list is None:
self.station_profile.use_security(self.security, self.ssid, self.password)
self.station_profile.set_number_template(self.number_template)
print("Creating stations")
self.station_profile.set_command_flag("add_sta", "create_admin_down", 1)
self.station_profile.set_command_param("set_port", "report_timer", 1500)
self.station_profile.set_command_flag("set_port", "rpt_timer", 1)
self.station_profile.create(radio=self.radio, sta_names_=self.port_list, debug=self.debug)
self._pass("PASS: Station build finished")
self.created_ports += self.station_profile.station_names
if len(self.ip_list) > 0:
# print("++++++++++++++++\n", self.ip_list, "++++++++++++++++\n")
for num_port in range(len(self.port_list)):
if self.ip_list[num_port] != 0:
if self.gateway is not None and self.netmask is not None:
shelf = self.name_to_eid(self.port_list[num_port])[0]
resource = self.name_to_eid(self.port_list[num_port])[1]
port = self.name_to_eid(self.port_list[num_port])[2]
req_url = "/cli-json/set_port"
data = {
"shelf": shelf,
"resource": resource,
"port": port,
"ip_addr": self.ip_list[num_port],
"netmask": self.netmask,
"gateway": self.gateway
}
self.json_post(req_url, data)
self.created_ports.append("%s.%s.%s" % (shelf, resource, port))
else:
raise ValueError("Netmask and gateway must be specified")
print("Creating MACVLANs")
self.mvlan_profile.create(admin_down=False, sleep_time=.5, debug=self.debug)
self._pass("PASS: MACVLAN build finished")
self.created_ports += self.mvlan_profile.created_macvlans
def main():
parser = LFCliBase.create_bare_argparse(
prog='create_macvlan.py',
# formatter_class=argparse.RawDescriptionHelpFormatter,
formatter_class=argparse.RawTextHelpFormatter,
epilog='''Creates FileIO endpoints which can be NFS, CIFS or iSCSI endpoints.''',
epilog='''Creates MACVLAN endpoints.''',
description='''\
create_macvlan.py:
--------------------
Generic command layout:
./create_macvlan.py --macvlan_parent <port> --num_ports <num ports> --use_macvlans
./create_macvlan.py --macvlan_parent <port> --num_ports <num ports>
--first_mvlan_ip <first ip in series> --netmask <netmask to use> --gateway <gateway ip addr>
./create_macvlan.py --macvlan_parent eth2 --num_ports 3 --use_macvlans --first_mvlan_ip 192.168.92.13
./create_macvlan.py --macvlan_parent eth2 --num_ports 3 --first_mvlan_ip 192.168.92.13
--netmask 255.255.255.0 --gateway 192.168.92.1
./create_macvlan.py --radio 1.wiphy0 --test_duration 1m --macvlan_parent eth1 --num_ports 3 --use_macvlans
--use_ports eth1#0,eth1#1,eth1#2 --connections_per_port 2 --mode write
./create_macvlan.py --radio 1.wiphy0 --macvlan_parent eth1 --num_ports 3
--use_ports eth1#0,eth1#1,eth1#2 --connections_per_port 2
./create_macvlan.py --radio 1.wiphy0 --test_duration 1m --macvlan_parent eth1 --num_ports 3 --use_macvlans
./create_macvlan.py --radio 1.wiphy0 --macvlan_parent eth1 --num_ports 3
--first_mvlan_ip 10.40.3.100 --netmask 255.255.240.0 --gateway 10.40.0.1
--use_test_groups --write_only_test_group test_wo --read_only_test_group test_ro
--add_to_group test_wo
./create_macvlan.py --radio 1.wiphy0 --test_duration 1m --macvlan_parent eth1 --num_ports 3 --use_macvlans
./create_macvlan.py --radio 1.wiphy0 --macvlan_parent eth1 --num_ports 3
--use_ports eth1#0=10.40.3.103,eth1#1,eth1#2 --connections_per_port 2
--netmask 255.255.240.0 --gateway 10.40.0.1
''')
parser.add_argument('--num_stations', help='Number of stations to create', default=0)
parser.add_argument('--radio', help='radio EID, e.g: 1.wiphy2')
parser.add_argument('--ssid', help='SSID for stations to associate to')
parser.add_argument('--passwd', '--password', '--key', help='WiFi passphrase/password/key')
parser.add_argument('--security', help='security type to use for ssid { wep | wpa | wpa2 | wpa3 | open }')
parser.add_argument('-u', '--upstream_port',
help='non-station port that generates traffic: <resource>.<port>, e.g: 1.eth1',
default='1.eth1')
parser.add_argument('--test_duration', help='sets the duration of the test', default="5m")
parser.add_argument('--server_mount', help='--server_mount The server to mount, ex: 192.168.100.5/exports/test1',
default="10.40.0.1:/var/tmp/test")
help='non-station port that generates traffic: <resource>.<port>, e.g: 1.eth1',
default='1.eth1')
parser.add_argument('--macvlan_parent', help='specifies parent port for macvlan creation', default=None)
parser.add_argument('--first_port', help='specifies name of first port to be used', default=None)
parser.add_argument('--num_ports', help='number of ports to create', default=1)
@@ -328,36 +108,13 @@ Generic command layout:
parser.add_argument('--use_ports', help='list of comma separated ports to use with ips, \'=\' separates name and ip'
'{ port_name1=ip_addr1,port_name1=ip_addr2 }. '
'Ports without ips will be left alone', default=None)
parser.add_argument('--use_macvlans', help='will create macvlans', action='store_true', default=False)
parser.add_argument('--first_mvlan_ip', help='specifies first static ip address to be used or dhcp', default=None)
parser.add_argument('--netmask', help='specifies netmask to be used with static ip addresses', default=None)
parser.add_argument('--gateway', help='specifies default gateway to be used with static addressing', default=None)
parser.add_argument('--use_test_groups', help='will use test groups to start/stop instead of single endps/cxs',
action='store_true', default=False)
parser.add_argument('--read_only_test_group', help='specifies name to use for read only test group', default=None)
parser.add_argument('--write_only_test_group', help='specifies name to use for write only test group', default=None)
parser.add_argument('--mode', help='write,read,both', default='both', type=str)
tg_group = parser.add_mutually_exclusive_group()
tg_group.add_argument('--add_to_group', help='name of test group to add cxs to', default=None)
tg_group.add_argument('--del_from_group', help='name of test group to delete cxs from', default=None)
parser.add_argument('--cxs', help='list of cxs to add/remove depending on use of --add_to_group or --del_from_group'
, default=None)
args = parser.parse_args()
update_group_args = {
"name": None,
"action": None,
"cxs": None
}
if args.add_to_group is not None and args.cxs is not None:
update_group_args['name'] = args.add_to_group
update_group_args['action'] = "add"
update_group_args['cxs'] = args.cxs
elif args.del_from_group is not None and args.cxs is not None:
update_group_args['name'] = args.del_from_group
update_group_args['action'] = "del"
update_group_args['cxs'] = args.cxs
port_list = []
ip_list = []
if args.first_port is not None and args.use_ports is not None:
@@ -365,17 +122,17 @@ Generic command layout:
if (args.num_ports is not None) and (int(args.num_ports) > 0):
start_num = int(args.first_port[3:])
num_ports = int(args.num_ports)
port_list = LFUtils.port_name_series(prefix="sta", start_id=start_num, end_id=start_num+num_ports-1,
padding_number=10000,
radio=args.radio)
port_list = LFUtils.port_name_series(prefix="sta", start_id=start_num, end_id=start_num + num_ports - 1,
padding_number=10000,
radio=args.radio)
else:
if (args.num_ports is not None) and args.macvlan_parent is not None and (int(args.num_ports) > 0) \
and args.macvlan_parent in args.first_port:
start_num = int(args.first_port[args.first_port.index('#')+1:])
and args.macvlan_parent in args.first_port:
start_num = int(args.first_port[args.first_port.index('#') + 1:])
num_ports = int(args.num_ports)
port_list = LFUtils.port_name_series(prefix=args.macvlan_parent+"#", start_id=start_num,
end_id=start_num+num_ports-1, padding_number=100000,
radio=args.radio)
port_list = LFUtils.port_name_series(prefix=args.macvlan_parent + "#", start_id=start_num,
end_id=start_num + num_ports - 1, padding_number=100000,
radio=args.radio)
else:
raise ValueError("Invalid values for num_ports [%s], macvlan_parent [%s], and/or first_port [%s].\n"
"first_port must contain parent port and num_ports must be greater than 0"
@@ -383,14 +140,9 @@ Generic command layout:
else:
if args.use_ports is None:
num_ports = int(args.num_ports)
if not args.use_macvlans:
port_list = LFUtils.port_name_series(prefix="sta", start_id=0, end_id=num_ports - 1,
padding_number=10000,
radio=args.radio)
else:
port_list = LFUtils.port_name_series(prefix=args.macvlan_parent + "#", start_id=0,
end_id=num_ports - 1, padding_number=100000,
radio=args.radio)
port_list = LFUtils.port_name_series(prefix=args.macvlan_parent + "#", start_id=0,
end_id=num_ports - 1, padding_number=100000,
radio=args.radio)
else:
temp_list = args.use_ports.split(',')
for port in temp_list:
@@ -413,34 +165,25 @@ Generic command layout:
# print(port_list)
# exit(1)
ip_test = FileIOTest(args.mgr,
args.mgr_port,
ssid=args.ssid,
password=args.passwd,
security=args.security,
port_list=port_list,
ip_list=ip_list,
test_duration=args.test_duration,
upstream_port=args.upstream_port,
_debug_on=args.debug,
macvlan_parent=args.macvlan_parent,
use_macvlans=args.use_macvlans,
first_mvlan_ip=args.first_mvlan_ip,
netmask=args.netmask,
gateway=args.gateway,
dhcp=dhcp,
num_ports=args.num_ports,
use_test_groups=args.use_test_groups,
write_only_test_group=args.write_only_test_group,
read_only_test_group=args.read_only_test_group,
update_group_args = update_group_args,
connections_per_port=args.connections_per_port,
mode=args.mode
# want a mount options param
)
ip_test = CreateMacVlan(args.mgr,
args.mgr_port,
port_list=port_list,
ip_list=ip_list,
upstream_port=args.upstream_port,
_debug_on=args.debug,
macvlan_parent=args.macvlan_parent,
first_mvlan_ip=args.first_mvlan_ip,
netmask=args.netmask,
gateway=args.gateway,
dhcp=dhcp,
num_ports=args.num_ports,
connections_per_port=args.connections_per_port,
# want a mount options param
)
ip_test.build()
print('Created %s MacVlan connections' % args.num_ports)
if __name__ == "__main__":
main()
main()

py-scripts/create_qvlan.py Executable file
@@ -0,0 +1,161 @@
#!/usr/bin/env python3
import sys
import os
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
if 'py-json' not in sys.path:
sys.path.append(os.path.join(os.path.abspath('..'), 'py-json'))
import argparse
from LANforge.lfcli_base import LFCliBase
from LANforge.LFUtils import *
from LANforge.add_file_endp import *
from LANforge import LFUtils
import argparse
from realm import Realm
class CreateQVlan(Realm):
def __init__(self,
host="localhost",
port=8080,
qvlan_parent=None,
num_ports=1,
dhcp=True,
netmask=None,
first_qvlan_ip=None,
gateway=None,
port_list=[],
ip_list=[],
exit_on_error=False,
debug=False):
super().__init__(host, port)
self.host = host
self.port = port
self.qvlan_parent = qvlan_parent
self.debug = debug
self.port_list = port_list
self.ip_list = ip_list
self.exit_on_error = exit_on_error
self.qvlan_profile = self.new_qvlan_profile()
self.qvlan_profile.num_qvlans = int(num_ports)
self.qvlan_profile.desired_qvlans = self.port_list
self.qvlan_profile.qvlan_parent = self.qvlan_parent
self.qvlan_profile.dhcp = dhcp
self.qvlan_profile.netmask = netmask
self.qvlan_profile.first_ip_addr = first_qvlan_ip
self.qvlan_profile.gateway = gateway
self.qvlan_profile.dhcp = dhcp
def build(self):
print("Creating QVLAN stations")
self.qvlan_profile.create(admin_down=False, sleep_time=.5, debug=self.debug)
def main():
parser = LFCliBase.create_bare_argparse(
prog='create_qvlan.py',
formatter_class=argparse.RawTextHelpFormatter,
epilog='''Creates Q-VLAN stations attached to the Eth port of the user's choice.''',
description='''\
create_qvlan.py:
---------------------
Generic command ''')
parser.add_argument('--radio', help='radio EID, e.g: 1.wiphy2')
parser.add_argument('--qvlan_parent', help='specifies parent port for qvlan creation', default=None)
parser.add_argument('--first_port', help='specifies name of first port to be used', default=None)
parser.add_argument('--num_ports', help='number of ports to create', default=1)
parser.add_argument('--first_qvlan_ip', help='specifies first static ip address to be used or dhcp', default=None)
parser.add_argument('--netmask', help='specifies netmask to be used with static ip addresses', default=None)
parser.add_argument('--gateway', help='specifies default gateway to be used with static addressing', default=None)
parser.add_argument('--use_ports',
help='list of comma separated ports to use with ips, \'=\' separates name and ip { port_name1=ip_addr1,port_name1=ip_addr2 }. Ports without ips will be left alone',
default=None)
tg_group = parser.add_mutually_exclusive_group()
tg_group.add_argument('--add_to_group', help='name of test group to add cxs to', default=None)
parser.add_argument('--cxs', help='list of cxs to add/remove depending on use of --add_to_group or --del_from_group'
, default=None)
parser.add_argument('--use_qvlans', help='will create qvlans', action='store_true', default=False)
args = parser.parse_args()
update_group_args = {
"name": None,
"action": None,
"cxs": None
}
# update_group_args['name'] =
if args.first_qvlan_ip in ["dhcp", "DHCP"]:
dhcp = True
else:
dhcp = False
update_group_args['action'] = "add"
update_group_args['cxs'] = args.cxs
port_list = []
ip_list = []
if args.first_port is not None and args.use_ports is not None:
if args.first_port.startswith("sta"):
if (args.num_ports is not None) and (int(args.num_ports) > 0):
start_num = int(args.first_port[3:])
num_ports = int(args.num_ports)
port_list = LFUtils.port_name_series(prefix="sta", start_id=start_num, end_id=start_num + num_ports - 1,
padding_number=10000,
radio=args.radio)
print(1)
else:
if (args.num_ports is not None) and args.qvlan_parent is not None and (int(args.num_ports) > 0) \
and args.qvlan_parent in args.first_port:
start_num = int(args.first_port[args.first_port.index('#') + 1:])
num_ports = int(args.num_ports)
port_list = LFUtils.port_name_series(prefix=args.qvlan_parent + "#", start_id=start_num,
end_id=start_num + num_ports - 1, padding_number=10000,
radio=args.radio)
print(2)
else:
raise ValueError("Invalid values for num_ports [%s], qvlan_parent [%s], and/or first_port [%s].\n"
"first_port must contain parent port and num_ports must be greater than 0"
% (args.num_ports, args.qvlan_parent, args.first_port))
else:
if args.use_ports is None:
num_ports = int(args.num_ports)
port_list = LFUtils.port_name_series(prefix=args.qvlan_parent + "#", start_id=1,
end_id=num_ports, padding_number=10000,
radio=args.radio)
print(3)
else:
temp_list = args.use_ports.split(',')
for port in temp_list:
port_list.append(port.split('=')[0])
if '=' in port:
ip_list.append(port.split('=')[1])
else:
ip_list.append(0)
if len(port_list) != len(ip_list):
raise ValueError(temp_list, " ports must have matching ip addresses!")
print(port_list)
print(ip_list)
create_qvlan = CreateQVlan(args.mgr,
args.mgr_port,
qvlan_parent=args.qvlan_parent,
num_ports=args.num_ports,
dhcp=dhcp,
netmask=args.netmask,
first_qvlan_ip=args.first_qvlan_ip,
gateway=args.gateway,
port_list=port_list,
ip_list=ip_list,
debug=args.debug)
create_qvlan.build()
print('Created %s QVLAN stations' % num_ports)
if __name__ == "__main__":
main()


@@ -17,7 +17,6 @@ if 'py-json' not in sys.path:
from LANforge.lfcli_base import LFCliBase
from LANforge import LFUtils
from realm import Realm
import time
import pprint
@@ -91,7 +90,6 @@ def main():
--------------------
Command example:
./create_station.py
--upstream_port eth1
--radio wiphy0
--num_stations 3
--security open
@@ -131,6 +129,7 @@ Command example:
_debug_on=args.debug)
create_station.build()
print('Created %s stations' % num_sta)
if __name__ == "__main__":
main()


@@ -0,0 +1,137 @@
#!/usr/bin/env python3
"""
Script for creating a variable number of stations.
"""
import sys
import os
import argparse
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
if 'py-json' not in sys.path:
sys.path.append(os.path.join(os.path.abspath('..'), 'py-json'))
from LANforge.lfcli_base import LFCliBase
from realm import Realm
import pandas as pd
import pprint
class CreateStation(Realm):
def __init__(self,
_ssid=None,
_security=None,
_password=None,
_host=None,
_port=None,
_sta_list=None,
_number_template="00000",
_radio="wiphy0",
_proxy_str=None,
_debug_on=False,
_up=True,
_exit_on_error=False,
_exit_on_fail=False):
super().__init__(_host,
_port)
self.host = _host
self.port = _port
self.ssid = _ssid
self.security = _security
self.password = _password
self.sta_list = _sta_list
self.radio = _radio
self.timeout = 120
self.number_template = _number_template
self.debug = _debug_on
self.up = _up
self.station_profile = self.new_station_profile()
self.station_profile.lfclient_url = self.lfclient_url
self.station_profile.ssid = self.ssid
self.station_profile.ssid_pass = self.password
self.station_profile.security = self.security
self.station_profile.number_template_ = self.number_template
self.station_profile.mode = 0
if self.debug:
print("----- Station List ----- ----- ----- ----- ----- ----- \n")
pprint.pprint(self.sta_list)
print("---- ~Station List ----- ----- ----- ----- ----- ----- \n")
def build(self):
# Build stations
self.station_profile.use_security(self.security, self.ssid, self.password)
self.station_profile.set_number_template(self.number_template)
print("Creating stations")
self.station_profile.set_command_flag("add_sta", "create_admin_down", 1)
self.station_profile.set_command_param("set_port", "report_timer", 1500)
self.station_profile.set_command_flag("set_port", "rpt_timer", 1)
self.station_profile.create(radio=self.radio, sta_names_=self.sta_list, debug=self.debug)
if self.up:
self.station_profile.admin_up()
self._pass("PASS: Station build finished")
def main():
required=[]
required.append({'name':'--df','help':'Which file you want to build stations off of?'})
parser = LFCliBase.create_basic_argparse(
prog='create_station_from_df.py',
formatter_class=argparse.RawTextHelpFormatter,
epilog='''\
Create stations
''',
description='''\
create_station.py
--------------------
Command example:
./create_station_from_df.py
--upstream_port eth1
--df df.csv
--security open
--ssid netgear
--passwd BLANK
--debug
''',
more_required=required)
args = parser.parse_args()
df=pd.read_csv(args.df)
unique=df[['radio','ssid','passwd','security']].drop_duplicates().reset_index(drop=True)
for item in unique.index:
uniquedf=unique.iloc[item]
df1=df.merge(pd.DataFrame(uniquedf).transpose(),on=['radio','ssid','passwd','security'])
try:
radio=uniquedf['radio']
except:
radio=args.radio
station_list=df1['station']
try:
ssid=uniquedf['ssid']
passwd=uniquedf['passwd']
security=uniquedf['security']
except:
ssid=args.ssid
passwd=args.passwd
security=args.security
create_station = CreateStation(_host=args.mgr,
_port=args.mgr_port,
_ssid=ssid,
_password=passwd,
_security=security,
_sta_list=station_list,
_radio=radio,
_proxy_str=args.proxy,
_debug_on=args.debug)
create_station.build()
print('Created %s stations' % len(unique.index))
if __name__ == "__main__":
main()


@@ -59,7 +59,7 @@ class CreateVAP(Realm):
self.vap_profile.dhcp = self.dhcp
if self.debug:
print("----- VAP List ----- ----- ----- ----- ----- ----- \n")
pprint.pprint(self.sta_list)
pprint.pprint(self.vap_list)
print("---- ~VAP List ----- ----- ----- ----- ----- ----- \n")

py-scripts/create_vr.py Executable file

@@ -0,0 +1,182 @@
#!/usr/bin/env python3
"""
Script for creating a variable number of bridges.
"""
import os
import sys
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
if 'py-json' not in sys.path:
sys.path.append(os.path.join(os.path.abspath('..'), 'py-json'))
from LANforge.lfcli_base import LFCliBase
from LANforge import LFUtils
from realm import Realm
import time
from pprint import pprint
class CreateVR(Realm):
def __init__(self,
lfclient_host="localhost",
lfclient_port=8080,
debug=False,
# resource=1, # USE name=1.2.vr0 convention instead
vr_name=None,
ports_list=(),
services_list=(),
_exit_on_error=False,
_exit_on_fail=False,
_proxy_str=None,
_capture_signal_list=()):
super().__init__(lfclient_host=lfclient_host,
lfclient_port=lfclient_port,
debug_=debug,
_exit_on_error=_exit_on_error,
_exit_on_fail=_exit_on_fail,
_proxy_str=_proxy_str,
_capture_signal_list=_capture_signal_list)
eid_name = self.name_to_eid(vr_name)
self.vr_name = eid_name
self.ports_list = ports_list
self.services_list = services_list
self.vr_profile = self.new_vr_profile()
def clean(self):
if (self.vr_name is None) or (self.vr_profile.vr_eid is None) or (self.vr_profile.vr_eid == ""):
print("No vr_eid to clean")
return
self.rm_port("1.1.rd90a", debug_=self.debug)
self.rm_port("1.1.rd90b", debug_=self.debug)
self.wait_until_ports_disappear(sta_list=["1.1.rd90a", "1.1.rd90b"],
debug_=self.debug)
if (self.vr_profile.vr_eid is not None) \
and (self.vr_profile.vr_eid[1] is not None) \
and (self.vr_profile.vr_eid[2] is not None):
self.vr_profile.cleanup(debug=self.debug)
if (self.vr_name is not None) \
and (self.vr_name[1] is not None) \
and (self.vr_name[2] is not None):
data = {
"shelf": 1,
"resource": self.vr_name[1],
"router_name": self.vr_name[2]
}
self.json_post("/cli-json/rm_vr", data, debug_=self.debug)
time.sleep(1)
self.json_post("/cli-json/nc_show_vr", {
"shelf": 1,
"resource": self.vr_name[1],
"router": "all"
}, debug_=self.debug)
self.json_post("/cli-json/nc_show_vrcx", {
"shelf": 1,
"resource": self.vr_name[1],
"cx_name": "all"
}, debug_=self.debug)
def build(self):
self.vr_profile.apply_netsmith(self.vr_name[1], delay=5, debug=self.debug)
self.json_post("/cli-json/add_rdd", {
"shelf": 1,
"resource": self.vr_name[1],
"port": "rd90a",
"peer_ifname": "rd90b",
"report_timer": "3000"
})
self.json_post("/cli-json/add_rdd", {
"shelf": 1,
"resource": self.vr_name[1],
"port": "rd90b",
"peer_ifname": "rd90a",
"report_timer": "3000"
})
self.wait_until_ports_appear(sta_list=["1.1.rd90a", "1.1.rd90b"], debug_=self.debug)
self.vr_profile.vrcx_list(resource=self.vr_name[1], do_sync=True) # do_sync
self.vr_profile.create(vr_name=self.vr_name, debug=self.debug)
self.vr_profile.sync_netsmith(resource=self.vr_name[1], debug=self.debug)
self._pass("created router")
def start(self):
"""
Move a vrcx into a router and then move it out
:return: void
"""
# move rd90a into router
self.vr_profile.refresh_netsmith(resource=self.vr_name[1], debug=self.debug)
if self.debug:
pprint(("vr_eid", self.vr_name))
self.vr_profile.wait_until_vrcx_appear(resource=self.vr_name[1], name_list=["rd90a", "rd90b"])
self.vr_profile.add_vrcx(vr_eid=self.vr_name, connection_name_list="rd90a", debug=True)
self.vr_profile.refresh_netsmith(resource=self.vr_name[1], debug=self.debug)
# test to make sure that vrcx is inside vr we expect
self.vr_profile.vrcx_list(resource=self.vr_name[1], do_sync=True)
vr_list = self.vr_profile.router_list(resource=self.vr_name[1], do_refresh=True)
router = self.vr_profile.find_cached_router(resource=self.vr_name[1], router_name=self.vr_name[2])
pprint(("cached router 120: ", router))
router_eid = LFUtils.name_to_eid(router["eid"])
pprint(("router eid 122: ", router_eid))
full_router = self.json_get("/vr/1/%s/%s/%s" %(router_eid[0], router_eid[1], self.vr_name[2]), debug_=True)
pprint(("full router: ", full_router))
time.sleep(5)
if router is None:
self._fail("Unable to find router after vrcx move "+self.vr_name)
self.exit_fail()
def stop(self):
pass
def main():
parser = LFCliBase.create_bare_argparse(
prog=__file__,
description="""\
{f}
--------------------
Command example:
{f} --vr_name 1.vr0 --ports 1.br0,1.rdd0a --services 1.br0=dhcp,nat --services 1.vr0=radvd
{f} --vr_name 2.vr0 --ports 2.br0,2.vap2 --services
--debug
""".format(f=__file__))
required = parser.add_argument_group('required arguments')
required.add_argument('--vr_name', '--vr_names', required=True,
help='EID of virtual router, like 1.2.vr0')
optional = parser.add_argument_group('optional arguments')
optional.add_argument('--ports', default=None, required=False,
help='Comma separated list of ports to add to virtual router')
optional.add_argument('--services', default=None, required=False,
help='Add router services to a port, "br0=nat,dhcp"')
args = parser.parse_args()
create_vr = CreateVR(lfclient_host=args.mgr,
lfclient_port=args.mgr_port,
vr_name=args.vr_name,
ports_list=args.ports,
services_list=args.services,
debug=args.debug,
_exit_on_error=True,
_exit_on_fail=True)
create_vr.clean()
create_vr.build()
create_vr.start()
# create_vr.monitor()
create_vr.stop()
create_vr.clean()
print('Created Virtual Router')
if __name__ == "__main__":
main()
#

py-scripts/csv_convert.py Executable file

@@ -0,0 +1,112 @@
#!/usr/bin/env python3
# This program is used to read in a LANforge Dataplane CSV file and output
# a csv file that works with a customer's RvRvO visualization tool.
#
# Example use case:
#
# Read in ~/text-csv-0-candela.csv, output is stored at outfile.csv
# ./py-scripts/csv_convert.py -i ~/text-csv-0-candela.csv
import sys
import os
import argparse
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
if 'py-json' not in sys.path:
sys.path.append(os.path.join(os.path.abspath('..'), 'py-json'))
class CSVParcer():
def __init__(self,csv_infile=None,csv_outfile=None,ddb=False):
idx = 0
i_atten = -1
i_rotation = -1
i_rxbps = -1
fpo = open(csv_outfile, "w")
with open(csv_infile) as fp:
line = fp.readline()
if not line:
exit(1)
# Read in the initial line; this is the CSV header. Parse it to find the column indices for
# the columns we care about.
x = line.split(",")
cni = 0
for cn in x:
if (cn == "Atten"):
i_atten = cni
if (cn == "Rotation"):
i_rotation = cni
if (cn == "Rx-Bps"):
i_rxbps = cni
cni += 1
# Write out the header for the new file.
fpo.write("Step Index,Position [Deg],Attenuation [dB],Traffic Pair 1 Throughput [Mbps]\n")
# Read the rest of the input lines, processing one at a time. Convert the columns as
# needed, and write out new data to the output file.
line = fp.readline()
step_i = 0
while line:
x = line.split(",")
mbps_data = x[i_rxbps]
mbps_array = mbps_data.split(" ")
mbps_val = float(mbps_array[0])
if (mbps_array[1] == "Gbps"):
mbps_val *= 1000
if (mbps_array[1] == "Kbps"):
mbps_val /= 1000
if (mbps_array[1] == "bps"):
mbps_val /= 1000000
attenv = float(x[i_atten])
if ddb:
attenv /= 10
fpo.write("%s,%s,%s,%s\n" % (step_i, x[i_rotation], attenv, mbps_val))
line = fp.readline()
step_i += 1
def main():
#debug_on = False
parser = argparse.ArgumentParser(
prog='csv_convert.py',
formatter_class=argparse.RawTextHelpFormatter,
epilog='''\
Useful Information:
''',
description='''
csv_convert.py:
converts the candela csv into the comcast csv and xlsx,
renames input file from candela to comcast if not outfile given
''')
# for testing parser.add_argument('-i','--infile', help="input file of csv data", default='text-csv-0-candela.csv')
parser.add_argument('-i','--infile', help="input file of csv data", required=True)
parser.add_argument('-d','--ddb', help="Specify attenuation units are in ddb in source file",
action='store_true', default=False)
parser.add_argument('-o','--outfile', help="output file in .csv format", default='outfile.csv')
args = parser.parse_args()
csv_outfile_name = None
if args.infile:
csv_infile_name = args.infile
if args.outfile:
csv_outfile_name = args.outfile
print("infile: %s outfile: %s convert-ddb: %s"%(csv_infile_name, csv_outfile_name, args.ddb))
CSVParcer(csv_infile_name, csv_outfile_name, args.ddb)
if __name__ == "__main__":
main()
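The rate handling in `CSVParcer` above normalizes the `Rx-Bps` column into Mbps before writing each output row. A minimal standalone sketch of that conversion (the helper name `to_mbps` is illustrative, not part of the commit):

```python
def to_mbps(rate_str):
    """Convert a 'value units' string such as '1.5 Gbps' into Mbps."""
    value, units = rate_str.split(" ")
    mbps = float(value)
    if units == "Gbps":
        mbps *= 1000
    elif units == "Kbps":
        mbps /= 1000
    elif units == "bps":
        mbps /= 1000000
    return mbps  # an 'Mbps' input falls through unchanged
```

Pulling the unit arithmetic into one place like this keeps the per-row loop focused on column selection.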

py-scripts/csv_to_influx.py Executable file

@@ -0,0 +1,161 @@
#!/usr/bin/env python3
# Copies the data from a CSV file from the KPI file generated from a Wifi Capacity test to an Influx database
# The CSV requires three columns in order to work: Date, test details, and numeric-score.
# Date is a unix timestamp, test details is the variable each datapoint is measuring, and numeric-score is the value for that timepoint and variable.
import sys
import os
from pprint import pprint
from influx2 import RecordInflux
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
if 'py-json' not in sys.path:
sys.path.append(os.path.join(os.path.abspath('..'), 'py-json'))
import argparse
from realm import Realm
import datetime
def influx_add_parser_args(parser):
parser.add_argument('--influx_host', help='Hostname for the Influx database', default=None)
parser.add_argument('--influx_port', help='IP Port for the Influx database', default=8086)
parser.add_argument('--influx_org', help='Organization for the Influx database', default=None)
parser.add_argument('--influx_token', help='Token for the Influx database', default=None)
parser.add_argument('--influx_bucket', help='Name of the Influx bucket', default=None)
parser.add_argument('--influx_tag', action='append', nargs=2,
help='--influx_tag <key> <val> Can add more than one of these.', default=[])
class CSVtoInflux(Realm):
def __init__(self,
lfclient_host="localhost",
lfclient_port=8080,
debug=False,
_exit_on_error=False,
_exit_on_fail=False,
_proxy_str=None,
_capture_signal_list=[],
influxdb=None,
_influx_tag=[],
target_csv=None):
super().__init__(lfclient_host=lfclient_host,
lfclient_port=lfclient_port,
debug_=debug,
_exit_on_error=_exit_on_error,
_exit_on_fail=_exit_on_fail,
_proxy_str=_proxy_str,
_capture_signal_list=_capture_signal_list)
self.influxdb = influxdb
self.target_csv = target_csv
self.influx_tag = _influx_tag
# Submit data to the influx db if configured to do so.
def post_to_influx(self):
with open(self.target_csv) as fp:
line = fp.readline()
line = line.split('\t')
# indexes tell us where in the CSV our data is located. We do it this way so that even if the columns are moved around, as long as they are present, the script will still work.
numeric_score_index = line.index('numeric-score')
test_id_index = line.index('test-id')
date_index = line.index('Date')
test_details_index = line.index('test details')
short_description_index = line.index('short-description')
graph_group_index = line.index('Graph-Group')
units_index = line.index('Units')
line = fp.readline()
while line:
line = line.split('\t') #split the line by tabs to separate each item in the string
date = line[date_index]
date = datetime.datetime.utcfromtimestamp(int(date) / 1000).isoformat() #convert to datetime so influx can read it, this is required
numeric_score = line[numeric_score_index]
numeric_score = float(numeric_score) # convert to float; InfluxDB cannot accept strings for numeric fields
test_details = line[test_details_index]
short_description = line[short_description_index]
test_id = line[test_id_index]
tags = dict()
tags['script'] = line[test_id_index]
tags['short-description'] = line[short_description_index]
tags['test_details'] = line[test_details_index]
tags['Graph-Group'] = line[graph_group_index]
tags['Units'] = line[units_index]
for item in self.influx_tag: # Every item in the influx_tag command needs to be added to the tags variable
tags[item[0]] = item[1]
self.influxdb.post_to_influx(short_description, numeric_score, tags, date)
line = fp.readline()
#influx wants to get data in the following format:
# variable name, value, tags, date
# total-download-mbps-speed-for-the-duration-of-this-iteration 171.085494 {'script': 'WiFi Capacity'} 2021-04-14T19:04:04.902000
def main():
lfjson_host = "localhost"
lfjson_port = 8080
endp_types = "lf_udp"
debug = False
parser = argparse.ArgumentParser(
prog='test_l3_longevity.py',
# formatter_class=argparse.RawDescriptionHelpFormatter,
formatter_class=argparse.RawTextHelpFormatter,
epilog='''
''',
description='''\
csv_to_influx.py:
--------------------
Summary :
----------
Copies the data from a CSV file generated by a wifi capacity test to an influx database.
Column names are designed for the KPI file generated by our Wifi Capacity Test.
A user can of course change the column names to match these in order to input any csv file.
The CSV file needs to have the following columns:
--date - which is a UNIX timestamp
--test details - which is the variable being measured by the test
--numeric-score - which is the value for that variable at that point in time.
Generic command layout:
-----------------------
python .\\csv_to_influx.py
Command:
python3 csv_to_influx.py --influx_host localhost --influx_org Candela --influx_token random_token --influx_bucket lanforge
--target_csv kpi.csv
''')
influx_add_parser_args(parser)
# This argument is specific to this script, so not part of the generic influxdb parser args
# method above.
parser.add_argument('--target_csv', help='CSV file to record to influx database', default="")
args = parser.parse_args()
influxdb = RecordInflux(_lfjson_host=lfjson_host,
_lfjson_port=lfjson_port,
_influx_host=args.influx_host,
_influx_port=args.influx_port,
_influx_org=args.influx_org,
_influx_token=args.influx_token,
_influx_bucket=args.influx_bucket)
csvtoinflux = CSVtoInflux(influxdb=influxdb,
target_csv=args.target_csv,
_influx_tag=args.influx_tag)
csvtoinflux.post_to_influx()
if __name__ == "__main__":
main()
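`post_to_influx` above locates each KPI column by header name rather than by fixed position, which is why reordered columns do not break the importer. That lookup can be sketched as a small helper (the name `column_indices` is illustrative, not part of the commit):

```python
def column_indices(header_line, wanted):
    """Map each wanted column name to its index in a tab-separated header line."""
    cols = header_line.rstrip("\n").split("\t")
    return {name: cols.index(name) for name in wanted}
```

A `ValueError` from `cols.index` then flags a missing required column immediately, instead of silently reading the wrong field later.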


@@ -0,0 +1,81 @@
#!/usr/bin/env python3
"""download_test.py will do lf_report::add_kpi(tags, 'throughput-download-bps', $my_value);"""
import sys
import os
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
if 'py-json' not in sys.path:
sys.path.append(os.path.join(os.path.abspath('..'), 'py-json'))
from LANforge.lfcli_base import LFCliBase
from influx import RecordInflux
from realm import Realm
import argparse
class DownloadTest(Realm):
def __init__(self,
_sta_list=None,
_ssid=None,
_password=None,
_security=None,
_host=None,
_port=None,
):
super().__init__(_host,
_port)
self.host = _host
self.ssid=_ssid
self.security = _security
self.password = _password
self.sta_list= _sta_list
def main():
parser = LFCliBase.create_bare_argparse(
prog='download_test.py',
formatter_class=argparse.RawTextHelpFormatter,
epilog='''
Download throughput test''',
)
parser.add_argument('--influx_user', help='Username for your Influx database', required=True)
parser.add_argument('--influx_passwd', help='Password for your Influx database', required=True)
parser.add_argument('--influx_db', help='Name of your Influx database', required=True)
parser.add_argument('--longevity', help='How long you want to gather data', default='4h')
parser.add_argument('--device', help='Device to monitor', action='append', required=True)
parser.add_argument('--monitor_interval', help='How frequently you want to append to your database', default='5s')
parser.add_argument('--target_kpi', help='Monitor only selected columns', action='append', default=[])
args = parser.parse_args()
num_sta = 2
if (args.num_stations is not None) and (int(args.num_stations) > 0):
num_stations_converted = int(args.num_stations)
num_sta = num_stations_converted
station_list = LFUtils.port_name_series(prefix="sta",
start_id=0,
end_id=num_sta-1,
padding_number=10000,
radio=args.radio)
monitor_interval = LFCliBase.parse_time(args.monitor_interval).total_seconds()
longevity = LFCliBase.parse_time(args.longevity).total_seconds()
grapher = DownloadTest(_host=args.mgr,
_port=args.mgr_port,
_influx_db=args.influx_db,
_influx_user=args.influx_user,
_influx_passwd=args.influx_passwd,
_longevity=longevity,
_devices=args.device,
_monitor_interval=monitor_interval,
_target_kpi=args.target_kpi,
_ssid=args.ssid,
_password=args.passwd,
)
if __name__ == "__main__":
main()

py-scripts/event_breaker.py Executable file

@@ -0,0 +1,131 @@
#!/usr/bin/env python3
"""
This file is intended to expose concurrency
problems in the /events/ URL handler by querying events rapidly.
Please use concurrently with event_flood.py.
"""
import sys
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
if 'py-json' not in sys.path:
sys.path.append('../py-json')
import argparse
from LANforge.lfcli_base import LFCliBase
from realm import Realm
import datetime
from datetime import datetime
import time
from time import sleep
import pprint
class EventBreaker(Realm):
def __init__(self, host, port,
duration=None,
_debug_on=False,
_exit_on_error=False,
_exit_on_fail=False):
super().__init__(host, port)
self.counter = 0
self.test_duration=duration
if (self.test_duration is None):
raise ValueError("run wants numeric run_duration_sec")
def create(self):
pass
def run(self):
now = datetime.now()
now_ms = 0
end_time = self.parse_time(self.test_duration) + now
client_time_ms = 0
prev_client_time_ms = 0
start_loop_time_ms = 0
loop_time_ms = 0
prev_loop_time_ms = 0
num_events = 0
prev_num_events = 0
bad_events = []
while datetime.now() < end_time:
bad_events = []
start_loop_time_ms = int(self.get_milliseconds(datetime.now()))
print ('\r', end='')
#prev_loop_time_ms = loop_time_ms
# loop_time_ms = self.get_milliseconds(datetime.now())
prev_client_time_ms = client_time_ms
response = self.json_get("/events/all")
#pprint.pprint(response)
if "events" not in response:
pprint.pprint(response)
raise AssertionError("no events in response")
events = response["events"]
prev_num_events = num_events
num_events = len(events)
if num_events != prev_num_events:
print("%s events Δ%s"%(num_events, (num_events - prev_num_events)))
if "candela.lanforge.HttpEvents" in response:
client_time_ms = float(response["candela.lanforge.HttpEvents"]["duration"])
# print(" client_time %d"%client_time_ms)
if abs(prev_client_time_ms - client_time_ms) > 30:
print(" client time %d ms Δ%d"%(client_time_ms, (prev_client_time_ms - client_time_ms)),
end='')
event_index = 0
for record in events:
for k in record.keys():
if record[k] is None:
print (' ☠no %s'%k, end='')
continue
# pprint.pprint( record[k])
if "NA" == record[k]["event"] \
or "NA" == record[k]["name"] \
or "NA" == record[k]["type"] \
or "NA" == record[k]["priority"]:
bad_events.append(int(k))
pprint.pprint(record[k])
# print( " ☠id[%s]☠"%k, end='')
if len(bad_events) > 0:
pprint.pprint(events[event_index])
print( " ☠id[%s]☠"%bad_events, end='')
exit(1)
event_index += 1
prev_loop_time_ms = loop_time_ms
now_ms = int(self.get_milliseconds(datetime.now()))
loop_time_ms = now_ms - start_loop_time_ms
if (prev_loop_time_ms - loop_time_ms) > 15:
print(" loop time %d ms Δ%d "
%(loop_time_ms, (prev_loop_time_ms - loop_time_ms)),
end='')
if (prev_loop_time_ms - loop_time_ms) > 30:
print("")
def cleanup(self):
pass
def main():
parser = LFCliBase.create_bare_argparse(
prog='event_breaker.py',
formatter_class=argparse.RawTextHelpFormatter)
parser.add_argument("--test_duration", help='test duration', default="30s" )
# if optional_args is not None:
args = parser.parse_args()
event_breaker = EventBreaker(host=args.mgr,
port=args.mgr_port,
duration=args.test_duration,
_debug_on=True,
_exit_on_error=True,
_exit_on_fail=True)
event_breaker.create()
event_breaker.run()
event_breaker.cleanup()
if __name__ == "__main__":
main()

py-scripts/event_flood.py Executable file

@@ -0,0 +1,114 @@
#!/usr/bin/env python3
"""
This file is intended to expose concurrency
problems in the /events/ URL handler by inserting events rapidly.
Please use concurrently with event_breaker.py.
"""
import sys
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
if 'py-json' not in sys.path:
sys.path.append('../py-json')
import argparse
from LANforge.lfcli_base import LFCliBase
from realm import Realm
import datetime
from datetime import datetime
import time
from time import sleep
import pprint
class EventBreaker(Realm):
def __init__(self, host, port,
duration=None,
pause_ms=None,
_debug_on=False,
_exit_on_error=False,
_exit_on_fail=False):
super().__init__(host, port)
self.counter = 0
self.test_duration = duration
self.pause_ms = pause_ms
if (self.test_duration is None):
raise ValueError("run wants numeric run_duration_sec")
def create(self):
pass
def run(self):
last_second_ms = 0
start_time = datetime.now()
now_ms = 0
end_time = self.parse_time(self.test_duration) + start_time
client_time_ms = 0
prev_client_time_ms = 0
start_loop_time_ms = 0
loop_time_ms = 0
prev_loop_time_ms = 0
num_events = 0
prev_num_events = 0
while datetime.now() < end_time:
sleep( self.pause_ms / 1000 )
start_loop_time_ms = int(self.get_milliseconds(datetime.now()))
print ('\r', end='')
#prev_loop_time_ms = loop_time_ms
# loop_time_ms = self.get_milliseconds(datetime.now())
prev_client_time_ms = client_time_ms
response_list = []
response = self.json_post("/cli-json/add_event",
{
"event_id": "new",
"details": "event_flood %d"%start_loop_time_ms,
"priority": "INFO",
"name": "custom"
},
response_json_list_=response_list)
# pprint.pprint(response_list)
prev_client_time_ms = client_time_ms
prev_loop_time_ms = loop_time_ms
now = int(self.get_milliseconds(datetime.now()))
loop_time_ms = now - start_loop_time_ms
client_time_ms = response_list[0]["LAST"]["duration"]
if (client_time_ms != prev_client_time_ms):
print(" client %d ms %d"%(client_time_ms,
(prev_client_time_ms - client_time_ms)),
end='')
if (loop_time_ms != prev_loop_time_ms):
print(" loop %d ms %d "%(loop_time_ms,
(prev_loop_time_ms - loop_time_ms)),
end='')
if (last_second_ms + 1000) < now:
last_second_ms = now
print("")
def cleanup(self):
pass
def main():
parser = LFCliBase.create_bare_argparse(
prog='event_breaker.py',
formatter_class=argparse.RawTextHelpFormatter)
parser.add_argument("--test_duration", help='test duration', default="30s" )
parser.add_argument("--pause_ms", help='interval between submitting events', default="30" )
# if optional_args is not None:
args = parser.parse_args()
event_breaker = EventBreaker(host=args.mgr,
port=args.mgr_port,
duration=args.test_duration,
pause_ms=int(args.pause_ms),
_debug_on=True,
_exit_on_error=True,
_exit_on_fail=True)
event_breaker.create()
event_breaker.run()
event_breaker.cleanup()
if __name__ == "__main__":
main()
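The `run()` loop above times each POST by differencing epoch milliseconds before and after the call. That timing pattern in isolation, as a minimal sketch — `get_milliseconds` here is a stand-in for the LFCliBase helper of the same name, assumed to return milliseconds since the epoch:

```python
from datetime import datetime
from time import sleep

def get_milliseconds(timestamp):
    # Stand-in for the LFCliBase helper: milliseconds since the Unix epoch
    return (timestamp - datetime(1970, 1, 1)).total_seconds() * 1000

start_loop_time_ms = int(get_milliseconds(datetime.now()))
sleep(0.05)  # placeholder for the json_post call being timed
loop_time_ms = int(get_milliseconds(datetime.now())) - start_loop_time_ms
print("loop %d ms" % loop_time_ms)
```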


@@ -0,0 +1,47 @@
chamber-0: RootAP
chamber-1: Node1
chamber-2: Node2
chamber-3:
chamber-4: MobileStations
sta_amount-0: 1
sta_amount-1: 1
sta_amount-2: 1
sta_amount-3: 1
sta_amount-4: 1
radios-0-0: 1.2.wiphy0
radios-0-1:
radios-0-2:
radios-0-3: 1.2.wiphy1
radios-0-4:
radios-0-5:
radios-1-0: 1.3.wiphy0
radios-1-1:
radios-1-2:
radios-1-3: 1.3.wiphy1
radios-1-4:
radios-1-5:
radios-2-0: 1.4.wiphy0
radios-2-1:
radios-2-2:
radios-2-3: 1.4.wiphy1
radios-2-4:
radios-2-5:
radios-3-0:
radios-3-1:
radios-3-2:
radios-3-3:
radios-3-4:
radios-3-5:
radios-4-0: 1.1.2 wiphy0
radios-4-1:
radios-4-2:
radios-4-3: 1.1.3 wiphy1
radios-4-4:
radios-4-5:
ap_arrangements: Current Position
tests: Roam
traf_combo: STA
sta_position: Current Position
traffic_types: UDP
direction: Download
path: Orbit Current


@@ -0,0 +1,43 @@
# Example radio setup, calibration data, and attenuator setup.
# At least the attenuation will be unique for your testbed
# so run the calibration step, view the config, and paste the appropriate
# lines into a file similar to this.
radio-0: 1.1.2 wiphy0
radio-1: 1.1.3 wiphy1
radio-2: 1.1.4 wiphy2
radio-3: 1.1.5 wiphy3
radio-4: 1.1.6 wiphy4
radio-5: 1.1.7 wiphy5
rssi_0_2-0: -26
rssi_0_2-1: -26
rssi_0_2-2: -26
rssi_0_2-3: -26
rssi_0_2-4: -27
rssi_0_2-5: -27
rssi_0_2-6: -27
rssi_0_2-7: -27
rssi_0_2-8: -25
rssi_0_2-9: -25
rssi_0_2-10: -25
rssi_0_2-11: -25
rssi_0_5-0: -38
rssi_0_5-1: -38
rssi_0_5-2: -38
rssi_0_5-3: -38
rssi_0_5-4: -38
rssi_0_5-5: -38
rssi_0_5-6: -38
rssi_0_5-7: -38
rssi_0_5-8: -47
rssi_0_5-9: -47
rssi_0_5-10: -47
rssi_0_5-11: -47
atten-0: 1.1.85.0
atten-1: 1.1.85.1
atten-2: 1.1.85.2
atten-3: 1.1.85.3
atten-4: 1.1.1002.0
atten-5: 1.1.1002.1
atten-8: 1.1.1002.2
atten-9: 1.1.1002.3
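A calibration file like the one above is plain `key: value` text. A hypothetical reader sketch — the actual LANforge test scripts may parse these files differently:

```python
# Hypothetical reader for a key: value testbed config like the one above;
# the real LANforge test scripts may parse these files differently.
config_text = """\
# comment line
radio-0: 1.1.2 wiphy0
rssi_0_2-0: -26
atten-0: 1.1.85.0
"""

config = {}
for raw in config_text.splitlines():
    line = raw.strip()
    if not line or line.startswith("#"):
        continue  # skip blank lines and comments
    key, _, value = line.partition(":")
    config[key.strip()] = value.strip()

print(config)
```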


@@ -1,109 +0,0 @@
#!/usr/bin/env python3
import sys
import os
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
if 'py-json' not in sys.path:
sys.path.append(os.path.join(os.path.abspath('..'), 'py-json'))
import LANforge
from LANforge.lfcli_base import LFCliBase
from LANforge import LFUtils
import realm
import argparse
import time
import pprint
class IPv4Test(LFCliBase):
def __init__(self, ssid, security, password, sta_list=None, number_template="00000", host="localhost", port=8080, radio ="wiphy0",_debug_on=False,
_exit_on_error=False,
_exit_on_fail=False):
super().__init__(host, port, _debug=_debug_on, _halt_on_error=_exit_on_error, _exit_on_fail=_exit_on_fail)
self.host = host
self.port = port
self.ssid = ssid
self.security = security
self.password = password
self.sta_list = sta_list
self.timeout = 120
self.radio=radio
self.number_template = number_template
self.debug = _debug_on
self.local_realm = realm.Realm(lfclient_host=self.host, lfclient_port=self.port)
self.station_profile = self.local_realm.new_station_profile()
self.station_profile.lfclient_url = self.lfclient_url
self.station_profile.ssid = self.ssid
self.station_profile.ssid_pass = self.password,
self.station_profile.security = self.security
self.station_profile.number_template_ = self.number_template
self.station_profile.mode = 0
def build(self):
# Build stations
self.station_profile.use_security(self.security, self.ssid, self.password)
self.station_profile.set_number_template(self.number_template)
print("Creating stations")
self.station_profile.set_command_flag("add_sta", "create_admin_down", 1)
self.station_profile.set_command_param("set_port", "report_timer", 1500)
self.station_profile.set_command_flag("set_port", "rpt_timer", 1)
self.station_profile.create(radio=self.radio, sta_names_=self.sta_list, debug=self.debug)
self.station_profile.admin_up()
if self.local_realm.wait_for_ip(station_list=self.sta_list, debug=self.debug, timeout_sec=30):
self._pass("Station build finished")
self.exit_success()
else:
self._fail("Stations not able to acquire IP. Please check network input.")
self.exit_fail()
def cleanup(self, sta_list):
self.station_profile.cleanup(sta_list)
LFUtils.wait_until_ports_disappear(base_url=self.lfclient_url, port_list=sta_list,
debug=self.debug)
def main():
parser = LFCliBase.create_basic_argparse(
prog='example_open_connection.py',
# formatter_class=argparse.RawDescriptionHelpFormatter,
formatter_class=argparse.RawTextHelpFormatter,
epilog='''\
Example code that creates a specified amount of stations on a specified SSID using Open security.
''',
description='''\
example_open_connection.py
--------------------
Generic command example:
python3 ./example_open_connection.py
--mgr localhost
--mgr_port 8080
--num_stations 3
--radio wiphy1
--ssid netgear-open
--passwd [BLANK]
--debug
''')
args = parser.parse_args()
num_sta = 2
if (args.num_stations is not None) and (int(args.num_stations) > 0):
num_sta = int(args.num_stations)
station_list = LFUtils.portNameSeries(prefix_="sta",
start_id_=0,
end_id_=num_sta-1,
padding_number_=10000)
ip_test = IPv4Test(host=args.mgr, port=args.mgr_port, ssid=args.ssid, password=args.passwd,
security="open", radio=args.radio, sta_list=station_list)
ip_test.cleanup(station_list)
ip_test.timeout = 60
ip_test.build()
if __name__ == "__main__":
main()


@@ -21,7 +21,7 @@ class IPv4Test(LFCliBase):
def __init__(self, ssid, security, password, sta_list=None, ap=None, mode = 0, number_template="00000", host="localhost", port=8080,radio = "wiphy0",_debug_on=False,
_exit_on_error=False,
_exit_on_fail=False):
super().__init__(host, port, _debug=_debug_on, _halt_on_error=_exit_on_error, _exit_on_fail=_exit_on_fail)
super().__init__(host, port, _debug=_debug_on, _exit_on_fail=_exit_on_fail)
self.host = host
self.port = port
self.ssid = ssid


@@ -1,109 +0,0 @@
#!/usr/bin/env python3
import sys
import os
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
if 'py-json' not in sys.path:
sys.path.append(os.path.join(os.path.abspath('..'), 'py-json'))
import LANforge
from LANforge.lfcli_base import LFCliBase
from LANforge import LFUtils
import realm
import argparse
import time
import pprint
class IPv4Test(LFCliBase):
def __init__(self, ssid, security, password, sta_list=None, number_template="00000", radio="wiphy0", _debug_on=False, host="localhost", port=8080,
_exit_on_error=False,
_exit_on_fail=False):
super().__init__(host, port, _debug=_debug_on, _halt_on_error=_exit_on_error, _exit_on_fail=_exit_on_fail)
self.host = host
self.port = port
self.radio = radio
self.ssid = ssid
self.security = security
self.password = password
self.sta_list = sta_list
self.timeout = 120
self.number_template = number_template
self.debug = _debug_on
self.local_realm = realm.Realm(lfclient_host=self.host, lfclient_port=self.port)
self.station_profile = self.local_realm.new_station_profile()
self.station_profile.lfclient_url = self.lfclient_url
self.station_profile.ssid = self.ssid
self.station_profile.ssid_pass = self.password,
self.station_profile.security = self.security
self.station_profile.number_template_ = self.number_template
self.station_profile.mode = 0
def build(self):
# Build stations
self.station_profile.use_security(self.security, self.ssid, self.password)
self.station_profile.set_number_template(self.number_template)
print("Creating stations")
self.station_profile.set_command_flag("add_sta", "create_admin_down", 1)
self.station_profile.set_command_param("set_port", "report_timer", 1500)
self.station_profile.set_command_flag("set_port", "rpt_timer", 1)
self.station_profile.create(radio=self.radio, sta_names_=self.sta_list, debug=self.debug)
self.station_profile.admin_up()
if self.local_realm.wait_for_ip(station_list=self.sta_list, debug=self.debug, timeout_sec=30):
self._pass("Station build finished")
self.exit_success()
else:
self._fail("Stations not able to acquire IP. Please check network input.")
self.exit_fail()
def cleanup(self, sta_list):
self.station_profile.cleanup(sta_list)
LFUtils.wait_until_ports_disappear(base_url=self.lfclient_url, port_list=sta_list,
debug=self.debug)
def main():
parser = LFCliBase.create_basic_argparse(
prog='example_wep_connection.py',
# formatter_class=argparse.RawDescriptionHelpFormatter,
formatter_class=argparse.RawTextHelpFormatter,
epilog='''\
Example code that creates a specified amount of stations on a specified SSID using WEP security.
''',
description='''\
example_wep_connection.py
--------------------
Generic command example:
python3 ./example_wep_connection.py
--host localhost
--port 8080
--num_stations 3
--radio wiphy1
--ssid jedway-wep-48
--passwd jedway-wep-48
--debug
''')
args = parser.parse_args()
num_sta = 2
if (args.num_stations is not None) and (int(args.num_stations) > 0):
num_stations_converted = int(args.num_stations)
num_sta = num_stations_converted
station_list = LFUtils.portNameSeries(prefix_="sta",
start_id_=0,
end_id_=num_sta-1,
padding_number_=10000)
ip_test = IPv4Test(host=args.mgr,port=args.mgr_port, ssid=args.ssid, password=args.passwd,
security="wep", radio=args.radio, sta_list=station_list)
ip_test.cleanup(station_list)
ip_test.timeout = 60
ip_test.build()
if __name__ == "__main__":
main()


@@ -1,109 +0,0 @@
#!/usr/bin/env python3
import sys
import os
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
if 'py-json' not in sys.path:
sys.path.append(os.path.join(os.path.abspath('..'), 'py-json'))
import LANforge
from LANforge.lfcli_base import LFCliBase
from LANforge import LFUtils
import realm
import argparse
import time
import pprint
class IPv4Test(LFCliBase):
def __init__(self, ssid, security, password, sta_list=None,host="localhost", port=8080, number_template="00000", radio="wiphy0",_debug_on=False,
_exit_on_error=False,
_exit_on_fail=False):
super().__init__(host, port, _debug=_debug_on, _halt_on_error=_exit_on_error, _exit_on_fail=_exit_on_fail)
self.host = host
self.port = port
self.ssid = ssid
self.radio = radio
self.security = security
self.password = password
self.sta_list = sta_list
self.timeout = 120
self.number_template = number_template
self.debug = _debug_on
self.local_realm = realm.Realm(lfclient_host=self.host, lfclient_port=self.port)
self.station_profile = self.local_realm.new_station_profile()
self.station_profile.lfclient_url = self.lfclient_url
self.station_profile.ssid = self.ssid
self.station_profile.ssid_pass = self.password,
self.station_profile.security = self.security
self.station_profile.number_template_ = self.number_template
self.station_profile.mode = 0
def build(self):
# Build stations
self.station_profile.use_security(self.security, self.ssid, self.password)
self.station_profile.set_number_template(self.number_template)
print("Creating stations")
self.station_profile.set_command_flag("add_sta", "create_admin_down", 1)
self.station_profile.set_command_param("set_port", "report_timer", 1500)
self.station_profile.set_command_flag("set_port", "rpt_timer", 1)
self.station_profile.create(radio=self.radio, sta_names_=self.sta_list, debug=self.debug)
self.station_profile.admin_up()
if self.local_realm.wait_for_ip(station_list=self.sta_list, debug=self.debug, timeout_sec=30):
self._pass("Station build finished")
self.exit_success()
else:
self._fail("Stations not able to acquire IP. Please check network input.")
self.exit_fail()
def cleanup(self, sta_list):
self.station_profile.cleanup(sta_list)
LFUtils.wait_until_ports_disappear(base_url=self.lfclient_url, port_list=sta_list,
debug=self.debug)
def main():
parser = LFCliBase.create_basic_argparse(
prog='example_wpa2_connection.py',
# formatter_class=argparse.RawDescriptionHelpFormatter,
formatter_class=argparse.RawTextHelpFormatter,
epilog='''\
Example code that creates a specified amount of stations on a specified SSID using WPA2 security.
''',
description='''\
example_wpa2_connection.py
--------------------
Generic command example
python3 ./example_wpa2_connection.py
--host localhost
--port 8080
--num_stations 3
--ssid netgear-wpa2
--passwd admin123-wpa2
--radio wiphy1
--debug
''')
args = parser.parse_args()
num_sta = 2
if (args.num_stations is not None) and (int(args.num_stations) > 0):
num_stations_converted = int(args.num_stations)
num_sta = num_stations_converted
station_list = LFUtils.portNameSeries(prefix_="sta",
start_id_=0,
end_id_=num_sta-1,
padding_number_=10000,
radio=args.radio)
ip_test = IPv4Test(host=args.mgr, port=args.mgr_port, ssid=args.ssid, password=args.passwd, radio=args.radio,
security="wpa2", sta_list=station_list)
ip_test.cleanup(station_list)
ip_test.timeout = 60
ip_test.build()
if __name__ == "__main__":
main()


@@ -1,110 +0,0 @@
#!/usr/bin/env python3
import sys
import os
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
if 'py-json' not in sys.path:
sys.path.append(os.path.join(os.path.abspath('..'), 'py-json'))
import LANforge
from LANforge.lfcli_base import LFCliBase
from LANforge import LFUtils
import realm
import argparse
import time
import pprint
class IPv4Test(LFCliBase):
def __init__(self, ssid, security, password, host="localhost", port=8080,sta_list=None, number_template="00000", radio = "wiphy0",_debug_on=False,
_exit_on_error=False,
_exit_on_fail=False):
super().__init__(host, port, _debug=_debug_on, _halt_on_error=_exit_on_error, _exit_on_fail=_exit_on_fail)
self.host = host
self.port = port
self.ssid = ssid
self.radio = radio
self.security = security
self.password = password
self.sta_list = sta_list
self.timeout = 120
self.number_template = number_template
self.debug = _debug_on
self.local_realm = realm.Realm(lfclient_host=self.host, lfclient_port=self.port)
self.station_profile = self.local_realm.new_station_profile()
self.station_profile.lfclient_url = self.lfclient_url
self.station_profile.ssid = self.ssid
self.station_profile.ssid_pass = self.password,
self.station_profile.security = self.security
self.station_profile.number_template_ = self.number_template
self.station_profile.mode = 0
def build(self):
# Build stations
#print("We've gotten into the build stations function")
self.station_profile.use_security(self.security, self.ssid, self.password)
self.station_profile.set_number_template(self.number_template)
print("Creating stations")
self.station_profile.set_command_flag("add_sta", "create_admin_down", 1)
self.station_profile.set_command_param("set_port", "report_timer", 1500)
self.station_profile.set_command_flag("set_port", "rpt_timer", 1)
self.station_profile.create(radio=self.radio, sta_names_=self.sta_list, debug=self.debug)
self.station_profile.admin_up()
if self.local_realm.wait_for_ip(station_list=self.sta_list, debug=self.debug, timeout_sec=30):
self._pass("Station build finished")
self.exit_success()
else:
self._fail("Stations not able to acquire IP. Please check network input.")
self.exit_fail()
def cleanup(self, sta_list):
self.station_profile.cleanup(sta_list)
LFUtils.wait_until_ports_disappear(base_url=self.lfclient_url, port_list=sta_list,
debug=self.debug)
def main():
parser = LFCliBase.create_basic_argparse(
prog='example_wpa3_connection.py',
# formatter_class=argparse.RawDescriptionHelpFormatter,
formatter_class=argparse.RawTextHelpFormatter,
epilog='''\
Example code that creates a specified amount of stations on a specified SSID using WPA3 security.
''',
description='''\
example_wpa3_connection.py
--------------------
Generic command example:
python3 ./example_wpa3_connection.py
--host localhost
--port 8080
--num_stations 3
--ssid netgear-wpa3
--passwd admin123-wpa3
--radio wiphy1
--debug
''')
args = parser.parse_args()
num_sta = 2
if (args.num_stations is not None) and (int(args.num_stations) > 0):
num_stations_converted = int(args.num_stations)
num_sta = num_stations_converted
station_list = LFUtils.portNameSeries(prefix_="sta",
start_id_=0,
end_id_=num_sta-1,
padding_number_=10000,
radio=args.radio)
ip_test = IPv4Test(host=args.mgr, port=args.mgr_port, ssid=args.ssid, password=args.passwd, radio=args.radio,
security="wpa3", sta_list=station_list)
ip_test.cleanup(station_list)
ip_test.timeout = 60
ip_test.build()
if __name__ == "__main__":
main()


@@ -1,114 +0,0 @@
#!/usr/bin/env python3
import sys
import os
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
if 'py-json' not in sys.path:
sys.path.append(os.path.join(os.path.abspath('..'), 'py-json'))
import argparse
import LANforge
from LANforge.lfcli_base import LFCliBase
from LANforge import LFUtils
import realm
import time
import pprint
class IPv4Test(LFCliBase):
def __init__(self, ssid, security, password, sta_list=None, host="localhost", port=8080, number_template="00000", radio="wiphy0", _debug_on=False,
_exit_on_error=False,
_exit_on_fail=False):
super().__init__(host, port, _debug=_debug_on, _halt_on_error=_exit_on_error, _exit_on_fail=_exit_on_fail)
self.host = host
self.port = port
self.ssid = ssid
self.radio= radio
self.security = security
self.password = password
self.sta_list = sta_list
self.timeout = 120
self.number_template = number_template
self.debug = _debug_on
self.local_realm = realm.Realm(lfclient_host=self.host, lfclient_port=self.port)
self.station_profile = self.local_realm.new_station_profile()
self.station_profile.lfclient_url = self.lfclient_url
self.station_profile.ssid = self.ssid
self.station_profile.ssid_pass = self.password,
self.station_profile.security = self.security
self.station_profile.number_template_ = self.number_template
self.station_profile.mode = 0
def build(self):
# Build stations
self.station_profile.use_security(self.security, self.ssid, self.password)
self.station_profile.set_number_template(self.number_template)
print("Creating stations")
self.station_profile.set_command_flag("add_sta", "create_admin_down", 1)
self.station_profile.set_command_param("set_port", "report_timer", 1500)
self.station_profile.set_command_flag("set_port", "rpt_timer", 1)
self.station_profile.create(radio=self.radio, sta_names_=self.sta_list, debug=self.debug)
self.station_profile.admin_up()
if self.local_realm.wait_for_ip(station_list=self.sta_list, debug=self.debug, timeout_sec=30):
self._pass("Station build finished")
self.exit_success()
else:
self._fail("Stations not able to acquire IP. Please check network input.")
self.exit_fail()
def cleanup(self, sta_list):
self.station_profile.cleanup(sta_list)
LFUtils.wait_until_ports_disappear(base_url=self.lfclient_url, port_list=sta_list,
debug=self.debug)
def main():
parser = LFCliBase.create_basic_argparse(
prog='example_wpa_connection.py',
# formatter_class=argparse.RawDescriptionHelpFormatter,
formatter_class=argparse.RawTextHelpFormatter,
epilog='''\
Example code that creates a specified amount of stations on a specified SSID using WPA security.
''',
description='''\
example_wpa_connection.py
--------------------
Generic command example:
python3 ./example_wpa_connection.py
--host localhost
--port 8080
--num_stations 3
--ssid netgear-wpa
--passwd admin123-wpa
--radio wiphy1
--debug
''')
args = parser.parse_args()
num_sta = 2
if (args.num_stations is not None) and (int(args.num_stations) > 0):
num_stations_converted = int(args.num_stations)
num_sta = num_stations_converted
station_list = LFUtils.portNameSeries(prefix_="sta",
start_id_=0,
end_id_=num_sta-1,
padding_number_=10000,
radio=args.radio)
ip_test = IPv4Test(host=args.mgr, port=args.mgr_port, ssid=args.ssid, password=args.passwd, radio=args.radio,
security="wpa", sta_list=station_list)
ip_test.cleanup(station_list)
ip_test.timeout = 60
ip_test.build()
if __name__ == "__main__":
main()

381
py-scripts/ftp_html.py Normal file

@@ -0,0 +1,381 @@
#!/usr/bin/env python3
import sys
# Extend sys.path before importing pdfkit so the import can resolve from the user site-packages
sys.path.append('/home/lanforge/.local/lib/python3.6/site-packages')
from matplotlib import pyplot as plt
from datetime import datetime
import numpy as np
import os.path
from os import path
import pdfkit
def report_banner(date):
banner_data = """
<!DOCTYPE html>
<html lang='en'>
<head>
<meta charset='UTF-8'>
<meta name='viewport' content='width=device-width, initial-scale=1' />
<title>FTP Test</title>
</head>
<body>
<div class='Section report_banner-1000x205' style='background-image:url("/home/lanforge/LANforgeGUI_5.4.3/images/report_banner-1000x205.jpg");background-repeat:no-repeat;padding:0;margin:0;min-width:1000px; min-height:205px;width:1000px; height:205px;max-width:1000px; max-height:205px;'>
<br>
<img align='right' style='padding:25;margin:5;width:200px;' src="/home/lanforge/LANforgeGUI_5.4.3/images/CandelaLogo2-90dpi-200x90-trans.png" border='0' />
<div class='HeaderStyle'>
<br>
<h1 class='TitleFontPrint' style='color:darkgreen;'> FTP Test </h1>
<h3 class='TitleFontPrint' style='color:darkgreen;'>""" + str(date) + """</h3>
</div>
</div>
<br><br>
"""
return str(banner_data)
def test_objective(objective='This FTP test verifies that N clients connected on the specified band can simultaneously download/upload a file from an FTP server, measuring the time each client takes to complete the transfer.'):
test_objective = """
<!-- Test Objective -->
<h3 align='left'>Objective</h3>
<p align='left' width='900'>""" + str(objective) + """</p>
<br>
"""
return str(test_objective)
def test_setup_information(test_setup_data=None):
if test_setup_data is None:
return None
else:
var = ""
for i in test_setup_data:
var = var + "<tr><td>" + i + "</td><td colspan='3'>" + str(test_setup_data[i]) + "</td></tr>"
setup_information = """
<!-- Test Setup Information -->
<table width='700px' border='1' cellpadding='2' cellspacing='0' style='border-top-color: gray; border-top-style: solid; border-top-width: 1px; border-right-color: gray; border-right-style: solid; border-right-width: 1px; border-bottom-color: gray; border-bottom-style: solid; border-bottom-width: 1px; border-left-color: gray; border-left-style: solid; border-left-width: 1px'>
<tr>
<th colspan='2'>Test Setup Information</th>
</tr>
<tr>
<td>Device Under Test</td>
<td>
<table width='100%' border='0' cellpadding='2' cellspacing='0' style='border-top-color: gray; border-top-style: solid; border-top-width: 1px; border-right-color: gray; border-right-style: solid; border-right-width: 1px; border-bottom-color: gray; border-bottom-style: solid; border-bottom-width: 1px; border-left-color: gray; border-left-style: solid; border-left-width: 1px'>
""" + str(var) + """
</table>
</td>
</tr>
</table>
<br>
"""
return str(setup_information)
def pass_fail_description(data=" This table lists the Pass/Fail result for each band, file size, and direction tested. "):
pass_fail_info = """
<!-- Pass/Fail results -->
<h3 align='left'>PASS/FAIL Results</h3>
<p align='left' width='900'>""" + str(data) + """</p>
<br>
"""
return str(pass_fail_info)
def download_upload_time_description(data=" This table lists the FTP download/upload time of each client."):
download_upload_time = """
<!-- Download/Upload time -->
<h3 align='left'>File Download/Upload Time (sec)</h3>
<p align='left' width='900'>""" + str(data) + """</p>
<br>
"""
return str(download_upload_time)
def add_pass_fail_table(result_data, row_head_list, col_head_list):
var_row = "<th></th>"
for row in col_head_list:
var_row = var_row + "<th>" + str(row) + "</th>"
list_data = []
dict_data = {}
bands = result_data[1]["bands"]
file_sizes = result_data[1]["file_sizes"]
directions = result_data[1]["directions"]
for b in bands:
final_data = ""
for size in file_sizes:
for d in directions:
for data in result_data.values():
if data["band"] == b and data["direction"] == d and data["file_size"] == size:
if data["result"] == "Pass":
final_data = final_data + "<td style='background-color:Green'>Pass</td>"
elif data["result"] == "Fail":
final_data = final_data + "<td style='background-color:Red'>Fail</td>"
list_data.append(final_data)
#print(list_data)
j = 0
for i in row_head_list:
dict_data[i] = list_data[j]
j = j + 1
#print(dict_data)
var_col = ""
for col in row_head_list:
var_col = var_col + "<tr><td>" + str(col) + "</td><!-- Add Variable Here -->" + str(
dict_data[col]) + "</tr>"
pass_fail_table = """
<!-- Pass/Fail table -->
<table width='1000px' border='1' cellpadding='2' cellspacing='0' >
<table width='1000px' border='1' >
<tr>
""" + str(var_row) + """
</tr>
""" + str(var_col) + """
</table>
</table>
<br><br><br><br><br><br><br>
"""
return pass_fail_table
def download_upload_time_table(result_data, row_head_list, col_head_list):
var_row = "<th></th>"
for row in col_head_list:
var_row = var_row + "<th>" + str(row) + "</th>"
list_data = []
dict_data = {}
bands = result_data[1]["bands"]
file_sizes = result_data[1]["file_sizes"]
directions = result_data[1]["directions"]
for b in bands:
final_data = ""
for size in file_sizes:
for d in directions:
for data in result_data.values():
data_time = data['time']
if data_time.count(0) == 0:
Min = min(data_time)
Max = max(data_time)
Sum = int(sum(data_time))
Len = len(data_time)
Avg = round(Sum / Len,2)
elif data_time.count(0) == len(data_time):
Min = "-"
Max = "-"
Avg = "-"
else:
data_time = [i for i in data_time if i != 0]
Min = min(data_time)
Max = max(data_time)
Sum = int(sum(data_time))
Len = len(data_time)
Avg = round(Sum / Len,2)
string_data = "Min=" + str(Min) + ",Max=" + str(Max) + ",Avg=" + str(Avg) + " (sec)"
if data["band"] == b and data["direction"] == d and data["file_size"] == size:
final_data = final_data + """<td>""" + string_data + """</td>"""
list_data.append(final_data)
#print(list_data)
j = 0
for i in row_head_list:
dict_data[i] = list_data[j]
j = j + 1
#print(dict_data)
var_col = ""
for col in row_head_list:
var_col = var_col + "<tr><td>" + str(col) + "</td><!-- Add Variable Here -->" + str(
dict_data[col]) + "</tr>"
download_upload_table = """
<!-- Download/Upload time table -->
<table width='1000px' border='1' cellpadding='2' cellspacing='0' >
<table width='1000px' border='1' >
<tr>
""" + str(var_row) + """
</tr>
""" + str(var_col) + """
</table>
</table>
<br><br><br><br><br><br><br>
"""
return download_upload_table
def graph_html(graph_path="",graph_name="",graph_description=""):
graph_html_obj = """
<h3>""" +graph_name+ """</h3>
<p>""" +graph_description+ """</p>
<img align='center' style='padding:15;margin:5;width:1000px;' src=""" + graph_path + """ border='1' />
<br><br>
"""
return str(graph_html_obj)
def bar_plot(ax,x_axis, data, colors=None, total_width=0.8, single_width=1, legend=True):
# Check if colors were provided, otherwise use the default color cycle
if colors is None:
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
# Number of bars per group
n_bars = len(data)
# The width of a single bar
bar_width = total_width / n_bars
# List containing handles for the drawn bars, used for the legend
bars = []
# Iterate over all data
for i, (name, values) in enumerate(data.items()):
# The offset in x direction of that bar
x_offset = (i - n_bars / 2) * bar_width + bar_width / 2
# Draw a bar for every value of that type
for x, y in enumerate(values):
bar = ax.bar(x + x_offset, y, width=bar_width * single_width, color=colors[i % len(colors)])
# Add a handle to the last drawn bar, which we'll need for the legend
bars.append(bar[0])
# Draw legend if we need
if legend:
ax.legend(bars, data.keys(),bbox_to_anchor=(1.1,1.05),loc='upper right')
ax.set_ylabel('Time in seconds')
ax.set_xlabel("stations")
x_data = x_axis
idx = np.asarray([i for i in range(len(x_data))])
ax.set_xticks(idx)
ax.set_xticklabels(x_data)
def generate_graph(result_data, x_axis,band,size,graph_path):
# bands = result_data[1]["bands"]
# file_sizes = result_data[1]["file_sizes"]
num_stations = result_data[1]["num_stations"]
# for b in bands:
# for size in file_sizes:
dict_of_graph = {}
color = []
graph_name = ""
graph_description=""
count = 0
for data in result_data.values():
if data["band"] == band and data["file_size"] == size and data["direction"] == "Download":
dict_of_graph["Download"] = data["time"]
color.append("Orange")
graph_name = "File size "+ size +" " + str(num_stations) + " Clients " +band+ "-File Download Times(secs)"
graph_description = "Out of "+ str(data["num_stations"])+ " clients, "+ str(data["num_stations"] - data["time"].count(0))+ " are able to download " + "within " + str(data["duration"]) + " min."
count = count + 1
if data["band"] == band and data["file_size"] == size and data["direction"] == "Upload":
dict_of_graph["Upload"] = data["time"]
color.append("Blue")
graph_name = "File size "+ size +" " + str(num_stations) + " Clients " +band+ "-File Upload Times(secs)"
graph_description = graph_description + "Out of " + str(data["num_stations"]) + " clients, " + str(
data["num_stations"] - data["time"].count(0)) + " are able to upload " + "within " +str(data["duration"]) + " min."
count = count + 1
if count == 2:
graph_name = "File size "+ size +" " + str(num_stations) + " Clients " +band+ "-File Download and Upload Times(secs)"
if len(dict_of_graph) != 0:
fig, ax = plt.subplots()
bar_plot(ax, x_axis, dict_of_graph, total_width=.8, single_width=.9, colors=color)
my_dpi = 96
figure = plt.gcf() # get current figure
figure.set_size_inches(18, 6)
# when saving, specify the DPI
plt.savefig(graph_path + "/image"+band+size+".png", dpi=my_dpi)
return str(graph_html(graph_path + "/image"+band+size+".png", graph_name,graph_description))
else:
return ""
def input_setup_info_table(input_setup_info=None):
if input_setup_info is None:
return None
else:
var = ""
for i in input_setup_info:
var = var + "<tr><td>" + i + "</td><td colspan='3'>" + str(input_setup_info[i]) + "</td></tr>"
setup_information = """
<!-- Test Setup Information -->
<table width='700px' border='1' cellpadding='2' cellspacing='0' style='border-top-color: gray; border-top-style: solid; border-top-width: 1px; border-right-color: gray; border-right-style: solid; border-right-width: 1px; border-bottom-color: gray; border-bottom-style: solid; border-bottom-width: 1px; border-left-color: gray; border-left-style: solid; border-left-width: 1px'>
<tr>
<th colspan='2'>Input Setup Information</th>
</tr>
<tr>
<td>Information</td>
<td>
<table width='100%' border='0' cellpadding='2' cellspacing='0' style='border-top-color: gray; border-top-style: solid; border-top-width: 1px; border-right-color: gray; border-right-style: solid; border-right-width: 1px; border-bottom-color: gray; border-bottom-style: solid; border-bottom-width: 1px; border-left-color: gray; border-left-style: solid; border-left-width: 1px'>
""" + str(var) + """
</table>
</td>
</tr>
</table>
<br>
"""
return str(setup_information)
def generate_report(result_data=None,
date=None,
test_setup_info={},
input_setup_info={},
graph_path="/home/lanforge/html-reports/FTP-Test"):
# Need to pass this to test_setup_information()
input_setup_info = input_setup_info
test_setup_data = test_setup_info
x_axis = []
num_stations = result_data[1]["num_stations"]
for i in range(1, num_stations + 1, 1):
x_axis.append(i)
column_head = []
rows_head = []
bands = result_data[1]["bands"]
file_sizes = result_data[1]["file_sizes"]
directions = result_data[1]["directions"]
for size in file_sizes:
for direction in directions:
column_head.append(size + " " + direction)
for band in bands:
if band != "Both":
rows_head.append(str(num_stations) + " Clients-" + band)
else:
rows_head.append(str(num_stations // 2) + "+" + str(num_stations // 2) + " Clients-2.4G+5G")
reports_root = graph_path + "/" + str(date)
if path.exists(graph_path):
os.mkdir(reports_root)
print("Reports root created")
else:
os.mkdir(graph_path)
os.mkdir(reports_root)
print("Reports root created")
print("Generating Reports in : ", reports_root)
html_report = report_banner(date) + \
test_setup_information(test_setup_data) + \
test_objective() + \
pass_fail_description() + \
add_pass_fail_table(result_data, rows_head, column_head) + \
download_upload_time_description() + \
download_upload_time_table(result_data, rows_head, column_head)
for b in bands:
for size in file_sizes:
html_report = html_report + \
generate_graph(result_data, x_axis, b, size, graph_path=reports_root)
html_report = html_report + input_setup_info_table(input_setup_info)
# Write the html_report into /home/lanforge/html-reports/FTP-Test, in a per-run directory named with the report timestamp
f = open(reports_root + "/report.html", "a")
# f = open("report.html", "a")
f.write(html_report)
f.close()
# write logic to generate pdf here
pdfkit.from_file(reports_root + "/report.html", reports_root + "/report.pdf")
# test blocks from here
if __name__ == '__main__':
generate_report()
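generate_report() above derives each pass/fail table row label from the band list; the "Both" band splits the station count between 2.4G and 5G. A standalone sketch of just that labeling logic (band names as used above):

```python
def band_row_labels(bands, num_stations):
    # Mirrors generate_report(): one row label per band; the "Both" band
    # splits the station count across 2.4G and 5G.
    rows = []
    for band in bands:
        if band != "Both":
            rows.append("%d Clients-%s" % (num_stations, band))
        else:
            rows.append("%d+%d Clients-2.4G+5G" % (num_stations // 2, num_stations // 2))
    return rows

print(band_row_labels(["2.4G", "5G", "Both"], 40))
# ['40 Clients-2.4G', '40 Clients-5G', '20+20 Clients-2.4G+5G']
```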

py-scripts/grafana_profile.py Executable file

@@ -0,0 +1,87 @@
#!/usr/bin/env python3
import sys
import os
import argparse
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
if 'py-json' not in sys.path:
sys.path.append(os.path.join(os.path.abspath('..'), 'py-json'))
sys.path.append(os.path.join(os.path.abspath('..'), 'py-dashboard'))
from GrafanaRequest import GrafanaRequest
from LANforge.lfcli_base import LFCliBase
class UseGrafana(LFCliBase):
def __init__(self,
_grafana_token,
host="localhost",
_grafana_host="localhost",
port=8080,
_debug_on=False,
_exit_on_fail=False,
_grafana_port=3000):
super().__init__(host, port, _debug=_debug_on, _exit_on_fail=_exit_on_fail)
self.grafana_token = _grafana_token
self.grafana_port = _grafana_port
self.grafana_host = _grafana_host
self.GR = GrafanaRequest(self.grafana_host, str(self.grafana_port), _folderID=0, _api_token=self.grafana_token)
def create_dashboard(self,
dashboard_name):
return self.GR.create_dashboard(dashboard_name)
def delete_dashboard(self,
dashboard_uid):
return self.GR.delete_dashboard(dashboard_uid)
def list_dashboards(self):
return self.GR.list_dashboards()
def main():
parser = LFCliBase.create_basic_argparse(
prog='grafana_profile.py',
formatter_class=argparse.RawTextHelpFormatter,
epilog='''Manage Grafana database''',
description='''\
grafana_profile.py
------------------
Command example:
./grafana_profile.py
--grafana_token
--''')
required = parser.add_argument_group('required arguments')
required.add_argument('--grafana_token', help='token to access your Grafana database', required=True)
optional = parser.add_argument_group('optional arguments')
optional.add_argument('--dashboard_name', help='name of dashboard to create', default=None)
optional.add_argument('--dashboard_uid', help='UID of dashboard to modify', default=None)
optional.add_argument('--delete_dashboard',
help='Call this flag to delete the dashboard defined by --dashboard_uid',
default=None, action='store_true')
optional.add_argument('--grafana_port', help='Grafana port if different from 3000', default=3000)
optional.add_argument('--grafana_host', help='Grafana host', default='localhost')
optional.add_argument('--list_dashboards', help='List dashboards on Grafana server', default=None, action='store_true')
args = parser.parse_args()
Grafana = UseGrafana(args.grafana_token,
_grafana_host=args.grafana_host,
_grafana_port=args.grafana_port
)
if args.dashboard_name is not None:
Grafana.create_dashboard(args.dashboard_name)
if args.delete_dashboard is not None:
Grafana.delete_dashboard(args.dashboard_uid)
if args.list_dashboards is not None:
Grafana.list_dashboards()
if __name__ == "__main__":
main()
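UseGrafana delegates dashboard creation to GrafanaRequest, whose implementation is not shown in this diff. For orientation, Grafana's own HTTP API creates dashboards via a POST to /api/dashboards/db with a JSON body shaped roughly like the sketch below; the title and field values here are illustrative, not taken from GrafanaRequest:

```python
import json

def dashboard_payload(dashboard_name):
    # Body shape Grafana's POST /api/dashboards/db endpoint expects;
    # "id": None asks Grafana to create a new dashboard rather than update one.
    return json.dumps({
        "dashboard": {
            "id": None,
            "uid": None,
            "title": dashboard_name,
            "tags": [],
            "timezone": "browser",
            "schemaVersion": 16,
            "version": 0,
        },
        "overwrite": False,
    })

payload = dashboard_payload("lanforge-kpi")
```

GrafanaRequest presumably sends this with an `Authorization: Bearer <grafana_token>` header; consult the Grafana HTTP API documentation for the authoritative schema.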

py-scripts/html_template.py Normal file

File diff suppressed because it is too large

py-scripts/influx.py Normal file

@@ -0,0 +1,78 @@
#!/usr/bin/env python3
# pip3 install influxdb
import sys
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
import requests
import json
from influxdb import InfluxDBClient
import datetime
from LANforge.lfcli_base import LFCliBase
import time
class RecordInflux(LFCliBase):
def __init__(self,
_lfjson_host="lanforge",
_lfjson_port=8080,
_influx_host="localhost",
_influx_port=8086,
_influx_user=None,
_influx_passwd=None,
_influx_db=None,
_debug_on=False,
_exit_on_fail=False):
super().__init__(_lfjson_host, _lfjson_port,
_debug=_debug_on,
_exit_on_fail=_exit_on_fail)
self.influx_host = _influx_host
self.influx_port = _influx_port
self.influx_user = _influx_user
self.influx_passwd = _influx_passwd
self.influx_db = _influx_db
self.client = InfluxDBClient(self.influx_host,
self.influx_port,
self.influx_user,
self.influx_passwd,
self.influx_db)
def post_to_influx(self, key, value, tags):
data = dict()
data["measurement"] = key
data["tags"] = tags
data["time"] = str(datetime.datetime.utcnow().isoformat())
data["fields"] = dict()
data["fields"]["value"] = value
data1 = [data]
self.client.write_points(data1)
# Don't use this unless you are sure you want to.
# More likely you would want to generate KPI in the
# individual test cases and poke those relatively small bits of
# info into influxdb.
# This will not end until the 'longevity' timer has expired.
# This function pushes data directly into the Influx database and defaults to all columns.
def monitor_port_data(self,
lanforge_host="localhost",
devices=None,
longevity=None,
monitor_interval=None):
url = 'http://' + lanforge_host + ':8080/port/1/1/'
end = datetime.datetime.now() + datetime.timedelta(0, longevity)
while datetime.datetime.now() < end:
for station in devices:
url1 = url + station
response = json.loads(requests.get(url1).text)
# Poke everything into influx db
for key in response['interface'].keys():
tags = dict()
tags["region"] = 'us-west'
self.post_to_influx("%s-%s" % (station, key), response['interface'][key], tags)
time.sleep(monitor_interval)
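post_to_influx above wraps each sample in the point dict that influxdb's write_points() expects. A server-free sketch of that payload construction (station and tag names are illustrative):

```python
import datetime

def build_influx_point(key, value, tags):
    # Mirrors RecordInflux.post_to_influx(): one measurement, caller-supplied
    # tags, a single "value" field, and an ISO-8601 UTC timestamp.
    data = {
        "measurement": key,
        "tags": tags,
        "time": str(datetime.datetime.utcnow().isoformat()),
        "fields": {"value": value},
    }
    return [data]  # write_points() takes a list of points

points = build_influx_point("sta0000-rx_bytes", 123456, {"region": "us-west"})
```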

py-scripts/influx2.py Normal file

@@ -0,0 +1,95 @@
#!/usr/bin/env python3
# pip3 install influxdb-client
# Version 2.0 influx DB Client
import sys
import os
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
if 'py-json' not in sys.path:
sys.path.append(os.path.join(os.path.abspath('..'), 'py-json'))
import requests
import json
import influxdb_client
from influxdb_client.client.write_api import SYNCHRONOUS
import datetime
from LANforge.lfcli_base import LFCliBase
import time
class RecordInflux(LFCliBase):
def __init__(self,
_lfjson_host="lanforge",
_lfjson_port=8080,
_influx_host="localhost",
_influx_port=8086,
_influx_org=None,
_influx_token=None,
_influx_bucket=None,
_debug_on=False,
_exit_on_fail=False):
super().__init__(_lfjson_host, _lfjson_port,
_debug=_debug_on,
_exit_on_fail=_exit_on_fail)
self.influx_host = _influx_host
self.influx_port = _influx_port
self.influx_org = _influx_org
self.influx_token = _influx_token
self.influx_bucket = _influx_bucket
self.url = "http://%s:%s"%(self.influx_host, self.influx_port)
self.client = influxdb_client.InfluxDBClient(url=self.url,
token=self.influx_token,
org=self.influx_org,
debug=_debug_on)
self.write_api = self.client.write_api(write_options=SYNCHRONOUS)
#print("org: ", self.influx_org)
#print("token: ", self.influx_token)
#print("bucket: ", self.influx_bucket)
#exit(0)
def post_to_influx(self, key, value, tags, time):
p = influxdb_client.Point(key)
for tag_key, tag_value in tags.items():
p.tag(tag_key, tag_value)
print(tag_key, tag_value)
p.time(time)
p.field("value", value)
self.write_api.write(bucket=self.influx_bucket, org=self.influx_org, record=p)
def set_bucket(self, b):
self.influx_bucket = b
# Don't use this unless you are sure you want to.
# More likely you would want to generate KPI in the
# individual test cases and poke those relatively small bits of
# info into influxdb.
# This will not end until the 'longevity' timer has expired.
# This function pushes data directly into the Influx database and defaults to all columns.
def monitor_port_data(self,
lanforge_host="localhost",
devices=None,
longevity=None,
monitor_interval=None,
bucket=None,
tags=None): # dict
url = 'http://' + lanforge_host + ':8080/port/1/1/'
end = datetime.datetime.now() + datetime.timedelta(0, longevity)
while datetime.datetime.now() < end:
for station in devices:
url1 = url + station
response = json.loads(requests.get(url1).text)
current_time = str(datetime.datetime.utcnow().isoformat())
# Poke everything into influx db
for key in response['interface'].keys():
self.post_to_influx("%s-%s" % (station, key), response['interface'][key], tags, current_time)
time.sleep(monitor_interval)
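The v2 client serializes each influxdb_client.Point to InfluxDB line protocol (`measurement,tags field=value timestamp`). A stdlib-only sketch of that serialization, shown only to make the wire shape visible; the measurement and tag names are illustrative:

```python
def to_line_protocol(measurement, tags, value, ns_timestamp):
    # Line protocol: measurement,tag1=v1,tag2=v2 value=<field> <timestamp>
    tag_str = ",".join("%s=%s" % (k, v) for k, v in sorted(tags.items()))
    return "%s,%s value=%s %d" % (measurement, tag_str, value, ns_timestamp)

line = to_line_protocol("sta0000-rx_bytes", {"testbed": "Ferndale-01"},
                        123456, 1619000000000000000)
print(line)
# sta0000-rx_bytes,testbed=Ferndale-01 value=123456 1619000000000000000
```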

py-scripts/lf_ap_auto_test.py Executable file

@@ -0,0 +1,370 @@
#!/usr/bin/env python3
"""
Note: To run this script, the LANforge GUI should be running with a CLI socket open:
path: cd LANforgeGUI_5.4.3 (adjust 5.4.3 to match the GUI version)
pwd (Output : /home/lanforge/LANforgeGUI_5.4.3)
./lfclient.bash -cli-socket 3990
This script is used to automate running AP-Auto tests. You
may need to view an AP Auto test configured through the GUI to understand
the options and how best to input data.
./lf_ap_auto_test.py --mgr localhost --port 8080 --lf_user lanforge --lf_password lanforge \
--instance_name ap-auto-instance --config_name test_con --upstream 1.1.eth2 \
--dut5_0 'linksys-8450 Default-SSID-5gl c4:41:1e:f5:3f:25 (2)' \
--dut2_0 'linksys-8450 Default-SSID-2g c4:41:1e:f5:3f:24 (1)' \
--max_stations_2 100 --max_stations_5 100 --max_stations_dual 200 \
--radio2 1.1.wiphy0 --radio2 1.1.wiphy2 \
--radio5 1.1.wiphy1 --radio5 1.1.wiphy3 --radio5 1.1.wiphy4 \
--radio5 1.1.wiphy5 --radio5 1.1.wiphy6 --radio5 1.1.wiphy7 \
--set 'Basic Client Connectivity' 1 --set 'Multi Band Performance' 1 \
--set 'Skip 2.4Ghz Tests' 1 --set 'Skip 5Ghz Tests' 1 \
--set 'Throughput vs Pkt Size' 0 --set 'Capacity' 0 --set 'Stability' 0 --set 'Band-Steering' 0 \
--set 'Multi-Station Throughput vs Pkt Size' 0 --set 'Long-Term' 0 \
--test_rig Testbed-01 --pull_report \
--influx_host c7-graphana --influx_port 8086 --influx_org Candela \
--influx_token=-u_Wd-L8o992701QF0c5UmqEp7w7Z7YOMaWLxOMgmHfATJGnQbbmYyNxHBR9PgD6taM_tcxqJl6U8DjU1xINFQ== \
--influx_bucket ben \
--influx_tag testbed Ferndale-01
Note:
--enable [option] will attempt to set any checkbox of that name to true.
--disable [option] will attempt to set any checkbox of that name to false.
--raw_line 'line contents' will add any setting to the test config. This is
a useful way to support any options not specifically enabled by the
command options.
--set modifications will be applied after the other config has happened,
so it can be used to override any other config.
Example of raw text config for ap-auto, to show other possible options:
sel_port-0: 1.1.sta00500
show_events: 1
show_log: 0
port_sorting: 0
kpi_id: AP Auto
bg: 0xE0ECF8
test_rig: Ferndale-01-Basic
show_scan: 1
auto_helper: 1
skip_2: 1
skip_5: 1
skip_5b: 1
skip_dual: 0
skip_tri: 1
dut5b-0: NA
dut5-0: linksys-8450 Default-SSID-5gl c4:41:1e:f5:3f:25 (2)
dut2-0: linksys-8450 Default-SSID-2g c4:41:1e:f5:3f:24 (1)
dut5b-1: NA
dut5-1: NA
dut2-1: NA
dut5b-2: NA
dut5-2: NA
dut2-2: NA
spatial_streams: AUTO
bandw_options: AUTO
modes: Auto
upstream_port: 1.1.2 eth2
operator:
mconn: 1
tos: 0
vid_buf: 1000000
vid_speed: 700000
reset_stall_thresh_udp_dl: 9600
cx_prcnt: 950000
cx_open_thresh: 35
cx_psk_thresh: 75
cx_1x_thresh: 130
reset_stall_thresh_udp_ul: 9600
reset_stall_thresh_tcp_dl: 9600
reset_stall_thresh_tcp_ul: 9600
reset_stall_thresh_l4: 100000
reset_stall_thresh_voip: 20000
stab_mcast_dl_min: 100000
stab_mcast_dl_max: 0
stab_udp_dl_min: 56000
stab_udp_dl_max: 0
stab_udp_ul_min: 56000
stab_udp_ul_max: 0
stab_tcp_dl_min: 500000
stab_tcp_dl_max: 0
stab_tcp_ul_min: 500000
stab_tcp_ul_max: 0
dl_speed: 85%
ul_speed: 85%
max_stations_2: 100
max_stations_5: 100
max_stations_5b: 64
max_stations_dual: 200
max_stations_tri: 64
lt_sta: 2
voip_calls: 0
lt_dur: 3600
reset_dur: 600
lt_gi: 30
dur20: 20
hunt_retries: 1
hunt_iter: 15
bind_bssid: 1
set_txpower_default: 0
cap_dl: 1
cap_ul: 0
cap_use_pkt_sizes: 0
stability_reset_radios: 0
stability_use_pkt_sizes: 0
pkt_loss_thresh: 10000
frame_sizes: 200, 512, 1024, MTU
capacities: 1, 2, 5, 10, 20, 40, 64, 128, 256, 512, 1024, MAX
pf_text0: 2.4 DL 200 70Mbps
pf_text1: 2.4 DL 512 110Mbps
pf_text2: 2.4 DL 1024 115Mbps
pf_text3: 2.4 DL MTU 120Mbps
pf_text4:
pf_text5: 2.4 UL 200 88Mbps
pf_text6: 2.4 UL 512 106Mbps
pf_text7: 2.4 UL 1024 115Mbps
pf_text8: 2.4 UL MTU 120Mbps
pf_text9:
pf_text10: 5 DL 200 72Mbps
pf_text11: 5 DL 512 185Mbps
pf_text12: 5 DL 1024 370Mbps
pf_text13: 5 DL MTU 525Mbps
pf_text14:
pf_text15: 5 UL 200 90Mbps
pf_text16: 5 UL 512 230Mbps
pf_text17: 5 UL 1024 450Mbps
pf_text18: 5 UL MTU 630Mbps
radio2-0: 1.1.4 wiphy0
radio2-1: 1.1.6 wiphy2
radio5-0: 1.1.5 wiphy1
radio5-1: 1.1.7 wiphy3
radio5-2: 1.1.8 wiphy4
radio5-3: 1.1.9 wiphy5
radio5-4: 1.1.10 wiphy6
radio5-5: 1.1.11 wiphy7
basic_cx: 0
tput: 0
tput_multi: 0
tput_multi_tcp: 1
tput_multi_udp: 1
tput_multi_dl: 1
tput_multi_ul: 1
dual_band_tput: 1
capacity: 0
band_steering: 0
longterm: 0
mix_stability: 0
loop_iter: 1
reset_batch_size: 1
reset_duration_min: 10000
reset_duration_max: 60000
bandsteer_always_5g: 0
"""
import sys
import os
import argparse
import time
import json
from os import path
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
if 'py-json' not in sys.path:
sys.path.append(os.path.join(os.path.abspath('..'), 'py-json'))
from cv_test_manager import cv_test as cvtest
from cv_test_manager import *
from cv_commands import chamberview as cv
class ApAutoTest(cvtest):
def __init__(self,
lf_host="localhost",
lf_port=8080,
lf_user="lanforge",
lf_password="lanforge",
instance_name="ap_auto_instance",
config_name="ap_auto_config",
upstream="1.1.eth1",
pull_report=False,
dut5_0="NA",
dut2_0="NA",
load_old_cfg=False,
max_stations_2=100,
max_stations_5=100,
max_stations_dual=200,
radio2=[],
radio5=[],
enables=[],
disables=[],
raw_lines=[],
raw_lines_file="",
sets=[],
):
super().__init__(lfclient_host=lf_host, lfclient_port=lf_port)
self.lf_host = lf_host
self.lf_port = lf_port
self.lf_user = lf_user
self.lf_password = lf_password
self.createCV = cv(lf_host, lf_port)
self.instance_name = instance_name
self.config_name = config_name
self.upstream = upstream
self.pull_report = pull_report
self.load_old_cfg = load_old_cfg
self.test_name = "AP-Auto"
self.dut5_0 = dut5_0
self.dut2_0 = dut2_0
self.max_stations_2 = max_stations_2
self.max_stations_5 = max_stations_5
self.max_stations_dual = max_stations_dual
self.radio2 = radio2
self.radio5 = radio5
self.enables = enables
self.disables = disables
self.raw_lines = raw_lines
self.raw_lines_file = raw_lines_file
self.sets = sets
def setup(self):
# Nothing to do at this time.
return
def run(self):
self.createCV.sync_cv()
time.sleep(2)
self.createCV.sync_cv()
blob_test = "%s-"%(self.test_name)
self.rm_text_blob(self.config_name, blob_test) # To delete old config with same name
self.show_text_blob(None, None, False)
# Test related settings
cfg_options = []
ridx = 0
for r in self.radio2:
cfg_options.append("radio2-%i: %s"%(ridx, r[0]))
ridx += 1
ridx = 0
for r in self.radio5:
cfg_options.append("radio5-%i: %s"%(ridx, r[0]))
ridx += 1
self.apply_cfg_options(cfg_options, self.enables, self.disables, self.raw_lines, self.raw_lines_file)
# Command line args take precedence.
if self.upstream != "":
cfg_options.append("upstream_port: " + self.upstream)
if self.dut5_0 != "":
cfg_options.append("dut5-0: " + self.dut5_0)
if self.dut2_0 != "":
cfg_options.append("dut2-0: " + self.dut2_0)
if self.max_stations_2 != -1:
cfg_options.append("max_stations_2: " + str(self.max_stations_2))
if self.max_stations_5 != -1:
cfg_options.append("max_stations_5: " + str(self.max_stations_5))
if self.max_stations_dual != -1:
cfg_options.append("max_stations_dual: " + str(self.max_stations_dual))
# We deleted the scenario earlier, now re-build new one line at a time.
self.build_cfg(self.config_name, blob_test, cfg_options)
cv_cmds = []
self.create_and_run_test(self.load_old_cfg, self.test_name, self.instance_name,
self.config_name, self.sets,
self.pull_report, self.lf_host, self.lf_user, self.lf_password,
cv_cmds)
self.rm_text_blob(self.config_name, blob_test) # To delete old config with same name
def main():
parser = argparse.ArgumentParser("""
Open this file in an editor and read the top notes for more details.
Example:
./lf_ap_auto_test.py --mgr localhost --port 8080 --lf_user lanforge --lf_password lanforge \
--instance_name ap-auto-instance --config_name test_con --upstream 1.1.eth2 \
--dut5_0 'linksys-8450 Default-SSID-5gl c4:41:1e:f5:3f:25 (2)' \
--dut2_0 'linksys-8450 Default-SSID-2g c4:41:1e:f5:3f:24 (1)' \
--max_stations_2 100 --max_stations_5 100 --max_stations_dual 200 \
--radio2 1.1.wiphy0 --radio2 1.1.wiphy2 \
--radio5 1.1.wiphy1 --radio5 1.1.wiphy3 --radio5 1.1.wiphy4 \
--radio5 1.1.wiphy5 --radio5 1.1.wiphy6 --radio5 1.1.wiphy7 \
--set 'Basic Client Connectivity' 1 --set 'Multi Band Performance' 1 \
--set 'Skip 2.4Ghz Tests' 1 --set 'Skip 5Ghz Tests' 1 \
--set 'Throughput vs Pkt Size' 0 --set 'Capacity' 0 --set 'Stability' 0 --set 'Band-Steering' 0 \
--set 'Multi-Station Throughput vs Pkt Size' 0 --set 'Long-Term' 0 \
--test_rig Testbed-01 --pull_report \
--influx_host c7-graphana --influx_port 8086 --influx_org Candela \
--influx_token=-u_Wd-L8o992701QF0c5UmqEp7w7Z7YOMaWLxOMgmHfATJGnQbbmYyNxHBR9PgD6taM_tcxqJl6U8DjU1xINFQ== \
--influx_bucket ben \
--influx_tag testbed Ferndale-01
"""
)
cv_add_base_parser(parser) # see cv_test_manager.py
parser.add_argument("-u", "--upstream", type=str, default="",
help="Upstream port for wifi capacity test ex. 1.1.eth1")
parser.add_argument("--max_stations_2", type=int, default=-1,
help="Specify maximum 2.4Ghz stations")
parser.add_argument("--max_stations_5", type=int, default=-1,
help="Specify maximum 5Ghz stations")
parser.add_argument("--max_stations_dual", type=int, default=-1,
help="Specify maximum stations for dual-band tests")
parser.add_argument("--dut5_0", type=str, default="",
help="Specify 5Ghz DUT entry. Syntax is somewhat tricky: DUT-name SSID BSSID (bssid-idx), example: linksys-8450 Default-SSID-5gl c4:41:1e:f5:3f:25 (2)")
parser.add_argument("--dut2_0", type=str, default="",
help="Specify 2.4Ghz DUT entry. Syntax is somewhat tricky: DUT-name SSID BSSID (bssid-idx), example: linksys-8450 Default-SSID-2g c4:41:1e:f5:3f:24 (1)")
parser.add_argument("--radio2", action='append', nargs=1, default=[],
help="Specify 2.4Ghz radio. May be specified multiple times.")
parser.add_argument("--radio5", action='append', nargs=1, default=[],
help="Specify 5Ghz radio. May be specified multiple times.")
args = parser.parse_args()
cv_base_adjust_parser(args)
CV_Test = ApAutoTest(lf_host = args.mgr,
lf_port = args.port,
lf_user = args.lf_user,
lf_password = args.lf_password,
instance_name = args.instance_name,
config_name = args.config_name,
upstream = args.upstream,
pull_report = args.pull_report,
dut5_0 = args.dut5_0,
dut2_0 = args.dut2_0,
load_old_cfg = args.load_old_cfg,
max_stations_2 = args.max_stations_2,
max_stations_5 = args.max_stations_5,
max_stations_dual = args.max_stations_dual,
radio2 = args.radio2,
radio5 = args.radio5,
enables = args.enable,
disables = args.disable,
raw_lines = args.raw_line,
raw_lines_file = args.raw_lines_file,
sets = args.set
)
CV_Test.setup()
CV_Test.run()
CV_Test.check_influx_kpi(args)
if __name__ == "__main__":
main()
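In ApAutoTest.run() above, each repeated --radio2/--radio5 argument arrives from argparse (action='append', nargs=1) as a one-element list, which is why the loops index r[0] when building indexed "radio2-N:"/"radio5-N:" config lines. A minimal sketch of that construction:

```python
def radio_cfg_lines(radio2, radio5):
    # Mirrors ApAutoTest.run(): argparse's action='append', nargs=1 yields
    # e.g. [['1.1.wiphy0'], ['1.1.wiphy2']], so r[0] unwraps each entry.
    lines = []
    for i, r in enumerate(radio2):
        lines.append("radio2-%i: %s" % (i, r[0]))
    for i, r in enumerate(radio5):
        lines.append("radio5-%i: %s" % (i, r[0]))
    return lines

print(radio_cfg_lines([["1.1.wiphy0"]], [["1.1.wiphy1"], ["1.1.wiphy3"]]))
# ['radio2-0: 1.1.wiphy0', 'radio5-0: 1.1.wiphy1', 'radio5-1: 1.1.wiphy3']
```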


@@ -1,327 +0,0 @@
#!/usr/bin/env python3
import sys
import os
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
if 'py-json' not in sys.path:
sys.path.append(os.path.join(os.path.abspath('..'), 'py-json'))
import argparse
from LANforge import LFUtils
import time
import test_l3_longevity as DFS
def valid_endp_types(_endp_type):
etypes = _endp_type.split()
for endp_type in etypes:
valid_endp_type=['lf_udp','lf_udp6','lf_tcp','lf_tcp6','mc_udp','mc_udp6']
if not (str(endp_type) in valid_endp_type):
print('invalid endp_type: %s. Valid types lf_udp, lf_udp6, lf_tcp, lf_tcp6, mc_udp, mc_udp6' % endp_type)
exit(1)
return _endp_type
def main():
lfjson_host = "localhost"
lfjson_port = 8080
endp_types = "lf_udp"
debug_on = False
parser = argparse.ArgumentParser(
prog='lf_cisco_dfs.py',
#formatter_class=argparse.RawDescriptionHelpFormatter,
formatter_class=argparse.RawTextHelpFormatter,
epilog='''\
Useful Information:
1. Polling interval for checking traffic is fixed at 1 minute
2. The test will generate csv file
3. The tx/rx rates are fixed at 256000 bits per second
4. Maximum stations per radio is 64
''',
description='''\
lf_cisco_dfs.py:
--------------------
Summary :
----------
create stations, create traffic between upstream port and stations, run traffic.
The traffic on the stations will be checked once per minute to verify that traffic is transmitted
and received.
Generic command layout:
-----------------------
python .\\lf_cisco_dfs.py --test_duration <duration> --endp_type <traffic types> --upstream_port <port>
--radio "radio==<radio> stations==<number staions> ssid==<ssid> ssid_pw==<ssid password> security==<security type: wpa2, open, wpa3>" --debug
Multiple radios may be entered with individual --radio switches
generic command with the controller setting channel and channel width, test duration 5 min
python3 lf_cisco_dfs.py --cisco_ctlr <IP> --cisco_dfs True/False --mgr <Lanforge IP>
--cisco_channel <channel> --cisco_chan_width <20,40,80,120> --endp_type 'lf_udp lf_tcp mc_udp' --upstream_port <1.ethX>
--radio "radio==<radio 0 > stations==<number stations> ssid==<ssid> ssid_pw==<ssid password> security==<wpa2 , open>"
--radio "radio==<radio 1 > stations==<number stations> ssid==<ssid> ssid_pw==<ssid password> security==<wpa2 , open>"
--duration 5m
<duration>: number followed by one of the following
d - days
h - hours
m - minutes
s - seconds
<traffic type>:
lf_udp : IPv4 UDP traffic
lf_tcp : IPv4 TCP traffic
lf_udp6 : IPv6 UDP traffic
lf_tcp6 : IPv6 TCP traffic
mc_udp : IPv4 multi cast UDP traffic
mc_udp6 : IPv6 multi cast UDP traffic
<tos>:
BK, BE, VI, VO: Optional wifi related Tos Settings. Or, use your preferred numeric values.
#################################
#Command switches
#################################
--cisco_ctlr <IP of Cisco Controller>',default=None
--cisco_user <User-name for Cisco Controller>',default="admin"
--cisco_passwd <Password for Cisco Controller>',default="Cisco123
--cisco_prompt <Prompt for Cisco Controller>',default="(Cisco Controller) >
--cisco_ap <Cisco AP in question>',default="APA453.0E7B.CF9C"
--cisco_dfs <True/False>',default=False
--cisco_channel <channel>',default=None , no change
--cisco_chan_width <20 40 80 160>',default="20",choices=["20","40","80","160"]
--cisco_band <a | b | abgn>',default="a",choices=["a", "b", "abgn"]
--mgr <hostname for where LANforge GUI is running>',default='localhost'
-d / --test_duration <how long to run> example --test_duration 5d (5 days) default: 3m options: number followed by d, h, m or s',default='3m'
--tos: Support different ToS settings: BK | BE | VI | VO | numeric',default="BE"
--debug: Enable debugging',default=False
-t / --endp_type <types of traffic> example --endp_type \"lf_udp lf_tcp mc_udp\" Default: lf_udp , options: lf_udp, lf_udp6, lf_tcp, lf_tcp6, mc_udp, mc_udp6',
default='lf_udp', type=valid_endp_types
-u / --upstream_port <cross connect upstream_port> example: --upstream_port eth1',default='eth1')
-o / --outfile <Output file for csv data>", default='longevity_results'
#########################################
# Examples
# #######################################
Example #1 running traffic with two radios
1. Test duration 4 minutes
2. Traffic IPv4 TCP
3. Upstream-port eth1
4. Radio #0 wiphy0 has 32 stations, ssid = candelaTech-wpa2-x2048-4-1, ssid password = candelaTech-wpa2-x2048-4-1
5. Radio #1 wiphy1 has 64 stations, ssid = candelaTech-wpa2-x2048-5-3, ssid password = candelaTech-wpa2-x2048-5-3
6. Create connections with TOS of BK and VI
Command: (remove carriage returns)
python3 .\\lf_cisco_dfs.py --test_duration 4m --endp_type \"lf_tcp lf_udp mc_udp\" --tos \"BK VI\" --upstream_port eth1
--radio "radio==wiphy0 stations==32 ssid==candelaTech-wpa2-x2048-4-1 ssid_pw==candelaTech-wpa2-x2048-4-1 security==wpa2"
--radio "radio==wiphy1 stations==64 ssid==candelaTech-wpa2-x2048-5-3 ssid_pw==candelaTech-wpa2-x2048-5-3 security==wpa2"
Example #2 using cisco controller
1. cisco controller at 192.168.100.112
2. cisco dfs True
3. cisco channel 52
4. cisco channel width 20
5. traffic 'lf_udp lf_tcp mc_udp'
6. upstream port eth3
7. radio #0 wiphy0 stations 3 ssid test_candela ssid_pw [BLANK] security Open
8. radio #1 wiphy1 stations 16 ssid test_candela ssid_pw [BLANK] security Open
9. lanforge manager at 192.168.100.178
10. duration 5m
Command:
python3 lf_cisco_dfs.py --cisco_ctlr 192.168.100.112 --cisco_dfs True --mgr 192.168.100.178
--cisco_channel 52 --cisco_chan_width 20 --endp_type 'lf_udp lf_tcp mc_udp' --upstream_port 1.eth3
--radio "radio==1.wiphy0 stations==3 ssid==test_candela ssid_pw==[BLANK] security==open"
--radio "radio==1.wiphy1 stations==16 ssid==test_candela ssid_pw==[BLANK] security==open"
--test_duration 5m
''')
parser.add_argument('--cisco_ctlr', help='--cisco_ctlr <IP of Cisco Controller>',default=None)
parser.add_argument('--cisco_user', help='--cisco_user <User-name for Cisco Controller>',default="admin")
parser.add_argument('--cisco_passwd', help='--cisco_passwd <Password for Cisco Controller>',default="Cisco123")
parser.add_argument('--cisco_prompt', help='--cisco_prompt <Prompt for Cisco Controller>',default="\(Cisco Controller\) >")
parser.add_argument('--cisco_ap', help='--cisco_ap <Cisco AP in question>',default="APA453.0E7B.CF9C")
parser.add_argument('--cisco_dfs', help='--cisco_dfs <True/False>',default=False)
parser.add_argument('--cisco_channel', help='--cisco_channel <channel>',default=None)
parser.add_argument('--cisco_chan_width', help='--cisco_chan_width <20 40 80 160>',default="20",choices=["20","40","80","160"])
parser.add_argument('--cisco_band', help='--cisco_band <a | b | abgn>',default="a",choices=["a", "b", "abgn"])
parser.add_argument('--cisco_series', help='--cisco_series <9800 | 3504>',default="3504",choices=["9800","3504"])
parser.add_argument('--cisco_scheme', help='--cisco_scheme (serial|telnet|ssh): connect via serial, ssh or telnet',default="ssh",choices=["serial","telnet","ssh"])
parser.add_argument('--cisco_wlan', help='--cisco_wlan <wlan name> default: NA, NA means no change',default="NA")
parser.add_argument('--cisco_wlanID', help='--cisco_wlanID <wlanID> default: NA , NA means not change',default="NA")
parser.add_argument('--cisco_tx_power', help='--cisco_tx_power <1 | 2 | 3 | 4 | 5 | 6 | 7 | 8> 1 is highest power default NA NA means no change',default="NA"
,choices=["1","2","3","4","5","6","7","8","NA"])
parser.add_argument('--amount_ports_to_reset', help='--amount_ports_to_reset \"<min amount ports> <max amount ports>\" ', default=None)
parser.add_argument('--port_reset_seconds', help='--port_reset_seconds \"<min seconds> <max seconds>\" ', default="10 30")
parser.add_argument('--mgr', help='--mgr <hostname for where LANforge GUI is running>',default='localhost')
parser.add_argument('-d','--test_duration', help='--test_duration <how long to run> example --test_duration 5d (5 days) default: 3m options: number followed by d, h, m or s',default='3m')
parser.add_argument('--tos', help='--tos: Support different ToS settings: BK | BE | VI | VO | numeric',default="BE")
parser.add_argument('--debug', help='--debug: Enable debugging',default=False)
parser.add_argument('-t', '--endp_type', help='--endp_type <types of traffic> example --endp_type \"lf_udp lf_tcp mc_udp\" Default: lf_udp , options: lf_udp, lf_udp6, lf_tcp, lf_tcp6, mc_udp, mc_udp6',
default='lf_udp', type=valid_endp_types)
parser.add_argument('-u', '--upstream_port', help='--upstream_port <cross connect upstream_port> example: --upstream_port eth1',default='eth1')
parser.add_argument('-o','--csv_outfile', help="--csv_outfile <Output file for csv data>", default='longevity_results')
parser.add_argument('--polling_interval', help="--polling_interval <seconds>", default='60s')
#parser.add_argument('-c','--csv_output', help="Generate csv output", default=False)
parser.add_argument('-r','--radio', action='append', nargs=1, help='--radio \
\"radio==<wiphy radio> stations==<number of stations> ssid==<ssid> ssid_pw==<ssid password> security==<security>\" '\
, required=True)
parser.add_argument("--cap_ctl_out", help="--cap_ctl_out , switch the cisco controller output will be captured", action='store_true')
args = parser.parse_args()
#print("args: {}".format(args))
debug_on = args.debug
if args.test_duration:
test_duration = args.test_duration
if args.polling_interval:
polling_interval = args.polling_interval
if args.endp_type:
endp_types = args.endp_type
if args.mgr:
lfjson_host = args.mgr
if args.upstream_port:
side_b = args.upstream_port
if args.radio:
radios = args.radio
if args.csv_outfile != None:
current_time = time.strftime("%m_%d_%Y_%H_%M_%S", time.localtime())
csv_outfile = "{}_{}.csv".format(args.csv_outfile,current_time)
print("csv output file : {}".format(csv_outfile))
MAX_NUMBER_OF_STATIONS = 64
radio_name_list = []
number_of_stations_per_radio_list = []
ssid_list = []
ssid_password_list = []
ssid_security_list = []
#optional radio configuration
reset_port_enable_list = []
reset_port_time_min_list = []
reset_port_time_max_list = []
print("radios {}".format(radios))
for radio_ in radios:
radio_keys = ['radio','stations','ssid','ssid_pw','security']
radio_info_dict = dict(map(lambda x: x.split('=='), str(radio_).replace('[','').replace(']','').replace("'","").split()))
print("radio_dict {}".format(radio_info_dict))
for key in radio_keys:
if key not in radio_info_dict:
print("missing config, for the {}, all of the following need to be present {} ".format(key,radio_keys))
exit(1)
radio_name_list.append(radio_info_dict['radio'])
number_of_stations_per_radio_list.append(radio_info_dict['stations'])
ssid_list.append(radio_info_dict['ssid'])
ssid_password_list.append(radio_info_dict['ssid_pw'])
ssid_security_list.append(radio_info_dict['security'])
optional_radio_reset_keys = ['reset_port_enable']
radio_reset_found = True
for key in optional_radio_reset_keys:
if key not in radio_info_dict:
#print("port reset test not enabled")
radio_reset_found = False
break
if radio_reset_found:
reset_port_enable_list.append(True)
reset_port_time_min_list.append(radio_info_dict['reset_port_time_min'])
reset_port_time_max_list.append(radio_info_dict['reset_port_time_max'])
else:
reset_port_enable_list.append(False)
reset_port_time_min_list.append('0s')
reset_port_time_max_list.append('0s')
index = 0
station_lists = []
for (radio_name_, number_of_stations_per_radio_) in zip(radio_name_list,number_of_stations_per_radio_list):
number_of_stations = int(number_of_stations_per_radio_)
if number_of_stations > MAX_NUMBER_OF_STATIONS:
print("number of stations per radio exceeded max of : {}".format(MAX_NUMBER_OF_STATIONS))
quit(1)
station_list = LFUtils.portNameSeries(prefix_="sta", start_id_= 1 + index*1000, end_id_= number_of_stations + index*1000,
padding_number_=10000, radio=radio_name_)
station_lists.append(station_list)
index += 1
#print("endp-types: %s"%(endp_types))
dfs = DFS.L3VariableTime(
lfjson_host,
lfjson_port,
args=args,
number_template="00",
station_lists= station_lists,
name_prefix="LT-",
endp_types=endp_types,
tos=args.tos,
side_b=side_b,
radio_name_list=radio_name_list,
number_of_stations_per_radio_list=number_of_stations_per_radio_list,
ssid_list=ssid_list,
ssid_password_list=ssid_password_list,
ssid_security_list=ssid_security_list,
test_duration=test_duration,
polling_interval= polling_interval,
reset_port_enable_list=reset_port_enable_list,
reset_port_time_min_list=reset_port_time_min_list,
reset_port_time_max_list=reset_port_time_max_list,
side_a_min_rate=256000,
side_b_min_rate=256000,
debug_on=debug_on,
outfile=csv_outfile)
dfs.pre_cleanup()
dfs.build()
if not dfs.passes():
print("build step failed.")
print(dfs.get_fail_message())
exit(1)
dfs.start(False, False)
dfs.stop()
if not dfs.passes():
print("stop test failed")
print(dfs.get_fail_message())
print("Pausing 30 seconds after run for manual inspection before we clean up.")
time.sleep(30)
dfs.cleanup()
if dfs.passes():
print("Full test passed, all connections increased rx bytes")
if __name__ == "__main__":
main()
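The --radio parsing above stringifies each one-element argparse list, strips the list punctuation, and splits the key==value tokens into a dict. Isolated, that transform behaves as in this sketch (SSID and password values are illustrative):

```python
def parse_radio_arg(radio_):
    # Same transform as above: str() a one-element nargs=1 list, strip the
    # [ ] ' characters, split on whitespace, then split each key==value pair.
    return dict(map(lambda x: x.split('=='),
                    str(radio_).replace('[', '').replace(']', '').replace("'", "").split()))

d = parse_radio_arg(["radio==wiphy0 stations==32 ssid==test ssid_pw==secret security==wpa2"])
print(d["radio"], d["stations"])
# wiphy0 32
```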

File diff suppressed because it is too large

py-scripts/lf_dataplane_test.py Executable file

@@ -0,0 +1,281 @@
#!/usr/bin/env python3
"""
Note: To run this script, the LANforge GUI should be running with a CLI socket open:
path: cd LANforgeGUI_5.4.3 (adjust 5.4.3 to match the GUI version)
pwd (Output : /home/lanforge/LANforgeGUI_5.4.3)
./lfclient.bash -cli-socket 3990
This script is used to automate running Dataplane tests. You
may need to view a Dataplane test configured through the GUI to understand
the options and how best to input data.
./lf_dataplane_test.py --mgr localhost --port 8080 --lf_user lanforge --lf_password lanforge \
--instance_name dataplane-instance --config_name test_con --upstream 1.1.eth2 \
--dut linksys-8450 --duration 15s --station 1.1.sta01500 \
--download_speed 85% --upload_speed 0 \
--raw_line 'pkts: Custom;60;142;256;512;1024;MTU' \
--raw_line 'cust_pkt_sz: 88 1200' \
--raw_line 'directions: DUT Transmit;DUT Receive' \
--raw_line 'traffic_types: UDP;TCP' \
--test_rig Testbed-01 --pull_report \
--influx_host c7-graphana --influx_port 8086 --influx_org Candela \
--influx_token=-u_Wd-L8o992701QF0c5UmqEp7w7Z7YOMaWLxOMgmHfATJGnQbbmYyNxHBR9PgD6taM_tcxqJl6U8DjU1xINFQ== \
--influx_bucket ben \
--influx_tag testbed Ferndale-01
Note:
--raw_line 'line contents' will add any setting to the test config. This is
a useful way to support any options not specifically enabled by the
command-line options.
--set modifications will be applied after the other config has happened,
so it can be used to override any other config.
Example of raw text config for Dataplane, to show other possible options:
show_events: 1
show_log: 0
port_sorting: 0
kpi_id: Dataplane Pkt-Size
notes0: ec5211 in bridge mode, wpa2 auth.
bg: 0xE0ECF8
test_rig:
show_scan: 1
auto_helper: 0
skip_2: 0
skip_5: 0
skip_5b: 1
skip_dual: 0
skip_tri: 1
selected_dut: ea8300
duration: 15000
traffic_port: 1.1.157 sta01500
upstream_port: 1.1.2 eth2
path_loss: 10
speed: 85%
speed2: 0Kbps
min_rssi_bound: -150
max_rssi_bound: 0
channels: AUTO
modes: Auto
pkts: Custom;60;142;256;512;1024;MTU
spatial_streams: AUTO
security_options: AUTO
bandw_options: AUTO
traffic_types: UDP;TCP
directions: DUT Transmit;DUT Receive
txo_preamble: OFDM
txo_mcs: 0 CCK, OFDM, HT, VHT
txo_retries: No Retry
txo_sgi: OFF
txo_txpower: 15
attenuator: 0
attenuator2: 0
attenuator_mod: 255
attenuator_mod2: 255
attenuations: 0..+50..950
attenuations2: 0..+50..950
chamber: 0
tt_deg: 0..+45..359
cust_pkt_sz: 88 1200
show_bar_labels: 1
show_prcnt_tput: 0
show_3s: 0
show_ll_graphs: 0
show_gp_graphs: 1
show_1m: 1
pause_iter: 0
outer_loop_atten: 0
show_realtime: 1
operator:
mconn: 1
mpkt: 1000
tos: 0
loop_iterations: 1
"""
import sys
import os
import argparse
import time
import json
from os import path
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
if 'py-json' not in sys.path:
sys.path.append(os.path.join(os.path.abspath('..'), 'py-json'))
from cv_test_manager import cv_test as cvtest
from cv_test_manager import *
from cv_commands import chamberview as cv
class DataplaneTest(cvtest):
def __init__(self,
lf_host="localhost",
lf_port=8080,
lf_user="lanforge",
lf_password="lanforge",
instance_name="dpt_instance",
config_name="dpt_config",
upstream="1.1.eth2",
pull_report=False,
load_old_cfg=False,
upload_speed="0",
download_speed="85%",
duration="15s",
station="1.1.sta01500",
dut="NA",
enables=[],
disables=[],
raw_lines=[],
raw_lines_file="",
sets=[],
):
super().__init__(lfclient_host=lf_host, lfclient_port=lf_port)
self.lf_host = lf_host
self.lf_port = lf_port
self.lf_user = lf_user
self.lf_password = lf_password
self.createCV = cv(lf_host, lf_port)
self.instance_name = instance_name
self.config_name = config_name
self.dut = dut
self.duration = duration
self.upstream = upstream
self.station = station
self.pull_report = pull_report
self.load_old_cfg = load_old_cfg
self.test_name = "Dataplane"
self.upload_speed = upload_speed
self.download_speed = download_speed
self.enables = enables
self.disables = disables
self.raw_lines = raw_lines
self.raw_lines_file = raw_lines_file
self.sets = sets
def setup(self):
# Nothing to do at this time.
return
def run(self):
self.createCV.sync_cv()
time.sleep(2)
self.createCV.sync_cv()
blob_test = "dataplane-test-latest-"
self.rm_text_blob(self.config_name, blob_test) # To delete old config with same name
self.show_text_blob(None, None, False)
# Test related settings
cfg_options = []
self.apply_cfg_options(cfg_options, self.enables, self.disables, self.raw_lines, self.raw_lines_file)
# cmd line args take precedence and so come last in the cfg array.
if self.upstream != "":
cfg_options.append("upstream_port: " + self.upstream)
if self.station != "":
cfg_options.append("traffic_port: " + self.station)
if self.download_speed != "":
cfg_options.append("speed: " + self.download_speed)
if self.upload_speed != "":
cfg_options.append("speed2: " + self.upload_speed)
if self.duration != "":
cfg_options.append("duration: " + self.duration)
if self.dut != "":
cfg_options.append("selected_dut: " + self.dut)
# We deleted the scenario earlier, now re-build new one line at a time.
self.build_cfg(self.config_name, blob_test, cfg_options)
cv_cmds = []
self.create_and_run_test(self.load_old_cfg, self.test_name, self.instance_name,
self.config_name, self.sets,
self.pull_report, self.lf_host, self.lf_user, self.lf_password,
cv_cmds)
self.rm_text_blob(self.config_name, blob_test) # To delete old config with same name
def main():
parser = argparse.ArgumentParser("""
Open this file in an editor and read the top notes for more details.
Example:
./lf_dataplane_test.py --mgr localhost --port 8080 --lf_user lanforge --lf_password lanforge \
--instance_name dataplane-instance --config_name test_con --upstream 1.1.eth2 \
--dut linksys-8450 --duration 15s --station 1.1.sta01500 \
--download_speed 85% --upload_speed 0 \
--raw_line 'pkts: Custom;60;142;256;512;1024;MTU' \
--raw_line 'cust_pkt_sz: 88 1200' \
--raw_line 'directions: DUT Transmit;DUT Receive' \
--raw_line 'traffic_types: UDP;TCP' \
--test_rig Testbed-01 --pull_report \
--influx_host c7-graphana --influx_port 8086 --influx_org Candela \
--influx_token=-u_Wd-L8o992701QF0c5UmqEp7w7Z7YOMaWLxOMgmHfATJGnQbbmYyNxHBR9PgD6taM_tcxqJl6U8DjU1xINFQ== \
--influx_bucket ben \
--influx_tag testbed Ferndale-01
"""
)
cv_add_base_parser(parser) # see cv_test_manager.py
parser.add_argument("-u", "--upstream", type=str, default="",
help="Upstream port for wifi capacity test ex. 1.1.eth2")
parser.add_argument("--station", type=str, default="",
help="Station to be used in this test, example: 1.1.sta01500")
parser.add_argument("--dut", default="",
help="Specify DUT used by this test, example: linksys-8450")
parser.add_argument("--download_speed", default="",
help="Specify requested download speed. Percentage of theoretical is also supported. Default: 85%%")  # '%%' because argparse %-formats help strings
parser.add_argument("--upload_speed", default="",
help="Specify requested upload speed. Percentage of theoretical is also supported. Default: 0")
parser.add_argument("--duration", default="",
help="Specify duration of each traffic run")
args = parser.parse_args()
cv_base_adjust_parser(args)
CV_Test = DataplaneTest(lf_host = args.mgr,
lf_port = args.port,
lf_user = args.lf_user,
lf_password = args.lf_password,
instance_name = args.instance_name,
config_name = args.config_name,
upstream = args.upstream,
pull_report = args.pull_report,
load_old_cfg = args.load_old_cfg,
download_speed = args.download_speed,
upload_speed = args.upload_speed,
duration = args.duration,
dut = args.dut,
station = args.station,
enables = args.enable,
disables = args.disable,
raw_lines = args.raw_line,
raw_lines_file = args.raw_lines_file,
sets = args.set
)
CV_Test.setup()
CV_Test.run()
CV_Test.check_influx_kpi(args)
if __name__ == "__main__":
main()
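lf_dataplane_test.py builds the Chamber View config as a list of "key: value" text lines, appending CLI-derived settings after any --raw_line entries so the command-line values win. A minimal sketch of that precedence (the function name here is illustrative, not part of cv_test_manager):

```python
# Illustrative sketch of the cfg_options precedence in DataplaneTest.run():
# raw lines go in first, CLI-derived settings are appended last so later lines override.
def build_cfg_options(raw_lines, upstream="", station="", download_speed=""):
    cfg = list(raw_lines)
    if upstream != "":
        cfg.append("upstream_port: " + upstream)
    if station != "":
        cfg.append("traffic_port: " + station)
    if download_speed != "":
        cfg.append("speed: " + download_speed)
    return cfg

opts = build_cfg_options(["speed: 10%"], upstream="1.1.eth2", download_speed="85%")
print(opts)  # ['speed: 10%', 'upstream_port: 1.1.eth2', 'speed: 85%']
```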

py-scripts/lf_dfs_test.py Executable file (2846 lines)
File diff suppressed because it is too large

py-scripts/lf_ftp_test.py Executable file (549 lines)

@@ -0,0 +1,549 @@
#!/usr/bin/env python3
"""
NAME: lf_ftp_test.py
PURPOSE:
Creates stations and endpoints to generate and verify layer-4 traffic over an FTP connection,
and finds the download/upload time of each client according to file size.
This script monitors the bytes-rd attribute of the endpoints.
SETUP:
Create a file to be downloaded (Linux): fallocate -l <size> <name>, for example: fallocate -l 2M ftp_test.txt
EXAMPLE:
./lf_ftp_test.py --ssid "jedway-wap2-x2048-5-3" --passwd "jedway-wpa2-x2048-5-3" --security wpa2 --bands "5G" --direction "Download" \
--file_size "2MB" --num_stations 2
INCLUDE_IN_README
-Jitendrakumar Kushavah
Copyright 2021 Candela Technologies Inc
License: Free to distribute and modify. LANforge systems must be licensed.
"""
import sys
from ftp_html import *
import paramiko
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
if 'py-json' not in sys.path:
sys.path.append('../py-json')
from LANforge import LFUtils
from LANforge.lfcli_base import LFCliBase
from LANforge.LFUtils import *
import realm
import argparse
from datetime import datetime
import time
import os
class ftp_test(LFCliBase):
def __init__(self, lfclient_host="localhost", lfclient_port=8080, radio = "wiphy0", sta_prefix="sta", start_id=0, num_sta= None,
dut_ssid=None,dut_security=None, dut_passwd=None, file_size=None, band=None,
upstream="eth1",_debug_on=False, _exit_on_error=False, _exit_on_fail=False, direction= None):
super().__init__(lfclient_host, lfclient_port, _debug=_debug_on, _exit_on_fail=_exit_on_fail)
print("Test is about to start")
self.host = lfclient_host
self.port = lfclient_port
self.radio = radio
self.upstream = upstream
self.sta_prefix = sta_prefix
self.sta_start_id = start_id
self.num_sta = num_sta
self.ssid = dut_ssid
self.security = dut_security
self.password = dut_passwd
self.requests_per_ten = 1
self.band=band
self.file_size=file_size
self.direction=direction
self.local_realm = realm.Realm(lfclient_host=self.host, lfclient_port=self.port)
self.station_profile = self.local_realm.new_station_profile()
self.cx_profile = self.local_realm.new_http_profile()
self.port_util = realm.PortUtils(self.local_realm)
self.cx_profile.requests_per_ten = self.requests_per_ten
print("Test is Initialized")
def set_values(self):
#This method sets values according to user input
if self.band == "5G":
self.radio = ["wiphy2"] # need to pass in the radios
if self.file_size == "2MB":
#time limit used as the pass/fail criterion
self.duration = self.convert_min_in_time(1)
elif self.file_size == "500MB":
self.duration = self.convert_min_in_time(1) # 30
elif self.file_size == "1000MB":
self.duration = self.convert_min_in_time(1) # 50
else:
self.duration = self.convert_min_in_time(10) # 10
elif self.band == "2.4G":
self.radio = ["wiphy0"] # need to pass in the radios
if self.file_size == "2MB":
self.duration = self.convert_min_in_time(1) # 2
elif self.file_size == "500MB":
self.duration = self.convert_min_in_time(1) # 60
elif self.file_size == "1000MB":
self.duration = self.convert_min_in_time(1) # 80
else:
self.duration = self.convert_min_in_time(10) # 10
elif self.band == "Both":
self.radio = ["wiphy2", "wiphy0"] # need to pass in the radios
#if Both, half the stations go to 2.4G and half to 5G
self.num_sta = self.num_sta // 2
print(self.num_sta)
if self.file_size == "2MB":
self.duration = self.convert_min_in_time(1) # 2
elif self.file_size == "500MB":
self.duration = self.convert_min_in_time(1) # 60
elif self.file_size == "1000MB":
self.duration = self.convert_min_in_time(1) # 80
else:
self.duration = self.convert_min_in_time(10) # 10
self.file_size_bytes=int(self.convert_file_size_in_Bytes(self.file_size))
def precleanup(self):
self.count=0
#delete everything in the GUI before starting the script
try:
self.local_realm.load("BLANK")
except:
print("Couldn't load 'BLANK' Test configurations")
for rad in self.radio:
if rad == "wiphy2":
#select mode (all stations will connect to 5G)
self.station_profile.mode = 10
self.count=self.count+1
elif rad == "wiphy0": # This probably is not the best selection mode
# select mode (all stations will connect to 2.4G)
self.station_profile.mode = 6
self.count = self.count + 1
#if the Both band option is selected, the 2.4G station ids start after the 5G ids
if self.count == 2:
self.sta_start_id = self.num_sta
self.num_sta = 2 * (self.num_sta)
#with Both bands, the first half of the stations will connect to 5G
self.station_profile.mode = 10
self.cx_profile.cleanup()
#create the second station list, starting at sta_start_id
self.station_list1 = LFUtils.portNameSeries(prefix_=self.sta_prefix, start_id_=self.sta_start_id,
end_id_=self.num_sta - 1, padding_number_=10000,
radio=rad)
#clean up the station list that starts at sta_start_id
self.station_profile.cleanup(self.station_list1, debug_=self.debug)
LFUtils.wait_until_ports_disappear(base_url=self.lfclient_url,
port_list=self.station_list,
debug=self.debug)
return
#clean layer4 ftp traffic
self.cx_profile.cleanup()
self.station_list = LFUtils.portNameSeries(prefix_=self.sta_prefix, start_id_=self.sta_start_id,
end_id_=self.num_sta - 1, padding_number_=10000,
radio=rad)
#cleans stations
self.station_profile.cleanup(self.station_list , delay=1, debug_=self.debug)
LFUtils.wait_until_ports_disappear(base_url=self.lfclient_url,
port_list=self.station_list,
debug=self.debug)
time.sleep(1)
print("precleanup done")
def build(self):
#set ftp
self.port_util.set_ftp(port_name=self.local_realm.name_to_eid(self.upstream)[2], resource=1, on=True)
for rad in self.radio:
#station build
self.station_profile.use_security(self.security, self.ssid, self.password)
self.station_profile.set_number_template("00")
self.station_profile.set_command_flag("add_sta", "create_admin_down", 1)
self.station_profile.set_command_param("set_port", "report_timer", 1500)
self.station_profile.set_command_flag("set_port", "rpt_timer", 1)
self.station_profile.create(radio=rad, sta_names_=self.station_list, debug=self.debug)
self.local_realm.wait_until_ports_appear(sta_list=self.station_list)
self.station_profile.admin_up()
if self.local_realm.wait_for_ip(self.station_list):
self._pass("All stations got IPs")
else:
self._fail("Stations failed to get IPs")
exit(1)
#building layer4
self.cx_profile.direction ="dl"
self.cx_profile.dest = "/dev/null"
print('DIRECTION',self.direction)
if self.direction == "Download":
self.cx_profile.create(ports=self.station_profile.station_names, ftp_ip="10.40.0.1/ftp_test.txt",
sleep_time=.5,debug_=self.debug,suppress_related_commands_=True, ftp=True, user="lanforge",
passwd="lanforge", source="")
elif self.direction == "Upload":
dict_sta_and_ip = {}
#query the GUI for each station's IP address
data = self.json_get("ports/list?fields=IP")
# match each station name to its IP address
for i in self.station_list:
for j in data['interfaces']:
for k in j:
if i == k:
dict_sta_and_ip[k] = j[i]['ip']
#list of ip addr of all stations
ip = list(dict_sta_and_ip.values())
eth_list = []
client_list = []
#list of all stations
for i in range(len(self.station_list)):
client_list.append(self.station_list[i][4:])
#list of upstream port
eth_list.append(self.upstream)
#create a layer-4 connection for each upload
for client_num in range(len(self.station_list)):
self.cx_profile.create(ports=eth_list, ftp_ip=ip[client_num] + "/ftp_test_upload.txt", sleep_time=.5,
debug_=self.debug, suppress_related_commands_=True, ftp=True,
user="lanforge", passwd="lanforge",
source="", upload_name=client_list[client_num])
#if Both bands are present, build stations from the second station list
if self.count == 2:
self.station_list = self.station_list1
# if Both band then another 20 stations will connects to 2.4G
self.station_profile.mode = 6
print("Test Build done")
def start(self, print_pass=False, print_fail=False):
for rad in self.radio:
self.cx_profile.start_cx()
print("Test Started")
def stop(self):
self.cx_profile.stop_cx()
self.station_profile.admin_down()
def postcleanup(self):
self.cx_profile.cleanup()
self.local_realm.load("BLANK")
self.station_profile.cleanup(self.station_profile.station_names, delay=1, debug_=self.debug)
LFUtils.wait_until_ports_disappear(base_url=self.lfclient_url, port_list=self.station_profile.station_names,
debug=self.debug)
#Create file for given file size
def file_create(self):
if os.path.isfile("/home/lanforge/ftp_test.txt"):
os.remove("/home/lanforge/ftp_test.txt")
os.system("fallocate -l " +self.file_size +" /home/lanforge/ftp_test.txt")
print("File creation done", self.file_size)
#convert file size MB or GB into Bytes
def convert_file_size_in_Bytes(self,size):
if (size.endswith("MB")) or (size.endswith("Mb")) or (size.endswith("GB")) or (size.endswith("Gb")):
if (size.endswith("MB")) or (size.endswith("Mb")):
return float(size[:-2]) * 10**6
elif (size.endswith("GB")) or (size.endswith("Gb")):
return float(size[:-2]) * 10**9
def my_monitor(self,time1):
#data in json format
data = self.json_get("layer4/list?fields=bytes-rd")
print("layer4/list?fields=bytes-read: {}".format(data))
#list of layer 4 connections name
self.data1 = []
for i in range(self.num_sta):
if self.num_sta == 1:
# the station count is 1, but the station index is 0
print("i: {} self.num_sta: {}".format(i,self.num_sta))
print("data['endpoint'][{}]: {}".format(i,data['endpoint']))
print("data list: {}".format((str(list(data['endpoint'].keys())))[2:-2]))
#self.data1.append((str(list(data['endpoint']['name']))))
self.data1.append((str((data['endpoint']['name']))))
else:
# the station count is 1, but the station index is 0
print("i: {} self.num_sta: {}".format(i,self.num_sta))
print("data['endpoint'][{}]: {}".format(i,data['endpoint'][i]))
print("data list: {}".format((str(list(data['endpoint'][i].keys())))[2:-2]))
self.data1.append((str(list(data['endpoint'][i].keys())))[2:-2])
data2 = self.data1
print("data1: {}".format(self.data1))
print("data2: {}".format(data2))
list_of_time = []
list1 = []
list2 = []
for i in range(self.num_sta):
list_of_time.append(0)
print("list_of_time: {}".format(list_of_time))
num_sta_finished = 0
while list_of_time.count(0) != 0:
#run script upto given time
if str(datetime.now()- time1) >= self.duration:
break
for i in range(self.num_sta):
data = self.json_get("layer4/list?fields=bytes-rd")
#print("data from bytes-rd: {}".format(data))
if self.num_sta == 1:
#reading uc-avg data in json format
uc_avg= self.json_get("layer4/list?fields=uc-avg")
#print("layer4/list?fields=uc-avg: {}".format(uc_avg))
if data['endpoint']['bytes-rd'] <= self.file_size_bytes:
data = self.json_get("layer4/list?fields=bytes-rd")
if data['endpoint']['bytes-rd'] >= self.file_size_bytes:
list1.append(i)
print("list1: {} list2: {}".format(list1,list2))
if list1.count(i) == 1:
list2.append(i)
list1 = list2
print("CX_{} list1: {} list2: {}".format(data2[0],list1,list2))
#stop station after download or upload file with particular size
self.json_post("/cli-json/set_cx_state", {
"test_mgr": "default_tm",
"cx_name": "CX_" + data2[0],
"cx_state": "STOPPED"
}, debug_=self.debug)
list_of_time[i] = round(int(uc_avg['endpoint']['uc-avg'])/1000,1)
num_sta_finished += 1
if num_sta_finished >= self.num_sta:
break
else:
#reading uc-avg data in json format
uc_avg= self.json_get("layer4/list?fields=uc-avg")
if data['endpoint'][i][data2[i]]['bytes-rd'] <= self.file_size_bytes:
data = self.json_get("layer4/list?fields=bytes-rd")
if data['endpoint'][i][data2[i]]['bytes-rd'] >= self.file_size_bytes:
list1.append(i)
print("list1: {} list2: {}".format(list1,list2))
if list1.count(i) == 1:
list2.append(i)
list1 = list2
print("CX_{} list1: {} list2: {}".format(data2[i],list1,list2))
#stop station after download or upload file with particular size
self.json_post("/cli-json/set_cx_state", {
"test_mgr": "default_tm",
"cx_name": "CX_" + data2[i],
"cx_state": "STOPPED"
}, debug_=self.debug)
list_of_time[i] = round(int(uc_avg['endpoint'][i][data2[i]]['uc-avg'])/1000,1)
time.sleep(0.5)
# print(".", end='')
#return list of download/upload time in seconds
return list_of_time
#Arrange FTP download/upload time data into a dictionary
def ftp_test_data(self, list_time, pass_fail, bands, file_sizes, directions, num_stations):
#creating dictionary for single iteration
create_dict={}
create_dict["band"] = self.band
create_dict["direction"] = self.direction
create_dict["file_size"] = self.file_size
create_dict["time"] = list_time
create_dict["duration"] = self.time_test
create_dict["result"] = pass_fail
create_dict["bands"] = bands
create_dict["file_sizes"] = file_sizes
create_dict["directions"] = directions
create_dict["num_stations"] = num_stations
return create_dict
#Method for AP reboot
def ap_reboot(self, ip, user, pswd):
print("starting AP reboot")
# create an SSH client object; we use it to connect to the AP
ssh = paramiko.SSHClient()
# automatically adds the missing host key
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(ip, port=22, username=user, password=pswd, banner_timeout=600)
stdin, stdout, stderr = ssh.exec_command('reboot')
output = stdout.readlines()
ssh.close()
# print('\n'.join(output))
time.sleep(180)
print("AP rebooted")
def convert_min_in_time(self,total_minutes):
#save the test time in minutes
self.time_test = total_minutes
# Get hours with floor division
hours = total_minutes // 60
# Get additional minutes with modulus
minutes = total_minutes % 60
# Create time as a string
time_string = str("%d:%02d" % (divmod(total_minutes, 60))) + ":00" + ":000000"
return time_string
def pass_fail_check(self,time_list):
if time_list.count(0) == 0:
return "Pass"
else:
return "Fail"
def main():
# This has --mgr, --mgr_port and --debug
parser = LFCliBase.create_bare_argparse(prog="netgear-ftp", formatter_class=argparse.RawTextHelpFormatter, epilog="About This Script")
# Adding More Arguments for custom use
parser.add_argument('--ssid',type=str, help='--ssid', default="TestAP-Jitendra")
parser.add_argument('--passwd',type=str, help='--passwd', default="BLANK")
parser.add_argument('--security', type=str, help='--security', default="open")
parser.add_argument('--radios',nargs="+",help='--radio to use on LANforge for 5G and 2G', default=["wiphy0"])
# Test variables
parser.add_argument('--bands', nargs="+", help='--bands defaults ["5G","2.4G","Both"]', default=["5G","2.4G","Both"])
parser.add_argument('--directions', nargs="+", help='--directions defaults ["Download","Upload"]', default=["Download","Upload"])
parser.add_argument('--file_sizes', nargs="+", help='--File Size defaults ["2MB","500MB","1000MB"]', default=["2MB","500MB","1000MB"])
parser.add_argument('--num_stations', type=int, help='--num_client is number of stations', default=40)
args = parser.parse_args()
# 1st time stamp for test duration
time_stamp1 = datetime.now()
#used as the key when building the ftp_test data dictionary
iteraration_num=0
#empty dictionary for whole test data
ftp_data={}
#run the test for all combinations of band, direction, and file size
for band in args.bands:
for direction in args.directions:
for file_size in args.file_sizes:
# Start Test
obj = ftp_test(lfclient_host=args.mgr,
lfclient_port=args.mgr_port,
dut_ssid=args.ssid,
dut_passwd=args.passwd,
dut_security=args.security,
num_sta= args.num_stations,
band=band,
file_size=file_size,
direction=direction
)
iteraration_num=iteraration_num+1
obj.file_create()
obj.set_values()
obj.precleanup()
#if file_size != "2MB":
#obj.ap_reboot("192.168.213.190","root","Password@123xzsawq@!")
obj.build()
if not obj.passes():
print(obj.get_fail_message())
exit(1)
#First time stamp
time1 = datetime.now()
obj.start(False, False)
#return list of download/upload completed time stamp
time_list = obj.my_monitor(time1)
# check pass or fail
pass_fail = obj.pass_fail_check(time_list)
#dictionary of whole data
ftp_data[iteraration_num] = obj.ftp_test_data(time_list,pass_fail,args.bands,args.file_sizes,args.directions,args.num_stations)
obj.stop()
obj.postcleanup()
#2nd time stamp for test duration
time_stamp2 = datetime.now()
#total time for test duration
test_duration = str(time_stamp2 - time_stamp1)[:-7]
print("FTP Test Data", ftp_data)
date = str(datetime.now()).split(",")[0].replace(" ", "-").split(".")[0]
test_setup_info = {
"AP Name": "vap5",
"SSID": args.ssid,
"Number of Stations": args.num_stations,
"Test Duration": test_duration
}
input_setup_info = {
"IP": "192.168.213.190" ,
"user": "root",
"Contact": "support@candelatech.com"
}
generate_report(ftp_data,
date,
test_setup_info,
input_setup_info,
graph_path="/home/lanforge/html-reports/FTP-Test")
if __name__ == '__main__':
main()
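The file sizes accepted by lf_ftp_test.py ("2MB", "500MB", ...) are converted to bytes with decimal (SI) multipliers by convert_file_size_in_Bytes. A standalone sketch of that conversion (slightly more permissive about suffix case than the original):

```python
def file_size_to_bytes(size):
    """Convert '2MB' or '1.5GB' style strings to bytes using decimal (SI)
    multipliers, mirroring ftp_test.convert_file_size_in_Bytes; returns
    None for unrecognized suffixes."""
    suffix = size[-2:].upper()
    if suffix == "MB":
        return float(size[:-2]) * 10**6
    if suffix == "GB":
        return float(size[:-2]) * 10**9
    return None

print(file_size_to_bytes("2MB"))    # 2000000.0
print(file_size_to_bytes("1.5GB"))  # 1500000000.0
```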

py-scripts/lf_graph.py Normal file (211 lines)

@@ -0,0 +1,211 @@
#!/usr/bin/env python3
import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
import pandas as pd
import pdfkit
import math
# internal Candela references included during initial phases, to be deleted at a future date
# graph reporting classes
class lf_bar_graph():
def __init__(self,
_data_set= [[30,55,69,37],[45,67,34,22],[22,45,12,34]],
_xaxis_name="x-axis",
_yaxis_name="y-axis",
_xaxis_categories=[1,2,3,4],
_graph_image_name="image_name",
_label=["bi-downlink", "bi-uplink",'uplink'],
_color=None,
_bar_width=0.25,
_color_edge='grey',
_font_weight='bold',
_color_name=['lightcoral','darkgrey','r','g','b','y'],
_figsize=(10,5),
_dpi=96):
self.data_set=_data_set
self.xaxis_name=_xaxis_name
self.yaxis_name=_yaxis_name
self.xaxis_categories=_xaxis_categories
self.graph_image_name=_graph_image_name
self.label=_label
self.color=_color
self.bar_width=_bar_width
self.color_edge=_color_edge
self.font_weight=_font_weight
self.color_name=_color_name
self.figsize=_figsize
def build_bar_graph(self):
if self.color is None:
i = 0
self.color = []
for col in self.data_set:
self.color.append(self.color_name[i])
i = i+1
fig = plt.subplots(figsize=self.figsize)
i = 0
for _ in self.data_set:
if i > 0:
br = br1
br2 = [x + self.bar_width for x in br]
plt.bar(br2, self.data_set[i], color=self.color[i], width=self.bar_width,
edgecolor=self.color_edge, label=self.label[i])
br1 = br2
i = i+1
else:
br1 = np.arange(len(self.data_set[i]))
plt.bar(br1, self.data_set[i], color=self.color[i], width=self.bar_width,
edgecolor=self.color_edge, label=self.label[i])
i=i+1
plt.xlabel(self.xaxis_name, fontweight='bold', fontsize=15)
plt.ylabel(self.yaxis_name, fontweight='bold', fontsize=15)
plt.xticks([r + self.bar_width for r in range(len(self.data_set[0]))],
self.xaxis_categories)
plt.legend()
fig = plt.gcf()
plt.savefig("%s.png"% (self.graph_image_name), dpi=96)
plt.close()
print("{}.png".format(self.graph_image_name))
return "%s.png" % (self.graph_image_name)
class lf_scatter_graph():
def __init__(self,
_x_data_set= ["sta0 ","sta1","sta2","sta3"],
_y_data_set= [[30,55,69,37]],
_xaxis_name="x-axis",
_yaxis_name="y-axis",
_label = ["num1", "num2"],
_graph_image_name="image_name",
_color=["r","y"],
_figsize=(9,4)):
self.x_data_set = _x_data_set
self.y_data_set = _y_data_set
self.xaxis_name = _xaxis_name
self.yaxis_name = _yaxis_name
self.figsize = _figsize
self.graph_image_name = _graph_image_name
self.color = _color
self.label = _label
def build_scatter_graph(self):
if self.color is None:
self.color = ["orchid", "lime", "aquamarine", "royalblue", "darkgray", "maroon"]
fig = plt.subplots(figsize=self.figsize)
plt.scatter(self.x_data_set, self.y_data_set[0], color=self.color[0], label=self.label[0])
if len(self.y_data_set) > 1:
for i in range(1,len(self.y_data_set)):
plt.scatter(self.x_data_set, self.y_data_set[i], color=self.color[i], label=self.label[i])
plt.xlabel(self.xaxis_name, fontweight='bold', fontsize=15)
plt.ylabel(self.yaxis_name, fontweight='bold', fontsize=15)
plt.gcf().autofmt_xdate()
plt.legend()
plt.savefig("%s.png" % (self.graph_image_name), dpi=96)
plt.close()
print("{}.png".format(self.graph_image_name))
return "%s.png" % (self.graph_image_name)
class lf_stacked_graph():
def __init__(self,
_data_set= [[1,2,3,4],[1,1,1,1],[1,1,1,1]],
_xaxis_name="Stations",
_yaxis_name="Numbers",
_label = ['Success','Fail'],
_graph_image_name="image_name",
_color = ["b","g"],
_figsize=(9,4)):
self.data_set = _data_set # [x_axis,y1_axis,y2_axis]
self.xaxis_name = _xaxis_name
self.yaxis_name = _yaxis_name
self.figsize = _figsize
self.graph_image_name = _graph_image_name
self.label = _label
self.color = _color
def build_stacked_graph(self):
fig = plt.subplots(figsize=self.figsize)
if self.color is None:
self.color = ["darkred", "tomato", "springgreen", "skyblue", "indigo", "plum"]
plt.bar(self.data_set[0], self.data_set[1], color=self.color[0])
plt.bar(self.data_set[0], self.data_set[2], bottom=self.data_set[1], color=self.color[1])
if len(self.data_set) > 3:
for i in range(3, len(self.data_set)):
plt.bar(self.data_set[0], self.data_set[i], bottom=np.array(self.data_set[i-2])+np.array(self.data_set[i-1]), color=self.color[i-1])
plt.xlabel(self.xaxis_name)
plt.ylabel(self.yaxis_name)
plt.legend(self.label)
plt.savefig("%s.png" % (self.graph_image_name), dpi=96)
plt.close()
print("{}.png".format(self.graph_image_name))
return "%s.png" % (self.graph_image_name)
# Unit Test
if __name__ == "__main__":
output_html_1 = "graph_1.html"
output_pdf_1 = "graph_1.pdf"
# test build_bar_graph with defaults
graph = lf_bar_graph()
graph_html_obj = """
<img align='center' style='padding:15;margin:5;width:1000px;' src=""" + "%s" % (graph.build_bar_graph()) + """ border='1' />
<br><br>
"""
#
test_file = open(output_html_1, "w")
test_file.write(graph_html_obj)
test_file.close()
# write to pdf
# write logic to generate pdf here
# wget https://github.com/wkhtmltopdf/packaging/releases/download/0.12.6-1/wkhtmltox_0.12.6-1.focal_amd64.deb
# sudo apt install ./wkhtmltox_0.12.6-1.focal_amd64.deb
options = {"enable-local-file-access" : None} # prevent error: Blocked access to file
pdfkit.from_file(output_html_1, output_pdf_1, options=options)
# test build_bar_graph setting values
dataset = [[45,67,34,22],[22,45,12,34],[30,55,69,37]]
x_axis_values = [1,2,3,4]
output_html_2 = "graph_2.html"
output_pdf_2 = "graph_2.pdf"
# test build_bar_graph with defaults
graph = lf_bar_graph(_data_set=dataset,
_xaxis_name="stations",
_yaxis_name="Throughput 2 (Mbps)",
_xaxis_categories=x_axis_values,
_graph_image_name="Bi-single_radio_2.4GHz",
_label=["bi-downlink", "bi-uplink",'uplink'],
_color=None,
_color_edge='red')
graph_html_obj = """
<img align='center' style='padding:15;margin:5;width:1000px;' src=""" + "%s" % (graph.build_bar_graph()) + """ border='1' />
<br><br>
"""
#
test_file = open(output_html_2, "w")
test_file.write(graph_html_obj)
test_file.close()
# write to pdf
# write logic to generate pdf here
# wget https://github.com/wkhtmltopdf/packaging/releases/download/0.12.6-1/wkhtmltox_0.12.6-1.focal_amd64.deb
# sudo apt install ./wkhtmltox_0.12.6-1.focal_amd64.deb
options = {"enable-local-file-access" : None} # prevent error: Blocked access to file
pdfkit.from_file(output_html_2, output_pdf_2, options=options)
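lf_bar_graph.build_bar_graph places series 0 at integer x positions and shifts each later series right by bar_width. The position arithmetic can be sketched without matplotlib (illustrative only, not the class API):

```python
# Sketch of the grouped-bar x-offset arithmetic in lf_bar_graph.build_bar_graph():
# series s is drawn at base position + s * bar_width.
def bar_positions(num_series, num_categories, bar_width=0.25):
    base = list(range(num_categories))
    return [[x + s * bar_width for x in base] for s in range(num_series)]

pos = bar_positions(3, 4)
print(pos[0])  # [0.0, 1.0, 2.0, 3.0]
print(pos[2])  # [0.5, 1.5, 2.5, 3.5]
```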

py-scripts/lf_mesh_test.py Executable file (285 lines)

@@ -0,0 +1,285 @@
#!/usr/bin/env python3
"""
Note: To run this script, the LANforge GUI should be started with the CLI socket enabled:
path: cd LANforgeGUI_5.4.3 (adjust 5.4.3 to match the installed GUI version)
pwd (output: /home/lanforge/LANforgeGUI_5.4.3)
./lfclient.bash -cli-socket 3990
This script is used to automate running Mesh tests. You
may need to view a Mesh test configured through the GUI to understand
the options and how best to input data.
./lf_mesh_test.py --mgr localhost --port 8080 --lf_user lanforge --lf_password lanforge \
--instance_name mesh-instance --config_name test_con --upstream 1.1.eth1 \
--raw_line 'selected_dut2: RootAP wactest 08:36:c9:19:47:40 (1)' \
--raw_line 'selected_dut5: RootAP wactest 08:36:c9:19:47:50 (2)' \
--duration 15s \
--download_speed 85% --upload_speed 56Kbps \
--raw_line 'velocity: 100' \
--raw_lines_file example-configs/mesh-ferndale-cfg.txt \
--test_rig Ferndale-Mesh-01 --pull_report
Note:
--raw_line 'line contents' will add any setting to the test config. This is
a useful way to support any options not specifically enabled by the
command-line options.
--set modifications will be applied after the other config has happened,
so it can be used to override any other config.
Example of raw text config for Mesh, to show other possible options:
show_events: 1
show_log: 0
port_sorting: 0
kpi_id: Mesh
bg: 0xE0ECF8
test_rig:
show_scan: 1
auto_helper: 1
skip_2: 0
skip_5: 0
skip_5b: 1
skip_dual: 0
skip_tri: 1
selected_dut5: RootAP wactest 08:36:c9:19:47:50 (2)
selected_dut2: RootAP wactest 08:36:c9:19:47:40 (1)
upstream_port: 1.1.1 eth1
operator:
mconn: 5
tos: 0
dur: 60
speed: 100%
speed2: 56Kbps
velocity: 100
path_loops: 1
bgscan_mod: simple
bgscan_short: 30
bgscan_long: 300
bgscan_rssi: -60
skip_2: 0
skip_5: 0
skip_dhcp: 0
show_tx_mcs: 1
show_rx_mcs: 1
chamber-0: RootAP
chamber-1: Node1
chamber-2: Node2
chamber-3:
chamber-4: MobileStations
sta_amount-0: 1
sta_amount-1: 1
sta_amount-2: 1
sta_amount-3: 1
sta_amount-4: 1
radios-0-0: 1.2.2 wiphy0
radios-0-1:
radios-0-2:
radios-0-3: 1.2.3 wiphy1
radios-0-4:
radios-0-5:
radios-1-0: 1.3.2 wiphy0
radios-1-1:
radios-1-2:
radios-1-3: 1.3.3 wiphy1
radios-1-4:
radios-1-5:
radios-2-0: 1.4.2 wiphy0
radios-2-1:
radios-2-2:
radios-2-3: 1.4.3 wiphy1
radios-2-4:
radios-2-5:
radios-3-0:
radios-3-1:
radios-3-2:
radios-3-3:
radios-3-4:
radios-3-5:
radios-4-0: 1.1.2 wiphy0
radios-4-1:
radios-4-2:
radios-4-3: 1.1.3 wiphy1
radios-4-4:
radios-4-5:
ap_arrangements: Current Position
tests: Roam
traf_combo: STA
sta_position: Current Position
traffic_types: UDP
direction: Download
path: Orbit Current
traf_use_sta: 0
"""
import sys
import os
import argparse
import time
import json
from os import path
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
if 'py-json' not in sys.path:
sys.path.append(os.path.join(os.path.abspath('..'), 'py-json'))
from cv_test_manager import cv_test as cvtest
from cv_test_manager import *
from cv_commands import chamberview as cv
class MeshTest(cvtest):
def __init__(self,
lf_host="localhost",
lf_port=8080,
lf_user="lanforge",
lf_password="lanforge",
instance_name="dpt_instance",
config_name="dpt_config",
upstream="1.1.eth1",
pull_report=False,
load_old_cfg=False,
upload_speed="56Kbps",
download_speed="85%",
duration="60s",
enables=[],
disables=[],
raw_lines=[],
raw_lines_file="",
sets=[],
):
super().__init__(lfclient_host=lf_host, lfclient_port=lf_port)
self.lf_host = lf_host
self.lf_port = lf_port
self.lf_user = lf_user
self.lf_password = lf_password
self.createCV = cv(lf_host, lf_port)
self.instance_name = instance_name
self.config_name = config_name
self.duration = duration
self.upstream = upstream
self.pull_report = pull_report
self.load_old_cfg = load_old_cfg
self.test_name = "Mesh"
self.upload_speed = upload_speed
self.download_speed = download_speed
self.enables = enables
self.disables = disables
self.raw_lines = raw_lines
self.raw_lines_file = raw_lines_file
self.sets = sets
def setup(self):
# Nothing to do at this time.
return
def run(self):
self.createCV.sync_cv()
time.sleep(2)
self.createCV.sync_cv()
blob_test = "Mesh-"
self.rm_text_blob(self.config_name, blob_test) # To delete old config with same name
self.show_text_blob(None, None, False)
# Test related settings
cfg_options = []
self.apply_cfg_options(cfg_options, self.enables, self.disables, self.raw_lines, self.raw_lines_file)
# cmd line args take precedence and so come last in the cfg array.
if self.upstream != "":
cfg_options.append("upstream_port: " + self.upstream)
if self.download_speed != "":
cfg_options.append("speed: " + self.download_speed)
if self.upload_speed != "":
cfg_options.append("speed2: " + self.upload_speed)
if self.duration != "":
cfg_options.append("duration: " + self.duration)
# We deleted the scenario earlier, now re-build new one line at a time.
self.build_cfg(self.config_name, blob_test, cfg_options)
cv_cmds = []
self.create_and_run_test(self.load_old_cfg, self.test_name, self.instance_name,
self.config_name, self.sets,
self.pull_report, self.lf_host, self.lf_user, self.lf_password,
cv_cmds)
self.rm_text_blob(self.config_name, blob_test) # To delete old config with same name
def main():
parser = argparse.ArgumentParser("""
Open this file in an editor and read the top notes for more details.
Example:
./lf_mesh_test.py --mgr localhost --port 8080 --lf_user lanforge --lf_password lanforge \
--instance_name mesh-instance --config_name test_con --upstream 1.1.eth1 \
--raw_line 'selected_dut2: RootAP wactest 08:36:c9:19:47:40 (1)' \
--raw_line 'selected_dut5: RootAP wactest 08:36:c9:19:47:50 (2)' \
--duration 15s \
--download_speed 85% --upload_speed 56Kbps \
--raw_line 'velocity: 100' \
--raw_lines_file example-configs/mesh-ferndale-cfg.txt \
--test_rig Ferndale-Mesh-01 --pull_report
NOTE: There is quite a lot of config needed, see example-configs/mesh-ferndale-cfg.txt
Suggestion is to configure the test through the GUI, make sure it works, then view
the config and paste it into your own cfg.txt file.
"""
)
cv_add_base_parser(parser) # see cv_test_manager.py
parser.add_argument("-u", "--upstream", type=str, default="",
help="Upstream port for wifi capacity test ex. 1.1.eth2")
parser.add_argument("--download_speed", default="",
help="Specify requested download speed. Percentage of theoretical is also supported. Default: 85%%")
parser.add_argument("--upload_speed", default="",
help="Specify requested upload speed. Percentage of theoretical is also supported. Default: 0")
parser.add_argument("--duration", default="",
help="Specify duration of each traffic run")
args = parser.parse_args()
cv_base_adjust_parser(args)
CV_Test = MeshTest(lf_host = args.mgr,
lf_port = args.port,
lf_user = args.lf_user,
lf_password = args.lf_password,
instance_name = args.instance_name,
config_name = args.config_name,
upstream = args.upstream,
pull_report = args.pull_report,
load_old_cfg = args.load_old_cfg,
download_speed = args.download_speed,
upload_speed = args.upload_speed,
duration = args.duration,
enables = args.enable,
disables = args.disable,
raw_lines = args.raw_line,
raw_lines_file = args.raw_lines_file,
sets = args.set
)
CV_Test.setup()
CV_Test.run()
# Mesh does not do KPI currently.
#CV_Test.check_influx_kpi(args)
if __name__ == "__main__":
main()
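A side note on the config precedence in run() above: the raw_lines and raw_lines_file contents are applied first, and the command-line options (upstream, speed, duration) are appended afterwards, so the later lines win when the text blob is resolved. A minimal sketch of that last-one-wins behavior (`resolve_cfg` is a hypothetical helper, not part of cv_test_manager; the real blob is sent to the GUI verbatim):

```python
def resolve_cfg(cfg_lines):
    # Collapse "key: value" lines into a dict; later lines override
    # earlier ones, mirroring how run() appends command-line options
    # after the raw-line config so they take precedence.
    resolved = {}
    for line in cfg_lines:
        key, _, value = line.partition(": ")
        resolved[key] = value
    return resolved

cfg = resolve_cfg([
    "speed: 100%",   # from a --raw_lines_file
    "dur: 60",
    "speed: 85%",    # appended later from --download_speed, overrides
])
# cfg["speed"] == "85%"
```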

358
py-scripts/lf_report.py Executable file

@@ -0,0 +1,358 @@
#!/usr/bin/env python3
'''
NAME: lf_report.py
PURPOSE:
This program is a helper class for reporting results for a LANforge Python script.
The class will generate an output directory based on date and time under /home/lanforge/report-data/ .
If report-data is not present, then the date and time directory will be created in the current directory.
The banner and Candela Technologies logo will be copied into the results directory.
The results directory and many of the other parameters may be overridden during construction.
Creating the date time directory on construction was a design choice.
EXAMPLE:
This is a helper class, a unit test is included at the bottom of the file.
To test lf_report.py and lf_graph.py together use the lf_report_test.py file
LICENSE:
Free to distribute and modify. LANforge systems must be licensed.
Copyright 2021 Candela Technologies Inc
'''
import os
import shutil
import datetime
import pandas as pd
import pdfkit
# internal candela references included during initial phases, to be deleted at future date
# https://candelatech.atlassian.net/wiki/spaces/LANFORGE/pages/372703360/Scripting+Data+Collection+March+2021
# base report class
class lf_report():
def __init__(self,
# _path: the directory under which the report directories will be created.
_path = "/home/lanforge/report-data",
_alt_path = "",
_date = "",
_title="LANforge Test Run Heading",
_table_title="LANforge Table Heading",
_graph_title="LANforge Graph Title",
_obj = "",
_obj_title = "",
_output_html="outfile.html",
_output_pdf="outfile.pdf",
_results_dir_name = "LANforge_Test_Results",
_output_format = 'html', # passed in to the write functionality, currently not used
_dataframe="",
_path_date_time=""): # this is where the final report is placed.
#other report paths,
# _path is where the directory with the date-time will be created
if _path == "local" or _path == "here":
self.path = os.path.abspath(__file__)
print("path set to file path: {}".format(self.path))
elif _alt_path != "":
self.path = _alt_path
print("path set to alt path: {}".format(self.path))
else:
self.path = _path
print("path set: {}".format(self.path))
self.dataframe=_dataframe
self.title=_title
self.table_title=_table_title
self.graph_title=_graph_title
self.date=_date
self.output_html=_output_html
self.path_date_time = _path_date_time
self.write_output_html = ""
self.output_pdf=_output_pdf
self.write_output_pdf = ""
self.banner_html = ""
self.graph_titles=""
self.graph_image=""
self.html = ""
self.custom_html = ""
self.objective = _obj
self.obj_title = _obj_title
#self.systeminfopath = ""
self.date_time_directory = ""
self.banner_directory = "artifacts"
self.banner_file_name = "banner.png" # does this need to be configurable
self.logo_directory = "artifacts"
self.logo_file_name = "CandelaLogo2-90dpi-200x90-trans.png" # does this need to be configurable.
self.current_path = os.path.dirname(os.path.abspath(__file__))
# pass in _date to allow to change after construction
self.set_date_time_directory(_date,_results_dir_name)
self.build_date_time_directory()
# move the banners and candela images to report path
self.copy_banner()
self.copy_logo()
def copy_banner(self):
banner_src_file = str(self.current_path)+'/'+str(self.banner_directory)+'/'+str(self.banner_file_name)
banner_dst_file = str(self.path_date_time)+'/'+ str(self.banner_file_name)
#print("banner src_file: {}".format(banner_src_file))
#print("dst_file: {}".format(banner_dst_file))
shutil.copy(banner_src_file,banner_dst_file)
def copy_logo(self):
logo_src_file = str(self.current_path)+'/'+str(self.logo_directory)+'/'+str(self.logo_file_name)
logo_dst_file = str(self.path_date_time)+'/'+ str(self.logo_file_name)
#print("logo_src_file: {}".format(logo_src_file))
#print("logo_dst_file: {}".format(logo_dst_file))
shutil.copy(logo_src_file,logo_dst_file)
def move_graph_image(self,):
graph_src_file = str(self.graph_image)
graph_dst_file = str(self.path_date_time)+'/'+ str(self.graph_image)
print("graph_src_file: {}".format(graph_src_file))
print("graph_dst_file: {}".format(graph_dst_file))
shutil.move(graph_src_file,graph_dst_file)
def set_path(self,_path):
self.path = _path
def set_date_time_directory(self,_date,_results_dir_name):
self.date = _date
self.results_dir_name = _results_dir_name
if self.date != "":
self.date_time_directory = str(self.date) + str("_") + str(self.results_dir_name)
else:
self.date = str(datetime.datetime.now().strftime("%Y_%m_%d_%H_%M_%S")).replace(':','-')
self.date_time_directory = self.date + str("_") + str(self.results_dir_name)
#def set_date_time_directory(self,date_time_directory):
# self.date_time_directory = date_time_directory
def build_date_time_directory(self):
if self.date_time_directory == "":
self.set_date_time_directory(self.date, self.results_dir_name)
self.path_date_time = os.path.join(self.path, self.date_time_directory)
print("path_date_time {}".format(self.path_date_time))
try:
if not os.path.exists(self.path_date_time):
os.mkdir(self.path_date_time)
except OSError: # fall back to the script's directory if the report path is not writable
self.path_date_time = os.path.join(self.current_path, self.date_time_directory)
if not os.path.exists(self.path_date_time):
os.mkdir(self.path_date_time)
print("report path : {}".format(self.path_date_time))
def set_title(self,_title):
self.title = _title
def set_table_title(self,_table_title):
self.table_title = _table_title
def set_graph_title(self,_graph_title):
self.graph_title = _graph_title
def set_date(self,_date):
self.date = _date
def set_table_dataframe(self,_dataframe):
self.dataframe = _dataframe
def set_table_dataframe_from_csv(self,_csv):
self.dataframe = pd.read_csv(_csv)
def set_custom_html(self,_custom_html):
self.custom_html = _custom_html
def set_obj_html(self,_obj_title, _obj ):
self.objective = _obj
self.obj_title = _obj_title
def set_graph_image(self,_graph_image):
self.graph_image = _graph_image
def get_path(self):
return self.path
# get_path_date_time() and get_report_path() need to return the same value
def get_path_date_time(self):
return self.path_date_time
def get_report_path(self):
return self.path_date_time
def file_add_path(self, file):
output_file = str(self.path_date_time)+'/'+ str(file)
print("output file {}".format(output_file))
return output_file
def write_html(self):
self.write_output_html = str(self.path_date_time)+'/'+ str(self.output_html)
print("write_output_html: {}".format(self.write_output_html))
try:
test_file = open(self.write_output_html, "w")
test_file.write(self.html)
test_file.close()
except OSError:
print("write_html failed")
return self.write_output_html
# https://wkhtmltopdf.org/usage/wkhtmltopdf.txt
# page_size A4, A3, Letter, Legal
# orientation Portrait , Landscape
def write_pdf(self, _page_size = 'A4', _orientation = 'Portrait'):
# write logic to generate pdf here
# wget https://github.com/wkhtmltopdf/packaging/releases/download/0.12.6-1/wkhtmltox_0.12.6-1.focal_amd64.deb
# sudo apt install ./wkhtmltox_0.12.6-1.focal_amd64.deb
options = {"enable-local-file-access" : None, # prevent error "Blocked access to file"
'orientation': _orientation,
'page-size': _page_size}
self.write_output_pdf = str(self.path_date_time)+'/'+ str(self.output_pdf)
pdfkit.from_file(self.write_output_html, self.write_output_pdf, options=options)
pass
def generate_report(self):
self.write_html()
self.write_pdf()
# only useful when all data is passed in via the constructor; no graph output
def build_all(self):
self.build_banner()
self.build_table_title()
self.build_table()
def build_banner(self):
self.banner_html = """
<!DOCTYPE html>
<html lang='en'>
<head>
<meta charset='UTF-8'>
<meta name='viewport' content='width=device-width, initial-scale=1' />
<br>
<title>BANNER</title>
</head>
<body>
<div class='Section report_banner-1000x205' style='background-image:url("banner.png");background-repeat:no-repeat;padding:0;margin:0;min-width:1000px; min-height:205px;width:1000px; height:205px;max-width:1000px; max-height:205px;'>
<br>
<img align='right' style='padding:25;margin:5;width:200px;' src="CandelaLogo2-90dpi-200x90-trans.png" border='0' />
<div class='HeaderStyle'>
<br>
<h1 class='TitleFontPrint' style='color:darkgreen;'>""" + str(self.title) + """</h1>
<h3 class='TitleFontPrint' style='color:darkgreen;'>""" + str(self.date) + """</h3>
<br>
<br>
<br>
<br>
<br>
</div>
"""
self.html += self.banner_html
def build_table_title(self):
self.table_title_html = """
<html lang='en'>
<head>
<meta charset='UTF-8'>
<meta name='viewport' content='width=device-width, initial-scale=1' />
<div class='HeaderStyle'>
<h2 class='TitleFontPrint' style='color:darkgreen;'>""" + str(self.table_title) + """</h2>
"""
self.html += self.table_title_html
def build_date_time(self):
self.date_time = str(datetime.datetime.now().strftime("%Y-%m-%d-%H-h-%M-m-%S-s")).replace(':','-')
return self.date_time
def build_path_date_time(self):
try:
self.path_date_time = os.path.join(self.path,self.date_time)
os.mkdir(self.path_date_time)
except OSError:
curr_dir_path = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
self.path_date_time = os.path.join(curr_dir_path,self.date_time)
os.mkdir(self.path_date_time)
def build_table(self):
self.dataframe_html = self.dataframe.to_html(index=False) # TODO: allow the index flag to be passed in
self.html += self.dataframe_html
def build_custom(self):
self.html += self.custom_html
def build_objective(self):
self.obj_html = """
<!-- Test Objective -->
<h3 align='left'>""" + str(self.obj_title) + """</h3>
<p align='left' width='900'>""" + str(self.objective) + """</p>
"""
self.html += self.obj_html
def build_graph_title(self):
self.table_graph_html = """
<html lang='en'>
<head>
<meta charset='UTF-8'>
<meta name='viewport' content='width=device-width, initial-scale=1' />
<div class='HeaderStyle'>
<h2 class='TitleFontPrint' style='color:darkgreen;'>""" + str(self.graph_title) + """</h2>
"""
self.html += self.table_graph_html
def build_graph(self):
self.graph_html_obj = """
<img align='center' style='padding:15;margin:5;width:1000px;' src='""" + "%s" % (self.graph_image) + """' border='1' />
<br><br>
"""
self.html +=self.graph_html_obj
# Unit Test
if __name__ == "__main__":
# Testing: generate data frame
dataframe = pd.DataFrame({
'product':['CT521a-264-1ac-1n','CT521a-1ac-1ax','CT522-264-1ac2-1n','CT523c-2ac2-db-10g-cu','CT523c-3ac2-db-10g-cu','CT523c-8ax-ac10g-cu','CT523c-192-2ac2-1ac-10g'],
'radios':[1,1,2,2,6,9,3],
'MIMO':['N','N','N','Y','Y','Y','Y'],
'stations':[200,64,200,128,384,72,192],
'mbps':[300,300,300,10000,10000,10000,10000]
})
print(dataframe)
# Testing: generate data frame
dataframe2 = pd.DataFrame({
'station':[1,2,3,4,5,6,7],
'time_seconds':[23,78,22,19,45,22,25]
})
report = lf_report()
report.set_title("Banner Title One")
report.build_banner()
report.set_table_title("Title One")
report.build_table_title()
report.set_table_dataframe(dataframe)
report.build_table()
report.set_table_title("Title Two")
report.build_table_title()
report.set_table_dataframe(dataframe2)
report.build_table()
#report.build_all()
html_file = report.write_html()
print("returned file ")
print(html_file)
report.write_pdf()
print("report path {}".format(report.get_path()))
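The directory name lf_report builds in set_date_time_directory() is the timestamp joined to the results directory name with an underscore. A small sketch of the same naming (`date_time_directory` here is a hypothetical stand-in for the class logic, not part of lf_report):

```python
import datetime

def date_time_directory(results_dir_name, now=None):
    # Mirror lf_report's default naming:
    # <YYYY_MM_DD_HH_MM_SS>_<results_dir_name>
    now = now or datetime.datetime.now()
    return now.strftime("%Y_%m_%d_%H_%M_%S") + "_" + results_dir_name

d = date_time_directory("LANforge_Test_Results",
                        datetime.datetime(2021, 4, 22, 10, 13, 15))
# d == "2021_04_22_10_13_15_LANforge_Test_Results"
```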

129
py-scripts/lf_report_test.py Executable file

@@ -0,0 +1,129 @@
#!/usr/bin/env python3
import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
import pandas as pd
import pdfkit
from lf_report import lf_report
from lf_graph import lf_bar_graph, lf_scatter_graph , lf_stacked_graph
# Unit Test
if __name__ == "__main__":
# Testing: generate data frame
dataframe = pd.DataFrame({
'product':['CT521a-264-1ac-1n','CT521a-1ac-1ax','CT522-264-1ac2-1n','CT523c-2ac2-db-10g-cu','CT523c-3ac2-db-10g-cu','CT523c-8ax-ac10g-cu','CT523c-192-2ac2-1ac-10g'],
'radios':[1,1,2,2,6,9,3],
'MIMO':['N','N','N','Y','Y','Y','Y'],
'stations':[200,64,200,128,384,72,192],
'mbps':[300,300,300,10000,10000,10000,10000]
})
print(dataframe)
# Testing: generate data frame
dataframe2 = pd.DataFrame({
'station':[1,2,3,4,5,6,7],
'time_seconds':[23,78,22,19,45,22,25]
})
#report = lf_report(_dataframe=dataframe)
report = lf_report()
report_path = report.get_path()
report_path_date_time = report.get_path_date_time()
print("path: {}".format(report_path))
print("path_date_time: {}".format(report_path_date_time))
report.set_title("Banner Title One")
report.build_banner()
#report.set_title("Banner Title Two")
#report.build_banner()
report.set_table_title("Title One")
report.build_table_title()
report.set_table_dataframe(dataframe)
report.build_table()
report.set_table_title("Title Two")
report.build_table_title()
report.set_table_dataframe(dataframe2)
report.build_table()
# test lf_graph in report
dataset = [[45,67,34,22],[22,45,12,34],[30,55,69,37]]
x_axis_values = [1,2,3,4]
report.set_graph_title("Graph Title")
report.build_graph_title()
graph = lf_bar_graph(_data_set=dataset,
_xaxis_name="stations",
_yaxis_name="Throughput 2 (Mbps)",
_xaxis_categories=x_axis_values,
_graph_image_name="Bi-single_radio_2.4GHz",
_label=["bi-downlink", "bi-uplink",'uplink'],
_color=None,
_color_edge='red')
graph_png = graph.build_bar_graph()
print("graph name {}".format(graph_png))
report.set_graph_image(graph_png)
# need to move the graph image to the results
report.move_graph_image()
report.build_graph()
set1 = [1, 2, 3, 4]
set2 = [[45, 67, 45, 34], [34, 56, 45, 34], [45, 78, 23, 45]]
graph2 = lf_scatter_graph(_x_data_set=set1, _y_data_set=set2, _xaxis_name="x-axis",
_yaxis_name="y-axis",
_graph_image_name="image_name1",
_color=None,
_label=["s1", "s2", "s3"])
graph_png = graph2.build_scatter_graph()
print("graph name {}".format(graph_png))
report.set_graph_image(graph_png)
report.move_graph_image()
report.build_graph()
dataset = [["1", "2", "3", "4"], [12, 45, 67, 34], [23, 67, 23, 12], [25, 45, 34, 23]]
graph = lf_stacked_graph(_data_set=dataset,
_xaxis_name="Stations",
_yaxis_name="Login PASS/FAIL",
_label=['Success', 'Fail', 'both'],
_graph_image_name="login_pass_fail1",
_color=None)
graph_png = graph.build_stacked_graph()
print("graph name {}".format(graph_png))
report.set_graph_image(graph_png)
report.move_graph_image()
report.build_graph()
#report.build_all()
html_file = report.write_html()
print("returned file {}".format(html_file))
print(html_file)
# try other pdf formats
#report.write_pdf()
#report.write_pdf(_page_size = 'A3', _orientation='Landscape')
#report.write_pdf(_page_size = 'A4', _orientation='Landscape')
report.write_pdf(_page_size = 'Legal', _orientation='Landscape')
#report.write_pdf(_page_size = 'Legal', _orientation='Portrait')
#report.generate_report()
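For reference, build_table() in lf_report renders each DataFrame with pandas to_html(index=False), which emits a plain &lt;table&gt; fragment with the index column suppressed; that fragment is then appended to the accumulated report HTML. A minimal sketch of what that call produces:

```python
import pandas as pd

# What build_table() does under the hood: render a DataFrame as an
# HTML fragment without the index column, ready to append to the report.
df = pd.DataFrame({"station": [1, 2, 3], "time_seconds": [23, 78, 22]})
table_html = df.to_html(index=False)

assert table_html.startswith("<table")
assert "time_seconds" in table_html
```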

273
py-scripts/lf_rvr_test.py Executable file

@@ -0,0 +1,273 @@
#!/usr/bin/env python3
"""
Note: To run this script, the LANforge GUI should be started with its cli-socket enabled:
cd LANforgeGUI_5.4.3   (adjust 5.4.3 to match your GUI version)
pwd (output: /home/lanforge/LANforgeGUI_5.4.3)
./lfclient.bash -cli-socket 3990
This script is used to automate running Rate-vs-Range tests. You
may need to view a Rate-vs-Range test configured through the GUI to understand
the options and how best to input data.
./lf_rvr_test.py --mgr localhost --port 8080 --lf_user lanforge --lf_password lanforge \
--instance_name rvr-instance --config_name test_con --upstream 1.1.eth1 \
--dut RootAP --duration 15s --station 1.1.wlan0 \
--download_speed 85% --upload_speed 56Kbps \
--raw_line 'pkts: MTU' \
--raw_line 'directions: DUT Transmit' \
--raw_line 'traffic_types: TCP' \
--test_rig Ferndale-Mesh-01 --pull_report \
--raw_line 'attenuator: 1.1.1040' \
--raw_line 'attenuations: 0..+50..950' \
--raw_line 'attenuator_mod: 3' \
--influx_host c7-graphana --influx_port 8086 --influx_org Candela \
--influx_token=-u_Wd-L8o992701QF0c5UmqEp7w7Z7YOMaWLxOMgmHfATJGnQbbmYyNxHBR9PgD6taM_tcxqJl6U8DjU1xINFQ== \
--influx_bucket ben \
--influx_tag testbed Ferndale-Advanced
Note:
attenuator_mod: selects the attenuator modules, as a bit-field.
This example uses 3, which selects the first two attenuator modules on Attenuator ID 1040.
--raw_line 'line contents' will add any setting to the test config. This is
a useful way to support any options not specifically enabled by the
command options.
--set modifications will be applied after the other config has happened,
so they can be used to override any other config.
Example of raw text config for Rate-vs-Range, to show other possible options:
sel_port-0: 1.1.wlan0
show_events: 1
show_log: 0
port_sorting: 0
kpi_id: Rate vs Range
bg: 0xE0ECF8
test_rig:
show_scan: 1
auto_helper: 0
skip_2: 0
skip_5: 0
skip_5b: 1
skip_dual: 0
skip_tri: 1
selected_dut: RootAP
duration: 15000
traffic_port: 1.1.6 wlan0
upstream_port: 1.1.1 eth1
path_loss: 10
speed: 85%
speed2: 56Kbps
min_rssi_bound: -150
max_rssi_bound: 0
channels: AUTO
modes: Auto
pkts: MTU
spatial_streams: AUTO
security_options: AUTO
bandw_options: AUTO
traffic_types: TCP
directions: DUT Transmit
txo_preamble: OFDM
txo_mcs: 0 CCK, OFDM, HT, VHT
txo_retries: No Retry
txo_sgi: OFF
txo_txpower: 15
attenuator: 1.1.1040
attenuator2: 0
attenuator_mod: 243
attenuator_mod2: 255
attenuations: 0..+50..950
attenuations2: 0..+50..950
chamber: 0
tt_deg: 0..+45..359
cust_pkt_sz:
show_bar_labels: 1
show_prcnt_tput: 0
show_3s: 0
show_ll_graphs: 0
show_gp_graphs: 1
show_1m: 1
pause_iter: 0
outer_loop_atten: 0
show_realtime: 1
operator:
mconn: 1
mpkt: 1000
tos: 0
loop_iterations: 1
"""
import sys
import os
import argparse
import time
import json
from os import path
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
if 'py-json' not in sys.path:
sys.path.append(os.path.join(os.path.abspath('..'), 'py-json'))
from cv_test_manager import cv_test as cvtest
from cv_test_manager import *
from cv_commands import chamberview as cv
class RvrTest(cvtest):
def __init__(self,
lf_host="localhost",
lf_port=8080,
lf_user="lanforge",
lf_password="lanforge",
instance_name="rvr_instance",
config_name="rvr_config",
upstream="1.1.eth1",
pull_report=False,
load_old_cfg=False,
upload_speed="0",
download_speed="85%",
duration="15s",
station="1.1.wlan0",
dut="NA",
enables=[],
disables=[],
raw_lines=[],
raw_lines_file="",
sets=[],
):
super().__init__(lfclient_host=lf_host, lfclient_port=lf_port)
self.lf_host = lf_host
self.lf_port = lf_port
self.lf_user = lf_user
self.lf_password = lf_password
self.createCV = cv(lf_host, lf_port)
self.instance_name = instance_name
self.config_name = config_name
self.dut = dut
self.duration = duration
self.upstream = upstream
self.station = station
self.pull_report = pull_report
self.load_old_cfg = load_old_cfg
self.test_name = "Rate vs Range"
self.upload_speed = upload_speed
self.download_speed = download_speed
self.enables = enables
self.disables = disables
self.raw_lines = raw_lines
self.raw_lines_file = raw_lines_file
self.sets = sets
def setup(self):
# Nothing to do at this time.
return
def run(self):
self.createCV.sync_cv()
time.sleep(2)
self.createCV.sync_cv()
blob_test = "rvr-test-latest-"
self.rm_text_blob(self.config_name, blob_test) # To delete old config with same name
self.show_text_blob(None, None, False)
# Test related settings
cfg_options = []
self.apply_cfg_options(cfg_options, self.enables, self.disables, self.raw_lines, self.raw_lines_file)
# cmd line args take precedence and so come last in the cfg array.
if self.upstream != "":
cfg_options.append("upstream_port: " + self.upstream)
if self.station != "":
cfg_options.append("traffic_port: " + self.station)
if self.download_speed != "":
cfg_options.append("speed: " + self.download_speed)
if self.upload_speed != "":
cfg_options.append("speed2: " + self.upload_speed)
if self.duration != "":
cfg_options.append("duration: " + self.duration)
if self.dut != "":
cfg_options.append("selected_dut: " + self.dut)
# We deleted the scenario earlier, now re-build new one line at a time.
self.build_cfg(self.config_name, blob_test, cfg_options)
cv_cmds = []
self.create_and_run_test(self.load_old_cfg, self.test_name, self.instance_name,
self.config_name, self.sets,
self.pull_report, self.lf_host, self.lf_user, self.lf_password,
cv_cmds)
self.rm_text_blob(self.config_name, blob_test) # To delete old config with same name
def main():
parser = argparse.ArgumentParser("""
Open this file in an editor and read the top notes for more details.
Example:
"""
)
cv_add_base_parser(parser) # see cv_test_manager.py
parser.add_argument("-u", "--upstream", type=str, default="",
help="Upstream port for wifi capacity test ex. 1.1.eth2")
parser.add_argument("--station", type=str, default="",
help="Station to be used in this test, example: 1.1.sta01500")
parser.add_argument("--dut", default="",
help="Specify DUT used by this test, example: linksys-8450")
parser.add_argument("--download_speed", default="",
help="Specify requested download speed. Percentage of theoretical is also supported. Default: 85%%")
parser.add_argument("--upload_speed", default="",
help="Specify requested upload speed. Percentage of theoretical is also supported. Default: 0")
parser.add_argument("--duration", default="",
help="Specify duration of each traffic run")
args = parser.parse_args()
cv_base_adjust_parser(args)
CV_Test = RvrTest(lf_host = args.mgr,
lf_port = args.port,
lf_user = args.lf_user,
lf_password = args.lf_password,
instance_name = args.instance_name,
config_name = args.config_name,
upstream = args.upstream,
pull_report = args.pull_report,
load_old_cfg = args.load_old_cfg,
download_speed = args.download_speed,
upload_speed = args.upload_speed,
duration = args.duration,
dut = args.dut,
station = args.station,
enables = args.enable,
disables = args.disable,
raw_lines = args.raw_line,
raw_lines_file = args.raw_lines_file,
sets = args.set
)
CV_Test.setup()
CV_Test.run()
CV_Test.check_influx_kpi(args)
if __name__ == "__main__":
main()
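One wrinkle in help text like the --download_speed description: argparse expands help strings with %-style formatting (to support %(default)s and friends), so a literal percent sign must be escaped as %% or rendering --help raises ValueError. A small demonstration (`help_renders` is just a local probe, not part of these scripts):

```python
import argparse

def help_renders(help_text):
    # argparse runs help strings through %-formatting when building
    # --help output, so a bare '%' is treated as a conversion specifier.
    parser = argparse.ArgumentParser(prog="demo", add_help=False)
    parser.add_argument("--download_speed", default="", help=help_text)
    try:
        parser.format_help()
        return True
    except ValueError:
        return False

print(help_renders("Default: 85%%"))  # True  -- escaped, renders as 85%
print(help_renders("Default: 85%"))   # False -- bare trailing '%'
```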

2631
py-scripts/lf_snp_test.py Executable file

File diff suppressed because it is too large

326
py-scripts/lf_tr398_test.py Executable file

@@ -0,0 +1,326 @@
#!/usr/bin/env python3
"""
Note: To run this script, the LANforge GUI should be started with its cli-socket enabled:
cd LANforgeGUI_5.4.3   (adjust 5.4.3 to match your GUI version)
pwd (output: /home/lanforge/LANforgeGUI_5.4.3)
./lfclient.bash -cli-socket 3990
This script is used to automate running TR398 tests. You
may need to view a TR398 test configured through the GUI to understand
the options and how best to input data.
./lf_tr398_test.py --mgr localhost --port 8080 --lf_user lanforge --lf_password lanforge \
--instance_name tr398-instance --config_name test_con \
--upstream 1.2.eth2 \
--dut5 'TR398-DUT ruckus750-5 4c:b1:cd:18:e8:ec (1)' \
--dut2 'TR398-DUT ruckus750-2 4c:b1:cd:18:e8:e8 (2)' \
--raw_lines_file example-configs/tr398-ferndale-ac-cfg.txt \
--set 'Calibrate Attenuators' 0 \
--set 'Receiver Sensitivity' 0 \
--set 'Maximum Connection' 1 \
--set 'Maximum Throughput' 1 \
--set 'Airtime Fairness' 0 \
--set 'Range Versus Rate' 0 \
--set 'Spatial Consistency' 0 \
--set 'Multiple STAs Performance' 0 \
--set 'Multiple Assoc Stability' 0 \
--set 'Downlink MU-MIMO' 0 \
--set 'AP Coexistence' 0 \
--set 'Long Term Stability' 0 \
--test_rig Testbed-01
Note:
--raw_line 'line contents' will add any setting to the test config. This is
a useful way to support any options not specifically enabled by the
command options.
--set modifications will be applied after the other config has happened,
so they can be used to override any other config. Above, we are disabling many
of the subtests, and enabling just the Maximum Connection and Maximum Throughput
tests.
The RSSI values are calibrated, so you will need to run the calibration step and
call with appropriate values for your particular testbed. This is loaded from
example-configs/tr398-ferndale-ac-cfg.txt in this example.
Contents of that file is a list of raw lines, for instance:
rssi_0_2-0: -26
rssi_0_2-1: -26
rssi_0_2-2: -26
....
Example of raw text config for TR-398, to show other possible options:
show_events: 1
show_log: 0
port_sorting: 0
kpi_id: TR_398
notes0: Standard LANforge TR-398 automation setup, DUT is in large chamber CT840a, LANforge test system is in
notes1: smaller CT810a chamber. CT704b and CT714 4-module attenuators are used. Directional antennas
notes2: mounted on the sides of the DUT chamber are used to communicate to the DUT. DUT is facing forward at
notes3: the zero-rotation angle.
bg: 0xE0ECF8
test_rig: TR-398 test bed
show_scan: 1
auto_helper: 1
skip_2: 0
skip_5: 0
skip_5b: 1
skip_dual: 0
skip_tri: 1
selected_dut5: TR398-DUT ruckus750-5 4c:b1:cd:18:e8:ec (1)
selected_dut2: TR398-DUT ruckus750-2 4c:b1:cd:18:e8:e8 (2)
upstream_port: 1.2.2 eth2
operator:
mconn: 5
band2_freq: 2437
band5_freq: 5180
tos: 0
speed: 65%
speed_max_cx_2: 2000000
speed_max_cx_5: 8000000
max_tput_speed_2: 100000000
max_tput_speed_5: 560000000
rxsens_deg_rot: 45
rxsens_pre_steps: 8
stability_udp_dur: 3600
stability_iter: 288
calibrate_mode: 4
calibrate_nss: 1
dur120: 120
dur180: 180
i_5g_80: 195000000
i_5g_40: 90000000
i_2g_20: 32000000
spatial_deg_rot: 30
spatial_retry: 0
reset_pp: 99
rxsens_stop_at_pass: 0
auto_coex: 1
rvr_adj: 0
rssi_2m_2: -20
rssi_2m_5: -32
extra_dl_path_loss: 3
dur60: 60
turn_table: TR-398
radio-0: 1.1.2 wiphy0
radio-1: 1.1.3 wiphy1
radio-2: 1.1.4 wiphy2
radio-3: 1.1.5 wiphy3
radio-4: 1.1.6 wiphy4
radio-5: 1.1.7 wiphy5
rssi_0_2-0: -26
rssi_0_2-1: -26
rssi_0_2-2: -26
rssi_0_2-3: -26
rssi_0_2-4: -27
rssi_0_2-5: -27
rssi_0_2-6: -27
rssi_0_2-7: -27
rssi_0_2-8: -25
rssi_0_2-9: -25
rssi_0_2-10: -25
rssi_0_2-11: -25
rssi_0_5-0: -38
rssi_0_5-1: -38
rssi_0_5-2: -38
rssi_0_5-3: -38
rssi_0_5-4: -38
rssi_0_5-5: -38
rssi_0_5-6: -38
rssi_0_5-7: -38
rssi_0_5-8: -47
rssi_0_5-9: -47
rssi_0_5-10: -47
rssi_0_5-11: -47
atten-0: 1.1.85.0
atten-1: 1.1.85.1
atten-2: 1.1.85.2
atten-3: 1.1.85.3
atten-4: 1.1.1002.0
atten-5: 1.1.1002.1
atten-8: 1.1.1002.2
atten-9: 1.1.1002.3
atten_cal: 1
rxsens: 0
max_cx: 0
max_tput: 0
atf: 0
rvr: 0
spatial: 0
multi_sta: 0
reset: 0
mu_mimo: 0
stability: 0
ap_coex: 0
"""
import sys
import os
import argparse
import time
import json
from os import path
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
if 'py-json' not in sys.path:
sys.path.append(os.path.join(os.path.abspath('..'), 'py-json'))
from cv_test_manager import cv_test as cvtest
from cv_test_manager import *
from cv_commands import chamberview as cv
class DataplaneTest(cvtest):
def __init__(self,
lf_host="localhost",
lf_port=8080,
lf_user="lanforge",
lf_password="lanforge",
instance_name="tr398_instance",
config_name="tr398_config",
upstream="1.2.eth2",
pull_report=False,
load_old_cfg=False,
raw_lines_file="",
dut5="",
dut2="",
enables=[],
disables=[],
raw_lines=[],
sets=[],
):
super().__init__(lfclient_host=lf_host, lfclient_port=lf_port)
self.lf_host = lf_host
self.lf_port = lf_port
self.lf_user = lf_user
        self.lf_password = lf_password
        self.createCV = cv(lf_host, lf_port)
self.instance_name = instance_name
self.config_name = config_name
self.dut5 = dut5
self.dut2 = dut2
self.raw_lines_file = raw_lines_file
self.upstream = upstream
self.pull_report = pull_report
self.load_old_cfg = load_old_cfg
self.test_name = "TR-398"
self.enables = enables
self.disables = disables
self.raw_lines = raw_lines
self.sets = sets
def setup(self):
# Nothing to do at this time.
return
def run(self):
self.createCV.sync_cv()
time.sleep(2)
self.createCV.sync_cv()
blob_test = "%s-"%(self.test_name)
self.rm_text_blob(self.config_name, blob_test) # To delete old config with same name
self.show_text_blob(None, None, False)
# Test related settings
cfg_options = []
self.apply_cfg_options(cfg_options, self.enables, self.disables, self.raw_lines, self.raw_lines_file)
# cmd line args take precedence
if self.upstream != "":
cfg_options.append("upstream_port: " + self.upstream)
if self.dut5 != "":
cfg_options.append("selected_dut5: " + self.dut5)
if self.dut2 != "":
cfg_options.append("selected_dut2: " + self.dut2)
# We deleted the scenario earlier, now re-build new one line at a time.
self.build_cfg(self.config_name, blob_test, cfg_options)
cv_cmds = []
self.create_and_run_test(self.load_old_cfg, self.test_name, self.instance_name,
self.config_name, self.sets,
self.pull_report, self.lf_host, self.lf_user, self.lf_password,
cv_cmds)
self.rm_text_blob(self.config_name, blob_test) # To delete old config with same name
def main():
    parser = argparse.ArgumentParser(description="""
Open this file in an editor and read the top notes for more details.
Example:
./lf_tr398_test.py --mgr localhost --port 8080 --lf_user lanforge --lf_password lanforge \
--instance_name tr398-instance --config_name test_con \
--upstream 1.2.eth2 \
--dut5 'TR398-DUT ruckus750-5 4c:b1:cd:18:e8:ec (1)' \
--dut2 'TR398-DUT ruckus750-2 4c:b1:cd:18:e8:e8 (2)' \
--raw_lines_file example-configs/tr398-ferndale-ac-cfg.txt \
--set 'Calibrate Attenuators' 0 \
--set 'Receiver Sensitivity' 0 \
--set 'Maximum Connection' 1 \
--set 'Maximum Throughput' 1 \
--set 'Airtime Fairness' 0 \
--set 'Range Versus Rate' 0 \
--set 'Spatial Consistency' 0 \
--set 'Multiple STAs Performance' 0 \
--set 'Multiple Assoc Stability' 0 \
--set 'Downlink MU-MIMO' 0 \
--set 'AP Coexistence' 0 \
--set 'Long Term Stability' 0 \
--test_rig Testbed-01
"""
)
cv_add_base_parser(parser) # see cv_test_manager.py
parser.add_argument("-u", "--upstream", type=str, default="",
help="Upstream port for wifi capacity test ex. 1.1.eth2")
parser.add_argument("--dut2", default="",
help="Specify 2Ghz DUT used by this test, example: 'TR398-DUT ruckus750-2 4c:b1:cd:18:e8:e8 (2)'")
parser.add_argument("--dut5", default="",
help="Specify 5Ghz DUT used by this test, example: 'TR398-DUT ruckus750-5 4c:b1:cd:18:e8:ec (1)'")
args = parser.parse_args()
cv_base_adjust_parser(args)
CV_Test = DataplaneTest(lf_host = args.mgr,
lf_port = args.port,
lf_user = args.lf_user,
lf_password = args.lf_password,
instance_name = args.instance_name,
config_name = args.config_name,
upstream = args.upstream,
pull_report = args.pull_report,
load_old_cfg = args.load_old_cfg,
dut2 = args.dut2,
dut5 = args.dut5,
raw_lines_file = args.raw_lines_file,
enables = args.enable,
disables = args.disable,
raw_lines = args.raw_line,
sets = args.set
)
CV_Test.setup()
CV_Test.run()
if __name__ == "__main__":
main()
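The TR-398 raw-line settings listed at the top of this file, and the cfg_options entries the script appends, all use a simple `key: value` line format. A minimal sketch of parsing such raw lines back into a dict (the `parse_raw_lines` helper is hypothetical, not part of cv_test_manager):

```python
def parse_raw_lines(text):
    """Parse 'key: value' raw config lines into a dict.

    Lines without a colon are ignored; values may be empty
    (e.g. 'operator:').
    """
    cfg = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or ":" not in line:
            continue
        key, _, value = line.partition(":")
        cfg[key.strip()] = value.strip()
    return cfg


raw = """
upstream_port: 1.2.2 eth2
band2_freq: 2437
operator:
"""
print(parse_raw_lines(raw)["band2_freq"])  # → 2437
```

This mirrors how command-line arguments such as `--upstream` get appended as `"upstream_port: " + self.upstream` so they override values from `--raw_lines_file`.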


@@ -0,0 +1,546 @@
#!/usr/bin/env python3
"""
Note: To run this script, the LANforge GUI should already be running with its
CLI socket enabled, for example (change 5.4.3 to match your GUI version):
    cd /home/lanforge/LANforgeGUI_5.4.3
    ./lfclient.bash -cli-socket 3990
Note: This script runs a WiFi Capacity test.
Example of how to run this script (if stations already exist in LANforge);
the influx arguments can be skipped if you are not using InfluxDB/Grafana:
./lf_wifi_capacity_test.py --mgr localhost --port 8080 --lf_user lanforge --lf_password lanforge \
--instance_name this_inst --config_name test_con --upstream 1.1.eth2 --batch_size 1,5,25,50,100 --loop_iter 1 \
--protocol UDP-IPv4 --duration 6000 --pull_report \
--test_rig Testbed-01 \
--influx_host c7-graphana --influx_port 8086 --influx_org Candela \
--influx_token=-u_Wd-L8o992701QF0c5UmqEp7w7Z7YOMaWLxOMgmHfATJGnQbbmYyNxHBR9PgD6taM_tcxqJl6U8DjU1xINFQ== \
--influx_bucket ben --influx_tag testbed Ferndale-01
Example of how to run this script (to create new stations):
./lf_wifi_capacity_test.py --mgr localhost --port 8080 --lf_user lanforge --lf_password lanforge \
--instance_name wct_instance --config_name wifi_config --upstream 1.1.eth1 --batch_size 1,5,25 --loop_iter 1 \
--protocol UDP-IPv4 --duration 6000 --pull_report --stations 1.1.sta0000,1.1.sta0001 \
--create_stations --radio wiphy0 --ssid test-ssid --security open --paswd [BLANK] \
--test_rig Testbed-01
Note:
--pull_report == If specified, pull reports from the LANforge to the directory
                 from which you are running this script
--stations    == Stations to use for the WiFi Capacity test
Example of raw text config for Capacity, showing other possible options:
sel_port-0: 1.1.eth1
sel_port-1: 1.1.sta00000
sel_port-2: 1.1.sta00001
sel_port-3: 1.1.sta00002
sel_port-4: 1.1.sta00003
sel_port-5: 1.1.sta00004
sel_port-6: 1.1.sta00005
sel_port-7: 1.1.sta00006
sel_port-8: 1.1.sta00007
sel_port-9: 1.1.sta00008
sel_port-10: 1.1.sta00009
sel_port-11: 1.1.sta00010
sel_port-12: 1.1.sta00011
sel_port-13: 1.1.sta00012
sel_port-14: 1.1.sta00013
sel_port-15: 1.1.sta00014
sel_port-16: 1.1.sta00015
sel_port-17: 1.1.sta00016
sel_port-18: 1.1.sta00017
sel_port-19: 1.1.sta00018
sel_port-20: 1.1.sta00019
sel_port-21: 1.1.sta00020
sel_port-22: 1.1.sta00021
sel_port-23: 1.1.sta00022
sel_port-24: 1.1.sta00023
sel_port-25: 1.1.sta00024
sel_port-26: 1.1.sta00025
sel_port-27: 1.1.sta00026
sel_port-28: 1.1.sta00027
sel_port-29: 1.1.sta00028
sel_port-30: 1.1.sta00029
sel_port-31: 1.1.sta00030
sel_port-32: 1.1.sta00031
sel_port-33: 1.1.sta00032
sel_port-34: 1.1.sta00033
sel_port-35: 1.1.sta00034
sel_port-36: 1.1.sta00035
sel_port-37: 1.1.sta00036
sel_port-38: 1.1.sta00037
sel_port-39: 1.1.sta00038
sel_port-40: 1.1.sta00039
sel_port-41: 1.1.sta00040
sel_port-42: 1.1.sta00041
sel_port-43: 1.1.sta00042
sel_port-44: 1.1.sta00043
sel_port-45: 1.1.sta00044
sel_port-46: 1.1.sta00045
sel_port-47: 1.1.sta00046
sel_port-48: 1.1.sta00047
sel_port-49: 1.1.sta00048
sel_port-50: 1.1.sta00049
sel_port-51: 1.1.sta00500
sel_port-52: 1.1.sta00501
sel_port-53: 1.1.sta00502
sel_port-54: 1.1.sta00503
sel_port-55: 1.1.sta00504
sel_port-56: 1.1.sta00505
sel_port-57: 1.1.sta00506
sel_port-58: 1.1.sta00507
sel_port-59: 1.1.sta00508
sel_port-60: 1.1.sta00509
sel_port-61: 1.1.sta00510
sel_port-62: 1.1.sta00511
sel_port-63: 1.1.sta00512
sel_port-64: 1.1.sta00513
sel_port-65: 1.1.sta00514
sel_port-66: 1.1.sta00515
sel_port-67: 1.1.sta00516
sel_port-68: 1.1.sta00517
sel_port-69: 1.1.sta00518
sel_port-70: 1.1.sta00519
sel_port-71: 1.1.sta00520
sel_port-72: 1.1.sta00521
sel_port-73: 1.1.sta00522
sel_port-74: 1.1.sta00523
sel_port-75: 1.1.sta00524
sel_port-76: 1.1.sta00525
sel_port-77: 1.1.sta00526
sel_port-78: 1.1.sta00527
sel_port-79: 1.1.sta00528
sel_port-80: 1.1.sta00529
sel_port-81: 1.1.sta00530
sel_port-82: 1.1.sta00531
sel_port-83: 1.1.sta00532
sel_port-84: 1.1.sta00533
sel_port-85: 1.1.sta00534
sel_port-86: 1.1.sta00535
sel_port-87: 1.1.sta00536
sel_port-88: 1.1.sta00537
sel_port-89: 1.1.sta00538
sel_port-90: 1.1.sta00539
sel_port-91: 1.1.sta00540
sel_port-92: 1.1.sta00541
sel_port-93: 1.1.sta00542
sel_port-94: 1.1.sta00543
sel_port-95: 1.1.sta00544
sel_port-96: 1.1.sta00545
sel_port-97: 1.1.sta00546
sel_port-98: 1.1.sta00547
sel_port-99: 1.1.sta00548
sel_port-100: 1.1.sta00549
sel_port-101: 1.1.sta01000
sel_port-102: 1.1.sta01001
sel_port-103: 1.1.sta01002
sel_port-104: 1.1.sta01003
sel_port-105: 1.1.sta01004
sel_port-106: 1.1.sta01005
sel_port-107: 1.1.sta01006
sel_port-108: 1.1.sta01007
sel_port-109: 1.1.sta01008
sel_port-110: 1.1.sta01009
sel_port-111: 1.1.sta01010
sel_port-112: 1.1.sta01011
sel_port-113: 1.1.sta01012
sel_port-114: 1.1.sta01013
sel_port-115: 1.1.sta01014
sel_port-116: 1.1.sta01015
sel_port-117: 1.1.sta01016
sel_port-118: 1.1.sta01017
sel_port-119: 1.1.sta01018
sel_port-120: 1.1.sta01019
sel_port-121: 1.1.sta01020
sel_port-122: 1.1.sta01021
sel_port-123: 1.1.sta01022
sel_port-124: 1.1.sta01023
sel_port-125: 1.1.sta01024
sel_port-126: 1.1.sta01025
sel_port-127: 1.1.sta01026
sel_port-128: 1.1.sta01027
sel_port-129: 1.1.sta01028
sel_port-130: 1.1.sta01029
sel_port-131: 1.1.sta01030
sel_port-132: 1.1.sta01031
sel_port-133: 1.1.sta01032
sel_port-134: 1.1.sta01033
sel_port-135: 1.1.sta01034
sel_port-136: 1.1.sta01035
sel_port-137: 1.1.sta01036
sel_port-138: 1.1.sta01037
sel_port-139: 1.1.sta01038
sel_port-140: 1.1.sta01039
sel_port-141: 1.1.sta01040
sel_port-142: 1.1.sta01041
sel_port-143: 1.1.sta01042
sel_port-144: 1.1.sta01043
sel_port-145: 1.1.sta01044
sel_port-146: 1.1.sta01045
sel_port-147: 1.1.sta01046
sel_port-148: 1.1.sta01047
sel_port-149: 1.1.sta01048
sel_port-150: 1.1.sta01049
sel_port-151: 1.1.sta01500
sel_port-152: 1.1.sta01501
sel_port-153: 1.1.sta01502
sel_port-154: 1.1.sta01503
sel_port-155: 1.1.sta01504
sel_port-156: 1.1.sta01505
sel_port-157: 1.1.sta01506
sel_port-158: 1.1.sta01507
sel_port-159: 1.1.sta01508
sel_port-160: 1.1.sta01509
sel_port-161: 1.1.sta01510
sel_port-162: 1.1.sta01511
sel_port-163: 1.1.sta01512
sel_port-164: 1.1.sta01513
sel_port-165: 1.1.sta01514
sel_port-166: 1.1.sta01515
sel_port-167: 1.1.sta01516
sel_port-168: 1.1.sta01517
sel_port-169: 1.1.sta01518
sel_port-170: 1.1.sta01519
sel_port-171: 1.1.sta01520
sel_port-172: 1.1.sta01521
sel_port-173: 1.1.sta01522
sel_port-174: 1.1.sta01523
sel_port-175: 1.1.sta01524
sel_port-176: 1.1.sta01525
sel_port-177: 1.1.sta01526
sel_port-178: 1.1.sta01527
sel_port-179: 1.1.sta01528
sel_port-180: 1.1.sta01529
sel_port-181: 1.1.sta01530
sel_port-182: 1.1.sta01531
sel_port-183: 1.1.sta01532
sel_port-184: 1.1.sta01533
sel_port-185: 1.1.sta01534
sel_port-186: 1.1.sta01535
sel_port-187: 1.1.sta01536
sel_port-188: 1.1.sta01537
sel_port-189: 1.1.sta01538
sel_port-190: 1.1.sta01539
sel_port-191: 1.1.sta01540
sel_port-192: 1.1.sta01541
sel_port-193: 1.1.sta01542
sel_port-194: 1.1.sta01543
sel_port-195: 1.1.sta01544
sel_port-196: 1.1.sta01545
sel_port-197: 1.1.wlan4
sel_port-198: 1.1.wlan5
sel_port-199: 1.1.wlan6
sel_port-200: 1.1.wlan7
show_events: 1
show_log: 0
port_sorting: 0
kpi_id: WiFi Capacity
bg: 0xE0ECF8
test_rig:
show_scan: 1
auto_helper: 1
skip_2: 0
skip_5: 0
skip_5b: 1
skip_dual: 0
skip_tri: 1
batch_size: 1
loop_iter: 1
duration: 6000
test_groups: 0
test_groups_subset: 0
protocol: UDP-IPv4
dl_rate_sel: Total Download Rate:
dl_rate: 1000000000
ul_rate_sel: Total Upload Rate:
ul_rate: 10000000
prcnt_tcp: 100000
l4_endp:
pdu_sz: -1
mss_sel: 1
sock_buffer: 0
ip_tos: 0
multi_conn: -1
min_speed: -1
ps_interval: 60-second Running Average
fairness: 0
naptime: 0
before_clear: 5000
rpt_timer: 1000
try_lower: 0
rnd_rate: 1
leave_ports_up: 0
down_quiesce: 0
udp_nat: 1
record_other_ssids: 0
clear_reset_counters: 0
do_pf: 0
pf_min_period_dl: 0
pf_min_period_ul: 0
pf_max_reconnects: 0
use_mix_pdu: 0
pdu_prcnt_pps: 1
pdu_prcnt_bps: 0
pdu_mix_ln-0:
show_scan: 1
show_golden_3p: 0
save_csv: 0
show_realtime: 1
show_pie: 1
show_per_loop_totals: 1
show_cx_time: 1
show_dhcp: 1
show_anqp: 1
show_4way: 1
show_latency: 1
"""
import sys
import os
import argparse
import time
import json
from os import path
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
if 'py-json' not in sys.path:
sys.path.append(os.path.join(os.path.abspath('..'), 'py-json'))
from cv_test_manager import cv_test as cvtest
from cv_commands import chamberview as cv
from cv_test_manager import *
class WiFiCapacityTest(cvtest):
def __init__(self,
lf_host="localhost",
lf_port=8080,
lf_user="lanforge",
lf_password="lanforge",
instance_name="wct_instance",
config_name="wifi_config",
upstream="eth1",
batch_size="1",
loop_iter="1",
protocol="UDP-IPv4",
duration="5000",
pull_report=False,
load_old_cfg=False,
upload_rate="10Mbps",
download_rate="1Gbps",
sort="interleave",
stations="",
create_stations=False,
radio="wiphy0",
security="open",
paswd="[BLANK]",
ssid="",
enables=[],
disables=[],
raw_lines=[],
raw_lines_file="",
sets=[],
):
super().__init__(lfclient_host=lf_host, lfclient_port=lf_port)
self.lf_host = lf_host
self.lf_port = lf_port
self.lf_user = lf_user
        self.lf_password = lf_password
        self.createCV = cv(lf_host, lf_port)
self.station_profile = self.new_station_profile()
self.pull_report = pull_report
self.load_old_cfg = load_old_cfg
self.instance_name = instance_name
self.config_name = config_name
self.test_name = "WiFi Capacity"
self.batch_size = batch_size
self.loop_iter = loop_iter
self.protocol = protocol
self.duration = duration
self.upload_rate = upload_rate
self.download_rate = download_rate
self.upstream = upstream
self.sort = sort
self.stations = stations
        self.create_stations = create_stations
self.security = security
self.ssid = ssid
self.paswd = paswd
self.radio = radio
self.enables = enables
self.disables = disables
self.raw_lines = raw_lines
self.raw_lines_file = raw_lines_file
self.sets = sets
def setup(self):
if self.create_stations and self.stations != "":
sta = self.stations.split(",")
self.station_profile.cleanup(sta)
self.station_profile.use_security(self.security, self.ssid, self.paswd)
self.station_profile.create(radio=self.radio, sta_names_=sta, debug=self.debug)
self.station_profile.admin_up()
self.wait_for_ip(station_list=sta)
print("stations created")
def run(self):
self.createCV.sync_cv()
time.sleep(2)
self.rm_text_blob(self.config_name, "Wifi-Capacity-") # To delete old config with same name
self.show_text_blob(None, None, False)
# Test related settings
cfg_options = []
port_list = [self.upstream]
if self.stations == "":
stas = self.station_map() # See realm
for eid in stas.keys():
port_list.append(eid)
else:
stas = self.stations.split(",")
for s in stas:
port_list.append(s)
idx = 0
for eid in port_list:
add_port = "sel_port-" + str(idx) + ": " + eid
self.create_test_config(self.config_name, "Wifi-Capacity-", add_port)
idx += 1
self.apply_cfg_options(cfg_options, self.enables, self.disables, self.raw_lines, self.raw_lines_file)
if self.batch_size != "":
cfg_options.append("batch_size: " + self.batch_size)
if self.loop_iter != "":
cfg_options.append("loop_iter: " + self.loop_iter)
if self.protocol != "":
cfg_options.append("protocol: " + str(self.protocol))
if self.duration != "":
cfg_options.append("duration: " + self.duration)
if self.upload_rate != "":
cfg_options.append("ul_rate: " + self.upload_rate)
if self.download_rate != "":
cfg_options.append("dl_rate: " + self.download_rate)
blob_test = "Wifi-Capacity-"
# We deleted the scenario earlier, now re-build new one line at a time.
self.build_cfg(self.config_name, blob_test, cfg_options)
cv_cmds = []
if self.sort == 'linear':
cmd = "cv click '%s' 'Linear Sort'" % self.instance_name
cv_cmds.append(cmd)
if self.sort == 'interleave':
cmd = "cv click '%s' 'Interleave Sort'" % self.instance_name
cv_cmds.append(cmd)
self.create_and_run_test(self.load_old_cfg, self.test_name, self.instance_name,
self.config_name, self.sets,
self.pull_report, self.lf_host, self.lf_user, self.lf_password,
cv_cmds)
self.rm_text_blob(self.config_name, blob_test) # To delete old config with same name
self.rm_text_blob(self.config_name, "Wifi-Capacity-") # To delete old config with same name
def main():
parser = argparse.ArgumentParser(
description="""
./lf_wifi_capacity_test.py --mgr localhost --port 8080 --lf_user lanforge --lf_password lanforge \
--instance_name wct_instance --config_name wifi_config --upstream 1.1.eth1 --batch_size 1 --loop_iter 1 \
--protocol UDP-IPv4 --duration 6000 --pull_report --stations 1.1.sta0000,1.1.sta0001 \
--create_stations --radio wiphy0 --ssid test-ssid --security open --paswd [BLANK] \
--test_rig Testbed-01 \
--influx_host c7-graphana --influx_port 8086 --influx_org Candela \
--influx_token=-u_Wd-L8o992701QF0c5UmqEp7w7Z7YOMaWLxOMgmHfATJGnQbbmYyNxHBR9PgD6taM_tcxqJl6U8DjU1xINFQ== \
--influx_bucket ben \
--influx_tag testbed Ferndale-01
""")
cv_add_base_parser(parser) # see cv_test_manager.py
parser.add_argument("-u", "--upstream", type=str, default="",
help="Upstream port for wifi capacity test ex. 1.1.eth1")
parser.add_argument("-b", "--batch_size", type=str, default="",
help="station increment ex. 1,2,3")
parser.add_argument("-l", "--loop_iter", type=str, default="",
help="Loop iteration ex. 1")
parser.add_argument("-p", "--protocol", type=str, default="",
help="Protocol ex.TCP-IPv4")
parser.add_argument("-d", "--duration", type=str, default="",
help="duration in ms. ex. 5000")
parser.add_argument("--download_rate", type=str, default="1Gbps",
help="Select requested download rate. Kbps, Mbps, Gbps units supported. Default is 1Gbps")
parser.add_argument("--upload_rate", type=str, default="10Mbps",
help="Select requested upload rate. Kbps, Mbps, Gbps units supported. Default is 10Mbps")
    parser.add_argument("--sort", type=str, default="interleave",
                        help="Select station sorting behaviour: none | interleave | linear. Default is interleave.")
parser.add_argument("-s", "--stations", type=str, default="",
help="If specified, these stations will be used. If not specified, all available stations will be selected. Example: 1.1.sta001,1.1.wlan0,...")
parser.add_argument("-cs", "--create_stations", default=False, action='store_true',
help="create stations in lanforge (by default: False)")
parser.add_argument("-radio", "--radio", default="wiphy0",
help="create stations in lanforge at this radio (by default: wiphy0)")
parser.add_argument("-ssid", "--ssid", default="",
help="ssid name")
parser.add_argument("-security", "--security", default="open",
help="ssid Security type")
parser.add_argument("-paswd", "--paswd", default="[BLANK]",
help="ssid Password")
args = parser.parse_args()
cv_base_adjust_parser(args)
WFC_Test = WiFiCapacityTest(lf_host=args.mgr,
lf_port=args.port,
lf_user=args.lf_user,
lf_password=args.lf_password,
instance_name=args.instance_name,
config_name=args.config_name,
upstream=args.upstream,
batch_size=args.batch_size,
loop_iter=args.loop_iter,
protocol=args.protocol,
duration=args.duration,
pull_report=args.pull_report,
load_old_cfg=args.load_old_cfg,
download_rate=args.download_rate,
upload_rate=args.upload_rate,
sort=args.sort,
stations=args.stations,
create_stations=args.create_stations,
                                radio=args.radio,
                                ssid=args.ssid,
                                security=args.security,
                                paswd=args.paswd,
enables = args.enable,
disables = args.disable,
raw_lines = args.raw_line,
raw_lines_file = args.raw_lines_file,
sets = args.set
)
WFC_Test.setup()
WFC_Test.run()
WFC_Test.check_influx_kpi(args)
if __name__ == "__main__":
main()
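The run() method above builds one `sel_port-N: EID` raw line per selected port, upstream first and then each station, matching the shape of the example raw config in the docstring. A small sketch of that numbering (the `sel_port_lines` helper is hypothetical, shown only to illustrate the blob format):

```python
def sel_port_lines(upstream, stations):
    """Build the 'sel_port-N: EID' raw lines the Wifi-Capacity text
    blob expects: upstream port first, then each station in order."""
    ports = [upstream] + list(stations)
    return ["sel_port-%d: %s" % (i, eid) for i, eid in enumerate(ports)]


lines = sel_port_lines("1.1.eth1", ["1.1.sta0000", "1.1.sta0001"])
print("\n".join(lines))
```

Each generated line is then pushed to the config blob one at a time, as the script does with `create_test_config`.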


@@ -0,0 +1,192 @@
#!/usr/bin/env python3
"""
Script for creating a variable number of stations.
"""
import sys
import os
import argparse
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
if 'py-json' not in sys.path:
sys.path.append(os.path.join(os.path.abspath('..'), 'py-json'))
from LANforge.lfcli_base import LFCliBase
from LANforge import LFUtils
from realm import Realm
import datetime
import pprint
import pandas as pd
import time
class MeasureTimeUp(Realm):
def __init__(self,
_ssid=None,
_security=None,
_password=None,
_host=None,
_port=None,
_num_sta=None,
_number_template="00000",
_radio=["wiphy0", "wiphy1"],
_proxy_str=None,
_debug_on=False,
_up=True,
_exit_on_error=False,
_exit_on_fail=False,
_load=None,
_action="overwrite",
                 _clean_chambers=False,
_start=None,
_quiesce=None,
_stop=None,
                 _clean_dut=False):
super().__init__(_host,
_port)
self.host = _host
self.port = _port
self.ssid = _ssid
self.security = _security
self.password = _password
self.num_sta = _num_sta
self.radio = _radio
# self.timeout = 120
self.number_template = _number_template
self.debug = _debug_on
self.up = _up
self.station_profile = self.new_station_profile()
self.station_profile.lfclient_url = self.lfclient_url
self.station_profile.ssid = self.ssid
        self.station_profile.ssid_pass = self.password
self.station_profile.security = self.security
self.station_profile.number_template_ = self.number_template
self.station_profile.mode = 0
self.load = _load
self.action = _action
self.clean_chambers = _clean_chambers
self.start = _start
self.quiesce = _quiesce
self.stop = _stop
self.clean_dut = _clean_dut
def build(self):
# Build stations
self.station_profile.use_security(self.security, self.ssid, self.password)
self.station_profile.set_number_template(self.number_template)
print("Creating stations")
start_num = 0
sta_names = []
for item in self.radio:
self.station_profile.set_command_flag("add_sta", "create_admin_down", 1)
self.station_profile.set_command_param("set_port", "report_timer", 1500)
self.station_profile.set_command_flag("set_port", "rpt_timer", 1)
sta_list = LFUtils.port_name_series(prefix="sta",
start_id=start_num,
end_id=self.num_sta + start_num,
padding_number=10000,
radio=item)
start_num = self.num_sta + start_num + 1
sta_names.extend(sta_list)
self.station_profile.create(radio=item, sta_names_=sta_list, debug=self.debug)
def station_up(self):
if self.up:
self.station_profile.admin_up()
self.wait_for_ip(station_list=self.station_profile.station_names)
self._pass("PASS: Station build finished")
def scenario(self):
if self.load is not None:
data = {
"name": self.load,
"action": self.action,
"clean_dut": "no",
"clean_chambers": "no"
}
if self.clean_dut:
data['clean_dut'] = "yes"
if self.clean_chambers:
data['clean_chambers'] = "yes"
print("Loading database %s" % self.load)
self.json_post("/cli-json/load", data)
elif self.start is not None:
print("Starting test group %s..." % self.start)
self.json_post("/cli-json/start_group", {"name": self.start})
elif self.stop is not None:
print("Stopping test group %s..." % self.stop)
self.json_post("/cli-json/stop_group", {"name": self.stop})
elif self.quiesce is not None:
print("Quiescing test group %s..." % self.quiesce)
self.json_post("/cli-json/quiesce_group", {"name": self.quiesce})
def main():
parser = LFCliBase.create_basic_argparse(
prog='measure_station_time_up.py',
formatter_class=argparse.RawTextHelpFormatter,
epilog='''\
Measure how long it takes to up stations
''',
description='''\
measure_station_time_up.py
--------------------
Command example:
./measure_station_time_up.py
--radio wiphy0
--num_stations 3
--security open
--ssid netgear
--passwd BLANK
--debug
--outfile
''')
required = parser.add_argument_group('required arguments')
required.add_argument('--report_file', help='where you want to store results', required=True)
args = parser.parse_args()
dictionary = dict()
    for num_sta in range(0, 200, 2):
print(num_sta)
try:
create_station = MeasureTimeUp(_host=args.mgr,
_port=args.mgr_port,
_ssid=args.ssid,
_password=args.passwd,
_security=args.security,
_num_sta=num_sta,
_radio=["wiphy0", "wiphy1"],
_proxy_str=args.proxy,
_debug_on=args.debug,
_load='FACTORY_DFLT')
create_station.scenario()
time.sleep(5.0 + num_sta / 10)
start = datetime.datetime.now()
create_station.build()
built = datetime.datetime.now()
create_station.station_up()
stationsup = datetime.datetime.now()
dictionary[num_sta] = [start, built, stationsup]
            create_station.wait_until_ports_disappear(base_url=create_station.lfclient_url,
                                                      port_list=create_station.station_profile.station_names)
time.sleep(5.0 + num_sta / 20)
        except Exception as e:
            print("Iteration with %s stations failed: %s" % (num_sta, e))
df = pd.DataFrame.from_dict(dictionary).transpose()
df.columns = ['Start', 'Built', 'Stations Up']
df['built duration'] = df['Built'] - df['Start']
df['Up Stations'] = df['Stations Up'] - df['Built']
df['duration'] = df['Stations Up'] - df['Start']
for variable in ['built duration', 'duration']:
df[variable] = [x.total_seconds() for x in df[variable]]
df.to_pickle(args.report_file)
if __name__ == "__main__":
main()
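The report pickled above holds one row of timestamps per station count, and the duration columns are just pairwise differences converted to seconds. The same post-processing on synthetic timestamps, without pandas (all timestamp values here are made up for illustration):

```python
import datetime

t0 = datetime.datetime(2021, 4, 22, 10, 0, 0)
# num_stations -> [start, built, stations_up] timestamps (synthetic)
timings = {
    2: [t0, t0 + datetime.timedelta(seconds=3), t0 + datetime.timedelta(seconds=8)],
    4: [t0, t0 + datetime.timedelta(seconds=5), t0 + datetime.timedelta(seconds=14)],
}
durations = {
    n: {
        "built": (built - start).total_seconds(),   # build time
        "up": (up - built).total_seconds(),         # admin-up + IP time
        "total": (up - start).total_seconds(),      # end to end
    }
    for n, (start, built, up) in timings.items()
}
print(durations[4]["total"])  # → 14.0
```

This matches the DataFrame columns the script produces: 'built duration' is Built minus Start, and 'duration' is Stations Up minus Start.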

py-scripts/recordinflux.py Executable file

@@ -0,0 +1,86 @@
#!/usr/bin/env python3
"""recordinflux records data from existing LANforge endpoints to an already existing Influx database.
This data can then be streamed into Grafana or any other graphing program the user chooses while this script runs.
https://influxdb-python.readthedocs.io/en/latest/include-readme.html
Use './recordinflux.py --help' to see command line usage and options
Copyright 2021 Candela Technologies Inc
License: Free to distribute and modify. LANforge systems must be licensed.
"""
import sys
import os
if sys.version_info[0] != 3:
print("This script requires Python 3")
exit(1)
if 'py-json' not in sys.path:
sys.path.append(os.path.join(os.path.abspath('..'), 'py-json'))
from LANforge.lfcli_base import LFCliBase
import argparse
def main():
parser = LFCliBase.create_bare_argparse(
prog='recordinflux.py',
formatter_class=argparse.RawTextHelpFormatter,
epilog='''
Record data to an Influx database so that it can be streamed to Grafana or other graphing software''',
description='''
recordinflux.py:
----------------------------
Generic command example:
./recordinflux.py --influx_user lanforge \\
--influx_passwd password \\
--influx_db lanforge \\
--stations \\
--longevity 5h'''
)
target_kpi = ['bps rx', 'rx bytes', 'pps rx', 'rx pkts', 'rx drop']
parser.add_argument('--influx_user', help='Username for your Influx database')
parser.add_argument('--influx_passwd', help='Password for your Influx database')
parser.add_argument('--influx_token', help='Token for your Influx database', default=None)
parser.add_argument('--influx_db', help='Name of your Influx database')
parser.add_argument('--influx_bucket', help='Name of your Influx bucket')
parser.add_argument('--influx_org', help='Name of your Influx Organization')
parser.add_argument('--influx_port', help='Name of your Influx Port', default=8086)
parser.add_argument('--longevity', help='How long you want to gather data', default='4h')
parser.add_argument('--device', help='Device to monitor', action='append', required=True)
parser.add_argument('--monitor_interval', help='How frequently you want to append to your database', default='5s')
parser.add_argument('--target_kpi', help='Monitor only selected columns', action='append', default=target_kpi)
args = parser.parse_args()
monitor_interval = LFCliBase.parse_time(args.monitor_interval).total_seconds()
longevity = LFCliBase.parse_time(args.longevity).total_seconds()
tags=dict()
tags['script'] = 'recordinflux'
if args.influx_user is None:
from influx2 import RecordInflux
grapher = RecordInflux(_influx_host=args.mgr,
_influx_port=args.influx_port,
                               _influx_bucket=args.influx_bucket,
_influx_token=args.influx_token,
_influx_org=args.influx_org)
grapher.monitor_port_data(longevity=longevity,
devices=args.device,
monitor_interval=monitor_interval,
tags=tags)
else:
from influx import RecordInflux
grapher = RecordInflux(_influx_host=args.mgr,
_port=args.mgr_port,
_influx_db=args.influx_db,
_influx_user=args.influx_user,
_influx_passwd=args.influx_passwd)
grapher.getdata(longevity=longevity,
devices=args.device,
monitor_interval=monitor_interval,
target_kpi=args.target_kpi)
if __name__ == "__main__":
main()
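Both --longevity and --monitor_interval are short duration strings handed to LFCliBase.parse_time() and reduced to seconds. A minimal stand-in showing the kind of conversion involved (parse_time_seconds is a hypothetical simplification, not the actual LANforge implementation):

```python
import re


def parse_time_seconds(text):
    """Convert a short duration string like '5s', '30m', '4h', or '1d'
    into a number of seconds."""
    match = re.fullmatch(r"(\d+)([smhd])", text.strip())
    if not match:
        raise ValueError("expected <number><s|m|h|d>, got %r" % text)
    value, unit = int(match.group(1)), match.group(2)
    return value * {"s": 1, "m": 60, "h": 3600, "d": 86400}[unit]


print(parse_time_seconds("4h"))  # → 14400
```

So the script's defaults of --longevity 4h with --monitor_interval 5s work out to one sample appended to the database every 5 seconds for 14400 seconds.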

Some files were not shown because too many files have changed in this diff