Compare commits


95 Commits
next ... v5.0.0

Author SHA1 Message Date
Mike Hansen
0e0d7771f0 Merge pull request #26 from Telecominfraproject/staging-release-5.0.0-pki-2.0
PKI 2.0 Implementation with Schema v5.0.0 Updates
2026-03-03 10:33:05 -05:00
Mike Hansen
2d0f260f5d Add support for PKI 2.0 with schema v5.0.0 updates
This commit completes the PKI 2.0 implementation by integrating schema
v5.0.0 and fixing runtime configuration handling.

PKI 2.0 Testing:
- Tested with simulated switch (ols-ucentral-client) running in Docker container
- Successfully connected to cloud instance using PKI 2.0 birth certificates
- Verified automatic EST enrollment and operational certificate retrieval
- Confirmed gateway connectivity with operational certificates

Version Update:
- Update client version from 4.1.0 → 5.0.0
- Aligns with PKI 2.0 feature release

Schema Update:
- Update schema reference to release/v5.0.0
- Add version.json and schema.json to /etc/ for runtime capabilities reporting
- Update config-samples/ucentral.schema.pretty.json to v5.0.0
  - New fields: autoneg, qos-priority-mapping
- Regenerate property databases for test suite:
  - property-database-base.c (418 properties)
  - property-database-platform-brcm-sonic.c (418 properties)

Configuration Fixes:
- Fix UC_GATEWAY_ADDRESS parsing to support host:port format
  - Previously required separate -s and -P flags
  - Now supports single environment variable: UC_GATEWAY_ADDRESS=host:port
2026-02-26 13:38:18 -05:00
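The commit above describes collapsing the separate -s and -P flags into a single UC_GATEWAY_ADDRESS=host:port variable. A minimal sketch of such a splitter, assuming a hypothetical split_gateway_address helper (names and limits are illustrative, not the client's actual parser):

```c
#include <stdio.h>
#include <string.h>

/* Split "host:port" into its parts; returns 0 on success, -1 on bad input. */
int split_gateway_address(const char *in, char *host, size_t hostlen, int *port)
{
    const char *colon = strrchr(in, ':'); /* last ':' so the port always wins */
    if (!colon || colon == in)
        return -1;
    size_t n = (size_t)(colon - in);
    if (n + 1 > hostlen)
        return -1;
    memcpy(host, in, n);
    host[n] = '\0';
    if (sscanf(colon + 1, "%d", port) != 1 || *port <= 0 || *port > 65535)
        return -1;
    return 0;
}
```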
Mike Hansen
59ef4a8db5 Add EST client stubs to config parser test suite
Add stub implementations for PKI 2.0 EST client functions (est_get_server_url,
est_simple_reenroll, est_get_error) to support testing proto.c with the new
reenroll RPC handler. These stubs allow the config parser test suite to compile
and run without requiring actual EST functionality.

Tests: All 24 config parser tests pass with no regressions
2026-02-25 17:37:56 -05:00
Mike Hansen
d8af348fae Implement PKI 2.0 with EST protocol support
- Add EST (RFC 7030) client implementation for automated certificate lifecycle
  - est-client.c/h: Complete EST protocol implementation using libcurl + OpenSSL
  - Support for simple enrollment, reenrollment, and CA certificate retrieval
  - Auto-detection of EST server based on certificate issuer

- Update ucentral-client for PKI 2.0 certificate flow
  - Remove DigiCert firstcontact flow (marked as legacy, to be removed)
  - Implement automatic EST enrollment on first boot
  - Birth certificates (cert.pem, key.pem, cas.pem) → EST → operational certificates
  - Fallback to birth certificates if enrollment fails

- Add reenroll RPC command handler in proto.c
  - Allows gateway-initiated certificate renewal before expiration
  - Saves renewed certificate and schedules restart after 10 seconds

- Update configuration and documentation
  - Version bump: 4.1.0 → 5.0.0
  - Dockerfile: Reference schema v5.0.0 (tag to be created after PR merge)
  - README.md: Comprehensive PKI 2.0 architecture and workflow documentation
  - partition_script.sh: Add comments clarifying birth certificate provisioning

- Add PKI 2.0 example scripts
  - Test EST enrollment, reenrollment, and CA certificate retrieval
  - Manual testing tools for certificate operations
  - Comprehensive troubleshooting guide

- Update Makefile to compile est-client.o
- Build tested successfully with no regressions (38MB binary)

This implementation follows the proven TIP wlan-ap PKI 2.0 pattern for
consistency across TIP/OpenWifi projects.
2026-02-25 17:25:53 -05:00
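The birth → EST → operational flow with fallback described above can be sketched as a small selection function; this is a simplified model of the decision, not the client's actual code, and the names are assumptions:

```c
typedef enum { CERT_OPERATIONAL, CERT_BIRTH } cert_source_t;

/* have_operational: operational certs already on disk;
 * enroll_ok: EST enrollment just succeeded and retrieved new certs */
cert_source_t select_certs(int have_operational, int enroll_ok)
{
    if (have_operational)
        return CERT_OPERATIONAL;      /* reuse previously enrolled certs */
    if (enroll_ok)
        return CERT_OPERATIONAL;      /* newly retrieved via EST */
    return CERT_BIRTH;                /* fallback if enrollment fails */
}
```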
Mike Hansen
2c9045c777 Merge pull request #25 from Telecominfraproject/ols-968-documentation-accuracy-and-property-database-regeneration
OLS-968: Fix documentation accuracy and property database regeneration
2026-01-21 20:32:45 -05:00
Mike Hansen
e36ddea61e OLS-968: Fix documentation accuracy and property database regeneration
- Update line counts across documentation to match actual source files
  (test-config-parser.c: 3304 lines, test-stubs.c: 219 lines)
- Correct test configuration count to exact "25 configs"
- Fix schema file references to single canonical file
- Add property database documentation with accurate statistics
- Fix broken Makefile regeneration targets (regenerate-property-db,
  regenerate-platform-property-db) to use correct schema-based workflow
- Replace non-existent rebuild-property-database.py with three-step
  process: extract-schema-properties.py → generate-database-from-schema.py

Signed-off-by: Mike Hansen <mike.hansen@netexperience.com>
2026-01-21 16:25:37 -05:00
Mike Hansen
9c91b06be3 Merge pull request #24 from Telecominfraproject/OLS-915-schema-verification-enhancement
[OLS-915] Schema Verification Enhancement

Enhanced schema verification, added 'make clean' before each test run to force correct binary
rebuild when switching between stub/platform modes
2026-01-15 14:05:19 -05:00
Mike Hansen
dc2b9cd8ab [OLS-915] Schema Verification Enhancement
Signed-off-by: Mike Hansen <mike.hansen@netexperience.com>
2026-01-15 13:57:47 -05:00
Mike Hansen
00ca1cd1f4 [OLS-915] Schema Verification Enhancement
Signed-off-by: Mike Hansen <mike.hansen@netexperience.com>
2026-01-15 13:19:39 -05:00
Mike Hansen
ff4b2095b5 [OLS-915] Schema Verification Enhancement
Fix platform property tracking in test output

Platform databases are now always included for property analysis, showing
the complete base→platform flow with arrow notation in test reports.

Signed-off-by: Mike Hansen <mike.hansen@netexperience.com>
2026-01-13 11:08:25 -05:00
Mike Hansen
0cb84a23a0 [OLS-915] Schema Verification Enhancement
Signed-off-by: Mike Hansen <mike.hansen@netexperience.com>
2026-01-13 10:40:51 -05:00
Mike Hansen
a4a153197b [OLS-915] Schema Verification Enhancement
Enables undefined property detection during make validate-schema, providing
warnings about typos and vendor-specific properties that aren't in the schema.

Signed-off-by: Mike Hansen <mike.hansen@netexperience.com>
2026-01-13 10:22:59 -05:00
Mike Hansen
cd135692b3 Merge pull request #23 from Telecominfraproject/ols-925-platform-config-testing-with-stubs
OLS-925: Platform config logic testing with hardware stubs
2026-01-07 10:22:48 -05:00
Mike Hansen
cef160a0fa [OLS-915] Schema-based property generation and platform testing
Signed-off-by: Mike Hansen <mike.hansen@netexperience.com>
2026-01-05 11:40:10 -05:00
Mike Hansen
04edeb90e4 [OLS-915] Schema-based property generation and platform testing
Schema-based property database:
- Extract 398 properties from schema (vs config files)
- Generate base DB (proto.c: 102 found, 296 unimplemented)
- Generate platform DB (plat-gnma.c: 141 found, 257 unimplemented)
- Support JSON schema (included) and YAML (external repo)

Platform testing framework:
- Stub hardware layer (gNMI/gNOI), test real platform logic
- Dual tracking: base parsing + platform application
- Platform mocks for brcm-sonic and example platforms

Changes:
- New: extract-schema-properties.py (JSON/YAML support)
- New: generate-database-from-schema.py, generate-platform-database-from-schema.py
- New: Platform mocks and property databases
- Remove: 4 legacy config-based generation tools
- Remove: unused ucentral.schema.full.json
- Update: Simplified fetch-schema.sh, updated MAINTENANCE.md

Tests: 24/24 pass (stub and platform modes)

Signed-off-by: Mike Hansen <mike.hansen@netexperience.com>
2026-01-05 10:00:54 -05:00
Mike Hansen
b00c502016 [OLS-915] Configuration Testing Framework with Property Tracking
- corrected references and paths in markdown files
  hopefully the last

Signed-off-by: Mike Hansen <mike.hansen@netexperience.com>
2025-12-18 09:44:43 -05:00
Mike Hansen
3963845143 [OLS-915] Configuration Testing Framework with Property Tracking
- corrected references and paths in markdown files

Signed-off-by: Mike Hansen <mike.hansen@netexperience.com>
2025-12-18 09:40:48 -05:00
Mike Hansen
6c4c918c3b [OLS-915] Configuration Testing Framework with Property Tracking
- corrected paths in markdown files

Signed-off-by: Mike Hansen <mike.hansen@netexperience.com>
2025-12-18 09:37:14 -05:00
Mike Hansen
f90edc4c41 Merge pull request #22 from Telecominfraproject/OLS-915-configuration-validation-and-parser-tools
[OLS-915] Configuration Testing Framework with Property Tracking
2025-12-18 09:15:47 -05:00
Mike Hansen
9564925df5 [OLS-915] Remove generated build artifacts from repository
- Remove binary executable: tests/config-parser/test-config-parser
- Remove generated test reports: test-report.html, test-report.json
- Update .gitignore to exclude test artifacts and build outputs

These files are generated during build/test and should not be tracked in git.
They will be regenerated when running 'make test-config' or './run-config-tests.sh'.

Related to: OLS-915 Configuration Testing Framework with Property Tracking
2025-12-17 09:20:10 -05:00
Mike Hansen
30b1904d00 [OLS-915] Configuration Testing Framework with Property Tracking - ols-ucentral-client
Add comprehensive configuration testing framework with property tracking

  Implements two-layer validation system (schema + parser) for JSON configurations:
  - Add test-config-parser.c with 628-property database tracking implementation status
  - Add Python schema validator and property database generation tools
  - Add test runner script (run-config-tests.sh) for automated testing
  - Add 25+ test configurations covering core and platform-specific features
  - Modify proto.c with TEST_STATIC macro to expose cfg_parse() for testing
  - Support multiple output formats: human-readable, HTML, JSON, JUnit XML

  Enables automated validation of configuration processing, tracks feature
  implementation coverage, and provides CI/CD integration for continuous testing.

Signed-off-by: Mike Hansen <mike.hansen@netexperience.com>
2025-12-16 17:41:53 -05:00
Mike Hansen
3c9d20d97f Merge pull request #20 from Telecominfraproject/OLS_Update_Version_410
OLS 410
2025-05-29 14:52:06 -04:00
Mike Hansen
84789e07ce OLS 410
Signed-off-by: Mike Hansen <mike.hansen@netexperience.com>
2025-05-28 10:08:52 -04:00
Mike Hansen
10cc5bec80 Merge pull request #19 from Telecominfraproject/OLS-578-Tag-ols-ucentral-client-and-ols-ucentral-schema-4.0.0-pre-release
[OLS-578] Tag ols-ucentral-client and ols-ucentral-schema 4.0.0 pre-r…
2025-02-07 08:06:04 -05:00
Mike Hansen
ca74a49604 [OLS-578] Tag ols-ucentral-client and ols-ucentral-schema 4.0.0 pre-release
Take the schema version from tag v4.0.0-rc1
Update version.json to 4.0.0

Signed-off-by: Mike Hansen <mike.hansen@netexperience.com>
2025-02-05 08:36:43 -05:00
Mike Hansen
fb10d141d0 Merge pull request #18 from Telecominfraproject/OLS-563-version-client-tag-and-update-schema-ref
[OLS-563] Add version to ols-ucentral-client
2025-02-03 09:27:55 -05:00
Mike Hansen
176d2b9f36 [OLS-563] Add version to ols-ucentral-client
Dockerfile updated to pull tagged version of schema to get the schema.json file.
This will make subsequent versioning much easier.

Once this is merged we can tag the client as 3.2.7 and then move forward to 4.x as we did with the schema.

Signed-off-by: Mike Hansen <mike.hansen@netexperience.com>
2025-01-31 16:48:09 -05:00
Mike Hansen
41d50f4650 Merge pull request #17 from Telecominfraproject/OLS-563-Add-version-to-ols-ucentral-client
[OLS-563] Add version to ols-ucentral-client
2025-01-27 09:36:05 -05:00
Mike Hansen
00ae4001e7 [OLS-563] Add version to ols-ucentral-client
Add version to ols-ucentral-client
Augment the build to pull the schema version file from the ols-ucentral-schema repo (if present) based on commit id of
schema used as baseline for this client version.
Use both it and the version to provide the version information in the connect message.

Signed-off-by: Mike Hansen <mike.hansen@netexperience.com>
2025-01-22 19:49:46 -05:00
Olexandr, Mazur
5936fbed88 Merge pull request #13 from r4nx/fix-cpp-compilation
Fix compilation issue when platform is implemented in C++
2024-05-14 23:24:10 +03:00
Viacheslav Holovetskyi
0aea2e273c Fix compilation issues
'new' is a C++ keyword, so the header couldn't be used from C++
2024-04-29 16:17:53 +03:00
Olexandr, Mazur
6e8ccbf40c Merge pull request #12 from Telecominfraproject/plv_next/2.2_build5
Plv next/2.2 build5
2024-04-16 18:53:14 +03:00
Oleksandr Mazur
80f01f977c Update build version to 2.2 b5
Signed-off-by: Oleksandr Mazur <oleksandr.mazur@plvision.eu>
Change-Id: I8ba40d21d4f1d4c81eee906fce0bae6b76aa6052
2024-04-16 17:53:25 +03:00
Oleksandr Mazur
145f8aba82 Fixup scripts: make sure uplink iface (port) is dhcp trusted
With the DHCP-snooping full support for the BRCM platforms
it's needed for the ports to be marked either trusted/untrusted
to make sure proper DHCP request/reply forwarding occurs.

With this commit the following behavior is enforced:
 * Every port is untrusted by default upon device startup;
 * Uplink interface (port) is determined through the means of
   parsing ARP+FBD table;
 * DHCP trust is enabled only for the uplink port;
 * All vlan members can now send DHCP discover(s), which
   will be forwarded (flooded) to trusted ports (in our
   case to a single trusted uplink port) and get
   their replies back.

Signed-off-by: Oleksandr Mazur <oleksandr.mazur@plvision.eu>
2024-04-16 16:30:01 +03:00
Olexandr, Mazur
e53d618a33 Merge pull request #11 from Telecominfraproject/plv_next/2.2_build4
Plv next/2.2 build4
2024-04-08 14:39:14 +03:00
Oleksandr Mazur
0799cec723 Update build version to 2.2 b4
Signed-off-by: Oleksandr Mazur <oleksandr.mazur@plvision.eu>
Change-Id: Id96cba2b6e1e53a0706133492172c17f860bb2f3
2024-04-08 14:18:08 +03:00
Oleksandr Mazur
24143fc5bc Fix infinite loop deviceupdate send
The deviceupdate condition never fails once the password is changed,
which makes the device spam passwordchange events.
This, as a result, overflows internal buffers, which
in turn fills up internal disk storage.

Signed-off-by: Oleksandr Mazur <oleksandr.mazur@plvision.eu>
2024-04-08 13:56:11 +03:00
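The fix above guards the deviceupdate send on an actual change. A minimal sketch of the intended condition (the function name and signature are illustrative, not the client's code):

```c
#include <string.h>

/* Send a deviceupdate only when the password actually changed; the broken
 * condition always evaluated true, flooding the GW with passwordchange events. */
int should_send_deviceupdate(const char *stored_pw, const char *new_pw)
{
    return strcmp(stored_pw, new_pw) != 0;
}
```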
Oleksandr Mazur
15b9868322 Revert "script: Use a new GW address"
This reverts commit 4911cab05e.
2024-04-08 13:37:10 +03:00
Olexandr, Mazur
54141e0af6 Merge pull request #10 from Telecominfraproject/plv_next/2.2_build3
Plv next/2.2 build3
2024-04-07 12:30:13 +03:00
Oleksandr Mazur
9034123c2e Update build version to 2.2 b3
Signed-off-by: Oleksandr Mazur <oleksandr.mazur@plvision.eu>
Change-Id: I2d6837667e39be364e32a4a73932928b1edd6b0a
2024-04-07 12:28:45 +03:00
Serhiy Boiko
d0189eaad6 plat: ipv4: Skip L3 configuration for default vlan
Do not apply or modify any L3 cfg for Vlan 1 (ip, dhcp, igmp, etc.)

Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: I0e19bad017cacc14cb30af946101d7ad4d9bc8d0
2024-04-07 12:28:38 +03:00
Serhiy Boiko
c349f3f9a4 proto: log: Remove unused code
'log' field is expected to be (and is defined in the schema as) an object,
so there is no need to handle it as an array.

Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: I05dd8de1fad086fd6ea16a4cb4792ad7f6e826fd
2024-04-07 12:28:27 +03:00
Serhiy Boiko
5832ecdf36 cfg: Align config samples with current schema definitions
Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: I7b4a2974d19ea645c7cfe7ab9ca1bde27a849f73
2024-04-07 12:28:19 +03:00
Serhiy Boiko
4911cab05e script: Use a new GW address
Use docker env variables to pass a new gw addr, since the old
gw (that is reported by the redirector) is not available.

Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: Ibb3a3b95d556996617d7f4ce1d6a58303df76ad7
2024-04-07 12:28:09 +03:00
Serhiy Boiko
7442bb79c3 ucentral-client: Fix env variable for GW address
Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: I811f4faeb438d84e6cbd88905c9bb5846264ef5a
2024-04-07 12:27:58 +03:00
Serhiy Boiko
f972987312 proto: Fix RPVSTP config being rejected
'priority' should be a multiple of 4096

Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: Ie974257c47be54852d8a634022ad6f033f06597a
2024-04-07 12:27:48 +03:00
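The RPVST+ fix above hinges on the bridge priority being a multiple of 4096 (standard 802.1 bridge priorities run 0–61440 in 4096 steps). A sketch of the validation, assuming a hypothetical helper:

```c
/* RPVST+ bridge priority must be a multiple of 4096 in [0, 61440]. */
int rpvstp_priority_valid(unsigned prio)
{
    return prio <= 61440 && prio % 4096 == 0;
}
```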
Olexandr, Mazur
783368dd7b Merge pull request #9 from Telecominfraproject/feat/igmp_global_querier_filtering
plat: Parse new fields from config message
2024-04-02 13:00:06 +03:00
Serhiy Boiko
dc60bab84b plat: Parse new fields from config message
Parse and handle unknown-multicast-flood-control and querier-enable fields.

Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: Ifa0620bc22e3235b8fb4eb2f7f5dcd026ad0404f
2024-04-02 12:46:39 +03:00
Olexandr, Mazur
681efcabfc Merge pull request #8 from Telecominfraproject/plv_next/next_290324
Plv next/next 290324
2024-03-29 17:34:25 +02:00
Oleksandr Mazur
6f6bd4dfd0 Update build version to 2.2 b2
Signed-off-by: Oleksandr Mazur <oleksandr.mazur@plvision.eu>
Change-Id: I1091f1f34d38f197bdc272adab018a2ec68b5bbf
2024-03-29 17:31:04 +02:00
Serhiy Boiko
04e80e1650 proto: Move diagnostics to a separate thread
Running diagnostics takes a long time (up to 10mins). Running
it in the same context as the callback broker means that all
messages are blocked until diagnostics is finished.
Creating a new thread resolves this issue.

Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: Iadb628007903a7d643b6d2e705da84fd04e73dbe
2024-03-29 17:27:58 +02:00
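The commit above moves the long-running diagnostics out of the callback broker's context into its own thread. A minimal pthread sketch of that pattern, with a stub worker standing in for the real diagnostics job (this demo joins immediately, whereas the real client would answer "pending" first and let the thread report on completion):

```c
#include <pthread.h>

static void *diag_worker(void *arg)
{
    int *result = arg;
    /* stands in for the long-running diagnostics job (up to ~10 min on device) */
    *result = 1;
    return NULL;
}

int run_diagnostics_async(int *result)
{
    pthread_t tid;
    if (pthread_create(&tid, NULL, diag_worker, result) != 0)
        return -1;
    /* demo joins here so the result is observable; real code keeps serving
       other messages while the worker runs */
    return pthread_join(tid, NULL);
}
```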
Serhiy Boiko
a8e2b18733 plat: Report uplink address in state message
Reported info:
- uplink ip addr
- mac addr
- egress port
- route metrics

Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: Ifdae793be73f4c43b3daffb4cf3f4016ea989d44
2024-03-29 17:27:53 +02:00
Oleksandr Mazur
2e5499c375 Fix DHCP + NTP not working properly for Vlan1
Properly enable DHCP snooping for Vlan1 by default;
make sure NTP is configured to use Vlan1 as well.

Signed-off-by: Oleksandr Mazur <oleksandr.mazur@plvision.eu>
Change-Id: Ided132b12a5d472954458632cd61b3e43e072fa0
2024-03-29 17:27:48 +02:00
Serhiy Boiko
0f64807cfb port-isolation: Update parser based on schema
Since the port isolation schema moved from ethernet to switch, update
the code accordingly.

Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: I30335af3d17ad6ecc910c5c8ed2ca69eaaae0913
2024-03-29 17:27:41 +02:00
Serhiy Boiko
215d4dab4a vlan: Add SVI ip addrs to state message
Add netlink calls to get vlan ip addrs.
Add gnma API to retrieve the list of addrs.
Add "addresses" field to vlan interfaces.

Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: I4a43485d45c75993ef128c952acfd69f04cd975e
2024-03-29 17:27:35 +02:00
Olexandr, Mazur
e13a8fac52 Merge pull request #7 from Telecominfraproject/plv_next/next_040324
Plv next/next 040324
2024-03-04 16:53:19 +02:00
Serhiy Boiko
049fef08d9 ipv4: Fix interface ipv4 cfg parsing
The schema requires the config to have the following format:
  {
    "ipv4": {
      "subnet": [
        { "prefix": "255.255.225.255/32" }
      ]
    }
  }

But the code expected this:
  {
    "ipv4": {
      "subnet": "255.255.225.255/32"
    }
  }

Since parsing ipv4 for vlans and ports is the same, the code
is moved to a shared function.
The *_interface_parse functions were refactored to remove unnecessary
indentation.

Limitations:
Only a single ip address can be configured on an interface.
IP address will be configured only for the first port in the list.

Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: Ie3ed777a963129269b10833c970dc3e8a24b6b38
2024-03-04 16:50:18 +02:00
Serhiy Boiko
2bd145e09f cfg: Beautify json configs
Use https://codebeautify.org/jsonviewer to beautify all configs.

Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: I4e82ae96927d4f63027c68f17d2179adb9f09052
2024-03-04 16:50:18 +02:00
Serhiy Boiko
732b4e1bc7 cfg: Add sample configs for new and old features
Sample configs added:
- log service
- port isolation
- igmp
- stp

Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: Ia58fab2da04658100b5be1044892f4966d935e10
2024-03-04 16:50:18 +02:00
Serhiy Boiko
559776ba06 proto: Fix parser issue
Workaround for an issue where an array of objects might have malformed keys:

Original message:
    {
        "static-mcast-groups": [
            {"address": "1.1.1.1",
             "egress-ports": [...]}
        ]
    }

Malformed message:
    {
        "static-mcast-groups": [
            {"static-mcast-groups[].address": "1.1.1.1",
             "static-mcast-groups[].egress-ports": [...]}
        ]
    }

Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: Id0ebe93ab976338adab6cdc3b7d6691ecca9dc94
2024-03-04 16:50:18 +02:00
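One way to tolerate the malformed "static-mcast-groups[].address"-style keys shown above is to strip the "name[]." prefix before dispatching on the key. This is a sketch of that idea with a hypothetical helper, not the actual workaround in proto.c:

```c
#include <string.h>

/* Strip a leading "name[]." prefix from a key, e.g.
 * "static-mcast-groups[].address" -> "address"; well-formed keys pass through. */
const char *normalize_key(const char *key)
{
    const char *p = strstr(key, "].");
    return p ? p + 2 : key;
}
```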
tip-admin
b6c03319d3 Create LICENSE 2024-02-29 08:52:04 -08:00
Olexandr, Mazur
05d06592cc Merge pull request #5 from Telecominfraproject/plv_next_270224
Plv next 270224
2024-02-27 14:35:52 +02:00
Serhiy Boiko
ebf160fa06 igmp: Fix invalid port name size
sizeof(*port_node->name) == 1
sizeof(port_node->name) == PORT_MAX_NAME_LEN  # 32

Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: I3bd09eaf00bb55045a935de7795509f5582b0878
2024-02-27 14:04:18 +02:00
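The sizeof pitfall the commit above fixes is easy to reproduce: sizeof(*port_node->name) is the size of one char (1), while sizeof(port_node->name) is the full array size. A self-contained illustration (struct and helper are illustrative, modeled on the commit's description):

```c
#include <string.h>

#define PORT_MAX_NAME_LEN 32

struct port_node {
    char name[PORT_MAX_NAME_LEN];
};

/* Correct: bound the copy by the array size, not by sizeof a single char. */
void port_set_name(struct port_node *p, const char *src)
{
    strncpy(p->name, src, sizeof(p->name) - 1); /* sizeof(p->name) == 32 */
    p->name[sizeof(p->name) - 1] = '\0';        /* sizeof(*p->name) == 1: the bug */
}
```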
Serhiy Boiko
ace64ef341 stp: Fix STP config not applying
Fix issue where STP config was rejected on device because
some of the GNMI entries could not be deleted.

Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: Ida30be22c609c682b526358c98225bff0567290c
2024-02-27 14:04:16 +02:00
Serhiy Boiko
5acd35237c igmp: Change what attributes are set for snooping/querier
Set querier attributes only for `ip igmp ...`.
Set snooping attributes only for `ip igmp snooping ...`.

fixes: a15c56f
Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: I56369d323fdd8b2b63605392cb8b23fa0442bb8a
2024-02-27 14:04:12 +02:00
Oleksandr Mazur
d9fae8097b Update build number 1.6 b5
Signed-off-by: Oleksandr Mazur <oleksandr.mazur@plvision.eu>
Change-Id: I63b7b8b9945ed67f6ddba104ec3c154c63a308dc
2024-02-27 14:04:12 +02:00
Serhiy Boiko
ee4ff0ee3a igmp: Configure igmp snooping and static groups
Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: I679807532d04338077cc8657ed9702d3ad09536e
2024-02-27 14:04:12 +02:00
Serhiy Boiko
6efdcb7eb5 igmp: plat: Fill vlan interface data
Store IGMP info inside plat struct (GNMI/GNMA handlers);
Add vlan interfaces to the list of all interfaces (proto handlers).
Vlan interfaces have a multicast field as per state schema.

Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: Id9778a017e0ba54f8e1154e580304f95e3de41c8
2024-02-27 14:04:12 +02:00
Serhiy Boiko
bca8160f67 sfp: Send transceiver info to GW
Notify GW about the ports' transceiver info. If transceiver
info is not supported, the "transceiver-info" field is omitted.

    "interfaces": [
        ...
        {
            ...
            "transceiver-info": {
                "vendor-name": "VENDOR",
                "part-number": "PART NUMBER",
                "serial-number": "SERIAL",
                "revision": "REVISION",
                "temperature": 0,
                "tx-optical-power": 0,
                "rx-optical-power": 0,
                "max-module-power": 0,
                "form-factor": "FORM FACTOR",
                "supported-link-modes": [ ... ]}
            ...
        },
        ...
    ],

Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: If83bb41d1ebf76b41c2ed0be6f3f755eefa18d29
2024-02-27 14:04:12 +02:00
Oleksandr Mazur
3d2b3295e7 proto: fill state with CoA-related global counters
Signed-off-by: Oleksandr Mazur <oleksandr.mazur@plvision.eu>
Change-Id: I0979c0abf180fe5a8309fc43ed233352edda936c
2024-02-27 14:04:12 +02:00
Oleksandr Mazur
15e4f7a580 gnma: Implement gnmi handlers for fetching COA stats
Signed-off-by: Oleksandr Mazur <oleksandr.mazur@plvision.eu>
Change-Id: If5027dbb59522731312941b4e539f38e8a54dc70
2024-02-27 14:04:12 +02:00
Oleksandr Mazur
1afbc126fe gnma: implement DAS + DAC gnmi handlers + plat_state recovery
Implement handlers for retrieving platstate of DAS configuration,
as well as DAC list (same as for RADIUS clients).

Signed-off-by: Oleksandr Mazur <oleksandr.mazur@plvision.eu>
Change-Id: Ifa149aa1708c114cc4b5b59772d524f4cd5b70b2
2024-02-27 14:04:12 +02:00
Oleksandr Mazur
a2c49e8ab5 gnma: remove unused radius host definition
Signed-off-by: Oleksandr Mazur <oleksandr.mazur@plvision.eu>
Change-Id: I3cb4739bd6f0724dad784caf381e9f7f01797153
2024-02-27 14:04:12 +02:00
Oleksandr Mazur
2a8d2c18ce gnma: fix invalid poe stats reported
Newest BRCM images changed from number->string values of
some PoE-related info.
Fix parsing to report valid data back to cloud.

Signed-off-by: Oleksandr Mazur <oleksandr.mazur@plvision.eu>
Change-Id: I46afee603f16439fa23adda698be748bfe008b04
2024-02-27 14:04:12 +02:00
Oleksandr Mazur
4549ef61c3 proto: fix FSM for plat diagnostics
GW expects for reply to be sent immediatly (pending state).
This gives GW understanding that command's been processed,
but is still executing.
The final result would be sent afterwards, upon diag completion.
However, the initial pending should be sent as fast as possible,
upon parsing the command itself.

This change fixes the expectations of GW. Without this change
command is marked as timedout, then pending and the completed
OK.

Signed-off-by: Oleksandr Mazur <oleksandr.mazur@plvision.eu>
Change-Id: I36925783fc2bc1cd7dfebb957d7ba30d2c7650ea
2024-02-27 14:04:12 +02:00
Oleksandr Mazur
5710119746 proto: remove iface-type from port isolation
Schema changes pruned the type from the port-isolation definition.
Align the base code with the schema requirements.

Signed-off-by: Oleksandr Mazur <oleksandr.mazur@plvision.eu>
Change-Id: I985022fdefda25461734e8158df7464220f84d8b
2024-02-27 14:04:12 +02:00
Oleksandr Mazur
cedf998260 gnma/proto: fix issues introduced with port-isolation support
- crash upon clearing port-isolation
- memleak of never-freed port-isolation cfg

Change-Id: I847708249cf85f2cfd40ebffefbd56cee822ea8d
2024-02-27 14:04:12 +02:00
Oleksandr Mazur
963120f2b4 proto: implement port-isolation parsing
Only a partial implementation (JSON parsing) is present.
Parsed (from JSON) config values can be used
in platform code, upon applying the config, to alter the port-isolation cfg.

Signed-off-by: Oleksandr Mazur <oleksandr.mazur@plvision.eu>
Change-Id: I5deb31be5b5b2295a7698ca357fc10555b5dd772
2024-02-27 14:04:12 +02:00
Oleksandr Mazur
8b4a63fb66 proto: implement services (ssh, http, telnet) parsing
Implement partial (enable/disable only) parsing of services - the common part
of the OLS-NOS repo.
Only a partial implementation (JSON parsing) is present.
Parsed (from JSON) config values (bool - enable true/false) can be used
in platform code, upon applying the config, to alter service state.

Signed-off-by: Oleksandr Mazur <oleksandr.mazur@plvision.eu>
Change-Id: I2c25724913b5524729950513fd6c5f3b1c25f9e0
2024-02-27 14:04:12 +02:00
Olexandr, Mazur
60abb9a7e6 Merge pull request #1 from Telecominfraproject/plv_next
Plv next
2024-01-22 20:29:41 +02:00
Serhiy Boiko
0b683379b4 system-password: Allow to change admin pass from GW
System (admin) password is changed every time the configure
message contains a system-password field:

  {
    ...
    "unit": {
      "system-password": "YourPaSsWoRd"
    }
    ...
  }

Every time the password is updated a deviceupdate message
(with the new password) is sent to the GW.

Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: I9c8eb49a62402807d9de61e8020637da57986e52
2024-01-22 17:36:01 +02:00
Serhiy Boiko
a328cd6b7a mac-address-list: Add overflow flag
The flag is set in two cases:
- if the value of wired-clients-max-num is set to 0
- if the number of learned mac addrs is greater than
  the value of wired-clients-max-num

Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: I73b46dbb213a91f6375ec33106e84fed50d30ac6
2024-01-22 17:32:06 +02:00
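The overflow flag's two trigger conditions described above reduce to a one-line predicate. A sketch (function name is illustrative):

```c
/* Overflow is flagged when wired-clients-max-num is 0 or when more MACs
 * were learned than the configured maximum. */
int mac_list_overflow(unsigned max_clients, unsigned learned)
{
    return max_clients == 0 || learned > max_clients;
}
```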
Serhiy Boiko
7afff76db1 proto: Fill platform state with learned mac addresses
JSON message to GW:

    "mac-forwarding-table": {
        "Ethernet0": {
            "100": [ "11:11:11:11:11:11", "22:22:22:22:22:22" ],
            "200": [ "33:33:33:33:33:33", "44:44:44:44:44:44" ]
        },
        "Ethernet1": {
            "100": [ "55:55:55:55:55:55", "66:66:66:66:66:66" ],
            "200": [ "77:77:77:77:77:77", "88:88:88:88:88:88" ]
        }
    }

Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: I49a4380225bc105a880731df367df3efbd0f4908
2024-01-22 17:32:06 +02:00
Oleksandr Mazur
0d9af851b4 Update build number 1.6 b4
Signed-off-by: Oleksandr Mazur <oleksandr.mazur@plvision.eu>
Change-Id: I2df8401d9437179e9564b5cae3aa34428f1ce5c1
2024-01-22 17:32:06 +02:00
Serhiy Boiko
b8c952cf1c plat: Add learned_mac_addrs_get API
The API will store the list of learned MAC addrs inside
of plat_state_info.

Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: Ib1e03c3fbc9f52ee9037e6bfca1c2c8fb2db56df
2024-01-22 17:32:06 +02:00
Serhiy Boiko
8636487247 gnma: Add mac_addr_list_get API
This API will return a list of fdb entries from the device.
The caller is responsible for providing a big enough buffer.
If the buffer is not big enough then a GNMA_ERR_OVERFLOW
error is returned and the list_size arg is set to the minimum
required size.

Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: I8046549d9aff5903a068ea3ea2914dcb154a34da
2024-01-22 17:32:06 +02:00
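The probe-then-retry contract described above (call, get GNMA_ERR_OVERFLOW plus the required size, reallocate, call again) is a common C API pattern. A self-contained sketch with a stub standing in for gnma_mac_addr_list_get — the real signature and error codes are assumptions based on the commit text:

```c
#include <stdlib.h>

#define GNMA_ERR_OVERFLOW (-2)

/* Stub for gnma_mac_addr_list_get: needs 8 entries; on a too-small buffer it
 * sets *list_size to the minimum required size and reports overflow. */
static int fake_list_get(int *buf, size_t *list_size)
{
    const size_t needed = 8;
    if (*list_size < needed) {
        *list_size = needed;
        return GNMA_ERR_OVERFLOW;
    }
    for (size_t i = 0; i < needed; i++)
        buf[i] = (int)i;
    *list_size = needed;
    return 0;
}

/* Caller side: probe for the required size, allocate, then fetch for real. */
int fetch_all(int **out, size_t *count)
{
    size_t n = 0;
    int rc = fake_list_get(NULL, &n);           /* probe required size */
    if (rc != GNMA_ERR_OVERFLOW && rc != 0)
        return rc;
    int *buf = malloc(n * sizeof(*buf));
    if (!buf)
        return -1;
    rc = fake_list_get(buf, &n);                /* real fetch */
    if (rc != 0) {
        free(buf);
        return rc;
    }
    *out = buf;
    *count = n;
    return 0;
}
```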
Serhiy Boiko
ac20c4c276 Refactor router utils
Make the for_router_db_diff macro more readable.
Use int instead of bool in _fib_info_cmp.
This also fixes the case where unreachable routes were not
rejected by the device if they were replacing blackhole
routes with the same prefix.

Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: Ic560ce85e506715509437de765de535172ccbf67
2024-01-22 17:32:06 +02:00
Oleksandr Mazur
ee4b0ca66b Update build number 1.6 b3
Signed-off-by: Oleksandr Mazur <oleksandr.mazur@plvision.eu>
Change-Id: I9aaa730610dd3f107e146be72f113964447d72d9
2024-01-22 17:31:27 +02:00
Oleksandr Mazur
977c651079 Add unavailable reboot-cause handler
Whenever the system is not yet ready to determine the reboot cause,
send an appropriate unavailable reboot-cause msg; do not
default to <crash>.

Signed-off-by: Oleksandr Mazur <oleksandr.mazur@plvision.eu>
Change-Id: I33130375889ccf689f4e5bcc4e6ee0b5ceb6f76e
2024-01-22 17:31:27 +02:00
Yevhen Orlov
3beb5f314b Update build number 1.6 b2
Signed-off-by: Yevhen Orlov <yevhen.orlov@plvision.eu>

Change-Id: If2c54a2c44dbe90c615dcb15f4c9fb285eb7a364
2024-01-22 17:31:27 +02:00
Serhiy Boiko
a84bfa8e04 Fix segfault caused by blackhole routes
The for loop incorrectly handled the array of routes, causing a
NULL pointer dereference when any routes (e.g. blackhole) were
configured at device init.
If no routes exist at init time, the loop is skipped and the
issue does not reproduce.

Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: I5f3656c470141fbfad3ceb268d91f755e33cf65f
2024-01-22 17:31:27 +02:00
Yevhen Orlov
289c74a81d Add parameter overriding to ensure correct values after upgrade
Change-Id: I43b25ea6717403dfb871c67549bc248fd22f09b9
2024-01-22 17:31:27 +02:00
Yevhen Orlov
be1138ebc6 Revert "Add MGMT_VRF config to config_db.json"
This reverts commit d85cd586a8bedef86f4793befea42b6e511d2254.

We decided to do so because we had multiple issues where some
services, started specifically for the mgmt VRF, become unavailable
when eth0 is not used in our scenarios.

Change-Id: I5fd53edba7d54acc61efa624ea798c169775a3ad
2024-01-22 17:31:27 +02:00
Serhiy Boiko
764e9f93ab Update revision naming format
Introduce a define (PLATFORM_REVISION) that can be used to
represent the current platform revision.
The new format is: "Rel %s build %s"
PLATFORM_REVISION can be passed to the make cmd as an env variable.

Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: Ifab6df704946fe283a102b1985afe9cedc39eba7
2024-01-22 17:31:27 +02:00
Serhiy Boiko
12117ebfc8 Fix Spanning Tree configuration
Set parameters per vlan instead of globally.

Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Change-Id: I011db1d4a800d0dc8463932fe692c4b57f898a55
2024-01-22 17:28:59 +02:00
120 changed files with 30975 additions and 3635 deletions

.gitignore vendored

@@ -5,3 +5,30 @@ src/docker/ucentral-client
*.orig
*.rej
docker/*
CLAUDE.md
# Test artifacts and generated files
tests/config-parser/test-config-parser
tests/config-parser/test-report.*
tests/config-parser/test-results*.txt
tests/config-parser/*.bak
output/
# Platform build artifacts
*.a
*.full.a
*.pb.cc
*.pb.h
*_protoc_stamp
# Generated gRPC/protobuf files
src/ucentral-client/platform/*/gnma/gnmi/*.pb.*
src/ucentral-client/platform/*/gnma/gnmi/*.grpc.pb.*
src/ucentral-client/platform/*/gnma/*.a
src/ucentral-client/platform/*/netlink/*.a
# Schema repository (fetched by tools)
ols-ucentral-schema/
# Backup files
*.backup


@@ -1,9 +1,12 @@
FROM debian:buster
FROM debian:bullseye
LABEL Description="Ucentral client (Build) environment"
ARG HOME /root
ARG EXTERNAL_LIBS ${HOME}/ucentral-external-libs
ARG SCHEMA="release/v5.0.0"
ARG SCHEMA_VERSION="${SCHEMA}"
ARG SCHEMA_ZIP_FILE="${SCHEMA_VERSION}.zip"
ARG SCHEMA_UNZIPPED="ols-ucentral-schema-${SCHEMA}"
ARG OLS_SCHEMA_SRC="https://github.com/Telecominfraproject/ols-ucentral-schema/archive/refs/heads/${SCHEMA_ZIP_FILE}"
SHELL ["/bin/bash", "-c"]
RUN apt-get update -q -y && apt-get -q -y --no-install-recommends install \
@@ -15,19 +18,26 @@ RUN apt-get update -q -y && apt-get -q -y --no-install-recommends install \
libcurl4-openssl-dev \
libev-dev \
libssl-dev \
libnl-route-3-dev \
libnl-3-dev \
apt-utils \
git \
wget \
autoconf \
libtool \
pkg-config \
libjsoncpp-dev
libjsoncpp-dev \
unzip \
python3 \
python3-jsonschema
RUN git config --global http.sslverify false
RUN git clone https://github.com/DaveGamble/cJSON.git ${HOME}/ucentral-external-libs/cJSON/
RUN git clone https://libwebsockets.org/repo/libwebsockets ${HOME}/ucentral-external-libs/libwebsockets/
RUN git clone --recurse-submodules -b v1.50.0 --depth 1 --shallow-submodules https://github.com/grpc/grpc ${HOME}/ucentral-external-libs/grpc/
RUN git clone --recursive --branch v7.1.4 https://github.com/zhaojh329/rtty.git ${HOME}/ucentral-external-libs/rtty/
ADD ${OLS_SCHEMA_SRC} /tmp/
# The following libs should be prebuilt in docker-build-env img to speed-up
# recompilation of only the ucentral-client itself
@@ -39,6 +49,8 @@ RUN cd ${HOME}/ucentral-external-libs/cJSON/ && \
make install
RUN cd ${HOME}/ucentral-external-libs/libwebsockets/ && \
git branch --all && \
git checkout a9b8fe7ebf61b8c0e7891e06e70d558412933a33 && \
mkdir build && \
cd build && \
cmake .. && \
@@ -60,3 +72,13 @@ RUN cd ${HOME}/ucentral-external-libs/rtty/ && \
cd build && \
cmake .. && \
make -j4
RUN unzip /tmp/${SCHEMA_ZIP_FILE} -d ${HOME}/ucentral-external-libs/
RUN cd ${HOME}/ucentral-external-libs/ && \
mv ${SCHEMA_UNZIPPED} ols-ucentral-schema
# Copy version files to /etc/ for runtime use
COPY version.json /etc/version.json
RUN mkdir -p /etc && \
cp ${HOME}/ucentral-external-libs/ols-ucentral-schema/schema.json /etc/schema.json
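For reference, the schema download URL that the `ARG` chain above assembles can be re-derived in plain shell (values copied from the diff; GitHub serves branch archives under `refs/heads/<branch>.zip`):

```shell
# Re-derive the schema archive URL built up by the Dockerfile ARGs.
SCHEMA="release/v5.0.0"
SCHEMA_ZIP_FILE="${SCHEMA}.zip"
OLS_SCHEMA_SRC="https://github.com/Telecominfraproject/ols-ucentral-schema/archive/refs/heads/${SCHEMA_ZIP_FILE}"
echo "${OLS_SCHEMA_SRC}"
```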

LICENSE Normal file

@@ -0,0 +1,28 @@
BSD 3-Clause License
Copyright (c) 2024, Telecom Infra Project
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@@ -7,8 +7,7 @@ IMG_ID := "ucentral-client-build-env"
IMG_TAG := $(shell cat Dockerfile | sha1sum | awk '{print substr($$1,0,11);}')
CONTAINER_NAME := "ucentral_client_build_env"
.PHONY: all clean build-host-env build-final-deb build-ucentral-docker-img run-host-env run-ucentral-docker-img \
plat-ec plat-ec-clean
.PHONY: all clean build-host-env build-final-deb build-ucentral-docker-img run-host-env run-ucentral-docker-img
all: build-host-env build-ucentral-app build-ucentral-docker-img build-final-deb
@@ -21,9 +20,9 @@ build-host-env:
docker build --file Dockerfile --tag ${IMG_ID}:${IMG_TAG} docker
@echo Docker build done;
@echo Saving docker img to local archive...;
if [ ! -f output/docker-ucentral-client-build-env-${IMG_TAG}.gz ] ; then
if [ ! -f output/docker-ucentral-client-build-env-${IMG_TAG}.gz ] ; then \
docker save ${IMG_ID}:${IMG_TAG} | gzip -c - > \
output/docker-ucentral-client-build-env-${IMG_TAG}.gz;
output/docker-ucentral-client-build-env-${IMG_TAG}.gz; \
fi
@echo Docker save done...;
@@ -33,6 +32,7 @@ run-host-env: build-host-env
docker run -d -t --name ${CONTAINER_NAME} \
-v $(realpath ./):/root/ols-nos \
--env UCENTRAL_PLATFORM=$(UCENTRAL_PLATFORM) \
--env PLATFORM_REVISION="$(PLATFORM_REVISION)" \
${IMG_ID}:${IMG_TAG} \
bash
@@ -50,8 +50,13 @@ build-ucentral-app: run-host-env
@echo Running ucentralclient docker-build-env container to build ucentral-client...;
docker exec -t ${CONTAINER_NAME} /root/ols-nos/docker-build-client.sh
docker cp ${CONTAINER_NAME}:/root/deliverables/ src/docker/
# copy the schema version, if it is there
docker cp ${CONTAINER_NAME}:/root/ucentral-external-libs/ols-ucentral-schema/schema.json src/docker/ || true
docker container stop ${CONTAINER_NAME} > /dev/null 2>&1 || true;
docker container rm ${CONTAINER_NAME} > /dev/null 2>&1 || true;
if [ -f version.json ]; then \
cp version.json src/docker/; \
fi
build-ucentral-docker-img: build-ucentral-app
pushd src
@@ -61,8 +66,8 @@ build-ucentral-docker-img: build-ucentral-app
OLDIMG=$$(docker images --format "{{.ID}}" ucentral-client:latest)
docker build --file docker/Dockerfile --tag ucentral-client:latest docker
NEWIMG=$$(docker images --format "{{.ID}}" ucentral-client:latest)
if [ -n "$$OLDIMG" ] && [ ! "$$OLDIMG" = "$$NEWIMG" ]; then
docker image rm $$OLDIMG
if [ -n "$$OLDIMG" ] && [ ! "$$OLDIMG" = "$$NEWIMG" ]; then \
docker image rm $$OLDIMG; \
fi
docker save ucentral-client:latest |gzip -c - > docker-ucentral-client.gz
popd
@@ -81,9 +86,6 @@ build-final-deb: build-ucentral-docker-img
@echo
@echo "ucentral client deb pkg is available under ./output/ dir"
plat-ec:
src/ec-private/build.sh
clean:
docker container stop ${CONTAINER_NAME} > /dev/null 2>&1 || true;
docker container rm ${CONTAINER_NAME} > /dev/null 2>&1 || true;
@@ -94,19 +96,11 @@ clean:
rm -rf src/docker/deliverables || true;
rm -rf src/docker/lib* || true;
rm -rf src/docker/ucentral-client || true;
rm -rf src/docker/version.json || true;
rm -rf src/docker/schema.json || true;
rm -rf src/debian/ucentral-client.substvars 2>/dev/null || true;
rm -rf src/debian/shasta-ucentral-client.debhelper.log 2>/dev/null || true;
rm -rf src/debian/.debhelper src/debian/ucentral-client 2>/dev/null || true;
rm -rf src/debian/shasta-ucentral-client* 2>/dev/null || true;
rm -rf src/debian/debhelper-build-stamp* 2>/dev/null || true;
rm -rf src/debian/files shasta_1.0_amd64.changes shasta_1.0_amd64.buildinfo 2>/dev/null || true;
plat-ec-clean:
rm -rf src/ec-private/cjson
rm -rf src/ec-private/curl
rm -rf src/ec-private/libwebsockets
rm -rf src/ec-private/openssl
rm -rf src/ec-private/openssl
rm -rf src/ec-private/ecapi/build
rm -rf src/ec-private/ucentral
rm -rf output
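The trailing `\` and `;` additions in the recipes above matter because make runs each recipe line in its own shell: a multi-line `if … fi` must be joined into one logical line, or the shell sees a truncated command and errors out. A minimal illustration (plain shell, not tied to this Makefile):

```shell
# A make recipe executes each line in a separate shell, so an if/fi
# split across lines without continuations is a syntax error.
# Joined into one logical line, it is a single valid command:
joined='if [ ! -f /nonexistent-marker ]; then echo "saving image"; fi'
sh -c "$joined"
# prints: saving image
```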

PREREQUISITES.md Normal file

@@ -0,0 +1,544 @@
# Prerequisites and Dependencies
This document lists all tools, libraries, and dependencies required for building, testing, and developing the OLS uCentral Client.
---
## Table of Contents
1. [Core Build Requirements](#core-build-requirements)
2. [Testing Framework Requirements](#testing-framework-requirements)
3. [Property Database Generation Requirements](#property-database-generation-requirements)
4. [Schema Repository Access](#schema-repository-access)
5. [Optional Tools](#optional-tools)
6. [Quick Setup Guide](#quick-setup-guide)
7. [Verification Commands](#verification-commands)
---
## Core Build Requirements
### Required for Building the Application
| Tool | Version | Purpose | Installation |
|------|---------|---------|-------------|
| **Docker** | 20.10+ | Dockerized build environment | [Install Docker](https://docs.docker.com/get-docker/) |
| **Docker Compose** | 1.29+ (optional) | Multi-container orchestration | Included with Docker Desktop |
| **Make** | 3.81+ | Build system | Usually pre-installed on Linux/macOS |
| **Git** | 2.20+ | Version control and schema fetching | `apt install git` or `brew install git` |
| **Bash** | 4.0+ | Shell scripts | Usually pre-installed |
### Build Process Dependencies (Inside Docker)
These are **automatically installed** inside the Docker build environment (no manual installation required):
- **GCC/G++** - C/C++ compiler
- **CMake** - Build system generator
- **cJSON** - JSON parsing library
- **libwebsockets** - WebSocket client library
- **OpenSSL** - TLS/SSL support
- **gRPC** - RPC framework (platform-specific)
- **Protocol Buffers** - Serialization (platform-specific)
- **jsoncpp** - JSON library (C++)
**Note:** You do NOT need to install these manually - Docker handles everything!
---
## Testing Framework Requirements
### Required for Running Tests
| Tool | Version | Purpose | Required For |
|------|---------|---------|-------------|
| **Python 3** | 3.7+ | Test scripts and schema validation | All testing |
| **PyYAML** | 5.1+ | YAML schema parsing | Schema-based database generation |
| **Docker** | 20.10+ | Consistent test environment | Recommended (but optional) |
### Python Package Dependencies
Install via pip:
```bash
# Install all Python dependencies
pip3 install pyyaml
# Or if you prefer using requirements file (see below)
pip3 install -r tests/requirements.txt
```
**Detailed breakdown:**
1. **PyYAML** (`yaml` module)
- Used by: `tests/tools/extract-schema-properties.py`
- Purpose: Parse YAML schema files from ols-ucentral-schema
- Installation: `pip3 install pyyaml`
- Version: 5.1 or later
2. **Standard Library Only** (no additional packages needed)
- Used by: `tests/schema/validate-schema.py`
- Built-in modules: `json`, `sys`, `argparse`, `os`
- Used by: Most other Python scripts
- Built-in modules: `pathlib`, `typing`, `re`, `subprocess`
### Test Configuration Files
These are **included in the repository** (no installation needed):
- Configuration samples: `config-samples/*.json`
- JSON schema: `config-samples/ucentral.schema.pretty.json`
- Test framework: `tests/config-parser/test-config-parser.c`
---
## Property Database Generation Requirements
### Required for Database Generation/Regeneration
| Tool | Version | Purpose | When Needed |
|------|---------|---------|-------------|
| **Python 3** | 3.7+ | Database generation scripts | Database regeneration |
| **PyYAML** | 5.1+ | Schema parsing | Schema-based generation |
| **Git** | 2.20+ | Fetch ols-ucentral-schema | Schema access |
| **Bash** | 4.0+ | Schema fetch script | Automated schema fetching |
### Scripts Overview
1. **Schema Extraction:**
- `tests/tools/extract-schema-properties.py` - Extract properties from YAML schema
- Dependencies: Python 3.7+, PyYAML
- Input: ols-ucentral-schema YAML files
- Output: Property list (text)
2. **Line Number Finder:**
- `tests/tools/find-property-line-numbers.py` - Find property parsing locations
- Dependencies: Python 3.7+ (standard library only)
- Input: proto.c + property list
- Output: Property database with line numbers
3. **Database Regeneration:**
- `tests/tools/rebuild-property-database.py` - Master regeneration script
- Dependencies: Python 3.7+ (standard library only)
- Input: proto.c + config files
- Output: Complete property database
4. **Database Updater:**
- `tests/tools/update-test-config-parser.py` - Update test file with new database
- Dependencies: Python 3.7+ (standard library only)
- Input: test-config-parser.c + new database
- Output: Updated test file
5. **Schema Fetcher:**
- `tests/tools/fetch-schema.sh` - Fetch/update ols-ucentral-schema
- Dependencies: Bash, Git
- Input: Current branch name
- Output: Downloaded schema repository
---
## Schema Repository Access
### ols-ucentral-schema Repository
The uCentral configuration schema is maintained in a separate GitHub repository:
**Repository:** https://github.com/Telecominfraproject/ols-ucentral-schema
### Access Methods
#### Method 1: Automatic Fetching (Recommended)
Use the provided script with intelligent branch matching:
```bash
cd tests/tools
# Auto-detect branch (matches client branch to schema branch)
./fetch-schema.sh
# Force specific branch
./fetch-schema.sh --branch main
./fetch-schema.sh --branch release-1.0
# Check what branch would be used
./fetch-schema.sh --check-only
# Force re-download
./fetch-schema.sh --force
```
**Branch Matching Logic:**
- Client on `main` → Uses schema `main`
- Client on `release-x` → Tries schema `release-x`, falls back to `main`
- Client on feature branch → Uses schema `main`
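The matching rule can be sketched as a small shell function (a simplification: `fetch-schema.sh` is authoritative, and the release-branch fallback to `main` only happens after a remote check not shown here):

```shell
# Pick the schema branch for a given client branch (sketch of the
# documented matching logic; the real script also probes the remote).
pick_schema_branch() {
  case "$1" in
    main)      echo "main" ;;
    release-*) echo "$1" ;;   # falls back to main if the branch is missing
    *)         echo "main" ;; # feature branches use main
  esac
}
pick_schema_branch "feature/pki-2.0"
# prints: main
```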
#### Method 2: Manual Clone
```bash
# Clone to recommended location (peer to ols-ucentral-client)
cd /path/to/projects
git clone https://github.com/Telecominfraproject/ols-ucentral-schema.git
# Or clone to custom location and set path in tools
git clone https://github.com/Telecominfraproject/ols-ucentral-schema.git /custom/path
```
#### Method 3: Web Access (Read-Only)
View schema files directly on GitHub:
- Browse: https://github.com/Telecominfraproject/ols-ucentral-schema/tree/main/schema
- Raw files: `https://raw.githubusercontent.com/Telecominfraproject/ols-ucentral-schema/main/schema/ucentral.yml`
### Schema Directory Structure
Expected schema layout:
```
ols-ucentral-schema/
├── schema/
│ ├── ucentral.yml # Root schema
│ ├── ethernet.yml
│ ├── interface.ethernet.yml
│ ├── switch.yml
│ ├── unit.yml
│ └── ... (40+ YAML files)
└── README.md
```
### Schema Location Configuration
Default location (peer to client repository):
```
/path/to/projects/
├── ols-ucentral-client/ # This repository
│ └── tests/tools/
└── ols-ucentral-schema/ # Schema repository (default)
└── schema/
```
Custom location (set in scripts):
```bash
# Edit schema path in tools
SCHEMA_DIR="/custom/path/to/ols-ucentral-schema"
```
---
## Optional Tools
### Development and Debugging
| Tool | Purpose | Installation |
|------|---------|-------------|
| **GDB** | C debugger | `apt install gdb` or `brew install gdb` |
| **Valgrind** | Memory leak detection | `apt install valgrind` |
| **clang-format** | Code formatting | `apt install clang-format` |
| **cppcheck** | Static analysis | `apt install cppcheck` |
### Documentation
| Tool | Purpose | Installation |
|------|---------|-------------|
| **Doxygen** | API documentation | `apt install doxygen` |
| **Graphviz** | Diagram generation | `apt install graphviz` |
| **Pandoc** | Markdown conversion | `apt install pandoc` |
### CI/CD Integration
| Tool | Purpose | Notes |
|------|---------|-------|
| **GitHub Actions** | Automated testing | Configuration in `.github/workflows/` |
| **Jenkins** | Build automation | JUnit XML output supported |
| **GitLab CI** | CI/CD pipeline | Docker-based builds supported |
---
## Quick Setup Guide
### For Building Only
```bash
# 1. Install Docker
# Follow: https://docs.docker.com/get-docker/
# 2. Clone repository
git clone https://github.com/Telecominfraproject/ols-ucentral-client.git
cd ols-ucentral-client
# 3. Build everything
make all
# Done! The .deb package is in output/
```
### For Testing
```bash
# 1. Install Python 3 and pip (if not already installed)
# Ubuntu/Debian:
sudo apt update
sudo apt install python3 python3-pip
# macOS:
brew install python3
# 2. Install Python dependencies
pip3 install pyyaml
# 3. Run tests
cd tests/config-parser
make test-config-full
# Or use Docker (recommended)
docker exec ucentral_client_build_env bash -c \
"cd /root/ols-nos/tests/config-parser && make test-config-full"
```
### For Database Generation
```bash
# 1. Ensure Python 3 and PyYAML are installed (see above)
# 2. Fetch schema repository
cd tests/tools
./fetch-schema.sh
# 3. Extract properties from schema
python3 extract-schema-properties.py \
../../ols-ucentral-schema/schema \
ucentral.yml \
--filter switch --filter ethernet
# 4. Follow property database generation guide
# See: tests/PROPERTY_DATABASE_GENERATION_GUIDE.md
```
---
## Verification Commands
### Verify Docker Installation
```bash
docker --version
# Expected: Docker version 20.10.0 or later
docker ps
# Expected: No errors (should list running containers or empty list)
```
### Verify Python Installation
```bash
python3 --version
# Expected: Python 3.7.0 or later
pip3 --version
# Expected: pip 20.0.0 or later
```
### Verify Python Dependencies
```bash
python3 -c "import yaml; print('PyYAML:', yaml.__version__)"
# Expected: PyYAML: 5.1 or later
python3 -c "import json; print('JSON: built-in')"
# Expected: JSON: built-in (no errors)
```
### Verify Git Installation
```bash
git --version
# Expected: git version 2.20.0 or later
git config --get user.name
# Expected: Your name (if configured)
```
### Verify Build Environment
```bash
# Check if Docker image exists
docker images | grep ucentral-client-build-env
# Expected: ucentral-client-build-env image listed (after first build)
# Check if container is running
docker ps | grep ucentral_client_build_env
# Expected: Container listed (if build-host-env was run)
```
### Verify Schema Access
```bash
# Check schema repository
cd tests/tools
./fetch-schema.sh --check-only
# Expected: Shows which branch would be used
# Verify schema files
ls -la ../../ols-ucentral-schema/schema/ucentral.yml
# Expected: File exists (after fetch-schema.sh)
```
---
## Troubleshooting
### Docker Issues
**Problem:** "Cannot connect to Docker daemon"
```bash
# Solution: Start Docker Desktop or daemon
sudo systemctl start docker # Linux
# Or open Docker Desktop app (macOS/Windows)
```
**Problem:** "Permission denied" when running Docker
```bash
# Solution: Add user to docker group (Linux)
sudo usermod -aG docker $USER
# Log out and back in for changes to take effect
```
### Python Issues
**Problem:** "ModuleNotFoundError: No module named 'yaml'"
```bash
# Solution: Install PyYAML
pip3 install pyyaml
# If pip3 install fails, try:
python3 -m pip install pyyaml
```
**Problem:** "python3: command not found"
```bash
# Solution: Install Python 3
# Ubuntu/Debian:
sudo apt install python3
# macOS:
brew install python3
```
### Schema Access Issues
**Problem:** "Schema repository not found"
```bash
# Solution: Fetch schema manually
cd tests/tools
./fetch-schema.sh --force
# Or clone manually
git clone https://github.com/Telecominfraproject/ols-ucentral-schema.git ../../ols-ucentral-schema
```
**Problem:** "Branch 'release-x' not found in schema repository"
```bash
# Solution: Use main branch
./fetch-schema.sh --branch main
# Or check available branches
git ls-remote --heads https://github.com/Telecominfraproject/ols-ucentral-schema.git
```
---
## Platform-Specific Notes
### Ubuntu/Debian
```bash
# Install all required tools
sudo apt update
sudo apt install -y \
docker.io \
docker-compose \
python3 \
python3-pip \
git \
make \
bash
# Install Python dependencies
pip3 install pyyaml
# Add user to docker group
sudo usermod -aG docker $USER
# Log out and back in
```
### macOS
```bash
# Install Docker Desktop
# Download from: https://docs.docker.com/desktop/mac/install/
# Install Homebrew (if not installed)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Install tools
brew install python3 git
# Install Python dependencies
pip3 install pyyaml
```
### Windows (WSL2)
```bash
# Install Docker Desktop for Windows with WSL2 backend
# Download from: https://docs.docker.com/desktop/windows/install/
# Inside WSL2 Ubuntu:
sudo apt update
sudo apt install -y python3 python3-pip git make
# Install Python dependencies
pip3 install pyyaml
```
---
## Dependencies Summary
### Minimal (Build Only)
- Docker
### Standard (Build + Test)
- Docker
- Python 3.7+
- PyYAML
### Full (Build + Test + Database Generation)
- Docker
- Python 3.7+
- PyYAML
- Git
- Bash
### Everything Included in Repository
- Test framework (C code)
- Test configurations
- JSON schema
- Python scripts
- Shell scripts
- Documentation
---
## See Also
- **[README.md](README.md)** - Main project documentation
- **[TESTING_FRAMEWORK.md](TESTING_FRAMEWORK.md)** - Testing overview
- **[tests/MAINTENANCE.md](tests/MAINTENANCE.md)** - Schema-based database generation workflow
- **[tests/README.md](tests/README.md)** - Complete testing documentation
---
## Questions or Issues?
- **GitHub Issues:** https://github.com/Telecominfraproject/ols-ucentral-client/issues
- **Schema Issues:** https://github.com/Telecominfraproject/ols-ucentral-schema/issues
- **Documentation:** Check the `tests/` directory for detailed guides

QUICK_START_TESTING.md Normal file

@@ -0,0 +1,218 @@
# Quick Start: Testing Guide
## TL;DR
```bash
# Test all configs with human-readable output (default)
./run-config-tests.sh
# Generate HTML report
./run-config-tests.sh --format html
# Test single config with HTML output
./run-config-tests.sh --format html cfg0.json
# Results are in: output/
```
## Common Commands
### Test All Configurations
```bash
# Stub mode (default - fast, proto.c parsing only)
./run-config-tests.sh # Console output with colors (default)
./run-config-tests.sh --format html # Interactive HTML report
./run-config-tests.sh --format json # Machine-readable JSON
# Platform mode (integration testing with platform code)
./run-config-tests.sh --mode platform # Console output
./run-config-tests.sh --mode platform --format html # HTML report
```
### Test Single Configuration
```bash
# Stub mode (default)
./run-config-tests.sh cfg0.json # Human output (default)
./run-config-tests.sh --format html cfg0.json # HTML report
./run-config-tests.sh --format json cfg0.json # JSON output
# Platform mode
./run-config-tests.sh --mode platform cfg0.json # Human output
./run-config-tests.sh --mode platform --format html cfg0.json # HTML report
```
### View Results
```bash
# Open HTML report in browser
open output/test-report.html # macOS
xdg-open output/test-report.html # Linux
# View text results
cat output/test-results.txt
# Parse JSON results
cat output/test-report.json | jq '.summary'
```
## What the Script Does
1. ✅ Checks Docker is running
2. ✅ Builds Docker environment (only if needed)
3. ✅ Starts/reuses container
4. ✅ Runs tests inside container
5. ✅ Copies results to `output/` directory
6. ✅ Shows summary
## Output Formats
| Format | Use Case | Output File |
|--------|----------|-------------|
| `human` | Interactive development, debugging | `output/test-results.txt` |
| `html` | Reports, sharing, presentations | `output/test-report.html` |
| `json` | CI/CD, automation, metrics | `output/test-report.json` |
## First Run vs Subsequent Runs
**First Run (cold start):**
- Builds Docker environment: ~10 minutes (one-time)
- Runs tests: ~30 seconds
- **Total: ~10 minutes**
**Subsequent Runs (warm start):**
- Reuses environment: ~2 seconds
- Runs tests: ~30 seconds
- **Total: ~30 seconds**
## Troubleshooting
### Docker not running
```bash
# Start Docker Desktop (macOS/Windows)
# OR
sudo systemctl start docker # Linux
```
### Permission denied
```bash
chmod +x run-config-tests.sh
```
### Config not found
```bash
# List available configs
ls config-samples/*.json
```
## CI/CD Integration
### Exit Codes
- `0` = All tests passed ✅
- `1` = Tests failed ❌
- `2` = System error ⚠️
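In a plain shell wrapper, the three exit codes can be dispatched like this (a sketch; only the exit-code meanings come from this guide):

```shell
# Map run-config-tests.sh exit codes to messages (sketch).
report_result() {
  case "$1" in
    0) echo "all tests passed" ;;
    1) echo "tests failed" ;;
    2) echo "system error" ;;
    *) echo "unexpected exit code: $1" ;;
  esac
}
# usage after a test run:
#   ./run-config-tests.sh --format json; report_result "$?"
report_result 0
```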
### Example Pipeline
```yaml
- name: Run tests
  run: |
    # Check the exit code in the same step: a later step's $? cannot
    # see this step's result, since each step runs in a fresh shell.
    if ./run-config-tests.sh --format json; then
      echo "✅ All tests passed"
    else
      echo "❌ Tests failed"
      exit 1
    fi
```
## Available Test Configs
```bash
# List all configs
ls -1 config-samples/*.json | xargs -n1 basename
# Common test configs:
cfg0.json # Basic config
ECS4150-TM.json # Traffic management
ECS4150-ACL.json # Access control lists
ECS4150STP_RSTP.json # Spanning tree
ECS4150_IGMP_Snooping.json # IGMP snooping
ECS4150_POE.json # Power over Ethernet
ECS4150_VLAN.json # VLAN configuration
```
## What Gets Tested
✅ JSON schema validation (structure, types, constraints)
✅ Parser validation (actual C parser implementation)
✅ Property tracking (configured vs unknown properties)
✅ Feature coverage (implemented vs documented features)
✅ Error handling (invalid configs, missing fields)
## Test Modes
### Stub Mode (Default - Fast)
- Tests proto.c parsing only
- Uses simple platform stubs
- Shows base properties only
- Execution time: ~30 seconds
- Use for: Quick validation, CI/CD
### Platform Mode (Integration)
- Tests proto.c + platform code (plat-gnma.c)
- Uses platform implementation with mocks
- Shows base AND platform properties
- Tracks hardware application functions
- Execution time: ~45 seconds
- Use for: Platform-specific validation
## Quick Reference
| Task | Command |
|------|---------|
| Test everything (stub) | `./run-config-tests.sh` |
| Test everything (platform) | `./run-config-tests.sh --mode platform` |
| HTML report | `./run-config-tests.sh --format html` |
| JSON output | `./run-config-tests.sh --format json` |
| Single config | `./run-config-tests.sh cfg0.json` |
| Single config HTML | `./run-config-tests.sh -f html cfg0.json` |
| Platform mode single config | `./run-config-tests.sh -m platform cfg0.json` |
| View HTML | `open output/test-report.html` |
| View results | `cat output/test-results.txt` |
| Parse JSON | `cat output/test-report.json \| jq` |
## Full Documentation
- **TEST_RUNNER_README.md** - Complete script documentation
- **TESTING_FRAMEWORK.md** - Testing framework overview
- **tests/config-parser/TEST_CONFIG_README.md** - Detailed testing guide
- **TEST_CONFIG_PARSER_DESIGN.md** - Test framework architecture
- **tests/MAINTENANCE.md** - Maintenance procedures
- **README.md** - Project overview and build instructions
## Directory Structure
```
ols-ucentral-client/
├── run-config-tests.sh ← Test runner script
├── output/ ← Test results go here
├── config-samples/ ← Test configurations
└── tests/
├── config-parser/
│ ├── test-config-parser.c ← Test implementation
│ ├── test-stubs.c ← Platform stubs
│ ├── config-parser.h ← Test header
│ ├── Makefile ← Test build system
│ └── TEST_CONFIG_README.md ← Detailed guide
├── schema/
│ ├── validate-schema.py ← Schema validator
│ └── SCHEMA_VALIDATOR_README.md
├── tools/ ← Property database tools
└── MAINTENANCE.md ← Maintenance procedures
```
---
**Need help?** Check TEST_RUNNER_README.md for troubleshooting and advanced usage.

README.md

@@ -1,9 +1,41 @@
# What is it?
This repo holds the source code for OLS (OpenLAN Switching) uCentral client implementation.
It implements both the ZTP Gateway discovery procedure (calling the ZTP Redirector and locating
the desired Gateway to connect to), as well as handlers to process the Gateway requests.
Upon connecting to the Gateway service, the device can be provisioned and controlled from the
cloud in the same manner as uCentral OpenWifi APs do.
# Prerequisites
## Required Tools
- **Docker** (20.10+) - Dockerized build environment
- **Git** (2.20+) - Version control and schema repository access
- **Make** (3.81+) - Build system
## For Testing and Database Generation
- **Python 3** (3.7+) - Test scripts and database generation
- **PyYAML** (5.1+) - YAML schema parsing
Install Python dependencies:
```bash
pip3 install -r tests/requirements.txt
# Or: pip3 install pyyaml
```
## Schema Repository
The uCentral configuration schema is maintained in a separate repository:
- **Repository:** https://github.com/Telecominfraproject/ols-ucentral-schema
Fetch schema automatically (with intelligent branch matching):
```bash
cd tests/tools
./fetch-schema.sh
```
**See [PREREQUISITES.md](PREREQUISITES.md) for complete setup guide and troubleshooting.**
# Build System
The build system implements an automated Docker-based build environment to produce a final
@@ -94,33 +126,155 @@ Technically same as 'all'. Produces final .deb pkg that can be copied to target
and installed as native deb pkg.
# Certificates
TIP Certificates should be preinstalled upon launch of the service. uCentral
client uses certificates to establish a secure connection with both Redirector
(firstcontact), as well as the GW itself.
## PKI 2.0 Architecture
The uCentral client implements **PKI 2.0** using the EST (Enrollment over Secure Transport, RFC 7030) protocol for automated certificate lifecycle management.
### Certificate Types
**Birth Certificates** (Factory-provisioned):
- `cert.pem` - Birth certificate (factory-issued)
- `key.pem` - Private key
- `cas.pem` - CA certificate bundle
- `dev-id` - Device identifier
**Operational Certificates** (Runtime-generated):
- `operational.pem` - Operational certificate (obtained via EST)
- `operational.ca` - Operational CA certificate
### Certificate Lifecycle
1. **Factory Provisioning**: Birth certificates are provisioned to the device partition during manufacturing or initial setup
2. **First Boot**: On first boot, the uCentral client automatically enrolls with the EST server using birth certificates to obtain operational certificates
3. **Runtime**: The client uses operational certificates for all gateway connections
4. **Renewal**: Operational certificates can be renewed via the `reenroll` RPC command before expiration
### Automatic EST Enrollment
The client automatically:
- Detects the EST server based on certificate issuer (QA vs Production)
- Enrolls with the EST server to obtain operational certificates
- Saves operational certificates to `/etc/ucentral/`
- Falls back to birth certificates if enrollment fails
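
The fallback rule above can be sketched in a few lines of C. This is a minimal illustration, not the client's actual implementation: the real code decides from the filesystem and TLS setup, and the boolean parameter here stands in for that check. The paths mirror the documented certificate locations.

```c
#include <assert.h>
#include <string.h>

/* Sketch: pick the certificate for the gateway connection. Use the
 * operational certificate when EST enrollment produced one, otherwise
 * fall back to the factory-provisioned birth certificate. */
const char *gateway_cert_path(int have_operational)
{
    return have_operational ? "/etc/ucentral/operational.pem"
                            : "/etc/ucentral/cert.pem";
}
```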
### Installing Birth Certificates
Birth certificates should be preinstalled before launching the service. Use `partition_script.sh` (included in this repo) to provision certificates to the device partition.
**Steps (execute on device):**
1. Enter superuser mode:
```bash
sudo su
```
2. Create temp directory and copy certificates + script:
```bash
mkdir /tmp/temp
cd /tmp/temp/
scp <remote_host>:/certificates/<some_mac>.tar ./
scp <remote_host>:/partition_script.sh ./
tar -xvf ./<some_mac>.tar
```
3. Run the partition script, passing the path to the certificates directory as its single argument (invoke it with `bash` explicitly; most ONIE builds bundle only busybox `sh`):
```bash
bash ./partition_script.sh ./
```
4. Reboot the device:
```bash
reboot
```
After reboot, the uCentral service will:
- Mount the certificate partition (located via udev by-label: `/dev/disk/by-label/ONIE-TIP-CA-CERT`)
- Automatically perform EST enrollment to obtain operational certificates
- Connect to the gateway using operational certificates
### Certificate Renewal
To renew operational certificates before expiration, send the `reenroll` RPC command from the gateway:
```json
{
"jsonrpc": "2.0",
"id": 123,
"method": "reenroll",
"params": {
"serial": "device_serial_number"
}
}
```
The client will:
1. Contact the EST server with current operational certificate
2. Obtain a renewed operational certificate
3. Save the new certificate to `/etc/ucentral/operational.pem`
4. Restart after 10 seconds to use the new certificate
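
The four steps above can be sketched as a small handler. `est_simple_reenroll()` and `est_get_error()` are the EST client stub names used by the config parser test suite, but their real signatures are not shown in this document, so stand-in stubs are defined here; the `restart_delay_s` variable is purely illustrative.

```c
#include <assert.h>
#include <stdio.h>

/* Stand-in stubs; the real EST client contacts the server and saves the
 * renewed certificate to /etc/ucentral/operational.pem. */
static int est_simple_reenroll(void) { return 0; }        /* stub: success */
static const char *est_get_error(void) { return "none"; } /* stub */

static int restart_delay_s; /* illustrative: seconds until restart */

int handle_reenroll(void)
{
    /* Steps 1-3: contact the EST server with the current operational
     * certificate, obtain a renewed one, and save it. */
    if (est_simple_reenroll() != 0) {
        fprintf(stderr, "reenroll failed: %s\n", est_get_error());
        return -1;
    }
    restart_delay_s = 10; /* Step 4: restart after 10 seconds */
    return 0;
}
```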
### EST Servers
- **QA**: `qaest.certificates.open-lan.org:8001` (for Demo Birth CA)
- **Production**: `est.certificates.open-lan.org` (for Production Birth CA)
The EST server is automatically selected based on the certificate issuer field.
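
As a hedged sketch, the issuer-based selection might look like the following. The real client inspects the X.509 issuer field of the birth certificate; the `"Demo"` substring match below is illustrative only.

```c
#include <assert.h>
#include <string.h>

/* Sketch: map the certificate issuer to the matching EST server. */
const char *select_est_server(const char *issuer)
{
    if (strstr(issuer, "Demo"))
        return "qaest.certificates.open-lan.org:8001"; /* QA */
    return "est.certificates.open-lan.org";            /* Production */
}
```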
### Certificate Files Location
- Birth certificates: `/etc/ucentral/cert.pem`, `/etc/ucentral/key.pem`, `/etc/ucentral/cas.pem`
- Operational certificates: `/etc/ucentral/operational.pem`, `/etc/ucentral/operational.ca`
# Testing
The repository includes a comprehensive testing framework for configuration validation:
## Running Tests
**Quick Start:**
```bash
# Using the test runner script (recommended)
./run-config-tests.sh
# Generate HTML report
./run-config-tests.sh --format html
# Or run tests directly in the tests directory
cd tests/config-parser
make test-config-full
```
**Docker-based Testing (recommended for consistency):**
```bash
# Run in Docker environment
docker exec ucentral_client_build_env bash -c \
"cd /root/ols-nos/tests/config-parser && make test-config-full"
```
## Test Framework
The testing framework validates configurations through two layers:
1. **Schema Validation** - JSON structure validation against uCentral schema
2. **Parser Testing** - Actual C parser implementation testing with property tracking
Tests are organized in the `tests/` directory:
- `tests/config-parser/` - Configuration parser tests
- `tests/schema/` - Schema validation
- `tests/tools/` - Property database generation tools
- `tests/unit/` - Unit tests
## Documentation
- **[TESTING_FRAMEWORK.md](TESTING_FRAMEWORK.md)** - Testing overview and quick reference
- **[tests/README.md](tests/README.md)** - Complete testing documentation
- **[tests/config-parser/TEST_CONFIG_README.md](tests/config-parser/TEST_CONFIG_README.md)** - Detailed testing guide
- **[tests/MAINTENANCE.md](tests/MAINTENANCE.md)** - Schema and property database maintenance
## Test Configuration
Test configurations are located in `config-samples/`:
- 21 positive test configurations covering various features
- 4 negative test configurations for error handling validation
- JSON schema: `config-samples/ucentral.schema.pretty.json`

---
**TESTING_FRAMEWORK.md**
# Configuration Testing Framework
## Overview
The OLS uCentral Client includes a comprehensive configuration testing framework that provides two-layer validation of JSON configurations:
1. **Schema Validation** - Structural validation against the uCentral JSON schema
2. **Parser Testing** - Implementation validation of the C parser with property tracking
This framework enables automated testing, continuous integration, and tracking of configuration feature implementation status.
## Documentation Index
This testing framework includes multiple documentation files, each serving a specific purpose:
### Primary Documentation
1. **[tests/config-parser/TEST_CONFIG_README.md](tests/config-parser/TEST_CONFIG_README.md)** - Complete testing framework guide
- Overview of two-layer validation approach
- Quick start and running tests
- Property tracking system
- Configuration-specific validators
- Test output interpretation
- CI/CD integration
- **Start here** for understanding the testing framework
2. **[tests/schema/SCHEMA_VALIDATOR_README.md](tests/schema/SCHEMA_VALIDATOR_README.md)** - Schema validator detailed documentation
- Standalone validator usage
- Command-line interface
- Programmatic API
- Porting guide for other repositories
- Common validation errors
- **Start here** for schema validation specifics
3. **[tests/MAINTENANCE.md](tests/MAINTENANCE.md)** - Maintenance procedures guide
- Schema update procedures
- Property database update procedures
- Version synchronization
- Testing after updates
- Troubleshooting common issues
- **Start here** when updating schema or property database
4. **[TEST_CONFIG_PARSER_DESIGN.md](TEST_CONFIG_PARSER_DESIGN.md)** - Test framework architecture
- Multi-layer validation design
- Property metadata system (398 schema properties)
- Property inspection engine
- Test execution flow diagrams
- Data structures and algorithms
- Output format implementations
- **Start here** for understanding the test framework internals
### Supporting Documentation
5. **[README.md](README.md)** - Project overview and build instructions
- Build system architecture
- Platform abstraction layer
- Testing framework integration
- Deployment instructions
## Quick Reference
### Running Tests
**RECOMMENDED: Use the test runner script** (handles Docker automatically):
```bash
# Test all configurations in STUB mode (default - fast, proto.c only)
./run-config-tests.sh # Human-readable output
./run-config-tests.sh --format html # HTML report
./run-config-tests.sh --format json # JSON report
# Test all configurations in PLATFORM mode (integration testing)
./run-config-tests.sh --mode platform # Human-readable output
./run-config-tests.sh --mode platform --format html # HTML report
# Test single configuration
./run-config-tests.sh cfg0.json # Stub mode
./run-config-tests.sh --mode platform cfg0.json # Platform mode
```
**Alternative: Run tests directly in Docker** (manual Docker management):
```bash
# Build the Docker environment first (if not already built)
make build-host-env
# Run all tests in STUB mode (default - fast)
docker exec ucentral_client_build_env bash -c \
"cd /root/ols-nos/tests/config-parser && make test-config-full"
# Run all tests in PLATFORM mode (integration)
docker exec ucentral_client_build_env bash -c \
"cd /root/ols-nos/tests/config-parser && make test-config-full USE_PLATFORM=brcm-sonic"
# Run individual test suites (stub mode)
docker exec ucentral_client_build_env bash -c \
"cd /root/ols-nos/tests/config-parser && make validate-schema"
docker exec ucentral_client_build_env bash -c \
"cd /root/ols-nos/tests/config-parser && make test-config"
# Generate test reports (stub mode)
docker exec ucentral_client_build_env bash -c \
"cd /root/ols-nos/tests/config-parser && make test-config-html"
# Generate test reports (platform mode)
docker exec ucentral_client_build_env bash -c \
"cd /root/ols-nos/tests/config-parser && make test-config-html USE_PLATFORM=brcm-sonic"
# Copy report files out of container to view
docker cp ucentral_client_build_env:/root/ols-nos/tests/config-parser/test-report.html output/
```
**Alternative: Run tests locally** (may have OS-specific dependencies):
```bash
# Navigate to test directory
cd tests/config-parser
# Run all tests in STUB mode (default)
make test-config-full
# Run all tests in PLATFORM mode
make test-config-full USE_PLATFORM=brcm-sonic
# Run individual test suites
make validate-schema # Schema validation only
make test-config # Parser tests only
# Generate test reports (stub mode)
make test-config-html # HTML report (browser-viewable)
make test-config-json # JSON report (machine-readable)
make test-config-junit # JUnit XML (CI/CD integration)
# Generate test reports (platform mode)
make test-config-html USE_PLATFORM=brcm-sonic
make test-config-json USE_PLATFORM=brcm-sonic
```
**Note:** Running tests in Docker is the preferred method as it provides a consistent, reproducible environment regardless of your host OS (macOS, Linux, Windows).
### Key Files
**Test Implementation:**
- `tests/config-parser/test-config-parser.c` (3304 lines) - Parser test framework with property tracking
- `tests/config-parser/test-stubs.c` (219 lines) - Platform function stubs for stub mode testing
- `tests/config-parser/platform-mocks/brcm-sonic.c` - gNMI/gNOI mocks for platform mode
- `tests/config-parser/platform-mocks/example-platform.c` - Example platform mocks
- `tests/schema/validate-schema.py` (649 lines) - Standalone schema validator with undefined property detection
- `tests/config-parser/config-parser.h` - Test header exposing cfg_parse()
**Property Database Files:**
- `tests/config-parser/property-database-base.c` (416 lines) - Proto.c parsing (398 properties: 102 implemented, 296 not yet)
- `tests/config-parser/property-database-platform-brcm-sonic.c` (419 lines) - Platform hardware application tracking
- `tests/config-parser/property-database-platform-example.c` - Example platform database
**Configuration Files:**
- `config-samples/ucentral.schema.pretty.json` - uCentral JSON schema (human-readable, single schema file)
- `config-samples/*.json` - Test configuration files (25 configs)
- `config-samples/*invalid*.json` - Negative test cases
**Build System:**
- `tests/config-parser/Makefile` - Test targets and build rules
- `run-config-tests.sh` - Test runner script (recommended)
**Production Code (Minimal Changes):**
- `src/ucentral-client/proto.c` - Added TEST_STATIC macro (2 lines changed)
- `src/ucentral-client/include/router-utils.h` - Added extern declarations (minor change)
## Features
### Schema Validation
- Validates JSON structure against official uCentral schema
- Checks property types, required fields, constraints
- Standalone tool, no dependencies on C code
- Exit codes for CI/CD integration
### Parser Testing
- Tests actual C parser implementation
- Multiple output formats (human-readable, HTML, JSON, JUnit XML)
- Interactive HTML reports with detailed analysis
- Machine-readable JSON for automation
- JUnit XML for CI/CD integration
- Validates configuration processing and struct population
- Configuration-specific validators for business logic
- Memory leak detection
- Hardware constraint validation
### Property Tracking System
- Database of all schema properties and their implementation status (398 canonical properties)
- Tracks which properties are parsed by which functions
- Identifies unimplemented features
- Status classification: CONFIGURED, IGNORED, SYSTEM, INVALID, Unknown
- Property usage reports across all test configurations
- Properties with a line number are implemented; `line_number=0` means not yet implemented
### Two-Layer Validation Strategy
**Why Both Layers?**
Each layer catches different types of errors:
- **Schema catches**: Type mismatches, missing required fields, constraint violations
- **Parser catches**: Implementation bugs, hardware limits, cross-field dependencies
- **Property tracking catches**: Missing implementations, platform-specific features
See TEST_CONFIG_README.md section "Two-Layer Validation Strategy" for detailed explanation.
## Test Coverage
Current test suite includes:
- 25 configuration files covering various features
- Positive tests (configs that should parse successfully)
- Negative tests (configs that should fail)
- Feature-specific validators for critical configurations
- Two testing modes: stub mode (fast, proto.c only) and platform mode (integration, proto.c + platform code)
- Platform stub with 54-port simulation (matches ECS4150 hardware)
### Tested Features
- Port configuration (enable/disable, speed, duplex)
- VLAN configuration and membership
- Spanning Tree Protocol (STP, RSTP, PVST, RPVST)
- IGMP Snooping
- Power over Ethernet (PoE)
- IEEE 802.1X Authentication
- DHCP Relay
- Static routing
- System configuration (timezone, hostname, etc.)
### Platform-Specific Features (Schema-Valid, Platform Implementation Required)
- LLDP (Link Layer Discovery Protocol)
- LACP (Link Aggregation Control Protocol)
- ACLs (Access Control Lists)
- DHCP Snooping
- Loop Detection
- Port Mirroring
- Voice VLAN
These features pass schema validation but show as "Unknown" in property reports, indicating they require platform-specific implementation.
## Changes from Base Repository
The testing framework was added with minimal impact to production code:
### New Files Added
1. `tests/config-parser/test-config-parser.c` (3304 lines) - Complete test framework
2. `tests/config-parser/test-stubs.c` (219 lines) - Platform stubs for stub mode
3. `tests/config-parser/platform-mocks/brcm-sonic.c` - Platform mocks for brcm-sonic
4. `tests/config-parser/platform-mocks/example-platform.c` - Platform mocks for example platform
5. `tests/config-parser/property-database-base.c` (416 lines) - Base property database (398 properties: 102 implemented, 296 not yet)
6. `tests/config-parser/property-database-platform-brcm-sonic.c` (419 lines) - Platform property database
7. `tests/config-parser/property-database-platform-example.c` - Example platform property database
8. `tests/schema/validate-schema.py` (649 lines) - Schema validator
9. `tests/config-parser/config-parser.h` - Test header
10. `tests/config-parser/TEST_CONFIG_README.md` - Framework documentation
11. `tests/config-parser/platform-mocks/README.md` - Platform mocks documentation
12. `tests/schema/SCHEMA_VALIDATOR_README.md` - Validator documentation
13. `tests/MAINTENANCE.md` - Maintenance procedures
14. `tests/ADDING_NEW_PLATFORM.md` - Platform addition guide
15. `tests/config-parser/Makefile` - Test build system with stub and platform modes
16. `tests/tools/extract-schema-properties.py` - Extract properties from schema
17. `tests/tools/generate-database-from-schema.py` - Generate base property database
18. `tests/tools/generate-platform-database-from-schema.py` - Generate platform property database
19. `tests/README.md` - Testing framework overview
20. `TESTING_FRAMEWORK.md` - This file (documentation index, in repository root)
21. `TEST_CONFIG_PARSER_DESIGN.md` - Test framework architecture and design (in repository root)
22. `QUICK_START_TESTING.md` - Quick start guide (in repository root)
23. `TEST_RUNNER_README.md` - Test runner script documentation (in repository root)
24. `run-config-tests.sh` - Test runner script (in repository root)
### Modified Files
1. `src/ucentral-client/proto.c` - Added TEST_STATIC macro pattern (2 lines)
```c
// Changed from:
static struct plat_cfg *cfg_parse(...)
// Changed to:
#ifdef UCENTRAL_TESTING
#define TEST_STATIC
#else
#define TEST_STATIC static
#endif
TEST_STATIC struct plat_cfg *cfg_parse(...)
```
This allows test code to call cfg_parse() while keeping it static in production builds.
2. `src/ucentral-client/include/router-utils.h` - Added extern declarations
- Exposed necessary functions for test stubs
3. `src/ucentral-client/Makefile` - No changes (production build only)
- Test targets are in tests/config-parser/Makefile
- Clean separation between production and test code
### Configuration Files
- Added `config-samples/cfg_invalid_*.json` - Negative test cases
- Added `config-samples/ECS4150_*.json` - Feature-specific test configs
- No changes to existing valid configurations
### Zero Impact on Production
- Production builds: No functional changes, cfg_parse() remains static
- Test builds: cfg_parse() becomes visible with -DUCENTRAL_TESTING flag
- No ABI changes, no performance impact
- No runtime dependencies added
## Integration with Development Workflow
### During Development
```bash
# 1. Make code changes to proto.c
vi src/ucentral-client/proto.c
# 2. Run tests using test runner script
./run-config-tests.sh
# 3. Review property tracking report
# Check for unimplemented features or errors
# 4. If adding new parser function, update property database
vi tests/config-parser/test-config-parser.c
# Add property entries for new function
# 5. Create test configuration
vi config-samples/test-new-feature.json
# 6. Retest
./run-config-tests.sh
```
### Before Committing
```bash
# Ensure all tests pass
./run-config-tests.sh
# Generate full HTML report for review
./run-config-tests.sh --format html
open output/test-report.html
# Check for property database accuracy
# Review "Property Usage Report" section in HTML report
# Look for unexpected "Unknown" properties
```
### In CI/CD Pipeline
```yaml
test-configurations:
stage: test
script:
- ./run-config-tests.sh --format json
artifacts:
paths:
- output/test-report.json
- output/test-report.html
when: always
```
## Property Database Management
The property database is a critical component tracking which JSON properties are parsed by which functions.
### Database Structure
```c
static struct property_metadata properties[] = {
{
.path = "interfaces.ethernet.enabled",
.status = PROP_CONFIGURED,
.source_file = "proto.c",
.source_function = "cfg_ethernet_parse",
.source_line = 1119,
.notes = "Enable/disable ethernet interface"
},
// ... entries for all 398 schema properties (with line numbers for implemented properties) ...
};
```
### Key Rules
1. **Only track properties for functions that exist in this repository's proto.c**
2. **Remove entries when parser functions are removed**
3. **Add entries immediately when adding new parser functions**
4. **Use accurate function names** - different platforms may use different names
5. **Properties not in database show as "Unknown"** - this is correct for platform-specific features
See MAINTENANCE.md for complete property database update procedures.
## Schema Management
The schema file defines what configurations are structurally valid.
### Schema Location
- `config-samples/ucentral.schema.pretty.json` - Human-readable version (single schema file in repository)
### Schema Source
Schema is maintained in the external [ols-ucentral-schema](https://github.com/Telecominfraproject/ols-ucentral-schema) repository.
### Schema Updates
When ols-ucentral-schema releases a new version:
1. Copy new schema to config-samples/
2. Run schema validation on all test configs
3. Fix any configs that fail new requirements
4. Document breaking changes
5. Update property database if new properties are implemented
See MAINTENANCE.md section "Schema Update Procedures" for complete process.
## Platform-Specific Repositories
This is the **base repository** providing the core framework. Platform-specific repositories (like Edgecore EC platform) can:
1. **Fork the test framework** - Copy test files to their repository
2. **Extend property database** - Add entries for platform-specific parser functions
3. **Add platform configs** - Create configs testing platform features
4. **Maintain separate tracking** - Properties "Unknown" in base become "CONFIGURED" in platform
### Example: LLDP Property Status
**In base repository (this repo):**
```
Property: interfaces.ethernet.lldp
Status: Unknown (not in property database)
Note: May require platform-specific implementation
```
**In Edgecore EC platform repository:**
```
Property: interfaces.ethernet.lldp
Parser: cfg_ethernet_lldp_parse()
Status: CONFIGURED
Note: Per-interface LLDP transmit/receive configuration
```
Each platform tracks only the properties it actually implements.
## Troubleshooting
### Common Issues
**Tests fail in Docker but pass locally:**
- Check schema file exists in container
- Verify paths are correct in container environment
- Rebuild container: `make build-host-env`
**Property shows as "Unknown" when it should be CONFIGURED:**
- Verify parser function exists: `grep "function_name" proto.c`
- Check property path matches JSON exactly
- Ensure property entry is in properties[] array
**Schema validation fails for valid config:**
- Schema may be outdated - check version
- Config may use vendor extensions not in base schema
- Validate against specific schema: `./validate-schema.py config.json --schema /path/to/schema.json`
See MAINTENANCE.md "Troubleshooting" section for complete troubleshooting guide.
## Documentation Maintenance
When updating the testing framework:
1. **Update relevant documentation:**
- New features → TEST_CONFIG_README.md
- Schema changes → MAINTENANCE.md + SCHEMA_VALIDATOR_README.md
- Property database changes → MAINTENANCE.md + TEST_CONFIG_README.md
- Build changes → README.md
2. **Keep version information current:**
- Update compatibility matrices
- Document breaking changes
- Maintain changelogs
3. **Update examples:**
- Refresh command output examples
- Update property counts
- Keep test results current
## Contributing
When contributing to the testing framework:
1. **Maintain property database accuracy** - Update when changing parser functions
2. **Add test configurations** - Create configs demonstrating new features
3. **Update documentation** - Keep docs synchronized with code changes
4. **Follow conventions** - Use established patterns for validators and property entries
5. **Test thoroughly** - Run full test suite before committing
## License
BSD-3-Clause (same as parent project)
## See Also
- **[tests/config-parser/TEST_CONFIG_README.md](tests/config-parser/TEST_CONFIG_README.md)** - Complete testing framework guide
- **[TEST_CONFIG_PARSER_DESIGN.md](TEST_CONFIG_PARSER_DESIGN.md)** - Test framework architecture and design
- **[tests/schema/SCHEMA_VALIDATOR_README.md](tests/schema/SCHEMA_VALIDATOR_README.md)** - Schema validator documentation
- **[tests/MAINTENANCE.md](tests/MAINTENANCE.md)** - Update procedures and troubleshooting
- **[TEST_RUNNER_README.md](TEST_RUNNER_README.md)** - Test runner script documentation
- **[QUICK_START_TESTING.md](QUICK_START_TESTING.md)** - Quick start guide
- **[README.md](README.md)** - Project overview and build instructions
- **ols-ucentral-schema repository** - Official schema source

---
# Design of test-config-parser.c
The `tests/config-parser/test-config-parser.c` file implements a comprehensive configuration testing framework with a sophisticated multi-layered design. This document describes the architecture and implementation details.
## 1. **Core Architecture: Multi-Layer Validation**
The framework validates configurations through three complementary layers:
### Layer 1: Schema Validation
- Invokes external `tests/schema/validate-schema.py` to verify JSON structure against uCentral schema
- Catches: JSON syntax errors, type mismatches, missing required fields, constraint violations
- If schema validation fails, parsing is skipped to ensure clean error isolation
### Layer 2: Parser Testing
- Calls production `cfg_parse()` function from `src/ucentral-client/proto.c`
- Tests actual C parser implementation with real platform data structures
- Catches: Parser bugs, memory issues, hardware constraints, cross-field dependencies
### Layer 3: Property Tracking
- Deep recursive inspection of JSON tree to classify every property
- Maps properties to property metadata database (398 schema properties)
- Tracks which properties are CONFIGURED, IGNORED, INVALID, UNKNOWN, etc.
- Properties with line numbers are implemented in proto.c; line_number=0 means not yet implemented
## 2. **Property Metadata System**
### Property Database Structure
```c
struct property_metadata {
const char *path; // JSON path: "ethernet[].speed"
enum property_status status; // CONFIGURED, IGNORED, UNKNOWN, etc.
const char *source_file; // Where processed: "proto.c"
const char *source_function; // Function: "cfg_ethernet_parse"
int source_line; // Line number in proto.c (if available)
const char *notes; // Context/rationale
};
```
**Database contains entries for all 398 schema properties** documenting:
- Which properties are actively parsed (PROP_CONFIGURED with line numbers)
- Which are not yet implemented (line_number=0)
- Which are intentionally ignored (PROP_IGNORED)
- Which need platform implementation (PROP_UNKNOWN)
- Which are structural containers (PROP_SYSTEM)
### Property Status Classification
- **PROP_CONFIGURED**: Successfully processed by parser
- **PROP_MISSING**: Required but absent
- **PROP_IGNORED**: Present but intentionally not processed
- **PROP_INVALID**: Invalid value (out of bounds, wrong type)
- **PROP_INCOMPLETE**: Missing required sub-fields
- **PROP_UNKNOWN**: Needs manual classification/testing (may require platform implementation)
- **PROP_SYSTEM**: Structural container (not leaf value)
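
The classification above implies an enum along these lines. This is a sketch inferred from the documentation; the real definition lives in `test-config-parser.c` and may differ in order or members. A small name helper shows how reports can render the status labels used throughout this document.

```c
#include <assert.h>
#include <string.h>

/* Sketch of the property status enum implied by the classification. */
enum property_status {
    PROP_CONFIGURED, /* successfully processed by parser */
    PROP_MISSING,    /* required but absent */
    PROP_IGNORED,    /* present but intentionally not processed */
    PROP_INVALID,    /* invalid value (out of bounds, wrong type) */
    PROP_INCOMPLETE, /* missing required sub-fields */
    PROP_UNKNOWN,    /* needs manual classification / platform impl. */
    PROP_SYSTEM,     /* structural container, not a leaf value */
};

const char *property_status_name(enum property_status s)
{
    switch (s) {
    case PROP_CONFIGURED: return "CONFIGURED";
    case PROP_MISSING:    return "MISSING";
    case PROP_IGNORED:    return "IGNORED";
    case PROP_INVALID:    return "INVALID";
    case PROP_INCOMPLETE: return "INCOMPLETE";
    case PROP_UNKNOWN:    return "Unknown";
    case PROP_SYSTEM:     return "SYSTEM";
    }
    return "?";
}
```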
## 3. **Property Inspection Engine**
### scan_json_tree_recursive() (lines 1399-1459)
Recursive descent through JSON tree:
1. Traverses entire JSON configuration structure
2. For each property, builds full dot-notation path (e.g., `"interfaces[].ipv4.subnet[].prefix"`)
3. Looks up property in metadata database via `lookup_property_metadata()`
4. Records property validation result with status, value, source location
5. Continues recursion into nested objects/arrays
### lookup_property_metadata() (lines 1314-1348)
Smart property matching:
1. Normalizes the path by replacing `[N]` with `[]` (e.g., `ethernet[5].speed` → `ethernet[].speed`)
2. Searches property database for matching canonical path
3. Returns metadata if found, NULL if unknown property
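
The normalization step can be sketched as follows. This is a stand-in for the actual logic in `lookup_property_metadata()`, which may differ in detail: every `[<digits>]` index is collapsed to `[]` so concrete paths match the canonical paths stored in the property database.

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Sketch: collapse numeric array indices so "ethernet[5].speed"
 * becomes the canonical "ethernet[].speed". */
void normalize_property_path(const char *in, char *out, size_t out_sz)
{
    size_t o = 0;
    for (const char *p = in; *p && o + 2 < out_sz; p++) {
        if (*p == '[') {
            out[o++] = '[';
            while (isdigit((unsigned char)p[1]))
                p++; /* skip the numeric index */
        } else {
            out[o++] = *p;
        }
    }
    out[o] = '\0';
}
```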
### scan_for_unprocessed_properties() (lines 1666-1765)
Legacy unprocessed property detection:
- Checks properties against known property lists at each config level
- Reports properties that exist in JSON but aren't in "known" lists
- Used alongside property database for comprehensive coverage
## 4. **Test Execution Flow**
### Main Test Function: test_config_file() (lines 1790-1963)
```
┌─────────────────────────────────────────┐
│ 1. Schema Validation │
│ - validate_against_schema() │
│ - If fails: mark test, skip parsing │
└──────────────┬──────────────────────────┘
┌─────────────────────────────────────────┐
│ 2. JSON Parsing │
│ - read_json_file() │
│ - cJSON_Parse() │
└──────────────┬──────────────────────────┘
┌─────────────────────────────────────────┐
│ 3. Feature Detection │
│ - detect_json_features() │
│ - Find LLDP, ACL, LACP, etc. │
└──────────────┬──────────────────────────┘
┌─────────────────────────────────────────┐
│ 4. Property Inspection │
│ - scan_json_tree_recursive() │
│ - Build property validation list │
└──────────────┬──────────────────────────┘
┌─────────────────────────────────────────┐
│ 5. Parser Invocation │
│ - cfg = cfg_parse(json) │
│ - Invoke production parser │
└──────────────┬──────────────────────────┘
┌─────────────────────────────────────────┐
│ 6. Feature Statistics │
│ - update_feature_statistics() │
│ - Count ports, VLANs, features │
└──────────────┬──────────────────────────┘
┌─────────────────────────────────────────┐
│ 7. Validation (Optional) │
│ - run_validator() for specific │
│ configs (cfg0, PoE, DHCP, etc.) │
└──────────────┬──────────────────────────┘
┌─────────────────────────────────────────┐
│ 8. Result Recording │
│ - finalize_test_result() │
│ - Store in linked list │
└─────────────────────────────────────────┘
```
## 5. **Data Structures**
### test_result (lines 94-128)
Per-test result tracking:
```c
struct test_result {
char filename[256];
int passed;
char error_message[512];
int ports_configured, vlans_configured;
int unprocessed_properties;
// Property counters
int properties_configured;
int properties_missing;
int properties_ignored;
// ... etc
// Feature presence flags
int has_port_config, has_vlan_config;
int has_stp, has_igmp, has_poe;
// ... etc
// Linked list of property validations
struct property_validation *property_validations;
struct test_result *next;
};
```
### property_validation (lines 85-92)
Individual property validation record:
```c
struct property_validation {
char path[128]; // "unit.hostname"
enum property_status status;
char value[512]; // "\"switch01\""
char details[256]; // Additional context
char source_location[128]; // "proto.c:cfg_unit_parse()"
struct property_validation *next;
};
```
## 6. **Feature Statistics Tracking**
### Global Statistics (lines 40-56)
```c
struct feature_stats {
int configs_with_ports;
int configs_with_vlans;
int configs_with_stp;
int configs_with_igmp;
int configs_with_poe;
int configs_with_ieee8021x;
int configs_with_dhcp_relay;
int configs_with_lldp; // JSON-detected
int configs_with_acl; // JSON-detected
int configs_with_lacp; // JSON-detected
// ... etc
};
```
**Two detection methods:**
1. **Parser-based**: Check `plat_cfg` structure for configured values (ports, VLANs, STP mode)
2. **JSON-based**: Detect schema-valid features in JSON that may not be parsed (LLDP, ACL, LACP)
## 7. **Output Formats** (lines 26-31)
### OUTPUT_HUMAN (default)
- Colorful console output with emojis
- Detailed property analysis
- Processing summaries
- Feature statistics
### OUTPUT_JSON (lines 2015-2097)
- Machine-readable JSON report
- Full test results with property details
- CI/CD integration friendly
### OUTPUT_HTML (lines 2099+)
- Interactive web report
- Full test details with styling
- Browser-viewable (982KB typical size)
### OUTPUT_JUNIT (planned)
- JUnit XML format for Jenkins/GitLab CI
## 8. **Validator Registry** (lines 302-343)
Optional per-config validators for deep validation:
```c
static const struct config_validator validators[] = {
{ "cfg0.json", validate_cfg0, "Port disable configuration" },
{ "cfg5_poe.json", validate_cfg_poe, "PoE configuration" },
{ "cfg6_dhcp.json", validate_cfg_dhcp, "DHCP relay" },
// ... etc
};
```
Validators inspect `plat_cfg` structure to verify specific features were correctly parsed.
## 9. **Test Discovery** (lines 1968-2010)
`test_directory()` auto-discovers test configs:
- Scans directory for `.json` files
- Skips `schema.json`, `Readme.json`
- Invokes `test_config_file()` for each config
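
The filter described above can be sketched as a small predicate. This mirrors the documented behaviour of `test_directory()` but is not the actual implementation.

```c
#include <assert.h>
#include <string.h>

/* Sketch: accept "*.json" files, skipping the schema and readme files. */
int is_test_config(const char *name)
{
    size_t len = strlen(name);
    if (len < 5 || strcmp(name + len - 5, ".json") != 0)
        return 0; /* not a .json file */
    if (strcmp(name, "schema.json") == 0 || strcmp(name, "Readme.json") == 0)
        return 0; /* auxiliary files, not test configs */
    return 1;
}
```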
## 10. **Key Design Patterns**
### Negative Test Support (lines 445-458)
```c
static int is_negative_test(const char *filename) {
if (strstr(filename, "invalid") != NULL) return 1;
if (strstr(filename, "ECS4150_port_isoltaon.json") != NULL) return 1;
return 0;
}
```
Configs expected to fail are marked as "PASS" if parsing fails.
### Schema-First Validation (lines 1818-1836)
Schema validation is a **prerequisite** for parser testing. If schema fails, parser is never invoked, ensuring clean error isolation.
### Linked List Result Storage (lines 221-242)
All test results stored in linked list for:
- Multiple output format generation from same data
- Summary statistics calculation
- Report generation after all tests complete
## 11. **Critical Integration Points**
### With Production Code (minimal impact):
- **proto.c**: Uses `cfg_parse()` exposed via `TEST_STATIC` macro
- **ucentral-log.h**: Registers `test_log_callback()` to capture parser errors (lines 134-160)
- **ucentral-platform.h**: Inspects `struct plat_cfg` to verify parsing results
### With Schema Validator:
- **tests/schema/validate-schema.py**: External Python script invoked via `system()` call
- Schema path: `config-samples/ols.ucentral.schema.pretty.json`
## 12. **Property Database Maintenance Rules**
**Critical Rule**:
> The property database must only contain entries for parser functions that exist in this repository's proto.c. Do not add entries for platform-specific functions that don't exist in the base implementation.
This keeps the base repository clean and allows platform-specific forks to extend the database with their own implementations.
---
## Summary
The design elegantly separates concerns:
1. **Schema layer** validates JSON structure (delegated to Python)
2. **Parser layer** tests C implementation (calls production code)
3. **Property layer** tracks implementation status (metadata database)
4. **Validation layer** verifies specific features (optional validators)
5. **Reporting layer** generates multiple output formats
The property metadata database is the **crown jewel** - it documents the implementation status of all 398 schema properties, enabling automated detection of unimplemented features and validation of parser coverage.
## Related Documentation
For additional information about the testing framework:
- **TESTING_FRAMEWORK.md** - Overview and documentation index
- **tests/config-parser/TEST_CONFIG_README.md** - Complete testing framework guide
- **tests/schema/SCHEMA_VALIDATOR_README.md** - Schema validator documentation
- **tests/MAINTENANCE.md** - Schema and property database update procedures
- **TEST_RUNNER_README.md** - Test runner script documentation
- **QUICK_START_TESTING.md** - Quick start guide
- **README.md** - Project overview and testing framework integration

TEST_RUNNER_README.md (new file, 479 lines)
# Test Runner Script Documentation
## Overview
`run-config-tests.sh` is a comprehensive Docker-based test runner for uCentral configuration validation. It automates the entire testing workflow: building the Docker environment, running tests with various output formats, and copying results to the host.
## Features
- **Automatic Docker Environment Management**
- Builds Docker environment only when needed (checks Dockerfile SHA)
- Starts/reuses existing containers intelligently
- No manual Docker commands required
- **Multiple Output Formats**
- **human**: Human-readable console output with colors and detailed analysis
- **html**: Interactive HTML report with test results and property tracking
- **json**: Machine-readable JSON for automation and metrics
- **Flexible Testing**
- Test all configurations in one run
- Test a single configuration file
- Automatic result file naming and organization
- **Production-Ready**
- Exit codes for CI/CD integration (0 = pass, non-zero = fail/issues)
- Colored output for readability
- Comprehensive error handling
- Results automatically copied to `output/` directory
## Usage
### Basic Syntax
```bash
./run-config-tests.sh [OPTIONS] [config-file]
```
**Options:**
- `-f, --format FORMAT`: Output format - `html`, `json`, or `human` (default: `human`)
- `-m, --mode MODE`: Test mode - `stub` or `platform` (default: `stub`)
- `-p, --platform NAME`: Platform name for platform mode (default: `brcm-sonic`)
- `-h, --help`: Show help message
**Arguments:**
- `config-file` (optional): Specific config file to test (default: test all configs)
### Examples
#### Test All Configurations
```bash
# Human-readable output (default)
./run-config-tests.sh
# HTML report
./run-config-tests.sh --format html
# OR short form:
./run-config-tests.sh -f html
# JSON output
./run-config-tests.sh --format json
# OR short form:
./run-config-tests.sh -f json
```
#### Test Single Configuration
```bash
# Test single config with human output (default)
./run-config-tests.sh cfg0.json
# Test single config with HTML report
./run-config-tests.sh --format html cfg0.json
# OR short form:
./run-config-tests.sh -f html cfg0.json
# Test single config with JSON output
./run-config-tests.sh --format json cfg0.json
# OR short form:
./run-config-tests.sh -f json cfg0.json
```
## Output Files
All output files are saved to the `output/` directory in the repository root.
### Output File Naming
**All Configs:**
- `test-results.txt` - Human-readable output
- `test-report.html` - HTML report
- `test-report.json` - JSON output
**Single Config:**
- `test-results-{config-name}.txt` - Human-readable output
- `test-report-{config-name}.html` - HTML report
- `test-results-{config-name}.json` - JSON output
### Output Directory Structure
```
output/
├── test-results.txt # All configs, human format
├── test-report.html # All configs, HTML format
├── test-report.json # All configs, JSON format
├── test-results-cfg0.txt # Single config results
├── test-report-ECS4150-TM.html # Single config HTML
└── test-results-ECS4150-ACL.json # Single config JSON
```
## How It Works
### Workflow Steps
1. **Docker Check**: Verifies Docker daemon is running
2. **Environment Build**: Builds Docker environment if needed (caches based on Dockerfile SHA)
3. **Container Start**: Starts or reuses existing container
4. **Test Execution**: Runs tests inside container with specified format
5. **Result Copy**: Copies output files from container to host `output/` directory
6. **Summary**: Displays test summary and output file locations
### Docker Environment Management
The script intelligently manages the Docker environment:
```
Dockerfile unchanged → Skip build (use existing image)
Dockerfile modified → Build new image with new SHA tag
Container exists → Reuse existing container
Container missing → Create new container
Container stopped → Start existing container
```
This ensures fast subsequent runs while detecting when rebuilds are necessary.
## Output Format Details
### Human Format (default)
Human-readable console output with:
- Color-coded pass/fail indicators
- Detailed error messages
- Property usage reports
- Feature coverage analysis
- Schema validation results
**Best for:** Interactive development, debugging, manual testing
**Example:**
```
[TEST] config-samples/cfg0.json
✓ PASS - Schema validation
✓ PASS - Parser validation
Properties: 42 configured, 5 unknown
Total tests: 37
Passed: 37
Failed: 0
```
### HTML Format
Interactive web report with:
- Test result summary table
- Pass/fail status with colors
- Expandable test details
- Property tracking information
- Feature coverage matrix
- Timestamp and metadata
**Best for:** Test reports, sharing results, archiving, presentations
**Open with:**
```bash
open output/test-report.html # macOS
xdg-open output/test-report.html # Linux
start output/test-report.html # Windows
```
### JSON Format
Machine-readable structured data with:
- Test results array
- Pass/fail status
- Error details
- Property usage data
- Timestamps
- Exit codes
**Best for:** CI/CD integration, automation, metrics, analysis
**Structure:**
```json
{
"summary": {
"total": 37,
"passed": 37,
"failed": 0,
"timestamp": "2025-12-15T10:30:00Z"
},
"tests": [
{
"config": "cfg0.json",
"passed": true,
"schema_valid": true,
"parser_valid": true,
"properties": { "configured": 42, "unknown": 5 }
}
]
}
```
## Exit Codes
The script uses exit codes for CI/CD integration:
- `0` - All tests passed successfully
- `1` - Some tests failed or had validation errors
- `2` - System errors (Docker not running, file not found, etc.)
**CI/CD Example:**
```bash
./run-config-tests.sh --format json
if [ $? -eq 0 ]; then
echo "All tests passed!"
else
echo "Tests failed, see output/test-report.json"
exit 1
fi
```
## Performance
### First Run (Cold Start)
```
Build Docker environment: 5-10 minutes (one-time)
Run all config tests: 10-30 seconds
Total first run: ~10 minutes
```
### Subsequent Runs (Warm Start)
```
Environment check: 1-2 seconds (skipped if unchanged)
Container startup: 1-2 seconds (or reuse running container)
Run all config tests: 10-30 seconds
Total subsequent run: ~15 seconds
```
### Single Config Test
```
Test single config: 1-3 seconds
Total time: ~5 seconds (with running container)
```
## Troubleshooting
### Docker Not Running
**Error:**
```
✗ Docker is not running. Please start Docker and try again.
```
**Solution:**
- Start Docker Desktop (macOS/Windows)
- Start Docker daemon: `sudo systemctl start docker` (Linux)
### Container Build Failed
**Error:**
```
✗ Failed to build environment
```
**Solution:**
```bash
# Clean Docker and rebuild
docker system prune -a
make clean
./run-config-tests.sh
```
### Config File Not Found
**Error:**
```
✗ Config file not found in container: myconfig.json
```
**Solution:**
- Check available configs: `ls config-samples/*.json`
- Ensure config file is in `config-samples/` directory
- Use correct filename (case-sensitive)
### Test Output Not Copied
**Error:**
```
⚠ Output file not found in container: test-report.html
```
**Solution:**
- Check test execution logs for errors
- Verify test completed successfully inside container
- Try running tests manually: `docker exec ucentral_client_build_env bash -c "cd /root/ols-nos/tests/config-parser && make test-config"`
### Permission Denied
**Error:**
```
Permission denied: ./run-config-tests.sh
```
**Solution:**
```bash
chmod +x run-config-tests.sh
```
## Integration with Existing Workflows
### With Makefile
The script is independent of the Makefile but uses the same Docker infrastructure:
```bash
# Build environment (Makefile or script)
make build-host-env
# OR let script build it automatically
# Run tests (script provides better output management)
./run-config-tests.sh --format html
```
### With CI/CD
#### GitHub Actions
```yaml
name: Configuration Tests
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Run config tests
run: ./run-config-tests.sh --format json
- name: Upload test results
uses: actions/upload-artifact@v3
with:
name: test-results
path: output/test-report.json
```
#### GitLab CI
```yaml
test-configs:
stage: test
script:
- ./run-config-tests.sh --format json
artifacts:
paths:
- output/test-report.json
when: always
```
#### Jenkins
```groovy
stage('Test Configurations') {
steps {
sh './run-config-tests.sh --format html'
publishHTML([
reportDir: 'output',
reportFiles: 'test-report.html',
reportName: 'Config Test Report'
])
}
}
```
### With Git Hooks
**Pre-commit hook** (test before commit):
```bash
#!/bin/bash
# .git/hooks/pre-commit
echo "Running configuration tests..."
./run-config-tests.sh
if [ $? -ne 0 ]; then
echo "Tests failed. Commit aborted."
exit 1
fi
```
## Advanced Usage
### Custom Output Directory
Modify the `OUTPUT_DIR` variable in the script:
```bash
# Edit run-config-tests.sh
OUTPUT_DIR="$SCRIPT_DIR/my-custom-output"
```
### Test Specific Config Pattern
```bash
# Test all ACL configs
for config in config-samples/*ACL*.json; do
./run-config-tests.sh --format json "$(basename $config)"
done
```
### Parallel Testing (Multiple Containers)
```bash
# Start multiple containers for parallel testing
docker exec ucentral_client_build_env_1 bash -c "cd /root/ols-nos/tests/config-parser && ./test-config-parser config1.json" &
docker exec ucentral_client_build_env_2 bash -c "cd /root/ols-nos/tests/config-parser && ./test-config-parser config2.json" &
wait
```
### Automated Report Generation
```bash
# Generate all format reports
for format in human html json; do
./run-config-tests.sh --format $format
done
# Timestamp reports
mv output/test-report.html output/test-report-$(date +%Y%m%d-%H%M%S).html
```
## Comparison with Direct Make Commands
| Feature | run-config-tests.sh | Direct Make |
|---------|---------------------|-------------|
| Docker management | Automatic | Manual |
| Output to host | Automatic | Manual copy |
| Format selection | Command-line arg | Multiple make targets |
| Single config test | Built-in | Manual setup |
| Result organization | Automatic | Manual |
| Error handling | Comprehensive | Basic |
| CI/CD ready | Yes (exit codes) | Requires scripting |
**Recommendation:** Use `run-config-tests.sh` for all testing workflows. It provides a better user experience and handles Docker complexity automatically.
## Related Documentation
- **TESTING_FRAMEWORK.md** - Overview of testing framework
- **tests/config-parser/TEST_CONFIG_README.md** - Complete testing guide
- **TEST_CONFIG_PARSER_DESIGN.md** - Test framework architecture
- **tests/MAINTENANCE.md** - Schema and property database maintenance
- **QUICK_START_TESTING.md** - Quick start guide
- **README.md** - Project overview and build instructions
## Support
For issues or questions:
1. Check troubleshooting section above
2. Review test output in `output/` directory
3. Check Docker container logs: `docker logs ucentral_client_build_env`
4. File issue in repository issue tracker
## Version
Script version: 1.0.0
Last updated: 2025-12-15
Compatible with: uCentral schema 5.0.0 and later


@@ -11,7 +11,7 @@ cfg2:
cfg3:
Bring ports 1 up, 2 up (Ethernet1, Ethernet2) (admin state);
Destroy any VLAN that is not in the list (in this particular CFG - create VLAN 10,
destroye any other, except for MGMT VLAN 1 - it's not being altered by the
destroy any other, except for MGMT VLAN 1 - it's not being altered by the
uCentral app itself);
Create VLAN 10;
Set VLAN 10 memberlist with the following ports: Ethernet1, Ethernet2;
@@ -39,6 +39,7 @@ cfg5_poe:
- detection mode is 4pt-dot3af;
- power limit is 99900mW (e.g. max per port);
- priority is LOW;
cfg7_ieee80211x.json:
Following json file configures the given topology:
+-----------------+
@@ -64,3 +65,33 @@ cfg7_ieee80211x.json:
to be the same for the given (10.10.20.0/24) network.
.1x client also must have a valid credentials data (both client and radius server
must have same clients credentials configured).
cfg_igmp.json:
Configure igmp snooping and querier on VLAN 1.
Configure igmp static groups:
- 230.1.1.1 with egress port Ethernet1
- 230.2.2.2 with egress ports Ethernet2 & Ethernet3
cfg_rpvstp.json:
Configure VLAN 1;
Configure VLAN 2;
Configure rapid per-vlan STP on VLAN 1 with priority 32768;
Disable STP on VLAN 2.
cfg_port_isolation.json:
Configure port isolation with Ethernet1 as uplink and
Ethernet2 & Ethernet3 as downlink
cfg_services_log.json:
Enable syslog with these parameters:
- remote host addr
- remote host port
- log severity (priority):
* emerg: 0
* alert: 1
* crit: 2
* error: 3
* warning: 4
* notice: 5
* info: 6
* debug: 7

File diff suppressed because it is too large.

@@ -0,0 +1,581 @@
{
"strict": false,
"uuid": 1765383961,
"unit":
{
"hostname": "MJH-4150",
"leds-active": true,
"random-password": false,
"usage-threshold": 95
},
"ethernet":
[
{
"select-ports":
[
"Ethernet23"
],
"speed": 1000,
"duplex": "full",
"enabled": true,
"poe":
{
"admin-mode": true
},
"lacp-config":
{
"lacp-enable": false,
"lacp-role": "actor",
"lacp-mode": "passive",
"lacp-port-admin-key": 1,
"lacp-port-priority": 32768,
"lacp-system-priority": 32768,
"lacp-timeout": "long"
},
"lldp-interface-config":
{
"lldp-admin-status": "rx-tx",
"lldp-basic-tlv-mgmt-ip-v4": true,
"lldp-basic-tlv-mgmt-ip-v6": true,
"lldp-basic-tlv-port-descr": true,
"lldp-basic-tlv-sys-capab": true,
"lldp-basic-tlv-sys-descr": true,
"lldp-basic-tlv-sys-name": true,
"lldp-dot1-tlv-proto-ident": true,
"lldp-dot1-tlv-proto-vid": true,
"lldp-dot1-tlv-pvid": true,
"lldp-dot1-tlv-vlan-name": true,
"lldp-dot3-tlv-link-agg": true,
"lldp-dot3-tlv-mac-phy": true,
"lldp-dot3-tlv-max-frame": true,
"lldp-dot3-tlv-poe": true,
"lldp-med-location-civic-addr":
{
"lldp-med-location-civic-addr-admin-status": true,
"lldp-med-location-civic-country-code": "CA",
"lldp-med-location-civic-device-type": 1,
"lldp-med-location-civic-ca":
[
{
"lldp-med-location-civic-ca-type": 29,
"lldp-med-location-civic-ca-value": "Mike-WFH"
}
]
},
"lldp-med-notification": true,
"lldp-med-tlv-ext-poe": true,
"lldp-med-tlv-inventory": true,
"lldp-med-tlv-location": true,
"lldp-med-tlv-med-cap": true,
"lldp-med-tlv-network-policy": true,
"lldp-notification": true
},
"dhcp-snoop-port":
{
"dhcp-snoop-port-trust": true,
"dhcp-snoop-port-client-limit": 16,
"dhcp-snoop-port-circuit-id": "1-5c17834a98a0-24"
},
"edge-port": false
},
{
"select-ports":
[
"Ethernet2"
],
"speed": 1000,
"duplex": "full",
"enabled": true,
"poe":
{
"admin-mode": true
},
"lacp-config":
{
"lacp-enable": false,
"lacp-role": "actor",
"lacp-mode": "passive",
"lacp-port-admin-key": 1,
"lacp-port-priority": 32768,
"lacp-system-priority": 32768,
"lacp-timeout": "long"
},
"lldp-interface-config":
{
"lldp-admin-status": "rx-tx",
"lldp-basic-tlv-mgmt-ip-v4": true,
"lldp-basic-tlv-mgmt-ip-v6": true,
"lldp-basic-tlv-port-descr": true,
"lldp-basic-tlv-sys-capab": true,
"lldp-basic-tlv-sys-descr": true,
"lldp-basic-tlv-sys-name": true,
"lldp-dot1-tlv-proto-ident": true,
"lldp-dot1-tlv-proto-vid": true,
"lldp-dot1-tlv-pvid": true,
"lldp-dot1-tlv-vlan-name": true,
"lldp-dot3-tlv-link-agg": true,
"lldp-dot3-tlv-mac-phy": true,
"lldp-dot3-tlv-max-frame": true,
"lldp-dot3-tlv-poe": true,
"lldp-med-location-civic-addr":
{
"lldp-med-location-civic-addr-admin-status": true,
"lldp-med-location-civic-country-code": "CA",
"lldp-med-location-civic-device-type": 1,
"lldp-med-location-civic-ca":
[
{
"lldp-med-location-civic-ca-type": 29,
"lldp-med-location-civic-ca-value": "Mike-WFH"
}
]
},
"lldp-med-notification": true,
"lldp-med-tlv-ext-poe": true,
"lldp-med-tlv-inventory": true,
"lldp-med-tlv-location": true,
"lldp-med-tlv-med-cap": true,
"lldp-med-tlv-network-policy": true,
"lldp-notification": true
},
"dhcp-snoop-port":
{
"dhcp-snoop-port-trust": true,
"dhcp-snoop-port-client-limit": 16,
"dhcp-snoop-port-circuit-id": "1-5c17834a98a0-3"
},
"edge-port": false
},
{
"select-ports":
[
"Ethernet24",
"Ethernet25",
"Ethernet26",
"Ethernet27"
],
"speed": 10000,
"duplex": "full",
"enabled": true,
"dhcp-snoop-port":
{
"dhcp-snoop-port-trust": true
},
"edge-port": false
},
{
"select-ports":
[
"Ethernet0"
],
"speed": 1000,
"duplex": "full",
"enabled": true,
"poe":
{
"admin-mode": true
},
"lacp-config":
{
"lacp-enable": false,
"lacp-role": "actor",
"lacp-mode": "passive",
"lacp-port-admin-key": 1,
"lacp-port-priority": 32768,
"lacp-system-priority": 32768,
"lacp-timeout": "long"
},
"lldp-interface-config":
{
"lldp-admin-status": "rx-tx",
"lldp-basic-tlv-mgmt-ip-v4": true,
"lldp-basic-tlv-mgmt-ip-v6": true,
"lldp-basic-tlv-port-descr": true,
"lldp-basic-tlv-sys-capab": true,
"lldp-basic-tlv-sys-descr": true,
"lldp-basic-tlv-sys-name": true,
"lldp-dot1-tlv-proto-ident": true,
"lldp-dot1-tlv-proto-vid": true,
"lldp-dot1-tlv-pvid": true,
"lldp-dot1-tlv-vlan-name": true,
"lldp-dot3-tlv-link-agg": true,
"lldp-dot3-tlv-mac-phy": true,
"lldp-dot3-tlv-max-frame": true,
"lldp-dot3-tlv-poe": true,
"lldp-med-location-civic-addr":
{
"lldp-med-location-civic-addr-admin-status": true,
"lldp-med-location-civic-country-code": "CA",
"lldp-med-location-civic-device-type": 1,
"lldp-med-location-civic-ca":
[
{
"lldp-med-location-civic-ca-type": 29,
"lldp-med-location-civic-ca-value": "Mike-WFH"
}
]
},
"lldp-med-notification": true,
"lldp-med-tlv-ext-poe": true,
"lldp-med-tlv-inventory": true,
"lldp-med-tlv-location": true,
"lldp-med-tlv-med-cap": true,
"lldp-med-tlv-network-policy": true,
"lldp-notification": true
},
"dhcp-snoop-port":
{
"dhcp-snoop-port-trust": true,
"dhcp-snoop-port-client-limit": 16,
"dhcp-snoop-port-circuit-id": "1-5c17834a98a0-1"
},
"edge-port": false
},
{
"select-ports":
[
"Ethernet4"
],
"speed": 1000,
"duplex": "full",
"enabled": true,
"poe":
{
"admin-mode": true
},
"lacp-config":
{
"lacp-enable": false,
"lacp-role": "actor",
"lacp-mode": "passive",
"lacp-port-admin-key": 1,
"lacp-port-priority": 32768,
"lacp-system-priority": 32768,
"lacp-timeout": "long"
},
"lldp-interface-config":
{
"lldp-admin-status": "rx-tx",
"lldp-basic-tlv-mgmt-ip-v4": true,
"lldp-basic-tlv-mgmt-ip-v6": true,
"lldp-basic-tlv-port-descr": true,
"lldp-basic-tlv-sys-capab": true,
"lldp-basic-tlv-sys-descr": true,
"lldp-basic-tlv-sys-name": true,
"lldp-dot1-tlv-proto-ident": true,
"lldp-dot1-tlv-proto-vid": true,
"lldp-dot1-tlv-pvid": true,
"lldp-dot1-tlv-vlan-name": true,
"lldp-dot3-tlv-link-agg": true,
"lldp-dot3-tlv-mac-phy": true,
"lldp-dot3-tlv-max-frame": true,
"lldp-dot3-tlv-poe": true,
"lldp-med-location-civic-addr":
{
"lldp-med-location-civic-addr-admin-status": true,
"lldp-med-location-civic-country-code": "CA",
"lldp-med-location-civic-device-type": 1,
"lldp-med-location-civic-ca":
[
{
"lldp-med-location-civic-ca-type": 29,
"lldp-med-location-civic-ca-value": "Mike-WFH"
}
]
},
"lldp-med-notification": true,
"lldp-med-tlv-ext-poe": true,
"lldp-med-tlv-inventory": true,
"lldp-med-tlv-location": true,
"lldp-med-tlv-med-cap": true,
"lldp-med-tlv-network-policy": true,
"lldp-notification": true
},
"dhcp-snoop-port":
{
"dhcp-snoop-port-trust": true,
"dhcp-snoop-port-client-limit": 16,
"dhcp-snoop-port-circuit-id": "1-5c17834a98a0-5"
},
"edge-port": false
},
{
"select-ports":
[
"Ethernet1",
"Ethernet3"
],
"speed": 1000,
"duplex": "full",
"enabled": true,
"poe":
{
"admin-mode": true
},
"lacp-config":
{
"lacp-enable": false,
"lacp-role": "actor",
"lacp-mode": "passive",
"lacp-port-admin-key": 1,
"lacp-port-priority": 32768,
"lacp-system-priority": 32768,
"lacp-timeout": "long"
},
"edge-port": false
},
{
"select-ports":
[
"Ethernet5",
"Ethernet7",
"Ethernet8",
"Ethernet9",
"Ethernet10",
"Ethernet11",
"Ethernet12",
"Ethernet13",
"Ethernet14",
"Ethernet15",
"Ethernet16",
"Ethernet17",
"Ethernet18",
"Ethernet19",
"Ethernet20",
"Ethernet21",
"Ethernet22"
],
"speed": 1000,
"duplex": "full",
"enabled": true,
"poe":
{
"admin-mode": true
},
"dhcp-snoop-port":
{
"dhcp-snoop-port-trust": true
},
"edge-port": false
},
{
"select-ports":
[
"Ethernet6"
],
"speed": 1000,
"duplex": "full",
"enabled": true,
"poe":
{
"admin-mode": true
},
"lacp-config":
{
"lacp-enable": false,
"lacp-role": "actor",
"lacp-mode": "passive",
"lacp-port-admin-key": 1,
"lacp-port-priority": 32768,
"lacp-system-priority": 32768,
"lacp-timeout": "long"
},
"lldp-interface-config":
{
"lldp-admin-status": "rx-tx",
"lldp-basic-tlv-mgmt-ip-v4": true,
"lldp-basic-tlv-mgmt-ip-v6": true,
"lldp-basic-tlv-port-descr": true,
"lldp-basic-tlv-sys-capab": true,
"lldp-basic-tlv-sys-descr": true,
"lldp-basic-tlv-sys-name": true,
"lldp-dot1-tlv-proto-ident": true,
"lldp-dot1-tlv-proto-vid": true,
"lldp-dot1-tlv-pvid": true,
"lldp-dot1-tlv-vlan-name": true,
"lldp-dot3-tlv-link-agg": true,
"lldp-dot3-tlv-mac-phy": true,
"lldp-dot3-tlv-max-frame": true,
"lldp-dot3-tlv-poe": true,
"lldp-med-location-civic-addr":
{
"lldp-med-location-civic-addr-admin-status": true,
"lldp-med-location-civic-country-code": "CA",
"lldp-med-location-civic-device-type": 1,
"lldp-med-location-civic-ca":
[
{
"lldp-med-location-civic-ca-type": 29,
"lldp-med-location-civic-ca-value": "Mike-WFH"
}
]
},
"lldp-med-notification": true,
"lldp-med-tlv-ext-poe": true,
"lldp-med-tlv-inventory": true,
"lldp-med-tlv-location": true,
"lldp-med-tlv-med-cap": true,
"lldp-med-tlv-network-policy": true,
"lldp-notification": true
},
"dhcp-snoop-port":
{
"dhcp-snoop-port-trust": true,
"dhcp-snoop-port-client-limit": 16,
"dhcp-snoop-port-circuit-id": "1-5c17834a98a0-7"
},
"edge-port": false
}
],
"switch":
{
"loop-detection":
{
"protocol": "stp",
"instances":
[
{
"enabled": true,
"priority": 32768,
"forward_delay": 15,
"hello_time": 2,
"max_age": 20
}
]
},
"trunk-balance-method": "src-dst-mac",
"jumbo-frames": false,
"dhcp-snooping":
{
"dhcp-snoop-enable": true,
"dhcp-snoop-rate-limit": 1000,
"dhcp-snoop-mac-verify": true,
"dhcp-snoop-inf-opt-82": true,
"dhcp-snoop-inf-opt-encode-subopt": true,
"dhcp-snoop-inf-opt-remoteid": "5c17834a98a0",
"dhcp-snoop-inf-opt-policy": "drop"
},
"lldp-global-config":
{
"lldp-enable": true,
"lldp-holdtime-multiplier": 3,
"lldp-med-fast-start-count": 5,
"lldp-refresh-interval": 60,
"lldp-reinit-delay": 5,
"lldp-tx-delay": 5,
"lldp-notification-interval": 10
},
"mc-lag": false,
"arp-inspect":
{
"ip-arp-inspect": false
}
},
"interfaces":
[
{
"name": "VLAN1",
"role": "upstream",
"services":
[
"lldp",
"ssh"
],
"vlan":
{
"id": 1,
"proto": "802.1q"
},
"ethernet":
[
{
"select-ports":
[
"Ethernet0",
"Ethernet1",
"Ethernet2",
"Ethernet3",
"Ethernet4",
"Ethernet5",
"Ethernet6",
"Ethernet7",
"Ethernet8",
"Ethernet9",
"Ethernet10",
"Ethernet11",
"Ethernet12",
"Ethernet13",
"Ethernet14",
"Ethernet15",
"Ethernet16",
"Ethernet17",
"Ethernet18",
"Ethernet19",
"Ethernet20",
"Ethernet21",
"Ethernet22",
"Ethernet23",
"Ethernet24",
"Ethernet25",
"Ethernet26",
"Ethernet27"
],
"vlan-tag": "un-tagged",
"pvid": true
}
],
"ipv4":
{
"addressing": "dynamic",
"send-hostname": true,
"dhcp-snoop-vlan-enable": true
}
}
],
"services":
{
"lldp":
{
"describe": "MJH-4150",
"location": "Mike-WFH"
},
"ssh":
{
"port": 22,
"password-authentication": true,
"enable": true
},
"log":
{
"host": "192.168.2.38",
"port": 514,
"proto": "udp",
"size": 1000,
"priority": 7
},
"snmp":
{
"enabled": true
}
},
"metrics":
{
"statistics":
{
"interval": 60,
"types":
[
"lldp",
"clients"
]
},
"health":
{
"interval": 60,
"dhcp-local": true,
"dhcp-remote": false,
"dns-local": true,
"dns-remote": true
}
}
}


@@ -1,70 +1,15 @@
{
"ethernet": [
{
"duplex": "full",
"enabled": false,
"speed": 1000,
"select-ports": [
"Ethernet0",
"Ethernet1",
"Ethernet2",
"Ethernet3",
"Ethernet4",
"Ethernet5",
"Ethernet6",
"Ethernet7",
"Ethernet8",
"Ethernet9",
"Ethernet10",
"Ethernet11",
"Ethernet12",
"Ethernet13",
"Ethernet14",
"Ethernet15",
"Ethernet16",
"Ethernet17",
"Ethernet18",
"Ethernet19",
"Ethernet20",
"Ethernet21",
"Ethernet22",
"Ethernet23",
"Ethernet24",
"Ethernet25",
"Ethernet26",
"Ethernet27",
"Ethernet28",
"Ethernet29",
"Ethernet30",
"Ethernet31",
"Ethernet32",
"Ethernet33",
"Ethernet34",
"Ethernet35",
"Ethernet36",
"Ethernet37",
"Ethernet38",
"Ethernet39",
"Ethernet40",
"Ethernet41",
"Ethernet42",
"Ethernet43",
"Ethernet44",
"Ethernet45",
"Ethernet46",
"Ethernet47",
"Ethernet48",
"Ethernet52",
"Ethernet56",
"Ethernet60",
"Ethernet64",
"Ethernet68",
"Ethernet72",
"Ethernet76"
]
}
],
"interfaces": [],
"services": {},
"uuid": 1
}
{
"ethernet": [
{
"duplex": "full",
"enabled": false,
"speed": 1000,
"select-ports": [
"Ethernet*"
]
}
],
"interfaces": [],
"services": {},
"uuid": 1
}


@@ -1,16 +1,16 @@
{
"ethernet": [
{
"duplex": "full",
"enabled": true,
"speed": 1000,
"select-ports": [
"Ethernet1",
"Ethernet2"
]
}
],
"interfaces": [],
"services": {},
"uuid": 1
}


@@ -1,23 +1,23 @@
{
"ethernet": [
{
"duplex": "full",
"enabled": true,
"speed": 1000,
"select-ports": [
"Ethernet1"
]
},
{
"duplex": "full",
"enabled": false,
"select-ports": [
"Ethernet2"
],
"speed": 1000
}
],
"interfaces": [],
"services": {},
"uuid": 2
}


@@ -1,35 +1,35 @@
{
"ethernet": [
{
"duplex": "full",
"enabled": true,
"speed": 1000,
"select-ports": [
"Ethernet1",
"Ethernet2"
]
}
],
"interfaces": [
{
"vlan": {
"id": 10,
"proto": "802.1q"
},
"ethernet": [
{
"select-ports": [
"Ethernet1",
"Ethernet2"
],
"vlan-tag": "tagged"
}
],
"name": "mgmt",
"role": "upstream",
"services": []
}
],
"services": {},
"uuid": 3
}


@@ -1,51 +1,51 @@
{
"ethernet": [
{
"duplex": "full",
"enabled": true,
"speed": 1000,
"select-ports": [
"Ethernet1",
"Ethernet2"
]
}
],
"interfaces": [
{
"vlan": {
"id": 10,
"proto": "802.1q"
},
"ethernet": [
{
"select-ports": [
"Ethernet1"
],
"vlan-tag": "tagged"
}
],
"name": "mgmt",
"role": "upstream",
"services": []
},
{
"vlan": {
"id": 100,
"proto": "802.1q"
},
"ethernet": [
{
"select-ports": [
"Ethernet2"
],
"vlan-tag": "tagged"
}
],
"name": "mgmt",
"role": "upstream",
"services": []
}
],
"services": {},
"uuid": 3
}


@@ -17,7 +17,11 @@
{
"ipv4": {
"addressing": "static",
"subnet": "20.20.20.20/24",
"subnet": [
{
"prefix": "20.20.20.20/24"
}
],
"dhcp": {
"relay-server": "172.20.254.8",
"circuit-id-format": "{Name}:{VLAN-ID}"
@@ -44,7 +48,11 @@
{
"ipv4": {
"addressing": "static",
"subnet": "30.30.30.30/24",
"subnet": [
{
"prefix": "30.30.30.30/24"
}
],
"dhcp": {
"relay-server": "172.20.10.12",
"circuit-id-format": "{Name}:{VLAN-ID}"
@@ -71,7 +79,11 @@
{
"ipv4": {
"addressing": "static",
"subnet": "172.20.10.181/24"
"subnet": [
{
"prefix": "172.20.10.181/24"
}
]
},
"vlan": {
"id": 20,


@@ -50,7 +50,11 @@
},
"ipv4": {
"addressing": "static",
"subnet": "10.10.20.100/24"
"subnet": [
{
"prefix": "10.10.20.100/24"
}
]
},
"ethernet": [
{
@@ -70,7 +74,11 @@
},
"ipv4": {
"addressing": "static",
"subnet": "10.10.50.100/24"
"subnet": [
{
"prefix": "10.10.50.100/24"
}
]
},
"ethernet": [
{


@@ -0,0 +1,64 @@
{
"ethernet": [
{
"select-ports": [
"Ethernet*"
],
"speed": 1000,
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true
}
}
],
"interfaces": [
{
"vlan": {
"id": 1,
"proto": "802.1q"
},
"ethernet": [
{
"select-ports": [
"Ethernet*"
],
"vlan-tag": "un-tagged"
}
],
"ipv4": {
"multicast": {
"igmp": {
"querier-enable": true,
"query-interval": 60,
"snooping-enable": true,
"version": 3,
"static-mcast-groups": [
{
"address": "230.1.1.1",
"egress-ports": [
"Ethernet1"
]
},
{
"address": "230.2.2.2",
"egress-ports": [
"Ethernet2",
"Ethernet3"
]
}
]
}
},
"subnet": [
{
"prefix": "1.1.1.1/24"
}
]
},
"role": "upstream",
"name": "mgmt-vlan"
}
],
"uuid": 1
}

View File

@@ -0,0 +1,530 @@
{
"ethernet": [
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true,
"detection": "2pt-dot3af",
"priority": "high"
},
"select-ports": [
"Ethernet0"
],
"speed": 1000
},
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true,
"detection": "2pt-dot3af",
"priority": "high"
},
"select-ports": [
"Ethernet1"
],
"speed": 1000
},
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true,
"detection": "2pt-dot3af",
"priority": "high"
},
"select-ports": [
"Ethernet2"
],
"speed": 1000
},
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true,
"detection": "2pt-dot3af",
"priority": "high"
},
"select-ports": [
"Ethernet3"
],
"speed": 1000
},
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true,
"detection": "2pt-dot3af",
"priority": "high"
},
"select-ports": [
"Ethernet4"
],
"speed": 1000
},
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true,
"detection": "2pt-dot3af",
"priority": "high"
},
"select-ports": [
"Ethernet5"
],
"speed": 1000
},
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true,
"detection": "2pt-dot3af",
"priority": "high"
},
"select-ports": [
"Ethernet6"
],
"speed": 1000
},
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true,
"detection": "2pt-dot3af",
"priority": "high"
},
"select-ports": [
"Ethernet7"
],
"speed": 1000
},
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true,
"detection": "2pt-dot3af",
"priority": "high"
},
"select-ports": [
"Ethernet8"
],
"speed": 1000
},
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true,
"detection": "2pt-dot3af",
"priority": "high"
},
"select-ports": [
"Ethernet9"
],
"speed": 1000
},
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true,
"detection": "2pt-dot3af",
"priority": "high"
},
"select-ports": [
"Ethernet10"
],
"speed": 1000
},
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true,
"detection": "2pt-dot3af",
"priority": "high"
},
"select-ports": [
"Ethernet11"
],
"speed": 1000
},
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true,
"detection": "2pt-dot3af",
"priority": "high"
},
"select-ports": [
"Ethernet12"
],
"speed": 1000
},
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true,
"detection": "2pt-dot3af",
"priority": "high"
},
"select-ports": [
"Ethernet13"
],
"speed": 1000
},
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true,
"detection": "2pt-dot3af",
"priority": "high"
},
"select-ports": [
"Ethernet14"
],
"speed": 1000
},
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true,
"detection": "2pt-dot3af",
"priority": "high"
},
"select-ports": [
"Ethernet15"
],
"speed": 1000
},
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true,
"detection": "2pt-dot3af",
"priority": "high"
},
"select-ports": [
"Ethernet16"
],
"speed": 1000
},
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true,
"detection": "2pt-dot3af",
"priority": "high"
},
"select-ports": [
"Ethernet17"
],
"speed": 1000
},
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true,
"detection": "2pt-dot3af",
"priority": "high"
},
"select-ports": [
"Ethernet18"
],
"speed": 1000
},
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true,
"detection": "2pt-dot3af",
"priority": "high"
},
"select-ports": [
"Ethernet19"
],
"speed": 1000
},
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true,
"detection": "2pt-dot3af",
"priority": "high"
},
"select-ports": [
"Ethernet20"
],
"speed": 1000
},
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true,
"detection": "2pt-dot3af",
"priority": "high"
},
"select-ports": [
"Ethernet21"
],
"speed": 1000
},
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true,
"detection": "2pt-dot3af",
"priority": "high"
},
"select-ports": [
"Ethernet22"
],
"speed": 1000
},
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true,
"detection": "2pt-dot3af",
"priority": "high"
},
"select-ports": [
"Ethernet23"
],
"speed": 1000
},
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true,
"detection": "2pt-dot3af",
"priority": "high"
},
"select-ports": [
"Ethernet24"
],
"speed": 10000
},
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true,
"detection": "2pt-dot3af",
"priority": "high"
},
"select-ports": [
"Ethernet25"
],
"speed": 10000
},
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true,
"detection": "2pt-dot3af",
"priority": "high"
},
"select-ports": [
"Ethernet26"
],
"speed": 10000
},
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true,
"detection": "2pt-dot3af",
"priority": "high"
},
"select-ports": [
"Ethernet27"
],
"speed": 10000
}
],
"interfaces": [
{
"ethernet": [
{
"select-ports": [
"Ethernet0",
"Ethernet5",
"Ethernet6",
"Ethernet7",
"Ethernet8",
"Ethernet9",
"Ethernet10",
"Ethernet11",
"Ethernet12",
"Ethernet13",
"Ethernet14",
"Ethernet15",
"Ethernet16",
"Ethernet17",
"Ethernet18",
"Ethernet19",
"Ethernet20",
"Ethernet21",
"Ethernet22",
"Ethernet23",
"Ethernet24",
"Ethernet25",
"Ethernet26",
"Ethernet27"
],
"vlan-tag": "un-tagged"
}
],
"ipv4": {
"addressing": "dynamic"
},
"name": "VLAN1",
"vlan": {
"id": 1
}
},
{
"ethernet": [
{ "pvid": true,
"select-ports": [
"Ethernet1",
"Ethernet2",
"Ethernet3",
"Ethernet4"
],
"vlan-tag": "un-tagged"
},
{
"select-ports": [
"Ethernet0"
],
"vlan-tag": "tagged"
}
],
"ipv4": {
"addressing": "static",
"subnet": [
{
"prefix": "10.1.12.157/24"
}
]
},
"name": "VLAN100",
"vlan": {
"id": 100,
"proto": "802.1q"
}
},
{
"ethernet": [
{
"select-ports": [
"Ethernet5",
"Ethernet6",
"Ethernet8"
],
"vlan-tag": "un-tagged"
}
],
"ipv4": {
"addressing": "static",
"multicast": {
"igmp": {
"fast-leave-enable": true,
"last-member-query-interval": 33,
"max-response-time": 11,
"querier-enable": true,
"query-interval": 14,
"snooping-enable": true,
"static-mcast-groups": [
{
"address": "229.229.229.1",
"egress-ports": [
"Ethernet5",
"Ethernet6",
"Ethernet8"
]
}
],
"version": 3
}
}
},
"role": "upstream",
"services": [
"ssh",
"lldp"
],
"vlan": {
"id": 500,
"proto": "802.1q"
}
}
],
"metrics": {
"dhcp-snooping": {
"filters": [
"ack",
"discover",
"offer",
"request",
"solicit",
"reply",
"renew"
]
},
"health": {
"interval": 60
},
"statistics": {
"interval": 300,
"types": ["lldp",
"clients"
]
}
},
"services": {
"http": {
"enable": true
},
"ssh": {
"enable": true
},
"lldp": {
"describe": "uCentral",
"location": "universe"
}
},
"unit": {
"leds-active": true,
"usage-threshold": 90
},
"uuid": 1719887774
}


@@ -0,0 +1,13 @@
{
"ethernet": [
{
"duplex": "full",
"enabled": true,
"speed": 1000,
"select-ports": ["Ethernet1"]
}
],
"interfaces": "not-an-array",
"services": {},
"uuid": 1
}


@@ -0,0 +1,13 @@
{
"ethernet": [
{
"duplex": "full",
"enabled": true,
"speed": 1000,
"select-ports": [
"Ethernet1"
]
}
],
"uuid": 1
}


@@ -0,0 +1,13 @@
{
"ethernet": [
{
"duplex": "full",
"enabled": true,
"speed": 1000,
"select-ports": ["Ethernet1"]
}
],
"interfaces": [],
"services": ["should-be-object"],
"uuid": 1
}


@@ -0,0 +1,11 @@
{
"ethernet": {
"duplex": "full",
"enabled": true,
"speed": 1000,
"select-ports": ["Ethernet1"]
},
"interfaces": [],
"services": {},
"uuid": 1
}

File diff suppressed because it is too large


@@ -0,0 +1,54 @@
{
"ethernet": [
{
"select-ports": [
"Ethernet*"
],
"speed": 1000,
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true
}
}
],
"interfaces": [
{
"vlan": {
"id": 1,
"proto": "802.1q"
},
"ethernet": [
{
"select-ports": [
"Ethernet*"
],
"vlan-tag": "un-tagged"
}
],
"role": "upstream",
"name": "mgmt-vlan"
}
],
"switch": {
"port-isolation": {
"sessions": [
{
"id": 1,
"uplink": {
"interface-list": [
"Ethernet1"
]
},
"downlink": {
"interface-list": [
"Ethernet2",
"Ethernet3"
]
}
}
]
}
},
"uuid": 1
}


@@ -0,0 +1,66 @@
{
"ethernet": [
{
"select-ports": [
"Ethernet*"
],
"speed": 1000,
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true
}
}
],
"interfaces": [
{
"vlan": {
"id": 1,
"proto": "802.1q"
},
"ethernet": [
{
"select-ports": [
"Ethernet*"
],
"vlan-tag": "un-tagged"
}
],
"role": "upstream",
"name": "mgmt-vlan"
},
{
"vlan": {
"id": 2,
"proto": "802.1q"
},
"ethernet": [
{
"select-ports": [
"Ethernet*"
],
"vlan-tag": "tagged"
}
],
"role": "upstream",
"name": "mgmt-vlan"
}
],
"switch": {
"loop-detection": {
"protocol": "rpvstp",
"instances": [
{
"id": 1,
"enabled": true,
"priority": 32768
},
{
"id": 2,
"enabled": false
}
]
}
},
"uuid": 1
}


@@ -0,0 +1,43 @@
{
"ethernet": [
{
"select-ports": [
"Ethernet*"
],
"speed": 1000,
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": true
}
}
],
"interfaces": [
{
"vlan": {
"id": 1,
"proto": "802.1q"
},
"ethernet": [
{
"select-ports": [
"Ethernet*"
],
"vlan-tag": "un-tagged"
}
],
"role": "upstream",
"name": "mgmt-vlan"
}
],
"services": {
"log": {
"port": 2000,
"priority": 7,
"size": 1000,
"host": "192.168.1.10",
"proto": "udp"
}
},
"uuid": 1
}

config-samples/cfg_stp_rstp.json Executable file

@@ -0,0 +1,165 @@
{
"ethernet": [
{
"duplex": "full",
"enabled": true,
"poe": {
"admin-mode": false,
"power-limit": 12345
},
"select-ports": [
"Ethernet1",
"Ethernet2"
],
"speed": 1000
}
],
"interfaces": [
{
"ethernet": [
{
"pvid": true,
"select-ports": [
"Ethernet1"
],
"vlan-tag": "un-tagged"
}
],
"ipv4": {
"addressing": "static",
"dhcp": {
"circuit-id-format": "{VLAN-ID}",
"relay-server": "192.168.5.1"
},
"subnet": [
{
"prefix": "192.168.2.254/24"
}
]
},
"name": "vlan_2",
"vlan": {
"id": 2
}
},
{
"ethernet": [
{
"pvid": true,
"select-ports": [
"Ethernet2"
],
"vlan-tag": "un-tagged"
}
],
"ipv4": {
"addressing": "static",
"dhcp": {
"circuit-id-format": "{VLAN-ID}",
"relay-server": "192.168.5.1"
},
"subnet": [
{
"prefix": "192.168.3.254/24"
}
]
},
"name": "vlan_3",
"vlan": {
"id": 3
}
},
{
"ethernet": [
{
"pvid": true,
"select-ports": [
"Ethernet4"
],
"vlan-tag": "un-tagged"
}
],
"ipv4": {
"addressing": "static",
"subnet": [
{
"prefix": "192.168.5.254/24"
}
]
},
"name": "vlan_5",
"vlan": {
"id": 5
}
},
{
"ethernet": [
{
"select-ports": [
"Ethernet8",
"Ethernet9"
],
"vlan-tag": "un-tagged"
}
],
"ipv4": {
"addressing": "dynamic"
},
"name": "vlan_1234",
"vlan": {
"id": 1234
}
}
],
"metrics": {
"dhcp-snooping": {
"filters": [
"ack",
"discover",
"offer",
"request",
"solicit",
"reply",
"renew"
]
},
"health": {
"interval": 600
},
"statistics": {
"interval": 1200,
"types": []
}
},
"services": {
"http": {
"enable": true
},
"ssh": {
"enable": true
},
"telnet": {
"enable": true
}
},
"switch": {
"loop-detection": {
"instances": [
{
"enabled": true,
"forward_delay": 15,
"hello_time": 3,
"id": 20,
"max_age": 20,
"priority": 32768
}
],
"protocol": "rstp"
}
},
"unit": {
"leds-active": true,
"usage-threshold": 95
},
"uuid": 1713842091
}


@@ -0,0 +1,96 @@
{
"ethernet": [
{
"duplex": "full",
"enabled": true,
"select-ports": [
"Ethernet*"
],
"speed": 1000
}
],
"interfaces": [
{
"ethernet": [
{
"select-ports": [
"Ethernet*"
],
"vlan-tag": "un-tagged"
}
],
"ipv4": {
"addressing": "dynamic"
},
"name": "VLAN1",
"vlan": {
"id": 1
}
},
{
"ethernet": [
{
"select-ports": [
"Ethernet1"
],
"vlan-tag": "un-tagged"
}
],
"ipv4": {
"voice-vlan-intf-config": {
"voice-vlan-intf-detect-voice": "lldp",
"voice-vlan-intf-mode": "auto",
"voice-vlan-intf-priority": 3,
"voice-vlan-intf-security": true
}
}
}
],
"metrics": {
"health": {
"interval": 300
},
"statistics": {
"interval": 300,
"types": []
}
},
"services": {
"http": {
"enable": true
},
"https": {
"enable": true
},
"ssh": {
"enable": false
},
"telnet": {
"enable": false
}
},
"switch": {
"voice-vlan-config": {
"voice-vlan-ageing-time": 1440,
"voice-vlan-enable": true,
"voice-vlan-id": 100,
"voice-vlan-oui-config": [
{
"voice-vlan-oui-description": "Cisco VoIP Phone",
"voice-vlan-oui-mac": "00:1B:44:11:3A:B7",
"voice-vlan-oui-mask": "FF:FF:FF:00:00:00"
},
{
"voice-vlan-oui-description": "Polycom VoIP Phone",
"voice-vlan-oui-mac": "00:0E:8F:12:34:56",
"voice-vlan-oui-mask": "FF:FF:FF:00:00:00"
}
]
}
},
"unit": {
"leds-active": true,
"usage-threshold": 90
},
"uuid": 1730796040
}

File diff suppressed because it is too large

File diff suppressed because it is too large

examples/pki-2.0/README.md Normal file

@@ -0,0 +1,355 @@
# PKI 2.0 Certificate Examples and Tools
This directory contains examples and tools for working with PKI 2.0 certificates in the uCentral client.
## Overview
PKI 2.0 uses a two-tier certificate system:
- **Birth Certificates**: Factory-provisioned, long-lived certificates used for EST enrollment
- **Operational Certificates**: Runtime-generated, shorter-lived certificates obtained via EST protocol
## Certificate Workflow
```
Factory/Manufacturing
        ↓
Birth Certificates Generated (openlan-pki-tools)
        ↓
Certificates Provisioned to Device (partition_script.sh)
        ↓
Device First Boot
        ↓
EST Enrollment (automatic via ucentral-client)
        ↓
Operational Certificates Obtained
        ↓
Device Runtime (uses operational certs)
        ↓
Certificate Renewal (via reenroll RPC)
```
## Birth Certificate Generation
Birth certificates are generated using the `openlan-pki-tools` repository.
### Prerequisites
```bash
# Clone openlan-pki-tools
git clone https://github.com/Telecominfraproject/openlan-pki-tools.git
cd openlan-pki-tools
```
### Generating Birth Certificates
The openlan-pki-tools repository provides scripts for generating birth certificates:
**For QA/Demo Environment:**
```bash
cd openlan-pki-tools
./scripts/generate-birth-cert.sh \
--mac "AA:BB:CC:DD:EE:FF" \
--serial "SN123456789" \
--ca demo \
--output /tmp/birth-certs/
```
**For Production Environment:**
```bash
./scripts/generate-birth-cert.sh \
--mac "AA:BB:CC:DD:EE:FF" \
--serial "SN123456789" \
--ca production \
--output /tmp/birth-certs/
```
### Birth Certificate Files
After generation, you'll have:
- `cert.pem` - Birth certificate (contains device identity)
- `key.pem` - Private key (keep secure!)
- `cas.pem` - CA certificate bundle
- `dev-id` - Device identifier file
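Before packaging, it is worth a quick sanity check that all four files are present and that `cert.pem` and `key.pem` actually belong together. A minimal sketch (the `check_birth_bundle` helper below is illustrative, not part of openlan-pki-tools):
```bash
#!/bin/sh
# Sanity-check a birth-certificate bundle directory (illustrative helper).
check_birth_bundle() {
    dir="$1"
    # All four files from the generation step must exist
    for f in cert.pem key.pem cas.pem dev-id; do
        [ -f "$dir/$f" ] || { echo "missing: $f"; return 1; }
    done
    # The certificate and private key must carry the same public key
    cert_pub=$(openssl x509 -in "$dir/cert.pem" -noout -pubkey)
    key_pub=$(openssl pkey -in "$dir/key.pem" -pubout 2>/dev/null)
    if [ "$cert_pub" = "$key_pub" ]; then
        echo "bundle OK"
    else
        echo "cert/key mismatch"
        return 1
    fi
}

# Example: check_birth_bundle /tmp/birth-certs
```
A mismatch here usually means files from two different generation runs were mixed into one bundle.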
## Installing Birth Certificates on Device
### Using partition_script.sh
```bash
# On your development machine, package the certificates
cd /tmp/birth-certs
tar -czf device-certs.tar.gz cert.pem key.pem cas.pem dev-id
# Copy to device
scp device-certs.tar.gz admin@<device-ip>:/tmp/
scp ../../partition_script.sh admin@<device-ip>:/tmp/
# On the device
ssh admin@<device-ip>
sudo su
cd /tmp
tar -xzf device-certs.tar.gz
bash ./partition_script.sh ./
# Reboot to mount the partition
reboot
```
## EST Enrollment (Automatic)
After installing birth certificates and rebooting, the uCentral client automatically:
1. Detects birth certificates in `/etc/ucentral/`
2. Determines EST server based on certificate issuer:
- "OpenLAN Demo Birth CA" → `qaest.certificates.open-lan.org:8001`
- "OpenLAN Birth Issuing CA" → `est.certificates.open-lan.org`
3. Performs EST simple enrollment using birth certificate
4. Saves operational certificate to `/etc/ucentral/operational.pem`
5. Saves operational CA to `/etc/ucentral/operational.ca`
6. Uses operational certificates for gateway connection
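The issuer-based server selection in step 2 can be sketched as a small shell helper (illustrative only; the actual logic lives in `est-client.c`, and the issuer names and server addresses are the ones listed above):
```bash
#!/bin/sh
# Map a birth-certificate issuer string to its EST server (sketch).
select_est_server() {
    case "$1" in
        *"OpenLAN Demo Birth CA"*)    echo "qaest.certificates.open-lan.org:8001" ;;
        *"OpenLAN Birth Issuing CA"*) echo "est.certificates.open-lan.org" ;;
        *)                            echo "unknown" ;;
    esac
}

# On a device, the issuer would come from the birth certificate:
#   issuer=$(openssl x509 -in /etc/ucentral/cert.pem -noout -issuer)
select_est_server "issuer=CN = OpenLAN Demo Birth CA"   # → qaest.certificates.open-lan.org:8001
```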
## Manual EST Enrollment Testing
For testing or manual certificate operations, you can use curl directly:
### Prerequisites
```bash
# Install curl and openssl
apt-get update
apt-get install -y curl openssl
```
### Test EST Enrollment
```bash
#!/bin/bash
# test-est-enrollment.sh
BIRTH_CERT="/etc/ucentral/cert.pem"
BIRTH_KEY="/etc/ucentral/key.pem"
CA_BUNDLE="/etc/ucentral/cas.pem"
EST_SERVER="qaest.certificates.open-lan.org:8001"
# Generate a CSR using the birth certificate's private key
openssl req -new -key $BIRTH_KEY -out /tmp/device.csr -subj "/CN=$(hostname)"
# Base64 encode CSR (no headers)
CSR_B64=$(openssl req -in /tmp/device.csr -outform DER | base64 | tr -d '\n')
# Perform EST simple enrollment
curl -v --cacert $CA_BUNDLE \
--cert $BIRTH_CERT \
--key $BIRTH_KEY \
-H "Content-Type: application/pkcs10" \
-H "Content-Transfer-Encoding: base64" \
--data "$CSR_B64" \
"https://${EST_SERVER}/.well-known/est/simpleenroll" \
-o /tmp/operational.p7
# Convert PKCS#7 to PEM
openssl pkcs7 -inform DER -in /tmp/operational.p7 -print_certs -out /tmp/operational.pem
echo "Operational certificate saved to /tmp/operational.pem"
```
### Test EST Re-enrollment
```bash
#!/bin/bash
# test-est-reenrollment.sh
OPERATIONAL_CERT="/etc/ucentral/operational.pem"
KEY="/etc/ucentral/key.pem"
CA_BUNDLE="/etc/ucentral/operational.ca"
EST_SERVER="qaest.certificates.open-lan.org:8001"
# Generate a renewal CSR using the device private key
openssl req -new -key $KEY -out /tmp/device-renew.csr -subj "/CN=$(hostname)"
# Base64 encode CSR
CSR_B64=$(openssl req -in /tmp/device-renew.csr -outform DER | base64 | tr -d '\n')
# Perform EST simple re-enrollment
curl -v --cacert $CA_BUNDLE \
--cert $OPERATIONAL_CERT \
--key $KEY \
-H "Content-Type: application/pkcs10" \
-H "Content-Transfer-Encoding: base64" \
--data "$CSR_B64" \
"https://${EST_SERVER}/.well-known/est/simplereenroll" \
-o /tmp/operational-renewed.p7
# Convert PKCS#7 to PEM
openssl pkcs7 -inform DER -in /tmp/operational-renewed.p7 -print_certs -out /tmp/operational-renewed.pem
echo "Renewed certificate saved to /tmp/operational-renewed.pem"
```
### Get CA Certificates
```bash
#!/bin/bash
# test-get-cacerts.sh
OPERATIONAL_CERT="/etc/ucentral/operational.pem"
KEY="/etc/ucentral/key.pem"
CA_BUNDLE="/etc/ucentral/operational.ca"
EST_SERVER="qaest.certificates.open-lan.org:8001"
# Fetch CA certificates from EST server
curl -v --cacert $CA_BUNDLE \
--cert $OPERATIONAL_CERT \
--key $KEY \
"https://${EST_SERVER}/.well-known/est/cacerts" \
-o /tmp/cacerts.p7
# Convert PKCS#7 to PEM
openssl pkcs7 -inform DER -in /tmp/cacerts.p7 -print_certs -out /tmp/cacerts.pem
echo "CA certificates saved to /tmp/cacerts.pem"
```
## Certificate Renewal via RPC
To renew operational certificates from the gateway, send a reenroll RPC command:
```json
{
"jsonrpc": "2.0",
"id": 123,
"method": "reenroll",
"params": {
"serial": "device_serial_number"
}
}
```
The device will:
1. Contact EST server with current operational certificate
2. Obtain renewed operational certificate
3. Save to `/etc/ucentral/operational.pem`
4. Restart after 10 seconds to use new certificate
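To confirm a reenroll actually produced a new certificate, compare serial numbers: the renewed certificate keeps the subject and public key but receives a fresh serial. A self-contained illustration using throwaway self-signed certificates as stand-ins for `operational.pem` and its renewed replacement:
```bash
#!/bin/sh
# Demonstrate serial-number comparison between an old and a renewed cert.
# Throwaway self-signed certs stand in for the real operational certs.
set -e
dir=$(mktemp -d)
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out "$dir/key.pem" 2>/dev/null
openssl req -x509 -new -key "$dir/key.pem" -subj "/CN=device" -days 30 -out "$dir/old.pem" 2>/dev/null
openssl req -x509 -new -key "$dir/key.pem" -subj "/CN=device" -days 60 -out "$dir/new.pem" 2>/dev/null
old_serial=$(openssl x509 -in "$dir/old.pem" -noout -serial)
new_serial=$(openssl x509 -in "$dir/new.pem" -noout -serial)
if [ "$old_serial" != "$new_serial" ]; then
    echo "renewed: serial changed"
fi
rm -rf "$dir"
```
On a real device, substitute the backed-up and renewed `operational.pem` files for the two throwaway certificates.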
## Certificate Inspection
### View Certificate Details
```bash
# View birth certificate
openssl x509 -in /etc/ucentral/cert.pem -text -noout
# View operational certificate
openssl x509 -in /etc/ucentral/operational.pem -text -noout
# Check certificate expiration
openssl x509 -in /etc/ucentral/operational.pem -noout -enddate
```
### Verify Certificate Chain
```bash
# Verify birth certificate against CA
openssl verify -CAfile /etc/ucentral/cas.pem /etc/ucentral/cert.pem
# Verify operational certificate against CA
openssl verify -CAfile /etc/ucentral/operational.ca /etc/ucentral/operational.pem
```
### Extract Certificate Information
```bash
# Get Common Name (CN)
openssl x509 -in /etc/ucentral/operational.pem -noout -subject | sed 's/.*CN = //'
# Get Issuer
openssl x509 -in /etc/ucentral/cert.pem -noout -issuer
# Get Serial Number
openssl x509 -in /etc/ucentral/operational.pem -noout -serial
```
## Troubleshooting
### EST Enrollment Fails
**Check EST server connectivity:**
```bash
curl -v https://qaest.certificates.open-lan.org:8001/.well-known/est/cacerts
```
**Verify birth certificates are valid:**
```bash
openssl x509 -in /etc/ucentral/cert.pem -text -noout
openssl verify -CAfile /etc/ucentral/cas.pem /etc/ucentral/cert.pem
```
**Check certificate issuer:**
```bash
openssl x509 -in /etc/ucentral/cert.pem -noout -issuer
# Should show: "OpenLAN Demo Birth CA" or "OpenLAN Birth Issuing CA"
```
### Operational Certificate Not Created
**Check uCentral client logs:**
```bash
journalctl -u ucentral-client -f
```
**Look for EST enrollment errors:**
```bash
grep -i "est\|enroll\|pki" /var/log/ucentral-client.log
```
**Manually test EST enrollment:**
```bash
cd /tmp
bash /path/to/examples/pki-2.0/test-est-enrollment.sh
```
### Certificate Expiration
**Check expiration dates:**
```bash
echo "Birth certificate:"
openssl x509 -in /etc/ucentral/cert.pem -noout -enddate
echo "Operational certificate:"
openssl x509 -in /etc/ucentral/operational.pem -noout -enddate
```
**Setup expiration monitoring:**
```bash
# Check if operational cert expires in less than 30 days
CERT_FILE="/etc/ucentral/operational.pem"
EXPIRE_DATE=$(openssl x509 -in $CERT_FILE -noout -enddate | cut -d= -f2)
EXPIRE_EPOCH=$(date -d "$EXPIRE_DATE" +%s)
NOW_EPOCH=$(date +%s)
DAYS_LEFT=$(( ($EXPIRE_EPOCH - $NOW_EPOCH) / 86400 ))
if [ $DAYS_LEFT -lt 30 ]; then
echo "WARNING: Certificate expires in $DAYS_LEFT days"
echo "Consider triggering reenroll RPC command"
fi
```
## Production Checklist
Before deploying to production:
- [ ] Birth certificates generated with production CA
- [ ] Certificates securely stored during manufacturing
- [ ] Device partition properly configured
- [ ] EST server URL matches certificate issuer
- [ ] Operational certificate automatically obtained on first boot
- [ ] Gateway can reach device for reenroll RPC
- [ ] Certificate expiration monitoring in place
- [ ] Backup/recovery procedure documented
## Additional Resources
- **Main README**: `../../README.md` - Certificate architecture overview
- **openlan-pki-tools**: https://github.com/Telecominfraproject/openlan-pki-tools
- **EST RFC 7030**: https://tools.ietf.org/html/rfc7030
- **est-client.c**: `../../src/ucentral-client/est-client.c` - EST client implementation
- **partition_script.sh**: `../../partition_script.sh` - Certificate partition tool


@@ -0,0 +1,171 @@
#!/bin/bash
#
# Test EST Enrollment Script
#
# This script demonstrates manual EST enrollment using birth certificates
# to obtain an operational certificate from the EST server.
#
# Usage: ./test-est-enrollment.sh [est-server]
#
set -e
# Configuration
BIRTH_CERT="${BIRTH_CERT:-/etc/ucentral/cert.pem}"
BIRTH_KEY="${BIRTH_KEY:-/etc/ucentral/key.pem}"
CA_BUNDLE="${CA_BUNDLE:-/etc/ucentral/cas.pem}"
EST_SERVER="${1:-qaest.certificates.open-lan.org:8001}"
OUTPUT_DIR="${OUTPUT_DIR:-/tmp}"
echo "========================================="
echo "EST Enrollment Test"
echo "========================================="
echo "Birth Certificate: $BIRTH_CERT"
echo "Birth Key: $BIRTH_KEY"
echo "CA Bundle: $CA_BUNDLE"
echo "EST Server: $EST_SERVER"
echo "Output Directory: $OUTPUT_DIR"
echo ""
# Verify birth certificates exist
if [ ! -f "$BIRTH_CERT" ]; then
echo "ERROR: Birth certificate not found: $BIRTH_CERT"
exit 1
fi
if [ ! -f "$BIRTH_KEY" ]; then
echo "ERROR: Birth key not found: $BIRTH_KEY"
exit 1
fi
if [ ! -f "$CA_BUNDLE" ]; then
echo "ERROR: CA bundle not found: $CA_BUNDLE"
exit 1
fi
# Verify birth certificate is valid
echo "Verifying birth certificate..."
if ! openssl verify -CAfile "$CA_BUNDLE" "$BIRTH_CERT" > /dev/null 2>&1; then
echo "WARNING: Birth certificate verification failed"
fi
# Extract Common Name from birth certificate
CN=$(openssl x509 -in "$BIRTH_CERT" -noout -subject | sed 's/.*CN = //')
echo "Device Common Name: $CN"
echo ""
# Generate CSR using the birth certificate's private key
echo "Generating Certificate Signing Request (CSR)..."
CSR_FILE="$OUTPUT_DIR/device.csr"
openssl req -new -key "$BIRTH_KEY" -out "$CSR_FILE" -subj "/CN=$CN"
if [ $? -ne 0 ]; then
echo "ERROR: Failed to generate CSR"
exit 1
fi
echo "CSR generated: $CSR_FILE"
echo ""
# Base64 encode CSR (no headers, DER format)
echo "Encoding CSR to base64..."
CSR_B64=$(openssl req -in "$CSR_FILE" -outform DER | base64 | tr -d '\n')
if [ -z "$CSR_B64" ]; then
echo "ERROR: Failed to encode CSR"
exit 1
fi
echo "CSR encoded successfully"
echo ""
# Perform EST simple enrollment
echo "Performing EST simple enrollment..."
echo "Contacting EST server: https://$EST_SERVER/.well-known/est/simpleenroll"
echo ""
PKCS7_FILE="$OUTPUT_DIR/operational.p7"
# Capture curl's exit code explicitly; under `set -e` a bare failure would
# abort the script before the error report below could run.
CURL_EXIT=0
curl -v --cacert "$CA_BUNDLE" \
--cert "$BIRTH_CERT" \
--key "$BIRTH_KEY" \
-H "Content-Type: application/pkcs10" \
-H "Content-Transfer-Encoding: base64" \
--data "$CSR_B64" \
"https://${EST_SERVER}/.well-known/est/simpleenroll" \
-o "$PKCS7_FILE" || CURL_EXIT=$?
echo ""
if [ $CURL_EXIT -ne 0 ]; then
echo "ERROR: EST enrollment failed with curl exit code: $CURL_EXIT"
echo ""
echo "Troubleshooting:"
echo "1. Verify EST server is reachable:"
echo " curl -I https://$EST_SERVER/.well-known/est/cacerts"
echo "2. Check birth certificate is valid and not expired"
echo "3. Verify CA bundle contains the correct root CA"
exit 1
fi
if [ ! -f "$PKCS7_FILE" ] || [ ! -s "$PKCS7_FILE" ]; then
echo "ERROR: No response received from EST server"
exit 1
fi
echo "EST enrollment response received"
echo ""
# Convert PKCS#7 to PEM
echo "Converting PKCS#7 response to PEM format..."
OPERATIONAL_CERT="$OUTPUT_DIR/operational.pem"
if ! openssl pkcs7 -inform DER -in "$PKCS7_FILE" -print_certs -out "$OPERATIONAL_CERT"; then
echo "ERROR: Failed to convert PKCS#7 to PEM"
exit 1
fi
# Verify operational certificate was created
if [ ! -f "$OPERATIONAL_CERT" ] || [ ! -s "$OPERATIONAL_CERT" ]; then
echo "ERROR: Failed to create operational certificate"
exit 1
fi
echo "Operational certificate created: $OPERATIONAL_CERT"
echo ""
# Display certificate information
echo "========================================="
echo "Operational Certificate Information"
echo "========================================="
openssl x509 -in "$OPERATIONAL_CERT" -noout -text | head -30
echo ""
# Display expiration date
echo "Certificate Expiration:"
openssl x509 -in "$OPERATIONAL_CERT" -noout -enddate
echo ""
# Verify operational certificate
echo "Verifying operational certificate..."
if openssl verify -CAfile "$CA_BUNDLE" "$OPERATIONAL_CERT" > /dev/null 2>&1; then
echo "✓ Certificate verification successful"
else
echo "⚠ Certificate verification failed (may need operational CA bundle)"
fi
echo ""
echo "========================================="
echo "EST Enrollment Test Complete"
echo "========================================="
echo ""
echo "Next steps:"
echo "1. Copy operational certificate to /etc/ucentral/operational.pem"
echo "2. Copy operational CA bundle to /etc/ucentral/operational.ca"
echo "3. Restart ucentral-client service"
echo ""
echo "Commands:"
echo " sudo cp $OPERATIONAL_CERT /etc/ucentral/operational.pem"
echo " sudo cp $CA_BUNDLE /etc/ucentral/operational.ca"
echo " sudo systemctl restart ucentral-client"


@@ -0,0 +1,195 @@
#!/bin/bash
#
# Test EST Re-enrollment Script
#
# This script demonstrates manual EST re-enrollment using an existing
# operational certificate to obtain a renewed operational certificate.
#
# Usage: ./test-est-reenrollment.sh [est-server]
#
set -e
# Configuration
OPERATIONAL_CERT="${OPERATIONAL_CERT:-/etc/ucentral/operational.pem}"
KEY="${KEY:-/etc/ucentral/key.pem}"
CA_BUNDLE="${CA_BUNDLE:-/etc/ucentral/operational.ca}"
EST_SERVER="${1:-qaest.certificates.open-lan.org:8001}"
OUTPUT_DIR="${OUTPUT_DIR:-/tmp}"
echo "========================================="
echo "EST Re-enrollment Test"
echo "========================================="
echo "Operational Certificate: $OPERATIONAL_CERT"
echo "Private Key: $KEY"
echo "CA Bundle: $CA_BUNDLE"
echo "EST Server: $EST_SERVER"
echo "Output Directory: $OUTPUT_DIR"
echo ""
# Verify operational certificate exists
if [ ! -f "$OPERATIONAL_CERT" ]; then
echo "ERROR: Operational certificate not found: $OPERATIONAL_CERT"
echo ""
echo "Run test-est-enrollment.sh first to obtain an operational certificate"
exit 1
fi
if [ ! -f "$KEY" ]; then
echo "ERROR: Private key not found: $KEY"
exit 1
fi
if [ ! -f "$CA_BUNDLE" ]; then
echo "ERROR: CA bundle not found: $CA_BUNDLE"
echo "Attempting to use cas.pem as fallback..."
CA_BUNDLE="/etc/ucentral/cas.pem"
if [ ! -f "$CA_BUNDLE" ]; then
echo "ERROR: No CA bundle found"
exit 1
fi
fi
# Display current certificate information
echo "Current Operational Certificate Information:"
echo "Subject: $(openssl x509 -in "$OPERATIONAL_CERT" -noout -subject)"
echo "Issuer: $(openssl x509 -in "$OPERATIONAL_CERT" -noout -issuer)"
echo "Valid until: $(openssl x509 -in "$OPERATIONAL_CERT" -noout -enddate)"
echo ""
# Verify operational certificate is valid
echo "Verifying current operational certificate..."
if ! openssl verify -CAfile "$CA_BUNDLE" "$OPERATIONAL_CERT" > /dev/null 2>&1; then
echo "WARNING: Operational certificate verification failed"
fi
# Extract Common Name from operational certificate
CN=$(openssl x509 -in "$OPERATIONAL_CERT" -noout -subject | sed 's/.*CN = //')
echo "Device Common Name: $CN"
echo ""
# Generate CSR for renewal
echo "Generating Certificate Signing Request (CSR) for renewal..."
CSR_FILE="$OUTPUT_DIR/device-renew.csr"
openssl req -new -key "$KEY" -out "$CSR_FILE" -subj "/CN=$CN"
if [ $? -ne 0 ]; then
echo "ERROR: Failed to generate CSR"
exit 1
fi
echo "CSR generated: $CSR_FILE"
echo ""
# Base64 encode CSR (no headers, DER format)
echo "Encoding CSR to base64..."
CSR_B64=$(openssl req -in "$CSR_FILE" -outform DER | base64 | tr -d '\n')
if [ -z "$CSR_B64" ]; then
echo "ERROR: Failed to encode CSR"
exit 1
fi
echo "CSR encoded successfully"
echo ""
# Perform EST simple re-enrollment
echo "Performing EST simple re-enrollment..."
echo "Contacting EST server: https://$EST_SERVER/.well-known/est/simplereenroll"
echo ""
PKCS7_FILE="$OUTPUT_DIR/operational-renewed.p7"
# Capture curl's exit code explicitly; under `set -e` a bare failure would
# abort the script before the error report below could run.
CURL_EXIT=0
curl -v --cacert "$CA_BUNDLE" \
--cert "$OPERATIONAL_CERT" \
--key "$KEY" \
-H "Content-Type: application/pkcs10" \
-H "Content-Transfer-Encoding: base64" \
--data "$CSR_B64" \
"https://${EST_SERVER}/.well-known/est/simplereenroll" \
-o "$PKCS7_FILE" || CURL_EXIT=$?
echo ""
if [ $CURL_EXIT -ne 0 ]; then
echo "ERROR: EST re-enrollment failed with curl exit code: $CURL_EXIT"
echo ""
echo "Troubleshooting:"
echo "1. Verify EST server is reachable:"
echo " curl -I https://$EST_SERVER/.well-known/est/cacerts"
echo "2. Check operational certificate is valid and not expired"
echo "3. Verify CA bundle contains the correct root CA"
echo "4. Check if operational certificate is trusted by EST server"
exit 1
fi
if [ ! -f "$PKCS7_FILE" ] || [ ! -s "$PKCS7_FILE" ]; then
echo "ERROR: No response received from EST server"
exit 1
fi
echo "EST re-enrollment response received"
echo ""
# Convert PKCS#7 to PEM
echo "Converting PKCS#7 response to PEM format..."
RENEWED_CERT="$OUTPUT_DIR/operational-renewed.pem"
if ! openssl pkcs7 -inform DER -in "$PKCS7_FILE" -print_certs -out "$RENEWED_CERT"; then
echo "ERROR: Failed to convert PKCS#7 to PEM"
exit 1
fi
# Verify renewed certificate was created
if [ ! -f "$RENEWED_CERT" ] || [ ! -s "$RENEWED_CERT" ]; then
echo "ERROR: Failed to create renewed certificate"
exit 1
fi
echo "Renewed operational certificate created: $RENEWED_CERT"
echo ""
# Display renewed certificate information
echo "========================================="
echo "Renewed Certificate Information"
echo "========================================="
openssl x509 -in "$RENEWED_CERT" -noout -text | head -30
echo ""
# Display expiration date
echo "New Certificate Expiration:"
openssl x509 -in "$RENEWED_CERT" -noout -enddate
echo ""
# Compare old and new expiration dates
OLD_EXPIRE=$(openssl x509 -in "$OPERATIONAL_CERT" -noout -enddate | cut -d= -f2)
NEW_EXPIRE=$(openssl x509 -in "$RENEWED_CERT" -noout -enddate | cut -d= -f2)
echo "Certificate Renewal Comparison:"
echo " Old expiration: $OLD_EXPIRE"
echo " New expiration: $NEW_EXPIRE"
echo ""
# Verify renewed certificate
echo "Verifying renewed certificate..."
if openssl verify -CAfile "$CA_BUNDLE" "$RENEWED_CERT" > /dev/null 2>&1; then
echo "✓ Certificate verification successful"
else
echo "⚠ Certificate verification failed"
fi
echo ""
echo "========================================="
echo "EST Re-enrollment Test Complete"
echo "========================================="
echo ""
echo "Next steps:"
echo "1. Backup current operational certificate"
echo "2. Replace with renewed certificate"
echo "3. Restart ucentral-client service"
echo ""
echo "Commands:"
echo " sudo cp $OPERATIONAL_CERT ${OPERATIONAL_CERT}.backup"
echo " sudo cp $RENEWED_CERT $OPERATIONAL_CERT"
echo " sudo systemctl restart ucentral-client"


@@ -0,0 +1,181 @@
#!/bin/bash
#
# Test EST Get CA Certificates Script
#
# This script demonstrates retrieving CA certificates from an EST server.
# The CA certificates are needed to verify operational certificates.
#
# Usage: ./test-get-cacerts.sh [est-server]
#
set -e
# Configuration
OPERATIONAL_CERT="${OPERATIONAL_CERT:-/etc/ucentral/operational.pem}"
KEY="${KEY:-/etc/ucentral/key.pem}"
CA_BUNDLE="${CA_BUNDLE:-/etc/ucentral/operational.ca}"
EST_SERVER="${1:-qaest.certificates.open-lan.org:8001}"
OUTPUT_DIR="${OUTPUT_DIR:-/tmp}"
echo "========================================="
echo "EST Get CA Certificates Test"
echo "========================================="
echo "Operational Certificate: $OPERATIONAL_CERT"
echo "Private Key: $KEY"
echo "CA Bundle: $CA_BUNDLE"
echo "EST Server: $EST_SERVER"
echo "Output Directory: $OUTPUT_DIR"
echo ""
# Verify operational certificate exists
if [ ! -f "$OPERATIONAL_CERT" ]; then
echo "WARNING: Operational certificate not found: $OPERATIONAL_CERT"
echo "Attempting to use birth certificate..."
OPERATIONAL_CERT="/etc/ucentral/cert.pem"
if [ ! -f "$OPERATIONAL_CERT" ]; then
echo "ERROR: No certificate found for authentication"
exit 1
fi
fi
if [ ! -f "$KEY" ]; then
echo "ERROR: Private key not found: $KEY"
exit 1
fi
# CA bundle may not exist yet, use birth CA as fallback
if [ ! -f "$CA_BUNDLE" ]; then
echo "WARNING: CA bundle not found: $CA_BUNDLE"
echo "Attempting to use birth CA bundle..."
CA_BUNDLE="/etc/ucentral/cas.pem"
if [ ! -f "$CA_BUNDLE" ]; then
echo "ERROR: No CA bundle found"
exit 1
fi
fi
echo "Using certificate: $OPERATIONAL_CERT"
echo "Using CA bundle: $CA_BUNDLE"
echo ""
# Fetch CA certificates from EST server
echo "Fetching CA certificates from EST server..."
echo "Contacting: https://$EST_SERVER/.well-known/est/cacerts"
echo ""
PKCS7_FILE="$OUTPUT_DIR/cacerts.p7"
curl -v --cacert "$CA_BUNDLE" \
--cert "$OPERATIONAL_CERT" \
--key "$KEY" \
"https://${EST_SERVER}/.well-known/est/cacerts" \
-o "$PKCS7_FILE"
CURL_EXIT=$?
echo ""
if [ $CURL_EXIT -ne 0 ]; then
echo "ERROR: Failed to fetch CA certificates, curl exit code: $CURL_EXIT"
echo ""
echo "Troubleshooting:"
echo "1. Verify EST server is reachable:"
echo " curl -I https://$EST_SERVER/.well-known/est/cacerts"
echo "2. Check certificate is valid for authentication"
echo "3. Verify CA bundle contains the correct root CA"
exit 1
fi
if [ ! -f "$PKCS7_FILE" ] || [ ! -s "$PKCS7_FILE" ]; then
echo "ERROR: No response received from EST server"
exit 1
fi
echo "CA certificates response received"
echo ""
# Convert PKCS#7 to PEM
echo "Converting PKCS#7 response to PEM format..."
CACERTS_PEM="$OUTPUT_DIR/cacerts.pem"
# Use an if-guard rather than checking $? afterwards: with `set -e`, a
# failing openssl would exit the script before the $? test could run.
if ! openssl pkcs7 -inform DER -in "$PKCS7_FILE" -print_certs -out "$CACERTS_PEM"; then
echo "ERROR: Failed to convert PKCS#7 to PEM"
exit 1
fi
# Verify CA certificates were extracted
if [ ! -f "$CACERTS_PEM" ] || [ ! -s "$CACERTS_PEM" ]; then
echo "ERROR: Failed to extract CA certificates"
exit 1
fi
echo "CA certificates extracted: $CACERTS_PEM"
echo ""
# Count number of certificates
CERT_COUNT=$(grep -c "BEGIN CERTIFICATE" "$CACERTS_PEM" || true)
echo "Number of CA certificates: $CERT_COUNT"
echo ""
# Display information about each certificate
echo "========================================="
echo "CA Certificates Information"
echo "========================================="
echo ""
# Split certificates and display info for each
# '{*}' is a GNU csplit extension; tolerate a failed split (e.g. zero certs)
# so `set -e` does not kill the report mid-way.
csplit -s -f "$OUTPUT_DIR/ca-" "$CACERTS_PEM" '/-----BEGIN CERTIFICATE-----/' '{*}' || true
for cert_file in "$OUTPUT_DIR"/ca-*; do
if [ -f "$cert_file" ] && [ -s "$cert_file" ]; then
if grep -q "BEGIN CERTIFICATE" "$cert_file" 2>/dev/null; then
echo "Certificate:"
echo " Subject: $(openssl x509 -in "$cert_file" -noout -subject 2>/dev/null || echo 'N/A')"
echo " Issuer: $(openssl x509 -in "$cert_file" -noout -issuer 2>/dev/null || echo 'N/A')"
echo " Valid until: $(openssl x509 -in "$cert_file" -noout -enddate 2>/dev/null || echo 'N/A')"
echo ""
fi
rm -f "$cert_file"
fi
done
# Test verification with the new CA bundle
echo "========================================="
echo "Certificate Verification Test"
echo "========================================="
echo ""
if [ -f "/etc/ucentral/operational.pem" ]; then
echo "Testing operational certificate verification with retrieved CA bundle..."
if openssl verify -CAfile "$CACERTS_PEM" "/etc/ucentral/operational.pem" > /dev/null 2>&1; then
echo "✓ Operational certificate verification successful"
else
echo "⚠ Operational certificate verification failed"
echo " This may be normal if the operational cert was issued by a different CA"
fi
echo ""
fi
if [ -f "/etc/ucentral/cert.pem" ]; then
echo "Testing birth certificate verification with retrieved CA bundle..."
if openssl verify -CAfile "$CACERTS_PEM" "/etc/ucentral/cert.pem" > /dev/null 2>&1; then
echo "✓ Birth certificate verification successful"
else
echo "⚠ Birth certificate verification failed"
echo " This may be normal if the birth cert was issued by a different CA"
fi
echo ""
fi
echo "========================================="
echo "EST Get CA Certificates Test Complete"
echo "========================================="
echo ""
echo "CA certificates saved to: $CACERTS_PEM"
echo ""
echo "Next steps:"
echo "1. Review the CA certificates"
echo "2. Update operational CA bundle if needed"
echo ""
echo "Commands:"
echo " cat $CACERTS_PEM"
echo " sudo cp $CACERTS_PEM /etc/ucentral/operational.ca"

View File

@@ -1,5 +1,9 @@
#!/bin/sh
# PKI 2.0: This script provisions BIRTH certificates to the device partition.
# Birth certificates (cas.pem, cert.pem, key.pem) are used for initial EST enrollment.
# Operational certificates (operational.pem, operational.ca) are generated at runtime
# via EST protocol and stored in /etc/ucentral/ by the ucentral-client daemon.
REQUIRED_CERT_FILES="cas.pem cert.pem key.pem dev-id"
function partition_replace_certs()

run-config-tests.sh · 384 lines · Executable file
View File

@@ -0,0 +1,384 @@
#!/bin/bash
#
# run-config-tests.sh - Run uCentral configuration tests in Docker
#
# Usage: ./run-config-tests.sh [OPTIONS] [config-file]
#
# Options:
# -m, --mode MODE Test mode: stub or platform (default: stub)
# stub = Fast testing with stubs (proto.c only)
# platform = Full integration testing with platform code
# -p, --platform NAME Platform name for platform mode (default: brcm-sonic)
# Examples: brcm-sonic, ec, example
# -f, --format FORMAT Output format: html, json, human (default: human)
# -h, --help Show this help message
#
# Arguments:
# config-file Optional specific config file to test (default: all configs)
#
# Examples:
# ./run-config-tests.sh # Stub mode, all configs, human output
# ./run-config-tests.sh --mode platform # Platform mode (brcm-sonic), all configs
# ./run-config-tests.sh -m platform -p ec --format html # Platform mode (ec), HTML report
# ./run-config-tests.sh --format json cfg0.json # Stub mode, single config, JSON output
# ./run-config-tests.sh -m platform -f human cfg1.json # Platform mode, single config
#
# Test Modes:
# Stub Mode (default):
# - Fast execution
# - Tests proto.c parsing only
# - Uses simple platform stubs
# - Shows base properties only
# - Use for quick validation and CI/CD
#
# Platform Mode:
# - Integration testing
# - Tests proto.c + platform code (plat-*.c)
# - Uses real platform implementation + mocks
# - Shows base AND platform properties separately
# - Use for platform-specific validation
#
set -e
# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Configuration
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CONTAINER_NAME="ucentral_client_build_env"
BUILD_DIR="/root/ols-nos/tests/config-parser"
CONFIG_DIR="/root/ols-nos/config-samples"
OUTPUT_DIR="$SCRIPT_DIR/output"
DOCKERFILE_PATH="$SCRIPT_DIR/Dockerfile"
# Default values
TEST_MODE="stub"
PLATFORM_NAME="brcm-sonic"
FORMAT="human"
SINGLE_CONFIG=""
# Function to show help
show_help() {
sed -n '2,39p' "$0" | sed 's/^# \?//'
exit 0
}
# Parse arguments
while [[ $# -gt 0 ]]; do
case "$1" in
-h|--help)
show_help
;;
-m|--mode)
TEST_MODE="$2"
shift 2
;;
-p|--platform)
PLATFORM_NAME="$2"
shift 2
;;
-f|--format)
FORMAT="$2"
shift 2
;;
-*)
echo -e "${RED}Error: Unknown option '$1'${NC}"
echo "Use --help to see usage information"
exit 1
;;
*)
# Assume it's the config file
SINGLE_CONFIG="$1"
shift
;;
esac
done
# Validate test mode
case "$TEST_MODE" in
stub|platform)
;;
*)
echo -e "${RED}Error: Invalid mode '$TEST_MODE'. Must be 'stub' or 'platform'${NC}"
echo "Use --help to see usage information"
exit 1
;;
esac
# Validate format
case "$FORMAT" in
html|json|human)
;;
*)
echo -e "${RED}Error: Invalid format '$FORMAT'. Must be 'html', 'json', or 'human'${NC}"
echo "Use --help to see usage information"
exit 1
;;
esac
# Function to print status messages
print_status() {
echo -e "${BLUE}==>${NC} $1"
}
print_success() {
echo -e "${GREEN}✓${NC} $1"
}
print_warning() {
echo -e "${YELLOW}⚠${NC} $1"
}
print_error() {
echo -e "${RED}✗${NC} $1"
}
# Function to check if Docker is running
check_docker() {
if ! docker info > /dev/null 2>&1; then
print_error "Docker is not running. Please start Docker and try again."
exit 1
fi
print_success "Docker is running"
}
# Function to check if container exists
container_exists() {
docker ps -a --format '{{.Names}}' | grep -q "^${CONTAINER_NAME}$"
}
# Function to check if container is running
container_running() {
docker ps --format '{{.Names}}' | grep -q "^${CONTAINER_NAME}$"
}
# Function to get Dockerfile SHA
get_dockerfile_sha() {
if [ -f "$DOCKERFILE_PATH" ]; then
shasum -a 1 "$DOCKERFILE_PATH" | awk '{print $1}' | cut -c1-8
else
echo "unknown"
fi
}
# Function to build Docker environment if needed
build_environment() {
local current_sha=$(get_dockerfile_sha)
local image_tag="ucentral-build-env:${current_sha}"
# Check if image exists
if docker images --format '{{.Repository}}:{{.Tag}}' | grep -q "^${image_tag}$"; then
print_success "Build environment image already exists (${image_tag})"
return 0
fi
print_status "Building Docker build environment..."
print_status "This may take several minutes on first run..."
if make build-host-env; then
print_success "Build environment created"
else
print_error "Failed to build environment"
exit 1
fi
}
# Function to start container if not running
start_container() {
if container_running; then
print_success "Container is already running"
return 0
fi
if container_exists; then
print_status "Starting existing container..."
docker start "$CONTAINER_NAME" > /dev/null
print_success "Container started"
else
print_status "Creating and starting new container..."
if make run-host-env; then
print_success "Container created and started"
else
print_error "Failed to start container"
exit 1
fi
fi
# Wait for container to be ready
sleep 2
}
# Function to run tests in Docker
run_tests() {
local test_cmd=""
local build_cmd=""
local output_file=""
local copy_files=()
local use_platform_flag=""
# Set platform flag for build commands
if [ "$TEST_MODE" = "platform" ]; then
use_platform_flag="USE_PLATFORM=$PLATFORM_NAME"
print_status "Test mode: Platform ($PLATFORM_NAME)"
else
print_status "Test mode: Stub (fast)"
fi
if [ -n "$SINGLE_CONFIG" ]; then
print_status "Running test for single config: $SINGLE_CONFIG"
# Verify config exists in container
if ! docker exec "$CONTAINER_NAME" bash -c "test -f $CONFIG_DIR/$SINGLE_CONFIG"; then
print_error "Config file not found in container: $SINGLE_CONFIG"
print_status "Available configs:"
docker exec "$CONTAINER_NAME" bash -c "ls $CONFIG_DIR/*.json 2>/dev/null | xargs -n1 basename" || true
exit 1
fi
# Build test binary with appropriate mode (clean first to ensure correct flags)
build_cmd="cd $BUILD_DIR && make clean && make test-config-parser $use_platform_flag"
case "$FORMAT" in
html)
output_file="test-report-${SINGLE_CONFIG%.json}.html"
test_cmd="$build_cmd && LD_LIBRARY_PATH=/usr/local/lib ./test-config-parser --html $CONFIG_DIR/$SINGLE_CONFIG > $BUILD_DIR/$output_file"
copy_files=("$output_file")
;;
json)
output_file="test-results-${SINGLE_CONFIG%.json}.json"
test_cmd="$build_cmd && LD_LIBRARY_PATH=/usr/local/lib ./test-config-parser --json $CONFIG_DIR/$SINGLE_CONFIG > $BUILD_DIR/$output_file"
copy_files=("$output_file")
;;
human)
output_file="test-results-${SINGLE_CONFIG%.json}.txt"
test_cmd="$build_cmd && LD_LIBRARY_PATH=/usr/local/lib ./test-config-parser $CONFIG_DIR/$SINGLE_CONFIG 2>&1 | tee $BUILD_DIR/$output_file"
copy_files=("$output_file")
;;
esac
else
print_status "Running tests for all configurations (format: $FORMAT)"
case "$FORMAT" in
html)
output_file="test-report.html"
test_cmd="cd $BUILD_DIR && make clean && make test-config-html $use_platform_flag"
copy_files=("$output_file")
;;
json)
output_file="test-report.json"
test_cmd="cd $BUILD_DIR && make clean && make test-config-json $use_platform_flag"
copy_files=("$output_file")
;;
human)
output_file="test-results.txt"
test_cmd="cd $BUILD_DIR && make clean && make test-config-full $use_platform_flag 2>&1 | tee $BUILD_DIR/$output_file"
copy_files=("$output_file")
;;
esac
fi
print_status "Executing tests in container..."
echo ""
# Run the test command
if docker exec "$CONTAINER_NAME" bash -c "$test_cmd"; then
print_success "Tests completed successfully"
TEST_EXIT_CODE=0
else
TEST_EXIT_CODE=$?
print_warning "Tests completed with issues (exit code: $TEST_EXIT_CODE)"
fi
echo ""
# Create output directory if it doesn't exist
mkdir -p "$OUTPUT_DIR"
# Copy output files from container to host
for file in "${copy_files[@]}"; do
if docker exec "$CONTAINER_NAME" bash -c "test -f $BUILD_DIR/$file"; then
print_status "Copying $file from container to host..."
docker cp "$CONTAINER_NAME:$BUILD_DIR/$file" "$OUTPUT_DIR/$file"
print_success "Output saved: $OUTPUT_DIR/$file"
# Show file info
local file_size=$(du -h "$OUTPUT_DIR/$file" | cut -f1)
echo " Size: $file_size"
else
print_warning "Output file not found in container: $file"
fi
done
return $TEST_EXIT_CODE
}
# Function to print summary
print_summary() {
local exit_code=$1
echo ""
echo "========================================"
echo "Test Run Summary"
echo "========================================"
echo "Mode: $TEST_MODE"
if [ "$TEST_MODE" = "platform" ]; then
echo "Platform: $PLATFORM_NAME"
fi
echo "Format: $FORMAT"
if [ -n "$SINGLE_CONFIG" ]; then
echo "Config: $SINGLE_CONFIG"
else
echo "Config: All configurations"
fi
echo "Output Dir: $OUTPUT_DIR"
echo ""
if [ $exit_code -eq 0 ]; then
print_success "All tests passed!"
if [ "$TEST_MODE" = "platform" ]; then
echo ""
echo "Platform properties tracked from: plat-$PLATFORM_NAME.c"
echo "Check output for 'Successfully Configured (Base)' and"
echo "'Successfully Configured (Platform)' sections"
fi
else
print_warning "Some tests failed or had issues"
fi
echo ""
echo "Output files:"
ls -lh "$OUTPUT_DIR" | tail -n +2 | while read -r line; do
echo " $line"
done
}
# Main execution
main() {
print_status "uCentral Configuration Test Runner"
echo ""
# Check prerequisites
check_docker
# Build environment if needed
build_environment
# Start container if needed
start_container
# Run tests
run_tests && TEST_RESULT=0 || TEST_RESULT=$?  # capture status without tripping set -e
# Print summary
print_summary $TEST_RESULT
exit $TEST_RESULT
}
# Run main function
main

View File

@@ -36,6 +36,7 @@ override_dh_install:
# home folder.
mkdir -p ${INSTALL}/home/admin
cp scripts/OLS_NOS_fixups.script ${INSTALL}/usr/local/lib
cp scripts/OLS_NOS_upgrade_override.script ${INSTALL}/usr/local/lib
cp docker-ucentral-client.gz ${INSTALL}/usr/local/lib
# Install Vlan1 in-band management configuration
mkdir -p ${INSTALL}/etc/network/interfaces.d/

View File

@@ -21,6 +21,8 @@ COPY /ucentral-client /usr/local/bin/ucentral-client
COPY /rtty /usr/local/bin/
COPY /lib* /usr/local/lib/
COPY /version.jso[n] /etc/
COPY /schema.jso[n] /etc/
RUN ldconfig
RUN ls -l /usr/local/bin/ucentral-client

View File

@@ -1,33 +0,0 @@
# Ucentral for EC
Ucentral solution for EC is made of the following parts:
* `ecapi`: a library to communicate with EC via SNMP
# Compiling
## EC Build for Target Device
First build the full EC image for your target device:
* `cd EC_VOB/project_build_environment/<target device>`
* `./make_all`
If this is successful, you can proceed to the next step.
## Build Environment
To successfully build required components the build environments variables must be prepared:
* `cd EC_VOB/project_build_environment/<target device>`
* `cd utils`
* `. build_env_init`
## Building All Components
Presumably you have checked out the [ols-ucentral-src]:
* `cd [ols-ucentral-src]`
* Run `make plat-ec`, which should successfully compile all components
## Creating EC Firmware with Ucentral
After building everything up:
* Check the `output` directory, it should contain all required binaries in appropriate subdirectories
* Copy over these directories to your `EC_VOB/project_build_environment/<target device>/user/thirdpty/ucentral`

View File

@@ -1,141 +0,0 @@
#!/bin/bash
UCENTRAL_DIR=${PWD}
EC_BUILD_DIR=${PWD}/src/ec-private
OUT_DIR=${UCENTRAL_DIR}/output
BIN_DIR=${OUT_DIR}/usr/sbin
LIB_DIR=${OUT_DIR}/lib
LIB_OPENSSL=openssl-1.1.1q
LIB_WEBSOCKETS=libwebsockets-4.1.4
LIB_CURL=curl-7.83.1
LIB_CJSON=cJSON-1.7.15
echo "+++++++++++++++++ check EC build environment +++++++++++++++++"
if [ ! "${PROJECT_NAME}" ] || [ ! "${SOURCE_PATH}" ]; then
echo "Error! Please source 'build_env_init' for your build environment."
exit
fi
cp -af ${UCENTRAL_DIR}/src/ucentral-client/* ${EC_BUILD_DIR}/ucentral-client
rm -rf ${OUT_DIR}
if [ ! -d output ]; then
mkdir -p ${BIN_DIR}
mkdir -p ${LIB_DIR}
fi
C_COMPILER="${TOOLCHAIN_PATH}/${CROSS_COMPILE}gcc ."
echo "+++++++++++++++++ openssl +++++++++++++++++"
cd ${EC_BUILD_DIR}
if [ ! -d openssl ]; then
tar -xf ./archive/${LIB_OPENSSL}.tar.gz
mv ${LIB_OPENSSL} openssl
fi
model_name=${D_MODEL_NAME}
if [ "$model_name" == 'ECS4130_AC5' ]; then
platform=linux-aarch64
elif [ "$model_name" == 'ECS4125_10P' ]; then
platform=linux-mips32
else
echo "Error! The model ${model_name} is not in the support lists, please check."
exit 1
fi
cd openssl
./Configure ${platform} --cross-compile-prefix=${CROSS_COMPILE} no-idea no-mdc2 no-rc5 no-ssl2 no-ssl3
make -j${nproc}
if [ "$?" -eq "0" ]; then
cp -af libssl.so.1.1 libcrypto.so.1.1 ${LIB_DIR}
fi
echo "+++++++++++++++++ libwebsockets +++++++++++++++++"
cd ${EC_BUILD_DIR}
if [ ! -d libwebsockets ]; then
tar -xf ./archive/${LIB_WEBSOCKETS}.tar.gz
mv ${LIB_WEBSOCKETS} libwebsockets
patch -s -N -p1 -d libwebsockets/lib < ./patch/libwebsockets/${LIB_WEBSOCKETS}.patch
fi
cd libwebsockets
cmake \
-DOPENSSL_ROOT_DIR=${EC_BUILD_DIR}/openssl \
-DCMAKE_C_COMPILER=${C_COMPILER}
make -j${nproc}
if [ "$?" -eq "0" ]; then
cp -af lib/libwebsockets.so.17 ${LIB_DIR}
fi
echo "+++++++++++++++++ curl +++++++++++++++++"
cd ${EC_BUILD_DIR}
if [ ! -d curl ]; then
tar -xf ./archive/${LIB_CURL}.tar.xz
mv ${LIB_CURL} curl
patch -s -N -p1 -d curl < ./patch/curl/${LIB_CURL}.patch
fi
cd curl
cmake -DCMAKE_C_COMPILER=${C_COMPILER} -DCMAKE_SHARED_LINKER_FLAGS=-L${EC_BUILD_DIR}/openssl
make
if [ "$?" -eq "0" ]; then
cp -af ./lib/libcurl.so ${LIB_DIR}
cp -af ./src/curl ${BIN_DIR}
fi
echo "+++++++++++++++++ cjson +++++++++++++++++"
cd ${EC_BUILD_DIR}
if [ ! -d cjson ]; then
tar -xf ./archive/${LIB_CJSON}.tar.gz
mv ${LIB_CJSON} cjson
fi
cd cjson
cmake -DCMAKE_C_COMPILER=${C_COMPILER}
make
if [ "$?" -eq "0" ]; then
cp -af ./libcjson.so.1.7.15 ${LIB_DIR}
cd ${LIB_DIR}
mv libcjson.so.1.7.15 libcjson.so.1
fi
echo "+++++++++++++++++ ecapi +++++++++++++++++"
cd ${EC_BUILD_DIR}/ecapi
mkdir ${EC_BUILD_DIR}/ecapi/build
cd ${EC_BUILD_DIR}/ecapi/build
cmake -DCMAKE_C_COMPILER=${C_COMPILER} ..
make
if [ "$?" -eq "0" ]; then
cp -af libecapi.so ${LIB_DIR}
fi
echo "+++++++++++++++++ ucentral-client +++++++++++++++++"
if [ ! -d ucentral ]; then
mkdir -p ${EC_BUILD_DIR}/ucentral
fi
cp -af ${UCENTRAL_DIR}/src/ucentral-client ${EC_BUILD_DIR}/ucentral/ucentral-client
cp -af ${EC_BUILD_DIR}/patch/ucentral/* ${EC_BUILD_DIR}/ucentral
mkdir -p ${EC_BUILD_DIR}/ucentral/build
cd ${EC_BUILD_DIR}/ucentral/build
cmake -DCMAKE_C_COMPILER=${C_COMPILER} ..
make
if [ "$?" -eq "0" ]; then
cp -af ucentral-client ${BIN_DIR}
fi
echo "+++++++++++++++++ Strip target binaries +++++++++++++++++"
${TOOLCHAIN_PATH}/${CROSS_COMPILE}strip ${BIN_DIR}/*
${TOOLCHAIN_PATH}/${CROSS_COMPILE}strip ${LIB_DIR}/*

View File

@@ -1,31 +0,0 @@
cmake_minimum_required(VERSION 2.6)
PROJECT(ecapi C)
ADD_DEFINITIONS(-Os -ggdb -Wall -Werror --std=gnu99 -Wmissing-declarations)
SET(CMAKE_SHARED_LIBRARY_LINK_C_FLAGS "")
INCLUDE_DIRECTORIES(${CMAKE_CURRENT_SOURCE_DIR}/include)
INCLUDE_DIRECTORIES($ENV{SOURCE_PATH}/sysinclude)
INCLUDE_DIRECTORIES($ENV{SOURCE_PATH}/sysinclude/mibconstants)
INCLUDE_DIRECTORIES($ENV{SOURCE_PATH}/sysinclude/oem/$ENV{PROJECT_NAME})
INCLUDE_DIRECTORIES($ENV{PROJECT_PATH}/user/thirdpty/lua/net-snmp-5.4.4/include)
INCLUDE_DIRECTORIES($ENV{PROJECT_PATH}/user/thirdpty/lua/net-snmp-5.4.4/agent/mibgroup)
#LINK_DIRECTORIES(${CMAKE_CURRENT_SOURCE_DIR}/src/snmp)
FIND_LIBRARY(netsnmp_library netsnmp $ENV{PROJECT_PATH}/user/thirdpty/lua/net-snmp-5.4.4/snmplib/.libs)
#INCLUDE (CheckSymbolExists)
#CHECK_SYMBOL_EXISTS(SYS_getrandom syscall.h getrandom)
if ($ENV{D_MODEL_NAME} STREQUAL ECS4130_AC5)
add_definitions(-DENDIANNESS_ADJUST)
endif()
INCLUDE(snmp/CMakeLists.txt)
INCLUDE(generic/CMakeLists.txt)
ADD_LIBRARY(ecapi SHARED ${LIB_SOURCES})
TARGET_LINK_LIBRARIES(ecapi ${netsnmp_library})

View File

@@ -1,3 +0,0 @@
list(APPEND LIB_SOURCES
${CMAKE_CURRENT_LIST_DIR}/api_print.c
)

View File

@@ -1,27 +0,0 @@
// #include <stdarg.h>
#include "api_print.h"
static bool debug_on = false;
void print_set_debug(bool on) {
debug_on = on;
}
bool print_is_debug(void) {
return debug_on;
}
/*
void print_debug(char *fmt, ...) {
if (print_is_debug()) {
va_list args; va_start(args, fmt);
vfprintf(stdout, fmt, args);
va_end(args);
}
}
void print_err(char *fmt, ...) {
va_list args; va_start(args, fmt);
vfprintf(stderr, fmt, args);
va_end(args);
}*/

View File

@@ -1,40 +0,0 @@
#ifndef API_CONFIG_H
#define API_CONFIG_H
#include <stdbool.h>
#include <stdint.h>
typedef enum {
DPX_HALF = 0,
DPX_FULL,
} duplex_t;
typedef enum {
M_NONE = 1,
M_SFP_FORCED_1000 = 7,
M_SFP_FORCED_10G = 8,
} media_t;
typedef enum {
VL_NONE = 0,
VL_TAGGED,
VL_UNTAGGED,
VL_FORBIDDEN
} vlan_membership_t;
void *open_config_transaction();
void commit_config_transaction(void *tr);
void add_eth_speed(void *tr, uint16_t eth_num, uint32_t speed, duplex_t duplex);
void add_eth_media(void *tr, uint16_t eth_num, media_t media);
void add_l2_vlan(void *tr, uint16_t vlan_id,
uint16_t *tagged_members, // NULL terminated array / NULL if not required
uint16_t *un_tagged_members, // NULL terminated array / NULL if not required
uint16_t *forbidden_members, // NULL terminated array / NULL if not required
uint16_t *pvid_ports // NULL terminated array / NULL if not required
);
#endif

View File

@@ -1,8 +0,0 @@
#ifndef API_CONSTS_H
#define API_CONSTS_H
#define STATUS_SUCCESS 0
#define STATUS_ERROR 1
#define STATUS_TIMEOUT 2
#endif

View File

@@ -1,15 +0,0 @@
#ifndef API_DEVICEID_H
#define API_DEVICEID_H
#include <stdint.h>
#include "api_consts.h"
int dev_get_main_mac(char *mac, int mac_len);
int dev_get_serial(char *serial, int serial_len);
int dev_get_fw_version(char *fw, int fw_len);
int dev_get_uptime(uint32_t *up);
int dev_get_vlan_list(int *vlan_arr, int *num);
int dev_get_vlan_mask_len(int *len);
int dev_get_poe_port_num(int *num);
int dev_get_port_capabilities_val_len(int *len);
#endif

View File

@@ -1,13 +0,0 @@
#ifndef API_PRINT_H
#define API_PRINT_H
#include <stdio.h>
#include <stdbool.h>
void print_set_debug(bool on);
bool print_is_debug(void);
#define print_debug(...) if (print_is_debug()) { fprintf(stdout, __VA_ARGS__); }
#define print_err(...) fprintf(stderr, __VA_ARGS__)
#endif

View File

@@ -1,9 +0,0 @@
#ifndef API_SESSION_H
#define API_SESSION_H
#include "api_consts.h"
int session_start(void);
void session_close(void);
#endif

View File

@@ -1,36 +0,0 @@
#ifndef API_STATS_H
#define API_STATS_H
#include <stdint.h>
#include <stdbool.h>
#include "api_consts.h"
#define IF_LOCATION_SIZE 16
#define IF_NAME_SIZE 32
typedef struct {
uint32_t collisions;
uint64_t multicast ;
uint64_t rx_bytes;
uint32_t rx_dropped;
uint32_t rx_errors;
uint64_t rx_packets;
uint64_t tx_bytes;
uint32_t tx_dropped;
uint32_t tx_errors;
uint64_t tx_packets;
} counters_t;
typedef struct {
char location[IF_LOCATION_SIZE];
char name[IF_NAME_SIZE];
uint32_t uptime;
uint32_t speed_dpx_status;
counters_t counters;
} interface_t;
int get_ethernet_count(int *eth_count);
int get_ethernet_stats(interface_t *eths, int eth_count);
int get_vlans(uint16_t **vlans, int *vlan_count);
#endif

View File

@@ -1,41 +0,0 @@
#ifndef OID_DEFINE_H
#define OID_DEFINE_H
#include <sys_adpt.h>
#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>
const static oid O_MAIN_MAC[] = { SYS_ADPT_PRIVATEMIB_OID, 1, 5, 6, 1, 0 };
const static oid O_SERIAL[] = { SYS_ADPT_PRIVATEMIB_OID, 1, 1, 3, 1, 10, 1 };
const static oid O_OPCODE_VERSION[] = { SYS_ADPT_PRIVATEMIB_OID, 1, 1, 5, 4, 0 };
const static oid O_SYS_UPTIME[] = { 1, 3, 6, 1, 2, 1, 1, 3, 0 };
const static oid O_VLAN_STATUS[] = { 1, 3, 6, 1, 2, 1, 17, 7, 1, 4, 3, 1, 5};
const static oid O_POE_PORT_ENABLE[] ={1, 3, 6, 1, 2, 1, 105, 1, 1, 1, 3, 1};
const static oid O_PORT_CPAPBILITIES[] = { SYS_ADPT_PRIVATEMIB_OID, 1, 2, 1, 1, 6, 1 };
#define O_FACTORY_DEFAULT SYSTEM_OID"1.24.2.1.1.4.1.70.97.99.116.111.114.121.95.68.101.102.97.117.108.116.95.67.111.110.102.105.103.46.99.102.103"
#define O_FW_UPGRADE_MGMT SYSTEM_OID"1.24.6.1.0"
#define O_DEVICE_MODEL SYSTEM_OID"1.1.5.1.0"
#define O_DEVICE_COMPANY SYSTEM_OID"1.1.5.2.0"
#define O_STR_POE_PORT_ENABLE "1.3.6.1.2.1.105.1.1.1.3.1"
#define O_STR_POE_MAX_POWER SYSTEM_OID"1.28.6.1.13.1"
#define O_STR_POE_USAGE_THRESHOLD "1.3.6.1.2.1.105.1.3.1.1.5.1"
#define O_STR_IF_ADMIN_STATUS "1.3.6.1.2.1.2.2.1.7"
#define O_STR_PORT_CPAPBILITIES SYSTEM_OID"1.2.1.1.6"
#define O_STR_PVID "1.3.6.1.2.1.17.7.1.4.5.1.1"
#define O_STR_VLAN_NAME "1.3.6.1.2.1.17.7.1.4.3.1.1"
#define O_STR_VLAN_EGRESS "1.3.6.1.2.1.17.7.1.4.3.1.2"
#define O_STR_VLAN_STATUS "1.3.6.1.2.1.17.7.1.4.3.1.5"
#define O_STR_VLAN_UNTAGGED "1.3.6.1.2.1.17.7.1.4.3.1.4"
#define O_STR_COPY_SRC_TYPE SYSTEM_OID"1.24.1.1.0"
#define O_STR_COPY_DST_TYPE SYSTEM_OID"1.24.1.3.0"
#define O_STR_COPY_DST_NAME SYSTEM_OID"1.24.1.4.0"
#define O_STR_COPY_FILE_TYPE SYSTEM_OID"1.24.1.5.0"
#define O_STR_COPY_ACTION SYSTEM_OID"1.24.1.8.0"
#define O_NTP_STATUS SYSTEM_OID"1.23.5.1.0"
#define O_SNTP_STATUS SYSTEM_OID"1.23.1.1.0"
#define O_SNTP_INTERVAL SYSTEM_OID"1.23.1.3.0"
#define O_SNTP_SERVER_TYPE SYSTEM_OID"1.23.1.4.1.4"
#define O_SNTP_SERVER_ADDR SYSTEM_OID"1.23.1.4.1.5"
#endif

View File

@@ -1,25 +0,0 @@
#ifndef SNMP_HELPER_H
#define SNMP_HELPER_H
#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>
#include "oid_define.h"
int snmph_session_start(void);
void snmph_session_close(void);
int snmph_get(const oid *req_oid, size_t req_oid_len, struct snmp_pdu **response);
int snmph_get_argstr(const char *oid_str, struct snmp_pdu **response);
int snmph_get_single_string(const oid *req_oid, size_t req_oid_len, char *buf, int buf_len);
int snmph_get_bulk(const oid *req_oid, size_t req_oid_len, int max, struct snmp_pdu **response);
int snmph_set(const char *oid_str, char type, char *value);
int snmph_set_array(const char *oid_str, char type, const u_char *value, size_t len);
int snmph_walk(const char *oid_str, void *buf, int *num);
enum snmp_walk_node {
SNMP_WALK_NODE_NONE,
SNMP_WALK_NODE_VLAN_STATUS,
SNMP_WALK_NODE_POE_PORT_ENABLE,
};
#endif

View File

@@ -1,7 +0,0 @@
list(APPEND LIB_SOURCES
${CMAKE_CURRENT_LIST_DIR}/device.c
${CMAKE_CURRENT_LIST_DIR}/helper.c
${CMAKE_CURRENT_LIST_DIR}/session.c
${CMAKE_CURRENT_LIST_DIR}/stats.c
)

View File

@@ -1,96 +0,0 @@
#include <sys_adpt.h>
#include "api_device.h"
#include "snmp_helper.h"
int dev_get_main_mac(char *mac, int mac_len) {
int status = snmph_get_single_string(O_MAIN_MAC, OID_LENGTH(O_MAIN_MAC), mac, mac_len);
if (status != STAT_SUCCESS) {
return status;
}
int i = 0, j = 2;
for (i = 3; i < 17; i += 3) {
mac[j++] = mac[i];
mac[j++] = mac[i + 1];
}
mac[12] = 0;
char *c;
for (c = mac; *c; c++) {
if (*c >= 'A' && *c <= 'Z') {
*c += 32;
}
}
return STAT_SUCCESS;
}
int dev_get_serial(char *serial, int serial_len) {
return snmph_get_single_string(O_SERIAL, OID_LENGTH(O_SERIAL), serial, serial_len);
}
int dev_get_fw_version(char *fw, int fw_len) {
return snmph_get_single_string(O_OPCODE_VERSION, OID_LENGTH(O_OPCODE_VERSION), fw, fw_len);
}
int dev_get_uptime(uint32_t *up) {
struct snmp_pdu *response = NULL;
int status = snmph_get(O_SYS_UPTIME, OID_LENGTH(O_SYS_UPTIME), &response);
if (status != STATUS_SUCCESS) return status;
*up = (uint32_t) (response->variables->val.integer[0] / 100 + 0.5);
snmp_free_pdu(response);
return STATUS_SUCCESS;
}
int dev_get_vlan_list(int *vlan_arr, int *num) {
int status;
status = snmph_walk(O_STR_VLAN_STATUS, vlan_arr, num);
return status;
}
int dev_get_vlan_mask_len(int *len) {
char oidstr[MAX_OID_LEN];
struct snmp_pdu *response;
sprintf(oidstr, "%s.%d", O_STR_VLAN_EGRESS, 1);
int status = snmph_get_argstr(oidstr, &response);
if (status != STAT_SUCCESS) {
fprintf(stderr, "Could not retrieve vlan mask length.\n");
return status;
}
*len = response->variables->val_len;
return STATUS_SUCCESS;
}
int dev_get_poe_port_num(int *num) {
int status;
status = snmph_walk(O_STR_POE_PORT_ENABLE, 0, num);
return status;
}
int dev_get_port_capabilities_val_len(int *len) {
int status;
struct snmp_pdu *response = NULL;
status = snmph_get(O_PORT_CPAPBILITIES, OID_LENGTH(O_PORT_CPAPBILITIES), &response);
if (status == STATUS_SUCCESS)
*len = response->variables->val_len;
snmp_free_pdu(response);
return status;
}

View File

@@ -1,340 +0,0 @@
/* MODULE NAME: snmp_helper.c
* PURPOSE:
* for ucentral middleware process.
*
* NOTES:
*
* REASON:
* Description:
* HISTORY
* 2023/02/03 - Saulius P., Created
*
* Copyright(C) Accton Corporation, 2023
*/
/* INCLUDE FILE DECLARATIONS
*/
#include <math.h>
#include "snmp_helper.h"
#include "api_print.h"
static struct snmp_session session, *ss;
int snmph_session_start(void) {
init_snmp("ucmw_snmp");
snmp_sess_init( &session );
session.peername = "127.0.0.1";
session.version = SNMP_VERSION_2c;
session.community = (unsigned char*)"private";
session.community_len = strlen((char*)session.community);
ss = snmp_open(&session);
if (ss) {
return STAT_SUCCESS;
} else {
return STAT_ERROR;
}
}
int snmph_set(const char *oid_str, char type, char *value) {
netsnmp_pdu *pdu, *response = NULL;
size_t name_length;
oid name[MAX_OID_LEN];
int status, exitval = 0;
pdu = snmp_pdu_create(SNMP_MSG_SET);
name_length = MAX_OID_LEN;
if (snmp_parse_oid(oid_str, name, &name_length) == NULL){
snmp_perror(oid_str);
return -1;
} else{
if (snmp_add_var(pdu, name, name_length, type, value)) {
snmp_perror(oid_str);
return -1;
}
}
status = snmp_synch_response(ss, pdu, &response);
if (status == STAT_SUCCESS) {
if (response->errstat != SNMP_ERR_NOERROR) {
fprintf(stderr, "Error in packet.\nReason: %s\n",
snmp_errstring(response->errstat));
exitval = 2;
}
} else if (status == STAT_TIMEOUT) {
fprintf(stderr, "Timeout: No Response from %s\n",
session.peername);
exitval = 1;
} else { /* status == STAT_ERROR */
snmp_sess_perror("snmpset", ss);
exitval = 1;
}
if (response)
snmp_free_pdu(response);
return exitval;
}
int snmph_set_array(const char *oid_str, char type, const u_char *value, size_t len) {
netsnmp_pdu *pdu, *response = NULL;
size_t name_length;
oid name[MAX_OID_LEN];
int status, exitval = 0;
pdu = snmp_pdu_create(SNMP_MSG_SET);
name_length = MAX_OID_LEN;
if (snmp_parse_oid(oid_str, name, &name_length) == NULL){
snmp_perror(oid_str);
return -1;
} else{
if (!snmp_pdu_add_variable(pdu, name, name_length, type, value, len)) {
snmp_perror(oid_str);
return -1;
}
}
status = snmp_synch_response(ss, pdu, &response);
if (status == STAT_SUCCESS) {
if (response->errstat != SNMP_ERR_NOERROR) {
fprintf(stderr, "Error in packet.\nReason: %s\n",
snmp_errstring(response->errstat));
exitval = 2;
}
} else if (status == STAT_TIMEOUT) {
fprintf(stderr, "Timeout: No Response from %s\n",
session.peername);
exitval = 1;
} else { /* status == STAT_ERROR */
snmp_sess_perror("snmpset", ss);
exitval = 1;
}
if (response)
snmp_free_pdu(response);
return exitval;
}
int snmph_get(const oid *req_oid, size_t req_oid_len, struct snmp_pdu **response) {
struct snmp_pdu *request = snmp_pdu_create(SNMP_MSG_GET);
snmp_add_null_var(request, req_oid, req_oid_len);
int status = snmp_synch_response(ss, request, response);
if (*response && (*response)->errstat != SNMP_ERR_NOERROR) {
print_err("Error 1, response with error: %d, %ld\n", status, (*response)->errstat);
snmp_free_pdu(*response);
return STAT_ERROR;
}
if (!(*response)) {
print_err("Error 2: empty SNMP response\n");
return STAT_ERROR;
}
if (status != STAT_SUCCESS) {
print_err("Error 3: bad response status: %d\n", status);
snmp_free_pdu(*response);
}
if (!(*response)->variables) {
print_err("Error 4: empty variable list in response\n");
snmp_free_pdu(*response);
return STAT_ERROR;
}
print_debug("Default return: %d\n", status);
return status;
}
int snmph_get_argstr(const char *oid_str, struct snmp_pdu **response) {
oid name[MAX_OID_LEN];
size_t name_length = MAX_OID_LEN;
if (snmp_parse_oid(oid_str, name, &name_length) == NULL) {
snmp_perror(oid_str);
return -1;
}
struct snmp_pdu *request = snmp_pdu_create(SNMP_MSG_GET);
snmp_add_null_var(request, name, name_length);
int status = snmp_synch_response(ss, request, response);
if (!(*response)) {
print_err("Error 2: empty SNMP response\n");
return STAT_ERROR;
}
if ((*response)->errstat != SNMP_ERR_NOERROR) {
print_err("Error 1, response with error: %d, %ld\n", status, (*response)->errstat);
snmp_free_pdu(*response);
return STAT_ERROR;
}
if (status != STAT_SUCCESS) {
print_err("Error 3: bad response status: %d\n", status);
snmp_free_pdu(*response);
return STAT_ERROR;
}
if (!(*response)->variables) {
print_err("Error 4: empty variable list in response\n");
snmp_free_pdu(*response);
return STAT_ERROR;
}
print_debug("Default return: %d\n", status);
return status;
}
int snmph_get_single_string(const oid *req_oid, size_t req_oid_len, char *buf, int buf_len) {
struct snmp_pdu *response = NULL;
int status = snmph_get(req_oid, req_oid_len, &response);
if (status != STAT_SUCCESS) {
return status;
}
size_t copy_len = response->variables->val_len;
if (copy_len > (size_t)(buf_len - 1))
copy_len = (size_t)(buf_len - 1);
memset(buf, 0, buf_len);
memcpy(buf, response->variables->val.string, copy_len);
snmp_free_pdu(response);
return STAT_SUCCESS;
}
int snmph_get_bulk(const oid *req_oid, size_t req_oid_len, int max, struct snmp_pdu **response) {
struct snmp_pdu *request = snmp_pdu_create(SNMP_MSG_GETBULK);
request->non_repeaters = 0;
request->max_repetitions = max;
snmp_add_null_var(request, req_oid, req_oid_len);
int status = snmp_synch_response(ss, request, response);
if (status == STAT_ERROR) {
snmp_sess_perror("snmpbulkget", ss);
}
if (!(*response)) {
print_err("Error 2: empty bulk response\n");
return STAT_ERROR;
}
if ((*response)->errstat != SNMP_ERR_NOERROR) {
print_err("Error 1, bulk response error: %d, %ld\n", status, (*response)->errstat);
snmp_free_pdu(*response);
return STAT_ERROR;
}
if (status != STAT_SUCCESS) {
print_err("Error 3, bad bulk status: %d\n", status);
snmp_free_pdu(*response);
return STAT_ERROR;
}
if (!(*response)->variables) {
print_err("Error 4, empty bulk variables\n");
snmp_free_pdu(*response);
return STAT_ERROR;
}
print_debug("Default bulk return: %d\n", status);
return status;
}
int snmph_walk(const char *oid_str, void *buf, int *num) {
netsnmp_pdu *pdu, *response = NULL;
netsnmp_variable_list *vars;
oid name[MAX_OID_LEN];
size_t name_length = MAX_OID_LEN;
int running = 1;
int status = 0;
enum snmp_walk_node node = SNMP_WALK_NODE_NONE;
if (snmp_parse_oid(oid_str, name, &name_length) == NULL) {
snmp_perror(oid_str);
return -1;
}
if (!strcmp(oid_str, O_STR_VLAN_STATUS))
node = SNMP_WALK_NODE_VLAN_STATUS;
else if (!strcmp(oid_str, O_STR_POE_PORT_ENABLE))
node = SNMP_WALK_NODE_POE_PORT_ENABLE;
*num = 0;
while (running) {
/*
* create PDU for GETNEXT request and add object name to request
*/
pdu = snmp_pdu_create(SNMP_MSG_GETNEXT);
snmp_add_null_var(pdu, name, name_length);
/*
* do the request
*/
status = snmp_synch_response(ss, pdu, &response);
if (status == STAT_SUCCESS) {
if (response->errstat == SNMP_ERR_NOERROR) {
/*
* check resulting variables
*/
for (vars = response->variables; vars;
vars = vars->next_variable) {
if (node == SNMP_WALK_NODE_VLAN_STATUS)
{
if ((vars->name[12]==O_VLAN_STATUS[12]) && (vars->name_length==(OID_LENGTH(O_VLAN_STATUS)+1)))
{
((int*)buf)[(*num)++] = vars->name[13];
}
else
running = 0;
}
else if (node == SNMP_WALK_NODE_POE_PORT_ENABLE)
{
if ((vars->name[10]==O_POE_PORT_ENABLE[10]) && (vars->name_length==(OID_LENGTH(O_POE_PORT_ENABLE)+1)))
{
(*num)++;
}
else
running = 0;
}
else
running = 0;
memmove((char *) name, (char *) vars->name, vars->name_length * sizeof(oid));
name_length = vars->name_length;
//print_variable(vars->name, vars->name_length, vars);
}
} else {
running = 0;
}
} else if (status == STAT_TIMEOUT) {
fprintf(stderr, "Timeout: No Response from %s\n",
session.peername);
running = 0;
status = STAT_ERROR;
} else { /* status == STAT_ERROR */
snmp_sess_perror("snmpwalk", ss);
running = 0;
status = STAT_ERROR;
}
if (response)
snmp_free_pdu(response);
}
return status;
}
void snmph_session_close(void) {
snmp_close(ss);
}

View File

@@ -1,10 +0,0 @@
#include "api_session.h"
#include "snmp_helper.h"
int session_start() {
return snmph_session_start();
}
void session_close() {
snmph_session_close();
}

View File

@@ -1,250 +0,0 @@
#include <sys_adpt.h>
#include "api_device.h"
#include "api_stats.h"
#include "snmp_helper.h"
#include "if-mib/ifTable/ifTable_constants.h"
const static oid O_IF_COUNT[] = { 1, 3, 6, 1, 2, 1, 2, 1, 0 };
const static oid O_IF_TYPE[] = { 1, 3, 6, 1, 2, 1, 2, 2, 1, 3 };
// const static oid O_IF_LAST_CHANGE[] = { 1, 3, 6, 1, 2, 1, 2, 2, 1, 9 };
const static oid O_IF_UPTIME[] = { SYS_ADPT_PRIVATEMIB_OID, 1, 2, 1, 1, 19 };
const static oid O_SPEED_DPX_STATUS[] = { SYS_ADPT_PRIVATEMIB_OID, 1, 2, 1, 1, 8 };
const static oid OID_IF_NAME[] = { SYS_ADPT_PRIVATEMIB_OID, 1, 2, 1, 1, 2 };
const static oid O_IF_RX_BYTES_64[] = { 1, 3, 6, 1, 2, 1, 31, 1, 1, 1, 6 };
const static oid O_IF_RX_DISCARD_PKTS[] = { 1, 3, 6, 1, 2, 1, 2, 2, 1, 13 };
const static oid O_IF_RX_ERROR_PKTS[] = { 1, 3, 6, 1, 2, 1, 2, 2, 1, 14 };
const static oid O_IF_RX_U_PKTS_64[] = { 1, 3, 6, 1, 2, 1, 31, 1, 1, 1, 7 }; // Unicast packets
const static oid O_IF_RX_MUL_PKTS_64[] = { 1, 3, 6, 1, 2, 1, 31, 1, 1, 1, 8 }; // Multicast packets
const static oid O_IF_RX_BR_PKTS_64[] = { 1, 3, 6, 1, 2, 1, 31, 1, 1, 1, 9 };
const static oid O_IF_TX_BYTES_64[] = { 1, 3, 6, 1, 2, 1, 31, 1, 1, 1, 10 };
const static oid O_IF_TX_DISCARD_PKTS[] = { 1, 3, 6, 1, 2, 1, 2, 2, 1, 19 };
const static oid O_IF_TX_ERROR_PKTS[] = { 1, 3, 6, 1, 2, 1, 2, 2, 1, 20 };
const static oid O_IF_TX_U_PKTS_64[] = { 1, 3, 6, 1, 2, 1, 31, 1, 1, 1, 11 }; // Unicast packets
const static oid O_IF_TX_MUL_PKTS_64[] = { 1, 3, 6, 1, 2, 1, 31, 1, 1, 1, 12 }; // Multicast packets
const static oid O_IF_TX_BR_PKTS_64[] = { 1, 3, 6, 1, 2, 1, 31, 1, 1, 1, 13 };
int get_ethernet_count(int *eth_count) {
struct snmp_pdu *response;
int status = snmph_get(O_IF_COUNT, OID_LENGTH(O_IF_COUNT), &response);
if (status != STAT_SUCCESS) {
return status;
}
long int max_if = response->variables->val.integer[0];
snmp_free_pdu(response);
struct variable_list *vars;
status = snmph_get_bulk(O_IF_TYPE, OID_LENGTH(O_IF_TYPE), max_if, &response);
if (status != STAT_SUCCESS) {
return STATUS_ERROR;
}
*eth_count = 0;
for(vars = response->variables; vars; vars = vars->next_variable) {
if (vars->val.integer[0] == IANAIFTYPE_ETHERNETCSMACD) {
(*eth_count)++;
} else {
break;
}
}
snmp_free_pdu(response);
return STATUS_SUCCESS;
}
static int fill_ethernet_stats_32(const oid *req_oid, size_t req_oid_len, int max, uint32_t *val, bool aggregate) {
struct snmp_pdu *response;
struct variable_list *vars;
int status = snmph_get_bulk(req_oid, req_oid_len, max, &response);
if (status != STATUS_SUCCESS) return status;
uint32_t *addr = val;
uint32_t local_val = 0;
int i = 0;
for(vars = response->variables; vars; vars = vars->next_variable) {
memcpy(&local_val, &vars->val.integer[0], sizeof(uint32_t));
addr = (uint32_t *) ((char *) val + (sizeof(interface_t) * (i++)));
if (aggregate) {
*addr += local_val;
} else {
*addr = local_val;
}
}
snmp_free_pdu(response);
return STATUS_SUCCESS;
}
static int fill_ethernet_stats_64(const oid *req_oid, size_t req_oid_len, int max, uint64_t *val, bool aggregate) {
struct snmp_pdu *response;
struct variable_list *vars;
int status = snmph_get_bulk(req_oid, req_oid_len, max, &response);
if (status != STATUS_SUCCESS) return status;
uint64_t *addr = val;
uint64_t local_val = 0;
int i = 0;
for(vars = response->variables; vars; vars = vars->next_variable) {
#ifdef ENDIANNESS_ADJUST
memcpy(&local_val, &vars->val.counter64[0].low, sizeof(uint64_t));
#else
memcpy(&local_val, &vars->val.counter64[0], sizeof(uint64_t));
#endif
addr = (uint64_t *) ((char *) val + (sizeof(interface_t) * (i++)));
if (aggregate) {
*addr += local_val;
} else {
*addr = local_val;
}
}
snmp_free_pdu(response);
return STATUS_SUCCESS;
}
int get_ethernet_stats(interface_t *eths, int eth_count) {
uint32_t uptime;
if (dev_get_uptime(&uptime) != STATUS_SUCCESS) return STATUS_ERROR;
/***************** Interface uptime *****************/
if (fill_ethernet_stats_32(O_IF_UPTIME, OID_LENGTH(O_IF_UPTIME), eth_count, &eths[0].uptime, false) != STATUS_SUCCESS) return STATUS_ERROR;
if (fill_ethernet_stats_32(O_SPEED_DPX_STATUS, OID_LENGTH(O_SPEED_DPX_STATUS), eth_count, &eths[0].speed_dpx_status, false) != STATUS_SUCCESS) return STATUS_ERROR;
int i;
for (i = 0; i < eth_count; i++) {
if (eths[i].uptime) {
eths[i].uptime /= 100; /* TimeTicks (1/100 s) -> seconds */
}
snprintf(eths[i].location, IF_LOCATION_SIZE, "%d", i);
}
struct snmp_pdu *response;
struct variable_list *vars;
int status = snmph_get_bulk(OID_IF_NAME, OID_LENGTH(OID_IF_NAME), eth_count, &response);
if (status != STATUS_SUCCESS) return status;
i = 0;
for(vars = response->variables; vars && i < eth_count; vars = vars->next_variable) {
size_t n = vars->val_len < IF_NAME_SIZE - 1 ? vars->val_len : IF_NAME_SIZE - 1;
memcpy(eths[i].name, vars->val.string, n);
eths[i].name[n] = '\0';
i++;
}
snmp_free_pdu(response);
/***************** Bytes (octets) *****************/
if (fill_ethernet_stats_64(O_IF_RX_BYTES_64, OID_LENGTH(O_IF_RX_BYTES_64), eth_count, &eths[0].counters.rx_bytes, false) != STATUS_SUCCESS) return STATUS_ERROR;
if (fill_ethernet_stats_64(O_IF_TX_BYTES_64, OID_LENGTH(O_IF_TX_BYTES_64), eth_count, &eths[0].counters.tx_bytes, false) != STATUS_SUCCESS) return STATUS_ERROR;
/***************** Packets *****************/
if (fill_ethernet_stats_64(O_IF_RX_MUL_PKTS_64, OID_LENGTH(O_IF_RX_MUL_PKTS_64), eth_count, &eths[0].counters.rx_packets, false) != STATUS_SUCCESS) return STATUS_ERROR;
if (fill_ethernet_stats_64(O_IF_TX_MUL_PKTS_64, OID_LENGTH(O_IF_TX_MUL_PKTS_64), eth_count, &eths[0].counters.tx_packets, false) != STATUS_SUCCESS) return STATUS_ERROR;
// "Multicast is the sum of rx+tx multicast packets"
for (i = 0; i < eth_count; i++) {
eths[i].counters.multicast = eths[i].counters.rx_packets + eths[i].counters.tx_packets;
}
// All packets is a sum (aggregate == true) of unicast, multicast and broadcast packets
if (fill_ethernet_stats_64(O_IF_RX_U_PKTS_64, OID_LENGTH(O_IF_RX_U_PKTS_64), eth_count, &eths[0].counters.rx_packets, true) != STATUS_SUCCESS) return STATUS_ERROR;
if (fill_ethernet_stats_64(O_IF_RX_BR_PKTS_64, OID_LENGTH(O_IF_RX_BR_PKTS_64), eth_count, &eths[0].counters.rx_packets, true) != STATUS_SUCCESS) return STATUS_ERROR;
if (fill_ethernet_stats_64(O_IF_TX_U_PKTS_64, OID_LENGTH(O_IF_TX_U_PKTS_64), eth_count, &eths[0].counters.tx_packets, true) != STATUS_SUCCESS) return STATUS_ERROR;
if (fill_ethernet_stats_64(O_IF_TX_BR_PKTS_64, OID_LENGTH(O_IF_TX_BR_PKTS_64), eth_count, &eths[0].counters.tx_packets, true) != STATUS_SUCCESS) return STATUS_ERROR;
/***************** Errors *****************/
if (fill_ethernet_stats_32(O_IF_RX_ERROR_PKTS, OID_LENGTH(O_IF_RX_ERROR_PKTS), eth_count, &eths[0].counters.rx_errors, false) != STATUS_SUCCESS) return STATUS_ERROR;
if (fill_ethernet_stats_32(O_IF_TX_ERROR_PKTS, OID_LENGTH(O_IF_TX_ERROR_PKTS), eth_count, &eths[0].counters.tx_errors, false) != STATUS_SUCCESS) return STATUS_ERROR;
/***************** Dropped *****************/
if (fill_ethernet_stats_32(O_IF_RX_DISCARD_PKTS, OID_LENGTH(O_IF_RX_DISCARD_PKTS), eth_count, &eths[0].counters.rx_dropped, false) != STATUS_SUCCESS) return STATUS_ERROR;
if (fill_ethernet_stats_32(O_IF_TX_DISCARD_PKTS, OID_LENGTH(O_IF_TX_DISCARD_PKTS), eth_count, &eths[0].counters.tx_dropped, false) != STATUS_SUCCESS) return STATUS_ERROR;
return STATUS_SUCCESS;
}
int get_vlans(uint16_t **vlans, int *vlan_count) {
struct snmp_pdu *response;
struct variable_list *vars;
int status = snmph_get(O_IF_COUNT, OID_LENGTH(O_IF_COUNT), &response);
if (status != STAT_SUCCESS) {
printf("Could not retrieve interfaces count\n");
return status;
}
long int max_if = response->variables->val.integer[0];
snmp_free_pdu(response);
status = snmph_get_bulk(O_IF_TYPE, OID_LENGTH(O_IF_TYPE), max_if, &response);
if (status != STAT_SUCCESS) {
return STATUS_ERROR;
}
*vlan_count = 0;
for(vars = response->variables; vars; vars = vars->next_variable) {
if (vars->val.integer[0] == IANAIFTYPE_L2VLAN || vars->val.integer[0] == IANAIFTYPE_L3IPVLAN) {
(*vlan_count)++;
}
}
(*vlans) = malloc(sizeof(uint16_t) * (*vlan_count));
if (!(*vlans)) {
snmp_free_pdu(response);
return STATUS_ERROR;
}
int i = 0;
for(vars = response->variables; vars; vars = vars->next_variable) {
if (vars->val.integer[0] == IANAIFTYPE_L2VLAN || vars->val.integer[0] == IANAIFTYPE_L3IPVLAN) {
(*vlans)[i++] = (uint16_t) ((int) vars->name[vars->name_length - 1] - 1000);
}
}
snmp_free_pdu(response);
return STATUS_SUCCESS;
}

View File

@@ -1,78 +0,0 @@
diff -Nuar a/CMakeLists.txt b/CMakeLists.txt
--- a/CMakeLists.txt 2023-07-21 09:53:57.450424222 +0800
+++ b/CMakeLists.txt 2023-07-21 11:36:15.395258277 +0800
@@ -1,4 +1,4 @@
-#***************************************************************************
+#***************************************************************************
# _ _ ____ _
# Project ___| | | | _ \| |
# / __| | | | |_) | |
@@ -185,9 +185,9 @@
mark_as_advanced(CURL_DISABLE_HTTP_AUTH)
option(CURL_DISABLE_IMAP "disables IMAP" OFF)
mark_as_advanced(CURL_DISABLE_IMAP)
-option(CURL_DISABLE_LDAP "disables LDAP" OFF)
+option(CURL_DISABLE_LDAP "disables LDAP" ON)
mark_as_advanced(CURL_DISABLE_LDAP)
-option(CURL_DISABLE_LDAPS "disables LDAPS" OFF)
+option(CURL_DISABLE_LDAPS "disables LDAPS" ON)
mark_as_advanced(CURL_DISABLE_LDAPS)
option(CURL_DISABLE_LIBCURL_OPTION "disables --libcurl option from the curl tool" OFF)
mark_as_advanced(CURL_DISABLE_LIBCURL_OPTION)
@@ -433,7 +433,7 @@
endif()
if(CURL_USE_OPENSSL)
- find_package(OpenSSL REQUIRED)
+ #find_package(OpenSSL REQUIRED)
set(SSL_ENABLED ON)
set(USE_OPENSSL ON)
@@ -441,7 +441,7 @@
# version of CMake. This allows our dependents to get our dependencies
# transitively.
if(NOT CMAKE_VERSION VERSION_LESS 3.4)
- list(APPEND CURL_LIBS OpenSSL::SSL OpenSSL::Crypto)
+ #list(APPEND CURL_LIBS OpenSSL::SSL OpenSSL::Crypto)
else()
list(APPEND CURL_LIBS ${OPENSSL_LIBRARIES})
include_directories(${OPENSSL_INCLUDE_DIR})
@@ -595,7 +595,7 @@
set(CMAKE_REQUIRED_LIBRARIES ${OPENSSL_LIBRARIES})
check_library_exists_concat(${CMAKE_LDAP_LIB} ldap_init HAVE_LIBLDAP)
check_library_exists_concat(${CMAKE_LBER_LIB} ber_init HAVE_LIBLBER)
-
+
set(CMAKE_REQUIRED_INCLUDES_BAK ${CMAKE_REQUIRED_INCLUDES})
set(CMAKE_LDAP_INCLUDE_DIR "" CACHE STRING "Path to LDAP include directory")
if(CMAKE_LDAP_INCLUDE_DIR)
@@ -1369,12 +1369,16 @@
add_subdirectory(docs)
endif()
+INCLUDE_DIRECTORIES(../openssl/include)
+FIND_LIBRARY(openssl ssl ../openssl)
+
add_subdirectory(lib)
if(BUILD_CURL_EXE)
add_subdirectory(src)
endif()
+
cmake_dependent_option(BUILD_TESTING "Build tests"
ON "PERL_FOUND;NOT CURL_DISABLE_TESTS"
OFF)
diff -Nuar a/src/CMakeLists.txt b/src/CMakeLists.txt
--- a/src/CMakeLists.txt 2023-07-21 13:47:10.160906907 +0800
+++ b/src/CMakeLists.txt 2023-07-21 13:49:45.205682320 +0800
@@ -98,6 +98,9 @@
#Build curl executable
target_link_libraries(${EXE_NAME} libcurl ${CURL_LIBS})
+target_link_libraries(${EXE_NAME} -lssl)
+target_link_libraries(${EXE_NAME} -lcrypto)
+target_link_libraries(${EXE_NAME} ${CMAKE_SHARED_LINKER_FLAGS})
################################################################################

View File

@@ -1,14 +0,0 @@
--- a/CMakeLists.txt 2020-10-26 04:31:31.000000000 -0700
+++ b/CMakeLists.txt 2023-04-10 20:15:13.399705011 -0700
@@ -102,8 +102,9 @@
# ideally we want to use pipe2()
-
-CHECK_C_SOURCE_COMPILES("#define _GNU_SOURCE\n#include <unistd.h>\nint main(void) {int fd[2];\n return pipe2(fd, 0);\n}\n" LWS_HAVE_PIPE2)
+# jacky
+# comment out this line, use pipe() instead of pipe2()
+#CHECK_C_SOURCE_COMPILES("#define _GNU_SOURCE\n#include <unistd.h>\nint main(void) {int fd[2];\n return pipe2(fd, 0);\n}\n" LWS_HAVE_PIPE2)
# tcp keepalive needs this on linux to work practically... but it only exists
# after kernel 2.6.37

View File

@@ -1,49 +0,0 @@
cmake_minimum_required(VERSION 2.6)
PROJECT(ucentral-client C)
SET(CMAKE_SHARED_LIBRARY_LINK_C_FLAGS "-Wl,--copy-dt-needed-entries")
SET(LDFLAGS -fopenmp -Wl,--copy-dt-needed-entries)
INCLUDE_DIRECTORIES(src/include)
INCLUDE_DIRECTORIES(../)
INCLUDE_DIRECTORIES(../curl/include)
INCLUDE_DIRECTORIES(../libwebsockets/include)
INCLUDE_DIRECTORIES(../openssl/include)
INCLUDE_DIRECTORIES(ucentral-client/include)
INCLUDE_DIRECTORIES(ucentral-client)
INCLUDE_DIRECTORIES(src/include)
INCLUDE_DIRECTORIES(${CMAKE_CURRENT_LIST_DIR}/../ecapi/include)
INCLUDE_DIRECTORIES($ENV{SOURCE_PATH}/sysinclude)
INCLUDE_DIRECTORIES($ENV{SOURCE_PATH}/sysinclude/mibconstants)
INCLUDE_DIRECTORIES($ENV{SOURCE_PATH}/sysinclude/oem/$ENV{PROJECT_NAME})
INCLUDE_DIRECTORIES($ENV{PROJECT_PATH}/user/thirdpty/lua/net-snmp-5.4.4/include)
add_definitions(-DPLAT_EC)
if ($ENV{D_MODEL_NAME} STREQUAL ECS4130_AC5)
add_definitions(-DENDIANNESS_ADJUST)
add_definitions(-DNOT_SUPPORT_CAP_2500)
add_definitions(-DNOT_SUPPORT_NTP_DOMAIN_NAME)
add_definitions(-DSYSTEM_OID="1.3.6.1.4.1.259.10.1.55.")
elseif ($ENV{D_MODEL_NAME} STREQUAL ECS4125_10P)
add_definitions(-DSYSTEM_OID="1.3.6.1.4.1.259.10.1.57.")
else()
message(FATAL_ERROR "not support $ENV{D_MODEL_NAME}")
endif()
INCLUDE(ucentral-client/CMakeLists.txt)
INCLUDE(ucentral-client/platform/ec/CMakeLists.txt)
FIND_LIBRARY(cjson cjson ../cjson)
FIND_LIBRARY(curl curl ../curl/lib)
FIND_LIBRARY(openssl ssl ../openssl)
FIND_LIBRARY(websockets websockets ../libwebsockets/lib)
FIND_LIBRARY(crypto crypto ../openssl)
FIND_LIBRARY(ecapi_library ecapi ../ecapi/build)
FIND_LIBRARY(netsnmp_library netsnmp $ENV{PROJECT_PATH}/user/thirdpty/lua/net-snmp-5.4.4/snmplib/.libs)
ADD_EXECUTABLE(ucentral-client ${UC_SOURCES} ${PLAT_SOURCES})
TARGET_LINK_LIBRARIES(ucentral-client ${cjson} ${curl} ${openssl} ${crypto} ${websockets} ${netsnmp_library} ${ecapi_library})

View File

@@ -1,9 +0,0 @@
list(APPEND UC_SOURCES
${CMAKE_CURRENT_LIST_DIR}/proto.c
${CMAKE_CURRENT_LIST_DIR}/router-utils.c
${CMAKE_CURRENT_LIST_DIR}/ucentral-json-parser.c
${CMAKE_CURRENT_LIST_DIR}/ucentral-log.c
${CMAKE_CURRENT_LIST_DIR}/ucentral-client.c
${CMAKE_CURRENT_LIST_DIR}/inet_net_pton.c
)

View File

@@ -1,200 +0,0 @@
/*
* Copyright (c) 1996,1999 by Internet Software Consortium.
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SOFTWARE CONSORTIUM DISCLAIMS
* ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL INTERNET SOFTWARE
* CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL
* DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR
* PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
* ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS
* SOFTWARE.
*/
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <assert.h>
#include <ctype.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#ifdef SPRINTF_CHAR
# define SPRINTF(x) strlen(sprintf/**/x)
#else
# define SPRINTF(x) ((size_t)sprintf x)
#endif
static int inet_net_pton_ipv4 (const char *src, u_char *dst,
size_t size) __THROW;
# define __rawmemchr strchr
/*
* static int
* inet_net_pton(af, src, dst, size)
* convert network number from presentation to network format.
* accepts hex octets, hex strings, decimal octets, and /CIDR.
* "size" is in bytes and describes "dst".
* return:
* number of bits, either imputed classfully or specified with /CIDR,
* or -1 if some failure occurred (check errno). ENOENT means it was
* not a valid network specification.
* author:
* Paul Vixie (ISC), June 1996
*/
int
inet_net_pton (int af, const char *src, void *dst, size_t size)
{
switch (af) {
case AF_INET:
return (inet_net_pton_ipv4(src, dst, size));
default:
//__set_errno (EAFNOSUPPORT);
return (-1);
}
}
/*
* static int
* inet_net_pton_ipv4(src, dst, size)
* convert IPv4 network number from presentation to network format.
* accepts hex octets, hex strings, decimal octets, and /CIDR.
* "size" is in bytes and describes "dst".
* return:
* number of bits, either imputed classfully or specified with /CIDR,
* or -1 if some failure occurred (check errno). ENOENT means it was
* not an IPv4 network specification.
* note:
* network byte order assumed. this means 192.5.5.240/28 has
* 0b11110000 in its fourth octet.
* author:
* Paul Vixie (ISC), June 1996
*/
static int
inet_net_pton_ipv4 (const char *src, u_char *dst, size_t size)
{
static const char xdigits[] = "0123456789abcdef";
int n, ch, tmp, dirty, bits;
const u_char *odst = dst;
ch = *src++;
if (ch == '0' && (src[0] == 'x' || src[0] == 'X')
&& isascii(src[1]) && isxdigit(src[1])) {
/* Hexadecimal: Eat nybble string. */
if (size <= 0)
goto emsgsize;
dirty = 0;
tmp = 0; /* To calm down gcc. */
src++; /* skip x or X. */
while (isxdigit((ch = *src++))) {
ch = _tolower(ch);
n = (const char *) __rawmemchr(xdigits, ch) - xdigits;
assert(n >= 0 && n <= 15);
if (dirty == 0)
tmp = n;
else
tmp = (tmp << 4) | n;
if (++dirty == 2) {
if (size-- <= 0)
goto emsgsize;
*dst++ = (u_char) tmp;
dirty = 0;
}
}
if (dirty) { /* Odd trailing nybble? */
if (size-- <= 0)
goto emsgsize;
*dst++ = (u_char) (tmp << 4);
}
} else if (isascii(ch) && isdigit(ch)) {
/* Decimal: eat dotted digit string. */
for (;;) {
tmp = 0;
do {
n = ((const char *) __rawmemchr(xdigits, ch)
- xdigits);
assert(n >= 0 && n <= 9);
tmp *= 10;
tmp += n;
if (tmp > 255)
goto enoent;
} while (isascii((ch = *src++)) && isdigit(ch));
if (size-- <= 0)
goto emsgsize;
*dst++ = (u_char) tmp;
if (ch == '\0' || ch == '/')
break;
if (ch != '.')
goto enoent;
ch = *src++;
if (!isascii(ch) || !isdigit(ch))
goto enoent;
}
} else
goto enoent;
bits = -1;
if (ch == '/' && isascii(src[0]) && isdigit(src[0]) && dst > odst) {
/* CIDR width specifier. Nothing can follow it. */
ch = *src++; /* Skip over the /. */
bits = 0;
do {
n = (const char *) __rawmemchr(xdigits, ch) - xdigits;
assert(n >= 0 && n <= 9);
bits *= 10;
bits += n;
} while (isascii((ch = *src++)) && isdigit(ch));
if (ch != '\0')
goto enoent;
if (bits > 32)
goto emsgsize;
}
/* Firey death and destruction unless we prefetched EOS. */
if (ch != '\0')
goto enoent;
/* If nothing was written to the destination, we found no address. */
if (dst == odst)
goto enoent;
/* If no CIDR spec was given, infer width from net class. */
if (bits == -1) {
if (*odst >= 240) /* Class E */
bits = 32;
else if (*odst >= 224) /* Class D */
bits = 4;
else if (*odst >= 192) /* Class C */
bits = 24;
else if (*odst >= 128) /* Class B */
bits = 16;
else /* Class A */
bits = 8;
/* If imputed mask is narrower than specified octets, widen. */
if (bits >= 8 && bits < ((dst - odst) * 8))
bits = (dst - odst) * 8;
}
/* Extend network to cover the actual mask. */
while (bits > ((dst - odst) * 8)) {
if (size-- <= 0)
goto emsgsize;
*dst++ = '\0';
}
return (bits);
enoent:
//__set_errno (ENOENT);
return (-1);
emsgsize:
//__set_errno (EMSGSIZE);
return (-1);
}

View File

@@ -4,3 +4,8 @@ ntp server 1.pool.ntp.org prefer true
ntp server 2.pool.ntp.org prefer true
ntp server 3.pool.ntp.org prefer true
ntp authenticate
ip dhcp snooping
ip dhcp snooping Vlan1
ntp source-interface Vlan 1
interface range Ethernet 0-100
no ip dhcp snooping trust

View File

@@ -0,0 +1,2 @@
configure terminal
no ip vrf mgmt

View File

@@ -11,6 +11,7 @@ start() {
fi
cp /usr/local/lib/OLS_NOS_fixups.script /home/admin/OLS_NOS_fixups.script
cp /usr/local/lib/OLS_NOS_upgrade_override.script /home/admin/OLS_NOS_upgrade_override.script
if [ $(systemctl is-active config-setup.service) == "active" ]; then
# do nothing on service restart
@@ -29,11 +30,23 @@ start() {
}
wait() {
test -d /var/lib/ucentral || mkdir /var/lib/ucentral
# Wait for at least one Vlan to be created - a signal that telemetry is up.
# Even if the vlan table is empty, private Vlan 3967 will be allocated with
# all ports in it.
while ! ls /sys/class/net/Vlan* &>/dev/null; do sleep 1; done
# Detect first boot on this version
# Run upgrade overrides before fixups
conf_upgrade_md5sum=$(md5sum /home/admin/OLS_NOS_upgrade_override.script | cut -d ' ' -f1)
if test "$conf_upgrade_md5sum" != "$(test -f /var/lib/ucentral/upgrade-override.md5sum && cat /var/lib/ucentral/upgrade-override.md5sum)"; then
sudo -u admin -- bash "sonic-cli" "/home/admin/OLS_NOS_upgrade_override.script"
echo -n "$conf_upgrade_md5sum" >/var/lib/ucentral/upgrade-override.md5sum
fi
sudo touch /etc/default/in-band-dhcp
# Temporary NTP fixup / WA: configure a list of default NTP servers.
# Should mature into a default-config option to make sure board has right
# time upon any boot (especially first time).
@@ -48,6 +61,8 @@ wait() {
# NOTE: alternatively we could use ifplugd. This also handle del/add scenario
ifup Vlan1 || true
config vlan dhcp 1 enable
# There's an issue with containers starting before the DNS server is configured:
# the resolv.conf file gets copied from host to container upon container start.
# This means that if resolv.conf gets altered (on host) after the container's been
@@ -63,9 +78,19 @@ wait() {
# This also means that we won't start up until this URI is accessible.
while ! curl clientauth.one.digicert.com &>/dev/null; do sleep 1; done
# Enable DHCP trusting for the uplink (Vlan1) iface.
# It's needed to forward DHCP Discover messages (and replies) from/to the DHCP
# server for (untrusted) port clients (EthernetX) of the same Vlan (Vlan1).
# Without this fix the underlying Vlan members wouldn't be able to receive a
# DHCP-leased IP.
trusted_dhcp_if=`sudo -u admin -- bash "sonic-cli" "-c" "show ip arp" | grep -Eo "Ethernet[0-9]+"`
sudo -u admin -- "echo" "configure terminal" > /home/admin/fixup_scr.script
sudo -u admin -- "echo" "interface $trusted_dhcp_if" >> /home/admin/fixup_scr.script
sudo -u admin -- "echo" "ip dhcp snooping trust" >> /home/admin/fixup_scr.script
sudo -u admin -- bash "sonic-cli" "/home/admin/fixup_scr.script"
# change admin password
# NOTE: This could lead to access escalation, if you got image from running device
test -d /var/lib/ucentral || mkdir /var/lib/ucentral
if ! test -f /var/lib/ucentral/admin-cred.changed; then
#ADMIN_PASSWD=`openssl rand -hex 10`
ADMIN_PASSWD=broadcom

View File

@@ -154,6 +154,11 @@
"vlanid": "1"
}
},
"VLAN_INTERFACE": {
"Vlan1": {
"dhcp": "enable"
}
},
"VLAN_MEMBER": {
{% for port in PORT %}
"Vlan1|{{port}}": {
@@ -164,11 +169,6 @@
"INTERFACE": {
"Vlan1": {}
},
"MGMT_VRF_CONFIG": {
"vrf_global": {
"mgmtVrfEnabled": "true"
}
},
"VRF": {
"default": {
"enabled": "true"

View File

@@ -1,3 +1,8 @@
# Production build system for uCentral client
# Configuration parser tests have been moved to tests/config-parser/Makefile
# Unit tests remain here for backward compatibility with original repository structure.
# See TESTING_FRAMEWORK.md for complete test documentation.
.PHONY: test
export CFLAGS+= -Werror -Wall -Wextra
@@ -17,7 +22,7 @@ platform/plat.a:
%.o: %.c
gcc -c -o $@ ${CFLAGS} -I ./ -I ./include $^
ucentral-client: ucentral-client.o proto.o platform/plat.a \
ucentral-client: ucentral-client.o proto.o est-client.o platform/plat.a \
ucentral-json-parser.o ucentral-log.o router-utils.o base64.o
g++ -o $@ $^ -lcurl -lwebsockets -lcjson -lssl -lcrypto -lpthread -ljsoncpp -lresolv
@@ -31,4 +36,5 @@ test-ucentral-json-parser: test-ucentral-json-parser.o ucentral-json-parser.o
./test-ucentral-json-parser 2>/dev/null
clean:
rm -f ucentral-client 2>/dev/null
rm -f ucentral-client *.o test-ucentral-json-parser 2>/dev/null
$(MAKE) -C platform/${PLATFORM} clean

View File

@@ -0,0 +1,592 @@
/* SPDX-License-Identifier: BSD-3-Clause */
/**
* EST (Enrollment over Secure Transport) Client Implementation
*
* RFC 7030 compliant EST client for PKI 2.0 operational certificate
* enrollment and renewal. Implements:
* - /simpleenroll - Initial certificate enrollment
* - /simplereenroll - Certificate renewal
* - /cacerts - CA certificate retrieval
*/
#include <stdio.h>
#include <stdlib.h>
#include <stdarg.h>
#include <string.h>
#include <unistd.h>
#include <curl/curl.h>
#include <openssl/bio.h>
#include <openssl/pem.h>
#include <openssl/x509.h>
#include <openssl/x509v3.h>
#include <openssl/err.h>
#include <openssl/evp.h>
#include <openssl/pkcs7.h>
#include "est-client.h"
/* EST default servers */
#define EST_SERVER_PROD "est.certificates.open-lan.org"
#define EST_SERVER_QA "qaest.certificates.open-lan.org:8001"
/* EST well-known paths (RFC 7030) */
#define EST_PATH_SIMPLEENROLL "/.well-known/est/simpleenroll"
#define EST_PATH_SIMPLEREENROLL "/.well-known/est/simplereenroll"
#define EST_PATH_CACERTS "/.well-known/est/cacerts"
/* HTTP request timeout */
#define EST_TIMEOUT_SECONDS 30
/* Global error message buffer */
static char est_error_msg[512] = {0};
/* Memory buffer for CURL responses */
struct est_memory {
char *data;
size_t size;
};
/**
* Set error message
*/
static void est_set_error(const char *fmt, ...)
{
va_list args;
va_start(args, fmt);
vsnprintf(est_error_msg, sizeof(est_error_msg), fmt, args);
va_end(args);
}
const char* est_get_error(void)
{
return est_error_msg[0] ? est_error_msg : "Unknown error";
}
/**
* CURL write callback - store response data in memory
*/
static size_t est_write_callback(void *contents, size_t size, size_t nmemb, void *userp)
{
size_t realsize = size * nmemb;
struct est_memory *mem = (struct est_memory *)userp;
char *ptr = realloc(mem->data, mem->size + realsize + 1);
if (!ptr) {
est_set_error("Out of memory");
return 0;
}
mem->data = ptr;
memcpy(&(mem->data[mem->size]), contents, realsize);
mem->size += realsize;
mem->data[mem->size] = 0; /* null terminate */
return realsize;
}
/**
* Extract certificate issuer string
*/
static char* est_get_cert_issuer(const char *cert_path)
{
FILE *f = fopen(cert_path, "r");
if (!f) {
est_set_error("Cannot open certificate: %s", cert_path);
return NULL;
}
X509 *cert = PEM_read_X509(f, NULL, NULL, NULL);
fclose(f);
if (!cert) {
est_set_error("Failed to parse certificate");
return NULL;
}
X509_NAME *issuer = X509_get_issuer_name(cert);
if (!issuer) {
X509_free(cert);
est_set_error("Failed to get certificate issuer");
return NULL;
}
char *issuer_str = X509_NAME_oneline(issuer, NULL, 0);
X509_free(cert);
return issuer_str;
}
const char* est_get_server_url(const char *cert_path)
{
/* Check environment variable override */
const char *env_server = getenv("EST_SERVER");
if (env_server && env_server[0])
return env_server;
/* Auto-detect from certificate issuer */
char *issuer = est_get_cert_issuer(cert_path);
if (!issuer)
return EST_SERVER_PROD; /* default fallback */
const char *server = EST_SERVER_PROD;
if (strstr(issuer, "OpenLAN Demo Birth CA")) {
server = EST_SERVER_QA;
} else if (strstr(issuer, "OpenLAN Birth Issuing CA")) {
server = EST_SERVER_PROD;
}
OPENSSL_free(issuer);
return server;
}
/**
* Generate CSR from existing certificate
*/
int est_generate_csr(const char *cert_path, const char *key_path,
char **csr_out, size_t *csr_len)
{
if (!cert_path || !key_path || !csr_out || !csr_len) {
est_set_error("Invalid arguments");
return EST_ERROR_INVALID;
}
*csr_out = NULL;
*csr_len = 0;
/* Read existing certificate */
FILE *cert_file = fopen(cert_path, "r");
if (!cert_file) {
est_set_error("Cannot open certificate: %s", cert_path);
return EST_ERROR_GENERAL;
}
X509 *cert = PEM_read_X509(cert_file, NULL, NULL, NULL);
fclose(cert_file);
if (!cert) {
est_set_error("Failed to parse certificate");
return EST_ERROR_CRYPTO;
}
/* Get subject from existing certificate */
X509_NAME *subject = X509_get_subject_name(cert);
if (!subject) {
X509_free(cert);
est_set_error("Failed to get certificate subject");
return EST_ERROR_CRYPTO;
}
/* Duplicate subject name (we'll free cert but need subject) */
X509_NAME *subject_dup = X509_NAME_dup(subject);
X509_free(cert);
if (!subject_dup) {
est_set_error("Failed to duplicate subject name");
return EST_ERROR_MEMORY;
}
/* Read private key */
FILE *key_file = fopen(key_path, "r");
if (!key_file) {
X509_NAME_free(subject_dup);
est_set_error("Cannot open private key: %s", key_path);
return EST_ERROR_GENERAL;
}
EVP_PKEY *pkey = PEM_read_PrivateKey(key_file, NULL, NULL, NULL);
fclose(key_file);
if (!pkey) {
X509_NAME_free(subject_dup);
est_set_error("Failed to parse private key");
return EST_ERROR_CRYPTO;
}
/* Create new CSR */
X509_REQ *req = X509_REQ_new();
if (!req) {
EVP_PKEY_free(pkey);
X509_NAME_free(subject_dup);
est_set_error("Failed to create CSR");
return EST_ERROR_MEMORY;
}
/* Set version */
X509_REQ_set_version(req, 0L);
/* Set subject */
X509_REQ_set_subject_name(req, subject_dup);
X509_NAME_free(subject_dup);
/* Set public key */
X509_REQ_set_pubkey(req, pkey);
/* Sign CSR */
if (!X509_REQ_sign(req, pkey, EVP_sha256())) {
X509_REQ_free(req);
EVP_PKEY_free(pkey);
est_set_error("Failed to sign CSR");
return EST_ERROR_CRYPTO;
}
EVP_PKEY_free(pkey);
/* Write CSR to memory (DER format) */
BIO *bio = BIO_new(BIO_s_mem());
if (!bio) {
X509_REQ_free(req);
est_set_error("Failed to create BIO");
return EST_ERROR_MEMORY;
}
if (!i2d_X509_REQ_bio(bio, req)) {
BIO_free(bio);
X509_REQ_free(req);
est_set_error("Failed to write CSR");
return EST_ERROR_CRYPTO;
}
X509_REQ_free(req);
/* Get DER data */
BUF_MEM *bio_buf;
BIO_get_mem_ptr(bio, &bio_buf);
/* Base64 encode (no headers) */
BIO *b64 = BIO_new(BIO_f_base64());
BIO_set_flags(b64, BIO_FLAGS_BASE64_NO_NL);
BIO *mem_bio = BIO_new(BIO_s_mem());
BIO_push(b64, mem_bio);
BIO_write(b64, bio_buf->data, bio_buf->length);
BIO_flush(b64);
BUF_MEM *b64_buf;
BIO_get_mem_ptr(mem_bio, &b64_buf);
*csr_out = malloc(b64_buf->length + 1);
if (!*csr_out) {
BIO_free_all(b64);
BIO_free(bio);
est_set_error("Out of memory");
return EST_ERROR_MEMORY;
}
memcpy(*csr_out, b64_buf->data, b64_buf->length);
(*csr_out)[b64_buf->length] = 0;
*csr_len = b64_buf->length;
BIO_free_all(b64);
BIO_free(bio);
return EST_SUCCESS;
}
/**
* Convert PKCS#7 to PEM format
*/
int est_pkcs7_to_pem(const char *pkcs7_data, size_t pkcs7_len,
char **pem_out, size_t *pem_len)
{
if (!pkcs7_data || !pkcs7_len || !pem_out || !pem_len) {
est_set_error("Invalid arguments");
return EST_ERROR_INVALID;
}
*pem_out = NULL;
*pem_len = 0;
/* Decode base64 */
BIO *b64 = BIO_new(BIO_f_base64());
BIO_set_flags(b64, BIO_FLAGS_BASE64_NO_NL);
BIO *bio_mem = BIO_new_mem_buf((const void *)pkcs7_data, (int)pkcs7_len);
BIO_push(b64, bio_mem);
/* Read PKCS#7 structure */
PKCS7 *p7 = d2i_PKCS7_bio(b64, NULL);
BIO_free_all(b64);
if (!p7) {
est_set_error("Failed to parse PKCS#7 data");
return EST_ERROR_CRYPTO;
}
/* Extract certificates from PKCS#7 */
STACK_OF(X509) *certs = NULL;
int type = OBJ_obj2nid(p7->type);
if (type == NID_pkcs7_signed) {
certs = p7->d.sign->cert;
} else if (type == NID_pkcs7_signedAndEnveloped) {
certs = p7->d.signed_and_enveloped->cert;
}
if (!certs || sk_X509_num(certs) == 0) {
PKCS7_free(p7);
est_set_error("No certificates in PKCS#7");
return EST_ERROR_CRYPTO;
}
/* Write certificates to PEM */
BIO *out = BIO_new(BIO_s_mem());
if (!out) {
PKCS7_free(p7);
est_set_error("Failed to create output BIO");
return EST_ERROR_MEMORY;
}
for (int i = 0; i < sk_X509_num(certs); i++) {
X509 *cert = sk_X509_value(certs, i);
if (!PEM_write_bio_X509(out, cert)) {
BIO_free(out);
PKCS7_free(p7);
est_set_error("Failed to write certificate");
return EST_ERROR_CRYPTO;
}
}
PKCS7_free(p7);
/* Get PEM data */
BUF_MEM *pem_buf;
BIO_get_mem_ptr(out, &pem_buf);
*pem_out = malloc(pem_buf->length + 1);
if (!*pem_out) {
BIO_free(out);
est_set_error("Out of memory");
return EST_ERROR_MEMORY;
}
memcpy(*pem_out, pem_buf->data, pem_buf->length);
(*pem_out)[pem_buf->length] = 0;
*pem_len = pem_buf->length;
BIO_free(out);
return EST_SUCCESS;
}
/**
* Perform EST HTTP request
*/
static int est_http_request(const char *url, const char *cert, const char *key,
const char *ca_bundle, const char *post_data, size_t post_len,
char **response_out, size_t *response_len)
{
CURL *curl;
CURLcode res;
struct est_memory response = {0};
curl = curl_easy_init();
if (!curl) {
est_set_error("Failed to initialize CURL");
return EST_ERROR_GENERAL;
}
/* Set URL */
curl_easy_setopt(curl, CURLOPT_URL, url);
/* Client certificate authentication */
curl_easy_setopt(curl, CURLOPT_SSLCERTTYPE, "PEM");
curl_easy_setopt(curl, CURLOPT_SSLCERT, cert);
curl_easy_setopt(curl, CURLOPT_SSLKEYTYPE, "PEM");
curl_easy_setopt(curl, CURLOPT_SSLKEY, key);
/* CA bundle for server verification */
if (ca_bundle)
curl_easy_setopt(curl, CURLOPT_CAINFO, ca_bundle);
/* Response handler */
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, est_write_callback);
curl_easy_setopt(curl, CURLOPT_WRITEDATA, (void *)&response);
/* Timeout */
curl_easy_setopt(curl, CURLOPT_TIMEOUT, (long)EST_TIMEOUT_SECONDS);
/* POST request if data provided */
struct curl_slist *headers = NULL;
if (post_data && post_len > 0) {
headers = curl_slist_append(headers, "Content-Type: application/pkcs10");
curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
curl_easy_setopt(curl, CURLOPT_POST, 1L);
curl_easy_setopt(curl, CURLOPT_POSTFIELDS, post_data);
curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)post_len);
}
/* Perform request */
res = curl_easy_perform(curl);
curl_slist_free_all(headers); /* safe on NULL; fixes header-list leak */
curl_easy_cleanup(curl);
if (res != CURLE_OK) {
if (response.data)
free(response.data);
est_set_error("CURL error: %s", curl_easy_strerror(res));
return EST_ERROR_NETWORK;
}
*response_out = response.data;
*response_len = response.size;
return EST_SUCCESS;
}
int est_simple_enroll(const char *est_server, const char *birth_cert,
const char *birth_key, const char *ca_bundle,
char **operational_cert_out)
{
if (!est_server || !birth_cert || !birth_key || !operational_cert_out) {
est_set_error("Invalid arguments");
return EST_ERROR_INVALID;
}
*operational_cert_out = NULL;
/* Generate CSR */
char *csr = NULL;
size_t csr_len = 0;
int ret = est_generate_csr(birth_cert, birth_key, &csr, &csr_len);
if (ret != EST_SUCCESS) {
return ret;
}
/* Build EST URL */
char url[512];
snprintf(url, sizeof(url), "https://%s%s", est_server, EST_PATH_SIMPLEENROLL);
/* Perform enrollment */
char *response = NULL;
size_t response_len = 0;
ret = est_http_request(url, birth_cert, birth_key, ca_bundle,
csr, csr_len, &response, &response_len);
free(csr);
if (ret != EST_SUCCESS) {
return ret;
}
/* Convert PKCS#7 response to PEM */
char *pem = NULL;
size_t pem_len = 0;
ret = est_pkcs7_to_pem(response, response_len, &pem, &pem_len);
free(response);
if (ret != EST_SUCCESS) {
return ret;
}
*operational_cert_out = pem;
return EST_SUCCESS;
}
int est_simple_reenroll(const char *est_server, const char *operational_cert,
const char *key, const char *ca_bundle,
char **renewed_cert_out)
{
if (!est_server || !operational_cert || !key || !renewed_cert_out) {
est_set_error("Invalid arguments");
return EST_ERROR_INVALID;
}
*renewed_cert_out = NULL;
/* Generate CSR */
char *csr = NULL;
size_t csr_len = 0;
int ret = est_generate_csr(operational_cert, key, &csr, &csr_len);
if (ret != EST_SUCCESS) {
return ret;
}
/* Build EST URL */
char url[512];
snprintf(url, sizeof(url), "https://%s%s", est_server, EST_PATH_SIMPLEREENROLL);
/* Perform reenrollment */
char *response = NULL;
size_t response_len = 0;
ret = est_http_request(url, operational_cert, key, ca_bundle,
csr, csr_len, &response, &response_len);
free(csr);
if (ret != EST_SUCCESS) {
return ret;
}
/* Convert PKCS#7 response to PEM */
char *pem = NULL;
size_t pem_len = 0;
ret = est_pkcs7_to_pem(response, response_len, &pem, &pem_len);
free(response);
if (ret != EST_SUCCESS) {
return ret;
}
*renewed_cert_out = pem;
return EST_SUCCESS;
}
int est_get_cacerts(const char *est_server, const char *cert, const char *key,
const char *ca_bundle, char **ca_certs_out)
{
if (!est_server || !cert || !key || !ca_certs_out) {
est_set_error("Invalid arguments");
return EST_ERROR_INVALID;
}
*ca_certs_out = NULL;
/* Build EST URL */
char url[512];
snprintf(url, sizeof(url), "https://%s%s", est_server, EST_PATH_CACERTS);
/* Perform GET request */
char *response = NULL;
size_t response_len = 0;
int ret = est_http_request(url, cert, key, ca_bundle,
NULL, 0, &response, &response_len);
if (ret != EST_SUCCESS) {
return ret;
}
/* Convert PKCS#7 response to PEM */
char *pem = NULL;
size_t pem_len = 0;
ret = est_pkcs7_to_pem(response, response_len, &pem, &pem_len);
free(response);
if (ret != EST_SUCCESS) {
return ret;
}
*ca_certs_out = pem;
return EST_SUCCESS;
}
int est_save_cert(const char *cert_data, size_t cert_len, const char *file_path)
{
if (!cert_data || !cert_len || !file_path) {
est_set_error("Invalid arguments");
return EST_ERROR_INVALID;
}
FILE *f = fopen(file_path, "w");
if (!f) {
est_set_error("Cannot create file: %s", file_path);
return EST_ERROR_GENERAL;
}
if (fwrite(cert_data, 1, cert_len, f) != cert_len) {
fclose(f);
est_set_error("Failed to write to file: %s", file_path);
return EST_ERROR_GENERAL;
}
fclose(f);
return EST_SUCCESS;
}


@@ -0,0 +1,124 @@
/* SPDX-License-Identifier: BSD-3-Clause */
#ifndef __EST_CLIENT_H
#define __EST_CLIENT_H
/**
* EST (Enrollment over Secure Transport) Client Implementation
*
* Implements RFC 7030 EST protocol for PKI 2.0 operational certificate
* enrollment and renewal. Uses libcurl for HTTPS transport and OpenSSL
* for cryptographic operations.
*/
#include <stddef.h>
/* EST operation return codes */
#define EST_SUCCESS 0
#define EST_ERROR_GENERAL -1
#define EST_ERROR_NETWORK -2
#define EST_ERROR_CRYPTO -3
#define EST_ERROR_MEMORY -4
#define EST_ERROR_INVALID -5
/**
* Generate a Certificate Signing Request (CSR) from an existing certificate
*
* @param cert_path Path to existing certificate (PEM format)
* @param key_path Path to private key (PEM format)
* @param csr_out Output buffer for CSR (base64, no headers) - caller must free()
* @param csr_len Output length of CSR data
* @return EST_SUCCESS on success, error code otherwise
*/
int est_generate_csr(const char *cert_path, const char *key_path,
char **csr_out, size_t *csr_len);
/**
* Perform EST simple enrollment - Get operational certificate from birth certificate
*
 * @param est_server EST server hostname (e.g., "est.certificates.open-lan.org")
* @param birth_cert Path to birth certificate (PEM format)
* @param birth_key Path to birth certificate private key (PEM format)
* @param ca_bundle Path to CA certificate bundle for server verification
* @param operational_cert_out Output operational certificate (PEM) - caller must free()
* @return EST_SUCCESS on success, error code otherwise
*/
int est_simple_enroll(const char *est_server,
const char *birth_cert, const char *birth_key,
const char *ca_bundle,
char **operational_cert_out);
/**
* Perform EST simple reenrollment - Renew operational certificate
*
* @param est_server EST server URL
* @param operational_cert Path to current operational certificate (PEM format)
* @param key Path to private key (PEM format)
* @param ca_bundle Path to CA certificate bundle
* @param renewed_cert_out Output renewed certificate (PEM) - caller must free()
* @return EST_SUCCESS on success, error code otherwise
*/
int est_simple_reenroll(const char *est_server,
const char *operational_cert, const char *key,
const char *ca_bundle,
char **renewed_cert_out);
/**
* Retrieve operational CA certificates from EST server
*
* @param est_server EST server URL
* @param cert Path to client certificate for authentication
* @param key Path to private key
* @param ca_bundle Path to CA bundle for server verification
* @param ca_certs_out Output CA certificates (PEM) - caller must free()
* @return EST_SUCCESS on success, error code otherwise
*/
int est_get_cacerts(const char *est_server,
const char *cert, const char *key,
const char *ca_bundle,
char **ca_certs_out);
/**
* Convert PKCS#7 format to PEM format using OpenSSL
*
* @param pkcs7_data PKCS#7 data (base64, no headers)
* @param pkcs7_len Length of PKCS#7 data
* @param pem_out Output PEM format certificate(s) - caller must free()
* @param pem_len Output length of PEM data
* @return EST_SUCCESS on success, error code otherwise
*/
int est_pkcs7_to_pem(const char *pkcs7_data, size_t pkcs7_len,
char **pem_out, size_t *pem_len);
/**
* Auto-detect EST server URL based on certificate issuer
*
* Inspects the certificate issuer field to determine appropriate EST server:
* - "OpenLAN Demo Birth CA" -> QA server
* - "OpenLAN Birth Issuing CA" -> Production server
*
* Can be overridden with EST_SERVER environment variable.
*
* @param cert_path Path to certificate to inspect
 * @return EST server hostname (static string); falls back to the production server when the issuer cannot be determined
*/
const char* est_get_server_url(const char *cert_path);
/**
* Save certificate to file
*
* @param cert_data Certificate data (PEM format)
* @param cert_len Length of certificate data
* @param file_path Destination file path
* @return EST_SUCCESS on success, error code otherwise
*/
int est_save_cert(const char *cert_data, size_t cert_len, const char *file_path);
/**
* Get last error message
*
* @return Human-readable error message (static string)
*/
const char* est_get_error(void);
#endif /* __EST_CLIENT_H */


@@ -2,11 +2,12 @@
#include <netinet/in.h>
/* Fixed: Changed 'key' to proper struct definition with semicolon */
struct ucentral_router_fib_key {
/* TODO vrf */
struct in_addr prefix;
int prefix_len;
} key;
};
struct ucentral_router_fib_info { /* Destination info */
enum {
@@ -46,16 +47,16 @@ struct ucentral_router {
struct ucentral_router_fib_db_apply_args {
/* plat would check info to determine if the node changed */
int (*upd_cb)(const struct ucentral_router_fib_node *old,
int (*upd_cb)(const struct ucentral_router_fib_node *old_node,
int olen,
const struct ucentral_router_fib_node *new,
const struct ucentral_router_fib_node *new_node,
int nlen,
void *arg);
/* prefix = new, info = new */
int (*add_cb)(const struct ucentral_router_fib_node *new,
int (*add_cb)(const struct ucentral_router_fib_node *new_node,
int len, void *arg);
/* prefix = none */
int (*del_cb)(const struct ucentral_router_fib_node *old,
int (*del_cb)(const struct ucentral_router_fib_node *old_node,
int len, void *arg);
void *arg;
};
@@ -69,26 +70,27 @@ int ucentral_router_fib_db_append(struct ucentral_router *r,
struct ucentral_router_fib_node *n);
int ucentral_router_fib_key_cmp(const struct ucentral_router_fib_key *a,
const struct ucentral_router_fib_key *b);
bool ucentral_router_fib_info_cmp(const struct ucentral_router_fib_info *a,
const struct ucentral_router_fib_info *b);
int ucentral_router_fib_info_cmp(const struct ucentral_router_fib_info *a,
const struct ucentral_router_fib_info *b);
#define router_db_get(R, I) (I < (R)->len ? &(R)->arr[(I)] : NULL)
#define for_router_db_diff_CASE_UPD(DIFF) if (!(DIFF))
#define for_router_db_diff_CASE_DEL(DIFF) if ((DIFF) > 0)
#define for_router_db_diff_CASE_ADD(DIFF) if ((DIFF) < 0)
#define diff_case_upd(DIFF) (!(DIFF))
#define diff_case_del(DIFF) ((DIFF) > 0)
#define diff_case_add(DIFF) ((DIFF) < 0)
#define router_db_diff_get(NEW, OLD, INEW, IOLD) \
(IOLD) == (OLD)->len \
? -1 \
: (INEW) == (NEW)->len \
? 1 \
: ucentral_router_fib_key_cmp(&(NEW)->arr[(INEW)].key, &(OLD)->arr[(IOLD)].key)
#define for_router_db_diff(NEW, OLD, INEW, IOLD, DIFF) \
for ((INEW) = 0, (IOLD) = 0, (NEW)->sorted ? 0 : ucentral_router_fib_db_sort((NEW)), (OLD)->sorted ? 0 : ucentral_router_fib_db_sort((OLD)); \
((IOLD) != (OLD)->len || (INEW) != (NEW)->len) && \
(( \
(DIFF) = (IOLD) == (OLD)->len ? -1 : (INEW) == (NEW)->len ? 1 : ucentral_router_fib_key_cmp(&(NEW)->arr[(INEW)].key, &(OLD)->arr[(IOLD)].key) \
) || 1); \
(DIFF) == 0 ? ++(INEW) && ++(IOLD) : 0, (DIFF) > 0 ? ++(IOLD) : 0, (DIFF) < 0 ? ++(INEW) : 0\
for ((INEW) = 0, (IOLD) = 0, (DIFF) = 0; \
\
((IOLD) != (OLD)->len || (INEW) != (NEW)->len); \
\
(DIFF) == 0 ? ++(INEW) && ++(IOLD) : 0, \
(DIFF) > 0 ? ++(IOLD) : 0, \
(DIFF) < 0 ? ++(INEW) : 0 \
)
/*
* ((DIFF) == 0 && ++(INEW) && ++(IOLD)) || \
* ((DIFF) > 0 && ++(IOLD)) || \
* ((DIFF) < 0 && ++(INEW)) \
*/


@@ -28,37 +28,6 @@ void uc_log_send_cb_register(void (*cb)(const char *, int sv));
void uc_log_severity_set(enum uc_log_component c, int sv);
void uc_log(enum uc_log_component c, int sv, const char *fmt, ...);
#ifdef PLAT_EC
#define UC_LOG_INFO(...) \
do { \
syslog(LOG_INFO, __VA_ARGS__); \
uc_log(UC_LOG_COMPONENT, UC_LOG_SV_INFO, __VA_ARGS__); \
} while (0)
#define UC_LOG_DBG(FMT, ...) \
do { \
syslog(LOG_DEBUG, "%s:%u: " FMT, __func__, \
(unsigned)__LINE__, ##__VA_ARGS__); \
uc_log(UC_LOG_COMPONENT, UC_LOG_SV_DEBUG, \
FMT, ##__VA_ARGS__); \
} while (0)
#define UC_LOG_ERR(FMT, ...) \
do { \
syslog(LOG_ERR, "%s:%u: " FMT, __func__, \
(unsigned)__LINE__, ##__VA_ARGS__); \
uc_log(UC_LOG_COMPONENT, UC_LOG_SV_ERR, \
FMT, ##__VA_ARGS__); \
} while (0)
#define UC_LOG_CRIT(FMT, ...) \
do { \
syslog(LOG_CRIT, "%s:%u: " FMT, __func__, \
(unsigned)__LINE__, ##__VA_ARGS__); \
uc_log(UC_LOG_COMPONENT, UC_LOG_SV_CRIT, \
FMT, ##__VA_ARGS__); \
} while (0)
#else
#define UC_LOG_INFO(...) \
do { \
syslog(LOG_INFO, __VA_ARGS__); \
@@ -88,6 +57,5 @@ void uc_log(enum uc_log_component c, int sv, const char *fmt, ...);
uc_log(UC_LOG_COMPONENT, UC_LOG_SV_CRIT, \
FMT __VA_OPT__(, ) __VA_ARGS__); \
} while (0)
#endif
#endif


@@ -22,9 +22,6 @@ extern "C" {
#define MAX_NUM_OF_PORTS (100)
#define PORT_MAX_NAME_LEN (32)
#ifdef PLAT_EC
#define VLAN_MAX_NAME_LEN PORT_MAX_NAME_LEN
#endif
#define RTTY_CFG_FIELD_STR_MAX_LEN (64)
#define PLATFORM_INFO_STR_MAX_LEN (96)
#define SYSLOG_CFG_FIELD_STR_MAX_LEN (64)
@@ -34,6 +31,8 @@ extern "C" {
#define RADIUS_CFG_DEFAULT_PRIO (1)
#define HEALTHCHEK_MESSAGE_MAX_COUNT (10)
#define HEALTHCHEK_MESSAGE_MAX_LEN (100)
#define PLATFORM_MAC_STR_SIZE (18)
#define METRICS_WIRED_CLIENTS_MAX_NUM (2000)
/*
* TODO(vb) likely we need to parse interfaces in proto to understand
@@ -42,6 +41,8 @@ extern "C" {
*/
#define PID_TO_NAME(p, name) sprintf(name, "Ethernet%hu", p)
#define NAME_TO_PID(p, name) sscanf((name), "Ethernet%hu", (p))
#define VLAN_TO_NAME(v, name) sprintf((name), "Vlan%hu", (v))
#define NAME_TO_VLAN(v, name) sscanf((name), "Vlan%hu", (v))
struct plat_vlan_memberlist;
struct plat_port_vlan;
@@ -65,6 +66,18 @@ enum plat_ieee8021x_port_host_mode {
PLAT_802_1X_PORT_HOST_MODE_SINGLE_HOST,
};
enum plat_ieee8021x_das_auth_type {
PLAT_802_1X_DAS_AUTH_TYPE_ANY,
PLAT_802_1X_DAS_AUTH_TYPE_ALL,
PLAT_802_1X_DAS_AUTH_TYPE_SESSION_KEY,
};
enum plat_igmp_version {
PLAT_IGMP_VERSION_1,
PLAT_IGMP_VERSION_2,
PLAT_IGMP_VERSION_3
};
#define UCENTRAL_PORT_LLDP_PEER_INFO_MAX_MGMT_IPS (2)
/* Interface LLDP peer's data, as defined in interface.lldp.yml */
struct plat_port_lldp_peer_info {
@@ -78,7 +91,7 @@ struct plat_port_lldp_peer_info {
/* The chassis name that our neighbour is announcing */
char name[64];
/* The chassis MAC that our neighbour is announcing */
char mac[18];
char mac[PLATFORM_MAC_STR_SIZE];
/* The chassis description that our neighbour is announcing */
char description[512];
/* The management IPs that our neighbour is announcing */
@@ -116,7 +129,7 @@ struct plat_poe_port_state {
struct plat_ieee8021x_authenticated_client_info {
char auth_method[32];
char mac_addr[18];
char mac_addr[PLATFORM_MAC_STR_SIZE];
size_t session_time;
char username[64];
char vlan_type[32];
@@ -253,15 +266,29 @@ struct plat_port_l2 {
struct plat_ipv4 ipv4;
};
struct plat_igmp {
bool exist;
bool snooping_enabled;
bool querier_enabled;
bool fast_leave_enabled;
uint32_t query_interval;
uint32_t last_member_query_interval;
uint32_t max_response_time;
enum plat_igmp_version version;
size_t num_groups;
struct {
struct in_addr addr;
struct plat_ports_list *egress_ports_list;
} *groups;
};
struct plat_port_vlan {
struct plat_vlan_memberlist *members_list_head;
struct plat_ipv4 ipv4;
struct plat_dhcp dhcp;
struct plat_igmp igmp;
uint16_t id;
uint16_t mstp_instance;
#ifdef PLAT_EC
char name[VLAN_MAX_NAME_LEN];
#endif
};
struct plat_vlans_list {
@@ -275,9 +302,6 @@ struct plat_vlan_memberlist {
uint16_t fp_id;
} port;
bool tagged;
#ifdef PLAT_EC
bool pvid;
#endif
struct plat_vlan_memberlist *next;
};
@@ -289,6 +313,18 @@ struct plat_syslog_cfg {
char host[SYSLOG_CFG_FIELD_STR_MAX_LEN];
};
struct plat_enabled_service_cfg {
struct {
bool enabled;
} ssh;
struct telnet {
bool enabled;
} telnet;
struct {
bool enabled;
} http;
};
struct plat_rtty_cfg {
char id[RTTY_CFG_FIELD_STR_MAX_LEN];
char passwd[RTTY_CFG_FIELD_STR_MAX_LEN];
@@ -331,6 +367,7 @@ struct plat_metrics_cfg {
int lldp_enabled;
int clients_enabled;
size_t interval;
unsigned max_mac_count;
/* IE GET max length. Should be enough. */
char public_ip_lookup[2048];
} state;
@@ -343,8 +380,16 @@ struct plat_unit_poe_cfg {
bool is_usage_threshold_set;
};
struct plat_unit_system_cfg {
char password[64];
bool password_changed;
};
struct plat_unit {
struct plat_unit_poe_cfg poe;
struct plat_unit_system_cfg system;
bool mc_flood_control;
bool querier_enable;
};
enum plat_stp_mode {
@@ -376,6 +421,31 @@ struct plat_radius_hosts_list {
struct plat_radius_host host;
};
struct plat_ieee8021x_dac_host {
char hostname[RADIUS_CFG_HOSTNAME_STR_MAX_LEN];
char passkey[RADIUS_CFG_PASSKEY_STR_MAX_LEN];
};
struct plat_ieee8021x_dac_list {
struct plat_ieee8021x_dac_list *next;
struct plat_ieee8021x_dac_host host;
};
struct plat_port_isolation_session_ports {
struct plat_ports_list *ports_list;
};
struct plat_port_isolation_session {
uint64_t id;
struct plat_port_isolation_session_ports uplink;
struct plat_port_isolation_session_ports downlink;
};
struct plat_port_isolation_cfg {
struct plat_port_isolation_session *sessions;
size_t sessions_num;
};
struct plat_cfg {
struct plat_unit unit;
/* Alloc all ports, but access them only if bit is set. */
@@ -385,6 +455,7 @@ struct plat_cfg {
BITMAP_DECLARE(vlans_to_cfg, MAX_VLANS);
struct plat_metrics_cfg metrics;
struct plat_syslog_cfg *log_cfg;
struct plat_enabled_service_cfg enabled_services_cfg;
/* Port's interfaces (provide l2 iface w/o bridge caps) */
struct plat_port_l2 portsl2[MAX_NUM_OF_PORTS];
struct ucentral_router router;
@@ -393,9 +464,24 @@ struct plat_cfg {
/* Instance zero is for global instance (like common values in rstp) */
struct plat_stp_instance_cfg stp_instances[MAX_VLANS];
struct plat_radius_hosts_list *radius_hosts_list;
bool ieee8021x_is_auth_ctrl_enabled;
struct {
bool is_auth_ctrl_enabled;
bool bounce_port_ignore;
bool disable_port_ignore;
bool ignore_server_key;
bool ignore_session_key;
char server_key[RADIUS_CFG_PASSKEY_STR_MAX_LEN];
enum plat_ieee8021x_das_auth_type das_auth_type;
struct plat_ieee8021x_dac_list *das_dac_list;
} ieee8021x;
struct plat_port_isolation_cfg port_isolation_cfg;
};
struct plat_learned_mac_addr {
char port[PORT_MAX_NAME_LEN];
int vid;
char mac[PLATFORM_MAC_STR_SIZE];
};
typedef void (*plat_alarm_cb)(struct plat_alarm *);
@@ -457,9 +543,6 @@ typedef void (*plat_run_script_cb)(int err, struct plat_run_script_result *,
void *ctx);
enum {
#ifdef PLAT_EC
UCENTRAL_PORT_SPEED_NONE,
#endif
UCENTRAL_PORT_SPEED_10_E,
UCENTRAL_PORT_SPEED_100_E,
UCENTRAL_PORT_SPEED_1000_E,
@@ -472,9 +555,6 @@ enum {
};
enum {
#ifdef PLAT_EC
UCENTRAL_PORT_DUPLEX_NONE,
#endif
UCENTRAL_PORT_DUPLEX_HALF_E,
UCENTRAL_PORT_DUPLEX_FULL_E,
};
@@ -501,17 +581,60 @@ enum {
PLAT_REBOOT_CAUSE_REBOOT_CMD,
PLAT_REBOOT_CAUSE_POWERLOSS,
PLAT_REBOOT_CAUSE_CRASH,
PLAT_REBOOT_CAUSE_UNAVAILABLE,
};
enum sfp_form_factor {
UCENTRAL_SFP_FORM_FACTOR_NA = 0,
UCENTRAL_SFP_FORM_FACTOR_SFP,
UCENTRAL_SFP_FORM_FACTOR_SFP_PLUS,
UCENTRAL_SFP_FORM_FACTOR_SFP_28,
UCENTRAL_SFP_FORM_FACTOR_SFP_DD,
UCENTRAL_SFP_FORM_FACTOR_QSFP,
UCENTRAL_SFP_FORM_FACTOR_QSFP_PLUS,
UCENTRAL_SFP_FORM_FACTOR_QSFP_28,
UCENTRAL_SFP_FORM_FACTOR_QSFP_DD
};
enum sfp_link_mode {
UCENTRAL_SFP_LINK_MODE_NA = 0,
UCENTRAL_SFP_LINK_MODE_1000_X,
UCENTRAL_SFP_LINK_MODE_2500_X,
UCENTRAL_SFP_LINK_MODE_4000_SR,
UCENTRAL_SFP_LINK_MODE_10G_SR,
UCENTRAL_SFP_LINK_MODE_25G_SR,
UCENTRAL_SFP_LINK_MODE_40G_SR,
UCENTRAL_SFP_LINK_MODE_50G_SR,
UCENTRAL_SFP_LINK_MODE_100G_SR,
};
struct plat_port_transceiver_info {
char vendor_name[64];
char part_number[64];
char serial_number[64];
char revision[64];
enum sfp_form_factor form_factor;
enum sfp_link_mode *supported_link_modes;
size_t num_supported_link_modes;
float temperature;
float tx_optical_power;
float rx_optical_power;
float max_module_power;
};
struct plat_port_info {
struct plat_port_counters stats;
struct plat_port_lldp_peer_info lldp_peer_info;
struct plat_ieee8021x_port_info ieee8021x_info;
struct plat_port_transceiver_info transceiver_info;
uint32_t uptime;
uint32_t speed;
uint8_t carrier_up;
uint8_t duplex;
uint8_t has_lldp_peer_info;
uint8_t has_transceiver_info;
char name[PORT_MAX_NAME_LEN];
};
@@ -525,6 +648,24 @@ struct plat_system_info {
double load_average[3]; /* 1, 5, 15 minutes load average */
};
struct plat_iee8021x_coa_counters {
uint64_t coa_req_received;
uint64_t coa_ack_sent;
uint64_t coa_nak_sent;
uint64_t coa_ignored;
uint64_t coa_wrong_attr;
uint64_t coa_wrong_attr_value;
uint64_t coa_wrong_session_context;
uint64_t coa_administratively_prohibited_req;
};
struct plat_gw_address {
struct in_addr ip;
uint32_t metric;
char port[PORT_MAX_NAME_LEN];
char mac[PLATFORM_MAC_STR_SIZE];
};
struct plat_state_info {
struct plat_poe_state poe_state;
struct plat_poe_port_state poe_ports_state[MAX_NUM_OF_PORTS];
@@ -532,8 +673,15 @@ struct plat_state_info {
struct plat_port_info *port_info;
int port_info_count;
struct plat_port_vlan *vlan_info;
size_t vlan_info_count;
struct plat_learned_mac_addr *learned_mac_list;
size_t learned_mac_list_size;
struct plat_gw_address *gw_addr_list;
size_t gw_addr_list_size;
struct plat_system_info system_info;
struct plat_iee8021x_coa_counters ieee8021x_global_coa_counters;
};
struct plat_upgrade_info {
@@ -559,7 +707,14 @@ struct plat_event_callbacks {
plat_poe_link_faultcode_cb poe_link_faultcode_cb;
};
enum plat_script_type {
PLAT_SCRIPT_TYPE_NA = 0,
PLAT_SCRIPT_TYPE_SHELL = 1,
PLAT_SCRIPT_TYPE_DIAGNOSTICS = 2,
};
struct plat_run_script_result {
enum plat_script_type type;
const char *stdout_string;
size_t stdout_string_len;
int exit_status;
@@ -567,7 +722,7 @@ struct plat_run_script_result {
};
struct plat_run_script {
const char *type;
enum plat_script_type type;
const char *script_base64;
plat_run_script_cb cb;
void *ctx;
@@ -586,11 +741,7 @@ int plat_metrics_save(const struct plat_metrics_cfg *cfg);
int plat_metrics_restore(struct plat_metrics_cfg *cfg);
int plat_saved_config_id_get(uint64_t *id);
void plat_config_destroy(struct plat_cfg *cfg);
#ifdef PLAT_EC
int plat_factory_default(bool keep_redirector);
#else
int plat_factory_default(void);
#endif
int plat_rtty(struct plat_rtty_cfg *rtty_cfg);
int plat_upgrade(char *uri, char *signature);
@@ -621,15 +772,10 @@ int plat_run_script(struct plat_run_script *);
int plat_port_list_get(uint16_t list_size, struct plat_ports_list *ports);
int plat_port_num_get(uint16_t *num_of_active_ports);
int plat_running_img_name_get(char *str, size_t str_max_len);
int plat_revision_get(char *str, size_t str_max_len);
int
plat_reboot_cause_get(struct plat_reboot_cause *cause);
int plat_diagnostic(char *res_path);
#ifdef PLAT_EC
void clean_stats();
#endif
#ifdef __cplusplus
}
#endif


@@ -1,12 +1,19 @@
plat.a: plat.o
ar crs $@ $^
plat.o: plat-gnma.o gnma/gnma.full.a
plat.o: plat-gnma.o gnma/gnma.full.a netlink/netlink.full.a
# TODO(vb) get back to this
gcc -r -nostdlib -o $@ $^
gnma/gnma.full.a:
$(MAKE) -C $(dir $@) $(notdir $@)
netlink/netlink.full.a:
$(MAKE) -C $(dir $@) $(notdir $@)
%.o: %.c
ifdef PLATFORM_REVISION
gcc -c -o $@ ${CFLAGS} -I ./ -I ../../include -D PLATFORM_REVISION='"$(PLATFORM_REVISION)"' $^
else
gcc -c -o $@ ${CFLAGS} -I ./ -I ../../include $^
endif


@@ -1,7 +1,7 @@
all: gnma.a
%.o: %.c
gcc -c -o $@ ${CFLAGS} -I ./ -I../../../include $<
gcc -c -o $@ ${CFLAGS} -I ./ -I../../../include -I../netlink $<
gnma.a: gnma_common.o
ar crs $@ $^

File diff suppressed because it is too large


@@ -7,6 +7,7 @@
#define GNMA_RADIUS_CFG_HOSTNAME_STR_MAX_LEN (64)
#define GNMA_RADIUS_CFG_PASSKEY_STR_MAX_LEN (64)
#define GNMA_OK 0
#define GNMA_ERR_COMMON -1
#define GNMA_ERR_OVERFLOW -2
@@ -26,6 +27,16 @@ struct gnma_radius_host_key {
char hostname[GNMA_RADIUS_CFG_HOSTNAME_STR_MAX_LEN];
};
struct gnma_das_dac_host_key {
char hostname[GNMA_RADIUS_CFG_HOSTNAME_STR_MAX_LEN];
};
typedef enum _gnma_das_auth_type_t {
GNMA_802_1X_DAS_AUTH_TYPE_ANY,
GNMA_802_1X_DAS_AUTH_TYPE_ALL,
GNMA_802_1X_DAS_AUTH_TYPE_SESSION_KEY,
} gnma_das_auth_type_t;
struct gnma_metadata {
char platform[GNMA_METADATA_STR_MAX_LEN];
char hwsku[GNMA_METADATA_STR_MAX_LEN];
@@ -58,6 +69,17 @@ typedef enum _gnma_port_stat_type_t {
} gnma_port_stat_type_t;
typedef enum _gnma_ieee8021x_das_dac_stat_type_t {
GNMA_IEEE8021X_DAS_DAC_STAT_IN_COA_PKTS,
GNMA_IEEE8021X_DAS_DAC_STAT_OUT_COA_ACK_PKTS,
GNMA_IEEE8021X_DAS_DAC_STAT_OUT_COA_NAK_PKTS,
GNMA_IEEE8021X_DAS_DAC_STAT_IN_COA_IGNORED_PKTS,
GNMA_IEEE8021X_DAS_DAC_STAT_IN_COA_WRONG_ATTR_PKTS,
GNMA_IEEE8021X_DAS_DAC_STAT_IN_COA_WRONG_ATTR_VALUE_PKTS,
GNMA_IEEE8021X_DAS_DAC_STAT_IN_COA_WRONG_SESSION_CONTEXT_PKTS,
GNMA_IEEE8021X_DAS_DAC_STAT_IN_COA_ADMINISTRATIVELY_PROHIBITED_REQ_PKTS,
} gnma_ieee8021x_das_dac_stat_type_t;
struct gnma_alarm {
const char *id;
const char *resource;
@@ -129,7 +151,9 @@ struct gnma_route_attrs {
} connected;
struct {
uint16_t vid;
uint32_t metric;
struct in_addr gw;
struct gnma_port_key egress_port;
} nexthop;
};
};
@@ -256,6 +280,47 @@ struct gnma_vlan_member_bmap {
} vlan[GNMA_MAX_VLANS];
};
typedef enum _gnma_fdb_entry_type_t {
GNMA_FDB_ENTRY_TYPE_STATIC,
GNMA_FDB_ENTRY_TYPE_DYNAMIC,
} gnma_fdb_entry_type_t;
struct gnma_fdb_entry {
struct gnma_port_key port;
gnma_fdb_entry_type_t type;
int vid;
char mac[18];
};
typedef enum _gnma_igmp_version_t {
GNMA_IGMP_VERSION_NA = 0,
GNMA_IGMP_VERSION_1 = 1,
GNMA_IGMP_VERSION_2 = 2,
GNMA_IGMP_VERSION_3 = 3
} gnma_igmp_version_t;
struct gnma_igmp_snoop_attr {
bool enabled;
bool querier_enabled;
bool fast_leave_enabled;
uint32_t query_interval;
uint32_t last_member_query_interval;
uint32_t max_response_time;
gnma_igmp_version_t version;
};
struct gnma_igmp_static_group_attr {
struct in_addr address;
size_t num_ports;
struct gnma_port_key *egress_ports;
};
struct gnma_vlan_ip_t {
uint16_t vid;
uint16_t prefixlen;
struct in_addr address;
};
int gnma_switch_create(/* TODO id */ /* TODO: attr (adr, login, psw) */);
int gnma_port_admin_state_set(struct gnma_port_key *port_key, bool up);
int gnma_port_speed_set(struct gnma_port_key *port_key, const char *speed);
@@ -380,6 +445,9 @@ int gnma_route_remove(uint16_t vr_id /* 0 - default */,
int gnma_route_list_get(uint16_t vr_id, uint32_t *list_size,
struct gnma_ip_prefix *prefix_list,
struct gnma_route_attrs *attr_list);
int gnma_dyn_route_list_get(size_t *list_size,
struct gnma_ip_prefix *prefix_list,
struct gnma_route_attrs *attr_list);
int gnma_stp_mode_set(gnma_stp_mode_t mode, struct gnma_stp_attr *attr);
int gnma_stp_mode_get(gnma_stp_mode_t *mode, struct gnma_stp_attr *attr);
@@ -390,23 +458,53 @@ int gnma_stp_ports_enable(uint32_t list_size, struct gnma_port_key *ports_list);
int gnma_stp_instance_set(uint16_t instance, uint16_t prio,
uint32_t list_size, uint16_t *vid_list);
int gnma_stp_vids_enable(uint32_t list_size, uint16_t *vid_list);
int gnma_stp_vids_enable_all(void);
int gnma_stp_vids_set(uint32_t list_size, uint16_t *vid_list, bool enable);
int gnma_stp_vids_set_all(bool enable);
int gnma_stp_vid_set(uint16_t vid, struct gnma_stp_attr *attr);
int gnma_stp_vid_bulk_get(struct gnma_stp_attr *list, ssize_t size);
int gnma_ieee8021x_system_auth_control_set(bool is_enabled);
int gnma_ieee8021x_system_auth_control_get(bool *is_enabled);
int gnma_ieee8021x_system_auth_clients_get(char *buf, size_t buf_size);
int gnma_ieee8021x_das_bounce_port_ignore_set(bool bounce_port_ignore);
int gnma_ieee8021x_das_bounce_port_ignore_get(bool *bounce_port_ignore);
int gnma_ieee8021x_das_disable_port_ignore_set(bool disable_port_ignore);
int gnma_ieee8021x_das_disable_port_ignore_get(bool *disable_port_ignore);
int gnma_ieee8021x_das_ignore_server_key_set(bool ignore_server_key);
int gnma_ieee8021x_das_ignore_server_key_get(bool *ignore_server_key);
int gnma_ieee8021x_das_ignore_session_key_set(bool ignore_session_key);
int gnma_ieee8021x_das_ignore_session_key_get(bool *ignore_session_key);
int gnma_ieee8021x_das_auth_type_key_set(gnma_das_auth_type_t auth_type);
int gnma_ieee8021x_das_auth_type_key_get(gnma_das_auth_type_t *auth_type);
int gnma_ieee8021x_das_dac_hosts_list_get(size_t *list_size,
struct gnma_das_dac_host_key *das_dac_keys_arr);
int gnma_ieee8021x_das_dac_host_add(struct gnma_das_dac_host_key *key,
const char *passkey);
int gnma_ieee8021x_das_dac_host_remove(struct gnma_das_dac_host_key *key);
int
gnma_iee8021x_das_dac_global_stats_get(uint32_t num_of_counters,
gnma_ieee8021x_das_dac_stat_type_t *counter_ids,
uint64_t *counters);
int gnma_radius_hosts_list_get(size_t *list_size,
struct gnma_radius_host_key *hosts_list);
int gnma_radius_host_add(struct gnma_radius_host_key *key, const char *passkey,
uint16_t auth_port, uint8_t prio);
int gnma_radius_host_remove(struct gnma_radius_host_key *key);
int gnma_mac_address_list_get(size_t *list_size, struct gnma_fdb_entry *list);
int gnma_system_password_set(char *password);
int gnma_igmp_snooping_set(uint16_t vid, struct gnma_igmp_snoop_attr *attr);
int gnma_igmp_static_groups_set(uint16_t vid, size_t num_groups,
struct gnma_igmp_static_group_attr *groups);
int gnma_nei_addr_get(struct gnma_port_key *iface, struct in_addr *ip,
char *mac, size_t buf_size);
int gnma_igmp_iface_groups_get(struct gnma_port_key *iface,
char *buf, size_t *buf_size);
struct gnma_change *gnma_change_create(void);
void gnma_change_destory(struct gnma_change *);
int gnma_change_exec(struct gnma_change *);
int gnma_techsupport_start(char *res_path);
int gnma_ip_iface_addr_get(struct gnma_vlan_ip_t *address_list, size_t *list_size);

View File

@@ -0,0 +1,10 @@
all: netlink.a
%.o: %.c
gcc -c -o $@ ${CFLAGS} -I ./ -I/usr/include/libnl3 -lnl-3 -lnl-route-3 $<
netlink.a: netlink_common.o
ar crs $@ $^
netlink.full.a: netlink.a
ar crsT $@ $^

View File

@@ -0,0 +1,220 @@
#include <sys/socket.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <net/if.h>
#include <netlink/netlink.h>
#include <netlink/route/link.h>
#include <netlink/route/route.h>
#include <netlink/route/addr.h>
#include <errno.h>
#include <netlink_common.h>
#define BUFFER_SIZE 4096
#define likely(x) __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)
#define for_each_nlmsg(n, buf, len) \
for (n = (struct nlmsghdr*)buf; \
NLMSG_OK(n, (uint32_t)len) && n->nlmsg_type != NLMSG_DONE; \
n = NLMSG_NEXT(n, len))
#define for_each_rattr(n, buf, len) \
for (n = (struct rtattr*)buf; RTA_OK(n, len); n = RTA_NEXT(n, len))
static int _nl_connect(int *sock)
{
int s;
s = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
if (s == -1)
return -1;
*sock = s;
return 0;
}
static void _nl_disconnect(int sock)
{
close(sock);
}
static int _nl_request_ip_send(int sock)
{
struct sockaddr_nl sa = {.nl_family = AF_NETLINK};
char buf[BUFFER_SIZE];
struct ifaddrmsg *ifa;
struct nlmsghdr *nl;
struct msghdr msg;
struct iovec iov;
int res;
memset(&msg, 0, sizeof(msg));
memset(buf, 0, BUFFER_SIZE);
nl = (struct nlmsghdr*)buf;
nl->nlmsg_len = NLMSG_LENGTH(sizeof(struct ifaddrmsg));
nl->nlmsg_type = RTM_GETADDR;
nl->nlmsg_flags = NLM_F_REQUEST | NLM_F_ROOT;
iov.iov_base = nl;
iov.iov_len = nl->nlmsg_len;
ifa = (struct ifaddrmsg*)NLMSG_DATA(nl);
ifa->ifa_family = AF_INET; /* IPv4 */
msg.msg_name = &sa;
msg.msg_namelen = sizeof(sa);
msg.msg_iov = &iov;
msg.msg_iovlen = 1;
res = sendmsg(sock, &msg, 0);
if (res < 0)
return -1;
return 0;
}
static int _nl_response_get(int sock, void *buf, size_t *len)
{
struct iovec iov = {.iov_base = buf, .iov_len = *len};
struct sockaddr_nl sa = {.nl_family = AF_NETLINK};
struct msghdr msg = {
.msg_name = &sa,
.msg_namelen = sizeof(sa),
.msg_iov = &iov,
.msg_iovlen = 1
};
int res;
res = recvmsg(sock, &msg, 0);
if (res < 0)
return -1;
*len = res;
return 0;
}
static int _nl_iface_addr_parse(uint32_t vid, void *buf, size_t len,
unsigned char prefixlen, struct nl_vid_addr *addr)
{
struct rtattr *rta = NULL;
for_each_rattr(rta, buf, len) {
if (rta->rta_type == IFA_LOCAL) {
memcpy(&addr->address, RTA_DATA(rta), sizeof(addr->address));
addr->vid = vid;
addr->prefixlen = prefixlen;
break;
}
}
return 0;
}
static int _nl_response_addr_parse(void *buf,
size_t len,
struct nl_vid_addr *addr_list,
size_t *list_size)
{
struct ifaddrmsg *iface_addr;
struct nlmsghdr *nl = NULL;
char ifname[IF_NAMESIZE];
size_t num_addrs = 0;
uint32_t vid;
int err = 0;
for_each_nlmsg(nl, buf, len) {
if (nl->nlmsg_type == NLMSG_ERROR)
return -1;
if (nl->nlmsg_type != RTM_NEWADDR) /* only care for addr */
continue;
iface_addr = (struct ifaddrmsg*)NLMSG_DATA(nl);
if (!if_indextoname(iface_addr->ifa_index, ifname))
return -1;
if (sscanf(ifname, "Vlan%u", &vid) != 1)
continue;
if (!addr_list || *list_size == 0) {
num_addrs++;
continue;
}
if (num_addrs > *list_size)
return -EOVERFLOW;
err = _nl_iface_addr_parse(vid, IFA_RTA(iface_addr), IFA_PAYLOAD(nl),
iface_addr->ifa_prefixlen,
&addr_list[num_addrs++]);
if (err)
break;
}
if (num_addrs > *list_size)
err = -EOVERFLOW;
*list_size = num_addrs;
if (err)
return err;
return nl->nlmsg_type == NLMSG_DONE ? -ENODATA : 0;
}
int nl_get_ip_list(struct nl_vid_addr *addr_list, size_t *list_size)
{
size_t buf_len = BUFFER_SIZE, batch_size = 0, num_addrs = 0;
char buf[BUFFER_SIZE];
int sock = 0;
int err;
err = _nl_connect(&sock);
if (err)
return err;
err = _nl_request_ip_send(sock);
if (err)
goto out;
while (1) {
err = _nl_response_get(sock, buf, &buf_len);
if (err)
goto out;
err = _nl_response_addr_parse(buf, buf_len, NULL, &batch_size);
if (err == -ENODATA) {
err = 0;
break;
}
if (err && err != -EOVERFLOW) {
goto out;
}
num_addrs += batch_size;
if (!addr_list || *list_size == 0)
continue;
if (num_addrs > *list_size) {
err = -EOVERFLOW;
break;
}
err = _nl_response_addr_parse(buf, buf_len, &addr_list[num_addrs - batch_size], &batch_size);
if (unlikely(err == -ENODATA)) {
err = 0;
break;
}
if (err)
goto out;
}
if (num_addrs > *list_size)
err = -EOVERFLOW;
*list_size = num_addrs;
out:
_nl_disconnect(sock);
return err;
}

View File

@@ -0,0 +1,12 @@
#ifndef _NETLINK_COMMON
#define _NETLINK_COMMON
struct nl_vid_addr {
uint16_t vid;
uint16_t prefixlen;
uint32_t address;
};
int nl_get_ip_list(struct nl_vid_addr *addr_list, size_t *list_size);
#endif

File diff suppressed because it is too large

View File

@@ -0,0 +1,14 @@
#ifndef _PLAT_REVISION
#define _PLAT_REVISION
#define XSTR(x) STR(x)
#define STR(x) #x
#define PLATFORM_REL_NUM 2.2
#define PLATFORM_BUILD_NUM 5
#ifndef PLATFORM_REVISION
#define PLATFORM_REVISION "Rel " XSTR(PLATFORM_REL_NUM) " build " XSTR(PLATFORM_BUILD_NUM)
#endif
#endif

View File

@@ -1,3 +0,0 @@
list(APPEND PLAT_SOURCES
${CMAKE_CURRENT_LIST_DIR}/plat-ec.c
)

File diff suppressed because it is too large

View File

@@ -2,4 +2,8 @@ plat.a: plat-example.o
ar crs $@ $^
%.o: %.c
ifdef PLATFORM_REVISION
gcc -c -o $@ ${CFLAGS} -I ./ -I ../../include -D PLATFORM_REVISION='"$(PLATFORM_REVISION)"' $^
else
gcc -c -o $@ ${CFLAGS} -I ./ -I ../../include $^
endif

View File

@@ -2,6 +2,7 @@
#include <ucentral-platform.h>
#include <ucentral-log.h>
#include <plat-revision.h>
#define UNUSED_PARAM(param) (void)((param))
@@ -12,7 +13,11 @@ int plat_init(void)
int plat_info_get(struct plat_platform_info *info)
{
UNUSED_PARAM(info);
*info = (struct plat_platform_info){0};
snprintf(info->platform, sizeof info->platform, "%s", "Example Platform" );
snprintf(info->hwsku, sizeof info->hwsku, "%s", "example-platform-sku");
snprintf(info->mac, sizeof info->mac, "%s", "24:fe:9a:0f:48:f0");
return 0;
}
@@ -156,10 +161,45 @@ int plat_port_num_get(uint16_t *num_of_active_ports)
UNUSED_PARAM(num_of_active_ports);
return 0;
}
int plat_revision_get(char *str, size_t str_max_len)
{
snprintf(str, str_max_len, "%s", PLATFORM_REVISION);
return 0;
}
int plat_reboot_cause_get(struct plat_reboot_cause *cause)
{
UNUSED_PARAM(cause);
return 0;
}
int plat_event_subscribe(const struct plat_event_callbacks *cbs)
{
UNUSED_PARAM(cbs);
return 0;
}
void plat_event_unsubscribe(void)
{
return;
}
int plat_running_img_name_get(char *str, size_t str_max_len)
{
UNUSED_PARAM(str_max_len);
UNUSED_PARAM(str);
return 0;
}
int plat_metrics_save(const struct plat_metrics_cfg *cfg)
{
UNUSED_PARAM(cfg);
return 0;
}
int plat_metrics_restore(struct plat_metrics_cfg *cfg)
{
UNUSED_PARAM(cfg);
return 0;
}
int plat_run_script(struct plat_run_script *p)
{
UNUSED_PARAM(p);
return 0;
}

View File

@@ -0,0 +1,14 @@
#ifndef _PLAT_REVISION
#define _PLAT_REVISION
#define XSTR(x) STR(x)
#define STR(x) #x
#define PLATFORM_REL_NUM 3.2.0
#define PLATFORM_BUILD_NUM 5
#ifndef PLATFORM_REVISION
#define PLATFORM_REVISION "Rel " XSTR(PLATFORM_REL_NUM) " build " XSTR(PLATFORM_BUILD_NUM)
#endif
#endif

File diff suppressed because it is too large

View File

@@ -70,12 +70,13 @@ int ucentral_router_fib_key_cmp(const struct ucentral_router_fib_key *a,
return 0;
}
/* bool result, as we have no criteria to sort this */
bool ucentral_router_fib_info_cmp(const struct ucentral_router_fib_info *a,
const struct ucentral_router_fib_info *b)
int ucentral_router_fib_info_cmp(const struct ucentral_router_fib_info *a,
const struct ucentral_router_fib_info *b)
{
if (a->type != b->type)
return false;
if (a->type > b->type)
return 1;
if (a->type < b->type)
return -1;
switch (a->type) {
case UCENTRAL_ROUTE_BLACKHOLE:
@@ -83,24 +84,32 @@ bool ucentral_router_fib_info_cmp(const struct ucentral_router_fib_info *a,
case UCENTRAL_ROUTE_UNREACHABLE:
break;
case UCENTRAL_ROUTE_CONNECTED:
if (a->connected.vid != b->connected.vid)
return false;
if (a->connected.vid > b->connected.vid)
return 1;
if (a->connected.vid < b->connected.vid)
return -1;
break;
case UCENTRAL_ROUTE_BROADCAST:
if (a->broadcast.vid != b->broadcast.vid)
return false;
if (a->broadcast.vid > b->broadcast.vid)
return 1;
if (a->broadcast.vid < b->broadcast.vid)
return -1;
break;
case UCENTRAL_ROUTE_NH:
if (a->nh.vid != b->nh.vid)
return false;
if (a->nh.gw.s_addr != b->nh.gw.s_addr)
return false;
if (a->nh.vid > b->nh.vid)
return 1;
if (a->nh.vid < b->nh.vid)
return -1;
if (a->nh.gw.s_addr > b->nh.gw.s_addr)
return 1;
if (a->nh.gw.s_addr < b->nh.gw.s_addr)
return -1;
break;
default:
break;
}
return true;
return 0;
}
static int __fib_node_key_cmp_cb(const void *a, const void *b)

View File

@@ -22,17 +22,15 @@
#include <cjson/cJSON.h>
#include "ucentral.h"
#include "ucentral-json-parser.h"
/* WA for parser issue */
/* #include "ucentral-json-parser.h" */
#include <openssl/conf.h>
#include <openssl/err.h>
#include <openssl/pem.h>
#include <openssl/x509v3.h>
#ifdef PLAT_EC
#include "api_device.h"
#include "api_session.h"
#endif
#include "est-client.h"
struct per_vhost_data__minimal {
struct lws_context *context;
@@ -52,7 +50,6 @@ time_t conn_time;
static int conn_successfull;
struct plat_metrics_cfg ucentral_metrics;
static struct uc_json_parser parser;
static int interrupted;
static pthread_t sigthread;
@@ -70,13 +67,10 @@ lws_protocols protocols[] = {
};
struct client_config client = {
#ifdef PLAT_EC
.redirector_file = "/etc/ucentral/redirector.json",
.redirector_file_dbg = "/etc/ucentral/firstcontact.hdr",
#else
.redirector_file = "/tmp/ucentral-redirector.json",
.redirector_file_dbg = "/tmp/firstcontact.hdr",
#endif
.ols_schema_version_file = "/etc/schema.json",
.ols_client_version_file = "/etc/version.json",
.server = NULL,
.port = 15002,
.path = "/",
@@ -84,6 +78,10 @@ struct client_config client = {
.CN = {0},
.firmware = {0},
.devid = {0},
/* PKI 2.0 defaults */
.ca = NULL, /* Will be set to operational.ca or cas.pem */
.cert = NULL, /* Will be set to operational.pem or cert.pem */
.hostname_validate = 0,
};
static const char file_cert[] = UCENTRAL_CONFIG "cert.pem";
@@ -205,7 +203,11 @@ int ssl_cert_get_common_name(char *cn, size_t size, const char *cert_path)
return 0;
}
static int
/*
* LEGACY: DigiCert redirector parsing - deprecated with PKI 2.0
* Kept for reference during transition period. To be removed in future release.
*/
static int __attribute__((unused))
ucentral_redirector_parse(char **gw_host)
{
size_t json_data_size = 0;
@@ -349,6 +351,7 @@ sul_connect_attempt(struct lws_sorted_usec_list *sul)
UC_LOG_DBG("Connected\n");
}
/* WA for parser issue
static void parse_cb(cJSON *j, void *data)
{
(void)data;
@@ -360,6 +363,7 @@ static void parse_error_cb(void *data)
(void)data;
UC_LOG_ERR("JSON config parse failed");
}
*/
static const char *redirector_host_get(void)
{
@@ -384,12 +388,8 @@ static int gateway_cert_trust(void)
static int redirector_cert_trust(void)
{
#ifdef PLAT_EC
return 1;
#else
char *v = getenv("UC_REDIRECTOR_CERT_TRUST");
return v && *v && strcmp("0", v);
#endif
}
static int
@@ -438,12 +438,15 @@ callback_broker(struct lws *wsi, enum lws_callback_reasons reason,
websocket = wsi;
connect_send();
conn_successfull = 1;
uc_json_parser_init(&parser, parse_cb, parse_error_cb, 0);
/* WA for parser issue */
/* uc_json_parser_init(&parser, parse_cb, parse_error_cb, 0); */
lws_callback_on_writable(websocket);
break;
case LWS_CALLBACK_CLIENT_RECEIVE:
uc_json_parser_feed(&parser, in, len);
/* WA for parser issue */
/* uc_json_parser_feed(&parser, in, len); */
proto_handle((char *)in);
break;
case LWS_CALLBACK_CLIENT_CONNECTION_ERROR:
@@ -457,7 +460,8 @@ callback_broker(struct lws *wsi, enum lws_callback_reasons reason,
/* fall through */
case LWS_CALLBACK_CLIENT_CLOSED:
UC_LOG_INFO("connection closed\n");
uc_json_parser_uninit(&parser);
/* WA for parser issue */
/* uc_json_parser_uninit(&parser); */
websocket = NULL;
set_conn_time();
vhd->client_wsi = NULL;
@@ -514,8 +518,10 @@ static int client_config_read(void)
const char *file_devid = UCENTRAL_CONFIG "dev-id";
/* UGLY W/A for now: get MAC from cert's CN */
if (ssl_cert_get_common_name(client.CN, 63, file_cert)) {
UC_LOG_ERR("CN read from cert failed");
/* PKI 2.0: Extract CN from operational or birth certificate */
const char *cert_for_cn = client.cert ? client.cert : file_cert;
if (ssl_cert_get_common_name(client.CN, 63, cert_for_cn)) {
UC_LOG_ERR("CN read from cert failed (%s)\n", cert_for_cn);
return -1;
}
client.serial = &client.CN[10];
@@ -554,7 +560,12 @@ static int client_config_read(void)
return 0;
}
static int firstcontact(void)
/*
* LEGACY: DigiCert first contact flow - deprecated with PKI 2.0
* Kept for reference during transition period. To be removed in future release.
*/
static int __attribute__((unused))
firstcontact(void)
{
const char *redirector_host = redirector_host_get();
FILE *fp_json;
@@ -697,6 +708,143 @@ static void sigthread_create(void)
}
}
static int get_updated_pass(char *pass, size_t *len) {
char *passwd_file_path = "/var/lib/ucentral/admin-cred.buf";
size_t password_size;
int passwd_fd = -1;
char password[64];
if (access(passwd_file_path, F_OK))
goto out;
passwd_fd = open(passwd_file_path, O_RDONLY);
if (passwd_fd < 0) {
UC_LOG_ERR("Failed to open %s", passwd_file_path);
goto out;
}
memset(&password, 0, sizeof(password));
password_size = read(passwd_fd, &password, sizeof(password));
if (password_size == sizeof(password)) {
UC_LOG_ERR("%s is too big", passwd_file_path);
goto out_close;
}
if (!password_size) {
UC_LOG_ERR("failed to read %s", passwd_file_path);
goto out_close;
}
if (*len < password_size) {
UC_LOG_ERR("out buffer is too small (%zu < %zu)",
*len, password_size);
goto out_close;
}
/* remove password from buffer */
close(passwd_fd);
passwd_fd = -1;
if (remove(passwd_file_path)) {
UC_LOG_ERR("Failed to remove %s", passwd_file_path);
goto out;
}
strncpy(pass, password, password_size);
*len = password_size;
return 0;
out_close:
close(passwd_fd);
out:
return -1;
}
/**
* PKI 2.0: Check for operational certificate and enroll if needed
* Follows TIP/OpenWifi proven pattern from wlan-ap
*
* Returns: 0 on success (operational cert available), -1 on failure
*/
static int pki2_check_and_enroll(void)
{
const char *operational_cert = UCENTRAL_CONFIG "operational.pem";
const char *operational_ca = UCENTRAL_CONFIG "operational.ca";
const char *birth_cert = UCENTRAL_CONFIG "cert.pem";
const char *birth_key = UCENTRAL_CONFIG "key.pem";
const char *birth_ca = UCENTRAL_CONFIG "cas.pem"; /* or insta.pem */
struct stat st;
int ret;
/* Check if operational certificate already exists */
if (stat(operational_cert, &st) == 0) {
UC_LOG_INFO("PKI 2.0: Operational certificate found, using it\n");
client.cert = operational_cert;
client.ca = operational_ca;
return 0;
}
/* No operational cert - check if we have birth certificate */
if (stat(birth_cert, &st) != 0) {
UC_LOG_ERR("PKI 2.0: No birth certificate found at %s\n", birth_cert);
return -1;
}
UC_LOG_INFO("PKI 2.0: No operational certificate, attempting EST enrollment\n");
/* Get EST server URL (auto-detects from certificate issuer) */
const char *est_server = est_get_server_url(birth_cert);
if (!est_server) {
UC_LOG_ERR("PKI 2.0: Failed to determine EST server\n");
/* Fall back to using birth certificate */
client.cert = birth_cert;
client.ca = birth_ca;
return -1;
}
UC_LOG_INFO("PKI 2.0: EST server: %s\n", est_server);
/* Perform EST simple enrollment */
char *enrolled_cert = NULL;
ret = est_simple_enroll(est_server, birth_cert, birth_key, birth_ca, &enrolled_cert);
if (ret != EST_SUCCESS) {
UC_LOG_ERR("PKI 2.0: EST enrollment failed: %s\n", est_get_error());
/* Fall back to using birth certificate */
client.cert = birth_cert;
client.ca = birth_ca;
return -1;
}
UC_LOG_INFO("PKI 2.0: EST enrollment successful\n");
/* Save operational certificate */
ret = est_save_cert(enrolled_cert, strlen(enrolled_cert), operational_cert);
free(enrolled_cert);
if (ret != EST_SUCCESS) {
UC_LOG_ERR("PKI 2.0: Failed to save operational certificate: %s\n", est_get_error());
client.cert = birth_cert;
client.ca = birth_ca;
return -1;
}
/* Get operational CA certificates */
char *ca_certs = NULL;
ret = est_get_cacerts(est_server, operational_cert, birth_key, birth_ca, &ca_certs);
if (ret == EST_SUCCESS) {
est_save_cert(ca_certs, strlen(ca_certs), operational_ca);
free(ca_certs);
UC_LOG_INFO("PKI 2.0: Operational CA certificates saved\n");
} else {
UC_LOG_INFO("PKI 2.0: Failed to get CA certs, using birth CA\n");
}
/* Use newly enrolled operational certificate */
client.cert = operational_cert;
client.ca = operational_ca;
UC_LOG_INFO("PKI 2.0: Successfully enrolled and saved operational certificate\n");
return 0;
}
int main(void)
{
int logs = LLL_USER | LLL_ERR | LLL_WARN | LLL_NOTICE | LLL_CLIENT;
@@ -707,12 +855,9 @@ int main(void)
struct lws_context_creation_info info = {0};
bool reboot_reason_sent = false;
char *gw_host = NULL;
size_t password_len;
char password[64];
struct stat st;
int ret;
#ifdef PLAT_EC
sleep(50); // wait for system ready
#endif
sigthread_create(); /* move signal handling to a dedicated thread */
@@ -731,17 +876,6 @@ int main(void)
uc_log_severity_set(UC_LOG_COMPONENT_CLIENT, UC_LOG_SV_ERR);
uc_log_severity_set(UC_LOG_COMPONENT_PLAT, UC_LOG_SV_ERR);
#ifdef PLAT_EC
int status = session_start();
if (status == STATUS_SUCCESS) {
UC_LOG_INFO("Successfully connected to SNMP!\n");
} else {
UC_LOG_INFO("Could not connect to SNMP!\n");
exit(EXIT_FAILURE);;
}
#endif
if (client_config_read()) {
UC_LOG_CRIT("client_config_read failed");
exit(EXIT_FAILURE);
@@ -751,68 +885,60 @@ int main(void)
UC_LOG_CRIT("Platform initialization failed");
}
plat_running_img_name_get(client.firmware, sizeof(client.firmware));
plat_revision_get(client.firmware, sizeof(client.firmware));
#ifdef PLAT_EC
FILE *f = fopen(REDIRECTOR_USER_DEFINE_FILE, "r");
/* PKI 2.0: Check for operational certificate and enroll if needed */
if (pki2_check_and_enroll() != 0) {
UC_LOG_INFO("PKI 2.0: Enrollment failed, using birth certificate as fallback\n");
}
if (f) {
size_t cnt;
char redirector_url[256];
memset(redirector_url, 0, sizeof(redirector_url));
cnt = fread(redirector_url, 1, sizeof(redirector_url), f);
fclose(f);
client.server = redirector_url;
} else {
ret = ucentral_redirector_parse(&gw_host);
if (ret) {
/* parse failed by present redirector file, try to get redirector file from digicert */
#else
/* Get gateway address from environment or use default */
if ((gw_host = getenv("UC_GATEWAY_ADDRESS"))) {
gw_host = strdup(gw_host);
} else {
#endif
while (1) {
if (uc_loop_interrupted_get())
char *colon_pos;
/* Parse host:port format */
colon_pos = strrchr(gw_host, ':');
if (colon_pos && colon_pos != gw_host) {
/* Found colon - split into host and port */
size_t host_len = colon_pos - gw_host;
int env_port;
client.server = strndup(gw_host, host_len);
env_port = atoi(colon_pos + 1);
if (env_port == 0) {
UC_LOG_ERR("Invalid port in UC_GATEWAY_ADDRESS: %s\n", gw_host);
goto exit;
if (firstcontact()) {
UC_LOG_INFO(
"Firstcontact failed; trying again in 30 second...\n");
#ifdef PLAT_EC
sleep(30);
#else
sleep(1);
#endif
continue;
}
break;
}
/* Workaround for now: if parse failed, use default one */
ret = ucentral_redirector_parse(&gw_host);
if (ret) {
UC_LOG_ERR("Firstcontact json data parse failed: %d\n",
ret);
/* Only use port from environment if not already set via command line */
if (client.port == 0 || client.port == 15002) { /* 15002 is the default */
client.port = env_port;
}
UC_LOG_INFO("Using gateway from environment: %s:%u\n", client.server, client.port);
} else {
client.server = gw_host;
/* No colon found - assume just hostname */
client.server = strdup(gw_host);
UC_LOG_INFO("Using gateway from environment: %s (using port %u)\n",
client.server, client.port);
}
#ifdef PLAT_EC
} else {
client.server = gw_host;
}
#endif
} else {
UC_LOG_ERR("No gateway address configured. Set UC_GATEWAY_ADDRESS environment variable.\n");
/* TODO: Could add discovery service support here if needed */
goto exit;
}
memset(&info, 0, sizeof info);
info.port = CONTEXT_PORT_NO_LISTEN;
info.options = LWS_SERVER_OPTION_DO_SSL_GLOBAL_INIT;
info.client_ssl_cert_filepath = UCENTRAL_CONFIG"cert.pem";
/* Use PKI 2.0 certificates (operational or birth fallback) */
info.client_ssl_cert_filepath = client.cert ? client.cert : UCENTRAL_CONFIG"cert.pem";
if (!stat(UCENTRAL_CONFIG"key.pem", &st))
info.client_ssl_private_key_filepath = UCENTRAL_CONFIG"key.pem";
info.ssl_ca_filepath = UCENTRAL_CONFIG"cas.pem";
info.ssl_ca_filepath = client.ca ? client.ca : UCENTRAL_CONFIG"cas.pem";
UC_LOG_INFO("PKI 2.0: Using certificate: %s\n", info.client_ssl_cert_filepath);
UC_LOG_INFO("PKI 2.0: Using CA: %s\n", info.ssl_ca_filepath);
info.protocols = protocols;
info.fd_limit_per_thread = 1 + 1 + 1;
info.connect_timeout_secs = 30;
@@ -825,13 +951,20 @@ int main(void)
}
sigthread_context_set(context);
password_len = sizeof(password);
if (get_updated_pass(password, &password_len))
password_len = 0;
proto_start();
while (!uc_loop_interrupted_get()) {
lws_service_tsi(context, 0, 0);
if (conn_successfull) {
deviceupdate_send();
if (password_len) {
deviceupdate_send(password);
password_len = 0;
}
if (!reboot_reason_sent) {
device_rebootcause_send();
reboot_reason_sent = true;
@@ -848,9 +981,5 @@ exit:
free(gw_host);
curl_global_cleanup();
#ifdef PLAT_EC
session_close();
clean_stats();
#endif
return 0;
}

View File

@@ -175,12 +175,7 @@ void uc_json_parser_init(struct uc_json_parser *uctx, uc_json_parse_cb cb,
void uc_json_parser_uninit(struct uc_json_parser *uctx)
{
/* The function lejp_destruct() cause segmentation fault on EC platform, comment out this line when building EC platform.
* The function lejp_destruct() describes "no allocations... just let callback know what it happening".
*/
#ifndef PLAT_EC
lejp_destruct(&uctx->ctx);
#endif
free(uctx->str);
cJSON_Delete(uctx->root);
*uctx = (struct uc_json_parser){ 0 };

View File

@@ -32,10 +32,6 @@ extern "C" {
#define UCENTRAL_TMP "/tmp/ucentral.cfg"
#define UCENTRAL_LATEST "/etc/ucentral/ucentral.active"
#ifdef PLAT_EC
#define REDIRECTOR_USER_DEFINE_FILE "/etc/ucentral/redirector-user-defined"
#endif
/* It's expected that dev-id format is the following:
* 11111111-1111-1111-1111-111111111111
* and the max size of such string is 36 symbols.
@@ -45,6 +41,8 @@ extern "C" {
struct client_config {
const char *redirector_file;
const char *redirector_file_dbg;
const char *ols_client_version_file;
const char *ols_schema_version_file;
const char *server;
int16_t port;
const char *path;
@@ -54,6 +52,10 @@ struct client_config {
char devid[UCENTRAL_DEVID_F_MAX_LEN + 1];
int selfsigned;
int debug;
/* PKI 2.0 certificate configuration */
const char *ca; /* CA certificate path (default: operational.ca) */
const char *cert; /* Client certificate path (default: operational.pem) */
int hostname_validate; /* Enable hostname validation */
};
typedef void (*uc_send_msg_cb)(const char *msg, size_t len);
@@ -64,14 +66,14 @@ extern time_t conn_time;
extern struct plat_metrics_cfg ucentral_metrics;
/* proto.c */
void proto_handle(cJSON *cmd);
void proto_handle(char *cmd);
void proto_cb_register_uc_send_msg(uc_send_msg_cb cb);
void proto_cb_register_uc_connect_msg_send(uc_send_connect_msg_cb cb);
void connect_send(void);
void ping_send(void);
void health_send(struct plat_health_info *);
void state_send(struct plat_state_info *plat_state_info);
void deviceupdate_send(void);
void deviceupdate_send(const char *updated_pass);
void device_rebootcause_send(void);
void telemetry_send(struct plat_state_info *plat_state_info);
void log_send(const char *message, int severity);

View File

@@ -0,0 +1,580 @@
# Adding New Platform to Test Framework
This document explains how to add support for a new platform to the configuration testing framework.
## Overview
The test framework supports two modes:
- **Stub mode**: Fast testing with simple stubs (no platform code)
- **Platform mode**: Integration testing with real platform implementation
To add a new platform, you need to:
1. Implement the platform in `src/ucentral-client/platform/your-platform/`
2. Create platform mocks in `tests/config-parser/platform-mocks/your-platform.c`
3. Test the integration
## What Gets Tested vs Mocked
### ✅ Your Platform Code is TESTED
When you add platform support, these parts of your code are **fully tested**:
- `plat_config_apply()` - Main configuration application function
- All `config_*_apply()` functions - VLAN, port, STP, etc.
- Configuration parsing and validation logic
- Business rules and error handling
- All the code in `platform/your-platform/*.c`
### ❌ Only Hardware Interface is MOCKED
Only the lowest-level hardware abstraction layer is mocked:
- gNMI/gNOI calls (for SONiC-based platforms)
- REST API calls (if your platform uses REST)
- System calls (ioctl, sysfs, etc.)
- Hardware driver calls
**Important**: You're testing real platform code, just not sending commands to actual hardware.
## Prerequisites
Your platform must:
- Implement `plat_config_apply()` in `platform/your-platform/`
- Build a `plat.a` static library
- Follow the platform interface defined in `include/ucentral-platform.h`
## Step-by-Step Guide
### Step 1: Verify Platform Implementation
Ensure your platform builds successfully:
```bash
cd src/ucentral-client/platform/your-platform
make clean
make
ls -la plat.a # Should exist
```
### Step 2: Create Platform Mock File
Create a mock file for your platform's hardware abstraction layer:
```bash
cd tests/config-parser/platform-mocks
cp example-platform.c your-platform.c
```
Edit `your-platform.c` and add mock implementations for your platform's HAL functions.
**Strategy:**
1. Start with empty file (just includes)
2. Try to build: `make clean && make test-config-parser USE_PLATFORM=your-platform`
3. For each "undefined reference" error, add a mock function
4. Repeat until it links successfully
**Example mock function:**
```c
/* Mock your platform's port configuration function */
int your_platform_port_set(int port_id, int speed, int duplex)
{
fprintf(stderr, "[MOCK:your-platform] port_set(port=%d, speed=%d, duplex=%d)\n",
port_id, speed, duplex);
return 0; /* Success */
}
```
### Step 3: Test Your Platform Integration
```bash
cd tests/config-parser
# Try to build
make clean
make test-config-parser USE_PLATFORM=your-platform
# If build fails, check for:
# - Missing mock functions (add to your-platform.c)
# - Missing includes (add to Makefile PLAT_INCLUDES)
# - Missing libraries (add to Makefile PLAT_LDFLAGS)
# When build succeeds, run tests
make test-config-full USE_PLATFORM=your-platform
```
### Step 4: Verify Test Results
Your tests should:
- ✓ Parse configurations successfully
- ✓ Call `plat_config_apply()` from your platform
- ✓ Exercise your platform's configuration logic
- ✓ Report apply success/failure correctly
Check test output for:
```
[TEST] cfg0.json
✓ SCHEMA: Valid
✓ PARSER: Success
✓ APPLY: Success
```
### Step 5: Add Platform-Specific Test Configurations
Create test configurations that exercise your platform's specific features:
```bash
cd config-samples
vim cfg_your_platform_feature.json
```
Run tests to verify:
```bash
cd tests/config-parser
make test-config-full USE_PLATFORM=your-platform
```
## Platform Mock Guidelines
### What to Mock
Mock your platform's **hardware abstraction layer** (HAL) functions - the lowest-level functions that would normally talk to hardware or external services.
**Examples:**
- gNMI/gNOI calls (brcm-sonic platform)
- REST API calls (if your platform uses REST)
- System calls (ioctl, sysfs access, etc.)
- Hardware driver calls
### What NOT to Mock
Don't mock your platform's **configuration logic** - that's what we're testing!
**Don't mock:**
- `plat_config_apply()` - This is the function under test
- `config_vlan_apply()`, `config_port_apply()`, etc. - Platform logic
- Validation functions - These should run normally
### Mock Strategies
#### Strategy 1: Success Stubs (Simple)
Return success for all operations, no state tracking.
```c
int platform_hal_function(...)
{
fprintf(stderr, "[MOCK] platform_hal_function(...)\n");
return 0; /* Success */
}
```
**Pros:** Simple, fast to implement
**Cons:** Doesn't catch validation errors in platform code
**Use when:** Getting started, CI/CD
#### Strategy 2: Validation Mocks (Moderate)
Check parameters, return errors for invalid inputs.
```c
int platform_hal_port_set(int port_id, int speed)
{
fprintf(stderr, "[MOCK] platform_hal_port_set(port=%d, speed=%d)\n",
port_id, speed);
/* Validate parameters */
if (port_id < 0 || port_id >= MAX_PORTS) {
fprintf(stderr, "[MOCK] ERROR: Invalid port ID\n");
return -1; /* Error */
}
if (speed != 100 && speed != 1000 && speed != 10000) {
fprintf(stderr, "[MOCK] ERROR: Invalid speed\n");
return -1; /* Error */
}
return 0; /* Success */
}
```
**Pros:** Catches some validation bugs
**Cons:** More code to maintain
**Use when:** Pre-release testing, debugging
#### Strategy 3: Stateful Mocks (Complex)
Track configuration state, simulate hardware behavior.
```c
/* Mock hardware state */
static struct {
	int port_speed[MAX_PORTS];
	bool port_enabled[MAX_PORTS];
} mock_hw_state = {0};

int platform_hal_port_set(int port_id, int speed)
{
	fprintf(stderr, "[MOCK] platform_hal_port_set(port=%d, speed=%d)\n",
		port_id, speed);

	/* Validate */
	if (port_id < 0 || port_id >= MAX_PORTS)
		return -1;

	/* Update mock state */
	mock_hw_state.port_speed[port_id] = speed;
	return 0;
}

int platform_hal_port_get(int port_id, int *speed)
{
	if (port_id < 0 || port_id >= MAX_PORTS)
		return -1;

	/* Return mock state */
	*speed = mock_hw_state.port_speed[port_id];
	return 0;
}
```
**Pros:** Full platform behavior simulation
**Cons:** Significant effort, complex maintenance
**Use when:** Comprehensive integration testing
## Troubleshooting
### Build Errors
**Problem:** `undefined reference to 'some_function'`
**Solution:** Add mock implementation to `platform-mocks/your-platform.c`
**Problem:** `fatal error: your_platform.h: No such file or directory`
**Solution:** Add include path to Makefile `PLAT_INCLUDES`
If your platform has additional include directories, update the Makefile:
```makefile
# In tests/config-parser/Makefile, update the platform mode section:
PLAT_INCLUDES = -I $(PLAT_DIR) -I $(PLAT_DIR)/your_extra_dir
```
**Problem:** `undefined reference to 'pthread_create'` (or other library)
**Solution:** Add library to Makefile `PLAT_LDFLAGS`
If your platform needs additional libraries, update the Makefile:
```makefile
# In tests/config-parser/Makefile, update the platform mode section:
PLAT_LDFLAGS = -lgrpc++ -lprotobuf -lyour_library
```
### Runtime Errors
**Problem:** Segmentation fault in platform code
**Solution:** Ensure mock functions that return pointers return valid memory, not NULL
**Problem:** Tests always pass even with bad configurations
**Solution:** Your mocks might be too simple - add validation logic
**Problem:** Tests fail but should pass
**Solution:** Check if mock functions are returning correct values
## Example: Adding Your Platform
Let's walk through adding support for your platform (we'll use "myvendor" as an example):
### 1. Check platform exists
```bash
$ ls src/ucentral-client/platform/
brcm-sonic/ example-platform/ myvendor/
```
Platform exists. Good!
### 2. Create mock file
```bash
$ cd tests/config-parser/platform-mocks
$ touch myvendor.c
```
### 3. Try building (will fail with undefined references)
```bash
$ cd ..
$ make clean
$ make test-config-parser USE_PLATFORM=myvendor
...
undefined reference to `myvendor_port_config_set'
undefined reference to `myvendor_vlan_create'
...
```
### 4. Add mocks iteratively
Edit `platform-mocks/myvendor.c`:
```c
/* platform-mocks/myvendor.c */
#include <stdio.h>

int myvendor_port_config_set(int port, int speed, int duplex)
{
	fprintf(stderr, "[MOCK:myvendor] port_config_set(%d, %d, %d)\n",
		port, speed, duplex);
	return 0;
}

int myvendor_vlan_create(int vlan_id)
{
	fprintf(stderr, "[MOCK:myvendor] vlan_create(%d)\n", vlan_id);
	return 0;
}

/* Add more as linker reports undefined references */
```
### 5. Build until successful
```bash
$ make clean
$ make test-config-parser USE_PLATFORM=myvendor
# Add more mocks for any remaining undefined references
# Repeat until build succeeds
```
### 6. Run tests
```bash
$ make test-config-full USE_PLATFORM=myvendor
========= running schema validation =========
[SCHEMA] cfg0.json: VALID
...
========= running config parser tests =========
Mode: platform
Platform: myvendor
================================================
[TEST] cfg0.json
[MOCK:myvendor] port_config_set(1, 1000, 1)
[MOCK:myvendor] vlan_create(100)
✓ SCHEMA: Valid
✓ PARSER: Success
✓ APPLY: Success
...
Total tests: 25
Passed: 25
Failed: 0
```
Success! Your platform is now integrated into the test framework.
## Property Database
### Overview
The testing framework uses property databases to track where configuration properties are parsed in the codebase. This enables detailed test reports showing the exact source location (file, function, line number) for each property.
Property tracking uses **separate databases**:
- **Base database** (`property-database-base.c`): Tracks properties parsed in proto.c
- **Platform database** (`property-database-platform-PLATFORM.c`): Tracks platform-specific properties
### Database Architecture
**Stub mode (no platform):**
- Only uses `base_property_database` from property-database-base.c
- Shows proto.c source locations in test reports
**Platform mode (with USE_PLATFORM):**
- Uses both `base_property_database` AND `platform_property_database_PLATFORM`
- Shows both proto.c and platform source locations in test reports
### When to Regenerate Property Databases
**Regenerate base database if you:**
- Modified proto.c parsing code
- Added vendor-specific #ifdef code to proto.c
- Made changes that shifted line numbers significantly
- Want updated line numbers in test reports
**Regenerate platform database if you:**
- Modified platform parsing code (plat-*.c)
- Added new configuration features to the platform
- Want updated line numbers in test reports
### How to Regenerate Databases
```bash
# Regenerate base database (if you modified proto.c)
cd tests/config-parser
make regenerate-property-db
# Regenerate platform database (requires USE_PLATFORM)
make regenerate-platform-property-db USE_PLATFORM=myvendor
```
**What happens:**
- Base: Extracts properties from proto.c and finds line numbers
- Platform: Extracts properties from platform/myvendor/plat-*.c and finds line numbers
- Generated database files are C arrays included by test-config-parser.c
### Test Reports with Property Tracking
**Stub mode shows only proto.c properties:**
```
✓ Successfully Configured: 3 properties
- ethernet[0].speed = 1000 [proto.c:cfg_ethernet_parse():line 1119]
- ethernet[0].duplex = "full" [proto.c:cfg_ethernet_parse():line 1120]
- ethernet[0].enabled = false [proto.c:cfg_ethernet_parse():line 1119]
```
**Platform mode shows both proto.c AND platform properties:**
```
✓ Successfully Configured: 2 properties (proto.c)
- ethernet[0].speed = 1000 [proto.c:cfg_ethernet_parse():line 1119]
- ethernet[0].duplex = "full" [proto.c:cfg_ethernet_parse():line 1120]
✓ Platform Applied: 2 properties (platform/myvendor)
- ethernet[0].lldp.enabled [platform/myvendor/plat-config.c:config_lldp_apply():line 1234]
- ethernet[0].lacp.mode [platform/myvendor/plat-config.c:config_lacp_apply():line 567]
```
### Property Database Template Structure
Property databases are C arrays with this structure:
```c
static const struct property_metadata base_property_database[] = {
	{"ethernet[].speed",  PROP_CONFIGURED, "proto.c", "cfg_ethernet_parse", 1119, ""},
	{"ethernet[].duplex", PROP_CONFIGURED, "proto.c", "cfg_ethernet_parse", 1120, ""},
	/* ... more entries ... */
	{NULL, 0, NULL, NULL, 0, NULL} /* Terminator */
};
```
Each entry maps:
- JSON property path → Source file → Parser function → Line number
### Vendor Modifications to proto.c
**Important for vendors:** If your platform adds #ifdef code to proto.c:
```c
/* In proto.c - vendor-specific code */
#ifdef VENDOR_MYVENDOR
/* Parse vendor-specific property */
if (cJSON_HasObjectItem(obj, "my-vendor-feature")) {
	cfg->vendor_feature = cJSON_GetNumber...
}
#endif
```
You should regenerate the base property database in your repository:
```bash
cd tests/config-parser
make regenerate-property-db
```
This captures your proto.c modifications in the base database, ensuring accurate test reports showing your vendor-specific parsing code.
## Testing Workflow
### Development Workflow (Fast Iteration)
```bash
# Use stub mode for quick testing during development
make test-config-full
# Fast, no platform dependencies
# Edit code → test → repeat
```
### Pre-Release Workflow (Comprehensive Testing)
```bash
# Use platform mode for integration testing before release
make test-config-full USE_PLATFORM=your-platform
# Slower, but exercises real platform code
# Catches platform-specific bugs
```
### CI/CD Workflow (Automated)
```yaml
# .gitlab-ci.yml or similar
test-parser:
  script:
    - cd tests/config-parser
    - make test-config-full  # Stub mode for speed

test-platform:
  script:
    - cd tests/config-parser
    - make test-config-full USE_PLATFORM=brcm-sonic
  allow_failure: true  # Platform mode may need special environment
```
## Advanced: Platform-Specific Makefile Configuration
If your platform needs special build configuration, you can add conditional logic to the Makefile:
```makefile
# In tests/config-parser/Makefile, in the platform mode section:
ifeq ($(PLATFORM_NAME),your-platform)
# Add your-platform specific includes
PLAT_INCLUDES += -I $(PLAT_DIR)/special_dir
# Add your-platform specific libraries
PLAT_LDFLAGS += -lyour_special_lib
endif
```
## Summary Checklist
Before considering your platform integration complete:
- [ ] Platform implementation exists in `platform/your-platform/`
- [ ] Platform builds successfully and produces `plat.a`
- [ ] Created `platform-mocks/your-platform.c` with mock HAL functions
- [ ] Build succeeds: `make test-config-parser USE_PLATFORM=your-platform`
- [ ] Tests run: `make test-config-full USE_PLATFORM=your-platform`
- [ ] All existing test configs pass (or expected failures documented)
- [ ] Generated platform property database: `make regenerate-platform-property-db USE_PLATFORM=your-platform`
- [ ] Test reports show platform source locations (file, function, line number)
- [ ] If modified proto.c with #ifdef, regenerated base database: `make regenerate-property-db`
- [ ] Added platform-specific test configurations (optional)
- [ ] Documented platform-specific requirements (optional)
- [ ] Tested in Docker environment (if applicable)
## Getting Help
- **Build issues**: Check Makefile configuration and include paths
- **Link issues**: Add missing mock functions to platform mock file
- **Runtime issues**: Check mock function return values and pointers
- **Test failures**: Verify platform code validation logic
For more help, see:
- `tests/config-parser/TEST_CONFIG_README.md` - Test framework documentation
- `TESTING_FRAMEWORK.md` - Overall testing architecture
- `platform/example-platform/` - Platform implementation template
- `tests/config-parser/platform-mocks/README.md` - Mock implementation guide
## FAQ
**Q: Why do I need to create mocks? Can't the test just call the platform code directly?**
A: The platform code calls hardware abstraction functions (like gNMI APIs). Those functions need hardware or external services. Mocks let us test the platform logic without requiring actual hardware.
**Q: How do I know which functions to mock?**
A: Try building - the linker will tell you with "undefined reference" errors. Mock those functions.
**Q: Can I reuse mocks from another platform?**
A: Not usually - each platform has its own HAL. But you can use another platform's mock file as a reference for structure.
**Q: Do I need to mock every function perfectly?**
A: No! Start with simple success stubs (return 0). Add validation later if needed.
**Q: My platform doesn't use gNMI. Can I still add it?**
A: Yes! Mock whatever your platform uses (REST API, system calls, etc.). The same principles apply.
**Q: The tests pass but I want more validation. What should I do?**
A: Add validation logic to your mock functions (Strategy 2 or 3 above). Check parameters and return errors for invalid inputs.
**Q: Can I test multiple platforms in the same test run?**
A: Not currently. Run a separate test command for each platform: `make test-config-full USE_PLATFORM=platform1`, then `make test-config-full USE_PLATFORM=platform2`.
