Validation

Run-time integration tests

The run-time integration tests are a mechanism for validating the core functionality of Arm Auto Solutions.

The tests are run on the image using the OEQA test framework. Refer to OEQA FVP for more information on this framework.

This section details the structure, implementation and debugging of the tests.

OEQA tests in the BSP

The Processing Elements and Components tested by the framework are detailed below. The testing scripts can be found in yocto/meta-arm-bsp-extras/lib/oeqa/runtime/cases and meta-arm/lib/oeqa/runtime/cases/.

  • test_00_aspen_boot
    • test_safety_island_c0

      This validates that the CMN has been configured, the handshake from the RSE has been received and that the SCP-firmware module initialization has completed successfully.

    • test_uboot_boot

      This method monitors the console output for the expected U-Boot message within a defined timeout period, ensuring the U-Boot bootloader has initialized successfully.

  • test_00_rse
    • test_normal_boot

      This validates that the SI CL0 is released out of reset and the handshake from the SCP-firmware has been received for CSS-Aspen.

    • test_measured_boot

      This validates enhanced trustworthiness provided by measured boot functionality by reading the slot and sw_type from the boot logs.

  • Primary Compute
    • FVP devices

      The entry point to these tests is meta-arm/lib/oeqa/runtime/cases/fvp_devices.py. To find out more about the applicable tests, see FVP device tests.

    • FVP boot

      The script that implements the test is meta-arm/lib/oeqa/runtime/cases/fvp_boot.py. The test waits for Linux to boot on the Primary Compute then checks for common error patterns on all consoles.

    • test_20_aspen_ap_dsu
      • test_dsu_cluster

        This validates that the AP’s DSU-120AE has been configured correctly by checking the L3 cache size, shared CPU list and the DSU-120AE PMU counters.

    • test_01_systemd_boot
      • test_systemd_boot_message

        This test ensures that the RD-Aspen platform is using the UEFI boot manager, systemd-boot. It verifies that the boot message contains the string ‘Boot in’ to confirm systemd-boot is being used.

    • test_30_configurable_pc_cores
      • test_configured_pc_cpus_in_tf_a

        This validates that the TF-A correctly brings up the configured number of Primary Compute CPUs.

      • test_configured_pc_cpus_in_linux

        This validates that the configured number of Primary Compute CPUs is visible in Linux by checking the number of CPUs listed in the device tree and the number of CPUs started at runtime using the nproc command.

    • test_00_secure_partition
      • test_optee_normal

        The test waits for the Primary Compute to log that OP-TEE has loaded the required Secure Partitions (SPs) and that the primary CPU has switched to Normal world boot.
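
The device-tree/nproc cross-check described for test_configured_pc_cpus_in_linux can be sketched as follows. This is a minimal illustration with hypothetical helper names, not the OEQA implementation:

```python
# Hypothetical sketch of the CPU-count cross-check performed by
# test_configured_pc_cpus_in_linux: the number of cpu@N nodes in the
# device tree should match the CPU count reported by `nproc`.

def count_dt_cpus(dt_nodes):
    """Count device-tree nodes that represent CPUs (e.g. 'cpu@0')."""
    return sum(1 for node in dt_nodes if node.startswith("cpu@"))

def cpus_consistent(dt_nodes, nproc_output):
    """Return True when the device tree and nproc agree on the CPU count."""
    return count_dt_cpus(dt_nodes) == int(nproc_output.strip())

# Example: a 4-CPU configuration as the node names might appear under
# /proc/device-tree/cpus/, alongside captured `nproc` output.
nodes = ["cpu-map", "cpu@0", "cpu@100", "cpu@200", "cpu@300"]
print(cpus_consistent(nodes, "4\n"))  # -> True
```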

FVP device tests

These tests consist of a series of device tests that can be found in meta-arm/lib/oeqa/runtime/cases/fvp_devices.py.

  • networking

    Checks that the network device and its correct driver are available and accessible via the filesystem and that outbound connections work (invoking wget).

  • RTC

    Checks that the Real-Time Clock (RTC) device and its correct driver are available and accessible via the filesystem and verifies that the hwclock command runs successfully.

  • cpu_hotplug

    Checks for CPU availability and that basic functionality works, such as enabling and disabling CPUs and preventing all of them from being disabled at the same time.

  • virtiorng

    Checks that the virtio-rng device is available through the filesystem and that it can generate random numbers on demand.

  • watchdog

    Checks that the watchdog device and its correct driver are available and accessible via the filesystem.
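
The device tests above share a common pattern: confirm a device node exists in sysfs and that the expected driver is bound to it. The sketch below models that pattern with a plain dict standing in for the real filesystem; the device address and helper name are illustrative assumptions, not taken from fvp_devices.py:

```python
# Minimal sketch of the driver-binding check used by the FVP device tests.
# A dict stands in for sysfs; on a real target this would be a readlink of
# /sys/bus/platform/devices/<device>/driver.

def device_has_driver(sysfs, device, expected_driver):
    """Return True if `device` is present and bound to `expected_driver`."""
    path = f"/sys/bus/platform/devices/{device}/driver"
    return sysfs.get(path) == expected_driver

# Example: an RTC bound to the PL031 driver (address is hypothetical).
fake_sysfs = {
    "/sys/bus/platform/devices/1c170000.rtc/driver": "rtc-pl031",
}
print(device_has_driver(fake_sysfs, "1c170000.rtc", "rtc-pl031"))  # -> True
```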

PSA APIs test suite integration on Primary Compute

The meta-arm Yocto layer provides Trusted Services OEQA tests, which automate running the Trusted Services test executables. The script that implements the test is meta-arm/lib/oeqa/runtime/cases/trusted_services.py.

Currently, the following test cases for psa-api-test (from the PSA Arch Tests project) are supported:

  • ts-psa-crypto-api-test

    Used for conformance testing of the PSA Crypto API.

  • ts-psa-ps-api-test

    Used for conformance testing of the PSA Protected Storage part of the PSA Secure Storage API.

  • ts-psa-its-api-test

    Used for conformance testing of the PSA Internal Trusted Storage part of the PSA Secure Storage API.

  • ts-psa-iat-api-test

    Used for conformance testing of the PSA Initial Attestation API.

Platform Fault Detection Interface (PFDI) Test

The Platform Fault Detection Interface (PFDI) test is designed to validate the correct functioning of the PFDI integration. It does this by verifying the systemd service status of pfdi-app, the execution of the PFDI application, and the validation of the PFDI command-line interface (CLI).

The script that implements the test is yocto/meta-arm-auto-solutions/lib/oeqa/runtime/cases/test_10_pfdi.py.

The following tests are executed to validate PFDI:

  • test_init_systemd_service

    The test_init_systemd_service method verifies that the pfdi-app systemd service starts correctly on boot. It uses journalctl to inspect the logs, ensuring the presence of expected service initialization messages and confirming the absence of error patterns in the log output.

  • test_pfdi_app

    The test_pfdi_app method validates the end-to-end execution of PFDI tool commands. It uses pfdi-tool to generate and pack diagnostic configuration files, then runs those diagnostics using the pfdi-sample-app. The test checks that diagnostics execute successfully across all CPU cores configured in the system.

  • test_pfdi_cli

    The test_pfdi_cli method checks the CLI interface by running commands such as --info, --pfdi_info, and --count. It validates that version information is correctly reported and that each core passes the Out of Reset (OoR) diagnostic check using the --result command.

  • test_pfdi_cli_force_error

    The test_pfdi_cli_force_error method injects a simulated fault on a CPU core using the pfdi-cli -e command. It then checks the systemd journal to verify that the failure was captured correctly, with log entries indicating that the Online (OnL) test failed for a CPU and reporting the appropriate input/output error code.

  • test_pfdi_app_monitoring

    The test_pfdi_app_monitoring test checks that PFDI monitoring starts properly on every CPU core. It looks at the system’s cluster and core layout, then confirms that each one shows the correct Started PFDI monitoring log message. If any core’s log is missing, late, or incorrect, the test will fail.

  • test_pfdi_app_monitoring_error

    The test_pfdi_app_monitoring_error test checks how the system behaves when an error is forced using the pfdi-cli. For each CPU core in every cluster, it triggers an error with the --force_error option and then verifies that the PFDI monitor reports the correct failure message. The test passes if all cores show the expected “Failed, stopping PFDI monitoring” logs.

  • test_pfdi_sbistc

    The test_pfdi_sbistc test validates the system response when PFDI errors are forced on every CPU core. For each (cluster, core), it triggers an error using the pfdi-cli and then checks that the expected FMU non-critical fault and SBISTC failure logs appear. The test passes if all cores report both log messages within the timeout windows; it fails if any expected log is missing, delayed, or incorrect.
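
Several of the PFDI tests above follow the same per-core pattern: iterate over every (cluster, core) pair and require a matching log line in the journal. A hedged sketch of that pattern, with an assumed log format and hypothetical helper name:

```python
# Illustrative sketch of the per-core log check done by tests such as
# test_pfdi_app_monitoring: every core in every cluster must have logged
# the start message. The message format here is an assumption for
# illustration, not the exact pfdi-app log text.

def monitoring_started_everywhere(journal, clusters, cores_per_cluster):
    """Check that every core in every cluster logged the start message."""
    pattern = "Started PFDI monitoring on cluster {c} core {n}"
    return all(
        pattern.format(c=c, n=n) in journal
        for c in range(clusters)
        for n in range(cores_per_cluster)
    )

# Example: a synthetic journal for a 2-cluster, 4-core layout.
journal = "\n".join(
    f"pfdi-app: Started PFDI monitoring on cluster {c} core {n}"
    for c in range(2) for n in range(4)
)
print(monitoring_started_everywhere(journal, 2, 4))  # -> True
```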

Safety Diagnostics tests

These tests consist of Safety Island tests that can be found in yocto/meta-arm-bsp-extras/lib/oeqa/runtime/cases/test_10_safetydiagnostics_ssu_fmu.py.

  • test_10_safetydiagnostics_ssu_fmu
    • test_safety_island_fmu

      This validates that the FMU collects all faults from upstream fault sources and collates them into a single pair of non-critical (NC) and critical (C) error signals.

    • test_safety_island_ssu

      This validates that the SSU provides a mechanism for validating critical and non-critical state transitions using the SSU SYS_CTRL and SYS_STATUS registers.
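
The FMU behaviour that test_safety_island_fmu exercises, collating many upstream faults into one NC/C signal pair, can be sketched as below. The data model is an assumption for illustration, not the FMU programming interface:

```python
# Illustrative model of FMU fault collation: faults from many upstream
# sources are reduced to a single pair of non-critical (NC) and
# critical (C) error signals.

def collate_faults(faults):
    """Reduce a list of (source, severity) faults to one (nc, c) pair."""
    nc = any(sev == "non-critical" for _, sev in faults)
    c = any(sev == "critical" for _, sev in faults)
    return nc, c

# Example: two upstream sources raise one fault of each severity
# (source names are hypothetical).
faults = [("watchdog", "non-critical"), ("ecc", "critical")]
print(collate_faults(faults))  # -> (True, True)
```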

Primary Compute CPUs RAS tests

These tests consist of Error processing tests that can be found in yocto/meta-arm-bsp-extras/lib/oeqa/runtime/cases/test_00_tftf.py.

The RAS validation is based on Trusted Firmware-A Tests (TF-A-Tests), using a special build configuration in which U-Boot is replaced with the Trusted Firmware-A Tests (TF-A-Tests).

The following test is executed as part of the validation.

  • TftfTest

    The TftfTest verifies that each RAS error is processed correctly by the firmware. The test injects a RAS error and then waits for the error to be cleared successfully. This test also verifies Transient Fault Protection enablement.

Safety Island Cluster 1

This test validates Safety Island Cluster 1 and is implemented in yocto/meta-arm-auto-solutions/lib/oeqa/runtime/cases/test_10_safety_island.py.

  • test_10_safety_island
    • test_cluster1

      Verifies the Safety Island Cluster 1 (Zephyr) boot flow for the CSS-Aspen platform. The test checks that the Zephyr Hello World demo application boots on the cluster, and that all SMD cores are up and operational.

Arm Cryptographic Extension Performance Tests

The Arm Cryptographic Extension performance test validates the performance benefits of the Arm Cryptographic Extension by comparing HTTPS download times with and without the extension enabled. This test demonstrates real-world performance improvements in cryptographic operations. On the FVP, the Arm Cryptographic Extension is simulated with a cryptography plugin.

The script that implements the test is yocto/meta-arm-auto-solutions/lib/oeqa/runtime/cases/test_50_cryptographic_extension.py.

  • test_50_cryptographic_extension
    • test_cryptographic_extension_performance

      This test validates the performance benefits of the Arm Cryptographic Extension through a comprehensive HTTPS download comparison. The test performs the following operations:

      1. Certificate Generation: Creates a self-signed certificate using OpenSSL with RSA 2048-bit key for secure SSL/TLS connections.

      2. SSL Server Setup: Starts an SSL server that serves 10MB of random data using the generated certificate, simulating real-world encrypted data transfer scenarios.

      3. Performance Measurement with Extension: Downloads data over HTTPS with the Arm Cryptographic Extension enabled, using AES256-GCM-SHA384 cipher suite. The time command measures real time, user time, and system time for the operation.

      4. Performance Measurement without Extension: Downloads the same data with the Arm Cryptographic Extension disabled by setting OPENSSL_armcap=0x0 environment variable, forcing OpenSSL to use software-based cryptographic implementations.

      5. Performance Validation: Compares the timing results to verify that:

        • Real time (wall-clock time) is lower with the extension enabled

        • User time (CPU time in user mode) is significantly reduced with hardware acceleration

        • The cryptographic extension provides measurable performance improvements

      6. Cleanup: Properly terminates the SSL server process and removes generated certificate files to ensure clean test environment.

      The test uses OpenSSL’s capability detection and cipher suite selection to demonstrate hardware-accelerated cryptography versus software-only implementation. Performance improvements are expected due to dedicated cryptographic hardware instructions available in the Arm Cortex-A720AE core.
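
The timing comparison in step 5 can be sketched as follows. The parsing helper and the sample figures are illustrative assumptions; the real test drives the `time` command on the target:

```python
# Minimal sketch of the timing comparison in
# test_cryptographic_extension_performance: parse the real/user/sys
# fields from POSIX `time -p` style output and check that the
# hardware-accelerated run is faster.
import re

def parse_time(output):
    """Extract real/user/sys seconds from `time -p` style output."""
    return {k: float(v)
            for k, v in re.findall(r"(real|user|sys)\s+([\d.]+)", output)}

# Hypothetical measurements: with the Cryptographic Extension enabled,
# and with it disabled via OPENSSL_armcap=0x0.
with_ce = parse_time("real 1.20\nuser 0.45\nsys 0.10")
without_ce = parse_time("real 2.90\nuser 2.10\nsys 0.12")

faster = (with_ce["real"] < without_ce["real"]
          and with_ce["user"] < without_ce["user"])
print(faster)  # -> True
```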

Power Management CPU idle power states (C-states)

The CPU Idle test suite validates the correct functionality of the CPU idle states and transitions on the Primary Compute of the CSS-Aspen platform. It includes tests for usage, entry and exit latency, residency, and transitions between different CPU idle states and CPU idle governors.

The script that implements the test is yocto/meta-arm-auto-solutions/lib/oeqa/runtime/cases/test_60_cpuidle_cstates.py.

The following tests validate CPU idle functionality:

  • test_ensure_cpuidle_or_skip

    This test checks if the cpuidle sysfs interface is present on the system and loads the C-state information for all CPUs. If no C-states are found, the subsequent tests are skipped. This serves as a prerequisite validation to ensure the CPU idle framework is available.

  • test_cpuidle_c_states

    This test validates that the required CPU idle C-states exist and have the expected names. It verifies the presence of three C-states: WFI (state0), cpu-sleep (state1), and cluster-sleep (state2) for each CPU core by checking the sysfs interface.

  • test_cstates_default_status

    This test verifies that all required CPU idle C-states are enabled by default when the kernel exposes the default_status interface. It ensures that the power management states are properly configured for optimal system operation.

  • test_disable_cstate

    This test validates the ability to disable individual C-states and verifies that usage counters do not increase while a state is disabled. The test also ensures that the original state can be restored, confirming proper runtime control of CPU idle states.

  • test_cstate_residency_latency

    This test checks that the latency and residency values for each C-state match the expected platform-specific values. It also verifies that usage and time counters advance when C-states are entered, confirming that the power management states are actively used.

  • test_cpuidle_governors

    This test validates the CPU idle governor framework by checking that the current governor (read-only interface) is one of the available governors, and if a read-write interface exists, it matches the read-only value. This ensures proper governor configuration and interface consistency.

  • test_cpuidle_governor_switching

    This test validates runtime switching between CPU idle governors when supported. It attempts to switch to each available governor and verifies that the change takes effect in both read-only and read-write interfaces, ensuring dynamic power management policy changes work correctly.

  • test_invalid_cpuidle_governor

    This test ensures that writing an invalid governor name fails appropriately and does not change the current governor setting. It validates the robustness of the governor selection interface against invalid inputs.
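
The C-state check in test_cpuidle_c_states amounts to comparing the state names each CPU exposes under /sys/devices/system/cpu/cpuN/cpuidle/state*/name against an expected table. A minimal sketch, with the sysfs contents modelled as a dict rather than read from a live target:

```python
# Sketch of the expected C-state table for the Primary Compute, as
# described above: WFI (state0), cpu-sleep (state1), cluster-sleep
# (state2). The helper name is hypothetical.

EXPECTED_CSTATES = {
    "state0": "WFI",
    "state1": "cpu-sleep",
    "state2": "cluster-sleep",
}

def cstates_match(observed):
    """Return True when the observed state->name mapping is as expected."""
    return observed == EXPECTED_CSTATES

# Example: values as they might be read back for one CPU.
observed = {"state0": "WFI", "state1": "cpu-sleep", "state2": "cluster-sleep"}
print(cstates_match(observed))  # -> True
```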

CPU Frequency Scaling tests

The CPU Frequency Scaling test suite validates the correct functionality of CPU frequency scaling (DVFS - Dynamic Voltage and Frequency Scaling) on the Primary Compute of the CSS-Aspen platform. It includes comprehensive tests for frequency policies, governors, frequency ranges, and the SCMI-based scaling driver functionality.

The script that implements the test is yocto/meta-arm-auto-solutions/lib/oeqa/runtime/cases/test_60_cpu_frequency.py.

The following tests validate CPU frequency scaling functionality:

  • test_cpu_frequency_policy

    This test validates that CPU frequency policies are available for all online cores and verifies the correct number of policies based on the performance domain configuration. For CSS-Aspen, it expects one policy per 4-core cluster and confirms that all required governors (ondemand, performance, powersave, and schedutil) are available for each policy.

  • test_cpufreq_default_governors

    This test verifies that the default CPU frequency governor is set to schedutil for all policies. The schedutil governor provides CPU frequency scaling based on scheduler utilization data, offering optimal performance and power balance.

  • test_cpufreq_set_governors

    This test validates that all supported CPU frequency governors can be set for each policy. It iterates through all available governors (ondemand, performance, powersave, schedutil) and verifies that each can be applied and read back correctly. The test restores the default governor after testing.

  • test_cpufreq_scaling_driver

    This test verifies that the CPU frequency scaling driver is configured as scmi for all policies. The SCMI (System Control and Management Interface) driver enables communication with the System Control Processor (SCP) for frequency management operations.

  • test_current_frequency_per_governor

    This test validates that the current frequency is reported correctly for each governor. It sets each governor in turn and verifies that the reported current frequency falls within the expected set of supported frequencies (1.8, 2.0 and 2.5 GHz). This ensures proper frequency reporting and governor functionality.

  • test_cpufreq_affected_cpus_per_policy

    This test verifies that CPU frequency changes apply to the correct set of CPUs within each performance domain. For CSS-Aspen’s cluster configuration, it validates that each policy affects exactly 4 consecutive CPU cores, confirming proper performance domain mapping.

  • test_update_invalid_governor

    This test ensures system robustness by verifying that attempts to set invalid governor names fail gracefully without changing the current governor setting. It validates proper error handling in the governor selection interface.

  • test_update_scaling_min_frequencies

    This test validates the ability to adjust minimum scaling frequencies for each policy. It tests setting various frequency values within the supported range while ensuring the minimum frequency does not exceed the maximum frequency. The test verifies proper frequency boundary enforcement and restores original settings after testing.

  • test_update_scaling_max_frequencies

    This test validates the ability to adjust maximum scaling frequencies for each policy. It tests setting various frequency values within the supported range while ensuring the maximum frequency is not set below the minimum frequency. The test verifies proper frequency limit management and configuration persistence.

  • test_update_min_max_scaling_frequencies_negative

    This test validates system robustness by ensuring that invalid frequency configurations are rejected. It attempts to set minimum frequencies higher than maximum frequencies and vice versa, verifying that the system prevents invalid configurations and maintains frequency boundary integrity. When invalid values are provided, the system either rejects them entirely or clamps them to valid ranges.
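
The boundary rules exercised by the test_update_scaling_* tests can be sketched as below. The section notes the interface may either reject or clamp invalid values; this helper illustrates only the clamping variant, and the field names are assumptions modelled loosely on cpufreq sysfs attributes (values in kHz):

```python
# Illustrative sketch of scaling_min_freq boundary enforcement: a
# requested minimum is clamped so it never drops below the hardware
# minimum nor exceeds the current scaling maximum.

def set_scaling_min(policy, requested_khz):
    """Apply a new scaling minimum, clamped into the valid range."""
    low, high = policy["cpuinfo_min"], policy["scaling_max"]
    policy["scaling_min"] = max(low, min(requested_khz, high))
    return policy["scaling_min"]

# Example policy for a 1.8-2.5 GHz range (values in kHz).
policy = {"cpuinfo_min": 1800000, "scaling_max": 2500000,
          "scaling_min": 1800000}
print(set_scaling_min(policy, 2000000))  # valid request -> 2000000
print(set_scaling_min(policy, 3000000))  # clamped to scaling_max -> 2500000
```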

Integration tests validating Xen

These tests consist of Xen integration tests that can be found in yocto/meta-arm-auto-solutions/lib/oeqa/runtime/cases/test_40_virtualization.py.

  • DomU lifecycle management

    The test verifies DomU lifecycle management, including status checking, destroy and restart. It uses ptest-runner to execute the 01-xendomains.bats Bash Automated Test System (BATS) tests in yocto/meta-arm-auto-solutions/recipes-test/xen/files/tests/01-xendomains.bats.

  • FVP Guest Devices
    • networking

      Checks that the network device and its correct driver are available and accessible via the filesystem, and that outbound connections work (invoking wget).

    • cpu_hotplug

      Checks for CPU availability and that basic functionality works, such as enabling and disabling CPUs and preventing all of them from being disabled at the same time.

    • RTC, virtiorng, and watchdog

      These devices are not available for the Xen guests and are skipped.

Mission Based Power Profile (MBPP) demonstration tests

These tests validate the Mission Based Power Profile (MBPP) demonstration script.

The script that implements the test is yocto/meta-arm-auto-solutions/lib/oeqa/runtime/cases/test_70_mission_based_profiles.py.

  • test_01_script_exists_and_is_executable

    Verifies that the mbpp.sh script exists in the /root directory and has the correct executable permissions (-r-xr--r--). Ensures the script is available, accessible, and executable for runtime validation.

  • test_02_help_and_list

    Verifies that running mbpp.sh with the -h and -l options displays the correct help information and available power profiles. Ensures that Parking, City and Highway profiles are listed without any console errors or missing details.

  • test_03_dump_initial_then_set_parking_and_verify

    Performs an initial state dump using mbpp.sh -d to verify the current power profile, then sets the system to parking mode using -s parking. Confirms that the mode change is successful and that the current mode dump matches the expected setting.

  • test_04_idempotent_all_profiles

    Verifies idempotent behavior by re-selecting each profile (parking, city and highway). Ensures that when a profile is already active, the script correctly reports “Power profile is already set.” without redundant reconfiguration.

  • test_05_case_insensitive_all_profiles

    Validates that profile names are case-insensitive. Checks variants such as PARKING, ParkIng and parking to ensure consistent behavior and correct application of CPU governor settings for each mode.

  • test_06_invalid_profile_selection

    Ensures proper handling of invalid inputs such as sport, eco and xyz. Verifies that the script returns an appropriate “Invalid profile selection” message and that the previously active profile remains unchanged.

  • test_07_toggle_all_modes

    Cycles through all valid profiles (city, highway and parking) multiple times. Ensures consistent transitions between modes and verifies that the correct CPU governors are applied after each switch without error or inconsistency.

  • test_08_guard_when_not_all_cores_online

    Validates that the MBPP script correctly detects when not all CPU cores are online. Ensures that in such cases, the script aborts the operation and reports “Not all N cores are online.” to maintain system integrity.

  • test_09_set_governor_to_default

    Restores all CPU frequency governors to the default schedutil mode after the MBPP tests are executed. Brings all CPU cores online, and updates each CPU’s governor to schedutil. Ensures that the test environment returns to a clean and consistent state for subsequent test runs or validation cycles.
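
The profile-selection behaviour the MBPP tests exercise, case-insensitive matching, idempotent re-selection, and rejection of unknown names, can be sketched as below. The messages mirror those quoted in the tests above, but the helper is illustrative, not the mbpp.sh implementation:

```python
# Illustrative model of MBPP profile selection: normalize the requested
# name, reject unknown profiles, and report when the profile is already
# active instead of reconfiguring.

PROFILES = ("parking", "city", "highway")

def select_profile(current, requested):
    """Return the (new_profile, message) pair for a selection request."""
    name = requested.strip().lower()
    if name not in PROFILES:
        return current, "Invalid profile selection"
    if name == current:
        return current, "Power profile is already set."
    return name, f"Power profile set to {name}"

# Examples covering the case-insensitive, idempotent and invalid paths.
print(select_profile("city", "PARKING"))     # -> ('parking', 'Power profile set to parking')
print(select_profile("parking", "parking"))  # -> ('parking', 'Power profile is already set.')
print(select_profile("parking", "sport"))    # -> ('parking', 'Invalid profile selection')
```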