User Guide
Our test automation framework uses pytest for its extensive, industry-standard plugin support, together with a configurable YAML-based system for platform setup.
Features
- Telnet-based multi-terminal session handling for Arm FVP integration (extendable to FPGA and SoC environments). 
- Pytest-based test execution with modular plugin extensions. 
- Log-based result tracking and verification, including ANSI/log filtering utilities. 
- Reusable command execution utilities with prompt detection and automatic login handling. 
- Platform-based configuration system using properties and YAML templates. 
Getting Started
This section provides a detailed, step-by-step guide for setting up an automated test environment for the Arm Automotive Software Reference Stack.
Each run creates a parent folder inside the central logs directory.
The folder name follows this format: logs/<target>_<platform>_<timestamp>/
Inside that folder, normalized log files are generated, such as:
- Boot log, e.g. <target>_<platform>_boot.log
- Telnet console logs, e.g. telnet_5001.log (one per console/port)
- Command outputs, e.g. cmd.log for captured command results
This consistent layout provides a clear way to inspect results, share logs, and debug issues.
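As a quick illustration, here is a minimal Python sketch, assuming the layout above and that the logs/ root exists relative to the current working directory, which locates the most recent run folder and lists its log files:

from pathlib import Path

logs_root = Path("logs")
run_dirs = [p for p in logs_root.iterdir() if p.is_dir()]
if run_dirs:
    # Pick the newest run folder by modification time.
    latest = max(run_dirs, key=lambda p: p.stat().st_mtime)
    print(f"Latest run: {latest.name}")
    for log_file in sorted(latest.glob("*.log")):
        print(f"  {log_file.name}")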
Prerequisites
To run tests locally on FVP, make sure you have:
- Python environment - Ensure Python 3.10 or higher is installed. 
- FVP binary - The required Fast Models (FVP) executable must be installed on the host machine. When running pytest, the path to the binary must be passed explicitly through:

   --fvp-binary /opt/arm/FVP/models/Linux64_GCC-9.3/FVP
- Crypto Library - For targeting the RD-Aspen FVP, the Crypto.so plugin is needed. The test will try to auto-detect this file under the FVP installation.
- Build Images - Tests require prebuilt images from the software reference stack build. The directory passed to --build-dir must contain the required image files (such as rse-rom-image.img, ap-flash-image.img, and the .wic image) needed to boot the platform. These images may come from a local build or be downloaded from any source. For example:

   $ pytest -s tests/test_sample.py \
      --config ./my_platform_config.yaml \
      --build-dir ~/my-reference-stack/build/tmp/deploy/images/rd-aspen \
      --fvp-binary /opt/arm/FVP \
      --platform fvp_rd_aspen
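Before invoking pytest, it can help to sanity-check these prerequisites. A minimal Python sketch; the FVP binary path, build directory, and image names below are illustrative placeholders taken from the examples above:

import sys
from pathlib import Path

# Placeholder paths -- substitute your own FVP binary and build directory.
fvp_binary = Path("/opt/arm/FVP/models/Linux64_GCC-9.3/FVP")
build_dir = Path.home() / "my-reference-stack/build/tmp/deploy/images/rd-aspen"

assert sys.version_info >= (3, 10), "Python 3.10 or higher is required"
assert fvp_binary.is_file(), f"FVP binary not found: {fvp_binary}"
assert build_dir.is_dir(), f"Build directory not found: {build_dir}"

# Images the platform needs to boot (names from the prerequisites above).
for image in ("rse-rom-image.img", "ap-flash-image.img"):
    assert (build_dir / image).is_file(), f"Missing image: {image}"
assert list(build_dir.glob("*.wic")), f"No .wic image found under {build_dir}"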
Installation
- Create Python environment - Create and activate a virtual environment, then install dependencies in editable mode:

   $ python3 -m venv venv
   $ source venv/bin/activate
   $ pip install -e .
Configuration
Follow all the steps in Prerequisites and Installation before running tests.
Note
Before testing, ensure the .wic image name in the default YAML
configuration matches the correct build architecture.
For baremetal builds, update this argument accordingly in the YAML file:
- "-C ros.virtio_block0.image_path=${BUILD_DIR}/\
   baremetal-image-fvp-rd-aspen.wic"
This configuration has not been validated for virtualization-based builds. Users attempting to run such builds should proceed at their own discretion and may need additional configuration updates.
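To see which .wic images the build actually produced, and therefore what the YAML should reference, a small hedged helper sketch; build_dir is a placeholder for the directory passed to --build-dir:

from pathlib import Path

build_dir = Path("/tmp/images")  # placeholder: the directory passed to --build-dir
wic_images = sorted(build_dir.glob("*.wic"))
if not wic_images:
    print(f"No .wic images found under {build_dir}")
for image in wic_images:
    # The image_path argument in the YAML should reference one of these names.
    print(f"Available .wic image: {image.name}")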
To run tests for other platform variants such as RD-Aspen Cfg2, some
additional parameters may need to be updated in the YAML configuration.
For example, enable an additional UART cluster terminal and adjust the
related entries in required_terminals, prompts, and port_map to include
terminal_uart_si_cluster1.
Other variant-specific changes (like additional telnet ports or prompt patterns) should be updated as needed before running tests. For instance:
--parameter css.smb.si.terminal_uart_si_cluster1.start_telnet=0
Update or extend arguments like the above based on the specific platform variant being tested, such as RD-Aspen Cfg1 or Cfg2.
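As a sketch of such a variant tweak in Python (using PyYAML), the following adds terminal_uart_si_cluster1 to a config file. The fvp_args key name, the port number, and the overall schema are assumptions here; check them against your actual YAML layout before use:

import yaml

with open("my_platform_config.yaml") as f:
    config = yaml.safe_load(f)

terminal = "terminal_uart_si_cluster1"

# required_terminals is assumed to be a list of terminal names.
if terminal not in config["required_terminals"]:
    config["required_terminals"].append(terminal)

# port_map is assumed to map terminal names to telnet ports; 5003 is a guess.
config["port_map"][terminal] = 5003

# A prompts entry for the new terminal will likely be needed as well.
# fvp_args, holding extra model parameters, is an assumed key name.
config["fvp_args"].append(
    f"--parameter css.smb.si.{terminal}.start_telnet=0"
)

with open("my_platform_config.yaml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)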
Running a Sample Test
- Default config:

   $ pytest -s tests/test_sample.py \
      --config test_automation/configs/standalone_config.yaml \
      --build-dir /tmp/images/ \
      --fvp-binary /tmp/FVP \
      --platform fvp_rd_aspen
A successful sample test run should produce output similar to the following:
------------------------------------------------------- live log call -------------------------------------------------------
INFO tests.test_samp: Running test on Platform: fvp_rd_aspen
...
INFO tests.test_samp: Command Executed ('touch demo.txt; ls')
Command output : demo.txt
PASSED
...
----------------------------------------------------- live log teardown -----------------------------------------------------
INFO tests.conftest: Powering off platform
...
=============================================== 1 passed in 64.02s (0:01:04) ===============================================

The sample test typically completes within 1-2 minutes, though this may vary depending on host system performance.
- Custom config:

   $ pytest -s tests/test_sample.py \
      --config ./my_platform_config.yaml \
      --build-dir /tmp/images/ \
      --fvp-binary /tmp/FVP \
      --platform fvp_rd_aspen
Note
- The --platform argument must exactly match the platform value in the config file (e.g., fvp_rd_aspen).
- The value should contain “rd_aspen” if tests are for RD-Aspen (e.g., fvp_rd_aspen).
- Default logging level is INFO. To enable DEBUG logs, add --debug-logs.
- For more detailed output during debugging, extra pytest arguments can be used: -rs --setup-show -vv.
- Environment variables can be used (see the sketch after this list):
  - FVP_BINARY instead of --fvp-binary
  - FVP_CRYPTO_PATH for the location of Crypto.so
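For example, the environment variables above can be set before invoking pytest programmatically. A minimal sketch, reusing the default-config arguments from the sample run; all paths are illustrative placeholders:

import os
import pytest

# FVP_BINARY and FVP_CRYPTO_PATH replace --fvp-binary and the Crypto.so
# auto-detection; the paths below are placeholders.
os.environ["FVP_BINARY"] = "/opt/arm/FVP/models/Linux64_GCC-9.3/FVP"
os.environ["FVP_CRYPTO_PATH"] = "/opt/arm/FVP/plugins/Crypto.so"

pytest.main([
    "-s", "tests/test_sample.py",
    "--config", "test_automation/configs/standalone_config.yaml",
    "--build-dir", "/tmp/images/",
    "--platform", "fvp_rd_aspen",
])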
 
Add New Test Cases
Test files should be placed under the tests/ directory and follow the naming convention test_*.py.
Test Structure
A typical test file should contain:
- A pytest test function
- Usage of built-in fixtures such as platform_base_obj, platform_name, and telnet_mgr
- Prompt-based command execution and output validation
Example:
class TestMyExample:
   def test_my_example(self, platform_base_obj, platform_name):
      # Run a command on the platform's default console and capture the
      # exit code and output for validation.
      code, output = platform_base_obj.mgr.execute_command_with_prompt_capture(
         port=platform_base_obj.default_console,
         command="echo hello",
         timeout=10
      )
      assert code == 0
      assert "hello" in output
Note
- Always use session_manager.wait_for_prompt_in_log() or session_manager.execute_command_with_prompt_capture().
- Match terminal names as in the YAML config.
- conftest.py provides global fixtures; no need to redeclare managers in each test.
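For instance, a hedged sketch combining the two calls above. The wait_for_prompt_in_log() parameter names (port, prompt, timeout) and the prompt pattern are assumptions and must be checked against the framework's actual signature:

class TestPromptUsage:
    def test_prompt_then_command(self, platform_base_obj):
        mgr = platform_base_obj.mgr  # session manager from the global fixtures
        # Parameter names here are assumed; verify against the real signature.
        mgr.wait_for_prompt_in_log(
            port=platform_base_obj.default_console,
            prompt=r"\$",
            timeout=60,
        )
        code, output = mgr.execute_command_with_prompt_capture(
            port=platform_base_obj.default_console,
            command="uname -r",
            timeout=10,
        )
        assert code == 0
        assert output.strip()  # expect a non-empty kernel version string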