FVP Platform
Prerequisites
To run tests locally on FVP, make sure to have:
FVP binary
The required Fast Models (FVP) executable must be installed on the host machine. When running pytest, the path to the binary must be passed explicitly through:

--fvp-binary /opt/arm/FVP/models/Linux64_GCC-9.3/FVP

Crypto Library
For targeting the RD-Aspen FVP, the Crypto.so plugin is needed. The test will try to auto-detect this file under the FVP installation.

Build Images
Tests require prebuilt images from the software reference stack build. The directory passed to --build-dir must contain the image files needed to boot the platform (such as rse-rom-image.img, ap-flash-image.img, and a .wic image). These images may come from a local build or be downloaded from any source.

$ pytest -s tests/test_sample.py \
    --config ./my_platform_config.yaml \
    --build-dir ~/my-reference-stack/build/tmp/deploy/images/rd-aspen \
    --fvp-binary /opt/arm/FVP \
    --platform fvp_rd_aspen
Configuration
Follow all the steps in Prerequisites and Installation before running tests.
Note
Before testing, ensure the .wic image name in the default YAML
configuration matches the correct build architecture.
For baremetal builds, update this argument accordingly in the YAML file:
- "-C ros.virtio_block0.image_path=${BUILD_DIR}/\
baremetal-image-fvp-rd-aspen.wic"
To run tests for other platform variants such as RD-Aspen Cfg2, some
additional parameters may need to be updated in the YAML configuration.
For example, enable an additional UART cluster terminal and adjust the
related entries in required_terminals, prompts, and port_map to include
terminal_uart_si_cluster1.
Other variant-specific changes (like additional telnet ports or prompt patterns) should be updated as needed before running tests. For instance:
--parameter css.smb.si.terminal_uart_si_cluster1.start_telnet=0
Update or extend arguments like the above based on the specific platform variant being tested, such as RD-Aspen Cfg1 or Cfg2.
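As an illustrative sketch only, a Cfg2-style configuration might extend the three sections named above along the following lines. The key names required_terminals, prompts, and port_map come from the text above; the surrounding structure, the default terminal name, the prompt string, and the port number are assumptions, not the project's exact schema:

```yaml
# Hypothetical fragment -- the layout and values here are assumptions;
# consult the platform's default YAML for the real schema.
required_terminals:
  - terminal_uart0              # example name for the default console
  - terminal_uart_si_cluster1   # additional cluster terminal for Cfg2
prompts:
  terminal_uart_si_cluster1: "# "
port_map:
  terminal_uart_si_cluster1: 5002
```

The important point is that all three sections must agree: a terminal listed in required_terminals needs a matching prompt pattern and telnet port entry.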
Running Sample Test
Default config:
$ pytest -s tests/test_sample.py \
    --config test_automation/configs/standalone_config.yaml \
    --build-dir /tmp/images/ \
    --fvp-binary /tmp/FVP \
    --platform fvp_rd_aspen
A successful sample test run should produce output similar to the following:
==================================== test session starts ====================================
platform linux -- Python 3.10.x, pytest-x.x.x, pluggy-x.x.x -- <python install dir>/bin/python3
cachedir: .pytest_cache
rootdir: <rootdir path>
configfile: pyproject.toml
plugins: timeout-x.x.x
collected 1 item

tests/test_sample.py::test_sample PASSED

==================================== 1 passed in 205.51s (0:03:25) ====================================

The sample test typically completes within 3-4 minutes, though the time varies with host system performance.
Custom config:
$ pytest -s tests/test_sample.py \
    --config ./my_platform_config.yaml \
    --build-dir /tmp/images/ \
    --fvp-binary /tmp/FVP \
    --platform fvp_rd_aspen
Note
The --platform argument must exactly match the platform value in the
config file (e.g., fvp_rd_aspen). The value should contain "rd_aspen"
if tests are for RD-Aspen.

The default logging level is INFO. To enable DEBUG logs, add
--debug-logs. For more detailed output during debugging, extra pytest
arguments can be used: -rs --setup-show -vv. If you don't see INFO logs
after using verbose command line options, use
-o log_cli=true --log-cli-level=INFO.

Environment variables can be used instead of command line options:
FVP_BINARY instead of --fvp-binary, and FVP_CRYPTO_PATH for the
location of Crypto.so.
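The precedence between the CLI flag and the environment variable can be summarized with a small sketch. This is a hypothetical helper illustrating the usual convention (an explicit --fvp-binary wins over FVP_BINARY); the real test harness may resolve these differently:

```python
import os


def resolve_fvp_binary(cli_value=None):
    """Return the FVP binary path, preferring the explicit CLI flag and
    falling back to the FVP_BINARY environment variable.

    Hypothetical helper -- it mirrors common CLI/env precedence rules,
    not the project's actual resolution code.
    """
    path = cli_value or os.environ.get("FVP_BINARY")
    if not path:
        raise ValueError(
            "FVP binary path not set: pass --fvp-binary or export FVP_BINARY"
        )
    return path
```

With FVP_BINARY=/tmp/FVP exported, resolve_fvp_binary() returns "/tmp/FVP", while resolve_fvp_binary("/opt/FVP") still returns the CLI value.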
Add New Test Cases for FVP
FVP tests use built-in fixtures such as platform_base_obj and related helpers
to interact with the platform via console-based command execution and
prompt-based validation.
Example:
class TestMyExample:
    def test_my_example(self, platform_base_obj, platform_name):
        # Run a command on the default console and capture the output
        # up to the next prompt.
        code, output = platform_base_obj.mgr.execute_command_with_prompt_capture(
            port=platform_base_obj.default_console,
            command="echo hello",
            timeout=10,
        )
        assert code == 0
        assert "hello" in output
Note
Always use session_manager.wait_for_prompt_in_log() or
session_manager.execute_command_with_prompt_capture(). Match terminal
names as defined in the YAML config. conftest.py provides global
fixtures, so there is no need to redeclare managers in each test.