tests.utils.scp_cli_utils
=========================

.. py:module:: tests.utils.scp_cli_utils


Classes
-------

.. autoapisummary::

   tests.utils.scp_cli_utils.ScpCliUtils
   tests.utils.scp_cli_utils.ScpTestUtils


Module Contents
---------------

.. py:class:: ScpCliUtils

   SCP Debugger CLI utility class.

   Provides helpers to interact with the SCP CLI:

   - Enter or exit CLI
   - Capture logs between test markers


   .. py:attribute:: DEFAULT_TIMEOUT
      :value: 120


   .. py:method:: enter_scp_cli(session, pattern, prompt, timeout)

      Send Ctrl+E and wait for the SCP CLI prompt to appear.

      :param session: pexpect session connected to the console
      :param pattern: Regex pattern to identify successful entry into CLI
      :param prompt: Regex pattern to identify the CLI prompt after entry
      :param timeout: Timeout waiting for patterns
      :returns: Captured output from the CLI entry process
      :raises AssertionError: If expected patterns are not found within
          the timeout


   .. py:method:: exit_scp_cli(session, pattern, timeout)

      Send Ctrl+D, ensure the exit banner is printed, and return it.

      :param session: pexpect session connected to the console
      :param pattern: Regex pattern to identify successful exit from CLI
      :param timeout: Timeout for waiting for the exit pattern
      :returns: Captured output from the CLI exit process
      :raises AssertionError: If the expected exit pattern is not found
          within the timeout


   .. py:method:: capture_integration_logs(manager, session, test_name, timeout)

      Capture logs emitted between the Start/End markers of a test.

      :param manager: The console manager to use for pattern matching
      :param session: pexpect session connected to the console
      :param test_name: The name of the test, used to identify log markers
      :param timeout: Timeout for waiting for log markers
      :returns: Captured logs between the Start and End markers
      :raises AssertionError: If the expected log markers are not found
          within the timeout


   .. py:method:: _ensure_str(data)

      Ensure that the given data is returned as a string.
      If the input is of type ``bytes``, it is decoded using UTF-8 with
      errors ignored. For all other types, the value is converted using
      ``str()``.

      :param data: Value to normalize into a string representation.


.. py:class:: ScpTestUtils

   Reusable SCP test helpers.


   .. py:attribute:: DEFAULT_TIMEOUT
      :value: 120


   .. py:attribute:: TEST_RESULT_RE
      :value: '(?P<file>.+?):(?P<line>\\d+):(?P<name>[^:]+):(?P<status>PASS|FAIL|IGNORE)'


   .. py:method:: _build_test_patterns(test_name)

      Build regular expression patterns for an integration test.

      The provided test name is escaped to ensure safe inclusion in a
      regular expression. The returned patterns match log lines of the
      form::

         [INTEGRATION_TEST] Start: <test_name>
         [INTEGRATION_TEST] End: <test_name>

      :param test_name: Name of the integration test.
      :returns: Tuple containing the start and end regex patterns.


   .. py:method:: _wait_for_pattern(session, pattern, timeout, manager)

      Wait for a specific pattern to appear in the console session.

      If a ``manager`` object providing an ``_expect_pattern`` method is
      supplied, it is used for pattern matching. Otherwise, the method
      falls back to direct ``pexpect`` matching.

      :param session: Active pexpect session connected to the console.
      :param pattern: Regular expression pattern to wait for.
      :param timeout: Maximum time in seconds to wait for the pattern.
      :param manager: Optional console manager implementing
          ``_expect_pattern``.
      :raises AssertionError: If the expected pattern is not matched
          within the given timeout when using the manager.
      :raises pexpect.TIMEOUT: If direct ``pexpect`` matching times out.
      :raises pexpect.EOF: If the session ends unexpectedly.


   .. py:method:: _normalize_output(data)

      Normalize raw console output into a clean string.

      This helper ensures that data returned from pexpect or other
      subprocess interactions is consistently represented as a string.
      If the input is of type ``bytes``, it is decoded using UTF-8 with
      errors ignored. The result is then converted to a string (if not
      already) and stripped of leading and trailing whitespace.
      :param data: Raw console output to normalize.
      :returns: Cleaned string representation of the input.


   .. py:method:: capture_logs(session, test_name, timeout = None, manager = None)

      Capture integration test logs between Start/End markers.

      This method waits for the integration test start and end markers
      corresponding to the given ``test_name`` and extracts all console
      output emitted between those markers. The method supports both
      direct ``pexpect`` pattern matching and manager-assisted matching
      (when a console manager providing ``_expect_pattern`` is supplied).

      :param session: Active pexpect session connected to the console.
      :param test_name: Name of the integration test.
      :param timeout: Optional timeout in seconds. If not provided,
          ``DEFAULT_TIMEOUT`` is used.
      :param manager: Optional console manager providing
          ``_expect_pattern`` for matching.
      :returns: Captured console output between test start and end markers.
      :raises AssertionError: If the expected start or end markers are
          not detected within the timeout.


   .. py:method:: _compute_width(runs)

      Compute the display width required for the RUN column in the summary.

      The width is determined by the longest run name in the provided
      list. A minimum width of 12 characters is enforced to keep table
      formatting readable.

      :param runs: List of per-run summary dictionaries containing at
          least a ``"name"`` key.
      :returns: Calculated column width for the RUN field.


   .. py:method:: _compute_totals(runs)

      Compute aggregated totals across multiple test runs.

      This method sums the total number of executed, passed, failed, and
      ignored tests from the provided run summaries and determines an
      overall status.

      :param runs: List of per-run summary dictionaries containing the
          keys ``"total"``, ``"passed"``, ``"failures"``, ``"ignored"``,
          and optionally ``"ok"``.
      :returns: Dictionary containing aggregated totals and overall
          status. Keys include ``"total"``, ``"passed"``, ``"failures"``,
          ``"ignored"``, and ``"status"``.


   .. py:method:: _build_header(title, meta)

      Build the formatted header section for the combined summary output.

      The header includes:

      - A top separator line
      - The provided title
      - Optional metadata fields (platform, port, suite)
      - A separator line below the metadata

      :param title: Title displayed at the top of the summary.
      :param meta: Dictionary containing optional metadata fields.
      :returns: List of formatted header lines.


   .. py:method:: _build_table(runs, totals, width_name)

      Build the formatted summary table section.

      This method generates a fixed-width, human-readable table showing:

      - Per-run statistics (total, passed, failed, ignored, result)
      - An aggregated "OVERALL" summary row

      The table layout is aligned using the provided column width for
      the RUN column to ensure consistent formatting.

      :param runs: List of per-run summary dictionaries. Each dictionary
          must contain the keys ``name``, ``total``, ``passed``,
          ``failures``, ``ignored``, and ``status``.
      :param totals: Dictionary containing aggregated totals and overall
          status values.
      :param width_name: Width used to left-align the ``RUN`` column.
      :returns: List of formatted table lines.


   .. py:method:: _build_footer(totals, _width_name)

      Build the footer section of the combined summary output.

      The footer displays the overall aggregated result, including
      total, passed, failed, and ignored counts, along with the computed
      overall status. It is visually separated from the table using
      delimiter lines.

      :param totals: Dictionary containing aggregated totals and overall
          status values. Expected keys include ``total``, ``passed``,
          ``failures``, ``ignored``, and ``status``.
      :param _width_name: Unused width parameter included for interface
          consistency with other builder methods.
      :returns: List of formatted footer lines.


   .. py:method:: _build_appendix(runs, overall_status)

      Build an optional appendix section listing failed or ignored tests.
      The appendix is included only when:

      - The overall status is not ``"PASS"``, or
      - At least one run contains failed or ignored test entries.

      For each run with issues, the appendix lists the specific failed
      and/or ignored test names to provide additional detail.

      :param runs: List of per-run summary dictionaries. Each dictionary
          may contain ``failed_tests`` and ``ignored_tests`` keys.
      :param overall_status: The computed overall test status (typically
          ``"PASS"`` or ``"FAIL"``).
      :returns: List of formatted appendix lines. Returns an empty list
          if the appendix should be omitted.


   .. py:method:: _should_skip_appendix(runs, overall_status)

      Determine whether the appendix section should be omitted.

      :param runs: List of per-run summary dictionaries. Each dictionary
          may contain ``failed_tests`` and ``ignored_tests`` keys.
      :param overall_status: The computed overall test status.
      :returns: ``True`` if the appendix should be omitted, else ``False``.


   .. py:method:: _format_run_details(run)

      Format detailed failure/ignore information for a single run.

      This method generates a list of lines describing failed and/or
      ignored tests for the given run. If no such tests exist, an empty
      list is returned.

      :param run: Dictionary containing per-run summary information.
          Expected keys include ``name``, ``failed_tests``, and
          ``ignored_tests``.
      :returns: List of formatted strings describing failed and/or
          ignored tests. Returns an empty list if none exist.


   .. py:method:: format_combined_summary(runs, title = 'SCP Integration Test Summary', meta = None)

      Generate a formatted combined summary for multiple test runs.

      This method aggregates per-run results, computes overall totals,
      and produces a structured, human-readable summary string.

      :param runs: List of per-run summary dictionaries. Each dictionary
          must contain keys such as ``name``, ``total``, ``passed``,
          ``failures``, ``ignored``, and ``status``.
      :param title: Title displayed at the top of the summary. Defaults
          to ``"SCP Integration Test Summary"``.
      :param meta: Optional metadata dictionary containing fields such
          as ``platform``, ``port``, or ``suite``.
      :returns: A newline-separated formatted summary string.


   .. py:method:: _parse_overall_summary(text, test_name)

      Parse the overall test summary line from captured output.

      :param text: Full captured console output containing the summary
          line.
      :param test_name: The test being parsed. Used for error reporting.
      :returns: A tuple containing ``(total_tests, failures, ignored,
          passed)``.
      :raises AssertionError: If ``SUMMARY_RE`` is not defined on the
          class, or if the expected summary line is not found in the
          provided text.


   .. py:method:: _parse_test_results(text)

      Extract individual test result entries from captured output.

      :param text: Full captured console output containing per-test
          results.
      :returns: A list of dictionaries with keys ``"name"`` and
          ``"status"`` representing individual test outcomes.


   .. py:method:: _validate_results(test_name, ok_present, failures, ignored, failed_tests, ignored_tests)

      Validate parsed test results and raise an error if validation fails.

      :param test_name: Name of the integration test being validated.
      :param ok_present: Indicates whether the ``OK`` marker was detected.
      :param failures: Number of failed tests reported in the summary.
      :param ignored: Number of ignored tests reported in the summary.
      :param failed_tests: List of individual failed test names.
      :param ignored_tests: List of individual ignored test names.
      :raises AssertionError: If the ``OK`` marker is missing or if any
          tests failed or were ignored.


   .. py:method:: _collect_problem_tests(results)

      Collect failed and ignored test names from parsed test results.

      This method iterates over parsed per-test result entries and
      separates tests marked as ``FAIL`` and ``IGNORE`` into two lists.

      :param results: Parsed per-test result entries, typically returned
          by :meth:`_parse_test_results`.
      :returns: A tuple containing ``(failed_tests, ignored_tests)``.


   .. py:method:: summarize_results(text, test_name, *, raise_on_fail = True)

      Generate a structured summary from raw integration test output.

      :param text: Full captured console output of the integration test.
      :param test_name: Name of the test being summarized.
      :param raise_on_fail: Whether to raise an exception if validation
          fails. Defaults to ``True``.
      :returns: Dictionary containing structured summary information,
          including totals, status, and problematic test names.
      :raises AssertionError: If validation fails and ``raise_on_fail``
          is ``True``.
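
As a rough illustration of the parsing flow behind ``_parse_test_results``
and ``_collect_problem_tests``, the sketch below matches Unity-style
``file:line:test:STATUS`` result lines and splits out problem tests. It is
a hypothetical reimplementation written for this document, not the
module's actual code; in particular, the regex group names and the sample
output lines are assumptions.

```python
import re

# Assumed shape of TEST_RESULT_RE; the group names here are guesses
# chosen to match the documented {"name": ..., "status": ...} entries.
TEST_RESULT_RE = re.compile(
    r"(?P<file>.+?):(?P<line>\d+):(?P<name>[^:]+):(?P<status>PASS|FAIL|IGNORE)"
)

def parse_test_results(text):
    """Extract per-test entries as dicts with 'name' and 'status' keys."""
    return [
        {"name": m.group("name"), "status": m.group("status")}
        for m in TEST_RESULT_RE.finditer(text)
    ]

def collect_problem_tests(results):
    """Split parsed results into (failed_tests, ignored_tests)."""
    failed = [r["name"] for r in results if r["status"] == "FAIL"]
    ignored = [r["name"] for r in results if r["status"] == "IGNORE"]
    return failed, ignored

# Hypothetical captured console output from a Unity test run.
output = """\
test/test_scp.c:12:test_boot:PASS
test/test_scp.c:34:test_timer:FAIL
test/test_scp.c:56:test_dma:IGNORE
"""

results = parse_test_results(output)
failed, ignored = collect_problem_tests(results)
```

With the sample output above, ``results`` holds three entries, ``failed``
contains ``test_timer``, and ``ignored`` contains ``test_dma``; a
``summarize_results``-style helper would then assemble these into the
summary dictionary described earlier.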