2.1.2.1.6. tests.utils.scp_cli_utils
2.1.2.1.6.1. Classes
ScpCliUtils – SCP Debugger CLI utility class.
ScpTestUtils – Reusable SCP test helpers.
2.1.2.1.6.2. Module Contents
- class tests.utils.scp_cli_utils.ScpCliUtils[source]
SCP Debugger CLI utility class.
- Provides helpers to interact with the SCP CLI:
Enter or exit CLI
Capture logs between test markers
- enter_scp_cli(session, pattern, prompt, timeout)[source]
Send Ctrl+E and wait for the SCP CLI prompt to appear.
- Parameters:
session (pexpect.spawn) – pexpect session connected to the console
pattern (str) – Regex pattern to identify successful entry into CLI
prompt (str) – Regex pattern to identify the CLI prompt after entry
timeout (int) – Timeout waiting for patterns
- Returns:
Captured output from the CLI entry process
- Raises:
AssertionError – If expected patterns are not found within the timeout
- Return type:
str
- exit_scp_cli(session, pattern, timeout)[source]
Send Ctrl+D, ensure the exit banner is printed, and return it.
- Parameters:
session (pexpect.spawn) – pexpect session connected to the console
pattern (str) – Regex pattern to identify successful exit from CLI
timeout (int) – Timeout for waiting for the exit pattern
- Returns:
Captured output from the CLI exit process
- Raises:
AssertionError – If the expected exit pattern is not found within the timeout
- Return type:
str
- capture_integration_logs(manager, session, test_name, timeout)[source]
Capture logs emitted between the Start/End markers of a test.
- Parameters:
manager (Any) – The console manager to use for pattern matching
session (pexpect.spawn) – pexpect session connected to the console
test_name (str) – The name of the test, used to identify log markers
timeout (int) – Timeout for waiting for log markers
- Returns:
Captured logs between the Start and End markers
- Raises:
AssertionError – If the expected log markers are not found within the timeout
- Return type:
str
- _ensure_str(data)[source]
Ensure that the given data is returned as a string.
If the input is of type bytes, it is decoded using UTF-8 with errors ignored. All other types are converted using str().
- Parameters:
data (object) – Value to normalize into a string representation.
- Return type:
str
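The behavior documented above can be sketched as a small helper; the standalone name `ensure_str` is illustrative, not the class method itself:

```python
def ensure_str(data: object) -> str:
    # bytes are decoded as UTF-8, silently dropping undecodable sequences;
    # every other type goes through plain str() conversion.
    if isinstance(data, bytes):
        return data.decode("utf-8", errors="ignore")
    return str(data)

print(ensure_str(b"scp> \xff"))  # undecodable trailing byte is dropped
print(ensure_str(42))
```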
- class tests.utils.scp_cli_utils.ScpTestUtils[source]
Reusable SCP test helpers
- TEST_RESULT_RE = '(?P<file>.+?):(?P<line>\\d+):(?P<name>[^:]+):(?P<status>PASS|FAIL|IGNORE)'[source]
- _build_test_patterns(test_name)[source]
Build regular expression patterns for integration test.
The provided test name is escaped to ensure safe inclusion in a regular expression. The returned patterns match log lines of the form:
[INTEGRATION_TEST] Start: <test_name>
[INTEGRATION_TEST] End: <test_name>
- Parameters:
test_name (str) – Name of the integration test.
- Returns:
Tuple containing the start and end regex patterns.
- Return type:
tuple[str, str]
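A minimal sketch of the pattern construction described above, assuming the documented marker format; `build_test_patterns` here is an illustrative stand-in for the private method:

```python
import re

def build_test_patterns(test_name: str) -> tuple:
    # Escape the test name so regex metacharacters in it match literally.
    escaped = re.escape(test_name)
    start = r"\[INTEGRATION_TEST\] Start: " + escaped
    end = r"\[INTEGRATION_TEST\] End: " + escaped
    return start, end

start, end = build_test_patterns("scp.timer[1]")
print(re.search(start, "[INTEGRATION_TEST] Start: scp.timer[1]") is not None)
```

Escaping matters here: a test name such as `scp.timer[1]` contains `.` and `[`, which would otherwise be interpreted as regex syntax.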
- _wait_for_pattern(session, pattern, timeout, manager)[source]
Wait for a specific pattern to appear in the console session.
If a manager object providing an _expect_pattern method is supplied, it is used for pattern matching. Otherwise, the method falls back to direct pexpect matching.
- Parameters:
session (pexpect.spawn) – Active pexpect session connected to the console.
pattern (str) – Regular expression pattern to wait for.
timeout (int) – Maximum time in seconds to wait for the pattern.
manager (Any) – Optional console manager implementing _expect_pattern.
- Raises:
AssertionError – If the expected pattern is not matched within the given timeout when using the manager.
pexpect.TIMEOUT – If direct pexpect matching times out.
pexpect.EOF – If the session ends unexpectedly.
- Return type:
None
- _normalize_output(data)[source]
Normalize raw console output into a clean string.
This helper ensures that data returned from pexpect or other subprocess interactions is consistently represented as a string. If the input is of type bytes, it is decoded using UTF-8 with errors ignored. The result is then converted to a string (if not already) and stripped of leading and trailing whitespace.
- Parameters:
data (Any) – Raw console output to normalize.
- Returns:
Cleaned string representation of the input.
- Return type:
str
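The normalization steps documented above (decode bytes, stringify, strip) can be reproduced as a short sketch; the free-function name is illustrative:

```python
def normalize_output(data) -> str:
    # Decode bytes with undecodable sequences ignored, then coerce to
    # str and strip surrounding whitespace (including \r\n from consoles).
    if isinstance(data, bytes):
        data = data.decode("utf-8", errors="ignore")
    return str(data).strip()

print(normalize_output(b"  OK\r\n"))
```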
- capture_logs(session, test_name, timeout=None, manager=None)[source]
Capture integration test logs between Start/End markers.
This method waits for the integration test start and end markers corresponding to the given test_name and extracts all console output emitted between those markers.
The method supports both direct pexpect pattern matching and manager-assisted matching (when a console manager providing _expect_pattern is supplied).
- Parameters:
session (pexpect.spawn) – Active pexpect session connected to the console.
test_name (str) – Name of the integration test.
timeout (Optional[int]) – Optional timeout in seconds. If not provided, DEFAULT_TIMEOUT is used.
manager (Any) – Optional console manager providing _expect_pattern for matching.
- Returns:
Captured console output between test start and end markers.
- Raises:
AssertionError – If the expected start or end markers are not detected within the timeout.
- Return type:
str
- _compute_width(runs)[source]
Compute the display width required for the RUN column in the summary.
The width is determined by the longest run name in the provided list. A minimum width of 12 characters is enforced to keep table formatting readable.
- Parameters:
runs (List[Dict[str, Any]]) – List of per-run summary dictionaries containing at least a "name" key.
- Returns:
Calculated column width for the RUN field.
- Return type:
int
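A sketch of the width rule ("longest name, but at least 12"); the standalone function name is illustrative:

```python
def compute_width(runs, minimum: int = 12) -> int:
    # RUN column width: longest run name in the list, floored at `minimum`.
    longest = max((len(run["name"]) for run in runs), default=0)
    return max(longest, minimum)

runs = [{"name": "basic"}, {"name": "scp_power_management"}]
print(compute_width(runs))  # → 20
```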
- _compute_totals(runs)[source]
Compute aggregated totals across multiple test runs.
This method sums the total number of executed, passed, failed, and ignored tests from the provided run summaries and determines an overall status.
- Parameters:
runs (List[Dict[str, Any]]) – List of per-run summary dictionaries containing the keys "total", "passed", "failures", "ignored", and optionally "ok".
- Returns:
Dictionary containing aggregated totals and overall status. Keys include "total", "passed", "failures", "ignored", and "status".
- Return type:
Dict[str, Any]
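The aggregation can be sketched as below. This is an assumption-laden simplification: the real helper may also consult each run's optional "ok" flag when deriving the status, whereas this sketch only checks the failure count:

```python
def compute_totals(runs):
    # Sum the per-run counters across all runs.
    totals = {"total": 0, "passed": 0, "failures": 0, "ignored": 0}
    for run in runs:
        for key in ("total", "passed", "failures", "ignored"):
            totals[key] += run.get(key, 0)
    # Simplified status rule: any failure anywhere means overall FAIL.
    totals["status"] = "PASS" if totals["failures"] == 0 else "FAIL"
    return totals

runs = [
    {"total": 3, "passed": 2, "failures": 1, "ignored": 0},
    {"total": 2, "passed": 2, "failures": 0, "ignored": 0},
]
print(compute_totals(runs))
```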
- _build_header(title, meta)[source]
Build the formatted header section for the combined summary output.
- The header includes:
A top separator line
The provided title
Optional metadata fields (platform, port, suite)
A separator line below the metadata
- Parameters:
title (str) – Title displayed at the top of the summary.
meta (Dict[str, Any]) – Dictionary containing optional metadata fields.
- Returns:
List of formatted header lines.
- Return type:
List[str]
- _build_table(runs, totals, width_name)[source]
Build the formatted summary table section.
- This method generates a fixed-width, human-readable table showing:
Per-run statistics (total, passed, failed, ignored, result)
An aggregated "OVERALL" summary row
The table layout is aligned using the provided column width for the RUN column to ensure consistent formatting.
- Parameters:
runs (List[Dict[str, Any]]) – List of per-run summary dictionaries. Each dictionary must contain the keys name, total, passed, failures, ignored, and status.
totals (Dict[str, Any]) – Dictionary containing aggregated totals and overall status values.
width_name (int) – Width used to left-align the RUN column.
- Returns:
List of formatted table lines.
- Return type:
List[str]
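One possible shape for a single fixed-width table row, assuming the documented columns; the exact column widths and spacing here are illustrative, not taken from the implementation:

```python
def build_table_row(run, width_name: int) -> str:
    # Run name left-aligned to the RUN column width, counts right-aligned.
    return (f"{run['name']:<{width_name}} "
            f"{run['total']:>6} {run['passed']:>7} "
            f"{run['failures']:>7} {run['ignored']:>8}  {run['status']}")

row = build_table_row(
    {"name": "basic", "total": 5, "passed": 5,
     "failures": 0, "ignored": 0, "status": "PASS"},
    width_name=12,
)
print(row)
```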
- _build_footer(totals, _width_name)[source]
Build the footer section of the combined summary output.
The footer displays the overall aggregated result, including total, passed, failed, and ignored counts, along with the computed overall status. It is visually separated from the table using delimiter lines.
- Parameters:
totals (Dict[str, Any]) – Dictionary containing aggregated totals and overall status values. Expected keys include total, passed, failures, ignored, and status.
_width_name (int) – Unused width parameter included for interface consistency with other builder methods.
- Returns:
List of formatted footer lines.
- Return type:
List[str]
- _build_appendix(runs, overall_status)[source]
Build an optional appendix section listing failed or ignored tests.
- The appendix is included only when:
The overall status is not “PASS”, or
At least one run contains failed or ignored test entries.
For each run with issues, the appendix lists the specific failed and/or ignored test names to provide additional detail.
- Parameters:
runs (List[Dict[str, Any]]) – List of per-run summary dictionaries. Each dictionary may contain failed_tests and ignored_tests keys.
overall_status (str) – The computed overall test status (typically "PASS" or "FAIL").
- Returns:
List of formatted appendix lines. Returns an empty list if the appendix should be omitted.
- Return type:
List[str]
- _should_skip_appendix(runs, overall_status)[source]
Determine whether the appendix section should be omitted.
- Parameters:
runs (List[Dict[str, Any]]) – List of per-run summary dictionaries. Each dictionary may contain failed_tests and ignored_tests keys.
overall_status (str) – The computed overall test status.
- Returns:
True if the appendix should be omitted, else False.
- Return type:
bool
- _format_run_details(run)[source]
Format detailed failure/ignore information for a single run.
This method generates a list of lines describing failed and/or ignored tests for the given run. If no such tests exist, an empty list is returned.
- Parameters:
run (Dict[str, Any]) – Dictionary containing per-run summary information. Expected keys include name, failed_tests, and ignored_tests.
- Returns:
List of formatted strings describing failed and/or ignored tests. Returns an empty list if none exist.
- Return type:
List[str]
- format_combined_summary(runs, title='SCP Integration Test Summary', meta=None)[source]
Generate a formatted combined summary for multiple test runs.
This method aggregates per-run results, computes overall totals, and produces a structured, human-readable summary string.
- Parameters:
runs (List[Dict[str, Any]]) – List of per-run summary dictionaries. Each dictionary must contain keys such as name, total, passed, failures, ignored, and status.
title (str) – Title displayed at the top of the summary. Defaults to "SCP Integration Test Summary".
meta (Optional[Dict[str, Any]]) – Optional metadata dictionary containing fields such as platform, port, or suite.
- Returns:
A newline-separated formatted summary string.
- Return type:
str
- _parse_overall_summary(text, test_name)[source]
Parse the overall test summary line from captured output.
- Parameters:
text (str) – Full captured console output containing the summary line.
test_name (str) – The test being parsed. Used for error reporting.
- Returns:
A tuple containing (total_tests, failures, ignored, passed).
- Raises:
AssertionError – If SUMMARY_RE is not defined on the class.
AssertionError – If the expected summary line is not found in the provided text.
- Return type:
tuple[int, int, int, int]
- _parse_test_results(text)[source]
Extract individual test result entries from captured output.
- Parameters:
text (str) – Full captured console output containing per-test results.
- Returns:
A list of dictionaries with keys "name" and "status" representing individual test outcomes.
- Return type:
List[Dict[str, str]]
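The extraction can be reproduced with the class's TEST_RESULT_RE (quoted earlier in this page) and re.finditer; the free-function name below is illustrative:

```python
import re

# TEST_RESULT_RE as documented above: <file>:<line>:<name>:<status>,
# matching Unity-style per-test result lines.
TEST_RESULT_RE = r'(?P<file>.+?):(?P<line>\d+):(?P<name>[^:]+):(?P<status>PASS|FAIL|IGNORE)'

def parse_test_results(text: str):
    # Collect one {"name", "status"} entry per matching result line.
    return [{"name": m.group("name"), "status": m.group("status")}
            for m in re.finditer(TEST_RESULT_RE, text)]

output = "test_timer.c:42:test_tick:PASS\ntest_timer.c:57:test_overflow:FAIL\n"
print(parse_test_results(output))
```

Because `.` does not match newlines by default, finditer naturally keeps each match within a single console line.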
- _validate_results(test_name, ok_present, failures, ignored, failed_tests, ignored_tests)[source]
Validate parsed test results and raise an error if validation fails.
- Parameters:
test_name (str) – Name of the integration test being validated.
ok_present (bool) – Indicates whether the OK marker was detected.
failures (int) – Number of failed tests reported in the summary.
ignored (int) – Number of ignored tests reported in the summary.
failed_tests (list) – List of individual failed test names.
ignored_tests (list) – List of individual ignored test names.
- Raises:
AssertionError – If the OK marker is missing or if any tests failed or were ignored.
- Return type:
None
- _collect_problem_tests(results)[source]
Collect failed and ignored test names from parsed test results.
This method iterates over parsed per-test result entries and separates tests marked as FAIL and IGNORE into two lists.
- Parameters:
results – Parsed per-test result entries, typically returned by _parse_test_results().
- Returns:
A tuple containing (failed_tests, ignored_tests).
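A sketch of the split, operating on entries shaped like the output of _parse_test_results(); the standalone name is illustrative:

```python
def collect_problem_tests(results):
    # Partition parsed entries into failed and ignored name lists;
    # PASS entries are dropped.
    failed = [r["name"] for r in results if r["status"] == "FAIL"]
    ignored = [r["name"] for r in results if r["status"] == "IGNORE"]
    return failed, ignored

failed, ignored = collect_problem_tests([
    {"name": "t1", "status": "PASS"},
    {"name": "t2", "status": "FAIL"},
    {"name": "t3", "status": "IGNORE"},
])
print(failed, ignored)  # → ['t2'] ['t3']
```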
- summarize_results(text, test_name, *, raise_on_fail=True)[source]
Generate a structured summary from raw integration test output.
- Parameters:
text (str) – Full captured console output of the integration test.
test_name (str) – Name of the test being summarized.
raise_on_fail (bool) – Whether to raise an exception if validation fails. Defaults to True.
- Returns:
Dictionary containing structured summary information, including totals, status, and problematic test names.
- Raises:
AssertionError – If validation fails and raise_on_fail is True.
- Return type:
Dict[str, Any]