Reproduce

This section of the User Guide describes how to download, configure, build and execute the solutions in this repository.

This repository uses the kas menu tool to configure and customize the different use cases via a set of options provided in its configuration menu.

Note

You can copy all command examples on this page from the HTML document format by clicking the copy button. In the PDF document format, be aware that special characters are added when lines get wrapped.

Build host environment setup

Please see Build Host Environment for details on how to set up the build host environment.

Download

Note

Performing the builds and FVP execution in a tmux session is mandatory for Arm Automotive Solutions. This is because the runfvp tool that invokes the FVP expects the presence of a tmux session to attach its spawned tmux windows for console access to the processing elements.

See Tmux Documentation for more information on the usage of tmux. Arm recommends changing the default history-limit by adding set-option -g history-limit 3000 to ~/.tmux.conf before starting tmux.
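The recommended history-limit setting can be applied idempotently with a short snippet (a sketch; it only appends the option when ~/.tmux.conf does not already mention history-limit, so re-running it is harmless):

```shell
# Add the recommended history-limit to ~/.tmux.conf unless the option
# is already present (grep -s stays quiet if the file does not exist).
grep -qs "history-limit" ~/.tmux.conf || \
    echo "set-option -g history-limit 3000" >> ~/.tmux.conf
```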

Start a new tmux session, via:

tmux new-session -s arm-auto-solutions

To reconnect to an existing tmux session:

tmux attach -t arm-auto-solutions
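The two tmux commands above can be wrapped in a small helper that attaches to the session when it already exists and creates it otherwise. This is a sketch: the pick_tmux_cmd function is hypothetical and only prints the command to run, so nothing is started automatically:

```shell
# Print the tmux command appropriate for the given session name:
# attach when the session already exists, create it otherwise.
pick_tmux_cmd() {
    if tmux has-session -t "$1" 2>/dev/null; then
        echo "tmux attach -t $1"
    else
        echo "tmux new-session -s $1"
    fi
}

pick_tmux_cmd arm-auto-solutions
```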

Download the Arm Automotive Solutions repository using Git and check out a release, via:

mkdir -p ~/arm-auto-solutions
cd ~/arm-auto-solutions
git clone https://git.gitlab.arm.com/automotive-and-industrial/arm-auto-solutions/sw-ref-stack.git --branch v2.0

Upgrading from a previous version

When upgrading from a previous version of Arm Automotive Solutions, which might use a different version of Yocto, you must run the following command to clean the build directory before rebuilding the use cases:

rm -rf build/cache build/tmp*

Reproducing the use cases

General

Kas build

Arm Automotive Solutions has a kas configuration menu that you can use to build the use cases. It can also apply customizable parameters to build different Reference Software Stack Architecture types.

Note

Ensure that the configuration menu is run inside a tmux session.

To run the configuration menu:

kas menu sw-ref-stack/Kconfig

Note

To build and run any image for an Arm FVP, you have to accept its EULA, which can be done by selecting the corresponding configuration option in the build setup. The Arm Automotive Solutions Demo is built as part of the default deployment.

The selections made in the kas build configuration menu are saved between runs. Ensure that you select only the options mentioned in each use case's reproduce steps and deselect any others.

Arm Auto Solutions Build Configuration Menu

Note

Typically, the build process will complete without any interruptions. If the process is interrupted, for example, manually by pressing Ctrl-C or because of network or resource failures, errors might occur when rerunning the build, such as:

NOTE: Reconnecting to bitbake server...
NOTE: No reply from server in 30s (for command <command> at 10:11:08.527092)

This happens because some processes might still be running in the background. To resolve this, you can manually terminate them using: killall -e Cooker

Also check that no lock files are left over from the previous build. You can locate and remove them with: find . -name "bitbake.lock" -exec rm -f {} \;
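The two recovery steps above can be combined into one snippet (a sketch, intended to be run from the ~/arm-auto-solutions directory; the || true keeps it from failing when no Cooker process is running):

```shell
# Recover from an interrupted bitbake build: terminate any leftover
# Cooker processes, then remove stale bitbake lock files.
killall -e Cooker 2>/dev/null || true
find . -name "bitbake.lock" -exec rm -f {} \;
echo "bitbake cleanup complete"
```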

If the above steps do not resolve the issue, a system reboot might help clear any lingering problems.

FVP

The runfvp tool that invokes the FVP creates one tmux terminal window per processing element. The default window displayed is the Primary Compute terminal, titled terminal_ns_uart0. Press Ctrl-b w to see the list of tmux windows, use the arrow keys to navigate through them, and press Enter to select a processing element terminal.

Run the FVP

To start the FVP and connect to the Primary Compute terminal (running Linux):

kas shell -c "../layers/meta-arm/scripts/runfvp -t tmux --verbose"

The Reference Software Stack running on the Primary Compute can be logged into as the root user without a password in the Linux terminal.

Shutting down the FVP

To shut down the FVP and terminate the emulation automatically, follow these steps:

  • Issue the following command on the Primary Compute terminal.

shutdown now

The following messages show that the shutdown process is complete.

[  OK  ] Finished System Power Off.
[  OK  ] Reached target System Power Off.
reboot: Power down
  • Close the tmux pane started for the build host machine by pressing Ctrl-d.

Note

FVPs, and Fast Models in general, are functionally accurate, meaning that they fully execute all instructions correctly; however, they are not cycle accurate. The main goal of the Reference Software Stack is to prove functionality only, and it must not be used for performance analysis.

Arm Automotive Solutions demo build

The Arm Automotive Solutions Demo build supports the following use cases:

In general, it is not necessary to rebuild the Arm Automotive Solutions Demo for each use case, and it is not required to shut down and relaunch the FVP for every use case.

See the following instructions for building images for both the Baremetal and Virtualization Architectures.

Build baremetal architecture

To run the configuration menu:

kas menu sw-ref-stack/Kconfig

To build a Baremetal Architecture image:

  1. Select Use Case > Arm Automotive Solutions Demo.

  2. Select Reference Software Stack Architecture > Baremetal.

  3. Select Build.

Build virtualization architecture

To run the configuration menu:

kas menu sw-ref-stack/Kconfig

To build a Virtualization Architecture image:

  1. Select Use Case > Arm Automotive Solutions Demo.

  2. Select Reference Software Stack Architecture > Virtualization.

  3. Select Build.

Critical application monitoring demo

The demo can be run on the Baremetal or Virtualization Architecture. See Critical application monitoring demo for further details.

Baremetal architecture

Build

Note

If the Arm Automotive Solutions Demo for the Baremetal Architecture is the most recent build, there is no need to rebuild it. For a first-time build, follow the instructions below.

To configure and build a Baremetal Architecture image see Build baremetal architecture.

Run the FVP

Note

If the FVP has already been launched with the specified build configuration and is connected to the Primary Compute terminal (running Linux), there is no need to stop and relaunch. For first-time use, follow the instructions below.

To start the FVP and connect to the Primary Compute terminal:

kas shell -c "../layers/meta-arm/scripts/runfvp -t tmux --verbose"

Wait for the system to boot and for the Linux prompt to appear.

The Safety Island (SI) Cluster 1 terminal running cam-service is available via the tmux window titled terminal_uart_si_cluster1. For ease of navigation, Arm recommends joining the SI Cluster 1 terminal window to the Primary Compute terminal window in order to issue commands on it.

To do so, follow these steps:

  1. Ensure that the tmux window titled terminal_ns_uart0 is selected. If not, press Ctrl-b w from the tmux session, navigate to the tmux window titled terminal_ns_uart0 using the arrow keys, then press the Enter key.

  2. Press Ctrl-b : then type join-pane -s :terminal_uart_si_cluster1 -h and press the Enter key to join the SI Cluster 1 terminal window to the Primary Compute terminal window.

See the following image of the tmux panes rearrangement. Panes can be navigated using Ctrl-b followed by the arrow keys.

Critical Application Monitoring Demo Reproduce Startup

The Reference Software Stack running on the Primary Compute can be logged into as the root user without a password in the Linux terminal. Run the following command to ensure that all the expected services have been initialized:

systemctl is-system-running --wait

Wait for it to return. The expected terminal output is running.

Run the demo

Before running cam-app-example, .csd files corresponding to event streams produced by cam-app-example must be created and deployed to the system where cam-service runs (in this case SI Cluster 1). Run cam-app-example in calibration mode and then use cam-tool to generate the .csd files.

  1. Start cam-app-example in calibration mode from the Primary Compute terminal:

    cam-app-example -u 11085ddc-bc10-11ed-9a44-7ef9696e -t 3000 -c 10 -s 4 -C
    

    The stream event log files (.csel) for each stream are generated. The expected output is as follows:

    Cam application configuration:
        Service IP address: 127.0.0.1
        Service port: 21604
        UUID base: 11085ddc-bc10-11ed-9a44-7ef9696e
        Stream count: 4
        Processing period (ms): 3000
        Processing count: 10
        Multiple connection support: false
        Calibration mode support: true
        Calibration directory: ./[uuid].csel
        Fault injection support: false
        Event(s) interval time (ms): 0
    Using libcam v1.0
    Starting activity...
    Starting activity...
    Starting activity...
    Starting activity...
        Stream 0 sends event 0
        Stream 1 sends event 0
        Stream 2 sends event 0
        Stream 3 sends event 0
        ...
    

    List the files generated:

    ls -1 *.csel
    

    The generated stream event log files are as follows:

    11085ddc-bc10-11ed-9a44-7ef9696e0000.csel
    11085ddc-bc10-11ed-9a44-7ef9696e0001.csel
    11085ddc-bc10-11ed-9a44-7ef9696e0002.csel
    11085ddc-bc10-11ed-9a44-7ef9696e0003.csel
    
  2. Run cam-tool from the Primary Compute terminal to analyze stream event log files and convert them to stream configuration files (.csc.yml):

    cam-tool analyze -m 1000000 -i 11085ddc-bc10-11ed-9a44-7ef9696e0000.csel
    

    The analysis result is reported on the Primary Compute terminal as follows; the timeout values might differ:

    CAM event log analyze report:
    Input event log file:                   11085ddc-bc10-11ed-9a44-7ef9696e0000.csel
    Output configuration file:              analyzed.csc.yml
    Stream UUID:                            11085ddc-bc10-11ed-9a44-7ef9696e0000
    Stream name:                            CAM STREAM  0
    Timeout between init and start:         300000
    Timeout between start and event:        450000
    Application running times:              1
    Processing count in each run:           [10]
    
    Event ID        timeout
    0               4000106
    

    The stream configuration files contain human-readable settings used for the deployment phase of a critical application. Users can modify this configuration, for example, to fine-tune timeout values depending on the system capabilities.

    Run cam-tool three more times for each of the other three streams:

    cam-tool analyze -m 1000000 -i 11085ddc-bc10-11ed-9a44-7ef9696e0001.csel
    cam-tool analyze -m 1000000 -i 11085ddc-bc10-11ed-9a44-7ef9696e0002.csel
    cam-tool analyze -m 1000000 -i 11085ddc-bc10-11ed-9a44-7ef9696e0003.csel
    

    Then, use the cam-tool pack command for each of the streams to generate deployment data:

    cam-tool pack -i 11085ddc-bc10-11ed-9a44-7ef9696e0000.csc.yml
    cam-tool pack -i 11085ddc-bc10-11ed-9a44-7ef9696e0001.csc.yml
    cam-tool pack -i 11085ddc-bc10-11ed-9a44-7ef9696e0002.csc.yml
    cam-tool pack -i 11085ddc-bc10-11ed-9a44-7ef9696e0003.csc.yml
    
  3. Run the cam-tool deploy command from the Primary Compute terminal to transfer the generated stream deployment data to SI Cluster 1 (where cam-service is running):

    cam-tool deploy -i 11085ddc-bc10-11ed-9a44-7ef9696e0000.csd -a 192.168.1.1
    

    The expected output on the SI Cluster 1 terminal is as follows; the connection number might differ:

    Connection 4 is created.
    Deploy Message
    
    Connection 4 is closed.
    

    After that, the stream data of 11085ddc-bc10-11ed-9a44-7ef9696e0000 is deployed to the cam-service file system.

    Run cam-tool deploy three more times to deploy the data of the other three streams to cam-service:

    cam-tool deploy -i 11085ddc-bc10-11ed-9a44-7ef9696e0001.csd -a 192.168.1.1
    cam-tool deploy -i 11085ddc-bc10-11ed-9a44-7ef9696e0002.csd -a 192.168.1.1
    cam-tool deploy -i 11085ddc-bc10-11ed-9a44-7ef9696e0003.csd -a 192.168.1.1
    

    The expected output on the SI Cluster 1 terminal after each cam-tool deploy command is as follows; the connection number might differ:

    Connection 4 is created.
    Deploy Message
    
    Connection 4 is closed.
    

    List all the files from the SI Cluster 1 terminal:

    fs ls RAM:/
    

    The stream deployment data is listed as follows:

    11085ddc-bc10-11ed-9a44-7ef9696e0000.csd
    11085ddc-bc10-11ed-9a44-7ef9696e0001.csd
    11085ddc-bc10-11ed-9a44-7ef9696e0002.csd
    11085ddc-bc10-11ed-9a44-7ef9696e0003.csd
    
  4. Start cam-app-example from the Primary Compute terminal to create an application with four streams. Each stream sends an event message 10 times with a period of 3000 milliseconds:

    cam-app-example -u 11085ddc-bc10-11ed-9a44-7ef9696e -t 3000 -c 10 -s 4 -a 192.168.1.1
    

    The following configuration messages are expected on the Primary Compute terminal:

    Cam application configuration:
        Service IP address: 192.168.1.1
        Service port: 21604
        UUID base: 11085ddc-bc10-11ed-9a44-7ef9696e
        Stream count: 4
        Processing period (ms): 3000
        Processing count: 10
        Multiple connection support: false
        Calibration mode support: false
        Fault injection support: false
        Event(s) interval time (ms): 0
    Using libcam v1.0
    Starting activity...
    Starting activity...
    Starting activity...
    Starting activity...
    

    The log of sent event messages is then shown repeatedly:

    Stream 0 sends event 0
    Stream 1 sends event 0
    Stream 2 sends event 0
    Stream 3 sends event 0
    Stream 0 sends event 0
    Stream 1 sends event 0
    Stream 2 sends event 0
    Stream 3 sends event 0
    ...
    

    As observed on the SI Cluster 1 terminal, cam-service loads the four stream deployment files for monitoring. The following log shows the stream messages being received and processed:

    Connection 4 is created.
    Init Message
    Stream 11085ddc-bc10-11ed-9a44-7ef9696e0001 configuration is loaded.
    Init Message
    Stream 11085ddc-bc10-11ed-9a44-7ef9696e0000 configuration is loaded.
    Init Message
    Stream 11085ddc-bc10-11ed-9a44-7ef9696e0002 configuration is loaded.
    Init Message
    Stream 11085ddc-bc10-11ed-9a44-7ef9696e0003 configuration is loaded.
    Start Message
    Start Message
    Start Message
    Start Message
    Event Message
    Event Message
    Event Message
    Event Message
    Event Message
    # Repeated event messages
    ...
    
  5. cam-app-example has a mode to inject a fault to test the CAM framework. Run cam-app-example again from the Primary Compute terminal with fault injection on event stream 0:

    cam-app-example -u 11085ddc-bc10-11ed-9a44-7ef9696e -t 3000 -c 10 -s 4 -f -S 0 -T 1000 -a 192.168.1.1
    

    The following configuration messages are expected on the Primary Compute terminal:

    Cam application configuration:
        Service IP address: 192.168.1.1
        Service port: 21604
        UUID base: 11085ddc-bc10-11ed-9a44-7ef9696e
        Stream count: 4
        Processing period (ms): 3000
        Processing count: 10
        Multiple connection support: false
        Calibration mode support: false
        Fault injection support: true
        Fault injection time: 1000
        Fault injection stream: 0
        Event(s) interval time (ms): 0
    Using libcam v1.0
    Starting activity...
    Starting activity...
    Starting activity...
    Starting activity...
    

    The log of sent event messages is then shown repeatedly:

    Stream 0 sends event 0
    Stream 1 sends event 0
    Stream 2 sends event 0
    Stream 3 sends event 0
    Stream 0 sends event 0
    Stream 1 sends event 0
    Stream 2 sends event 0
    Stream 3 sends event 0
    ...
    

    The fault occurs 1000 ms after stream initialization. At that point, cam-service detects a stream temporal error, with the following output on the SI Cluster 1 terminal:

    # Repeated event messages
    ...
    ERROR: Stream temporal error:
    ERROR:     stream_name: CAM STREAM  0
    ERROR:     stream_uuid: 11085ddc-bc10-11ed-9a44-7ef9696e0000
    ERROR:     event_id: 0
    ERROR:     time_received: 0
    ERROR:     time_expected: 1710328901375511
    # Repeated event messages
    ...
    ERROR: Stream state error:
    ERROR:     stream_name: CAM STREAM  0
    ERROR:     stream_uuid: 11085ddc-bc10-11ed-9a44-7ef9696e0000
    ERROR:     timestamp: 1710328927375278
    ERROR:     current_state: Failed state
    ERROR:     requested_state: In-progress state
    

    Note

    Ignore time_received: 0, as time_received is not set during a fault.

    Note

    There is no need to shut down and relaunch the FVP before demonstrating another use case on the Arm Automotive Solutions Demo build (Baremetal Architecture).

  6. To shut down the FVP and terminate the emulation automatically, issue the following command on the Primary Compute terminal:

    shutdown now
    

    The following messages show that the shutdown process is complete:

    [  OK  ] Finished System Power Off.
    [  OK  ] Reached target System Power Off.
    reboot: Power down
    

See Automated validation for more details on how to trigger the automated validation.
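For reference, the per-stream cam-tool sequence used in the steps above can be generated with a short loop. This is a dry-run sketch that only prints the commands rather than executing them; UUID_BASE and SERVICE_ADDR hold the values used in this use case:

```shell
# Print the analyze/pack/deploy command sequence for the four streams.
UUID_BASE=11085ddc-bc10-11ed-9a44-7ef9696e
SERVICE_ADDR=192.168.1.1
for i in 0000 0001 0002 0003; do
    echo "cam-tool analyze -m 1000000 -i ${UUID_BASE}${i}.csel"
    echo "cam-tool pack -i ${UUID_BASE}${i}.csc.yml"
    echo "cam-tool deploy -i ${UUID_BASE}${i}.csd -a ${SERVICE_ADDR}"
done
```

Each printed line corresponds to a command shown in the steps above; run them on the Primary Compute terminal as described there.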

Virtualization architecture

Build

Note

If the Arm Automotive Solutions Demo for the Virtualization Architecture is the most recent build, there is no need to rebuild. For a first-time build, follow the instructions below.

To configure and build a Virtualization Architecture image see Build virtualization architecture.

Run the FVP

Note

If the FVP has already been launched with the specified build configuration and is connected to the Primary Compute terminal (running Linux), there is no need to stop and relaunch. For first-time launch, follow the instructions below.

To start the FVP and connect to the Primary Compute terminal:

kas shell -c "../layers/meta-arm/scripts/runfvp -t tmux --verbose"

Wait for the system to boot and for the Linux prompt to appear.

The Safety Island (SI) Cluster 1 terminal running cam-service is available via the tmux window titled terminal_uart_si_cluster1. For ease of navigation, Arm recommends joining the SI Cluster 1 terminal window to the Primary Compute terminal window in order to issue commands on it.

To do so, follow these steps:

  1. Ensure that the tmux window titled terminal_ns_uart0 is selected. If not, press Ctrl-b w from the tmux session, navigate to the tmux window titled terminal_ns_uart0 using the arrow keys, then press the Enter key.

  2. Press Ctrl-b : then type join-pane -s :terminal_uart_si_cluster1 -h and press the Enter key to join the SI Cluster 1 terminal window to the Primary Compute terminal window.

Both DomU1 and DomU2 are used to run cam-app-example, so Arm also recommends creating a tmux pane to connect to DomU2.

  1. Press Ctrl-b and the arrow keys to navigate to the terminal_ns_uart0 pane.

  2. Press Ctrl-b " to split the pane horizontally. The bottom pane is used to connect to DomU2.

See the following image of the tmux panes rearrangement. Panes can be navigated using Ctrl-b followed by the arrow keys.

Critical Application Monitoring Demo Reproduce Startup for Virtualization

The Reference Software Stack running on the Primary Compute can be logged into as the root user without a password in the Linux terminal.

Run the following command to ensure that all the expected services have been initialized:

systemctl is-system-running --wait

Wait for it to return. The expected terminal output is running.

Note

A message similar to the following might appear in the Primary Compute terminal: (XEN) d2v0: vGICR: SGI: unhandled word write 0x000000ffffffff to ICACTIVER0. This is expected behavior.

Run the demo

Before running cam-app-example, .csd files corresponding to event streams produced by cam-app-example must be created and deployed to the system where cam-service runs (in this case SI Cluster 1). Run cam-app-example in calibration mode and then use cam-tool to generate the .csd files.

  1. From the Primary Compute terminal, enter the DomU1 console using the xl tool:

    xl console domu1
    

    DomU1 can be logged into as the root user without a password in the Linux terminal. This command provides a console on DomU1. To exit, press Ctrl-] (to access the FVP telnet shell), then type send esc into the telnet shell and press Enter. See the xl documentation for further details.

  2. To improve the readability of commands and output on the DomU1 console, run the command below:

    stty rows 76 cols 282
    
  3. From the host terminal, SSH to the FVP, then enter the DomU2 console using the xl tool:

    ssh [email protected] -p 2222
    xl console domu2
    

    DomU2 can be logged into as the root user without a password in the Linux terminal. This command provides a console on DomU2. To exit, press Ctrl-] (to access the FVP telnet shell), then type send esc into the telnet shell and press Enter. See the xl documentation for further details.

  4. To improve the readability of commands and output on the DomU2 console, run the command below:

    stty rows 76 cols 282
    
  5. From the DomU1 terminal, check that the clock is synchronized using the timedatectl command. One line of its output must read System clock synchronized: yes to confirm that the clock is synchronized:

    timedatectl
    

    The expected output is as follows:

                   Local time: Thu 2024-03-14 12:56:26 UTC
               Universal time: Thu 2024-03-14 12:56:26 UTC
                     RTC time: n/a
                    Time zone: UTC (UTC, +0000)
    System clock synchronized: yes
                  NTP service: n/a
              RTC in local TZ: no
    

    The date and time can differ. If System clock synchronized: shows no, wait one minute for the clock to synchronize, then repeat step 5 until the value changes to yes.

  6. Start cam-app-example in calibration mode from the DomU1 terminal:

    cam-app-example -u 11085ddc-bc10-11ed-9a44-7ef9696e -t 3000 -c 10 -s 4 -C
    

    The stream event log files (.csel) for each stream are generated. The expected output is as follows:

    Cam application configuration:
        Service IP address: 127.0.0.1
        Service port: 21604
        UUID base: 11085ddc-bc10-11ed-9a44-7ef9696e
        Stream count: 4
        Processing period (ms): 3000
        Processing count: 10
        Multiple connection support: false
        Calibration mode support: true
        Calibration directory: ./[uuid].csel
        Fault injection support: false
        Event(s) interval time (ms): 0
    Using libcam v1.0
    Starting activity...
    Starting activity...
    Starting activity...
    Starting activity...
        Stream 0 sends event 0
        Stream 1 sends event 0
        Stream 2 sends event 0
        Stream 3 sends event 0
        ...
    

    List the files generated:

    ls -1 *.csel
    

    The generated stream event log files are as follows:

    11085ddc-bc10-11ed-9a44-7ef9696e0000.csel
    11085ddc-bc10-11ed-9a44-7ef9696e0001.csel
    11085ddc-bc10-11ed-9a44-7ef9696e0002.csel
    11085ddc-bc10-11ed-9a44-7ef9696e0003.csel
    
  7. Run cam-tool from the DomU1 terminal to analyze stream event log files and convert them to stream configuration files (.csc.yml):

    cam-tool analyze -m 1000000 -i 11085ddc-bc10-11ed-9a44-7ef9696e0000.csel
    

    The analysis result is reported on the DomU1 terminal as follows; the timeout values might differ:

    CAM event log analyze report:
    Input event log file:                   11085ddc-bc10-11ed-9a44-7ef9696e0000.csel
    Output configuration file:              analyzed.csc.yml
    Stream UUID:                            11085ddc-bc10-11ed-9a44-7ef9696e0000
    Stream name:                            CAM STREAM  0
    Timeout between init and start:         300000
    Timeout between start and event:        450000
    Application running times:              1
    Processing count in each run:           [10]
    
    Event ID        timeout
    0               4000072
    

    The stream configuration files contain human-readable settings used for the deployment phase of a critical application. Users can modify this configuration, for example, to fine-tune timeout values depending on the system capabilities.

    Run cam-tool three more times for each of the other three streams:

    cam-tool analyze -m 1000000 -i 11085ddc-bc10-11ed-9a44-7ef9696e0001.csel
    cam-tool analyze -m 1000000 -i 11085ddc-bc10-11ed-9a44-7ef9696e0002.csel
    cam-tool analyze -m 1000000 -i 11085ddc-bc10-11ed-9a44-7ef9696e0003.csel
    

    Then, use the cam-tool pack command for each of the streams to generate deployment data:

    cam-tool pack -i 11085ddc-bc10-11ed-9a44-7ef9696e0000.csc.yml
    cam-tool pack -i 11085ddc-bc10-11ed-9a44-7ef9696e0001.csc.yml
    cam-tool pack -i 11085ddc-bc10-11ed-9a44-7ef9696e0002.csc.yml
    cam-tool pack -i 11085ddc-bc10-11ed-9a44-7ef9696e0003.csc.yml
    
  8. Run the cam-tool deploy command from the DomU1 terminal to transfer the generated stream deployment data to SI Cluster 1 (where cam-service is running):

    cam-tool deploy -i 11085ddc-bc10-11ed-9a44-7ef9696e0000.csd -a 192.168.1.1
    

    The expected output on the SI Cluster 1 terminal is as follows; the connection number might differ:

    Connection 4 is created.
    Deploy Message
    
    Connection 4 is closed.
    

    After that, the stream data of 11085ddc-bc10-11ed-9a44-7ef9696e0000 is deployed to the cam-service file system.

    Run cam-tool deploy three more times to deploy the data of the other three streams to cam-service:

    cam-tool deploy -i 11085ddc-bc10-11ed-9a44-7ef9696e0001.csd -a 192.168.1.1
    cam-tool deploy -i 11085ddc-bc10-11ed-9a44-7ef9696e0002.csd -a 192.168.1.1
    cam-tool deploy -i 11085ddc-bc10-11ed-9a44-7ef9696e0003.csd -a 192.168.1.1
    

    The expected output on the SI Cluster 1 terminal after each cam-tool deploy command is as follows; the connection number might differ:

    Connection 4 is created.
    Deploy Message
    
    Connection 4 is closed.
    
  9. From the DomU2 terminal, check that the clock is synchronized using the timedatectl command. One line of its output must read System clock synchronized: yes to confirm that the clock is synchronized:

    timedatectl
    

    The expected output is as follows:

                   Local time: Thu 2024-03-14 12:56:26 UTC
               Universal time: Thu 2024-03-14 12:56:26 UTC
                     RTC time: n/a
                    Time zone: UTC (UTC, +0000)
    System clock synchronized: yes
                  NTP service: n/a
              RTC in local TZ: no
    

    The date and time can differ. If System clock synchronized: shows no, wait one minute for the clock to synchronize, then repeat step 9 until the value changes to yes.

  10. Start cam-app-example in calibration mode from the DomU2 terminal:

    cam-app-example -u 22085ddc-bc10-11ed-9a44-7ef9696e -t 2000 -c 5 -s 2 -C
    

    The stream event log files (.csel) for each stream are generated. The expected output is as follows:

    Cam application configuration:
        Service IP address: 127.0.0.1
        Service port: 21604
        UUID base: 22085ddc-bc10-11ed-9a44-7ef9696e
        Stream count: 2
        Processing period (ms): 2000
        Processing count: 5
        Multiple connection support: false
        Calibration mode support: true
        Calibration directory: ./[uuid].csel
        Fault injection support: false
        Event(s) interval time (ms): 0
    Using libcam v1.0
    Starting activity...
    Starting activity...
        Stream 0 sends event 0
        Stream 1 sends event 0
        ...
    

    List the files generated:

    ls -1 *.csel
    

    The generated stream event log files are as follows:

    22085ddc-bc10-11ed-9a44-7ef9696e0000.csel
    22085ddc-bc10-11ed-9a44-7ef9696e0001.csel
    
  11. Run cam-tool from the DomU2 terminal to analyze stream event log files and convert them to stream configuration files (.csc.yml):

    cam-tool analyze -m 1000000 -i 22085ddc-bc10-11ed-9a44-7ef9696e0000.csel
    

    The analysis result is reported on the DomU2 terminal as follows; the timeout values might differ:

    CAM event log analyze report:
    Input event log file:                   22085ddc-bc10-11ed-9a44-7ef9696e0000.csel
    Output configuration file:              analyzed.csc.yml
    Stream UUID:                            22085ddc-bc10-11ed-9a44-7ef9696e0000
    Stream name:                            CAM STREAM  0
    Timeout between init and start:         300000
    Timeout between start and event:        450000
    Application running times:              1
    Processing count in each run:           [5]
    
    Event ID        timeout
    0               3000001
    

    The stream configuration files contain human-readable settings used for the deployment phase of a critical application. Users can modify this configuration, for example, to fine-tune timeout values depending on the system capabilities.

    Run cam-tool for the other stream:

    cam-tool analyze -m 1000000 -i 22085ddc-bc10-11ed-9a44-7ef9696e0001.csel
    

    Then, use the cam-tool pack command for each of the streams to generate deployment data:

    cam-tool pack -i 22085ddc-bc10-11ed-9a44-7ef9696e0000.csc.yml
    cam-tool pack -i 22085ddc-bc10-11ed-9a44-7ef9696e0001.csc.yml
    
  12. Run the cam-tool deploy command from the DomU2 terminal to transfer the generated stream deployment data to SI Cluster 1 (where cam-service is running):

    cam-tool deploy -i 22085ddc-bc10-11ed-9a44-7ef9696e0000.csd -a 192.168.1.1
    

    The expected output on the SI Cluster 1 terminal is as follows; the connection number might differ:

    Connection 4 is created.
    Deploy Message
    
    Connection 4 is closed.
    

    After that, the stream data of 22085ddc-bc10-11ed-9a44-7ef9696e0000 is deployed to the cam-service file system.

    Run cam-tool deploy once more to deploy the data of the other stream to cam-service:

    cam-tool deploy -i 22085ddc-bc10-11ed-9a44-7ef9696e0001.csd -a 192.168.1.1
    

    The expected output on the SI Cluster 1 terminal after each cam-tool deploy command is as follows; the connection number might differ:

    Connection 4 is created.
    Deploy Message
    
    Connection 4 is closed.
    
  13. List all the files from the SI Cluster 1 terminal:

    fs ls RAM:/
    

    The stream deployment data is listed as follows:

    11085ddc-bc10-11ed-9a44-7ef9696e0000.csd
    11085ddc-bc10-11ed-9a44-7ef9696e0001.csd
    11085ddc-bc10-11ed-9a44-7ef9696e0002.csd
    11085ddc-bc10-11ed-9a44-7ef9696e0003.csd
    22085ddc-bc10-11ed-9a44-7ef9696e0000.csd
    22085ddc-bc10-11ed-9a44-7ef9696e0001.csd
    
  14. Start cam-app-example from the DomU1 terminal to create an application with four streams. Each stream sends an event message 10 times with a period of 3000 milliseconds:

    cam-app-example -u 11085ddc-bc10-11ed-9a44-7ef9696e -t 3000 -c 10 -s 4 -a 192.168.1.1
    

    The following configuration messages are expected on the Primary Compute terminal:

    Cam application configuration:
        Service IP address: 192.168.1.1
        Service port: 21604
        UUID base: 11085ddc-bc10-11ed-9a44-7ef9696e
        Stream count: 4
        Processing period (ms): 3000
        Processing count: 10
        Multiple connection support: false
        Calibration mode support: false
        Fault injection support: false
        Event(s) interval time (ms): 0
    Using libcam v1.0
    Starting activity...
    Starting activity...
    Starting activity...
    Starting activity...
    

    The log of sent event messages is shown repeatedly:

    Stream 0 sends event 0
    Stream 1 sends event 0
    Stream 2 sends event 0
    Stream 3 sends event 0
    Stream 0 sends event 0
    Stream 1 sends event 0
    Stream 2 sends event 0
    Stream 3 sends event 0
    ...
    

    While cam-app-example is running on DomU1, start another instance on DomU2. cam-app-example has a mode to inject a fault to test the CAM framework. Run cam-app-example from the DomU2 terminal with fault injection on event stream 0:

    cam-app-example -u 22085ddc-bc10-11ed-9a44-7ef9696e -t 2000 -c 5 -s 2 -f -S 0 -T 1000 -a 192.168.1.1
    

The following configuration messages are expected from the Primary Compute terminal:

Cam application configuration:
    Service IP address: 192.168.1.1
    Service port: 21604
    UUID base: 22085ddc-bc10-11ed-9a44-7ef9696e
    Stream count: 2
    Processing period (ms): 2000
    Processing count: 5
    Multiple connection support: false
    Calibration mode support: false
    Fault injection support: true
    Fault injection time: 1000
    Fault injection stream: 0
    Event(s) interval time (ms): 0
Using libcam v1.0
Starting activity...
Starting activity...

The log of sent event messages is shown repeatedly:

Stream 0 sends event 0
Stream 1 sends event 0
Stream 1 sends event 0
Stream 1 sends event 0
...
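
Stream 0 stops sending after its first event because the fault is injected 1000 ms after initialization, before its second event is due at 2000 ms. A simplified sketch of which events each stream sends under these flags (an illustration of the parameters above, not the cam-app-example source):

```python
def events_sent(period_ms, count, stream, fault_stream=None, fault_time_ms=None):
    # Events are sent at t = 0, period, 2*period, ...; a stream with an
    # injected fault stops once the fault time has been reached.
    times = []
    for i in range(count):
        t = i * period_ms
        if stream == fault_stream and fault_time_ms is not None and t >= fault_time_ms:
            break
        times.append(t)
    return times

# With -t 2000 -c 5 -f -S 0 -T 1000: stream 0 sends one event, stream 1 sends five.
print(events_sent(2000, 5, stream=0, fault_stream=0, fault_time_ms=1000))  # [0]
print(events_sent(2000, 5, stream=1, fault_stream=0, fault_time_ms=1000))
```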

As observed from the SI Cluster 1 terminal, cam-service loads four stream deployment files from DomU1 and two from DomU2 for monitoring. The following log shows the stream messages being received and processed:

Connection 4 is created.
Init Message
Stream 11085ddc-bc10-11ed-9a44-7ef9696e0002 configuration is loaded.
Init Message
Stream 11085ddc-bc10-11ed-9a44-7ef9696e0001 configuration is loaded.
Init Message
Stream 11085ddc-bc10-11ed-9a44-7ef9696e0003 configuration is loaded.
Init Message
Stream 11085ddc-bc10-11ed-9a44-7ef9696e0000 configuration is loaded.
Start Message
Start Message
Start Message
Start Message
Event Message
Event Message
Event Message
Event Message

Connection 5 is created.
Init Message
Stream 22085ddc-bc10-11ed-9a44-7ef9696e0001 configuration is loaded.
Init Message
Stream 22085ddc-bc10-11ed-9a44-7ef9696e0000 configuration is loaded.
Start Message
Start Message
Event Message
Event Message
Event Message
# Repeated event messages
...

The fault happens 1000 ms after stream initialization. At that time, cam-service detects a stream temporal error with the following output from the SI Cluster 1 terminal:

# Repeated event messages
...
ERROR: Stream temporal error:
ERROR:     stream_name: CAM STREAM  0
ERROR:     stream_uuid: 22085ddc-bc10-11ed-9a44-7ef9696e0000
ERROR:     event_id: 0
ERROR:     time_received: 0
ERROR:     time_expected: 1710275907816057
# Repeated event messages
...
ERROR: Stream state error:
ERROR:     stream_name: CAM STREAM  0
ERROR:     stream_uuid: 22085ddc-bc10-11ed-9a44-7ef9696e0000
ERROR:     timestamp: 1710275909816069
ERROR:     current_state: Failed state
ERROR:     requested_state: In-progress state

Note

Ignore time_received: 0, as time_received is not set during a fault.

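Conceptually, cam-service flags a temporal error when an expected event either never arrives (time_received stays 0 during a fault) or arrives after its deadline. A minimal sketch of such a check (illustrative only; not the cam-service implementation):

```python
def event_on_time(time_received, time_expected):
    # True only if the event actually arrived (non-zero timestamp)
    # and did so no later than the expected time.
    return time_received != 0 and time_received <= time_expected

# The injected fault above: nothing was received, so the check fails.
print(event_on_time(0, 1710275907816057))  # False
```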
  15. To leave the DomU1 console, type Ctrl-] and enter send esc.

  16. To leave the DomU2 console, type Ctrl-].

Note

There is no need to shut down and relaunch the FVP before demonstrating another use case on the Arm Automotive Solutions Demo build (Virtualization Architecture).

  17. To shut down the FVP and terminate the emulation automatically, issue the following command on the Primary Compute terminal:

shutdown now

The below messages show that the shutdown process is complete:

[  OK  ] Finished System Power Off.
[  OK  ] Reached target System Power Off.
reboot: Power down

See Automated validation for more details on how to trigger the automated validation.

Safety Island actuation demo

The demo can be run on the Baremetal or Virtualization Architecture. See Safety Island Actuation Demo for further details.

Baremetal architecture

Build

Note

If the Arm Automotive Solutions Demo for the Baremetal Architecture is the most recent build, there is no need to rebuild. For a first-time build, follow the instructions below.

To configure and build a Baremetal Architecture image see Build baremetal architecture.

Run the FVP

Note

If the FVP has already been launched with the specified build configuration and is connected to the Primary Compute terminal (running Linux), there is no need to stop and relaunch. For first-time launch, follow the instructions below.

To start the FVP and connect to the Primary Compute terminal:

kas shell -c "../layers/meta-arm/scripts/runfvp -t tmux --verbose"

Wait for the system to boot and for the Linux prompt to appear.

The following image shows an example of how the terminal looks after the FVP invocation.

Arm Auto Solutions Linux Login Prompt

The Safety Island (SI) Cluster 2 terminal running the Actuation Service is available via the tmux window titled terminal_uart_si_cluster2. For ease of navigation, Arm recommends joining the SI Cluster 2 terminal window to the Primary Compute terminal window and creating a tmux pane attached to the build host machine to issue commands on it.

Follow the steps below:

  1. Ensure that the tmux window titled terminal_ns_uart0 is selected. If not, press Ctrl-b w from the tmux session, navigate to the tmux window titled terminal_ns_uart0 using the arrow keys, then press the Enter key.

  2. Press Ctrl-b % to add a new tmux pane which is used to issue commands on the build host machine.

  3. Press Ctrl-b : and then type join-pane -s :terminal_uart_si_cluster2, then press the Enter key to join the Actuation Service terminal window to the Primary Compute terminal window.

See the following image of the tmux panes rearrangement. Panes can be navigated using Ctrl-b followed by the arrow keys.

Arm Auto Solutions FVP Rearrange Windows

The Reference Software Stack running on the Primary Compute can be logged into as root user without a password in the Linux terminal.

Run the below command to ensure that all the expected services have been initialized:

systemctl is-system-running --wait

Wait for it to return. The expected terminal output is running.

Run the demo
  1. Run the ping command from the Primary Compute terminal (running Linux) to verify that it can communicate with the Safety Island Cluster 2 (running Zephyr):

    ping 192.168.2.1 -c 10
    

    The expected output will look like the following line, repeated 10 times:

    64 bytes from 192.168.2.1 seq=0 ttl=64 time=0.151 ms
    
  2. From the tmux pane started for the build host machine terminal, start the Packet Analyzer:

    cd ~/arm-auto-solutions/
    kas shell -c "oe-run-native packet-analyzer-native start_analyzer -L debug -a localhost -c ./data"
    

    The following messages are expected from the host terminal:

    INFO : analyzer_client.py/_connect_to: Starting analyze, use Ctrl-C to stop the process.
    INFO : analyzer_client.py/_connect_to: Attempting a connect to (localhost : 49152)
    INFO : analyzer_client.py/_connect_to: Successfully connected to (localhost : 49152)
    

    A message similar to the following will appear on the SI Cluster 2 terminal:

    Actuation Service initialized.
    Accepted tcp connection from the Packet Analyzer: <11>
    

    See the following image for an invocation example of the Packet Analyzer.

    Arm Auto Solutions Packet Analyzer - Baremetal Architecture

  3. On the Primary Compute terminal, start the Player, which replays a recording of a driving scenario:

    actuation_player -p /usr/share/actuation_player/
    

    A message similar to the following will appear on the Primary Compute terminal:

    Waiting for readers...     │1146571223376: -0.0000 (m/s^2) | 0.0000 (rad)
    ...Done. Starting replay.  │Thread get_analyzer_handle performing a blocking accept
    

    A message similar to the following will appear on the SI Cluster 2 terminal:

    51572682601: -0.0000 (m/s^2) |  0.0000 (rad)
    51597466928: -0.0000 (m/s^2) |  0.0000 (rad)
    51622532911: -0.0000 (m/s^2) |  0.0000 (rad)
    51647642316: -0.0000 (m/s^2) |  0.0000 (rad)
    51672535849: -0.0000 (m/s^2) |  0.0000 (rad)
    51697376579: -0.0000 (m/s^2) |  0.0000 (rad)
    51722500414: -0.0000 (m/s^2) |  0.0000 (rad)
    51747622543: -0.0000 (m/s^2) |  0.0000 (rad)
    51772496466: -0.0000 (m/s^2) |  0.0000 (rad)
    Thread get_analyzer_handle performing a blocking accept
    

    A message similar to the following will appear on the host terminal where the Packet Analyzer is running:

    INFO : analyzer_client.py/_connect_to: Starting analyze, use Ctrl-C to stop the process.
    INFO : analyzer_client.py/_connect_to: Attempting a connect to (localhost : 49152)
    INFO : analyzer_client.py/_connect_to: Successfully connected to (localhost : 49152)
    INFO : analyzer_client.py/run_analyze_on_chain: (1) Analyzer synced with packet chain
    

    When the Packet Analyzer has finished running, the following messages will appear on the host terminal (values might differ):

    INFO : analyzer_client.py/run_analyze_on_chain: All expected control packets received
    INFO : analyzer_client.py/_log_jitter: Observed Frequency = 21.36147200, Avg Jitter = 0.02624593, Std Deviation:0.06096328
    INFO : analyzer_client.py/run_analyze_on_chain: End of cycle: AnalyzerResult.SUCCESS
    
    INFO : analyzer_client.py/_tear_conn: Received fin ack from Actuation Service
    
    Chain ID   Result
    0          AnalyzerResult.SUCCESS
    

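    The frequency and jitter figures reported above can be understood as statistics over the inter-arrival times of consecutive control packets. A simplified sketch of such a computation (assuming timestamps in seconds; not the analyzer's actual code):

    ```python
    import statistics

    def jitter_stats(timestamps):
        # Intervals between consecutive packet arrival times.
        intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
        mean_interval = statistics.mean(intervals)
        observed_freq = 1.0 / mean_interval  # packets per second
        avg_jitter = statistics.mean(abs(i - mean_interval) for i in intervals)
        return observed_freq, avg_jitter, statistics.stdev(intervals)
    ```

    For perfectly regular 50 ms arrivals, the observed frequency is 20 Hz and the jitter is essentially zero.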
    Note

    There is no need to shut down and relaunch the FVP before demonstrating another use case on the Arm Automotive Solutions Demo build (Baremetal Architecture).

  4. To shut down the FVP and terminate the emulation automatically, issue the following command on the Primary Compute terminal:

    shutdown now
    

    The below messages show that the shutdown process is complete:

    [  OK  ] Finished System Power Off.
    [  OK  ] Reached target System Power Off.
    reboot: Power down
    

See Automated validation for more details on how to trigger the automated validation.

Virtualization architecture

Build

Note

If the Arm Automotive Solutions Demo for the Virtualization Architecture is the most recent build, there is no need to rebuild. For a first-time build, follow the instructions provided below.

To configure and build a Virtualization Architecture image see Build virtualization architecture.

Run the FVP

Note

If the FVP has already been launched with the specified build configuration and is connected to the Primary Compute terminal (running Linux), there is no need to stop and relaunch. For first-time launch, follow the instructions below.

To start the FVP and connect to the Primary Compute terminal:

kas shell -c "../layers/meta-arm/scripts/runfvp -t tmux --verbose"

Wait for the system to boot and for the Linux prompt to appear.

On a Virtualization Architecture image, this will access the Dom0 terminal. The following image shows an example of how the terminal looks after the FVP invocation.

Arm Auto Solutions Linux Login Prompt

The Safety Island (SI) Cluster 2 terminal running the Actuation Service is available via the tmux window titled terminal_uart_si_cluster2. For ease of navigation, Arm recommends joining the SI Cluster 2 terminal window to the Primary Compute terminal window and creating a tmux pane attached to the build host machine to issue commands on it.

Follow the steps below:

  1. Ensure that the tmux window titled terminal_ns_uart0 is selected. If not, press Ctrl-b w from the tmux session, navigate to the tmux window titled terminal_ns_uart0 using the arrow keys, then press the Enter key.

  2. Press Ctrl-b % to add a new tmux pane which is used to issue commands on the build host machine.

  3. Press Ctrl-b : and then type join-pane -s :terminal_uart_si_cluster2, then press the Enter key to join the Actuation Service terminal window to the Primary Compute terminal window.

See the following image of the tmux panes rearrangement. Panes can be navigated using Ctrl-b followed by the arrow keys.

Arm Auto Solutions FVP Rearrange Windows

The Reference Software Stack running on the Primary Compute can be logged into as root user without a password in the Linux terminal.

Run the below command to ensure that all the expected services have been initialized:

systemctl is-system-running --wait

Wait for it to return. The expected terminal output is running.

Note

A message similar to the following might appear in the Primary Compute terminal: (XEN) d2v0: vGICR: SGI: unhandled word write 0x000000ffffffff to ICACTIVER0. This is expected behavior.

Run the demo
  1. Enter the DomU1 console using the xl tool:

    xl console domu1
    

    This command provides a console on DomU1, which can be logged into as root user without a password in the Linux terminal. To exit, enter Ctrl-] (to access the FVP telnet shell), then type send esc into the telnet shell and press Enter. See the xl documentation for further details.

  2. Run the ping command from the DomU1 terminal (running Linux) to verify that it can communicate with the Safety Island Cluster 2 (running Zephyr):

    ping 192.168.2.1 -c 10
    

    The expected output will look like the following line, repeated 10 times:

    64 bytes from 192.168.2.1 seq=0 ttl=64 time=0.151 ms
    
  3. From the tmux pane started for the build host machine terminal, start the Packet Analyzer:

    cd ~/arm-auto-solutions/
    kas shell -c "oe-run-native packet-analyzer-native start_analyzer -L debug -a localhost -c ./data"
    

    The following messages are expected from the host terminal:

    INFO : analyzer_client.py/_connect_to: Starting analyze, use Ctrl-C to stop the process.
    INFO : analyzer_client.py/_connect_to: Attempting a connect to (localhost : 49152)
    INFO : analyzer_client.py/_connect_to: Successfully connected to (localhost : 49152)
    

    A message similar to the following will appear on the SI Cluster 2 terminal:

    Actuation Service initialized.
    Accepted tcp connection from the Packet Analyzer: <11>
    

    See the following image for an invocation example of the Packet Analyzer.

    Arm Auto Solutions Packet Analyzer - Virtualization Architecture

  4. On DomU1, start the Player, which replays a recording of a driving scenario:

    actuation_player -p /usr/share/actuation_player/
    

    A message similar to the following will appear on the Primary Compute terminal:

    Waiting for readers...
    ...Done. Starting replay.
    
    A message similar to the following will appear on the SI Cluster 2 terminal:

    51572682601: -0.0000 (m/s^2) |  0.0000 (rad)
    51597466928: -0.0000 (m/s^2) |  0.0000 (rad)
    51622532911: -0.0000 (m/s^2) |  0.0000 (rad)
    51647642316: -0.0000 (m/s^2) |  0.0000 (rad)
    51672535849: -0.0000 (m/s^2) |  0.0000 (rad)
    51697376579: -0.0000 (m/s^2) |  0.0000 (rad)
    51722500414: -0.0000 (m/s^2) |  0.0000 (rad)
    51747622543: -0.0000 (m/s^2) |  0.0000 (rad)
    51772496466: -0.0000 (m/s^2) |  0.0000 (rad)
    Thread get_analyzer_handle performing a blocking accept
    

    A message similar to the following will appear on the host terminal where the Packet Analyzer is running:

    INFO : analyzer_client.py/_connect_to: Starting analyze, use Ctrl-C to stop the process.
    INFO : analyzer_client.py/_connect_to: Attempting a connect to (localhost : 49152)
    INFO : analyzer_client.py/_connect_to: Successfully connected to (localhost : 49152)
    INFO : analyzer_client.py/run_analyze_on_chain: (1) Analyzer synced with packet chain
    

    When the Packet Analyzer has finished running, the following messages will appear (values might differ):

    INFO : analyzer_client.py/run_analyze_on_chain: All expected control packets received
    INFO : analyzer_client.py/_log_jitter: Observed Frequency = 21.36147200, Avg Jitter = 0.02624593, Std Deviation:0.06096328
    INFO : analyzer_client.py/run_analyze_on_chain: End of cycle: AnalyzerResult.SUCCESS
    
    INFO : analyzer_client.py/_tear_conn: Received fin ack from Actuation Service
    
    Chain ID   Result
    0          AnalyzerResult.SUCCESS
    
  5. To leave the DomU1 console, type Ctrl-] and enter send esc.

    Note

    There is no need to shut down and relaunch the FVP before demonstrating another use case on the Arm Automotive Solutions Demo build (Virtualization Architecture).

  6. To shut down the FVP and terminate the emulation automatically, issue the following command on the Primary Compute terminal:

    shutdown now
    

    The below messages show that the shutdown process is complete:

    [  OK  ] Finished System Power Off.
    [  OK  ] Reached target System Power Off.
    reboot: Power down
    

See Automated validation for more details on how to trigger the automated validation.

Parsec-enabled TLS demo

The demo can be run on the Baremetal Architecture. It consists of a TLS server and a TLS client application.

See Parsec-enabled TLS demo for more information on this application.

Baremetal architecture

Build

Note

If the Arm Automotive Solutions Demo for the Baremetal Architecture is the most recent build, there is no need to rebuild. For a first-time build, follow the instructions below.

To configure and build a Baremetal Architecture image see Build baremetal architecture.

Run the FVP

Note

If the FVP has already been launched with the specified build configuration and is connected to the Primary Compute terminal (running Linux), there is no need to relaunch it. For first-time launch, follow the instructions below.

To start the FVP and connect to the Primary Compute terminal:

kas shell -c "../layers/meta-arm/scripts/runfvp -t tmux --verbose"

Wait for the system to boot and for the Linux prompt to appear.

The Reference Software Stack running on the Primary Compute can be logged into as root user without a password in the Linux terminal.

Run the below command to ensure that all the expected services have been initialized:

systemctl is-system-running --wait

Wait for it to return. The expected terminal output is running.

Run the demo

The demo consists of a TLS server and a TLS client application.

See Parsec-enabled TLS demo for more information on this application.

  1. Run ssl_server from the Primary Compute terminal in the background and press the Enter key to continue:

    ssl_server &
    

    A message similar to the following will appear:

    . Seeding the random number generator... ok
    . Loading the server cert. and key... ok
    . Bind on https://localhost:4433/ ... ok
    . Setting up the SSL data.... ok
    . Waiting for a remote connection ...
    

    The TLS client application can take an optional parameter specifying the TLS server IP address. The default value of the parameter is localhost.

  2. Run ssl_client1 from the Primary Compute terminal in a container:

    docker run --rm -v /run/parsec/parsec.sock:/run/parsec/parsec.sock -v /usr/bin/ssl_client1:/usr/bin/ssl_client1 --network host docker.io/library/ubuntu:22.04 ssl_client1
    

    After a few seconds, a message similar to the following will appear:

    Trying to pull docker.io/library/ubuntu:22.04...
    Getting image source signatures
    Copying blob a186900671ab done   |
    Copying config 981912c48e done   |
    Writing manifest to image destination
    
       . Seeding the random number generator... ok
       . Loading the CA root certificate ... ok (0 skipped)
       . Connecting to tcp/localhost/4433... ok
       . Performing the SSL/TLS handshake... ok
       . Setting up the SSL/TLS structure... ok
       . Performing the SSL/TLS handshake... ok
       < Read from client: 18 bytes read
    
    GET / HTTP/1.0
    
       > Write to client: 156 bytes written
    
    HTTP/1.0 200 OK
    Content-Type: text/html
    
    <h2>Mbed TLS Test Server</h2>
    <p>Successful connection using: TLS-ECDHE-RSA-WITH-CHACHA20-POLY1305-SHA256</p>
    
     ok
      . Verifying peer X.509 certificate... ok
      > Write to server:  . Closing the connection... ok
     18 bytes written
    
    GET / HTTP/1.0
    
      < Read from server:  . Waiting for a remote connection ... 156 bytes read
    
    HTTP/1.0 200 OK
    Content-Type: text/html
    
    <h2>Mbed TLS Test Server</h2>
    <p>Successful connection using: TLS-ECDHE-RSA-WITH-CHACHA20-POLY1305-SHA256</p>
    
  3. Stop the TLS server and synchronize the container image to the persistent storage:

    pkill ssl_server
    sync
    

    Note

    There is no need to shut down and relaunch the FVP before demonstrating another use case on the Arm Automotive Solutions Demo build (Baremetal Architecture).

  4. To shut down the FVP and terminate the emulation automatically, issue the following command on the Primary Compute terminal:

    shutdown now
    

    The below messages show that the shutdown process is complete:

    [  OK  ] Finished System Power Off.
    [  OK  ] Reached target System Power Off.
    reboot: Power down
    

See Automated validation for more details on how to trigger the automated validation.

Primary Compute PSA Secure Storage and Crypto APIs architecture test suite

The demo can be run on the Baremetal Architecture.

See Primary Compute secure services for more information on this application. This demo is included as part of the Critical Application Monitoring Demo.

Baremetal architecture

Build

Note

If the Arm Automotive Solutions Demo for the Baremetal Architecture is the most recent build, there is no need to rebuild. For a first-time build, follow the instructions below.

To configure and build a Baremetal Architecture image see Build baremetal architecture.

Run the FVP

Note

If the FVP has already been launched with the specified build configuration and is connected to the Primary Compute terminal (running Linux), there is no need to stop and relaunch. For first-time launch, follow the instructions below.

To start the FVP and connect to the Primary Compute terminal:

kas shell -c "../layers/meta-arm/scripts/runfvp -t tmux --verbose"

Wait for the system to boot and for the Linux prompt to appear.

The Reference Software Stack running on the Primary Compute can be logged into as root user without a password in the Linux terminal. Run the below command to ensure that all the expected services have been initialized:

systemctl is-system-running --wait

Wait for it to return. The expected terminal output is running.

Run the demo

The demo consists of simple tests run from the Linux terminal.

See Primary Compute secure services for more information on this application.

  1. Run the PSA Crypto API tests from the Primary Compute terminal using the following command:

    psa-crypto-api-test
    

    A message similar to the following will appear when the tests have completed:

    ************ Crypto Suite Report **********
    TOTAL TESTS     : 59
    TOTAL PASSED    : 59
    TOTAL SIM ERROR : 0
    TOTAL FAILED    : 0
    TOTAL SKIPPED   : 0
    ******************************************
    
  2. Run the PSA Protected Storage API tests from the Primary Compute terminal using the following command:

    psa-ps-api-test
    

    A message similar to the following will appear when the tests have completed:

    ************ Storage Suite Report **********
    TOTAL TESTS     : 17
    TOTAL PASSED    : 11
    TOTAL SIM ERROR : 0
    TOTAL FAILED    : 0
    TOTAL SKIPPED   : 6
    ******************************************
    
  3. Run the PSA Internal Trusted Storage API tests from the Primary Compute terminal using the following command:

    psa-its-api-test
    

    A message similar to the following will appear when the tests have completed:

    ************ Storage Suite Report **********
    TOTAL TESTS     : 10
    TOTAL PASSED    : 10
    TOTAL SIM ERROR : 0
    TOTAL FAILED    : 0
    TOTAL SKIPPED   : 0
    ******************************************
    

    Note

    There is no need to shut down and relaunch the FVP before demonstrating another use case on the Arm Automotive Solutions Demo build (Baremetal Architecture).

  4. To shut down the FVP and terminate the emulation automatically, issue the following command on the Primary Compute terminal:

    shutdown now
    

    The below messages show that the shutdown process is complete:

    [  OK  ] Finished System Power Off.
    [  OK  ] Reached target System Power Off.
    reboot: Power down
    

See Automated validation for more details on how to trigger the automated validation.

Safety Island PSA secure storage APIs architecture test suite

The demo can be run on the Baremetal Architecture. See Safety Island PSA Architecture Test Suite for further details.

Baremetal architecture

Build

Note

If the Arm Automotive Solutions Demo for the Baremetal Architecture is the most recent build, there is no need to rebuild. For a first-time build, follow the instructions below.

To configure and build a Baremetal Architecture image see Build baremetal architecture.

Run the FVP

Note

If the FVP has already been launched with the specified build configuration, there is no need to stop and relaunch. For first-time launch, follow the instructions below.

To start the FVP:

kas shell -c "../layers/meta-arm/scripts/runfvp -t tmux --verbose"

Note

This use case does not require waiting for the Primary Compute to boot.

Run the demo

The PSA Secure Storage APIs Architecture Test Suite is deployed on the Safety Island Cluster 1. You can navigate to the terminal_uart_si_cluster1 window by pressing Ctrl-b w, using the arrow keys to select it, then pressing the Enter key.

The instructions below demonstrate running the PSA Secure Storage APIs Architecture Test Suite on Cluster 1.

To run the PSA Secure Storage APIs Architecture Test Suite:

psa test

The full test suite takes around 5 minutes to complete. When the tests finish, the expected output is as follows:

***** PSA Architecture Test Suite - Version 1.4 *****
Running.. Storage Suite
******************************************
TEST: 401 | DESCRIPTION: UID not found check | UT: STORAGE
[Info] Executing tests from non-secure
[Info] Executing ITS Tests
[Check 1] Call get API for UID 6 which is not set
[Check 2] Call get_info API for UID 6 which is not set
[Check 3] Call remove API for UID 6 which is not set
[Check 4] Call get API for UID 6 which is removed
[Check 5] Call get_info API for UID 6 which is removed
[Check 6] Call remove API for UID 6 which is removed
Set storage for UID 6
[Check 7] Call get API for different UID 5
[Check 8] Call get_info API for different UID 5
[Check 9] Call remove API for different UID 5

[Info] Executing PS Tests
[Check 1] Call get API for UID 6 which is not set
[Check 2] Call get_info API for UID 6 which is not set
[Check 3] Call remove API for UID 6 which is not set
[Check 4] Call get API for UID 6 which is removed
[Check 5] Call get_info API for UID 6 which is removed
[Check 6] Call remove API for UID 6 which is removed
Set storage for UID 6
[Check 7] Call get API for different UID 5
[Check 8] Call get_info API for different UID 5
[Check 9] Call remove API for different UID 5

TEST RESULT: PASSED

******************************************

<further tests removed from log for brevity>

************ Storage Suite Report **********
TOTAL TESTS     : 17
TOTAL PASSED    : 11
TOTAL SIM ERROR : 0
TOTAL FAILED    : 0
TOTAL SKIPPED   : 6
******************************************

It is normal for some tests to be skipped, but there must be no failed tests.
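
The pass criterion above can be checked mechanically by parsing the suite report. A small illustrative sketch (assuming the report format shown above; not an official tool):

```python
import re

def suite_passed(report):
    # Skipped tests are acceptable; failed tests or simulator errors are not.
    counts = dict(re.findall(r"TOTAL ([A-Z ]+?)\s*:\s*(\d+)", report))
    return counts["FAILED"] == "0" and counts["SIM ERROR"] == "0"
```

Running this against the Storage Suite Report above returns True, since only skipped tests are present.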

Note

There is no need to shut down and relaunch the FVP before demonstrating another use case on the Arm Automotive Solutions Demo build (Baremetal Architecture).

To shut down the FVP and terminate the emulation, select the tmux window titled python3, where runfvp was launched, by pressing Ctrl-b 0, then press Ctrl-c to stop the FVP process.

See Automated validation for more details on how to trigger the automated validation.

Safety Island PSA crypto APIs architecture test suite

The demo can be run on the Baremetal Architecture. See Safety Island PSA Architecture Test Suite for further details.

Baremetal architecture

Build

Note

If the Arm Automotive Solutions Demo for the Baremetal Architecture is the most recent build, there is no need to rebuild. For a first-time build, follow the instructions below.

To configure and build a Baremetal Architecture image see Build baremetal architecture.

Run the FVP

Note

If the FVP has already been launched with the specified build configuration, there is no need to stop and relaunch. For first-time launch, follow the instructions below.

To start the FVP:

kas shell -c "../layers/meta-arm/scripts/runfvp -t tmux --verbose"

Note

This use case does not require waiting for the Primary Compute to boot.

Run the tests

The PSA Crypto APIs Architecture Test Suite is deployed on the Safety Island Cluster 2. You can navigate to the terminal_uart_si_cluster2 window by pressing Ctrl-b w, using the arrow keys to select it, then pressing the Enter key.

The instructions below demonstrate running the PSA Crypto APIs Architecture Test Suite on Cluster 2.

To run the PSA Crypto APIs Architecture Test Suite:

psa test

The full test suite takes around 5 minutes to complete. When the tests finish, the expected output is as follows, with all tests passing:

***** PSA Architecture Test Suite - Version 1.4 *****
Running.. Crypto Suite
******************************************
TEST: 201 | DESCRIPTION: Testing crypto key management APIs | UT:  psa_crypto_init
[Info] Executing tests from non-secure
[Check 1] Test calling crypto functions before psa_crypto_init
[Check 2] Test psa_crypto_init
[Check 3] Test multiple psa_crypto_init

TEST RESULT: PASSED
******************************************

<further tests removed from log for brevity>

************ Crypto Suite Report **********
TOTAL TESTS     : 61
TOTAL PASSED    : 61
TOTAL SIM ERROR : 0
TOTAL FAILED    : 0
TOTAL SKIPPED   : 0
******************************************

Note

There is no need to shut down and relaunch the FVP before demonstrating another use case on the Arm Automotive Solutions Demo build (Baremetal Architecture).

To shut down the FVP and terminate the emulation, select the tmux window titled python3, where runfvp was launched, by pressing Ctrl-b 0, then press Ctrl-c to stop the FVP process.

See Automated validation for more details on how to trigger the automated validation.

Fault management demo

The demo uses the Safety Island Cluster 1 console and it can be run on the Baremetal Architecture.

See Fault management for further details.

Baremetal architecture

Build

Note

If the Arm Automotive Solutions Demo for the Baremetal Architecture is the most recent build, there is no need to rebuild it. For a first-time build, follow the instructions below.

To configure and build a Baremetal Architecture image see Build baremetal architecture.

Run the FVP

Note

If the FVP has already been launched with the specified build configuration, there is no need to stop and relaunch. For first-time launch, follow the instructions below.

To start the FVP:

kas shell -c "../layers/meta-arm/scripts/runfvp -t tmux --verbose"

Note

This use case does not require waiting for the Primary Compute to boot.

The Fault Management subsystem is deployed on Safety Island Cluster 1, so the instructions below must be executed on its terminal.

The Safety Island Cluster 1 tmux window can be accessed by typing Ctrl-b w, using the arrow keys to select terminal_uart_si_cluster1, then pressing the Enter key.

Run the demo

The instructions below demonstrate injecting faults into both the System FMU and GIC-720AE FMU and how this affects the SSU safety state.

  1. Start by enumerating the configured fault device tree:

    fault tree
    

    The output shows the root fault device fmu@2a510000 (the System FMU), after which are the attached safety state device ssu@2a500000, the fault device fmu@2a570000 (the Primary Compute GIC-720AE FMU) and the fault device fmu@2a530000 (the Safety Island GIC-720AE FMU):

    Root 0: fmu@2a510000
            Safety: ssu@2a500000
            Slot 0: fmu@2a570000
            Slot 1: fmu@2a530000
    
  2. After booting, query the initial state of the SSU:

    fault safety_status ssu@2a500000
    

    The initial state is TEST:

    Status: TEST (0x0)
    
  3. It is expected that a Fault Management deployment would perform a self-test after boot and then signal its outcome to the SSU. For demonstration purposes, simulate a successful self-test completion by issuing the compl_ok signal to the SSU:

    fault safety_control ssu@2a500000 compl_ok
    

    The system is now SAFE for operation:

    Signal: compl_ok (0x0)
    State: SAFE (0x3)
    
  4. Simulate an internal Lockstep error (0x4) in the System FMU:

    fault inject fmu@2a510000 0x4
    

    Three events are logged:

    • The subsystem reports that it received the fault and that it was non-critical (all System FMU internal faults are non-critical).

    • The safety component reports that this caused the SSU to enter the ERRN state.

    • The storage component reports that the total historical fault count for this fault on this device is now 1.

    Injecting fault 0x4 to device fmu@2a510000
    [00:04:49.110,000] <inf> fault_mgmt: Fault received (non-critical): 0x4 on fmu@2a510000
    
    [00:04:49.110,000] <inf> fault_mgmt_safety: Safety status: ERRN (0x5) on ssu@2a500000
    
    [00:04:49.160,000] <inf> fault_mgmt_protected_storage: Fault count for 0x4 on fmu@2a510000: 1
    
  5. The SSU will remain in the ERRN state until signaled (unless a critical fault occurs). Send a compl_ok signal again to recover from this fault:

    fault safety_control ssu@2a500000 compl_ok
    

    The SSU is now in the SAFE state again:

    Signal: compl_ok (0x0)
    State: SAFE (0x3)
    
  6. Next, inject an SPI collator external error (0x20000a00) into the GIC-720AE FMU:

    fault inject fmu@2a570000 0x20000a00
    

    This produces output similar to the above, except that the received fault was critical and the safety status is now ERRC. (GIC-720AE FMU faults are critical by default, but this can be changed from the shell using the fault set_critical sub-command):

    Injecting fault 0x20000a00 to device fmu@2a570000
    [00:09:13.210,000] <inf> fault_mgmt: Fault received (critical): 0x20000a00 on fmu@2a570000
    
    [00:09:13.210,000] <inf> fault_mgmt_safety: Safety status: ERRC (0x6) on ssu@2a500000
    
    [00:09:13.270,000] <inf> fault_mgmt_protected_storage: Fault count for 0x20000a00 on fmu@2a570000: 1
    
  7. The number of occurrences of each fault is tracked per device by the storage component. Inject another Lockstep error into the System FMU:

    fault inject fmu@2a510000 0x4
    

    The fault count is now 2. Note that the safety status is still ERRC:

    Injecting fault 0x4 to device fmu@2a510000
    [00:14:02.800,000] <inf> fault_mgmt: Fault received (non-critical): 0x4 on fmu@2a510000
    
    [00:14:02.800,000] <inf> fault_mgmt_safety: Safety status:  ERRC (0x6) on ssu@2a500000
    
    [00:14:02.860,000] <inf> fault_mgmt_protected_storage: Fault count for 0x4 on fmu@2a510000: 2
    

    The full list of stored faults can also be queried:

    fault list
    

    This shows all the faults injected into both FMUs above:

    Fault history:
    Fault received (non-critical): 0x4 on fmu@2a510000 : count 2
    
    Fault received (critical): 0x20000a00 on fmu@2a570000 : count 1
    
  8. The ERRC state represents a critical system failure and cannot be recovered by software. Confirm this by issuing compl_ok again:

    fault safety_control ssu@2a500000 compl_ok
    

    The SSU status is still ERRC:

    Signal: compl_ok (0x0)
    State: ERRC (0x6)
    

    The state can now only be cleared through a full system reset (for example, by stopping and restarting the FVP), after which the state is TEST once again.

    Note

    There is no need to shut down and relaunch the FVP before demonstrating another use case on the Arm Automotive Solutions Demo build (Baremetal Architecture).

  9. To shut down the FVP and terminate the emulation, press Ctrl-b 0 to select the terminal titled python3 (where runfvp was launched), then press Ctrl-c to stop the FVP process.

See the Shell reference for more details about these and other Fault Management shell sub-commands.
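For reference, the whole session above can be recapped as an annotated Safety Island Cluster 1 console transcript (commands only, with the resulting SSU state transitions from the steps above; log output omitted):

```shell
fault tree                                  # enumerate the FMUs and the SSU
fault safety_status ssu@2a500000            # initial state: TEST
fault safety_control ssu@2a500000 compl_ok  # TEST -> SAFE
fault inject fmu@2a510000 0x4               # non-critical fault: SAFE -> ERRN
fault safety_control ssu@2a500000 compl_ok  # ERRN -> SAFE
fault inject fmu@2a570000 0x20000a00        # critical fault: SAFE -> ERRC (unrecoverable)
fault list                                  # stored fault history
```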

See Automated validation for more details on how to trigger the automated validation.

Secure firmware update

The firmware update is performed on-disk to meet Arm SystemReady IR requirements.

Baremetal architecture

Build

The firmware update capsule for testing is generated together with the image for the software stack when building. The firmware capsule is placed on a removable storage device (in the case of Arm FVPs, an MMC card implementation).

Note

If the Arm Automotive Solutions Demo for the Baremetal Architecture is the most recent build, there is no need to rebuild it. For a first-time build, follow the instructions below.

Run this step to ensure the creation of the initial firmware flash images:

kas shell -c "bitbake firmware-fvp-rd-kronos -C deploy"

To configure and build a Baremetal Architecture image see Build baremetal architecture.

Run the FVP

Note

If the FVP has already been launched with the specified build configuration, there is no need to stop and relaunch. For first-time launch, follow the instructions below.

To start the FVP:

kas shell -c "../layers/meta-arm/scripts/runfvp -t tmux --verbose"

Note that the main tmux windows involved in the Secure Firmware Update are terminal_ns_uart0 and terminal_rse_uart. For ease of navigation, it is recommended to join these in a single window with two panes.

To do so, follow the steps below:

  1. Ensure that the tmux window titled terminal_ns_uart0 is selected. If not, press Ctrl-b w from the tmux session, navigate to the window titled terminal_ns_uart0, then press the Enter key.

  2. Press Ctrl-b :, type join-pane -s :terminal_rse_uart -h, then press the Enter key to join the RSE terminal window to the Primary Compute terminal window.

See the following image of the tmux panes rearrangement. Panes can be navigated using Ctrl-b followed by the arrow keys.

Arm Auto Solutions Secure Firmware Update Baremetal FVP Windows
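As a sketch (assuming the tmux session is named arm-auto-solutions, as in the Download section), the same pane arrangement can also be scripted from a build-host shell outside the session:

```shell
# Select the Primary Compute console window, then join the RSE console
# to it as a horizontal pane. The session name is an assumption; adjust
# it if your session was created with a different name.
tmux select-window -t arm-auto-solutions:terminal_ns_uart0
tmux join-pane -h -s arm-auto-solutions:terminal_rse_uart \
  -t arm-auto-solutions:terminal_ns_uart0
```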

Wait for the system to boot and for the Linux prompt to appear.

The Reference Software Stack running on the Primary Compute can be logged into as the root user without a password in the Linux terminal. Run the following command to ensure that all the expected services have been initialized:

systemctl is-system-running --wait

Wait for it to return. The expected terminal output is running.

Run the demo
  1. To start the Secure Firmware Update, run the following commands to create the UpdateCapsule directory, copy the update capsule into it, and reboot:

    mount /dev/vda1 /boot
    mount /dev/mmcblk0p1 /mnt
    mkdir -p /boot/EFI/UpdateCapsule
    cp -f /mnt/fw.cap /boot/EFI/UpdateCapsule
    reboot
    

    The system will begin the firmware update automatically. The following logs show that the upgrade process has started and is in progress.

    In terminal_ns_uart0:

    FF-A driver 1.0
    FF-A framework 1.0
    FF-A versions are compatible
    EFI: MM partition ID 0x8003
    EFI: FVP: Capsule shared buffer at 0x81000000 , size 8192 pages
    

    In terminal_rse_uart:

    [INF]:[FWU]: get_fwu_agent_state: enter, boot_index = 0
    [INF]:[FWU]: get_fwu_agent_state: enter, boot_index = 0
    [INF]:[FWU]: FMP image update: image id = 1
    [INF]:[FWU]: FMP image update: status = 0, version=7, last_attempt_version=0.
    [INF]: [FWU]: Host acknowledged.
    [INF]:[FWU]: pack_image_info:207 ImageInfo size = 105, ImageName size = 14, ImageVersionName size = 14
    [INF]: [FWU]: Getting image info succeeded.
    [INF]:[FWU]: get_fwu_agent_state: enter, boot_index = 0
    [INF]:[FWU]: uefi_capsule_retrieve_images: enter, capsule ptr = 0x0x65000000
    [INF]:[FWU]: uefi_capsule_retrieve_images: capsule size = 18284656, image count = 1
    [INF]:[FWU]: uefi_capsule_retrieve_images: image 0, version = 3
    [INF]:[FWU]: uefi_capsule_retrieve_images: image 0 at 0x65000070, size=18284560
    [INF]:[FWU]: flash_rss_capsule: enter: image = 0x65000070, size = 16187408, version = 3
    [INF]:[FWU]: erase_bank: erasing sectors = 4080, from offset = 16748544
    [INF]:[FWU]: flash_rss_capsule: writing capsule to the flash at offset = 16748544...
    

    Note

    This step will take about 10 minutes.

  2. The system will reset after a successful firmware update and boot with the updated firmware. This can be confirmed by checking the terminal logs; if lines like the following appear in the log, the upgrade was successful and the system has rebooted with the updated firmware.

    In terminal_rse_uart:

    [INF]: [FWU]: Flashing the image succeeded.
    [INF]: [FWU]: Performing system reset...
    ...
    ...
    [INF]:[FWU]: get_fwu_agent_state: enter, boot_index = 1
    

    Note

    The following error messages will appear in the logs:

    [ERR]:[FWU]: flash_rss_capsule: version error
    ...
    [ERR]:[FWU]: flash_fip_capsule: version error
    ...
    ...
    [ERR]: [FWU]: Flashing the image Failed.
    

    These errors are expected. As long as scrolling upwards shows the lines above, the update was successful.

  3. The system will eventually boot into Linux using the upgraded firmware.

    Note

    There is no need to shut down and relaunch the FVP before demonstrating another use case on the Arm Automotive Solutions Demo build (Baremetal Architecture).

  4. To shut down the FVP and terminate the emulation automatically, follow the steps below:

    Issue the following command on the Primary Compute terminal:

    shutdown now
    

    The following messages show that the shutdown process is complete:

    [  OK  ] Finished System Power Off.
    [  OK  ] Reached target System Power Off.
    reboot: Power down
    

See Integration tests validating secure firmware update for more details.
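As an additional, optional check (a sketch only; it assumes the booted kernel exposes the EFI System Resource Table under sysfs, which is not guaranteed on every configuration), the firmware version reported to the OS can be read back after the post-update reboot:

```shell
# List each ESRT entry's firmware class GUID and current version.
# Requires ESRT support in the kernel and firmware that publishes the table;
# the path below is the standard Linux sysfs location for the ESRT.
for entry in /sys/firmware/efi/esrt/entries/entry*; do
  printf '%s: version %s\n' "$(cat "$entry/fw_class")" "$(cat "$entry/fw_version")"
done
```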

See Automated validation for more details on how to trigger the automated validation.

Virtualization architecture

Build

The firmware update capsule for testing is generated together with the image for the software stack when building. The firmware capsule is placed on a removable storage device (in the case of Arm FVPs, an MMC card implementation).

Note

If the Arm Automotive Solutions Demo for the Virtualization Architecture is the most recent build, there is no need to rebuild it. For a first-time build, follow the instructions below.

Run this step to ensure the creation of the initial firmware flash images:

kas shell -c "bitbake firmware-fvp-rd-kronos -C deploy"

To configure and build a Virtualization Architecture image see Build virtualization architecture.

Run the FVP

Note

If the FVP has already been launched with the specified build configuration, there is no need to stop and relaunch. For first-time launch, follow the instructions below.

To start the FVP and connect to the Primary Compute terminal:

kas shell -c "../layers/meta-arm/scripts/runfvp -t tmux --verbose"

Note that the main tmux windows involved in the Secure Firmware Update are terminal_ns_uart0 and terminal_rse_uart. For ease of navigation, it is recommended to join these in a single window with two panes.

To do so, follow the steps below:

  1. Ensure that the tmux window titled terminal_ns_uart0 is selected. If not, press Ctrl-b w from the tmux session, navigate to the window titled terminal_ns_uart0, then press the Enter key.

  2. Press Ctrl-b :, type join-pane -s :terminal_rse_uart -h, then press the Enter key to join the RSE terminal window to the Primary Compute terminal window.

See the following image of the tmux panes rearrangement. Panes can be navigated using Ctrl-b followed by the arrow keys.

Arm Auto Solutions Secure Firmware Update Virtualization FVP Windows

Wait for the system to boot and for the Linux prompt to appear.

The Reference Software Stack running on the Primary Compute can be logged into as the root user without a password in the Linux terminal. Run the following command to ensure that all the expected services have been initialized:

systemctl is-system-running --wait

Wait for it to return. The expected terminal output is running.

Run the demo
  1. To start the Secure Firmware Update, run the following commands to create the UpdateCapsule directory and copy the update capsule into it, followed by a reboot:

    mount /dev/vda1 /boot
    mount /dev/mmcblk0p1 /mnt
    mkdir -p /boot/EFI/UpdateCapsule
    cp -f /mnt/fw.cap /boot/EFI/UpdateCapsule
    reboot
    

    The system will begin the firmware update automatically. The following logs show that the upgrade process has started and is in progress.

    In terminal_ns_uart0:

    FF-A driver 1.0
    FF-A framework 1.0
    FF-A versions are compatible
    EFI: MM partition ID 0x8003
    EFI: FVP: Capsule shared buffer at 0x81000000 , size 8192 pages
    

    In terminal_rse_uart:

    [INF]:[FWU]: get_fwu_agent_state: enter, boot_index = 0
    [INF]:[FWU]: get_fwu_agent_state: enter, boot_index = 0
    [INF]:[FWU]: FMP image update: image id = 1
    [INF]:[FWU]: FMP image update: status = 0, version=0, last_attempt_version=0.
    [INF]: [FWU]: Host acknowledged.
    [INF]:[FWU]: pack_image_info:207 ImageInfo size = 105, ImageName size = 14, ImageVersionName size = 14
    [INF]: [FWU]: Getting image info succeeded.
    [INF]:[FWU]: get_fwu_agent_state: enter, boot_index = 0
    [INF]:[FWU]: uefi_capsule_retrieve_images: enter, capsule ptr = 0x0x65000000
    [INF]:[FWU]: uefi_capsule_retrieve_images: capsule size = 18219116, image count = 1
    [INF]:[FWU]: uefi_capsule_retrieve_images: image 0, version = 3
    [INF]:[FWU]: uefi_capsule_retrieve_images: image 0 at 0x6500006c, size=18219024
    [INF]:[FWU]: flash_rss_capsule: enter: image = 0x6500006c, size = 16121872, version = 3
    [INF]:[FWU]: erase_bank: erasing sectors = 4080, from offset = 16748544
    [INF]:[FWU]: flash_rss_capsule: writing capsule to the flash at offset = 16748544...
    

    Note

    This step will take about 10 minutes.

  2. The system will reset after a successful firmware update and boot with the updated firmware. This can be confirmed by checking the terminal logs; if lines like the following appear in the log, the upgrade was successful and the system has rebooted with the updated firmware.

    In terminal_rse_uart:

    [INF]: [FWU]: Flashing the image succeeded.
    [INF]: [FWU]: Performing system reset...
    ...
    ...
    [INF]:[FWU]: get_fwu_agent_state: enter, boot_index = 1
    

    Note

    The following error messages will appear in the logs:

    [ERR]:[FWU]: flash_rss_capsule: version error
    ...
    [ERR]:[FWU]: flash_fip_capsule: version error
    ...
    ...
    [ERR]: [FWU]: Flashing the image Failed.
    

    These errors are expected. As long as scrolling upwards shows the lines above, the update was successful.

  3. The system will eventually boot into Linux using the upgraded firmware.

    Note

    There is no need to shut down and relaunch the FVP before demonstrating another use case on the Arm Automotive Solutions Demo build (Virtualization Architecture).

  4. To shut down the FVP and terminate the emulation automatically, follow the steps below:

    Issue the following command on the Primary Compute terminal:

    shutdown now
    

    The following messages show that the shutdown process is complete:

    [  OK  ] Finished System Power Off.
    [  OK  ] Reached target System Power Off.
    reboot: Power down
    

See Integration tests validating secure firmware update for more details.

See Automated validation for more details on how to trigger the automated validation.

Automated validation

Automated validation validates all the use cases in the Arm Automotive Solutions Demo for both the Baremetal and Virtualization Architectures. To trigger the automated validation, follow the instructions below.

Baremetal architecture

Run this step to ensure the creation of the initial firmware flash images:

kas shell -c "bitbake firmware-fvp-rd-kronos -C deploy"

To run the configuration menu:

kas menu sw-ref-stack/Kconfig

To run the validation tests:

  1. Select Use Case > Arm Automotive Solutions Demo.

  2. Select Reference Software Stack Architecture > Baremetal.

  3. Select Runtime Validation Setup > Run Automated Validation.

  4. Select Build.

The complete test suite takes around 90 minutes to complete. See Run-time integration tests for more details.

Expect the following output after completing the automated validation:

RESULTS - test_10_linuxboot.LinuxBootTest.test_linux_boot: PASSED (242.04s)
RESULTS - test_10_linuxlogin.LinuxLoginTest.test_linux_login: PASSED (16.35s)
RESULTS - test_20_fvp_devices.ArmAutoSolutionsFvpDevicesTest.test_cpu_hotplug: PASSED (159.68s)
RESULTS - test_20_fvp_devices.ArmAutoSolutionsFvpDevicesTest.test_networking: PASSED (61.53s)
RESULTS - test_20_fvp_devices.ArmAutoSolutionsFvpDevicesTest.test_rtc: PASSED (13.95s)
RESULTS - test_20_fvp_devices.ArmAutoSolutionsFvpDevicesTest.test_virtiorng: PASSED (13.99s)
RESULTS - test_20_fvp_devices.ArmAutoSolutionsFvpDevicesTest.test_watchdog: PASSED (10.68s)
RESULTS - test_30_actuation.ActuationTest.test_analyzer_help: PASSED (1.07s)
RESULTS - test_30_actuation.ActuationTest.test_ping: PASSED (10.24s)
RESULTS - test_30_actuation.ActuationTest.test_player_to_analyzer: PASSED (80.32s)
RESULTS - test_30_cam.CAMServiceTest.test_cam_service_boot_on_si: PASSED (0.00s)
RESULTS - test_30_cam.CAMTest.test_cam_app_example_to_service_on_si: PASSED (20.19s)
RESULTS - test_30_cam.CAMTest.test_cam_app_example_to_service_on_si_with_multiple_connections: PASSED (23.62s)
RESULTS - test_30_cam.CAMTest.test_cam_ptp_sync: PASSED (0.00s)
RESULTS - test_30_cam.CAMTest.test_cam_tool_deploy_to_si: PASSED (55.94s)
RESULTS - test_30_cam.CAMTest.test_cam_tool_pack: PASSED (58.80s)
RESULTS - test_30_cam.CAMTest.test_data_calibration: PASSED (242.30s)
RESULTS - test_30_cam.CAMTest.test_logical_check_on_si: PASSED (12.14s)
RESULTS - test_30_cam.CAMTest.test_temporal_check_on_si: PASSED (25.14s)
RESULTS - test_30_si0_bridge_ethernet0.BridgeTest.test_si0_bridge_ethernet0: PASSED (0.00s)
RESULTS - test_30_si0_bridge_ethernet0.BridgeTest.test_si1_bridge_ethernet0: PASSED (15.06s)
RESULTS - test_30_si0_bridge_ethernet0.BridgeTest.test_si2_bridge_ethernet0: PASSED (17.25s)
RESULTS - test_40_hipc.HIPCTestBase.test_hipc_cluster0: PASSED (114.72s)
RESULTS - test_40_hipc.HIPCTestBase.test_hipc_cluster1: PASSED (175.25s)
RESULTS - test_40_hipc.HIPCTestBase.test_hipc_cluster2: PASSED (362.61s)
RESULTS - test_40_hipc.HIPCTestBase.test_hipc_cluster_cl0_cl1: PASSED (47.68s)
RESULTS - test_40_hipc.HIPCTestBase.test_hipc_cluster_cl0_cl2: PASSED (89.36s)
RESULTS - test_40_hipc.HIPCTestBase.test_hipc_cluster_cl1_cl2: PASSED (116.22s)
RESULTS - test_40_hipc.HIPCTestBase.test_ping_cl0_cl1: PASSED (10.20s)
RESULTS - test_40_hipc.HIPCTestBase.test_ping_cl0_cl2: PASSED (10.07s)
RESULTS - test_40_hipc.HIPCTestBase.test_ping_cl1_cl2: PASSED (10.10s)
RESULTS - test_40_hipc.HIPCTestBase.test_ping_cluster0: PASSED (28.15s)
RESULTS - test_40_hipc.HIPCTestBase.test_ping_cluster1: PASSED (28.19s)
RESULTS - test_40_hipc.HIPCTestBase.test_ping_cluster2: PASSED (29.17s)
RESULTS - test_40_parsec.ParsecTest.test_parsec_demo: PASSED (198.56s)
RESULTS - test_40_ptp.PTPTest.test_ptp_linux_services: PASSED (3.05s)
RESULTS - test_40_ptp.PTPTest.test_ptp_si_cl1: PASSED (7.45s)
RESULTS - test_40_si_psa_arch_tests.SIPSAArchTests.test_psa_si_cluster1: PASSED (11.98s)
RESULTS - test_40_si_psa_arch_tests.SIPSAArchTests.test_psa_si_cluster2: PASSED (13.63s)
RESULTS - test_40_sve.SVETestBase.test_sve_config: PASSED (13.65s)
RESULTS - test_40_sve.SVETestBase.test_sve_enabled: PASSED (13.65s)
RESULTS - test_50_trusted_services.ArmAutoSolutionsTrustedServices.test_03_psa_crypto_api_test: PASSED (130.27s)
RESULTS - test_50_trusted_services.ArmAutoSolutionsTrustedServices.test_04_psa_its_api_test: PASSED (14.27s)
RESULTS - test_50_trusted_services.ArmAutoSolutionsTrustedServices.test_05_psa_ps_api_test: PASSED (29.24s)
RESULTS - test_60_fault_mgmt.FaultMgmtSSUTest.test_gic_fmu_ssu_compl_ok: PASSED (96.63s)
RESULTS - test_60_fault_mgmt.FaultMgmtSSUTest.test_ssu_ce_not_ok: PASSED (31.10s)
RESULTS - test_60_fault_mgmt.FaultMgmtSSUTest.test_ssu_nce_ok: PASSED (28.08s)
RESULTS - test_60_fault_mgmt.FaultMgmtTest.test_fmu_fault_clear: PASSED (14.56s)
RESULTS - test_60_fault_mgmt.FaultMgmtTest.test_fmu_fault_count: PASSED (12.90s)
RESULTS - test_60_fault_mgmt.FaultMgmtTest.test_fmu_fault_list: PASSED (85.20s)
RESULTS - test_60_fault_mgmt.FaultMgmtTest.test_fmu_fault_summary: PASSED (14.67s)
RESULTS - test_60_fault_mgmt.FaultMgmtTest.test_gic_fmu_inject: PASSED (64.08s)
RESULTS - test_60_fault_mgmt.FaultMgmtTest.test_system_fmu_internal_inject: PASSED (13.54s)
RESULTS - test_60_fault_mgmt.FaultMgmtTest.test_system_fmu_internal_set_enabled: PASSED (10.45s)
RESULTS - test_60_fault_mgmt.FaultMgmtTest.test_tree: PASSED (0.19s)
RESULTS - test_99_fwu.SecureFirmwareUpdateTest.test_securefirmwareupdate: PASSED (564.00s)
RESULTS - test_99_linuxshutdown.LinuxShutdownTest.test_linux_shutdown: PASSED (271.06s)
RESULTS - test_00_rse.RseTest.test_normal_boot: PASSED (17.33s)
RESULTS - test_00_secure_partition.OpteeTest.test_optee_normal: PASSED (1.75s)
RESULTS - test_99_uefi_secure_boot.UEFI_Secure_Boot_Test.test_unsigned_kernel_image: PASSED (18.86s)
RESULTS - ping.PingTest.test_ping: PASSED (0.01s)
RESULTS - ssh.SSHTest.test_ssh: PASSED (3.45s)
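When the validation output is captured to a file, a quick pass/fail summary can be derived from the RESULTS lines. The sketch below is illustrative only: validation.log is a hypothetical capture of the output above (here seeded with two sample lines so the sketch is self-contained).

```shell
# Create a small sample log from two of the RESULTS lines shown above;
# in practice, validation.log would be a capture of the real output.
cat > validation.log <<'EOF'
RESULTS - test_10_linuxboot.LinuxBootTest.test_linux_boot: PASSED (242.04s)
RESULTS - ssh.SSHTest.test_ssh: PASSED (3.45s)
EOF

# Scan the log for any non-passing results.
if grep -qE ': (FAILED|ERROR)' validation.log; then
  echo "failures found"
else
  echo "all required tests passed"
fi
```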

Virtualization architecture

Run this step to ensure the creation of the initial firmware flash images:

kas shell -c "bitbake firmware-fvp-rd-kronos -C deploy"

To run the configuration menu:

kas menu sw-ref-stack/Kconfig

To run the validation tests:

  1. Select Use Case > Arm Automotive Solutions Demo.

  2. Select Reference Software Stack Architecture > Virtualization.

  3. Select Runtime Validation Setup > Run Automated Validation.

  4. Select Build.

The complete test suite takes around 120 minutes to complete. See Run-time integration tests for more details.

Expect the following output after completing the automated validation:

RESULTS - test_10_linuxboot.LinuxBootTest.test_linux_boot: PASSED (437.10s)
RESULTS - test_10_linuxlogin.LinuxLoginTest.test_linux_login: PASSED (683.65s)
RESULTS - test_20_fvp_devices.ArmAutoSolutionsFvpDevicesTest.test_cpu_hotplug: PASSED (28.31s)
RESULTS - test_20_fvp_devices.ArmAutoSolutionsFvpDevicesTest.test_networking: PASSED (104.30s)
RESULTS - test_20_fvp_devices.ArmAutoSolutionsFvpDevicesTest.test_rtc: PASSED (13.95s)
RESULTS - test_20_fvp_devices.ArmAutoSolutionsFvpDevicesTest.test_virtiorng: PASSED (13.95s)
RESULTS - test_20_fvp_devices.ArmAutoSolutionsFvpDevicesTest.test_watchdog: PASSED (9.30s)
RESULTS - test_30_actuation.ActuationTest.test_analyzer_help: PASSED (0.48s)
RESULTS - test_30_actuation.ActuationTest.test_ping: PASSED (33.25s)
RESULTS - test_30_actuation.ActuationTest.test_player_to_analyzer: PASSED (196.48s)
RESULTS - test_30_cam.CAMServiceTest.test_cam_service_boot_on_si: PASSED (0.00s)
RESULTS - test_30_cam.CAMTest.test_cam_app_example_to_service_on_si: PASSED (43.06s)
RESULTS - test_30_cam.CAMTest.test_cam_app_example_to_service_on_si_with_multiple_connections: PASSED (40.96s)
RESULTS - test_30_cam.CAMTest.test_cam_ptp_sync: PASSED (6.93s)
RESULTS - test_30_cam.CAMTest.test_cam_tool_deploy_to_si: PASSED (41.37s)
RESULTS - test_30_cam.CAMTest.test_cam_tool_pack: PASSED (43.08s)
RESULTS - test_30_cam.CAMTest.test_data_calibration: PASSED (143.44s)
RESULTS - test_30_cam.CAMTest.test_logical_check_on_si: PASSED (16.48s)
RESULTS - test_30_cam.CAMTest.test_temporal_check_on_si: PASSED (51.86s)
RESULTS - test_30_cam.CAMTestDomU2.test_cam_app_example_to_service_on_si: PASSED (43.04s)
RESULTS - test_30_cam.CAMTestDomU2.test_cam_app_example_to_service_on_si_with_multiple_connections: PASSED (43.30s)
RESULTS - test_30_cam.CAMTestDomU2.test_cam_ptp_sync: PASSED (8.84s)
RESULTS - test_30_cam.CAMTestDomU2.test_cam_tool_deploy_to_si: PASSED (25.12s)
RESULTS - test_30_cam.CAMTestDomU2.test_cam_tool_pack: PASSED (23.19s)
RESULTS - test_30_cam.CAMTestDomU2.test_data_calibration: PASSED (83.89s)
RESULTS - test_30_cam.CAMTestDomU2.test_logical_check_on_si: PASSED (17.71s)
RESULTS - test_30_cam.CAMTestDomU2.test_temporal_check_on_si: PASSED (50.94s)
RESULTS - test_30_cam.CAMTestMultiDom.test_cam_app_example_to_service_on_si_with_multiple_vms: PASSED (50.66s)
RESULTS - test_30_si0_bridge_ethernet0.BridgeTest.test_si0_bridge_ethernet0: PASSED (0.00s)
RESULTS - test_30_si0_bridge_ethernet0.BridgeTest.test_si1_bridge_ethernet0: PASSED (15.37s)
RESULTS - test_30_si0_bridge_ethernet0.BridgeTest.test_si2_bridge_ethernet0: PASSED (17.43s)
RESULTS - test_40_hipc_virtualization.HIPCTestDomU1.test_hipc_cluster0: PASSED (335.27s)
RESULTS - test_40_hipc_virtualization.HIPCTestDomU1.test_hipc_cluster1: PASSED (374.01s)
RESULTS - test_40_hipc_virtualization.HIPCTestDomU1.test_hipc_cluster2: PASSED (412.47s)
RESULTS - test_40_hipc_virtualization.HIPCTestDomU1.test_hipc_cluster_cl0_cl1: PASSED (61.55s)
RESULTS - test_40_hipc_virtualization.HIPCTestDomU1.test_hipc_cluster_cl0_cl2: PASSED (102.63s)
RESULTS - test_40_hipc_virtualization.HIPCTestDomU1.test_hipc_cluster_cl1_cl2: PASSED (124.31s)
RESULTS - test_40_hipc_virtualization.HIPCTestDomU1.test_ping_cl0_cl1: PASSED (33.63s)
RESULTS - test_40_hipc_virtualization.HIPCTestDomU1.test_ping_cl0_cl2: PASSED (33.04s)
RESULTS - test_40_hipc_virtualization.HIPCTestDomU1.test_ping_cl1_cl2: PASSED (32.50s)
RESULTS - test_40_hipc_virtualization.HIPCTestDomU1.test_ping_cluster0: PASSED (85.79s)
RESULTS - test_40_hipc_virtualization.HIPCTestDomU1.test_ping_cluster1: PASSED (91.81s)
RESULTS - test_40_hipc_virtualization.HIPCTestDomU1.test_ping_cluster2: PASSED (87.37s)
RESULTS - test_40_hipc_virtualization.HIPCTestDomU2.test_hipc_cluster1: PASSED (324.24s)
RESULTS - test_40_hipc_virtualization.HIPCTestDomU2.test_ping_cluster1: PASSED (84.19s)
RESULTS - test_40_parsec.ParsecTest.test_parsec: PASSED (101.16s)
RESULTS - test_40_ptp_virtualization.PTPTestDom0.test_ptp_linux_services: PASSED (5.10s)
RESULTS - test_40_ptp_virtualization.PTPTestDom0.test_ptp_si_cl1: PASSED (17.99s)
RESULTS - test_40_ptp_virtualization.PTPTestDomU1.test_ptp_domu_client: PASSED (28.83s)
RESULTS - test_40_ptp_virtualization.PTPTestDomU1.test_ptp_linux_services: PASSED (1.70s)
RESULTS - test_40_ptp_virtualization.PTPTestDomU2.test_ptp_domu_client: PASSED (31.45s)
RESULTS - test_40_ptp_virtualization.PTPTestDomU2.test_ptp_linux_services: PASSED (1.70s)
RESULTS - test_40_si_psa_arch_tests.SIPSAArchTests.test_psa_si_cluster1: PASSED (17.95s)
RESULTS - test_40_si_psa_arch_tests.SIPSAArchTests.test_psa_si_cluster2: PASSED (22.31s)
RESULTS - test_40_sve.SVETestBase.test_sve_config: PASSED (19.23s)
RESULTS - test_40_sve.SVETestBase.test_sve_enabled: PASSED (10.54s)
RESULTS - test_40_sve.SVETestDomU1.test_sve_config: PASSED (23.26s)
RESULTS - test_40_sve.SVETestDomU1.test_sve_enabled: PASSED (4.65s)
RESULTS - test_40_sve.SVETestDomU2.test_sve_config: PASSED (32.30s)
RESULTS - test_40_sve.SVETestDomU2.test_sve_enabled: PASSED (4.65s)
RESULTS - test_40_virtualization.FvpDevicesTestDomU1.test_cpu_hotplug: PASSED (84.49s)
RESULTS - test_40_virtualization.FvpDevicesTestDomU1.test_networking: PASSED (40.64s)
RESULTS - test_40_virtualization.FvpDevicesTestDomU2.test_cpu_hotplug: PASSED (28.63s)
RESULTS - test_40_virtualization.FvpDevicesTestDomU2.test_networking: PASSED (40.39s)
RESULTS - test_40_virtualization.GICv4DomU1Test.test_gicv4_1: PASSED (9.30s)
RESULTS - test_40_virtualization.ParsecDomU1Test.test_parsec: PASSED (81.56s)
RESULTS - test_40_virtualization.ParsecDomU2Test.test_parsec: PASSED (86.37s)
RESULTS - test_40_virtualization.PtestRunnerDom0Test.test_ptestrunner: PASSED (750.15s)
RESULTS - test_99_fwu.SecureFirmwareUpdateTest.test_securefirmwareupdate: PASSED (771.36s)
RESULTS - test_99_linuxshutdown.LinuxShutdownTest.test_linux_shutdown: PASSED (452.85s)
RESULTS - test_00_rse.RseTest.test_normal_boot: PASSED (17.51s)
RESULTS - ping.PingTest.test_ping: PASSED (0.01s)
RESULTS - ssh.SSHTest.test_ssh: PASSED (3.82s)

Note

There is a rare known failure where a timeout might occur during test execution.

See Known issues for possible workarounds.

Arm SystemReady IR validation

Arm SystemReady IR firmware build

The Arm SystemReady IR Firmware Build option builds only the Arm SystemReady IR-aligned firmware.

See Arm SystemReady IR for more details.

Arm Auto Solutions Build Configuration Menu - Arm SystemReady IR Firmware Build

Build

To run the configuration menu:

kas menu sw-ref-stack/Kconfig

To build the Arm SystemReady IR-aligned firmware image:

  1. Select Use Case > Arm SystemReady IR Validation > Arm SystemReady IR Firmware Build.

  2. Select Build.

The firmware images listed below can be found in the directory build/tmp_systemready/deploy/images/fvp-rd-kronos/.

  • ap-flash-image.img

  • encrypted_cm_provisioning_bundle_0.bin

  • encrypted_dm_provisioning_bundle.bin

  • rse-flash-image.img

  • rse-otp-image.img

  • rse-rom-image.img

Arm SystemReady IR architecture compliance suite (ACS) tests

The ACS for the Arm SystemReady IR certification is delivered through a live OS image, which enables the basic automation to run the tests.

The system boots with the ACS live OS image and the ACS tests run automatically after the system boots. See Arm SystemReady IR ACS tests for more details.

Build and automated validation

To run the configuration menu:

kas menu sw-ref-stack/Kconfig

To build and run the Arm SystemReady IR ACS tests:

  1. Select Use Case > Arm SystemReady IR Validation > Arm SystemReady IR Architecture Compliance Suite (ACS) Tests.

  2. Select Build.

Output similar to the following is printed:

NOTE: recipe arm-systemready-ir-acs-2.1.0-r0: task do_testimage: Started
Creating terminal default on terminal_ns_uart0
Creating terminal tf-a on terminal_sec_uart
Creating terminal scp on terminal_uart_scp
Creating terminal rse on terminal_rse_uart
Creating terminal safety_island_c0 on terminal_uart_si_cluster0
Creating terminal safety_island_c1 on terminal_uart_si_cluster1
Creating terminal safety_island_c2 on terminal_uart_si_cluster2
ACS SIE + bbr/bsa running
Test Group (VariableAttributes): PASSED
Test Group (VariableUpdates): FAILED
Test Group (AuthVar_Conf): FAILED
Test Group (AuthVar_Func): FAILED
Test Group (BBSRVariableSizeTest_func): FAILED
Test Group (ImageLoading): FAILED
Test Group (PlatformSpecificElements): FAILED
Test Group (RequiredElements): FAILED
Test Group (CheckEvent_Conf): PASSED
Test Group (CheckEvent_Func): PASSED
Test Group (CloseEvent_Func): PASSED
Test Group (CreateEventEx_Conf): PASSED
Test Group (CreateEventEx_Func): PASSED
Test Group (CreateEvent_Conf): PASSED
Test Group (CreateEvent_Func): PASSED
Test Group (RaiseTPL_Func): PASSED
Test Group (RestoreTPL_Func): PASSED
Test Group (SetTimer_Conf): PASSED
Test Group (SetTimer_Func): PASSED
Test Group (SignalEvent_Func): PASSED
Test Group (WaitForEvent_Conf): PASSED
Test Group (WaitForEvent_Func): PASSED
Test Group (AllocatePages_Conf): PASSED
Test Group (AllocatePages_Func): PASSED
Test Group (AllocatePool_Conf): PASSED
Test Group (AllocatePool_Func): PASSED
Test Group (FreePages_Conf): PASSED
Test Group (FreePages_Func): PASSED
Test Group (GetMemoryMap_Conf): PASSED
Test Group (GetMemoryMap_Func): PASSED
...
...
Transitioned to on
SystemReady IR UEFI Shell test
Transitioned to off
RESULTS:
RESULTS - arm_systemready_ir_acs_shutdown.SystemReadyACSShutdownTest.test_shutdown: PASSED (36.92s)
RESULTS - arm_systemready_ir_capsule_update.SystemReadyIRCapsuleUpdateTest.test_capsule_update: PASSED (517.82s)
RESULTS - arm_systemready_ir_test_setup.SystemReadyIRTestSetup.test_ir_setup: PASSED (52.34s)
RESULTS - arm_systemready_ir_uefi_shell.SystemReadyIRBandUEFIShellTest.test_uefi_shell: PASSED (63.73s)
RESULTS - arm_systemready_ir_acs.SystemReadyACSTest.test_acs: PASSED (37017.45s)
SUMMARY:
arm-systemready-ir-acs () - Ran 5 tests in 37750.922s
arm-systemready-ir-acs - OK - All required tests passed (successes=5, skipped=0, failures=0, errors=0)

As the logs above show, some Test Groups are expected to fail. The use case is validated when the following messages are present:

RESULTS - arm_systemready_ir_acs_shutdown.SystemReadyACSShutdownTest.test_shutdown: PASSED (36.92s)
RESULTS - arm_systemready_ir_capsule_update.SystemReadyIRCapsuleUpdateTest.test_capsule_update: PASSED (517.82s)
RESULTS - arm_systemready_ir_test_setup.SystemReadyIRTestSetup.test_ir_setup: PASSED (52.34s)
RESULTS - arm_systemready_ir_uefi_shell.SystemReadyIRBandUEFIShellTest.test_uefi_shell: PASSED (63.73s)
RESULTS - arm_systemready_ir_acs.SystemReadyACSTest.test_acs: PASSED (37017.45s)

Note

If the ACS tests are run more than once, they resume from where they last stopped. However, the ACS logs do not support consecutive runs, which results in a failure after the tests end. To run the ACS tests again from the beginning, use the following commands to refresh the firmware images in flash and restart the entire ACS test suite:

kas shell -c "bitbake firmware-fvp-rd-kronos -C deploy"
kas shell -c "bitbake arm-systemready-ir-acs -C unpack"

Note

The ACS tests take hours to complete. The actual time taken will vary depending on the performance of the build host. The default timeout setting for the tests is 12 hours for an x86_64 host or 24 hours for an aarch64 host. If a timeout failure occurs, increase the timeout setting and re-run the tests with the following command on the build host terminal. The example command below changes the timeout setting to 16 hours:

TEST_OVERALL_TIMEOUT="\${@16*60*60}" kas shell -c "bitbake arm-systemready-ir-acs -C unpack"
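The `\${@16*60*60}` value uses BitBake's inline-Python expansion to evaluate the timeout in seconds. As a quick sanity check before setting the variable, the same hours-to-seconds conversion can be sketched in plain shell (a hypothetical helper, not part of the stack):

```shell
# Hypothetical helper, not part of the stack: convert a timeout in hours
# to the seconds value that TEST_OVERALL_TIMEOUT ultimately evaluates to.
hours=16
timeout_seconds=$((hours * 60 * 60))
echo "$timeout_seconds"   # prints 57600
```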

Note

There is a rare known failure where a timeout might occur during test execution.

See Known issues for possible workarounds.

See Arm SystemReady IR ACS tests for an explanation on how the ACS tests are set up and how they work in the Reference Software Stack.

Linux distribution installation (Debian, openSUSE and Fedora)

The Arm SystemReady IR-aligned firmware must boot at least three unmodified generic UEFI distribution images from an ISO image.

This Software Stack currently supports three Linux distributions: Debian Stable, openSUSE Leap and Fedora Server.

Note

The Arm SystemReady IR ACS tests run the Security Interface Extension (SIE) tests, which enroll the authenticated variables for UEFI Secure Boot. Running the Linux distribution installation after the ACS tests will therefore result in a failure. The firmware flash images need to be recreated with the following command:

kas shell -c "bitbake firmware-fvp-rd-kronos -C deploy"

See UEFI secure boot for more information.

Note

The manual installation of a Linux distribution requires some interaction, for example making selections, confirming prompts, and entering a username and password.

The whole installation process takes a long time (possibly up to 10 hours, or even longer).

We suggest that, when running the Linux distribution installations, the FVP is the only resource-intensive process running, as it consumes large amounts of RAM and can make the system unstable.

See Linux distributions installation tests for an explanation of how the Linux distribution installations are set up and how they work in the Reference Software Stack.

Debian

Distro unattended installation

For this test, the installation ISO image has been modified to add the preconfiguration file preseed.cfg inside it. This required editing the grub.cfg file inside the ISO image so that it starts the unattended installation from the preconfiguration file (meta-arm-systemready/recipes-test/arm-systemready-linux-distros/files/unattended-boot-conf/Debian/preseed.cfg).
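For illustration, a debian-installer preconfiguration file is a plain-text list of debconf answers. The excerpt below is a hypothetical minimal example, not the preseed.cfg shipped in the repository, showing the kind of directives such a file contains; the username user and password unsafe are the credentials used later in this section:

```text
# Hypothetical minimal preseed excerpt -- not the file from the repository.
d-i debian-installer/locale string en_US
d-i keyboard-configuration/xkb-keymap select us
d-i passwd/username string user
d-i passwd/user-password password unsafe
d-i passwd/user-password-again password unsafe
d-i partman-auto/method string regular
d-i finish-install/reboot_in_progress note
```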

Distro installation

The firmware flash images need to be recreated with the following command:

kas shell -c "bitbake firmware-fvp-rd-kronos -C deploy"

To run the configuration menu:

kas menu sw-ref-stack/Kconfig

To build the Arm SystemReady IR Linux distros installation tests:

  1. Select Use Case > Linux Distribution Installation (Debian, openSUSE and Fedora) > Debian Linux Distro Installation.

  2. Select Build.

Arm Auto Solutions Build Configuration Menu - Debian Linux Distro Installation

The installation takes around 5 hours. Output similar to the following indicates that it has finished:

Transitioned to on
Installation status: Scanning installation media..
Installation status: Detecting network hardware...
Installation status: Installing the base system...
Installation status: Installing GRUB...
Installation status: Finishing the installation...
Installation status: Debian installation finished successfully.
Transitioned to off
RESULTS:
RESULTS - arm_systemready_debian_unattended.SystemReadyDebianUnattendedTest.test_debian_unattended: PASSED (16400.47s)
SUMMARY:
arm-systemready-linux-distros-debian () - Ran 1 test in 16400.47s

  • Log in

    After the installation is finished, run the following command to boot into the installed system:

    kas shell -c "../layers/meta-arm/scripts/runfvp -t tmux --verbose"
    

    Log into the Linux shell with the user created during the installation using the username user and the password unsafe.

  • Terminate the FVP

    To shut down the FVP and terminate the emulation automatically, run the following command:

    sudo shutdown now
    

    The below message indicates the shutdown process is complete.

    reboot: Power down
    

    Subsequently running the FVP will boot into Debian.

Distro manual installation

To install Debian, see the Debian GNU/Linux Installation Guide.

Distro installation media preparation

The firmware flash images need to be recreated with the following command:

kas shell -c "bitbake firmware-fvp-rd-kronos -C deploy"

To run the configuration menu:

kas menu sw-ref-stack/Kconfig

To build the Arm SystemReady IR Linux distros installation tests:

  1. Select Use Case > Linux Distribution Installation (Debian, openSUSE and Fedora) > Debian Linux Distro Installation.

  2. Select Build.

Arm Auto Solutions Build Configuration Menu - Debian Linux Distro Installation

Distro installation

Run the following command to start the installation:

kas shell -c "../layers/meta-arm/scripts/runfvp -t tmux --verbose"

The whole process of installing Debian will probably take about 5 hours. The installation process begins when you see the following:

Grub Install Options Menu - Debian Linux Distro Installation

Select Install to start the installation process.

The following describes a problem encountered during the Debian installation process and how to solve it:

  • Install the GRUB boot loader

    When the installation reaches the Install the GRUB boot loader phase, choose Yes.

    Grub Installation Prompt - Debian Linux Distro Installation

Expect an error Unable to install GRUB in dummy. This is because, on an EBBR platform, UEFI SetVariable() is not required at runtime (however, it is required at boot time).

Grub Installation Failure Prompt - Debian Linux Distro Installation

One workaround is to “execute a shell” when the GRUB install phase throws the above error. To execute a shell, press Ctrl-a n to switch to the debug shell, and run the following commands:

chroot /target
update-grub
cp -v /boot/efi/EFI/debian/grubaa64.efi /boot/efi/EFI/BOOT/bootaa64.efi

A snapshot is shown below:

Grub Workaround Console Output - Debian Linux Distro Installation

After doing the above GRUB workaround, press Ctrl-a p to go back to the installer again. Select Continue on the GRUB failure screen.

Second Grub Installation Failure Prompt - Debian Linux Distro Installation

Select Continue without boot loader in the Debian installer main menu to continue.

Debian Installer Main Menu - Debian Linux Distro Installation

Select Yes at the Update NVRAM variables to boot automatically into Debian prompt and continue.

Install the GRUB boot loader - Debian Linux Distro Installation

  • Log in

    When the installation reaches the final Finishing the installation phase, you will need to wait some time for the remaining tasks to finish, after which the system automatically reboots into the installed OS. You can log into the Linux shell with the user created during installation.

  • Terminate the FVP

    To shut down the FVP and terminate the emulation automatically, log into the Linux shell as the root user then run the following command:

    shutdown now
    

    The below message shows that the shutdown process is complete:

    reboot: Power down
    

    Subsequently running the FVP will boot into Debian.

openSUSE

Distro unattended installation

For this test, the installation ISO image has been modified to add the automatic installation file autoinst.xml inside it, which provides the installation configuration (meta-arm-systemready/recipes-test/arm-systemready-linux-distros/files/unattended-boot-conf/openSUSE/autoinst.xml).
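For illustration, an AutoYaST control file is an XML profile in the YaST2 namespace. The fragment below is a hypothetical minimal example, not the autoinst.xml shipped in the repository, showing the overall shape of such a profile with the user account assumed from this section:

```xml
<?xml version="1.0"?>
<!-- Hypothetical minimal AutoYaST profile excerpt; not the repository file. -->
<profile xmlns="http://www.suse.com/1.0/yast2ns"
         xmlns:config="http://www.suse.com/1.0/configns">
  <users config:type="list">
    <user>
      <username>user</username>
      <user_password>unsafe</user_password>
      <encrypted config:type="boolean">false</encrypted>
    </user>
  </users>
</profile>
```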

Distro installation

The firmware flash images need to be recreated with the following command:

kas shell -c "bitbake firmware-fvp-rd-kronos -C deploy"

To run the configuration menu:

kas menu sw-ref-stack/Kconfig

To build the Arm SystemReady IR Linux distros installation tests:

  1. Select Use Case > Linux Distribution Installation (Debian, openSUSE and Fedora) > openSUSE Linux Distro Installation.

  2. Select Build.

Arm Auto Solutions Build Configuration Menu - openSUSE Linux Distro Installation

This installation will take around 12 hours to complete. A similar output to the following shows when the installation is finished:

Transitioned to on
Installation status: Loading the kernel, initrd and basic drivers...
Installation status: Starting hardware detection...
Installation status: Loading Installation System...
Installation status: Performing Installation...
Installation status: Finishing Configuration...
Installation status: openSUSE installation finished successfully.
Transitioned to off
RESULTS:
RESULTS - arm_systemready_opensuse_unattended.SystemReadyOpenSUSEUnattendedTest.test_opensuse_unattended: PASSED (24367.99s)
SUMMARY:
arm-systemready-linux-distros-opensuse () - Ran 1 test in 24367.997s

  • Log in

    After the installation is finished, run the following command to boot into the installed system:

    kas shell -c "../layers/meta-arm/scripts/runfvp -t tmux --verbose"
    

    Log into the Linux shell with the user created during the installation using the username user and the password unsafe.

  • Terminate the FVP

    To shut down the FVP and terminate the emulation automatically, run the following command:

    sudo shutdown now
    

    The below message shows that the shutdown process is complete:

    reboot: Power down
    

    Subsequently running the FVP will boot into openSUSE.

Distro manual installation

To install openSUSE, see the openSUSE Installation Guide.

Distro installation media preparation

The firmware flash images need to be recreated with the following command:

kas shell -c "bitbake firmware-fvp-rd-kronos -C deploy"

To run the configuration menu:

kas menu sw-ref-stack/Kconfig

To build the Arm SystemReady IR Linux distros installation tests:

  1. Select Use Case > Linux Distribution Installation (Debian, openSUSE and Fedora) > openSUSE Linux Distro Installation.

  2. Unselect Distros Unattended Installation Setup > Run Unattended Installation.

  3. Select Build.

Arm Auto Solutions Build Configuration Menu - openSUSE Linux Distro Installation

Run the following command to start the installation:

kas shell -c "../layers/meta-arm/scripts/runfvp -t tmux --verbose"

The whole process of installing openSUSE will take several hours. The installation process begins when you see the following:

Leap Install Options Menu - openSUSE Linux Distro Installation

Select No when you get to the Online Repositories screen.

Online Repositories Options Menu - openSUSE Linux Distro Installation

Select Installation to start the installation process.

  • System Role

    When you get to the System Role screen, select Server, then select Next to continue with the installation.

    System Role Selection Menu - openSUSE Linux Distro Installation

    Tip

    Use Tab to cycle through options on screens during installation.

  • Installation process

    When you have selected Install on the Confirm Installation screen, the installation will proceed and it will take several hours. The steps of the installation process are:

    • Installing Packages...

    • Save configuration

    • Save installation settings

    • Install boot manager

    • Prepare system for initial boot

    • The system will then reboot automatically after 10 seconds; you can select OK to reboot immediately.

  • Log in

    After the reboot process, log into the Linux shell with the user created during installation.

  • Terminate the FVP

    To shut down the FVP and terminate the emulation automatically, run the following command:

    sudo shutdown now
    

    The below message shows that the shutdown process is complete:

    reboot: Power down
    

    Subsequently running the FVP will boot into openSUSE.

Fedora

Distro unattended installation

For this test, the installation ISO image has been modified to add the kickstart configuration file inside it. This required editing the grub.cfg file inside the ISO image to point to the kickstart configuration file (meta-arm-systemready/recipes-test/arm-systemready-linux-distros/files/unattended-boot-conf/Fedora/ks.cfg).
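For illustration, a kickstart file is a plain-text script of Anaconda directives; the installer is typically pointed at it via an inst.ks= argument on the kernel command line in grub.cfg. The excerpt below is a hypothetical minimal example, not the ks.cfg shipped in the repository:

```text
# Hypothetical minimal kickstart excerpt -- not the file from the repository.
text
lang en_US.UTF-8
keyboard us
rootpw --plaintext unsafe
autopart
reboot
%packages
@^server-product-environment
%end
```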

Distro installation

The firmware flash images need to be recreated with the following command:

kas shell -c "bitbake firmware-fvp-rd-kronos -C deploy"

To run the configuration menu:

kas menu sw-ref-stack/Kconfig

To build the Arm SystemReady IR Linux distros installation tests:

  1. Select Use Case > Linux Distribution Installation (Debian, openSUSE and Fedora) > Fedora Linux Distro Installation.

  2. Select Build.

Arm Auto Solutions Build Configuration Menu - Fedora Linux Distro Installation

This installation will take around 12 hours to complete. A similar output to the following shows when the installation is finished:

Transitioned to on
Installation status: Loading the installer, kernel and initrd...
Installation status: Setting up the installation environment...
Installation status: Installing the software packages...
Installation status: Fedora installation finished successfully.
Transitioned to off
RESULTS:
RESULTS - arm_systemready_fedora_unattended.SystemReadyFedoraUnattendedTest.test_fedora_unattended: PASSED (38743.47s)
SUMMARY:
arm-systemready-linux-distros-fedora () - Ran 1 test in 38743.478s

  • Log in

    After the installation is finished, run the following command to boot into the installed system:

    kas shell -c "../layers/meta-arm/scripts/runfvp -t tmux --verbose"
    

    Log into the Linux shell with the user created during the installation using the username user and the password unsafe.

  • Terminate the FVP

    To shut down the FVP and terminate the emulation automatically, run the following command:

    sudo shutdown now
    

    The below message shows that the shutdown process is complete:

    reboot: Power down
    

    Subsequently running the FVP will boot into Fedora.

Distro manual installation

To install Fedora, see the Fedora Installation Guide.

Distro installation media preparation

The firmware flash images need to be recreated with the following command:

kas shell -c "bitbake firmware-fvp-rd-kronos -C deploy"

To run the configuration menu:

kas menu sw-ref-stack/Kconfig

To build the Arm SystemReady IR Linux distros installation tests:

  1. Select Use Case > Linux Distribution Installation (Debian, openSUSE and Fedora) > Fedora Linux Distro Installation.

  2. Unselect Distros Unattended Installation Setup > Run Unattended Installation.

  3. Select Build.

Arm Auto Solutions Build Configuration Menu - Fedora Linux Distro Installation

Distro installation

Run the following command to start the installation:

kas shell -c "../layers/meta-arm/scripts/runfvp -t tmux --verbose"

The whole process of installing Fedora will probably take about 24 hours. The installation process begins when you see the following:

Grub Install Options Menu - Fedora Linux Distro Installation

Select Install Fedora 39 to start the installation process.

Here are some tips for installing Fedora:

  1. It will take a few minutes for GRUB to load the installer, kernel and initrd.

  2. When the installer has started, enter 2 to choose Use text mode.

    Starting installer, one moment...
    anaconda 39.32.6-2.fc39 for Fedora 39 started.
     * installation log files are stored in /tmp during the installation
     * shell is available on TTY2 and in second TMUX pane (ctrl+b, then press 2)
     * when reporting a bug add logs from /tmp as separate text/plain attachments
    
    X or window manager startup failed, falling back to text mode.
    ================================================================================
    ================================================================================
    X was unable to start on your machine. Would you like to start VNC to connect to
    this computer from another computer and perform a graphical installation or
    continue with a text mode installation?
    
    1) Start VNC
    2) Use text mode
    
    Please make a selection from the above ['c' to continue, 'h' to help, 'q' to
    quit, 'r' to refresh]: 2
    
  3. When you reach the installation menu, you will see several items marked with !, which indicates that the item must be configured before proceeding.

    ================================================================================
    ================================================================================
    Installation
    
    1) [x] Language settings                 2) [x] Time settings
           (English (United States))                (America/Chicago timezone)
    3) [!] Installation source               4) [!] Software selection
           (Setting up installation                 (Processing...)
           source...)
    5) [!] Installation Destination          6) [x] Network configuration
           (Processing...)                          (Connected: eth0)
    7) [!] Root password                     8) [!] User creation
           (Root account is disabled)               (No user will be created)
    
    Please make a selection from the above ['b' to begin installation, 'h' to help,
    'q' to quit, 'r' to refresh]:
    

    For 3) [!] Installation source, enter 3, then 1 to select CD/DVD.

    ================================================================================
    ================================================================================
    Installation source
    
    Choose an installation source type.
    1) CD/DVD
    2) local ISO file
    3) Network
    
    Please make a selection from the above ['c' to continue, 'h' to help, 'q' to
    quit, 'r' to refresh]: 1
    

    For 4) [!] Software selection, enter 4, then c to continue.

    For 5) [!] Installation Destination, enter 5, then c to select the default options.

    For 6) [!] Network configuration, the status will change to x automatically.

    For 7) [!] Root password, follow the prompts to enter the password and confirm.

    After entering the root password, 8) [ ] User creation becomes optional and can be skipped.

    The final configuration will appear as follows:

    ================================================================================
    ================================================================================
    Installation
    
    1) [x] Language settings                 2) [x] Time settings
           (English (United States))                (America/Chicago timezone)
    3) [x] Installation source               4) [x] Software selection
           (Local media)                            (Fedora Server Edition)
    5) [x] Installation Destination          6) [x] Network configuration
           (Automatic partitioning                  (Connected: eth0)
           selected)
    7) [x] Root password                     8) [ ] User creation
           (Root password is set)                   (No user will be created)
    
    Please make a selection from the above ['b' to begin installation, 'h' to help,
    'q' to quit, 'r' to refresh]:
    

    Now enter b to start the installation.

  4. The installer is expected to stay at Configuring kernel-core.aarch64 for several hours. The installer will then verify the installed packages and continue to install the boot loader.

  5. The following error is expected while installing the boot loader. Ignore the error by responding yes and continue.

    Installing boot loader
    ================================================================================
    ================================================================================
    Question
    
    The following error occurred while installing the boot loader. The system will
    not be bootable. Would you like to ignore this and continue with installation?
    
    Failed to set new efi boot target. This is most likely a kernel or firmware bug.
    
    Please respond 'yes' or 'no': yes
    
    [anaconda]1:main* 2:shell  3:log  4:storage-log >Switch tab: Alt+Tab | Help: F1
    
  • Log in

    When the installation reaches the final Finishing the installation phase, you will need to wait some time for the remaining tasks to finish. When you see the message Installation complete. Press ENTER to quit:, press Enter to reboot into the installed OS. You can log into the Linux shell with the user created during installation.

  • Terminate the FVP

    To shut down the FVP and terminate the emulation automatically, log into the Linux shell as the root user then run the following command:

    shutdown now
    

    The below message shows that the shutdown process is complete:

    reboot: Power down
    

    Subsequently running the FVP will boot into Fedora.