<p>The ARM-Datacenter blog includes articles on QEMU, the Linux kernel scheduler and other ongoing projects. Our goal is to create and contribute to software infrastructure that simplifies adoption of ARM-based hardware in a datacenter environment.</p>
<h1 id="how-to-setup-nvme-tcp-with-nvme-of-using-kvm-and-qemu">How to setup NVMe/TCP with NVMe-oF using KVM and QEMU</h1>
<p>In this post we will explain how to connect two QEMU guests (using KVM) with NVMe over Fabrics using TCP as the transport.</p>
<p>We will show how to connect an NVMe target to an NVMe initiator using the NVMe/TCP transport. It is worth mentioning before we get started that we will use the term “target” to describe the guest which exports the NVMe device, and “initiator” to describe the guest which connects to that target.</p>
<p>The target QEMU guest will export a simulated NVMe drive which we will create from an image file. The initiator guest will connect to the target and will be able to access this NVMe drive.</p>
<p>Note that this configuration is largely an example to be used for evaluation and/or development with NVMe-oF. The setup described here is not intended for a production environment.</p>
<h2 id="first-step-create-a-guest">First Step: Create a Guest</h2>
<p>Before we can get started, we need to bring up our QEMU guests and get them sharing the same network.</p>
<p>Fortunately, we described in an earlier post <a href="https://futurewei-cloud.github.io/qemu/network-aarch64-qemu-guests">how to setup a shared network for two QEMU guests</a>. That’s a good place to start.</p>
<p>We also have other posts for getting an aarch64 VM up and running, including:</p>
<ul>
<li><a href="https://futurewei-cloud.github.io/qemu/how-to-launch-aarch64-vm">setting up an aarch64 QEMU guest from scratch</a></li>
<li><a href="https://futurewei-cloud.github.io/qemu/qemu-aarch64-vms">using QEMU’s vm-build automation to create aarch64 guest</a></li>
<li><a href="https://futurewei-cloud.github.io/qemu/lisa-qemu-demo1">using LISA-QEMU to create and launch an aarch64 guest</a></li>
</ul>
<h2 id="kernel-configuration">Kernel Configuration</h2>
<p>Before we get started we will make sure that the guest’s kernel has all of the required NVMe modules available.</p>
<p>The guest’s kernel config should have these modules.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cat /boot/config-`uname -r` | grep NVME
# NVME Support
CONFIG_NVME_CORE=m
CONFIG_BLK_DEV_NVME=m
# CONFIG_NVME_MULTIPATH is not set
# CONFIG_NVME_HWMON is not set
CONFIG_NVME_FABRICS=m
CONFIG_NVME_FC=m
CONFIG_NVME_TCP=m
CONFIG_NVME_TARGET=m
CONFIG_NVME_TARGET_LOOP=m
CONFIG_NVME_TARGET_FC=m
# CONFIG_NVME_TARGET_FCLOOP is not set
CONFIG_NVME_TARGET_TCP=m
# end of NVME Support
</code></pre></div></div>
<h2 id="nvme-cli">nvme-cli</h2>
<p>Make sure the nvme-cli is installed on the guests.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo apt-get install nvme-cli
</code></pre></div></div>
<h2 id="initiator-guest-setup">Initiator Guest Setup</h2>
<p>This is the QEMU command we use to bring up the initiator QEMU guest.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo qemu-system-aarch64 -nographic -machine virt,gic-version=max -m 8G -cpu max \
-drive file=./ubuntu20-a.img,if=none,id=drive0,cache=writeback \
-device virtio-blk,drive=drive0,bootindex=0 \
-drive file=./flash0-a.img,format=raw,if=pflash \
-drive file=./flash1-a.img,format=raw,if=pflash \
-smp 4 -accel kvm -netdev bridge,id=hn1 \
-device virtio-net,netdev=hn1,mac=e6:c8:ff:09:76:99
</code></pre></div></div>
<h2 id="target-guest-setup">Target Guest Setup</h2>
<p>When you bring up the target’s QEMU guest, be sure to include an NVMe disk.</p>
<p>We can create the disk with the below.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>qemu-img create -f qcow2 nvme.img 10G
</code></pre></div></div>
<p>When we bring up QEMU, add this set of options so that the guest sees the NVMe disk.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>-drive file=./nvme.img,if=none,id=nvme0 -device nvme,drive=nvme0,serial=1234
</code></pre></div></div>
<p>This is the QEMU command we use to bring up the target QEMU guest.</p>
<p>Note how we added in the options for the NVMe device.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo qemu-system-aarch64 -nographic -machine virt,gic-version=max -m 8G -cpu max \
-drive file=./ubuntu20-b.img,if=none,id=drive0,cache=writeback \
-device virtio-blk,drive=drive0,bootindex=0 \
-drive file=./flash0-b.img,format=raw,if=pflash \
-drive file=./flash1-b.img,format=raw,if=pflash \
-smp 4 -accel kvm -netdev bridge,id=hn1 \
-device virtio-net,netdev=hn1,mac=e6:c8:ff:09:76:9c \
-drive file=./nvme.img,if=none,id=nvme0 -device nvme,drive=nvme0,serial=1234
</code></pre></div></div>
<h2 id="configure-target">Configure Target</h2>
<p>Load the following modules on the target:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo modprobe nvmet
sudo modprobe nvmet-tcp
</code></pre></div></div>
<p>Next, create and configure an NVMe Target subsystem.
This includes creating a namespace.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cd /sys/kernel/config/nvmet/subsystems
sudo mkdir nvme-test-target
cd nvme-test-target/
echo 1 | sudo tee -a attr_allow_any_host > /dev/null
sudo mkdir namespaces/1
cd namespaces/1
</code></pre></div></div>
<p>Before we can attach our NVMe device to this target, we need to find the name.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo nvme list
Node SN Model Namespace Usage Format FW Rev
---------------- ------ ---------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1 1234 QEMU NVMe Ctrl 1 10.74 GB / 10.74 GB 512 B + 0 B 1.0
</code></pre></div></div>
<p>The next step attaches our NVMe device /dev/nvme0n1 to this target and enables it.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>echo -n /dev/nvme0n1 |sudo tee -a device_path > /dev/null
echo 1|sudo tee -a enable > /dev/null
</code></pre></div></div>
<p>Next we will create an NVMe target port, and configure the IP address and other parameters.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo mkdir /sys/kernel/config/nvmet/ports/1
cd /sys/kernel/config/nvmet/ports/1
echo 192.168.0.16 |sudo tee -a addr_traddr > /dev/null
echo tcp|sudo tee -a addr_trtype > /dev/null
echo 4420|sudo tee -a addr_trsvcid > /dev/null
echo ipv4|sudo tee -a addr_adrfam > /dev/null
</code></pre></div></div>
<p>The final step creates a link to the subsystem from the port.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo ln -s /sys/kernel/config/nvmet/subsystems/nvme-test-target/ /sys/kernel/config/nvmet/ports/1/subsystems/nvme-test-target
</code></pre></div></div>
<p>At this point we should see a message in the dmesg log:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>dmesg |grep "nvmet_tcp"
[81528.143604] nvmet_tcp: enabling port 1 (192.168.0.16:4420)
</code></pre></div></div>
<h2 id="mount-target-on-initiator">Mount Target on Initiator</h2>
<p>Load the following modules on the initiator:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo modprobe nvme
sudo modprobe nvme-tcp
</code></pre></div></div>
<p>Next, check that we currently do not see any NVMe devices. The output of the following command should be blank.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo nvme list
</code></pre></div></div>
<p>Next, we will attempt to discover the remote target.</p>
<p>When we initially tried the “discover” command we got an error that our hostnqn was needed. In our example below you will notice that we are providing a hostnqn.</p>
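<p>If you do not have a hostnqn handy, nvme-cli can generate one for you, and depending on the distribution a default may already exist in /etc/nvme/hostnqn. The value shown below is only an example of the format; yours will differ.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>nvme gen-hostnqn
nqn.2014-08.org.nvmexpress:uuid:1b4e28ba-2fa1-11d2-883f-0016d3ccabcd
cat /etc/nvme/hostnqn
</code></pre></div></div>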
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo nvme discover -t tcp -a 192.168.0.16 -s 4420 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1b4e28ba-2fa1-11d2-883f-0016d3ccabcd
Discovery Log Number of Records 1, Generation counter 2
=====Discovery Log Entry 0======
trtype: tcp
adrfam: ipv4
subtype: nvme subsystem
treq: not specified, sq flow control disable supported
portid: 1
trsvcid: 4420
subnqn: nvme-test-target
traddr: 192.168.0.16
sectype: none
</code></pre></div></div>
<p>Using the subnqn as the -n argument, we will connect to the discovered target.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo nvme connect -t tcp -n nvme-test-target -a 192.168.0.16 -s 4420 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1b4e28ba-2fa1-11d2-883f-0016d3ccabcd
</code></pre></div></div>
<p>Success. We can immediately check the nvme list for the attached device.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo nvme list
Node SN Model Namespace Usage Format FW Rev
---------------- ---------------- ------ --------- -------------------------- ---------------- --------
/dev/nvme0n1 84cfc88e9ba4a8f4 Linux 1 10.74 GB / 10.74 GB 512 B + 0 B 5.8.0-rc
</code></pre></div></div>
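<p>At this point the remote namespace behaves like any local block device. As a quick sanity check (a minimal sketch, assuming the device appeared as /dev/nvme0n1 as above and that it is fine to erase its contents), we can put a filesystem on it and mount it.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo mkfs.ext4 /dev/nvme0n1
sudo mkdir -p /mnt/nvme-tcp
sudo mount /dev/nvme0n1 /mnt/nvme-tcp
df -h /mnt/nvme-tcp
</code></pre></div></div>
<p>Remember to unmount the device before disconnecting from the target.</p>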
<p>To detach the target, run the following command on the initiator.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo nvme disconnect /dev/nvme0n1 -n nvme-test-target
</code></pre></div></div>
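<p>If you also want to tear down the configuration on the target guest, the configfs entries can be removed in roughly the reverse order they were created. This is a sketch assuming the same subsystem and port names used above.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo rm /sys/kernel/config/nvmet/ports/1/subsystems/nvme-test-target
sudo rmdir /sys/kernel/config/nvmet/ports/1
sudo rmdir /sys/kernel/config/nvmet/subsystems/nvme-test-target/namespaces/1
sudo rmdir /sys/kernel/config/nvmet/subsystems/nvme-test-target
</code></pre></div></div>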
<p>References:</p>
<ul>
<li><a href="https://futurewei-cloud.github.io/qemu/network-aarch64-qemu-guests">how to connect two aarch64 QEMU guests using a bridged network</a></li>
<li><a href="https://futurewei-cloud.github.io/qemu/how-to-launch-aarch64-vm">how to launch ARM aarch64 VMs from scratch</a></li>
<li><a href="https://futurewei-cloud.github.io/qemu/lisa-qemu-demo1">using LISA-QEMU to create and launch an aarch64 guest</a></li>
<li><a href="https://www.linuxjournal.com/content/data-flash-part-iii-nvme-over-fabrics-using-tcp">NVMe over Fabrics Using TCP</a></li>
</ul>
<h1 id="how-to-connect-two-aarch64-qemu-guests-with-a-bridge">How to connect two aarch64 QEMU guests with a bridge</h1>
<p>In this post we will show how to share a network between two QEMU guests using a bridge.</p>
<p>There are many possible uses for this kind of setup. One of them is integration
testing, for example of target and initiator code, using one guest for the initiator and another
for the target.</p>
<p>This post creates a bridge on the host, which the guests both share.</p>
<h2 id="create-bridge-for-shared-network">Create Bridge for Shared Network</h2>
<p>We will first create the bridge and give it a “local” address, since for now
we are not planning on exporting this network off the host. You will also notice
we add an IP address for the host on this network, 192.168.0.1.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo ip link add br0 type bridge
sudo ip addr add 192.168.0.1/24 dev br0
sudo ip link set br0 up
</code></pre></div></div>
<p>Now we can check that the bridge exists and is ready (state is UP).</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ip addr
6: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether fe:06:bb:4c:37:a1 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.1/24 scope global br0
valid_lft forever preferred_lft forever
inet6 fe80::8c39:6fff:fe23:ca06/64 scope link
valid_lft forever preferred_lft forever
</code></pre></div></div>
<p>Next we will tell QEMU about this bridge by adding it to the QEMU bridge configuration file.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ echo 'allow br0' | sudo tee -a /etc/qemu/bridge.conf
</code></pre></div></div>
<h2 id="starting-the-guests">Starting the guests</h2>
<p>When we bring up the QEMU guests, we will provide the -netdev option to specify a bridge that
our guests will use for their network. Below is an example of these network options.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>-netdev bridge,id=hn1 -device virtio-net,netdev=hn1,mac=e6:c8:ff:09:76:99
</code></pre></div></div>
<p>Here are the full set of options to bring up our aarch64 guests.</p>
<p>Note that we specify a different MAC address for each guest.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>guest A:
$ sudo qemu-system-aarch64 -nographic -machine virt,gic-version=max -m 8G -cpu max \
-drive file=./ubuntu20-a.img,if=none,id=drive0,cache=writeback \
-device virtio-blk,drive=drive0,bootindex=0 \
-drive file=./flash0-a.img,format=raw,if=pflash \
-drive file=./flash1-a.img,format=raw,if=pflash \
-smp 4 -accel kvm -netdev bridge,id=hn1 \
-device virtio-net,netdev=hn1,mac=e6:c8:ff:09:76:99
guest B:
$ sudo qemu-system-aarch64 -nographic -machine virt,gic-version=max -m 8G -cpu max \
-drive file=./ubuntu20-b.img,if=none,id=drive0,cache=writeback \
-device virtio-blk,drive=drive0,bootindex=0 \
-drive file=./flash0-b.img,format=raw,if=pflash \
-drive file=./flash1-b.img,format=raw,if=pflash \
-smp 4 -accel kvm -netdev bridge,id=hn1 \
-device virtio-net,netdev=hn1,mac=e6:c8:ff:09:76:9c
</code></pre></div></div>
<p>Once the guests are up, you can configure the IP addresses quickly for both
guests via the below commands.</p>
<p>Note that we chose IP addresses of 192.168.0.8 for guest a and .16 for guest b.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ip addr
2: enp0s3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether e6:c8:ff:09:76:99 brd ff:ff:ff:ff:ff:ff
$ sudo ip addr add 192.168.0.8/24 dev enp0s3
$ sudo ip link set enp0s3 up
</code></pre></div></div>
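<p>The commands above configure guest A. Guest B is configured the same way with its own address (this assumes the interface also shows up as enp0s3 in guest B; check ip addr if it differs).</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo ip addr add 192.168.0.16/24 dev enp0s3
$ sudo ip link set enp0s3 up
</code></pre></div></div>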
<h2 id="testing-the-shared-network">Testing the Shared Network</h2>
<p>To test that the guest’s network is up, check ip addr again. It should show “state UP”.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ip addr
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether e6:c8:ff:09:76:99 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.8/24 scope global enp0s3
valid_lft forever preferred_lft forever
inet6 fe80::e4c8:ffff:fe09:7699/64 scope link
valid_lft forever preferred_lft forever
</code></pre></div></div>
<p>Then we can try pinging the host.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ping 192.168.0.1
PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
64 bytes from 192.168.0.1: icmp_seq=1 ttl=64 time=0.193 ms
</code></pre></div></div>
<p>It works ! Now let’s try pinging the other guest.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ping 192.168.0.16
PING 192.168.0.16 (192.168.0.16) 56(84) bytes of data.
64 bytes from 192.168.0.16: icmp_seq=1 ttl=64 time=0.358 ms
</code></pre></div></div>
<p>Also worked ! The guests can now see each other and are sharing the same network.</p>
<h2 id="access-to-network-beyond-host">Access to Network Beyond Host</h2>
<p>Suppose that we wanted to get access to the external network also.
This can be added by simply adding a physical device to the bridge.
In our case the device is enahisic2i0, and we add it to bridge br0.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo ip link set enahisic2i0 master br0
</code></pre></div></div>
<p>After that, we just move the public IP address onto our bridge.
You might need to remove the address from the physical device before adding it to the bridge.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo ip addr del 1.234.55.67/24 dev enahisic2i0
sudo ip addr add 1.234.55.67/24 dev br0
</code></pre></div></div>
<p>Finally, inside the guests, give them public IP addresses as well:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo ip addr add 1.234.55.65/24 dev enp0s3
</code></pre></div></div>
<p>Note that to access beyond your local subnet, you might need to add a default route:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo ip route add default via 1.234.55.1 dev enops
</code></pre></div></div>
<p>References:</p>
<ul>
<li><a href="http://www.kaizou.org/2018/06/qemu-bridge.html">Bridging two QEmu guests</a></li>
<li><a href="https://wiki.qemu.org/Documentation/Networking">QEMU Networking documentation</a></li>
</ul>
<h1 id="numa-balancing-impact-on-common-benchmarks">NUMA balancing impact on common benchmarks</h1>
<p><strong>NUMA balancing can lead to performance degradation on NUMA-based arm64 systems when tasks migrate,<br />
and their memory accesses now suffer additional latency.</strong></p>
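<p>For reference, automatic NUMA balancing can be checked and toggled at run time through /proc/sys/kernel/numa_balancing (1 = ON, 0 = OFF); the numa_balancing-ON/OFF configurations in the results below correspond to this setting.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cat /proc/sys/kernel/numa_balancing
echo 0 | sudo tee /proc/sys/kernel/numa_balancing
echo 1 | sudo tee /proc/sys/kernel/numa_balancing
</code></pre></div></div>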
<h1 id="platform">Platform</h1>
<table>
<thead>
<tr>
<th style="text-align: left">System</th>
<th style="text-align: left">Information</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left">Architecture</td>
<td style="text-align: left">aarch64</td>
</tr>
<tr>
<td style="text-align: left">Processor version</td>
<td style="text-align: left">Kunpeng 920-6426</td>
</tr>
<tr>
<td style="text-align: left">CPUs</td>
<td style="text-align: left">128</td>
</tr>
<tr>
<td style="text-align: left">NUMA nodes</td>
<td style="text-align: left">4</td>
</tr>
<tr>
<td style="text-align: left">Kernel release</td>
<td style="text-align: left">5.6.0+</td>
</tr>
<tr>
<td style="text-align: left">Node name</td>
<td style="text-align: left">ARMv2-3</td>
</tr>
</tbody>
</table>
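<p>The NUMA topology of the machine under test (node count and which CPUs belong to each node) can be confirmed with numactl, for example:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>numactl --hardware
</code></pre></div></div>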
<h1 id="test-results">Test results</h1>
<h2 id="perfbenchschedpipe">PerfBenchSchedPipe</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>perf bench -f simple sched pipe
</code></pre></div></div>
<table>
<thead>
<tr>
<th style="text-align: left">Test</th>
<th style="text-align: left">Result</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left">numa_balancing-ON</td>
<td style="text-align: left">10.012 (usecs/op)</td>
</tr>
<tr>
<td style="text-align: left">numa_balancing-OFF</td>
<td style="text-align: left">10.509 (usecs/op)</td>
</tr>
</tbody>
</table>
<h2 id="perfbenchschedmessaging">PerfBenchSchedMessaging</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>perf bench -f simple sched messaging -l 10000
</code></pre></div></div>
<table>
<thead>
<tr>
<th style="text-align: left">Test</th>
<th style="text-align: left">Result</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left">numa_balancing-ON</td>
<td style="text-align: left">6.417 (Sec)</td>
</tr>
<tr>
<td style="text-align: left">numa_balancing-OFF</td>
<td style="text-align: left">6.494 (Sec)</td>
</tr>
</tbody>
</table>
<h2 id="perfbenchmemmemset">PerfBenchMemMemset</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>perf bench -f simple mem memset -s 4GB -l 5 -f default
</code></pre></div></div>
<table>
<thead>
<tr>
<th style="text-align: left">Test</th>
<th style="text-align: left">Result</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left">numa_balancing-ON</td>
<td style="text-align: left">17.438783330964565 (GB/sec)</td>
</tr>
<tr>
<td style="text-align: left">numa_balancing-OFF</td>
<td style="text-align: left">17.63163114627642 (GB/sec)</td>
</tr>
</tbody>
</table>
<h2 id="perfbenchfutexwake">PerfBenchFutexWake</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>perf bench -f simple futex wake -s -t 1024 -w 1
</code></pre></div></div>
<table>
<thead>
<tr>
<th style="text-align: left">Test</th>
<th style="text-align: left">Result</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left">numa_balancing-ON</td>
<td style="text-align: left">9.2742 (ms)</td>
</tr>
<tr>
<td style="text-align: left">numa_balancing-OFF</td>
<td style="text-align: left">9.2178 (ms)</td>
</tr>
</tbody>
</table>
<h2 id="sysbenchcpu">SysBenchCpu</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sysbench cpu --time=10 --threads=64 --cpu-max-prime=10000 run
</code></pre></div></div>
<table>
<thead>
<tr>
<th style="text-align: left">Test</th>
<th style="text-align: left">Result</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left">numa_balancing-ON</td>
<td style="text-align: left">214960.28 (Events/sec)</td>
</tr>
<tr>
<td style="text-align: left">numa_balancing-OFF</td>
<td style="text-align: left">214965.55 (Events/sec)</td>
</tr>
</tbody>
</table>
<h2 id="sysbenchmemory">SysBenchMemory</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sysbench memory --memory-access-mode=rnd --threads=64 run
</code></pre></div></div>
<table>
<thead>
<tr>
<th style="text-align: left">Test</th>
<th style="text-align: left">Result</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left">numa_balancing-ON</td>
<td style="text-align: left">1645 (MB/s)</td>
</tr>
<tr>
<td style="text-align: left">numa_balancing-OFF</td>
<td style="text-align: left">1959 (MB/s)</td>
</tr>
</tbody>
</table>
<h2 id="sysbenchthreads">SysBenchThreads</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sysbench threads --threads=64 run
</code></pre></div></div>
<table>
<thead>
<tr>
<th style="text-align: left">Test</th>
<th style="text-align: left">Result</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left">numa_balancing-ON</td>
<td style="text-align: left">4604 (Events/sec)</td>
</tr>
<tr>
<td style="text-align: left">numa_balancing-OFF</td>
<td style="text-align: left">5390 (Events/sec)</td>
</tr>
</tbody>
</table>
<h2 id="sysbenchmutex">SysBenchMutex</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sysbench mutex --mutex-num=1 --threads=512 run
</code></pre></div></div>
<table>
<thead>
<tr>
<th style="text-align: left">Test</th>
<th style="text-align: left">Result</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left">numa_balancing-ON</td>
<td style="text-align: left">33.2165 (Sec)</td>
</tr>
<tr>
<td style="text-align: left">numa_balancing-OFF</td>
<td style="text-align: left">32.1088 (Sec)</td>
</tr>
</tbody>
</table>
<h1 id="lisa-qemu-presentation">LISA-QEMU Presentation</h1>
<p>We recently gave a presentation on LISA-QEMU to the Linaro Toolchain Working Group.</p>
<p>This presentation highlights our work on LISA-QEMU and provides all the details on what LISA-QEMU is, why we established this project, and how to get up and running creating VMs with the tools we developed.</p>
<p>Please visit the links below to view the presentation or meeting recording.<br /></p>
<ul>
<li><a href="https://futurewei-cloud.github.io/ARM-Datacenter/assets/presentations/lisa-qemu-presentation.pdf">Presentation</a></li>
<li><a href="https://drive.google.com/file/d/1oxMiq4JOCC308GNQVMhe-ThSUBkvX-y9/view?usp=sharing">Video</a></li>
</ul>
<h1 id="how-to-debug-the-kernel-using-qemu-and-an-aarch64-vm">How to debug the kernel using QEMU and an aarch64 VM</h1>
<p>QEMU is a great tool to use when needing to debug the kernel.<br />
There are many recipes online for this too; I have listed a few helpful ones at the end of the article for reference.</p>
<p>We would like to share our steps for debugging the kernel, but focused on aarch64 systems, as some of the steps might be slightly different for this type of system.</p>
<p>First, create a directory to work in and run these commands to create the flash images:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>dd if=/dev/zero of=flash1.img bs=1M count=64
dd if=/dev/zero of=flash0.img bs=1M count=64
dd if=/usr/share/qemu-efi-aarch64/QEMU_EFI.fd of=flash0.img conv=notrunc
</code></pre></div></div>
<p>Next, download a QEMU image. We will use an ubuntu image that we previously created.</p>
<p>We should mention that our procedure involves building our own kernel from scratch, and feeding this image to QEMU.</p>
<p>Thus the first step is to actually create a QEMU image. We will assume you already have an image to use. If not, check out our articles on:</p>
<ul>
<li><a href="https://futurewei-cloud.github.io/qemu/lisa-qemu-demo1">how to create a VM using LISA-QEMU</a>.</li>
<li><a href="https://futurewei-cloud.github.io/qemu/how-to-launch-aarch64-vm">how to create aarch64 VM using QEMU vm-build</a>.</li>
<li><a href="https://futurewei-cloud.github.io/qemu/qemu-aarch64-vms">how to create an aarch64 VM from scratch</a>.</li>
</ul>
<p>We prefer the first procedure using LISA-QEMU since we also have a helpful script to install your kernel into the VM image automatically.</p>
<p>But don’t worry, if you want to take a different route we will show all the steps for that too!</p>
<h1 id="installing-kernel">Installing Kernel</h1>
<p>You have a few options here. One is to boot the image and install the kernel manually, and the other is to use the LISA-QEMU scripts to install it. The command below will boot the image in case you want to use the manual approach: boot the image, scp in the kernel package (typically a .deb file), and install it with dpkg -i.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>qemu/build/aarch64-softmmu/qemu-system-aarch64 -nographic\
-machine virt,gic-version=max -m 2G -cpu max\
-netdev user,id=vnet,hostfwd=:127.0.0.1:0-:22\
-device virtio-net-pci,netdev=vnet\
-drive file=./mini_ubuntu.img,if=none,id=drive0,cache=writeback\
-device virtio-blk,drive=drive0,bootindex=0\
-drive file=./flash0.img,format=raw,if=pflash \
-drive file=./flash1.img,format=raw,if=pflash -smp 4
</code></pre></div></div>
<p>To bring up QEMU with a kernel, typically you will need a kernel image (that you built), an initrd image (built after installing the kernel in your image), and the OS image (created above).</p>
<p>Keep in mind the below steps assume a raw image. If you have a qcow2, then use qemu-img to convert it to raw first.
For example:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>qemu-img convert -O raw my_image.qcow2 my_image_output.raw
</code></pre></div></div>
<p>Below is how to mount an image to copy out files. You need to copy out the initrd in this case.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ mkdir mnt
$ sudo losetup -f -P ubuntu.img
$ sudo losetup -l
NAME SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE DIO LOG-SEC
/dev/loop0 0 0 0 0 ubuntu.img 0 512
$ sudo mount /dev/loop0p2 ./mnt
$ ls ./mnt/boot
config-4.15.0-88-generic grub initrd.img-5.5.11 System.map-5.5.11 vmlinuz-5.5.11
config-5.5.11 initrd.img initrd.img.old vmlinuz vmlinuz.old
efi initrd.img-4.15.0-88-generic System.map-4.15.0-88-generic vmlinuz-4.15.0-88-generic
$ cp ./mnt/initrd.img-5.5.11 .
$ sudo umount ./mnt
$ sudo losetup -d /dev/loop0
</code></pre></div></div>
<p>Next, boot the kernel you built with your initrd. Note the kernel you built can be found at
arch/arm64/boot/Image.</p>
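<p>If you have not built a kernel yet, a typical arm64 build looks roughly like the sketch below (run natively on an aarch64 host here; when cross compiling from x86, add ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- to each make invocation). The bindeb-pkg target also produces the .deb package mentioned earlier for installing the kernel into the image.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cd linux
make defconfig
make -j$(nproc) Image modules
make -j$(nproc) bindeb-pkg
</code></pre></div></div>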
<p>This command line will bring up your kernel image with your initrd and your OS Image.</p>
<p>One item you might need to customize is the “root=/dev/vda2” argument. This tells the kernel where to find your root partition. This might vary depending on your VM image.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>qemu/build/aarch64-softmmu/qemu-system-aarch64 -nographic\
-machine virt,gic-version=max -m 2G -cpu max\
-netdev user,id=vnet,hostfwd=:127.0.0.1:0-:22\
-device virtio-net-pci,netdev=vnet\
-drive file=./mini_ubuntu.img,if=none,id=drive0,cache=writeback\
-device virtio-blk,drive=drive0,bootindex=0\
-drive file=./flash0.img,format=raw,if=pflash\
-drive file=./flash1.img,format=raw,if=pflash -smp 4\
-kernel ./linux/arch/arm64/boot/Image\
-append "root=/dev/vda2 nokaslr console=ttyAMA0"\
-initrd ./initrd.img-5.5.11 -s -S
</code></pre></div></div>
<p><b>-s</b> tells QEMU to listen for a GDB connection on TCP port 1234 (it is shorthand for -gdb tcp::1234)<br />
<b>-S</b> will pause at startup, waiting for the debugger to attach.</p>
<p>Before we get started debugging, update your ~/.gdbinit with the following:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>add-auto-load-safe-path linux-5.5.11/scripts/gdb/vmlinux-gdb.py
</code></pre></div></div>
<p>In another window, start the debugger.
Note, if you are on an x86 host debugging aarch64, then you need to use gdb-multiarch (sudo apt-get install gdb-multiarch). In our case below we are on an aarch64 host, so we just use gdb.</p>
<p>It’s very important to note that we receive the “done” message below indicating symbols were loaded successfully, otherwise the following steps will not work.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ gdb linux-5.5.11/vmlinux
GNU gdb (Ubuntu 8.1-0ubuntu3.2) 8.1.0.20180409-git
Reading symbols from linux-5.5.11/vmlinux...done.
</code></pre></div></div>
<p>Attach the debugger to the kernel. Remember the -s argument above? It told QEMU to use port :1234. We will connect to it now.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>(gdb) target remote localhost:1234
Remote debugging using localhost:1234
0x0000000000000000 in ?? ()
</code></pre></div></div>
<p>That’s it. The debugger is connected.</p>
<p>Now let’s test out the setup. <br />
Add a breakpoint in the kernel as a test.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>(gdb) hbreak start_kernel
Hardware assisted breakpoint 1 at 0xffff800011330cdc: file init/main.c, line 577.
(gdb) c
Continuing.
Thread 1 hit Breakpoint 1, start_kernel () at init/main.c:577
577 {
(gdb) l
572 {
573 rest_init();
574 }
575
576 asmlinkage __visible void __init start_kernel(void)
577 {
578 char *command_line;
579 char *after_dashes;
580
581 set_task_stack_end_magic(&init_task);
(gdb)
</code></pre></div></div>
<p>We hit the breakpoint !</p>
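<p>Since the kernel’s gdb helper scripts (vmlinux-gdb.py) were loaded via ~/.gdbinit above, the lx-* convenience commands are also available; for example, once the kernel is further along in boot you can inspect the kernel log and task list directly from the debugger.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>(gdb) apropos lx
(gdb) lx-dmesg
(gdb) lx-ps
</code></pre></div></div>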
<p>Remember above that we used the -S option to QEMU? This told QEMU to wait to start running the image until we connected the debugger. Thus once we hit continue, QEMU actually starts booting the kernel.</p>
<p>References:</p>
<ul>
<li><a href="https://yulistic.gitlab.io/2018/12/debugging-linux-kernel-with-gdb-and-qemu/">debugging-linux-kernel-with-gdb-and-qemu</a></li>
<li><a href="http://nickdesaulniers.github.io/blog/2018/10/24/booting-a-custom-linux-kernel-in-qemu-and-debugging-it-with-gdb/">booting-a-custom-linux-kernel-in-qemu-and-debugging-it-with-gdb</a></li>
</ul>
<h1 id="how-to-easily-install-the-kernel-in-a-vm">How to easily install the kernel in a VM</h1>
<p><span style="font-size:60%">This article is a follow-up to an earlier article we wrote <a href="https://futurewei-cloud.github.io/qemu/lisa-qemu">Introducing LISA-QEMU</a>.</span></p>
<p>This article will outline the steps to install a kernel into a VM using some scripts we developed. In our case we have an x86_64 host and an aarch64 VM.</p>
<p>We will assume you have cloned the <a href="https://github.com/rf972/lisa-qemu">LISA-QEMU</a> repository already. As part of the LISA-QEMU integration we have added a script to automate the process of installing a kernel into a VM. The scripts we talk about below can be found in the <a href="https://github.com/rf972/lisa-qemu">LISA-QEMU github</a>.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git clone https://github.com/rf972/lisa-qemu.git
cd lisa-qemu
git submodule update --init --recursive
</code></pre></div></div>
<p>We also assume you have built the kernel .deb install package. We covered the detailed steps in our <a href="https://github.com/rf972/lisa-qemu/blob/master/README.md">README</a>. You can also find needed dependencies for this article at that link.</p>
<p>You can use install_kernel.py to generate a new image with the kernel of your choice installed.<br />
Assuming you have a VM image that was created following the steps in <a href="https://futurewei-cloud.github.io/qemu/lisa-qemu-demo1">this post</a>, just launch a command like the one below to install your kernel.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo python3 scripts/install_kernel.py --kernel_pkg ../linux/linux-image-5.5.11_5.5.11-1_arm64.deb
scripts/install_kernel.py: image: build/VM-ubuntu.aarch64/ubuntu.aarch64.img
scripts/install_kernel.py: kernel_pkg: ../linux/linux-image-5.5.11_5.5.11-1_arm64.deb
Install kernel successful.
Image path: /home/rob/qemu/lisa-qemu/build/VM-ubuntu.aarch64/ubuntu.aarch64.img.kernel-5.5.11-1
To start this image run this command:
python3 /home/rob/qemu/lisa-qemu/scripts/launch_image.py -p /home/rob/qemu/lisa-qemu/build/VM-ubuntu.aarch64/ubuntu.aarch64.img.kernel-5.5.11-1
</code></pre></div></div>
<p>We need to use sudo for these commands since sudo is required as part of mounting images.</p>
<p>Note that the argument is:<br />
<b>-p or --kernel_pkg</b>: the .deb kernel package to install</p>
<p>Also note that the last lines in the output show the command to issue to bring this image up.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>To start this image run this command:
python3 /home/rob/qemu/lisa-qemu/scripts/launch_image.py -p /home/rob/qemu/lisa-qemu/build/VM-ubuntu.aarch64/ubuntu.aarch64.img.kernel-5.5.11-1
</code></pre></div></div>
<p>You might wonder where we got the VM image from?<br />
It was found in a default location after running our build_image.py script. See <a href="https://futurewei-cloud.github.io/qemu/lisa-qemu-demo1">this post</a> for more details.<br /></p>
<p>If you want to supply your own image, we have an argument for that. :)<br />
<b>--image</b> argument with the VM image to start from.</p>
<p>When supplying the image, the command line might look like the below.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo python3 scripts/install_kernel.py --kernel_pkg ../linux/linux-image-5.5.11_5.5.11-1_arm64.deb --image build/VM-ubuntu.aarch64/ubuntu.aarch64.img
</code></pre></div></div>
<p>There are a few options for installing the kernel.</p>
<p>By default install_kernel.py will attempt to install your kernel using a chroot environment. This is done for speed more than anything else, since in our case it is faster to use the chroot than to bring up the aarch64 emulated VM and install the kernel.</p>
<p>We also support the <b>--vm</b> option, which will bring up the VM with QEMU and then install the kernel into it. If you run into issues with the chroot environment install, this would be a good alternative.</p>
<p>An example of the VM install method.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo python3 scripts/install_kernel.py --vm --kernel_pkg ../linux/linux-image-5.5.11_5.5.11-1_arm64.deb
</code></pre></div></div>
<p><br />
Thanks for taking the time to learn more about our work on LISA-QEMU !</p>
<h1 id="lisa-qemu-demo">LISA-QEMU Demo</h1>
<p><span style="font-size:60%">This article is a follow-up to an earlier article we wrote <a href="https://futurewei-cloud.github.io/qemu/lisa-qemu">Introducing LISA-QEMU</a>.</span></p>
<p>LISA-QEMU provides an integration which allows <a href="https://github.com/ARM-software/lisa">LISA</a> to work with QEMU VMs. LISA’s goal is to help Linux kernel developers to measure the impact of modifications in core parts of the kernel.<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote">1</a></sup> Integration with QEMU will allow developers to test a wide variety of hardware configurations including ARM architecture and complex NUMA topologies.</p>
<p>This demo will walk through all the steps needed to build and bring up an aarch64 VM on an x86 platform. Future articles will work through reconfiguring the hardware for these VMs, inserting a new kernel into these VMs and more !</p>
<p>The first step is to get your linux machine ready to run LISA-QEMU. In this step we will download all the dependencies needed. We assume Ubuntu in the below steps.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apt-get build-dep -y qemu
apt-get install -y python3-yaml wget git qemu-efi-aarch64 qemu-utils genisoimage qemu-user-static git
</code></pre></div></div>
<p>Now that we have the correct dependencies, let’s download the LISA-QEMU code.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git clone https://github.com/rf972/lisa-qemu.git
cd lisa-qemu
git submodule update --init --progress --recursive
</code></pre></div></div>
<p>One note on the above. If you do not plan to use LISA, then you can leave off the --recursive and it will update much quicker.</p>
<p>The next step is to build a new VM. This build command takes all the defaults. If you want to learn more about the possible options take a look at build_image.py –help.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ time python3 scripts/build_image.py --help
usage: build_image.py [-h] [--debug] [--dry_run] [--ssh]
[--image_type IMAGE_TYPE] [--image_path IMAGE_PATH]
[--config CONFIG] [--skip_qemu_build]
Build the qemu VM image for use with lisa.
optional arguments:
-h, --help show this help message and exit
--debug, -D enable debug output
--dry_run for debugging. Just show commands to issue.
--ssh Launch VM and open an ssh shell.
--image_type IMAGE_TYPE, -i IMAGE_TYPE
Type of image to build.
From external/qemu/tests/vm.
default is ubuntu.aarch64
--image_path IMAGE_PATH, -p IMAGE_PATH
Allows overriding path to image.
--config CONFIG, -c CONFIG
config file.
default is conf/conf_default.yml.
--skip_qemu_build For debugging script.
examples:
To select all defaults:
scripts/build_image.py
Or select one or more arguments
scripts/build_image.py -i ubuntu.aarch64 -c conf/conf_default.yml
</code></pre></div></div>
<p>But we digress… OK, let’s build that image. Below is the command to build it with the defaults.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>python3 scripts/build_image.py
</code></pre></div></div>
<p>You will see the progress of the build and other steps of the image creation on your screen. If you would like to see more comprehensive output and progress, use the <b>--debug</b> option.</p>
<p>Depending on your system this might take many minutes. Below are some example times.</p>
<p>50 minutes - Intel i7 laptop with 2 cores and 16 GB of memory<br />
6 minutes - Huawei Taishan 2286 V2 with 128 ARM cores and 512 GB of memory.</p>
<p>Once the image creation is complete, you will see a message like the following.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Image creation successful.
Image path: /home/lisa-qemu/build/VM-ubuntu.aarch64/ubuntu.aarch64.img
</code></pre></div></div>
<p>Now that we have an image, we can test it out by bringing up the image and opening an ssh connection to it.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>python3 scripts/launch_image.py
</code></pre></div></div>
<p>The time to bring up the VM will vary based on your machine, but it should come up in about 2-3 minutes on most machines.</p>
<p>You should expect to see the following as the system boots and we open an ssh connection to bring us to the guest prompt.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ python3 scripts/launch_image.py
Conf: /home/lisa-qemu/build/VM-ubuntu.aarch64/conf.yml
Image type: ubuntu.aarch64
Image path: /home/lisa-qemu/build/VM-ubuntu.aarch64/ubuntu.aarch64.img
qemu@ubuntu-aarch64-guest:~$
</code></pre></div></div>
<p>Now that the system is up and running, you could for example, use it for a lisa test.</p>
<p>In our case we issue one command to show that we are in fact an aarch64 architecture with 8 cores.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>qemu@ubuntu-guest:~$ lscpu
Architecture: aarch64
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: 0x00
Model: 0
Stepping: 0x0
BogoMIPS: 125.00
NUMA node0 CPU(s): 0-7
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma sha3 sm3 sm4 asimddp sha512 sve asimdfhm flagm
</code></pre></div></div>
<p>Once you are done with the VM, you can close the VM simply by typing “exit” at the command prompt.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>qemu@ubuntu-guest:~$ exit
exit
Connection to 127.0.0.1 closed by remote host.
</code></pre></div></div>
<p>That’s it. The VM was gracefully powered off.</p>
<p>We hope this article was helpful to understand just how easy it can be to build and launch a VM with LISA-QEMU !</p>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1" role="doc-endnote">
<p>This definition can be found on the <a href="https://github.com/ARM-software/lisa">LISA github page</a> <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>
<h1 id="introducing-lisa-qemu">Introducing LISA-QEMU</h1>
<p>LISA-QEMU provides an integration which allows <a href="https://github.com/ARM-software/lisa">LISA</a> to work with QEMU VMs. LISA’s goal is to help Linux kernel developers to measure the impact of modifications in core parts of the kernel<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote">1</a></sup>. Integration with QEMU will allow developers to test a wide variety of hardware configurations including ARM architecture and complex NUMA topologies.</p>
<p>One of our goals is to allow developers to test the impact of modifications on aarch64 architectures with complex NUMA topologies. Currently we are focusing on testing how the kernel CFS scheduler’s task placement decisions interact with NUMA balancing (NUMA_BALANCING).</p>
<p>In order to simplify and streamline the development process we created scripts and configuration files, which allow developers to quickly create QEMU VMs with a configurable number of cores and NUMA nodes. We also created a script to install a custom-built kernel on these VMs. Once a VM is configured with the desired topology and kernel version, developers can run interactive and/or automated LISA tests.</p>
<p>Please note that you do not need physical aarch64 hardware. In fact we have demoed this project on a laptop with a Core-i7-7600U CPU with two cores.</p>
<p>Our approach is to contribute improvements in QEMU and LISA back to the mainstream. In our repository we will keep scripts and configurations belonging to the integration between LISA and QEMU.</p>
<p>LISA Overview:
The LISA project provides a toolkit that supports regression testing and interactive analysis of Linux kernel behavior. LISA’s goal is to help Linux kernel developers measure the impact of modifications in core parts of the kernel. LISA itself runs on a host machine, and uses the devlib toolkit to interact with the target via SSH, ADB or telnet. LISA provides features to describe workloads (notably using rt-app) and run them on targets. It can collect trace files from the target OS (e.g. systrace and ftrace traces), parse them via the TRAPpy framework. These traces can then be parsed and analysed in order to examine detailed target behaviour during the workload’s execution.<sup id="fnref:1:1" role="doc-noteref"><a href="#fn:1" class="footnote">1</a></sup></p>
<p><a href="https://futurewei-cloud.github.io/arm-datacenter/welcome-to-ARM_Datacenter/">Peter</a> also contributed to this article.</p>
<p>We also have articles on LISA-QEMU:</p>
<ul>
<li><a href="https://futurewei-cloud.github.io/qemu/lisa-qemu-demo1">Demo of LISA-QEMU</a>.</li>
<li><a href="https://futurewei-cloud.github.io/qemu/lisa-qemu-change-kernel-cmdline">Easily change Kernel in a VM</a>.</li>
</ul>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1" role="doc-endnote">
<p>This definition can be found on the <a href="https://github.com/ARM-software/lisa">LISA github page</a> <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a> <a href="#fnref:1:1" class="reversefootnote" role="doc-backlink">↩<sup>2</sup></a></p>
</li>
</ol>
</div>
<h1 id="understanding-pthread_cond_broadcast">Understanding pthread_cond_broadcast</h1>
<p>Recently we came across a piece of code in QEMU using the <a href="https://manpages.debian.org/testing/glibc-doc/pthread_cond_broadcast.3.en.html">pthread_cond_broadcast</a> function.</p>
<p>This method is intended to wake up all threads waiting on a condition variable. However, the method needs to be used with care. In particular, it should only be used if you can guarantee:
a) that the waiter is in fact waiting, or
b) that there is another mechanism to wake up the waiter if the broadcast signal arrives when the thread is not waiting.</p>
<p>For example, suppose we have the following code:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>pthread_mutex_lock(&first_cpu->lock);
while (first_cpu->stopped) {
pthread_cond_wait(first_cpu->halt_cond, first_cpu->lock);
pthread_mutex_unlock(&first_cpu->lock);
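/* Note: the lock is dropped here and we are not in pthread_cond_wait(),
 * so a pthread_cond_broadcast() issued while pending_work() runs is lost. */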
/* process any pending work */
pending_work();
pthread_mutex_lock(&first_cpu->lock);
}
pthread_mutex_unlock(&first_cpu->lock);
</code></pre></div></div>
<p>Also suppose we have another thread which will call pthread_cond_broadcast() to wake up this thread.</p>
<p>If the above thread is waiting in pthread_cond_wait() when it is woken up by pthread_cond_broadcast(), then all is well.</p>
<p>However, if this thread is outside of the pthread_cond_wait() in the loop when pthread_cond_broadcast() is called, then this thread will not be woken up. In other words, when the thread loops around to pthread_cond_wait() it will <em>NOT</em> wait.</p>
<p>This means that either we need to guarantee the thread is waiting when the broadcast is sent OR we need to make sure that there is another way to wake up the thread.</p>
<p>One other option is to change the pthread_cond_wait() to a pthread_cond_timedwait() to ensure that we will periodically perform this “pending_work()”, even if the pthread_cond_broadcast() is missed.</p>
<h1 id="testing-qemu-emulation-how-to-change-qtest-accelerator">Testing QEMU emulation: how to change QTest Accelerator</h1>
<p><b>How can we change <a href="https://www.qemu.org/">QEMU</a> QTest to use different accelerators? And why would we do this?</b><br />
<span style="font-size:60%">This article is a follow-up to a prior article we posted on <a href="https://futurewei-cloud.github.io/qemu/debug-qemu-qtests">how to debug QEMU Qtests</a>.</span></p>
<p>Each QTest will decide which accelerators it uses. For example, the test might try to use ‘kvm’, which causes QEMU to use KVM to execute code. Or the test might try to use ‘TCG’ support, where QEMU will emulate the instructions itself. Regardless of which path is chosen, this choice inevitably results in different code paths getting exercised inside QEMU itself.</p>
<p>In some cases when developing QEMU code, we might want to force certain code paths which are specific to different accelerators. In this case we have a few things to decide. Take the case for example, where we want to force a specific TCG code path on an aarch64 machine for an aarch64 QTest. We will use the tests/qtest/arm-cpu-features test as an example.</p>
<p>This test selects the specific accelerator(s) to use for each test case. It is possible that we might want to force the use of a specific accelerator to force that code path in QEMU. We might want to use TCG instead of kvm, for instance.</p>
<p>In this case we would need to edit the test, for instance tests/qtest/arm-cpu-features.c, and replace the use of “kvm” with “tcg”, or in cases where both -accel kvm and -accel tcg are used, just remove the kvm.</p>
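<p>After editing the test, rebuild and rerun just that test binary. The exact paths and make targets depend on your QEMU version and build tree, but from the build directory an invocation looks something like the following, where QTEST_QEMU_BINARY points the test at the QEMU binary to exercise. Alternatively, make check-qtest-aarch64 runs the whole aarch64 qtest suite.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>make tests/qtest/arm-cpu-features
QTEST_QEMU_BINARY=./aarch64-softmmu/qemu-system-aarch64 ./tests/qtest/arm-cpu-features
</code></pre></div></div>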
<p>This will have the effect of forcing the use of a specific code path, which can be very useful when debugging or validating a change.</p>