appliance: Use -cpu max.

QEMU has a newish feature (since qemu 2.9, around 2017) called -cpu max
which selects the best available CPU model, ideal for libguestfs.

After this change, on x86-64:

               KVM                              TCG

Direct         -cpu max                         -cpu max
(non-libvirt)

Libvirt        <cpu mode="host-passthrough">    <cpu mode="host-model">
                 <model fallback="allow"/>        <model fallback="allow"/>
               </cpu>                            </cpu>

Thanks: Daniel Berrangé
commit 30f74f38bd
parent 7ac82beb89
Author: Richard W.M. Jones
Date:   2021-01-28 12:20:49 +00:00

2 changed files with 17 additions and 8 deletions


@@ -38,6 +38,11 @@
  *
  * The literal string C<"host"> means use C<-cpu host>.
  *
+ * =item C<"max">
+ *
+ * The literal string C<"max"> means use C<-cpu max> (the best
+ * possible).  This requires awkward translation for libvirt.
+ *
  * =item some string
  *
  * Some string such as C<"cortex-a57"> means use C<-cpu cortex-a57>.
@@ -80,14 +85,9 @@ guestfs_int_get_cpu_model (int kvm)
   /* See discussion in https://bugzilla.redhat.com/show_bug.cgi?id=1605071 */
   return NULL;
 #else
-  /* On most architectures, it is faster to pass the CPU host model to
-   * the appliance, allowing maximum speed for things like checksums
-   * and encryption.  Only do this with KVM.  It is broken in subtle
-   * ways on TCG, and fairly pointless when you're emulating anyway.
+  /* On most architectures we can use "max" to get the best possible CPU.
+   * For recent qemu this should work even on TCG.
    */
-  if (kvm)
-    return "host";
-  else
-    return NULL;
+  return "max";
 #endif
 }


@@ -1169,6 +1169,15 @@ construct_libvirt_xml_cpu (guestfs_h *g,
         attribute ("fallback", "allow");
       } end_element ();
     }
+    else if (STREQ (cpu_model, "max")) {
+      if (params->data->is_kvm)
+        attribute ("mode", "host-passthrough");
+      else
+        attribute ("mode", "host-model");
+      start_element ("model") {
+        attribute ("fallback", "allow");
+      } end_element ();
+    }
     else
       single_element ("model", cpu_model);
   } end_element ();
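For reference, the C<"max"> branch above should emit libvirt domain XML along these lines (a sketch based on the table in the commit message; the surrounding domain XML is omitted):

```xml
<!-- KVM guest: pass the host CPU straight through -->
<cpu mode="host-passthrough">
  <model fallback="allow"/>
</cpu>

<!-- TCG guest: ask libvirt to model the host CPU instead -->
<cpu mode="host-model">
  <model fallback="allow"/>
</cpu>
```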