1. 23 Mar, 2017 2 commits
  2. 14 Mar, 2017 1 commit
    • mem-prealloc: reduce large guest start-up and migration time. · 1e356fc1
      Jitendra Kolhe authored
      Using the "-mem-prealloc" option for a large guest leads to long guest
      start-up and migration times. This is because with "-mem-prealloc"
      qemu tries to map every guest page (create address translations) and
      make sure the pages are available during runtime. virsh/libvirt seems
      to use the "-mem-prealloc" option by default when the guest is
      configured to use huge pages. This patch maps all guest pages
      simultaneously by spawning multiple threads (a sketch follows the
      changelog below). The change is currently limited to the QEMU library
      functions on POSIX-compliant hosts only, as we are not sure whether
      the problem exists on win32. Below are some stats with the
      "-mem-prealloc" option for a guest configured to use huge pages.
      
      ------------------------------------------------------------------------
      Idle Guest      | Start-up time | Migration time
      ------------------------------------------------------------------------
      Guest stats with 2M HugePage usage - single threaded (existing code)
      ------------------------------------------------------------------------
      64 Core - 4TB   | 54m11.796s    | 75m43.843s
      64 Core - 1TB   | 8m56.576s     | 14m29.049s
      64 Core - 256GB | 2m11.245s     | 3m26.598s
      ------------------------------------------------------------------------
      Guest stats with 2M HugePage usage - map guest pages using 8 threads
      ------------------------------------------------------------------------
      64 Core - 4TB   | 5m1.027s      | 34m10.565s
      64 Core - 1TB   | 1m10.366s     | 8m28.188s
      64 Core - 256GB | 0m19.040s     | 2m10.148s
      ------------------------------------------------------------------------
      Guest stats with 2M HugePage usage - map guest pages using 16 threads
      ------------------------------------------------------------------------
      64 Core - 4TB   | 1m58.970s     | 31m43.400s
      64 Core - 1TB   | 0m39.885s     | 7m55.289s
      64 Core - 256GB | 0m11.960s     | 2m0.135s
      ------------------------------------------------------------------------
      
      Changed in v2:
       - modify the number of memset threads spawned to min(smp_cpus, 16).
       - remove the 64GB memory restriction for spawning memset threads.
      
      Changed in v3:
       - limit the number of threads spawned based on
         min(sysconf(_SC_NPROCESSORS_ONLN), 16, smp_cpus)
       - implement memset-thread-specific siglongjmp in the SIGBUS signal
         handler.
      
      Changed in v4:
       - remove sigsetjmp/siglongjmp and SIGBUS unblock/block for the main
         thread, as the main thread no longer touches any pages.
       - simplify code by returning memset_thread_failed status from
         touch_all_pages.
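
      As a rough illustration of the approach (a minimal sketch assuming
      POSIX threads; the struct and helper names below are illustrative,
      not the exact ones in the patch, and error checking, the smp_cpus
      cap, and the SIGBUS/siglongjmp handling are omitted), the region is
      split into per-thread chunks and each thread touches one byte per
      page:

       #include <pthread.h>
       #include <stddef.h>
       #include <unistd.h>

       #define MAX_MEM_PREALLOC_THREADS 16

       typedef struct {
           char *addr;        /* start of this thread's chunk */
           size_t numpages;   /* number of pages to touch */
           size_t hpagesize;  /* page size in use (e.g. 2M huge pages) */
       } MemsetThreadArgs;

       static void *do_touch_pages(void *arg)
       {
           MemsetThreadArgs *a = arg;
           size_t i;
           for (i = 0; i < a->numpages; i++) {
               /* Read and write back one byte per page so the page is
                * faulted in without changing its contents. */
               volatile char *p = a->addr + i * a->hpagesize;
               *p = *p;
           }
           return NULL;
       }

       static void touch_all_pages(char *area, size_t hpagesize,
                                   size_t numpages)
       {
           long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
           int nthreads = ncpus < MAX_MEM_PREALLOC_THREADS ?
                          (int)ncpus : MAX_MEM_PREALLOC_THREADS;
           pthread_t threads[MAX_MEM_PREALLOC_THREADS];
           MemsetThreadArgs args[MAX_MEM_PREALLOC_THREADS];
           size_t per_thread = numpages / nthreads;
           int i;

           for (i = 0; i < nthreads; i++) {
               args[i].addr = area + i * per_thread * hpagesize;
               /* Last thread also takes any remainder pages. */
               args[i].numpages = (i == nthreads - 1) ?
                                  numpages - i * per_thread : per_thread;
               args[i].hpagesize = hpagesize;
               pthread_create(&threads[i], NULL, do_touch_pages, &args[i]);
           }
           for (i = 0; i < nthreads; i++) {
               pthread_join(threads[i], NULL);
           }
       }

      Since the page faults dominate the cost and are independent of each
      other, they parallelize well, which matches the near-linear start-up
      improvements in the table above.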
      Signed-off-by: Jitendra Kolhe <jitendra.kolhe@hpe.com>
      Message-Id: <1487907103-32350-1-git-send-email-jitendra.kolhe@hpe.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      1e356fc1
  3. 20 Feb, 2017 1 commit
  4. 31 Jan, 2017 4 commits
  5. 27 Jan, 2017 6 commits
  6. 12 Jan, 2017 1 commit
  7. 10 Jan, 2017 3 commits
  8. 24 Dec, 2016 4 commits
  9. 01 Nov, 2016 1 commit
  10. 30 Oct, 2016 3 commits
  11. 28 Oct, 2016 2 commits
  12. 24 Oct, 2016 7 commits
  13. 17 Oct, 2016 2 commits
  14. 22 Sep, 2016 1 commit
    • msmouse: Fix segfault caused by freeing the chr before chardev cleanup. · 9e14037f
      Lin Ma authored
      A segfault happens when leaving qemu with the msmouse backend:
      
       #0  0x00007fa8526ac975 in raise () at /lib64/libc.so.6
       #1  0x00007fa8526add8a in abort () at /lib64/libc.so.6
       #2  0x0000558be78846ab in error_exit (err=16, msg=0x558be799da10 ...
       #3  0x0000558be7884717 in qemu_mutex_destroy (mutex=0x558be93be750) at ...
       #4  0x0000558be7549951 in qemu_chr_free_common (chr=0x558be93be750) at ...
       #5  0x0000558be754999c in qemu_chr_free (chr=0x558be93be750) at ...
       #6  0x0000558be7549a20 in qemu_chr_delete (chr=0x558be93be750) at ...
       #7  0x0000558be754a8ef in qemu_chr_cleanup () at qemu-char.c:4643
       #8  0x0000558be755843e in main (argc=5, argv=0x7ffe925d7118, ...
      
      The chr was freed by the msmouse close callback before chardev
      cleanup, and qemu_mutex_destroy then triggered raise().
      
      Since freeing the chr is handled by qemu_chr_free_common, remove the
      free from msmouse_chr_close to avoid the double free (a schematic
      sketch follows).
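
      Schematically (a simplified sketch with hypothetical types, not the
      actual QEMU sources): ownership of the chardev belongs to the common
      cleanup path, so a backend close callback must only free its own
      private state:

       #include <stdlib.h>

       typedef struct CharDriverState {
           void (*chr_close)(struct CharDriverState *chr);
           void *opaque;       /* backend-private state */
       } CharDriverState;

       static void msmouse_chr_close(CharDriverState *chr)
       {
           free(chr->opaque);  /* backend state: fine to free here */
           /* BUG (before the fix): free(chr);
            * qemu_chr_free_common() frees chr again during
            * qemu_chr_cleanup(), after destroying its mutex, so freeing
            * it here leads to use-after-free and a double free. */
       }

       static void qemu_chr_free_common(CharDriverState *chr)
       {
           /* ... destroy mutexes, release common resources ... */
           free(chr);          /* single point of ownership */
       }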
      
      Fixes: c1111a24
      Cc: qemu-stable@nongnu.org
      Signed-off-by: Lin Ma <lma@suse.com>
      Message-Id: <20160915143158.4796-1-lma@suse.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      9e14037f
  15. 14 Sep, 2016 1 commit
  16. 13 Sep, 2016 1 commit
    • hw: replace most use of qemu_chr_fe_write with qemu_chr_fe_write_all · 6ab3fc32
      Daniel P. Berrange authored
      The qemu_chr_fe_write method will return -1 on EAGAIN if the
      chardev backend write would block. Almost no callers of the
      qemu_chr_fe_write() method check the return value, instead
      blindly assuming the data was successfully sent. In most cases
      this leads to silent data loss on interactive consoles, but in
      some cases (e.g. the RNG EGD backend) it corrupts the protocol
      being spoken.
      
      We unfortunately can't fix the virtio-console code, due to a
      bug in the Linux guest drivers which would cause the entire
      Linux kernel to hang if we delayed processing of the incoming
      data in any way. Fixing this requires first fixing the guest
      driver to not hold spinlocks while writing to the hvc device
      backend. A sketch of the retrying write follows.
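
      For illustration (a minimal sketch with simplified signatures; not
      the exact QEMU API), a "write_all" wrapper retries short writes and
      EAGAIN instead of dropping data:

       #include <errno.h>

       /* Assumed primitive: may write fewer than len bytes, or return -1
        * with errno == EAGAIN when the backend would block. */
       extern int chr_fe_write(void *chr, const unsigned char *buf, int len);

       static int chr_fe_write_all(void *chr, const unsigned char *buf,
                                   int len)
       {
           int offset = 0;
           while (offset < len) {
               int res = chr_fe_write(chr, buf + offset, len - offset);
               if (res < 0) {
                   if (errno == EAGAIN) {
                       continue;  /* real code would poll/wait, then retry */
                   }
                   return res;    /* hard error */
               }
               offset += res;
           }
           return offset;
       }

      The trade-off is that the caller may now block, which is why the
      virtio-console path mentioned above cannot simply be converted.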
      
      Fixes bug: https://bugs.launchpad.net/qemu/+bug/1586756
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      Message-Id: <1473170165-540-4-git-send-email-berrange@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      6ab3fc32