1. 20 Apr, 2017 2 commits
  2. 23 Mar, 2017 2 commits
  3. 21 Mar, 2017 1 commit
    • Revert "hostmem: fix QEMU crash by 'info memdev'" · 658ae5a7
      Markus Armbruster authored
      This reverts commit 1454d33f.
      
      The string input visitor regression fixed in the previous commit made
      visit_type_uint16List() fail on empty input.  query_memdev() calls it
      via object_property_get_uint16List().  Because it doesn't expect it to
      fail, it passes &error_abort, and duly crashes.
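
      Why that crashes, as a minimal hedged sketch (the stand-in functions
      below are made up for illustration; only error_setg() and error_abort
      are real QEMU Error-API symbols):

          #include <stdbool.h>
          #include "qapi/error.h"

          /* Stand-in for visit_type_uint16List() rejecting empty input. */
          static bool parse_host_nodes(const char *str, Error **errp)
          {
              if (str[0] == '\0') {
                  error_setg(errp,
                             "Parameter 'null' expects an int64 value or range");
                  return false;
              }
              /* ... parse "0-3"-style ranges here ... */
              return true;
          }

          /* Stand-in for query_memdev(): it assumes the property is always
           * well formed, so it passes &error_abort; when the regressed
           * visitor rejects the empty string anyway, setting the error
           * aborts QEMU. */
          static void query_memdev_like(void)
          {
              parse_host_nodes("", &error_abort);
          }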
      
      Commit 1454d33f "fixes" this crash by making
      host_memory_backend_get_host_nodes() return a list containing just
      MAX_NODES instead of the empty list.  This papers over the regression
      and leads to bogus "info memdev" output, as shown below; revert.
      
      I suspect that if we had bisected the crash back then, we would have
      found and fixed the actual bug instead of papering over it.
      
      To reproduce, run the HMP command "info memdev" with
      
          $ qemu-system-x86_64 --nodefaults -S -display none -monitor stdio -object memory-backend-ram,id=mem1,size=4k
      
      With this commit, "info memdev" prints
      
          memory backend: mem1
            size:  4096
            merge: true
            dump: true
            prealloc: false
            policy: default
            host nodes:
      
      exactly like before commit 74f24cb6.
      
      Between commit 1454d33f and this commit, it prints
      
          memory backend: mem1
            size:  4096
            merge: true
            dump: true
            prealloc: false
            policy: default
            host nodes: 128
      
      The last line is bogus.
      
      Between commit 74f24cb6 and 1454d33f, it crashes like this:
      
          Unexpected error in parse_str() at /work/armbru/tmp/qemu/qapi/string-input-visitor.c:126:
          Parameter 'null' expects an int64 value or range
          Aborted (core dumped)
      
      Cc: Xiao Guangrong <guangrong.xiao@linux.intel.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Message-Id: <1490026424-11330-3-git-send-email-armbru@redhat.com>
      Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
  4. 14 Mar, 2017 1 commit
    • mem-prealloc: reduce large guest start-up and migration time. · 1e356fc1
      Jitendra Kolhe authored
      Using "-mem-prealloc" option for a large guest leads to higher guest
      start-up and migration time. This is because with "-mem-prealloc" option
      qemu tries to map every guest page (create address translations), and
      make sure the pages are available during runtime. virsh/libvirt by
      default, seems to use "-mem-prealloc" option in case the guest is
      configured to use huge pages. The patch tries to map all guest pages
      simultaneously by spawning multiple threads. Currently limiting the
      change to QEMU library functions on POSIX compliant host only, as we are
      not sure if the problem exists on win32. Below are some stats with
      "-mem-prealloc" option for guest configured to use huge pages.
      
      ------------------------------------------------------------------------
      Idle Guest      | Start-up time | Migration time
      ------------------------------------------------------------------------
      Guest stats with 2M HugePage usage - single threaded (existing code)
      ------------------------------------------------------------------------
      64 Core - 4TB   | 54m11.796s    | 75m43.843s
      64 Core - 1TB   | 8m56.576s     | 14m29.049s
      64 Core - 256GB | 2m11.245s     | 3m26.598s
      ------------------------------------------------------------------------
      Guest stats with 2M HugePage usage - map guest pages using 8 threads
      ------------------------------------------------------------------------
      64 Core - 4TB   | 5m1.027s      | 34m10.565s
      64 Core - 1TB   | 1m10.366s     | 8m28.188s
      64 Core - 256GB | 0m19.040s     | 2m10.148s
      -----------------------------------------------------------------------
      Guest stats with 2M HugePage usage - map guest pages using 16 threads
      -----------------------------------------------------------------------
      64 Core - 4TB   | 1m58.970s     | 31m43.400s
      64 Core - 1TB   | 0m39.885s     | 7m55.289s
      64 Core - 256GB | 0m11.960s     | 2m0.135s
      -----------------------------------------------------------------------
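
      A simplified, hedged sketch of the threading approach (names such as
      touch_pages_worker() are illustrative, not the actual functions in the
      patch; SIGBUS handling and error reporting are omitted):

          #include <pthread.h>
          #include <stddef.h>
          #include <unistd.h>

          #define MAX_MEM_PREALLOC_THREAD_COUNT 16

          typedef struct {
              char *addr;       /* start of this worker's slice of the area */
              size_t numpages;  /* number of pages in the slice */
              size_t hpagesize; /* (huge) page size */
          } TouchArgs;

          /* Write one byte per page so the kernel faults every page in up
           * front, instead of at guest runtime. */
          static void *touch_pages_worker(void *opaque)
          {
              TouchArgs *a = opaque;

              for (size_t i = 0; i < a->numpages; i++) {
                  a->addr[i * a->hpagesize] = 0;
              }
              return NULL;
          }

          /* v3 rule from the changelog below:
           * min(sysconf(_SC_NPROCESSORS_ONLN), 16, smp_cpus) threads. */
          static int memset_num_threads(int smp_cpus)
          {
              long n = sysconf(_SC_NPROCESSORS_ONLN);
              int t;

              if (n > MAX_MEM_PREALLOC_THREAD_COUNT) {
                  n = MAX_MEM_PREALLOC_THREAD_COUNT;
              }
              t = n < smp_cpus ? (int)n : smp_cpus;
              return t > 0 ? t : 1;
          }

          static void touch_all_pages_parallel(char *area, size_t hpagesize,
                                               size_t numpages, int smp_cpus)
          {
              int nthreads = memset_num_threads(smp_cpus);
              pthread_t threads[MAX_MEM_PREALLOC_THREAD_COUNT];
              TouchArgs args[MAX_MEM_PREALLOC_THREAD_COUNT];
              size_t per_thread = numpages / nthreads;

              for (int i = 0; i < nthreads; i++) {
                  args[i].addr = area + (size_t)i * per_thread * hpagesize;
                  args[i].numpages = (i == nthreads - 1)
                                     ? numpages - (size_t)i * per_thread
                                     : per_thread;
                  args[i].hpagesize = hpagesize;
                  pthread_create(&threads[i], NULL, touch_pages_worker,
                                 &args[i]);
              }
              for (int i = 0; i < nthreads; i++) {
                  pthread_join(threads[i], NULL);
              }
          }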
      
      Changed in v2:
       - modified the number of memset threads spawned to min(smp_cpus, 16).
       - removed the 64GB memory restriction for spawning memset threads.

      Changed in v3:
       - limited the number of threads spawned based on
         min(sysconf(_SC_NPROCESSORS_ONLN), 16, smp_cpus).
       - implemented memset-thread-specific siglongjmp in the SIGBUS signal
         handler.

      Changed in v4:
       - removed sigsetjmp/siglongjmp and the SIGBUS unblock/block for the
         main thread, as the main thread no longer touches any pages.
       - simplified the code by returning a memset_thread_failed status from
         touch_all_pages.
      Signed-off-by: Jitendra Kolhe <jitendra.kolhe@hpe.com>
      Message-Id: <1487907103-32350-1-git-send-email-jitendra.kolhe@hpe.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  5. 20 Feb, 2017 1 commit
  6. 31 Jan, 2017 4 commits
  7. 27 Jan, 2017 6 commits
  8. 12 Jan, 2017 1 commit
  9. 10 Jan, 2017 3 commits
  10. 24 Dec, 2016 4 commits
  11. 01 Nov, 2016 1 commit
  12. 30 Oct, 2016 3 commits
  13. 28 Oct, 2016 2 commits
  14. 24 Oct, 2016 7 commits
  15. 17 Oct, 2016 2 commits