1. 07 Jun, 2017 1 commit
  2. 05 Jun, 2017 1 commit
    • migration: Mark CPU states dirty before incoming migration/loadvm · 75e972da
      David Gibson authored
      As a rule, CPU internal state should never be updated when
      !cpu->kvm_vcpu_dirty (or the HAX equivalent).  If that is done, then
      subsequent calls to cpu_synchronize_state() - usually safe and idempotent -
      will clobber state.
      
      However, we routinely do this during a loadvm or incoming migration.
      Usually this is called shortly after a reset, which will clear all the cpu
      dirty flags with cpu_synchronize_all_post_reset().  Nothing is expected
      to set the dirty flags again before the cpu state is loaded from the
      incoming stream.
      
      This means that it isn't safe to call cpu_synchronize_state() from a
      post_load handler, which is non-obvious and potentially inconvenient.
      
      We could cpu_synchronize_all_state() before the loadvm, but that would be
      overkill since a) we expect the state to already be synchronized from the
      reset and b) we expect to completely rewrite the state with a call to
      cpu_synchronize_all_post_init() at the end of qemu_loadvm_state().
      
      To clear this up, this patch introduces cpu_synchronize_pre_loadvm() and
      associated helpers, which simply mark the CPU state as dirty without
      actually changing anything. In other words, it says we want to discard any
      existing KVM (or HAX) state and replace it with what we're about to load.
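      The dirty-flag semantics can be sketched in a few lines. This is a toy model, not the actual QEMU definitions: the struct, the value 0 standing in for the in-kernel register copy, and the helper bodies are all illustrative.

      ```c
      /* Toy model of the dirty-flag idea: cpu_synchronize_pre_loadvm() marks
       * the QEMU-side state dirty so a later cpu_synchronize_state() cannot
       * clobber a freshly loaded snapshot.  Names follow the commit text;
       * the bodies are simplified stand-ins. */
      #include <assert.h>
      #include <stdbool.h>

      typedef struct {
          bool kvm_vcpu_dirty;  /* QEMU's copy is newer than the kernel's */
          int  regs;            /* stand-in for the real register file */
      } CPUState;

      /* Refresh QEMU's copy from "KVM" unless it is already dirty; the
       * kernel copy is modelled here as the value 0. */
      static void cpu_synchronize_state(CPUState *cpu)
      {
          if (!cpu->kvm_vcpu_dirty) {
              cpu->regs = 0;            /* would clobber a loaded snapshot */
              cpu->kvm_vcpu_dirty = true;
          }
      }

      /* The new hook: discard any in-kernel state by marking QEMU's copy
       * dirty, without touching the registers themselves. */
      static void cpu_synchronize_pre_loadvm(CPUState *cpu)
      {
          cpu->kvm_vcpu_dirty = true;
      }

      int main(void)
      {
          CPUState cpu = { .kvm_vcpu_dirty = false, .regs = 0 };

          cpu_synchronize_pre_loadvm(&cpu);
          cpu.regs = 42;                 /* post_load writes incoming state */
          cpu_synchronize_state(&cpu);   /* now a safe no-op */
          assert(cpu.regs == 42);        /* nothing was clobbered */
          return 0;
      }
      ```

      Without the pre_loadvm call, the synchronize step would overwrite the loaded value, which is exactly the clobbering the commit describes.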
      
      Cc: Juan Quintela <quintela@redhat.com>
      Cc: Dave Gilbert <dgilbert@redhat.com>
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Reviewed-by: Juan Quintela <quintela@redhat.com>
  3. 11 May, 2017 1 commit
  4. 10 May, 2017 1 commit
  5. 10 Apr, 2017 8 commits
  6. 28 Mar, 2017 1 commit
  7. 20 Mar, 2017 1 commit
    • hax: fix breakage in locking · b3d3a426
      Vincent Palatin authored
      Use qemu_mutex_lock_iothread() consistently in qemu_hax_cpu_thread_fn(),
      as is done in the other _thread_fn functions, instead of grabbing the
      BQL directly. This ensures that iothread_locked is set properly.
      
      On v2.9.0-rc0, QEMU was dying in an assertion in the mutex code when
      running with '--enable-hax' on either OSX or Windows. The bug was
      triggered by the multithreading code changes, which added new usages of
      qemu_mutex_iothread_locked.
      This fixes the breakage on both platforms; I can once again run a full
      Chromium OS image with HAX kernel acceleration.
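      The reason the wrapper matters can be shown with a minimal sketch. The names mirror QEMU's, but the bodies here are simplified stand-ins: the real functions do more than set a flag.

      ```c
      /* Sketch: qemu_mutex_lock_iothread() records ownership in a flag that
       * qemu_mutex_iothread_locked() reports; locking the mutex directly
       * leaves that flag stale, so later assertions on it fire. */
      #include <assert.h>
      #include <pthread.h>
      #include <stdbool.h>

      static pthread_mutex_t qemu_global_mutex = PTHREAD_MUTEX_INITIALIZER;
      static __thread bool iothread_locked;

      static bool qemu_mutex_iothread_locked(void) { return iothread_locked; }

      static void qemu_mutex_lock_iothread(void)
      {
          pthread_mutex_lock(&qemu_global_mutex);
          iothread_locked = true;       /* the wrapper keeps the flag in sync */
      }

      static void qemu_mutex_unlock_iothread(void)
      {
          iothread_locked = false;
          pthread_mutex_unlock(&qemu_global_mutex);
      }

      int main(void)
      {
          /* The breakage: grabbing the BQL directly leaves the flag false. */
          pthread_mutex_lock(&qemu_global_mutex);
          assert(!qemu_mutex_iothread_locked());  /* flag lies about the lock */
          pthread_mutex_unlock(&qemu_global_mutex);

          /* The fix: use the wrapper consistently. */
          qemu_mutex_lock_iothread();
          assert(qemu_mutex_iothread_locked());
          qemu_mutex_unlock_iothread();
          return 0;
      }
      ```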
      Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
      Message-Id: <20170320101549.150076-1-vpalatin@chromium.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  8. 14 Mar, 2017 2 commits
    • icount: process QEMU_CLOCK_VIRTUAL timers in vCPU thread · 6b8f0187
      Paolo Bonzini authored
      icount has become much slower after tcg_cpu_exec has stopped
      using the BQL.  There is also a latent bug that is masked by
      the slowness.
      
      The slowness happens because every QEMU_CLOCK_VIRTUAL timer expiry now
      has to wake up the I/O thread and wait for it.  The rendezvous is
      mediated by the BQL QemuMutex:
      
      - handle_icount_deadline wakes up the I/O thread with BQL taken
      - the I/O thread wakes up and waits on the BQL
      - the VCPU thread releases the BQL a little later
      - the I/O thread raises an interrupt, which calls qemu_cpu_kick
      - the VCPU thread notices the interrupt, takes the BQL to
        process it and waits on it
      
      All this back and forth is extremely expensive, causing a 6 to 8-fold
      slowdown when icount is turned on.
      
      One may think that the issue is that the VCPU thread is too dependent
      on the BQL, but then the latent bug comes in.  I first tried removing
      the BQL completely from the x86 cpu_exec, only to see everything break.
      The only way to fix it (and make everything slow again) was to add a dummy
      BQL lock/unlock pair.
      
      This is because in -icount mode you really have to process the events
      before the CPU restarts executing the next instruction.  Therefore, this
      series moves the processing of QEMU_CLOCK_VIRTUAL timers straight into
      the vCPU thread when running in icount mode.
      
      The required changes include:
      
      - make the timer notification callback wake up TCG's single vCPU thread
        when run from another thread.  By using async_run_on_cpu, the callback
        can override all_cpu_threads_idle() when the CPU is halted.
      
      - move handle_icount_deadline after qemu_tcg_wait_io_event, so that
        the timer notification callback is invoked after the dummy work item
        wakes up the vCPU thread
      
      - make handle_icount_deadline run the timers instead of just waking the
        I/O thread.
      
      - stop processing the timers in the main loop
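      The key step, running expired timers synchronously in the vCPU thread, can be modelled with a toy deadline check. The structures and names below are illustrative stand-ins, not QEMU's real timer subsystem.

      ```c
      /* Toy model: when the icount deadline expires, run the expired
       * QEMU_CLOCK_VIRTUAL timer directly in the vCPU thread instead of
       * kicking the I/O thread and waiting on the BQL rendezvous. */
      #include <assert.h>
      #include <stdint.h>

      typedef void TimerCB(int *fired);

      typedef struct {
          int64_t expire_time;
          TimerCB *cb;
      } QEMUTimer;

      static int64_t icount_clock;  /* virtual time, advanced per instruction */

      static void timer_cb(int *fired) { (*fired)++; }

      /* Before the change: wake the I/O thread and wait for it.
       * After: run the expired timer right here, no cross-thread ping-pong. */
      static void handle_icount_deadline(QEMUTimer *t, int *fired)
      {
          if (icount_clock >= t->expire_time) {
              t->cb(fired);   /* runs synchronously in the vCPU thread */
          }
      }

      int main(void)
      {
          int fired = 0;
          QEMUTimer t = { .expire_time = 100, .cb = timer_cb };

          icount_clock = 99;
          handle_icount_deadline(&t, &fired);
          assert(fired == 0);           /* deadline not reached yet */

          icount_clock = 100;           /* instruction count hits deadline */
          handle_icount_deadline(&t, &fired);
          assert(fired == 1);           /* timer ran in this thread */
          return 0;
      }
      ```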
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • cpus: define QEMUTimerListNotifyCB for QEMU system emulation · 3f53bc61
      Paolo Bonzini authored
      There is no change for now, because the callback just invokes
      qemu_notify_event.
      Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  9. 09 Mar, 2017 2 commits
  10. 03 Mar, 2017 4 commits
  11. 24 Feb, 2017 7 commits
    • tcg: handle EXCP_ATOMIC exception for system emulation · 08e73c48
      Pranith Kumar authored
      The patch enables handling atomic code in the guest. This would
      preferably be done in cpu_handle_exception(), but the current assumptions
      about when we can execute atomic sections cause a deadlock.
      
      The current mechanism discards the flags which were set in atomic
      execution. We ensure they are properly saved by calling the
      cc->cpu_exec_enter/leave() functions around the loop.
      
      As we are running cpu_exec_step_atomic() from the outermost loop, we
      need to avoid an abort() when single-stepping over atomic code, since
      the debug exception longjmp would land at the setjmp in cpu_exec(). We
      do this by installing a new jmp_env so that the exception jumps back
      here instead.
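      The jmp_env swap can be sketched with setjmp machinery. The struct and function names below are simplified stand-ins for QEMU's real CPUState and exception path.

      ```c
      /* Sketch of the jmp_env swap: the atomic step installs its own setjmp
       * target, so a debug exception raised inside it unwinds back here
       * rather than to the outer cpu_exec() loop (which would abort()). */
      #include <assert.h>
      #include <setjmp.h>

      typedef struct { sigjmp_buf jmp_env; } CPUState;

      static int exceptions_caught;

      static void raise_debug_exception(CPUState *cpu)
      {
          siglongjmp(cpu->jmp_env, 1);  /* unwinds to the current jmp_env */
      }

      static void cpu_exec_step_atomic(CPUState *cpu)
      {
          if (sigsetjmp(cpu->jmp_env, 0) == 0) {
              /* "execute" one atomic instruction that hits a breakpoint */
              raise_debug_exception(cpu);
          } else {
              exceptions_caught++;      /* landed here, not in cpu_exec() */
          }
      }

      int main(void)
      {
          CPUState cpu;
          cpu_exec_step_atomic(&cpu);
          assert(exceptions_caught == 1);
          return 0;
      }
      ```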
      Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
      [AJB: tweak title, merge with new patches, add mmap_lock]
      Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
      Reviewed-by: Richard Henderson <rth@twiddle.net>
      CC: Paolo Bonzini <pbonzini@redhat.com>
    • tcg: enable thread-per-vCPU · 37257942
      Alex Bennée authored
      There are a couple of changes that occur at the same time here:
      
        - introduce a single vCPU qemu_tcg_cpu_thread_fn
      
        One of these is spawned per vCPU with its own Thread and Condition
        variables. qemu_tcg_rr_cpu_thread_fn is the new name for the old
        single threaded function.
      
        - the TLS current_cpu variable is now live for the lifetime of MTTCG
          vCPU threads. This is for future work where async jobs need to know
          the vCPU context they are operating in.
      
      The user can now switch on multi-threaded behaviour and spawn a thread
      per vCPU. A simple kvm-unit-test like:
      
        ./arm/run ./arm/locking-test.flat -smp 4 -accel tcg,thread=multi
      
      will now use 4 vCPU threads and produce an expected FAIL (instead of an
      unexpected PASS), as the default mode of the test has no protection when
      incrementing a shared variable.
      
      We enable the parallel_cpus flag to ensure we generate correct barrier
      and atomic code if supported by the front and back ends. This doesn't
      automatically enable MTTCG until default_mttcg_enabled() is updated to
      check that the configuration is supported.
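      The thread-per-vCPU structure can be sketched with pthreads. The struct and function names echo the commit's, but the bodies are toy stand-ins; the real thread function runs the TCG exec loop.

      ```c
      /* Sketch of "one thread per vCPU": each vCPU gets its own thread, and
       * the thread-local current_cpu stays set for the thread's lifetime so
       * async jobs can find their vCPU context. */
      #include <assert.h>
      #include <pthread.h>

      #define NR_CPUS 4

      typedef struct { int index; pthread_t thread; int executed; } CPUState;

      static __thread CPUState *current_cpu;  /* live for thread lifetime */

      static void *qemu_tcg_cpu_thread_fn(void *arg)
      {
          current_cpu = arg;                  /* set once, stays valid */
          current_cpu->executed = 1;          /* stand-in for the exec loop */
          return NULL;
      }

      int main(void)
      {
          CPUState cpus[NR_CPUS];

          for (int i = 0; i < NR_CPUS; i++) {
              cpus[i].index = i;
              cpus[i].executed = 0;
              pthread_create(&cpus[i].thread, NULL,
                             qemu_tcg_cpu_thread_fn, &cpus[i]);
          }
          for (int i = 0; i < NR_CPUS; i++) {
              pthread_join(cpus[i].thread, NULL);
              assert(cpus[i].executed == 1);  /* every vCPU ran in its thread */
          }
          return 0;
      }
      ```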
      Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      [AJB: Some fixes, conditionally, commit rewording]
      Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
      Reviewed-by: Richard Henderson <rth@twiddle.net>
    • tcg: remove global exit_request · e5143e30
      Alex Bennée authored
      There are now only two uses of the global exit_request left.
      
      The first ensures we exit the run_loop when we first start to process
      pending work and in the kick handler. This is just as easily done by
      setting the first_cpu->exit_request flag.
      
      The second use is in the round-robin kick routine. The global
      exit_request ensured every vCPU would set its local exit_request and
      cause a full exit of the loop. Now that the iothread isn't held while
      running, we can rely on the kick handler to push us out as intended.
      
      We lightly refactor the main vCPU thread to ensure that cpu->exit_request
      causes us to exit the main loop and process any I/O requests that might
      come along. As a cpu->exit_request may legitimately get squashed
      while processing the EXCP_INTERRUPT exception, we also check
      cpu->queued_work_first to ensure queued work is expedited as soon as
      possible.
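      The per-CPU flag plus queued-work check can be modelled with a small loop. Everything below is an illustrative stand-in; QEMU's real exec loop and work queue are considerably richer.

      ```c
      /* Sketch: the kick handler sets a per-CPU exit_request, and the run
       * loop exits either on that flag or on pending queued work, so a
       * squashed exit_request cannot delay queued work indefinitely. */
      #include <assert.h>
      #include <stdatomic.h>
      #include <stdbool.h>

      typedef struct {
          atomic_bool exit_request;
          bool queued_work;   /* stand-in for cpu->queued_work_first */
      } CPUState;

      static void qemu_cpu_kick(CPUState *cpu)
      {
          atomic_store(&cpu->exit_request, true);
      }

      /* Returns the iteration at which the loop noticed it should exit. */
      static int run_loop(CPUState *cpu, int kick_at)
      {
          for (int i = 0; ; i++) {
              if (i == kick_at) {
                  qemu_cpu_kick(cpu);
              }
              /* exit on an explicit request or on pending queued work */
              if (atomic_load(&cpu->exit_request) || cpu->queued_work) {
                  atomic_store(&cpu->exit_request, false);
                  return i;
              }
          }
      }

      int main(void)
      {
          CPUState cpu = { .queued_work = false };
          atomic_init(&cpu.exit_request, false);

          assert(run_loop(&cpu, 3) == 3);  /* exits as soon as kicked */

          cpu.queued_work = true;          /* queued work also forces exit */
          assert(run_loop(&cpu, 100) == 0);
          return 0;
      }
      ```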
      Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
      Reviewed-by: Richard Henderson <rth@twiddle.net>
    • tcg: drop global lock during TCG code execution · 8d04fb55
      Jan Kiszka authored
      This finally allows TCG to benefit from the iothread introduction: Drop
      the global mutex while running pure TCG CPU code. Reacquire the lock
      when entering MMIO or PIO emulation, or when leaving the TCG loop.
      
      We have to revert a few optimizations for the current TCG threading
      model, namely kicking the TCG thread in qemu_mutex_lock_iothread and not
      kicking it in qemu_cpu_kick. We also need to disable RAM block
      reordering until we have a more efficient locking mechanism at hand.
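      The locking pattern described above can be sketched as follows. The function names follow the commit's description, but the bodies are toy stand-ins for QEMU's real BQL and MMIO paths.

      ```c
      /* Sketch: drop the BQL while running pure guest code, reacquire it
       * only for MMIO/PIO emulation or when leaving the TCG loop. */
      #include <assert.h>
      #include <pthread.h>
      #include <stdbool.h>

      static pthread_mutex_t qemu_global_mutex = PTHREAD_MUTEX_INITIALIZER;
      static bool bql_held;

      static void qemu_mutex_lock_iothread(void)
      {
          pthread_mutex_lock(&qemu_global_mutex);
          bql_held = true;
      }

      static void qemu_mutex_unlock_iothread(void)
      {
          bql_held = false;
          pthread_mutex_unlock(&qemu_global_mutex);
      }

      static int mmio_accesses;

      static void io_writex(void)        /* device access needs the BQL */
      {
          qemu_mutex_lock_iothread();
          assert(bql_held);
          mmio_accesses++;
          qemu_mutex_unlock_iothread();
      }

      static void tcg_cpu_exec(void)
      {
          assert(!bql_held);             /* pure TCG code runs lock-free */
          io_writex();                   /* ...until the guest hits a device */
      }

      int main(void)
      {
          qemu_mutex_lock_iothread();    /* main loop holds the BQL */
          qemu_mutex_unlock_iothread();  /* dropped before entering the guest */
          tcg_cpu_exec();
          assert(mmio_accesses == 1);
          return 0;
      }
      ```

      Lock-free guest execution is what lets the iothread run on a second core while the vCPU is busy, which the numbers below illustrate.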
      
      Still, a Linux x86 UP guest and my Musicpal ARM model boot fine here.
      These numbers demonstrate where we gain something:
      
      20338 jan       20   0  331m  75m 6904 R   99  0.9   0:50.95 qemu-system-arm
      20337 jan       20   0  331m  75m 6904 S   20  0.9   0:26.50 qemu-system-arm
      
      The guest CPU was fully loaded, but the iothread could still run mostly
      independently on a second core. Without the patch we don't get beyond:
      
      32206 jan       20   0  330m  73m 7036 R   82  0.9   1:06.00 qemu-system-arm
      32204 jan       20   0  330m  73m 7036 S   21  0.9   0:17.03 qemu-system-arm
      
      We don't benefit significantly, though, when the guest is not fully
      loading a host CPU.
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Message-Id: <1439220437-23957-10-git-send-email-fred.konrad@greensocs.com>
      [FK: Rebase, fix qemu_devices_reset deadlock, rm address_space_* mutex]
      Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
      [EGC: fixed iothread lock for cpu-exec IRQ handling]
      Signed-off-by: Emilio G. Cota <cota@braap.org>
      [AJB: -smp single-threaded fix, clean commit msg, BQL fixes]
      Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
      Reviewed-by: Richard Henderson <rth@twiddle.net>
      Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
      [PM: target-arm changes]
      Acked-by: Peter Maydell <peter.maydell@linaro.org>
    • tcg: rename tcg_current_cpu to tcg_current_rr_cpu · 791158d9
      Alex Bennée authored
      ...and make the definition local to cpus. In preparation for MTTCG, the
      concept of a global tcg_current_cpu will no longer make sense. However,
      we still need to keep track of it in the single-threaded case to be able
      to exit quickly when required.
      
      qemu_cpu_kick_no_halt() moves and becomes qemu_cpu_kick_rr_cpu() to
      emphasise its use case. qemu_cpu_kick now kicks the relevant cpu as
      well as qemu_kick_rr_cpu(), which will become a no-op in MTTCG.
      
      For the time being the setting of the global exit_request remains.
      Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
      Reviewed-by: Richard Henderson <rth@twiddle.net>
      Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
    • tcg: add kick timer for single-threaded vCPU emulation · 6546706d
      Alex Bennée authored
      Currently we rely on a side effect of the main loop grabbing the
      iothread_mutex to give long-running basic-block chains a kick and ensure
      the next vCPU is scheduled. As this code is being refactored and
      rationalised, we now do it explicitly here.
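      The explicit kick can be modelled as a deadline check and re-arm. The period constant and function names below are illustrative; QEMU's actual kick timer runs off a real clock source.

      ```c
      /* Sketch: a periodic timer interrupts long basic-block chains by
       * setting the running vCPU's exit_request, so the round-robin
       * scheduler gets a chance to run the next vCPU. */
      #include <assert.h>
      #include <stdint.h>

      #define TCG_KICK_PERIOD 10   /* illustrative period, arbitrary ticks */

      typedef struct { int exit_request; } CPUState;

      static int64_t kick_deadline;

      static void qemu_cpu_kick_rr_cpu(CPUState *cpu)
      {
          cpu->exit_request = 1;   /* pushes the vCPU out of its exec loop */
      }

      /* Called on each tick: kick once the deadline passes, then re-arm. */
      static void kick_tcg_thread(CPUState *cpu, int64_t now)
      {
          if (now >= kick_deadline) {
              qemu_cpu_kick_rr_cpu(cpu);
              kick_deadline = now + TCG_KICK_PERIOD;
          }
      }

      int main(void)
      {
          CPUState cpu = { 0 };
          kick_deadline = TCG_KICK_PERIOD;

          kick_tcg_thread(&cpu, 5);      /* before the deadline: no kick */
          assert(cpu.exit_request == 0);

          kick_tcg_thread(&cpu, 10);     /* deadline hit: vCPU is kicked */
          assert(cpu.exit_request == 1);
          assert(kick_deadline == 20);   /* timer re-armed */
          return 0;
      }
      ```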
      Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
      Reviewed-by: Richard Henderson <rth@twiddle.net>
      Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
    • tcg: add options for enabling MTTCG · 8d4e9146
      KONRAD Frederic authored
      We know there will be cases where MTTCG won't work until additional work
      is done in the front/back ends to support it. It will however be useful
      to be able to turn it on.
      
      As a result, MTTCG defaults to off unless the combination is supported,
      but the user can turn it on for the sake of testing.
      Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
      [AJB: move to -accel tcg,thread=multi|single, defaults]
      Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
      Reviewed-by: Richard Henderson <rth@twiddle.net>
  12. 16 Feb, 2017 1 commit
  13. 19 Jan, 2017 2 commits
  14. 31 Oct, 2016 5 commits
  15. 26 Oct, 2016 1 commit
  16. 29 Sep, 2016 1 commit
  17. 27 Sep, 2016 1 commit