  1. 06 Jun, 2016 1 commit
  2. 04 Mar, 2016 2 commits
  3. 01 Mar, 2016 1 commit
    • tcg: Add type for vCPU pointers · 1bcea73e
      Lluís Vilanova authored
      Adds the 'TCGv_env' type for pointers to 'CPUArchState' objects. The
      tracing infrastructure later needs to differentiate between regular
      pointers and pointers to vCPUs.
      
      Also changes all targets to use the new 'TCGv_env' type instead of the
      generic 'TCGv_ptr'. As of now, the change is merely cosmetic ('TCGv_env'
      translates into 'TCGv_ptr'), but that could change in the future to
      enforce the difference.
      
      Note that a 'TCGv_env' type (for 'CPUState') is not added, since all
      helpers currently receive the architecture-specific
      pointer ('CPUArchState').
      Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
      Acked-by: Richard Henderson <rth@twiddle.net>
      Message-id: 145641859552.30295.7821536833590725201.stgit@localhost
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
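The purely cosmetic aliasing described above can be sketched as follows; the struct tag and helper name here are hypothetical stand-ins, not QEMU's own definitions, but they show how every use site gets tagged today so the alias can become a distinct type later:

```c
#include <assert.h>

/* TCGv_env is currently just another name for TCGv_ptr, so the two are
 * interchangeable, but call sites are now annotated for a future
 * stricter definition. Names below are illustrative stand-ins. */
typedef struct TCGvPtrTag *TCGv_ptr;   /* generic host-pointer value */
#define TCGv_env TCGv_ptr              /* cosmetic alias, per the commit */

/* A helper that nominally wants the vCPU env pointer. */
static int takes_env(TCGv_env env)
{
    return env != 0;
}
```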
  4. 07 Oct, 2015 1 commit
  5. 14 Sep, 2015 2 commits
  6. 08 Sep, 2015 1 commit
  7. 06 Jul, 2015 1 commit
    • target-arm: Split DISAS_YIELD from DISAS_WFE · 049e24a1
      Peter Maydell authored
      Currently we use DISAS_WFE for both WFE and YIELD instructions.
      This is functionally correct because at the moment both of them
      are implemented as "yield this CPU back to the top level loop so
      another CPU has a chance to run". However it's rather confusing
      that YIELD ends up calling HELPER(wfe), and if we ever want to
      implement real behaviour for WFE and SEV it's likely to trip us up.
      
      Split out the yield codepath to use DISAS_YIELD and a new
      HELPER(yield) function, and have HELPER(wfe) call HELPER(yield).
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Message-id: 1435672316-3311-2-git-send-email-peter.maydell@linaro.org
      Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
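The delegation described above can be sketched as below. The function names mimic QEMU's HELPER() pattern, but the bodies and the yield_count bookkeeping are illustrative only, standing in for the real exit back to the top-level loop:

```c
#include <assert.h>

/* YIELD gets its own helper and WFE delegates to it, so the "give up
 * the CPU" behaviour lives in one place until WFE grows real
 * semantics. Illustrative sketch, not QEMU's actual helpers. */
static int yield_count;

static void helper_yield(void)
{
    yield_count++;   /* stand-in for exiting to the main execution loop */
}

static void helper_wfe(void)
{
    /* Until real WFE/SEV behaviour exists, WFE just acts as YIELD. */
    helper_yield();
}
```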
  8. 29 May, 2015 2 commits
  9. 13 Mar, 2015 1 commit
    • tcg: Change translator-side labels to a pointer · 42a268c2
      Richard Henderson authored
      This is improved type checking for the translators -- it's no longer
      possible to accidentally swap arguments to the branch functions.
      
      Note that the code generating backends still manipulate labels as int.
      
      With notable exceptions, the scope of the change is just a few lines
      for each target, so it's not worth building extra machinery to do this
      change in per-target increments.
      
      Cc: Peter Maydell <peter.maydell@linaro.org>
      Cc: Edgar E. Iglesias <edgar.iglesias@gmail.com>
      Cc: Michael Walle <michael@walle.cc>
      Cc: Leon Alrae <leon.alrae@imgtec.com>
      Cc: Anthony Green <green@moxielogic.com>
      Cc: Jia Liu <proljc@gmail.com>
      Cc: Alexander Graf <agraf@suse.de>
      Cc: Aurelien Jarno <aurelien@aurel32.net>
      Cc: Blue Swirl <blauwirbel@gmail.com>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
      Signed-off-by: Richard Henderson <rth@twiddle.net>
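The type-checking win described above can be sketched like this: once labels are an opaque pointer type rather than a bare int, passing a label where a register index is expected (or vice versa) no longer compiles. The struct layout and function names are illustrative stand-ins:

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal sketch of pointer-typed translator labels. */
typedef struct TCGLabel {
    int id;     /* backend still sees labels as small integers */
    int refs;   /* how many branches target this label */
} TCGLabel;

static TCGLabel *gen_new_label(void)
{
    static int next_id;
    TCGLabel *l = calloc(1, sizeof(*l));
    l->id = next_id++;
    return l;
}

/* Branch emitter: the label argument is now a distinct type from the
 * int operands, so swapping arguments is a compile error rather than
 * a silent bug. */
static void gen_brcond(int arg1, int arg2, TCGLabel *l)
{
    (void)arg1;
    (void)arg2;
    l->refs++;
}
```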
  10. 05 Feb, 2015 1 commit
    • target-arm: Define correct mmu_idx values and pass them in TB flags · c1e37810
      Peter Maydell authored
      We currently claim that for ARM the mmu_idx should simply be the current
      exception level. However this isn't actually correct -- secure EL0 and EL1
      should have separate indexes from non-secure EL0 and EL1 since their
      VA->PA mappings may differ. We also will want an index for stage 2
      translations when we properly support EL2.
      
      Define and document all seven mmu index values that we require, and
      pass the mmu index in the TB flags rather than exception level or
      priv/user bit.
      
      This change doesn't update the get_phys_addr() code, so our page
      table walking still assumes a simplistic "user or priv?" model for
      the moment.
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
      ---
      This leaves some odd gaps in the TB flags usage. I will circle
      back and clean this up later (including moving the other common
      flags like the singlestep ones to the top of the flags word),
      but I didn't want to bloat this patchseries further.
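Carrying the MMU index in the TB flags word, as described above, can be sketched as a small bitfield: seven index values need three bits. The field position and enum names here are hypothetical, not the actual ARM layout:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical set of seven MMU indexes: secure and non-secure EL0/EL1
 * get separate entries, plus EL2, EL3 and a stage-2 index. */
enum {
    MMU_IDX_NS_EL0, MMU_IDX_NS_EL1, MMU_IDX_S_EL0, MMU_IDX_S_EL1,
    MMU_IDX_EL2, MMU_IDX_EL3, MMU_IDX_STAGE2,   /* seven in total */
};

#define TB_FLAG_MMUIDX_SHIFT 0
#define TB_FLAG_MMUIDX_MASK  (7u << TB_FLAG_MMUIDX_SHIFT)

static uint32_t tb_flags_set_mmuidx(uint32_t flags, unsigned idx)
{
    return (flags & ~TB_FLAG_MMUIDX_MASK) | (idx << TB_FLAG_MMUIDX_SHIFT);
}

static unsigned tb_flags_get_mmuidx(uint32_t flags)
{
    return (flags & TB_FLAG_MMUIDX_MASK) >> TB_FLAG_MMUIDX_SHIFT;
}
```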
  11. 11 Dec, 2014 1 commit
  12. 24 Oct, 2014 2 commits
  13. 29 Sep, 2014 1 commit
    • target-arm: Don't handle c15_cpar changes via tb_flush() · c0f4af17
      Peter Maydell authored
      At the moment we try to handle c15_cpar with the strategy of:
       * emit generated code which makes assumptions about its value
       * when the register value changes call tb_flush() to throw
         away the now-invalid generated code
      This works because XScale CPUs are always uniprocessor, but
      it's confusing because it suggests that the same approach can
      be taken for other registers. It also means we do a tb_flush()
      on CPU reset, which makes multithreaded linux-user binaries
      even more likely to fail than would otherwise be the case.
      
      Replace it with a combination of TB flags for the access
      checks done on cp0/cp1 for the XScale and iwMMXt instructions,
      plus a runtime check for cp2..cp13 coprocessor accesses.
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Message-id: 1411056959-23070-1-git-send-email-peter.maydell@linaro.org
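The runtime check described above can be sketched as follows: instead of baking the CPAR value into generated code and flushing TBs when it changes, accesses to cp2..cp13 test the register at execution time. The one-bit-per-coprocessor layout mirrors XScale's CPAR; the function name is an illustrative stand-in:

```c
#include <assert.h>
#include <stdint.h>

/* Runtime coprocessor-access check: CPAR bit N enables coprocessor N. */
static int cpar_allows(uint32_t cpar, unsigned cpnum)
{
    return (cpar >> cpnum) & 1;
}
```

A helper like this runs on every cp2..cp13 access, so changing CPAR never invalidates already-generated code.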
  14. 19 Aug, 2014 1 commit
  15. 27 May, 2014 2 commits
  16. 17 Apr, 2014 4 commits
    • target-arm: Dump 32-bit CPU state if 64 bit CPU is in AArch32 · 17731115
      Peter Maydell authored
      For system mode, we may have a 64 bit CPU which is currently executing
      in AArch32 state; if we're dumping CPU state to the logs we should
      therefore show the correct state for the current execution state,
      rather than hardwiring it based on the type of the CPU. For consistency
      with how we handle translation, we leave the 32 bit dump function
      as the default, and have it hand off control to the 64 bit dump code
      if we're in AArch64 mode.
      Reported-by: Rob Herring <rob.herring@linaro.org>
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    • target-arm: A64: Add assertion that FP access was checked · 90e49638
      Peter Maydell authored
      Because unallocated encodings generate different exception syndrome
      information from traps due to FP being disabled, we can't do a single
      "is fp access disabled" check at a high level in the decode tree.
      To help in catching bugs where the access check was forgotten in some
      code path, we set this flag when the access check is done, and assert
      that it is set at the point where we actually touch the FP regs.
      
      This requires us to pass the DisasContext to the vec_reg_offset
      and fp_reg_offset functions.
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
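The debugging aid described above can be sketched like this: the access-check helper records that it ran, and the register-offset helpers assert on that record, so any decode path touching the FP registers without checking first trips immediately. Names are illustrative stand-ins for QEMU's DisasContext machinery:

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal stand-in for the per-instruction translation context. */
typedef struct DisasContext {
    bool fp_access_checked;
} DisasContext;

static void fp_access_check(DisasContext *s)
{
    /* ... the CPACR.FPEN trap check would be emitted here ... */
    s->fp_access_checked = true;
}

static int fp_reg_offset(DisasContext *s, int regno)
{
    assert(s->fp_access_checked);  /* catch forgotten checks early */
    return regno * 16;             /* placeholder offset calculation */
}
```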
    • target-arm: A64: Correctly fault FP/Neon if CPACR.FPEN set · 8c6afa6a
      Peter Maydell authored
      For the A64 instruction set, the only FP/Neon disable trap
      comes from the CPACR FPEN bits, which may indicate "enabled", "disabled"
      or "disabled for EL0". Add a bit to the AArch64 tb flags indicating
      whether FP/Neon access is currently enabled and make the decoder
      emit code to raise exceptions on use of FP/Neon insns if it is not.
      
      We use a new flag in DisasContext rather than borrowing the
      existing vfp_enabled flag because the A32/T32 decoder is going
      to need both.
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Acked-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
      ---
      I'm aware this is a rather hard to review patch; sorry.
      I have done an exhaustive check that we have fp access checks
      in all code paths with the aid of the assertions added in the
      next patch plus the code-coverage hack patch I posted to the
      list earlier.
      
      This patch is correct as of
      09e03735 target-arm: A64: Add saturating accumulate ops (USQADD/SUQADD)
      which was the last of the Neon insns to be added, so assuming
      no refactoring of the code it should be fine.
    • target-arm: Add support for generating exceptions with syndrome information · d4a2dc67
      Peter Maydell authored
      Add new helpers exception_with_syndrome (for generating an exception
      with syndrome information) and exception_uncategorized (for generating
      an exception with "Unknown or Uncategorized Reason", which has a syndrome
      register value of zero), and use them to generate the correct syndrome
      information for exceptions which are raised directly from generated code.
      
      This patch includes moving the A32/T32 gen_exception_insn functions
      further up in the source file; they will be needed for "VFP/Neon disabled"
      exception generation later.
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
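Building syndrome values as described above can be sketched like this. In ARMv8 the exception class (EC) field sits in bits [31:26] of the syndrome register, with the instruction-length (IL) bit at [25]; the helper names and the EC value used in the test are illustrative. "Unknown or Uncategorized" is EC 0 with no further detail, i.e. a syndrome of zero per the commit message:

```c
#include <assert.h>
#include <stdint.h>

#define ARM_EL_EC_SHIFT 26
#define ARM_EL_IL       (1u << 25)   /* 32-bit instruction length bit */

/* Compose a syndrome from an exception class and class-specific ISS. */
static uint32_t syn_build(unsigned ec, uint32_t iss)
{
    return ((uint32_t)ec << ARM_EL_EC_SHIFT) | ARM_EL_IL | iss;
}

static uint32_t syn_uncategorized(void)
{
    return 0;   /* EC 0, no ISS: "Unknown or Uncategorized Reason" */
}
```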
  17. 17 Mar, 2014 1 commit
    • target-arm: A64: Implement PMULL instruction · a984e42c
      Peter Maydell authored
      Implement the PMULL instruction; this is the last unimplemented insn
      in the three-reg-diff group.
      
      Note that PMULL with size 3 is considered part of the AES part
      of the crypto extensions (see the ID_AA64ISAR0_EL1 register definition
      in the v8 ARM ARM), so it isn't necessary to burn an extra feature
      bit on it, even though we're using more feature bits than a single
      "crypto extension present/not present" toggle.
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Richard Henderson <rth@twiddle.net>
      Message-id: 1394822294-14837-2-git-send-email-peter.maydell@linaro.org
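The polynomial (carry-less) multiply that PMULL performs can be sketched per lane: each set bit of one operand XORs in a shifted copy of the other, with no carries between bit positions. This 8x8 -> 16 bit version models one narrow lane; PMULL with size 3 applies the same operation over 64-bit lanes:

```c
#include <assert.h>
#include <stdint.h>

/* Carry-less (GF(2) polynomial) multiply of two 8-bit values. */
static uint16_t pmull8(uint8_t a, uint8_t b)
{
    uint16_t result = 0;
    for (int i = 0; i < 8; i++) {
        if (b & (1u << i)) {
            result ^= (uint16_t)a << i;   /* XOR, not add: no carries */
        }
    }
    return result;
}
```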
  18. 10 Mar, 2014 1 commit
  19. 07 Jan, 2014 1 commit
  20. 17 Dec, 2013 3 commits
  21. 10 Sep, 2013 4 commits