path: root/src/long_mode
Age         Commit message  (Author)
2023-07-09  restructuring of hotpath code, not worse but not better  (iximeow)
2023-07-08  consistently report end of prefixes/start of opcode  (iximeow)
2023-07-08  todo for 2.x  (iximeow)
2023-07-08  seems like this makes things a bit faster...?  (iximeow)
2023-07-08  move rip-rel check to a slightly colder spot...  (iximeow)
2023-07-08  actually reject lock prefixes in vex instructions  (iximeow)
2023-07-08  fix v(p)gather situations, get vex tests passing again  (iximeow)
2023-07-06  defer assigning mem_size or operand_count too  (iximeow)
2023-07-06  M_Gv should be unreachable too...  (iximeow)
2023-07-06  defer initial assignment of regs and operands as much as possible  (iximeow)
not a huge improvement, but something
2023-07-05  fix operand handling for the psl/psr family of xmm shifts/rotates  (iximeow)
these instructions ignored rex bits even for xmm registers, which is incorrect (so says xed)
2023-07-05  re-correct operand order of movdq2q  (iximeow)
2023-07-04  more read_E hoisting  (iximeow)
2023-07-04  incidental cleanup, see if inlining in evex helps/hurts (it hurts)  (iximeow)
2023-07-04  fix xbegin/xend (broken in DecodeCtx::rrr)  (iximeow)
2023-07-04  finally delete top-level modrm (50.10cpi, 2322ms)  (iximeow)
2023-07-04  begin project to hoist all read_E (perf better again! 50.21cpi)  (iximeow)
2023-07-04  fix f6 test imm lengths (perf regression :( )  (iximeow)
2023-07-04  new high score 49.89cpi (2259ms)  (iximeow)
vex/rex prefix cleanup, finally profitable to inline read_0f*_opcode
2023-07-04  more read_E cleanup  (iximeow)
2023-07-04  new struct for temporary decode context (prefix management)  (iximeow)
2023-07-04  new record: 50.56cpi (2290ms)  (iximeow)
2023-07-04  new perf record: 50.79cpi (2316ms)  (iximeow)
2023-07-04  best: 54.3cpi (2512ms)  (iximeow)
2023-07-04  new perf record: 51.88cpi (2363ms)  (iximeow)
2023-07-04  wip  (iximeow)
2023-07-04  more micro-opts...  (iximeow)
set_embedded_instructions was unnecessarily applied to many operand codes; this was never a correctness issue, but meant many operand decodings took a few more instructions than necessary to do nothing.

setting all registers to `rax` is unnecessary; only the first register's defaulting to `rax` has any effect. this allows for not using a movabs to load initial rax state.

adjust vex decoder inlining. this will be followed up by some cleanup for vex operand codes.
2023-07-04  move some unlikely checks behind a branch that implies their possibility  (iximeow)
slightly fewer (perfectly predicted anyway) branches this way
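a rough sketch of the pattern, with hypothetical prefix flags and checks rather than the actual decoder code: the rare validity checks only run once a single cheap branch says any unusual prefix exists at all.

```rust
// hypothetical prefix state, for illustration only
struct Prefixes {
    lock: bool,
    rep: bool,
}

// before: every decode pays for both rare checks unconditionally
fn validate_flat(p: &Prefixes, lockable: bool, repable: bool) -> Result<(), &'static str> {
    if p.lock && !lockable {
        return Err("invalid lock prefix");
    }
    if p.rep && !repable {
        return Err("invalid rep prefix");
    }
    Ok(())
}

// after: one (almost always not-taken) branch guards both rare checks
fn validate_nested(p: &Prefixes, lockable: bool, repable: bool) -> Result<(), &'static str> {
    if p.lock || p.rep {
        if p.lock && !lockable {
            return Err("invalid lock prefix");
        }
        if p.rep && !repable {
            return Err("invalid rep prefix");
        }
    }
    Ok(())
}

fn main() {
    let p = Prefixes { lock: true, rep: false };
    assert_eq!(validate_flat(&p, false, true), validate_nested(&p, false, true));
}
```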
2023-07-04  fidget with read_E inlining AGAIN  (iximeow)
2023-07-04  make operandcode 16b again  (iximeow)
2023-07-04  line up Opcode values for simple translation from opc bytes  (iximeow)
2023-07-04  fixup: handle mnemonic ordering too  (iximeow)
2023-07-04  avoid committing values to instructions until necessary, likely opc tweaks  (iximeow)
2023-07-04  make base opcode map translation a bit simpler  (iximeow)
now the bits line up with enum variants directly (hopefully..)
2023-07-04  store non-rex expected bank when first witnessing operand size prefix  (iximeow)
the expectation here is that we can set a default `vqp_size` pretty cheaply (Prefixes::new is one store, on x86_64 anyway...). then, when we see an `operand_size` prefix, it's rare enough that we can pay a little extra to speculate on the *likely* implication and update some state accordingly (`vqp_size` is *probably* going to be 2 because of it). the cases where `vqp_size` would go unused and this was wasted effort are relatively rare.

on the other hand, we can't profitably give `rex` this treatment: `rex.w` would set `vqp_size` to `qword`, but rex-prefixed instructions are so often byte-size registers that updating `vqp_size` (conditionally, no less) is only break-even. so, keep a check for `rex.w` at the use site, where it's only a choice between `qword` and `whatever-size-a-non-rex.w-prefixed-instruction-would-be-sized`, which has been kept up to date by speculation when detecting `operand_size`.
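a minimal sketch of that speculation, with hypothetical field names and sizes (in bytes: 2 = word, 4 = dword, 8 = qword) rather than the crate's actual `Prefixes` internals:

```rust
// hypothetical layout, for illustration only
struct Prefixes {
    rex_w: bool,
    vqp_size: u8,
}

impl Prefixes {
    fn new() -> Self {
        // one cheap store up front: assume the common dword case
        Prefixes { rex_w: false, vqp_size: 4 }
    }

    fn see_operand_size_prefix(&mut self) {
        // 66 is rare enough that paying a little here to speculate on the
        // likely implication (word-sized operands) is worthwhile
        self.vqp_size = 2;
    }

    fn effective_vqp_size(&self) -> u8 {
        // rex.w is checked at the use site instead of eagerly updating
        // `vqp_size`: rex-prefixed instructions are often byte-sized, so an
        // eager (conditional) update would only break even
        if self.rex_w { 8 } else { self.vqp_size }
    }
}

fn main() {
    let mut p = Prefixes::new();
    p.see_operand_size_prefix();
    assert_eq!(p.effective_vqp_size(), 2);
    p.rex_w = true;
    assert_eq!(p.effective_vqp_size(), 8);
}
```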
2023-07-04  fix some dancing between bank size and RegisterBank enum values  (iximeow)
in the process, fixed a decoding bug dealing with a0/a1/a2/a3 movs (respected rex.b when rex.b should have been ignored).

this seems to maybe improve runtime ever so slightly, but this is really meant as a cleanup commit more than anything.
2023-07-04  pick useful numeric values for RegisterBank  (iximeow)
these coincidentally have the general-purpose banks (rB excepted) matching their size in bytes
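an illustrative sketch of the trick, not the crate's actual RegisterBank definition: the general-purpose banks get discriminants equal to their register width in bytes, so a size query is just a cast. a separate rex-byte bank ("rB") would also be one byte wide and can't share the value 1, hence "rB excepted".

```rust
// hypothetical variant set and values, for illustration only
#[repr(u8)]
#[derive(Copy, Clone)]
enum RegisterBank {
    B = 1, // al, cl, dl, bl, ah, ch, dh, bh
    W = 2, // ax, cx, ...
    D = 4, // eax, ecx, ...
    Q = 8, // rax, rcx, ...
}

fn width_bytes(bank: RegisterBank) -> u8 {
    // no table and no match: the enum value is the width
    bank as u8
}

fn main() {
    assert_eq!(width_bytes(RegisterBank::B), 1);
    assert_eq!(width_bytes(RegisterBank::W), 2);
    assert_eq!(width_bytes(RegisterBank::D), 4);
    assert_eq!(width_bytes(RegisterBank::Q), 8);
}
```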
2023-07-04  OperandCode as a u16 caused gross movzwl, this seems just a bit better  (iximeow)
2023-07-04  try slimming down read_opc_hotpath more  (iximeow)
2023-03-05  add `Opcode::is_jcc`, `Opcode::is_setcc`, and `Opcode::is_cmovcc` helpers  (iximeow)
this request/suggestion comes from [github](https://github.com/iximeow/yaxpeax-x86/issues/29)! thank you!
2023-02-19  deprecate `pub fn cs`, which is an intensely embarrassing bug of a function  (iximeow)
unlike every other function to test if a particular selector is picked by prefixes, `Prefixes::cs` does not return bool, nor does it check the currently-selected prefix. instead, it modifies the decoded `Prefixes` to set the current prefix to `cs`. this has been a bug all the way since 0.0.1 was released.

the function now does nothing, and is marked deprecated. in a future 2.x release, the function will be changed to return `bool` and be in line with other segment selector-checking functions. in the meantime, a new `Prefixes::selects_cs()` does the correct thing.

thank you to @meithecatte who pointed this out in https://github.com/iximeow/yaxpeax-x86/issues/28!
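a sketch of the shape of the bug, using hypothetical fields rather than the real `Prefixes` layout: the old `cs` read like a predicate but actually mutated the prefixes, while `selects_cs` is the honest query the other selectors already had.

```rust
// hypothetical segment/prefix state, for illustration only
#[derive(Copy, Clone, PartialEq)]
enum Segment { DS, CS }

struct Prefixes {
    segment: Segment,
}

impl Prefixes {
    // what the buggy method effectively did before it was neutered
    #[allow(dead_code)]
    #[deprecated(note = "sets cs rather than testing for it; use selects_cs()")]
    fn cs(&mut self) {
        self.segment = Segment::CS;
    }

    // the corrected query, in line with the other selector checks
    fn selects_cs(&self) -> bool {
        self.segment == Segment::CS
    }
}

fn main() {
    let mut p = Prefixes { segment: Segment::DS };
    assert!(!p.selects_cs());
    p.segment = Segment::CS;
    assert!(p.selects_cs());
}
```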
2022-12-03  fix incorrect rex selection and field description offsets  (iximeow)
2022-12-03  66 prefixes are common, 0f opcodes are common  (iximeow)
2022-12-03  support a fast path through the decoder for [rex-prefixed] opcode insts  (iximeow)
the overwhelming majority of x86 instructions are either a single-byte opcode or a single-byte opcode with a rex prefix. supporting these specially means that we don't have to length-check on every byte or go through the full decode loop while reading the most likely instructions.

this is a significant improvement on typical x86 streams, but comes at a moderate penalty for crafted x86 instructions. the penalty is still not very bad, as the fast path is exited in favor of the full decode loop as soon as we see a non-rex prefix byte; this adds maybe a dozen instructions to the slow path.
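a simplified sketch of the fast-path idea, with hypothetical helpers and types rather than yaxpeax-x86's real decoder structure: recognize exactly `[rex] opcode` up front, and bail to the general prefix/decode loop for anything else.

```rust
fn is_rex(b: u8) -> bool {
    (0x40..=0x4f).contains(&b)
}

fn needs_slow_path(b: u8) -> bool {
    // other prefixes, escape bytes, and vex/evex bytes the fast path does not handle
    matches!(
        b,
        0x0f | 0x66 | 0x67 | 0xf0 | 0xf2 | 0xf3 | 0x2e | 0x36 | 0x3e | 0x26 | 0x64 | 0x65
            | 0x62 | 0xc4 | 0xc5
    )
}

enum FastPath {
    // (rex prefix if present, one-byte opcode, bytes consumed so far)
    Hit(Option<u8>, u8, usize),
    // fall back to the full prefix loop with length checks on every byte
    Slow,
}

fn try_fast_path(bytes: &[u8]) -> FastPath {
    match bytes {
        // rex + one-byte opcode: the overwhelmingly common case
        &[rex, opc, ..] if is_rex(rex) && !is_rex(opc) && !needs_slow_path(opc) => {
            FastPath::Hit(Some(rex), opc, 2)
        }
        // bare one-byte opcode
        &[opc, ..] if !is_rex(opc) && !needs_slow_path(opc) => FastPath::Hit(None, opc, 1),
        // anything else (66/67/f0/f2/f3, segment overrides, 0f escapes,
        // vex/evex, truncated input, ...) takes the full decode loop
        _ => FastPath::Slow,
    }
}

fn main() {
    // 48 89 c8 (mov rax, rcx): rex.w + one-byte opcode, fast path applies
    assert!(matches!(
        try_fast_path(&[0x48, 0x89, 0xc8]),
        FastPath::Hit(Some(0x48), 0x89, 2)
    ));
    // 66 90: an operand-size prefix forces the general loop
    assert!(matches!(try_fast_path(&[0x66, 0x90]), FastPath::Slow));
}
```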
2022-12-03  just a bit more code motion that seemed to help things sometimes  (iximeow)
2022-12-03  reorder prefix checks, extract vex/evex prefix handling  (iximeow)
sharing vex/evex invalid prefix checks improves codegen a bit, but ordering prefix checks by likeliest prefix first reduces time falling through prefix handling arms. both together are a notable improvement in throughput on typical x86 code.

bundled in here is some code motion to where `mem_size = 0` and `operand_count = 2` are executed; this is because, at least on zen2 and cascade lake parts, bunching all stores to the instruction together caused small stalls getting into the decoder. spreading out stores seems to mix these assignments with parts of code that were not using memory anyway, and pipelines better.
2022-12-03  move opcode lookup tables into const arrays  (iximeow)
cleanliness, but also slightly better codegen somehow?
2022-12-03  replace size lookup logic with a LUT  (iximeow)
the match compiled into some indirect branch awfulness!! no thank you
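an illustrative sketch of the transformation, with a hypothetical (and much smaller) table than the crate's real one: the commit reports the original size `match` compiled to an indirect branch, while indexing a const array is a single load.

```rust
// hypothetical size table, for illustration only
const MEM_SIZES: [u8; 4] = [1, 2, 4, 8];

#[inline]
fn mem_size_lut(code: u8) -> u8 {
    // masking keeps the index in range, so the bounds check folds away
    MEM_SIZES[(code & 0b11) as usize]
}

// the branchy formulation the LUT replaces
fn mem_size_match(code: u8) -> u8 {
    match code & 0b11 {
        0 => 1,
        1 => 2,
        2 => 4,
        _ => 8,
    }
}

fn main() {
    for code in 0u8..=255 {
        assert_eq!(mem_size_lut(code), mem_size_match(code));
    }
}
```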
2022-09-23  Fix some typos.  (Bruce Mitchener)
2022-05-07  more annotation fixes?  (iximeow)