Intel® Xeon® Processor Scalable Family based on Skylake microarchitecture
This section provides a reference for the hardware events that can be monitored for the CPU(s):
Event Name Description Additional Info EventType
CORE CoreOnly
INST_RETIRED.ANY Counts the number of instructions retired from execution. For instructions that consist of multiple micro-ops, this event counts the retirement of the last micro-op of the instruction. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events. INST_RETIRED.ANY_P is counted by a programmable counter and is an architectural performance event. Counting: Faulting executions of GETSEC/VM entry/VM exit/MWAIT will not count as retired instructions. IA32_FIXED_CTR0
Architectural, Fixed
CoreOnly
CPU_CLK_UNHALTED.THREAD Counts the number of core cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. This event is a component in many key event ratios. The core frequency may change from time to time due to transitions associated with Enhanced Intel SpeedStep Technology or TM2. For this reason this event may have a changing ratio with regards to time. When the core frequency is constant, this event can approximate elapsed time while the core was not in the halt state. It is counted on a dedicated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events. IA32_FIXED_CTR1
Architectural, Fixed
CoreOnly
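The two fixed-counter events above are the usual inputs for an instructions-per-cycle (IPC) ratio. A minimal sketch of the derived metric, assuming both counts were collected over the same interval (for example with Linux perf or RDPMC); the sample values are placeholders, not measurements:

    #include <stdio.h>

    int main(void)
    {
        /* Placeholder counts collected over one measurement interval. */
        unsigned long long inst_retired_any = 1200000000ULL; /* INST_RETIRED.ANY, IA32_FIXED_CTR0       */
        unsigned long long clk_unhalted     =  800000000ULL; /* CPU_CLK_UNHALTED.THREAD, IA32_FIXED_CTR1 */

        double ipc = (double)inst_retired_any / (double)clk_unhalted;
        printf("IPC = %.2f, CPI = %.2f\n", ipc, 1.0 / ipc);
        return 0;
    }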
CPU_CLK_UNHALTED.THREAD_ANY Core cycles when at least one thread on the physical core is not in halt state. IA32_FIXED_CTR1
Architectural, Fixed
CoreOnly
CPU_CLK_UNHALTED.REF_TSC Counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state. This event has a constant ratio with the CPU_CLK_UNHALTED.REF_XCLK event. It is counted on a dedicated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events. Note: On all current platforms this event stops counting during 'throttling (TM)' states and duty-off periods when the processor is 'halted'. Because the counter update is done at a lower clock rate than the core clock, the overflow status bit for this counter may appear 'sticky'. After the counter has overflowed, software clears the overflow status bit and resets the counter to less than MAX. The reset value is not clocked into the counter immediately, so the overflow status bit will flip 'high (1)' and generate another PMI (if enabled), after which the reset value gets clocked into the counter. Therefore, software will get the interrupt and read the overflow status bit as '1' (bit 34) while the counter value is less than MAX. Software should ignore this case. IA32_FIXED_CTR2
Architectural, Fixed
CoreOnly
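Because CPU_CLK_UNHALTED.REF_TSC increments at the TSC rate while the core is unhalted, the ratio of the two fixed counters gives the average unhalted core frequency. A sketch, assuming the TSC/base frequency is known (the 2.1 GHz value and the counts below are placeholders):

    #include <stdio.h>

    int main(void)
    {
        /* Placeholder counts from IA32_FIXED_CTR1 and IA32_FIXED_CTR2 over one interval. */
        unsigned long long clk_thread = 900000000ULL;  /* CPU_CLK_UNHALTED.THREAD  */
        unsigned long long ref_tsc    = 700000000ULL;  /* CPU_CLK_UNHALTED.REF_TSC */
        double tsc_hz = 2.1e9;                         /* assumed TSC/base frequency */

        /* Average frequency while unhalted = base frequency scaled by the cycle ratio. */
        double avg_hz = tsc_hz * (double)clk_thread / (double)ref_tsc;
        printf("average unhalted core frequency: %.2f GHz\n", avg_hz / 1e9);
        return 0;
    }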
BR_INST_RETIRED.ALL_BRANCHES Counts all (macro) branch instructions retired.
Errata: SKL091
EventSel=C4H UMask=00H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
Architectural
CoreOnly
BR_MISP_RETIRED.ALL_BRANCHES Counts all the retired branch instructions that were mispredicted by the processor. A branch misprediction occurs when the processor incorrectly predicts the destination of the branch. When the misprediction is discovered at execution, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path. EventSel=C5H UMask=00H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
Architectural
CoreOnly
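Together with BR_INST_RETIRED.ALL_BRANCHES above, this event gives the retired-branch misprediction rate. A small sketch with placeholder counts:

    #include <stdio.h>

    int main(void)
    {
        unsigned long long br_retired = 250000000ULL; /* BR_INST_RETIRED.ALL_BRANCHES */
        unsigned long long br_mispred =   3000000ULL; /* BR_MISP_RETIRED.ALL_BRANCHES */

        printf("branch misprediction rate: %.3f%%\n",
               100.0 * (double)br_mispred / (double)br_retired);
        return 0;
    }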
CPU_CLK_THREAD_UNHALTED.REF_XCLK Core crystal clock cycles when the thread is unhalted. EventSel=3CH UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
Architectural
CoreOnly
CPU_CLK_THREAD_UNHALTED.REF_XCLK_ANY Core crystal clock cycles when at least one thread on the physical core is unhalted. EventSel=3CH UMask=01H AnyThread=1
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
Architectural
CoreOnly
CPU_CLK_UNHALTED.REF_XCLK Core crystal clock cycles when the thread is unhalted. EventSel=3CH UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
Architectural
CoreOnly
CPU_CLK_UNHALTED.REF_XCLK_ANY Core crystal clock cycles when at least one thread on the physical core is unhalted. EventSel=3CH UMask=01H AnyThread=1
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
Architectural
CoreOnly
CPU_CLK_UNHALTED.RING0_TRANS Counts when the Current Privilege Level (CPL) transitions from ring 1, 2 or 3 to ring 0 (Kernel). EventSel=3CH UMask=00H EdgeDetect=1 CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
Architectural
CoreOnly
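The EventSel/UMask/CMask/EdgeDetect/Invert/AnyThread fields listed under Additional Info map directly onto the IA32_PERFEVTSELx register layout defined in the Intel SDM. A minimal sketch of that packing, shown for BR_INST_RETIRED.ALL_BRANCHES (EventSel=C4H, UMask=00H) and the edge-detected CPU_CLK_UNHALTED.RING0_TRANS encoding above; the USR/OS/EN settings are illustrative choices, not part of this table:

    #include <stdio.h>
    #include <stdint.h>

    /* Pack a general-purpose counter configuration into the IA32_PERFEVTSELx layout
     * (event select bits 7:0, unit mask 15:8, USR 16, OS 17, edge 18, any-thread 21,
     * enable 22, invert 23, counter mask 31:24). */
    static uint64_t perfevtsel(uint8_t event, uint8_t umask, int usr, int os,
                               int edge, int any, int inv, uint8_t cmask)
    {
        return (uint64_t)event
             | ((uint64_t)umask << 8)
             | ((uint64_t)(usr  != 0) << 16)
             | ((uint64_t)(os   != 0) << 17)
             | ((uint64_t)(edge != 0) << 18)
             | ((uint64_t)(any  != 0) << 21)
             | (1ULL << 22)                 /* EN */
             | ((uint64_t)(inv  != 0) << 23)
             | ((uint64_t)cmask << 24);
    }

    int main(void)
    {
        /* BR_INST_RETIRED.ALL_BRANCHES: EventSel=C4H, UMask=00H, user+kernel. */
        printf("ALL_BRANCHES: 0x%llx\n",
               (unsigned long long)perfevtsel(0xC4, 0x00, 1, 1, 0, 0, 0, 0));

        /* CPU_CLK_UNHALTED.RING0_TRANS: EventSel=3CH, UMask=00H, EdgeDetect=1, CMask=1. */
        printf("RING0_TRANS : 0x%llx\n",
               (unsigned long long)perfevtsel(0x3C, 0x00, 1, 1, 1, 0, 0, 1));
        return 0;
    }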
CPU_CLK_UNHALTED.THREAD_P This is an architectural event that counts the number of thread cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling. For this reason, this event may have a changing ratio with regards to wall clock time. EventSel=3CH UMask=00H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
Architectural
CoreOnly
CPU_CLK_UNHALTED.THREAD_P_ANY Core cycles when at least one thread on the physical core is not in halt state. EventSel=3CH UMask=00H AnyThread=1
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
Architectural
CoreOnly
INST_RETIRED.ANY_P Counts the number of instructions (EOMs) retired. Counting covers macro-fused instructions individually (that is, increments by two).
Errata: SKL091, SKL044
EventSel=C0H UMask=00H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
Architectural
CoreOnly
LONGEST_LAT_CACHE.MISS Counts core-originated cacheable requests that miss the L3 cache (Longest Latency cache). Requests include data and code reads, Reads-for-Ownership (RFOs), speculative accesses and hardware prefetches from L1 and L2. It does not include all misses to the L3.
Errata: SKL057
EventSel=2EH UMask=41H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
Architectural
CoreOnly
LONGEST_LAT_CACHE.REFERENCE Counts core-originated cacheable requests to the L3 cache (Longest Latency cache). Requests include data and code reads, Reads-for-Ownership (RFOs), speculative accesses and hardware prefetches from L1 and L2. It does not include all accesses to the L3.
Errata: SKL057
EventSel=2EH UMask=4FH
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
Architectural
CoreOnly
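LONGEST_LAT_CACHE.MISS and .REFERENCE together describe L3 behavior for core-originated cacheable requests; normalizing misses by retired instructions (MPKI) is the usual complement. A sketch with placeholder counts:

    #include <stdio.h>

    int main(void)
    {
        unsigned long long l3_ref   =   40000000ULL; /* LONGEST_LAT_CACHE.REFERENCE */
        unsigned long long l3_miss  =    5000000ULL; /* LONGEST_LAT_CACHE.MISS      */
        unsigned long long inst_ret = 2000000000ULL; /* INST_RETIRED.ANY            */

        printf("L3 miss rate: %.2f%%\n", 100.0 * (double)l3_miss / (double)l3_ref);
        printf("L3 MPKI     : %.2f\n",  1000.0 * (double)l3_miss / (double)inst_ret);
        return 0;
    }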
ARITH.DIVIDER_ACTIVE Cycles when the divide unit is busy executing divide or square root operations. Accounts for integer and floating-point operations. EventSel=14H UMask=01H CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
BACLEARS.ANY Counts the number of times the front-end is resteered when it finds a branch instruction in a fetch line. This occurs the first time a branch instruction is fetched or when the branch is not tracked by the BPU (Branch Prediction Unit) anymore. EventSel=E6H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
BR_INST_RETIRED.ALL_BRANCHES_PS This is a precise version of BR_INST_RETIRED.ALL_BRANCHES that counts all (macro) branch instructions retired.
Errata: SKL091
EventSel=C4H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
BR_INST_RETIRED.COND_NTAKEN This event counts not taken branch instructions retired.
Errata: SKL091
EventSel=C4H UMask=10H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
BR_INST_RETIRED.CONDITIONAL This event counts conditional branch instructions retired.
Errata: SKL091
EventSel=C4H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
BR_INST_RETIRED.CONDITIONAL_PS This is a precise version (that is, uses PEBS) of the event that counts conditional branch instructions retired.
Errata: SKL091
EventSel=C4H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
BR_INST_RETIRED.FAR_BRANCH This event counts far branch instructions retired.
Errata: SKL091
EventSel=C4H UMask=40H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
BR_INST_RETIRED.FAR_BRANCH_PS This is a precise version (that is, uses PEBS) of the event that counts far branch instructions retired.
Errata: SKL091
EventSel=C4H UMask=40H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
BR_INST_RETIRED.NEAR_CALL This event counts both direct and indirect near call instructions retired.
Errata: SKL091
EventSel=C4H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
BR_INST_RETIRED.NEAR_CALL_PS This is a precise version (that is, uses PEBS) of the event that counts both direct and indirect near call instructions retired.
Errata: SKL091
EventSel=C4H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
BR_INST_RETIRED.NEAR_RETURN This event counts return instructions retired.
Errata: SKL091
EventSel=C4H UMask=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
BR_INST_RETIRED.NEAR_RETURN_PS This is a precise version (that is, uses PEBS) of the event that counts return instructions retired.
Errata: SKL091
EventSel=C4H UMask=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
BR_INST_RETIRED.NEAR_TAKEN This event counts taken branch instructions retired.
Errata: SKL091
EventSel=C4H UMask=20H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
BR_INST_RETIRED.NEAR_TAKEN_PS This is a precise version (that is, uses PEBS) of the event that counts taken branch instructions retired.
Errata: SKL091
EventSel=C4H UMask=20H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
BR_INST_RETIRED.NOT_TAKEN This event counts not taken branch instructions retired.
Errata: SKL091
EventSel=C4H UMask=10H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
BR_MISP_EXEC.ALL_BRANCHES This event counts both taken and not taken speculative and retired mispredicted branch instructions. EventSel=89H UMask=FFH
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
BR_MISP_EXEC.INDIRECT Counts speculatively mispredicted indirect branches at execution time. Counts for indirect near CALL or JMP instructions (RET excluded). EventSel=89H UMask=E4H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
BR_MISP_RETIRED.ALL_BRANCHES_PS This is a precise version of BR_MISP_RETIRED.ALL_BRANCHES that counts all mispredicted macro branch instructions retired. EventSel=C5H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
BR_MISP_RETIRED.CONDITIONAL This event counts mispredicted conditional branch instructions retired. EventSel=C5H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
BR_MISP_RETIRED.CONDITIONAL_PS This is a precise version (that is, uses PEBS) of the event that counts mispredicted conditional branch instructions retired. EventSel=C5H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
BR_MISP_RETIRED.NEAR_CALL Counts both taken and not taken retired mispredicted direct and indirect near calls, including both register and memory indirect. EventSel=C5H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
BR_MISP_RETIRED.NEAR_CALL_PS This event counts both taken and not taken retired mispredicted direct and indirect near calls, including both register and memory indirect. EventSel=C5H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
BR_MISP_RETIRED.NEAR_TAKEN Number of near branch instructions retired that were mispredicted and taken. EventSel=C5H UMask=20H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
BR_MISP_RETIRED.NEAR_TAKEN_PS Number of near branch instructions retired that were mispredicted and taken. EventSel=C5H UMask=20H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
BR_MISP_RETIRED.RET This is a non-precise version (that is, does not use PEBS) of the event that counts mispredicted return instructions retired. EventSel=C5H UMask=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
BR_MISP_RETIRED.RET_PS This is a precise version (that is, uses PEBS) of the event that counts mispredicted return instructions retired. EventSel=C5H UMask=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
CORE_POWER.LVL0_TURBO_LICENSE Core cycles where the core was running with power-delivery for baseline license level 0. This includes non-AVX codes, SSE, AVX 128-bit, and low-current AVX 256-bit codes. EventSel=28H UMask=07H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
CORE_POWER.LVL1_TURBO_LICENSE Core cycles where the core was running with power-delivery for license level 1. This includes high current AVX 256-bit instructions as well as low current AVX 512-bit instructions. EventSel=28H UMask=18H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
CORE_POWER.LVL2_TURBO_LICENSE Core cycles where the core was running with power-delivery for license level 2 (introduced in Skylake Server microarchitecture). This includes high current AVX 512-bit instructions. EventSel=28H UMask=20H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
CORE_POWER.THROTTLE Core cycles the out-of-order engine was throttled due to a pending power level request. EventSel=28H UMask=40H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
CORE_SNOOP_RESPONSE.RSP_IFWDFE CORE_SNOOP_RESPONSE.RSP_IFWDFE EventSel=EFH UMask=20H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
CORE_SNOOP_RESPONSE.RSP_IFWDM CORE_SNOOP_RESPONSE.RSP_IFWDM EventSel=EFH UMask=10H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
CORE_SNOOP_RESPONSE.RSP_IHITFSE CORE_SNOOP_RESPONSE.RSP_IHITFSE EventSel=EFH UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
CORE_SNOOP_RESPONSE.RSP_IHITI CORE_SNOOP_RESPONSE.RSP_IHITI EventSel=EFH UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
CORE_SNOOP_RESPONSE.RSP_SFWDFE CORE_SNOOP_RESPONSE.RSP_SFWDFE EventSel=EFH UMask=40H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
CORE_SNOOP_RESPONSE.RSP_SFWDM CORE_SNOOP_RESPONSE.RSP_SFWDM EventSel=EFH UMask=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
CORE_SNOOP_RESPONSE.RSP_SHITFSE CORE_SNOOP_RESPONSE.RSP_SHITFSE EventSel=EFH UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
CPU_CLK_THREAD_UNHALTED.ONE_THREAD_ACTIVE Core crystal clock cycles when this thread is unhalted and the other thread is halted. EventSel=3CH UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE Core crystal clock cycles when this thread is unhalted and the other thread is halted. EventSel=3CH UMask=02H CMask=0
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
CYCLE_ACTIVITY.CYCLES_L1D_MISS Cycles while L1 cache miss demand load is outstanding. EventSel=A3H UMask=08H CMask=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
CYCLE_ACTIVITY.CYCLES_L2_MISS Cycles while L2 cache miss demand load is outstanding. EventSel=A3H UMask=01H CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
CYCLE_ACTIVITY.CYCLES_L3_MISS Cycles while L3 cache miss demand load is outstanding. EventSel=A3H UMask=02H CMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
CYCLE_ACTIVITY.CYCLES_MEM_ANY Cycles while memory subsystem has an outstanding load. EventSel=A3H UMask=10H CMask=10H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
CYCLE_ACTIVITY.STALLS_L1D_MISS Execution stalls while L1 cache miss demand load is outstanding. EventSel=A3H UMask=0CH CMask=0CH
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
CYCLE_ACTIVITY.STALLS_L2_MISS Execution stalls while L2 cache miss demand load is outstanding. EventSel=A3H UMask=05H CMask=05H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
CYCLE_ACTIVITY.STALLS_L3_MISS Execution stalls while L3 cache miss demand load is outstanding. EventSel=A3H UMask=06H CMask=06H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
CYCLE_ACTIVITY.STALLS_MEM_ANY Execution stalls while memory subsystem has an outstanding load. EventSel=A3H UMask=14H CMask=14H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
CoreOnly
CYCLE_ACTIVITY.STALLS_TOTAL Total execution stalls. EventSel=A3H UMask=04H CMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
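The CYCLE_ACTIVITY.STALLS_* events are commonly reported as fractions of total thread cycles to locate where execution stalls wait. A sketch of that breakdown with placeholder counts; the interpretation follows directly from the event definitions above:

    #include <stdio.h>

    int main(void)
    {
        unsigned long long cycles         = 1000000000ULL; /* CPU_CLK_UNHALTED.THREAD       */
        unsigned long long stalls_total   =  420000000ULL; /* CYCLE_ACTIVITY.STALLS_TOTAL   */
        unsigned long long stalls_mem_any =  300000000ULL; /* CYCLE_ACTIVITY.STALLS_MEM_ANY */
        unsigned long long stalls_l3_miss =  120000000ULL; /* CYCLE_ACTIVITY.STALLS_L3_MISS */

        printf("stalled cycles                     : %.1f%%\n", 100.0 * (double)stalls_total   / (double)cycles);
        printf("stalled with memory load outstanding: %.1f%%\n", 100.0 * (double)stalls_mem_any / (double)cycles);
        printf("stalled with L3 miss outstanding    : %.1f%%\n", 100.0 * (double)stalls_l3_miss / (double)cycles);
        return 0;
    }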
DSB2MITE_SWITCHES.COUNT This event counts the number of the Decode Stream Buffer (DSB)-to-MITE switches including all misses because of missing Decode Stream Buffer (DSB) cache and u-arch forced misses. Note: Invoking MITE requires two or three cycles delay. EventSel=ABH UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
DSB2MITE_SWITCHES.PENALTY_CYCLES Counts Decode Stream Buffer (DSB)-to-MITE switch true penalty cycles. These cycles do not include uops routed through because of the switch itself, for example, when Instruction Decode Queue (IDQ) pre-allocation is unavailable, or Instruction Decode Queue (IDQ) is full. DSB-to-MITE switch true penalty cycles happen after the merge mux (MM) receives Decode Stream Buffer (DSB) Sync-indication until receiving the first MITE uop. MM is placed before Instruction Decode Queue (IDQ) to merge uops being fed from the MITE and Decode Stream Buffer (DSB) paths. Decode Stream Buffer (DSB) inserts the Sync-indication whenever a Decode Stream Buffer (DSB)-to-MITE switch occurs. Penalty: A Decode Stream Buffer (DSB) hit followed by a Decode Stream Buffer (DSB) miss can cost up to six cycles in which no uops are delivered to the IDQ. Most often, such switches from the Decode Stream Buffer (DSB) to the legacy pipeline cost 0–2 cycles. EventSel=ABH UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
DTLB_LOAD_MISSES.MISS_CAUSES_A_WALK Counts demand data loads that caused a page walk of any page size (4K/2M/4M/1G). This implies it missed in all TLB levels, but the walk need not have completed. EventSel=08H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
DTLB_LOAD_MISSES.STLB_HIT Counts loads that miss the DTLB (Data TLB) and hit the STLB (Second level TLB). EventSel=08H UMask=20H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
DTLB_LOAD_MISSES.WALK_ACTIVE Counts cycles when at least one PMH (Page Miss Handler) is busy with a page walk for a load. EventSel=08H UMask=10H CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
DTLB_LOAD_MISSES.WALK_COMPLETED Counts completed page walks (all page sizes) caused by demand data loads. This implies it missed in the DTLB and further levels of TLB. The page walk can end with or without a fault. EventSel=08H UMask=0EH
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
DTLB_LOAD_MISSES.WALK_COMPLETED_1G Counts completed page walks (1G sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault. EventSel=08H UMask=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M Counts completed page walks (2M/4M sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault. EventSel=08H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
DTLB_LOAD_MISSES.WALK_COMPLETED_4K Counts completed page walks (4K sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault. EventSel=08H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
DTLB_LOAD_MISSES.WALK_PENDING Counts 1 per cycle for each PMH that is busy with a page walk for a load. EPT page walk durations are excluded in Skylake microarchitecture. EventSel=08H UMask=10H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
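WALK_PENDING, WALK_COMPLETED and WALK_ACTIVE combine into two common derived numbers: the average page-walk duration per completed load walk, and the fraction of cycles in which at least one load walk was in flight. A sketch with placeholder counts:

    #include <stdio.h>

    int main(void)
    {
        unsigned long long walk_pending   =  36000000ULL; /* DTLB_LOAD_MISSES.WALK_PENDING   */
        unsigned long long walk_completed =   1200000ULL; /* DTLB_LOAD_MISSES.WALK_COMPLETED */
        unsigned long long walk_active    =  30000000ULL; /* DTLB_LOAD_MISSES.WALK_ACTIVE    */
        unsigned long long cycles         = 900000000ULL; /* CPU_CLK_UNHALTED.THREAD         */

        printf("avg cycles per completed load walk: %.1f\n",
               (double)walk_pending / (double)walk_completed);
        printf("cycles with a load walk active    : %.2f%%\n",
               100.0 * (double)walk_active / (double)cycles);
        return 0;
    }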
DTLB_STORE_MISSES.MISS_CAUSES_A_WALK Counts demand data stores that caused a page walk of any page size (4K/2M/4M/1G). This implies it missed in all TLB levels, but the walk need not have completed. EventSel=49H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
DTLB_STORE_MISSES.STLB_HIT Stores that miss the DTLB (Data TLB) and hit the STLB (2nd Level TLB). EventSel=49H UMask=20H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
DTLB_STORE_MISSES.WALK_ACTIVE Counts cycles when at least one PMH (Page Miss Handler) is busy with a page walk for a store. EventSel=49H UMask=10H CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
DTLB_STORE_MISSES.WALK_COMPLETED Counts completed page walks (all page sizes) caused by demand data stores. This implies it missed in the DTLB and further levels of TLB. The page walk can end with or without a fault. EventSel=49H UMask=0EH
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
DTLB_STORE_MISSES.WALK_COMPLETED_1G Counts completed page walks (1G sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault. EventSel=49H UMask=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M Counts completed page walks (2M/4M sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault. EventSel=49H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
DTLB_STORE_MISSES.WALK_COMPLETED_4K Counts completed page walks (4K sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault. EventSel=49H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
DTLB_STORE_MISSES.WALK_PENDING Counts 1 per cycle for each PMH that is busy with a page walk for a store. EPT page walk durations are excluded in Skylake microarchitecture. EventSel=49H UMask=10H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
EPT.WALK_PENDING Counts cycles for each PMH (Page Miss Handler) that is busy with an EPT (Extended Page Table) walk for any request type. EventSel=4FH UMask=10H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
EXE_ACTIVITY.1_PORTS_UTIL Counts cycles during which a total of 1 uop was executed on all ports and Reservation Station (RS) was not empty. EventSel=A6H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
EXE_ACTIVITY.2_PORTS_UTIL Counts cycles during which a total of 2 uops were executed on all ports and Reservation Station (RS) was not empty. EventSel=A6H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
EXE_ACTIVITY.3_PORTS_UTIL Cycles during which a total of 3 uops were executed on all ports and the Reservation Station (RS) was not empty. EventSel=A6H UMask=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
EXE_ACTIVITY.4_PORTS_UTIL Cycles during which a total of 4 uops were executed on all ports and the Reservation Station (RS) was not empty. EventSel=A6H UMask=10H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
EXE_ACTIVITY.BOUND_ON_STORES Cycles where the Store Buffer was full and there was no outstanding load. EventSel=A6H UMask=40H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
EXE_ACTIVITY.EXE_BOUND_0_PORTS Counts cycles during which no uops were executed on all ports and Reservation Station (RS) was not empty. EventSel=A6H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE Counts once for most SIMD 128-bit packed computational double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events. EventSel=C7H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE Counts once for most SIMD 128-bit packed computational single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events. EventSel=C7H UMask=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE Counts once for most SIMD 256-bit packed double precision computational floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events. EventSel=C7H UMask=10H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE Counts once for most SIMD 256-bit packed single precision computational floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events. EventSel=C7H UMask=20H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
FP_ARITH_INST_RETIRED.4_FLOPS Number of SSE/AVX computational 128-bit packed single precision and 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 and/or 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point and packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events. EventSel=C7H UMask=18H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE Number of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events. EventSel=C7H UMask=40H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE Number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 16 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events. EventSel=C7H UMask=80H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
FP_ARITH_INST_RETIRED.8_FLOPS Number of SSE/AVX computational 256-bit packed single precision and 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision and double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RSQRT14 RCP RCP14 DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events. EventSel=C7H UMask=18H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
FP_ARITH_INST_RETIRED.SCALAR Counts once for most SIMD scalar computational single precision and double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SIMD scalar single precision and double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events. EventSel=C7H UMask=03H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
FP_ARITH_INST_RETIRED.SCALAR_DOUBLE Counts once for most SIMD scalar computational double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SIMD scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events. EventSel=C7H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
FP_ARITH_INST_RETIRED.SCALAR_SINGLE Counts once for most SIMD scalar computational single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SIMD scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events. EventSel=C7H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
FP_ARITH_INST_RETIRED.VECTOR Number of any vector retired FP arithmetic instructions. EventSel=C7H UMask=FCH
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
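The FP_ARITH_INST_RETIRED.* counts are usually combined into a FLOP rate by weighting each event with the "Each count represents N computation operations" figure from its description; FMA already counts twice, so no extra factor is needed. A sketch with placeholder counts over an assumed measurement interval:

    #include <stdio.h>

    int main(void)
    {
        /* Placeholder FP_ARITH_INST_RETIRED.* counts; weights follow the per-event
         * "Each count represents N computation operations" statements above. */
        unsigned long long scalar = 100000000ULL; /* .SCALAR             x 1  */
        unsigned long long pd128  =  10000000ULL; /* .128B_PACKED_DOUBLE x 2  */
        unsigned long long ps128  =  10000000ULL; /* .128B_PACKED_SINGLE x 4  */
        unsigned long long pd256  =  50000000ULL; /* .256B_PACKED_DOUBLE x 4  */
        unsigned long long ps256  =  50000000ULL; /* .256B_PACKED_SINGLE x 8  */
        unsigned long long pd512  =  20000000ULL; /* .512B_PACKED_DOUBLE x 8  */
        unsigned long long ps512  =  20000000ULL; /* .512B_PACKED_SINGLE x 16 */
        double t = 1.0;                           /* measurement interval, seconds */

        double flops = 1.0 * scalar + 2.0 * pd128 + 4.0 * ps128 + 4.0 * pd256
                     + 8.0 * ps256 + 8.0 * pd512 + 16.0 * ps512;
        printf("%.2f GFLOP/s\n", flops / t / 1e9);
        return 0;
    }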
FP_ASSIST.ANY Counts cycles with any input and output SSE or x87 FP assist. If an input and output assist are detected on the same cycle the event increments by 1. EventSel=CAH UMask=1EH CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
FRONTEND_RETIRED.ANY_DSB_MISS Counts retired Instructions that experienced DSB (Decode stream buffer i.e. the decoded instruction-cache) miss. EventSel=C6H UMask=01H MSR_PEBS_FRONTEND(3F7H)=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
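The FRONTEND_RETIRED.* events need both the regular EventSel/UMask encoding and the MSR_PEBS_FRONTEND value shown in their Additional Info. On Linux, the core PMU is expected to expose that MSR value through the 'frontend' format term, which lands in perf_event_attr.config1; the sketch below programs FRONTEND_RETIRED.ANY_DSB_MISS on that assumption (it is not stated in this table), so treat it as illustrative:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <linux/perf_event.h>

    int main(void)
    {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.size    = sizeof(attr);
        attr.type    = PERF_TYPE_RAW;
        attr.config  = 0x01C6;  /* EventSel=C6H, UMask=01H                                  */
        attr.config1 = 0x01;    /* MSR_PEBS_FRONTEND(3F7H)=01H, assumed to map onto config1 */
        attr.disabled = 1;
        attr.exclude_kernel = 1;

        int fd = syscall(__NR_perf_event_open, &attr, 0 /* this thread */,
                         -1 /* any cpu */, -1 /* no group */, 0);
        if (fd < 0) { perror("perf_event_open"); return 1; }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
        volatile double x = 0;                    /* some work to measure */
        for (int i = 0; i < 1000000; i++) x += i * 0.5;
        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

        long long count = 0;
        if (read(fd, &count, sizeof(count)) == (long long)sizeof(count))
            printf("FRONTEND_RETIRED.ANY_DSB_MISS = %lld\n", count);
        close(fd);
        return 0;
    }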
FRONTEND_RETIRED.ANY_DSB_MISS_PS Counts retired Instructions that experienced DSB (Decode stream buffer i.e. the decoded instruction-cache) miss. EventSel=C6H UMask=01H MSR_PEBS_FRONTEND(3F7H)=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.DSB_MISS Number of retired Instructions that experienced a critical DSB (Decode stream buffer i.e. the decoded instruction-cache) miss. Critical means stalls were exposed to the back-end as a result of the DSB miss. EventSel=C6H UMask=01H MSR_PEBS_FRONTEND(3F7H)=11H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.DSB_MISS_PS Number of retired Instructions that experienced a critical DSB (Decode stream buffer i.e. the decoded instruction-cache) miss. Critical means stalls were exposed to the back-end as a result of the DSB miss. EventSel=C6H UMask=01H MSR_PEBS_FRONTEND(3F7H)=11H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.ITLB_MISS Counts retired Instructions that experienced iTLB (Instruction TLB) true miss. EventSel=C6H UMask=01H MSR_PEBS_FRONTEND(3F7H)=14H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.ITLB_MISS_PS Counts retired Instructions that experienced iTLB (Instruction TLB) true miss. EventSel=C6H UMask=01H MSR_PEBS_FRONTEND(3F7H)=14H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.L1I_MISS Retired instructions that experienced an instruction L1 cache (L1I) true miss. EventSel=C6H UMask=01H MSR_PEBS_FRONTEND(3F7H)=12H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.L1I_MISS_PS Retired instructions that experienced an instruction L1 cache (L1I) true miss. Precise Event. EventSel=C6H UMask=01H MSR_PEBS_FRONTEND(3F7H)=12H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.L2_MISS Retired instructions that experienced an instruction L2 cache true miss. EventSel=C6H UMask=01H MSR_PEBS_FRONTEND(3F7H)=13H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.L2_MISS_PS Retired instructions that experienced an instruction L2 cache true miss. Precise Event. EventSel=C6H UMask=01H MSR_PEBS_FRONTEND(3F7H)=13H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.LATENCY_GE_1 Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of at least 1 cycle which was not interrupted by a back-end stall. EventSel=C6H UMask=01H MSR_PEBS_FRONTEND(3F7H)=400106H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[Precise]
CoreOnly
FRONTEND_RETIRED.LATENCY_GE_128 Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 128 cycles which was not interrupted by a back-end stall. EventSel=C6H UMask=01H CMask=0 MSR_PEBS_FRONTEND(3F7H)=408006H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.LATENCY_GE_128_PS Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 128 cycles which was not interrupted by a back-end stall. Precise Event. EventSel=C6H UMask=01H CMask=0 MSR_PEBS_FRONTEND(3F7H)=408006H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.LATENCY_GE_16 Counts retired instructions that are delivered to the back-end after a front-end stall of at least 16 cycles. During this period the front-end delivered no uops. EventSel=C6H UMask=01H CMask=0 MSR_PEBS_FRONTEND(3F7H)=401006H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.LATENCY_GE_16_PS Counts retired instructions that are delivered to the back-end after a front-end stall of at least 16 cycles. During this period the front-end delivered no uops. EventSel=C6H UMask=01H CMask=0 MSR_PEBS_FRONTEND(3F7H)=401006H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.LATENCY_GE_2 Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 2 cycles which was not interrupted by a back-end stall. EventSel=C6H UMask=01H MSR_PEBS_FRONTEND(3F7H)=400206H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1 Counts retired instructions that are delivered to the back-end after the front-end had at least 1 bubble-slot for a period of 2 cycles. A bubble-slot is an empty issue-pipeline slot while there was no RAT stall. EventSel=C6H UMask=01H CMask=0 MSR_PEBS_FRONTEND(3F7H)=100206H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1_PS Counts retired instructions that are delivered to the back-end after the front-end had at least 1 bubble-slot for a period of 2 cycles. A bubble-slot is an empty issue-pipeline slot while there was no RAT stall. EventSel=C6H UMask=01H CMask=0 MSR_PEBS_FRONTEND(3F7H)=100206H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_2 Retired instructions that are fetched after an interval where the front-end had at least 2 bubble-slots for a period of 2 cycles which was not interrupted by a back-end stall. EventSel=C6H UMask=01H MSR_PEBS_FRONTEND(3F7H)=200206H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_2_PS Retired instructions that are fetched after an interval where the front-end had at least 2 bubble-slots for a period of 2 cycles which was not interrupted by a back-end stall. Precise Event. EventSel=C6H UMask=01H MSR_PEBS_FRONTEND(3F7H)=200206H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_3 Retired instructions that are fetched after an interval where the front-end had at least 3 bubble-slots for a period of 2 cycles which was not interrupted by a back-end stall. EventSel=C6H UMask=01H CMask=0 MSR_PEBS_FRONTEND(3F7H)=300206H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_3_PS Retired instructions that are fetched after an interval where the front-end had at least 3 bubble-slots for a period of 2 cycles which was not interrupted by a back-end stall. Precise Event. EventSel=C6H UMask=01H CMask=0 MSR_PEBS_FRONTEND(3F7H)=300206H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.LATENCY_GE_2_PS Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 2 cycles which was not interrupted by a back-end stall. Precise Event. EventSel=C6H UMask=01H MSR_PEBS_FRONTEND(3F7H)=400206H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.LATENCY_GE_256 Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 256 cycles which was not interrupted by a back-end stall. EventSel=C6H UMask=01H CMask=0 MSR_PEBS_FRONTEND(3F7H)=410006H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.LATENCY_GE_256_PS Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 256 cycles which was not interrupted by a back-end stall. Precise Event. EventSel=C6H UMask=01H CMask=0 MSR_PEBS_FRONTEND(3F7H)=410006H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.LATENCY_GE_32 Counts retired instructions that are delivered to the back-end after a front-end stall of at least 32 cycles. During this period the front-end delivered no uops. EventSel=C6H UMask=01H CMask=0 MSR_PEBS_FRONTEND(3F7H)=402006H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.LATENCY_GE_32_PS Counts retired instructions that are delivered to the back-end after a front-end stall of at least 32 cycles. During this period the front-end delivered no uops. EventSel=C6H UMask=01H CMask=0 MSR_PEBS_FRONTEND(3F7H)=402006H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.LATENCY_GE_4 Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 4 cycles which was not interrupted by a back-end stall. EventSel=C6H UMask=01H MSR_PEBS_FRONTEND(3F7H)=400406H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.LATENCY_GE_4_PS Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 4 cycles which was not interrupted by a back-end stall. Precise Event. EventSel=C6H UMask=01H MSR_PEBS_FRONTEND(3F7H)=400406H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.LATENCY_GE_512 Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 512 cycles which was not interrupted by a back-end stall. EventSel=C6H UMask=01H CMask=0 MSR_PEBS_FRONTEND(3F7H)=420006H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.LATENCY_GE_512_PS Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 512 cycles which was not interrupted by a back-end stall. Precise Event. EventSel=C6H UMask=01H CMask=0 MSR_PEBS_FRONTEND(3F7H)=420006H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.LATENCY_GE_64 Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 64 cycles which was not interrupted by a back-end stall. EventSel=C6H UMask=01H CMask=0 MSR_PEBS_FRONTEND(3F7H)=404006H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.LATENCY_GE_64_PS Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 64 cycles which was not interrupted by a back-end stall. Precise Event. EventSel=C6H UMask=01H CMask=0 MSR_PEBS_FRONTEND(3F7H)=404006H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.LATENCY_GE_8 Counts retired instructions that are delivered to the back-end after a front-end stall of at least 8 cycles. During this period the front-end delivered no uops. EventSel=C6H UMask=01H CMask=0 MSR_PEBS_FRONTEND(3F7H)=400806H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.LATENCY_GE_8_PS Counts retired instructions that are delivered to the back-end after a front-end stall of at least 8 cycles. During this period the front-end delivered no uops. EventSel=C6H UMask=01H CMask=0 MSR_PEBS_FRONTEND(3F7H)=400806H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.STLB_MISS Counts retired Instructions that experienced STLB (2nd level TLB) true miss. EventSel=C6H UMask=01H MSR_PEBS_FRONTEND(3F7H)=15H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
FRONTEND_RETIRED.STLB_MISS_PS Counts retired Instructions that experienced STLB (2nd level TLB) true miss. EventSel=C6H UMask=01H MSR_PEBS_FRONTEND(3F7H)=15H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
HLE_RETIRED.ABORTED Number of times HLE abort was triggered. EventSel=C8H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
HLE_RETIRED.ABORTED_EVENTS Number of times an HLE execution aborted due to unfriendly events (such as interrupts). EventSel=C8H UMask=80H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
HLE_RETIRED.ABORTED_MEM Number of times an HLE execution aborted due to various memory events (e.g., read/write capacity and conflicts). EventSel=C8H UMask=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
HLE_RETIRED.ABORTED_MEMTYPE Number of times an HLE execution aborted due to incompatible memory type. EventSel=C8H UMask=40H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
HLE_RETIRED.ABORTED_PS Number of times HLE abort was triggered. (PEBS) EventSel=C8H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
HLE_RETIRED.ABORTED_TIMER Number of times an HLE execution aborted due to hardware timer expiration. EventSel=C8H UMask=10H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
HLE_RETIRED.ABORTED_UNFRIENDLY Number of times an HLE execution aborted due to HLE-unfriendly instructions and certain unfriendly events (such as AD assists etc.). EventSel=C8H UMask=20H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
HLE_RETIRED.COMMIT Number of times HLE commit succeeded. EventSel=C8H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
HLE_RETIRED.START Number of times we entered an HLE region. Does not count nested transactions. EventSel=C8H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
HW_INTERRUPTS.RECEIVED Counts the number of hardware interrupts received by the processor. EventSel=CBH UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
ICACHE_16B.IFDATA_STALL Cycles where a code line fetch is stalled due to an L1 instruction cache miss. The legacy decode pipeline works at a 16 Byte granularity. EventSel=80H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
ICACHE_64B.IFTAG_HIT Instruction fetch tag lookups that hit in the instruction cache (L1I). Counts at 64-byte cache-line granularity. EventSel=83H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
ICACHE_64B.IFTAG_MISS Instruction fetch tag lookups that miss in the instruction cache (L1I). Counts at 64-byte cache-line granularity. EventSel=83H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
ICACHE_64B.IFTAG_STALL Cycles where a code fetch is stalled due to L1 instruction cache tag miss. EventSel=83H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
IDI_MISC.WB_DOWNGRADE Counts number of cache lines that are dropped and not written back to L3 as they are deemed to be less likely to be reused shortly. EventSel=FEH UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
IDI_MISC.WB_UPGRADE Counts number of cache lines that are allocated and written back to L3 with the intention that they are more likely to be reused shortly. EventSel=FEH UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
IDQ.ALL_DSB_CYCLES_4_UOPS Counts the number of cycles 4 or more uops were delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. Count includes uops that may 'bypass' the IDQ. EventSel=79H UMask=18H CMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
IDQ.ALL_DSB_CYCLES_ANY_UOPS Counts the number of cycles uops were delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. Count includes uops that may 'bypass' the IDQ. EventSel=79H UMask=18H CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
IDQ.ALL_MITE_CYCLES_4_UOPS Counts the number of cycles 4 uops were delivered to the Instruction Decode Queue (IDQ) from the MITE (legacy decode pipeline) path. Counting includes uops that may 'bypass' the IDQ. During these cycles uops are not being delivered from the Decode Stream Buffer (DSB). EventSel=79H UMask=24H CMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
IDQ.ALL_MITE_CYCLES_ANY_UOPS Counts the number of cycles uops were delivered to the Instruction Decode Queue (IDQ) from the MITE (legacy decode pipeline) path. Counting includes uops that may 'bypass' the IDQ. During these cycles uops are not being delivered from the Decode Stream Buffer (DSB). EventSel=79H UMask=24H CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
IDQ.DSB_CYCLES Counts cycles during which uops are being delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. Counting includes uops that may 'bypass' the IDQ. EventSel=79H UMask=08H CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
IDQ.DSB_UOPS Counts the number of uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. Counting includes uops that may 'bypass' the IDQ. EventSel=79H UMask=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
IDQ.MITE_CYCLES Counts cycles during which uops are being delivered to Instruction Decode Queue (IDQ) from the MITE path. Counting includes uops that may 'bypass' the IDQ. EventSel=79H UMask=04H CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
IDQ.MITE_UOPS Counts the number of uops delivered to Instruction Decode Queue (IDQ) from the MITE path. Counting includes uops that may 'bypass' the IDQ. This also means that uops are not being delivered from the Decode Stream Buffer (DSB). EventSel=79H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
IDQ.MS_CYCLES Counts cycles during which uops are being delivered to Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Counting includes uops that may 'bypass' the IDQ. Uops may be initiated by Decode Stream Buffer (DSB) or MITE. EventSel=79H UMask=30H CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
IDQ.MS_DSB_CYCLES Counts cycles during which uops initiated by Decode Stream Buffer (DSB) are being delivered to Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Counting includes uops that may 'bypass' the IDQ. EventSel=79H UMask=10H CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
IDQ.MS_MITE_UOPS Counts the number of uops initiated by MITE and delivered to Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Counting includes uops that may 'bypass' the IDQ. EventSel=79H UMask=20H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
IDQ.MS_SWITCHES Number of switches from DSB (Decode Stream Buffer) or MITE (legacy decode pipeline) to the Microcode Sequencer. EventSel=79H UMask=30H EdgeDetect=1 CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
IDQ.MS_UOPS Counts the total number of uops delivered by the Microcode Sequencer (MS). Any instruction over 4 uops will be delivered by the MS. Some instructions such as transcendentals may additionally generate uops from the MS. EventSel=79H UMask=30H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
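A frequently used derived ratio from the IDQ.*_UOPS counts above is "DSB coverage", the share of delivered uops that came from the Decode Stream Buffer rather than the legacy pipeline or the Microcode Sequencer. A sketch using one common formulation of that ratio (an assumption, not a definition from this table):

    #include <stdio.h>

    int main(void)
    {
        unsigned long long dsb_uops  = 700000000ULL; /* IDQ.DSB_UOPS  */
        unsigned long long mite_uops = 200000000ULL; /* IDQ.MITE_UOPS */
        unsigned long long ms_uops   =  50000000ULL; /* IDQ.MS_UOPS   */

        double delivered = (double)(dsb_uops + mite_uops + ms_uops);
        printf("DSB coverage: %.1f%%\n", 100.0 * (double)dsb_uops / delivered);
        return 0;
    }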
IDQ_UOPS_NOT_DELIVERED.CORE Counts the number of uops not delivered to Resource Allocation Table (RAT) per thread adding “4 – x” when Resource Allocation Table (RAT) is not stalled and Instruction Decode Queue (IDQ) delivers x uops to Resource Allocation Table (RAT) (where x belongs to {0,1,2,3}). Counting does not cover cases when: a. IDQ-Resource Allocation Table (RAT) pipe serves the other thread. b. Resource Allocation Table (RAT) is stalled for the thread (including uop drops and clear BE conditions). c. Instruction Decode Queue (IDQ) delivers four uops. EventSel=9CH UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
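IDQ_UOPS_NOT_DELIVERED.CORE is the numerator of the top-down "Front-End Bound" fraction: unused issue slots out of the 4 slots available per thread cycle when the back-end was ready. A sketch of that commonly used ratio with placeholder counts:

    #include <stdio.h>

    int main(void)
    {
        unsigned long long not_delivered = 1200000000ULL; /* IDQ_UOPS_NOT_DELIVERED.CORE */
        unsigned long long cycles        = 1000000000ULL; /* CPU_CLK_UNHALTED.THREAD     */

        /* 4 issue slots per thread cycle on this microarchitecture. */
        double frontend_bound = (double)not_delivered / (4.0 * (double)cycles);
        printf("Front-End Bound: %.1f%%\n", 100.0 * frontend_bound);
        return 0;
    }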
IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE Counts, on the per-thread basis, cycles when no uops are delivered to Resource Allocation Table (RAT). IDQ_Uops_Not_Delivered.core =4. EventSel=9CH UMask=01H CMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
IDQ_UOPS_NOT_DELIVERED.CYCLES_FE_WAS_OK Counts cycles during which the front-end (FE) delivered 4 uops or the Resource Allocation Table (RAT) was stalling the FE. EventSel=9CH UMask=01H Invert=1 CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
IDQ_UOPS_NOT_DELIVERED.CYCLES_LE_1_UOP_DELIV.CORE Counts, on the per-thread basis, cycles when less than 1 uop is delivered to Resource Allocation Table (RAT). IDQ_Uops_Not_Delivered.core >= 3. EventSel=9CH UMask=01H CMask=03H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
IDQ_UOPS_NOT_DELIVERED.CYCLES_LE_2_UOP_DELIV.CORE Cycles with less than 2 uops delivered by the front-end. EventSel=9CH UMask=01H CMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
IDQ_UOPS_NOT_DELIVERED.CYCLES_LE_3_UOP_DELIV.CORE Cycles with less than 3 uops delivered by the front-end. EventSel=9CH UMask=01H CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
ILD_STALL.LCP Counts cycles during which Instruction Length Decoder (ILD) stalls occurred due to dynamically changing prefix length of the decoded instruction (by the operand size prefix 0x66, the address size prefix 0x67, or REX.W for Intel 64). Count is proportional to the number of prefixes in a 16B-line. This may result in a three-cycle penalty for each LCP (Length Changing Prefix) in a 16-byte chunk. EventSel=87H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
INST_DECODED.DECODERS Number of decoders utilized in a cycle when the MITE (legacy decode pipeline) fetches instructions. EventSel=55H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
INST_RETIRED.NOP Number of all retired NOP instructions.
Errata: SKL091, SKL044
EventSel=C0H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP, Counter=0,1,2,3]
CoreOnly
INST_RETIRED.NOP_PS Number of all retired NOP instructions.
Errata: SKL091, SKL044
EventSel=C0H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
INST_RETIRED.PREC_DIST A version of INST_RETIRED that allows for a more unbiased distribution of samples across instructions retired. It utilizes the Precise Distribution of Instructions Retired (PDIR) feature to mitigate some bias in how retired instructions get sampled.
Errata: SKL091, SKL044
EventSel=C0H UMask=01H
Counter=1 CounterHTOff=1
PEBS:[Precise]
CoreOnly
INST_RETIRED.TOTAL_CYCLES_PS Number of cycles using an always-true condition applied to the PEBS instructions retired event (inst_ret < 16).
Errata: SKL091, SKL044
EventSel=C0H UMask=01H Invert=1 CMask=0AH
Counter=0,2,3 CounterHTOff=0,2,3
PEBS:[PreciseEventingIP]
CoreOnly
INT_MISC.CLEAR_RESTEER_CYCLES Cycles the issue-stage is waiting for front-end to fetch from resteered path following branch misprediction or machine clear events. EventSel=0DH UMask=80H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
INT_MISC.CLEARS_COUNT Counts the number of speculative clears due to any type of branch misprediction or machine clears. EventSel=0DH UMask=01H EdgeDetect=1 CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
INT_MISC.RECOVERY_CYCLES Core cycles the Resource allocator was stalled due to recovery from an earlier branch misprediction or machine clear event. EventSel=0DH UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
INT_MISC.RECOVERY_CYCLES_ANY Core cycles the allocator was stalled due to recovery from earlier clear event for any thread running on the physical core (e.g. misprediction or memory nuke). EventSel=0DH UMask=01H AnyThread=1
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
ITLB.ITLB_FLUSH Counts the number of flushes of the big or small ITLB pages. Counting includes both TLB Flush (covering all sets) and TLB Set Clear (set-specific). EventSel=AEH UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
ITLB_MISSES.MISS_CAUSES_A_WALK Counts page walks of any page size (4K/2M/4M/1G) caused by a code fetch. This implies it missed in the ITLB and further levels of TLB, but the walk need not have completed. EventSel=85H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
ITLB_MISSES.STLB_HIT Instruction fetch requests that miss the ITLB and hit the STLB. EventSel=85H UMask=20H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
ITLB_MISSES.WALK_ACTIVE Cycles when at least one PMH is busy with a page walk for a code (instruction fetch) request. EPT page walk durations are excluded in Skylake microarchitecture. EventSel=85H UMask=10H CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
ITLB_MISSES.WALK_COMPLETED Counts completed page walks (all page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault. EventSel=85H UMask=0EH
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
ITLB_MISSES.WALK_COMPLETED_1G Counts completed page walks (1G page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault. EventSel=85H UMask=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
ITLB_MISSES.WALK_COMPLETED_2M_4M Counts completed page walks (2M/4M page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault. EventSel=85H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
ITLB_MISSES.WALK_COMPLETED_4K Counts completed page walks (4K page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault. EventSel=85H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
ITLB_MISSES.WALK_PENDING Counts 1 per cycle for each PMH (Page Miss Handler) that is busy with a page walk for an instruction fetch request. EPT page walk durations are excluded in Skylake microarchitecture. EventSel=85H UMask=10H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
L1D.REPLACEMENT Counts L1D data line replacements including opportunistic replacements, and replacements that require stall-for-replace or block-for-replace. EventSel=51H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
L1D_PEND_MISS.FB_FULL Number of times a request needed a FB (Fill Buffer) entry but there was no entry available for it. A request includes cacheable/uncacheable demands that are load, store or SW prefetch instructions. EventSel=48H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
L1D_PEND_MISS.PENDING Counts the duration of L1D misses outstanding, that is, each cycle the number of Fill Buffers (FB) outstanding that are required by Demand Reads. An FB is either held by demand loads, or held by non-demand loads and hit at least once by a demand load. The valid outstanding interval runs until FB deallocation and starts in one of the following ways: from FB allocation, if the FB is allocated by a demand load; or from the demand hit of the FB, if it is allocated by a hardware or software prefetch. Note: In the L1D, a Demand Read covers cacheable or noncacheable demand loads, including ones causing cache-line splits and reads due to page walks resulting from any request type. EventSel=48H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
L1D_PEND_MISS.PENDING_CYCLES Counts duration of L1D miss outstanding in cycles. EventSel=48H UMask=01H CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
L1D_PEND_MISS.PENDING_CYCLES_ANY Cycles with L1D load Misses outstanding from any thread on physical core. EventSel=48H UMask=01H AnyThread=1 CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
L2_LINES_IN.ALL Counts the number of L2 cache lines filling the L2. Counting does not cover rejects. EventSel=F1H UMask=1FH
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
L2_LINES_OUT.NON_SILENT Counts the number of lines that are evicted by L2 cache when triggered by an L2 cache fill. Those lines can be either in modified state or clean state. Modified lines may either be written back to L3 or directly written to memory and not allocated in L3. Clean lines may either be allocated in L3 or dropped. EventSel=F2H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
L2_LINES_OUT.SILENT Counts the number of lines that are silently dropped by L2 cache when triggered by an L2 cache fill. These lines are typically in Shared state. A non-threaded event. EventSel=F2H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
L2_LINES_OUT.USELESS_HWPF Counts the number of lines that have been hardware prefetched but not used and now evicted by L2 cache EventSel=F2H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
L2_RQSTS.ALL_CODE_RD Counts the total number of L2 code requests. EventSel=24H UMask=E4H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
L2_RQSTS.ALL_DEMAND_DATA_RD Counts the number of demand Data Read requests (including requests from L1D hardware prefetchers). These loads may hit or miss L2 cache. Only non-rejected loads are counted. EventSel=24H UMask=E1H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
L2_RQSTS.ALL_DEMAND_MISS Demand requests that miss L2 cache. EventSel=24H UMask=27H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
L2_RQSTS.ALL_DEMAND_REFERENCES Demand requests to L2 cache. EventSel=24H UMask=E7H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
L2_RQSTS.ALL_PF Counts the total number of requests from the L2 hardware prefetchers. EventSel=24H UMask=F8H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
L2_RQSTS.ALL_RFO Counts the total number of RFO (read for ownership) requests to L2 cache. L2 RFO requests include both L1D demand RFO misses as well as L1D RFO prefetches. EventSel=24H UMask=E2H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
L2_RQSTS.CODE_RD_HIT Counts L2 cache hits when fetching instructions, code reads. EventSel=24H UMask=C4H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
L2_RQSTS.CODE_RD_MISS Counts L2 cache misses when fetching instructions. EventSel=24H UMask=24H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
L2_RQSTS.DEMAND_DATA_RD_HIT Counts the number of demand Data Read requests, initiated by load instructions, that hit L2 cache EventSel=24H UMask=C1H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
L2_RQSTS.DEMAND_DATA_RD_MISS Counts the number of demand Data Read requests that miss L2 cache. Only non-rejected loads are counted. EventSel=24H UMask=21H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
L2_RQSTS.MISS All requests that miss L2 cache. EventSel=24H UMask=3FH
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
L2_RQSTS.PF_HIT Counts requests from the L1/L2/L3 hardware prefetchers or Load software prefetches that hit L2 cache. EventSel=24H UMask=D8H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
L2_RQSTS.PF_MISS Counts requests from the L1/L2/L3 hardware prefetchers or Load software prefetches that miss L2 cache. EventSel=24H UMask=38H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
L2_RQSTS.REFERENCES All L2 requests. EventSel=24H UMask=FFH
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
L2_RQSTS.RFO_HIT Counts the RFO (Read-for-Ownership) requests that hit L2 cache. EventSel=24H UMask=C2H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
L2_RQSTS.RFO_MISS Counts the RFO (Read-for-Ownership) requests that miss L2 cache. EventSel=24H UMask=22H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
L2_TRANS.L2_WB Counts L2 writebacks that access L2 cache. EventSel=F0H UMask=40H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
LD_BLOCKS.NO_SR The number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use. EventSel=03H UMask=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
LD_BLOCKS.STORE_FORWARD Counts the number of times where store forwarding was prevented for a load operation. The most common case is a load blocked due to the address of memory access (partially) overlapping with a preceding uncompleted store. Note: See the table of not supported store forwards in the Optimization Guide. EventSel=03H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
LD_BLOCKS_PARTIAL.ADDRESS_ALIAS Counts false dependencies in the MOB when the partial comparison of the loose net check flagged a dependency that was then resolved by the Enhanced Loose Net mechanism. This may not result in high performance penalties. Loose net checks can fail when loads and stores are 4K aliased. EventSel=07H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
LOAD_HIT_PRE.SW_PF Counts all non-software-prefetch load dispatches that hit a fill buffer (FB) allocated for a software prefetch. It can also be incremented by some lock instructions, so it should only be used with profiling so that the locks can be excluded by ASM (assembly file) inspection of the nearby instructions. EventSel=4CH UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
LSD.CYCLES_4_UOPS Counts the cycles when 4 uops are delivered by the LSD (Loop-stream detector). EventSel=A8H UMask=01H CMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
LSD.CYCLES_ACTIVE Counts the cycles when at least one uop is delivered by the LSD (Loop-stream detector). EventSel=A8H UMask=01H CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
LSD.UOPS Number of uops delivered to the back-end by the LSD (Loop Stream Detector). EventSel=A8H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
MACHINE_CLEARS.COUNT Number of machine clears (nukes) of any type. EventSel=C3H UMask=01H EdgeDetect=1 CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
MACHINE_CLEARS.MEMORY_ORDERING Counts the number of memory ordering Machine Clears detected. Memory Ordering Machine Clears can result from one of the following: (a) memory disambiguation, (b) an external snoop, or (c) a cross SMT-HW-thread snoop (store) hitting the load buffer.
Errata: SKL089
EventSel=C3H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
MACHINE_CLEARS.SMC Counts self-modifying code (SMC) detected, which causes a machine clear. EventSel=C3H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
MEM_INST_RETIRED.ALL_LOADS Counts all retired load instructions. This event accounts for SW prefetch instructions of PREFETCHNTA or PREFETCHT0/1/2 or PREFETCHW. EventSel=D0H UMask=81H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
MEM_INST_RETIRED.ALL_LOADS_PS Counts all retired load instructions. This event accounts for SW prefetch instructions of PREFETCHNTA or PREFETCHT0/1/2 or PREFETCHW. (Precise Event) EventSel=D0H UMask=81H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP, DataLinearAddress]
CoreOnly
MEM_INST_RETIRED.ALL_STORES Counts all retired store instructions. EventSel=D0H UMask=82H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
MEM_INST_RETIRED.ALL_STORES_PS Counts all retired store instructions. (Precise Event) EventSel=D0H UMask=82H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP, DataLinearAddress]
CoreOnly
MEM_INST_RETIRED.ANY Counts all retired memory instructions - loads and stores. EventSel=D0H UMask=83H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
MEM_INST_RETIRED.ANY_PS Counts all retired memory instructions - loads and stores. EventSel=D0H UMask=83H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP, DataLinearAddress]
CoreOnly
MEM_INST_RETIRED.LOCK_LOADS Retired load instructions with locked access. EventSel=D0H UMask=21H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
MEM_INST_RETIRED.LOCK_LOADS_PS Retired load instructions with locked access. (Precise Event) EventSel=D0H UMask=21H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP, DataLinearAddress]
CoreOnly
MEM_INST_RETIRED.SPLIT_LOADS Counts retired load instructions that split across a cacheline boundary. EventSel=D0H UMask=41H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
MEM_INST_RETIRED.SPLIT_LOADS_PS Retired load instructions that split across a cacheline boundary. (Precise Event) EventSel=D0H UMask=41H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP, DataLinearAddress]
CoreOnly
MEM_INST_RETIRED.SPLIT_STORES Counts retired store instructions that split across a cacheline boundary. EventSel=D0H UMask=42H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
MEM_INST_RETIRED.SPLIT_STORES_PS Retired store instructions that split across a cacheline boundary. (Precise Event) EventSel=D0H UMask=42H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP, DataLinearAddress]
CoreOnly
MEM_INST_RETIRED.STLB_MISS_LOADS Number of retired load instructions that (start a) miss in the 2nd-level TLB (STLB). EventSel=D0H UMask=11H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
MEM_INST_RETIRED.STLB_MISS_LOADS_PS Number of retired load instructions that (start a) miss in the 2nd-level TLB (STLB). EventSel=D0H UMask=11H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP, DataLinearAddress]
CoreOnly
MEM_INST_RETIRED.STLB_MISS_STORES Number of retired store instructions that (start a) miss in the 2nd-level TLB (STLB). EventSel=D0H UMask=12H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
MEM_INST_RETIRED.STLB_MISS_STORES_PS Number of retired store instructions that (start a) miss in the 2nd-level TLB (STLB). EventSel=D0H UMask=12H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP, DataLinearAddress]
CoreOnly
MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT Retired load instructions which data sources were L3 and cross-core snoop hits in on-pkg core cache. EventSel=D2H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT_PS Retired load instructions which data sources were L3 and cross-core snoop hits in on-pkg core cache. EventSel=D2H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP, DataLinearAddress]
CoreOnly
MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM Retired load instructions which data sources were HitM responses from shared L3. EventSel=D2H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS Retired load instructions which data sources were HitM responses from shared L3. EventSel=D2H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP, DataLinearAddress]
CoreOnly
MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS Retired load instructions which data sources were L3 hit and cross-core snoop missed in on-pkg core cache. EventSel=D2H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS Retired load instructions which data sources were L3 hit and cross-core snoop missed in on-pkg core cache. EventSel=D2H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP, DataLinearAddress]
CoreOnly
MEM_LOAD_L3_HIT_RETIRED.XSNP_NONE Retired load instructions which data sources were hits in L3 without snoops required. EventSel=D2H UMask=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
MEM_LOAD_L3_HIT_RETIRED.XSNP_NONE_PS Retired load instructions which data sources were hits in L3 without snoops required. EventSel=D2H UMask=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP, DataLinearAddress]
CoreOnly
MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM Retired load instructions which data sources missed L3 but were serviced from local DRAM. EventSel=D3H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM_PS Retired load instructions which data sources missed L3 but were serviced from local DRAM. EventSel=D3H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP, DataLinearAddress]
CoreOnly
MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM Retired load instructions which data sources missed L3 but were serviced from remote DRAM. EventSel=D3H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
CoreOnly
MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM_PS Retired load instructions which data sources missed L3 but were serviced from remote DRAM. EventSel=D3H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[Precise, DataLinearAddress]
CoreOnly
MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD Retired load instructions whose data source was forwarded from a remote cache. EventSel=D3H UMask=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
CoreOnly
MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD_PS Retired load instructions whose data source was forwarded from a remote cache. EventSel=D3H UMask=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP, DataLinearAddress]
CoreOnly
MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM Retired load instructions whose data source was a remote HITM. EventSel=D3H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
CoreOnly
MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM_PS Retired load instructions whose data source was a remote HITM. EventSel=D3H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[Precise, DataLinearAddress]
CoreOnly
MEM_LOAD_MISC_RETIRED.UC Retired instructions with at least 1 uncacheable load or lock. EventSel=D4H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
MEM_LOAD_MISC_RETIRED.UC_PS Retired instructions with at least 1 uncacheable load or lock. EventSel=D4H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP, DataLinearAddress]
CoreOnly
MEM_LOAD_RETIRED.FB_HIT Counts retired load instructions with at least one uop that missed in the L1 data cache but hit a fill buffer (FB) due to a preceding miss to the same cache line whose data was not yet ready. EventSel=D1H UMask=40H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
MEM_LOAD_RETIRED.FB_HIT_PS Counts retired load instructions with at least one uop that missed in the L1 data cache but hit a fill buffer (FB) due to a preceding miss to the same cache line whose data was not yet ready. EventSel=D1H UMask=40H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP, DataLinearAddress]
CoreOnly
MEM_LOAD_RETIRED.L1_HIT Counts retired load instructions with at least one uop that hit in the L1 data cache. This event includes all SW prefetches and lock instructions regardless of the data source. EventSel=D1H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
MEM_LOAD_RETIRED.L1_HIT_PS Counts retired load instructions with at least one uop that hit in the L1 data cache. This event includes all SW prefetches and lock instructions regardless of the data source. EventSel=D1H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP, DataLinearAddress]
CoreOnly
MEM_LOAD_RETIRED.L1_MISS Counts retired load instructions with at least one uop that missed in the L1 cache. EventSel=D1H UMask=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
MEM_LOAD_RETIRED.L1_MISS_PS Counts retired load instructions with at least one uop that missed in the L1 cache. EventSel=D1H UMask=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP, DataLinearAddress]
CoreOnly
MEM_LOAD_RETIRED.L2_HIT Retired load instructions with L2 cache hits as data sources. EventSel=D1H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
MEM_LOAD_RETIRED.L2_HIT_PS Retired load instructions with L2 cache hits as data sources. EventSel=D1H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP, DataLinearAddress]
CoreOnly
MEM_LOAD_RETIRED.L2_MISS Retired load instructions that missed the L2 cache. EventSel=D1H UMask=10H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
MEM_LOAD_RETIRED.L2_MISS_PS Retired load instructions that missed the L2 cache. EventSel=D1H UMask=10H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP, DataLinearAddress]
CoreOnly
MEM_LOAD_RETIRED.L3_HIT Counts retired load instructions with at least one uop that hit in the L3 cache. EventSel=D1H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
MEM_LOAD_RETIRED.L3_HIT_PS Retired load instructions with L3 cache hits as data sources. EventSel=D1H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP, DataLinearAddress]
CoreOnly
MEM_LOAD_RETIRED.L3_MISS Counts retired load instructions with at least one uop that missed in the L3 cache. EventSel=D1H UMask=20H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
MEM_LOAD_RETIRED.L3_MISS_PS Retired load instructions that missed the L3 cache. EventSel=D1H UMask=20H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP, DataLinearAddress]
CoreOnly
MEM_TRANS_RETIRED.LOAD_LATENCY_GT_128 Counts randomly selected loads when the latency from first dispatch to completion is greater than 128 cycles. Reported latency may be longer than just the memory latency. EventSel=CDH UMask=01H MSR_PEBS_LD_LAT_THRESHOLD(3F6H)=80H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[Precise, DataLinearAddress, Latency]
CoreOnly
MEM_TRANS_RETIRED.LOAD_LATENCY_GT_16 Counts randomly selected loads when the latency from first dispatch to completion is greater than 16 cycles. Reported latency may be longer than just the memory latency. EventSel=CDH UMask=01H MSR_PEBS_LD_LAT_THRESHOLD(3F6H)=10H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[Precise, DataLinearAddress, Latency]
CoreOnly
MEM_TRANS_RETIRED.LOAD_LATENCY_GT_256 Counts randomly selected loads when the latency from first dispatch to completion is greater than 256 cycles. Reported latency may be longer than just the memory latency. EventSel=CDH UMask=01H MSR_PEBS_LD_LAT_THRESHOLD(3F6H)=100H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[Precise, DataLinearAddress, Latency]
CoreOnly
MEM_TRANS_RETIRED.LOAD_LATENCY_GT_32 Counts randomly selected loads when the latency from first dispatch to completion is greater than 32 cycles. Reported latency may be longer than just the memory latency. EventSel=CDH UMask=01H MSR_PEBS_LD_LAT_THRESHOLD(3F6H)=20H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[Precise, DataLinearAddress, Latency]
CoreOnly
MEM_TRANS_RETIRED.LOAD_LATENCY_GT_4 Counts randomly selected loads when the latency from first dispatch to completion is greater than 4 cycles. Reported latency may be longer than just the memory latency. EventSel=CDH UMask=01H MSR_PEBS_LD_LAT_THRESHOLD(3F6H)=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[Precise, DataLinearAddress, Latency]
CoreOnly
MEM_TRANS_RETIRED.LOAD_LATENCY_GT_512 Counts randomly selected loads when the latency from first dispatch to completion is greater than 512 cycles. Reported latency may be longer than just the memory latency. EventSel=CDH UMask=01H MSR_PEBS_LD_LAT_THRESHOLD(3F6H)=200H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[Precise, DataLinearAddress, Latency]
CoreOnly
MEM_TRANS_RETIRED.LOAD_LATENCY_GT_64 Counts randomly selected loads when the latency from first dispatch to completion is greater than 64 cycles. Reported latency may be longer than just the memory latency. EventSel=CDH UMask=01H MSR_PEBS_LD_LAT_THRESHOLD(3F6H)=40H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[Precise, DataLinearAddress, Latency]
CoreOnly
MEM_TRANS_RETIRED.LOAD_LATENCY_GT_8 Counts randomly selected loads when the latency from first dispatch to completion is greater than 8 cycles. Reported latency may be longer than just the memory latency. EventSel=CDH UMask=01H MSR_PEBS_LD_LAT_THRESHOLD(3F6H)=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[Precise, DataLinearAddress, Latency]
CoreOnly
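Usage note (editorial sketch, not from the Intel event list): the eight LOAD_LATENCY_GT_* entries above are the same precise event (EventSel=CDH UMask=01H) with different thresholds programmed into MSR_PEBS_LD_LAT_THRESHOLD (3F6H). On Linux, the kernel programs that MSR when the threshold is passed through the core PMU's ldlat format term, which maps to perf_event_attr.config1; this mapping and the PEBS requirements are assumptions about common Linux perf usage rather than anything stated in this reference. A minimal C sketch of the event setup follows; consuming the samples requires mmap'ing the perf ring buffer, which is omitted.

/* Sketch: configure MEM_TRANS_RETIRED.LOAD_LATENCY_GT_128 as a precise (PEBS)
 * sampling event on Linux. Assumes the latency threshold is supplied via
 * attr.config1 (the 'ldlat' format term); parsing the resulting
 * PERF_SAMPLE_ADDR/PERF_SAMPLE_WEIGHT records is not shown. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = PERF_TYPE_RAW;
    attr.config  = 0x01CD;     /* EventSel=CDH UMask=01H */
    attr.config1 = 128;        /* latency threshold -> MSR_PEBS_LD_LAT_THRESHOLD (3F6H) */
    attr.sample_period = 1009; /* sample roughly every 1000 qualifying loads */
    attr.sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_ADDR | PERF_SAMPLE_WEIGHT;
    attr.precise_ip = 2;       /* request PEBS ("PreciseEventingIP") */
    attr.disabled = 1;
    attr.exclude_kernel = 1;

    int fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
    if (fd < 0) { perror("perf_event_open"); return 1; }
    /* mmap the fd and read sample records to obtain per-load data addresses
     * and latencies; then PERF_EVENT_IOC_ENABLE, run the workload, ... */
    close(fd);
    return 0;
}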
MEMORY_DISAMBIGUATION.HISTORY_RESET MEMORY_DISAMBIGUATION.HISTORY_RESET EventSel=09H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
OFFCORE_REQUESTS.ALL_DATA_RD Counts the demand and prefetch data reads. All Core Data Reads include cacheable 'Demands' and L2 prefetchers (not L3 prefetchers). Counting also covers reads due to page walks resulting from any request type. EventSel=B0H UMask=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
OFFCORE_REQUESTS.ALL_REQUESTS Counts memory transactions that reached the super queue, including requests initiated by the core, all L3 prefetches, page walks, etc. EventSel=B0H UMask=80H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
OFFCORE_REQUESTS.DEMAND_CODE_RD Counts both cacheable and non-cacheable code read requests. EventSel=B0H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
OFFCORE_REQUESTS.DEMAND_DATA_RD Counts the Demand Data Read requests sent to uncore. Use it in conjunction with OFFCORE_REQUESTS_OUTSTANDING to determine average latency in the uncore. EventSel=B0H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
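Usage note (editorial sketch): as suggested above, pairing OFFCORE_REQUESTS.DEMAND_DATA_RD (EventSel=B0H UMask=01H) with OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD (EventSel=60H UMask=01H, listed further below) estimates the average demand data read latency in the uncore as occupancy cycles divided by the number of requests. A minimal C sketch under the same Linux perf_event_open assumptions as the earlier raw-event example; the open_raw helper is our own illustration.

/* Sketch: estimate average uncore demand data read latency (in core cycles) as
 * OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCORE_REQUESTS.DEMAND_DATA_RD. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static int open_raw(uint64_t config, int group_fd)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = PERF_TYPE_RAW;
    attr.config = config;
    attr.disabled = (group_fd == -1);   /* only the group leader starts disabled */
    attr.exclude_kernel = 1;
    return syscall(SYS_perf_event_open, &attr, 0, -1, group_fd, 0);
}

int main(void)
{
    int outstanding = open_raw(0x0160, -1);          /* EventSel=60H UMask=01H */
    int requests    = open_raw(0x01B0, outstanding); /* EventSel=B0H UMask=01H */
    if (outstanding < 0 || requests < 0) { perror("perf_event_open"); return 1; }

    ioctl(outstanding, PERF_EVENT_IOC_ENABLE, PERF_IOC_FLAG_GROUP);
    /* ... memory-intensive workload under test ... */
    ioctl(outstanding, PERF_EVENT_IOC_DISABLE, PERF_IOC_FLAG_GROUP);

    uint64_t occupancy = 0, reqs = 0;
    read(outstanding, &occupancy, sizeof(occupancy));
    read(requests, &reqs, sizeof(reqs));
    if (reqs)
        printf("average demand data read latency ~= %.1f core cycles\n",
               (double)occupancy / (double)reqs);
    return 0;
}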
OFFCORE_REQUESTS.DEMAND_RFO Counts the demand RFO (read for ownership) requests including regular RFOs, locks, ItoM. EventSel=B0H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD Demand Data Read requests that miss the L3 cache. EventSel=B0H UMask=10H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
OFFCORE_REQUESTS_BUFFER.SQ_FULL Counts the number of cases when the offcore requests buffer cannot take more entries for the core. This can happen when the super queue does not contain eligible entries, or when the L1D writeback pending FIFO is full. Note: The writeback pending FIFO has six entries. EventSel=B2H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD Counts the number of offcore outstanding cacheable Core Data Read transactions in the super queue every cycle. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor (SQ de-allocation). See corresponding Umask under OFFCORE_REQUESTS. EventSel=60H UMask=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD Counts cycles when offcore outstanding cacheable Core Data Read transactions are present in the super queue. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor (SQ de-allocation). See corresponding Umask under OFFCORE_REQUESTS. EventSel=60H UMask=08H CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_CODE_RD Counts cycles when offcore outstanding Code Read transactions are present in the super queue. The 'Offcore outstanding' state of a transaction lasts from the L2 miss until the transaction completion is sent to the requestor (SQ deallocation). See the corresponding Umask under OFFCORE_REQUESTS. EventSel=60H UMask=02H CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_DATA_RD Counts cycles when offcore outstanding Demand Data Read transactions are present in the super queue (SQ). A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor (SQ de-allocation). EventSel=60H UMask=01H CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO Counts cycles when offcore outstanding demand RFO transactions are present in the super queue. The 'Offcore outstanding' state of a transaction lasts from the L2 miss until the transaction completion is sent to the requestor (SQ deallocation). See the corresponding Umask under OFFCORE_REQUESTS. EventSel=60H UMask=04H CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_L3_MISS_DEMAND_DATA_RD Cycles with at least one Demand Data Read request that misses the L3 cache in the super queue. EventSel=60H UMask=10H CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
OFFCORE_REQUESTS_OUTSTANDING.DEMAND_CODE_RD Counts the number of offcore outstanding Code Read transactions in the super queue every cycle. The 'Offcore outstanding' state of a transaction lasts from the L2 miss until the transaction completion is sent to the requestor (SQ deallocation). See the corresponding Umask under OFFCORE_REQUESTS. EventSel=60H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD Counts the number of offcore outstanding Demand Data Read transactions in the super queue (SQ) every cycle. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to the requestor. See the corresponding Umask under OFFCORE_REQUESTS. Note: A prefetch promoted to Demand is counted from the promotion point. EventSel=60H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD_GE_6 Cycles with at least 6 offcore outstanding Demand Data Read transactions in the uncore queue. EventSel=60H UMask=01H CMask=06H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
OFFCORE_REQUESTS_OUTSTANDING.DEMAND_RFO Counts the number of offcore outstanding RFO (store) transactions in the super queue (SQ) every cycle. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor (SQ de-allocation). See corresponding Umask under OFFCORE_REQUESTS. EventSel=60H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
OFFCORE_REQUESTS_OUTSTANDING.L3_MISS_DEMAND_DATA_RD Counts the number of offcore outstanding Demand Data Read requests that miss the L3 cache in the super queue every cycle. EventSel=60H UMask=10H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
OFFCORE_REQUESTS_OUTSTANDING.L3_MISS_DEMAND_DATA_RD_GE_6 Cycles with at least 6 Demand Data Read requests that miss the L3 cache in the super queue. EventSel=60H UMask=10H CMask=06H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
OTHER_ASSISTS.ANY Number of times a microcode assist is invoked by HW other than FP-assist. Examples include AD (page Access Dirty) and AVX* related assists. EventSel=C1H UMask=3FH
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
PARTIAL_RAT_STALLS.SCOREBOARD This event counts cycles during which microcode scoreboard stalls occur. EventSel=59H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
RESOURCE_STALLS.ANY Counts resource-related stall cycles. EventSel=A2H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
RESOURCE_STALLS.SB Counts allocation stall cycles caused by the store buffer (SB) being full. This counts cycles that the pipeline back-end blocked uop delivery from the front-end. EventSel=A2H UMask=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
ROB_MISC_EVENTS.LBR_INSERTS Increments when an entry is added to the Last Branch Record (LBR) array (or removed from the array in case of RETURNs in call stack mode). The event requires LBR enable via IA32_DEBUGCTL MSR and branch type selection via MSR_LBR_SELECT. EventSel=CCH UMask=20H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
ROB_MISC_EVENTS.PAUSE_INST Number of retired PAUSE instructions (that do not end up with a VMExit to the VMM; TSX aborted Instructions may be counted). This event is not supported on first SKL and KBL products. EventSel=CCH UMask=40H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
RS_EVENTS.EMPTY_CYCLES Counts cycles during which the reservation station (RS) is empty for the thread. Note: In ST-mode, the inactive thread should report 0. This is usually caused by severely costly branch mispredictions or allocator/front-end (FE) issues. EventSel=5EH UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
RS_EVENTS.EMPTY_END Counts end of periods where the Reservation Station (RS) was empty. Could be useful to precisely locate front-end Latency Bound issues. EventSel=5EH UMask=01H EdgeDetect=1 Invert=1 CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
RTM_RETIRED.ABORTED Number of times RTM abort was triggered. EventSel=C9H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP, Counter=0,1,2,3]
CoreOnly
RTM_RETIRED.ABORTED_EVENTS Number of times an RTM execution aborted due to none of the previous 4 categories (e.g. interrupt). EventSel=C9H UMask=80H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
RTM_RETIRED.ABORTED_MEM Number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts). EventSel=C9H UMask=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
RTM_RETIRED.ABORTED_MEMTYPE Number of times an RTM execution aborted due to incompatible memory type. EventSel=C9H UMask=40H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
RTM_RETIRED.ABORTED_PS Number of times RTM abort was triggered. (PEBS) EventSel=C9H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3
PEBS:[PreciseEventingIP]
CoreOnly
RTM_RETIRED.ABORTED_TIMER Number of times an RTM execution aborted due to uncommon conditions. EventSel=C9H UMask=10H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
RTM_RETIRED.ABORTED_UNFRIENDLY Number of times an RTM execution aborted due to HLE-unfriendly instructions. EventSel=C9H UMask=20H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
RTM_RETIRED.COMMIT Number of times RTM commit succeeded. EventSel=C9H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
RTM_RETIRED.START Number of times we entered an RTM region. Does not count nested transactions. EventSel=C9H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
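Usage note (editorial sketch): the RTM_RETIRED.* events above count the transactional regions created by the RTM instructions (XBEGIN/XEND/XABORT). A trivial C example of such a region is shown below; each successful pass is counted by RTM_RETIRED.START and RTM_RETIRED.COMMIT, and each abort by RTM_RETIRED.ABORTED (with the ABORTED_* sub-events breaking down the reason). The intrinsics are the standard compiler RTM intrinsics (compile with RTM support, e.g. gcc -mrtm); the example itself is ours, not Intel's.

/* Sketch: a minimal RTM transactional region. */
#include <immintrin.h>
#include <stdio.h>

static long counter;

static void add_one(void)
{
    unsigned status = _xbegin();
    if (status == _XBEGIN_STARTED) {   /* entered an RTM region: RTM_RETIRED.START */
        counter++;
        _xend();                       /* committed: RTM_RETIRED.COMMIT */
    } else {
        /* aborted and rolled back: RTM_RETIRED.ABORTED */
        __sync_fetch_and_add(&counter, 1);
    }
}

int main(void)
{
    for (int i = 0; i < 100000; i++)
        add_one();
    printf("counter = %ld\n", counter);
    return 0;
}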
SQ_MISC.SPLIT_LOCK Counts the number of cache line split locks sent to the uncore. EventSel=F4H UMask=10H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
SW_PREFETCH_ACCESS.ANY Counts the number of PREFETCHNTA, PREFETCHW, PREFETCHT0, PREFETCHT1 or PREFETCHT2 instructions executed. EventSel=32H UMask=0FH
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
SW_PREFETCH_ACCESS.NTA Number of PREFETCHNTA instructions executed. EventSel=32H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
SW_PREFETCH_ACCESS.PREFETCHW Number of PREFETCHW instructions executed. EventSel=32H UMask=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
SW_PREFETCH_ACCESS.T0 Number of PREFETCHT0 instructions executed. EventSel=32H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
SW_PREFETCH_ACCESS.T1_T2 Number of PREFETCHT1 or PREFETCHT2 instructions executed. EventSel=32H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
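Usage note (editorial sketch): the SW_PREFETCH_ACCESS.* events above count executed software prefetch instructions; SW_PREFETCH_ACCESS.ANY counts all of them. The C snippet below shows how the SSE prefetch intrinsic maps onto the instructions these sub-events count. PREFETCHW, counted by SW_PREFETCH_ACCESS.PREFETCHW, is typically emitted on supporting compilers/targets via a write-hint prefetch builtin or intrinsic and is not shown.

/* Sketch: software prefetch instructions as generated from the SSE intrinsic. */
#include <xmmintrin.h>

void warm(const char *p)
{
    _mm_prefetch(p, _MM_HINT_NTA);  /* PREFETCHNTA -> SW_PREFETCH_ACCESS.NTA   */
    _mm_prefetch(p, _MM_HINT_T0);   /* PREFETCHT0  -> SW_PREFETCH_ACCESS.T0    */
    _mm_prefetch(p, _MM_HINT_T1);   /* PREFETCHT1  -> SW_PREFETCH_ACCESS.T1_T2 */
    _mm_prefetch(p, _MM_HINT_T2);   /* PREFETCHT2  -> SW_PREFETCH_ACCESS.T1_T2 */
}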
TLB_FLUSH.DTLB_THREAD Counts the number of DTLB flush attempts of the thread-specific entries. EventSel=BDH UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
TLB_FLUSH.STLB_ANY Counts the number of any STLB flush attempts (such as entire, VPID, PCID, InvPage, CR3 write, etc.). EventSel=BDH UMask=20H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
TX_EXEC.MISC1 Counts the number of times a class of instructions that may cause a transactional abort was executed. Since this is the count of execution, it may not always cause a transactional abort. EventSel=5DH UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
TX_EXEC.MISC2 Unfriendly TSX abort triggered by a vzeroupper instruction. EventSel=5DH UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
TX_EXEC.MISC3 Unfriendly TSX abort triggered by a nest count that is too deep. EventSel=5DH UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
TX_EXEC.MISC4 RTM region detected inside HLE. EventSel=5DH UMask=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
TX_EXEC.MISC5 Counts the number of times an HLE XACQUIRE instruction was executed inside an RTM transactional region. EventSel=5DH UMask=10H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
TX_MEM.ABORT_CAPACITY Number of times a transactional abort was signaled due to a data capacity limitation for transactional reads or writes. EventSel=54H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
TX_MEM.ABORT_CONFLICT Number of times a TSX line had a cache conflict. EventSel=54H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
TX_MEM.ABORT_HLE_ELISION_BUFFER_MISMATCH Number of times a TSX Abort was triggered due to release/commit but data and address mismatch. EventSel=54H UMask=10H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
TX_MEM.ABORT_HLE_ELISION_BUFFER_NOT_EMPTY Number of times a TSX Abort was triggered due to commit but Lock Buffer not empty. EventSel=54H UMask=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
TX_MEM.ABORT_HLE_ELISION_BUFFER_UNSUPPORTED_ALIGNMENT Number of times a TSX Abort was triggered due to attempting an unsupported alignment from Lock Buffer. EventSel=54H UMask=20H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
TX_MEM.ABORT_HLE_STORE_TO_ELIDED_LOCK Number of times a TSX Abort was triggered due to a non-release/commit store to lock. EventSel=54H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
TX_MEM.HLE_ELISION_BUFFER_FULL Number of times we could not allocate Lock Buffer. EventSel=54H UMask=40H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
UOPS_DISPATCHED_PORT.PORT_0 Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 0. EventSel=A1H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
UOPS_DISPATCHED_PORT.PORT_1 Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 1. EventSel=A1H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
UOPS_DISPATCHED_PORT.PORT_2 Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 2. EventSel=A1H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
UOPS_DISPATCHED_PORT.PORT_3 Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 3. EventSel=A1H UMask=08H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
UOPS_DISPATCHED_PORT.PORT_4 Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 4. EventSel=A1H UMask=10H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
UOPS_DISPATCHED_PORT.PORT_5 Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 5. EventSel=A1H UMask=20H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
UOPS_DISPATCHED_PORT.PORT_6 Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 6. EventSel=A1H UMask=40H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
UOPS_DISPATCHED_PORT.PORT_7 Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 7. EventSel=A1H UMask=80H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
UOPS_EXECUTED.CORE Number of uops executed from any thread. EventSel=B1H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
UOPS_EXECUTED.CORE_CYCLES_GE_1 Cycles where at least 1 micro-op is executed from any thread on the physical core. EventSel=B1H UMask=02H CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
UOPS_EXECUTED.CORE_CYCLES_GE_2 Cycles where at least 2 micro-ops are executed from any thread on the physical core. EventSel=B1H UMask=02H CMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
UOPS_EXECUTED.CORE_CYCLES_GE_3 Cycles where at least 3 micro-ops are executed from any thread on the physical core. EventSel=B1H UMask=02H CMask=03H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
UOPS_EXECUTED.CORE_CYCLES_GE_4 Cycles where at least 4 micro-ops are executed from any thread on the physical core. EventSel=B1H UMask=02H CMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
UOPS_EXECUTED.CORE_CYCLES_NONE Cycles with no micro-ops executed from any thread on the physical core. EventSel=B1H UMask=02H Invert=1 CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC Cycles where at least 1 uop was executed per-thread. EventSel=B1H UMask=01H CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC Cycles where at least 2 uops were executed per-thread. EventSel=B1H UMask=01H CMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC Cycles where at least 3 uops were executed per-thread. EventSel=B1H UMask=01H CMask=03H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
UOPS_EXECUTED.CYCLES_GE_4_UOPS_EXEC Cycles where at least 4 uops were executed per-thread. EventSel=B1H UMask=01H CMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
UOPS_EXECUTED.STALL_CYCLES Counts cycles during which no uops were dispatched from the Reservation Station (RS) per thread. EventSel=B1H UMask=01H Invert=1 CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
UOPS_EXECUTED.THREAD Number of uops to be executed per-thread each cycle. EventSel=B1H UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
UOPS_EXECUTED.X87 Counts the number of x87 uops executed. EventSel=B1H UMask=10H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
UOPS_ISSUED.ANY Counts the number of uops that the Resource Allocation Table (RAT) issues to the Reservation Station (RS). EventSel=0EH UMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
UOPS_ISSUED.SLOW_LEA Number of slow LEA uops being allocated. A uop is generally considered a SlowLea if it has 3 sources (e.g. 2 sources + immediate), regardless of whether it results from an LEA instruction or not. EventSel=0EH UMask=20H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
UOPS_ISSUED.STALL_CYCLES Counts cycles during which the Resource Allocation Table (RAT) does not issue any Uops to the reservation station (RS) for the current thread. EventSel=0EH UMask=01H Invert=1 CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
UOPS_ISSUED.VECTOR_WIDTH_MISMATCH Counts the number of Blend Uops issued by the Resource Allocation Table (RAT) to the reservation station (RS) in order to preserve upper bits of vector registers. Starting with the Skylake microarchitecture, these Blend uops are needed since every Intel SSE instruction executed in Dirty Upper State needs to preserve bits 128-255 of the destination register. For more information, refer to “Mixing Intel AVX and Intel SSE Code” section of the Optimization Guide. EventSel=0EH UMask=02H CMask=0
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CoreOnly
UOPS_RETIRED.MACRO_FUSED Counts the number of macro-fused uops retired. (non-precise) EventSel=C2H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
UOPS_RETIRED.RETIRE_SLOTS Counts the retirement slots used. EventSel=C2H UMask=02H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
UOPS_RETIRED.STALL_CYCLES This event counts cycles without actually retired uops. EventSel=C2H UMask=02H Invert=1 CMask=01H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
UOPS_RETIRED.TOTAL_CYCLES Number of cycles, using an always-true condition (uops_ret < 16) applied to the non-PEBS uops retired event. EventSel=C2H UMask=02H Invert=1 CMask=10H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
PEBS:[PreciseEventingIP]
CoreOnly
L2_LINES_OUT.USELESS_PREF This event is deprecated. Refer to the new event L2_LINES_OUT.USELESS_HWPF. EventSel=F2H UMask=04H
Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
Deprecated
CoreOnly
UNCORE Uncore
UNC_CHA_CLOCKTICKS Counts clockticks of the clock controlling the uncore caching and home agent (CHA). EventSel=00H UMask=00H
Counter=0,1,2,3
Uncore
UNC_CHA_CORE_SNP.CORE_GTONE Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set, the core may have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s). EventSel=33H UMask=42H
Counter=0,1,2,3
Uncore
UNC_CHA_CORE_SNP.EVICT_GTONE Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set, the core may have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s). EventSel=33H UMask=82H
Counter=0,1,2,3
Uncore
UNC_CHA_DIR_LOOKUP.NO_SNP Counts transactions that looked into the multi-socket cacheline Directory state, and therefore did not send a snoop because the Directory indicated it was not needed EventSel=53H UMask=02H
Counter=0,1,2,3
Uncore
UNC_CHA_DIR_LOOKUP.SNP Counts transactions that looked into the multi-socket cacheline Directory state, and sent one or more snoops, because the Directory indicated it was needed EventSel=53H UMask=01H
Counter=0,1,2,3
Uncore
UNC_CHA_DIR_UPDATE.HA Counts only multi-socket cacheline Directory state updates due to memory writes issued from the HA pipe. This does not include memory write requests which are for I (Invalid) or E (Exclusive) cachelines. EventSel=54H UMask=01H
Counter=0,1,2,3
Uncore
UNC_CHA_DIR_UPDATE.TOR Counts only multi-socket cacheline Directory state updates due to memory writes issued from the TOR pipe which are the result of remote transaction hitting the SF/LLC and returning data Core2Core. This does not include memory write requests which are for I (Invalid) or E (Exclusive) cachelines. EventSel=54H UMask=02H
Counter=0,1,2,3
Uncore
UNC_CHA_FAST_ASSERTED.HORZ Counts the number of cycles either the local or incoming distress signals are asserted. Incoming distress includes up, dn and across. EventSel=A5H UMask=02H
Counter=0,1,2,3
Uncore
UNC_CHA_HITME_HIT.EX_RDS Counts read requests from a remote socket which hit in the HitME cache (used to cache the multi-socket Directory state) to a line in the E(Exclusive) state. This includes the following read opcodes (RdCode, RdData, RdDataMigratory, RdCur, RdInv*, Inv*) EventSel=5FH UMask=01H
Counter=0,1,2,3
Uncore
UNC_CHA_IMC_READS_COUNT.NORMAL Counts when a normal (Non-Isochronous) read is issued to any of the memory controller channels from the CHA. EventSel=59H UMask=01H
Counter=0,1,2,3
Uncore
UNC_CHA_IMC_WRITES_COUNT.FULL Counts when a normal (Non-Isochronous) full line write is issued from the CHA to any of the memory controller channels. EventSel=5BH UMask=01H
Counter=0,1,2,3
Uncore
UNC_CHA_LLC_VICTIMS.TOTAL_E Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in. EventSel=37H UMask=02H
Counter=0,1,2,3
Uncore
UNC_CHA_LLC_VICTIMS.TOTAL_F Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in. EventSel=37H UMask=08H
Counter=0,1,2,3
Uncore
UNC_CHA_LLC_VICTIMS.TOTAL_M Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in. EventSel=37H UMask=01H
Counter=0,1,2,3
Uncore
UNC_CHA_LLC_VICTIMS.TOTAL_S Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in. EventSel=37H UMask=04H
Counter=0,1,2,3
Uncore
UNC_CHA_MISC.RFO_HIT_S Counts when a RFO (the Read for Ownership issued before a write) request hit a cacheline in the S (Shared) state. EventSel=39H UMask=08H
Counter=0,1,2,3
Uncore
UNC_CHA_REQUESTS.INVITOE_LOCAL Counts the total number of requests coming from a unit on this socket for exclusive ownership of a cache line without receiving data (INVITOE) to the CHA. EventSel=50H UMask=10H
Counter=0,1,2,3
Uncore
UNC_CHA_REQUESTS.INVITOE_REMOTE Counts the total number of requests coming from a remote socket for exclusive ownership of a cache line without receiving data (INVITOE) to the CHA. EventSel=50H UMask=20H
Counter=0,1,2,3
Uncore
UNC_CHA_REQUESTS.READS Counts read requests made into this CHA. Reads include all read opcodes (including RFO: the Read for Ownership issued before a write). EventSel=50H UMask=03H
Counter=0,1,2,3
Uncore
UNC_CHA_REQUESTS.READS_LOCAL Counts read requests coming from a unit on this socket made into this CHA. Reads include all read opcodes (including RFO: the Read for Ownership issued before a write). EventSel=50H UMask=01H
Counter=0,1,2,3
Uncore
UNC_CHA_REQUESTS.READS_REMOTE Counts read requests coming from a remote socket made into the CHA. Reads include all read opcodes (including RFO: the Read for Ownership issued before a write). EventSel=50H UMask=02H
Counter=0,1,2,3
Uncore
UNC_CHA_REQUESTS.WRITES Counts write requests made into the CHA, including streaming, evictions, HitM (Reads from another core to a Modified cacheline), etc. EventSel=50H UMask=0CH
Counter=0,1,2,3
Uncore
UNC_CHA_REQUESTS.WRITES_LOCAL Counts write requests coming from a unit on this socket made into this CHA, including streaming, evictions, HitM (Reads from another core to a Modified cacheline), etc. EventSel=50H UMask=04H
Counter=0,1,2,3
Uncore
UNC_CHA_REQUESTS.WRITES_REMOTE Counts write requests coming from a remote socket made into this CHA, including streaming, evictions, HitM (Reads from another core to a Modified cacheline), etc. EventSel=50H UMask=08H
Counter=0,1,2,3
Uncore
UNC_CHA_RxC_INSERTS.IRQ Counts number of allocations per cycle into the specified Ingress queue. EventSel=13H UMask=01H
Counter=0,1,2,3
Uncore
UNC_CHA_RxC_IRQ1_REJECT.PA_MATCH Ingress (from CMS) Request Queue Rejects; PhyAddr Match EventSel=19H UMask=80H
Counter=0,1,2,3
Uncore
UNC_CHA_RxC_OCCUPANCY.IRQ Counts number of entries in the specified Ingress queue in each cycle. EventSel=11H UMask=01H
Counter=0
Uncore
UNC_CHA_SF_EVICTION.E_STATE Counts snoop filter capacity evictions for entries tracking exclusive lines in the cores’ cache.  Snoop filter capacity evictions occur when the snoop filter is full and evicts an existing entry to track a new entry.  Does not count clean evictions such as when a core’s cache replaces a tracked cacheline with a new cacheline. EventSel=3DH UMask=02H
Counter=0,1,2,3
Uncore
UNC_CHA_SF_EVICTION.M_STATE Counts snoop filter capacity evictions for entries tracking modified lines in the cores’ cache.  Snoop filter capacity evictions occur when the snoop filter is full and evicts an existing entry to track a new entry.  Does not count clean evictions such as when a core’s cache replaces a tracked cacheline with a new cacheline. EventSel=3DH UMask=01H
Counter=0,1,2,3
Uncore
UNC_CHA_SF_EVICTION.S_STATE Counts snoop filter capacity evictions for entries tracking shared lines in the cores’ cache.  Snoop filter capacity evictions occur when the snoop filter is full and evicts an existing entry to track a new entry.  Does not count clean evictions such as when a core’s cache replaces a tracked cacheline with a new cacheline. EventSel=3DH UMask=04H
Counter=0,1,2,3
Uncore
UNC_CHA_SNOOP_RESP.RSP_FWD_WB Counts when a transaction with the opcode type Rsp*Fwd*WB Snoop Response was received which indicates the data was written back to its home socket, and the cacheline was forwarded to the requestor socket. This snoop response is only used in >= 4 socket systems. It is used when a snoop HITM's in a remote caching agent and it directly forwards data to a requestor, and simultaneously returns data to its home socket to be written back to memory. EventSel=5CH UMask=20H
Counter=0,1,2,3
Uncore
UNC_CHA_SNOOP_RESP.RSP_WBWB Counts when a transaction with the opcode type Rsp*WB Snoop Response was received, which indicates the data was written back to its home. This is returned when a non-RFO request hits a cacheline in the Modified state. The Cache can either downgrade the cacheline to a S (Shared) or I (Invalid) state depending on how the system has been configured. This response will also be sent when a cache requests E (Exclusive) ownership of a cache line without receiving data, because the cache must acquire ownership. EventSel=5CH UMask=10H
Counter=0,1,2,3
Uncore
UNC_CHA_SNOOP_RESP.RSPCNFLCTS Counts when a transaction with the opcode type RspCnflct* Snoop Response was received. This is returned when a snoop finds an existing outstanding transaction in a remote caching agent. This triggers conflict resolution hardware. This covers both the opcode RspCnflct and RspCnflctWbI. EventSel=5CH UMask=40H
Counter=0,1,2,3
Uncore
UNC_CHA_SNOOP_RESP.RSPI Counts when a transaction with the opcode type RspI Snoop Response was received which indicates the remote cache does not have the data, or when the remote cache silently evicts data (such as when an RFO: the Read for Ownership issued before a write hits non-modified data). EventSel=5CH UMask=01H
Counter=0,1,2,3
Uncore
UNC_CHA_SNOOP_RESP.RSPIFWD Counts when a transaction with the opcode type RspIFwd Snoop Response was received which indicates a remote caching agent forwarded the data and the requesting agent is able to acquire the data in E (Exclusive) or M (modified) states. This is commonly returned with RFO (the Read for Ownership issued before a write) transactions. The snoop could have either been to a cacheline in the M,E,F (Modified, Exclusive or Forward) states. EventSel=5CH UMask=04H
Counter=0,1,2,3
Uncore
UNC_CHA_SNOOP_RESP.RSPSFWD Counts when a transaction with the opcode type RspSFwd Snoop Response was received which indicates a remote caching agent forwarded the data but held on to its current copy. This is common for data and code reads that hit in a remote socket in E (Exclusive) or F (Forward) state. EventSel=5CH UMask=08H
Counter=0,1,2,3
Uncore
UNC_CHA_TOR_INSERTS.IA_HIT_CRD TOR Inserts : CRds issued by iA Cores that Hit the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts. EventSel=35H UMask=11H Cn_MSR_PMON_BOX_FILTER1=40233H
Counter=0,1,2,3
Uncore
UNC_CHA_TOR_INSERTS.IA_HIT_DRD TOR Inserts : DRds issued by iA Cores that Hit the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts. EventSel=35H UMask=11H Cn_MSR_PMON_BOX_FILTER1=40433H
Counter=0,1,2,3
Uncore
UNC_CHA_TOR_INSERTS.IA_HIT_LlcPrefCRD UNC_CHA_TOR_INSERTS.IA_HIT_LlcPrefCRD EventSel=35H UMask=11H Cn_MSR_PMON_BOX_FILTER1=4B233H
Counter=0,1,2,3
Uncore
UNC_CHA_TOR_INSERTS.IA_HIT_LlcPrefDRD UNC_CHA_TOR_INSERTS.IA_HIT_LlcPrefDRD EventSel=35H UMask=11H Cn_MSR_PMON_BOX_FILTER1=4B433H
Counter=0,1,2,3
Uncore
UNC_CHA_TOR_INSERTS.IA_HIT_LlcPrefRFO TOR Inserts : LLCPrefRFO issued by iA Cores that hit the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts. EventSel=35H UMask=11H Cn_MSR_PMON_BOX_FILTER1=4B033H
Counter=0,1,2,3
Uncore
UNC_CHA_TOR_INSERTS.IA_HIT_RFO TOR Inserts : RFOs issued by iA Cores that Hit the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts. EventSel=35H UMask=11H Cn_MSR_PMON_BOX_FILTER1=40033H
Counter=0,1,2,3
Uncore
UNC_CHA_TOR_INSERTS.IA_MISS_CRD TOR Inserts : CRds issued by iA Cores that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts. EventSel=35H UMask=21H Cn_MSR_PMON_BOX_FILTER1=40233H
Counter=0,1,2,3
Uncore
UNC_CHA_TOR_INSERTS.IA_MISS_DRD TOR Inserts : DRds issued by iA Cores that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts. EventSel=35H UMask=21H Cn_MSR_PMON_BOX_FILTER1=40433H
Counter=0,1,2,3
Uncore
UNC_CHA_TOR_INSERTS.IA_MISS_LlcPrefCRD UNC_CHA_TOR_INSERTS.IA_MISS_LlcPrefCRD EventSel=35H UMask=21H Cn_MSR_PMON_BOX_FILTER1=4B233H
Counter=0,1,2,3
Uncore
UNC_CHA_TOR_INSERTS.IA_MISS_LlcPrefDRD UNC_CHA_TOR_INSERTS.IA_MISS_LlcPrefDRD EventSel=35H UMask=21H Cn_MSR_PMON_BOX_FILTER1=4B433H
Counter=0,1,2,3
Uncore
UNC_CHA_TOR_INSERTS.IA_MISS_LlcPrefRFO TOR Inserts : LLCPrefRFO issued by iA Cores that missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts. EventSel=35H UMask=21H Cn_MSR_PMON_BOX_FILTER1=4B033H
Counter=0,1,2,3
Uncore
UNC_CHA_TOR_INSERTS.IA_MISS_RFO TOR Inserts : RFOs issued by iA Cores that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts. EventSel=35H UMask=21H Cn_MSR_PMON_BOX_FILTER1=40033H
Counter=0,1,2,3
Uncore
UNC_CHA_TOR_OCCUPANCY.IA_HIT_CRD TOR Occupancy : CRds issued by iA Cores that Hit the LLC EventSel=36H UMask=11H Cn_MSR_PMON_BOX_FILTER1=40233H
Counter=0
Uncore
UNC_CHA_TOR_OCCUPANCY.IA_HIT_DRD TOR Occupancy : DRds issued by iA Cores that Hit the LLC EventSel=36H UMask=11H Cn_MSR_PMON_BOX_FILTER1=40433H
Counter=0
Uncore
UNC_CHA_TOR_OCCUPANCY.IA_HIT_LlcPrefCRD UNC_CHA_TOR_OCCUPANCY.IA_HIT_LlcPrefCRD EventSel=36H UMask=11H Cn_MSR_PMON_BOX_FILTER1=4B233H
Counter=0
Uncore
UNC_CHA_TOR_OCCUPANCY.IA_HIT_LlcPrefDRD UNC_CHA_TOR_OCCUPANCY.IA_HIT_LlcPrefDRD EventSel=36H UMask=11H Cn_MSR_PMON_BOX_FILTER1=4B433H
Counter=0
Uncore
UNC_CHA_TOR_OCCUPANCY.IA_HIT_LlcPrefRFO TOR Occupancy : LLCPrefRFO issued by iA Cores that hit the LLC EventSel=36H UMask=11H Cn_MSR_PMON_BOX_FILTER1=4B033H
Counter=0
Uncore
UNC_CHA_TOR_OCCUPANCY.IA_HIT_RFO TOR Occupancy : RFOs issued by iA Cores that Hit the LLC EventSel=36H UMask=11H Cn_MSR_PMON_BOX_FILTER1=40033H
Counter=0
Uncore
UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD TOR Occupancy : CRds issued by iA Cores that Missed the LLC EventSel=36H UMask=21H Cn_MSR_PMON_BOX_FILTER1=40233H
Counter=0
Uncore
UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD TOR Occupancy : DRds issued by iA Cores that Missed the LLC EventSel=36H UMask=21H Cn_MSR_PMON_BOX_FILTER1=40433H
Counter=0
Uncore
UNC_CHA_TOR_OCCUPANCY.IA_MISS_LlcPrefCRD TOR Occupancy : LLCPrefCRD issued by iA Cores that missed the LLC EventSel=36H UMask=21H Cn_MSR_PMON_BOX_FILTER1=4B233H
Counter=0
Uncore
UNC_CHA_TOR_OCCUPANCY.IA_MISS_LlcPrefDRD TOR Occupancy : LLCPrefDRD issued by iA Cores that missed the LLC EventSel=36H UMask=21H Cn_MSR_PMON_BOX_FILTER1=4B433H
Counter=0
Uncore
UNC_CHA_TOR_OCCUPANCY.IA_MISS_LlcPrefRFO TOR Occupancy : LLCPrefRFO issued by iA Cores that missed the LLC EventSel=36H UMask=21H Cn_MSR_PMON_BOX_FILTER1=4B033H
Counter=0
Uncore
UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO TOR Occupancy : RFOs issued by iA Cores that Missed the LLC EventSel=36H UMask=21H Cn_MSR_PMON_BOX_FILTER1=40033H
Counter=0
Uncore
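A common derived metric combines the TOR occupancy and insert events above: dividing UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD by UNC_CHA_TOR_INSERTS.IA_MISS_DRD gives the average number of CHA clocks a demand data read miss stays in the TOR, and UNC_CHA_CLOCKTICKS converts CHA clocks to time. The Python sketch below is illustrative only: the counter values are hypothetical, and it assumes the counts were collected over the same interval (for example with Linux perf on the uncore CHA PMUs) and summed across all CHAs.

# Illustrative sketch with hypothetical counter deltas from one measurement interval.
tor_occupancy_ia_miss_drd = 1.8e9  # UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD, summed over all CHAs
tor_inserts_ia_miss_drd = 3.0e6    # UNC_CHA_TOR_INSERTS.IA_MISS_DRD, summed over all CHAs
cha_clockticks = 2.4e9             # UNC_CHA_CLOCKTICKS from a single CHA over the interval
interval_s = 1.0                   # measurement interval in seconds

# Average residency of a demand data read miss in the TOR, in CHA clocks.
avg_residency_clocks = tor_occupancy_ia_miss_drd / tor_inserts_ia_miss_drd

# Convert CHA clocks to nanoseconds using the observed uncore clock rate.
uncore_hz = cha_clockticks / interval_s
avg_residency_ns = avg_residency_clocks / uncore_hz * 1e9
print(f"Average LLC demand read miss residency: {avg_residency_ns:.0f} ns")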
UNC_CHA_UPI_CREDIT_OCCUPANCY.VN0_BL_NCS Accumulates the number of UPI credits available in each cycle for either the AD or BL ring. In order to send snoops, snoop responses, requests, data, etc to the UPI agent on the ring, it is necessary to first acquire a credit for the UPI ingress buffer. This stat increments by the number of credits that are available each cycle. This can be used in conjunction with the Credit Acquired event in order to calculate average credit lifetime. This event supports filtering for the different types of credits that are available. Note that you must select the link that you would like to monitor using the link select register, and you can only monitor 1 link at a time. EventSel=3BH UMask=80H
Counter=0
Uncore
UNC_IIO_CLOCKTICKS Counts clockticks of the 1GHz traffic controller clock in the IIO unit. EventSel=01H UMask=00H
Counter=0,1,2,3
Uncore
UNC_IIO_COMP_BUF_INSERTS.CMPD.ALL_PARTS PCIe Completion Buffer Inserts of completions with data: Part 0-3 EventSel=C2H UMask=03H FCMask=04H PortMask=0FH
Counter=0,1,2,3
Uncore
UNC_IIO_COMP_BUF_INSERTS.CMPD.PART0 PCIe Completion Buffer Inserts of completions with data: Part 0 EventSel=C2H UMask=03H FCMask=04H PortMask=01H
Counter=0,1,2,3
Uncore
UNC_IIO_COMP_BUF_INSERTS.CMPD.PART1 PCIe Completion Buffer Inserts of completions with data: Part 1 EventSel=C2H UMask=03H FCMask=04H PortMask=02H
Counter=0,1,2,3
Uncore
UNC_IIO_COMP_BUF_INSERTS.CMPD.PART2 PCIe Completion Buffer Inserts of completions with data: Part 2 EventSel=C2H UMask=03H FCMask=04H PortMask=04H
Counter=0,1,2,3
Uncore
UNC_IIO_COMP_BUF_INSERTS.CMPD.PART3 PCIe Completion Buffer Inserts of completions with data: Part 3 EventSel=C2H UMask=03H FCMask=04H PortMask=08H
Counter=0,1,2,3
Uncore
UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.ALL_PARTS PCIe Completion Buffer occupancy of completions with data: Part 0-3 EventSel=D5H UMask=0FH FCMask=04H PortMask=00H
Counter=2,3
Uncore
UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART0 PCIe Completion Buffer occupancy of completions with data: Part 0 EventSel=D5H UMask=01H FCMask=04H PortMask=00H
Counter=2,3
Uncore
UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART1 PCIe Completion Buffer occupancy of completions with data: Part 1 EventSel=D5H UMask=02H FCMask=04H PortMask=00H
Counter=2,3
Uncore
UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART2 PCIe Completion Buffer occupancy of completions with data: Part 2 EventSel=D5H UMask=04H FCMask=04H PortMask=00H
Counter=2,3
Uncore
UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART3 PCIe Completion Buffer occupancy of completions with data: Part 3 EventSel=D5H UMask=08H FCMask=04H PortMask=00H
Counter=2,3
Uncore
UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART0 Counts every read request for 4 bytes of data made by a unit on the main die (generally a core) or by another IIO unit to the MMIO space of a card on IIO Part0. In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the bus. EventSel=C0H UMask=04H FCMask=07H PortMask=01H
Counter=2,3
Uncore
UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART1 Counts every read request for 4 bytes of data made by a unit on the main die (generally a core) or by another IIO unit to the MMIO space of a card on IIO Part1. In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the bus. EventSel=C0H UMask=04H FCMask=07H PortMask=02H
Counter=2,3
Uncore
UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART2 Counts every read request for 4 bytes of data made by a unit on the main die (generally a core) or by another IIO unit to the MMIO space of a card on IIO Part2. In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the bus. EventSel=C0H UMask=04H FCMask=07H PortMask=04H
Counter=2,3
Uncore
UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART3 Counts every read request for 4 bytes of data made by a unit on the main die (generally a core) or by another IIO unit to the MMIO space of a card on IIO Part3. In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the bus. EventSel=C0H UMask=04H FCMask=07H PortMask=08H
Counter=2,3
Uncore
UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART0 Counts every write request of 4 bytes of data made to the MMIO space of a card on IIO Part0 by a unit on the main die (generally a core) or by another IIO unit. In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the bus. EventSel=C0H UMask=01H FCMask=07H PortMask=01H
Counter=2,3
Uncore
UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART1 Counts every write request of 4 bytes of data made to the MMIO space of a card on IIO Part1 by a unit on the main die (generally a core) or by another IIO unit. In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the bus. EventSel=C0H UMask=01H FCMask=07H PortMask=02H
Counter=2,3
Uncore
UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART2 Counts every write request of 4 bytes of data made to the MMIO space of a card on IIO Part2 by a unit on the main die (generally a core) or by another IIO unit. In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the bus. EventSel=C0H UMask=01H FCMask=07H PortMask=04H
Counter=2,3
Uncore
UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART3 Counts every write request of 4 bytes of data made to the MMIO space of a card on IIO Part3 by a unit on the main die (generally a core) or by another IIO unit. In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the bus. EventSel=C0H UMask=01H FCMask=07H PortMask=08H
Counter=2,3
Uncore
UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART0 Counts every peer to peer read request for 4 bytes of data made by a different IIO unit to the MMIO space of a card on IIO Part0. Does not include requests made by the same IIO unit. In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the bus. EventSel=C0H UMask=08H FCMask=07H PortMask=01H
Counter=2,3
Uncore
UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART1 Counts every peer to peer read request for 4 bytes of data made by a different IIO unit to the MMIO space of a card on IIO Part1. Does not include requests made by the same IIO unit. In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the bus. EventSel=C0H UMask=08H FCMask=07H PortMask=02H
Counter=2,3
Uncore
UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART2 Counts every peer to peer read request for 4 bytes of data made by a different IIO unit to the MMIO space of a card on IIO Part2. Does not include requests made by the same IIO unit. In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the bus. EventSel=C0H UMask=08H FCMask=07H PortMask=04H
Counter=2,3
Uncore
UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART3 Counts every peer to peer read request for 4 bytes of data made by a different IIO unit to the MMIO space of a card on IIO Part3. Does not include requests made by the same IIO unit. In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the bus. EventSel=C0H UMask=08H FCMask=07H PortMask=08H
Counter=2,3
Uncore
UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART0 Counts every peer to peer write request of 4 bytes of data made to the MMIO space of a card on IIO Part0 by a different IIO unit. Does not include requests made by the same IIO unit. In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the bus. EventSel=C0H UMask=02H FCMask=07H PortMask=01H
Counter=2,3
Uncore
UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART1 Counts every peer to peer write request of 4 bytes of data made to the MMIO space of a card on IIO Part1 by a different IIO unit. Does not include requests made by the same IIO unit. In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the bus. EventSel=C0H UMask=02H FCMask=07H PortMask=02H
Counter=2,3
Uncore
UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART2 Counts every peer to peer write request of 4 bytes of data made to the MMIO space of a card on IIO Part2 by a different IIO unit. Does not include requests made by the same IIO unit. In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the bus. EventSel=C0H UMask=02H FCMask=07H PortMask=04H
Counter=2,3
Uncore
UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART3 Counts every peer to peer write request of 4 bytes of data made to the MMIO space of a card on IIO Part3 by a different IIO unit. Does not include requests made by the same IIO unit. In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the bus. EventSel=C0H UMask=02H FCMask=07H PortMask=08H
Counter=2,3
Uncore
UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0 Counts every read request for 4 bytes of data made by IIO Part0 to a unit on the main die (generally memory). In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the bus. EventSel=83H UMask=04H FCMask=07H PortMask=01H
Counter=0,1
Uncore
UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART1 Counts every read request for 4 bytes of data made by IIO Part1 to a unit on the main die (generally memory). In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the bus. EventSel=83H UMask=04H FCMask=07H PortMask=02H
Counter=0,1
Uncore
UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2 Counts every read request for 4 bytes of data made by IIO Part2 to a unit on the main die (generally memory). In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the bus. EventSel=83H UMask=04H FCMask=07H PortMask=04H
Counter=0,1
Uncore
UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3 Counts every read request for 4 bytes of data made by IIO Part3 to a unit on the main die (generally memory). In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the bus. EventSel=83H UMask=04H FCMask=07H PortMask=08H
Counter=0,1
Uncore
UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART0 Counts every write request of 4 bytes of data made by IIO Part0 to a unit on the main die (generally memory). In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the bus. EventSel=83H UMask=01H FCMask=07H PortMask=01H
Counter=0,1
Uncore
UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART1 Counts every write request of 4 bytes of data made by IIO Part1 to a unit on the main die (generally memory). In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the bus. EventSel=83H UMask=01H FCMask=07H PortMask=02H
Counter=0,1
Uncore
UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART2 Counts every write request of 4 bytes of data made by IIO Part2 to a unit on the main die (generally memory). In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the bus. EventSel=83H UMask=01H FCMask=07H PortMask=04H
Counter=0,1
Uncore
UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART3 Counts every write request of 4 bytes of data made by IIO Part3 to a unit on the main die (generally memory). In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the bus. EventSel=83H UMask=01H FCMask=07H PortMask=08H
Counter=0,1
Uncore
UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART0 Counts every peer to peer read request for 4 bytes of data made by IIO Part0 to the MMIO space of an IIO target. In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the bus. EventSel=83H UMask=08H FCMask=07H PortMask=01H
Counter=0,1
Uncore
UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART1 Counts every peer to peer read request for 4 bytes of data made by IIO Part1 to the MMIO space of an IIO target. In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the bus. EventSel=83H UMask=08H FCMask=07H PortMask=02H
Counter=0,1
Uncore
UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART2 Counts every peer to peer read request for 4 bytes of data made by IIO Part2 to the MMIO space of an IIO target. In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the bus. EventSel=83H UMask=08H FCMask=07H PortMask=04H
Counter=0,1
Uncore
UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART3 Counts every peer to peer read request for 4 bytes of data made by IIO Part3 to the MMIO space of an IIO target. In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the bus. EventSel=83H UMask=08H FCMask=07H PortMask=08H
Counter=0,1
Uncore
UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART0 Counts every peer to peer write request of 4 bytes of data made by IIO Part0 to the MMIO space of an IIO target. In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the bus. EventSel=83H UMask=02H FCMask=07H PortMask=01H
Counter=0,1
Uncore
UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART1 Counts every peer to peer write request of 4 bytes of data made by IIO Part1 to the MMIO space of an IIO target. In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the bus. EventSel=83H UMask=02H FCMask=07H PortMask=02H
Counter=0,1
Uncore
UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART2 Counts every peer to peer write request of 4 bytes of data made by IIO Part2 to the MMIO space of an IIO target. In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the bus. EventSel=83H UMask=02H FCMask=07H PortMask=04H
Counter=0,1
Uncore
UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART3 Counts every peer to peer write request of 4 bytes of data made by IIO Part3 to the MMIO space of an IIO target. In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the bus. EventSel=83H UMask=02H FCMask=07H PortMask=08H
Counter=0,1
Uncore
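Because every UNC_IIO_DATA_REQ_* event above counts an individual 4-byte data request, multiplying a count by 4 approximates bytes moved: the _BY_CPU group measures CPU (MMIO) traffic to the card, while the _OF_CPU group measures the device's own reads and writes of main memory. A minimal Python sketch with hypothetical per-part counter deltas, summing the parts of one IIO stack:

# Illustrative sketch; the values are made up and assume one common measurement interval.
mem_read_of_cpu = [2.5e7, 0.0, 6.1e6, 0.0]   # UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0..3
mem_write_of_cpu = [1.2e7, 0.0, 3.4e6, 0.0]  # UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART0..3
interval_s = 1.0

bytes_read_from_memory = 4 * sum(mem_read_of_cpu)    # each count is a 4-byte request
bytes_written_to_memory = 4 * sum(mem_write_of_cpu)
print(f"Device reads of main memory:  {bytes_read_from_memory / interval_s / 1e6:.1f} MB/s")
print(f"Device writes to main memory: {bytes_written_to_memory / interval_s / 1e6:.1f} MB/s")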
UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART0 Counts every read request for up to a 64 byte transaction of data made by a unit on the main die (generally a core) or by another IIO unit to the MMIO space of a card on IIO Part0. In the general case, part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the bus. EventSel=C1H UMask=04H FCMask=07H PortMask=01H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART1 Counts every read request for up to a 64 byte transaction of data made by a unit on the main die (generally a core) or by another IIO unit to the MMIO space of a card on IIO Part1. In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the bus. EventSel=C1H UMask=04H FCMask=07H PortMask=02H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART2 Counts every read request for up to a 64 byte transaction of data made by a unit on the main die (generally a core) or by another IIO unit to the MMIO space of a card on IIO Part2. In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the bus. EventSel=C1H UMask=04H FCMask=07H PortMask=04H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART3 Counts every read request for up to a 64 byte transaction of data made by a unit on the main die (generally a core) or by another IIO unit to the MMIO space of a card on IIO Part3. In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the bus. EventSel=C1H UMask=04H FCMask=07H PortMask=08H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART0 Counts every write request of up to a 64 byte transaction of data made to the MMIO space of a card on IIO Part0 by a unit on the main die (generally a core) or by another IIO unit. In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the bus. EventSel=C1H UMask=01H FCMask=07H PortMask=01H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART1 Counts every write request of up to a 64 byte transaction of data made to the MMIO space of a card on IIO Part1 by a unit on the main die (generally a core) or by another IIO unit. In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the bus. EventSel=C1H UMask=01H FCMask=07H PortMask=02H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART2 Counts every write request of up to a 64 byte transaction of data made to the MMIO space of a card on IIO Part2 by a unit on the main die (generally a core) or by another IIO unit. In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the bus. EventSel=C1H UMask=01H FCMask=07H PortMask=04H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART3 Counts every write request of up to a 64 byte transaction of data made to the MMIO space of a card on IIO Part3 by a unit on the main die (generally a core) or by another IIO unit. In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the bus. EventSel=C1H UMask=01H FCMask=07H PortMask=08H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART0 Counts every peer to peer read request for up to a 64 byte transaction of data made by a different IIO unit to the MMIO space of a card on IIO Part0. Does not include requests made by the same IIO unit. In the general case, part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the bus. EventSel=C1H UMask=08H FCMask=07H PortMask=01H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART1 Counts every peer to peer read request for up to a 64 byte transaction of data made by a different IIO unit to the MMIO space of a card on IIO Part1. Does not include requests made by the same IIO unit. In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the bus. EventSel=C1H UMask=08H FCMask=07H PortMask=02H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART2 Counts every peer to peer read request for up to a 64 byte transaction of data made by a different IIO unit to the MMIO space of a card on IIO Part2. Does not include requests made by the same IIO unit. In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the bus. EventSel=C1H UMask=08H FCMask=07H PortMask=04H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART3 Counts every peer to peer read request for up to a 64 byte transaction of data made by a different IIO unit to the MMIO space of a card on IIO Part3. Does not include requests made by the same IIO unit. In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the bus. EventSel=C1H UMask=08H FCMask=07H PortMask=08H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART0 Counts every peer to peer write request of up to a 64 byte transaction of data made to the MMIO space of a card on IIO Part0 by a different IIO unit. Does not include requests made by the same IIO unit. In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the bus. EventSel=C1H UMask=02H FCMask=07H PortMask=01H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART1 Counts every peer to peer write request of up to a 64 byte transaction of data made to the MMIO space of a card on IIO Part1 by a different IIO unit. Does not include requests made by the same IIO unit. In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the bus. EventSel=C1H UMask=02H FCMask=07H PortMask=02H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART2 Counts every peer to peer write request of up to a 64 byte transaction of data made to the MMIO space of a card on IIO Part2 by a different IIO unit. Does not include requests made by the same IIO unit. In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the bus. EventSel=C1H UMask=02H FCMask=07H PortMask=04H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART3 Counts every peer to peer write request of up to a 64 byte transaction of data made to the MMIO space of a card on IIO Part3 by a different IIO unit. Does not include requests made by the same IIO unit. In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the bus. EventSel=C1H UMask=02H FCMask=07H PortMask=08H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART0 Counts every read request for up to a 64 byte transaction of data made by IIO Part0 to a unit on the main die (generally memory). In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the bus. EventSel=84H UMask=04H FCMask=07H PortMask=01H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART1 Counts every read request for up to a 64 byte transaction of data made by IIO Part1 to a unit on the main die (generally memory). In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the bus. EventSel=84H UMask=04H FCMask=07H PortMask=02H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART2 Counts every read request for up to a 64 byte transaction of data made by IIO Part2 to a unit on the main die (generally memory). In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the bus. EventSel=84H UMask=04H FCMask=07H PortMask=04H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART3 Counts every read request for up to a 64 byte transaction of data made by IIO Part3 to a unit on the main die (generally memory). In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the bus. EventSel=84H UMask=04H FCMask=07H PortMask=08H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART0 Counts every write request of up to a 64 byte transaction of data made by IIO Part0 to a unit on the main die (generally memory). In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the bus. EventSel=84H UMask=01H FCMask=07H PortMask=01H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART1 Counts every write request of up to a 64 byte transaction of data made by IIO Part1 to a unit on the main die (generally memory). In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the bus. EventSel=84H UMask=01H FCMask=07H PortMask=02H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART2 Counts every write request of up to a 64 byte transaction of data made by IIO Part2 to a unit on the main die (generally memory). In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the bus. EventSel=84H UMask=01H FCMask=07H PortMask=04H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART3 Counts every write request of up to a 64 byte transaction of data made by IIO Part3 to a unit on the main die (generally memory). In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the bus. EventSel=84H UMask=01H FCMask=07H PortMask=08H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.PART0 Counts every peer to peer read request of up to a 64 byte transaction made by IIO Part0 to the MMIO space of an IIO target. In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the bus. EventSel=84H UMask=08H FCMask=07H PortMask=01H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.PART1 Counts every peer to peer read request of up to a 64 byte transaction made by IIO Part1 to the MMIO space of an IIO target. In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the bus. EventSel=84H UMask=08H FCMask=07H PortMask=02H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.PART2 Counts every peer to peer read request of up to a 64 byte transaction made by IIO Part2 to the MMIO space of an IIO target. In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the bus. EventSel=84H UMask=08H FCMask=07H PortMask=04H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.PART3 Counts every peer to peer read request of up to a 64 byte transaction made by IIO Part3 to the MMIO space of an IIO target. In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the bus. EventSel=84H UMask=08H FCMask=07H PortMask=08H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART0 Counts every peer to peer write request of up to a 64 byte transaction of data made by IIO Part0 to the MMIO space of an IIO target. In the general case, Part0 refers to a standard PCIe card of any size (x16,x8,x4) that is plugged directly into one of the PCIe slots. Part0 could also refer to any device plugged into the first slot of a PCIe riser card or to a device attached to the IIO unit which starts its use of the bus using lane 0 of the 16 lanes supported by the bus. EventSel=84H UMask=02H FCMask=07H PortMask=01H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART1 Counts every peer to peer write request of up to a 64 byte transaction of data made by IIO Part1 to the MMIO space of an IIO target. In the general case, Part1 refers to a x4 PCIe card plugged into the second slot of a PCIe riser card, but it could refer to any x4 device attached to the IIO unit using lanes starting at lane 4 of the 16 lanes supported by the bus. EventSel=84H UMask=02H FCMask=07H PortMask=02H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART2 Counts every peer to peer write request of up to a 64 byte transaction of data made by IIO Part2 to the MMIO space of an IIO target. In the general case, Part2 refers to a x4 or x8 PCIe card plugged into the third slot of a PCIe riser card, but it could refer to any x4 or x8 device attached to the IIO unit and using lanes starting at lane 8 of the 16 lanes supported by the bus. EventSel=84H UMask=02H FCMask=07H PortMask=04H
Counter=0,1,2,3
Uncore
UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART3 Counts every peer to peer write request of up to a 64 byte transaction of data made by IIO Part3 to the MMIO space of an IIO target. In the general case, Part3 refers to a x4 PCIe card plugged into the fourth slot of a PCIe riser card, but it could refer to any device attached to the IIO unit using the lanes starting at lane 12 of the 16 lanes supported by the bus. EventSel=84H UMask=02H FCMask=07H PortMask=08H
Counter=0,1,2,3
Uncore
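Since the UNC_IIO_TXN_REQ_* events count whole transactions of up to 64 bytes while the matching UNC_IIO_DATA_REQ_* events count their 4-byte pieces, the ratio of the two gives a rough estimate of the average payload per transaction. Illustrative Python sketch with hypothetical deltas for a single part and interval:

# Illustrative sketch; values are made up.
data_req_mem_write = 1.6e7  # UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART0 (4-byte requests)
txn_req_mem_write = 1.0e6   # UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART0 (transactions of up to 64 bytes)

avg_payload_bytes = 4 * data_req_mem_write / txn_req_mem_write
print(f"Average inbound write payload: {avg_payload_bytes:.1f} bytes per transaction")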
UNC_M_ACT_COUNT.WR Counts DRAM Page Activate commands sent on this channel due to a write request to the iMC (Memory Controller). Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS (Column Access Select) command. EventSel=01H UMask=02H
Counter=0,1,2,3
Uncore
UNC_M_CAS_COUNT.ALL Counts all CAS (Column Address Select) commands issued to DRAM per memory channel. CAS commands are issued to specify the address to read or write on DRAM, so this event increments for every read and write. This event counts whether AutoPrecharge (which closes the DRAM Page automatically after a read/write) is enabled or not. EventSel=04H UMask=0FH
Counter=0,1,2,3
Uncore
UNC_M_CAS_COUNT.RD Counts all CAS (Column Access Select) read commands issued to DRAM on a per channel basis. CAS commands are issued to specify the address to read or write on DRAM, and this event increments for every read. This event includes underfill reads due to partial write requests. This event counts whether AutoPrecharge (which closes the DRAM Page automatically after a read/write) is enabled or not. EventSel=04H UMask=03H
Counter=0,1,2,3
Uncore
UNC_M_CAS_COUNT.RD_REG Counts CAS (Column Access Select) regular read commands issued to DRAM on a per channel basis. CAS commands are issued to specify the address to read or write on DRAM, and this event increments for every regular read. This event only counts regular reads and does not includes underfill reads due to partial write requests. This event counts whether AutoPrecharge (which closes the DRAM Page automatically after a read/write) is enabled or not. EventSel=04H UMask=01H
Counter=0,1,2,3
Uncore
UNC_M_CAS_COUNT.RD_UNDERFILL Counts CAS (Column Access Select) underfill read commands issued to DRAM due to a partial write, on a per channel basis. CAS commands are issued to specify the address to read or write on DRAM, and this command counts underfill reads. Partial writes must be completed by first reading in the underfill from DRAM and then merging in the partial write data before writing the full line back to DRAM. This event will generally count about the same as the number of partial writes, but may be slightly less because of partials hitting in the WPQ (due to a previous write request). EventSel=04H UMask=02H
Counter=0,1,2,3
Uncore
UNC_M_CAS_COUNT.WR Counts all CAS (Column Address Select) write commands issued to DRAM per memory channel. CAS commands are issued to specify the address to read or write on DRAM, and this event increments for every write. This event counts whether AutoPrecharge (which closes the DRAM Page automatically after a read/write) is enabled or not. EventSel=04H UMask=0CH
Counter=0,1,2,3
Uncore
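The CAS counts above translate directly into DRAM bandwidth: each read or write CAS moves one 64-byte cache line, so bandwidth is the CAS count times 64 divided by the measurement interval, summed across channels. A minimal Python sketch with hypothetical counter deltas:

# Illustrative sketch; hypothetical CAS deltas summed across all iMC channels of a socket.
cas_count_rd = 5.0e8   # UNC_M_CAS_COUNT.RD (includes underfill reads)
cas_count_wr = 2.0e8   # UNC_M_CAS_COUNT.WR
interval_s = 1.0
line_bytes = 64        # each CAS command transfers one 64-byte line

read_bw_gbs = cas_count_rd * line_bytes / interval_s / 1e9
write_bw_gbs = cas_count_wr * line_bytes / interval_s / 1e9
print(f"DRAM read bandwidth:  {read_bw_gbs:.1f} GB/s")
print(f"DRAM write bandwidth: {write_bw_gbs:.1f} GB/s")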
UNC_M_CAS_COUNT.WR_WMM Counts the total number of DRAM Write CAS commands issued on this channel while in Write-Major-Mode. EventSel=04H UMask=04H
Counter=0,1,2,3
Uncore
UNC_M_CLOCKTICKS Counts clockticks of the fixed frequency clock of the memory controller using one of the programmable counters. EventSel=00H UMask=00H
Counter=0,1,2,3
Uncore
UNC_M_POWER_CHANNEL_PPD Counts cycles when all the ranks in the channel are in PPD (PreCharge Power Down) mode. If IBT (Input Buffer Terminators)=off is enabled, then this event counts the cycles spent in PPD mode. If IBT=off is not enabled, then this event counts the cycles in which PPD mode could have been taken advantage of. EventSel=85H UMask=00H
Counter=0,1,2,3
Uncore
UNC_M_POWER_SELF_REFRESH Counts the number of cycles when the iMC (memory controller) is in self-refresh and has a clock. This happens in some ACPI CPU package C-states for the sleep levels. For example, the PCU (Power Control Unit) may ask the iMC to enter self-refresh even though some of the cores are still processing. One use of this is for Intel® Dynamic Power Technology. Self-refresh is required during package C3 and C6, but there is no clock in the iMC at this time, so it is not possible to count these cases. EventSel=43H UMask=00H
Counter=0,1,2,3
Uncore
UNC_M_PRE_COUNT.PAGE_MISS Counts the number of explicit DRAM Precharge commands sent on this channel as a result of a DRAM page miss. This does not include the implicit precharge commands sent with CAS commands in Auto-Precharge mode. This does not include Precharge commands sent as a result of a page close counter expiration. EventSel=02H UMask=01H
Counter=0,1,2,3
Uncore
UNC_M_PRE_COUNT.RD Counts the number of explicit DRAM Precharge commands issued on a per channel basis due to a read, so as to close the previous DRAM page, before opening the requested page. EventSel=02H UMask=04H
Counter=0,1,2,3
Uncore
UNC_M_RPQ_INSERTS Counts the number of read requests allocated into the Read Pending Queue (RPQ). This queue is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the CHA to the iMC. The requests deallocate after the read CAS command has been issued to DRAM. This event counts both Isochronous and non-Isochronous requests which were issued to the RPQ. EventSel=10H UMask=00H
Counter=0,1,2,3
Uncore
UNC_M_RPQ_OCCUPANCY Counts the number of entries in the Read Pending Queue (RPQ) at each cycle. This can then be used to calculate both the average occupancy of the queue (in conjunction with the number of cycles not empty) and the average latency in the queue (in conjunction with the number of allocations). The RPQ is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the CHA to the iMC. They deallocate from the RPQ after the CAS command has been issued to memory. EventSel=80H UMask=00H
Counter=0,1,2,3
Uncore
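As the occupancy description notes, dividing UNC_M_RPQ_OCCUPANCY by UNC_M_RPQ_INSERTS over the same interval gives the average number of memory-controller clocks a read spends in the RPQ, and UNC_M_CLOCKTICKS provides the clock rate needed to express that as time. Illustrative Python sketch with hypothetical values from one channel:

# Illustrative sketch; values are made up and taken from the same interval.
rpq_occupancy = 8.0e8  # UNC_M_RPQ_OCCUPANCY (entries summed per cycle)
rpq_inserts = 2.0e7    # UNC_M_RPQ_INSERTS
m_clockticks = 1.33e9  # UNC_M_CLOCKTICKS over the interval
interval_s = 1.0

avg_queue_cycles = rpq_occupancy / rpq_inserts
dclk_hz = m_clockticks / interval_s
avg_queue_ns = avg_queue_cycles / dclk_hz * 1e9
print(f"Average read pending queue residency: {avg_queue_ns:.1f} ns")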
UNC_M_WPQ_INSERTS Counts the number of write requests allocated into the Write Pending Queue (WPQ). The WPQ is used to schedule writes out to the memory controller and to track the requests. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the CHA to the iMC (Memory Controller). The write requests deallocate after being issued to DRAM. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon as they have 'posted' to the iMC. EventSel=20H UMask=00H
Counter=0,1,2,3
Uncore
UNC_M_WPQ_OCCUPANCY Counts the number of entries in the Write Pending Queue (WPQ) at each cycle. This can then be used to calculate both the average queue occupancy (in conjunction with the number of cycles not empty) and the average latency (in conjunction with the number of allocations). The WPQ is used to schedule writes out to the memory controller and to track the requests. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the CHA to the iMC (memory controller). They deallocate after being issued to DRAM. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon as they have 'posted' to the iMC. This is not to be confused with actually performing the write to DRAM. Therefore, the average latency for this queue is actually not useful for deconstructing intermediate write latencies. So, we provide filtering based on whether the request has posted or not. By using the 'not posted' filter, we can track how long writes spent in the iMC before completions were sent to the HA. The 'posted' filter, on the other hand, provides information about how much queueing is actually happening in the iMC for writes before they are actually issued to memory. High average occupancies will generally coincide with high write major mode counts. EventSel=81H UMask=00H
Counter=0,1,2,3
Uncore
UNC_I_CACHE_TOTAL_OCCUPANCY.MEM Total IRP occupancy of inbound read and write requests. This is effectively the sum of read occupancy and write occupancy. EventSel=0FH UMask=04H
Counter=0,1
Uncore
UNC_I_COHERENT_OPS.PCITOM PCIITOM request issued by the IRP unit to the mesh with the intention of writing a full cacheline to coherent memory, without a RFO. PCIITOM is a speculative Invalidate to Modified command that requests ownership of the cacheline and does not move data from the mesh to IRP cache. EventSel=10H UMask=10H
Counter=0,1
Uncore
UNC_I_COHERENT_OPS.RFO RFO request issued by the IRP unit to the mesh with the intention of writing a partial cacheline to coherent memory. RFO is a Read For Ownership command that requests ownership of the cacheline and moves data from the mesh to IRP cache. EventSel=10H UMask=08H
Counter=0,1
Uncore
UNC_I_FAF_INSERTS Inbound read requests to coherent memory, received by the IRP and inserted into the Fire and Forget queue (FAF), a queue used for processing inbound reads in the IRP. EventSel=18H UMask=00H
Counter=0,1
Uncore
UNC_I_FAF_OCCUPANCY Occupancy of the IRP Fire and Forget (FAF) queue, a queue used for processing inbound reads in the IRP. EventSel=19H UMask=00H
Counter=0,1
Uncore
UNC_I_TRANSACTIONS.WR_PREF Inbound write (fast path) requests to coherent memory, received by the IRP resulting in write ownership requests issued by IRP to the mesh. EventSel=11H UMask=08H
Counter=0,1
Uncore
UNC_M2M_BYPASS_M2M_Egress.NOT_TAKEN Counts traffic in which the M2M (Mesh to Memory) to iMC (Memory Controller) bypass was not taken EventSel=22H UMask=02H
Counter=0,1,2,3
Uncore
UNC_M2M_DIRECT2CORE_NOT_TAKEN_DIRSTATE Counts cycles when direct to core mode (which bypasses the CHA) was disabled EventSel=24H UMask=00H
Counter=0,1,2,3
Uncore
UNC_M2M_DIRECT2CORE_TAKEN Counts when messages were sent direct to core (bypassing the CHA) EventSel=23H UMask=00H
Counter=0,1,2,3
Uncore
UNC_M2M_DIRECT2CORE_TXN_OVERRIDE Counts reads in which direct to core transactions (which would have bypassed the CHA) were overridden EventSel=25H UMask=00H
Counter=0,1,2,3
Uncore
UNC_M2M_DIRECT2UPI_NOT_TAKEN_CREDITS Counts reads in which direct to Intel® Ultra Path Interconnect (UPI) transactions (which would have bypassed the CHA) were overridden EventSel=28H UMask=00H
Counter=0,1,2,3
Uncore
UNC_M2M_DIRECT2UPI_NOT_TAKEN_DIRSTATE Counts cycles when the ability to send messages direct to the Intel® Ultra Path Interconnect (bypassing the CHA) was disabled EventSel=27H UMask=00H
Counter=0,1,2,3
Uncore
UNC_M2M_DIRECT2UPI_TAKEN Counts when messages were sent direct to the Intel® Ultra Path Interconnect (bypassing the CHA) EventSel=26H UMask=00H
Counter=0,1,2,3
Uncore
UNC_M2M_DIRECT2UPI_TXN_OVERRIDE Counts when a read message that was sent direct to the Intel® Ultra Path Interconnect (bypassing the CHA) was overridden EventSel=29H UMask=00H
Counter=0,1,2,3
Uncore
UNC_M2M_DIRECTORY_LOOKUP.ANY Counts when the M2M (Mesh to Memory) looks into the multi-socket cacheline Directory state, and found the cacheline marked in Any State (A, I, S or unused) EventSel=2DH UMask=01H
Counter=0,1,2,3
Uncore
UNC_M2M_DIRECTORY_LOOKUP.STATE_A Counts when the M2M (Mesh to Memory) looks into the multi-socket cacheline Directory state, and found the cacheline marked in the A (SnoopAll) state, indicating the cacheline is stored in another socket in any state, and we must snoop the other sockets to make sure we get the latest data. The data may be stored in any state in the local socket. EventSel=2DH UMask=08H
Counter=0,1,2,3
Uncore
UNC_M2M_DIRECTORY_LOOKUP.STATE_I Counts when the M2M (Mesh to Memory) looks into the multi-socket cacheline Directory state, and found the cacheline marked in the I (Invalid) state indicating the cacheline is not stored in another socket, and so there is no need to snoop the other sockets for the latest data. The data may be stored in any state in the local socket. EventSel=2DH UMask=02H
Counter=0,1,2,3
Uncore
UNC_M2M_DIRECTORY_LOOKUP.STATE_S Counts when the M2M (Mesh to Memory) looks into the multi-socket cacheline Directory state, and found the cacheline marked in the S (Shared) state, indicating the cacheline is stored in another socket in the S (Shared) state, and so there is no need to snoop the other sockets for the latest data. The data may be stored in any state in the local socket. EventSel=2DH UMask=04H
Counter=0,1,2,3
Uncore
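One use of the directory lookup breakdown above is to estimate how often reads handled by the M2M require a cross-socket snoop: lookups that find the line in the A (SnoopAll) state must snoop the other sockets, while I-state lookups do not. Illustrative Python sketch with hypothetical counter deltas:

# Illustrative sketch; values are made up.
lookup_any = 4.0e7      # UNC_M2M_DIRECTORY_LOOKUP.ANY
lookup_state_a = 2.0e6  # UNC_M2M_DIRECTORY_LOOKUP.STATE_A (snoop required)
lookup_state_i = 3.6e7  # UNC_M2M_DIRECTORY_LOOKUP.STATE_I (no snoop needed)

print(f"Lookups requiring a cross-socket snoop: {lookup_state_a / lookup_any:.1%}")
print(f"Lookups satisfied without snooping:     {lookup_state_i / lookup_any:.1%}")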
UNC_M2M_DIRECTORY_UPDATE.A2I Counts when the M2M (Mesh to Memory) updates the multi-socket cacheline Directory state from A (SnoopAll) to I (Invalid) EventSel=2EH UMask=20H
Counter=0,1,2,3
Uncore
UNC_M2M_DIRECTORY_UPDATE.A2S Counts when the M2M (Mesh to Memory) updates the multi-socket cacheline Directory state from A (SnoopAll) to S (Shared) EventSel=2EH UMask=40H
Counter=0,1,2,3
Uncore
UNC_M2M_DIRECTORY_UPDATE.ANY Counts when the M2M (Mesh to Memory) updates the multi-socket cacheline Directory to a new state EventSel=2EH UMask=01H
Counter=0,1,2,3
Uncore
UNC_M2M_DIRECTORY_UPDATE.I2A Counts when the M2M (Mesh to Memory) updates the multi-socket cacheline Directory state from I (Invalid) to A (SnoopAll) EventSel=2EH UMask=04H
Counter=0,1,2,3
Uncore
UNC_M2M_DIRECTORY_UPDATE.I2S Counts when the M2M (Mesh to Memory) updates the multi-socket cacheline Directory state from I (Invalid) to S (Shared) EventSel=2EH UMask=02H
Counter=0,1,2,3
Uncore
UNC_M2M_DIRECTORY_UPDATE.S2A Counts when the M2M (Mesh to Memory) updates the multi-socket cacheline Directory state from S (Shared) to A (SnoopAll) EventSel=2EH UMask=10H
Counter=0,1,2,3
Uncore
UNC_M2M_DIRECTORY_UPDATE.S2I Counts when the M2M (Mesh to Memory) updates the multi-socket cacheline Directory state from S (Shared) to I (Invalid) EventSel=2EH UMask=08H
Counter=0,1,2,3
Uncore
UNC_M2M_IMC_READS.ALL Counts when the M2M (Mesh to Memory) issues reads to the iMC (Memory Controller). EventSel=37H UMask=04H
Counter=0,1,2,3
Uncore
UNC_M2M_IMC_READS.NORMAL Counts when the M2M (Mesh to Memory) issues reads to the iMC (Memory Controller). It only counts normal priority non-isochronous reads. EventSel=37H UMask=01H
Counter=0,1,2,3
Uncore
UNC_M2M_IMC_WRITES.ALL Counts when the M2M (Mesh to Memory) issues writes to the iMC (Memory Controller). EventSel=38H UMask=10H
Counter=0,1,2,3
Uncore
UNC_M2M_IMC_WRITES.NI M2M Writes Issued to iMC; All, regardless of priority. EventSel=38H UMask=80H
Counter=0,1,2,3
Uncore
UNC_M2M_IMC_WRITES.PARTIAL Counts when the M2M (Mesh to Memory) issues partial writes to the iMC (Memory Controller). It only counts normal priority non-isochronous writes. EventSel=38H UMask=02H
Counter=0,1,2,3
Uncore
UNC_M2M_PREFCAM_DEMAND_PROMOTIONS Counts when the M2M (Mesh to Memory) promotes an outstanding request in the prefetch queue due to a subsequent demand read request that entered the M2M with the same address. Explanatory Side Note: The Prefetch queue is made of CAM (Content Addressable Memory) EventSel=56H UMask=00H
Counter=0,1,2,3
Uncore
UNC_M2M_PREFCAM_INSERTS Counts when the M2M (Mesh to Memory) receives a prefetch request and inserts it into its outstanding prefetch queue. Explanatory Side Note: The prefetch queue is made of CAM (Content Addressable Memory) EventSel=57H UMask=00H
Counter=0,1,2,3
Uncore
UNC_M2M_RxC_AD_INSERTS Counts when a new entry is received (RxC) and then added to the AD (Address Ring) Ingress Queue from the CMS (Common Mesh Stop). This is generally used for reads. EventSel=01H UMask=00H
Counter=0,1,2,3
Uncore
UNC_M2M_RxC_AD_OCCUPANCY AD Ingress (from CMS) Occupancy EventSel=02H UMask=00H
Counter=0,1,2,3
Uncore
UNC_M2M_RxC_BL_INSERTS BL Ingress (from CMS) Allocations EventSel=05H UMask=00H
Counter=0,1,2,3
Uncore
UNC_M2M_RxC_BL_OCCUPANCY BL Ingress (from CMS) Occupancy EventSel=06H UMask=00H
Counter=0,1,2,3
Uncore
UNC_M2M_TxC_AD_INSERTS AD Egress (to CMS) Allocations EventSel=09H UMask=00H
Counter=0,1,2,3
Uncore
UNC_M2M_TxC_AD_OCCUPANCY AD Egress (to CMS) Occupancy EventSel=0AH UMask=00H
Counter=0,1,2,3
Uncore
UNC_M2M_TxC_BL_INSERTS.ALL BL Egress (to CMS) Allocations; All EventSel=15H UMask=03H
Counter=0,1,2,3
Uncore
UNC_M2M_TxC_BL_OCCUPANCY.ALL BL Egress (to CMS) Occupancy; All EventSel=16H UMask=03H
Counter=0,1,2,3
Uncore
UNC_M3UPI_UPI_PREFETCH_SPAWN Counts cases where the flow control queue that sits between the Intel® Ultra Path Interconnect (UPI) and the mesh spawns a prefetch to the iMC (Memory Controller) EventSel=29H UMask=00H
Counter=0,1,2
Uncore
UNC_UPI_CLOCKTICKS Counts clockticks of the fixed frequency clock controlling the Intel® Ultra Path Interconnect (UPI). This clock runs at 1/8th the 'GT/s' speed of the UPI link. For example, a 9.6 GT/s link will have a fixed frequency of 1.2 GHz. EventSel=01H UMask=00H
Counter=0,1,2,3
Uncore
UNC_UPI_DIRECT_ATTEMPTS.D2C Counts Data Response (DRS) packets that attempted to go direct to core bypassing the CHA. EventSel=12H UMask=01H
Counter=0,1,2,3
Uncore
UNC_UPI_DIRECT_ATTEMPTS.D2U Counts Data Response (DRS) packets that attempted to go direct to Intel® Ultra Path Interconnect (UPI) bypassing the CHA. EventSel=12H UMask=02H
Counter=0,1,2,3
Uncore
UNC_UPI_L1_POWER_CYCLES Counts cycles when the Intel® Ultra Path Interconnect (UPI) is in L1 power mode. L1 is a mode that totally shuts down the UPI link. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another; this event only counts when both links are shut down. EventSel=21H UMask=00H
Counter=0,1,2,3
Uncore
UNC_UPI_RxL_BYPASSED.SLOT0 Counts incoming FLITs (FLow control unITs) which bypassed the slot0 RxQ buffer (Receive Queue) and passed directly to the Egress. This is a latency optimization, and should generally be the common case. If this value is less than the number of FLITs transferred, it implies that there was queueing getting onto the ring, and thus the transactions saw higher latency. EventSel=31H UMask=01H
Counter=0,1,2,3
Uncore
UNC_UPI_RxL_BYPASSED.SLOT1 Counts incoming FLITs (FLow control unITs) which bypassed the slot1 RxQ buffer (Receive Queue) and passed directly across the BGF and into the Egress. This is a latency optimization, and should generally be the common case. If this value is less than the number of FLITs transferred, it implies that there was queueing getting onto the ring, and thus the transactions saw higher latency. EventSel=31H UMask=02H
Counter=0,1,2,3
Uncore
UNC_UPI_RxL_BYPASSED.SLOT2 Counts incoming FLITs (FLow control unITs) which bypassed the slot2 RxQ buffer (Receive Queue) and passed directly to the Egress. This is a latency optimization, and should generally be the common case. If this value is less than the number of FLITs transferred, it implies that there was queueing getting onto the ring, and thus the transactions saw higher latency. EventSel=31H UMask=04H
Counter=0,1,2,3
Uncore
UNC_UPI_RxL_FLITS.ALL_DATA Counts valid data FLITs (80 bit FLow control unITs: 64 bits of data) received from any of the 3 Intel® Ultra Path Interconnect (UPI) Receive Queue slots on this UPI unit. EventSel=03H UMask=0FH
Counter=0,1,2,3
Uncore
UNC_UPI_RxL_FLITS.ALL_NULL Counts null FLITs (80 bit FLow control unITs) received from any of the 3 Intel® Ultra Path Interconnect (UPI) Receive Queue slots on this UPI unit. EventSel=03H UMask=27H
Counter=0,1,2,3
Uncore
UNC_UPI_RxL_FLITS.NON_DATA Counts protocol header and credit FLITs (80 bit FLow control unITs) received from any of the 3 UPI slots on this UPI unit. EventSel=03H UMask=97H
Counter=0,1,2,3
Uncore
UNC_UPI_RxL0P_POWER_CYCLES Counts cycles when the receive side (Rx) of the Intel® Ultra Path Interconnect(UPI) is in L0p power mode. L0p is a mode where we disable 60% of the UPI lanes, decreasing our bandwidth in order to save power. EventSel=25H UMask=00H
Counter=0,1,2,3
Uncore
UNC_UPI_TxL_BYPASSED Counts incoming FLITs (FLow control unITs) which bypassed the TxL(transmit) FLIT buffer and pass directly out the UPI Link. Generally, when data is transmitted across the Intel® Ultra Path Interconnect (UPI), it will bypass the TxQ and pass directly to the link. However, the TxQ will be used in L0p (Low Power) mode and (Link Layer Retry) LLR mode, increasing latency to transfer out to the link. EventSel=41H UMask=00H
Counter=0,1,2,3
Uncore
UNC_UPI_TxL_FLITS.ALL_DATA Counts valid data FLITs (80 bit FLow control unITs: 64 bits of data) transmitted (TxL) via any of the 3 Intel® Ultra Path Interconnect (UPI) slots on this UPI unit. EventSel=02H UMask=0FH
Counter=0,1,2,3
Uncore
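Since each valid data FLIT carries 64 bits (8 bytes) of payload per the descriptions above, the ALL_DATA counts give a direct estimate of UPI data traffic. A simplified sketch that ignores protocol overhead; the counter values and sampling interval are hypothetical:
# Estimate UPI data traffic from the RxL/TxL ALL_DATA FLIT counts.
BYTES_PER_DATA_FLIT = 64 // 8        # each data FLIT carries 64 bits of payload
rxl_all_data = 150_000_000           # hypothetical UNC_UPI_RxL_FLITS.ALL_DATA sample
txl_all_data = 120_000_000           # hypothetical UNC_UPI_TxL_FLITS.ALL_DATA sample
sample_seconds = 1.0                 # assumed sampling interval
rx_mb_per_s = rxl_all_data * BYTES_PER_DATA_FLIT / sample_seconds / 1e6
tx_mb_per_s = txl_all_data * BYTES_PER_DATA_FLIT / sample_seconds / 1e6
print(f"UPI data traffic: Rx {rx_mb_per_s:.0f} MB/s, Tx {tx_mb_per_s:.0f} MB/s")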
UNC_UPI_TxL_FLITS.ALL_NULL Counts null FLITs (80 bit FLow control unITs) transmitted via any of the 3 Intel® Ultra Path Interconnect (UPI) slots on this UPI unit. EventSel=02H UMask=27H
Counter=0,1,2,3
Uncore
UNC_UPI_TxL_FLITS.IDLE Counts when the Intel® Ultra Path Interconnect (UPI) transmits an idle FLIT (80 bit FLow control unIT). Every UPI cycle must be sending either data FLITs, protocol/credit FLITs, or idle FLITs. EventSel=02H UMask=47H
Counter=0,1,2,3
Uncore
UNC_UPI_TxL_FLITS.NON_DATA Counts protocol header and credit FLITs (80 bit FLow control unITs) transmitted across any of the 3 UPI (Ultra Path Interconnect) slots on this UPI unit. EventSel=02H UMask=97H
Counter=0,1,2,3
Uncore
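Because every transmit cycle carries data, protocol/credit, or idle FLITs, the non-idle share of transmitted FLITs is a rough indicator of how busy the link's transmit side is. A minimal sketch using hypothetical samples of the three TxL FLIT events above:
# Rough transmit-utilization indicator from the FLIT mix (hypothetical samples).
tx_data    = 120_000_000     # UNC_UPI_TxL_FLITS.ALL_DATA
tx_nondata =  40_000_000     # UNC_UPI_TxL_FLITS.NON_DATA
tx_idle    = 340_000_000     # UNC_UPI_TxL_FLITS.IDLE
non_idle_share = (tx_data + tx_nondata) / (tx_data + tx_nondata + tx_idle)
print(f"Non-idle share of transmitted FLITs: {non_idle_share:.1%}")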
UNC_UPI_TxL0P_POWER_CYCLES Counts cycles when the transmit side (Tx) of the Intel® Ultra Path Interconnect (UPI) is in L0p power mode. L0p is a mode in which 60% of the UPI lanes are disabled, reducing bandwidth in order to save power. EventSel=27H UMask=00H
Counter=0,1,2,3
Uncore
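Dividing the L0p cycle counts by UNC_UPI_CLOCKTICKS gives the fraction of time each direction spent in the reduced-lane mode. A minimal sketch with hypothetical counter samples taken over the same interval:
# L0p residency per direction (hypothetical samples over one interval).
upi_clockticks = 2_400_000_000       # UNC_UPI_CLOCKTICKS
rx_l0p_cycles  =   600_000_000       # UNC_UPI_RxL0P_POWER_CYCLES
tx_l0p_cycles  =   480_000_000       # UNC_UPI_TxL0P_POWER_CYCLES
print(f"Rx L0p residency: {rx_l0p_cycles / upi_clockticks:.1%}")
print(f"Tx L0p residency: {tx_l0p_cycles / upi_clockticks:.1%}")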
UNC_C_CLOCKTICKS This event is deprecated. Refer to new event UNC_CHA_CLOCKTICKS EventSel=00H UMask=00H
Counter=0,1,2,3
Deprecated
Uncore
UNC_C_FAST_ASSERTED This event is deprecated. Refer to new event UNC_CHA_FAST_ASSERTED.HORZ EventSel=A5H UMask=02H
Counter=0,1,2,3
Deprecated
Uncore
UNC_C_LLC_VICTIMS.E_STATE This event is deprecated. Refer to new event UNC_CHA_LLC_VICTIMS.TOTAL_E EventSel=37H UMask=02H
Counter=0,1,2,3
Deprecated
Uncore
UNC_C_LLC_VICTIMS.F_STATE This event is deprecated. Refer to new event UNC_CHA_LLC_VICTIMS.TOTAL_F EventSel=37H UMask=08H
Counter=0,1,2,3
Deprecated
Uncore
UNC_C_LLC_VICTIMS.M_STATE This event is deprecated. Refer to new event UNC_CHA_LLC_VICTIMS.TOTAL_M EventSel=37H UMask=01H
Counter=0,1,2,3
Deprecated
Uncore
UNC_C_LLC_VICTIMS.S_STATE This event is deprecated. Refer to new event UNC_CHA_LLC_VICTIMS.TOTAL_S EventSel=37H UMask=04H
Counter=0,1,2,3
Deprecated
Uncore
UNC_H_CORE_SNP.CORE_GTONE This event is deprecated. Refer to new event UNC_CHA_CORE_SNP.CORE_GTONE EventSel=33H UMask=42H
Counter=0,1,2,3
Deprecated
Uncore
UNC_H_CORE_SNP.EVICT_GTONE This event is deprecated. Refer to new event UNC_CHA_CORE_SNP.EVICT_GTONE EventSel=33H UMask=82H
Counter=0,1,2,3
Deprecated
Uncore
UNC_H_DIR_LOOKUP.NO_SNP This event is deprecated. Refer to new event UNC_CHA_DIR_LOOKUP.NO_SNP EventSel=53H UMask=02H
Counter=0,1,2,3
Deprecated
Uncore
UNC_H_DIR_LOOKUP.SNP This event is deprecated. Refer to new event UNC_CHA_DIR_LOOKUP.SNP EventSel=53H UMask=01H
Counter=0,1,2,3
Deprecated
Uncore
UNC_H_DIR_UPDATE.HA This event is deprecated. Refer to new event UNC_CHA_DIR_UPDATE.HA EventSel=54H UMask=01H
Counter=0,1,2,3
Deprecated
Uncore
UNC_H_DIR_UPDATE.TOR This event is deprecated. Refer to new event UNC_CHA_DIR_UPDATE.TOR EventSel=54H UMask=02H
Counter=0,1,2,3
Deprecated
Uncore
UNC_H_HITME_HIT.EX_RDS This event is deprecated. Refer to new event UNC_CHA_HITME_HIT.EX_RDS EventSel=5FH UMask=01H
Counter=0,1,2,3
Deprecated
Uncore
UNC_H_MISC.RFO_HIT_S This event is deprecated. Refer to new event UNC_CHA_MISC.RFO_HIT_S EventSel=39H UMask=08H
Counter=0,1,2,3
Deprecated
Uncore
UNC_H_REQUESTS.INVITOE_LOCAL This event is deprecated. Refer to new event UNC_CHA_REQUESTS.INVITOE_LOCAL EventSel=50H UMask=10H
Counter=0,1,2,3
Deprecated
Uncore
UNC_H_REQUESTS.INVITOE_REMOTE This event is deprecated. Refer to new event UNC_CHA_REQUESTS.INVITOE_REMOTE EventSel=50H UMask=20H
Counter=0,1,2,3
Deprecated
Uncore
UNC_H_REQUESTS.READS This event is deprecated. Refer to new event UNC_CHA_REQUESTS.READS EventSel=50H UMask=03H
Counter=0,1,2,3
Deprecated
Uncore
UNC_H_REQUESTS.READS_LOCAL This event is deprecated. Refer to new event UNC_CHA_REQUESTS.READS_LOCAL EventSel=50H UMask=01H
Counter=0,1,2,3
Deprecated
Uncore
UNC_H_REQUESTS.WRITES This event is deprecated. Refer to new event UNC_CHA_REQUESTS.WRITES EventSel=50H UMask=0CH
Counter=0,1,2,3
Deprecated
Uncore
UNC_H_REQUESTS.WRITES_LOCAL This event is deprecated. Refer to new event UNC_CHA_REQUESTS.WRITES_LOCAL EventSel=50H UMask=04H
Counter=0,1,2,3
Deprecated
Uncore
UNC_H_RxC_INSERTS.IRQ This event is deprecated. Refer to new event UNC_CHA_RxC_INSERTS.IRQ EventSel=13H UMask=01H
Counter=0,1,2,3
Deprecated
Uncore
UNC_H_RxC_IRQ1_REJECT.PA_MATCH This event is deprecated. Refer to new event UNC_CHA_RxC_IRQ1_REJECT.PA_MATCH EventSel=19H UMask=80H
Counter=0,1,2,3
Deprecated
Uncore
UNC_H_RxC_OCCUPANCY.IRQ This event is deprecated. Refer to new event UNC_CHA_RxC_OCCUPANCY.IRQ EventSel=11H UMask=01H
Counter=0
Deprecated
Uncore
UNC_H_SNOOP_RESP.RSP_FWD_WB This event is deprecated. Refer to new event UNC_CHA_SNOOP_RESP.RSP_FWD_WB EventSel=5CH UMask=20H
Counter=0,1,2,3
Deprecated
Uncore
UNC_H_SNOOP_RESP.RSPCNFLCT This event is deprecated. Refer to new event UNC_CHA_SNOOP_RESP.RSPCNFLCTS EventSel=5CH UMask=40H
Counter=0,1,2,3
Deprecated
Uncore
UNC_H_SNOOP_RESP.RSPIFWD This event is deprecated. Refer to new event UNC_CHA_SNOOP_RESP.RSPIFWD EventSel=5CH UMask=04H
Counter=0,1,2,3
Deprecated
Uncore
UNC_H_SNOOP_RESP.RSPSFWD This event is deprecated. Refer to new event UNC_CHA_SNOOP_RESP.RSPSFWD EventSel=5CH UMask=08H
Counter=0,1,2,3
Deprecated
Uncore
UNC_IIO_PAYLOAD_BYTES_IN.MEM_READ.PART0 This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0 EventSel=83H UMask=04H FCMask=07H PortMask=01H
Counter=0,1
Deprecated
Uncore
UNC_IIO_PAYLOAD_BYTES_IN.MEM_READ.PART1 This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART1 EventSel=83H UMask=04H FCMask=07H PortMask=02H
Counter=0,1
Deprecated
Uncore
UNC_IIO_PAYLOAD_BYTES_IN.MEM_READ.PART2 This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2 EventSel=83H UMask=04H FCMask=07H PortMask=04H
Counter=0,1
Deprecated
Uncore
UNC_IIO_PAYLOAD_BYTES_IN.MEM_READ.PART3 This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3 EventSel=83H UMask=04H FCMask=07H PortMask=08H
Counter=0,1
Deprecated
Uncore
UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.PART0 This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART0 EventSel=83H UMask=01H FCMask=07H PortMask=01H
Counter=0,1
Deprecated
Uncore
UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.PART1 This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART1 EventSel=83H UMask=01H FCMask=07H PortMask=02H
Counter=0,1
Deprecated
Uncore
UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.PART2 This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART2 EventSel=83H UMask=01H FCMask=07H PortMask=04H
Counter=0,1
Deprecated
Uncore
UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.PART3 This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART3 EventSel=83H UMask=01H FCMask=07H PortMask=08H
Counter=0,1
Deprecated
Uncore
UNC_UPI_DIRECT_ATTEMPTS.D2K This event is deprecated. Refer to new event UNC_UPI_DIRECT_ATTEMPTS.D2U EventSel=12H UMask=02H
Counter=0,1,2,3
Deprecated
Uncore
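Tools that still reference the deprecated UNC_C_*/UNC_H_* names can be migrated with a simple lookup table built from the deprecation notes above; the sketch below covers only a few of the entries and can be extended with the rest of the list.
# Partial mapping of deprecated uncore event names to their replacements,
# taken directly from the deprecation notes above.
DEPRECATED_EVENT_MAP = {
    "UNC_C_CLOCKTICKS": "UNC_CHA_CLOCKTICKS",
    "UNC_C_LLC_VICTIMS.M_STATE": "UNC_CHA_LLC_VICTIMS.TOTAL_M",
    "UNC_H_REQUESTS.READS": "UNC_CHA_REQUESTS.READS",
    "UNC_H_REQUESTS.WRITES": "UNC_CHA_REQUESTS.WRITES",
    "UNC_UPI_DIRECT_ATTEMPTS.D2K": "UNC_UPI_DIRECT_ATTEMPTS.D2U",
}

def translate_event(name: str) -> str:
    """Return the replacement event name, or the input if it is not deprecated."""
    return DEPRECATED_EVENT_MAP.get(name, name)

print(translate_event("UNC_H_REQUESTS.READS"))  # -> UNC_CHA_REQUESTS.READS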
OFFCORE Offcore
OFFCORE_RESPONSE:request=DEMAND_DATA_RD: response=ANY_RESPONSE Counts demand data reads that have any response type. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=10001H Offcore
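Each OFFCORE_RESPONSE entry pairs the EventSel/UMask with a request/response filter that is written to the matching MSR_OFFCORE_RSPx register (1A6H or 1A7H). A minimal sketch of how the fields of the entry above map onto the Linux perf raw-event syntax, assuming a standard perf_event setup where the filter is exposed as the offcore_rsp field:
# Build a perf event string for DEMAND_DATA_RD / ANY_RESPONSE from the table fields.
event_sel   = 0xB7        # first of the two OFFCORE_RESPONSE event selects (B7H/BBH)
umask       = 0x01
offcore_rsp = 0x10001     # MSR_OFFCORE_RSPx value from the entry above
perf_event = f"cpu/event={event_sel:#x},umask={umask:#x},offcore_rsp={offcore_rsp:#x}/"
print(f"perf stat -e {perf_event} -a sleep 1")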
OFFCORE_RESPONSE:request=DEMAND_DATA_RD: response=L3_HIT.NO_SNOOP_NEEDED Counts demand data reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=1003C0001H Offcore
OFFCORE_RESPONSE:request=DEMAND_DATA_RD: response=L3_HIT.HIT_OTHER_CORE_NO_FWD Counts demand data reads that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=4003C0001H Offcore
OFFCORE_RESPONSE:request=DEMAND_DATA_RD: response=L3_HIT.HITM_OTHER_CORE Counts demand data reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=10003C0001H Offcore
OFFCORE_RESPONSE:request=DEMAND_DATA_RD: response=L3_HIT.ANY_SNOOP Counts demand data reads that hit in the L3. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=3F803C0001H Offcore
OFFCORE_RESPONSE:request=DEMAND_DATA_RD: response=L3_MISS.ANY_SNOOP Counts demand data reads that miss in the L3. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=3FBC000001H Offcore
OFFCORE_RESPONSE:request=DEMAND_DATA_RD: response=L3_MISS.REMOTE_HIT_FORWARD Counts demand data reads that miss the L3 and clean or shared data is transferred from remote cache. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=83FC00001H Offcore
OFFCORE_RESPONSE:request=DEMAND_DATA_RD: response=L3_MISS.REMOTE_HITM Counts demand data reads that miss the L3 and the modified data is transferred from remote cache. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=103FC00001H Offcore
OFFCORE_RESPONSE:request=DEMAND_DATA_RD: response=L3_MISS.SNOOP_MISS_OR_NO_FWD Counts demand data reads that miss the L3 and the data is returned from local or remote dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=63FC00001H Offcore
OFFCORE_RESPONSE:request=DEMAND_DATA_RD: response=L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD Counts demand data reads that miss the L3 and the data is returned from remote dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=63B800001H Offcore
OFFCORE_RESPONSE:request=DEMAND_DATA_RD: response=L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD Counts demand data reads that miss the L3 and the data is returned from local dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=604000001H Offcore
OFFCORE_RESPONSE:request=DEMAND_RFO: response=ANY_RESPONSE Counts all demand data writes (RFOs) that have any response type. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=10002H Offcore
OFFCORE_RESPONSE:request=DEMAND_RFO: response=L3_HIT.NO_SNOOP_NEEDED Counts all demand data writes (RFOs) that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=1003C0002H Offcore
OFFCORE_RESPONSE:request=DEMAND_RFO: response=L3_HIT.HIT_OTHER_CORE_NO_FWD Counts all demand data writes (RFOs) that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=4003C0002H Offcore
OFFCORE_RESPONSE:request=DEMAND_RFO: response=L3_HIT.HITM_OTHER_CORE Counts all demand data writes (RFOs) that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=10003C0002H Offcore
OFFCORE_RESPONSE:request=DEMAND_RFO: response=L3_HIT.ANY_SNOOP Counts all demand data writes (RFOs) that hit in the L3. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=3F803C0002H Offcore
OFFCORE_RESPONSE:request=DEMAND_RFO: response=L3_MISS.ANY_SNOOP Counts all demand data writes (RFOs) that miss in the L3. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=3FBC000002H Offcore
OFFCORE_RESPONSE:request=DEMAND_RFO: response=L3_MISS.REMOTE_HIT_FORWARD Counts all demand data writes (RFOs) that miss the L3 and clean or shared data is transferred from remote cache. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=83FC00002H Offcore
OFFCORE_RESPONSE:request=DEMAND_RFO: response=L3_MISS.REMOTE_HITM Counts all demand data writes (RFOs) that miss the L3 and the modified data is transferred from remote cache. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=103FC00002H Offcore
OFFCORE_RESPONSE:request=DEMAND_RFO: response=L3_MISS.SNOOP_MISS_OR_NO_FWD Counts all demand data writes (RFOs) that miss the L3 and the data is returned from local or remote dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=63FC00002H Offcore
OFFCORE_RESPONSE:request=DEMAND_RFO: response=L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD Counts all demand data writes (RFOs) that miss the L3 and the data is returned from remote dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=63B800002H Offcore
OFFCORE_RESPONSE:request=DEMAND_RFO: response=L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD Counts all demand data writes (RFOs) that miss the L3 and the data is returned from local dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=604000002H Offcore
OFFCORE_RESPONSE:request=DEMAND_CODE_RD: response=ANY_RESPONSE Counts all demand code reads that have any response type. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=10004H Offcore
OFFCORE_RESPONSE:request=DEMAND_CODE_RD: response=L3_HIT.NO_SNOOP_NEEDED Counts all demand code reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=1003C0004H Offcore
OFFCORE_RESPONSE:request=DEMAND_CODE_RD: response=L3_HIT.HIT_OTHER_CORE_NO_FWD Counts all demand code reads that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=4003C0004H Offcore
OFFCORE_RESPONSE:request=DEMAND_CODE_RD: response=L3_HIT.HITM_OTHER_CORE Counts all demand code reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=10003C0004H Offcore
OFFCORE_RESPONSE:request=DEMAND_CODE_RD: response=L3_HIT.ANY_SNOOP Counts all demand code reads that hit in the L3. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=3F803C0004H Offcore
OFFCORE_RESPONSE:request=DEMAND_CODE_RD: response=L3_MISS.ANY_SNOOP Counts all demand code reads that miss in the L3. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=3FBC000004H Offcore
OFFCORE_RESPONSE:request=DEMAND_CODE_RD: response=L3_MISS.REMOTE_HIT_FORWARD Counts all demand code reads that miss the L3 and clean or shared data is transferred from remote cache. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=83FC00004H Offcore
OFFCORE_RESPONSE:request=DEMAND_CODE_RD: response=L3_MISS.REMOTE_HITM Counts all demand code reads that miss the L3 and the modified data is transferred from remote cache. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=103FC00004H Offcore
OFFCORE_RESPONSE:request=DEMAND_CODE_RD: response=L3_MISS.SNOOP_MISS_OR_NO_FWD Counts all demand code reads that miss the L3 and the data is returned from local or remote dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=63FC00004H Offcore
OFFCORE_RESPONSE:request=DEMAND_CODE_RD: response=L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD Counts all demand code reads that miss the L3 and the data is returned from remote dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=63B800004H Offcore
OFFCORE_RESPONSE:request=DEMAND_CODE_RD: response=L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD Counts all demand code reads that miss the L3 and the data is returned from local dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=604000004H Offcore
OFFCORE_RESPONSE:request=PF_L2_DATA_RD: response=ANY_RESPONSE Counts prefetch (that bring data to L2) data reads that have any response type. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=10010H Offcore
OFFCORE_RESPONSE:request=PF_L2_DATA_RD: response=L3_HIT.NO_SNOOP_NEEDED Counts prefetch (that bring data to L2) data reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=1003C0010H Offcore
OFFCORE_RESPONSE:request=PF_L2_DATA_RD: response=L3_HIT.HIT_OTHER_CORE_NO_FWD Counts prefetch (that bring data to L2) data reads that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=4003C0010H Offcore
OFFCORE_RESPONSE:request=PF_L2_DATA_RD: response=L3_HIT.HITM_OTHER_CORE Counts prefetch (that bring data to L2) data reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=10003C0010H Offcore
OFFCORE_RESPONSE:request=PF_L2_DATA_RD: response=L3_HIT.ANY_SNOOP Counts prefetch (that bring data to L2) data reads that hit in the L3. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=3F803C0010H Offcore
OFFCORE_RESPONSE:request=PF_L2_DATA_RD: response=L3_MISS.ANY_SNOOP Counts prefetch (that bring data to L2) data reads that miss in the L3. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=3FBC000010H Offcore
OFFCORE_RESPONSE:request=PF_L2_DATA_RD: response=L3_MISS.REMOTE_HIT_FORWARD Counts prefetch (that bring data to L2) data reads that miss the L3 and clean or shared data is transferred from remote cache. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=83FC00010H Offcore
OFFCORE_RESPONSE:request=PF_L2_DATA_RD: response=L3_MISS.REMOTE_HITM Counts prefetch (that bring data to L2) data reads that miss the L3 and the modified data is transferred from remote cache. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=103FC00010H Offcore
OFFCORE_RESPONSE:request=PF_L2_DATA_RD: response=L3_MISS.SNOOP_MISS_OR_NO_FWD Counts prefetch (that bring data to L2) data reads that miss the L3 and the data is returned from local or remote dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=63FC00010H Offcore
OFFCORE_RESPONSE:request=PF_L2_DATA_RD: response=L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD Counts prefetch (that bring data to L2) data reads that miss the L3 and the data is returned from remote dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=63B800010H Offcore
OFFCORE_RESPONSE:request=PF_L2_DATA_RD: response=L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD Counts prefetch (that bring data to L2) data reads that miss the L3 and the data is returned from local dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=604000010H Offcore
OFFCORE_RESPONSE:request=PF_L2_RFO: response=ANY_RESPONSE Counts all prefetch (that bring data to L2) RFOs that have any response type. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=10020H Offcore
OFFCORE_RESPONSE:request=PF_L2_RFO: response=L3_HIT.NO_SNOOP_NEEDED Counts all prefetch (that bring data to L2) RFOs that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=1003C0020H Offcore
OFFCORE_RESPONSE:request=PF_L2_RFO: response=L3_HIT.HIT_OTHER_CORE_NO_FWD Counts all prefetch (that bring data to L2) RFOs that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=4003C0020H Offcore
OFFCORE_RESPONSE:request=PF_L2_RFO: response=L3_HIT.HITM_OTHER_CORE Counts all prefetch (that bring data to L2) RFOs that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=10003C0020H Offcore
OFFCORE_RESPONSE:request=PF_L2_RFO: response=L3_HIT.ANY_SNOOP Counts all prefetch (that bring data to L2) RFOs that hit in the L3. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=3F803C0020H Offcore
OFFCORE_RESPONSE:request=PF_L2_RFO: response=L3_MISS.ANY_SNOOP Counts all prefetch (that bring data to L2) RFOs that miss in the L3. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=3FBC000020H Offcore
OFFCORE_RESPONSE:request=PF_L2_RFO: response=L3_MISS.REMOTE_HIT_FORWARD Counts all prefetch (that bring data to L2) RFOs that miss the L3 and clean or shared data is transferred from remote cache. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=83FC00020H Offcore
OFFCORE_RESPONSE:request=PF_L2_RFO: response=L3_MISS.REMOTE_HITM Counts all prefetch (that bring data to L2) RFOs that miss the L3 and the modified data is transferred from remote cache. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=103FC00020H Offcore
OFFCORE_RESPONSE:request=PF_L2_RFO: response=L3_MISS.SNOOP_MISS_OR_NO_FWD Counts all prefetch (that bring data to L2) RFOs that miss the L3 and the data is returned from local or remote dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=63FC00020H Offcore
OFFCORE_RESPONSE:request=PF_L2_RFO: response=L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD Counts all prefetch (that bring data to L2) RFOs that miss the L3 and the data is returned from remote dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=63B800020H Offcore
OFFCORE_RESPONSE:request=PF_L2_RFO: response=L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD Counts all prefetch (that bring data to L2) RFOs that miss the L3 and the data is returned from local dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=604000020H Offcore
OFFCORE_RESPONSE:request=PF_L3_DATA_RD: response=ANY_RESPONSE Counts all prefetch (that bring data to LLC only) data reads that have any response type. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=10080H Offcore
OFFCORE_RESPONSE:request=PF_L3_DATA_RD: response=L3_HIT.NO_SNOOP_NEEDED Counts all prefetch (that bring data to LLC only) data reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=1003C0080H Offcore
OFFCORE_RESPONSE:request=PF_L3_DATA_RD: response=L3_HIT.HIT_OTHER_CORE_NO_FWD Counts all prefetch (that bring data to LLC only) data reads that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=4003C0080H Offcore
OFFCORE_RESPONSE:request=PF_L3_DATA_RD: response=L3_HIT.HITM_OTHER_CORE Counts all prefetch (that bring data to LLC only) data reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=10003C0080H Offcore
OFFCORE_RESPONSE:request=PF_L3_DATA_RD: response=L3_HIT.ANY_SNOOP Counts all prefetch (that bring data to LLC only) data reads that hit in the L3. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=3F803C0080H Offcore
OFFCORE_RESPONSE:request=PF_L3_DATA_RD: response=L3_MISS.ANY_SNOOP Counts all prefetch (that bring data to LLC only) data reads that miss in the L3. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=3FBC000080H Offcore
OFFCORE_RESPONSE:request=PF_L3_DATA_RD: response=L3_MISS.REMOTE_HIT_FORWARD Counts all prefetch (that bring data to LLC only) data reads that miss the L3 and clean or shared data is transferred from remote cache. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=83FC00080H Offcore
OFFCORE_RESPONSE:request=PF_L3_DATA_RD: response=L3_MISS.REMOTE_HITM Counts all prefetch (that bring data to LLC only) data reads that miss the L3 and the modified data is transferred from remote cache. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=103FC00080H Offcore
OFFCORE_RESPONSE:request=PF_L3_DATA_RD: response=L3_MISS.SNOOP_MISS_OR_NO_FWD Counts all prefetch (that bring data to LLC only) data reads that miss the L3 and the data is returned from local or remote dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=63FC00080H Offcore
OFFCORE_RESPONSE:request=PF_L3_DATA_RD: response=L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD Counts all prefetch (that bring data to LLC only) data reads that miss the L3 and the data is returned from remote dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=63B800080H Offcore
OFFCORE_RESPONSE:request=PF_L3_DATA_RD: response=L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD Counts all prefetch (that bring data to LLC only) data reads that miss the L3 and the data is returned from local dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=604000080H Offcore
OFFCORE_RESPONSE:request=PF_L3_RFO: response=ANY_RESPONSE Counts all prefetch (that bring data to LLC only) RFOs that have any response type. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=10100H Offcore
OFFCORE_RESPONSE:request=PF_L3_RFO: response=L3_HIT.NO_SNOOP_NEEDED Counts all prefetch (that bring data to LLC only) RFOs that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=1003C0100H Offcore
OFFCORE_RESPONSE:request=PF_L3_RFO: response=L3_HIT.HIT_OTHER_CORE_NO_FWD Counts all prefetch (that bring data to LLC only) RFOs that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=4003C0100H Offcore
OFFCORE_RESPONSE:request=PF_L3_RFO: response=L3_HIT.HITM_OTHER_CORE Counts all prefetch (that bring data to LLC only) RFOs that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=10003C0100H Offcore
OFFCORE_RESPONSE:request=PF_L3_RFO: response=L3_HIT.ANY_SNOOP Counts all prefetch (that bring data to LLC only) RFOs that hit in the L3. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=3F803C0100H Offcore
OFFCORE_RESPONSE:request=PF_L3_RFO: response=L3_MISS.ANY_SNOOP Counts all prefetch (that bring data to LLC only) RFOs that miss in the L3. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=3FBC000100H Offcore
OFFCORE_RESPONSE:request=PF_L3_RFO: response=L3_MISS.REMOTE_HIT_FORWARD Counts all prefetch (that bring data to LLC only) RFOs that miss the L3 and clean or shared data is transferred from remote cache. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=83FC00100H Offcore
OFFCORE_RESPONSE:request=PF_L3_RFO: response=L3_MISS.REMOTE_HITM Counts all prefetch (that bring data to LLC only) RFOs that miss the L3 and the modified data is transferred from remote cache. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=103FC00100H Offcore
OFFCORE_RESPONSE:request=PF_L3_RFO: response=L3_MISS.SNOOP_MISS_OR_NO_FWD Counts all prefetch (that bring data to LLC only) RFOs that miss the L3 and the data is returned from local or remote dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=63FC00100H Offcore
OFFCORE_RESPONSE:request=PF_L3_RFO: response=L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD Counts all prefetch (that bring data to LLC only) RFOs that miss the L3 and the data is returned from remote dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=63B800100H Offcore
OFFCORE_RESPONSE:request=PF_L3_RFO: response=L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD Counts all prefetch (that bring data to LLC only) RFOs that miss the L3 and the data is returned from local dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=604000100H Offcore
OFFCORE_RESPONSE:request=PF_L1D_AND_SW: response=ANY_RESPONSE Counts L1 data cache hardware prefetch requests and software prefetch requests that have any response type. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=10400H Offcore
OFFCORE_RESPONSE:request=PF_L1D_AND_SW: response=L3_HIT.NO_SNOOP_NEEDED Counts L1 data cache hardware prefetch requests and software prefetch requests that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=1003C0400H Offcore
OFFCORE_RESPONSE:request=PF_L1D_AND_SW: response=L3_HIT.HIT_OTHER_CORE_NO_FWD Counts L1 data cache hardware prefetch requests and software prefetch requests that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=4003C0400H Offcore
OFFCORE_RESPONSE:request=PF_L1D_AND_SW: response=L3_HIT.HITM_OTHER_CORE Counts L1 data cache hardware prefetch requests and software prefetch requests that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=10003C0400H Offcore
OFFCORE_RESPONSE:request=PF_L1D_AND_SW: response=L3_HIT.ANY_SNOOP Counts L1 data cache hardware prefetch requests and software prefetch requests that hit in the L3. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=3F803C0400H Offcore
OFFCORE_RESPONSE:request=PF_L1D_AND_SW: response=L3_MISS.ANY_SNOOP Counts L1 data cache hardware prefetch requests and software prefetch requests that miss in the L3. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=3FBC000400H Offcore
OFFCORE_RESPONSE:request=PF_L1D_AND_SW: response=L3_MISS.REMOTE_HIT_FORWARD Counts L1 data cache hardware prefetch requests and software prefetch requests that miss the L3 and clean or shared data is transferred from remote cache. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=83FC00400H Offcore
OFFCORE_RESPONSE:request=PF_L1D_AND_SW: response=L3_MISS.REMOTE_HITM Counts L1 data cache hardware prefetch requests and software prefetch requests that miss the L3 and the modified data is transferred from remote cache. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=103FC00400H Offcore
OFFCORE_RESPONSE:request=PF_L1D_AND_SW: response=L3_MISS.SNOOP_MISS_OR_NO_FWD Counts L1 data cache hardware prefetch requests and software prefetch requests that miss the L3 and the data is returned from local or remote dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=63FC00400H Offcore
OFFCORE_RESPONSE:request=PF_L1D_AND_SW: response=L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD Counts L1 data cache hardware prefetch requests and software prefetch requests that miss the L3 and the data is returned from remote dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=63B800400H Offcore
OFFCORE_RESPONSE:request=PF_L1D_AND_SW: response=L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD Counts L1 data cache hardware prefetch requests and software prefetch requests that miss the L3 and the data is returned from local dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=604000400H Offcore
OFFCORE_RESPONSE:request=ALL_PF_DATA_RD: response=ANY_RESPONSE Counts all prefetch data reads that have any response type. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=10490H Offcore
OFFCORE_RESPONSE:request=ALL_PF_DATA_RD: response=L3_HIT.NO_SNOOP_NEEDED Counts all prefetch data reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=1003C0490H Offcore
OFFCORE_RESPONSE:request=ALL_PF_DATA_RD: response=L3_HIT.HIT_OTHER_CORE_NO_FWD Counts all prefetch data reads that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=4003C0490H Offcore
OFFCORE_RESPONSE:request=ALL_PF_DATA_RD: response=L3_HIT.HITM_OTHER_CORE Counts all prefetch data reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=10003C0490H Offcore
OFFCORE_RESPONSE:request=ALL_PF_DATA_RD: response=L3_HIT.ANY_SNOOP Counts all prefetch data reads that hit in the L3. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=3F803C0490H Offcore
OFFCORE_RESPONSE:request=ALL_PF_DATA_RD: response=L3_MISS.ANY_SNOOP Counts all prefetch data reads that miss in the L3. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=3FBC000490H Offcore
OFFCORE_RESPONSE:request=ALL_PF_DATA_RD: response=L3_MISS.REMOTE_HIT_FORWARD Counts all prefetch data reads that miss the L3 and clean or shared data is transferred from remote cache. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=83FC00490H Offcore
OFFCORE_RESPONSE:request=ALL_PF_DATA_RD: response=L3_MISS.REMOTE_HITM Counts all prefetch data reads that miss the L3 and the modified data is transferred from remote cache. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=103FC00490H Offcore
OFFCORE_RESPONSE:request=ALL_PF_DATA_RD: response=L3_MISS.SNOOP_MISS_OR_NO_FWD Counts all prefetch data reads that miss the L3 and the data is returned from local or remote dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=63FC00490H Offcore
OFFCORE_RESPONSE:request=ALL_PF_DATA_RD: response=L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD Counts all prefetch data reads that miss the L3 and the data is returned from remote dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=63B800490H Offcore
OFFCORE_RESPONSE:request=ALL_PF_DATA_RD: response=L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD Counts all prefetch data reads that miss the L3 and the data is returned from local dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=604000490H Offcore
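The MSR values in this table compose as a bitwise OR of the request bits and the response bits. For example, the ALL_PF_DATA_RD masks above are the union of the PF_L2_DATA_RD (10H), PF_L3_DATA_RD (80H), and PF_L1D_AND_SW (400H) request bits with the shared response bits. A short sketch verifying this against the table values:
# Verify that the ALL_PF_DATA_RD masks are the OR of the individual prefetch
# request bits with the shared response bits listed in the entries above.
PF_L2_DATA_RD = 0x010
PF_L3_DATA_RD = 0x080
PF_L1D_AND_SW = 0x400
ANY_RESPONSE  = 0x10000
all_pf_data_rd = PF_L2_DATA_RD | PF_L3_DATA_RD | PF_L1D_AND_SW
assert all_pf_data_rd | ANY_RESPONSE == 0x10490           # matches the ANY_RESPONSE entry
assert all_pf_data_rd | 0x3FBC000000 == 0x3FBC000490      # matches the L3_MISS.ANY_SNOOP entry
print(f"ALL_PF_DATA_RD request bits: {all_pf_data_rd:#x}")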
OFFCORE_RESPONSE:request=ALL_PF_RFO: response=ANY_RESPONSE Counts prefetch RFOs that have any response type. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=10120H Offcore
OFFCORE_RESPONSE:request=ALL_PF_RFO: response=L3_HIT.NO_SNOOP_NEEDED Counts prefetch RFOs that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=1003C0120H Offcore
OFFCORE_RESPONSE:request=ALL_PF_RFO: response=L3_HIT.HIT_OTHER_CORE_NO_FWD Counts prefetch RFOs that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=4003C0120H Offcore
OFFCORE_RESPONSE:request=ALL_PF_RFO: response=L3_HIT.HITM_OTHER_CORE Counts prefetch RFOs that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=10003C0120H Offcore
OFFCORE_RESPONSE:request=ALL_PF_RFO: response=L3_HIT.ANY_SNOOP Counts prefetch RFOs that hit in the L3. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=3F803C0120H Offcore
OFFCORE_RESPONSE:request=ALL_PF_RFO: response=L3_MISS.ANY_SNOOP Counts prefetch RFOs that miss in the L3. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=3FBC000120H Offcore
OFFCORE_RESPONSE:request=ALL_PF_RFO: response=L3_MISS.REMOTE_HIT_FORWARD Counts prefetch RFOs that miss the L3 and clean or shared data is transferred from remote cache. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=83FC00120H Offcore
OFFCORE_RESPONSE:request=ALL_PF_RFO: response=L3_MISS.REMOTE_HITM Counts prefetch RFOs that miss the L3 and the modified data is transferred from remote cache. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=103FC00120H Offcore
OFFCORE_RESPONSE:request=ALL_PF_RFO: response=L3_MISS.SNOOP_MISS_OR_NO_FWD Counts prefetch RFOs that miss the L3 and the data is returned from local or remote dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=63FC00120H Offcore
OFFCORE_RESPONSE:request=ALL_PF_RFO: response=L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD Counts prefetch RFOs that miss the L3 and the data is returned from remote dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=63B800120H Offcore
OFFCORE_RESPONSE:request=ALL_PF_RFO: response=L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD Counts prefetch RFOs that miss the L3 and the data is returned from local dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=604000120H Offcore
OFFCORE_RESPONSE:request=ALL_DATA_RD: response=ANY_RESPONSE Counts all demand & prefetch data reads that have any response type. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=10491H Offcore
OFFCORE_RESPONSE:request=ALL_DATA_RD: response=L3_HIT.NO_SNOOP_NEEDED Counts all demand & prefetch data reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=1003C0491H Offcore
OFFCORE_RESPONSE:request=ALL_DATA_RD: response=L3_HIT.HIT_OTHER_CORE_NO_FWD Counts all demand & prefetch data reads that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=4003C0491H Offcore
OFFCORE_RESPONSE:request=ALL_DATA_RD: response=L3_HIT.HITM_OTHER_CORE Counts all demand & prefetch data reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=10003C0491H Offcore
OFFCORE_RESPONSE:request=ALL_DATA_RD: response=L3_HIT.ANY_SNOOP Counts all demand & prefetch data reads that hit in the L3. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=3F803C0491H Offcore
OFFCORE_RESPONSE:request=ALL_DATA_RD: response=L3_MISS.ANY_SNOOP Counts all demand & prefetch data reads that miss in the L3. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=3FBC000491H Offcore
OFFCORE_RESPONSE:request=ALL_DATA_RD: response=L3_MISS.REMOTE_HIT_FORWARD Counts all demand & prefetch data reads that miss the L3 and clean or shared data is transferred from remote cache. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=83FC00491H Offcore
OFFCORE_RESPONSE:request=ALL_DATA_RD: response=L3_MISS.REMOTE_HITM Counts all demand & prefetch data reads that miss the L3 and the modified data is transferred from remote cache. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=103FC00491H Offcore
OFFCORE_RESPONSE:request=ALL_DATA_RD: response=L3_MISS.SNOOP_MISS_OR_NO_FWD Counts all demand & prefetch data reads that miss the L3 and the data is returned from local or remote dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=63FC00491H Offcore
OFFCORE_RESPONSE:request=ALL_DATA_RD: response=L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD Counts all demand & prefetch data reads that miss the L3 and the data is returned from remote dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=63B800491H Offcore
OFFCORE_RESPONSE:request=ALL_DATA_RD: response=L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD Counts all demand & prefetch data reads that miss the L3 and the data is returned from local dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=604000491H Offcore
OFFCORE_RESPONSE:request=ALL_RFO: response=ANY_RESPONSE Counts all demand & prefetch RFOs that have any response type. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=10122H Offcore
OFFCORE_RESPONSE:request=ALL_RFO: response=L3_HIT.NO_SNOOP_NEEDED Counts all demand & prefetch RFOs that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=1003C0122H Offcore
OFFCORE_RESPONSE:request=ALL_RFO: response=L3_HIT.HIT_OTHER_CORE_NO_FWD Counts all demand & prefetch RFOs that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=4003C0122H Offcore
OFFCORE_RESPONSE:request=ALL_RFO: response=L3_HIT.HITM_OTHER_CORE Counts all demand & prefetch RFOs that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=10003C0122H Offcore
OFFCORE_RESPONSE:request=ALL_RFO: response=L3_HIT.ANY_SNOOP Counts all demand & prefetch RFOs that hit in the L3. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=3F803C0122H Offcore
OFFCORE_RESPONSE:request=ALL_RFO: response=L3_MISS.ANY_SNOOP Counts all demand & prefetch RFOs that miss in the L3. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=3FBC000122H Offcore
OFFCORE_RESPONSE:request=ALL_RFO: response=L3_MISS.REMOTE_HIT_FORWARD Counts all demand & prefetch RFOs that miss the L3 and clean or shared data is transferred from remote cache. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=83FC00122H Offcore
OFFCORE_RESPONSE:request=ALL_RFO: response=L3_MISS.REMOTE_HITM Counts all demand & prefetch RFOs that miss the L3 and the modified data is transferred from remote cache. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=103FC00122H Offcore
OFFCORE_RESPONSE:request=ALL_RFO: response=L3_MISS.SNOOP_MISS_OR_NO_FWD Counts all demand & prefetch RFOs that miss the L3 and the data is returned from local or remote dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=63FC00122H Offcore
OFFCORE_RESPONSE:request=ALL_RFO: response=L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD Counts all demand & prefetch RFOs that miss the L3 and the data is returned from remote dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=63B800122H Offcore
OFFCORE_RESPONSE:request=ALL_RFO: response=L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD Counts all demand & prefetch RFOs that miss the L3 and the data is returned from local dram. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=604000122H Offcore
OFFCORE_RESPONSE:request=DEMAND_DATA_RD: response=L3_HIT.SNOOP_HIT_WITH_FWD Counts demand data reads that hit in the L3 and the snoop to one of the sibling cores hits the line in a non-modified state and the line is forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=8003C0001H Offcore
OFFCORE_RESPONSE:request=DEMAND_RFO: response=L3_HIT.SNOOP_HIT_WITH_FWD Counts all demand data writes (RFOs) that hit in the L3 and the snoop to one of the sibling cores hits the line in a non-modified state and the line is forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=8003C0002H Offcore
OFFCORE_RESPONSE:request=DEMAND_CODE_RD: response=L3_HIT.SNOOP_HIT_WITH_FWD Counts all demand code reads that hit in the L3 and the snoop to one of the sibling cores hits the line in a non-modified state and the line is forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=8003C0004H Offcore
OFFCORE_RESPONSE:request=PF_L2_DATA_RD: response=L3_HIT.SNOOP_HIT_WITH_FWD Counts prefetch (that bring data to L2) data reads that hit in the L3 and the snoop to one of the sibling cores hits the line in a non-modified state and the line is forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=8003C0010H Offcore
OFFCORE_RESPONSE:request=PF_L2_RFO: response=L3_HIT.SNOOP_HIT_WITH_FWD Counts all prefetch (that bring data to L2) RFOs that hit in the L3 and the snoop to one of the sibling cores hits the line in a non-modified state and the line is forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=8003C0020H Offcore
OFFCORE_RESPONSE:request=PF_L3_DATA_RD: response=L3_HIT.SNOOP_HIT_WITH_FWD Counts all prefetch (that bring data to LLC only) data reads that hit in the L3 and the snoop to one of the sibling cores hits the line in a non-modified state and the line is forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=8003C0080H Offcore
OFFCORE_RESPONSE:request=PF_L3_RFO: response=L3_HIT.SNOOP_HIT_WITH_FWD Counts all prefetch (that bring data to LLC only) RFOs that hit in the L3 and the snoop to one of the sibling cores hits the line in a non-modified state and the line is forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=8003C0100H Offcore
OFFCORE_RESPONSE:request=PF_L1D_AND_SW: response=L3_HIT.SNOOP_HIT_WITH_FWD Counts L1 data cache hardware prefetch requests and software prefetch requests that hit in the L3 and the snoop to one of the sibling cores hits the line in a non-modified state and the line is forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=8003C0400H Offcore
OFFCORE_RESPONSE:request=ALL_PF_DATA_RD: response=L3_HIT.SNOOP_HIT_WITH_FWD Counts all prefetch data reads that hit in the L3 and the snoop to one of the sibling cores hits the line in a non-modified state and the line is forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=8003C0490H Offcore
OFFCORE_RESPONSE:request=ALL_PF_RFO: response=L3_HIT.SNOOP_HIT_WITH_FWD Counts prefetch RFOs that hit in the L3 and the snoop to one of the sibling cores hits the line in a non-modified state and the line is forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=8003C0120H Offcore
OFFCORE_RESPONSE:request=ALL_DATA_RD: response=L3_HIT.SNOOP_HIT_WITH_FWD Counts all demand & prefetch data reads that hit in the L3 and the snoop to one of the sibling cores hits the line in a non-modified state and the line is forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=8003C0491H Offcore
OFFCORE_RESPONSE:request=ALL_RFO: response=L3_HIT.SNOOP_HIT_WITH_FWD Counts all demand & prefetch RFOs that hit in the L3 and the snoop to one of the sibling cores hits the line in a non-modified state and the line is forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=8003C0122H Offcore
OFFCORE_RESPONSE:request=ALL_READS: response=L3_HIT.HIT_OTHER_CORE_FWD Counts all demand & prefetch reads that hit in the L3 and the snoop to one of the sibling cores hits the line in a non-modified state and the line is forwarded. EventSel={B7H,BBH} UMask=01H MSR_OFFCORE_RSPx{1A6H,1A7H}=8003C07F7H Offcore