Event Name | Description | Additional Info |
---|---|---|
CORE | | |
INST_RETIRED.ANY | This event counts the number of instructions retired from execution. For instructions that consist of multiple micro-ops, this event counts the retirement of the last micro-op of the instruction. Counting continues during hardware interrupts, traps, and inside interrupt handlers. | IA32_FIXED_CTR0 Architectural, Fixed |
CPU_CLK_UNHALTED.THREAD | This event counts the number of core cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. This event is a component in many key event ratios, such as instructions per cycle (see the IPC sketch after this table). The core frequency may change from time to time due to transitions associated with Enhanced Intel SpeedStep Technology or TM2. For this reason this event may have a changing ratio with regards to time. When the core frequency is constant, this event can approximate elapsed time while the core was not in the halt state. It is counted on a dedicated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events. | IA32_FIXED_CTR1 Architectural, Fixed
CPU_CLK_UNHALTED.THREAD_ANY | Core cycles when at least one thread on the physical core is not in halt state. | IA32_FIXED_CTR1 Architectural, Fixed |
CPU_CLK_UNHALTED.REF_TSC | This event counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state. This event has a constant ratio with the CPU_CLK_UNHALTED.REF_XCLK event. It is counted on a dedicated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events. | IA32_FIXED_CTR2 Architectural, Fixed |
BR_INST_RETIRED.ALL_BRANCHES | All (macro) branch instructions retired. | EventSel=C4H UMask=00H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 Architectural |
BR_MISP_RETIRED.ALL_BRANCHES | All mispredicted macro branch instructions retired. | EventSel=C5H UMask=00H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 Architectural |
CPU_CLK_THREAD_UNHALTED.REF_XCLK | Reference cycles when the thread is unhalted (counts at 100 MHz rate). | EventSel=3CH UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 Architectural |
CPU_CLK_THREAD_UNHALTED.REF_XCLK_ANY | Reference cycles when at least one thread on the physical core is unhalted (counts at 100 MHz rate). | EventSel=3CH UMask=01H AnyThread=1 Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 Architectural
CPU_CLK_UNHALTED.REF_XCLK | Reference cycles when the thread is unhalted (counts at 100 MHz rate). | EventSel=3CH UMask=01H CMask=0 Counter=0,1,2,3 Architectural
CPU_CLK_UNHALTED.REF_XCLK_ANY | Reference cycles when at least one thread on the physical core is unhalted (counts at 100 MHz rate). | EventSel=3CH UMask=01H AnyThread=1 CMask=0 Counter=0,1,2,3 Architectural
CPU_CLK_UNHALTED.THREAD_P | Thread cycles when thread is not in halt state. | EventSel=3CH UMask=00H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 Architectural |
CPU_CLK_UNHALTED.THREAD_P_ANY | Core cycles when at least one thread on the physical core is not in halt state. | EventSel=3CH UMask=00H AnyThread=1 Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 Architectural |
INST_RETIRED.ANY_P | Number of instructions retired. General Counter - architectural event. | EventSel=C0H UMask=00H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 Architectural |
LONGEST_LAT_CACHE.MISS | Core-originated cacheable demand requests missed LLC. | EventSel=2EH UMask=41H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 Architectural |
LONGEST_LAT_CACHE.REFERENCE | Core-originated cacheable demand requests that refer to LLC. | EventSel=2EH UMask=4FH Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 Architectural |
AGU_BYPASS_CANCEL.COUNT | This event counts executed load operations with all the following traits: 1. addressing of the format [base + offset], 2. the offset is between 1 and 2047, 3. the address specified in the base register is in one page and the address [base+offset] is in another page. | EventSel=B6H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
ARITH.FPU_DIV | This event counts the number of the divide operations executed. | EventSel=14H UMask=01H EdgeDetect=1 CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
ARITH.FPU_DIV_ACTIVE | Cycles when divider is busy executing divide operations. | EventSel=14H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
BACLEARS.ANY | Counts the total number of times the front end is resteered, mainly when the BPU cannot provide a correct prediction and this is corrected by other branch handling mechanisms at the front end. | EventSel=E6H UMask=1FH Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
BR_INST_EXEC.ALL_BRANCHES | Speculative and retired branches. | EventSel=88H UMask=FFH Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
BR_INST_EXEC.ALL_CONDITIONAL | Speculative and retired macro-conditional branches. | EventSel=88H UMask=C1H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
BR_INST_EXEC.ALL_DIRECT_JMP | Speculative and retired macro-unconditional branches excluding calls and indirects. | EventSel=88H UMask=C2H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
BR_INST_EXEC.ALL_DIRECT_NEAR_CALL | Speculative and retired direct near calls. | EventSel=88H UMask=D0H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
BR_INST_EXEC.ALL_INDIRECT_JUMP_NON_CALL_RET | Speculative and retired indirect branches excluding calls and returns. | EventSel=88H UMask=C4H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
BR_INST_EXEC.ALL_INDIRECT_NEAR_RETURN | Speculative and retired indirect return branches. | EventSel=88H UMask=C8H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
BR_INST_EXEC.NONTAKEN_CONDITIONAL | Not taken macro-conditional branches. | EventSel=88H UMask=41H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
BR_INST_EXEC.TAKEN_CONDITIONAL | Taken speculative and retired macro-conditional branches. | EventSel=88H UMask=81H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
BR_INST_EXEC.TAKEN_DIRECT_JUMP | Taken speculative and retired macro-unconditional branch instructions excluding calls and indirects. | EventSel=88H UMask=82H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
BR_INST_EXEC.TAKEN_DIRECT_NEAR_CALL | Taken speculative and retired direct near calls. | EventSel=88H UMask=90H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
BR_INST_EXEC.TAKEN_INDIRECT_JUMP_NON_CALL_RET | Taken speculative and retired indirect branches excluding calls and returns. | EventSel=88H UMask=84H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
BR_INST_EXEC.TAKEN_INDIRECT_NEAR_CALL | Taken speculative and retired indirect calls. | EventSel=88H UMask=A0H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
BR_INST_EXEC.TAKEN_INDIRECT_NEAR_RETURN | Taken speculative and retired indirect branches with return mnemonic. | EventSel=88H UMask=88H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
BR_INST_RETIRED.ALL_BRANCHES_PS | All (macro) branch instructions retired. (Precise Event - PEBS). | EventSel=C4H UMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
BR_INST_RETIRED.CONDITIONAL | Conditional branch instructions retired. | EventSel=C4H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 PEBS:[PreciseEventingIP] |
BR_INST_RETIRED.CONDITIONAL_PS | Conditional branch instructions retired. (Precise Event - PEBS). | EventSel=C4H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
BR_INST_RETIRED.FAR_BRANCH | Far branch instructions retired. | EventSel=C4H UMask=40H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 PEBS:[PreciseEventingIP] |
BR_INST_RETIRED.NEAR_CALL | Direct and indirect near call instructions retired. | EventSel=C4H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 PEBS:[PreciseEventingIP] |
BR_INST_RETIRED.NEAR_CALL_PS | Direct and indirect near call instructions retired. (Precise Event - PEBS). | EventSel=C4H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
BR_INST_RETIRED.NEAR_CALL_R3 | Direct and indirect macro near call instructions retired (captured in ring 3). | EventSel=C4H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 PEBS:[PreciseEventingIP] |
BR_INST_RETIRED.NEAR_CALL_R3_PS | Direct and indirect macro near call instructions retired (captured in ring 3). (Precise Event - PEBS). | EventSel=C4H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
BR_INST_RETIRED.NEAR_RETURN | Return instructions retired. | EventSel=C4H UMask=08H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 PEBS:[PreciseEventingIP] |
BR_INST_RETIRED.NEAR_RETURN_PS | Return instructions retired. (Precise Event - PEBS). | EventSel=C4H UMask=08H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
BR_INST_RETIRED.NEAR_TAKEN | Taken branch instructions retired. | EventSel=C4H UMask=20H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 PEBS:[PreciseEventingIP] |
BR_INST_RETIRED.NEAR_TAKEN_PS | Taken branch instructions retired. (Precise Event - PEBS). | EventSel=C4H UMask=20H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
BR_INST_RETIRED.NOT_TAKEN | Not taken branch instructions retired. | EventSel=C4H UMask=10H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 PEBS:[PreciseEventingIP] |
BR_MISP_EXEC.ALL_BRANCHES | Speculative and retired mispredicted macro branches of all types. | EventSel=89H UMask=FFH Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
BR_MISP_EXEC.ALL_CONDITIONAL | Speculative and retired mispredicted macro conditional branches. | EventSel=89H UMask=C1H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
BR_MISP_EXEC.ALL_DIRECT_NEAR_CALL | Speculative and retired mispredicted direct near calls. | EventSel=89H UMask=D0H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
BR_MISP_EXEC.ALL_INDIRECT_JUMP_NON_CALL_RET | Mispredicted indirect branches excluding calls and returns. | EventSel=89H UMask=C4H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
BR_MISP_EXEC.INDIRECT | Counts speculatively mispredicted indirect branches at execution time. Counts for indirect near CALL or JMP instructions (RET excluded). | EventSel=89H UMask=E4H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
BR_MISP_EXEC.NONTAKEN_CONDITIONAL | Not taken speculative and retired mispredicted macro conditional branches. | EventSel=89H UMask=41H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
BR_MISP_EXEC.TAKEN_CONDITIONAL | Taken speculative and retired mispredicted macro conditional branches. | EventSel=89H UMask=81H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
BR_MISP_EXEC.TAKEN_DIRECT_NEAR_CALL | Taken speculative and retired mispredicted direct near calls. | EventSel=89H UMask=90H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
BR_MISP_EXEC.TAKEN_INDIRECT_JUMP_NON_CALL_RET | Taken speculative and retired mispredicted indirect branches excluding calls and returns. | EventSel=89H UMask=84H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
BR_MISP_EXEC.TAKEN_INDIRECT_NEAR_CALL | Taken speculative and retired mispredicted indirect calls. | EventSel=89H UMask=A0H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
BR_MISP_EXEC.TAKEN_RETURN_NEAR | Taken speculative and retired mispredicted indirect branches with return mnemonic. | EventSel=89H UMask=88H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
BR_MISP_RETIRED.ALL_BRANCHES_PS | Mispredicted macro branch instructions retired. (Precise Event - PEBS) | EventSel=C5H UMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
BR_MISP_RETIRED.CONDITIONAL | Mispredicted conditional branch instructions retired. | EventSel=C5H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 PEBS:[PreciseEventingIP] |
BR_MISP_RETIRED.CONDITIONAL_PS | Mispredicted conditional branch instructions retired. (Precise Event - PEBS). | EventSel=C5H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
BR_MISP_RETIRED.NEAR_CALL | Direct and indirect mispredicted near call instructions retired. | EventSel=C5H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 PEBS:[PreciseEventingIP] |
BR_MISP_RETIRED.NEAR_CALL_PS | Direct and indirect mispredicted near call instructions retired. (Precise Event - PEBS). | EventSel=C5H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
BR_MISP_RETIRED.NOT_TAKEN | Mispredicted not taken branch instructions retired. | EventSel=C5H UMask=10H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 PEBS:[PreciseEventingIP] |
BR_MISP_RETIRED.NOT_TAKEN_PS | Mispredicted not taken branch instructions retired. (Precise Event - PEBS). | EventSel=C5H UMask=10H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP]
BR_MISP_RETIRED.TAKEN | Mispredicted taken branch instructions retired. | EventSel=C5H UMask=20H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 PEBS:[PreciseEventingIP] |
BR_MISP_RETIRED.TAKEN_PS | Mispredicted taken branch instructions retired. (Precise Event - PEBS). | EventSel=C5H UMask=20H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
CPL_CYCLES.RING0 | Unhalted core cycles when the thread is in ring 0. | EventSel=5CH UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
CPL_CYCLES.RING0_TRANS | Number of intervals between processor halts while thread is in ring 0. | EventSel=5CH UMask=01H EdgeDetect=1 CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
CPL_CYCLES.RING123 | Unhalted core cycles when thread is in rings 1, 2, or 3. | EventSel=5CH UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
CPU_CLK_THREAD_UNHALTED.ONE_THREAD_ACTIVE | Count XClk pulses when this thread is unhalted and the other is halted. | EventSel=3CH UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3 |
CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE | Count XClk pulses when this thread is unhalted and the other thread is halted. | EventSel=3CH UMask=02H CMask=0 Counter=0,1,2,3 |
CYCLE_ACTIVITY.CYCLES_L1D_PENDING | Increments by 1 each cycle there is a miss-pending demand load for this thread. Note this is in the DCU and tied to UMask bit 1. A miss-pending demand load should be deduced by OR-ing the increment bits of DCACHE_MISS_PEND.PENDING. | EventSel=A3H UMask=02H CMask=02H Counter=2 CounterHTOff=2
CYCLE_ACTIVITY.CYCLES_L2_PENDING | Increments by 1 each cycle there is an MLC-miss-pending demand load for this thread (i.e. a non-completed valid SQ entry allocated for a demand load and waiting for the uncore). Note this is in the MLC and tied to UMask bit 0. | EventSel=A3H UMask=01H CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
CYCLE_ACTIVITY.CYCLES_NO_DISPATCH | Increments by 1 each cycle there is no dispatch for this thread. Note this is tied to UMask bit 2. No dispatch can be deduced from the UOPS_EXECUTED event. | EventSel=A3H UMask=04H CMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3
CYCLE_ACTIVITY.STALLS_L1D_PENDING | Increments by 1 each cycle there is a miss-pending demand load for this thread and no uops are dispatched. Note this is in the DCU and tied to UMask bits 1 and 2. A miss-pending demand load should be deduced by OR-ing the increment bits of DCACHE_MISS_PEND.PENDING. | EventSel=A3H UMask=06H CMask=06H Counter=2 CounterHTOff=2
CYCLE_ACTIVITY.STALLS_L2_PENDING | Increments by 1 each cycle there is an MLC-miss-pending demand load and no uops are dispatched on this thread (i.e. a non-completed valid SQ entry allocated for a demand load and waiting for the uncore). Note this is in the MLC and tied to UMask bits 0 and 2. | EventSel=A3H UMask=05H CMask=05H Counter=0,1,2,3 CounterHTOff=0,1,2,3
DSB_FILL.ALL_CANCEL | Cases of a valid Decode Stream Buffer (DSB) fill being cancelled for reasons other than exceeding the way limit. | EventSel=ACH UMask=0AH Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
DSB_FILL.EXCEED_DSB_LINES | Cycles when a Decode Stream Buffer (DSB) fill encounters more than 3 DSB lines. | EventSel=ACH UMask=08H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
DSB_FILL.OTHER_CANCEL | Cases of a valid DSB fill being cancelled for reasons other than exceeding the way limit. | EventSel=ACH UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
DSB2MITE_SWITCHES.COUNT | Decode Stream Buffer (DSB)-to-MITE switches. | EventSel=ABH UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
DSB2MITE_SWITCHES.PENALTY_CYCLES | This event counts the cycles attributed to a switch from the Decoded Stream Buffer (DSB), which holds decoded instructions, to the legacy decode pipeline. It excludes cycles when the back-end cannot accept new micro-ops. The penalty for these switches is potentially several cycles of instruction starvation, where no micro-ops are delivered to the back-end. | EventSel=ABH UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
DTLB_LOAD_MISSES.MISS_CAUSES_A_WALK | Load misses in all DTLB levels that cause page walks. | EventSel=08H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
DTLB_LOAD_MISSES.STLB_HIT | This event counts load operations that miss the first DTLB level but hit the second and do not cause any page walks. The penalty in this case is approximately 7 cycles. | EventSel=08H UMask=10H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
DTLB_LOAD_MISSES.WALK_COMPLETED | Load misses at all DTLB levels that cause completed page walks. | EventSel=08H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
DTLB_LOAD_MISSES.WALK_DURATION | This event counts cycles when the page miss handler (PMH) is servicing page walks caused by DTLB load misses. | EventSel=08H UMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
DTLB_STORE_MISSES.MISS_CAUSES_A_WALK | Store misses in all DTLB levels that cause page walks. | EventSel=49H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
DTLB_STORE_MISSES.STLB_HIT | Store operations that miss the first TLB level but hit the second and do not cause page walks. | EventSel=49H UMask=10H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
DTLB_STORE_MISSES.WALK_COMPLETED | Store misses in all DTLB levels that cause completed page walks. | EventSel=49H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
DTLB_STORE_MISSES.WALK_DURATION | Cycles when PMH is busy with page walks. | EventSel=49H UMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
EPT.WALK_CYCLES | Cycle count for an Extended Page table walk. The Extended Page Directory cache is used by Virtual Machine operating systems while the guest operating systems use the standard TLB caches. | EventSel=4FH UMask=10H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
FP_ASSIST.ANY | Cycles with any input/output SSE or FP assist. | EventSel=CAH UMask=1EH CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
FP_ASSIST.SIMD_INPUT | Number of SIMD FP assists due to input values. | EventSel=CAH UMask=10H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 PEBS:[PreciseEventingIP] |
FP_ASSIST.SIMD_OUTPUT | Number of SIMD FP assists due to Output values. | EventSel=CAH UMask=08H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 PEBS:[PreciseEventingIP] |
FP_ASSIST.X87_INPUT | Number of X87 assists due to input value. | EventSel=CAH UMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 PEBS:[PreciseEventingIP] |
FP_ASSIST.X87_OUTPUT | Number of X87 assists due to output value. | EventSel=CAH UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 PEBS:[PreciseEventingIP] |
FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE | Number of SSE* or AVX-128 FP Computational packed double-precision uops issued this cycle. | EventSel=10H UMask=10H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
FP_COMP_OPS_EXE.SSE_PACKED_SINGLE | Number of SSE* or AVX-128 FP Computational packed single-precision uops issued this cycle. | EventSel=10H UMask=40H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
FP_COMP_OPS_EXE.SSE_SCALAR_DOUBLE | Number of SSE* or AVX-128 FP Computational scalar double-precision uops issued this cycle. | EventSel=10H UMask=80H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE | Number of SSE* or AVX-128 FP Computational scalar single-precision uops issued this cycle. | EventSel=10H UMask=20H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
FP_COMP_OPS_EXE.X87 | Number of FP Computational Uops Executed this cycle. The number of FADD, FSUB, FCOM, FMULs, integer MULs and IMULs, FDIVs, FPREMs, FSQRTS, integer DIVs, and IDIVs. This event does not distinguish an FADD used in the middle of a transcendental flow from a separate FADD instruction. | EventSel=10H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
HW_PRE_REQ.DL1_MISS | Hardware Prefetch requests that miss the L1D cache. This accounts for both L1 streamer and IP-based (IPP) HW prefetchers. A request is counted each time it accesses the cache and misses it, including if a block is applicable or if it hits the Fill Buffer. | EventSel=4EH UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
ICACHE.HIT | Number of Instruction Cache, Streaming Buffer and Victim Cache reads, both cacheable and noncacheable, including UC fetches. | EventSel=80H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
ICACHE.MISSES | This event counts the number of instruction cache, streaming buffer and victim cache misses. Counting includes uncacheable accesses. | EventSel=80H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
IDQ.ALL_DSB_CYCLES_4_UOPS | Cycles Decode Stream Buffer (DSB) is delivering 4 Uops. | EventSel=79H UMask=18H CMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
IDQ.ALL_DSB_CYCLES_ANY_UOPS | Cycles Decode Stream Buffer (DSB) is delivering any Uop. | EventSel=79H UMask=18H CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
IDQ.ALL_MITE_CYCLES_4_UOPS | Cycles MITE is delivering 4 Uops. | EventSel=79H UMask=24H CMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
IDQ.ALL_MITE_CYCLES_ANY_UOPS | Cycles MITE is delivering any Uop. | EventSel=79H UMask=24H CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
IDQ.DSB_CYCLES | Cycles when uops are being delivered to Instruction Decode Queue (IDQ) from Decode Stream Buffer (DSB) path. | EventSel=79H UMask=08H CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
IDQ.DSB_UOPS | Uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. | EventSel=79H UMask=08H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
IDQ.EMPTY | Instruction Decode Queue (IDQ) empty cycles. | EventSel=79H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3 |
IDQ.MITE_ALL_UOPS | Uops delivered to Instruction Decode Queue (IDQ) from MITE path. | EventSel=79H UMask=3CH Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
IDQ.MITE_CYCLES | Cycles when uops are being delivered to Instruction Decode Queue (IDQ) from MITE path. | EventSel=79H UMask=04H CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
IDQ.MITE_UOPS | Uops delivered to Instruction Decode Queue (IDQ) from MITE path. | EventSel=79H UMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
IDQ.MS_CYCLES | This event counts cycles during which the microcode sequencer assisted the front-end in delivering uops. Microcode assists are used for complex instructions or scenarios that can't be handled by the standard decoder. Using other instructions, if possible, will usually improve performance. See the Intel® 64 and IA-32 Architectures Optimization Reference Manual for more information. | EventSel=79H UMask=30H CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
IDQ.MS_DSB_CYCLES | Cycles when uops initiated by Decode Stream Buffer (DSB) are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy. | EventSel=79H UMask=10H CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
IDQ.MS_DSB_OCCUR | Deliveries to Instruction Decode Queue (IDQ) initiated by Decode Stream Buffer (DSB) while Microcode Sequencer (MS) is busy. | EventSel=79H UMask=10H EdgeDetect=1 CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
IDQ.MS_DSB_UOPS | Uops initiated by Decode Stream Buffer (DSB) that are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy. | EventSel=79H UMask=10H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
IDQ.MS_MITE_UOPS | Uops initiated by MITE and delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy. | EventSel=79H UMask=20H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
IDQ.MS_SWITCHES | Number of switches from DSB (Decode Stream Buffer) or MITE (legacy decode pipeline) to the Microcode Sequencer. | EventSel=79H UMask=30H EdgeDetect=1 CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
IDQ.MS_UOPS | Uops delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy. | EventSel=79H UMask=30H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
IDQ_UOPS_NOT_DELIVERED.CORE | This event counts the number of uops not delivered to the back-end per cycle, per thread, when the back-end was not stalled. In the ideal case 4 uops can be delivered each cycle. The event counts the undelivered uops - so if 3 were delivered in one cycle, the counter would be incremented by 1 for that cycle (4 - 3). If the back-end is stalled, the count for this event is not incremented even when uops were not delivered, because the back-end would not have been able to accept them. This event is used in determining the front-end bound category of the top-down pipeline slots characterization (a worked ratio appears after this table). | EventSel=9CH UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3
IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE | Cycles per thread when 4 or more uops are not delivered to Resource Allocation Table (RAT) when backend of the machine is not stalled. | EventSel=9CH UMask=01H CMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3 |
IDQ_UOPS_NOT_DELIVERED.CYCLES_FE_WAS_OK | Counts cycles in which the front end delivered 4 uops or the Resource Allocation Table (RAT) was stalling the front end. | EventSel=9CH UMask=01H Invert=1 CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3
IDQ_UOPS_NOT_DELIVERED.CYCLES_GE_1_UOP_DELIV.CORE | Cycles when 1 or more uops were delivered by the front end. | EventSel=9CH UMask=01H Invert=1 CMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3
IDQ_UOPS_NOT_DELIVERED.CYCLES_LE_1_UOP_DELIV.CORE | Cycles per thread when 3 or more uops are not delivered to Resource Allocation Table (RAT) when backend of the machine is not stalled. | EventSel=9CH UMask=01H CMask=03H Counter=0,1,2,3 CounterHTOff=0,1,2,3 |
IDQ_UOPS_NOT_DELIVERED.CYCLES_LE_2_UOP_DELIV.CORE | Cycles with less than 2 uops delivered by the front end. | EventSel=9CH UMask=01H CMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3 |
IDQ_UOPS_NOT_DELIVERED.CYCLES_LE_3_UOP_DELIV.CORE | Cycles with less than 3 uops delivered by the front end. | EventSel=9CH UMask=01H CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3 |
ILD_STALL.IQ_FULL | Stall cycles because IQ is full. | EventSel=87H UMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
ILD_STALL.LCP | Stalls caused by changing prefix length of the instruction. | EventSel=87H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
INST_RETIRED.PREC_DIST | Instructions retired. (Precise Event - PEBS). | EventSel=C0H UMask=01H Counter=1 CounterHTOff=1 PEBS:[Precise] |
INSTS_WRITTEN_TO_IQ.INSTS | Valid instructions written to IQ per cycle. | EventSel=17H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
INT_MISC.RAT_STALL_CYCLES | Cycles when Resource Allocation Table (RAT) external stall is sent to Instruction Decode Queue (IDQ) for the thread. | EventSel=0DH UMask=40H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
INT_MISC.RECOVERY_CYCLES | Number of cycles waiting for the checkpoints in Resource Allocation Table (RAT) to be recovered after a Nuke, due to all other cases except JEClear (e.g. whenever a ucode assist is needed like SSE exception, memory disambiguation, etc.). | EventSel=0DH UMask=03H CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
INT_MISC.RECOVERY_CYCLES_ANY | Core cycles the allocator was stalled due to recovery from earlier clear event for any thread running on the physical core (e.g. misprediction or memory nuke). | EventSel=0DH UMask=03H AnyThread=1 CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
INT_MISC.RECOVERY_STALLS_COUNT | Number of occurrences waiting for the checkpoints in Resource Allocation Table (RAT) to be recovered after a Nuke, due to all other cases except JEClear (e.g. whenever a ucode assist is needed like SSE exception, memory disambiguation, etc.). | EventSel=0DH UMask=03H EdgeDetect=1 CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
ITLB.ITLB_FLUSH | Flushing of the Instruction TLB (ITLB) pages, includes 4k/2M/4M pages. | EventSel=AEH UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
ITLB_MISSES.MISS_CAUSES_A_WALK | Misses at all ITLB levels that cause page walks. | EventSel=85H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
ITLB_MISSES.STLB_HIT | Operations that miss the first ITLB level but hit the second and do not cause any page walks. | EventSel=85H UMask=10H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
ITLB_MISSES.WALK_COMPLETED | Misses in all ITLB levels that cause completed page walks. | EventSel=85H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
ITLB_MISSES.WALK_DURATION | This event counts cycles when the Page Miss Handler (PMH) is servicing page walks caused by ITLB misses. | EventSel=85H UMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
L1D.ALL_M_REPLACEMENT | Cache lines in M state evicted out of L1D due to Snoop HitM or dirty line replacement. | EventSel=51H UMask=08H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L1D.ALLOCATED_IN_M | Allocated L1D data cache lines in M state. | EventSel=51H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L1D.EVICTION | L1D data cache lines in M state evicted due to replacement. | EventSel=51H UMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L1D.REPLACEMENT | This event counts L1D data line replacements. Replacements occur when a new line is brought into the cache, causing eviction of a line loaded earlier. | EventSel=51H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L1D_BLOCKS.BANK_CONFLICT_CYCLES | Cycles when dispatched loads are cancelled due to L1D bank conflicts with other load ports. | EventSel=BFH UMask=05H CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L1D_PEND_MISS.FB_FULL | Cycles a demand request was blocked due to Fill Buffers being unavailable. | EventSel=48H UMask=02H CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
L1D_PEND_MISS.PENDING | L1D miss outstanding duration in cycles. | EventSel=48H UMask=01H Counter=2 CounterHTOff=2 |
L1D_PEND_MISS.PENDING_CYCLES | Cycles with L1D load Misses outstanding. | EventSel=48H UMask=01H CMask=01H Counter=2 CounterHTOff=2 |
L1D_PEND_MISS.PENDING_CYCLES_ANY | Cycles with L1D load Misses outstanding from any thread on physical core. | EventSel=48H UMask=01H AnyThread=1 CMask=01H Counter=2 CounterHTOff=2 |
L2_L1D_WB_RQSTS.ALL | Not rejected writebacks from L1D to L2 cache lines in any state. | EventSel=28H UMask=0FH Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_L1D_WB_RQSTS.HIT_E | Not rejected writebacks from L1D to L2 cache lines in E state. | EventSel=28H UMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_L1D_WB_RQSTS.HIT_M | Not rejected writebacks from L1D to L2 cache lines in M state. | EventSel=28H UMask=08H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_L1D_WB_RQSTS.HIT_S | Not rejected writebacks from L1D to L2 cache lines in S state. | EventSel=28H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_L1D_WB_RQSTS.MISS | Counts the number of modified lines evicted from L1 that missed the L2 (non-rejected writebacks from the DCU). | EventSel=28H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
L2_LINES_IN.ALL | This event counts the number of L2 cache lines brought into the L2 cache. Lines are filled into the L2 cache when there was an L2 miss. | EventSel=F1H UMask=07H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_LINES_IN.E | L2 cache lines in E state filling L2. | EventSel=F1H UMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_LINES_IN.I | L2 cache lines in I state filling L2. | EventSel=F1H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_LINES_IN.S | L2 cache lines in S state filling L2. | EventSel=F1H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_LINES_OUT.DEMAND_CLEAN | Clean L2 cache lines evicted by demand. | EventSel=F2H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_LINES_OUT.DEMAND_DIRTY | Dirty L2 cache lines evicted by demand. | EventSel=F2H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_LINES_OUT.DIRTY_ALL | Dirty L2 cache lines filling the L2. | EventSel=F2H UMask=0AH Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_LINES_OUT.PF_CLEAN | Clean L2 cache lines evicted by L2 prefetch. | EventSel=F2H UMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_LINES_OUT.PF_DIRTY | Dirty L2 cache lines evicted by L2 prefetch. | EventSel=F2H UMask=08H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_RQSTS.ALL_CODE_RD | L2 code requests. | EventSel=24H UMask=30H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_RQSTS.ALL_DEMAND_DATA_RD | Demand Data Read requests. | EventSel=24H UMask=03H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_RQSTS.ALL_PF | Requests from L2 hardware prefetchers. | EventSel=24H UMask=C0H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_RQSTS.ALL_RFO | RFO requests to L2 cache. | EventSel=24H UMask=0CH Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_RQSTS.CODE_RD_HIT | L2 cache hits when fetching instructions, code reads. | EventSel=24H UMask=10H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_RQSTS.CODE_RD_MISS | L2 cache misses when fetching instructions. | EventSel=24H UMask=20H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_RQSTS.DEMAND_DATA_RD_HIT | Demand Data Read requests that hit L2 cache. | EventSel=24H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_RQSTS.PF_HIT | Requests from the L2 hardware prefetchers that hit L2 cache. | EventSel=24H UMask=40H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_RQSTS.PF_MISS | Requests from the L2 hardware prefetchers that miss L2 cache. | EventSel=24H UMask=80H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_RQSTS.RFO_HIT | RFO requests that hit L2 cache. | EventSel=24H UMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_RQSTS.RFO_MISS | RFO requests that miss L2 cache. | EventSel=24H UMask=08H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_STORE_LOCK_RQSTS.ALL | RFOs that access cache lines in any state. | EventSel=27H UMask=0FH Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_STORE_LOCK_RQSTS.HIT_E | RFOs that hit cache lines in E state. | EventSel=27H UMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_STORE_LOCK_RQSTS.HIT_M | RFOs that hit cache lines in M state. | EventSel=27H UMask=08H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_STORE_LOCK_RQSTS.MISS | RFOs that miss cache lines. | EventSel=27H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_TRANS.ALL_PF | L2 or LLC HW prefetches that access L2 cache. | EventSel=F0H UMask=08H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_TRANS.ALL_REQUESTS | Transactions accessing L2 pipe. | EventSel=F0H UMask=80H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_TRANS.CODE_RD | L2 cache accesses when fetching instructions. | EventSel=F0H UMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_TRANS.DEMAND_DATA_RD | Demand Data Read requests that access L2 cache. | EventSel=F0H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_TRANS.L1D_WB | L1D writebacks that access L2 cache. | EventSel=F0H UMask=10H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_TRANS.L2_FILL | L2 fill requests that access L2 cache. | EventSel=F0H UMask=20H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_TRANS.L2_WB | L2 writebacks that access L2 cache. | EventSel=F0H UMask=40H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
L2_TRANS.RFO | RFO requests that access L2 cache. | EventSel=F0H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
LD_BLOCKS.ALL_BLOCK | Number of cases where any load ends up with a valid block-code written to the load buffer (including blocks due to Memory Order Buffer (MOB), Data Cache Unit (DCU), TLB, but load has no DCU miss). | EventSel=03H UMask=10H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
LD_BLOCKS.DATA_UNKNOWN | Loads delayed due to store buffer (SB) blocks: preceding store operations with known addresses but unknown data. | EventSel=03H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
LD_BLOCKS.NO_SR | This event counts the number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use. | EventSel=03H UMask=08H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
LD_BLOCKS.STORE_FORWARD | This event counts loads that followed a store to the same address, where the data could not be forwarded inside the pipeline from the store to the load. The most common reason why store forwarding would be blocked is when a load's address range overlaps with a preceding smaller uncompleted store. See the table of not supported store forwards in the Intel® 64 and IA-32 Architectures Optimization Reference Manual. The penalty for blocked store forwarding is that the load must wait for the store to complete before it can be issued (see the illustration after this table). | EventSel=03H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
LD_BLOCKS_PARTIAL.ADDRESS_ALIAS | Aliasing occurs when a load is issued after a store and their memory addresses are offset by 4K. This event counts the number of loads that aliased with a preceding store, resulting in an extended address check in the pipeline. The extended address check typically has a performance penalty of 5 cycles (see the illustration after this table). | EventSel=07H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
LD_BLOCKS_PARTIAL.ALL_STA_BLOCK | This event counts the number of times that load operations are temporarily blocked because of older stores, with addresses that are not yet known. A load operation may incur more than one block of this type. | EventSel=07H UMask=08H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
LOAD_HIT_PRE.HW_PF | Non-software-prefetch load dispatches that hit a fill buffer (FB) allocated for a hardware prefetch. | EventSel=4CH UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
LOAD_HIT_PRE.SW_PF | Non-software-prefetch load dispatches that hit a fill buffer (FB) allocated for a software prefetch. | EventSel=4CH UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
LOCK_CYCLES.CACHE_LOCK_DURATION | Cycles when L1D is locked. | EventSel=63H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
LOCK_CYCLES.SPLIT_LOCK_UC_LOCK_DURATION | Cycles when L1 and L2 are locked due to UC or split lock. | EventSel=63H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
LSD.CYCLES_4_UOPS | Cycles when 4 uops are delivered by the LSD and none come from the decoder. | EventSel=A8H UMask=01H CMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
LSD.CYCLES_ACTIVE | Cycles when uops are delivered by the LSD and none come from the decoder. | EventSel=A8H UMask=01H CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
LSD.UOPS | Number of Uops delivered by the LSD. | EventSel=A8H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
MACHINE_CLEARS.COUNT | Number of machine clears (nukes) of any type. Its encoding is used in the raw-event sketch after this table. | EventSel=C3H UMask=01H EdgeDetect=1 CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
MACHINE_CLEARS.MASKMOV | Maskmov false fault - counts the number of times ucode passes through the Maskmov flow due to the instruction's mask being 0, while the flow completes without raising a fault. | EventSel=C3H UMask=20H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
MACHINE_CLEARS.MEMORY_ORDERING | This event counts the number of memory ordering Machine Clears detected. Memory Ordering Machine Clears can result from memory disambiguation, external snoops, or cross SMT-HW-thread snoop (stores) hitting load buffers. Machine clears can have a significant performance impact if they are happening frequently. | EventSel=C3H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
MACHINE_CLEARS.SMC | This event is incremented when self-modifying code (SMC) is detected, which causes a machine clear. Machine clears can have a significant performance impact if they are happening frequently. | EventSel=C3H UMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT | This event counts retired load uops that hit in the last-level cache (L3) and were found in a non-modified state in a neighboring core's private cache (same package). Since the last level cache is inclusive, hits to the L3 may require snooping the private L2 caches of any cores on the same socket that have the line. In this case, a snoop was required, and another L2 had the line in a non-modified state. | EventSel=D2H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM | This event counts retired load uops that hit in the last-level cache (L3) and were found in a modified state in a neighboring core's private cache (same package). Since the last level cache is inclusive, hits to the L3 may require snooping the private L2 caches of any cores on the same socket that have the line. In this case, a snoop was required, and another L2 had the line in a modified state, so the line had to be invalidated in that L2 cache and transferred to the requesting L2. | EventSel=D2H UMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP]
MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS | Retired load uops which data sources were LLC hit and cross-core snoop missed in on-pkg core cache. | EventSel=D2H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_NONE | Retired load uops which data sources were hits in LLC without snoops required. | EventSel=D2H UMask=08H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
MEM_LOAD_UOPS_LLC_MISS_RETIRED.LOCAL_DRAM | Data from local DRAM, either snoop not needed or snoop miss (RspI). | EventSel=D3H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP]
MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_DRAM | Data from remote DRAM, either snoop not needed or snoop miss (RspI). | EventSel=D3H UMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP]
MEM_LOAD_UOPS_RETIRED.HIT_LFB | Retired load uops that missed the L1 but hit a fill buffer (FB) due to a preceding miss to the same cache line with the data not ready. | EventSel=D1H UMask=40H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP]
MEM_LOAD_UOPS_RETIRED.HIT_LFB_PS | Retired load uops that missed the L1 but hit a fill buffer (FB) due to a preceding miss to the same cache line with the data not ready. (Precise Event - PEBS). | EventSel=D1H UMask=40H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP]
MEM_LOAD_UOPS_RETIRED.L1_HIT | Retired load uops with L1 cache hits as data sources. | EventSel=D1H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
MEM_LOAD_UOPS_RETIRED.L1_HIT_PS | Retired load uops with L1 cache hits as data sources. (Precise Event - PEBS). | EventSel=D1H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
MEM_LOAD_UOPS_RETIRED.L2_HIT | Retired load uops with L2 cache hits as data sources. | EventSel=D1H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
MEM_LOAD_UOPS_RETIRED.L2_HIT_PS | Retired load uops with L2 cache hits as data sources. (Precise Event - PEBS). | EventSel=D1H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
MEM_LOAD_UOPS_RETIRED.LLC_HIT | This event counts retired load uops that hit in the last-level (L3) cache without snoops required. | EventSel=D1H UMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
MEM_LOAD_UOPS_RETIRED.LLC_MISS | Retired load uops that missed the last-level (L3) cache. Excludes unknown data sources. | EventSel=D1H UMask=20H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP]
MEM_TRANS_RETIRED.LOAD_LATENCY_GT_128 | Randomly selected loads with latency value being above 128 (see the PEBS setup sketch after this table). | EventSel=CDH UMask=01H MSR_PEBS_LD_LAT_THRESHOLD(3F6H)=80H Counter=3 CounterHTOff=3 PEBS:[Precise, DataLinearAddress, Latency]
MEM_TRANS_RETIRED.LOAD_LATENCY_GT_16 | Randomly selected loads with latency value being above 16. | EventSel=CDH UMask=01H MSR_PEBS_LD_LAT_THRESHOLD(3F6H)=10H Counter=3 CounterHTOff=3 PEBS:[Precise, DataLinearAddress, Latency] |
MEM_TRANS_RETIRED.LOAD_LATENCY_GT_256 | Randomly selected loads with latency value being above 256. | EventSel=CDH UMask=01H MSR_PEBS_LD_LAT_THRESHOLD(3F6H)=100H Counter=3 CounterHTOff=3 PEBS:[Precise, DataLinearAddress, Latency] |
MEM_TRANS_RETIRED.LOAD_LATENCY_GT_32 | Randomly selected loads with latency value being above 32. | EventSel=CDH UMask=01H MSR_PEBS_LD_LAT_THRESHOLD(3F6H)=20H Counter=3 CounterHTOff=3 PEBS:[Precise, DataLinearAddress, Latency] |
MEM_TRANS_RETIRED.LOAD_LATENCY_GT_4 | Randomly selected loads with latency value being above 4. | EventSel=CDH UMask=01H MSR_PEBS_LD_LAT_THRESHOLD(3F6H)=04H Counter=3 CounterHTOff=3 PEBS:[Precise, DataLinearAddress, Latency]
MEM_TRANS_RETIRED.LOAD_LATENCY_GT_512 | Randomly selected loads with latency value being above 512. | EventSel=CDH UMask=01H MSR_PEBS_LD_LAT_THRESHOLD(3F6H)=200H Counter=3 CounterHTOff=3 PEBS:[Precise, DataLinearAddress, Latency] |
MEM_TRANS_RETIRED.LOAD_LATENCY_GT_64 | Randomly selected loads with latency value being above 64. | EventSel=CDH UMask=01H MSR_PEBS_LD_LAT_THRESHOLD(3F6H)=40H Counter=3 CounterHTOff=3 PEBS:[Precise, DataLinearAddress, Latency] |
MEM_TRANS_RETIRED.LOAD_LATENCY_GT_8 | Randomly selected loads with latency value being above 8. | EventSel=CDH UMask=01H MSR_PEBS_LD_LAT_THRESHOLD(3F6H)=08H Counter=3 CounterHTOff=3 PEBS:[Precise, DataLinearAddress, Latency] |
MEM_TRANS_RETIRED.PRECISE_STORE | Sample stores and collect precise store operation via PEBS record. PMC3 only. (Precise Event - PEBS). | EventSel=CDH UMask=02H Counter=3 CounterHTOff=3 PEBS:[Precise, Latency] |
MEM_UOPS_RETIRED.ALL_LOADS | This event counts the number of load uops retired. | EventSel=D0H UMask=81H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP]
MEM_UOPS_RETIRED.ALL_LOADS_PS | This event counts the number of load uops retired. (Precise Event - PEBS). | EventSel=D0H UMask=81H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP]
MEM_UOPS_RETIRED.ALL_STORES | This event counts the number of store uops retired. | EventSel=D0H UMask=82H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
MEM_UOPS_RETIRED.ALL_STORES_PS | This event counts the number of store uops retired. (Precise Event - PEBS) | EventSel=D0H UMask=82H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
MEM_UOPS_RETIRED.LOCK_LOADS | Retired load uops with locked access. | EventSel=D0H UMask=21H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
MEM_UOPS_RETIRED.LOCK_LOADS_PS | Retired load uops with locked access. (Precise Event - PEBS). | EventSel=D0H UMask=21H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
MEM_UOPS_RETIRED.SPLIT_LOADS | This event counts line-split load uops retired to the architected path. A line split is across a 64B cache line, which includes a page split (4K). | EventSel=D0H UMask=41H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP]
MEM_UOPS_RETIRED.SPLIT_LOADS_PS | This event counts line-split load uops retired to the architected path. A line split is across a 64B cache line, which includes a page split (4K). (Precise Event - PEBS) | EventSel=D0H UMask=41H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP]
MEM_UOPS_RETIRED.SPLIT_STORES | This event counts line-split store uops retired to the architected path. A line split is across a 64B cache line, which includes a page split (4K). | EventSel=D0H UMask=42H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP]
MEM_UOPS_RETIRED.SPLIT_STORES_PS | This event counts line-split store uops retired to the architected path. A line split is across a 64B cache line, which includes a page split (4K). (Precise Event - PEBS) | EventSel=D0H UMask=42H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP]
MEM_UOPS_RETIRED.STLB_MISS_LOADS | Retired load uops that miss the STLB. | EventSel=D0H UMask=11H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
MEM_UOPS_RETIRED.STLB_MISS_LOADS_PS | Retired load uops that miss the STLB. (Precise Event - PEBS). | EventSel=D0H UMask=11H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
MEM_UOPS_RETIRED.STLB_MISS_STORES | Retired store uops that miss the STLB. | EventSel=D0H UMask=12H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
MEM_UOPS_RETIRED.STLB_MISS_STORES_PS | Retired store uops that miss the STLB. (Precise Event - PEBS). | EventSel=D0H UMask=12H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
MISALIGN_MEM_REF.LOADS | Speculative cache line split load uops dispatched to L1 cache. | EventSel=05H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
MISALIGN_MEM_REF.STORES | Speculative cache line split STA uops dispatched to L1 cache. | EventSel=05H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
OFFCORE_REQUESTS.ALL_DATA_RD | Demand and prefetch data reads. | EventSel=B0H UMask=08H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
OFFCORE_REQUESTS.DEMAND_CODE_RD | Cacheable and noncacheable code read requests. | EventSel=B0H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
OFFCORE_REQUESTS.DEMAND_DATA_RD | Demand Data Read requests sent to uncore. | EventSel=B0H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
OFFCORE_REQUESTS.DEMAND_RFO | Demand RFO requests including regular RFOs, locks, ItoM. | EventSel=B0H UMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
OFFCORE_REQUESTS_BUFFER.SQ_FULL | Cases when the offcore requests buffer cannot take more entries for this core. | EventSel=B2H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD | Offcore outstanding cacheable Core Data Read transactions in SuperQueue (SQ), queue to uncore. | EventSel=60H UMask=08H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD | Cycles when offcore outstanding cacheable Core Data Read transactions are present in SuperQueue (SQ), queue to uncore. | EventSel=60H UMask=08H CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_DATA_RD | Cycles when offcore outstanding Demand Data Read transactions are present in SuperQueue (SQ), queue to uncore. | EventSel=60H UMask=01H CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO | Cycles when offcore outstanding Demand RFO transactions are present in SuperQueue (SQ), queue to uncore. | EventSel=60H UMask=04H CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD | Offcore outstanding Demand Data Read transactions in uncore queue. | EventSel=60H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD_C6 | Cycles with at least 6 offcore outstanding Demand Data Read transactions in uncore queue. | EventSel=60H UMask=01H CMask=06H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
OFFCORE_REQUESTS_OUTSTANDING.DEMAND_RFO | Offcore outstanding RFO store transactions in SuperQueue (SQ), queue to uncore. | EventSel=60H UMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
OTHER_ASSISTS.AVX_STORE | Number of GSSE memory assists for stores. The GSSE microcode assist is invoked whenever the hardware is unable to properly handle GSSE-256b operations. | EventSel=C1H UMask=08H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 PEBS:[PreciseEventingIP]
OTHER_ASSISTS.AVX_TO_SSE | Number of transitions from AVX-256 to legacy SSE when penalty applicable. | EventSel=C1H UMask=10H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 PEBS:[PreciseEventingIP] |
OTHER_ASSISTS.ITLB_MISS_RETIRED | Retired instructions experiencing ITLB misses. | EventSel=C1H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 PEBS:[PreciseEventingIP] |
OTHER_ASSISTS.SSE_TO_AVX | Number of transitions from SSE to AVX-256 when penalty applicable. | EventSel=C1H UMask=20H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 PEBS:[PreciseEventingIP] |
PARTIAL_RAT_STALLS.FLAGS_MERGE_UOP | Increments the number of flags-merge uops in flight each cycle. | EventSel=59H UMask=20H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
PARTIAL_RAT_STALLS.FLAGS_MERGE_UOP_CYCLES | This event counts the number of cycles spent executing performance-sensitive flags-merging uops. For example, shift CL (merge_arith_flags). For more details, See the Intel® 64 and IA-32 Architectures Optimization Reference Manual. | EventSel=59H UMask=20H CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
PARTIAL_RAT_STALLS.MUL_SINGLE_UOP | Multiply packed/scalar single precision uops allocated. | EventSel=59H UMask=80H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
PARTIAL_RAT_STALLS.SLOW_LEA_WINDOW | This event counts the number of cycles with at least one slow LEA uop being allocated. A uop is generally considered a slow LEA if it has three sources (for example, two sources and an immediate) regardless of whether it is a result of an LEA instruction or not. Examples of slow LEA uops are uops with base, index, and offset source operands, uops where the base register is EBP/RBP or R13, and uops using RIP-relative or 16-bit addressing modes. See the Intel® 64 and IA-32 Architectures Optimization Reference Manual for more details about slow LEA instructions. | EventSel=59H UMask=40H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
RESOURCE_STALLS.ANY | Resource-related stall cycles. | EventSel=A2H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
RESOURCE_STALLS.LB | Counts the cycles of stall due to lack of load buffers. | EventSel=A2H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
RESOURCE_STALLS.LB_SB | Resource stalls due to load or store buffers all being in use. | EventSel=A2H UMask=0AH Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
RESOURCE_STALLS.MEM_RS | Resource stalls due to memory buffers or Reservation Station (RS) being fully utilized. | EventSel=A2H UMask=0EH Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
RESOURCE_STALLS.OOO_RSRC | Resource stalls due to the ROB being full, FCSW, MXCSR, and OTHER. | EventSel=A2H UMask=F0H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
RESOURCE_STALLS.ROB | Cycles stalled due to re-order buffer full. | EventSel=A2H UMask=10H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
RESOURCE_STALLS.RS | Cycles stalled due to no eligible RS entry available. | EventSel=A2H UMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
RESOURCE_STALLS.SB | Cycles stalled due to no store buffers available (not including draining from sync). | EventSel=A2H UMask=08H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
RESOURCE_STALLS2.ALL_FL_EMPTY | Cycles when either free list is empty. | EventSel=5BH UMask=0CH Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
RESOURCE_STALLS2.ALL_PRF_CONTROL | Resource stalls due to control structures being full for physical registers. | EventSel=5BH UMask=0FH Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
RESOURCE_STALLS2.BOB_FULL | Cycles when the Allocator is stalled because the BOB (branch order buffer) is full and a new branch needs it. | EventSel=5BH UMask=40H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
RESOURCE_STALLS2.OOO_RSRC | Resource stalls due to out-of-order resources being full. | EventSel=5BH UMask=4FH Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
ROB_MISC_EVENTS.LBR_INSERTS | Counts cases of saving a new LBR record. | EventSel=CCH UMask=20H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
RS_EVENTS.EMPTY_CYCLES | Cycles when Reservation Station (RS) is empty for the thread. | EventSel=5EH UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
RS_EVENTS.EMPTY_END | Counts end of periods where the Reservation Station (RS) was empty. Could be useful to precisely locate Frontend Latency Bound issues. | EventSel=5EH UMask=01H EdgeDetect=1 Invert=1 CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
SIMD_FP_256.PACKED_DOUBLE | Number of AVX-256 Computational FP double precision uops issued this cycle. | EventSel=11H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
SIMD_FP_256.PACKED_SINGLE | Number of AVX-256 Computational FP single precision uops issued this cycle. | EventSel=11H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7
SQ_MISC.SPLIT_LOCK | Split locks in SQ. | EventSel=F4H UMask=10H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
TLB_FLUSH.DTLB_THREAD | DTLB flush attempts of the thread-specific entries. | EventSel=BDH UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
TLB_FLUSH.STLB_ANY | STLB flush attempts. | EventSel=BDH UMask=20H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
UOPS_DISPATCHED.CORE | Uops dispatched from any thread. | EventSel=B1H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
UOPS_DISPATCHED.STALL_CYCLES | Cycles per thread when no uops are dispatched. | EventSel=B1H UMask=01H Invert=1 CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3 |
UOPS_DISPATCHED.THREAD | Uops dispatched per thread. | EventSel=B1H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
UOPS_DISPATCHED_PORT.PORT_0 | Cycles per thread when uops are dispatched to port 0. | EventSel=A1H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
UOPS_DISPATCHED_PORT.PORT_0_CORE | Cycles per core when uops are dispatched to port 0. | EventSel=A1H UMask=01H AnyThread=1 Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
UOPS_DISPATCHED_PORT.PORT_1 | Cycles per thread when uops are dispatched to port 1. | EventSel=A1H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
UOPS_DISPATCHED_PORT.PORT_1_CORE | Cycles per core when uops are dispatched to port 1. | EventSel=A1H UMask=02H AnyThread=1 Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
UOPS_DISPATCHED_PORT.PORT_2 | Cycles per thread when load or STA uops are dispatched to port 2. | EventSel=A1H UMask=0CH Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
UOPS_DISPATCHED_PORT.PORT_2_CORE | Cycles per core when load or STA uops are dispatched to port 2. | EventSel=A1H UMask=0CH AnyThread=1 Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
UOPS_DISPATCHED_PORT.PORT_3 | Cycles per thread when load or STA uops are dispatched to port 3. | EventSel=A1H UMask=30H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
UOPS_DISPATCHED_PORT.PORT_3_CORE | Cycles per core when load or STA uops are dispatched to port 3. | EventSel=A1H UMask=30H AnyThread=1 Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
UOPS_DISPATCHED_PORT.PORT_4 | Cycles per thread when uops are dispatched to port 4. | EventSel=A1H UMask=40H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
UOPS_DISPATCHED_PORT.PORT_4_CORE | Cycles per core when uops are dispatched to port 4. | EventSel=A1H UMask=40H AnyThread=1 Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
UOPS_DISPATCHED_PORT.PORT_5 | Cycles per thread when uops are dispatched to port 5. | EventSel=A1H UMask=80H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
UOPS_DISPATCHED_PORT.PORT_5_CORE | Cycles per core when uops are dispatched to port 5. | EventSel=A1H UMask=80H AnyThread=1 Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
UOPS_EXECUTED.CORE_CYCLES_GE_1 | Cycles when at least 1 micro-op is executed from any thread on the physical core. | EventSel=B1H UMask=02H CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
UOPS_EXECUTED.CORE_CYCLES_GE_2 | Cycles when at least 2 micro-ops are executed from any thread on the physical core. | EventSel=B1H UMask=02H CMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
UOPS_EXECUTED.CORE_CYCLES_GE_3 | Cycles when at least 3 micro-ops are executed from any thread on the physical core. | EventSel=B1H UMask=02H CMask=03H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
UOPS_EXECUTED.CORE_CYCLES_GE_4 | Cycles when at least 4 micro-ops are executed from any thread on the physical core. | EventSel=B1H UMask=02H CMask=04H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
UOPS_EXECUTED.CORE_CYCLES_NONE | Cycles with no micro-ops executed from any thread on the physical core. | EventSel=B1H UMask=02H Invert=1 Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
UOPS_ISSUED.ANY | This event counts the number of uops issued by the front-end of the pipeline to the back-end. | EventSel=0EH UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 |
UOPS_ISSUED.CORE_STALL_CYCLES | Cycles when Resource Allocation Table (RAT) does not issue Uops to Reservation Station (RS) for all threads. | EventSel=0EH UMask=01H AnyThread=1 Invert=1 CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3 |
UOPS_ISSUED.STALL_CYCLES | Cycles when Resource Allocation Table (RAT) does not issue Uops to Reservation Station (RS) for the thread. | EventSel=0EH UMask=01H Invert=1 CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3 |
UOPS_RETIRED.ALL | This event counts the number of micro-ops retired. | EventSel=C2H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 PEBS:[PreciseEventingIP] |
UOPS_RETIRED.ALL_PS | This event counts the number of micro-ops retired. (Precise Event) | EventSel=C2H UMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
UOPS_RETIRED.CORE_STALL_CYCLES | Cycles with no uops actually retired on any thread of the physical core. | EventSel=C2H UMask=01H AnyThread=1 Invert=1 CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
UOPS_RETIRED.RETIRE_SLOTS | This event counts the number of retirement slots used each cycle. There are potentially 4 slots that can be used each cycle - meaning, 4 micro-ops or 4 instructions could retire each cycle. This event is used in determining the 'Retiring' category of the Top-Down pipeline slots characterization. | EventSel=C2H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3,4,5,6,7 PEBS:[PreciseEventingIP] |
UOPS_RETIRED.RETIRE_SLOTS_PS | This event counts the number of retirement slots used each cycle. There are potentially 4 slots that can be used each cycle - meaning, 4 micro-ops or 4 instructions could retire each cycle. This event is used in determining the 'Retiring' category of the Top-Down pipeline slots characterization. (Precise Event - PEBS) | EventSel=C2H UMask=02H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
UOPS_RETIRED.STALL_CYCLES | Cycles with no uops actually retired. | EventSel=C2H UMask=01H Invert=1 CMask=01H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
UOPS_RETIRED.TOTAL_CYCLES | Cycles with fewer than 16 (10H) actually retired uops; since at most 4 uops can retire per cycle, this condition always holds and the event counts total cycles. | EventSel=C2H UMask=01H Invert=1 CMask=10H Counter=0,1,2,3 CounterHTOff=0,1,2,3 PEBS:[PreciseEventingIP] |
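
The EventSel/UMask/CMask/Invert/AnyThread fields listed with each core event map directly onto the IA32_PERFEVTSELx programming layout (event select in bits 7:0, unit mask in bits 15:8, edge detect in bit 18, AnyThread in bit 21, invert in bit 23, counter mask in bits 31:24), and Linux perf accepts that same encoding as a raw event (PERF_TYPE_RAW). Below is a minimal sketch, not a hardened tool, of counting one of the events above (UOPS_RETIRED.RETIRE_SLOTS, EventSel=C2H UMask=02H) for the current process via perf_event_open(2); the busy loop is only a stand-in for a real workload.

```c
/* Minimal sketch: count UOPS_RETIRED.RETIRE_SLOTS (EventSel=C2H, UMask=02H)
 * for the current process using the Linux perf_event_open(2) syscall.
 * Assumes an x86 CPU whose PMU implements this event and a
 * perf_event_paranoid setting that permits user-space counting. */
#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
    return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = PERF_TYPE_RAW;
    /* Raw x86 encoding mirrors IA32_PERFEVTSELx: umask in bits 15:8,
     * event select in bits 7:0; 0x02C2 = UOPS_RETIRED.RETIRE_SLOTS. */
    attr.config = (0x02ULL << 8) | 0xC2ULL;
    attr.disabled = 1;
    attr.exclude_kernel = 1;   /* count user-mode slots only */
    attr.exclude_hv = 1;

    int fd = perf_event_open(&attr, 0 /* this process */, -1 /* any CPU */, -1, 0);
    if (fd < 0) { perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    for (volatile int i = 0; i < 10000000; i++)  /* stand-in workload */
        ;

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    uint64_t slots = 0;
    if (read(fd, &slots, sizeof(slots)) != sizeof(slots)) { perror("read"); return 1; }
    printf("UOPS_RETIRED.RETIRE_SLOTS = %llu\n", (unsigned long long)slots);
    close(fd);
    return 0;
}
```

The CMask/Invert variants above carry their extra bits in the same raw encoding: for example, UOPS_RETIRED.STALL_CYCLES (Invert=1, CMask=01H) would be config = (1ULL << 24) | (1ULL << 23) | 0x01C2.
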
UNCORE | ||
UNC_C_CLOCKTICKS | Uncore Clocks | EventSel=00H UMask=00H Counter=0,1,2,3 |
UNC_C_COUNTER0_OCCUPANCY | Since occupancy counts can only be captured in counter 0 of the Cbo, this event allows a user to capture occupancy-related information by filtering the Cbo occupancy count captured in counter 0. The filtering available is found in the control register: threshold, invert, and edge detect. For example, setting the threshold to 1 can effectively monitor how many cycles the monitored queue has an entry. | EventSel=1FH UMask=00H Counter=1,2,3 |
UNC_C_ISMQ_DRD_MISS_OCC | UNC_C_ISMQ_DRD_MISS_OCC | EventSel=21H UMask=00H Counter=0,1 |
UNC_C_LLC_LOOKUP.DATA_READ | Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that look up the cache multiple times with multiple increments. One must ALWAYS set filter mask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CBoGlCtrl[22:18] bits correspond to [FMESI] state. | EventSel=34H UMask=03H Counter=0,1 |
UNC_C_LLC_LOOKUP.NID | Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that look up the cache multiple times with multiple increments. One must ALWAYS set filter mask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CBoGlCtrl[22:18] bits correspond to [FMESI] state. | EventSel=34H UMask=41H Counter=0,1 |
UNC_C_LLC_LOOKUP.REMOTE_SNOOP | Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that look up the cache multiple times with multiple increments. One must ALWAYS set filter mask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CBoGlCtrl[22:18] bits correspond to [FMESI] state. | EventSel=34H UMask=09H Counter=0,1 |
UNC_C_LLC_LOOKUP.WRITE | Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that look up the cache multiple times with multiple increments. One must ALWAYS set filter mask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CBoGlCtrl[22:18] bits correspond to [FMESI] state. | EventSel=34H UMask=05H Counter=0,1 |
UNC_C_LLC_VICTIMS.E_STATE | Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in. | EventSel=37H UMask=02H Counter=0,1 |
UNC_C_LLC_VICTIMS.M_STATE | Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in. | EventSel=37H UMask=01H Counter=0,1 |
UNC_C_LLC_VICTIMS.MISS | Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in. | EventSel=37H UMask=08H Counter=0,1 |
UNC_C_LLC_VICTIMS.NID | Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in. | EventSel=37H UMask=40H Counter=0,1 |
UNC_C_LLC_VICTIMS.S_STATE | Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in. | EventSel=37H UMask=04H Counter=0,1 |
UNC_C_MISC.RFO_HIT_S | Miscellaneous events in the Cbo. | EventSel=39H UMask=08H Counter=0,1 |
UNC_C_MISC.RSPI_WAS_FSE | Miscellaneous events in the Cbo. | EventSel=39H UMask=01H Counter=0,1 |
UNC_C_MISC.STARTED | Miscellaneous events in the Cbo. | EventSel=39H UMask=04H Counter=0,1 |
UNC_C_MISC.WC_ALIASING | Miscellaneous events in the Cbo. | EventSel=39H UMask=02H Counter=0,1 |
UNC_C_RING_AD_USED.DOWN_EVEN | Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring. | EventSel=1BH UMask=04H Counter=2,3 |
UNC_C_RING_AD_USED.DOWN_ODD | Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring. | EventSel=1BH UMask=08H Counter=2,3 |
UNC_C_RING_AD_USED.UP_EVEN | Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring. | EventSel=1BH UMask=01H Counter=2,3 |
UNC_C_RING_AD_USED.UP_ODD | Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring. | EventSel=1BH UMask=02H Counter=2,3 |
UNC_C_RING_AK_USED.DOWN_EVEN | Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring. | EventSel=1CH UMask=04H Counter=2,3 |
UNC_C_RING_AK_USED.DOWN_ODD | Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring. | EventSel=1CH UMask=08H Counter=2,3 |
UNC_C_RING_AK_USED.UP_EVEN | Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring. | EventSel=1CH UMask=01H Counter=2,3 |
UNC_C_RING_AK_USED.UP_ODD | Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring. | EventSel=1CH UMask=02H Counter=2,3 |
UNC_C_RING_BL_USED.DOWN_EVEN | Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring. | EventSel=1DH UMask=04H Counter=2,3 |
UNC_C_RING_BL_USED.DOWN_ODD | Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring. | EventSel=1DH UMask=08H Counter=2,3 |
UNC_C_RING_BL_USED.UP_EVEN | Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring. | EventSel=1DH UMask=01H Counter=2,3 |
UNC_C_RING_BL_USED.UP_ODD | Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring. | EventSel=1DH UMask=02H Counter=2,3 |
UNC_C_RING_BOUNCES.AK_CORE | Number of LLC responses that bounced on the Ring; Acknowledgements to core. | EventSel=05H UMask=02H Counter=0,1 |
UNC_C_RING_BOUNCES.BL_CORE | Number of LLC responses that bounced on the Ring; Data Responses to core. | EventSel=05H UMask=04H Counter=0,1 |
UNC_C_RING_BOUNCES.IV_CORE | Number of LLC responses that bounced on the Ring; Snoops of processor's cache. | EventSel=05H UMask=08H Counter=0,1 |
UNC_C_RING_IV_USED.ANY | Counts the number of cycles that the IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring in JKT. Therefore, if one wants to monitor the "Even" ring, they should select both UP_EVEN and DN_EVEN. To monitor the "Odd" ring, they should select both UP_ODD and DN_ODD. | EventSel=1EH UMask=0FH Counter=2,3 |
UNC_C_RING_SINK_STARVED.AD_CACHE | UNC_C_RING_SINK_STARVED.AD_CACHE | EventSel=06H UMask=01H Counter=0,1 |
UNC_C_RING_SINK_STARVED.AK_CORE | UNC_C_RING_SINK_STARVED.AK_CORE | EventSel=06H UMask=02H Counter=0,1 |
UNC_C_RING_SINK_STARVED.BL_CORE | UNC_C_RING_SINK_STARVED.BL_CORE | EventSel=06H UMask=04H Counter=0,1 |
UNC_C_RING_SINK_STARVED.IV_CORE | UNC_C_RING_SINK_STARVED.IV_CORE | EventSel=06H UMask=08H Counter=0,1 |
UNC_C_RING_SRC_THRTL | UNC_C_RING_SRC_THRTL | EventSel=07H UMask=00H Counter=0,1 |
UNC_C_RxR_EXT_STARVED.IPQ | Counts cycles in external starvation. This occurs when one of the ingress queues is being starved by the other queues. | EventSel=12H UMask=02H Counter=0,1 |
UNC_C_RxR_EXT_STARVED.IRQ | Counts cycles in external starvation. This occurs when one of the ingress queues is being starved by the other queues. | EventSel=12H UMask=01H Counter=0,1 |
UNC_C_RxR_EXT_STARVED.ISMQ | Counts cycles in external starvation. This occurs when one of the ingress queues is being starved by the other queues. | EventSel=12H UMask=04H Counter=0,1 |
UNC_C_RxR_EXT_STARVED.ISMQ_BIDS | Counts cycles in external starvation. This occurs when one of the ingress queues is being starved by the other queues. | EventSel=12H UMask=08H Counter=0,1 |
UNC_C_RxR_INSERTS.IPQ | Counts number of allocations per cycle into the specified Ingress queue. | EventSel=13H UMask=04H Counter=0,1 |
UNC_C_RxR_INSERTS.IRQ | Counts number of allocations per cycle into the specified Ingress queue. | EventSel=13H UMask=01H Counter=0,1 |
UNC_C_RxR_INSERTS.IRQ_REJECTED | Counts number of allocations per cycle into the specified Ingress queue. | EventSel=13H UMask=02H Counter=0,1 |
UNC_C_RxR_INSERTS.VFIFO | Counts number of allocations per cycle into the specified Ingress queue. | EventSel=13H UMask=10H Counter=0,1 |
UNC_C_RxR_INT_STARVED.IPQ | Counts cycles in internal starvation. This occurs when one (or more) of the entries in the ingress queue are being starved out by other entries in that queue. | EventSel=14H UMask=04H Counter=0,1 |
UNC_C_RxR_INT_STARVED.IRQ | Counts cycles in internal starvation. This occurs when one (or more) of the entries in the ingress queue are being starved out by other entries in that queue. | EventSel=14H UMask=01H Counter=0,1 |
UNC_C_RxR_INT_STARVED.ISMQ | Counts cycles in internal starvation. This occurs when one (or more) of the entries in the ingress queue are being starved out by other entries in that queue. | EventSel=14H UMask=08H Counter=0,1 |
UNC_C_RxR_IPQ_RETRY.ADDR_CONFLICT | Number of times a snoop (probe) request had to retry. Filters exist to cover some of the common retry cases. | EventSel=31H UMask=04H Counter=0,1 |
UNC_C_RxR_IPQ_RETRY.ANY | Number of times a snoop (probe) request had to retry. Filters exist to cover some of the common retry cases. | EventSel=31H UMask=01H Counter=0,1 |
UNC_C_RxR_IPQ_RETRY.FULL | Number of times a snoop (probe) request had to retry. Filters exist to cover some of the common retry cases. | EventSel=31H UMask=02H Counter=0,1 |
UNC_C_RxR_IPQ_RETRY.QPI_CREDITS | Number of times a snoop (probe) request had to retry. Filters exist to cover some of the common retry cases. | EventSel=31H UMask=10H Counter=0,1 |
UNC_C_RxR_IRQ_RETRY.ADDR_CONFLICT | Ingress Request Queue Rejects; Address Conflict | EventSel=32H UMask=04H Counter=0,1 |
UNC_C_RxR_IRQ_RETRY.ANY | Ingress Request Queue Rejects; Any Reject | EventSel=32H UMask=01H Counter=0,1 |
UNC_C_RxR_IRQ_RETRY.FULL | Ingress Request Queue Rejects; No Egress Credits | EventSel=32H UMask=02H Counter=0,1 |
UNC_C_RxR_IRQ_RETRY.QPI_CREDITS | Ingress Request Queue Rejects; No QPI Credits | EventSel=32H UMask=10H Counter=0,1 |
UNC_C_RxR_IRQ_RETRY.RTID | Ingress Request Queue Rejects; No RTIDs | EventSel=32H UMask=08H Counter=0,1 |
UNC_C_RxR_ISMQ_RETRY.ANY | Number of times a transaction flowing through the ISMQ had to retry. Transactions pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. | EventSel=33H UMask=01H Counter=0,1 |
UNC_C_RxR_ISMQ_RETRY.FULL | Number of times a transaction flowing through the ISMQ had to retry. Transactions pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. | EventSel=33H UMask=02H Counter=0,1 |
UNC_C_RxR_ISMQ_RETRY.IIO_CREDITS | Number of times a transaction flowing through the ISMQ had to retry. Transactions pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. | EventSel=33H UMask=20H Counter=0,1 |
UNC_C_RxR_ISMQ_RETRY.QPI_CREDITS | Number of times a transaction flowing through the ISMQ had to retry. Transactions pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. | EventSel=33H UMask=10H Counter=0,1 |
UNC_C_RxR_ISMQ_RETRY.RTID | Number of times a transaction flowing through the ISMQ had to retry. Transactions pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. | EventSel=33H UMask=08H Counter=0,1 |
UNC_C_RxR_OCCUPANCY.IPQ | Counts number of entries in the specified Ingress queue in each cycle. | EventSel=11H UMask=04H Counter=0 |
UNC_C_RxR_OCCUPANCY.IRQ | Counts number of entries in the specified Ingress queue in each cycle. | EventSel=11H UMask=01H Counter=0 |
UNC_C_RxR_OCCUPANCY.IRQ_REJECTED | Counts number of entries in the specified Ingress queue in each cycle. | EventSel=11H UMask=02H Counter=0 |
UNC_C_RxR_OCCUPANCY.VFIFO | Counts number of entries in the specified Ingress queue in each cycle. | EventSel=11H UMask=10H Counter=0 |
UNC_C_TOR_INSERTS.EVICTION | Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). | EventSel=35H UMask=04H Counter=0,1 |
UNC_C_TOR_INSERTS.MISS_ALL | Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). | EventSel=35H UMask=0AH Counter=0,1 |
UNC_C_TOR_INSERTS.MISS_OPCODE | Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). | EventSel=35H UMask=03H Counter=0,1 |
UNC_C_TOR_INSERTS.NID_ALL | Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). | EventSel=35H UMask=48H Counter=0,1 |
UNC_C_TOR_INSERTS.NID_EVICTION | Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). | EventSel=35H UMask=44H Counter=0,1 |
UNC_C_TOR_INSERTS.NID_MISS_ALL | Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). | EventSel=35H UMask=4AH Counter=0,1 |
UNC_C_TOR_INSERTS.NID_MISS_OPCODE | Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). | EventSel=35H UMask=43H Counter=0,1 |
UNC_C_TOR_INSERTS.NID_OPCODE | Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). | EventSel=35H UMask=41H Counter=0,1 |
UNC_C_TOR_INSERTS.NID_WB | Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). | EventSel=35H UMask=50H Counter=0,1 |
UNC_C_TOR_INSERTS.OPCODE | Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). | EventSel=35H UMask=01H Counter=0,1 |
UNC_C_TOR_INSERTS.WB | Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182). | EventSel=35H UMask=10H Counter=0,1 |
UNC_C_TOR_OCCUPANCY.ALL | For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182) | EventSel=36H UMask=08H Counter=0 |
UNC_C_TOR_OCCUPANCY.EVICTION | For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182) | EventSel=36H UMask=04H Counter=0 |
UNC_C_TOR_OCCUPANCY.MISS_ALL | For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182) | EventSel=36H UMask=0AH Counter=0 |
UNC_C_TOR_OCCUPANCY.MISS_OPCODE | For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182) | EventSel=36H UMask=03H Counter=0 |
UNC_C_TOR_OCCUPANCY.NID_ALL | For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182) | EventSel=36H UMask=48H Counter=0 |
UNC_C_TOR_OCCUPANCY.NID_EVICTION | For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182) | EventSel=36H UMask=44H Counter=0 |
UNC_C_TOR_OCCUPANCY.NID_MISS_ALL | For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182) | EventSel=36H UMask=4AH Counter=0 |
UNC_C_TOR_OCCUPANCY.NID_MISS_OPCODE | For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182) | EventSel=36H UMask=43H Counter=0 |
UNC_C_TOR_OCCUPANCY.NID_OPCODE | For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182) | EventSel=36H UMask=41H Counter=0 |
UNC_C_TOR_OCCUPANCY.OPCODE | For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182) | EventSel=36H UMask=01H Counter=0 |
UNC_C_TxR_ADS_USED | UNC_C_TxR_ADS_USED | EventSel=04H UMask=00H Counter=0,1 |
UNC_C_TxR_INSERTS.AD_CACHE | Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring. | EventSel=02H UMask=01H Counter=0,1 |
UNC_C_TxR_INSERTS.AD_CORE | Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring. | EventSel=02H UMask=10H Counter=0,1 |
UNC_C_TxR_INSERTS.AK_CACHE | Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring. | EventSel=02H UMask=02H Counter=0,1 |
UNC_C_TxR_INSERTS.AK_CORE | Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring. | EventSel=02H UMask=20H Counter=0,1 |
UNC_C_TxR_INSERTS.BL_CACHE | Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring. | EventSel=02H UMask=04H Counter=0,1 |
UNC_C_TxR_INSERTS.BL_CORE | Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring. | EventSel=02H UMask=40H Counter=0,1 |
UNC_C_TxR_INSERTS.IV_CACHE | Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring. | EventSel=02H UMask=08H Counter=0,1 |
UNC_C_TxR_STARVED.AK | Counts injection starvation. This starvation is triggered when the Egress cannot send a transaction onto the ring for a long period of time. | EventSel=03H UMask=02H Counter=0,1 |
UNC_C_TxR_STARVED.BL | Counts injection starvation. This starvation is triggered when the Egress cannot send a transaction onto the ring for a long period of time. | EventSel=03H UMask=04H Counter=0,1 |
UNC_H_ADDR_OPC_MATCH.FILT | QPI Address/Opcode Match; Address & Opcode Match | EventSel=20H UMask=03H Counter=0,1,2,3 |
UNC_H_BYPASS_IMC.NOT_TAKEN | Counts the number of times a bypass of the HA was attempted. This is a latency optimization for situations when there is light loading on the memory subsystem. This can be filtered by whether the bypass was taken or not. | EventSel=14H UMask=02H Counter=0,1,2,3 |
UNC_H_BYPASS_IMC.TAKEN | Counts the number of times a bypass of the HA was attempted. This is a latency optimization for situations when there is light loading on the memory subsystem. This can be filtered by whether the bypass was taken or not. | EventSel=14H UMask=01H Counter=0,1,2,3 |
UNC_H_CLOCKTICKS | Counts the number of uclks in the HA. This will be slightly different than the count in the Ubox because of enable/freeze delays. The HA is on the other side of the die from the fixed Ubox uclk counter, so the drift could be somewhat larger than in units that are closer like the QPI Agent. | EventSel=00H UMask=00H Counter=0,1,2,3 |
UNC_H_CONFLICT_CYCLES.CONFLICT | Conflict Checks; Conflict Detected | EventSel=0BH UMask=02H Counter=0,1,2,3 |
UNC_H_CONFLICT_CYCLES.NO_CONFLICT | Conflict Checks; No Conflict | EventSel=0BH UMask=01H Counter=0,1,2,3 |
UNC_H_DIRECT2CORE_COUNT | Number of Direct2Core messages sent | EventSel=11H UMask=00H Counter=0,1,2,3 |
UNC_H_DIRECT2CORE_CYCLES_DISABLED | Number of cycles in which Direct2Core was disabled | EventSel=12H UMask=00H Counter=0,1,2,3 |
UNC_H_DIRECT2CORE_TXN_OVERRIDE | Number of reads where Direct2Core was overridden. | EventSel=13H UMask=00H Counter=0,1,2,3 |
UNC_H_DIRECTORY_LOOKUP.NO_SNP | Counts the number of transactions that looked up the directory. Can be filtered by requests that had to snoop and those that did not have to. | EventSel=0CH UMask=02H Counter=0,1,2,3 |
UNC_H_DIRECTORY_LOOKUP.SNP | Counts the number of transactions that looked up the directory. Can be filtered by requests that had to snoop and those that did not have to. | EventSel=0CH UMask=01H Counter=0,1,2,3 |
UNC_H_DIRECTORY_UPDATE.ANY | Counts the number of directory updates that were required. These result in writes to the memory controller. This can be filtered by directory sets and directory clears. | EventSel=0DH UMask=03H Counter=0,1,2,3 |
UNC_H_DIRECTORY_UPDATE.CLEAR | Counts the number of directory updates that were required. These result in writes to the memory controller. This can be filtered by directory sets and directory clears. | EventSel=0DH UMask=02H Counter=0,1,2,3 |
UNC_H_DIRECTORY_UPDATE.SET | Counts the number of directory updates that were required. These result in writes to the memory controller. This can be filtered by directory sets and directory clears. | EventSel=0DH UMask=01H Counter=0,1,2,3 |
UNC_H_IGR_NO_CREDIT_CYCLES.AD_QPI0 | Counts the number of cycles when the HA does not have credits to send messages to the QPI Agent. This can be filtered by the different credit pools and the different links. | EventSel=22H UMask=01H Counter=0,1,2,3 |
UNC_H_IGR_NO_CREDIT_CYCLES.AD_QPI1 | Counts the number of cycles when the HA does not have credits to send messages to the QPI Agent. This can be filtered by the different credit pools and the different links. | EventSel=22H UMask=02H Counter=0,1,2,3 |
UNC_H_IGR_NO_CREDIT_CYCLES.BL_QPI0 | Counts the number of cycles when the HA does not have credits to send messages to the QPI Agent. This can be filtered by the different credit pools and the different links. | EventSel=22H UMask=04H Counter=0,1,2,3 |
UNC_H_IGR_NO_CREDIT_CYCLES.BL_QPI1 | Counts the number of cycles when the HA does not have credits to send messages to the QPI Agent. This can be filtered by the different credit pools and the different links. | EventSel=22H UMask=08H Counter=0,1,2,3 |
UNC_H_IMC_RETRY | Retry Events | EventSel=1EH UMask=00H Counter=0,1,2,3 |
UNC_H_IMC_WRITES.ALL | Counts the total number of full line writes issued from the HA into the memory controller. This counts for all four channels. It can be filtered by full/partial and ISOCH/non-ISOCH. | EventSel=1AH UMask=0FH Counter=0,1,2,3 |
UNC_H_IMC_WRITES.FULL | Counts the total number of full line writes issued from the HA into the memory controller. This counts for all four channels. It can be filtered by full/partial and ISOCH/non-ISOCH. | EventSel=1AH UMask=01H Counter=0,1,2,3 |
UNC_H_IMC_WRITES.FULL_ISOCH | Counts the total number of full line writes issued from the HA into the memory controller. This counts for all four channels. It can be filtered by full/partial and ISOCH/non-ISOCH. | EventSel=1AH UMask=04H Counter=0,1,2,3 |
UNC_H_IMC_WRITES.PARTIAL | Counts the total number of full line writes issued from the HA into the memory controller. This counts for all four channels. It can be filtered by full/partial and ISOCH/non-ISOCH. | EventSel=1AH UMask=02H Counter=0,1,2,3 |
UNC_H_IMC_WRITES.PARTIAL_ISOCH | Counts the total number of full line writes issued from the HA into the memory controller. This counts for all four channels. It can be filtered by full/partial and ISOCH/non-ISOCH. | EventSel=1AH UMask=08H Counter=0,1,2,3 |
UNC_H_REQUESTS.READS | Counts the total number of read requests made into the Home Agent. Reads include all read opcodes (including RFO). | EventSel=01H UMask=03H Counter=0,1,2,3 |
UNC_H_REQUESTS.WRITES | Counts the total number of write requests made into the Home Agent. Writes include all writes (streaming, evictions, HitM, etc.). | EventSel=01H UMask=0CH Counter=0,1,2,3 |
UNC_H_RING_AD_USED.CCW_EVEN | Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=3EH UMask=04H Counter=0,1,2,3 |
UNC_H_RING_AD_USED.CCW_ODD | Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=3EH UMask=08H Counter=0,1,2,3 |
UNC_H_RING_AD_USED.CW_EVEN | Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=3EH UMask=01H Counter=0,1,2,3 |
UNC_H_RING_AD_USED.CW_ODD | Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=3EH UMask=02H Counter=0,1,2,3 |
UNC_H_RING_AK_USED.CCW_EVEN | Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=3FH UMask=04H Counter=0,1,2,3 |
UNC_H_RING_AK_USED.CCW_ODD | Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=3FH UMask=08H Counter=0,1,2,3 |
UNC_H_RING_AK_USED.CW_EVEN | Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=3FH UMask=01H Counter=0,1,2,3 |
UNC_H_RING_AK_USED.CW_ODD | Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=3FH UMask=02H Counter=0,1,2,3 |
UNC_H_RING_BL_USED.CCW_EVEN | Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=40H UMask=04H Counter=0,1,2,3 |
UNC_H_RING_BL_USED.CCW_ODD | Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=40H UMask=08H Counter=0,1,2,3 |
UNC_H_RING_BL_USED.CW_EVEN | Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=40H UMask=01H Counter=0,1,2,3 |
UNC_H_RING_BL_USED.CW_ODD | Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=40H UMask=02H Counter=0,1,2,3 |
UNC_H_RPQ_CYCLES_NO_REG_CREDITS.CHN0 | Counts the number of cycles when there are no "regular" credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and "special" requests such as ISOCH reads. This count only tracks the regular credits. Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. | EventSel=15H UMask=01H Counter=0,1,2,3 |
UNC_H_RPQ_CYCLES_NO_REG_CREDITS.CHN1 | Counts the number of cycles when there are no "regular" credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and "special" requests such as ISOCH reads. This count only tracks the regular credits. Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. | EventSel=15H UMask=02H Counter=0,1,2,3 |
UNC_H_RPQ_CYCLES_NO_REG_CREDITS.CHN2 | Counts the number of cycles when there are no "regular" credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and "special" requests such as ISOCH reads. This count only tracks the regular credits. Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. | EventSel=15H UMask=04H Counter=0,1,2,3 |
UNC_H_RPQ_CYCLES_NO_REG_CREDITS.CHN3 | Counts the number of cycles when there are no "regular" credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and "special" requests such as ISOCH reads. This count only tracks the regular credits. Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. | EventSel=15H UMask=08H Counter=0,1,2,3 |
UNC_H_RPQ_CYCLES_NO_SPEC_CREDITS.CHN0 | Counts the number of cycles when there are no "special" credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and "special" requests such as ISOCH reads. This count only tracks the "special" credits. This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. | EventSel=16H UMask=01H Counter=0,1,2,3 |
UNC_H_RPQ_CYCLES_NO_SPEC_CREDITS.CHN1 | Counts the number of cycles when there are no "special" credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and "special" requests such as ISOCH reads. This count only tracks the "special" credits. This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. | EventSel=16H UMask=02H Counter=0,1,2,3 |
UNC_H_RPQ_CYCLES_NO_SPEC_CREDITS.CHN2 | Counts the number of cycles when there are no "special" credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and "special" requests such as ISOCH reads. This count only tracks the "special" credits. This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. | EventSel=16H UMask=04H Counter=0,1,2,3 |
UNC_H_RPQ_CYCLES_NO_SPEC_CREDITS.CHN3 | Counts the number of cycles when there are no "special" credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and "special" requests such as ISOCH reads. This count only tracks the "special" credits. This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. | EventSel=16H UMask=08H Counter=0,1,2,3 |
UNC_H_TAD_REQUESTS_G0.REGION0 | Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for "Monroe" systems that use the TAD to enable individual channels to enter self-refresh to save power. | EventSel=1BH UMask=01H Counter=0,1,2,3 |
UNC_H_TAD_REQUESTS_G0.REGION1 | Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for "Monroe" systems that use the TAD to enable individual channels to enter self-refresh to save power. | EventSel=1BH UMask=02H Counter=0,1,2,3 |
UNC_H_TAD_REQUESTS_G0.REGION2 | Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for "Monroe" systems that use the TAD to enable individual channels to enter self-refresh to save power. | EventSel=1BH UMask=04H Counter=0,1,2,3 |
UNC_H_TAD_REQUESTS_G0.REGION3 | Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for "Monroe" systems that use the TAD to enable individual channels to enter self-refresh to save power. | EventSel=1BH UMask=08H Counter=0,1,2,3 |
UNC_H_TAD_REQUESTS_G0.REGION4 | Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for "Monroe" systems that use the TAD to enable individual channels to enter self-refresh to save power. | EventSel=1BH UMask=10H Counter=0,1,2,3 |
UNC_H_TAD_REQUESTS_G0.REGION5 | Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for "Monroe" systems that use the TAD to enable individual channels to enter self-refresh to save power. | EventSel=1BH UMask=20H Counter=0,1,2,3 |
UNC_H_TAD_REQUESTS_G0.REGION6 | Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for "Monroe" systems that use the TAD to enable individual channels to enter self-refresh to save power. | EventSel=1BH UMask=40H Counter=0,1,2,3 |
UNC_H_TAD_REQUESTS_G0.REGION7 | Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for "Monroe" systems that use the TAD to enable individual channels to enter self-refresh to save power. | EventSel=1BH UMask=80H Counter=0,1,2,3 |
UNC_H_TAD_REQUESTS_G1.REGION10 | Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 8 to 10. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for "Monroe" systems that use the TAD to enable individual channels to enter self-refresh to save power. | EventSel=1CH UMask=04H Counter=0,1,2,3 |
UNC_H_TAD_REQUESTS_G1.REGION11 | Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 8 to 10. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for "Monroe" systems that use the TAD to enable individual channels to enter self-refresh to save power. | EventSel=1CH UMask=08H Counter=0,1,2,3 |
UNC_H_TAD_REQUESTS_G1.REGION8 | Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 8 to 10. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for "Monroe" systems that use the TAD to enable individual channels to enter self-refresh to save power. | EventSel=1CH UMask=01H Counter=0,1,2,3 |
UNC_H_TAD_REQUESTS_G1.REGION9 | Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 8 to 10. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for "Monroe" systems that use the TAD to enable individual channels to enter self-refresh to save power. | EventSel=1CH UMask=02H Counter=0,1,2,3 |
UNC_H_TRACKER_INSERTS.ALL | Counts the number of allocations into the local HA tracker pool. This can be used in conjunction with the occupancy accumulation event in order to calculate average latency. One cannot filter between reads and writes. HA trackers are allocated as soon as a request enters the HA and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring. | EventSel=06H UMask=03H Counter=0,1,2,3 |
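The average latency mentioned above is a straightforward Little's Law ratio. A minimal sketch in Python, assuming the companion occupancy accumulation event (referred to here as UNC_H_TRACKER_OCCUPANCY, a name that does not appear in this excerpt) has been collected over the same interval:

```python
# Hedged sketch: average HA tracker residency from an occupancy
# accumulation and an insert count (Little's Law). The occupancy event
# name mentioned in the lead-in is an assumption, not from this table.
def avg_tracker_latency_cycles(occupancy_sum: int, inserts: int) -> float:
    """Average cycles a request holds a tracker entry: occupancy / inserts."""
    return occupancy_sum / inserts if inserts else 0.0

# e.g. 1,200,000 accumulated entries over 10,000 inserts -> 120 cycles each
print(avg_tracker_latency_cycles(1_200_000, 10_000))
```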
UNC_H_TxR_AD.NDR | Counts the number of outbound transactions on the AD ring. This can be filtered by the NDR and SNP message classes. See the filter descriptions for more details. | EventSel=0FH UMask=01H Counter=0,1,2,3 |
UNC_H_TxR_AD.SNP | Counts the number of outbound transactions on the AD ring. This can be filtered by the NDR and SNP message classes. See the filter descriptions for more details. | EventSel=0FH UMask=02H Counter=0,1,2,3 |
UNC_H_TxR_AD_CYCLES_FULL.ALL | AD Egress Full | EventSel=2AH UMask=03H Counter=0,1,2,3 |
UNC_H_TxR_AD_CYCLES_FULL.SCHED0 | AD Egress Full | EventSel=2AH UMask=01H Counter=0,1,2,3 |
UNC_H_TxR_AD_CYCLES_FULL.SCHED1 | AD Egress Full | EventSel=2AH UMask=02H Counter=0,1,2,3 |
UNC_H_TxR_AD_CYCLES_NE.ALL | AD Egress Not Empty | EventSel=29H UMask=03H Counter=0,1,2,3 |
UNC_H_TxR_AD_CYCLES_NE.SCHED0 | AD Egress Not Empty | EventSel=29H UMask=01H Counter=0,1,2,3 |
UNC_H_TxR_AD_CYCLES_NE.SCHED1 | AD Egress Not Empty | EventSel=29H UMask=02H Counter=0,1,2,3 |
UNC_H_TxR_AD_INSERTS.ALL | AD Egress Allocations | EventSel=27H UMask=03H Counter=0,1,2,3 |
UNC_H_TxR_AD_INSERTS.SCHED0 | AD Egress Allocations | EventSel=27H UMask=01H Counter=0,1,2,3 |
UNC_H_TxR_AD_INSERTS.SCHED1 | AD Egress Allocations | EventSel=27H UMask=02H Counter=0,1,2,3 |
UNC_H_TxR_AD_OCCUPANCY.ALL | AD Egress Occupancy | EventSel=28H UMask=03H Counter=0,1,2,3 |
UNC_H_TxR_AD_OCCUPANCY.SCHED0 | AD Egress Occupancy | EventSel=28H UMask=01H Counter=0,1,2,3 |
UNC_H_TxR_AD_OCCUPANCY.SCHED1 | AD Egress Occupancy | EventSel=28H UMask=02H Counter=0,1,2,3 |
UNC_H_TxR_AK_CYCLES_FULL.ALL | AK Egress Full | EventSel=32H UMask=03H Counter=0,1,2,3 |
UNC_H_TxR_AK_CYCLES_FULL.SCHED0 | AK Egress Full | EventSel=32H UMask=01H Counter=0,1,2,3 |
UNC_H_TxR_AK_CYCLES_FULL.SCHED1 | AK Egress Full | EventSel=32H UMask=02H Counter=0,1,2,3 |
UNC_H_TxR_AK_CYCLES_NE.ALL | AK Egress Not Empty | EventSel=31H UMask=03H Counter=0,1,2,3 |
UNC_H_TxR_AK_CYCLES_NE.SCHED0 | AK Egress Not Empty | EventSel=31H UMask=01H Counter=0,1,2,3 |
UNC_H_TxR_AK_CYCLES_NE.SCHED1 | AK Egress Not Empty | EventSel=31H UMask=02H Counter=0,1,2,3 |
UNC_H_TxR_AK_INSERTS.ALL | AK Egress Allocations | EventSel=2FH UMask=03H Counter=0,1,2,3 |
UNC_H_TxR_AK_INSERTS.SCHED0 | AK Egress Allocations | EventSel=2FH UMask=01H Counter=0,1,2,3 |
UNC_H_TxR_AK_INSERTS.SCHED1 | AK Egress Allocations | EventSel=2FH UMask=02H Counter=0,1,2,3 |
UNC_H_TxR_AK_NDR | Counts the number of outbound NDR transactions sent on the AK ring. NDR stands for "non-data response" and is generally used for completions that do not include data. AK NDR is used for messages to the local socket. | EventSel=0EH UMask=00H Counter=0,1,2,3 |
UNC_H_TxR_AK_OCCUPANCY.ALL | AK Egress Occupancy | EventSel=30H UMask=03H Counter=0,1,2,3 |
UNC_H_TxR_AK_OCCUPANCY.SCHED0 | AK Egress Occupancy | EventSel=30H UMask=01H Counter=0,1,2,3 |
UNC_H_TxR_AK_OCCUPANCY.SCHED1 | AK Egress Occupancy | EventSel=30H UMask=02H Counter=0,1,2,3 |
UNC_H_TxR_BL.DRS_CACHE | Counts the number of DRS messages sent out on the BL ring. This can be filtered by the destination. | EventSel=10H UMask=01H Counter=0,1,2,3 |
UNC_H_TxR_BL.DRS_CORE | Counts the number of DRS messages sent out on the BL ring. This can be filtered by the destination. | EventSel=10H UMask=02H Counter=0,1,2,3 |
UNC_H_TxR_BL.DRS_QPI | Counts the number of DRS messages sent out on the BL ring. This can be filtered by the destination. | EventSel=10H UMask=04H Counter=0,1,2,3 |
UNC_H_TxR_BL_CYCLES_FULL.ALL | BL Egress Full | EventSel=36H UMask=03H Counter=0,1,2,3 |
UNC_H_TxR_BL_CYCLES_FULL.SCHED0 | BL Egress Full | EventSel=36H UMask=01H Counter=0,1,2,3 |
UNC_H_TxR_BL_CYCLES_FULL.SCHED1 | BL Egress Full | EventSel=36H UMask=02H Counter=0,1,2,3 |
UNC_H_TxR_BL_CYCLES_NE.ALL | BL Egress Not Empty | EventSel=35H UMask=03H Counter=0,1,2,3 |
UNC_H_TxR_BL_CYCLES_NE.SCHED0 | BL Egress Not Empty | EventSel=35H UMask=01H Counter=0,1,2,3 |
UNC_H_TxR_BL_CYCLES_NE.SCHED1 | BL Egress Not Empty | EventSel=35H UMask=02H Counter=0,1,2,3 |
UNC_H_TxR_BL_INSERTS.ALL | BL Egress Allocations | EventSel=33H UMask=03H Counter=0,1,2,3 |
UNC_H_TxR_BL_INSERTS.SCHED0 | BL Egress Allocations | EventSel=33H UMask=01H Counter=0,1,2,3 |
UNC_H_TxR_BL_INSERTS.SCHED1 | BL Egress Allocations | EventSel=33H UMask=02H Counter=0,1,2,3 |
UNC_H_TxR_BL_OCCUPANCY.ALL | BL Egress Occupancy | EventSel=34H UMask=03H Counter=0,1,2,3 |
UNC_H_TxR_BL_OCCUPANCY.SCHED0 | BL Egress Occupancy | EventSel=34H UMask=01H Counter=0,1,2,3 |
UNC_H_TxR_BL_OCCUPANCY.SCHED1 | BL Egress Occupancy | EventSel=34H UMask=02H Counter=0,1,2,3 |
UNC_H_WPQ_CYCLES_NO_REG_CREDITS.CHN0 | Counts the number of cycles when there are no "regular" credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and "special" requests such as ISOCH writes. This count only tracks the regular credits. Common high-bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. | EventSel=18H UMask=01H Counter=0,1,2,3 |
UNC_H_WPQ_CYCLES_NO_REG_CREDITS.CHN1 | Counts the number of cycles when there are no "regular" credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and "special" requests such as ISOCH writes. This count only tracks the regular credits. Common high-bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. | EventSel=18H UMask=02H Counter=0,1,2,3 |
UNC_H_WPQ_CYCLES_NO_REG_CREDITS.CHN2 | Counts the number of cycles when there are no "regular" credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and "special" requests such as ISOCH writes. This count only tracks the regular credits. Common high-bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. | EventSel=18H UMask=04H Counter=0,1,2,3 |
UNC_H_WPQ_CYCLES_NO_REG_CREDITS.CHN3 | Counts the number of cycles when there are no "regular" credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and "special" requests such as ISOCH writes. This count only tracks the regular credits. Common high-bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. | EventSel=18H UMask=08H Counter=0,1,2,3 |
UNC_H_WPQ_CYCLES_NO_SPEC_CREDITS.CHN0 | Counts the number of cycles when there are no "special" credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and "special" requests such as ISOCH writes. This count only tracks the "special" credits. This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. | EventSel=19H UMask=01H Counter=0,1,2,3 |
UNC_H_WPQ_CYCLES_NO_SPEC_CREDITS.CHN1 | Counts the number of cycles when there are no "special" credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and "special" requests such as ISOCH writes. This count only tracks the "special" credits. This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. | EventSel=19H UMask=02H Counter=0,1,2,3 |
UNC_H_WPQ_CYCLES_NO_SPEC_CREDITS.CHN2 | Counts the number of cycles when there are no "special" credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and "special" requests such as ISOCH writes. This count only tracks the "special" credits. This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. | EventSel=19H UMask=04H Counter=0,1,2,3 |
UNC_H_WPQ_CYCLES_NO_SPEC_CREDITS.CHN3 | Counts the number of cycles when there are no "special" credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and "special" requests such as ISOCH writes. This count only tracks the "special" credits. This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH. One can filter based on the memory controller channel. One or more channels can be tracked at a given time. | EventSel=19H UMask=08H Counter=0,1,2,3 |
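To turn the credit-starvation events above into a rate, one can normalize by an uncore cycle count gathered in the same run. A hedged sketch, assuming a home-agent clockticks event (UNC_H_CLOCKTICKS, not shown in this excerpt) is available as the normalizer:

```python
# Sketch: fraction of HA cycles with no iMC WPQ (or RPQ) credits on a
# channel. UNC_H_CLOCKTICKS is assumed to exist; only the
# credit-starvation counts come from the events above.
def starved_fraction(no_credit_cycles: int, ha_clockticks: int) -> float:
    return no_credit_cycles / ha_clockticks if ha_clockticks else 0.0
```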
UNC_M_ACT_COUNT | Counts the number of DRAM Activate commands sent on this channel. Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates. | EventSel=01H UMask=00H Counter=0,1,2,3 |
UNC_M_CAS_COUNT.ALL | DRAM RD_CAS and WR_CAS Commands | EventSel=04H UMask=0FH Counter=0,1,2,3 |
UNC_M_CAS_COUNT.RD | DRAM RD_CAS and WR_CAS Commands | EventSel=04H UMask=03H Counter=0,1,2,3 |
UNC_M_CAS_COUNT.RD_REG | DRAM RD_CAS and WR_CAS Commands | EventSel=04H UMask=01H Counter=0,1,2,3 |
UNC_M_CAS_COUNT.RD_UNDERFILL | DRAM RD_CAS and WR_CAS Commands | EventSel=04H UMask=02H Counter=0,1,2,3 |
UNC_M_CAS_COUNT.WR | DRAM RD_CAS and WR_CAS Commands | EventSel=04H UMask=0CH Counter=0,1,2,3 |
UNC_M_CAS_COUNT.WR_RMM | DRAM RD_CAS and WR_CAS Commands | EventSel=04H UMask=08H Counter=0,1,2,3 |
UNC_M_CAS_COUNT.WR_WMM | DRAM RD_CAS and WR_CAS Commands | EventSel=04H UMask=04H Counter=0,1,2,3 |
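The CAS counts above are the usual basis for estimating DRAM channel bandwidth. A hedged sketch, assuming each CAS transfers one 64-byte cache line (the standard DDR3 burst-of-8 on a 64-bit channel; the table itself does not state the transfer size):

```python
# Sketch: approximate channel bandwidth from UNC_M_CAS_COUNT.RD and
# UNC_M_CAS_COUNT.WR. BYTES_PER_CAS is an assumption (64B cache line),
# and elapsed_s must be measured separately (e.g., wall-clock time).
BYTES_PER_CAS = 64

def channel_bandwidth_gbs(rd_cas: int, wr_cas: int, elapsed_s: float) -> float:
    return (rd_cas + wr_cas) * BYTES_PER_CAS / elapsed_s / 1e9

# 50M reads + 20M writes in one second is roughly 4.48 GB/s
print(channel_bandwidth_gbs(50_000_000, 20_000_000, 1.0))
```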
UNC_M_CLOCKTICKS | Uncore Fixed Counter - uclks | EventSel=00H UMask=00H Counter=0,1,2,3 |
UNC_M_DRAM_PRE_ALL | Counts the number of times that the precharge all command was sent. | EventSel=06H UMask=00H Counter=0,1,2,3 |
UNC_M_DRAM_REFRESH.HIGH | Counts the number of refreshes issued. | EventSel=05H UMask=04H Counter=0,1,2,3 |
UNC_M_DRAM_REFRESH.PANIC | Counts the number of refreshes issued. | EventSel=05H UMask=02H Counter=0,1,2,3 |
UNC_M_ECC_CORRECTABLE_ERRORS | Counts the number of ECC errors detected and corrected by the iMC on this channel. This counter is only useful with ECC DRAM devices. This count will increment one time for each correction regardless of the number of bits corrected. The iMC can correct up to 4-bit errors in independent channel mode and 8-bit errors in lockstep mode. | EventSel=09H UMask=00H Counter=0,1,2,3 |
UNC_M_MAJOR_MODES.ISOCH | Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel. Major modes are channel-wide, not per-rank (or DIMM or bank) modes. | EventSel=07H UMask=08H Counter=0,1,2,3 |
UNC_M_MAJOR_MODES.PARTIAL | Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel. Major modes are channel-wide, not per-rank (or DIMM or bank) modes. | EventSel=07H UMask=04H Counter=0,1,2,3 |
UNC_M_MAJOR_MODES.READ | Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel. Major modes are channel-wide, not per-rank (or DIMM or bank) modes. | EventSel=07H UMask=01H Counter=0,1,2,3 |
UNC_M_MAJOR_MODES.WRITE | Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel. Major modes are channel-wide, not per-rank (or DIMM or bank) modes. | EventSel=07H UMask=02H Counter=0,1,2,3 |
UNC_M_POWER_CHANNEL_DLLOFF | Number of cycles when all the ranks in the channel are in CKE Slow (DLLOFF) mode. | EventSel=84H UMask=00H Counter=0,1,2,3 |
UNC_M_POWER_CHANNEL_PPD | Number of cycles when all the ranks in the channel are in PPD mode. If IBT=off is enabled, then this can be used to count those cycles. If it is not enabled, then this can count the number of cycles when that could have been taken advantage of. | EventSel=85H UMask=00H Counter=0,1,2,3 |
UNC_M_POWER_CKE_CYCLES.RANK0 | Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary). | EventSel=83H UMask=01H Counter=0,1,2,3 |
UNC_M_POWER_CKE_CYCLES.RANK1 | Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary). | EventSel=83H UMask=02H Counter=0,1,2,3 |
UNC_M_POWER_CKE_CYCLES.RANK2 | Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary). | EventSel=83H UMask=04H Counter=0,1,2,3 |
UNC_M_POWER_CKE_CYCLES.RANK3 | Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary). | EventSel=83H UMask=08H Counter=0,1,2,3 |
UNC_M_POWER_CKE_CYCLES.RANK4 | Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary). | EventSel=83H UMask=10H Counter=0,1,2,3 |
UNC_M_POWER_CKE_CYCLES.RANK5 | Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary). | EventSel=83H UMask=20H Counter=0,1,2,3 |
UNC_M_POWER_CKE_CYCLES.RANK6 | Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary). | EventSel=83H UMask=40H Counter=0,1,2,3 |
UNC_M_POWER_CKE_CYCLES.RANK7 | Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary). | EventSel=83H UMask=80H Counter=0,1,2,3 |
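As the description notes, Invert is the usual way to count cycles outside CKE ON. Equivalent post-processing can be done in software when UNC_M_CLOCKTICKS was collected in the same interval; a minimal sketch:

```python
# Sketch: cycles a rank spent outside CKE ON (i.e., in a power-saving
# CKE mode), derived by subtraction instead of the hardware Invert bit.
# Assumes both counts cover the same measurement interval.
def cke_off_cycles(m_clockticks: int, cke_on_cycles: int) -> int:
    return m_clockticks - cke_on_cycles
```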
UNC_M_POWER_CRITICAL_THROTTLE_CYCLES | Counts the number of cycles when the iMC is in critical thermal throttling. When this happens, all traffic is blocked. This should be rare unless something bad is going on in the platform. There is no filtering by rank for this event. | EventSel=86H UMask=00H Counter=0,1,2,3 |
UNC_M_POWER_SELF_REFRESH | Counts the number of cycles when the iMC is in self-refresh and the iMC still has a clock. This happens in some package C-states. For example, the PCU may ask the iMC to enter self-refresh even though some of the cores are still processing. One use of this is for Monroe technology. Self-refresh is required during package C3 and C6, but there is no clock in the iMC at this time, so it is not possible to count these cases. | EventSel=43H UMask=00H Counter=0,1,2,3 |
UNC_M_POWER_THROTTLE_CYCLES.RANK0 | Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1. | EventSel=41H UMask=01H Counter=0,1,2,3 |
UNC_M_POWER_THROTTLE_CYCLES.RANK1 | Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1. | EventSel=41H UMask=02H Counter=0,1,2,3 |
UNC_M_POWER_THROTTLE_CYCLES.RANK2 | Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1. | EventSel=41H UMask=04H Counter=0,1,2,3 |
UNC_M_POWER_THROTTLE_CYCLES.RANK3 | Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1. | EventSel=41H UMask=08H Counter=0,1,2,3 |
UNC_M_POWER_THROTTLE_CYCLES.RANK4 | Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1. | EventSel=41H UMask=10H Counter=0,1,2,3 |
UNC_M_POWER_THROTTLE_CYCLES.RANK5 | Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1. | EventSel=41H UMask=20H Counter=0,1,2,3 |
UNC_M_POWER_THROTTLE_CYCLES.RANK6 | Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1. | EventSel=41H UMask=40H Counter=0,1,2,3 |
UNC_M_POWER_THROTTLE_CYCLES.RANK7 | Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1. | EventSel=41H UMask=80H Counter=0,1,2,3 |
UNC_M_PRE_COUNT.PAGE_CLOSE | Counts the number of DRAM Precharge commands sent on this channel. | EventSel=02H UMask=02H Counter=0,1,2,3 |
UNC_M_PRE_COUNT.PAGE_MISS | Counts the number of DRAM Precharge commands sent on this channel. | EventSel=02H UMask=01H Counter=0,1,2,3 |
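The UNC_M_ACT_COUNT entry above describes deriving a page statistic by subtracting page-miss precharges from activates. A hedged sketch of that arithmetic, plus a commonly used page-hit ratio that the table does not itself state (each CAS either hits the open page or required an activate):

```python
# Sketch of page-policy metrics for one channel. The subtraction is the
# derivation given under UNC_M_ACT_COUNT; the hit-rate formula is a
# common convention and an assumption here, not taken from the table.
def page_metrics(act: int, pre_page_miss: int, cas_all: int) -> tuple[int, float]:
    derived = act - pre_page_miss                          # per ACT_COUNT text
    hit_rate = (1.0 - act / cas_all) if cas_all else 0.0   # assumption
    return derived, hit_rate
```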
UNC_M_PREEMPTION.RD_PREEMPT_RD | Counts the number of times a read in the iMC preempts another read or write. Generally reads to an open page are issued ahead of requests to closed pages. This improves the page hit rate of the system. However, high priority requests can cause pages of active requests to be closed in order to get them out. This will reduce the latency of the high-priority request at the expense of lower bandwidth and increased overall average latency. | EventSel=08H UMask=01H Counter=0,1,2,3 |
UNC_M_PREEMPTION.RD_PREEMPT_WR | Counts the number of times a read in the iMC preempts another read or write. Generally reads to an open page are issued ahead of requests to closed pages. This improves the page hit rate of the system. However, high priority requests can cause pages of active requests to be closed in order to get them out. This will reduce the latency of the high-priority request at the expense of lower bandwidth and increased overall average latency. | EventSel=08H UMask=02H Counter=0,1,2,3 |
UNC_M_RPQ_CYCLES_FULL | Counts the number of cycles when the Read Pending Queue is full. When the RPQ is full, the HA will not be able to issue any additional read requests into the iMC. This count should be similar to the count in the HA which tracks the number of cycles that the HA has no RPQ credits, just somewhat smaller to account for the credit return overhead. We generally do not expect to see the RPQ become full except, potentially, during Write Major Mode or while running with slow DRAM. This event only tracks non-ISOC queue entries. | EventSel=12H UMask=00H Counter=0,1,2,3 |
UNC_M_RPQ_CYCLES_NE | Counts the number of cycles that the Read Pending Queue is not empty. This can then be used to calculate the average occupancy (in conjunction with the Read Pending Queue Occupancy count). The RPQ is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory. This filter is to be used in conjunction with the occupancy filter so that one can correctly track the average occupancies for schedulable entries and scheduled requests. | EventSel=11H UMask=00H Counter=0,1,2,3 |
UNC_M_RPQ_INSERTS | Counts the number of allocations into the Read Pending Queue. This queue is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory. This includes both ISOCH and non-ISOCH requests. | EventSel=10H UMask=00H Counter=0,1,2,3 |
UNC_M_RPQ_OCCUPANCY | Accumulates the occupancies of the Read Pending Queue each cycle. This can then be used to calculate both the average occupancy (in conjunction with the number of cycles not empty) and the average latency (in conjunction with the number of allocations). The RPQ is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory. | EventSel=80H UMask=00H Counter=0,1,2,3 |
UNC_M_WPQ_CYCLES_FULL | Counts the number of cycles when the Write Pending Queue is full. When the WPQ is full, the HA will not be able to issue any additional write requests into the iMC. This count should be similar to the count in the HA which tracks the number of cycles that the HA has no WPQ credits, just somewhat smaller to account for the credit return overhead. | EventSel=22H UMask=00H Counter=0,1,2,3 |
UNC_M_WPQ_CYCLES_NE | Counts the number of cycles that the Write Pending Queue is not empty. This can then be used to calculate the average queue occupancy (in conjunction with the WPQ Occupancy Accumulation count). The WPQ is used to schedule writes out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after being issued to DRAM. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon as they have "posted" to the iMC. This is not to be confused with actually performing the write to DRAM. Therefore, the average latency for this queue is not useful for deconstructing intermediate write latencies. | EventSel=21H UMask=00H Counter=0,1,2,3 |
UNC_M_WPQ_INSERTS | Counts the number of allocations into the Write Pending Queue. This can then be used to calculate the average queuing latency (in conjunction with the WPQ occupancy count). The WPQ is used to schedule writes out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after being issued to DRAM. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon as they have "posted" to the iMC. | EventSel=20H UMask=00H Counter=0,1,2,3 |
UNC_M_WPQ_OCCUPANCY | Accumulates the occupancies of the Write Pending Queue each cycle. This can then be used to calculate both the average queue occupancy (in conjunction with the number of cycles not empty) and the average latency (in conjunction with the number of allocations). The WPQ is used to schedule writes out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after being issued to DRAM. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon as they have "posted" to the iMC. This is not to be confused with actually performing the write to DRAM. Therefore, the average latency for this queue is not useful for deconstructing intermediate write latencies. So, we provide filtering based on whether the request has posted or not. By using the "not posted" filter, we can track how long writes spent in the iMC before completions were sent to the HA. The "posted" filter, on the other hand, provides information about how much queueing is actually happening in the iMC for writes before they are actually issued to memory. High average occupancies will generally coincide with high write major mode counts. | EventSel=81H UMask=00H Counter=0,1,2,3 |
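The RPQ and WPQ entries above each spell out the same two ratios: occupancy over cycles-not-empty for average depth, and occupancy over inserts for average latency. A minimal sketch covering both queues:

```python
# Sketch of the two derivations described for UNC_M_RPQ_* / UNC_M_WPQ_*:
def avg_occupancy(occupancy_sum: int, cycles_not_empty: int) -> float:
    """Average entries while the queue was non-empty (OCCUPANCY / CYCLES_NE)."""
    return occupancy_sum / cycles_not_empty if cycles_not_empty else 0.0

def avg_latency_cycles(occupancy_sum: int, inserts: int) -> float:
    """Average per-request residency in dclk cycles (OCCUPANCY / INSERTS)."""
    return occupancy_sum / inserts if inserts else 0.0
```

Remember the caveat above: for the WPQ this latency covers queueing inside the iMC, not end-to-end write completion, since writes post early.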
UNC_M_WPQ_READ_HIT | Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections. | EventSel=23H UMask=00H Counter=0,1,2,3 |
UNC_M_WPQ_WRITE_HIT | Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections. | EventSel=24H UMask=00H Counter=0,1,2,3 |
UNC_I_ADDRESS_MATCH.MERGE_COUNT | Counts the number of times when an inbound write (from a device to memory or another device) had an address match with another request in the write cache. | EventSel=17H UMask=02H Counter=0,1 |
UNC_I_ADDRESS_MATCH.STALL_COUNT | Counts the number of times when an inbound write (from a device to memory or another device) had an address match with another request in the write cache. | EventSel=17H UMask=01H Counter=0,1 |
UNC_I_CACHE_ACK_PENDING_OCCUPANCY.ANY | Accumulates the number of writes that have acquired ownership but have not yet returned their data to the uncore. These writes are generally queued up in the switch trying to get to the head of their queues so that they can post their data. The queue occupancy increments when the ACK is received, and decrements when either the data is returned OR a tickle is received and ownership is released. Note that a single tickle can result in multiple decrements. | EventSel=14H UMask=01H Counter=0,1 |
UNC_I_CACHE_ACK_PENDING_OCCUPANCY.SOURCE | Accumulates the number of writes that have acquired ownership but have not yet returned their data to the uncore. These writes are generally queued up in the switch trying to get to the head of their queues so that they can post their data. The queue occupancy increments when the ACK is received, and decrements when either the data is returned OR a tickle is received and ownership is released. Note that a single tickle can result in multiple decrements. | EventSel=14H UMask=02H Counter=0,1 |
UNC_I_CACHE_OWN_OCCUPANCY.ANY | Accumulates the number of writes (and write prefetches) that are outstanding in the uncore trying to acquire ownership in each cycle. This can be used with the write transaction count to calculate the average write latency in the uncore. The occupancy increments when a write request is issued, and decrements when the data is returned. | EventSel=13H UMask=01H Counter=0,1 |
UNC_I_CACHE_OWN_OCCUPANCY.SOURCE | Accumulates the number of writes (and write prefetches) that are outstanding in the uncore trying to acquire ownership in each cycle. This can be used with the write transaction count to calculate the average write latency in the uncore. The occupancy increments when a write request is issued, and decrements when the data is returned. | EventSel=13H UMask=02H Counter=0,1 |
UNC_I_CACHE_READ_OCCUPANCY.ANY | Accumulates the number of reads that are outstanding in the uncore in each cycle. This can be used with the read transaction count to calculate the average read latency in the uncore. The occupancy increments when a read request is issued, and decrements when the data is returned. | EventSel=10H UMask=01H Counter=0,1 |
UNC_I_CACHE_READ_OCCUPANCY.SOURCE | Accumulates the number of reads that are outstanding in the uncore in each cycle. This can be used with the read transaction count to calculate the average read latency in the uncore. The occupancy increments when a read request is issued, and decrements when the data is returned. | EventSel=10H UMask=02H Counter=0,1 |
UNC_I_CACHE_TOTAL_OCCUPANCY.ANY | Accumulates the number of reads and writes that are outstanding in the uncore in each cycle. This is effectively the sum of the READ_OCCUPANCY and WRITE_OCCUPANCY events. | EventSel=12H UMask=01H Counter=0,1 |
UNC_I_CACHE_TOTAL_OCCUPANCY.SOURCE | Accumulates the number of reads and writes that are outstanding in the uncore in each cycle. This is effectively the sum of the READ_OCCUPANCY and WRITE_OCCUPANCY events. | EventSel=12H UMask=02H Counter=0,1 |
UNC_I_CACHE_WRITE_OCCUPANCY.ANY | Accumulates the number of writes (and write prefetches) that are outstanding in the uncore in each cycle. This can be used with the transaction count event to calculate the average latency in the uncore. The occupancy increments when the ownership fetch/prefetch is issued, and decrements when the data is returned to the uncore. | EventSel=11H UMask=01H Counter=0,1 |
UNC_I_CACHE_WRITE_OCCUPANCY.SOURCE | Accumulates the number of writes (and write prefetches) that are outstanding in the uncore in each cycle. This can be used with the transaction count event to calculate the average latency in the uncore. The occupancy increments when the ownership fetch/prefetch is issued, and decrements when the data is returned to the uncore. | EventSel=11H UMask=02H Counter=0,1 |
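The IRP occupancy events pair with the UNC_I_TRANSACTIONS counts to yield average uncore latencies, as the descriptions suggest. A hedged sketch for the read side:

```python
# Sketch: average IRP read latency in uncore cycles, pairing
# UNC_I_CACHE_READ_OCCUPANCY with UNC_I_TRANSACTIONS.READS per the
# descriptions above. The same ratio applies to the write-side events.
def avg_irp_read_latency(read_occupancy: int, reads: int) -> float:
    return read_occupancy / reads if reads else 0.0
```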
UNC_I_CLOCKTICKS | Number of clocks in the IRP. | EventSel=00H UMask=00H Counter=0,1 |
UNC_I_RxR_AK_CYCLES_FULL | Counts the number of cycles when the AK Ingress is full. This queue is where the IRP receives responses from R2PCIe (the ring). | EventSel=0BH UMask=00H Counter=0,1 |
UNC_I_RxR_AK_INSERTS | Counts the number of allocations into the AK Ingress. This queue is where the IRP receives responses from R2PCIe (the ring). | EventSel=0AH UMask=00H Counter=0,1 |
UNC_I_RxR_AK_OCCUPANCY | Accumulates the occupancy of the AK Ingress in each cycle. This queue is where the IRP receives responses from R2PCIe (the ring). | EventSel=0CH UMask=00H Counter=0,1 |
UNC_I_RxR_BL_DRS_CYCLES_FULL | Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes. | EventSel=04H UMask=00H Counter=0,1 |
UNC_I_RxR_BL_DRS_INSERTS | Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes. | EventSel=01H UMask=00H Counter=0,1 |
UNC_I_RxR_BL_DRS_OCCUPANCY | Accumulates the occupancy of the BL Ingress in each cycle. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes. | EventSel=07H UMask=00H Counter=0,1 |
UNC_I_RxR_BL_NCB_CYCLES_FULL | Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes. | EventSel=05H UMask=00H Counter=0,1 |
UNC_I_RxR_BL_NCB_INSERTS | Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes. | EventSel=02H UMask=00H Counter=0,1 |
UNC_I_RxR_BL_NCB_OCCUPANCY | Accumulates the occupancy of the BL Ingress in each cycle. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes. | EventSel=08H UMask=00H Counter=0,1 |
UNC_I_RxR_BL_NCS_CYCLES_FULL | Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes. | EventSel=06H UMask=00H Counter=0,1 |
UNC_I_RxR_BL_NCS_INSERTS | Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes. | EventSel=03H UMask=00H Counter=0,1 |
UNC_I_RxR_BL_NCS_OCCUPANCY | Accumulates the occupancy of the BL Ingress in each cycle. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes. | EventSel=09H UMask=00H Counter=0,1 |
UNC_I_TICKLES.LOST_OWNERSHIP | Counts the number of tickles that are received. This is for both explicit (from Cbo) and implicit (internal conflict) tickles. | EventSel=16H UMask=01H Counter=0,1 |
UNC_I_TICKLES.TOP_OF_QUEUE | Counts the number of tickles that are received. This is for both explicit (from Cbo) and implicit (internal conflict) tickles. | EventSel=16H UMask=02H Counter=0,1 |
UNC_I_TRANSACTIONS.ORDERINGQ | Counts the number of "Inbound" transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation. We do OR-reduction on the request type. If the SOURCE bit is set, then we also do AND qualification based on the source portID. | EventSel=15H UMask=08H Counter=0,1 |
UNC_I_TRANSACTIONS.PD_PREFETCHES | Counts the number of "Inbound" transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation. We do OR-reduction on the request type. If the SOURCE bit is set, then we also do AND qualification based on the source portID. | EventSel=15H UMask=04H Counter=0,1 |
UNC_I_TRANSACTIONS.READS | Counts the number of "Inbound" transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation. We do OR-reduction on the request type. If the SOURCE bit is set, then we also do AND qualification based on the source portID. | EventSel=15H UMask=01H Counter=0,1 |
UNC_I_TRANSACTIONS.WRITES | Counts the number of "Inbound" transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation. We do OR-reduction on the request type. If the SOURCE bit is set, then we also do AND qualification based on the source portID. | EventSel=15H UMask=02H Counter=0,1 |
UNC_I_TxR_AD_STALL_CREDIT_CYCLES | Counts the number of times when it is not possible to issue a request to the R2PCIe because there are no AD Egress Credits available. | EventSel=18H UMask=00H Counter=0,1 |
UNC_I_TxR_BL_STALL_CREDIT_CYCLES | Counts the number of times when it is not possible to issue data to the R2PCIe because there are no BL Egress Credits available. | EventSel=19H UMask=00H Counter=0,1 |
UNC_I_TxR_DATA_INSERTS_NCB | Counts the number of requests issued to the switch (towards the devices). | EventSel=0EH UMask=00H Counter=0,1 |
UNC_I_TxR_DATA_INSERTS_NCS | Counts the number of requests issued to the switch (towards the devices). | EventSel=0FH UMask=00H Counter=0,1 |
UNC_I_TxR_REQUEST_OCCUPANCY | Accumulates the number of outstanding outbound requests from the IRP to the switch (towards the devices). This can be used in conjunction with the allocations event in order to calculate average latency of outbound requests. | EventSel=0DH UMask=00H Counter=0,1 |
UNC_I_WRITE_ORDERING_STALL_CYCLES | Counts the number of cycles when there are pending write ACKs in the switch but the switch->IRP pipeline is not utilized. | EventSel=1AH UMask=00H Counter=0,1 |
UNC_P_CLOCKTICKS | The PCU runs off a fixed 800 MHz clock. This event counts the number of pclk cycles measured while the counter was enabled. The pclk, like the Memory Controller's dclk, counts at a constant rate making it a good measure of actual wall time. | EventSel=00H UMask=00H Counter=0,1,2,3 |
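Because the pclk is fixed at 800 MHz, the PCU tick count converts directly to wall time; a one-line sketch:

```python
# Sketch: convert UNC_P_CLOCKTICKS to seconds using the fixed 800 MHz
# pclk rate stated in the description.
PCLK_HZ = 800_000_000

def pcu_seconds(unc_p_clockticks: int) -> float:
    return unc_p_clockticks / PCLK_HZ

print(pcu_seconds(1_600_000_000))  # 2.0 seconds
```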
UNC_P_CORE0_TRANSITION_CYCLES | Number of cycles spent performing core C state transitions. There is one event per core. | EventSel=03H UMask=00H Counter=0,1,2,3 |
UNC_P_CORE1_TRANSITION_CYCLES | Number of cycles spent performing core C state transitions. There is one event per core. | EventSel=04H UMask=00H Counter=0,1,2,3 |
UNC_P_CORE2_TRANSITION_CYCLES | Number of cycles spent performing core C state transitions. There is one event per core. | EventSel=05H UMask=00H Counter=0,1,2,3 |
UNC_P_CORE3_TRANSITION_CYCLES | Number of cycles spent performing core C state transitions. There is one event per core. | EventSel=06H UMask=00H Counter=0,1,2,3 |
UNC_P_CORE4_TRANSITION_CYCLES | Number of cycles spent performing core C state transitions. There is one event per core. | EventSel=07H UMask=00H Counter=0,1,2,3 |
UNC_P_CORE5_TRANSITION_CYCLES | Number of cycles spent performing core C state transitions. There is one event per core. | EventSel=08H UMask=00H Counter=0,1,2,3 |
UNC_P_CORE6_TRANSITION_CYCLES | Number of cycles spent performing core C state transitions. There is one event per core. | EventSel=09H UMask=00H Counter=0,1,2,3 |
UNC_P_CORE7_TRANSITION_CYCLES | Number of cycles spent performing core C state transitions. There is one event per core. | EventSel=0AH UMask=00H Counter=0,1,2,3 |
UNC_P_DEMOTIONS_CORE0 | Counts the number of times a configurable core had a C-state demotion. | EventSel=1EH UMask=00H Counter=0,1,2,3 |
UNC_P_DEMOTIONS_CORE1 | Counts the number of times a configurable core had a C-state demotion. | EventSel=1FH UMask=00H Counter=0,1,2,3 |
UNC_P_DEMOTIONS_CORE2 | Counts the number of times a configurable core had a C-state demotion. | EventSel=20H UMask=00H Counter=0,1,2,3 |
UNC_P_DEMOTIONS_CORE3 | Counts the number of times a configurable core had a C-state demotion. | EventSel=21H UMask=00H Counter=0,1,2,3 |
UNC_P_DEMOTIONS_CORE4 | Counts the number of times a configurable core had a C-state demotion. | EventSel=22H UMask=00H Counter=0,1,2,3 |
UNC_P_DEMOTIONS_CORE5 | Counts the number of times a configurable core had a C-state demotion. | EventSel=23H UMask=00H Counter=0,1,2,3 |
UNC_P_DEMOTIONS_CORE6 | Counts the number of times a configurable core had a C-state demotion. | EventSel=24H UMask=00H Counter=0,1,2,3 |
UNC_P_DEMOTIONS_CORE7 | Counts the number of times a configurable core had a C-state demotion. | EventSel=25H UMask=00H Counter=0,1,2,3 |
UNC_P_FREQ_BAND0_CYCLES | Counts the number of cycles that the uncore was running at a frequency greater than or equal to the frequency that is configured in the filter. One can use all four counters with this event, so it is possible to track up to 4 configurable bands. One can use edge detect in conjunction with this event to track the number of times that we transitioned into a frequency greater than or equal to the configurable frequency. One can also use inversion to track cycles when we were less than the configured frequency. | EventSel=0BH UMask=00H Counter=0,1,2,3 |
UNC_P_FREQ_BAND1_CYCLES | Counts the number of cycles that the uncore was running at a frequency greater than or equal to the frequency that is configured in the filter. One can use all four counters with this event, so it is possible to track up to 4 configurable bands. One can use edge detect in conjunction with this event to track the number of times that we transitioned into a frequency greater than or equal to the configurable frequency. One can also use inversion to track cycles when we were less than the configured frequency. | EventSel=0CH UMask=00H Counter=0,1,2,3 |
UNC_P_FREQ_BAND2_CYCLES | Counts the number of cycles that the uncore was running at a frequency greater than or equal to the frequency that is configured in the filter. One can use all four counters with this event, so it is possible to track up to 4 configurable bands. One can use edge detect in conjunction with this event to track the number of times that we transitioned into a frequency greater than or equal to the configurable frequency. One can also use inversion to track cycles when we were less than the configured frequency. | EventSel=0DH UMask=00H Counter=0,1,2,3 |
UNC_P_FREQ_BAND3_CYCLES | Counts the number of cycles that the uncore was running at a frequency greater than or equal to the frequency that is configured in the filter. One can use all four counters with this event, so it is possible to track up to 4 configurable bands. One can use edge detect in conjunction with this event to track the number of times that we transitioned into a frequency greater than or equal to the configurable frequency. One can also use inversion to track cycles when we were less than the configured frequency. | EventSel=0EH UMask=00H Counter=0,1,2,3 |
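Since the four band counters can run simultaneously, normalizing each by the pclk count from the same run gives a coarse frequency-residency histogram; a hedged sketch:

```python
# Sketch: fraction of measured time at or above each configured
# frequency band. Assumes UNC_P_CLOCKTICKS was collected alongside the
# four UNC_P_FREQ_BANDn_CYCLES counts over the same interval.
def band_fractions(band_cycles: list[int], pclk_ticks: int) -> list[float]:
    if not pclk_ticks:
        return [0.0] * len(band_cycles)
    return [b / pclk_ticks for b in band_cycles]
```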
UNC_P_FREQ_MAX_CURRENT_CYCLES | Counts the number of cycles when current is the upper limit on frequency. | EventSel=07H UMask=00H Counter=0,1,2,3 |
UNC_P_FREQ_MAX_LIMIT_THERMAL_CYCLES | Counts the number of cycles when thermal conditions are the upper limit on frequency. This is related to the THERMAL_THROTTLE CYCLES_ABOVE_TEMP event, which always counts cycles when we are above the thermal temperature. This event (STRONGEST_UPPER_LIMIT) is sampled at the output of the algorithm that determines the actual frequency, while THERMAL_THROTTLE looks at the input. | EventSel=04H UMask=00H Counter=0,1,2,3 |
UNC_P_FREQ_MAX_OS_CYCLES | Counts the number of cycles when the OS is the upper limit on frequency. | EventSel=06H UMask=00H Counter=0,1,2,3 |
UNC_P_FREQ_MAX_POWER_CYCLES | Counts the number of cycles when power is the upper limit on frequency. | EventSel=05H UMask=00H Counter=0,1,2,3 |
UNC_P_FREQ_MIN_IO_P_CYCLES | Counts the number of cycles when the IO P Limit is preventing us from dropping the frequency lower. This algorithm monitors the needs of the IO subsystem on both local and remote sockets and will maintain a frequency high enough to maintain good IO bandwidth. This is necessary when all the IA cores on a socket are idle but a user still wants to maintain high IO bandwidth. | EventSel=01H UMask=00H Counter=0,1,2,3 |
UNC_P_FREQ_MIN_PERF_P_CYCLES | Counts the number of cycles when the Perf P Limit is preventing us from dropping the frequency lower. Perf P Limit is an algorithm that takes input from remote sockets when determining if a socket should drop its frequency down. This is largely to minimize increases in snoop and remote read latencies. | EventSel=02H UMask=00H Counter=0,1,2,3 |
UNC_P_FREQ_TRANS_CYCLES | Counts the number of cycles when the system is changing frequency. This cannot be filtered by thread ID. One can also use it with the occupancy counter that monitors the number of threads in C0 to estimate the performance impact that frequency transitions had on the system. | EventSel=00H UMask=00H Counter=0,1,2,3 |
UNC_P_MEMORY_PHASE_SHEDDING_CYCLES | Counts the number of cycles that the PCU has triggered memory phase shedding. This is a mode that can be run in the iMC physicals that saves power at the expense of additional latency. | EventSel=2FH UMask=00H Counter=0,1,2,3 |
UNC_P_POWER_STATE_OCCUPANCY.CORES_C0 | This is an occupancy event that tracks the number of cores that are in C0. It can be used by itself to get the average number of cores in C0, with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details. | EventSel=80H UMask=40H Counter=0,1,2,3 |
UNC_P_POWER_STATE_OCCUPANCY.CORES_C3 | This is an occupancy event that tracks the number of cores that are in C3. It can be used by itself to get the average number of cores in C3, with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details. | EventSel=80H UMask=80H Counter=0,1,2,3 |
UNC_P_POWER_STATE_OCCUPANCY.CORES_C6 | This is an occupancy event that tracks the number of cores that are in C6. It can be used by itself to get the average number of cores in C6, with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details. | EventSel=80H UMask=C0H Counter=0,1,2,3 |
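As the descriptions note, these occupancy counts divided by elapsed uncore cycles give the average number of cores resident in each state; a minimal sketch:

```python
# Sketch: average cores in a C-state over an interval, normalizing a
# UNC_P_POWER_STATE_OCCUPANCY count by UNC_P_CLOCKTICKS from the same run.
def avg_cores_in_state(occupancy_count: int, pclk_ticks: int) -> float:
    return occupancy_count / pclk_ticks if pclk_ticks else 0.0
```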
UNC_P_PROCHOT_EXTERNAL_CYCLES | Counts the number of cycles that we are in external PROCHOT mode. This mode is triggered when a sensor off the die determines that something off-die (like DRAM) is too hot, and the chip must throttle to avoid damage. | EventSel=0AH UMask=00H Counter=0,1,2,3 |
UNC_P_PROCHOT_INTERNAL_CYCLES | Counts the number of cycles that we are in internal PROCHOT mode. This mode is triggered when a sensor on the die determines that we are too hot and must throttle to avoid damaging the chip. | EventSel=09H UMask=00H Counter=0,1,2,3 |
UNC_P_TOTAL_TRANSITION_CYCLES | Number of cycles spent performing core C state transitions across all cores. | EventSel=0BH UMask=00H Counter=0,1,2,3 |
UNC_P_VOLT_TRANS_CYCLES_CHANGE | Counts the number of cycles when the system is changing voltage. There is no filtering supported with this event. One can use it as a simple event, or use it in conjunction with the occupancy events to monitor the number of cores or threads that were impacted by the transition. This event is calculated by or'ing together the increasing and decreasing events. | EventSel=03H UMask=00H Counter=0,1,2,3 |
UNC_P_VOLT_TRANS_CYCLES_DECREASE | Counts the number of cycles when the system is decreasing voltage. There is no filtering supported with this event. One can use it as a simple event, or use it in conjunction with the occupancy events to monitor the number of cores or threads that were impacted by the transition. | EventSel=02H UMask=00H Counter=0,1,2,3 |
UNC_P_VOLT_TRANS_CYCLES_INCREASE | Counts the number of cycles when the system is increasing voltage. There is no filtering supported with this event. One can use it as a simple event, or use it in conjunction with the occupancy events to monitor the number of cores or threads that were impacted by the transition. | EventSel=01H UMask=00H Counter=0,1,2,3 |
UNC_P_VR_HOT_CYCLES | VR Hot | EventSel=32H UMask=00H Counter=0,1,2,3 |
UNC_Q_CLOCKTICKS | Counts the number of clocks in the QPI LL. This clock runs at 1/8th the "GT/s" speed of the QPI link. For example, an 8 GT/s link will have a qfclk of 1 GHz. JKT does not support dynamic link speeds, so this frequency is fixed. | EventSel=14H UMask=00H Counter=0,1,2,3 |
UNC_Q_CTO_COUNT | Counts the number of CTO (cluster trigger outs) events that were asserted across the two slots. If both slots trigger in a given cycle, the event will increment by 2. You can use edge detect to count the number of cases when both events triggered. | EventSel=38H UMask=00H Counter=0,1,2,3 |
UNC_Q_DIRECT2CORE.FAILURE_CREDITS | Counts the number of DRS packets that we attempted to do direct2core on. There are 4 mutually exclusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos. | EventSel=13H UMask=02H Counter=0,1,2,3 |
UNC_Q_DIRECT2CORE.FAILURE_CREDITS_RBT | Counts the number of DRS packets that we attempted to do direct2core on. There are 4 mutually exclusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos. | EventSel=13H UMask=08H Counter=0,1,2,3 |
UNC_Q_DIRECT2CORE.FAILURE_RBT | Counts the number of DRS packets that we attempted to do direct2core on. There are 4 mutually exclusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos. | EventSel=13H UMask=04H Counter=0,1,2,3 |
UNC_Q_DIRECT2CORE.SUCCESS | Counts the number of DRS packets that we attempted to do direct2core on. There are 4 mutually exclusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos. | EventSel=13H UMask=01H Counter=0,1,2,3 |
UNC_Q_L1_POWER_CYCLES | Number of QPI qfclk cycles spent in L1 power mode. L1 is a mode that totally shuts down a QPI link. Use edge detect to count the number of instances when the QPI link entered L1. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another. Because L1 totally shuts down the link, it takes a good amount of time to exit this mode. | EventSel=12H UMask=00H Counter=0,1,2,3 |
UNC_Q_RxL_BYPASSED | Counts the number of times that an incoming flit was able to bypass the flit buffer and pass directly across the BGF and into the Egress. This is a latency optimization, and should generally be the common case. If this value is less than the number of flits transferred, it implies that there was queueing getting onto the ring, and thus the transactions saw higher latency. | EventSel=09H UMask=00H Counter=0,1,2,3 |
UNC_Q_RxL_CRC_ERRORS.LINK_INIT | Number of CRC errors detected in the QPI Agent. Each QPI flit incorporates 8 bits of CRC for error detection. This counts the number of flits where the CRC was able to detect an error. After an error has been detected, the QPI agent will send a request to the transmitting socket to resend the flit (as well as any flits that came after it). | EventSel=03H UMask=01H Counter=0,1,2,3 |
UNC_Q_RxL_CRC_ERRORS.NORMAL_OP | Number of CRC errors detected in the QPI Agent. Each QPI flit incorporates 8 bits of CRC for error detection. This counts the number of flits where the CRC was able to detect an error. After an error has been detected, the QPI agent will send a request to the transmitting socket to resend the flit (as well as any flits that came after it). | EventSel=03H UMask=02H Counter=0,1,2,3 |
UNC_Q_RxL_CREDITS_CONSUMED_VN0.DRS | Counts the number of times that an RxQ VN0 credit was consumed (i.e., a message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed. | EventSel=1EH UMask=01H Counter=0,1,2,3 |
UNC_Q_RxL_CREDITS_CONSUMED_VN0.HOM | Counts the number of times that an RxQ VN0 credit was consumed (i.e., a message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed. | EventSel=1EH UMask=08H Counter=0,1,2,3 |
UNC_Q_RxL_CREDITS_CONSUMED_VN0.NCB | Counts the number of times that an RxQ VN0 credit was consumed (i.e., a message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed. | EventSel=1EH UMask=02H Counter=0,1,2,3 |
UNC_Q_RxL_CREDITS_CONSUMED_VN0.NCS | Counts the number of times that an RxQ VN0 credit was consumed (i.e., a message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed. | EventSel=1EH UMask=04H Counter=0,1,2,3 |
UNC_Q_RxL_CREDITS_CONSUMED_VN0.NDR | Counts the number of times that an RxQ VN0 credit was consumed (i.e., a message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed. | EventSel=1EH UMask=20H Counter=0,1,2,3 |
UNC_Q_RxL_CREDITS_CONSUMED_VN0.SNP | Counts the number of times that an RxQ VN0 credit was consumed (i.e., a message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed. | EventSel=1EH UMask=10H Counter=0,1,2,3 |
UNC_Q_RxL_CREDITS_CONSUMED_VNA | Counts the number of times that an RxQ VNA credit was consumed (i.e., a message uses a VNA credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed. | EventSel=1DH UMask=00H Counter=0,1,2,3 |
UNC_Q_RxL_CYCLES_NE | Counts the number of cycles that the QPI RxQ was not empty. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy Accumulator event to calculate the average occupancy. | EventSel=0AH UMask=00H Counter=0,1,2,3 |
UNC_Q_RxL_FLITS_G0.DATA | Counts the number of flits received from the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p. (See the bandwidth sketch below.) | EventSel=01H UMask=02H Counter=0,1,2,3 |
UNC_Q_RxL_FLITS_G0.IDLE | Counts the number of flits received from the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p. | EventSel=01H UMask=01H Counter=0,1,2,3 |
UNC_Q_RxL_FLITS_G0.NON_DATA | Counts the number of flits received from the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p. | EventSel=01H UMask=04H Counter=0,1,2,3 |
UNC_Q_RxL_FLITS_G1.DRS | Counts the number of flits received from the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time. | EventSel=02H UMask=18H Counter=0,1,2,3 |
UNC_Q_RxL_FLITS_G1.DRS_DATA | Counts the number of flits received from the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time. | EventSel=02H UMask=08H Counter=0,1,2,3 |
UNC_Q_RxL_FLITS_G1.DRS_NONDATA | Counts the number of flits received from the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time. | EventSel=02H UMask=10H Counter=0,1,2,3 |
UNC_Q_RxL_FLITS_G1.HOM | Counts the number of flits received from the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time. | EventSel=02H UMask=06H Counter=0,1,2,3 |
UNC_Q_RxL_FLITS_G1.HOM_NONREQ | Counts the number of flits received from the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time. | EventSel=02H UMask=04H Counter=0,1,2,3 |
UNC_Q_RxL_FLITS_G1.HOM_REQ | Counts the number of flits received from the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time. | EventSel=02H UMask=02H Counter=0,1,2,3 |
UNC_Q_RxL_FLITS_G1.SNP | Counts the number of flits received from the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time. | EventSel=02H UMask=01H Counter=0,1,2,3 |
UNC_Q_RxL_FLITS_G2.NCB | Counts the number of flits received from the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time. | EventSel=03H UMask=0CH Counter=0,1,2,3 |
UNC_Q_RxL_FLITS_G2.NCB_DATA | Counts the number of flits received from the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time. | EventSel=03H UMask=04H Counter=0,1,2,3 |
UNC_Q_RxL_FLITS_G2.NCB_NONDATA | Counts the number of flits received from the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time. | EventSel=03H UMask=08H Counter=0,1,2,3 |
UNC_Q_RxL_FLITS_G2.NCS | Counts the number of flits received from the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time. | EventSel=03H UMask=10H Counter=0,1,2,3 |
UNC_Q_RxL_FLITS_G2.NDR_AD | Counts the number of flits received from the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time. | EventSel=03H UMask=01H Counter=0,1,2,3 |
UNC_Q_RxL_FLITS_G2.NDR_AK | Counts the number of flits received from the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time. | EventSel=03H UMask=02H Counter=0,1,2,3 |
UNC_Q_RxL_INSERTS | Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. | EventSel=08H UMask=00H Counter=0,1,2,3 |
UNC_Q_RxL_INSERTS_DRS | Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only DRS flits. | EventSel=09H UMask=00H Counter=0,1,2,3 |
UNC_Q_RxL_INSERTS_HOM | Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only HOM flits. | EventSel=0CH UMask=00H Counter=0,1,2,3 |
UNC_Q_RxL_INSERTS_NCB | Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only NCB flits. | EventSel=0AH UMask=00H Counter=0,1,2,3 |
UNC_Q_RxL_INSERTS_NCS | Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only NCS flits. | EventSel=0BH UMask=00H Counter=0,1,2,3 |
UNC_Q_RxL_INSERTS_NDR | Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only NDR flits. | EventSel=0EH UMask=00H Counter=0,1,2,3 |
UNC_Q_RxL_INSERTS_SNP | Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only SNP flits. | EventSel=0DH UMask=00H Counter=0,1,2,3 |
UNC_Q_RxL_OCCUPANCY | Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. (See the occupancy sketch below.) | EventSel=0BH UMask=00H Counter=0,1,2,3 |
UNC_Q_RxL_OCCUPANCY_DRS | Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors DRS flits only. | EventSel=15H UMask=00H Counter=0,1,2,3 |
UNC_Q_RxL_OCCUPANCY_HOM | Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors HOM flits only. | EventSel=18H UMask=00H Counter=0,1,2,3 |
UNC_Q_RxL_OCCUPANCY_NCB | Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors NCB flits only. | EventSel=16H UMask=00H Counter=0,1,2,3 |
UNC_Q_RxL_OCCUPANCY_NCS | Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors NCS flits only. | EventSel=17H UMask=00H Counter=0,1,2,3 |
UNC_Q_RxL_OCCUPANCY_NDR | Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors NDR flits only. | EventSel=1AH UMask=00H Counter=0,1,2,3 |
UNC_Q_RxL_OCCUPANCY_SNP | Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors SNP flits only. | EventSel=19H UMask=00H Counter=0,1,2,3 |
UNC_Q_RxL_STALLS.BGF_DRS | Number of stalls trying to send to R3QPI. | EventSel=35H UMask=01H Counter=0,1,2,3 |
UNC_Q_RxL_STALLS.BGF_HOM | Number of stalls trying to send to R3QPI. | EventSel=35H UMask=08H Counter=0,1,2,3 |
UNC_Q_RxL_STALLS.BGF_NCB | Number of stalls trying to send to R3QPI. | EventSel=35H UMask=02H Counter=0,1,2,3 |
UNC_Q_RxL_STALLS.BGF_NCS | Number of stalls trying to send to R3QPI. | EventSel=35H UMask=04H Counter=0,1,2,3 |
UNC_Q_RxL_STALLS.BGF_NDR | Number of stalls trying to send to R3QPI. | EventSel=35H UMask=20H Counter=0,1,2,3 |
UNC_Q_RxL_STALLS.BGF_SNP | Number of stalls trying to send to R3QPI. | EventSel=35H UMask=10H Counter=0,1,2,3 |
UNC_Q_RxL_STALLS.EGRESS_CREDITS | Number of stalls trying to send to R3QPI. | EventSel=35H UMask=40H Counter=0,1,2,3 |
UNC_Q_RxL_STALLS.GV | Number of stalls trying to send to R3QPI. | EventSel=35H UMask=80H Counter=0,1,2,3 |
UNC_Q_RxL0_POWER_CYCLES | Number of QPI qfclk cycles spent in L0 power mode in the Link Layer. L0 is the default mode which provides the highest performance with the most power. Use edge detect to count the number of instances that the link entered L0. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another. The phy layer sometimes leaves L0 for training, which will not be captured by this event. | EventSel=0FH UMask=00H Counter=0,1,2,3 |
UNC_Q_RxL0P_POWER_CYCLES | Number of QPI qfclk cycles spent in L0p power mode. L0p is a mode where we disable 1/2 of the QPI lanes, decreasing our bandwidth in order to save power. It increases snoop and data transfer latencies and decreases overall bandwidth. This mode can be very useful in NUMA optimized workloads that largely only utilize QPI for snoops and their responses. Use edge detect to count the number of instances when the QPI link entered L0p. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another. | EventSel=10H UMask=00H Counter=0,1,2,3 |
UNC_Q_TxL_BYPASSED | Counts the number of times that an incoming flit was able to bypass the Tx flit buffer and pass directly out the QPI Link. Generally, when data is transmitted across QPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link. | EventSel=05H UMask=00H Counter=0,1,2,3 |
UNC_Q_TxL_CRC_NO_CREDITS.ALMOST_FULL | Number of cycles when the Tx side ran out of Link Layer Retry credits, causing the Tx to stall. | EventSel=02H UMask=02H Counter=0,1,2,3 |
UNC_Q_TxL_CRC_NO_CREDITS.FULL | Number of cycles when the Tx side ran out of Link Layer Retry credits, causing the Tx to stall. | EventSel=02H UMask=01H Counter=0,1,2,3 |
UNC_Q_TxL_CYCLES_NE | Counts the number of cycles when the TxQ is not empty. Generally, when data is transmitted across QPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link. | EventSel=06H UMask=00H Counter=0,1,2,3 |
UNC_Q_TxL_FLITS_G0.DATA | Counts the number of flits transmitted across the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p. | EventSel=00H UMask=02H Counter=0,1,2,3 |
UNC_Q_TxL_FLITS_G0.IDLE | Counts the number of flits transmitted across the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p. | EventSel=00H UMask=01H Counter=0,1,2,3 |
UNC_Q_TxL_FLITS_G0.NON_DATA | Counts the number of flits transmitted across the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p. | EventSel=00H UMask=04H Counter=0,1,2,3 |
UNC_Q_TxL_FLITS_G1.DRS | Counts the number of flits transmitted across the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time. | EventSel=00H UMask=18H Counter=0,1,2,3 |
UNC_Q_TxL_FLITS_G1.DRS_DATA | Counts the number of flits transmitted across the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time. | EventSel=00H UMask=08H Counter=0,1,2,3 |
UNC_Q_TxL_FLITS_G1.DRS_NONDATA | Counts the number of flits transmitted across the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time. | EventSel=00H UMask=10H Counter=0,1,2,3 |
UNC_Q_TxL_FLITS_G1.HOM | Counts the number of flits transmitted across the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time. | EventSel=00H UMask=06H Counter=0,1,2,3 |
UNC_Q_TxL_FLITS_G1.HOM_NONREQ | Counts the number of flits transmitted across the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time. | EventSel=00H UMask=04H Counter=0,1,2,3 |
UNC_Q_TxL_FLITS_G1.HOM_REQ | Counts the number of flits transmitted across the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time. | EventSel=00H UMask=02H Counter=0,1,2,3 |
UNC_Q_TxL_FLITS_G1.SNP | Counts the number of flits transmitted across the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time. | EventSel=00H UMask=01H Counter=0,1,2,3 |
UNC_Q_TxL_FLITS_G2.NCB | Counts the number of flits transmitted across the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time. | EventSel=01H UMask=0CH Counter=0,1,2,3 |
UNC_Q_TxL_FLITS_G2.NCB_DATA | Counts the number of flits transmitted across the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time. | EventSel=01H UMask=04H Counter=0,1,2,3 |
UNC_Q_TxL_FLITS_G2.NCB_NONDATA | Counts the number of flits transmitted across the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time. | EventSel=01H UMask=08H Counter=0,1,2,3 |
UNC_Q_TxL_FLITS_G2.NCS | Counts the number of flits transmitted across the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time. | EventSel=01H UMask=10H Counter=0,1,2,3 |
UNC_Q_TxL_FLITS_G2.NDR_AD | Counts the number of flits transmitted across the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time. | EventSel=01H UMask=01H Counter=0,1,2,3 |
UNC_Q_TxL_FLITS_G2.NDR_AK | Counts the number of flits transmitted across the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time. | EventSel=01H UMask=02H Counter=0,1,2,3 |
UNC_Q_TxL_INSERTS | Number of allocations into the QPI Tx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. | EventSel=04H UMask=00H Counter=0,1,2,3 |
UNC_Q_TxL_OCCUPANCY | Accumulates the number of flits in the TxQ. Generally, when data is transmitted across QPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link. This can be used with the cycles not empty event to track average occupancy, or the allocations event to track average lifetime in the TxQ. A worked example combining this lifetime calculation with the flit bandwidth formulas appears after the UNC_Q events below. | EventSel=07H UMask=00H Counter=0,1,2,3
UNC_Q_TxL0_POWER_CYCLES | Number of QPI qfclk cycles spent in L0 power mode in the Link Layer. L0 is the default mode which provides the highest performance with the most power. Use edge detect to count the number of instances that the link entered L0. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another. The phy layer sometimes leaves L0 for training, which will not be captured by this event. | EventSel=0CH UMask=00H Counter=0,1,2,3 |
UNC_Q_TxL0P_POWER_CYCLES | Number of QPI qfclk cycles spent in L0p power mode. L0p is a mode where we disable 1/2 of the QPI lanes, decreasing our bandwidth in order to save power. It increases snoop and data transfer latencies and decreases overall bandwidth. This mode can be very useful in NUMA optimized workloads that largely only utilize QPI for snoops and their responses. Use edge detect to count the number of instances when the QPI link entered L0p. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another. | EventSel=0DH UMask=00H Counter=0,1,2,3 |
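Since the two power-mode events above count qfclk cycles spent in each link power state, their ratio gives power-state residency directly (with edge detect they would instead count entries into each state). Below is a minimal sketch, assuming L0 and L0p are the only states with significant residency; cycles spent in L1 or in phy-layer training are ignored, and all counter values are hypothetical placeholders.

```python
# Sketch: L0p residency for one QPI link direction, assuming L0 and L0p
# dominate (cycles in L1 or phy training are ignored). Counter values
# are hypothetical placeholders.

l0_cycles = 3_200_000_000    # UNC_Q_TxL0_POWER_CYCLES
l0p_cycles = 800_000_000     # UNC_Q_TxL0P_POWER_CYCLES

l0p_residency = l0p_cycles / (l0_cycles + l0p_cycles)
print(f"Tx L0p residency: {l0p_residency:.1%}")
```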
UNC_Q_VNA_CREDIT_RETURN_OCCUPANCY | Number of VNA credits in the Rx side that are waiting to be returned back across the link. | EventSel=1BH UMask=00H Counter=0,1,2,3
UNC_Q_VNA_CREDIT_RETURNS | Number of VNA credits returned. | EventSel=1CH UMask=00H Counter=0,1,2,3 |
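The bandwidth formulas given for the UNC_Q_TxL_FLITS_G* events and the Little's Law lifetime calculation suggested for the TxQ insert/occupancy pair reduce to a few lines of arithmetic. The sketch below is illustrative only: the event names come from this table, but the counter values, the one-second sample interval, the data-flit count (which would come from a flit group other than the G2 events above), and the 4 GHz qfclk assumed for an 8.0 GT/s link are all placeholders.

```python
# Sketch: QPI link and data bandwidth from flit counts, plus average TxQ
# flit lifetime via Little's Law. All counter values are hypothetical;
# in practice they come from a PMU sampling tool.

sample_seconds = 1.0            # measurement interval (assumed)
flits_total = 2_000_000_000     # all flits transmitted (sum of flit groups)
data_flits = 1_400_000_000      # data flits only (from a data flit group)
txq_inserts = 5_000_000         # UNC_Q_TxL_INSERTS
txq_occupancy = 40_000_000      # UNC_Q_TxL_OCCUPANCY (flit-cycles)
qfclk_hz = 4.0e9                # assumed qfclk for an 8.0 GT/s link

# Each flit carries 80 bits, so link bandwidth is flits * 80b / time.
link_bw_bytes = flits_total * 80 / 8 / sample_seconds

# Each data flit carries 8B of payload, so data bandwidth is
# data flits * 8B / time.
data_bw_bytes = data_flits * 8 / sample_seconds

# Little's Law: average lifetime = occupancy accumulator / allocations,
# in qfclk cycles.
avg_txq_cycles = txq_occupancy / txq_inserts
avg_txq_ns = avg_txq_cycles / qfclk_hz * 1e9

print(f"link bandwidth: {link_bw_bytes / 1e9:.2f} GB/s")
print(f"data bandwidth: {data_bw_bytes / 1e9:.2f} GB/s")
print(f"avg TxQ flit lifetime: {avg_txq_cycles:.1f} cycles ({avg_txq_ns:.1f} ns)")
```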
UNC_R2_CLOCKTICKS | Counts the number of uclks in the R2PCIe uclk domain. This could be slightly different than the count in the Ubox because of enable/freeze delays. However, because the R2PCIe is close to the Ubox, they generally should not diverge by more than a handful of cycles. | EventSel=01H UMask=00H Counter=0,1,2,3 |
UNC_R2_IIO_CREDITS_ACQUIRED.DRS | Counts the number of credits that are acquired in the R2PCIe agent for sending transactions into the IIO on either NCB or NCS. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). | EventSel=33H UMask=08H Counter=0,1
UNC_R2_IIO_CREDITS_ACQUIRED.NCB | Counts the number of credits that are acquired in the R2PCIe agent for sending transactions into the IIO on either NCB or NCS. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). | EventSel=33H UMask=10H Counter=0,1
UNC_R2_IIO_CREDITS_ACQUIRED.NCS | Counts the number of credits that are acquired in the R2PCIe agent for sending transactions into the IIO on either NCB or NCS. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). | EventSel=33H UMask=20H Counter=0,1
UNC_R2_IIO_CREDITS_REJECT.DRS | Counts the number of times that a request pending in the BL Ingress attempted to acquire either a NCB or NCS credit to transmit into the IIO, but was rejected because no credits were available. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). | EventSel=34H UMask=08H Counter=0,1 |
UNC_R2_IIO_CREDITS_REJECT.NCB | Counts the number of times that a request pending in the BL Ingress attempted to acquire either a NCB or NCS credit to transmit into the IIO, but was rejected because no credits were available. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). | EventSel=34H UMask=10H Counter=0,1 |
UNC_R2_IIO_CREDITS_REJECT.NCS | Counts the number of times that a request pending in the BL Ingress attempted to acquire either a NCB or NCS credit to transmit into the IIO, but was rejected because no credits were available. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). | EventSel=34H UMask=20H Counter=0,1 |
UNC_R2_IIO_CREDITS_USED.DRS | Counts the number of cycles when one or more credits in the R2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). | EventSel=32H UMask=08H Counter=0,1 |
UNC_R2_IIO_CREDITS_USED.NCB | Counts the number of cycles when one or more credits in the R2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). | EventSel=32H UMask=10H Counter=0,1 |
UNC_R2_IIO_CREDITS_USED.NCS | Counts the number of cycles when one or more credits in the R2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). | EventSel=32H UMask=20H Counter=0,1 |
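As a rough illustration of how the R2PCIe IIO credit events combine into derived metrics, the fragment below computes the fraction of R2PCIe cycles with at least one NCB credit in use (USED cycles over UNC_R2_CLOCKTICKS) and a reject rate from the acquired/reject pair. This is a sketch, assuming each reject corresponds to one failed acquisition attempt that is later retried; all counter values are hypothetical placeholders.

```python
# Sketch: derived metrics from the R2PCIe IIO credit events. All counter
# values are hypothetical placeholders.

r2_clockticks = 1_000_000_000   # UNC_R2_CLOCKTICKS
ncb_used_cycles = 250_000_000   # UNC_R2_IIO_CREDITS_USED.NCB
ncb_acquired = 40_000_000       # UNC_R2_IIO_CREDITS_ACQUIRED.NCB
ncb_rejects = 1_500_000         # UNC_R2_IIO_CREDITS_REJECT.NCB

# Fraction of R2PCIe uclk cycles with one or more NCB credits in use.
ncb_utilization = ncb_used_cycles / r2_clockticks

# Share of acquisition attempts that bounced for lack of credits,
# assuming each reject is one failed attempt that was retried.
ncb_reject_rate = ncb_rejects / (ncb_acquired + ncb_rejects)

print(f"NCB credit utilization: {ncb_utilization:.1%}")
print(f"NCB credit reject rate: {ncb_reject_rate:.2%}")
```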
UNC_R2_RING_AD_USED.CCW_EVEN | Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=07H UMask=04H Counter=0,1,2,3 |
UNC_R2_RING_AD_USED.CCW_ODD | Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=07H UMask=08H Counter=0,1,2,3 |
UNC_R2_RING_AD_USED.CW_EVEN | Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=07H UMask=01H Counter=0,1,2,3 |
UNC_R2_RING_AD_USED.CW_ODD | Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=07H UMask=02H Counter=0,1,2,3 |
UNC_R2_RING_AK_USED.CCW_EVEN | Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=08H UMask=04H Counter=0,1,2,3 |
UNC_R2_RING_AK_USED.CCW_ODD | Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=08H UMask=08H Counter=0,1,2,3 |
UNC_R2_RING_AK_USED.CW_EVEN | Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=08H UMask=01H Counter=0,1,2,3 |
UNC_R2_RING_AK_USED.CW_ODD | Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=08H UMask=02H Counter=0,1,2,3 |
UNC_R2_RING_BL_USED.CCW_EVEN | Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=09H UMask=04H Counter=0,1,2,3 |
UNC_R2_RING_BL_USED.CCW_ODD | Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=09H UMask=08H Counter=0,1,2,3 |
UNC_R2_RING_BL_USED.CW_EVEN | Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=09H UMask=01H Counter=0,1,2,3 |
UNC_R2_RING_BL_USED.CW_ODD | Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=09H UMask=02H Counter=0,1,2,3 |
UNC_R2_RING_IV_USED.ANY | Counts the number of cycles that the IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. The IV ring is unidirectional. Whether UP or DN is used is dependent on the system programming. Therefore, one should generally set both the UP and DN bits for a given polarity (or both) at a given time. | EventSel=0AH UMask=0FH Counter=0,1,2,3
UNC_R2_RxR_AK_BOUNCES | Counts the number of times when a request destined for the AK ingress bounced. | EventSel=12H UMask=00H Counter=0 |
UNC_R2_RxR_CYCLES_NE.DRS | Counts the number of cycles when the R2PCIe Ingress is not empty. This tracks one of the three rings that are used by the R2PCIe agent. This can be used in conjunction with the R2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. | EventSel=10H UMask=08H Counter=0,1 |
UNC_R2_RxR_CYCLES_NE.NCB | Counts the number of cycles when the R2PCIe Ingress is not empty. This tracks one of the three rings that are used by the R2PCIe agent. This can be used in conjunction with the R2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. | EventSel=10H UMask=10H Counter=0,1 |
UNC_R2_RxR_CYCLES_NE.NCS | Counts the number of cycles when the R2PCIe Ingress is not empty. This tracks one of the three rings that are used by the R2PCIe agent. This can be used in conjunction with the R2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. | EventSel=10H UMask=20H Counter=0,1 |
UNC_R2_TxR_CYCLES_FULL.AD | Counts the number of cycles when the R2PCIe Egress buffer is full. | EventSel=25H UMask=01H Counter=0 |
UNC_R2_TxR_CYCLES_FULL.AK | Counts the number of cycles when the R2PCIe Egress buffer is full. | EventSel=25H UMask=02H Counter=0 |
UNC_R2_TxR_CYCLES_FULL.BL | Counts the number of cycles when the R2PCIe Egress buffer is full. | EventSel=25H UMask=04H Counter=0 |
UNC_R2_TxR_CYCLES_NE.AD | Counts the number of cycles when the R2PCIe Egress is not empty. This tracks one of the three rings that are used by the R2PCIe agent. This can be used in conjunction with the R2PCIe Egress Occupancy Accumulator event in order to calculate average queue occupancy. Only a single Egress queue can be tracked at any given time. It is not possible to filter based on direction or polarity. | EventSel=23H UMask=01H Counter=0 |
UNC_R2_TxR_CYCLES_NE.AK | Counts the number of cycles when the R2PCIe Egress is not empty. This tracks one of the three rings that are used by the R2PCIe agent. This can be used in conjunction with the R2PCIe Egress Occupancy Accumulator event in order to calculate average queue occupancy. Only a single Egress queue can be tracked at any given time. It is not possible to filter based on direction or polarity. | EventSel=23H UMask=02H Counter=0 |
UNC_R2_TxR_CYCLES_NE.BL | Counts the number of cycles when the R2PCIe Egress is not empty. This tracks one of the three rings that are used by the R2PCIe agent. This can be used in conjunction with the R2PCIe Egress Occupancy Accumulator event in order to calculate average queue occupancy. Only a single Egress queue can be tracked at any given time. It is not possible to filter based on direction or polarity. | EventSel=23H UMask=04H Counter=0 |
UNC_R2_TxR_NACKS.AD | Counts the number of times that the Egress received a NACK from the ring and could not issue a transaction. | EventSel=26H UMask=01H Counter=0,1 |
UNC_R2_TxR_NACKS.AK | Counts the number of times that the Egress received a NACK from the ring and could not issue a transaction. | EventSel=26H UMask=02H Counter=0,1 |
UNC_R2_TxR_NACKS.BL | Counts the number of times that the Egress received a NACK from the ring and could not issue a transaction. | EventSel=26H UMask=04H Counter=0,1 |
UNC_R3_CLOCKTICKS | Counts the number of uclks in the QPI uclk domain. This could be slightly different than the count in the Ubox because of enable/freeze delays. However, because the QPI Agent is close to the Ubox, they generally should not diverge by more than a handful of cycles. | EventSel=01H UMask=00H Counter=0,1,2 |
UNC_R3_IIO_CREDITS_ACQUIRED.DRS | Counts the number of times the NCS/NCB/DRS credit is acquired in the QPI for sending messages on BL to the IIO. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time. | EventSel=20H UMask=08H Counter=0,1 |
UNC_R3_IIO_CREDITS_ACQUIRED.NCB | Counts the number of times the NCS/NCB/DRS credit is acquired in the QPI for sending messages on BL to the IIO. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time. | EventSel=20H UMask=10H Counter=0,1 |
UNC_R3_IIO_CREDITS_ACQUIRED.NCS | Counts the number of times the NCS/NCB/DRS credit is acquired in the QPI for sending messages on BL to the IIO. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time. | EventSel=20H UMask=20H Counter=0,1 |
UNC_R3_IIO_CREDITS_REJECT.DRS | Counts the number of times that a request attempted to acquire an NCS/NCB/DRS credit in the QPI for sending messages on BL to the IIO but was rejected because no credit was available. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time. | EventSel=21H UMask=08H Counter=0,1 |
UNC_R3_IIO_CREDITS_REJECT.NCB | Counts the number of times that a request attempted to acquire an NCS/NCB/DRS credit in the QPI for sending messages on BL to the IIO but was rejected because no credit was available. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time. | EventSel=21H UMask=10H Counter=0,1 |
UNC_R3_IIO_CREDITS_REJECT.NCS | Counts the number of times that a request attempted to acquire an NCS/NCB/DRS credit in the QPI for sending messages on BL to the IIO but was rejected because no credit was available. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time. | EventSel=21H UMask=20H Counter=0,1 |
UNC_R3_IIO_CREDITS_USED.DRS | Counts the number of cycles when the NCS/NCB/DRS credit is in use in the QPI for sending messages on BL to the IIO. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time. | EventSel=22H UMask=08H Counter=0,1 |
UNC_R3_IIO_CREDITS_USED.NCB | Counts the number of cycles when the NCS/NCB/DRS credit is in use in the QPI for sending messages on BL to the IIO. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time. | EventSel=22H UMask=10H Counter=0,1 |
UNC_R3_IIO_CREDITS_USED.NCS | Counts the number of cycles when the NCS/NCB/DRS credit is in use in the QPI for sending messages on BL to the IIO. There is one credit for each of these three message classes (three credits total). NCS is used for reads to PCIe space, NCB is used for transferring data without coherency, and DRS is used for transferring data with coherency (cacheable PCI transactions). This event can only track one message class at a time. | EventSel=22H UMask=20H Counter=0,1 |
UNC_R3_RING_AD_USED.CCW_EVEN | Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=07H UMask=04H Counter=0,1,2 |
UNC_R3_RING_AD_USED.CCW_ODD | Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=07H UMask=08H Counter=0,1,2 |
UNC_R3_RING_AD_USED.CW_EVEN | Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=07H UMask=01H Counter=0,1,2 |
UNC_R3_RING_AD_USED.CW_ODD | Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=07H UMask=02H Counter=0,1,2 |
UNC_R3_RING_AK_USED.CCW_EVEN | Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. | EventSel=08H UMask=04H Counter=0,1,2 |
UNC_R3_RING_AK_USED.CCW_ODD | Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. | EventSel=08H UMask=08H Counter=0,1,2 |
UNC_R3_RING_AK_USED.CW_EVEN | Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. | EventSel=08H UMask=01H Counter=0,1,2 |
UNC_R3_RING_AK_USED.CW_ODD | Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. | EventSel=08H UMask=02H Counter=0,1,2 |
UNC_R3_RING_BL_USED.CCW_EVEN | Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=09H UMask=04H Counter=0,1,2 |
UNC_R3_RING_BL_USED.CCW_ODD | Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=09H UMask=08H Counter=0,1,2 |
UNC_R3_RING_BL_USED.CW_EVEN | Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=09H UMask=01H Counter=0,1,2 |
UNC_R3_RING_BL_USED.CW_ODD | Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. | EventSel=09H UMask=02H Counter=0,1,2 |
UNC_R3_RING_IV_USED.ANY | Counts the number of cycles that the IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. The IV ring is unidirectional. Whether UP or DN is used is dependent on the system programming. Therefore, one should generally set both the UP and DN bits for a given polarity (or both) at a given time. | EventSel=0AH UMask=0FH Counter=0,1,2
UNC_R3_RxR_BYPASSED.AD | Counts the number of times the Ingress was bypassed and an incoming transaction was sent directly across the BGF and into the qfclk domain. | EventSel=12H UMask=01H Counter=0,1
UNC_R3_RxR_CYCLES_NE.DRS | Counts the number of cycles when the QPI Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. | EventSel=10H UMask=08H Counter=0,1 |
UNC_R3_RxR_CYCLES_NE.HOM | Counts the number of cycles when the QPI Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. | EventSel=10H UMask=01H Counter=0,1 |
UNC_R3_RxR_CYCLES_NE.NCB | Counts the number of cycles when the QPI Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. | EventSel=10H UMask=10H Counter=0,1 |
UNC_R3_RxR_CYCLES_NE.NCS | Counts the number of cycles when the QPI Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. | EventSel=10H UMask=20H Counter=0,1 |
UNC_R3_RxR_CYCLES_NE.NDR | Counts the number of cycles when the QPI Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. | EventSel=10H UMask=04H Counter=0,1 |
UNC_R3_RxR_CYCLES_NE.SNP | Counts the number of cycles when the QPI Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. | EventSel=10H UMask=02H Counter=0,1 |
UNC_R3_RxR_INSERTS.DRS | Counts the number of allocations into the QPI Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. | EventSel=11H UMask=08H Counter=0,1 |
UNC_R3_RxR_INSERTS.HOM | Counts the number of allocations into the QPI Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. | EventSel=11H UMask=01H Counter=0,1 |
UNC_R3_RxR_INSERTS.NCB | Counts the number of allocations into the QPI Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. | EventSel=11H UMask=10H Counter=0,1 |
UNC_R3_RxR_INSERTS.NCS | Counts the number of allocations into the QPI Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. | EventSel=11H UMask=20H Counter=0,1 |
UNC_R3_RxR_INSERTS.NDR | Counts the number of allocations into the QPI Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. | EventSel=11H UMask=04H Counter=0,1 |
UNC_R3_RxR_INSERTS.SNP | Counts the number of allocations into the QPI Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. | EventSel=11H UMask=02H Counter=0,1 |
UNC_R3_RxR_OCCUPANCY.DRS | Accumulates the occupancy of a given QPI Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latency. | EventSel=13H UMask=08H Counter=0
UNC_R3_RxR_OCCUPANCY.HOM | Accumulates the occupancy of a given QPI Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latency. | EventSel=13H UMask=01H Counter=0
UNC_R3_RxR_OCCUPANCY.NCB | Accumulates the occupancy of a given QPI Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latency. | EventSel=13H UMask=10H Counter=0
UNC_R3_RxR_OCCUPANCY.NCS | Accumulates the occupancy of a given QPI Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latency. | EventSel=13H UMask=20H Counter=0
UNC_R3_RxR_OCCUPANCY.NDR | Accumulates the occupancy of a given QPI Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latency. | EventSel=13H UMask=04H Counter=0
UNC_R3_RxR_OCCUPANCY.SNP | Accumulates the occupancy of a given QPI Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the QPI Ingress Not Empty event to calculate average occupancy or the QPI Ingress Allocations event in order to calculate average queuing latency. | EventSel=13H UMask=02H Counter=0
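The three QPI Ingress events are designed to be combined: the occupancy accumulator divided by not-empty cycles gives the average queue depth while the queue holds anything, and the accumulator divided by allocations gives the average queuing latency (Little's Law). A minimal sketch for the HOM class, with hypothetical counter values:

```python
# Sketch: average occupancy and queuing latency for one QPI Ingress
# queue (HOM message class). Counter values are hypothetical.

hom_occupancy = 300_000_000    # UNC_R3_RxR_OCCUPANCY.HOM (entry-cycles)
hom_not_empty = 150_000_000    # UNC_R3_RxR_CYCLES_NE.HOM
hom_inserts = 60_000_000       # UNC_R3_RxR_INSERTS.HOM

# Average depth over only the cycles in which the queue was non-empty.
avg_depth_when_busy = hom_occupancy / hom_not_empty

# Little's Law: average residency of an entry, in uclk cycles.
avg_latency_cycles = hom_occupancy / hom_inserts

print(f"avg HOM ingress depth while busy: {avg_depth_when_busy:.2f} entries")
print(f"avg HOM ingress latency: {avg_latency_cycles:.2f} cycles")
```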
UNC_R3_VN0_CREDITS_REJECT.DRS | Number of times a request failed to acquire a VN0 credit for the DRS message class. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This therefore counts the number of times when a request failed to acquire either a VNA or VN0 credit and was delayed. This should generally be a rare situation. | EventSel=37H UMask=08H Counter=0,1
UNC_R3_VN0_CREDITS_REJECT.HOM | Number of times a request failed to acquire a VN0 credit for the HOM message class. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This therefore counts the number of times when a request failed to acquire either a VNA or VN0 credit and was delayed. This should generally be a rare situation. | EventSel=37H UMask=01H Counter=0,1
UNC_R3_VN0_CREDITS_REJECT.NCB | Number of times a request failed to acquire a VN0 credit for the NCB message class. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This therefore counts the number of times when a request failed to acquire either a VNA or VN0 credit and was delayed. This should generally be a rare situation. | EventSel=37H UMask=10H Counter=0,1
UNC_R3_VN0_CREDITS_REJECT.NCS | Number of times a request failed to acquire a VN0 credit for the NCS message class. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This therefore counts the number of times when a request failed to acquire either a VNA or VN0 credit and was delayed. This should generally be a rare situation. | EventSel=37H UMask=20H Counter=0,1
UNC_R3_VN0_CREDITS_REJECT.NDR | Number of times a request failed to acquire a VN0 credit for the NDR message class. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This therefore counts the number of times when a request failed to acquire either a VNA or VN0 credit and was delayed. This should generally be a rare situation. | EventSel=37H UMask=04H Counter=0,1
UNC_R3_VN0_CREDITS_REJECT.SNP | Number of times a request failed to acquire a VN0 credit for the SNP message class. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This therefore counts the number of times when a request failed to acquire either a VNA or VN0 credit and was delayed. This should generally be a rare situation. | EventSel=37H UMask=02H Counter=0,1
UNC_R3_VN0_CREDITS_USED.DRS | Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. | EventSel=36H UMask=08H Counter=0,1
UNC_R3_VN0_CREDITS_USED.HOM | Number of times a VN0 credit was used on the HOM message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. | EventSel=36H UMask=01H Counter=0,1
UNC_R3_VN0_CREDITS_USED.NCB | Number of times a VN0 credit was used on the NCB message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. | EventSel=36H UMask=10H Counter=0,1
UNC_R3_VN0_CREDITS_USED.NCS | Number of times a VN0 credit was used on the NCS message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. | EventSel=36H UMask=20H Counter=0,1
UNC_R3_VN0_CREDITS_USED.NDR | Number of times a VN0 credit was used on the NDR message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. | EventSel=36H UMask=04H Counter=0,1
UNC_R3_VN0_CREDITS_USED.SNP | Number of times a VN0 credit was used on the SNP message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. | EventSel=36H UMask=02H Counter=0,1
UNC_R3_VNA_CREDIT_CYCLES_OUT | Number of QPI uclk cycles when the transmitter has no VNA credits available and therefore cannot send any requests on this channel. Note that this does not mean that no flits can be transmitted, as those holding VN0 credits will still (potentially) be able to transmit. Generally it is the goal of the uncore that VNA credits should not run out, as this can substantially throttle back useful QPI bandwidth. | EventSel=31H UMask=00H Counter=0,1
UNC_R3_VNA_CREDIT_CYCLES_USED | Number of QPI uclk cycles with one or more VNA credits in use. This event can be used in conjunction with the VNA In-Use Accumulator to calculate the average number of used VNA credits. | EventSel=32H UMask=00H Counter=0,1 |
UNC_R3_VNA_CREDITS_ACQUIRED | Number of QPI VNA Credit acquisitions. This event can be used in conjunction with the VNA In-Use Accumulator to calculate the average lifetime of a credit holder. VNA credits are used by all message classes in order to communicate across QPI. If a packet is unable to acquire credits, it will then attempt to use credits from the VN0 pool. Note that a single packet may require multiple flit buffers (i.e. when data is being transferred). Therefore, this event will increment by the number of credits acquired in each cycle. Filtering based on message class is not provided. One can count the number of packets transferred in a given message class using a qfclk event. | EventSel=33H UMask=00H Counter=0,1
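The two VNA events above pair with the VNA In-Use Accumulator referenced in their descriptions (that accumulator event is not shown in this excerpt, so its name below is an assumption): the accumulator divided by cycles-used approximates the average number of VNA credits outstanding, and divided by acquisitions it approximates the average credit-holder lifetime. A sketch with hypothetical values:

```python
# Sketch: average VNA credits in use and average credit lifetime. The
# in-use accumulator is assumed to sum outstanding VNA credits each
# cycle; its event name and all values here are hypothetical.

vna_in_use_accum = 900_000_000   # VNA In-Use Accumulator (credit-cycles)
vna_cycles_used = 400_000_000    # UNC_R3_VNA_CREDIT_CYCLES_USED
vna_acquired = 120_000_000       # UNC_R3_VNA_CREDITS_ACQUIRED

# Average number of VNA credits outstanding while any were in use.
avg_credits_in_use = vna_in_use_accum / vna_cycles_used

# Average lifetime of a credit holder, in uclk cycles.
avg_credit_lifetime = vna_in_use_accum / vna_acquired

print(f"avg VNA credits in use: {avg_credits_in_use:.2f}")
print(f"avg VNA credit lifetime: {avg_credit_lifetime:.2f} cycles")
```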
UNC_R3_VNA_CREDITS_REJECT.DRS | Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full). It is possible to filter this event by message class. Some packets use more than one flit buffer, and therefore must acquire multiple credits. Therefore, one could get a reject even if the VNA credits were not fully used up. The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress). VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially. This can happen if the rest of the uncore is unable to drain the requests fast enough. | EventSel=34H UMask=08H Counter=0,1 |
UNC_R3_VNA_CREDITS_REJECT.HOM | Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full). It is possible to filter this event by message class. Some packets use more than one flit buffer, and therefore must acquire multiple credits. Therefore, one could get a reject even if the VNA credits were not fully used up. The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress). VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially. This can happen if the rest of the uncore is unable to drain the requests fast enough. | EventSel=34H UMask=01H Counter=0,1 |
UNC_R3_VNA_CREDITS_REJECT.NCB | Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full). It is possible to filter this event by message class. Some packets use more than one flit buffer, and therefore must acquire multiple credits. Therefore, one could get a reject even if the VNA credits were not fully used up. The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress). VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially. This can happen if the rest of the uncore is unable to drain the requests fast enough. | EventSel=34H UMask=10H Counter=0,1 |
UNC_R3_VNA_CREDITS_REJECT.NCS | Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full). It is possible to filter this event by message class. Some packets use more than one flit buffer, and therefore must acquire multiple credits. Therefore, one could get a reject even if the VNA credits were not fully used up. The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress). VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially. This can happen if the rest of the uncore is unable to drain the requests fast enough. | EventSel=34H UMask=20H Counter=0,1 |
UNC_R3_VNA_CREDITS_REJECT.NDR | Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full). It is possible to filter this event by message class. Some packets use more than one flit buffer, and therefore must acquire multiple credits. Therefore, one could get a reject even if the VNA credits were not fully used up. The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress). VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially. This can happen if the rest of the uncore is unable to drain the requests fast enough. | EventSel=34H UMask=04H Counter=0,1 |
UNC_R3_VNA_CREDITS_REJECT.SNP | Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full). It is possible to filter this event by message class. Some packets use more than one flit buffer, and therefore must acquire multiple credits. Therefore, one could get a reject even if the VNA credits were not fully used up. The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress). VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially. This can happen if the rest of the uncore is unable to drain the requests fast enough. | EventSel=34H UMask=02H Counter=0,1 |
UNC_U_CLOCKTICKS | Counts the number of clock cycles (uclks) in the UBox clock domain. | EventSel=00H UMask=00H Counter=0,1
UNC_U_EVENT_MSG.DOORBELL_RCVD | Virtual Logical Wire (legacy) messages were received from the Uncore. Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID. | EventSel=42H UMask=08H Counter=0,1
UNC_U_EVENT_MSG.INT_PRIO | Virtual Logical Wire (legacy) messages were received from the Uncore. Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID. | EventSel=42H UMask=10H Counter=0,1
UNC_U_EVENT_MSG.IPI_RCVD | Virtual Logical Wire (legacy) messages were received from the Uncore. Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID. | EventSel=42H UMask=04H Counter=0,1
UNC_U_EVENT_MSG.MSI_RCVD | Virtual Logical Wire (legacy) messages were received from the Uncore. Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID. | EventSel=42H UMask=02H Counter=0,1
UNC_U_EVENT_MSG.VLW_RCVD | Virtual Logical Wire (legacy) messages were received from the Uncore. Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID. | EventSel=42H UMask=01H Counter=0,1
UNC_U_FILTER_MATCH.DISABLE | Filter match per thread (w/ or w/o Filter Enable). Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID. | EventSel=41H UMask=02H Counter=0,1 |
UNC_U_FILTER_MATCH.ENABLE | Filter match per thread (w/ or w/o Filter Enable). Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID. | EventSel=41H UMask=01H Counter=0,1 |
UNC_U_FILTER_MATCH.U2C_DISABLE | Filter match per thread (w/ or w/o Filter Enable). Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID. | EventSel=41H UMask=08H Counter=0,1 |
UNC_U_FILTER_MATCH.U2C_ENABLE | Filter match per thread (w/ or w/o Filter Enable). Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID. | EventSel=41H UMask=04H Counter=0,1 |
UNC_U_LOCK_CYCLES | Number of times an IDI Lock/SplitLock sequence was started | EventSel=44H UMask=00H Counter=0,1 |
UNC_U_MSG_CHNL_SIZE_COUNT.4B | Number of transactions on the message channel filtered by request size. This includes both reads and writes. | EventSel=47H UMask=01H Counter=0,1 |
UNC_U_MSG_CHNL_SIZE_COUNT.8B | Number of transactions on the message channel filtered by request size. This includes both reads and writes. | EventSel=47H UMask=02H Counter=0,1 |
UNC_U_PHOLD_CYCLES.ACK_TO_DEASSERT | PHOLD cycles. Filter from source CoreID. | EventSel=45H UMask=02H Counter=0,1 |
UNC_U_PHOLD_CYCLES.ASSERT_TO_ACK | PHOLD cycles. Filter from source CoreID. | EventSel=45H UMask=01H Counter=0,1 |
UNC_U_RACU_REQUESTS.COUNT | RACU Request | EventSel=46H UMask=01H Counter=0,1 |
UNC_U_U2C_EVENTS.CMC | Events coming from Uncore can be sent to one or all cores | EventSel=43H UMask=10H Counter=0,1 |
UNC_U_U2C_EVENTS.LIVELOCK | Events coming from Uncore can be sent to one or all cores | EventSel=43H UMask=04H Counter=0,1 |
UNC_U_U2C_EVENTS.LTERROR | Events coming from Uncore can be sent to one or all cores | EventSel=43H UMask=08H Counter=0,1 |
UNC_U_U2C_EVENTS.MONITOR_T0 | Events coming from Uncore can be sent to one or all cores | EventSel=43H UMask=01H Counter=0,1 |
UNC_U_U2C_EVENTS.MONITOR_T1 | Events coming from Uncore can be sent to one or all cores | EventSel=43H UMask=02H Counter=0,1 |
UNC_U_U2C_EVENTS.OTHER | Events coming from Uncore can be sent to one or all cores | EventSel=43H UMask=80H Counter=0,1 |
UNC_U_U2C_EVENTS.TRAP | Events coming from Uncore can be sent to one or all cores | EventSel=43H UMask=40H Counter=0,1 |
UNC_U_U2C_EVENTS.UMC | Events coming from Uncore can be sent to one or all cores | EventSel=43H UMask=20H Counter=0,1 |
OFFCORE | |