How Hyper-Threading Works — A Microarchitectural Perspective

Hosted by Abhinav Upadhyay
Zoom
About Event

Have you ever wondered how Simultaneous Multithreading (SMT) works at the hardware level? Or thought about its impact on your code's performance, such as whether it can affect single-threaded applications?

Simultaneous Multithreading (SMT), also known as Hyper-Threading (HT), is a hardware feature available on many modern processors that enables a single processor core to execute two threads simultaneously. This technology improves instruction throughput and can significantly boost system performance.
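
As a quick practical aside (not part of the session's material), here is a minimal sketch that checks whether SMT is active on a Linux machine and which logical CPUs share each physical core. It reads standard kernel sysfs files; the exact paths are assumed to be present on a recent Linux kernel.

```python
from pathlib import Path

def smt_active() -> bool:
    """Return True if the kernel reports SMT (Hyper-Threading) as active."""
    path = Path("/sys/devices/system/cpu/smt/active")
    return path.exists() and path.read_text().strip() == "1"

def sibling_map() -> dict:
    """Map each logical CPU to the logical CPUs that share its physical core."""
    result = {}
    for cpu in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
        siblings = cpu / "topology" / "thread_siblings_list"
        if siblings.exists():
            result[cpu.name] = siblings.read_text().strip()
    return result

if __name__ == "__main__":
    print("SMT active:", smt_active())
    for cpu, sibs in sibling_map().items():
        print(f"{cpu}: shares a core with logical CPU(s) {sibs}")
```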

In our next live session, we will answer these questions by exploring the microarchitectural implementation of SMT in Intel CPUs. Beyond covering how SMT works, this discussion will give you a thorough overview of the microarchitecture of x86 CPUs and a deep understanding of how your program's instructions are executed. This knowledge is extremely useful for performing low-level performance optimizations and squeezing every bit of efficiency out of the CPU.
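
To get a feel for the performance angle before the session, the rough sketch below (Linux-only, and again not taken from the talk) pins two CPU-bound worker processes either to two SMT sibling logical CPUs or to two separate physical cores and compares the wall-clock time. The CPU numbers used are assumptions about the machine's topology; check your own sibling layout first (for example with the snippet above), and expect the size of the difference to depend heavily on the workload.

```python
import os
import time
from multiprocessing import Process

def spin(cpu, iterations=20_000_000):
    """Pin this worker to one logical CPU, then burn cycles in a compute-only loop."""
    os.sched_setaffinity(0, {cpu})
    total = 0
    for i in range(iterations):
        total += i * i

def run_pair(cpus):
    """Run two pinned workers concurrently and return the elapsed wall-clock time."""
    workers = [Process(target=spin, args=(c,)) for c in cpus]
    start = time.perf_counter()
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    # Assumed topology: logical CPUs 0 and 4 are SMT siblings of one core,
    # while 0 and 1 sit on different physical cores. Adjust for your machine.
    print("two workers on SMT siblings  :", round(run_pair((0, 4)), 2), "s")
    print("two workers on separate cores:", round(run_pair((0, 1)), 2), "s")
```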

Here’s what we’ll cover:

  • What simultaneous multithreading (SMT) is and the motivation behind its introduction in CPUs

  • A brief background on CPU microarchitecture

  • How SMT instruction execution works at the microarchitecture level, covering:

    • Instruction fetch & decode

    • ITLB and branch prediction

    • Uop queue

    • Out-of-order execution engine

    • Instruction scheduling & retirement

    • Memory access

If you are not familiar with these microarchitectural details of the CPU, this talk will be a good first introduction.