{"@attributes":{"version":"2.0"},"channel":{"title":"Min Hsu's Homepage","link":"https:\/\/myhsu.xyz\/","description":"Recent content on Min Hsu's Homepage","generator":"Hugo -- gohugo.io","language":"en","managingEditor":"min@myhsu.dev (Min-Yih Hsu)","webMaster":"min@myhsu.dev (Min-Yih Hsu)","copyright":"\u00a9 2024 Min-Yih Hsu","lastBuildDate":"Sat, 01 Nov 2025 00:00:00 +0000","item":[{"title":"Machine Scheduler in LLVM - Part II","link":"https:\/\/myhsu.xyz\/llvm-machine-scheduler-2\/","pubDate":"Sat, 01 Nov 2025 00:00:00 +0000","author":"min@myhsu.dev (Min-Yih Hsu)","guid":"https:\/\/myhsu.xyz\/llvm-machine-scheduler-2\/","description":"In the first part of this series, we covered the basic workflow of Machine Scheduler &ndash; LLVM&rsquo;s predominant instruction scheduling framework &ndash; and learned that an instruction could go through three phases of checks before it finally got scheduled: legality check, feasibility check, and profitability check.\nThe first two phases &ndash; which were explained in detail in that post &ndash; have direct connections with program correctness and avoiding potential processor hazards, respectively."},{"title":"Machine Scheduler in LLVM - Part I","link":"https:\/\/myhsu.xyz\/llvm-machine-scheduler\/","pubDate":"Tue, 16 Sep 2025 00:00:00 +0000","author":"min@myhsu.dev (Min-Yih Hsu)","guid":"https:\/\/myhsu.xyz\/llvm-machine-scheduler\/","description":"By this point it&rsquo;s evident that I won&rsquo;t shut up talking about the scheduling model in LLVM.1\nI find the fact that it encodes so many low-level details pretty fascinating. 
It has so much potential and could arguably be used in more places &ndash; but the latter is probably a story for another day, because there is still an elephant in the room:\nWhat about the instruction scheduler, which the scheduling model was originally invented for?"},{"title":"Calculate Throughput with LLVM's Scheduling Model","link":"https:\/\/myhsu.xyz\/llvm-sched-interval-throughput\/","pubDate":"Sun, 23 Mar 2025 00:00:00 +0000","author":"min@myhsu.dev (Min-Yih Hsu)","guid":"https:\/\/myhsu.xyz\/llvm-sched-interval-throughput\/","description":"From Cambridge Dictionary:\nThroughput \/\u02c8\u03b8ru\u02d0.p\u028at\/ (noun)\nan amount of work done in a particular period of time. In architecture-level performance analysis, throughput is usually measured by IPC &ndash; Instructions Per Cycle. The inverse of this property, namely, inverse or reciprocal throughput, is also commonly used to describe the performance characteristics of a single instruction. It&rsquo;s not the time an instruction takes to finish from start to end &ndash; that is latency &ndash; but closer to the amount of time it takes to finish a bunch of instructions, amortized by their degree of (instruction-level) parallelism."},{"title":"Visualize RISC-V Vector Memory Instructions","link":"https:\/\/myhsu.xyz\/riscv-rvv-mem-visualize\/","pubDate":"Sun, 05 Jan 2025 00:00:00 +0000","author":"min@myhsu.dev (Min-Yih Hsu)","guid":"https:\/\/myhsu.xyz\/riscv-rvv-mem-visualize\/","description":"The RISC-V Vector (RVV) extension has several kinds of load \/ store instructions which access memory in different ways. 
While the memory access patterns alone might take a little time to fully understand, they get even trickier when combined with RVV&rsquo;s own concepts like variable element size (SEW), register groups (LMUL), number of elements (VL), masks, and mask \/ tail policies.\nPersonally, I find it easier to memorize them with visualizations, hence this (relatively) short post!"},{"title":"Scheduling Model in LLVM - Part II","link":"https:\/\/myhsu.xyz\/llvm-sched-model-1.5\/","pubDate":"Mon, 28 Oct 2024 00:00:00 +0000","author":"min@myhsu.dev (Min-Yih Hsu)","guid":"https:\/\/myhsu.xyz\/llvm-sched-model-1.5\/","description":"In the previous post, we covered the basics of the scheduling model in LLVM. Specifically, the per-operand tokens that connect an instruction with models that spell out processor-specific scheduling properties like instruction latency, and the concept of processor resources with different sizes of buffers.\nWhile I was planning to write about how scheduling models are used in this post &ndash; namely, covering things like the instruction scheduler and MCA &ndash; the draft was overwhelmed by the sheer amount of content needed to cover just the substrate."},{"title":"When LLVM scalable vector meets RISC-V","link":"https:\/\/myhsu.xyz\/llvm-riscv-bits-per-block\/","pubDate":"Sat, 05 Oct 2024 00:00:00 +0000","author":"min@myhsu.dev (Min-Yih Hsu)","guid":"https:\/\/myhsu.xyz\/llvm-riscv-bits-per-block\/","description":"There is a nice page about how LLVM handles the RISC-V Vector Extension (RVV). It primarily covers how the RISC-V backend lowers vector types and vector operations. 
Right at the beginning of the page lies this table:\nIt shows the LLVM IR types we use to represent RVV&rsquo;s dynamically sized vectors: each row is an element type, while each column is an LMUL &ndash; the register grouping factor, or &ldquo;how many vector registers should we slap together and treat as a single logical vector register&rdquo;."},{"title":"Scheduling Model in LLVM - Part I","link":"https:\/\/myhsu.xyz\/llvm-sched-model-1\/","pubDate":"Mon, 05 Aug 2024 00:00:00 +0000","author":"min@myhsu.dev (Min-Yih Hsu)","guid":"https:\/\/myhsu.xyz\/llvm-sched-model-1\/","description":"Instruction scheduling is essential to modern compilers. It hides latencies and increases the throughput of straight-line code by reordering the instructions within. In order to do that, compilers have to know a whole bunch of information, ranging from an individual instruction&rsquo;s latency to microarchitecture details. The system that describes these is called a scheduling model. In LLVM, a scheduling model is used by not just the instruction scheduler, but also target-specific optimizations like MachineCombiner and components like MCA (Machine Code Analyzer)1."},{"title":"Legalizations in LLVM Backend","link":"https:\/\/myhsu.xyz\/llvm-codegen-legalization\/","pubDate":"Wed, 15 May 2024 00:00:00 +0000","author":"min@myhsu.dev (Min-Yih Hsu)","guid":"https:\/\/myhsu.xyz\/llvm-codegen-legalization\/","description":"Ideally, compilers can build a program for a wide variety of hardware without the need to change a single line of its source code. While there are exceptions and corner cases, this holds in the majority of cases. 
This means that if the input code uses something that is not directly available on the hardware, the compiler has to figure out a way to effectively emulate those features.\nThis might sound a little distant from our typical software development experience, but I&rsquo;m not even talking about a problem that only happens on some exotic proprietary ML accelerator whatnots."},{"title":"Publications","link":"https:\/\/myhsu.xyz\/publications\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","author":"min@myhsu.dev (Min-Yih Hsu)","guid":"https:\/\/myhsu.xyz\/publications\/","description":"Book Min-Yih Hsu. \u201cLLVM Techniques, Tips, and Best Practices Clang and Middle-End Libraries: Design Powerful and Reliable Compilers Using the Latest Libraries and Tools from LLVM\u201d. Packt Publishing (2021). Amazon Link.\nPaper and Talk Min-Yih Hsu. &ldquo;Scheduling Model in LLVM: Past, Present, and Future&rdquo;. LLVM Developers&rsquo; Meeting (2025). [Slides] Min-Yih Hsu. &ldquo;New llvm-exegesis Support for RISC-V Vector Extension&rdquo;. LLVM Developers&rsquo; Meeting (2024). [Recording] [Slides] Min-Yih Hsu, Felicitas Hetzelt, David Gens, Michael Maitland, and Michael Franz."}]}}