## Motivation
While investigating #11498 I noticed that one of the problems - perhaps the main problem - is that our stage1 LLVM lowering manages to get optimized into something like this:
```c
if (!init_a()) goto cleanup_a;
if (!init_b()) goto cleanup_b;
if (!init_c()) goto cleanup_c;
cleanup_all:
deinit_c();
cleanup_c:
deinit_b();
cleanup_b:
deinit_a();
cleanup_a:
return;
```

Whereas our stage2 LLVM lowering keeps the form it starts in, which looks more like this:
```c
if (!init_a()) {
    return;
}
if (!init_b()) {
    deinit_a();
    return;
}
if (!init_c()) {
    deinit_b();
    deinit_a();
    return;
}
deinit_c();
deinit_b();
deinit_a();
return;
```

A direct fix to this would be to implement #283 - something that I plan to investigate as well. However, this led me to notice a related difference in the input LLVM IR that gets generated by stage1 vs stage2. Here is an example:
```zig
fn doTheTest() !void {
    var list = std.ArrayList([]const u8).init(std.heap.page_allocator);
    defer list.deinit();
    var list2 = std.ArrayList([]const u8).init(std.heap.page_allocator);
    defer list2.deinit();
    try foo();
    try foo();
}
```

Specifically, for the `try foo();` line, stage1 lowers to this LLVM IR:
```llvm
%2 = call fastcc i16 @foo()
store i16 %2, i16* %0, align 2
%3 = icmp ne i16 %2, 0
br i1 %3, label %ErrRetReturn, label %ErrRetContinue

ErrRetReturn:                                     ; preds = %Entry
%4 = load i16, i16* %0, align 2
store i16 %4, i16* %result, align 2
call fastcc void @"std.array_list.ArrayListAligned([]const u8,null).deinit"(%"std.array_list.ArrayListAligned([]const u8,null)"* %list2)
call fastcc void @"std.array_list.ArrayListAligned([]const u8,null).deinit"(%"std.array_list.ArrayListAligned([]const u8,null)"* %list)
ret i16 %4

ErrRetContinue:                                   ; preds = %Entry
```

while stage2 lowers to this:
```llvm
%8 = call fastcc i16 @test2.foo()
%9 = icmp eq i16 %8, 0
br i1 %9, label %Then, label %Else

Then:                                             ; preds = %Entry
br label %Block

Else:                                             ; preds = %Entry
call fastcc void @"array_list.ArrayListAligned([]const u8,null).deinit"(%"array_list.ArrayListAligned([]const u8,null)"* %1)
call fastcc void @"array_list.ArrayListAligned([]const u8,null).deinit"(%"array_list.ArrayListAligned([]const u8,null)"* %3)
ret i16 %8

Block:                                            ; preds = %Then
```

Perhaps this difference is related to LLVM's relative ability to optimize each form. At the least, the former LLVM IR is simpler since it has fewer basic blocks, creating fewer potential issues for LLVM in terms of both optimization and compilation speed.
So I wanted to start exploring how to emit the better code like stage1 does. The first thing I did was look at the corresponding AIR:
```
%68!= block(void, {
  %74 = call(%72, [])
  %75 = is_non_err(%74)
  %103!= cond_br(%75!, {
    %83! %92!
    %76 = unwrap_errunion_payload(void, %74!)
    %77!= br(%68, %76!)
  }, {
    %79 = unwrap_errunion_err((error_set_inferred func=Module.Decl.Index(153)), %74!)
    %85 = load((struct decl=Module.Decl.Index(256)), %52)
    %88!= call(%83!, [%85!])
    %94 = load((struct decl=Module.Decl.Index(256)), %1)
    %97!= call(%92!, [%94!])
    %99 = bitcast((error_set_inferred func=Module.Decl.Index(152)), %79!)
    %101 = wrap_errunion_err((error_set_inferred func=Module.Decl.Index(152))!void, %99!)
    %102!= ret(%101!)
  })
})
```
This is the AIR for the `try foo();` line. After considering a potential optimization in the LLVM backend to lower this as desired, I made the following observations:
- This kind of optimization does not work well with the existing structure. It would either require a full recursive scan of the tree at each block, doing work redundant with the regular lowering, or it would require noticing too late that the optimization could have happened and then rewriting the LLVM IR using LLVM's API.
- This optimization would need to be repeated in every backend, which is inefficient from a development-effort perspective and leaves room for many bugs and much inconsistency in Zig's codegen quality.
- Doing the optimization earlier in the pipeline would potentially improve performance by creating fewer IR instructions in memory across multiple phases of the pipeline.
- `try` is extremely common in Zig source code.
## Proposed Changes
In short: lower `try` more efficiently, taking advantage of a new special-purpose ZIR instruction and a new AIR instruction.
Taking the same Zig source code example from above, here is the proposed difference in ZIR:
### ZIR Before
```
%154 = block({
  %149 = decl_val("foo") token_offset:34:9
  %150 = dbg_stmt(8, 12)
  %151 = call(.auto, %149, []) node_offset:34:12
  %152 = is_non_err(%151) node_offset:34:5
  %153 = condbr(%152, {
    %155 = err_union_payload_unsafe(%151) node_offset:34:5
    %166 = break(%154, %155)
  }, {
    %156 = err_union_code(%151) node_offset:34:5
    %157 = dbg_stmt(6, 11)
    %158 = field_call_bind(%130, "deinit") node_offset:32:16
    %159 = dbg_stmt(6, 23)
    %160 = call(nodiscard .auto, %158, []) node_offset:32:23
    %161 = dbg_stmt(3, 11)
    %162 = field_call_bind(%111, "deinit") node_offset:29:15
    %163 = dbg_stmt(3, 22)
    %164 = call(nodiscard .auto, %162, []) node_offset:29:22
    %165 = ret_node(%156) node_offset:34:5
  }) node_offset:34:5
}) node_offset:34:5
%167 = ensure_result_used(%154) node_offset:34:5
```
### ZIR After
```
%149 = decl_val("foo") token_offset:34:9
%150 = dbg_stmt(8, 12)
%151 = call(.auto, %149, []) node_offset:34:12
%153 = try(%151, {
  %156 = err_union_code(%151) node_offset:34:5
  %157 = dbg_stmt(6, 11)
  %158 = field_call_bind(%130, "deinit") node_offset:32:16
  %159 = dbg_stmt(6, 23)
  %160 = call(nodiscard .auto, %158, []) node_offset:32:23
  %161 = dbg_stmt(3, 11)
  %162 = field_call_bind(%111, "deinit") node_offset:29:15
  %163 = dbg_stmt(3, 22)
  %164 = call(nodiscard .auto, %162, []) node_offset:29:22
  %165 = ret_node(%156) node_offset:34:5
}) node_offset:34:5
%167 = ensure_result_used(%153) node_offset:34:5
```
### ZIR Explanation
This new `try` instruction would implicitly check whether its operand is an error, and its result value would be the unwrapped payload. The body it provides is what executes in the "is an error" case, which you can see here is running the defer expressions. In the future, if #283 is implemented, this would change to simply take another operand: the block to break from in case of an error. In both cases, the `try` body would carry the guarantee that no control flow is possible from inside the body to directly after the `try` instruction.
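To make the intended semantics concrete, here is a minimal sketch in Python (not compiler code; `ErrorUnion`, `Propagate`, and `run_try` are illustrative names I made up): on success the result is the unwrapped payload, and on error the provided body runs with no control flow falling through past the instruction.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class ErrorUnion:
    """Models a Zig error union: `error is None` means success."""
    error: Optional[str]
    payload: object = None


class Propagate(Exception):
    """Signals that the error body ran and control left the function."""
    def __init__(self, error: str):
        self.error = error


def run_try(operand: ErrorUnion, error_body: Callable[[str], None]):
    # Success: the result value of `try` is the unwrapped payload.
    if operand.error is None:
        return operand.payload
    # Error: run the provided body (here, the defer expressions), then
    # leave the function -- there is no path to directly after the `try`.
    error_body(operand.error)
    raise Propagate(operand.error)


# Success case: yields the payload; the error body never runs.
assert run_try(ErrorUnion(error=None, payload=42), lambda e: None) == 42
```

The exception here is only a stand-in for the guarantee stated above: the error body cannot fall through to the instruction after the `try`.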
### Sema
Sema would have the option to lower the ZIR `try` instruction the same as before, using already-existing AIR instructions. However, introducing a `try` AIR instruction as well would yield additional savings and better codegen even without optimizations.
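As a rough sketch of the two options (Python pseudocode; the tuple-based instruction encoding is illustrative, not the real Sema/AIR data structures), the legacy expansion wraps the check in a block with a cond_br, while the dedicated instruction carries only the operand and the error body:

```python
def lower_try_legacy(operand, error_body):
    # Option 1: expand into existing AIR -- a block wrapping
    # is_non_err + cond_br, with the payload broken out of the block.
    then_body = [("unwrap_errunion_payload", operand), ("br",)]
    return [("block", [("is_non_err", operand),
                       ("cond_br", then_body, error_body)])]


def lower_try_direct(operand, error_body):
    # Option 2: the dedicated AIR `try`; its result *is* the payload.
    return [("try", operand, error_body)]


def count(insts):
    """Count instructions, recursing into nested bodies."""
    n = 0
    for inst in insts:
        n += 1
        for arg in inst[1:]:
            if isinstance(arg, list):
                n += count(arg)
    return n


error_body = [("unwrap_errunion_err", "%74"), ("ret",)]
# Same error body, fewer instructions in memory with the dedicated form.
assert count(lower_try_direct("%74", error_body)) < \
       count(lower_try_legacy("%74", error_body))
```

The point of the comparison is the one made above: even before any backend work, the dedicated instruction removes the block, is_non_err, cond_br, unwrap, and br bookkeeping from every `try` site.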
### AIR Before
This is the same snippet as in the Motivation section above, reproduced here for comparison:
```
%68!= block(void, {
  %74 = call(%72, [])
  %75 = is_non_err(%74)
  %103!= cond_br(%75!, {
    %83! %92!
    %76 = unwrap_errunion_payload(void, %74!)
    %77!= br(%68, %76!)
  }, {
    %79 = unwrap_errunion_err((error_set_inferred func=Module.Decl.Index(153)), %74!)
    %85 = load((struct decl=Module.Decl.Index(256)), %52)
    %88!= call(%83!, [%85!])
    %94 = load((struct decl=Module.Decl.Index(256)), %1)
    %97!= call(%92!, [%94!])
    %99 = bitcast((error_set_inferred func=Module.Decl.Index(152)), %79!)
    %101 = wrap_errunion_err((error_set_inferred func=Module.Decl.Index(152))!void, %99!)
    %102!= ret(%101!)
  })
})
```
### AIR After
```
%74 = call(%72, [])
%103!= try(%74!, {
  %79 = unwrap_errunion_err((error_set_inferred func=Module.Decl.Index(153)), %74!)
  %85 = load((struct decl=Module.Decl.Index(256)), %52)
  %88!= call(%83!, [%85!])
  %94 = load((struct decl=Module.Decl.Index(256)), %1)
  %97!= call(%92!, [%94!])
  %99 = bitcast((error_set_inferred func=Module.Decl.Index(152)), %79!)
  %101 = wrap_errunion_err((error_set_inferred func=Module.Decl.Index(152))!void, %99!)
  %102!= ret(%101!)
})
```
## Conclusion
This AIR is trivial to lower to only one branch / one basic block, rather than the two required by the block / cond_br combo.
This would require adding support for this new kind of control flow in all the codegen backends; however, I think it will be worth it because it offers both improved compilation speed and better codegen, which is the kind of tradeoff we are intentionally making for this compiler.
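As a back-of-the-envelope illustration of the savings (Python pseudocode; the helper names are hypothetical, but the basic-block names are taken directly from the stage1 and stage2 IR dumps above), here is the per-`try` basic-block cost of each lowering strategy:

```python
def blocks_via_block_cond_br():
    # Mirrors the stage2 IR above: cond_br creates Then and Else, and
    # the enclosing block needs a merge Block for the break to land in.
    return ["Then", "Else", "Block"]


def blocks_via_try():
    # Mirrors the stage1 IR above: one conditional branch into the error
    # block, with the success path continuing directly.
    return ["ErrRetReturn", "ErrRetContinue"]


# One fewer basic block per `try` site, and `try` is extremely common.
assert len(blocks_via_try()) < len(blocks_via_block_cond_br())
```

Multiplied over the number of `try` sites in a typical Zig codebase, this is where the claimed compilation-speed and codegen wins would come from.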