Commit d9a4b24

mknyszek authored and cherrymui committed
runtime: always lock OS thread in debugcall
Right now debuggers like Delve rely on the new goroutine created to run
a debugcall function to run on the same thread it started on, up until
it hits itself with a SIGINT as part of the debugcall protocol. That's
all well and good, except debugCallWrap1 isn't particularly careful
about not growing the stack. For example, if the new goroutine happens
to have a stale preempt flag, then it's possible a stack growth will
cause a roundtrip into the scheduler, possibly causing the goroutine to
switch to another thread.

Previous attempts to just be more careful around debugCallWrap1 were
helpful, but insufficient. This change takes everything a step further
and always locks the debug call goroutine and the new goroutine it
creates to the OS thread.

For #61732.

Change-Id: I038f3a4df30072833e27e6a5a1ec01806a32891f
Reviewed-on: https://go-review.googlesource.com/c/go/+/515637
LUCI-TryBot-Result: Go LUCI <[email protected]>
Reviewed-by: Alessandro Arzilli <[email protected]>
Reviewed-by: Cherry Mui <[email protected]>
1 parent fb5bdb4 commit d9a4b24

1 file changed

Lines changed: 30 additions & 28 deletions

File tree

src/runtime/debugcall.go

@@ -102,11 +102,18 @@ func debugCallCheck(pc uintptr) string {
 //
 //go:nosplit
 func debugCallWrap(dispatch uintptr) {
-	var lockedm bool
 	var lockedExt uint32
 	callerpc := getcallerpc()
 	gp := getg()
 
+	// Lock ourselves to the OS thread.
+	//
+	// Debuggers rely on us running on the same thread until we get to
+	// dispatch the function they asked us to.
+	//
+	// We're going to transfer this to the new G we just created.
+	lockOSThread()
+
 	// Create a new goroutine to execute the call on. Run this on
 	// the system stack to avoid growing our stack.
 	systemstack(func() {
@@ -121,27 +128,22 @@ func debugCallWrap(dispatch uintptr) {
 	}
 	newg.param = unsafe.Pointer(args)
 
-	// If the current G is locked, then transfer that
-	// locked-ness to the new goroutine.
-	if gp.lockedm != 0 {
-		// Save lock state to restore later.
-		mp := gp.m
-		if mp != gp.lockedm.ptr() {
-			throw("inconsistent lockedm")
-		}
-
-		lockedm = true
-		lockedExt = mp.lockedExt
-
-		// Transfer external lock count to internal so
-		// it can't be unlocked from the debug call.
-		mp.lockedInt++
-		mp.lockedExt = 0
-
-		mp.lockedg.set(newg)
-		newg.lockedm.set(mp)
-		gp.lockedm = 0
+	// Transfer locked-ness to the new goroutine.
+	// Save lock state to restore later.
+	mp := gp.m
+	if mp != gp.lockedm.ptr() {
+		throw("inconsistent lockedm")
 	}
+	// Save the external lock count and clear it so
+	// that it can't be unlocked from the debug call.
+	// Note: we already locked internally to the thread,
+	// so if we were locked before we're still locked now.
+	lockedExt = mp.lockedExt
+	mp.lockedExt = 0
+
+	mp.lockedg.set(newg)
+	newg.lockedm.set(mp)
+	gp.lockedm = 0
 
 	// Mark the calling goroutine as being at an async
 	// safe-point, since it has a few conservative frames
@@ -177,13 +179,13 @@ func debugCallWrap(dispatch uintptr) {
 	// We'll resume here when the call returns.
 
 	// Restore locked state.
-	if lockedm {
-		mp := gp.m
-		mp.lockedExt = lockedExt
-		mp.lockedInt--
-		mp.lockedg.set(gp)
-		gp.lockedm.set(mp)
-	}
+	mp := gp.m
+	mp.lockedExt = lockedExt
+	mp.lockedg.set(gp)
+	gp.lockedm.set(mp)
+
+	// Undo the lockOSThread we did earlier.
+	unlockOSThread()
 
 	gp.asyncSafePoint = false
 }
