Show note when instance is still running during logout #105
Conversation
gtsiolis
commented
Mar 11, 2026
| BEFORE | AFTER |
|---|---|
| ![]() | ![]() |
⚠️ Warning: Rate limit exceeded

CodeRabbit enforces hourly rate limits for each developer per organization; paid plans have higher limits than the trial, open-source, and free plans. After the wait time has elapsed, a review can be re-triggered. See the FAQ for further information.

ℹ️ Review info

⚙️ Run configuration

Configuration used: Repository UI (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (12)
📝 Walkthrough

The pull request extends the logout flow to be runtime-aware. It adds functionality to detect whether LocalStack containers are still running after logout and notifies users accordingly, leveraging Docker runtime information when available.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant LogoutCmd as Logout Command
    participant UI as RunLogout (UI)
    participant Config as Config
    participant Runtime as Docker Runtime
    participant Sink as TUI Sink
    User->>LogoutCmd: Execute logout
    LogoutCmd->>LogoutCmd: Create runtime (if available)
    LogoutCmd->>UI: RunLogout(ctx, runtime, ...)
    UI->>UI: Perform logout
    UI->>Config: Load container config
    UI->>Runtime: Query container status
    Runtime-->>UI: Container running status
    alt Container running
        UI->>Sink: Emit note: "LocalStack is still running"
        Sink->>User: Display notification
    end
    UI-->>LogoutCmd: Return result
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs
Suggested reviewers

🚥 Pre-merge checks: ✅ Passed checks (2 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches: 🧪 Generate unit tests (beta)
internal/container/running.go (outdated)

```go
func AnyRunning(ctx context.Context, rt runtime.Runtime) (bool, error) {
	cfg, err := config.Get()
```
issue: we should avoid calls to config.Get() internally but pass only what's necessary as argument (container config list in this case)
This seems to be an issue across the codebase, right? Did you have in mind a fix like this or something simpler scoped for changes in this PR?
diff --git a/cmd/logout.go b/cmd/logout.go
index c0c516f..e470b38 100644
--- a/cmd/logout.go
+++ b/cmd/logout.go
@@ -7,6 +7,7 @@ import (
"github.com/localstack/lstk/internal/api"
"github.com/localstack/lstk/internal/auth"
+ "github.com/localstack/lstk/internal/config"
"github.com/localstack/lstk/internal/container"
"github.com/localstack/lstk/internal/env"
"github.com/localstack/lstk/internal/output"
@@ -22,12 +23,16 @@ func newLogoutCmd(cfg *env.Env) *cobra.Command {
PreRunE: initConfig,
RunE: func(cmd *cobra.Command, args []string) error {
platformClient := api.NewPlatformClient(cfg.APIEndpoint)
+ appConfig, err := config.Get()
+ if err != nil {
+ return fmt.Errorf("failed to get config: %w", err)
+ }
if isInteractiveMode(cfg) {
var rt runtime.Runtime
if dockerRuntime, err := runtime.NewDockerRuntime(); err == nil {
rt = dockerRuntime
}
- return ui.RunLogout(cmd.Context(), rt, platformClient, cfg.AuthToken, cfg.ForceFileKeyring)
+ return ui.RunLogout(cmd.Context(), rt, platformClient, cfg.AuthToken, cfg.ForceFileKeyring, appConfig.Containers)
}
sink := output.NewPlainSink(os.Stdout)
@@ -44,7 +49,7 @@ func newLogoutCmd(cfg *env.Env) *cobra.Command {
}
if rt, err := runtime.NewDockerRuntime(); err == nil {
- if running, err := container.AnyRunning(cmd.Context(), rt); err == nil && running {
+ if running, err := container.AnyRunning(cmd.Context(), rt, appConfig.Containers); err == nil && running {
output.EmitNote(sink, "LocalStack is still running in the background")
}
}
diff --git a/cmd/logs.go b/cmd/logs.go
index 5cfc2c6..40fac90 100644
--- a/cmd/logs.go
+++ b/cmd/logs.go
@@ -1,8 +1,10 @@
package cmd
import (
+ "fmt"
"os"
+ "github.com/localstack/lstk/internal/config"
"github.com/localstack/lstk/internal/container"
"github.com/localstack/lstk/internal/output"
"github.com/localstack/lstk/internal/runtime"
@@ -24,7 +26,11 @@ func newLogsCmd() *cobra.Command {
if err != nil {
return err
}
- return container.Logs(cmd.Context(), rt, output.NewPlainSink(os.Stdout), follow)
+ appConfig, err := config.Get()
+ if err != nil {
+ return fmt.Errorf("failed to get config: %w", err)
+ }
+ return container.Logs(cmd.Context(), rt, output.NewPlainSink(os.Stdout), appConfig.Containers, follow)
},
}
cmd.Flags().BoolP("follow", "f", false, "Follow log output")
diff --git a/cmd/root.go b/cmd/root.go
index 103bca0..0efc875 100644
--- a/cmd/root.go
+++ b/cmd/root.go
@@ -81,12 +81,19 @@ func runStart(ctx context.Context, rt runtime.Runtime, cfg *env.Env, tel *teleme
// TODO: replace map with a typed payload struct once event schema is finalised
tel.Emit(ctx, "cli_cmd", map[string]any{"cmd": "lstk start", "params": []string{}})
+ appConfig, err := config.Get()
+ if err != nil {
+ return fmt.Errorf("failed to get config: %w", err)
+ }
+
opts := container.StartOptions{
PlatformClient: api.NewPlatformClient(cfg.APIEndpoint),
AuthToken: cfg.AuthToken,
ForceFileKeyring: cfg.ForceFileKeyring,
WebAppURL: cfg.WebAppURL,
LocalStackHost: cfg.LocalStackHost,
+ Containers: appConfig.Containers,
+ Env: appConfig.Env,
}
if isInteractiveMode(cfg) {
return ui.Run(ctx, rt, version.Version(), opts)
diff --git a/cmd/stop.go b/cmd/stop.go
index f8aa46d..c867a7c 100644
--- a/cmd/stop.go
+++ b/cmd/stop.go
@@ -1,8 +1,10 @@
package cmd
import (
+ "fmt"
"os"
+ "github.com/localstack/lstk/internal/config"
"github.com/localstack/lstk/internal/container"
"github.com/localstack/lstk/internal/env"
"github.com/localstack/lstk/internal/output"
@@ -22,13 +24,16 @@ func newStopCmd(cfg *env.Env) *cobra.Command {
if err != nil {
return err
}
+ appConfig, err := config.Get()
+ if err != nil {
+ return fmt.Errorf("failed to get config: %w", err)
+ }
if isInteractiveMode(cfg) {
- return ui.RunStop(cmd.Context(), rt)
+ return ui.RunStop(cmd.Context(), rt, appConfig.Containers)
}
- return container.Stop(cmd.Context(), rt, output.NewPlainSink(os.Stdout))
+ return container.Stop(cmd.Context(), rt, output.NewPlainSink(os.Stdout), appConfig.Containers)
},
}
}
-
diff --git a/internal/container/logs.go b/internal/container/logs.go
index a557f86..8e0665a 100644
--- a/internal/container/logs.go
+++ b/internal/container/logs.go
@@ -11,17 +11,13 @@ import (
"github.com/localstack/lstk/internal/runtime"
)
-func Logs(ctx context.Context, rt runtime.Runtime, sink output.Sink, follow bool) error {
- cfg, err := config.Get()
- if err != nil {
- return fmt.Errorf("failed to get config: %w", err)
- }
- if len(cfg.Containers) == 0 {
+func Logs(ctx context.Context, rt runtime.Runtime, sink output.Sink, containers []config.ContainerConfig, follow bool) error {
+ if len(containers) == 0 {
return fmt.Errorf("no containers configured")
}
// TODO: handle logs per container
- c := cfg.Containers[0]
+ c := containers[0]
pr, pw := io.Pipe()
errCh := make(chan error, 1)
diff --git a/internal/container/running.go b/internal/container/running.go
index e21820a..7decfce 100644
--- a/internal/container/running.go
+++ b/internal/container/running.go
@@ -8,13 +8,8 @@ import (
"github.com/localstack/lstk/internal/runtime"
)
-func AnyRunning(ctx context.Context, rt runtime.Runtime) (bool, error) {
- cfg, err := config.Get()
- if err != nil {
- return false, fmt.Errorf("failed to get config: %w", err)
- }
-
- for _, c := range cfg.Containers {
+func AnyRunning(ctx context.Context, rt runtime.Runtime, containers []config.ContainerConfig) (bool, error) {
+ for _, c := range containers {
running, err := rt.IsRunning(ctx, c.Name())
if err != nil {
return false, fmt.Errorf("checking %s running: %w", c.Name(), err)
diff --git a/internal/container/running_test.go b/internal/container/running_test.go
index e667f88..97d8d8a 100644
--- a/internal/container/running_test.go
+++ b/internal/container/running_test.go
@@ -2,8 +2,6 @@ package container
import (
"context"
- "os"
- "path/filepath"
"testing"
"github.com/localstack/lstk/internal/config"
@@ -14,50 +12,27 @@ import (
)
func TestAnyRunning_ReturnsTrueWhenConfiguredContainerIsRunning(t *testing.T) {
- initTestConfig(t, `
-[[containers]]
-type = "aws"
-tag = "latest"
-port = "4566"
-`)
+ containers := []config.ContainerConfig{{Type: config.EmulatorAWS, Tag: "latest", Port: "4566"}}
ctrl := gomock.NewController(t)
mockRT := runtime.NewMockRuntime(ctrl)
mockRT.EXPECT().IsRunning(gomock.Any(), "localstack-aws").Return(true, nil)
- running, err := AnyRunning(context.Background(), mockRT)
+ running, err := AnyRunning(context.Background(), mockRT, containers)
require.NoError(t, err)
assert.True(t, running)
}
func TestAnyRunning_ReturnsFalseWhenConfiguredContainerIsNotRunning(t *testing.T) {
- initTestConfig(t, `
-[[containers]]
-type = "aws"
-tag = "latest"
-port = "4566"
-`)
+ containers := []config.ContainerConfig{{Type: config.EmulatorAWS, Tag: "latest", Port: "4566"}}
ctrl := gomock.NewController(t)
mockRT := runtime.NewMockRuntime(ctrl)
mockRT.EXPECT().IsRunning(gomock.Any(), "localstack-aws").Return(false, nil)
- running, err := AnyRunning(context.Background(), mockRT)
+ running, err := AnyRunning(context.Background(), mockRT, containers)
require.NoError(t, err)
assert.False(t, running)
}
-
-func initTestConfig(t *testing.T, content string) {
- t.Helper()
-
- dir := t.TempDir()
- path := filepath.Join(dir, "config.toml")
- err := os.WriteFile(path, []byte(content), 0o600)
- require.NoError(t, err)
- require.NoError(t, config.InitFromPath(path))
- t.Cleanup(func() {
- require.NoError(t, config.InitFromPath(path))
- })
-}
diff --git a/internal/container/start.go b/internal/container/start.go
index 60a14e2..9b489f5 100644
--- a/internal/container/start.go
+++ b/internal/container/start.go
@@ -29,6 +29,8 @@ type StartOptions struct {
ForceFileKeyring bool
WebAppURL string
LocalStackHost string
+ Containers []config.ContainerConfig
+ Env map[string]map[string]string
}
func Start(ctx context.Context, rt runtime.Runtime, sink output.Sink, opts StartOptions, interactive bool) error {
@@ -48,17 +50,12 @@ func Start(ctx context.Context, rt runtime.Runtime, sink output.Sink, opts Start
return err
}
- cfg, err := config.Get()
- if err != nil {
- return fmt.Errorf("failed to get config: %w", err)
- }
-
- if hasDuplicateContainerTypes(cfg.Containers) {
+ if hasDuplicateContainerTypes(opts.Containers) {
output.EmitWarning(sink, "Multiple emulators of the same type are defined in your config; this setup is not supported yet")
}
- containers := make([]runtime.ContainerConfig, len(cfg.Containers))
- for i, c := range cfg.Containers {
+ containers := make([]runtime.ContainerConfig, len(opts.Containers))
+ for i, c := range opts.Containers {
image, err := c.Image()
if err != nil {
return err
@@ -72,7 +69,7 @@ func Start(ctx context.Context, rt runtime.Runtime, sink output.Sink, opts Start
return err
}
- resolvedEnv, err := c.ResolvedEnv(cfg.Env)
+ resolvedEnv, err := c.ResolvedEnv(opts.Env)
if err != nil {
return err
}
@@ -115,7 +112,7 @@ func Start(ctx context.Context, rt runtime.Runtime, sink output.Sink, opts Start
setups := map[config.EmulatorType]postStartSetupFunc{
config.EmulatorAWS: awsconfig.Setup,
}
- return runPostStartSetups(ctx, sink, cfg.Containers, interactive, opts.LocalStackHost, setups)
+ return runPostStartSetups(ctx, sink, opts.Containers, interactive, opts.LocalStackHost, setups)
}
func runPostStartSetups(ctx context.Context, sink output.Sink, containers []config.ContainerConfig, interactive bool, localStackHost string, setups map[config.EmulatorType]postStartSetupFunc) error {
diff --git a/internal/container/stop.go b/internal/container/stop.go
index fa5048f..52c88c0 100644
--- a/internal/container/stop.go
+++ b/internal/container/stop.go
@@ -9,13 +9,8 @@ import (
"github.com/localstack/lstk/internal/runtime"
)
-func Stop(ctx context.Context, rt runtime.Runtime, sink output.Sink) error {
- cfg, err := config.Get()
- if err != nil {
- return fmt.Errorf("failed to get config: %w", err)
- }
-
- for _, c := range cfg.Containers {
+func Stop(ctx context.Context, rt runtime.Runtime, sink output.Sink, containers []config.ContainerConfig) error {
+ for _, c := range containers {
name := c.Name()
running, err := rt.IsRunning(ctx, name)
if err != nil {
diff --git a/internal/ui/run.go b/internal/ui/run.go
index 0881f48..afca580 100644
--- a/internal/ui/run.go
+++ b/internal/ui/run.go
@@ -6,7 +6,6 @@ import (
"os"
tea "github.com/charmbracelet/bubbletea"
- "github.com/localstack/lstk/internal/config"
"github.com/localstack/lstk/internal/container"
"github.com/localstack/lstk/internal/endpoint"
"github.com/localstack/lstk/internal/output"
@@ -32,10 +31,10 @@ func Run(parentCtx context.Context, rt runtime.Runtime, version string, opts con
// FIXME: This assumes a single emulator; revisit for proper multi-emulator support
emulatorName := "LocalStack Emulator"
host := endpoint.Hostname
- if cfg, err := config.Get(); err == nil && len(cfg.Containers) > 0 {
- emulatorName = cfg.Containers[0].DisplayName()
- if cfg.Containers[0].Port != "" {
- host, _ = endpoint.ResolveHost(cfg.Containers[0].Port, opts.LocalStackHost)
+ if len(opts.Containers) > 0 {
+ emulatorName = opts.Containers[0].DisplayName()
+ if opts.Containers[0].Port != "" {
+ host, _ = endpoint.ResolveHost(opts.Containers[0].Port, opts.LocalStackHost)
}
}
diff --git a/internal/ui/run_logout.go b/internal/ui/run_logout.go
index f83423c..2bf8a0d 100644
--- a/internal/ui/run_logout.go
+++ b/internal/ui/run_logout.go
@@ -8,12 +8,13 @@ import (
tea "github.com/charmbracelet/bubbletea"
"github.com/localstack/lstk/internal/api"
"github.com/localstack/lstk/internal/auth"
+ "github.com/localstack/lstk/internal/config"
"github.com/localstack/lstk/internal/container"
"github.com/localstack/lstk/internal/output"
"github.com/localstack/lstk/internal/runtime"
)
-func RunLogout(parentCtx context.Context, rt runtime.Runtime, platformClient api.PlatformAPI, authToken string, forceFileKeyring bool) error {
+func RunLogout(parentCtx context.Context, rt runtime.Runtime, platformClient api.PlatformAPI, authToken string, forceFileKeyring bool, containers []config.ContainerConfig) error {
_, cancel := context.WithCancel(parentCtx)
defer cancel()
@@ -34,7 +35,7 @@ func RunLogout(parentCtx context.Context, rt runtime.Runtime, platformClient api
a := auth.New(sink, platformClient, tokenStorage, authToken, "", false)
err = a.Logout()
if err == nil && rt != nil {
- if running, runningErr := container.AnyRunning(parentCtx, rt); runningErr == nil && running {
+ if running, runningErr := container.AnyRunning(parentCtx, rt, containers); runningErr == nil && running {
output.EmitNote(sink, "LocalStack is still running in the background")
}
}
diff --git a/internal/ui/run_stop.go b/internal/ui/run_stop.go
index 3de944e..f24f3cc 100644
--- a/internal/ui/run_stop.go
+++ b/internal/ui/run_stop.go
@@ -6,12 +6,13 @@ import (
"os"
tea "github.com/charmbracelet/bubbletea"
+ "github.com/localstack/lstk/internal/config"
"github.com/localstack/lstk/internal/container"
"github.com/localstack/lstk/internal/output"
"github.com/localstack/lstk/internal/runtime"
)
-func RunStop(parentCtx context.Context, rt runtime.Runtime) error {
+func RunStop(parentCtx context.Context, rt runtime.Runtime, containers []config.ContainerConfig) error {
ctx, cancel := context.WithCancel(parentCtx)
defer cancel()
@@ -20,7 +21,7 @@ func RunStop(parentCtx context.Context, rt runtime.Runtime) error {
runErrCh := make(chan error, 1)
go func() {
- err := container.Stop(ctx, rt, output.NewTUISink(programSender{p: p}))
+ err := container.Stop(ctx, rt, output.NewTUISink(programSender{p: p}), containers)
runErrCh <- err
if err != nil && !errors.Is(err, context.Canceled) {
p.Send(runErrMsg{err: err})
You're right 🙈
Thanks for fixing everywhere!
internal/ui/run_logout.go (outdated)

```go
a := auth.New(sink, platformClient, tokenStorage, authToken, "", false)
err = a.Logout()
if err == nil && rt != nil {
	if running, runningErr := container.AnyRunning(parentCtx, rt); runningErr == nil && running {
```
issue: We should pass the cancel context created at line 15/17 instead of parentCtx here. We're in a goroutine, and if the user cancels the command we need to make sure the request to Docker is also cancelled.
internal/container/running_test.go (outdated)

```go
	t.Cleanup(func() {
		require.NoError(t, config.InitFromPath(path))
	})
}
```
nit: CLAUDE.md says "Prefer integration tests to cover most cases", but apparently this was not picked up. This is a unit test; an integration test would be better.
cmd/logout.go (outdated)

```diff
 if isInteractiveMode(cfg) {
-	return ui.RunLogout(cmd.Context(), platformClient, cfg.AuthToken, cfg.ForceFileKeyring)
+	var rt runtime.Runtime
+	if dockerRuntime, err := runtime.NewDockerRuntime(); err == nil {
```
nit: we create runtime.NewDockerRuntime() twice (once for interactive, once for non-interactive). This should be created once at the top of RunE and passed down, similar to how platformClient is already handled. This avoids redundant Docker client initialization and follows the dependency injection pattern the codebase uses.
Force-pushed from 34efd91 to 9b873c9 (Compare)
♻️ Duplicate comments (1)
internal/container/running.go (1)
11-17: 🛠️ Refactor suggestion | 🟠 Major

Inject container config instead of reading global config here.
This helper now pulls process-global state from `config.Get()`, which makes `internal/container` harder to test in isolation and pushes config lookup into domain code. Please pass the container list in from the command/UI boundary and keep `AnyRunning` pure.

♻️ Suggested direction

```diff
-func AnyRunning(ctx context.Context, rt runtime.Runtime) (bool, error) {
-	cfg, err := config.Get()
-	if err != nil {
-		return false, fmt.Errorf("failed to get config: %w", err)
-	}
-
-	for _, c := range cfg.Containers {
+func AnyRunning(ctx context.Context, rt runtime.Runtime, containers []config.ContainerConfig) (bool, error) {
+	for _, c := range containers {
 		running, err := rt.IsRunning(ctx, c.Name())
 		if err != nil {
 			return false, fmt.Errorf("checking %s running: %w", c.Name(), err)
 		}
```

As per coding guidelines, "internal/**/*.go: Avoid package-level global variables; use constructor functions that return fresh instances and inject dependencies explicitly to keep packages testable in isolation".
Verify each finding against the current code and only fix it if needed. In `@internal/container/running.go` around lines 11 - 17, The AnyRunning function currently reads global config via config.Get(), making it impure and hard to test; change its signature to accept the container list as an injected dependency (e.g., func AnyRunning(ctx context.Context, rt runtime.Runtime, containers []config.Container) (bool, error)) and remove the config.Get() call and cfg usage inside AnyRunning; update all call sites to pass in cfg.Containers from the command/UI boundary so internal/container remains pure and testable (or alternatively add a new helper AnyRunningWithContainers that takes the containers slice and delegate tests/callers to it).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@internal/container/running.go`:
- Around line 11-17: The AnyRunning function currently reads global config via
config.Get(), making it impure and hard to test; change its signature to accept
the container list as an injected dependency (e.g., func AnyRunning(ctx
context.Context, rt runtime.Runtime, containers []config.Container) (bool,
error)) and remove the config.Get() call and cfg usage inside AnyRunning; update
all call sites to pass in cfg.Containers from the command/UI boundary so
internal/container remains pure and testable (or alternatively add a new helper
AnyRunningWithContainers that takes the containers slice and delegate
tests/callers to it).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: 4cdda54f-bbab-4ed0-b389-27400c2fcbbb
📒 Files selected for processing (4)
cmd/logout.go
internal/container/running.go
internal/container/running_test.go
internal/ui/run_logout.go
Made an attempt to resolve the remaining comments in 66ca4c1. Could you take another look when you're free, @carole-lavillonniere? Low priority.

Thank you for taking another look, @carole-lavillonniere! 🍫

