
fix: harden and consolidate bot notifications #1798

Merged
yottahmd merged 4 commits into main from debounce-notification
Mar 19, 2026

Conversation

yottahmd (Collaborator) commented Mar 19, 2026

Summary

  • batch DAG run notifications by destination to reduce Slack and Telegram flood
  • harden delivery semantics and clean shutdown handling for notification batches
  • share the notification monitor core across Slack and Telegram

Testing

  • go test ./internal/service/chatbridge ./internal/service/slack ./internal/service/telegram -count=1
  • make fmt

Summary by CodeRabbit

Release Notes

  • New Features

    • Notification batching: Success notifications are now grouped and delivered as periodic digests
    • Improved deduplication: Duplicate notification events are filtered and tracked across channels
    • Enhanced retry mechanism: Failed notification deliveries are automatically retried with improved handling
    • AI-powered notifications: Optional LLM-assisted generation for richer notification content
  • Tests

    • Added comprehensive test coverage for notification batching and retry behavior

coderabbitai (Bot) commented Mar 19, 2026

Important

Review skipped

Auto incremental reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: fea3d3f7-7c88-432b-9d8c-003507b3d18b

You can disable this status message by setting reviews.review_status to false in the CodeRabbit configuration file.

📝 Walkthrough

This pull request introduces a centralized NotificationMonitor in chatbridge that polls DAG run status changes, batches notifications by urgency and time window, and delegates delivery to platform-specific transports. The Slack and Telegram monitors are refactored to delegate notification logic to this shared monitor rather than each implementing its own polling and deduplication.
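
In rough terms, the shared contract looks like the sketch below. This is a hedged reconstruction from the walkthrough and file summaries, not the actual code: NotificationTransport, NotificationDestinations, and FlushNotificationBatch are named in this PR, while the simplified NotificationBatch/NotificationEvent stubs and the exact parameter types are assumptions.

package chatbridge

import "context"

// NotificationEvent wraps one completed DAG run status (simplified stub;
// the real type holds *exec.DAGRunStatus).
type NotificationEvent struct {
	Status any
}

// NotificationBatch groups events of one class (success digest or urgent)
// bound for a single destination (simplified stub).
type NotificationBatch struct {
	Class  string
	Events []NotificationEvent
}

// NotificationTransport is what the Slack and Telegram bots implement so the
// shared NotificationMonitor never needs platform-specific delivery code.
type NotificationTransport interface {
	// NotificationDestinations lists the channels/chats that should receive notifications.
	NotificationDestinations() []string
	// FlushNotificationBatch delivers one batch and reports whether it was
	// acknowledged; allowLLM gates optional LLM-assisted message generation.
	FlushNotificationBatch(ctx context.Context, dest string, batch NotificationBatch, allowLLM bool) bool
}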

Changes

  • Notification Infrastructure (internal/service/chatbridge/notifications.go, internal/service/chatbridge/notifications_test.go): Introduces NotificationBatcher for classifying and windowing DAG run statuses into success-digest vs urgent batches; implements deterministic message formatting and optional LLM-assisted message generation via GenerateNotificationMessage and FormatNotificationBatch. A usage sketch of the batcher follows this list.
  • Notification Monitor (internal/service/chatbridge/monitor.go, internal/service/chatbridge/monitor_test.go): Adds NotificationMonitor that polls status changes, tracks delivered state via sync.Map, interfaces with transports via NotificationTransport, batches events via the batcher, and drains pending notifications on shutdown.
  • Chatbridge Test Updates (internal/service/chatbridge/chatbridge_test.go): Enhances the fakeAgentService.GenerateAssistantMessage test double to support configurable error returns and custom message content via generateErr and generatedMessage fields.
  • Slack Monitor Refactoring (internal/service/slack/monitor.go, internal/service/slack/monitor_test.go, internal/service/slack/bot_test.go): Refactors DAGRunMonitor to delegate polling and deduplication to chatbridge.NotificationMonitor via a core field; removes the self-contained seen-tracking and status filtering; adds NotificationDestinations() and FlushNotificationBatch(...) to implement NotificationTransport; enhances the fake Slack client and agent service for better test observability.
  • Telegram Monitor Refactoring (internal/service/telegram/monitor.go, internal/service/telegram/monitor_test.go, internal/service/telegram/bot.go, internal/service/telegram/bot_test.go): Refactors DAGRunMonitor to delegate to chatbridge.NotificationMonitor and removes its polling/deduplication logic; adds NotificationDestinations() and FlushNotificationBatch(...); enhances the fake Telegram agent service for test tracking; adds a cancellation check in the typing loop; updates monitor tests to exercise the new behavior.
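
As referenced in the Notification Infrastructure entry above, here is a minimal usage sketch of the batcher, assembled from the new tests and the sequence diagram below. NewNotificationBatcher, Enqueue, Stop, and the Class/Events fields appear in the tests; TakeReady and FlushNotificationBatch are named in the diagram. The window values, their argument order, the destination string, and the Destination field on the pending batch are illustrative assumptions, not documented API.

// Sketch only: window values and argument order are assumed.
batcher := chatbridge.NewNotificationBatcher(
	5*time.Minute,  // window 1 (assumed: success-digest flush interval)
	30*time.Second, // window 2 (assumed: urgent flush interval)
)
defer batcher.Stop()

// The monitor enqueues each newly completed run per destination; the batcher
// classifies it as success digest vs urgent and dedupes repeat attempts.
batcher.Enqueue("slack:#ops", &exec.DAGRunStatus{
	Name: "nightly-etl", DAGRunID: "run-42", AttemptID: "a1", Status: core.Succeeded,
})

// When a bucket's timer fires, ready batches are flushed through the transport.
for _, pending := range batcher.TakeReady() {
	transport.FlushNotificationBatch(ctx, pending.Destination, pending.Batch, true /* allowLLM */)
}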

Sequence Diagram

sequenceDiagram
    participant Monitor as NotificationMonitor
    participant Store as DAGRunStore
    participant Batcher as NotificationBatcher
    participant Transport as NotificationTransport<br/>(SlackBot/TelegramBot)
    participant Dest as Destination<br/>(Slack/Telegram)
    
    rect rgba(100, 150, 200, 0.5)
    Note over Monitor,Store: Initialization & Seeding
    Monitor->>Store: ListStatuses (last 24h, non-active)
    Store-->>Monitor: historical statuses
    Monitor->>Monitor: seedDelivered: mark all as delivered
    end
    
    rect rgba(100, 200, 150, 0.5)
    Note over Monitor,Batcher: Polling Loop
    Monitor->>Store: ListStatuses (last 1h, active/inactive)
    Store-->>Monitor: current statuses
    Monitor->>Monitor: checkForCompletions: filter by NotificationStatuses
    Monitor->>Batcher: Enqueue(destination, status)
    Batcher->>Batcher: classify status into bucket (urgent/digest)
    Batcher->>Batcher: deduplicate & buffer by destination+class
    Batcher-->>Monitor: signal ready when bucket timer fires
    end
    
    rect rgba(200, 150, 100, 0.5)
    Note over Monitor,Transport: Flushing & Delivery
    Monitor->>Batcher: TakeReady() returns NotificationPendingBatches
    Monitor->>Transport: FlushNotificationBatch(ctx, dest, batch, allowLLM)
    Transport->>Dest: deliver batch (with LLM-generated or formatted message)
    Dest-->>Transport: success/failure
    Transport-->>Monitor: bool (acknowledged)
    Monitor->>Monitor: IsDelivered: mark batch events delivered for destination
    end
    
    rect rgba(150, 100, 200, 0.5)
    Note over Monitor,Batcher: Shutdown & Drain
    Monitor->>Batcher: DrainAndStop() returns pending + buffered batches
    Batcher-->>Monitor: []NotificationPendingBatch (urgent first)
    loop for each pending batch
        Monitor->>Transport: FlushNotificationBatch(ctx, dest, batch, allowLLM=false)
        Transport->>Dest: deliver without LLM
        Dest-->>Transport: acknowledged
        Monitor->>Monitor: mark delivered
    end
    end
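
For the shutdown leg of the diagram, the drain loop amounts to roughly the sketch below. DrainAndStop, FlushNotificationBatch, and markDelivered are named in this PR; the receiver fields and the Destination/Batch field names are assumptions for illustration.

// Sketch of the shutdown drain implied by the diagram above; field names assumed.
func (m *NotificationMonitor) drainOnShutdown(ctx context.Context) {
	// Urgent batches come back first, then any buffered success digests.
	for _, p := range m.batcher.DrainAndStop() {
		// LLM formatting is disabled during drain so shutdown stays fast and deterministic.
		if m.transport.FlushNotificationBatch(ctx, p.Destination, p.Batch, false) {
			m.markDelivered(p.Destination, p.Batch) // avoid re-sending after restart
		}
	}
}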

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

  • PR #1793: Directly extends the same chatbridge package and agent service APIs (GenerateAssistantMessage, AppendExternalMessage) that the new notification monitor and batcher depend on.
  • PR #1783: Introduces Telegram bot DAG run notification flows that are now being replaced/integrated with the centralized NotificationMonitor in this PR.
  • PR #1794: Modifies agent session and message APIs (GenerateAssistantMessage, AppendExternalMessage) that are used by the new notification formatting and delivery logic.
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 22.64%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2)

  • Description Check ✅ Passed: Check skipped - CodeRabbit’s high-level summary is enabled.
  • Title check ✅ Passed: The title accurately captures the main changes: hardening notification delivery semantics and consolidating bot notification logic across services through shared batching infrastructure.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


coderabbitai (Bot) left a comment

🧹 Nitpick comments (7)
internal/service/telegram/bot_test.go (1)

158-166: Consider consistent lock scope for generatedErr access.

Same concern as the Slack fake: generatedErr and generated are read outside the lock. While safe for current test usage, capturing values under the lock would be more robust.

🔧 Suggested fix for consistent locking
 func (s *fakeTelegramAgentService) GenerateAssistantMessage(context.Context, string, agent.UserIdentity, string, string) (agent.Message, error) {
 	s.mu.Lock()
 	s.generateCalls++
-	s.mu.Unlock()
-	if s.generatedErr != nil {
-		return agent.Message{}, s.generatedErr
+	err := s.generatedErr
+	msg := s.generated
+	s.mu.Unlock()
+	if err != nil {
+		return agent.Message{}, err
 	}
-	return s.generated, nil
+	return msg, nil
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/telegram/bot_test.go` around lines 158 - 166, In
GenerateAssistantMessage, avoid reading s.generatedErr and s.generated outside
the mutex; acquire the lock, increment s.generateCalls and copy s.generatedErr
and s.generated to local vars while holding s.mu, then release the lock and use
those locals for the error check and return. This keeps access to the shared
fields (s.generatedErr, s.generated, s.generateCalls) consistent and race-free
in the fakeTelegramAgentService.
internal/service/slack/monitor_test.go (1)

22-40: Consider extracting startTestMonitor to a shared test helper.

This helper is duplicated verbatim in internal/service/telegram/monitor_test.go. While acceptable for now, extracting it to a shared test utility (e.g., in chatbridge or a testutil package) would reduce maintenance burden.
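
💡 One possible shape for a shared helper (sketch): the package name and the Runner interface here are assumptions, and the real startTestMonitor may wait or return differently.

package chatbridgetest

import (
	"context"
	"testing"
)

// Runner is the minimal surface both monitors expose for this helper.
type Runner interface {
	Run(ctx context.Context) error
}

// StartTestMonitor runs the monitor in a goroutine and returns a stop func
// that cancels the context and waits for the run loop to exit.
func StartTestMonitor(t *testing.T, m Runner) (stop func()) {
	t.Helper()
	ctx, cancel := context.WithCancel(context.Background())
	done := make(chan struct{})
	go func() {
		defer close(done)
		_ = m.Run(ctx)
	}()
	return func() {
		cancel()
		<-done
	}
}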

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/slack/monitor_test.go` around lines 22 - 40, The
startTestMonitor helper is duplicated; extract it into a shared test helper
(e.g., package testutil or chatbridge/testhelpers) so both
internal/service/slack/monitor_test.go and
internal/service/telegram/monitor_test.go import and call the common function;
move the startTestMonitor implementation (context creation, goroutine running
monitor.Run(ctx), cancellation and shutdown wait logic) into the shared package,
export it if needed (StartTestMonitor) or provide a wrapper, update both test
files to remove the local copy and call the shared helper, and run tests to
ensure no behavior changes.
internal/service/chatbridge/notifications_test.go (2)

175-185: Potential issue with DAG name generation for large maxNotificationGroups.

string(rune('a'+i)) produces valid lowercase letters only for i in 0-25. If maxNotificationGroups exceeds 24, the generated names will contain non-letter characters (e.g., {, |). While the test may still pass, the names become confusing. Consider using fmt.Sprintf("dag-%d", i) for clarity and robustness.

🔧 Suggested fix
 	for i := range maxNotificationGroups + 2 {
 		events = append(events, NotificationEvent{
 			Status: &exec.DAGRunStatus{
-				Name:      "dag-" + string(rune('a'+i)),
-				DAGRunID:  "run-" + string(rune('a'+i)),
+				Name:      fmt.Sprintf("dag-%d", i),
+				DAGRunID:  fmt.Sprintf("run-%d", i),
 				AttemptID: "a1",
 				Status:    core.Succeeded,
 			},
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/chatbridge/notifications_test.go` around lines 175 - 185,
The test currently constructs DAG names using string(rune('a'+i)) which breaks
for i>25; update the name generation in the loop that builds events (the
NotificationEvent / exec.DAGRunStatus entries) to use numeric formatting
instead, e.g. build Name and DAGRunID with fmt.Sprintf("dag-%d", i) and
fmt.Sprintf("run-%d", i) so names remain clear and robust for any value of
maxNotificationGroups.

52-55: Unused mutex-protected slice flushed.

The flushed slice is declared and mutex-protected but only written to, never read for assertions after the final check at line 79. Lines 64-67 check currentFlushes but flushed is empty at that point since batches are only appended after waitForReadyBatch. Consider removing the dead code or clarifying its purpose.

🔧 Suggested simplification
 func TestNotificationBatcher_ReplacesWaitingWithSuccessBeforeFlush(t *testing.T) {
 	t.Parallel()
 
-	var (
-		mu      sync.Mutex
-		flushed []NotificationBatch
-	)
 	batcher := NewNotificationBatcher(15*time.Millisecond, 25*time.Millisecond)
 	defer batcher.Stop()
 
 	require.True(t, batcher.Enqueue("dest-1", &exec.DAGRunStatus{Name: "briefing", DAGRunID: "run-1", AttemptID: "a1", Status: core.Waiting}))
 	time.Sleep(5 * time.Millisecond)
 	require.True(t, batcher.Enqueue("dest-1", &exec.DAGRunStatus{Name: "briefing", DAGRunID: "run-1", AttemptID: "a1", Status: core.Succeeded}))
 
 	time.Sleep(20 * time.Millisecond)
-	mu.Lock()
-	currentFlushes := len(flushed)
-	mu.Unlock()
-	assert.Zero(t, currentFlushes, "waiting batch should have been replaced before urgent flush")
 
 	ready := waitForReadyBatch(t, batcher)
-	mu.Lock()
-	flushed = append(flushed, ready.Batch)
-	mu.Unlock()
-
-	mu.Lock()
-	defer mu.Unlock()
-	require.Len(t, flushed, 1)
-	assert.Equal(t, NotificationClassSuccessDigest, flushed[0].Class)
-	require.Len(t, flushed[0].Events, 1)
-	assert.Equal(t, core.Succeeded, flushed[0].Events[0].Status.Status)
+	assert.Equal(t, NotificationClassSuccessDigest, ready.Batch.Class)
+	require.Len(t, ready.Batch.Events, 1)
+	assert.Equal(t, core.Succeeded, ready.Batch.Events[0].Status.Status)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/chatbridge/notifications_test.go` around lines 52 - 55, The
mutex-protected slice `flushed` (and `mu`) is dead code: it's only appended to
but never read, causing confusion in tests; remove the declarations `mu
sync.Mutex` and `flushed []NotificationBatch` and any code that appends to
`flushed`, and instead rely on the existing `currentFlushes` checks and
`waitForReadyBatch` assertions to validate flush behavior (or if the intent was
to assert flushed contents, update the test to read/assert `flushed` after the
final `waitForReadyBatch`). Locate uses around `waitForReadyBatch`,
`currentFlushes`, and any append operations to `flushed` to remove or convert
them into proper assertions.
internal/service/slack/bot_test.go (1)

172-180: Consider consistent lock scope for generatedErr access.

The lock is released before generatedErr is checked (line 176), so generatedErr is read outside the lock. If tests ever modify generatedErr concurrently, this could race. Since tests currently set it before calling, this works, but holding the lock through the check would be safer.

🔧 Suggested fix for consistent locking
 func (s *fakeSlackAgentService) GenerateAssistantMessage(context.Context, string, agent.UserIdentity, string, string) (agent.Message, error) {
 	s.mu.Lock()
 	s.generateCalls++
-	s.mu.Unlock()
-	if s.generatedErr != nil {
-		return agent.Message{}, s.generatedErr
+	err := s.generatedErr
+	msg := s.generated
+	s.mu.Unlock()
+	if err != nil {
+		return agent.Message{}, err
 	}
-	return s.generated, nil
+	return msg, nil
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/slack/bot_test.go` around lines 172 - 180, In
GenerateAssistantMessage, the mutex is unlocked before reading s.generatedErr
which can race if tests modify it concurrently; hold s.mu during the check and
return so the read of s.generatedErr and access to s.generated are protected.
Modify GenerateAssistantMessage (function name) to keep the lock around the
generatedErr check and the return of s.generated, then unlock the mutex
afterwards (protecting s.mu, s.generatedErr, and s.generated).
internal/service/chatbridge/notifications.go (1)

619-625: Shallow clone may not isolate all mutable state.

The cloneNotificationStatus function performs a shallow copy. If exec.DAGRunStatus contains pointer fields (like Nodes, OnFailure, OnExit slices/pointers visible in the formatting functions), modifications to the original status after enqueuing could affect the batched snapshot.

However, since the monitor polls completed statuses that are unlikely to be mutated further, this is likely acceptable in practice.

💡 Consider deep clone if status mutation is possible

If there's any possibility of the original status being mutated after enqueuing, consider a deeper clone:

func cloneNotificationStatus(status *exec.DAGRunStatus) *exec.DAGRunStatus {
	if status == nil {
		return nil
	}
	clone := *status
	if status.Nodes != nil {
		clone.Nodes = make([]*exec.Node, len(status.Nodes))
		copy(clone.Nodes, status.Nodes)
	}
	return &clone
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/chatbridge/notifications.go` around lines 619 - 625, The
current cloneNotificationStatus function does a shallow copy which can leave
pointer/slice fields shared (e.g., Nodes, OnFailure, OnExit or any
pointer/map/slice inside exec.DAGRunStatus), so modify cloneNotificationStatus
to perform a deep copy of those mutable fields: after cloning the struct value,
allocate and copy any non-nil slices (e.g., make and copy clone.Nodes,
clone.OnFailure, clone.OnExit) and, if needed, clone the elements (e.g., copy
node pointers into new slice or deep-copy node structs) and copy any maps by
creating new maps and copying entries; keep the nil checks and return the new
pointer as before.
internal/service/chatbridge/monitor.go (1)

149-182: Consider potential startup delay with large DAG history.

The seedDelivered method queries the last 24 hours of DAG runs without a limit. For repositories with high DAG activity, this could result in loading a large number of statuses at startup.

💡 Consider adding a limit to seedDelivered query

If startup performance becomes an issue with very active DAG repositories, consider adding a limit:

 	statuses, err := m.dagRunStore.ListStatuses(ctx,
 		exec.WithFrom(from),
 		exec.WithTo(to),
+		exec.WithLimit(10000), // Cap seeding to avoid slow startup
 	)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/chatbridge/monitor.go` around lines 149 - 182, The
seedDelivered method can load an unbounded number of DAG statuses; update
seedDelivered to pass a limit and ordering to dagRunStore.ListStatuses (e.g.,
add exec.WithLimit(N) and an order option such as exec.WithOrder or
exec.WithSortRecent) so you only fetch the most recent N runs (make N
configurable on NotificationMonitor or use a sensible default like 1000); keep
the existing time window (exec.WithFrom/exec.WithTo) but combine it with the
limit and ensure the store call still returns the most recent entries first
before calling m.markDelivered for each destination.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 4403fd8e-a8cd-4da9-9c1e-dee2c0550834

📥 Commits

Reviewing files that changed from the base of the PR and between a61a34e and 0809a5e.

📒 Files selected for processing (12)
  • internal/service/chatbridge/chatbridge_test.go
  • internal/service/chatbridge/monitor.go
  • internal/service/chatbridge/monitor_test.go
  • internal/service/chatbridge/notifications.go
  • internal/service/chatbridge/notifications_test.go
  • internal/service/slack/bot_test.go
  • internal/service/slack/monitor.go
  • internal/service/slack/monitor_test.go
  • internal/service/telegram/bot.go
  • internal/service/telegram/bot_test.go
  • internal/service/telegram/monitor.go
  • internal/service/telegram/monitor_test.go

yottahmd merged commit 63ed05d into main on Mar 19, 2026
5 checks passed
yottahmd deleted the debounce-notification branch on March 19, 2026 at 06:11
codecov (Bot) commented Mar 19, 2026

Codecov Report

❌ Patch coverage is 64.75038% with 233 lines in your changes missing coverage. Please review.
✅ Project coverage is 68.98%. Comparing base (a61a34e) to head (9b55ce5).
⚠️ Report is 2 commits behind head on main.

Files with missing lines (patch coverage):
  • internal/service/chatbridge/notifications.go: 63.88% (111 missing, 32 partials) ⚠️
  • internal/service/chatbridge/monitor.go: 66.45% (41 missing, 13 partials) ⚠️
  • internal/service/telegram/monitor.go: 60.78% (17 missing, 3 partials) ⚠️
  • internal/service/slack/monitor.go: 71.42% (13 missing, 1 partial) ⚠️
  • internal/service/telegram/bot.go: 50.00% (2 missing) ⚠️
Additional details and impacted files


@@            Coverage Diff             @@
##             main    #1798      +/-   ##
==========================================
+ Coverage   68.69%   68.98%   +0.29%     
==========================================
  Files         422      424       +2     
  Lines       50800    51230     +430     
==========================================
+ Hits        34896    35343     +447     
+ Misses      12953    12877      -76     
- Partials     2951     3010      +59     
Files with missing lines (coverage Δ):
  • internal/service/telegram/bot.go: 35.26% <50.00%> (+0.14%) ⬆️
  • internal/service/slack/monitor.go: 64.44% <71.42%> (+45.04%) ⬆️
  • internal/service/telegram/monitor.go: 66.66% <60.78%> (+53.10%) ⬆️
  • internal/service/chatbridge/monitor.go: 66.45% <66.45%> (ø)
  • internal/service/chatbridge/notifications.go: 63.88% <63.88%> (ø)

... and 17 files with indirect coverage changes


Continue to review full report in Codecov by Sentry.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a61a34e...9b55ce5. Read the comment docs.

