
🐛 fix(blog,accm): public blog fallback + split AI metric + full project history#6214

Merged
clubanderson merged 1 commit into main from accm/split-pr-issue-metric
Apr 10, 2026
Conversation

@clubanderson
Collaborator

Summary

Three fixes bundled (each surgical; all touch the Learn / ACCM analytics surface):

  1. useMediumBlog.ts — public production fallback
    When the local /api/medium/blog endpoint is unreachable, fall back to https://console.kubestellar.io/api/medium/blog (CORS-enabled, public, and returning the same shape). This covers Vite-only dev with no Go backend, self-hosted installs whose backend isn't running, and stale backends predating PR #5277 (✨ Add Medium blog section to Learn dropdown) — the case I hit on my own dev box: the backend was started before the Medium route was added, so every request fell through to a 401. Also switched the cache-hit path to lazy useState initializers so we don't call setState inside the effect.
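The fallback behavior can be sketched roughly like this (a minimal sketch, not the hook's actual code — the function name and the injectable fetchImpl parameter are illustrative, added so the logic is testable without a network):

```javascript
const LOCAL_BLOG_ENDPOINT = '/api/medium/blog';
const PUBLIC_BLOG_ENDPOINT = 'https://console.kubestellar.io/api/medium/blog';

async function fetchBlogWithFallback(fetchImpl = fetch) {
  // Try the local backend first; a network error or a non-2xx status
  // (e.g. a stale backend returning 401) triggers the public fallback.
  try {
    const res = await fetchImpl(LOCAL_BLOG_ENDPOINT);
    if (res.ok) return res.json();
  } catch {
    // network failure — fall through to the public endpoint
  }
  const res = await fetchImpl(PUBLIC_BLOG_ENDPOINT);
  if (!res.ok) throw new Error(`blog fetch failed: ${res.status}`);
  return res.json();
}
```

Because both paths return the same response shape, the caller never needs to know which endpoint answered.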

  2. web/public/analytics.js — split the AI vs Human metric
    The single "AI Contributions: 59%" KPI was lumping PRs and issues together, hiding the fact that PRs are overwhelmingly AI-authored while issues are overwhelmingly human-filed bug reports. Split into four KPIs (AI-Authored PRs, Human-Authored PRs, AI-Filed Issues, Human-Filed Issues) and split the AI-vs-Human chart into a PR chart and an Issue chart.

  3. web/netlify/functions/analytics-accm.mts — full project history
    Replaced `WEEKS_OF_HISTORY = 12` and the three `since.setDate(... -90)` callsites with a `PROJECT_START_DATE = '2025-12-15'` computation. ACCM charts now show the full project history (Dec 2025 → today) instead of a sliding 12-week window. Bumped `MAX_PAGES` 15 → 30 since the longer window will hit the page cap on busy weeks. Capped at `MAX_WEEKS_OF_HISTORY = 260` for safety.
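The start-date-based window computation can be sketched as follows (a minimal sketch under the constants described above; the helper name is illustrative, not the function's actual code):

```javascript
const PROJECT_START_DATE = '2025-12-15';
const MAX_WEEKS_OF_HISTORY = 260; // safety cap

// Number of whole-or-partial weeks from the project start to `now`,
// clamped to [1, MAX_WEEKS_OF_HISTORY]. Replaces the fixed
// WEEKS_OF_HISTORY = 12 sliding window.
function weeksOfHistory(now = new Date()) {
  const start = new Date(PROJECT_START_DATE + 'T00:00:00Z');
  const msPerWeek = 7 * 24 * 60 * 60 * 1000;
  const weeks = Math.ceil((now.getTime() - start.getTime()) / msPerWeek);
  return Math.min(Math.max(weeks, 1), MAX_WEEKS_OF_HISTORY);
}
```

The cap means the computation stays bounded even years from now, while every chart request covers the full Dec 2025 → today range.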

Test plan

  • Local: kill backend, hit Vite at 5174, open Learn dropdown — blog posts now appear (via prod fallback)
  • Local: with backend running, blog posts still appear (local hit)
  • /admin/analytics on console.kubestellar.io: ACCM section shows 4 KPIs (PRs and Issues split), 2 line charts (PRs / Issues), and history extending back to W51 of 2025
  • No regressions in lint/build (verified locally)

Three independent fixes bundled because they all touch how the
Learn / Analytics surfaces report blog and contribution data.

1. useMediumBlog.ts — fall back to the public production endpoint
   (https://console.kubestellar.io/api/medium/blog) when the local
   /api/medium/blog endpoint fails. Covers Vite-only dev with no
   backend, self-hosted installs that haven't started the backend,
   and stale backends predating the Medium blog route. Also use lazy
   useState initializers so the cache-hit path doesn't call setState
   in an effect.

2. analytics.js (ACCM Metrics) — split 'AI Contributions' into four
   KPIs (AI-Authored PRs, Human-Authored PRs, AI-Filed Issues,
   Human-Filed Issues) and split the AI vs Human chart into a PR
   chart and an Issue chart. Lumping PRs (overwhelmingly AI-authored)
   with issues (overwhelmingly user-filed bug reports) was hiding
   the fact that >95% of code is AI-written.

3. analytics-accm.mts — replace WEEKS_OF_HISTORY=12 with a
   PROJECT_START_DATE='2025-12-15' computation so the chart shows
   the entire project history rather than a sliding 12-week window.
   Bumped MAX_PAGES 15 -> 30 to keep up with the longer window.

Signed-off-by: Andrew Anderson <[email protected]>
Copilot AI review requested due to automatic review settings April 10, 2026 19:31
@kubestellar-prow kubestellar-prow Bot added the dco-signoff: yes Indicates the PR's author has signed the DCO. label Apr 10, 2026
@kubestellar-prow
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign clubanderson for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@netlify

netlify Bot commented Apr 10, 2026

Deploy Preview for kubestellarconsole ready!

🔨 Latest commit: 45184a4
🔍 Latest deploy log: https://app.netlify.com/projects/kubestellarconsole/deploys/69d9501cc5b9db0008637751
😎 Deploy Preview: https://deploy-preview-6214.console-deploy-preview.kubestellar.io

@github-actions
Contributor

👋 Hey @clubanderson — thanks for opening this PR!

🤖 This project is developed exclusively using AI coding assistants.

Please do not attempt to code anything for this project manually.
All contributions should be authored using an AI coding tool such as:

This ensures consistency in code style, architecture patterns, test coverage,
and commit quality across the entire codebase.


This is an automated message.

@kubestellar-prow kubestellar-prow Bot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Apr 10, 2026
Contributor

Copilot AI left a comment


Pull request overview

Improves the Learn/ACCM analytics experience by making the Medium blog feed more resilient, refining the AI vs Human contribution metrics, and expanding ACCM history to cover the full project timeline.

Changes:

  • Add a public production fallback for the Medium blog API and switch cache-hit initialization to lazy useState initializers.
  • Split the ACCM “AI vs Human” metric into separate PR vs Issue KPIs and charts.
  • Replace the sliding 12-week ACCM history window with a project-start-based window and increase pagination limits accordingly.

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.

Files reviewed:

  • web/src/hooks/useMediumBlog.ts — Adds a local endpoint constant plus the public fallback URL; uses lazy state initialization for cache hits and fetches with fallback behavior.
  • web/public/analytics.js — Splits the ACCM AI/Human KPIs into PR vs Issue categories and renders two separate stacked area charts.
  • web/netlify/functions/analytics-accm.mts — Computes history length from a project start date and expands the fetch window/pagination to support longer history.

Comment thread web/public/analytics.js
Comment on lines +943 to +957
const aiPrs = accm.weeklyActivity.reduce((s, w) => s + (w.aiPrs || 0), 0);
const humanPrs = accm.weeklyActivity.reduce((s, w) => s + (w.humanPrs || 0), 0);
const totalPrs = aiPrs + humanPrs;
const aiPrPct = totalPrs > 0 ? Math.round((aiPrs / totalPrs) * 100) : 0;

const aiIssues = accm.weeklyActivity.reduce((s, w) => s + (w.aiIssues || 0), 0);
const humanIssues = accm.weeklyActivity.reduce((s, w) => s + (w.humanIssues || 0), 0);
const totalIssues = aiIssues + humanIssues;
const aiIssuePct = totalIssues > 0 ? Math.round((aiIssues / totalIssues) * 100) : 0;

html += '<div class="kpi-grid">';
html += `<div class="kpi-card"><div class="kpi-label">AI Contributions</div><div class="kpi-value" style="color:var(--purple)">${aiPct}%</div><div class="kpi-change flat">${totalAi} of ${total} total</div></div>`;
html += `<div class="kpi-card"><div class="kpi-label">Human Contributions</div><div class="kpi-value" style="color:var(--cyan)">${100 - aiPct}%</div><div class="kpi-change flat">${totalHuman} of ${total} total</div></div>`;
html += `<div class="kpi-card"><div class="kpi-label">AI-Authored PRs</div><div class="kpi-value" style="color:var(--purple)">${aiPrPct}%</div><div class="kpi-change flat">${aiPrs} of ${totalPrs} PRs</div></div>`;
html += `<div class="kpi-card"><div class="kpi-label">Human-Authored PRs</div><div class="kpi-value" style="color:var(--cyan)">${100 - aiPrPct}%</div><div class="kpi-change flat">${humanPrs} of ${totalPrs} PRs</div></div>`;
html += `<div class="kpi-card"><div class="kpi-label">AI-Filed Issues</div><div class="kpi-value" style="color:var(--purple)">${aiIssuePct}%</div><div class="kpi-change flat">${aiIssues} of ${totalIssues} issues</div></div>`;
html += `<div class="kpi-card"><div class="kpi-label">Human-Filed Issues</div><div class="kpi-value" style="color:var(--cyan)">${100 - aiIssuePct}%</div><div class="kpi-change flat">${humanIssues} of ${totalIssues} issues</div></div>`;

Copilot AI Apr 10, 2026


The KPI percentages for Human PRs/Issues are computed as 100 - ai*Pct. When the total count is 0, ai*Pct is forced to 0, which makes the Human percentage show as 100% even though there were 0 PRs/issues (e.g. "0 of 0 PRs" but "100%"). Consider explicitly handling totalPrs === 0 / totalIssues === 0 so both percentages render as 0% (or display N/A) in the zero-data case.
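The fix Copilot suggests could look like this (a hypothetical helper, not the file's actual code — it forces both percentages to 0 in the zero-data case instead of deriving the human share as `100 - aiPct`):

```javascript
// Split a total into AI and human percentage shares. When there is no
// data at all, both shares render as 0% rather than the misleading
// "100% of 0".
function splitPct(ai, human) {
  const total = ai + human;
  if (total === 0) return { aiPct: 0, humanPct: 0 };
  const aiPct = Math.round((ai / total) * 100);
  return { aiPct, humanPct: 100 - aiPct };
}
```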

Copilot uses AI. Check for mistakes.
Comment thread web/src/hooks/useMediumBlog.ts
Comment on lines +75 to +82
// Read the cache synchronously during initial render via lazy useState
// initializers. This avoids calling setState inside the effect for the
// cache-hit path (react-hooks/set-state-in-effect).
const [posts, setPosts] = useState<BlogPost[]>(() => readCache()?.posts ?? [])
const [channelUrl, setChannelUrl] = useState<string>(
() => readCache()?.channelUrl ?? 'https://medium.com/@kubestellar'
)
const [loading, setLoading] = useState(() => readCache() === null)

Copilot AI Apr 10, 2026


readCache() is called three times during initial render (posts, channelUrl, loading), which repeats sessionStorage.getItem + JSON.parse and can lead to small inconsistencies if the cache changes between calls. Consider reading the cache once (e.g., into a local variable inside a single lazy initializer) and deriving all initial state from that value.
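The suggested single-read pattern could be sketched like this outside React (an illustrative sketch: the storage parameter is injectable for testing, and the cache key and shape are hypothetical, not the hook's actual code):

```javascript
// Read and parse the cache exactly once; all three pieces of initial
// state derive from the same snapshot, so they can never disagree.
function initialBlogState(storage) {
  let cached = null;
  try {
    const raw = storage.getItem('medium-blog-cache'); // hypothetical key
    cached = raw ? JSON.parse(raw) : null;
  } catch {
    cached = null; // corrupt or inaccessible cache ⇒ treat as a miss
  }
  return {
    posts: cached?.posts ?? [],
    channelUrl: cached?.channelUrl ?? 'https://medium.com/@kubestellar',
    loading: cached === null,
  };
}
```

In the hook, a single lazy initializer (or a value computed once before the three useState calls) would consume this snapshot.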

@clubanderson clubanderson merged commit d384298 into main Apr 10, 2026
57 of 63 checks passed
@kubestellar-prow kubestellar-prow Bot deleted the accm/split-pr-issue-metric branch April 10, 2026 19:41
@github-actions
Contributor

Thank you for your contribution! Your PR has been merged.

Check out what's new:

Stay connected: Slack #kubestellar-dev | Multi-Cluster Survey

@github-actions
Contributor

Post-merge build verification passed

Both Go and frontend builds compiled successfully against merge commit d384298415b5ff2546fbe20999951b6d4efabf2e.

@github-actions
Contributor

✅ Post-Merge Verification: passed

Commit: d384298415b5ff2546fbe20999951b6d4efabf2e
Specs run: smoke.spec.ts
Report: https://github.com/kubestellar/console/actions/runs/24260879865

clubanderson added a commit that referenced this pull request Apr 10, 2026
…r-info toast (#6223)

Two related UX fixes following #6214:

1. ClusterAssignmentPanel — Active Clusters picker showed every kubeconfig
   context, not every distinct cluster. Multiple OpenShift contexts pointing
   at the same API server (one per user identity) produced duplicate rows
   like '[email protected]' x4. Switch to deduplicatedClusters from
   useClusters() so contexts pointing at the same server collapse into a
   single picker entry — this is the same dedupe used everywhere else for
   metrics and stats per the project-wide rule.

2. AuthCallback — the 'Failed to fetch user info, proceeding anyway' warning
   toast was appearing even when login succeeded. Two causes, both fixed:
   - StrictMode double-mount race: the cleanup ran before the catch fired,
     so the navigate-to-login was cancelled but the toast was already shown.
     Add a 'cancelled' flag and bail out of the catch handler when the
     component has unmounted.
   - Token-exchange-succeeded race: setToken ran successfully but the
     follow-up refreshUser(token) call failed (e.g. transient network
     blip). The user is authenticated and sees their username, but the
     catch block was firing the misleading warning toast and scheduling
     a navigate-back-to-login that the auth context's own navigation
     usually overrode. Track tokenExchangeSucceeded and proceed silently
     to the destination on a post-token failure rather than warning + bouncing.
   Also clear the abort timeout in cleanup to avoid a stale abort during
   StrictMode double-mount, and use a lazy initial useState for status to
   satisfy react-hooks/set-state-in-effect.

Signed-off-by: Andrew Anderson <[email protected]>
clubanderson added a commit that referenced this pull request Apr 10, 2026
The GitHub Search API caps results at 1000 per query, which meant the
live /api/analytics-accm endpoint couldn't reach more than the most
recent ~12 weeks of kubestellar/console history (busy weeks alone
exceed the cap). The Analytics page showed mostly empty bars for
early weeks even after #6214 removed the 12-week rolling window.

Fix: precompute the full dataset out-of-band and serve it from a
public gist.

1. scripts/build-accm-history.mjs — Node script that slices the
   search by 7-day windows so no single query approaches the 1000-
   result cap, paginates with retry-on-rate-limit, and emits the
   same ACCMData shape the Netlify Function returns.

2. .github/workflows/accm-history-update.yml — daily cron (06:30 UTC)
   that runs the script and PATCHes the public gist
   (21a665e2a49ced34f83bc290c3fd6a23). Also wired for manual dispatch.
   Requires a new secret ACCM_HISTORY_GIST_TOKEN (PAT with 'gist' scope)
   — the default GITHUB_TOKEN cannot write to gists.

3. web/netlify/functions/analytics-accm.mts — try the gist first; fall
   back to live computation on any failure so the endpoint stays
   available even if the gist or the cron breaks. Also corrects
   PROJECT_START_DATE to 2026-01-16 (verified via repos/created_at).

Initial gist seed verified locally: 13 weeks, 3626 PRs (97.4% AI),
2600 issues, 39 contributors, from 2026-W03 through 2026-W15.

Signed-off-by: Andrew Anderson <[email protected]>
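The 7-day slicing described in step 1 can be sketched as a generator over half-open [from, to) windows (an illustrative sketch, not the script's actual code):

```javascript
// Yield consecutive windows of at most 7 days between two ISO dates,
// so no single GitHub Search query approaches the 1000-result cap.
function* weeklyWindows(startISO, endISO) {
  const DAY = 24 * 60 * 60 * 1000;
  let from = new Date(startISO + 'T00:00:00Z').getTime();
  const end = new Date(endISO + 'T00:00:00Z').getTime();
  while (from < end) {
    const to = Math.min(from + 7 * DAY, end); // final window may be short
    yield {
      from: new Date(from).toISOString().slice(0, 10),
      to: new Date(to).toISOString().slice(0, 10),
    };
    from = to;
  }
}
```

Each window would then feed a `created:FROM..TO` qualifier on the search query, with pagination and rate-limit retries handled per window.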
clubanderson added a commit that referenced this pull request Apr 10, 2026
…, single cache read (#6212, #6220) (#6236)

#6212 — Copilot review on #6207 (README round 4)
==================================================
Two factual errors caught against source.

1) The README treated 'GitHub PAT in Settings UI' and
   'FEEDBACK_GITHUB_TOKEN' as separate, 'not interchangeable'
   credentials. Wrong — verified against
   pkg/api/handlers/github_proxy.go:92 and :238: both write to the
   single FeedbackGitHubToken field on AllSettings, which is consumed
   by the github proxy, feedback issue creation, missions, and
   rewards. They are TWO WAYS TO SUPPLY THE SAME TOKEN, not separate
   credentials. Merged the table rows into one 'Consolidated GitHub
   PAT' row and added a 'Setting the consolidated PAT' subsection
   that explains the env-var path and the Settings UI path are
   equivalent — pick one.

2) The Settings UI POST /api/github/token endpoint requires the
   console 'admin' role and returns 403 otherwise. Verified at
   pkg/api/handlers/github_proxy.go:214:
     if currentUser.Role != "admin" { return 403 }
   The README implied any self-hosted user could persist the PAT.
   Documented the admin requirement explicitly in the new section.

#6220 — Copilot review on #6214 (analytics + cache)
=====================================================
Two small correctness fixes.

1) web/public/analytics.js:957 — KPI percentages for Human PRs/Issues
   were computed as `100 - aiPct`. With totalPrs===0, aiPct is forced
   to 0, so the Human KPI rendered as 100% with the change line
   '0 of 0 PRs' — extremely confusing. New humanPrPct/humanIssuePct
   variables that also force to 0 in the empty case, so both KPIs
   render as 0% / 0% / '0 of 0' when there is no data.

2) web/src/hooks/useMediumBlog.ts:78-82 — readCache() was called THREE
   times during initial render (one per useState lazy initializer),
   repeating sessionStorage.getItem + JSON.parse three times and
   creating a tiny race window if the cache changed between calls.
   Read once into a captured initialCache variable; all three
   useStates derive from the same snapshot.

Verified: `npm run build` clean.

Closes #6212, closes #6220.

Signed-off-by: Andrew Anderson <[email protected]>

Labels

  • dco-signoff: yes — Indicates the PR's author has signed the DCO.
  • size/L — Denotes a PR that changes 100-499 lines, ignoring generated files.
