Skip to content

πŸ“ docs(security): add SECURITY-AI.md β€” AI automation threat model#8249

Merged
clubanderson merged 1 commit intomainfrom
docs/security-ai
Apr 16, 2026
Merged

πŸ“ docs(security): add SECURITY-AI.md β€” AI automation threat model#8249
clubanderson merged 1 commit intomainfrom
docs/security-ai

Conversation

@clubanderson
Copy link
Copy Markdown
Collaborator

Summary

First of four PRs from a fullsend-ai/fullsend vs console-automation evaluation.

`docs/security/SECURITY-MODEL.md` covers runtime web threats well but says nothing about the LLM attack surfaces this project already ships: Claude Code review on every PR, auto-qa / auto-qa-tuner cron jobs, the GA4 β†’ GitHub issue pipeline, kc-agent + MCP handlers. This PR adds a dedicated `SECURITY-AI.md` sibling doc adapting the threat taxonomy from fullsend-ai/fullsend.

What it contains

  • Scope table β€” every LLM-calling surface in the repo, what triggers it, who controls the input, what the LLM is allowed to do.
  • Six threat categories β€” external prompt injection, insider/compromised credentials, DoS/resource exhaustion, agent drift, supply chain, agent-to-agent injection. Each with definition β†’ how-it-applies-to-console β†’ current mitigations β†’ recommended next steps.
  • Exotic-attack section β€” invisible Unicode steganography, temporal split-payload (xz-style), zero-trust-between-agents principle. Named so reviewers stay alert, even though defenses are still evolving.
  • Audit checklist β€” a 7-item checklist to run through before merging any PR that adds a new LLM-calling workflow or handler.

Cross-references added

  • `docs/security/SECURITY-MODEL.md` β€” new `## 5. AI / Automation Surface` section + "Related documents" pointer.
  • `CLAUDE.md` β€” new `### AI / LLM Surfaces` note under Critical Rules so agent sessions see the audit checklist before writing LLM-calling code.

What's in the next 3 PRs

This is part of a 4-PR series based on the fullsend evaluation:

Zero runtime changes

Pure documentation. No code paths modified. No existing behavior changes.

Test plan

  • `npm run build` β€” passes locally
  • Doc renders on GitHub preview with correct markdown structure
  • Cross-references from `SECURITY-MODEL.md` and `CLAUDE.md` resolve correctly
  • Review by a security-focused reviewer for threat-taxonomy accuracy

πŸ€– Generated with Claude Code

Adapts the threat taxonomy from fullsend-ai/fullsend
(docs/problems/security-threat-model.md) to console's specific
LLM surfaces. Closes the gap where SECURITY-MODEL.md covers runtime
web threats but says nothing about the prompt injection, supply
chain, and agent drift exposures the project already has through
Claude Code review, auto-qa, ga4-error-monitor, and kc-agent/MCP.

Contents:
- Scope table listing every current LLM-calling surface + its input
  source + what the LLM can do.
- Six threat categories (external prompt injection, insider/creds,
  DoS/resource exhaustion, agent drift, supply chain, agent-to-agent
  injection) β€” each with definition, how-it-applies, current
  mitigations, recommended next steps.
- Exotic-attacks section (invisible Unicode steganography, temporal
  split-payload / xz-style, zero-trust-between-agents principle).
- Audit checklist for future LLM workflows β€” reviewers should run
  through this before approving any PR that adds a new LLM call.

Cross-references:
- docs/security/SECURITY-MODEL.md gets a new "AI / Automation
  Surface" section pointing at the new doc.
- CLAUDE.md gets a "AI / LLM Surfaces" note under Critical Rules so
  agent sessions see the audit checklist before writing new
  LLM-calling code.

No runtime code changes. Pure documentation.

First of four PRs from a fullsend-ai/fullsend vs console-automation
evaluation β€” the other three address tier-based change classification,
webhook-driven Copilot PR monitoring, and structured-output decomposed
review prompts.

Signed-off-by: Andrew Anderson <[email protected]>
Copilot AI review requested due to automatic review settings April 15, 2026 23:53
@kubestellar-prow kubestellar-prow Bot added the dco-signoff: yes Indicates the PR's author has signed the DCO. label Apr 15, 2026
@kubestellar-prow
Copy link
Copy Markdown
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign clubanderson for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@netlify
Copy link
Copy Markdown

netlify Bot commented Apr 15, 2026

βœ… Deploy Preview for kubestellarconsole canceled.

Name Link
πŸ”¨ Latest commit 3465462
πŸ” Latest deploy log https://app.netlify.com/projects/kubestellarconsole/deploys/69e02500b5d9530008ac9440

@github-actions
Copy link
Copy Markdown
Contributor

πŸ‘‹ Hey @clubanderson β€” thanks for opening this PR!

πŸ€– This project is developed exclusively using AI coding assistants.

Please do not attempt to code anything for this project manually.
All contributions should be authored using an AI coding tool such as:

This ensures consistency in code style, architecture patterns, test coverage,
and commit quality across the entire codebase.


This is an automated message.

@kubestellar-prow kubestellar-prow Bot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Apr 15, 2026
Copy link
Copy Markdown
Contributor

Copilot AI left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Pull request overview

Adds AI/LLM automation threat-model documentation to complement the existing runtime security model, and cross-links it from core security/agent guidance docs.

Changes:

  • Introduces docs/security/SECURITY-AI.md describing AI automation/LLM threat categories plus an audit checklist for new LLM-calling surfaces.
  • Updates docs/security/SECURITY-MODEL.md to reference the new AI threat model and adds an β€œAI / Automation Surface” section.
  • Updates CLAUDE.md to point contributors/agents at the new AI threat model before adding new LLM surfaces.

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 6 comments.

File Description
docs/security/SECURITY-MODEL.md Adds link and a new section describing the AI/automation threat surface and pointing to the new doc.
docs/security/SECURITY-AI.md New AI automation threat model document: scope table, threat taxonomy, and audit checklist.
CLAUDE.md Adds a β€œAI / LLM Surfaces” rule pointing to the new AI threat model.

**Current mitigations.** GitHub Actions secret store (encrypted at rest). Secret is only accessible to workflows running on the main repo, not forks.

**Recommended next steps.**
- **Per-role GitHub Apps with OIDC isolation** (deferred work β€” documented in `project_automation_fullsend_comparison.md`): split the single token into distinct apps per role. Blast radius of one compromise shrinks to that role's scope.
Copy link

Copilot AI Apr 15, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

project_automation_fullsend_comparison.md is referenced here but doesn’t exist in this repo, so readers can’t follow the pointer. Add the referenced document (with a link) or remove/replace this reference with an existing doc/issue/PR link.

Suggested change
- **Per-role GitHub Apps with OIDC isolation** (deferred work β€” documented in `project_automation_fullsend_comparison.md`): split the single token into distinct apps per role. Blast radius of one compromise shrinks to that role's scope.
- **Per-role GitHub Apps with OIDC isolation** (deferred work): split the single token into distinct apps per role. Blast radius of one compromise shrinks to that role's scope.

Copilot uses AI. Check for mistakes.
Comment on lines +290 to +292
The runtime model above (backend, kc-agent, browser) is only part of the picture. The repo also runs LLM-backed GitHub workflows β€” Claude Code review on every PR, auto-qa and auto-qa-tuner on a cron, a GA4 β†’ GitHub issue pipeline, and the kc-agent itself. These bring threat surfaces that don't look like classic web attacks (prompt injection, supply chain, agent drift, token isolation).

See **[`SECURITY-AI.md`](SECURITY-AI.md)** for the AI-specific threat model β€” six threat categories, current mitigations, and an audit checklist for adding new LLM-calling workflows.
Copy link

Copilot AI Apr 15, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This paragraph describes auto-qa, auto-qa-tuner, and the GA4 issue pipeline as β€œLLM-backed GitHub workflows”, but those workflows don’t call an LLM in this repo. To avoid misleading the threat model, either rename this to β€œautomation/agent-triggering workflows” or explicitly distinguish β€œLLM-calling” vs β€œcreates inputs consumed by LLM agents”.

Suggested change
The runtime model above (backend, kc-agent, browser) is only part of the picture. The repo also runs LLM-backed GitHub workflows β€” Claude Code review on every PR, auto-qa and auto-qa-tuner on a cron, a GA4 β†’ GitHub issue pipeline, and the kc-agent itself. These bring threat surfaces that don't look like classic web attacks (prompt injection, supply chain, agent drift, token isolation).
See **[`SECURITY-AI.md`](SECURITY-AI.md)** for the AI-specific threat model β€” six threat categories, current mitigations, and an audit checklist for adding new LLM-calling workflows.
The runtime model above (backend, kc-agent, browser) is only part of the picture. The repo also runs AI-related GitHub automation: Claude Code review on every PR is LLM-backed, while auto-qa, auto-qa-tuner, and the GA4 β†’ GitHub issue pipeline are automation or agent-triggering workflows that create inputs later consumed by LLM-based agents. These bring threat surfaces that don't look like classic web attacks (prompt injection, supply chain, agent drift, token isolation).
See **[`SECURITY-AI.md`](SECURITY-AI.md)** for the AI-specific threat model β€” six threat categories, current mitigations, and an audit checklist for adding new LLM-calling workflows or automation that feeds LLM agents.

Copilot uses AI. Check for mistakes.
Comment on lines +13 to +19
| Surface | Where | What triggers it | Who controls the input | What the LLM can do |
|---|---|---|---|---|
| Claude Code review | `.github/workflows/claude-code-review.yml` | Every PR | Any PR author (including forks) | Read repo, post review comments, no write access to main |
| auto-qa / auto-qa-tuner | `.github/workflows/auto-qa.yml`, `.github/workflows/auto-qa-tuner.yml` | Scheduled cron | Maintainers (workflow contents) + repo history | Open issues, propose patches |
| ai-fix / scanner workflows | `.github/workflows/ai-fix.yml` (currently disabled) and manually-dispatched scanner sessions | Manual or automated scheduling | Maintainers | Open PRs against branches |
| GA4 error monitor β†’ issue pipeline | `.github/workflows/ga4-error-monitor.yml` | Hourly cron | Google Analytics 4 production event stream (real user traffic) | Open issues with attacker-influenceable text in the title/body |
| kc-agent + MCP handlers | `cmd/kc-agent/main.go`, `pkg/mcp/*` | User opens an agent session in their browser | The user running the session | Execute kubectl operations against the user's kubeconfig |
Copy link

Copilot AI Apr 15, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The scope table uses || at the start of each row (e.g., || Surface | ...), which won’t render as a Markdown table on GitHub. Use standard |-delimited table rows (single leading/trailing |) so the scope table displays correctly.

Copilot uses AI. Check for mistakes.
Comment on lines +9 to +20
## Scope: where LLMs run in this project

The console codebase touches LLM capabilities in five places. This is the complete list as of the document's last update β€” if you are reviewing a PR that adds a new LLM surface, please update this table.

| Surface | Where | What triggers it | Who controls the input | What the LLM can do |
|---|---|---|---|---|
| Claude Code review | `.github/workflows/claude-code-review.yml` | Every PR | Any PR author (including forks) | Read repo, post review comments, no write access to main |
| auto-qa / auto-qa-tuner | `.github/workflows/auto-qa.yml`, `.github/workflows/auto-qa-tuner.yml` | Scheduled cron | Maintainers (workflow contents) + repo history | Open issues, propose patches |
| ai-fix / scanner workflows | `.github/workflows/ai-fix.yml` (currently disabled) and manually-dispatched scanner sessions | Manual or automated scheduling | Maintainers | Open PRs against branches |
| GA4 error monitor β†’ issue pipeline | `.github/workflows/ga4-error-monitor.yml` | Hourly cron | Google Analytics 4 production event stream (real user traffic) | Open issues with attacker-influenceable text in the title/body |
| kc-agent + MCP handlers | `cmd/kc-agent/main.go`, `pkg/mcp/*` | User opens an agent session in their browser | The user running the session | Execute kubectl operations against the user's kubeconfig |

Copy link

Copilot AI Apr 15, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This section says there are β€œfive” LLM surfaces and that the table is the β€œcomplete list”, but the repo also has .github/workflows/claude.yml which invokes anthropics/claude-code-action@v1 (another LLM-calling surface). Either add it to the table or loosen the β€œcomplete list” wording so the doc stays accurate.

Copilot uses AI. Check for mistakes.
Comment on lines +11 to +18
The console codebase touches LLM capabilities in five places. This is the complete list as of the document's last update β€” if you are reviewing a PR that adds a new LLM surface, please update this table.

| Surface | Where | What triggers it | Who controls the input | What the LLM can do |
|---|---|---|---|---|
| Claude Code review | `.github/workflows/claude-code-review.yml` | Every PR | Any PR author (including forks) | Read repo, post review comments, no write access to main |
| auto-qa / auto-qa-tuner | `.github/workflows/auto-qa.yml`, `.github/workflows/auto-qa-tuner.yml` | Scheduled cron | Maintainers (workflow contents) + repo history | Open issues, propose patches |
| ai-fix / scanner workflows | `.github/workflows/ai-fix.yml` (currently disabled) and manually-dispatched scanner sessions | Manual or automated scheduling | Maintainers | Open PRs against branches |
| GA4 error monitor β†’ issue pipeline | `.github/workflows/ga4-error-monitor.yml` | Hourly cron | Google Analytics 4 production event stream (real user traffic) | Open issues with attacker-influenceable text in the title/body |
Copy link

Copilot AI Apr 15, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The table and threat text treat ga4-error-monitor.yml, auto-qa.yml, and auto-qa-tuner.yml as β€œLLM” workflows, but those workflows don’t call an LLM in-repo (they use scripts/gh and open issues/config commits). Consider reframing them as β€œautomation surfaces” (LLM-adjacent) or explicitly distinguishing β€œcalls LLM” vs β€œcreates artifacts consumed by LLM agents”.

Copilot uses AI. Check for mistakes.
Comment on lines +43 to +45
**Definition.** A single credential β€” the `CLAUDE_CODE_OAUTH_TOKEN` secret β€” currently powers every LLM-calling workflow in the repo. Compromise of that one secret grants the attacker the union of every workflow's capabilities.

**How it applies to console.** The secret is used by at least `claude-code-review.yml`, `auto-qa.yml`, `auto-qa-tuner.yml`, `ai-fix.yml`, and any manually-dispatched scanner workflows. If it's exfiltrated (fork leak, workflow log leak, supply-chain compromise of the `anthropics/claude-code-action` action itself), the attacker can post review comments, open issues, and potentially create branches on behalf of the account.
Copy link

Copilot AI Apr 15, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This section claims CLAUDE_CODE_OAUTH_TOKEN is used by auto-qa.yml, auto-qa-tuner.yml, and ai-fix.yml, but currently only claude-code-review.yml and claude.yml reference that secret. Update the list to match current secret usage (and/or call out the other secrets these workflows actually use).

Suggested change
**Definition.** A single credential β€” the `CLAUDE_CODE_OAUTH_TOKEN` secret β€” currently powers every LLM-calling workflow in the repo. Compromise of that one secret grants the attacker the union of every workflow's capabilities.
**How it applies to console.** The secret is used by at least `claude-code-review.yml`, `auto-qa.yml`, `auto-qa-tuner.yml`, `ai-fix.yml`, and any manually-dispatched scanner workflows. If it's exfiltrated (fork leak, workflow log leak, supply-chain compromise of the `anthropics/claude-code-action` action itself), the attacker can post review comments, open issues, and potentially create branches on behalf of the account.
**Definition.** A shared credential β€” the `CLAUDE_CODE_OAUTH_TOKEN` secret β€” is currently referenced by multiple AI-related workflows in the repo, including `claude-code-review.yml` and `claude.yml`. Compromise of that secret grants the attacker the union of the capabilities exposed through the workflows that actually use it.
**How it applies to console.** Based on current workflow references, `CLAUDE_CODE_OAUTH_TOKEN` is used by `claude-code-review.yml` and `claude.yml`. Other AI-related workflows such as `auto-qa.yml`, `auto-qa-tuner.yml`, `ai-fix.yml`, and manually-dispatched scanner sessions may use different credentials; they should be documented separately rather than being treated as consumers of this secret. If `CLAUDE_CODE_OAUTH_TOKEN` is exfiltrated (workflow log leak, supply-chain compromise of the `anthropics/claude-code-action` action itself, or another secret-handling failure), the attacker can act with the permissions available to the workflows that use it.

Copilot uses AI. Check for mistakes.
clubanderson added a commit that referenced this pull request Apr 16, 2026
…red output

Fourth of four PRs from the fullsend-ai/fullsend automation
evaluation. Adopts fullsend's "decomposed code review" pattern
(docs/problems/code-review.md) β€” split review into specialized
concerns (correctness, security, style) rather than one monolithic
pass.

Done as a single LLM call with structured output, not 3 parallel
jobs, to keep token cost neutral. The prompt asks the existing
/code-review:code-review plugin to organize its findings into three
explicit sections with P0/P1/P2 priority tags, and to write "None."
under any section that has nothing to report so it doesn't
fabricate issues.

The SECURITY section references docs/security/SECURITY-AI.md (added
in PR #8249) so the reviewer explicitly watches for the six threat
categories β€” external prompt injection, insider credentials, DoS,
agent drift, supply chain, agent-to-agent injection β€” on any PR
that touches LLM-calling code.

Only change is the `prompt:` field in the existing
`.github/workflows/claude-code-review.yml`. No new actions, no new
secrets, no cost increase.

Expected behavior after merge:
- Every PR's Claude Code review comment now has a CORRECTNESS /
  SECURITY / STYLE structure instead of prose.
- Every issue is tagged P0/P1/P2 so reviewers can triage quickly.
- A pure doc PR should show "None." in all three sections, not
  fabricated nits.
- A PR touching LLM-calling code should produce at least one item
  in SECURITY referencing prompt-injection risk.

Signed-off-by: Andrew Anderson <[email protected]>
@clubanderson clubanderson merged commit efcb982 into main Apr 16, 2026
41 of 42 checks passed
@kubestellar-prow kubestellar-prow Bot deleted the docs/security-ai branch April 16, 2026 00:06
@github-actions
Copy link
Copy Markdown
Contributor

Thank you for your contribution! Your PR has been merged.

Check out what's new:

Stay connected: Slack #kubestellar-dev | Multi-Cluster Survey

clubanderson added a commit that referenced this pull request Apr 16, 2026
…red output (#8253)

* πŸ“ docs(security): mark local LLM providers registered + active

Updates SECURITY-MODEL.md Β§3 to reflect #8248, which
registers Ollama, llama.cpp, LocalAI, vLLM, LM Studio, RHAIIS, Groq,
OpenRouter and Open WebUI as chat-only agent providers in
InitializeProviders.

Changes:

- Provider table flips the Registered column from "no" to "yes (chat
  only)" for the nine HTTP providers that are now wired into the agent
  dropdown, and adds rows for the six new local LLM runners with their
  env vars and default URLs.
- Explains the chat-only capability flag and why missions still route
  through the tool-capable CLI agents (registry.go:303 rationale).
- Adds a "Local LLM strategy" subsection that cross-links the
  docs.kubestellar.io local-llm-strategy page and the eight install
  missions on kubestellar/console-kb.
- Replaces the "Planned follow-up" subsection with active recipes for
  each runner β€” Ollama loopback default, in-cluster Service URLs for
  llama.cpp/LocalAI/vLLM/RHAIIS, LM Studio workstation default, and
  Groq/OpenRouter/Open WebUI gateway overrides. The "# PLANNED β€”
  not yet wired at runtime" bash comments are removed.

The threat model claims about kubeconfig and credentials staying out
of the request body are unchanged and still authoritative.

Signed-off-by: Andrew Anderson <[email protected]>

* ✨ feat(ci): decomposed-review prompt for Claude Code review β€” structured output

Fourth of four PRs from the fullsend-ai/fullsend automation
evaluation. Adopts fullsend's "decomposed code review" pattern
(docs/problems/code-review.md) β€” split review into specialized
concerns (correctness, security, style) rather than one monolithic
pass.

Done as a single LLM call with structured output, not 3 parallel
jobs, to keep token cost neutral. The prompt asks the existing
/code-review:code-review plugin to organize its findings into three
explicit sections with P0/P1/P2 priority tags, and to write "None."
under any section that has nothing to report so it doesn't
fabricate issues.

The SECURITY section references docs/security/SECURITY-AI.md (added
in PR #8249) so the reviewer explicitly watches for the six threat
categories β€” external prompt injection, insider credentials, DoS,
agent drift, supply chain, agent-to-agent injection β€” on any PR
that touches LLM-calling code.

Only change is the `prompt:` field in the existing
`.github/workflows/claude-code-review.yml`. No new actions, no new
secrets, no cost increase.

Expected behavior after merge:
- Every PR's Claude Code review comment now has a CORRECTNESS /
  SECURITY / STYLE structure instead of prose.
- Every issue is tagged P0/P1/P2 so reviewers can triage quickly.
- A pure doc PR should show "None." in all three sections, not
  fabricated nits.
- A PR touching LLM-calling code should produce at least one item
  in SECURITY referencing prompt-injection risk.

Signed-off-by: Andrew Anderson <[email protected]>

---------

Signed-off-by: Andrew Anderson <[email protected]>
@github-actions
Copy link
Copy Markdown
Contributor

Post-merge build verification passed βœ…

Both Go and frontend builds compiled successfully against merge commit efcb982ea00aef732c9dc4097262db9299a7d53d.

clubanderson added a commit that referenced this pull request Apr 16, 2026
…odel links

Follow-up to #8210 which added the Security posture section to the
install modal and a Security tab to the mission detail view. The
original PR linked to the source-grounded repo version of
SECURITY-MODEL.md β€” functional but less user-friendly than the
rendered docs site, and missed the AI threat model entirely.

Changes:

SetupInstructionsDialog.tsx (Run KubeStellar Console Locally modal):
- Primary security link now points at
  https://kubestellar.io/docs/console/main/console/security-model/
  (rendered docs site, main version).
- Added AI automation threat model link (SECURITY-AI.md from #8249)
  to surface prompt-injection / supply-chain / agent-drift concerns.
- Kept the repo version as a secondary "source-grounded" link with
  smaller muted styling β€” useful for readers who want the exact
  file/line claims SECURITY-MODEL.md makes.

MissionDetailView.tsx (mission security tab):
- Same docs.kubestellar.io primary URL swap.
- Added AI threat model link both to the populated-tab footer and
  the empty-state fallback.

Three new URL constants replace the single hardcoded GH link:
  SECURITY_DOC_URL        = docs.kubestellar.io (primary)
  SECURITY_DOC_REPO_URL   = github.com/.../SECURITY-MODEL.md (secondary)
  SECURITY_AI_DOC_URL     = github.com/.../SECURITY-AI.md

Signed-off-by: Andrew Anderson <[email protected]>
clubanderson added a commit that referenced this pull request Apr 16, 2026
…odel links (#8348)

Follow-up to #8210 which added the Security posture section to the
install modal and a Security tab to the mission detail view. The
original PR linked to the source-grounded repo version of
SECURITY-MODEL.md β€” functional but less user-friendly than the
rendered docs site, and missed the AI threat model entirely.

Changes:

SetupInstructionsDialog.tsx (Run KubeStellar Console Locally modal):
- Primary security link now points at
  https://kubestellar.io/docs/console/main/console/security-model/
  (rendered docs site, main version).
- Added AI automation threat model link (SECURITY-AI.md from #8249)
  to surface prompt-injection / supply-chain / agent-drift concerns.
- Kept the repo version as a secondary "source-grounded" link with
  smaller muted styling β€” useful for readers who want the exact
  file/line claims SECURITY-MODEL.md makes.

MissionDetailView.tsx (mission security tab):
- Same docs.kubestellar.io primary URL swap.
- Added AI threat model link both to the populated-tab footer and
  the empty-state fallback.

Three new URL constants replace the single hardcoded GH link:
  SECURITY_DOC_URL        = docs.kubestellar.io (primary)
  SECURITY_DOC_REPO_URL   = github.com/.../SECURITY-MODEL.md (secondary)
  SECURITY_AI_DOC_URL     = github.com/.../SECURITY-AI.md

Signed-off-by: Andrew Anderson <[email protected]>
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

dco-signoff: yes Indicates the PR's author has signed the DCO. size/L Denotes a PR that changes 100-499 lines, ignoring generated files.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants