
📝 docs: add security model, air-gapped deployment, and local LLM guides (#8194 #8195 #8196)#8203

Merged
clubanderson merged 1 commit into main from docs/security-airgapped-localllm
Apr 15, 2026

Conversation

@clubanderson
Collaborator

Summary

Adds docs/security/SECURITY-MODEL.md — a single document covering the three related questions filed in #8194, #8195, and #8196 (security model, air-gapped deployment, and local / self-hosted LLMs).

Also adds a one-line link from README.md § AI configuration → the new document.

Grounding

Every runtime claim is grounded in source with file:line references. Verified while writing:

  • pkg/agent/server.go:578 — kc-agent binds 127.0.0.1:8585
  • cmd/kc-agent/main.go:25 — default port flag
  • pkg/agent/config.go:277-314 — the canonical list of env-var names per provider
  • pkg/agent/provider_groq.go:22-51 — GROQ_BASE_URL override
  • pkg/agent/provider_openrouter.go:23-58 — OPENROUTER_BASE_URL override
  • pkg/agent/provider_openwebui.go:16,39 — OPEN_WEBUI_URL
  • pkg/agent/provider_openai.go:15 — note the asymmetry that OpenAI's base URL is not currently env-overridable
  • pkg/api/server.go:429-432 — CSP allow-list for the local kc-agent
  • pkg/agent/server.go:214 — KC_AGENT_TOKEN optional shared secret

Per feedback_verify_readme_claims.md, this PR is strictly descriptive — no runtime behavior changes.

Scope

Docs-only. No code, no config, no version bump. One new file plus two lines in README.

Test plan

  • cd web && npm run build — passes (build succeeded, all post-build safety checks passed)
  • cd web && npm run lint — no new errors introduced (pre-existing baseline unchanged; zero matches for the new file path in the lint output)
  • New doc opens cleanly in GitHub preview
  • Every file:line reference in the doc was verified against the current source

Fixes #8194, Fixes #8195, Fixes #8196

Add docs/security/SECURITY-MODEL.md covering:

- Security model and data flow (browser ↔ Go backend ↔ kc-agent ↔ clusters)
- The pod-SA identity rule (bootstrap, GPU reserve, self-upgrade only)
- What kc-agent does and does NOT send to AI providers
- Air-gapped / restricted-egress deployment postures
- Local / self-hosted LLM configuration via GROQ_BASE_URL,
  OPENROUTER_BASE_URL, and OPEN_WEBUI_URL (OpenAI-compatible shim)
- Environment variable and port reference

All runtime claims are grounded in source with file:line references
(pkg/agent/config.go, pkg/agent/provider_groq.go,
pkg/agent/provider_openrouter.go, pkg/agent/provider_openwebui.go,
pkg/agent/server.go, cmd/kc-agent/main.go, pkg/api/server.go).

README.md gains a link to the new document from the AI configuration
section.

Fixes #8194, Fixes #8195, Fixes #8196

Signed-off-by: Andy Anderson <[email protected]>
Copilot AI review requested due to automatic review settings April 15, 2026 18:43
@clubanderson clubanderson added the ai-generated Pull request generated by AI label Apr 15, 2026
@kubestellar-prow kubestellar-prow Bot added the dco-signoff: yes Indicates the PR's author has signed the DCO. label Apr 15, 2026
@kubestellar-prow
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign mikespreitzer for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@netlify

netlify Bot commented Apr 15, 2026

Deploy Preview for kubestellarconsole ready!

🔨 Latest commit: afc0de0
🔍 Latest deploy log: https://app.netlify.com/projects/kubestellarconsole/deploys/69dfdc548394370008220e72
😎 Deploy Preview: https://deploy-preview-8203.console-deploy-preview.kubestellar.io

@kubestellar-prow kubestellar-prow Bot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Apr 15, 2026
@github-actions
Contributor

👋 Hey @clubanderson — thanks for opening this PR!

🤖 This project is developed exclusively using AI coding assistants.

Please do not attempt to code anything for this project manually.
All contributions should be authored using an AI coding tool such as:

This ensures consistency in code style, architecture patterns, test coverage,
and commit quality across the entire codebase.


This is an automated message.

@clubanderson clubanderson merged commit 8564e81 into main Apr 15, 2026
15 of 16 checks passed
@clubanderson clubanderson deleted the docs/security-airgapped-localllm branch April 15, 2026 18:50
@github-actions
Contributor

Thank you for your contribution! Your PR has been merged.

Stay connected: Slack #kubestellar-dev | Multi-Cluster Survey

Contributor

Copilot AI left a comment


Pull request overview

Adds consolidated security and deployment documentation for KubeStellar Console, covering the security model/data flow, air-gapped deployment postures, and local/self-hosted LLM configuration, with a README pointer for discoverability.

Changes:

  • Add docs/security/SECURITY-MODEL.md describing architecture/data flow, auth/CORS/CSP surface, and deployment postures.
  • Document local/self-hosted LLM configuration options and environment variables with worked examples.
  • Link the new document from the README’s AI configuration section.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 6 comments.

File Description
docs/security/SECURITY-MODEL.md New consolidated security/air-gap/local-LLM guide with diagrams, tables, and source references.
README.md Adds a link from AI configuration to the new security model document.

Comment thread README.md

**If no key is configured**, AI-powered features fall back to deterministic / rule-based behavior. The card suggestions, missions, and dashboards remain fully usable.

**Security model, air-gapped deployments, and local / self-hosted LLMs** are covered in [`docs/security/SECURITY-MODEL.md`](docs/security/SECURITY-MODEL.md). That document explains the data flow between browser, Go backend, kc-agent, and AI providers; how to run the console with no external AI access; and how to point an OpenAI-compatible local LLM (Ollama, vLLM, LM Studio, internal gateway) at kc-agent via `GROQ_BASE_URL` / `OPENROUTER_BASE_URL` / `OPEN_WEBUI_URL`.

Copilot AI Apr 15, 2026


This README addition states that users can point an OpenAI-compatible local LLM at kc-agent via GROQ_BASE_URL / OPENROUTER_BASE_URL / OPEN_WEBUI_URL. In the current code, those API-key HTTP providers are not registered by InitializeProviders (API-only agents are explicitly excluded), so these env vars won’t enable a selectable/usable provider at runtime. Please adjust the sentence to avoid implying this works today, or document the current supported path (CLI-based agents) and any planned follow-up to wire in these base-URL-overridable providers.

Suggested change
**Security model, air-gapped deployments, and local / self-hosted LLMs** are covered in [`docs/security/SECURITY-MODEL.md`](docs/security/SECURITY-MODEL.md). That document explains the data flow between browser, Go backend, kc-agent, and AI providers; how to run the console with no external AI access; and how to point an OpenAI-compatible local LLM (Ollama, vLLM, LM Studio, internal gateway) at kc-agent via `GROQ_BASE_URL` / `OPENROUTER_BASE_URL` / `OPEN_WEBUI_URL`.
**Security model, air-gapped deployments, and local / self-hosted LLMs** are covered in [`docs/security/SECURITY-MODEL.md`](docs/security/SECURITY-MODEL.md). That document explains the data flow between browser, Go backend, kc-agent, and AI providers; how to run the console with no external AI access; and the currently supported self-hosted path using kc-agent's CLI-based agents. It also notes that OpenAI-compatible local LLM endpoints (for example Ollama, vLLM, LM Studio, or an internal gateway) are a planned follow-up for the base-URL-overridable HTTP providers referenced by `GROQ_BASE_URL` / `OPENROUTER_BASE_URL` / `OPEN_WEBUI_URL`, which are not yet wired into the selectable provider list in the current build.

Copilot uses AI. Check for mistakes.
Comment on lines +100 to +108
Key consequence: **the kubeconfig, raw secrets, and cluster credentials never cross the process boundary from kc-agent.** The only thing kc-agent sends outward is the HTTP chat payload to the configured AI provider, which contains the conversation the user has been having (system prompt + message history + current prompt — see `pkg/agent/provider_openai.go:207-238` for the exact OpenAI shape).

### What kc-agent does **not** send to AI providers

- It does not upload `~/.kube/config`.
- It does not upload cluster bearer tokens, client certificates, or any other credential material.
- It does not auto-attach arbitrary cluster objects. The conversation context is whatever the user chose to type or paste, plus the system prompt defined in the provider implementation (`DefaultSystemPrompt`).

If you need to audit what leaves the machine, the provider files under `pkg/agent/provider_*.go` each contain exactly one outbound HTTP call site per request type (`Chat` and `StreamChat`). Those are the only places any AI traffic originates.
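To make the quoted claim concrete, the outbound HTTP body described here can be sketched as follows. This is an illustrative example of the OpenAI chat-completions wire shape only — the model name and message contents are hypothetical, and the literal body kc-agent builds lives in `pkg/agent/provider_openai.go:207-238`:

```shell
# Illustrative only: the OpenAI-compatible chat-completions body shape.
# Model name and messages are hypothetical placeholders.
cat <<'EOF' > /tmp/chat-payload.json
{
  "model": "gpt-4o-mini",
  "messages": [
    {"role": "system", "content": "(DefaultSystemPrompt text goes here)"},
    {"role": "user", "content": "why is my pod stuck in Pending?"}
  ]
}
EOF
# Note: nothing in this body references ~/.kube/config, bearer tokens,
# or cluster objects unless the user pasted them into the conversation.
```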

Copilot AI Apr 15, 2026


The doc states that kc-agent only sends the user’s typed/pasted conversation to AI providers and that the provider_*.go HTTP call sites are the only place AI traffic originates. In the current code, InitializeProviders registers CLI-based tool agents (e.g., claude-code/codex/gemini-cli) and explicitly does NOT register the API-key HTTP providers (OpenAI/Claude/Gemini/Groq/OpenRouter/Open WebUI). Those CLI agents can execute kubectl/helm and may transmit command output to the upstream LLM via the external CLI, so cluster data can leave the machine depending on the agent in use. Please revise this section to reflect the actual agent types and data-flow differences (CLI tool agents vs direct HTTP providers).

Suggested change
Key consequence: **the kubeconfig, raw secrets, and cluster credentials never cross the process boundary from kc-agent.** The only thing kc-agent sends outward is the HTTP chat payload to the configured AI provider, which contains the conversation the user has been having (system prompt + message history + current prompt — see `pkg/agent/provider_openai.go:207-238` for the exact OpenAI shape).
### What kc-agent does **not** send to AI providers
- It does not upload `~/.kube/config`.
- It does not upload cluster bearer tokens, client certificates, or any other credential material.
- It does not auto-attach arbitrary cluster objects. The conversation context is whatever the user chose to type or paste, plus the system prompt defined in the provider implementation (`DefaultSystemPrompt`).
If you need to audit what leaves the machine, the provider files under `pkg/agent/provider_*.go` each contain exactly one outbound HTTP call site per request type (`Chat` and `StreamChat`). Those are the only places any AI traffic originates.
Key consequence: **the kubeconfig, raw secrets, and cluster credentials never cross the process boundary from kc-agent as direct credential uploads.** However, the data that can leave the machine depends on which kind of AI agent is configured:
- **CLI tool agents** (for example `claude-code`, `codex`, `gemini-cli`) run an external CLI locally. These agents can execute tools such as `kubectl` and `helm`, and the external CLI may send tool output or other cluster-derived context to its upstream LLM depending on the agent's behavior and the prompt.
- **Direct HTTP providers** (for example OpenAI-compatible/API-key providers) construct an HTTP chat payload in `pkg/agent/provider_*.go`. For those providers, the outbound data is the request body built by the provider implementation (for example, system prompt + message history + current prompt; see `pkg/agent/provider_openai.go:207-238` for the OpenAI shape).
### What kc-agent does **not** send to AI providers
- It does not upload `~/.kube/config`.
- It does not upload cluster bearer tokens, client certificates, or any other credential material.
- It does not auto-attach arbitrary cluster objects on its own. For direct HTTP providers, the conversation context is whatever the user chose to type or paste, plus the system prompt defined in the provider implementation (`DefaultSystemPrompt`).
- For CLI tool agents, be aware that cluster data can still leave the machine indirectly if the agent runs commands (for example via `kubectl` or `helm`) and the external CLI includes that output in requests to the upstream model.
If you need to audit what leaves the machine, distinguish the two paths: for **direct HTTP providers**, inspect the outbound request construction and HTTP call sites in `pkg/agent/provider_*.go`; for **CLI tool agents**, audit the external CLI invocation plus whatever tool output and prompts that CLI may forward upstream. The `provider_*.go` call sites are therefore the AI egress points only for the direct HTTP provider path, not for every configured agent type.

Comment on lines +136 to +142
All cluster-management features continue to work. **AI is optional.** If no key is configured for any provider, `IsKeyAvailable()` returns `false` (`pkg/agent/config.go:235-244`), and AI-driven features fall back to deterministic / rule-based behavior. The README covers this under *AI configuration*: "If no key is configured, AI-powered features fall back to deterministic / rule-based behavior."

To run without AI:

1. Do **not** set `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, `GOOGLE_API_KEY`, `GROQ_API_KEY`, `OPENROUTER_API_KEY`, or `OPEN_WEBUI_API_KEY` (`pkg/agent/config.go:277-314` lists every recognized variable).
2. Leave the Settings → API Keys modal empty (no entries in `~/.kc/config.yaml`).
3. Optionally block outbound DNS/HTTP to `api.anthropic.com`, `api.openai.com`, `generativelanguage.googleapis.com`, `api.groq.com`, and `openrouter.ai` at your egress.
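Step 3 above could be sketched, for a single developer machine, as a hosts-file sinkhole for the listed endpoints. This is illustrative defense-in-depth only — a real restricted-egress posture belongs in your egress firewall or cluster NetworkPolicy, not in `/etc/hosts`:

```shell
# Generate sinkhole entries for the hosted AI endpoints named above.
# Illustrative sketch; prefer enforcing this at the network egress layer.
for h in api.anthropic.com api.openai.com generativelanguage.googleapis.com \
         api.groq.com openrouter.ai; do
  echo "0.0.0.0 $h"
done > /tmp/ai-sinkhole.hosts
# To apply locally (requires root):
#   sudo tee -a /etc/hosts < /tmp/ai-sinkhole.hosts
```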

Copilot AI Apr 15, 2026


The “restricted egress (no AI provider)” section ties AI enablement to API keys and ConfigManager.IsKeyAvailable(). However, the current agent registry only registers CLI-based agents and does not register API-key providers; additionally, the /settings/keys status endpoint currently returns an empty provider list. As written, the doc implies that setting OPENAI_API_KEY/ANTHROPIC_API_KEY/etc enables AI features, but that is not true in the current runtime behavior. Please update this posture description to describe the real gating condition (availability of registered CLI agents) or document that API-key providers are currently disabled/not wired in.

Suggested change
All cluster-management features continue to work. **AI is optional.** If no key is configured for any provider, `IsKeyAvailable()` returns `false` (`pkg/agent/config.go:235-244`), and AI-driven features fall back to deterministic / rule-based behavior. The README covers this under *AI configuration*: "If no key is configured, AI-powered features fall back to deterministic / rule-based behavior."
To run without AI:
1. Do **not** set `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, `GOOGLE_API_KEY`, `GROQ_API_KEY`, `OPENROUTER_API_KEY`, or `OPEN_WEBUI_API_KEY` (`pkg/agent/config.go:277-314` lists every recognized variable).
2. Leave the Settings → API Keys modal empty (no entries in `~/.kc/config.yaml`).
3. Optionally block outbound DNS/HTTP to `api.anthropic.com`, `api.openai.com`, `generativelanguage.googleapis.com`, `api.groq.com`, and `openrouter.ai` at your egress.
All cluster-management features continue to work. **AI is optional.** In the current runtime, AI features are gated by the availability of a registered AI agent/provider, and the active registry is CLI-based. API-key-based providers are recognized in configuration, but they are not currently wired into the runtime provider registry/status path, so setting `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, `GOOGLE_API_KEY`, `GROQ_API_KEY`, `OPENROUTER_API_KEY`, or `OPEN_WEBUI_API_KEY` does **not** by itself enable AI features. When no supported CLI-backed AI agent/provider is available, AI-driven features fall back to deterministic / rule-based behavior.
To run without AI:
1. Do **not** configure or run any supported CLI-backed AI agent/provider for kc-agent.
2. Treat the Settings → API Keys modal and related `*_API_KEY` variables as non-operative for current runtime enablement; leaving them empty is fine, but their presence alone does not activate AI.
3. Optionally block outbound DNS/HTTP to known hosted AI endpoints such as `api.anthropic.com`, `api.openai.com`, `generativelanguage.googleapis.com`, `api.groq.com`, and `openrouter.ai` at your egress as defense in depth.

Comment on lines +168 to +184
## 3. Local / Self-Hosted LLMs

The AI layer is a set of pluggable providers under `pkg/agent/provider_*.go`. Each provider maps to one API key env var (listed in `pkg/agent/config.go:277-314`) and, in some cases, a base-URL override.

### Supported providers and env vars

| Provider | `provider` name | API key env var | Model env var | Base URL overridable? | Base URL env var | Source |
|---|---|---|---|---|---|---|
| Anthropic Claude | `claude` / `anthropic` | `ANTHROPIC_API_KEY` | `CLAUDE_MODEL` | no (fixed to Anthropic) | — | `pkg/agent/provider_claude.go` |
| OpenAI (ChatGPT) | `openai` | `OPENAI_API_KEY` | `OPENAI_MODEL` | no (fixed to `api.openai.com`) | — | `pkg/agent/provider_openai.go:15` |
| Google Gemini | `gemini` / `google` | `GOOGLE_API_KEY` | `GEMINI_MODEL` | no | — | `pkg/agent/provider_gemini.go:15` |
| Groq (OpenAI-compatible) | `groq` | `GROQ_API_KEY` | `GROQ_MODEL` | **yes** | `GROQ_BASE_URL` | `pkg/agent/provider_groq.go:22-51` |
| OpenRouter (OpenAI-compatible) | `openrouter` | `OPENROUTER_API_KEY` | `OPENROUTER_MODEL` | **yes** | `OPENROUTER_BASE_URL` | `pkg/agent/provider_openrouter.go:23-58` |
| Open WebUI (OpenAI-compatible) | `open-webui` | `OPEN_WEBUI_API_KEY` | `OPEN_WEBUI_MODEL` | **yes** | `OPEN_WEBUI_URL` | `pkg/agent/provider_openwebui.go:16,39` |

Note the asymmetry: **the upstream OpenAI provider does not currently honor an `OPENAI_BASE_URL` override.** The hostname is a package-level variable in `pkg/agent/provider_openai.go:15`, but it is not re-read from the environment. If you want to point an OpenAI-compatible local server at the console today, use one of the three providers whose base URLs *are* overridable: **Groq, OpenRouter, or Open WebUI**. All three speak the OpenAI chat-completions wire format.
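The base-URL override pattern in the table would look like this in the environment. Values here are hypothetical (a local Ollama-style server on port 11434 serving the OpenAI-compatible `/v1` path), and a Copilot comment on this thread notes these HTTP providers may not be registered at runtime today — so treat this as the intended shape, not a verified working recipe:

```shell
# Hypothetical sketch: point the Groq provider slot at a local
# OpenAI-compatible server instead of api.groq.com.
export GROQ_BASE_URL="http://127.0.0.1:11434/v1"   # local endpoint, illustrative
export GROQ_API_KEY="local-placeholder"              # many local servers ignore the key
export GROQ_MODEL="llama3.1:8b"                      # model name as the local server knows it
```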


Copilot AI Apr 15, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This section says that the in-console AI layer supports Groq/OpenRouter/Open WebUI (with base-URL overrides) and presents them as supported providers. In the current source, those HTTP API providers are not registered by InitializeProviders (they’re explicitly excluded as “API-only agents”), so users cannot actually select/use them via the agent’s provider registry. Please either (a) adjust the doc to clarify these providers exist in code but are currently not wired/registered, or (b) update the implementation in a follow-up PR to register them and expose them safely.

Comment on lines +119 to +122
If you deploy the console inside a cluster with `deploy.sh`, outbound traffic from the **backend pod** is limited to:

- GitHub API calls for OAuth exchange and update checks (`update_checker.go`). These can be disabled.
- Nothing else in the core install. No telemetry, no AI calls. AI calls originate from the user's **local** kc-agent, not from the pod.

Copilot AI Apr 15, 2026


This “What actually leaves the cluster” bullet references update checks via update_checker.go in the context of the backend pod. In the current tree, update_checker.go is part of the local kc-agent package (auto-update) rather than the Go backend, so this reference is misleading for an in-cluster backend deployment. Please correct the file/path reference and be explicit about which process (backend vs local agent) performs any GitHub/update polling.

| `CLAUDE_MODEL` / `OPENAI_MODEL` / `GEMINI_MODEL` / `GROQ_MODEL` / `OPENROUTER_MODEL` / `OPEN_WEBUI_MODEL` | kc-agent | Model override per provider |
| `KC_AGENT_TOKEN` | kc-agent | Optional shared secret for browser→agent auth |
| `KC_ALLOWED_ORIGINS` | kc-agent | Extra allowed origins (comma-separated) |
| `KC_DEV_MODE` | kc-agent | Development mode toggle (`1` to enable) |

Copilot AI Apr 15, 2026


The env var cheat sheet lists KC_DEV_MODE as a kc-agent “development mode toggle”. In the current code, the kc-agent binary’s logging/dev behavior is controlled by DEV_MODE (cmd/kc-agent/main.go), while KC_DEV_MODE=1 is used for the agent’s backend process restart path (pkg/agent/server_http.go) rather than a general dev-mode toggle. Please clarify the distinction or rename the entry so operators don’t set the wrong variable when trying to run in dev mode.

Suggested change
| `KC_DEV_MODE` | kc-agent | Development mode toggle (`1` to enable) |
| `DEV_MODE` | kc-agent | General kc-agent development/logging mode toggle |
| `KC_DEV_MODE` | kc-agent | Used for the backend-driven agent restart/dev path; not the general kc-agent dev-mode toggle |

@github-actions
Contributor

Post-merge build verification passed

Both Go and frontend builds compiled successfully against merge commit 8564e818128e0999d42ffc3e516f3cd3828f68a1.

clubanderson added a commit that referenced this pull request Apr 15, 2026
Enhances the merged security doc (#8203) with four mermaid diagrams
that make the architecture and posture choices visible at a glance:

1. **Component diagram** — replaces the ASCII-art architecture block
   with a richer mermaid flowchart that groups components by trust
   boundary (user machine / console deployment / managed clusters /
   AI providers), uses solid vs dashed arrows to distinguish
   mandatory vs optional flows, and color-codes the boundaries.

2. **Auth + transport defense-in-depth** — shows the four layers
   that gate traffic into kc-agent (bind check on 127.0.0.1, CORS
   allow-list, DNS-rebinding guard, optional KC_AGENT_TOKEN) so
   readers can see why loopback alone isn't the whole story and
   when they should set the token.

3. **Posture comparison** — stacks the three network postures
   (fully online / restricted egress / fully air-gapped) side by
   side with red dotted arrows marking the flows explicitly
   blocked at egress in each posture.

4. **Local-LLM provider-slot routing** — shows how setting
   GROQ_BASE_URL (or OPENROUTER_BASE_URL / OPEN_WEBUI_URL)
   redirects the provider slot to Ollama, vLLM, LM Studio, or
   a corporate gateway without touching the console code.

Also adds a short "Local LLM as a security posture" subsection
framing the override as the correct choice for air-gapped and
high-sensitivity environments — not a feature gap. This is
deliberately scoped to the security doc; broader local-LLM
support work lives elsewhere.

No runtime behavior changes. Documentation only.

Signed-off-by: Andrew Anderson <[email protected]>
clubanderson added a commit that referenced this pull request Apr 15, 2026
…detail

Makes the security picture visible in-context at the two moments
users care about — installing the Console itself and installing
a CNCF project via a guided mission. Both surfaces link to the
SECURITY-MODEL.md doc merged in #8203.

Setup install modal (SetupInstructionsDialog.tsx):
- New expandable "Security posture" section next to the Dev
  Guide / K8s Deploy / OAuth sections
- Four subsections covering kc-agent posture, AI key handling,
  what leaves your machine, and the air-gapped / local-LLM
  option (framed as a security posture, not a feature gap —
  deliberately scoped to NOT conflate with the separate broader
  local-LLM support work)
- "Read the full security model" link to docs/security/SECURITY-MODEL.md

Mission Detail view (MissionDetailView.tsx):
- New 5th tab: install / uninstall / upgrade / troubleshooting /
  **security**
- Renders mission.security steps via the existing StepCard
  component
- When mission.security is populated, adds a footer link to the
  overall SECURITY-MODEL.md so users always have a path to the
  full doc
- When mission.security is empty, shows a helpful fallback with
  the global doc link and an "Suggest security notes" button
  (reuses the existing onImprove flow)

Schema (lib/missions/types.ts):
- Adds optional `security?: MissionStep[]` to the MissionExport
  interface. Backwards-compatible.

Locale (locales/en/common.json):
- Adds `missions.detail.tabs.security` and
  `missions.detail.tabs.securityEmpty` strings

Paired with kubestellar/console-kb#2027 which introduces the
schema-side `mission.security` array and populates the first
mission (install-kubevirt).

Signed-off-by: Andrew Anderson <[email protected]>
clubanderson added a commit that referenced this pull request Apr 15, 2026
…DME (#8207)

Fixes #8207

Addresses all 6 Copilot review comments from PR #8203 (security docs
bundle). Verified each claim against source before applying:

- Verified InitializeProviders (pkg/agent/registry.go:283) registers
  only CLI-based tool agents and explicitly excludes API-key HTTP
  providers (claude/openai/gemini/groq/openrouter/open-webui).
- Verified update_checker.go lives in pkg/agent/ (local kc-agent),
  not in the Go backend server pod.
- Verified DEV_MODE is read in cmd/kc-agent/main.go:18 while
  KC_DEV_MODE=1 is only used in pkg/agent/server_http.go:2202 for
  the backend-driven agent restart path.

Changes:

1. README.md (finding #1): The "security model" paragraph no longer
   claims users can point an OpenAI-compatible local LLM at kc-agent
   via GROQ_BASE_URL / OPENROUTER_BASE_URL / OPEN_WEBUI_URL today.
   Reframed as a planned follow-up; currently supported path is the
   CLI-based agents.

2. SECURITY-MODEL.md §1 data flow (finding #2): Replaced the
   single-sentence "Key consequence" block with the two-path
   distinction (CLI tool agents vs direct HTTP providers). Notes
   that CLI agents can exfiltrate cluster data indirectly via
   kubectl/helm tool output; direct HTTP providers are not
   registered at runtime today.

3. SECURITY-MODEL.md §2 Posture B (finding #3): Rewrote the
   restricted-egress section to match runtime reality. AI gating
   is by registered CLI agent availability, not by API-key env
   vars. Setting *_API_KEY does not by itself enable AI. Settings
   → API Keys modal documented as non-operative.

4. SECURITY-MODEL.md §1 "leaves the cluster" (finding #5):
   Corrected the update_checker.go reference. The local kc-agent
   (not the backend pod) performs any GitHub update polling.
   In-cluster backend deployments do not poll GitHub from the
   server pod.

5. SECURITY-MODEL.md §3 Local/Self-hosted LLMs (finding #4):
   Added a prominent "current registration status" subsection
   stating that Groq/OpenRouter/Open WebUI provider implementations
   exist but are NOT registered by InitializeProviders. Relabeled
   the Ollama / vLLM / LM Studio / internal-gateway recipes as
   "planned follow-up" (not operative today). Base-URL env vars
   noted as "parsed, not wired". Retained the mermaid diagrams
   from PR #8206 and framed them as the intended direction.

6. SECURITY-MODEL.md §4 env var cheat sheet (finding #6): Split
   the KC_DEV_MODE row into two entries — DEV_MODE (general
   kc-agent dev/logging toggle, read in cmd/kc-agent/main.go)
   and KC_DEV_MODE (backend-driven restart/dev path in
   pkg/agent/server_http.go) — so operators don't set the wrong
   variable.

Docs-only change. web build + lint pass.

Signed-off-by: Andy Anderson <[email protected]>
clubanderson added a commit that referenced this pull request Apr 15, 2026
…DME (#8207) (#8223)

Fixes #8207

Addresses all 6 Copilot review comments from PR #8203 (security docs
bundle). Verified each claim against source before applying:

- Verified InitializeProviders (pkg/agent/registry.go:283) registers
  only CLI-based tool agents and explicitly excludes API-key HTTP
  providers (claude/openai/gemini/groq/openrouter/open-webui).
- Verified update_checker.go lives in pkg/agent/ (local kc-agent),
  not in the Go backend server pod.
- Verified DEV_MODE is read in cmd/kc-agent/main.go:18 while
  KC_DEV_MODE=1 is only used in pkg/agent/server_http.go:2202 for
  the backend-driven agent restart path.

Changes:

1. README.md (finding #1): The "security model" paragraph no longer
   claims users can point an OpenAI-compatible local LLM at kc-agent
   via GROQ_BASE_URL / OPENROUTER_BASE_URL / OPEN_WEBUI_URL today.
   Reframed as a planned follow-up; currently supported path is the
   CLI-based agents.

2. SECURITY-MODEL.md §1 data flow (finding #2): Replaced the
   single-sentence "Key consequence" block with the two-path
   distinction (CLI tool agents vs direct HTTP providers). Notes
   that CLI agents can exfiltrate cluster data indirectly via
   kubectl/helm tool output; direct HTTP providers are not
   registered at runtime today.

3. SECURITY-MODEL.md §2 Posture B (finding #3): Rewrote the
   restricted-egress section to match runtime reality. AI gating
   is by registered CLI agent availability, not by API-key env
   vars. Setting *_API_KEY does not by itself enable AI. Settings
   → API Keys modal documented as non-operative.

4. SECURITY-MODEL.md §1 "leaves the cluster" (finding #5):
   Corrected the update_checker.go reference. The local kc-agent
   (not the backend pod) performs any GitHub update polling.
   In-cluster backend deployments do not poll GitHub from the
   server pod.

5. SECURITY-MODEL.md §3 Local/Self-hosted LLMs (finding #4):
   Added a prominent "current registration status" subsection
   stating that Groq/OpenRouter/Open WebUI provider implementations
   exist but are NOT registered by InitializeProviders. Relabeled
   the Ollama / vLLM / LM Studio / internal-gateway recipes as
   "planned follow-up" (not operative today). Base-URL env vars
   noted as "parsed, not wired". Retained the mermaid diagrams
   from PR #8206 and framed them as the intended direction.

6. SECURITY-MODEL.md §4 env var cheat sheet (finding #6): Split
   the KC_DEV_MODE row into two entries — DEV_MODE (general
   kc-agent dev/logging toggle, read in cmd/kc-agent/main.go)
   and KC_DEV_MODE (backend-driven restart/dev path in
   pkg/agent/server_http.go) — so operators don't set the wrong
   variable.
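
The DEV_MODE / KC_DEV_MODE split in finding #6 can be illustrated with a small sketch. Variable names come from the cheat sheet; the actual checks in cmd/kc-agent/main.go and pkg/agent/server_http.go may parse values differently, and `devToggles` is an invented helper:

```go
package main

import (
	"fmt"
	"os"
)

// devToggles reads the two distinct switches. DEV_MODE is the general
// kc-agent dev/logging toggle; KC_DEV_MODE gates only the backend-driven
// agent restart/dev path. Taking a getenv func keeps the sketch testable.
func devToggles(getenv func(string) string) (devMode, kcDevMode bool) {
	devMode = getenv("DEV_MODE") == "1"
	kcDevMode = getenv("KC_DEV_MODE") == "1"
	return devMode, kcDevMode
}

func main() {
	dev, kcDev := devToggles(os.Getenv)
	fmt.Printf("DEV_MODE=%v KC_DEV_MODE=%v\n", dev, kcDev)
}
```

In this model, an operator who wants verbose kc-agent logging sets DEV_MODE=1; setting KC_DEV_MODE=1 instead would only affect the restart path, which is the mix-up the split cheat-sheet rows are meant to prevent.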

Docs-only change. web build + lint pass.

Signed-off-by: Andy Anderson <[email protected]>
clubanderson added a commit that referenced this pull request Apr 16, 2026
…detail (#8210)

Makes the security picture visible in-context at the two moments
users care about — installing the Console itself and installing
a CNCF project via a guided mission. Both surfaces link to the
SECURITY-MODEL.md doc merged in #8203.

Setup install modal (SetupInstructionsDialog.tsx):
- New expandable "Security posture" section next to the Dev
  Guide / K8s Deploy / OAuth sections
- Four subsections covering kc-agent posture, AI key handling,
  what leaves your machine, and the air-gapped / local-LLM
  option (framed as a security posture, not a feature gap —
  deliberately scoped to NOT conflate with the separate broader
  local-LLM support work)
- "Read the full security model" link to docs/security/SECURITY-MODEL.md

Mission Detail view (MissionDetailView.tsx):
- New 5th tab: install / uninstall / upgrade / troubleshooting /
  **security**
- Renders mission.security steps via the existing StepCard
  component
- When mission.security is populated, adds a footer link to the
  overall SECURITY-MODEL.md so users always have a path to the
  full doc
- When mission.security is empty, shows a helpful fallback with
  the global doc link and a "Suggest security notes" button
  (reuses the existing onImprove flow)

Schema (lib/missions/types.ts):
- Adds optional `security?: MissionStep[]` to the MissionExport
  interface. Backwards-compatible.

Locale (locales/en/common.json):
- Adds `missions.detail.tabs.security` and
  `missions.detail.tabs.securityEmpty` strings

Paired with kubestellar/console-kb#2027 which introduces the
schema-side `mission.security` array and populates the first
mission (install-kubevirt).

Signed-off-by: Andrew Anderson <[email protected]>

Labels

- `ai-generated`: Pull request generated by AI
- `dco-signoff: yes`: Indicates the PR's author has signed the DCO.
- `size/L`: Denotes a PR that changes 100-499 lines, ignoring generated files.
