📝 docs: add security model, air-gapped deployment, and local LLM guides (#8194 #8195 #8196)#8203
Conversation
Add docs/security/SECURITY-MODEL.md covering:

- Security model and data flow (browser ↔ Go backend ↔ kc-agent ↔ clusters)
- The pod-SA identity rule (bootstrap, GPU reserve, self-upgrade only)
- What kc-agent does and does NOT send to AI providers
- Air-gapped / restricted-egress deployment postures
- Local / self-hosted LLM configuration via GROQ_BASE_URL, OPENROUTER_BASE_URL, and OPEN_WEBUI_URL (OpenAI-compatible shim)
- Environment variable and port reference

All runtime claims are grounded in source with file:line references (pkg/agent/config.go, pkg/agent/provider_groq.go, pkg/agent/provider_openrouter.go, pkg/agent/provider_openwebui.go, pkg/agent/server.go, cmd/kc-agent/main.go, pkg/api/server.go). README.md gains a link to the new document from the AI configuration section.

Fixes #8194, Fixes #8195, Fixes #8196

Signed-off-by: Andy Anderson <[email protected]>
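The base-URL overrides named in the description can be sketched as a shell environment for the Groq provider slot. This is illustrative only: the env-var names come from the PR, but the endpoint URL, key, and model values are placeholder assumptions (e.g. Ollama's default OpenAI-compatible endpoint), and per the review discussion in this PR these HTTP providers may not be registered at runtime today.

```shell
# Hypothetical: point the Groq provider slot at a local OpenAI-compatible
# server. The URL assumes Ollama's default /v1 endpoint on 127.0.0.1:11434;
# the key and model values are placeholders, not values from the source.
export GROQ_BASE_URL="http://127.0.0.1:11434/v1"
export GROQ_API_KEY="local-placeholder"   # many local servers ignore the key
export GROQ_MODEL="llama3.1:8b"
```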
Pull request overview
Adds consolidated security and deployment documentation for KubeStellar Console, covering the security model/data flow, air-gapped deployment postures, and local/self-hosted LLM configuration, with a README pointer for discoverability.
Changes:
- Add `docs/security/SECURITY-MODEL.md` describing architecture/data flow, auth/CORS/CSP surface, and deployment postures.
- Document local/self-hosted LLM configuration options and environment variables with worked examples.
- Link the new document from the README’s AI configuration section.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| docs/security/SECURITY-MODEL.md | New consolidated security/air-gap/local-LLM guide with diagrams, tables, and source references. |
| README.md | Adds a link from AI configuration to the new security model document. |
| **If no key is configured**, AI-powered features fall back to deterministic / rule-based behavior. The card suggestions, missions, and dashboards remain fully usable. | ||
| **Security model, air-gapped deployments, and local / self-hosted LLMs** are covered in [`docs/security/SECURITY-MODEL.md`](docs/security/SECURITY-MODEL.md). That document explains the data flow between browser, Go backend, kc-agent, and AI providers; how to run the console with no external AI access; and how to point an OpenAI-compatible local LLM (Ollama, vLLM, LM Studio, internal gateway) at kc-agent via `GROQ_BASE_URL` / `OPENROUTER_BASE_URL` / `OPEN_WEBUI_URL`. |
This README addition states that users can point an OpenAI-compatible local LLM at kc-agent via GROQ_BASE_URL / OPENROUTER_BASE_URL / OPEN_WEBUI_URL. In the current code, those API-key HTTP providers are not registered by InitializeProviders (API-only agents are explicitly excluded), so these env vars won’t enable a selectable/usable provider at runtime. Please adjust the sentence to avoid implying this works today, or document the current supported path (CLI-based agents) and any planned follow-up to wire in these base-URL-overridable providers.
| **Security model, air-gapped deployments, and local / self-hosted LLMs** are covered in [`docs/security/SECURITY-MODEL.md`](docs/security/SECURITY-MODEL.md). That document explains the data flow between browser, Go backend, kc-agent, and AI providers; how to run the console with no external AI access; and how to point an OpenAI-compatible local LLM (Ollama, vLLM, LM Studio, internal gateway) at kc-agent via `GROQ_BASE_URL` / `OPENROUTER_BASE_URL` / `OPEN_WEBUI_URL`. | |
| **Security model, air-gapped deployments, and local / self-hosted LLMs** are covered in [`docs/security/SECURITY-MODEL.md`](docs/security/SECURITY-MODEL.md). That document explains the data flow between browser, Go backend, kc-agent, and AI providers; how to run the console with no external AI access; and the currently supported self-hosted path using kc-agent's CLI-based agents. It also notes that OpenAI-compatible local LLM endpoints (for example Ollama, vLLM, LM Studio, or an internal gateway) are a planned follow-up for the base-URL-overridable HTTP providers referenced by `GROQ_BASE_URL` / `OPENROUTER_BASE_URL` / `OPEN_WEBUI_URL`, which are not yet wired into the selectable provider list in the current build. |
| Key consequence: **the kubeconfig, raw secrets, and cluster credentials never cross the process boundary from kc-agent.** The only thing kc-agent sends outward is the HTTP chat payload to the configured AI provider, which contains the conversation the user has been having (system prompt + message history + current prompt — see `pkg/agent/provider_openai.go:207-238` for the exact OpenAI shape). | ||
| ### What kc-agent does **not** send to AI providers | ||
| - It does not upload `~/.kube/config`. | ||
| - It does not upload cluster bearer tokens, client certificates, or any other credential material. | ||
| - It does not auto-attach arbitrary cluster objects. The conversation context is whatever the user chose to type or paste, plus the system prompt defined in the provider implementation (`DefaultSystemPrompt`). | ||
| If you need to audit what leaves the machine, the provider files under `pkg/agent/provider_*.go` each contain exactly one outbound HTTP call site per request type (`Chat` and `StreamChat`). Those are the only places any AI traffic originates. |
The doc states that kc-agent only sends the user’s typed/pasted conversation to AI providers and that the provider_*.go HTTP call sites are the only place AI traffic originates. In the current code, InitializeProviders registers CLI-based tool agents (e.g., claude-code/codex/gemini-cli) and explicitly does NOT register the API-key HTTP providers (OpenAI/Claude/Gemini/Groq/OpenRouter/Open WebUI). Those CLI agents can execute kubectl/helm and may transmit command output to the upstream LLM via the external CLI, so cluster data can leave the machine depending on the agent in use. Please revise this section to reflect the actual agent types and data-flow differences (CLI tool agents vs direct HTTP providers).
| Key consequence: **the kubeconfig, raw secrets, and cluster credentials never cross the process boundary from kc-agent.** The only thing kc-agent sends outward is the HTTP chat payload to the configured AI provider, which contains the conversation the user has been having (system prompt + message history + current prompt — see `pkg/agent/provider_openai.go:207-238` for the exact OpenAI shape). | |
| ### What kc-agent does **not** send to AI providers | |
| - It does not upload `~/.kube/config`. | |
| - It does not upload cluster bearer tokens, client certificates, or any other credential material. | |
| - It does not auto-attach arbitrary cluster objects. The conversation context is whatever the user chose to type or paste, plus the system prompt defined in the provider implementation (`DefaultSystemPrompt`). | |
| If you need to audit what leaves the machine, the provider files under `pkg/agent/provider_*.go` each contain exactly one outbound HTTP call site per request type (`Chat` and `StreamChat`). Those are the only places any AI traffic originates. | |
| Key consequence: **the kubeconfig, raw secrets, and cluster credentials never cross the process boundary from kc-agent as direct credential uploads.** However, the data that can leave the machine depends on which kind of AI agent is configured: | |
| - **CLI tool agents** (for example `claude-code`, `codex`, `gemini-cli`) run an external CLI locally. These agents can execute tools such as `kubectl` and `helm`, and the external CLI may send tool output or other cluster-derived context to its upstream LLM depending on the agent's behavior and the prompt. | |
| - **Direct HTTP providers** (for example OpenAI-compatible/API-key providers) construct an HTTP chat payload in `pkg/agent/provider_*.go`. For those providers, the outbound data is the request body built by the provider implementation (for example, system prompt + message history + current prompt; see `pkg/agent/provider_openai.go:207-238` for the OpenAI shape). | |
| ### What kc-agent does **not** send to AI providers | |
| - It does not upload `~/.kube/config`. | |
| - It does not upload cluster bearer tokens, client certificates, or any other credential material. | |
| - It does not auto-attach arbitrary cluster objects on its own. For direct HTTP providers, the conversation context is whatever the user chose to type or paste, plus the system prompt defined in the provider implementation (`DefaultSystemPrompt`). | |
| - For CLI tool agents, be aware that cluster data can still leave the machine indirectly if the agent runs commands (for example via `kubectl` or `helm`) and the external CLI includes that output in requests to the upstream model. | |
| If you need to audit what leaves the machine, distinguish the two paths: for **direct HTTP providers**, inspect the outbound request construction and HTTP call sites in `pkg/agent/provider_*.go`; for **CLI tool agents**, audit the external CLI invocation plus whatever tool output and prompts that CLI may forward upstream. The `provider_*.go` call sites are therefore the AI egress points only for the direct HTTP provider path, not for every configured agent type. |
| All cluster-management features continue to work. **AI is optional.** If no key is configured for any provider, `IsKeyAvailable()` returns `false` (`pkg/agent/config.go:235-244`), and AI-driven features fall back to deterministic / rule-based behavior. The README covers this under *AI configuration*: "If no key is configured, AI-powered features fall back to deterministic / rule-based behavior." | ||
| To run without AI: | ||
| 1. Do **not** set `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, `GOOGLE_API_KEY`, `GROQ_API_KEY`, `OPENROUTER_API_KEY`, or `OPEN_WEBUI_API_KEY` (`pkg/agent/config.go:277-314` lists every recognized variable). | ||
| 2. Leave the Settings → API Keys modal empty (no entries in `~/.kc/config.yaml`). | ||
| 3. Optionally block outbound DNS/HTTP to `api.anthropic.com`, `api.openai.com`, `generativelanguage.googleapis.com`, `api.groq.com`, and `openrouter.ai` at your egress. |
The “restricted egress (no AI provider)” section ties AI enablement to API keys and ConfigManager.IsKeyAvailable(). However, the current agent registry only registers CLI-based agents and does not register API-key providers; additionally, the /settings/keys status endpoint currently returns an empty provider list. As written, the doc implies that setting OPENAI_API_KEY/ANTHROPIC_API_KEY/etc enables AI features, but that is not true in the current runtime behavior. Please update this posture description to describe the real gating condition (availability of registered CLI agents) or document that API-key providers are currently disabled/not wired in.
| All cluster-management features continue to work. **AI is optional.** If no key is configured for any provider, `IsKeyAvailable()` returns `false` (`pkg/agent/config.go:235-244`), and AI-driven features fall back to deterministic / rule-based behavior. The README covers this under *AI configuration*: "If no key is configured, AI-powered features fall back to deterministic / rule-based behavior." | |
| To run without AI: | |
| 1. Do **not** set `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, `GOOGLE_API_KEY`, `GROQ_API_KEY`, `OPENROUTER_API_KEY`, or `OPEN_WEBUI_API_KEY` (`pkg/agent/config.go:277-314` lists every recognized variable). | |
| 2. Leave the Settings → API Keys modal empty (no entries in `~/.kc/config.yaml`). | |
| 3. Optionally block outbound DNS/HTTP to `api.anthropic.com`, `api.openai.com`, `generativelanguage.googleapis.com`, `api.groq.com`, and `openrouter.ai` at your egress. | |
| All cluster-management features continue to work. **AI is optional.** In the current runtime, AI features are gated by the availability of a registered AI agent/provider, and the active registry is CLI-based. API-key-based providers are recognized in configuration, but they are not currently wired into the runtime provider registry/status path, so setting `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, `GOOGLE_API_KEY`, `GROQ_API_KEY`, `OPENROUTER_API_KEY`, or `OPEN_WEBUI_API_KEY` does **not** by itself enable AI features. When no supported CLI-backed AI agent/provider is available, AI-driven features fall back to deterministic / rule-based behavior. | |
| To run without AI: | |
| 1. Do **not** configure or run any supported CLI-backed AI agent/provider for kc-agent. | |
| 2. Treat the Settings → API Keys modal and related `*_API_KEY` variables as non-operative for current runtime enablement; leaving them empty is fine, but their presence alone does not activate AI. | |
| 3. Optionally block outbound DNS/HTTP to known hosted AI endpoints such as `api.anthropic.com`, `api.openai.com`, `generativelanguage.googleapis.com`, `api.groq.com`, and `openrouter.ai` at your egress as defense in depth. |
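The egress-blocking step in the suggested posture can be sketched at the host level. This is a minimal illustration, not a real deployment control: in production you would deny these hosts at the egress firewall or with a Kubernetes NetworkPolicy, and the temp-file path here is an assumption.

```shell
# Illustrative defense-in-depth: build a null-route list for the hosted AI
# endpoints named in the doc. Appending these lines to /etc/hosts (or a
# firewall deny-list) would be the real step; here we only write a temp file.
for host in api.anthropic.com api.openai.com \
            generativelanguage.googleapis.com api.groq.com openrouter.ai; do
  echo "0.0.0.0 $host"
done > /tmp/blocked-ai-hosts
cat /tmp/blocked-ai-hosts
```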
| ## 3. Local / Self-Hosted LLMs | ||
| The AI layer is a set of pluggable providers under `pkg/agent/provider_*.go`. Each provider maps to one API key env var (listed in `pkg/agent/config.go:277-314`) and, in some cases, a base-URL override. | ||
| ### Supported providers and env vars | ||
| | Provider | `provider` name | API key env var | Model env var | Base URL overridable? | Base URL env var | Source | | ||
| |---|---|---|---|---|---|---| | ||
| | Anthropic Claude | `claude` / `anthropic` | `ANTHROPIC_API_KEY` | `CLAUDE_MODEL` | no (fixed to Anthropic) | — | `pkg/agent/provider_claude.go` | | ||
| | OpenAI (ChatGPT) | `openai` | `OPENAI_API_KEY` | `OPENAI_MODEL` | no (fixed to `api.openai.com`) | — | `pkg/agent/provider_openai.go:15` | | ||
| | Google Gemini | `gemini` / `google` | `GOOGLE_API_KEY` | `GEMINI_MODEL` | no | — | `pkg/agent/provider_gemini.go:15` | | ||
| | Groq (OpenAI-compatible) | `groq` | `GROQ_API_KEY` | `GROQ_MODEL` | **yes** | `GROQ_BASE_URL` | `pkg/agent/provider_groq.go:22-51` | | ||
| | OpenRouter (OpenAI-compatible) | `openrouter` | `OPENROUTER_API_KEY` | `OPENROUTER_MODEL` | **yes** | `OPENROUTER_BASE_URL` | `pkg/agent/provider_openrouter.go:23-58` | | ||
| | Open WebUI (OpenAI-compatible) | `open-webui` | `OPEN_WEBUI_API_KEY` | `OPEN_WEBUI_MODEL` | **yes** | `OPEN_WEBUI_URL` | `pkg/agent/provider_openwebui.go:16,39` | | ||
| Note the asymmetry: **the upstream OpenAI provider does not currently honor an `OPENAI_BASE_URL` override.** The hostname is a package-level variable in `pkg/agent/provider_openai.go:15`, but it is not re-read from the environment. If you want to point an OpenAI-compatible local server at the console today, use one of the three providers whose base URLs *are* overridable: **Groq, OpenRouter, or Open WebUI**. All three speak the OpenAI chat-completions wire format. | ||
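As a sketch of the override mechanism the table describes, here is what routing the OpenRouter slot at an internal OpenAI-compatible gateway would look like. The hostname, token, and model values are invented, and per the review comments in this PR these HTTP providers are not registered at runtime today, so treat this as the intended shape rather than current behavior.

```shell
# Hypothetical: redirect the OpenRouter provider slot to an internal
# OpenAI-compatible gateway. Hostname, token, and model are assumptions;
# only the env-var names come from the table above.
export OPENROUTER_BASE_URL="https://llm-gateway.internal.example/v1"
export OPENROUTER_API_KEY="internal-gateway-token"
export OPENROUTER_MODEL="meta-llama/llama-3.1-70b-instruct"
```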
This section says that the in-console AI layer supports Groq/OpenRouter/Open WebUI (with base-URL overrides) and presents them as supported providers. In the current source, those HTTP API providers are not registered by InitializeProviders (they’re explicitly excluded as “API-only agents”), so users cannot actually select/use them via the agent’s provider registry. Please either (a) adjust the doc to clarify these providers exist in code but are currently not wired/registered, or (b) update the implementation in a follow-up PR to register them and expose them safely.
| If you deploy the console inside a cluster with `deploy.sh`, outbound traffic from the **backend pod** is limited to: | ||
| - GitHub API calls for OAuth exchange and update checks (`update_checker.go`). These can be disabled. | ||
| - Nothing else in the core install. No telemetry, no AI calls. AI calls originate from the user's **local** kc-agent, not from the pod. |
This “What actually leaves the cluster” bullet references update checks via update_checker.go in the context of the backend pod. In the current tree, update_checker.go is part of the local kc-agent package (auto-update) rather than the Go backend, so this reference is misleading for an in-cluster backend deployment. Please correct the file/path reference and be explicit about which process (backend vs local agent) performs any GitHub/update polling.
| | `CLAUDE_MODEL` / `OPENAI_MODEL` / `GEMINI_MODEL` / `GROQ_MODEL` / `OPENROUTER_MODEL` / `OPEN_WEBUI_MODEL` | kc-agent | Model override per provider | | ||
| | `KC_AGENT_TOKEN` | kc-agent | Optional shared secret for browser→agent auth | | ||
| | `KC_ALLOWED_ORIGINS` | kc-agent | Extra allowed origins (comma-separated) | | ||
| | `KC_DEV_MODE` | kc-agent | Development mode toggle (`1` to enable) | |
The env var cheat sheet lists KC_DEV_MODE as a kc-agent “development mode toggle”. In the current code, the kc-agent binary’s logging/dev behavior is controlled by DEV_MODE (cmd/kc-agent/main.go), while KC_DEV_MODE=1 is used for the agent’s backend process restart path (pkg/agent/server_http.go) rather than a general dev-mode toggle. Please clarify the distinction or rename the entry so operators don’t set the wrong variable when trying to run in dev mode.
| | `KC_DEV_MODE` | kc-agent | Development mode toggle (`1` to enable) | | |
| | `DEV_MODE` | kc-agent | General kc-agent development/logging mode toggle | | |
| | `KC_DEV_MODE` | kc-agent | Used for the backend-driven agent restart/dev path; not the general kc-agent dev-mode toggle | |
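The split suggested above can be made concrete as a shell sketch. The semantics are as described in the review comment (DEV_MODE read in cmd/kc-agent/main.go, KC_DEV_MODE in pkg/agent/server_http.go) and are not independently verified here.

```shell
# General kc-agent development/logging toggle (read in cmd/kc-agent/main.go):
export DEV_MODE=1

# Backend-driven agent restart/dev path only (pkg/agent/server_http.go).
# Per the review, this is NOT a general dev-mode switch:
export KC_DEV_MODE=1
```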
Post-merge build verification passed ✅ Both Go and frontend builds compiled successfully against merge commit |
Enhances the merged security doc (#8203) with four mermaid diagrams that make the architecture and posture choices visible at a glance:

1. **Component diagram** — replaces the ASCII-art architecture block with a richer mermaid flowchart that groups components by trust boundary (user machine / console deployment / managed clusters / AI providers), uses solid vs dashed arrows to distinguish mandatory vs optional flows, and color-codes the boundaries.
2. **Auth + transport defense-in-depth** — shows the four layers that gate traffic into kc-agent (bind check on 127.0.0.1, CORS allow-list, DNS-rebinding guard, optional KC_AGENT_TOKEN) so readers can see why loopback alone isn't the whole story and when they should set the token.
3. **Posture comparison** — stacks the three network postures (fully online / restricted egress / fully air-gapped) side by side with red dotted arrows marking the flows explicitly blocked at egress in each posture.
4. **Local-LLM provider-slot routing** — shows how setting GROQ_BASE_URL (or OPENROUTER_BASE_URL / OPEN_WEBUI_URL) redirects the provider slot to Ollama, vLLM, LM Studio, or a corporate gateway without touching the console code.

Also adds a short "Local LLM as a security posture" subsection framing the override as the correct choice for air-gapped and high-sensitivity environments — not a feature gap. This is deliberately scoped to the security doc; broader local-LLM support work lives elsewhere.

No runtime behavior changes. Documentation only.

Signed-off-by: Andrew Anderson <[email protected]>
…detail Makes the security picture visible in-context at the two moments users care about — installing the Console itself and installing a CNCF project via a guided mission. Both surfaces link to the SECURITY-MODEL.md doc merged in #8203.

Setup install modal (SetupInstructionsDialog.tsx):

- New expandable "Security posture" section next to the Dev Guide / K8s Deploy / OAuth sections
- Four subsections covering kc-agent posture, AI key handling, what leaves your machine, and the air-gapped / local-LLM option (framed as a security posture, not a feature gap — deliberately scoped to NOT conflate with the separate broader local-LLM support work)
- "Read the full security model" link to docs/security/SECURITY-MODEL.md

Mission Detail view (MissionDetailView.tsx):

- New 5th tab: install / uninstall / upgrade / troubleshooting / **security**
- Renders mission.security steps via the existing StepCard component
- When mission.security is populated, adds a footer link to the overall SECURITY-MODEL.md so users always have a path to the full doc
- When mission.security is empty, shows a helpful fallback with the global doc link and a "Suggest security notes" button (reuses the existing onImprove flow)

Schema (lib/missions/types.ts):

- Adds optional `security?: MissionStep[]` to the MissionExport interface. Backwards-compatible.

Locale (locales/en/common.json):

- Adds `missions.detail.tabs.security` and `missions.detail.tabs.securityEmpty` strings

Paired with kubestellar/console-kb#2027 which introduces the schema-side `mission.security` array and populates the first mission (install-kubevirt).

Signed-off-by: Andrew Anderson <[email protected]>
…DME (#8207) Fixes #8207

Addresses all 6 Copilot review comments from PR #8203 (security docs bundle). Verified each claim against source before applying:

- Verified InitializeProviders (pkg/agent/registry.go:283) registers only CLI-based tool agents and explicitly excludes API-key HTTP providers (claude/openai/gemini/groq/openrouter/open-webui).
- Verified update_checker.go lives in pkg/agent/ (local kc-agent), not in the Go backend server pod.
- Verified DEV_MODE is read in cmd/kc-agent/main.go:18 while KC_DEV_MODE=1 is only used in pkg/agent/server_http.go:2202 for the backend-driven agent restart path.

Changes:

1. README.md (finding #1): The "security model" paragraph no longer claims users can point an OpenAI-compatible local LLM at kc-agent via GROQ_BASE_URL / OPENROUTER_BASE_URL / OPEN_WEBUI_URL today. Reframed as a planned follow-up; currently supported path is the CLI-based agents.
2. SECURITY-MODEL.md §1 data flow (finding #2): Replaced the single-sentence "Key consequence" block with the two-path distinction (CLI tool agents vs direct HTTP providers). Notes that CLI agents can exfiltrate cluster data indirectly via kubectl/helm tool output; direct HTTP providers are not registered at runtime today.
3. SECURITY-MODEL.md §2 Posture B (finding #3): Rewrote the restricted-egress section to match runtime reality. AI gating is by registered CLI agent availability, not by API-key env vars. Setting *_API_KEY does not by itself enable AI. Settings → API Keys modal documented as non-operative.
4. SECURITY-MODEL.md §1 "leaves the cluster" (finding #5): Corrected the update_checker.go reference. The local kc-agent (not the backend pod) performs any GitHub update polling. In-cluster backend deployments do not poll GitHub from the server pod.
5. SECURITY-MODEL.md §3 Local/Self-hosted LLMs (finding #4): Added a prominent "current registration status" subsection stating that Groq/OpenRouter/Open WebUI provider implementations exist but are NOT registered by InitializeProviders. Relabeled the Ollama / vLLM / LM Studio / internal-gateway recipes as "planned follow-up" (not operative today). Base-URL env vars noted as "parsed, not wired". Retained the mermaid diagrams from PR #8206 and framed them as the intended direction.
6. SECURITY-MODEL.md §4 env var cheat sheet (finding #6): Split the KC_DEV_MODE row into two entries — DEV_MODE (general kc-agent dev/logging toggle, read in cmd/kc-agent/main.go) and KC_DEV_MODE (backend-driven restart/dev path in pkg/agent/server_http.go) — so operators don't set the wrong variable.

Docs-only change. web build + lint pass.

Signed-off-by: Andy Anderson <[email protected]>
…DME (#8207) (#8223) Fixes #8207 Signed-off-by: Andy Anderson <[email protected]>
…detail (#8210) Signed-off-by: Andrew Anderson <[email protected]>
Summary
Adds `docs/security/SECURITY-MODEL.md` — a single document covering the three related questions filed in #8194, #8195, and #8196: the security model and data flow, air-gapped / restricted-egress deployment, and local / self-hosted LLM configuration (`GROQ_BASE_URL`, `OPENROUTER_BASE_URL`, `OPEN_WEBUI_URL`), with worked examples for Ollama (via the Groq slot), OpenRouter / internal LLM gateway, and Open WebUI.

Also adds a one-line link from `README.md` § AI configuration → the new document.

Grounding

Every runtime claim is grounded in source with file:line references. Verified while writing:
- `pkg/agent/server.go:578` — kc-agent binds `127.0.0.1:8585`
- `cmd/kc-agent/main.go:25` — default port flag
- `pkg/agent/config.go:277-314` — the canonical list of env-var names per provider
- `pkg/agent/provider_groq.go:22-51` — `GROQ_BASE_URL` override
- `pkg/agent/provider_openrouter.go:23-58` — `OPENROUTER_BASE_URL` override
- `pkg/agent/provider_openwebui.go:16,39` — `OPEN_WEBUI_URL`
- `pkg/agent/provider_openai.go:15` — note the asymmetry that OpenAI's base URL is not currently env-overridable
- `pkg/api/server.go:429-432` — CSP allow-list for the local kc-agent
- `pkg/agent/server.go:214` — `KC_AGENT_TOKEN` optional shared secret

Per `feedback_verify_readme_claims.md`, this PR is strictly descriptive — no runtime behavior changes.
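For the `KC_AGENT_TOKEN` shared secret referenced in the grounding list, a minimal sketch of generating one before starting kc-agent. Only the variable name comes from the source reference; the generation method is illustrative.

```shell
# Generate a 32-hex-character shared secret for browser→agent auth
# (KC_AGENT_TOKEN, per pkg/agent/server.go:214). Generation method is
# an assumption, not the project's documented procedure.
export KC_AGENT_TOKEN="$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')"
echo "${#KC_AGENT_TOKEN}"   # → 32
```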
Docs-only. No code, no config, no version bump. One new file plus two lines in README.
Test plan
cd web && npm run build— passes (build succeeded, all post-build safety checks passed)cd web && npm run lint— no new errors introduced (pre-existing baseline unchanged; zero matches for the new file path in the lint output)Fixes #8194, Fixes #8195, Fixes #8196