Azure Blob Storage data access extension #52272

Open

antonslutskyms wants to merge 17 commits into openclaw:main from antonslutskyms:openclaw_azure_blob_storage_extension

Conversation

@antonslutskyms

Summary

Describe the problem and fix in 2–5 bullets:

  • Problem: openclaw cannot access data stored in Azure Blob Storage. Access is needed for a common RAG use case where business data lives in Azure Blob Storage.
  • Why it matters: RAG is a valuable use case because it lets business users leverage AI to interact with business data. Many organizations store business data (such as text, PDF, etc.) in Microsoft Azure Blob Storage.
  • What changed: Added an azure-blob extension to src/extensions together with provider mappings and customer onboarding resources.
  • What did NOT change (scope boundary): No changes to other plugins or existing functionality.

Change Type (select all)

  • Bug fix
  • Feature
  • Refactor required for the fix
  • Docs
  • Security hardening
  • Chore/infra

Scope (select all touched areas)

  • Gateway / orchestration
  • Skills / tool execution
  • Auth / tokens
  • Memory / storage
  • Integrations
  • API / contracts
  • UI / DX
  • CI/CD / infra

Linked Issue/PR

  • Closes #
  • Related #

User-visible / Behavior Changes

List user-visible changes (including defaults/config).
If none, write None.

Security Impact (required)

  • New permissions/capabilities? (Yes) -- Access to Azure Blob Storage. Risk: data access; mitigation: Azure RBAC.
  • Secrets/tokens handling changed? (No)
  • New/changed network calls? (Yes) -- openclaw needs network access to reach the Azure Blob Storage endpoint. Risk: open access to a network resource; mitigation: openshell or other network restrictions to specific ports/IP ranges.
  • Command/tool execution surface changed? (No)
  • Data access scope changed? (Yes) -- openclaw can access a network storage resource. Risk: access to sensitive business data; mitigation: RBAC controls.
  • If any Yes, explain risk + mitigation:

Repro + Verification

Environment

  • OS:
  • Runtime/container:
  • Model/provider:
  • Integration/channel (if any):
  • Relevant config (redacted):

Steps

  1. Install openclaw
  2. Configure the plugin as described in README.md
  3. Test the plugin in the chat interface, e.g. "look in my azure blob storage and tell me what is the price for each of the hotels"

Expected

Here are the hotel prices from your Azure Blob Storage:

  • Budget Inn — $75 per night
  • Midrange Suites — $120 per night
  • Luxury Penthouse — $250 per night
  • Boutique Hotel — $180 per night

Actual

Here are the hotel prices from your Azure Blob Storage:

  • Budget Inn — $75 per night
  • Midrange Suites — $120 per night
  • Luxury Penthouse — $250 per night
  • Boutique Hotel — $180 per night

Evidence

Attach at least one:

  • Failing test/log before + passing after
  • Trace/log snippets
  • Screenshot/recording
  • Perf numbers (if relevant)

Human Verification (required)

What you personally verified (not just CI), and how:

  • Verified scenarios: With the plugin configured and a sample hotel_prices.json stored in Azure Blob Storage, asked in the chat interface: "look in my azure blob storage and tell me what is the price for each of the hotels"

Verified result was:
Here are the hotel prices from your Azure Blob Storage:

  • Budget Inn — $75 per night
  • Midrange Suites — $120 per night
  • Luxury Penthouse — $250 per night
  • Boutique Hotel — $180 per night

  • Edge cases checked: No data matches the query, such as "find the airline ticket prices in my blob storage". Result is:

    "I checked your blob storage container (container1). Right now it only contains one file: hotel_prices.json. There aren’t any files related to airline ticket prices, so I can’t find that data in the storage at the moment. If you upload a file with the airline prices (for example airline_prices.json), I can read it and list them for you."

  • What you did not verify:

Review Conversations

  • I replied to or resolved every bot review conversation I addressed in this PR.
  • I left unresolved only the conversations that still need reviewer or maintainer judgment.

If a bot review conversation is addressed by this PR, resolve that conversation yourself. Do not leave bot review conversation cleanup for maintainers.

Compatibility / Migration

  • Backward compatible? (Yes)
  • Config/env changes? (Yes)
  • Migration needed? (No)
  • If yes, exact upgrade steps:
    Please see README.md in extensions/azure-blob
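For illustration, a minimal enablement sketch assembled from the config keys mentioned elsewhere in this PR (`plugins.entries.azure-blob.config`, `connectionString`, `defaultContainer`); the `enabled` key is an assumption, so consult the extension README for the authoritative shape:

```json
{
  "plugins": {
    "entries": {
      "azure-blob": {
        "enabled": true,
        "config": {
          "connectionString": "<redacted>",
          "defaultContainer": "container1"
        }
      }
    }
  }
}
```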

Failure Recovery (if this breaks)

  • How to disable/revert this change quickly: remove configuration from openclaw.json
  • Files/config to restore: openclaw.json
  • Known bad symptoms reviewers should watch for: connection failures if Azure Blob Storage does not have public networking enabled or is not in a correctly configured VNET

Risks and Mitigations

List only real risks for this PR. Add/remove entries as needed. If none, write None.

  • Risk: None
    • Mitigation:

@greptile-apps
Contributor

greptile-apps bot commented Mar 22, 2026

Greptile Summary

This PR adds a new azure-blob extension that exposes three opt-in agent tools (azure_blob_list_containers, azure_blob_list_blobs, azure_blob_read) for reading data from Azure Blob Storage — a clean addition that fits the existing plugin/extension pattern and targets a common RAG use case.

The implementation is generally well-written: config resolution handles all supported auth modes (connection string, shared key, sovereign clouds) with proper env-var fallbacks and secret normalisation; error responses distinguish container-not-found / blob-not-found from generic failures; size caps and result caps are enforced consistently.

Two minor issues found:

  • Resource leak in readStreamToBufferMax (blob-client.ts): when an incoming chunk fills the read buffer exactly (buf.length === space), the loop breaks via the bottom if (total >= maxBytes) check without calling stream.destroy(). The stream is left open until garbage-collected, potentially holding a live HTTP connection. The else (over-capacity) branch correctly destroys the stream; the exact-fill path should match.
  • Missing empty-string guard on blobName (blob-read-tool.ts): the { required: true } flag on readStringParam guards against undefined/null, but a blank blobName ("" or whitespace) slips through to the Azure SDK where it produces a generic error instead of the friendly in-tool message used elsewhere. Adding a .trim() length check mirrors the containerName pattern in the same file.

Confidence Score: 4/5

  • PR is safe to merge; both findings are non-blocking P2 style issues with no data-loss or security impact.
  • The extension is a clean, self-contained addition with good error handling and no changes to existing code. The two issues found (stream not destroyed on exact-fill and missing empty blobName guard) are minor quality concerns that don't affect correctness in the common case. Score reflects one targeted cleanup that would round out the implementation.
  • extensions/azure-blob/src/blob-client.ts (stream cleanup) and extensions/azure-blob/src/blob-read-tool.ts (empty blobName guard)
Prompt To Fix All With AI
This is a comment left during a code review.
Path: extensions/azure-blob/src/blob-client.ts
Line: 163-178

Comment:
**Stream not destroyed on exact-capacity fill**

When a chunk's length equals exactly the remaining space (`buf.length === space`), the code takes the `if` branch, appends the full chunk, sets `total === maxBytes`, and then breaks via `if (total >= maxBytes)` — without calling `stream.destroy()`. This leaves the underlying Azure SDK stream open until it's garbage-collected, which can hold a live HTTP connection longer than necessary.

The `stream.destroy()` call in the `else` branch handles the over-capacity case correctly, but the exact-fill case is missed. Consider consolidating the check:

```suggestion
    const buf = Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk as Uint8Array);
    const space = maxBytes - total;
    if (buf.length <= space) {
      chunks.push(buf);
      total += buf.length;
    } else {
      chunks.push(buf.subarray(0, space));
      total = maxBytes;
      stream.destroy();
      break;
    }
    if (total >= maxBytes) {
      stream.destroy();
      break;
    }
```

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: extensions/azure-blob/src/blob-read-tool.ts
Line: 47-54

Comment:
**Empty `blobName` not validated after trim**

`readStringParam(..., { required: true })` guards against `undefined`/`null`, but a caller passing `blobName: ""` or `blobName: "   "` would silently reach the Azure SDK with a blank name. The Azure SDK will return a generic 400/404 error rather than the clearer in-tool message you produce elsewhere. A brief guard matches the `containerName` pattern used just above it:

```suggestion
      const blobNameTrimmed = blobName.trim();
      if (!blobNameTrimmed) {
        return jsonResult({ ok: false, error: "blobName must not be empty." });
      }
```

Then use `blobNameTrimmed` in place of `blobName.trim()` for the rest of the function. The TypeBox schema enforces the field is present but doesn't set `minLength: 1`, so the runtime guard is the only safety net here.

How can I resolve this? If you propose a fix, please make it concise.

Reviews (1): Last reviewed commit: "adding azure blob storage extension"

Comment on lines +163 to +178

```ts
  }
}

async function readStreamToBufferMax(stream: Readable, maxBytes: number): Promise<Buffer> {
  const chunks: Buffer[] = [];
  let total = 0;
  for await (const chunk of stream) {
    const buf = Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk as Uint8Array);
    const space = maxBytes - total;
    if (buf.length <= space) {
      chunks.push(buf);
      total += buf.length;
    } else {
      chunks.push(buf.subarray(0, space));
      total = maxBytes;
      stream.destroy();
```

P2 Stream not destroyed on exact-capacity fill

When a chunk's length equals exactly the remaining space (buf.length === space), the code takes the if branch, appends the full chunk, sets total === maxBytes, and then breaks via if (total >= maxBytes) — without calling stream.destroy(). This leaves the underlying Azure SDK stream open until it's garbage-collected, which can hold a live HTTP connection longer than necessary.

The stream.destroy() call in the else branch handles the over-capacity case correctly, but the exact-fill case is missed. Consider consolidating the check:

Suggested change

```ts
    const buf = Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk as Uint8Array);
    const space = maxBytes - total;
    if (buf.length <= space) {
      chunks.push(buf);
      total += buf.length;
    } else {
      chunks.push(buf.subarray(0, space));
      total = maxBytes;
      stream.destroy();
      break;
    }
    if (total >= maxBytes) {
      stream.destroy();
      break;
    }
```

Comment on lines +47 to +54

```ts
        return jsonResult({
          ok: false,
          error:
            "containerName is required unless defaultContainer is set in plugins.entries.azure-blob.config or AZURE_STORAGE_DEFAULT_CONTAINER.",
        });
      }

      const maxBytes = clampMaxBytes(readNumberParam(rawParams, "maxBytes", { integer: true }));
```

P2 Empty blobName not validated after trim

readStringParam(..., { required: true }) guards against undefined/null, but a caller passing blobName: "" or blobName: " " would silently reach the Azure SDK with a blank name. The Azure SDK will return a generic 400/404 error rather than the clearer in-tool message you produce elsewhere. A brief guard matches the containerName pattern used just above it:

Suggested change

```ts
      const blobNameTrimmed = blobName.trim();
      if (!blobNameTrimmed) {
        return jsonResult({ ok: false, error: "blobName must not be empty." });
      }
```

Then use blobNameTrimmed in place of blobName.trim() for the rest of the function. The TypeBox schema enforces the field is present but doesn't set minLength: 1, so the runtime guard is the only safety net here.



@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 175ebc4bf8

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines +7 to +9

```json
  "dependencies": {
    "@azure/storage-blob": "^12.31.0",
    "@sinclair/typebox": "0.34.48"
```

P1 Badge Update pnpm-lock.yaml for the new workspace dependencies

Adding @azure/storage-blob here without updating pnpm-lock.yaml leaves the workspace out of sync. On a clean checkout, pnpm install --frozen-lockfile will reject the repo because the new extensions/azure-blob importer and direct dependency entries are missing from the lockfile; the current lockfile has no azure-blob or @azure/storage-blob entry at all.


Comment on lines +121 to +123

```ts
    const iter = container.listBlobsFlat({
      ...(namePrefix ? { prefix: namePrefix } : {}),
    });
```

P2 Badge Honor maxResults at the Azure API page boundary

For large containers, this still fetches Azure's default first page before stopping locally at maxResults. The SDK's list APIs default to pages of up to 5000 items, so a request like maxResults: 1 can still transfer thousands of blob entries and make the tool slow or hit timeouts. Use .byPage({ maxPageSize: max }) here (and in listContainers) so the server-side work matches the advertised cap.
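The fix Codex names can be sketched as follows. This is not the PR's code: the fake iterator below merely stands in for the SDK's paged iterator so the capping logic is concrete; with the real SDK the same loop would run over `container.listBlobsFlat({ prefix }).byPage({ maxPageSize: max })`.

```typescript
type BlobItem = { name: string };
type BlobPage = { segment: { blobItems: BlobItem[] } };

// Stand-in for the SDK's PagedAsyncIterableIterator: yields pages of at most
// maxPageSize items, mimicking byPage({ maxPageSize }).
function fakeListBlobsFlat(names: string[]) {
  return {
    byPage: async function* ({ maxPageSize }: { maxPageSize: number }): AsyncGenerator<BlobPage> {
      for (let i = 0; i < names.length; i += maxPageSize) {
        yield { segment: { blobItems: names.slice(i, i + maxPageSize).map((name) => ({ name })) } };
      }
    },
  };
}

// Cap the server-side page size AND the local result count, so a request for
// maxResults items never transfers more than one page of that size.
async function listBlobsCapped(
  iter: ReturnType<typeof fakeListBlobsFlat>,
  maxResults: number,
): Promise<string[]> {
  const out: string[] = [];
  for await (const page of iter.byPage({ maxPageSize: maxResults })) {
    for (const item of page.segment.blobItems) {
      out.push(item.name);
      if (out.length >= maxResults) return out; // stop before pulling another page
    }
  }
  return out;
}
```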



@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 466f0c51af


Comment on lines +11 to +14

```json
  "openclaw": {
    "extensions": [
      "./index.ts"
    ]
```

P1 Badge Stage azure-blob runtime deps in bundled builds

@azure/storage-blob is introduced as a runtime dependency here, but this package never opts into openclaw.bundle.stageRuntimeDependencies. In packaged OpenClaw installs, bundled plugins are loaded from dist-runtime/extensions or dist/extensions (src/plugins/bundled-dir.ts:15-48), tsdown keeps dependencies external by default (tsdown.config.ts:194-202), and scripts/stage-bundled-plugin-runtime-deps.mjs:34-68 only installs plugin-local node_modules when that flag is true. Because the root package does not ship @azure/storage-blob (package.json:676-706), the Azure Blob plugin will fail to import with MODULE_NOT_FOUND anywhere outside a source checkout.
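If the flag the reviewer describes works as stated, the fix would be a small package.json addition in extensions/azure-blob. A hedged sketch; the key shape (`openclaw.bundle.stageRuntimeDependencies`) comes from the reviewer's description, not from documentation:

```json
  "openclaw": {
    "extensions": [
      "./index.ts"
    ],
    "bundle": {
      "stageRuntimeDependencies": true
    }
  }
```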


"Download and return the contents of a blob from Azure Blob Storage. Requires connection string or account name/key (see plugin config / env vars). Opt-in tool.",
parameters: AzureBlobReadToolSchema,
execute: async (_toolCallId: string, rawParams: Record<string, unknown>) => {
const blobName = readStringParam(rawParams, "blobName", { required: true });

P2 Badge Preserve significant whitespace in blob names

blobName is parsed with the default readStringParam behavior and then trimmed again before the read call. Azure blob names are allowed to contain arbitrary characters, so leading/trailing spaces are valid identifiers; in that case azure_blob_list_blobs can surface the exact name from Azure, but azure_blob_read will strip the whitespace and either read the wrong blob or return Blob not found. This makes any container that uses space-padded blob keys unreadable through the new tool.
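A sketch of a validation that satisfies both this concern and the empty-blobName guard raised earlier in the review (helper name hypothetical): reject blank names using a trimmed copy, but pass the original, untrimmed value through to the SDK so space-padded blob keys remain addressable.

```typescript
// Hypothetical helper: validate without mutating. Azure blob names may
// legally start or end with whitespace, so we trim only for the blank check.
type NameCheck = { ok: true; name: string } | { ok: false; error: string };

function validateBlobName(raw: string): NameCheck {
  if (raw.trim().length === 0) {
    return { ok: false, error: "blobName must not be blank." };
  }
  return { ok: true, name: raw }; // original value preserved, spaces intact
}
```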


@openclaw-barnacle openclaw-barnacle bot added the docs Improvements or additions to documentation label Mar 22, 2026

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 305a5295d4


Comment on lines +208 to +210

```ts
    const connectionString = resolveAzureBlobConnectionString(cfg);
    if (connectionString) {
      return BlobServiceClient.fromConnectionString(connectionString);
```

P1 Badge Handle malformed Azure connection strings before client creation

If AZURE_STORAGE_CONNECTION_STRING or plugins.entries.azure-blob.config.connectionString contains a typo, BlobServiceClient.fromConnectionString() throws synchronously. Because this call happens before the try blocks in listBlobContainers, listBlobsInContainer, and readBlobBytes, the exception escapes the normal { ok: false, error } path and turns every Azure tool call into an unhandled failure instead of a user-facing config error. Wrapping client construction in the same error normalization used for request failures would keep bad secrets from crashing the tool.


Comment on lines +114 to +116

```ts
    const max = clampListMaxResults(params.maxResults);
    const namePrefix = typeof params.prefix === "string" ? params.prefix.trim() : "";
    const container = client.getContainerClient(params.containerName.trim());
```

P2 Badge Preserve significant whitespace in blob-prefix filters

Blob prefixes are matched against blob names verbatim, and Azure blob names can legally contain leading or trailing spaces. Trimming params.prefix here means azure_blob_list_blobs cannot accurately filter blobs like " reports/q1.json" or "foo "; it will query for a different prefix and either miss the target blob or return the wrong set. This is a separate regression from the azure_blob_read name trimming because it breaks discovery as well as reads.



@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 0b510daa81


Comment on lines +289 to +293

```ts
      if (status === "404" || code === "BlobNotFound") {
        return {
          ok: false,
          message: `Blob not found: ${params.containerName}/${params.blobName}`,
        };
```

P2 Badge Distinguish missing containers from missing blobs in reads

When azure_blob_read is pointed at a container that does not exist, Azure returns code: "ContainerNotFound" with HTTP 404. This branch matches any 404 before checking the error code, so the tool reports Blob not found: <container>/<blob> even though the container itself is wrong. That sends users toward the wrong fix and makes bad defaultContainer / containerName config look like a missing blob instead of a missing container.
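A minimal sketch of the distinction (helper name hypothetical): check the Azure error code before falling back on the bare HTTP status, so each 404 flavor gets its own message.

```typescript
// Hypothetical helper: map Azure 404s to distinct, actionable messages.
// ContainerNotFound must be checked before the generic status match.
function describeNotFound(
  status: string,
  code: string | undefined,
  containerName: string,
  blobName: string,
): string | undefined {
  if (code === "ContainerNotFound") {
    return `Container not found: ${containerName}`;
  }
  if (code === "BlobNotFound" || status === "404") {
    return `Blob not found: ${containerName}/${blobName}`;
  }
  return undefined; // not a not-found error; let generic handling run
}
```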


@openclaw-barnacle openclaw-barnacle bot added channel: msteams Channel integration: msteams size: XL and removed size: L labels Mar 22, 2026

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: d3104e8d08


Comment on lines +2 to +4

```json
  "name": "@openclaw/azure-blob",
  "version": "2026.3.14",
  "private": true,
```

P2 Badge Add .github/labeler.yml coverage for azure-blob

AGENTS.md requires new extensions to update .github/labeler.yml and create a matching label. This commit adds extensions/azure-blob/**, but .github/labeler.yml has no extensions: azure-blob rule, so future PRs that only touch this plugin will miss the repo’s extension-label triage/review routing that existing plugins already get.
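A hedged sketch of the missing rule; the exact label string and glob syntax depend on the actions/labeler version this repo uses, so treat the key below as a placeholder to match against the existing entries in .github/labeler.yml:

```yaml
# Placeholder rule; align the label name and structure with the
# existing extension entries in .github/labeler.yml.
"extensions: azure-blob":
  - extensions/azure-blob/**
```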


@openclaw-barnacle openclaw-barnacle bot added the channel: whatsapp-web Channel integration: whatsapp-web label Mar 22, 2026

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: edcdb0a47d


Comment on lines +208 to +210

```ts
    const connectionString = resolveAzureBlobConnectionString(cfg);
    if (connectionString) {
      return BlobServiceClient.fromConnectionString(connectionString);
```

P2 Badge Honor explicit accountName/accountKey over env connection strings

If the operator configures plugins.entries.azure-blob.config.accountName/accountKey as documented, but the gateway process also inherits AZURE_STORAGE_CONNECTION_STRING for some other storage account, this branch always picks the env connection string and never reaches shared-key auth. In that multi-Azure-host scenario the plugin will query the wrong account or fail authentication, so the advertised account-name/key setup is effectively unusable unless the user unsets the global env var first.
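The precedence fix can be sketched as follows (resolver shape hypothetical; key and env-var names taken from this PR): explicit plugin config, in either auth form, should win before any process-wide environment variable is consulted.

```typescript
// Hypothetical resolver: explicit plugin config beats inherited env vars.
type AzureBlobCfg = { connectionString?: string; accountName?: string; accountKey?: string };
type Auth =
  | { kind: "connectionString"; value: string }
  | { kind: "sharedKey"; accountName: string; accountKey: string }
  | undefined;

function resolveAuth(cfg: AzureBlobCfg, env: Record<string, string | undefined>): Auth {
  // 1. Explicit config wins, in either form.
  if (cfg.connectionString) return { kind: "connectionString", value: cfg.connectionString };
  if (cfg.accountName && cfg.accountKey) {
    return { kind: "sharedKey", accountName: cfg.accountName, accountKey: cfg.accountKey };
  }
  // 2. Only then fall back to the process environment.
  const envCs = env["AZURE_STORAGE_CONNECTION_STRING"];
  if (envCs) return { kind: "connectionString", value: envCs };
  return undefined;
}
```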


Comment on lines +19 to +24
1. **Load** this extension from disk (`plugins.load.paths`).
2. **Allow** the plugin id if you use a global plugin allowlist (`plugins.allow`).
3. **Enable** it and pass **credentials** (`plugins.entries.azure-blob`).
4. **Expose** the optional tools via `tools.alsoAllow` (and `tools.sandbox.tools.allow` when sessions run sandboxed).

Use the **absolute path** to **this directory** (the folder that contains `openclaw.plugin.json`) on the **same machine** as the Gateway process. Do **not** point at `index.ts` — use the directory path to avoid plugin id mismatch warnings.

P2 Badge Remove the mandatory load-path step from bundled setup docs

This plugin is bundled and auto-discovered from the stock bundled extensions root, so requiring plugins.load.paths here sends normal npm/mac-app users to a repo path they do not have. On those installs, following these instructions makes Azure Blob setup look impossible even though enabling plugins.entries.azure-blob is sufficient, which is a user-facing setup regression for the new feature.



@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 5284bede5f


Comment on lines +252 to +255

```ts
    if (typeof contentLength === "number" && Number.isFinite(contentLength) && contentLength >= 0) {
      const n = Math.min(maxBytes, Math.floor(contentLength));
      data = n === 0 ? Buffer.alloc(0) : await blob.downloadToBuffer(0, n);
    } else {
```

P2 Badge Stop using stale blob properties to size the download

readBlobBytes() uses the contentLength fetched by getProperties() a few lines earlier to decide how many bytes to request here. If the blob is appended to or overwritten between those two requests—a common case for log/append blobs—n is stale: a blob that grew will be under-read and reported as truncated: false, while a blob that shrank can fail with a spurious range error. This makes reads from actively changing blobs return incomplete or incorrect data even when maxBytes would allow the current contents.


@openclaw-barnacle openclaw-barnacle bot added the channel: tlon Channel integration: tlon label Mar 23, 2026

Labels

  • channel: msteams (Channel integration: msteams)
  • channel: tlon (Channel integration: tlon)
  • channel: whatsapp-web (Channel integration: whatsapp-web)
  • docs (Improvements or additions to documentation)
  • size: XL
