Interactive Sessions that Run Directly (without ngrok) #1371
Conversation
- Implement direct local access for interactive sessions in compute provider.
- Update schemas to include local access option.
- Modify tunnel log parsers to recognize local URLs.
- Revise UI components to reflect changes in access methods and improve messaging.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings. Use the following commands to manage reviews, or the checkboxes below for quick actions.
📝 Walkthrough

Adds local interactive mode and accelerator metadata: new schema fields (…).

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Frontend
    participant API
    participant LocalProvider
    participant Service
    User->>Frontend: enable "direct web access" + submit task
    Frontend->>Frontend: include local=true and template_id in payload
    Frontend->>API: POST /launch (local, interactive_type, template_id)
    API->>API: launch_template_on_provider
    alt local && interactive
        API->>LocalProvider: send overridden local start command (jupyter/vllm/ollama/vscode)
    else
        API->>LocalProvider: send tunnel-based start command
    end
    LocalProvider->>Service: execute command (bind 0.0.0.0)
    Service->>LocalProvider: prints local URL to stdout
    LocalProvider->>API: stream logs
    API->>API: tunnel_parser extracts localhost URL if present
    API->>Frontend: stream URL and logs
    Frontend->>User: display localhost URL for direct access
```
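The "tunnel_parser extracts localhost URL" step in the diagram can be sketched as a small log scanner. This is a minimal illustration, not the project's actual parser; the function name and regex are assumptions:

```python
import re
from typing import Optional

# Matches http(s)://localhost:PORT or http(s)://127.0.0.1:PORT plus any
# trailing path/query (e.g. Jupyter's token URL). Hypothetical pattern,
# not the pattern used by the PR's tunnel log parsers.
LOCAL_URL_RE = re.compile(r"https?://(?:localhost|127\.0\.0\.1):\d+\S*")

def extract_local_url(log_lines: list[str]) -> Optional[str]:
    """Return the first localhost URL printed by the service, if any."""
    for line in log_lines:
        match = LOCAL_URL_RE.search(line)
        if match:
            return match.group(0)
    return None

sample_logs = [
    "[I 2024-01-01 12:00:00 ServerApp] Jupyter Server is running at:",
    "[I 2024-01-01 12:00:00 ServerApp] http://localhost:8888/lab?token=abc123",
]
print(extract_local_url(sample_logs))  # -> http://localhost:8888/lab?token=abc123
```

Because the service binds 0.0.0.0 and prints its URL to stdout, scanning the streamed logs for a localhost URL is enough to surface a direct-access link without a tunnel.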
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Codecov Report: ❌ Patch coverage is (…)
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx (2)
71-81: ⚠️ Potential issue | 🔴 Critical

Add `template_id` to the `onSubmit` payload type.

`handleSubmit` passes `template_id`, but the `onSubmit` data type doesn't include it, which will trigger excess-property errors in TS.

Suggested fix:

```diff
 provider_id?: string;
+template_id?: string;
 env_parameters?: Record<string, string>;
 local?: boolean;
```

Also applies to: 300-310
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx` around lines 71 - 81, The onSubmit payload type in NewInteractiveTaskModal.tsx is missing template_id while handleSubmit supplies it, causing TypeScript excess-property errors; update the onSubmit parameter type for the function (the onSubmit declaration and the identical type block used again around the other occurrence) to include template_id?: string (or string if required) so the shape matches what handleSubmit passes, referencing the onSubmit signature in NewInteractiveTaskModal and the other duplicate type block near the later occurrence (around the handleSubmit usage).
315-331: ⚠️ Potential issue | 🟠 Major

Include `isLocal` in `canSubmit` memo dependencies.

`canSubmit` filters required fields based on `isLocal`, but the memo deps omit it, so toggling local mode won't update button state reliably.

Suggested fix:

```diff
- }, [title, selectedProviderId, selectedTemplate, configFieldValues]);
+ }, [title, selectedProviderId, selectedTemplate, configFieldValues, isLocal]);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx` around lines 315 - 331, The canSubmit useMemo references isLocal when filtering required fields but does not include isLocal in its dependency array, causing stale button state when local mode toggles; update the dependency list of the canSubmit React.useMemo (the const canSubmit in NewInteractiveTaskModal) to include isLocal alongside title, selectedProviderId, selectedTemplate, and configFieldValues so the memo recomputes when local mode changes.

src/renderer/components/Experiment/Tasks/InteractiveVSCodeModal.tsx (1)
1-15: ⚠️ Potential issue | 🟡 Minor

Fix Prettier formatting to unblock CI.
The Prettier check reported a formatting issue for this file; please run the formatter so the pipeline passes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/Experiment/Tasks/InteractiveVSCodeModal.tsx` around lines 1 - 15, This file fails the Prettier check; run the project's Prettier formatter on InteractiveVSCodeModal.tsx (or run the repo script that formats files, e.g., prettier --write or npm/yarn format) and commit the formatted changes so the import block and the rest of the component (InteractiveVSCodeModal) match the project's Prettier rules and unblock CI.
🧹 Nitpick comments (3)
api/transformerlab/compute_providers/local.py (1)
158-176: Consider using the `logging` module instead of `print()` for debug statements.

The debug statements are useful for visibility, but `print()` doesn't integrate with log level filtering. Using Python's `logging` module would allow these to be controlled via configuration.

♻️ Suggested refactor to use logging:

```diff
+import logging
+
+logger = logging.getLogger(__name__)
+
 class LocalProvider(ComputeProvider):
     # ...
     def launch_cluster(self, cluster_name: str, config: ClusterConfig) -> Dict[str, Any]:
         # ...
-        print(f"DEBUG: LocalProvider.launch_cluster: cluster_name={cluster_name}")
-        print(f"DEBUG: LocalProvider.launch_cluster: job_dir={job_dir}")
+        logger.debug(f"LocalProvider.launch_cluster: cluster_name={cluster_name}")
+        logger.debug(f"LocalProvider.launch_cluster: job_dir={job_dir}")
         if config.setup:
-            print(f"DEBUG: LocalProvider.launch_cluster: running setup: {config.setup}")
+            logger.debug(f"LocalProvider.launch_cluster: running setup: {config.setup}")
             # ...
             if setup_result.returncode != 0:
-                print(f"DEBUG: LocalProvider.launch_cluster: setup failed with code {setup_result.returncode}")
-                print(f"DEBUG: LocalProvider.launch_cluster: setup stderr: {setup_result.stderr}")
+                logger.debug(f"LocalProvider.launch_cluster: setup failed with code {setup_result.returncode}")
+                logger.debug(f"LocalProvider.launch_cluster: setup stderr: {setup_result.stderr}")
                 raise RuntimeError(...)
-        print(f"DEBUG: LocalProvider.launch_cluster: running command: {config.command}")
+        logger.debug(f"LocalProvider.launch_cluster: running command: {config.command}")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/transformerlab/compute_providers/local.py` around lines 158 - 176, Replace the ad-hoc print() debug statements in LocalProvider.launch_cluster with the Python logging module: add "import logging" and obtain a module logger via logging.getLogger(__name__), then change prints to logger.debug(...) for the debug traces (cluster_name, job_dir, running setup, running command) and use logger.error(...) when logging setup_result.stderr/returncode before raising the RuntimeError; keep the same variable names (config.setup, setup_result) and preserve the raised RuntimeError text but log the detailed error via the logger instead of print.

src/renderer/components/Experiment/Interactive/Interactive.tsx (1)
339-344: Consider defining an interface for the template type instead of using `any`.

The template selection logic is correct, but the use of the `any` type violates coding guidelines. Consider defining a proper interface for the gallery template structure.

As per coding guidelines: "Avoid using the `any` type in TypeScript; define interfaces for all props and API responses to ensure type safety."

♻️ Suggested interface definition:

```typescript
interface InteractiveTemplate {
  id: string;
  interactive_type: string;
  name: string;
  description: string;
  setup: string;
  command: string;
  env_parameters: Array<{
    field_name: string;
    env_var: string;
    field_type: string;
    required: boolean;
    placeholder?: string;
    help_text?: string;
    password?: boolean;
  }>;
  icon: string;
}
```

Then update the find call:

```diff
-const template = galleryData.data?.find((t: any) => {
+const template = galleryData.data?.find((t: InteractiveTemplate) => {
   if (data.template_id) {
     return t.id === data.template_id;
   }
   return t.interactive_type === interactiveType;
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/Experiment/Interactive/Interactive.tsx` around lines 339 - 344, The code uses an `any` in the template selection which breaks TypeScript safety: define a proper interface (e.g., InteractiveTemplate) describing the template shape (id, interactive_type, name, description, setup, command, env_parameters, icon, etc.) and replace the `any` with that interface on `galleryData.data` elements so the `find` call that assigns `template` uses typed objects; update the type of `galleryData` or the array element type where `galleryData.data` is declared and change the find callback signature from `(t: any)` to `(t: InteractiveTemplate)` (or the appropriate alias) to restore type safety while keeping the same selection logic.

api/transformerlab/routers/compute_provider.py (1)
1506-1521: Hardcoded commands may drift from gallery definitions.

The local mode implementation bypasses the interactive gallery and hardcodes commands directly in the router. While this works, it creates a maintenance risk where the gallery commands and local commands could diverge over time.
Consider extracting these command templates to a shared location or generating local variants from the gallery definitions to ensure consistency.
Additionally, per coding guidelines, business logic like command generation could be moved to a service module. As per coding guidelines: "Place business logic in `api/transformerlab/services/` using the Service pattern; routers should only handle HTTP request/response validation."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/transformerlab/routers/compute_provider.py` around lines 1506 - 1521, The router compute_provider.py currently hardcodes per-interactive-type local command strings (see request.local, request.subtype, request.interactive_type and the command_with_secrets assignments), which risks drifting from gallery definitions and mixes business logic into the router; refactor by extracting the command templates and local-variant generation into a new service (e.g., a CommandGenerationService in api/transformerlab/services/) that exposes a method like get_local_command(interactive_type, model_name, tp_size, ...) and have the router call that service to obtain command_with_secrets instead of embedding the strings; ensure the service sources templates from the gallery definitions (or a shared templates module) so local variants are derived programmatically, and update the router to only perform request validation and delegate command construction to the new service.
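The service extraction described above could look roughly like the following minimal sketch. The `get_local_command` name, the command strings, and the parameter names are illustrative assumptions; as the review recommends, real commands should be sourced from the gallery definitions rather than hardcoded:

```python
# Sketch of a CommandGenerationService: one lookup table replaces the
# per-interactive-type branches in the router. Commands are illustrative.
LOCAL_COMMAND_TEMPLATES = {
    "jupyter": "jupyter lab --ip 0.0.0.0 --port {port} --no-browser",
    "vllm": "vllm serve {model_name} --host 0.0.0.0 --port {port}",
    "ollama": "OLLAMA_HOST=0.0.0.0:{port} ollama serve",
}

def get_local_command(interactive_type: str, **params: str) -> str:
    """Return the local (non-tunnel) start command for an interactive type."""
    try:
        template = LOCAL_COMMAND_TEMPLATES[interactive_type]
    except KeyError:
        raise ValueError(f"No local command for interactive type: {interactive_type}")
    return template.format(**params)

print(get_local_command("jupyter", port="8888"))
# -> jupyter lab --ip 0.0.0.0 --port 8888 --no-browser
```

With this shape, the router only validates the request and delegates; if the table is loaded from `interactive-gallery.json` (e.g. a `local_command` field), gallery and local variants cannot drift apart.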
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@src/renderer/components/Experiment/Tasks/InteractiveVSCodeModal.tsx`:
- Around line 1-15: This file fails the Prettier check; run the project's
Prettier formatter on InteractiveVSCodeModal.tsx (or run the repo script that
formats files, e.g., prettier --write or npm/yarn format) and commit the
formatted changes so the import block and the rest of the component
(InteractiveVSCodeModal) match the project's Prettier rules and unblock CI.
In `@src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx`:
- Around line 71-81: The onSubmit payload type in NewInteractiveTaskModal.tsx is
missing template_id while handleSubmit supplies it, causing TypeScript
excess-property errors; update the onSubmit parameter type for the function (the
onSubmit declaration and the identical type block used again around the other
occurrence) to include template_id?: string (or string if required) so the shape
matches what handleSubmit passes, referencing the onSubmit signature in
NewInteractiveTaskModal and the other duplicate type block near the later
occurrence (around the handleSubmit usage).
- Around line 315-331: The canSubmit useMemo references isLocal when filtering
required fields but does not include isLocal in its dependency array, causing
stale button state when local mode toggles; update the dependency list of the
canSubmit React.useMemo (the const canSubmit in NewInteractiveTaskModal) to
include isLocal alongside title, selectedProviderId, selectedTemplate, and
configFieldValues so the memo recomputes when local mode changes.
---
Nitpick comments:
In `@api/transformerlab/compute_providers/local.py`:
- Around line 158-176: Replace the ad-hoc print() debug statements in
LocalProvider.launch_cluster with the Python logging module: add "import
logging" and obtain a module logger via logging.getLogger(__name__), then change
prints to logger.debug(...) for the debug traces (cluster_name, job_dir, running
setup, running command) and use logger.error(...) when logging
setup_result.stderr/returncode before raising the RuntimeError; keep the same
variable names (config.setup, setup_result) and preserve the raised RuntimeError
text but log the detailed error via the logger instead of print.
In `@api/transformerlab/routers/compute_provider.py`:
- Around line 1506-1521: The router compute_provider.py currently hardcodes
per-interactive-type local command strings (see request.local, request.subtype,
request.interactive_type and the command_with_secrets assignments), which risks
drifting from gallery definitions and mixes business logic into the router;
refactor by extracting the command templates and local-variant generation into a
new service (e.g., a CommandGenerationService in api/transformerlab/services/)
that exposes a method like get_local_command(interactive_type, model_name,
tp_size, ...) and have the router call that service to obtain
command_with_secrets instead of embedding the strings; ensure the service
sources templates from the gallery definitions (or a shared templates module) so
local variants are derived programmatically, and update the router to only
perform request validation and delegate command construction to the new service.
In `@src/renderer/components/Experiment/Interactive/Interactive.tsx`:
- Around line 339-344: The code uses an `any` in the template selection which
breaks TypeScript safety: define a proper interface (e.g., InteractiveTemplate)
describing the template shape (id, interactive_type, name, description, setup,
command, env_parameters, icon, etc.) and replace the `any` with that interface
on `galleryData.data` elements so the `find` call that assigns `template` uses
typed objects; update the type of `galleryData` or the array element type where
`galleryData.data` is declared and change the find callback signature from `(t:
any)` to `(t: InteractiveTemplate)` (or the appropriate alias) to restore type
safety while keeping the same selection logic.
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
api/transformerlab/routers/tasks.py (3)
583-610: 🛠️ Refactor suggestion | 🟠 Major

Move config construction into the service layer.

The request→config translation is business logic; per the service-pattern guideline, it should live in `api/transformerlab/services/` with the router limited to validation and delegation.

As per coding guidelines, "Implement business logic in `api/transformerlab/services/` using the Service Pattern; routers should only handle HTTP validation and call services".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/transformerlab/routers/tasks.py` around lines 583 - 610, The router currently builds the task config and gallery_entry inline in the POST handler (the block that constructs config and the gallery_entry dict); move that business logic into a service function in api/transformerlab/services/ (e.g., add a function named build_task_config(request) or create_gallery_entry(request) that returns the config dict and gallery_entry structure). Update the router to perform only validation and delegation by calling the new service function(s) and using the returned config/gallery_entry, and ensure any UUID generation or field mapping (github_repo_dir → github_directory) is handled inside the service so the router no longer contains request→config translation logic.
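The request→config move described above can be sketched as a pure service-layer function. The names (`build_gallery_entry`), the dataclass stand-in for the Pydantic model, and the exact field set are assumptions for illustration:

```python
import uuid
from dataclasses import dataclass
from typing import Optional

@dataclass
class AddTeamTaskToGalleryRequest:
    # Mirrors the fields shown in the review; all but title/command optional.
    title: str
    command: str
    description: Optional[str] = None
    setup: Optional[str] = None
    github_repo_url: Optional[str] = None
    github_repo_dir: Optional[str] = None

def build_gallery_entry(request: AddTeamTaskToGalleryRequest) -> dict:
    """Translate a validated request into a gallery entry (service layer).

    The router then only validates the HTTP request and calls this function,
    keeping the request -> config mapping (e.g. github_repo_dir ->
    github_directory) out of the route handler.
    """
    return {
        "id": str(uuid.uuid4()),
        "title": request.title,
        "command": request.command,
        "description": request.description or "",
        "setup": request.setup or "",
        "github_repo_url": request.github_repo_url,
        "github_directory": request.github_repo_dir,  # field rename lives here
    }

entry = build_gallery_entry(AddTeamTaskToGalleryRequest(title="Demo", command="python run.py"))
print(entry["title"])  # -> Demo
```

Keeping the translation in one pure function also makes it unit-testable without spinning up the HTTP layer.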
308-319: 🛠️ Refactor suggestion | 🟠 Major

Import `AddTeamTaskToGalleryRequest` from the shared schema instead of defining it locally in the router.

This model is already defined in `api/transformerlab/schemas/task.py` with identical fields. Remove the local class definition and import it from the schema module to maintain a single source of truth for validation models, as required by the coding guidelines.

Suggested fix:

```diff
 from pydantic import BaseModel
+from transformerlab.schemas.task import AddTeamTaskToGalleryRequest

-class AddTeamTaskToGalleryRequest(BaseModel):
-    title: str
-    description: Optional[str] = None
-    setup: Optional[str] = None
-    command: str
-    cpus: Optional[str] = None
-    memory: Optional[str] = None
-    supported_accelerators: Optional[str] = None
-    github_repo_url: Optional[str] = None
-    github_repo_dir: Optional[str] = None
-    github_branch: Optional[str] = None
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/transformerlab/routers/tasks.py` around lines 308 - 319, Remove the duplicate local Pydantic model by deleting the AddTeamTaskToGalleryRequest class definition in routers/tasks.py and instead import the canonical model from the shared schema module (the AddTeamTaskToGalleryRequest defined in api/transformerlab/schemas/task.py); update the top-of-file imports to import AddTeamTaskToGalleryRequest and ensure any references in this router use that imported symbol so the router relies on the single shared validation model.
573-576: ⚠️ Potential issue | 🟡 Minor

Add type hints to the route handler signature.

The handler is missing type annotations for the `user_and_team` parameter and return type. Add both to comply with the type hints requirement for `api/**/*.py`.

✍️ Suggested update:

```diff
+from typing import Any, Optional
@@
 async def add_team_task_to_gallery(
     request: AddTeamTaskToGalleryRequest,
-    user_and_team=Depends(get_user_and_team),
-):
+    user_and_team: dict[str, Any] = Depends(get_user_and_team),
+) -> dict[str, Any]:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/transformerlab/routers/tasks.py` around lines 573 - 576, The route handler add_team_task_to_gallery is missing type annotations for the dependency parameter and its return type; update the signature to annotate user_and_team with the concrete type returned by get_user_and_team (for example use the actual UserAndTeam model or Tuple[User, Team] and the Depends pattern: user_and_team: UserAndTeam = Depends(get_user_and_team)) and add an explicit return type (for example -> AddTeamTaskToGalleryResponse or the actual response model/JSONResponse type used by this endpoint), importing the referenced types if necessary.

src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx (2)
382-398: ⚠️ Potential issue | 🟠 Major

Add `isLocal` to the `useMemo` dependency array.

Line 389 uses `isLocal` to conditionally filter out the NGROK_AUTH_TOKEN requirement, but the dependency array omits it. When `isLocal` changes, `canSubmit` won't recalculate, leaving the validation stale and potentially allowing submission with missing tokens or blocking submission when a token is no longer required.

Suggested fix:

```diff
- }, [title, selectedProviderId, selectedTemplate, configFieldValues]);
+ }, [title, selectedProviderId, selectedTemplate, configFieldValues, isLocal]);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx` around lines 382 - 398, The memo for canSubmit (React.useMemo in NewInteractiveTaskModal) uses isLocal in its body but omits it from the dependency array, causing stale validation; update the dependency array for the canSubmit useMemo to include isLocal so the memo recalculates when isLocal changes (keep [title, selectedProviderId, selectedTemplate, configFieldValues, isLocal] as the dependencies) and run the linter/tests to ensure no other related dependencies are missing.
72-85: ⚠️ Potential issue | 🟠 Major

Add `template_id` to the `onSubmit` data type.

Line 373 passes `template_id: selectedTemplate.id` to the `onSubmit` callback, but the data type definition (lines 72-85) doesn't include it. This creates a type mismatch that will trigger excess property checks and hides the field from callers.

Suggested update:

```diff
 interactive_type: 'vscode' | 'jupyter' | 'vllm' | 'ssh' | 'ollama';
+template_id: string;
 provider_id?: string;
 env_parameters?: Record<string, string>;
 local?: boolean;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx` around lines 72 - 85, The onSubmit data type in NewInteractiveTaskModalProps is missing the template_id property even though the component passes template_id: selectedTemplate.id to onSubmit; update the data payload type inside NewInteractiveTaskModalProps (the onSubmit parameter type) to include template_id?: string (or string | undefined if you prefer) so callers see and accept the template_id field; locate the type definition named NewInteractiveTaskModalProps and the onSubmit data shape to add this property.

src/renderer/components/TasksGallery/TasksGallery.tsx (1)
122-140: ⚠️ Potential issue | 🟠 Major

Replace `any` with a typed gallery item interface.

The `task: any` parameter at line 132 and `data: any[]` in filterTasksGallery at line 43 violate type safety guidelines. The supported_accelerators block and other property accesses (title, name, description, icon, github_repo_url, metadata.framework, etc.) require a proper type definition.

Define a GalleryTask interface and apply it to both TaskCard and filterTasksGallery:

🧩 Suggested update:

```diff
+type GalleryTask = {
+  id?: string;
+  name?: string;
+  title?: string;
+  description?: string;
+  icon?: string;
+  supported_accelerators?: string;
+  config?: { supported_accelerators?: string };
+  metadata?: { category?: string; framework?: string[] };
+  github_repo_url?: string;
+  github_repo_dir?: string;
+  github_repo_branch?: string;
+  github_branch?: string;
+};
@@
-function filterTasksGallery(data: any[], searchText: string = '') {
+function filterTasksGallery(data: GalleryTask[], searchText: string = '') {
@@
 }: {
-  task: any;
+  task: GalleryTask;
   galleryIdentifier: string | number;
   onImport: (identifier: string | number) => void;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/TasksGallery/TasksGallery.tsx` around lines 122 - 140, Define a typed interface (e.g., GalleryTask) describing the task shape (title, name, description, icon, github_repo_url, metadata: { framework?: string }, supported_accelerators: string[], id, etc.) and replace the loose any usages: change TaskCard's prop type from task: any to task: GalleryTask and update filterTasksGallery signature from data: any[] to data: GalleryTask[]; ensure all property accesses in TaskCard and filterTasksGallery use the new interface and update any optional fields to be optional in GalleryTask so existing code compiles.
🧹 Nitpick comments (5)
api/transformerlab/compute_providers/config.py (1)
39-41: Type inconsistency between ComputeProviderConfig and ProviderConfigBase.
`ComputeProviderConfig.supported_accelerators` uses `Optional[list[str]]`, while `ProviderConfigBase` in `schemas/compute_providers.py` uses `Optional[List[AcceleratorType]]`. This type inconsistency could cause validation issues or confusion when data flows between these models.

Consider aligning the types for consistency:

♻️ Suggested fix:

```diff
+from transformerlab.shared.models.models import AcceleratorType
+from typing import Dict, Any, Optional, List
 ...
     # Accelerators supported by this provider
-    supported_accelerators: Optional[list[str]] = Field(default=None)
+    supported_accelerators: Optional[List[AcceleratorType]] = Field(default=None)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/transformerlab/compute_providers/config.py` around lines 39 - 41, The ComputeProviderConfig model's supported_accelerators field is typed as Optional[list[str]] while ProviderConfigBase uses Optional[List[AcceleratorType]], causing inconsistency; update ComputeProviderConfig.supported_accelerators to use the same type (Optional[List[AcceleratorType]]) and import or reference AcceleratorType from the same module where ProviderConfigBase defines it, and run/adjust any imports or type aliases so both models share the identical type definition.

api/transformerlab/shared/models/models.py (1)
136-142: Inconsistent casing in AcceleratorType enum values.

The enum values use inconsistent casing: `"AppleSilicon"` (PascalCase), `"NVIDIA"` (uppercase), `"AMD"` (uppercase), and `"cpu"` (lowercase). This inconsistency could cause matching issues when comparing accelerator strings from different sources (e.g., gallery JSON, provider configs, frontend).

Consider standardizing the casing, for example using uppercase for all hardware-specific types:

♻️ Suggested fix:

```diff
 class AcceleratorType(str, enum.Enum):
     """Enum for accelerator types."""
-    APPLE_SILICON = "AppleSilicon"
+    APPLE_SILICON = "APPLE_SILICON"
     NVIDIA = "NVIDIA"
     AMD = "AMD"
-    CPU = "cpu"
+    CPU = "CPU"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/transformerlab/shared/models/models.py` around lines 136 - 142, The AcceleratorType enum has inconsistent member string values (APPLE_SILICON="AppleSilicon", NVIDIA="NVIDIA", AMD="AMD", CPU="cpu"); normalize them to a single casing (e.g., all uppercase: APPLE_SILICON="APPLE_SILICON", NVIDIA="NVIDIA", AMD="AMD", CPU="CPU") so comparisons are stable across sources, and update any code that relies on the old literal strings (parsing/serialization/deserialization points that reference AcceleratorType) to use the new standardized values or use AcceleratorType members instead of raw strings.

api/transformerlab/galleries/interactive-gallery.json (1)
11-11: `supported_accelerators` uses string instead of array format.

The `supported_accelerators` field uses a single string (e.g., `"cpu"`, `"NVIDIA"`) rather than an array. If a template supports multiple accelerator types, this format won't accommodate that. Consider using an array format for future extensibility:

```json
"supported_accelerators": ["cpu", "NVIDIA"]
```

Also applies to: 32-32, 78-78, 107-107, 127-127, 148-148
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/transformerlab/galleries/interactive-gallery.json` at line 11, Update the JSON entries where the key supported_accelerators is currently a string (e.g., "cpu") to use an array format (e.g., ["cpu"]) so multiple accelerator types can be listed; find every occurrence of the supported_accelerators key in this file (notably the entries referenced around lines 11, 32, 78, 107, 127, 148) and replace the string value with an array value, and ensure any consumer logic that reads supported_accelerators expects an array (or normalize a string to a single-item array) so parsing remains backward-compatible.

api/transformerlab/routers/compute_provider.py (1)
1508-1524: Hardcoded commands duplicate gallery logic and create maintenance burden.

The local interactive mode bypasses the gallery with hardcoded commands (lines 1518-1523), duplicating logic from `interactive-gallery.json`. This means changes to commands must be made in two places. The comment at line 1510 acknowledges this is a temporary solution.

Consider extracting local-mode commands into the gallery JSON (e.g., a separate `local_command` field) or a dedicated configuration, so all interactive commands are centralized.

Additionally, the `vscode` case (line 1517) is a no-op (`pass`), meaning VS Code local mode still uses the tunnel command, which may not work locally.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/transformerlab/routers/compute_provider.py` around lines 1508 - 1524, The local interactive branch in compute_provider.py duplicates hardcoded commands for request.interactive_type (jupyter, vllm, ollama) and leaves the vscode branch as a no-op; extract these local-mode commands out of the function and centralize them in the interactive gallery config (e.g., add a local_command field to interactive-gallery.json) and then load and use that config here instead of hardcoding strings; update the vscode branch to read a local_command from the gallery (or fall back to the tunnel only if no local_command exists) and change any places referencing request.interactive_type or the current hardcoded command_* variables to use the new config lookup so updates live in one place.

src/renderer/components/Experiment/Tasks/QueueTaskModal.tsx (1)
1012-1012: Consider defining a proper interface for `provider` instead of `any`.

The `provider` parameter uses the `any` type. Define an interface for the provider object to ensure type safety, per coding guidelines which advise avoiding `any` in TypeScript.

```typescript
interface Provider {
  id: string;
  name: string;
  type: string;
  config?: {
    supported_accelerators?: string[];
  };
}
```
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/Experiment/Tasks/QueueTaskModal.tsx` at line 1012, The providers.map callback in QueueTaskModal.tsx uses the unsafe any type for provider; define a Provider interface (e.g., fields id, name, type, optional config.supported_accelerators) and replace any with Provider wherever provider is used (including the providers array declaration and the map callback signature), update imports/exports or local type declarations as needed, and ensure references in the component (e.g., inside the providers.map loop in QueueTaskModal) access typed fields rather than arbitrary properties to restore type safety.
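The accelerator-compatibility checks discussed throughout these comments boil down to normalized string matching. A minimal sketch (a hypothetical helper written in Python for brevity; the real logic is the TypeScript `isProviderCompatible`, and the alias table below is an assumption based on the review's examples):

```python
def is_accelerator_compatible(requested: str, supported: list[str]) -> bool:
    """Case-insensitive accelerator match.

    Aliases follow the review's examples (mps -> AppleSilicon,
    cuda -> NVIDIA, rocm -> AMD); an empty supported list defaults
    to compatible, as the review's test prompt describes.
    """
    if not supported:
        return True
    normalized = {s.lower() for s in supported}
    aliases = {"mps": "applesilicon", "cuda": "nvidia", "rocm": "amd"}
    req = requested.lower()
    req = aliases.get(req, req)
    return req in normalized

print(is_accelerator_compatible("cuda", ["NVIDIA"]))  # -> True
print(is_accelerator_compatible("mps", ["cpu"]))      # -> False
```

Lowercasing both sides before comparing is exactly the fix the QueueTaskModal comment asks for: comparing a lowercased request against case-sensitive entries like `'NVIDIA'` produces false negatives.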
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx`:
- Around line 132-183: Add unit tests covering all accelerator-matching branches
of isProviderCompatible in NewInteractiveTaskModal.tsx: Apple/MPS, NVIDIA
(including numeric-only requests), AMD/ROCm, CPU, the case where
provider.config.supported_accelerators is empty (defaults to compatible), and
when taskSupportedAccelerators is undefined. If isProviderCompatible is not
exported, first export it or extract it to a small utility so tests can import
it; write tests asserting true/false for combinations of
provider.config.supported_accelerators (e.g., ['AppleSilicon'], ['NVIDIA'],
['AMD'], ['cpu'], []) and request strings like "mps", "cuda", "a100", "rocm",
"cpu", "8" and ensure case-insensitivity.
- Around line 38-43: ProviderOption currently uses config?: any and functions
reference provider: any (e.g., dereferencing provider.config), which breaks type
safety; define a ProviderConfig interface (or a union of specific provider
config types) and replace config?: any with config?: ProviderConfig in the
ProviderOption type, update any function signatures that accept provider (e.g.,
the handler that uses provider.config) to use the typed ProviderOption (or
ProviderConfig) instead of any, and use optional chaining (provider?.config)
where provider may be undefined to avoid runtime errors; ensure all usages of
provider.config are updated to the new typed shape so TypeScript can validate
fields.
- Around line 850-858: Introduce a proper TypeScript interface (e.g.,
TeamGalleryItem) describing fields used from teamGallery
(supported_accelerators, config, title, icon, description, etc.), use it when
casting the SWR response that populates teamGallery and replace the two
occurrences of task: any in NewInteractiveTaskModal's filter and map with task:
TeamGalleryItem; ensure isProviderCompatible still accepts selectedProvider and
task.supported_accelerators (or task.config?.supported_accelerators) without
casting, and update any imports/exports so the new interface is available where
the SWR response and component use teamGallery.
In `@src/renderer/components/Experiment/Tasks/QueueTaskModal.tsx`:
- Around line 129-187: The isProviderCompatible React.useCallback currently
references taskResources and serverInfoData which are declared later, violating
hook ordering; relocate the isProviderCompatible useCallback so it comes after
the definitions (useMemo / state) that produce taskResources and serverInfoData
(or refactor it into a plain function that reads those values at call time),
ensuring the dependency array [taskResources, serverInfoData] stays accurate and
avoiding stale closures.
- Around line 140-171: The compatibility checks in QueueTaskModal.tsx compare
the lowercased reqAcc (String(taskResources.accelerators).toLowerCase()) against
case-sensitive supported entries (e.g., 'NVIDIA'), causing false negatives; fix
by normalizing supported before comparisons (e.g., create a lowercasedSupported
= supported.map(s => String(s).toLowerCase()) or otherwise compare via
s.toLowerCase()) and then use lowercasedSupported.includes(...) for all checks
(references: reqAcc and supported in the AppleSilicon / NVIDIA / AMD / CPU
condition blocks).
In `@src/renderer/components/Team/ProviderDetailsModal.tsx`:
- Around line 288-291: The current logic only sets
parsedConfig.supported_accelerators when supportedAccelerators has values, but
doesn't remove an existing supported_accelerators when the user clears the
selection; update the code where parsedConfig is built (the block referencing
supportedAccelerators and parsedConfig.supported_accelerators) to explicitly
delete parsedConfig.supported_accelerators when supportedAccelerators.length ===
0 so clearing the UI removes the key from the saved config.
- Around line 282-291: Replace the loose any usage and mismatched string types
by introducing a typed interface (e.g. ProviderConfig) that represents the
provider config shape and use it for parsedConfig, the return/parameter types of
buildSlurmConfig() and parseSlurmConfig(), and for the config parameters of
createProvider and updateProvider (or use a union if some providers require
strings), and update any call sites accordingly; specifically change the
parsedConfig declaration from any to ProviderConfig, make buildSlurmConfig():
ProviderConfig and parseSlurmConfig(): ProviderConfig, and change
createProvider(config: string) / updateProvider(config: string) signatures to
accept ProviderConfig (or ProviderConfig | string) so the passed object types
match the function signatures and preserve supported_accelerators typing.
---
Outside diff comments:
In `@api/transformerlab/routers/tasks.py`:
- Around line 583-610: The router currently builds the task config and
gallery_entry inline in the POST handler (the block that constructs config and
the gallery_entry dict); move that business logic into a service function in
api/transformerlab/services/ (e.g., add a function named
build_task_config(request) or create_gallery_entry(request) that returns the
config dict and gallery_entry structure). Update the router to perform only
validation and delegation by calling the new service function(s) and using the
returned config/gallery_entry, and ensure any UUID generation or field mapping
(github_repo_dir → github_directory) is handled inside the service so the router
no longer contains request→config translation logic.
- Around line 308-319: Remove the duplicate local Pydantic model by deleting the
AddTeamTaskToGalleryRequest class definition in routers/tasks.py and instead
import the canonical model from the shared schema module (the
AddTeamTaskToGalleryRequest defined in api/transformerlab/schemas/task.py);
update the top-of-file imports to import AddTeamTaskToGalleryRequest and ensure
any references in this router use that imported symbol so the router relies on
the single shared validation model.
- Around line 573-576: The route handler add_team_task_to_gallery is missing
type annotations for the dependency parameter and its return type; update the
signature to annotate user_and_team with the concrete type returned by
get_user_and_team (for example use the actual UserAndTeam model or Tuple[User,
Team] and the Depends pattern: user_and_team: UserAndTeam =
Depends(get_user_and_team)) and add an explicit return type (for example ->
AddTeamTaskToGalleryResponse or the actual response model/JSONResponse type used
by this endpoint), importing the referenced types if necessary.
In `@src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx`:
- Around line 382-398: The memo for canSubmit (React.useMemo in
NewInteractiveTaskModal) uses isLocal in its body but omits it from the
dependency array, causing stale validation; update the dependency array for the
canSubmit useMemo to include isLocal so the memo recalculates when isLocal
changes (keep [title, selectedProviderId, selectedTemplate, configFieldValues,
isLocal] as the dependencies) and run the linter/tests to ensure no other
related dependencies are missing.
- Around line 72-85: The onSubmit data type in NewInteractiveTaskModalProps is
missing the template_id property even though the component passes template_id:
selectedTemplate.id to onSubmit; update the data payload type inside
NewInteractiveTaskModalProps (the onSubmit parameter type) to include
template_id?: string (or string | undefined if you prefer) so callers see and
accept the template_id field; locate the type definition named
NewInteractiveTaskModalProps and the onSubmit data shape to add this property.
In `@src/renderer/components/TasksGallery/TasksGallery.tsx`:
- Around line 122-140: Define a typed interface (e.g., GalleryTask) describing
the task shape (title, name, description, icon, github_repo_url, metadata: {
framework?: string }, supported_accelerators: string[], id, etc.) and replace
the loose any usages: change TaskCard's prop type from task: any to task:
GalleryTask and update filterTasksGallery signature from data: any[] to data:
GalleryTask[]; ensure all property accesses in TaskCard and filterTasksGallery
use the new interface and update any optional fields to be optional in
GalleryTask so existing code compiles.
---
Nitpick comments:
In `@api/transformerlab/compute_providers/config.py`:
- Around line 39-41: The ComputeProviderConfig model's supported_accelerators
field is typed as Optional[list[str]] while ProviderConfigBase uses
Optional[List[AcceleratorType]], causing inconsistency; update
ComputeProviderConfig.supported_accelerators to use the same type
(Optional[List[AcceleratorType]]) and import or reference AcceleratorType from
the same module where ProviderConfigBase defines it, and run/adjust any imports
or type aliases so both models share the identical type definition.
In `@api/transformerlab/galleries/interactive-gallery.json`:
- Line 11: Update the JSON entries where the key supported_accelerators is
currently a string (e.g., "cpu") to use an array format (e.g., ["cpu"]) so
multiple accelerator types can be listed; find every occurrence of the
supported_accelerators key in this file (notably the entries referenced around
lines 11, 32, 78, 107, 127, 148) and replace the string value with an array
value, and ensure any consumer logic that reads supported_accelerators expects
an array (or normalize a string to a single-item array) so parsing remains
backward-compatible.
In `@api/transformerlab/routers/compute_provider.py`:
- Around line 1508-1524: The local interactive branch in compute_provider.py
duplicates hardcoded commands for request.interactive_type (jupyter, vllm,
ollama) and leaves the vscode branch as a no-op; extract these local-mode
commands out of the function and centralize them in the interactive gallery
config (e.g., add a local_command field to interactive-gallery.json) and then
load and use that config here instead of hardcoding strings; update the vscode
branch to read a local_command from the gallery (or fall back to the tunnel only
if no local_command exists) and change any places referencing
request.interactive_type or the current hardcoded command_* variables to use the
new config lookup so updates live in one place.
In `@api/transformerlab/shared/models/models.py`:
- Around line 136-142: The AcceleratorType enum has inconsistent member string
values (APPLE_SILICON="AppleSilicon", NVIDIA="NVIDIA", AMD="AMD", CPU="cpu");
normalize them to a single casing (e.g., all uppercase:
APPLE_SILICON="APPLE_SILICON", NVIDIA="NVIDIA", AMD="AMD", CPU="CPU") so
comparisons are stable across sources, and update any code that relies on the
old literal strings (parsing/serialization/deserialization points that reference
AcceleratorType) to use the new standardized values or use AcceleratorType
members instead of raw strings.
In `@src/renderer/components/Experiment/Tasks/QueueTaskModal.tsx`:
- Line 1012: The providers.map callback in QueueTaskModal.tsx uses the unsafe
any type for provider; define a Provider interface (e.g., fields id, name, type,
optional config.supported_accelerators) and replace any with Provider wherever
provider is used (including the providers array declaration and the map callback
signature), update imports/exports or local type declarations as needed, and
ensure references in the component (e.g., inside the providers.map loop in
QueueTaskModal) access typed fields rather than arbitrary properties to restore
type safety.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (18)
- api/transformerlab/compute_providers/config.py
- api/transformerlab/galleries/interactive-gallery.json
- api/transformerlab/routers/compute_provider.py
- api/transformerlab/routers/tasks.py
- api/transformerlab/schemas/compute_providers.py
- api/transformerlab/schemas/task.py
- api/transformerlab/services/provider_service.py
- api/transformerlab/shared/models/models.py
- src/renderer/components/Experiment/Tasks/InteractiveJupyterModal.tsx
- src/renderer/components/Experiment/Tasks/InteractiveOllamaModal.tsx
- src/renderer/components/Experiment/Tasks/InteractiveVSCodeModal.tsx
- src/renderer/components/Experiment/Tasks/InteractiveVllmModal.tsx
- src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx
- src/renderer/components/Experiment/Tasks/QueueTaskModal.tsx
- src/renderer/components/TasksGallery/NewTeamTaskModal.tsx
- src/renderer/components/TasksGallery/TasksGallery.tsx
- src/renderer/components/TasksGallery/TeamInteractiveGalleryModal.tsx
- src/renderer/components/Team/ProviderDetailsModal.tsx
🚧 Files skipped from review as they are similar to previous changes (2)
- src/renderer/components/Experiment/Tasks/InteractiveJupyterModal.tsx
- src/renderer/components/Experiment/Tasks/InteractiveOllamaModal.tsx
```tsx
type ProviderOption = {
  id: string;
  name: string;
  type?: string;
  config?: any;
};
```
Define provider config types and remove `any` type annotations to ensure type safety.
Line 42 uses `config?: any`, and line 134 accepts `provider: any` while dereferencing `provider.config`. This violates TypeScript type-safety requirements. Add a typed `ProviderConfig` interface and apply optional chaining to safely handle undefined providers.
🛠️ Suggested update

```diff
+type ProviderConfig = {
+  supported_accelerators?: string[];
+};
+
 type ProviderOption = {
   id: string;
   name: string;
   type?: string;
-  config?: any;
+  config?: ProviderConfig;
 };
@@
-const isProviderCompatible = React.useCallback(
-  (provider: any, taskSupportedAccelerators: string | undefined) => {
+const isProviderCompatible = React.useCallback(
+  (
+    provider: ProviderOption | undefined,
+    taskSupportedAccelerators: string | undefined,
+  ) => {
     if (!taskSupportedAccelerators) return true;
-    const supported = provider.config?.supported_accelerators || [];
+    const supported = provider?.config?.supported_accelerators || [];
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx` around
lines 38 - 43, ProviderOption currently uses config?: any and functions
reference provider: any (e.g., dereferencing provider.config), which breaks type
safety; define a ProviderConfig interface (or a union of specific provider
config types) and replace config?: any with config?: ProviderConfig in the
ProviderOption type, update any function signatures that accept provider (e.g.,
the handler that uses provider.config) to use the typed ProviderOption (or
ProviderConfig) instead of any, and use optional chaining (provider?.config)
where provider may be undefined to avoid runtime errors; ensure all usages of
provider.config are updated to the new typed shape so TypeScript can validate
fields.
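To make the suggestion above concrete, here is a minimal standalone sketch of the typed shape and the optional-chaining access the review describes. `ProviderConfig` and the `supportedAccelerators` helper are illustrative names, not code that exists in the repository yet.

```typescript
// Typed provider shape suggested by the review (ProviderConfig is an
// assumption for illustration; the repo currently uses `config?: any`).
type ProviderConfig = {
  supported_accelerators?: string[];
};

type ProviderOption = {
  id: string;
  name: string;
  type?: string;
  config?: ProviderConfig;
};

// Optional chaining keeps the lookup safe when provider is undefined.
function supportedAccelerators(provider?: ProviderOption): string[] {
  return provider?.config?.supported_accelerators ?? [];
}
```

With this shape, TypeScript validates every field access instead of allowing arbitrary properties through `any`.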
```tsx
// Helper to check if a provider supports requested accelerators
const isProviderCompatible = React.useCallback(
  (provider: any, taskSupportedAccelerators: string | undefined) => {
    if (!taskSupportedAccelerators) return true;

    const supported = provider.config?.supported_accelerators || [];
    if (supported.length === 0) return true; // Default to compatible if not specified

    const reqAcc = String(taskSupportedAccelerators).toLowerCase();

    // Check for Apple Silicon
    if (
      (reqAcc.includes('apple') || reqAcc.includes('mps')) &&
      supported.includes('AppleSilicon')
    ) {
      return true;
    }

    // Check for NVIDIA
    if (
      (reqAcc.includes('nvidia') ||
        reqAcc.includes('cuda') ||
        reqAcc.includes('rtx') ||
        reqAcc.includes('a100') ||
        reqAcc.includes('h100') ||
        reqAcc.includes('v100')) &&
      supported.includes('NVIDIA')
    ) {
      return true;
    }

    // Check for AMD
    if (
      (reqAcc.includes('amd') || reqAcc.includes('rocm')) &&
      supported.includes('AMD')
    ) {
      return true;
    }

    // Check for CPU
    if (reqAcc.includes('cpu') && supported.includes('cpu')) {
      return true;
    }

    // If it's just a number, we assume it's NVIDIA/CUDA
    if (/^\d+$/.test(reqAcc)) {
      return supported.includes('NVIDIA');
    }

    return false;
  },
  [],
```
Add unit tests for provider compatibility logic.
isProviderCompatible now encodes multiple accelerator rules; please add unit coverage for Apple/NVIDIA/AMD/CPU/numeric cases and the “no supported accelerators” default.
As per coding guidelines, "Write unit tests for all utility functions and complex hooks in the frontend".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx` around
lines 132 - 183, Add unit tests covering all accelerator-matching branches of
isProviderCompatible in NewInteractiveTaskModal.tsx: Apple/MPS, NVIDIA
(including numeric-only requests), AMD/ROCm, CPU, the case where
provider.config.supported_accelerators is empty (defaults to compatible), and
when taskSupportedAccelerators is undefined. If isProviderCompatible is not
exported, first export it or extract it to a small utility so tests can import
it; write tests asserting true/false for combinations of
provider.config.supported_accelerators (e.g., ['AppleSilicon'], ['NVIDIA'],
['AMD'], ['cpu'], []) and request strings like "mps", "cuda", "a100", "rocm",
"cpu", "8" and ensure case-insensitivity.
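The tests requested above are easiest once the logic is extracted from the `React.useCallback`. The sketch below is a simplified, standalone reimplementation of the accelerator-matching rules for illustration; the function signature (taking the supported list directly instead of a provider object) and the lowercase normalization are assumptions, not the component's exact code.

```typescript
// Standalone version of the accelerator-matching logic so it can be
// unit tested. Comparisons are normalized to lowercase to keep the
// check case-insensitive (an assumption; the component currently
// compares against literals like 'AppleSilicon').
function isProviderCompatible(
  supported: string[],
  taskSupportedAccelerators?: string,
): boolean {
  if (!taskSupportedAccelerators) return true;
  if (supported.length === 0) return true; // default to compatible

  const reqAcc = taskSupportedAccelerators.toLowerCase();
  const sup = supported.map((s) => s.toLowerCase());

  // Apple Silicon
  if ((reqAcc.includes('apple') || reqAcc.includes('mps')) && sup.includes('applesilicon')) {
    return true;
  }
  // NVIDIA (by vendor, API, or well-known GPU model names)
  if (
    ['nvidia', 'cuda', 'rtx', 'a100', 'h100', 'v100'].some((k) => reqAcc.includes(k)) &&
    sup.includes('nvidia')
  ) {
    return true;
  }
  // AMD
  if ((reqAcc.includes('amd') || reqAcc.includes('rocm')) && sup.includes('amd')) {
    return true;
  }
  // CPU
  if (reqAcc.includes('cpu') && sup.includes('cpu')) {
    return true;
  }
  // A bare number (e.g. "8") is assumed to mean NVIDIA/CUDA GPUs
  if (/^\d+$/.test(reqAcc)) {
    return sup.includes('nvidia');
  }
  return false;
}
```

Each branch from the review prompt (Apple/MPS, NVIDIA including numeric-only requests, AMD/ROCm, CPU, empty supported list, undefined request) then becomes a one-line assertion.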
```tsx
{teamGallery
  .filter((task: any) =>
    isProviderCompatible(
      selectedProvider,
      task.supported_accelerators ||
        task.config?.supported_accelerators,
    ),
  )
  .map((task: any, index: number) => {
```
Replace `any` for team gallery tasks.
Lines 851 and 858 use `task: any`, defeating type safety for the `supported_accelerators`, `title`, `icon`, and `description` fields. Define a typed interface for team gallery items and use it in both the SWR response cast and the filter/map operations.
🧩 Suggested update

```diff
+type TeamGalleryTask = ImportedTask & {
+  title?: string;
+  description?: string;
+  icon?: string;
+  supported_accelerators?: string;
+  config?: {
+    supported_accelerators?: string;
+  };
+};
@@
-  if (
-    teamGalleryData?.status === 'success' &&
-    Array.isArray(teamGalleryData.data)
-  ) {
-    return teamGalleryData.data as any[];
-  }
+  if (
+    teamGalleryData?.status === 'success' &&
+    Array.isArray(teamGalleryData.data)
+  ) {
+    return teamGalleryData.data as TeamGalleryTask[];
+  }
@@
-  {teamGallery
-    .filter((task: any) =>
+  {teamGallery
+    .filter((task: TeamGalleryTask) =>
       isProviderCompatible(
         selectedProvider,
         task.supported_accelerators ||
           task.config?.supported_accelerators,
       ),
     )
-    .map((task: any, index: number) => {
+    .map((task: TeamGalleryTask, index: number) => {
```

Coding guideline: "Avoid using any type in TypeScript; define interfaces for all props and API responses to ensure type safety."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx` around
lines 850 - 858, Introduce a proper TypeScript interface (e.g., TeamGalleryItem)
describing fields used from teamGallery (supported_accelerators, config, title,
icon, description, etc.), use it when casting the SWR response that populates
teamGallery and replace the two occurrences of task: any in
NewInteractiveTaskModal's filter and map with task: TeamGalleryItem; ensure
isProviderCompatible still accepts selectedProvider and
task.supported_accelerators (or task.config?.supported_accelerators) without
casting, and update any imports/exports so the new interface is available where
the SWR response and component use teamGallery.
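A minimal sketch of the typed gallery item and the fallback lookup the review describes. The `TeamGalleryTask` shape and the `taskAccelerators` helper are hypothetical names for illustration; the component performs this lookup inline inside the filter.

```typescript
// Typed gallery item (illustrative subset of the fields the review
// lists). The accelerator hint may live at the top level or nested
// under config, so the lookup falls back between the two.
type TeamGalleryTask = {
  title?: string;
  description?: string;
  icon?: string;
  supported_accelerators?: string;
  config?: { supported_accelerators?: string };
};

function taskAccelerators(task: TeamGalleryTask): string | undefined {
  // Top-level field wins; fall back to the nested config value.
  return task.supported_accelerators || task.config?.supported_accelerators;
}
```

Typed this way, `filter` and `map` callbacks can drop `task: any` while keeping the same runtime behavior.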
```tsx
// Helper to check if a provider supports requested accelerators
const isProviderCompatible = React.useCallback(
  (provider: any) => {
    if (!taskResources || !taskResources.accelerators) return true;

    const supported = provider.config?.supported_accelerators || [];
    if (supported.length === 0) return true; // Default to compatible if not specified

    const reqAcc = String(taskResources.accelerators).toLowerCase();

    // Check for Apple Silicon
    if (
      (reqAcc.includes('apple') || reqAcc.includes('mps')) &&
      supported.includes('AppleSilicon')
    ) {
      return true;
    }

    // Check for NVIDIA
    if (
      (reqAcc.includes('nvidia') ||
        reqAcc.includes('cuda') ||
        reqAcc.includes('rtx') ||
        reqAcc.includes('a100') ||
        reqAcc.includes('h100') ||
        reqAcc.includes('v100')) &&
      supported.includes('NVIDIA')
    ) {
      return true;
    }

    // Check for AMD
    if (
      (reqAcc.includes('amd') || reqAcc.includes('rocm')) &&
      supported.includes('AMD')
    ) {
      return true;
    }

    // Check for CPU
    if (reqAcc.includes('cpu') && supported.includes('cpu')) {
      return true;
    }

    // If it's just a number, we assume it's NVIDIA/CUDA unless it's a local provider on Mac
    if (/^\d+$/.test(reqAcc)) {
      if (
        provider.type === 'local' &&
        serverInfoData?.device_type === 'mps'
      ) {
        return supported.includes('AppleSilicon');
      }
      return supported.includes('NVIDIA');
    }

    return false;
  },
  [taskResources, serverInfoData],
);
```
Hook dependency ordering issue: `isProviderCompatible` references variables defined later.
The `isProviderCompatible` callback at line 130 depends on `taskResources` (defined at line 213) and `serverInfoData` (defined at line 207). This works at runtime only because the callback body executes after those declarations are initialized, but React hooks should still be ordered so dependencies are defined before they are used, for predictable behavior and to avoid stale-closure issues.
Move this `useCallback` after the `useMemo` definitions for `serverInfoData` and `taskResources`, or restructure to ensure proper hook ordering.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/renderer/components/Experiment/Tasks/QueueTaskModal.tsx` around lines 129
- 187, The isProviderCompatible React.useCallback currently references
taskResources and serverInfoData which are declared later, violating hook
ordering; relocate the isProviderCompatible useCallback so it comes after the
definitions (useMemo / state) that produce taskResources and serverInfoData (or
refactor it into a plain function that reads those values at call time),
ensuring the dependency array [taskResources, serverInfoData] stays accurate and
avoiding stale closures.
```tsx
if (
  (reqAcc.includes('apple') || reqAcc.includes('mps')) &&
  supported.includes('AppleSilicon')
) {
  return true;
}

// Check for NVIDIA
if (
  (reqAcc.includes('nvidia') ||
    reqAcc.includes('cuda') ||
    reqAcc.includes('rtx') ||
    reqAcc.includes('a100') ||
    reqAcc.includes('h100') ||
    reqAcc.includes('v100')) &&
  supported.includes('NVIDIA')
) {
  return true;
}

// Check for AMD
if (
  (reqAcc.includes('amd') || reqAcc.includes('rocm')) &&
  supported.includes('AMD')
) {
  return true;
}

// Check for CPU
if (reqAcc.includes('cpu') && supported.includes('cpu')) {
  return true;
}
```
Case-sensitive comparison may cause compatibility mismatches.
The accelerator requirement string is lowercased (reqAcc = String(taskResources.accelerators).toLowerCase()), but the supported array values are compared case-sensitively (e.g., supported.includes('NVIDIA')). If provider configs have values with different casing (e.g., 'nvidia' vs 'NVIDIA'), the compatibility check will incorrectly return false.
Consider normalizing the supported array values for comparison:
♻️ Suggested fix

```diff
 const isProviderCompatible = React.useCallback(
   (provider: any) => {
     if (!taskResources || !taskResources.accelerators) return true;
-    const supported = provider.config?.supported_accelerators || [];
+    const supportedRaw = provider.config?.supported_accelerators || [];
+    const supported = supportedRaw.map((s: string) => s.toLowerCase());
     if (supported.length === 0) return true;
     const reqAcc = String(taskResources.accelerators).toLowerCase();
     // Check for Apple Silicon
     if (
       (reqAcc.includes('apple') || reqAcc.includes('mps')) &&
-      supported.includes('AppleSilicon')
+      supported.includes('applesilicon')
     ) {
       return true;
     }
     // ... similar changes for other checks
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/renderer/components/Experiment/Tasks/QueueTaskModal.tsx` around lines 140
- 171, The compatibility checks in QueueTaskModal.tsx compare the lowercased
reqAcc (String(taskResources.accelerators).toLowerCase()) against case-sensitive
supported entries (e.g., 'NVIDIA'), causing false negatives; fix by normalizing
supported before comparisons (e.g., create a lowercasedSupported =
supported.map(s => String(s).toLowerCase()) or otherwise compare via
s.toLowerCase()) and then use lowercasedSupported.includes(...) for all checks
(references: reqAcc and supported in the AppleSilicon / NVIDIA / AMD / CPU
condition blocks).
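The false negative and its fix can be shown in isolation. This sketch uses example values and invented helper names; it only contrasts a case-sensitive membership test against the normalized one suggested above.

```typescript
// Example request string, lowercased the way the component does it.
const reqAcc = 'nvidia a100'.toLowerCase();

// Case-sensitive check: fails when the provider config stores
// 'nvidia' but the code compares against the literal 'NVIDIA'.
function matchesCaseSensitive(supported: string[]): boolean {
  return reqAcc.includes('nvidia') && supported.includes('NVIDIA');
}

// Normalized check: lowercase the supported list once, then compare
// lowercase to lowercase so casing in the config no longer matters.
function matchesNormalized(supported: string[]): boolean {
  const lowered = supported.map((s) => String(s).toLowerCase());
  return reqAcc.includes('nvidia') && lowered.includes('nvidia');
}
```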
```tsx
let parsedConfig: any;
if (type === 'slurm') {
  parsedConfig = buildSlurmConfig();
} else {
  // The API expects an object for config, not a JSON string
  parsedConfig = typeof config === 'string' ? JSON.parse(config) : config;
  // Ensure supported_accelerators from state is included if set
  if (supportedAccelerators.length > 0) {
    parsedConfig.supported_accelerators = supportedAccelerators;
  }
```
Replace `parsedConfig: any` with a properly typed config shape to maintain type safety.
Line 282 introduces `any`, which weakens type safety. Additionally, the `createProvider` and `updateProvider` function signatures declare `config: String`, but objects are passed to them. Define a proper config type for both the variable and the function parameters.
🧩 Suggested update
+type ProviderConfig = {
+ supported_accelerators?: string[];
+ [key: string]: unknown;
+};
+
+async function createProvider(
+ name: String,
+ type: String,
+ config: string | ProviderConfig
+) {
return await fetchWithAuth(
getPath('compute_provider', ['create'], { teamId }),
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
- body: JSON.stringify({ name, type, config }),
+ body: JSON.stringify({
+ name,
+ type,
+ config: typeof config === 'string' ? config : JSON.stringify(config),
+ }),
},
);
}
+
+async function updateProvider(
+ id: String,
+ name: String,
+ config: string | ProviderConfig
+) {
@@
- let parsedConfig: any;
+ let parsedConfig: ProviderConfig;
@@
- parsedConfig = typeof config === 'string' ? JSON.parse(config) : config;
+ parsedConfig =
+ typeof config === 'string'
+ ? (JSON.parse(config) as ProviderConfig)
+ : (config as ProviderConfig);

Also refactor buildSlurmConfig() and parseSlurmConfig() to use the ProviderConfig type instead of any.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/renderer/components/Team/ProviderDetailsModal.tsx` around lines 282 -
291, Replace the loose any usage and mismatched string types by introducing a
typed interface (e.g. ProviderConfig) that represents the provider config shape
and use it for parsedConfig, the return/parameter types of buildSlurmConfig()
and parseSlurmConfig(), and for the config parameters of createProvider and
updateProvider (or use a union if some providers require strings), and update
any call sites accordingly; specifically change the parsedConfig declaration
from any to ProviderConfig, make buildSlurmConfig(): ProviderConfig and
parseSlurmConfig(): ProviderConfig, and change createProvider(config: string) /
updateProvider(config: string) signatures to accept ProviderConfig (or
ProviderConfig | string) so the passed object types match the function
signatures and preserve supported_accelerators typing.
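To make the proposed shape concrete, here is a minimal standalone sketch of the parse step, assuming the `ProviderConfig` shape suggested above; `parseProviderConfig` is a hypothetical helper name used only for illustration, not a function in the codebase:

```typescript
// Hypothetical ProviderConfig shape (mirrors the review suggestion);
// the real provider config may carry additional keys.
type ProviderConfig = {
  supported_accelerators?: string[];
  [key: string]: unknown;
};

// Normalize a config that may arrive either as a JSON string or as an
// already-parsed object, so downstream code never needs `any`.
function parseProviderConfig(config: string | ProviderConfig): ProviderConfig {
  return typeof config === 'string'
    ? (JSON.parse(config) as ProviderConfig)
    : config;
}

const fromString = parseProviderConfig(
  '{"supported_accelerators":["NVIDIA"]}',
);
const fromObject = parseProviderConfig({ supported_accelerators: ['AMD'] });
console.assert(fromString.supported_accelerators?.[0] === 'NVIDIA');
console.assert(fromObject.supported_accelerators?.[0] === 'AMD');
```

Typing both branches this way would also let `buildSlurmConfig()` and `parseSlurmConfig()` share one return type instead of `any`.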
// Ensure supported_accelerators from state is included if set
if (supportedAccelerators.length > 0) {
  parsedConfig.supported_accelerators = supportedAccelerators;
}
Allow clearing supported accelerators on edit.
When the selection is cleared, the old supported_accelerators value stays in config. Explicitly delete it when the array is empty.
🧹 Suggested update
- if (supportedAccelerators.length > 0) {
- parsedConfig.supported_accelerators = supportedAccelerators;
- }
+ if (supportedAccelerators.length > 0) {
+ parsedConfig.supported_accelerators = supportedAccelerators;
+ } else {
+ delete parsedConfig.supported_accelerators;
+ }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// Ensure supported_accelerators from state is included if set
if (supportedAccelerators.length > 0) {
  parsedConfig.supported_accelerators = supportedAccelerators;
} else {
  delete parsedConfig.supported_accelerators;
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/renderer/components/Team/ProviderDetailsModal.tsx` around lines 288 -
291, The current logic only sets parsedConfig.supported_accelerators when
supportedAccelerators has values, but doesn't remove an existing
supported_accelerators when the user clears the selection; update the code where
parsedConfig is built (the block referencing supportedAccelerators and
parsedConfig.supported_accelerators) to explicitly delete
parsedConfig.supported_accelerators when supportedAccelerators.length === 0 so
clearing the UI removes the key from the saved config.
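A pure-function sketch of the clear-on-empty behavior described above; it assumes the `ProviderConfig` shape from the earlier suggestion, and `applyAccelerators` is a hypothetical name used only for illustration:

```typescript
type ProviderConfig = {
  supported_accelerators?: string[];
  [key: string]: unknown;
};

// Setting the key only when the selection is non-empty leaves a stale
// value behind on edit; deleting it when empty makes "clear selection"
// round-trip correctly through the saved config.
function applyAccelerators(
  config: ProviderConfig,
  selected: string[],
): ProviderConfig {
  const next = { ...config };
  if (selected.length > 0) {
    next.supported_accelerators = selected;
  } else {
    delete next.supported_accelerators;
  }
  return next;
}

const stale: ProviderConfig = {
  supported_accelerators: ['NVIDIA'],
  partition: 'gpu',
};
const cleared = applyAccelerators(stale, []);
console.assert(!('supported_accelerators' in cleared)); // stale key removed
console.assert(cleared.partition === 'gpu'); // other keys untouched
```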
"interactive_type": "ollama",
"name": "Ollama Server (macOS)",
"description": "Ollama model server for macOS (no tunnel)",
"setup": "if ! command -v uv >/dev/null 2>&1; then curl -LsSf https://astral.sh/uv/install.sh | sh; fi; export PATH=\"$HOME/.local/bin:$HOME/.cargo/bin:$PATH\"; uv venv ~/ollama-venv --seed --python 3.11 && . ~/ollama-venv/bin/activate && uv pip install open-webui;",
There might be a problem with this one I think. It first ran alembic upgrade for some reason re-running all our migrations.
I think it also didn't run ollama correctly, as I can see the openwebui interface but the ollama server never started for me.
I think it might be because the setup command never installs ollama?
╰ cat /tmp/ollama.log
/bin/bash: ollama: command not found
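The log above (`ollama: command not found`) is consistent with the template's `setup` string installing `uv` and `open-webui` but never Ollama itself. One possible fix, sketched as a revised `setup` value — this assumes Homebrew is available on the macOS host, which the original template does not guarantee:

```json
{
  "setup": "if ! command -v ollama >/dev/null 2>&1; then brew install ollama; fi; if ! command -v uv >/dev/null 2>&1; then curl -LsSf https://astral.sh/uv/install.sh | sh; fi; export PATH=\"$HOME/.local/bin:$HOME/.cargo/bin:$PATH\"; uv venv ~/ollama-venv --seed --python 3.11 && . ~/ollama-venv/bin/activate && uv pip install open-webui;"
}
```

Without some install step like this, the later `ollama serve` in the launch command can only work on machines where Ollama was installed out of band.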
> [!CAUTION]
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)

src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx (4)
1-30: ⚠️ Potential issue | 🟡 Minor

Fix Prettier check failure.

CI reports Prettier formatting issues in this file; please run `prettier --write` to unblock the pipeline.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx` around lines 1 - 30, The file NewInteractiveTaskModal has Prettier formatting violations (likely in the import block and JSX/spacing); run the formatter to fix it by running `prettier --write` on this file (or your project's configured formatting command), or apply the project's Prettier rules to reformat the import statements and component code around NewInteractiveTaskModal and the large MUI import list (Modal, DialogTitle, DialogContent, ModalDialog, Stack, etc.); then re-run lint/CI to ensure the Prettier check passes.
55-63: ⚠️ Potential issue | 🟡 Minor

Fix `supported_accelerators` type to match actual usage.

The type is declared as `string` but the component treats it as `string | string[]`. At the rendering layer, the code explicitly checks `Array.isArray(template.supported_accelerators)` and handles both cases, which indicates the type definition is incomplete.

Suggested fix

 type InteractiveTemplate = {
   id: string;
   interactive_type: string;
   name: string;
   description: string;
   env_parameters?: ConfigField[];
   icon?: string;
-  supported_accelerators?: string;
+  supported_accelerators?: string | string[];
 };

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx` around lines 55 - 63, The InteractiveTemplate type incorrectly declares supported_accelerators as string; update its declaration in NewInteractiveTaskModal.tsx to supported_accelerators?: string | string[] so it matches runtime checks (Array.isArray(template.supported_accelerators)) and existing render logic; ensure any code that reads template.supported_accelerators (e.g., rendering conditionals or mapping) still compiles with the new union type (add narrowings or Array.isArray checks where needed).
382-398: ⚠️ Potential issue | 🟠 Major

Add `isLocal` to `canSubmit` dependency array.

The memoized validation logic uses `isLocal` to filter required fields (line 389), but `isLocal` is missing from the dependency array. When users toggle local mode, the memoized value won't recompute, causing the submit button to show incorrect enabled/disabled state.

✅ Dependency fix

- }, [title, selectedProviderId, selectedTemplate, configFieldValues]);
+ }, [title, selectedProviderId, selectedTemplate, configFieldValues, isLocal]);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx` around lines 382 - 398, The memoized canSubmit (defined with React.useMemo) uses isLocal when filtering requiredFields but isLocal is not listed in the dependency array; update the dependency array for canSubmit to include isLocal (alongside title, selectedProviderId, selectedTemplate, configFieldValues) so the validation recomputes when local mode toggles, ensuring the submit button state stays correct.
72-85: ⚠️ Potential issue | 🔴 Critical

Add `template_id` to the `onSubmit` data type.

`handleSubmit` sends `template_id` (line 375), but the props type doesn't include it, causing a TypeScript type mismatch.

Suggested type fix

 data: {
   title: string;
   cpus?: string;
   memory?: string;
   accelerators?: string;
   interactive_type: 'vscode' | 'jupyter' | 'vllm' | 'ssh' | 'ollama';
+  template_id: string;
   provider_id?: string;
   env_parameters?: Record<string, string>;
   local?: boolean;
 },

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx` around lines 72 - 85, The onSubmit type in NewInteractiveTaskModalProps is missing template_id even though handleSubmit sends template_id, causing a TypeScript mismatch; update the onSubmit data shape in NewInteractiveTaskModalProps to include template_id?: string (or string | undefined if preferred) so the submitted payload from handleSubmit (where template_id is used) matches the declared prop type and TypeScript errors are resolved.
♻️ Duplicate comments (3)
src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx (3)
856-865: Replace `any` for team gallery items.

This repeats a previously flagged issue; add a typed interface for team gallery tasks and use it in the filter/map.

As per coding guidelines: Avoid using `any` type in TypeScript; define interfaces for all props and API responses to ensure type safety.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx` around lines 856 - 865, Define a proper TypeScript interface for team gallery items (e.g., TeamGalleryItem) that includes fields used here like supported_accelerators, config (with supported_accelerators), title/label, id, etc., then replace the use of any in the filter/map callbacks by typing teamGallery as TeamGalleryItem[] (or casting elements as TeamGalleryItem) and update the callback signatures to (task: TeamGalleryItem, index: number) so isProviderCompatible(selectedProvider, task.supported_accelerators || task.config?.supported_accelerators) and subsequent uses of task are typed; update the NewInteractiveTaskModal component props/state types accordingly so teamGallery is strongly typed throughout.
38-43: Replace `any` in provider config and guard undefined providers.

This repeats a previously flagged issue (`config?: any`, `provider: any`) and can crash when `provider` is undefined.

As per coding guidelines: Avoid using `any` type in TypeScript; define interfaces for all props and API responses to ensure type safety.

Also applies to: 132-138
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx` around lines 38 - 43, The ProviderOption type and usages in NewInteractiveTaskModal use `any` and assume `provider` is defined; replace `config?: any` with a stricter type (e.g., an interface like ProviderConfig or `Record<string, unknown>`/`unknown`) and add a runtime guard where `provider` is accessed (in the component logic around the provider selection and the code at lines referenced ~132-138) to handle undefined providers safely—update the ProviderOption type definition, change places that declare `provider: any` to the new typed shape, and add null/undefined checks before reading provider.config or calling provider-specific methods so the modal no longer crashes when provider is missing.
132-183: Add unit tests for `isProviderCompatible`.

This utility encodes multiple accelerator rules and needs coverage; this was already flagged in a prior review.

As per coding guidelines: Write unit tests for all utility functions and complex hooks in the frontend.

🤖 Prompt for AI Agents
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx` around lines 132 - 183, Add unit tests covering the isProviderCompatible callback in NewInteractiveTaskModal.tsx: create tests that assert true/false for AppleSilicon matches when taskSupportedAccelerators contains "apple" or "mps" and provider.config.supported_accelerators includes/excludes "AppleSilicon"; similar tests for NVIDIA variants ("nvidia","cuda","rtx","a100","h100","v100") against "NVIDIA"; AMD variants ("amd","rocm") against "AMD"; CPU strings against "cpu"; a numeric accelerator string (e.g., "8") should map to NVIDIA presence; and edge cases where provider.config.supported_accelerators is empty and where taskSupportedAccelerators is undefined (both should return true). Ensure each test uses the isProviderCompatible function reference from NewInteractiveTaskModal and covers both positive and negative matches.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx`:
- Around line 1-30: The file NewInteractiveTaskModal has Prettier formatting
violations (likely in the import block and JSX/spacing); run the formatter to
fix it by running `prettier --write` on this file (or your project's configured
formatting command), or apply the project's Prettier rules to reformat the
import statements and component code around NewInteractiveTaskModal and the
large MUI import list (Modal, DialogTitle, DialogContent, ModalDialog, Stack,
etc.); then re-run lint/CI to ensure the Prettier check passes.
- Around line 55-63: The InteractiveTemplate type incorrectly declares
supported_accelerators as string; update its declaration in
NewInteractiveTaskModal.tsx to supported_accelerators?: string | string[] so it
matches runtime checks (Array.isArray(template.supported_accelerators)) and
existing render logic; ensure any code that reads
template.supported_accelerators (e.g., rendering conditionals or mapping) still
compiles with the new union type (add narrowings or Array.isArray checks where
needed).
- Around line 382-398: The memoized canSubmit (defined with React.useMemo) uses
isLocal when filtering requiredFields but isLocal is not listed in the
dependency array; update the dependency array for canSubmit to include isLocal
(alongside title, selectedProviderId, selectedTemplate, configFieldValues) so
the validation recomputes when local mode toggles, ensuring the submit button
state stays correct.
- Around line 72-85: The onSubmit type in NewInteractiveTaskModalProps is
missing template_id even though handleSubmit sends template_id, causing a
TypeScript mismatch; update the onSubmit data shape in
NewInteractiveTaskModalProps to include template_id?: string (or string |
undefined if preferred) so the submitted payload from handleSubmit (where
template_id is used) matches the declared prop type and TypeScript errors are
resolved.
---
Duplicate comments:
In `@src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx`:
- Around line 856-865: Define a proper TypeScript interface for team gallery
items (e.g., TeamGalleryItem) that includes fields used here like
supported_accelerators, config (with supported_accelerators), title/label, id,
etc., then replace the use of any in the filter/map callbacks by typing
teamGallery as TeamGalleryItem[] (or casting elements as TeamGalleryItem) and
update the callback signatures to (task: TeamGalleryItem, index: number) so
isProviderCompatible(selectedProvider, task.supported_accelerators ||
task.config?.supported_accelerators) and subsequent uses of task are typed;
update the NewInteractiveTaskModal component props/state types accordingly so
teamGallery is strongly typed throughout.
- Around line 38-43: The ProviderOption type and usages in
NewInteractiveTaskModal use `any` and assume `provider` is defined; replace
`config?: any` with a stricter type (e.g., an interface like ProviderConfig or
`Record<string, unknown>`/`unknown`) and add a runtime guard where `provider` is
accessed (in the component logic around the provider selection and the code at
lines referenced ~132-138) to handle undefined providers safely—update the
ProviderOption type definition, change places that declare `provider: any` to
the new typed shape, and add null/undefined checks before reading
provider.config or calling provider-specific methods so the modal no longer
crashes when provider is missing.
- Around line 132-183: Add unit tests covering the isProviderCompatible callback
in NewInteractiveTaskModal.tsx: create tests that assert true/false for
AppleSilicon matches when taskSupportedAccelerators contains "apple" or "mps"
and provider.config.supported_accelerators includes/excludes "AppleSilicon";
similar tests for NVIDIA variants ("nvidia","cuda","rtx","a100","h100","v100")
against "NVIDIA"; AMD variants ("amd","rocm") against "AMD"; CPU strings against
"cpu"; a numeric accelerator string (e.g., "8") should map to NVIDIA presence;
and edge cases where provider.config.supported_accelerators is empty and where
taskSupportedAccelerators is undefined (both should return true). Ensure each
test uses the isProviderCompatible function reference from
NewInteractiveTaskModal and covers both positive and negative matches.
🧹 Nitpick comments (1)
api/transformerlab/galleries/interactive-gallery.json (1)
114-115: Consider loopback-only binding for local (no-tunnel) mode.

Since this is explicitly “no tunnel,” binding to `0.0.0.0` exposes the services on the LAN. If the intent is local-only access, prefer loopback (`127.0.0.1`) or make it configurable.

Suggested change

- "command": "export OLLAMA_HOST=0.0.0.0:11434 && ollama serve > /tmp/ollama.log 2>&1 & sleep 3 && ollama pull $MODEL_NAME > /tmp/ollama-pull.log 2>&1 & sleep 5 && OLLAMA_BASE_URL=http://127.0.0.1:11434 ~/ollama-venv/bin/open-webui serve --host 0.0.0.0 --port 8080 > /tmp/openwebui.log 2>&1 & sleep 5 && echo 'Local Ollama API: http://localhost:11434'; echo 'Local Open WebUI: http://localhost:8080'; tail -f /tmp/ollama.log /tmp/ollama-pull.log /tmp/openwebui.log",
+ "command": "export OLLAMA_HOST=127.0.0.1:11434 && ollama serve > /tmp/ollama.log 2>&1 & sleep 3 && ollama pull $MODEL_NAME > /tmp/ollama-pull.log 2>&1 & sleep 5 && OLLAMA_BASE_URL=http://127.0.0.1:11434 ~/ollama-venv/bin/open-webui serve --host 127.0.0.1 --port 8080 > /tmp/openwebui.log 2>&1 & sleep 5 && echo 'Local Ollama API: http://localhost:11434'; echo 'Local Open WebUI: http://localhost:8080'; tail -f /tmp/ollama.log /tmp/ollama-pull.log /tmp/openwebui.log",

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/transformerlab/galleries/interactive-gallery.json` around lines 114 - 115, The current startup script in the "command" and "setup" strings binds services to 0.0.0.0 (e.g., OLLAMA_HOST and the open-webui serve call), exposing them to the LAN; change those bindings to loopback (127.0.0.1) or make the host configurable so local "no-tunnel" runs are loopback-only: update occurrences where "ollama serve" is started (and where OLLAMA_HOST is set) and where open-webui is served (the open-webui serve --host flag) to use 127.0.0.1 (or read a config/env var like OLLAMA_BIND or WEBUI_HOST) so services are not publicly reachable by default.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@api/transformerlab/galleries/interactive-gallery.json`:
- Around line 114-115: The current startup script in the "command" and "setup"
strings binds services to 0.0.0.0 (e.g., OLLAMA_HOST and the open-webui serve
call), exposing them to the LAN; change those bindings to loopback (127.0.0.1)
or make the host configurable so local "no-tunnel" runs are loopback-only:
update occurrences where "ollama serve" is started (and where OLLAMA_HOST is
set) and where open-webui is served (the open-webui serve --host flag) to use
127.0.0.1 (or read a config/env var like OLLAMA_BIND or WEBUI_HOST) so services
are not publicly reachable by default.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx (3)
382-397: ⚠️ Potential issue | 🟠 Major

Add `isLocal` to the dependency array.

The `canSubmit` memo uses `isLocal` in the filter logic (line 389) but it's missing from the dependency array. This will cause stale validation when toggling local mode.

   return true;
- }, [title, selectedProviderId, selectedTemplate, configFieldValues]);
+ }, [title, selectedProviderId, selectedTemplate, configFieldValues, isLocal]);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx` around lines 382 - 397, canSubmit's useMemo reads isLocal but does not include it in the dependency array, causing stale validation when toggling local mode; update the dependency array of the canSubmit memo (the React.useMemo that references title, selectedProviderId, selectedTemplate, configFieldValues) to also include isLocal so the memo re-evaluates whenever local mode changes.
72-87: ⚠️ Potential issue | 🟡 Minor

Add `template_id` to the `onSubmit` data type.

Line 373 adds `template_id: selectedTemplate.id` to the payload, but this field is not declared in the `onSubmit` prop type definition. Update the type to include it:

 onSubmit: (
   data: {
     title: string;
     cpus?: string;
     memory?: string;
     accelerators?: string;
     interactive_type: 'vscode' | 'jupyter' | 'vllm' | 'ssh' | 'ollama';
     provider_id?: string;
     env_parameters?: Record<string, string>;
     local?: boolean;
+    template_id?: string;
   },
   shouldLaunch?: boolean,
 ) => void;

Also applies to: 373-373
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx` around lines 72 - 87, The onSubmit payload type in NewInteractiveTaskModalProps is missing the template_id field; update the onSubmit data type (in the NewInteractiveTaskModalProps declaration) to include template_id?: string (or string | undefined as appropriate) so the payload that sets template_id: selectedTemplate.id on submit matches the prop type, ensuring the onSubmit signature and the code that references selectedTemplate.id are consistent.
209-217: 🛠️ Refactor suggestion | 🟠 Major

Replace `any[]` cast with a proper typed interface.

Define a type for team gallery tasks to maintain type safety:

+type TeamGalleryTask = {
+  id?: string;
+  name?: string;
+  title?: string;
+  description?: string;
+  icon?: string;
+  supported_accelerators?: string;
+  config?: {
+    supported_accelerators?: string;
+  };
+};
+
 const teamGallery = React.useMemo(() => {
   if (
     teamGalleryData?.status === 'success' &&
     Array.isArray(teamGalleryData.data)
   ) {
-    return teamGalleryData.data as any[];
+    return teamGalleryData.data as TeamGalleryTask[];
   }
   return [];
 }, [teamGalleryData]);

As per coding guidelines: "Avoid using `any` type in TypeScript; define interfaces for all props and API responses to ensure type safety."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx` around lines 209 - 217, The teamGallery memo currently casts API results to any[]; define a proper interface (e.g., TeamGalleryTask) describing the shape returned by the team gallery endpoint and use it instead of any[]. Update the relevant types where the data is fetched/returned (the API response type for teamGalleryData) and change the useMemo signature to return TeamGalleryTask[] (and cast as TeamGalleryTask[] only if needed). Ensure imports/types for TeamGalleryTask are added and adjust any downstream usages of teamGallery to the new typed properties.
♻️ Duplicate comments (3)
src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx (3)
858-866: 🛠️ Refactor suggestion | 🟠 Major

Replace `any` type annotations with `TeamGalleryTask`.

After defining the `TeamGalleryTask` type, update the filter and map callbacks:

 {teamGallery
-  .filter((task: any) =>
+  .filter((task: TeamGalleryTask) =>
     isProviderCompatible(
       selectedProvider,
       task.supported_accelerators || task.config?.supported_accelerators,
     ),
   )
-  .map((task: any, index: number) => {
+  .map((task: TeamGalleryTask, index: number) => {

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx` around lines 858 - 866, Update the anonymous callback parameter types in the teamGallery filter and map to use the defined TeamGalleryTask type instead of any: change the filter callback signature (task: any) to (task: TeamGalleryTask) and the map callback signature (task: any, index: number) to (task: TeamGalleryTask, index: number), keeping the existing uses of isProviderCompatible and selectedProvider unchanged so TypeScript correctly infers task.supported_accelerators and task.config?.supported_accelerators.
38-43: 🛠️ Refactor suggestion | 🟠 Major

Define a typed interface for `config` instead of using `any`.

The `config?: any` defeats TypeScript's type checking. Based on usage in the component (accessing `config.supported_accelerators`), define a proper type:

+type ProviderConfig = {
+  supported_accelerators?: string[];
+};
+
 type ProviderOption = {
   id: string;
   name: string;
   type?: string;
-  config?: any;
+  config?: ProviderConfig;
 };

As per coding guidelines: "Avoid using `any` type in TypeScript; define interfaces for all props and API responses to ensure type safety."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx` around lines 38 - 43, The ProviderOption type uses config?: any which bypasses type checking; replace it with a typed interface (e.g., ProviderConfig) that captures fields the component reads such as supported_accelerators: string[] (and any other known keys), then change ProviderOption.config?: ProviderConfig; update references in NewInteractiveTaskModal where config.supported_accelerators is accessed to rely on the new typed shape and adjust optional chaining if needed (e.g., config?.supported_accelerators) to satisfy nullability.
132-184: ⚠️ Potential issue | 🟠 Major

Fix type annotation and add null safety for the `provider` parameter.

The function uses `provider: any` but is called with `selectedProvider` which can be `undefined`. This will cause a runtime error when accessing `provider.config`.

 const isProviderCompatible = React.useCallback(
-  (provider: any, taskSupportedAccelerators: string | undefined) => {
+  (provider: ProviderOption | undefined, taskSupportedAccelerators: string | undefined) => {
     if (!taskSupportedAccelerators) return true;
-    const supported = provider.config?.supported_accelerators || [];
+    const supported = provider?.config?.supported_accelerators || [];
     if (supported.length === 0) return true;

This change requires the `ProviderConfig` type from the previous suggestion.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx` around lines 132 - 184, The isProviderCompatible callback currently types provider as any and assumes provider is defined, causing runtime errors when called with selectedProvider which can be undefined; update the signature to accept provider: ProviderConfig | undefined (import or define ProviderConfig) and add a null/undefined guard at the top (e.g., return true/false or treat missing config as compatible per existing logic) before accessing provider.config; also adjust uses of provider.config?.supported_accelerators to handle provider being undefined and keep the existing accelerator-matching logic intact.
🧹 Nitpick comments (1)
api/transformerlab/routers/experiment/jobs.py (1)
446-449: Inconsistent provider type check compared to similar code.

This code correctly uses `ProviderType.LOCAL.value`, but the equivalent code in `get_provider_job_logs` at line 327 uses the string literal `"local"`:

# Line 327
if getattr(provider, "type", None) == "local" and hasattr(provider_instance, "extra_config"):

Consider updating line 327 to also use the enum for consistency.

Additionally, this logic is duplicated between both functions. You could extract it into a small helper to reduce duplication.

♻️ Suggested fix for consistency at line 327

 # For local provider, set workspace_dir (job dir) so LocalProvider can read logs.
 # Use the dedicated local-only directory so this works even when TFL_REMOTE_STORAGE_ENABLED is set.
-if getattr(provider, "type", None) == "local" and hasattr(provider_instance, "extra_config"):
+if getattr(provider, "type", None) == ProviderType.LOCAL.value and hasattr(provider_instance, "extra_config"):
     job_dir = get_local_provider_job_dir(job_id, org_id=user_and_team["team_id"])
     provider_instance.extra_config["workspace_dir"] = job_dir

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/transformerlab/routers/experiment/jobs.py` around lines 446 - 449, The provider-type check in get_provider_job_logs currently compares to the string literal "local"; change that check to use ProviderType.LOCAL.value for consistency with the other block (use the same getattr(provider, "type", None) == ProviderType.LOCAL.value pattern) and ensure the hasattr(provider_instance, "extra_config") guard remains; then extract the duplicated logic that computes the local job_dir and sets provider_instance.extra_config["workspace_dir"] into a small helper (e.g., set_local_provider_workspace(provider, provider_instance, job_id, user_and_team) or set_local_workspace_dir) and call it from both get_provider_job_logs and the other function (referencing get_local_provider_job_dir and provider_instance.extra_config inside the helper).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx`:
- Around line 382-397: canSubmit's useMemo reads isLocal but does not include it
in the dependency array, causing stale validation when toggling local mode;
update the dependency array of the canSubmit memo (the React.useMemo that
references title, selectedProviderId, selectedTemplate, configFieldValues) to
also include isLocal so the memo re-evaluates whenever local mode changes.
- Around line 72-87: The onSubmit payload type in NewInteractiveTaskModalProps
is missing the template_id field; update the onSubmit data type (in the
NewInteractiveTaskModalProps declaration) to include template_id?: string (or
string | undefined as appropriate) so the payload that sets template_id:
selectedTemplate.id on submit matches the prop type, ensuring the onSubmit
signature and the code that references selectedTemplate.id are consistent.
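A minimal sketch of the widened prop type, assuming a payload shape inferred from the review comment (the real `NewInteractiveTaskModalProps` may carry more fields):

```typescript
// Hypothetical payload shape; only template_id is taken from the review
// comment, the other fields are illustrative assumptions.
interface InteractiveTaskPayload {
  title: string;
  provider_id?: string;
  local?: boolean;
  template_id?: string; // newly added so the submit code type-checks
}

interface NewInteractiveTaskModalProps {
  onSubmit: (data: InteractiveTaskPayload) => void;
}

const selectedTemplate = { id: 'jupyter-basic' };

// The submit handler can now include template_id without a type error:
const payload: InteractiveTaskPayload = {
  title: 'Notebook session',
  local: true,
  template_id: selectedTemplate.id,
};

console.log(payload.template_id); // "jupyter-basic"
```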
- Around line 209-217: The teamGallery memo currently casts API results to
any[]; define a proper interface (e.g., TeamGalleryTask) describing the shape
returned by the team gallery endpoint and use it instead of any[]. Update the
relevant types where the data is fetched/returned (the API response type for
teamGalleryData) and change the useMemo signature to return TeamGalleryTask[]
(and cast as TeamGalleryTask[] only if needed). Ensure imports/types for
TeamGalleryTask are added and adjust any downstream usages of teamGallery to the
new typed properties.
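One way to replace the `any[]` cast is a narrow interface covering only the fields the component reads. The field list below is an assumption based on the accelerator checks mentioned in these comments, not the endpoint's documented schema:

```typescript
// Hypothetical shape for items returned by the team gallery endpoint;
// only the fields this component actually reads are declared.
interface TeamGalleryTask {
  id: string;
  name: string;
  supported_accelerators?: string[];
  config?: { supported_accelerators?: string[] };
}

// Typed in place of `as any[]` (sample data stands in for the API response):
const teamGalleryData: unknown = [
  { id: 't1', name: 'Jupyter', supported_accelerators: ['cpu', 'cuda'] },
];
const teamGallery = (teamGalleryData ?? []) as TeamGalleryTask[];

// Downstream code now gets completion and checking on these properties:
console.log(teamGallery[0].supported_accelerators?.includes('cuda')); // true
```

Keeping the interface minimal avoids coupling the component to fields it never touches.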
---
Duplicate comments:
In `@src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx`:
- Around line 858-866: Update the anonymous callback parameter types in the
teamGallery filter and map to use the defined TeamGalleryTask type instead of
any: change the filter callback signature (task: any) to (task: TeamGalleryTask)
and the map callback signature (task: any, index: number) to (task:
TeamGalleryTask, index: number), keeping the existing uses of
isProviderCompatible and selectedProvider unchanged so TypeScript correctly
infers task.supported_accelerators and task.config?.supported_accelerators.
- Around line 38-43: The ProviderOption type uses config?: any which bypasses
type checking; replace it with a typed interface (e.g., ProviderConfig) that
captures fields the component reads such as supported_accelerators: string[]
(and any other known keys), then change ProviderOption.config?: ProviderConfig;
update references in NewInteractiveTaskModal where config.supported_accelerators
is accessed to rely on the new typed shape and adjust optional chaining if
needed (e.g., config?.supported_accelerators) to satisfy nullability.
- Around line 132-184: The isProviderCompatible callback currently types
provider as any and assumes provider is defined, causing runtime errors when
called with selectedProvider which can be undefined; update the signature to
accept provider: ProviderConfig | undefined (import or define ProviderConfig)
and add a null/undefined guard at the top (e.g., return true/false or treat
missing config as compatible per existing logic) before accessing
provider.config; also adjust uses of provider.config?.supported_accelerators to
handle provider being undefined and keep the existing accelerator-matching logic
intact.
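The guard described above can be sketched as follows. The treatment of a missing accelerator list as "compatible", and returning `false` for an undefined provider, are assumptions; the review comment leaves that choice to the existing logic.

```typescript
// Typed replacement for `config?: any`; fields are assumptions drawn from
// the review comments.
interface ProviderConfig {
  config?: { supported_accelerators?: string[] };
}

function isProviderCompatible(
  provider: ProviderConfig | undefined,
  taskAccelerators?: string[],
): boolean {
  // Guard: selectedProvider can be undefined before the user picks one.
  if (!provider) return false;
  const supported = provider.config?.supported_accelerators;
  // Assumption: no declared constraint on either side means compatible.
  if (!supported || !taskAccelerators) return true;
  return taskAccelerators.some((a) => supported.includes(a));
}

console.log(isProviderCompatible(undefined, ['cuda'])); // false
console.log(
  isProviderCompatible({ config: { supported_accelerators: ['cuda'] } }, ['cuda']),
); // true
console.log(isProviderCompatible({}, ['cuda'])); // true
```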
---
Nitpick comments:
In `@api/transformerlab/routers/experiment/jobs.py`:
- Around line 446-449: The provider-type check in get_provider_job_logs
currently compares to the string literal "local"; change that check to use
ProviderType.LOCAL.value for consistency with the other block (use the same
getattr(provider, "type", None) == ProviderType.LOCAL.value pattern) and ensure
the hasattr(provider_instance, "extra_config") guard remains; then extract the
duplicated logic that computes the local job_dir and sets
provider_instance.extra_config["workspace_dir"] into a small helper (e.g.,
set_local_provider_workspace(provider, provider_instance, job_id, user_and_team)
or set_local_workspace_dir) and call it from both get_provider_job_logs and the
other function (referencing get_local_provider_job_dir and
provider_instance.extra_config inside the helper).
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- api/transformerlab/routers/experiment/jobs.py
- src/renderer/components/Experiment/Tasks/NewInteractiveTaskModal.tsx
Summary by CodeRabbit

- New Features
- UI/UX Improvements
- Data/Config