Issue about model downloads #2

@Flame-byte

Description

This issue arises when using version 0.1.0 #1 on a Windows system.

When I started the FLM server for the first time, it failed to pull the model files. I later found that the failure was caused by the network connection.

Model: llama3.2:1B
Name: Llama-3.2-1B-NPU2
Missing files:
- config.json
- tokenizer.json
- attn.xclbin
- mm.xclbin
- dequant.xclbin
- layer.xclbin
- lm_head.xclbin
- model.q4nx
[PULL] Downloading 8 files...
Downloading 1/8: config.json?download=true
CURL error: Timeout was reached
Failed to download: https://huggingface.co/FastFlowLM/Llama-3.2-1B-NPU2/resolve/main/config.json?download=true
[ERROR] Failed to download model files.
[FLM] Loading model: C:\Users\wisemodel-aipc-04-b\flm/../../models/Llama-3.2-1B-NPU2
Failed to open file: C:\Users\wisemodel-aipc-04-b\flm/../../models/Llama-3.2-1B-NPU2

In some regions, pulling the model may fail due to network issues.
To work around this, you can find mirrors of the project's models on a Hugging Face mirror site available in your area. For example, you can use the following link:

https://hf-mirror.com/FastFlowLM

After pulling the model, place the model files in the following path, for example:

C:\Users\models\Llama-3.2-1B-NPU2
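As a concrete sketch, the mirror download can be scripted with curl. The model name and file list are taken from the log above; the destination path is an assumption and should be adapted to where your FLM version looks for models (on Windows you can run this from Git Bash, or translate it to PowerShell):

```shell
#!/bin/sh
# Sketch: fetch the Llama-3.2-1B-NPU2 files from the hf-mirror.com mirror.
MODEL="FastFlowLM/Llama-3.2-1B-NPU2"
DEST="$HOME/models/Llama-3.2-1B-NPU2"   # assumption: adjust to your FLM models directory
mkdir -p "$DEST"

# File list as reported by the [PULL] log above.
for f in config.json tokenizer.json attn.xclbin mm.xclbin \
         dequant.xclbin layer.xclbin lm_head.xclbin model.q4nx; do
  url="https://hf-mirror.com/${MODEL}/resolve/main/${f}?download=true"
  echo "Would fetch: $url"
  # Uncomment to actually download (retries help on flaky connections):
  # curl -fL --retry 3 -o "$DEST/$f" "$url"
done
```

The curl line is left commented out so the script first prints the URLs it would use; verify they resolve in your browser before downloading all eight files.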

This way, FLM can access the model.

By the way, in v0.1.1 #3 the model directory was moved to the following path:
https://github.com/FastFlowLM/FastFlowLM/releases/tag/v0.1.1

C:\Users\<USER>\Documents\flm\models\
