can't read file xx.jng #3

@EduardTalianu

Description

Probably I'm doing something wrong, but here it is on Kali Linux:

./minicpmv-cli -m ggml-model-Q4_K_M.gguf --mmproj mmproj-model-f16.gguf -c 4096 --temp 0.7 --top-p 0.8 --top-k 100 --repeat-penalty 1.05 --image xx.jng -p "What is in the image?"
Log start
clip_model_load: description: image encoder for MiniCPM-V
clip_model_load: GGUF version: 3
clip_model_load: alignment: 32
clip_model_load: n_tensors: 455
clip_model_load: n_kv: 18
clip_model_load: ftype: f16

clip_model_load: loaded meta data with 18 key-value pairs and 455 tensors from mmproj-model-f16.gguf
clip_model_load: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
clip_model_load: - kv 0: general.architecture str = clip
clip_model_load: - kv 1: clip.has_text_encoder bool = false
clip_model_load: - kv 2: clip.has_vision_encoder bool = true
clip_model_load: - kv 3: clip.has_minicpmv_projector bool = true
clip_model_load: - kv 4: general.file_type u32 = 1
clip_model_load: - kv 5: general.description str = image encoder for MiniCPM-V
clip_model_load: - kv 6: clip.projector_type str = resampler
clip_model_load: - kv 7: clip.vision.image_size u32 = 448
clip_model_load: - kv 8: clip.vision.patch_size u32 = 14
clip_model_load: - kv 9: clip.vision.embedding_length u32 = 1152
clip_model_load: - kv 10: clip.vision.feed_forward_length u32 = 4304
clip_model_load: - kv 11: clip.vision.projection_dim u32 = 0
clip_model_load: - kv 12: clip.vision.attention.head_count u32 = 16
clip_model_load: - kv 13: clip.vision.attention.layer_norm_epsilon f32 = 0.000001
clip_model_load: - kv 14: clip.vision.block_count u32 = 26
clip_model_load: - kv 15: clip.vision.image_mean arr[f32,3] = [0.500000, 0.500000, 0.500000]
clip_model_load: - kv 16: clip.vision.image_std arr[f32,3] = [0.500000, 0.500000, 0.500000]
clip_model_load: - kv 17: clip.use_gelu bool = true
clip_model_load: - type f32: 285 tensors
clip_model_load: - type f16: 170 tensors
clip_model_load: CLIP using CPU backend
clip_model_load: text_encoder: 0
clip_model_load: vision_encoder: 1
clip_model_load: llava_projector: 1
clip_model_load: model size: 984.29 MB
clip_model_load: metadata size: 984.47 MB
clip_model_load: params backend buffer size = 984.29 MB (455 tensors)
key clip.vision.image_grid_pinpoints not found in file
key clip.vision.mm_patch_merge_type not found in file
key clip.vision.image_crop_resolution not found in file
clip_image_build_graph: ctx->buf_compute_meta.size(): 884880
clip_model_load: compute allocated memory: 88.80 MB
load_file_to_bytes: can't read file xx.jng
llava_image_embed_make_with_filename_slice: failed to load xx.jng
zsh: segmentation fault ./minicpmv-cli -m ggml-model-Q4_K_M.gguf --mmproj mmproj-model-f16.gguf -c
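For what it's worth, the segfault right after the two load-failure messages suggests the caller uses the image-embed result without checking it for NULL. Below is a minimal, self-contained sketch of that pattern; the helper name `load_file_to_bytes` just mirrors the log line above, and this is not the actual llama.cpp source, only an illustration of the check that would turn the crash into a clean error:

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative stand-in for the loader seen in the log: returns the file
 * contents on success, or NULL (with a message on stderr) when the file
 * cannot be read -- e.g. when "xx.jng" does not exist. */
static unsigned char *load_file_to_bytes(const char *path, long *size_out) {
    FILE *f = fopen(path, "rb");
    if (!f) {
        fprintf(stderr, "load_file_to_bytes: can't read file %s\n", path);
        return NULL;
    }
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);
    unsigned char *buf = malloc((size_t)size);
    if (buf && fread(buf, 1, (size_t)size, f) != (size_t)size) {
        free(buf);
        buf = NULL;
    }
    fclose(f);
    if (buf && size_out) *size_out = size;
    return buf;
}

/* Usage: the caller must check for NULL before touching the bytes.
 * Skipping this check and dereferencing the result is exactly the kind
 * of bug that turns a missing file into a segmentation fault.
 *
 *   long size = 0;
 *   unsigned char *bytes = load_file_to_bytes("xx.jng", &size);
 *   if (!bytes) { return 1; }   // fail cleanly instead of crashing
 */
```

So the likely fix on the llama.cpp side is a NULL check on the embed returned for the image before it is used; on the user side, double-check that `xx.jng` actually exists at that path (the `.jng` extension may also be a typo for `.jpg`).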
