Comparing changes
base repository: ModelCloud/GPTQModel
base: v1.6.1
head repository: ModelCloud/GPTQModel
compare: v1.7.0
- 18 commits
- 22 files changed
- 6 contributors
Commits on Jan 9, 2025
- 8761c08
- 24895b7
- 3dd1466 (see the MLX usage sketch after this group):
  * add export
  * cleanup
  * new nn.linear
  * export to mlx
  * move dequantize to torch linear
  * cleanup
  * desc_act=True is not supported
  * fix lm_head
  * should allow gptq_v2
  * add TestExport
  * save tokenizer after save model
  * fix torch.dequantize_weight
  * fix bias
  * fix test_export
  * add dynamic check
  * add convert_gptq_to_mlx_weights
  * add backend.mlx
  * load backend.mlx
  * fix load
  * fix group size
  * add mlx_generate
  * fix generate
  * fix load mlx model
  * Update loader.py
  * Rename test_export.py to test_mlx.py
  * Update backend.py
  * Revert "Update loader.py" (this reverts commit 1366d35)
  * Update setup.py
  * Update loader.py
  * add mlx check
  Co-authored-by: LRL-ModelCloud <[email protected]>
  Co-authored-by: CL-ModelCloud <[email protected]>
  Co-authored-by: Qubitium-ModelCloud <[email protected]>
- 0b42b1a
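Commit 3dd1466 above adds an MLX export path: a convert_gptq_to_mlx_weights helper, a BACKEND.MLX loader option, and an mlx_generate utility. The sketch below is a rough illustration of loading a quantized checkpoint through that backend; the model id is a placeholder and the exact signatures (including model.tokenizer) are assumptions inferred from the names in the commit message, not verified v1.7.0 API.

```python
# Hedged sketch: run an existing GPTQ checkpoint through the new MLX backend
# from commit 3dd1466 (Apple Silicon only). The model id is a placeholder and
# the call signatures are assumptions inferred from the commit message.
from gptqmodel import GPTQModel, BACKEND

model_id = "ModelCloud/Llama-3.2-1B-Instruct-gptqmodel-4bit"  # hypothetical checkpoint

# Note: desc_act=True checkpoints were rejected by this commit; support landed later in #1082.
model = GPTQModel.load(model_id, backend=BACKEND.MLX)

tokens = model.generate("Explain GPTQ quantization in one sentence.")[0]
print(model.tokenizer.decode(tokens))
```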
Commits on Jan 10, 2025
- 071ceb8: [CI] upload source in build step (#1070)
  * [CI] upload source & use same codes to test
  * [CI] disable show-statistics
  * [CI] fix dir exists
  * [CI] fix dir exists
  * [CI] fix hash
  * [CI] fix file name
  * [CI] print tags
  * [CI] print env
  * [CI] fix compress
  * [CI] print files name
  * [CI] print files name
  * [CI] always run build, but skip compile
  * [CI] rename step
  * [CI] update uploading source
- 3d72d5e
- 196afce
- 98d1a05: Add option to quantize lm_head (#1037) (see the config sketch after this group)
  * quantize lm_head
  * update
  * Fix incorrect call to layer.forward()
  * lm_head uses a special quantize config
  * remove store_lm_head_input_hook()
  * added code of save/load lm_head_layer_inputs.pt
  * fix pack_module()
  * remove pack_module()
  * Check if quant lm_head supports
  * cleanup
  * add only_quant_lm_head
  * fix only_quant_lm_head
  * add store_lm_head_input_hook()
  * fix lm_head layer forward error with marlin
  * Revert "add store_lm_head_input_hook()" (this reverts commit 10c97a8)
  * cleanup
  * QuantizeConfig add "lm_head_low_gpu_mem_usage" field
  * add TestLmHeadQuant
  * fix merge error
- b292bf9 (authored by LRL-ModelCloud, Jan 10, 2025)
  * revert marlin dequantize code
  * move dequantize_weight -> qlinear.utils
- 492076a:
  * [CI] move mlx test to m4
  * [CI] fix syntax
  * [CI] update build if
  * [CI] fix mlx-files
  * [CI] check not ''
  * [CI] update build if
  * [CI] if
  * [CI] if
  * [CI] print if env
  * [CI] remove always
  * [CI] remove cancel
  * [CI] Print conditions and parameters
  * [CI] update outputs
  * [CI] use _
  * [CI] add needs
  * [CI] add needs
  * [CI] rename test sh
  * [CI] fix parameter not received
  * [CI] rename
  * [CI] update
  * [CI] update if
  * [CI] remove ignore
  * [CI] clean local
  * [CI] clean local
  * [CI] update
  * darwin BUILD_CUDA_EXT false
  * [CI] add test var
  * [CI] append .py
  * [CI] fix regex
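Commit 98d1a05 (#1037) above adds an option to also quantize the lm_head projection. The snippet below is a hedged configuration sketch, assuming the option is exposed on QuantizeConfig as lm_head=True and using a hypothetical base model with toy calibration data; field names and defaults may differ from the released schema.

```python
# Hedged sketch for #1037: quantize a model including its lm_head.
# The lm_head flag is inferred from the commit body; the base model id and the
# one-line calibration set are placeholders for illustration only.
from gptqmodel import GPTQModel, QuantizeConfig

quant_config = QuantizeConfig(
    bits=4,
    group_size=128,
    lm_head=True,  # assumed field name for the new #1037 option
)

calibration_data = [
    "GPTQModel quantizes transformer weights layer by layer using calibration text.",
]

model = GPTQModel.load("meta-llama/Llama-3.2-1B-Instruct", quant_config)  # placeholder base model
model.quantize(calibration_data)
model.save("Llama-3.2-1B-Instruct-4bit-lm_head")
```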
Commits on Jan 11, 2025
- 3add320
- 37018fc:
  * update prompt
  * no need redo quant model
  * fix import
  * [CI] replace model path
  * [CI] replace model path
  * [CI] fix path
  * [CI] update path
  * [CI] update path
  * [CI] fix replace
  * fix not export
  * remove deprecated repetition_penalty
  * check none
  * [CI] remove all in clean cache step
  * print repo & ref
  * use zen3 public ip
  * [CI] install with index
  * [CI] print dir at top
  * update prompt
  * import at top
  * fix python not activated
  * force update
  * [CI] delete hidden files
  * [CI] update rm
  * [CI] fix clean error
  * [CI] fix clean error
Commits on Jan 16, 2025
- 2f04c05: convert to mlx support desc_act true (#1082) (authored by LRL-ModelCloud, Jan 16, 2025; see the sketch after this group)
- 976e27e
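Commit 2f04c05 (#1082) extends the MLX conversion so that desc_act=True checkpoints, explicitly unsupported in the original MLX commit (3dd1466), can go through the same path. A short hedged sketch, reusing the assumed API from the earlier examples:

```python
# Hedged sketch for #1082: a desc_act=True GPTQ checkpoint loaded via the MLX
# backend, which the first MLX commit had rejected. Names are assumptions.
from gptqmodel import GPTQModel, BACKEND

model = GPTQModel.load(
    "ModelCloud/some-desc_act-true-gptq-4bit-model",  # placeholder checkpoint id
    backend=BACKEND.MLX,
)
```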
Commits on Jan 17, 2025
- 066f727: catch module error for setup.py (#1084) (see the sketch after this group)
  * [CI] check monster dir is mounted
  * [CI] check monster dir is mounted
  * add ModuleNotFoundError
  * sys.exit with error msg
  * Update setup.py
  Co-authored-by: Qubitium-ModelCloud <[email protected]>
- 55dc91d:
  * prepare for v1.7.0 release
  * Update version.py
  * Update README.md
- 6a245a8
- d247fd0
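Commit 066f727 (#1084) above makes setup.py catch a missing build-time module and exit with a readable message instead of a bare traceback. The snippet below is a generic sketch of that pattern, not the repository's actual setup.py code; the module checked is chosen only for illustration.

```python
# Generic sketch of the #1084 pattern: fail the build early with a clear
# message when a required build-time module is missing. The module checked
# here (torch) is illustrative, not necessarily what GPTQModel's setup.py uses.
import sys

try:
    import torch  # needed at build time to compile the CUDA extension
except ModuleNotFoundError:
    sys.exit(
        "PyTorch is required to build this package. "
        "Install it first (e.g. `pip install torch`) and re-run the build."
    )
```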
To see the full comparison locally, run:
git diff v1.6.1...v1.7.0