| Model | Task | Params | Updated | Downloads | Likes |
|---|---|---|---|---|---|
| hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4 | Text Generation | 8B | Aug 7, 2024 | 183k | 82 |
| DavidAU/Llama3.1-MOE-4X8B-Gated-IQ-Multi-Tier-Deep-Reasoning-32B-GGUF | Text Generation | 25B | Jul 28 | 746 | 8 |
| DavidAU/L3.1-Dark-Reasoning-Unholy-Hermes-R1-Uncensored-8B | Text Generation | 8B | May 28 | 4 | 12 |
| DavidAU/DeepSeek-MOE-4X8B-R1-Distill-Llama-3.1-Deep-Thinker-Uncensored-24B-GGUF | Text Generation | 25B | Jul 28 | 952 | 26 |
| DavidAU/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-gguf | Text Generation | 14B | Jul 28 | 421 | 12 |
| DavidAU/L3.1-Dark-Reasoning-LewdPlay-evo-Hermes-R1-Uncensored-8B | Text Generation | 8B | Jul 28 | 29 | 31 |
| hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4 | Text Generation | 410B | Sep 13, 2024 | 896 | 36 |
| hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4 | Text Generation | 71B | Aug 7, 2024 | 162k | 107 |
| hugging-quants/Meta-Llama-3.1-405B-Instruct-GPTQ-INT4 | Text Generation | 410B | Aug 7, 2024 | 254 | 16 |
| hugging-quants/Meta-Llama-3.1-405B-Instruct-BNB-NF4 | Text Generation | 423B | Sep 16, 2024 | 20 | 5 |
| hugging-quants/Meta-Llama-3.1-8B-Instruct-BNB-NF4 | Text Generation | 8B | Aug 8, 2024 | 341 | 8 |
| ModelCloud/Meta-Llama-3.1-70B-Instruct-gptq-4bit | Text Generation | 71B | Jul 27, 2024 | 44 | 4 |
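The checkpoints above are pre-quantized (AWQ-INT4, GPTQ-INT4, BNB-NF4, GGUF), so they can be loaded directly without running quantization yourself. Below is a minimal sketch of loading the smallest one listed, hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4, with Hugging Face transformers; it assumes `transformers`, `torch`, and an AWQ backend such as `autoawq` are installed and a CUDA GPU is available.

```python
# Minimal sketch: load a pre-quantized AWQ-INT4 checkpoint from the list above.
# Assumes transformers, torch, and autoawq are installed and a GPU is present.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # AWQ kernels expect fp16 activations
    device_map="auto",          # place layers on the available GPU(s)
)

# Build a chat-formatted prompt and generate a short reply.
messages = [{"role": "user", "content": "Explain INT4 weight quantization in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The GGUF repositories from DavidAU are intended for llama.cpp-style runtimes rather than this transformers path, and the 405B variants require multi-GPU setups even at 4-bit.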