You're almost certainly (definitely, in fact) confusing the 120b and 20b models.
I'm most certainly not doing so.
seg@seg-epyc:~/models$ du -sh * /llmzoo/models/* | sort -n
4.0K metrics.txt
4.0K opus
4.0K start_llama
8.2G nvidia_Orchestrator-8B-Q8_0.gguf
12K config.ini
34G Qwen3.5-27B
47G Qwen3.5-35B
51G Qwen3.5-27B-BF16
61G gpt-oss-120b-F16.gguf
65G Qwen3.5-35B-BF16
106G Qwen3.5-122B-Q6
117G GLM4.6V
175G MiniMax-M2.5
232G /llmzoo/models/small_models
240G Ernie4.5-300B
377G DeepSeekv3.2-nolight
380G /llmzoo/models/DeepSeek-V3.2-UD
400G /llmzoo/models/Qwen3.5-397B-Q8
424G /llmzoo/models/KimiK2Thinking
443G DeepSeek-Math-v2
443G DeepSeek-V3-0324-Q5
500G /llmzoo/models/GLM5-Q5
546G /llmzoo/models/KimiK2.5

Oh, I missed the "quad" before 3090.
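Incidentally, the listing above shows why `sort -n` is the wrong tool for `du -h` output: it compares only the leading digits and ignores the unit suffix, which is how 12K ends up sorted between 8.2G and 34G. A small sketch of the fix, assuming GNU coreutils `sort` (which provides `-h`, `--human-numeric-sort`):

```shell
# `sort -n` sees only the leading number, so 12K lands between 8.2G and 106G.
# GNU sort's -h flag understands the K/M/G suffixes and sorts by real size:
printf '8.2G\n12K\n4.0K\n106G\n61G\n' | sort -h
# prints: 4.0K  12K  8.2G  61G  106G (one per line)

# The same applies to the du pipeline from the transcript:
# du -sh * /llmzoo/models/* | sort -h
```

With BSD/macOS `sort` the equivalent flag does not exist; there you would sort raw byte counts from `du -sk` numerically instead.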