DeepSeek-R1
https://github.com/deepseek-ai/DeepSeek-R1
We've been running qualitative experiments on OpenAI o1 and QwQ-32B-Preview [1]. In those experiments, I'd say there were two primary things going against QwQ. First, QwQ went into endless repetitive loops, "thinking out loud" what it had already said, sometimes with a minor modification. We had to stop the model when that happened, and I feel that it significantly hurt the user experience.
It's great that DeepSeek-R1 fixes that.
The other thing was that o1 had access to many more answer/search strategies. For example, if you asked o1 to summarize a long email, it would just summarize the email; QwQ instead reasoned about why I had asked it to summarize the email. And on hard math questions, o1 could employ more search strategies than QwQ. I'm curious how DeepSeek-R1 will fare in that regard.
Either way, I'm super excited that DeepSeek-R1 comes with an MIT license. This will notably increase how many people can evaluate advanced reasoning models.
The one I'm running is the 8.54GB file. I'm using Ollama like this:
ollama run hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0
You can prompt it directly there, but I'm using my LLM tool and the llm-ollama plugin to run and log prompts against it. Once Ollama has loaded the model (from the above command) you can try it with uvx like this:
uvx --with llm-ollama \
  llm -m 'hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0' \
  'a joke about a pelican and a walrus who run a tea room together'
Here's what I got - the joke itself is rubbish but the "thinking" section is fascinating: https://gist.github.com/simonw/f505ce733a435c8fc8fdf3448e381...
I also set an alias for the model like this:
llm aliases set r1l 'hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0'
Now I can run "llm -m r1l" (for R1 Llama) instead.
I wrote up my experiments so far on my blog: https://simonwillison.net/2025/Jan/20/deepseek-r1/
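You can also drive it from Python via llm's API. A minimal sketch, assuming Ollama is serving the model and the llm-ollama plugin is installed:

import llm

# Resolve the "r1l" alias set above and run a prompt through it.
model = llm.get_model("r1l")
response = model.prompt(
    "a joke about a pelican and a walrus who run a tea room together"
)
print(response.text())  # output includes the model's "thinking" section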
I tried the same tests on DeepSeek-R1 just now, and it did much better. While still not as good as o1, its answers no longer contained obviously misguided analyses or hallucinated solutions. (I recognize that my data set is small and that my ratings of the responses are somewhat subjective.)
By the way, ever since o1 came out, I have been struggling to come up with applications of reasoning models that are useful for me. I rarely write code or do mathematical reasoning. Instead, I have found LLMs most useful for interactive back-and-forth: brainstorming, getting explanations of difficult parts of texts, etc. That kind of interaction is not feasible with reasoning models, which can take a minute or more to respond. I’m just beginning to find applications where o1, at least, is superior to regular LLMs for tasks I am interested in.
Their parent hedge fund company isn't huge either: just 160 employees and $7B AUM, according to Wikipedia. If it were a US hedge fund, it would be about the 180th largest by AUM, so not small, but nothing crazy either.
The downsides begin at "dystopia worse than 1984 ever imagined" and get worse from there.
- function calling is broken (it responds with an excessive number of duplicated function calls, with hallucinated names and parameters)
- response quality is poor (my use case is code generation)
- support is not responding
I will give the reasoning model a try, but my expectations are low.
ps. the positive side of this is that it apparently removed some traffic from Anthropic's APIs, and latency for Sonnet/Haiku improved significantly.
Wow. They’re really trying to undercut closed-source LLMs.
My only concern is that on openrouter.ai it says:
"To our knowledge, this provider may use your prompts and completions to train new models."
https://openrouter.ai/deepseek/deepseek-chat
This is a dealbreaker for me to use it at the moment.
https://arxiv.org/abs/2410.18982
Journey learning is doing something effectively close to depth-first tree search (see fig. 4 on p. 5), and it does seem close to what OpenAI are claiming to be doing, as well as to what DeepSeek-R1 is doing here... No special tree-search sampling infrastructure, but rather RL-induced generation that produces a single sampled sequence taking a depth-first "journey" through the CoT tree, backtracking when necessary.
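To make the "journey" idea concrete, here's a toy sketch (mine, not from either paper) of a depth-first walk that records every visited step, dead ends included, as one linear trace, analogous to a single CoT sequence that backtracks in-line:

def dfs_journey(node, is_solution, children, trace):
    # Depth-first walk that records every visited step in `trace`,
    # so the full traversal (dead ends included) reads as one sequence.
    trace.append(node)
    if is_solution(node):
        return True
    for child in children(node):
        if dfs_journey(child, is_solution, children, trace):
            return True
        trace.append("backtrack from " + child)  # the "Wait a minute..." moment
    return False

# Toy problem: find a 3-digit string (digits 1-3) whose digits sum to 6.
children = lambda s: [s + d for d in "123"] if len(s) < 3 else []
is_solution = lambda s: len(s) == 3 and sum(int(c) for c in s) == 6

trace = []
dfs_journey("", is_solution, children, trace)
print(trace)  # one linear "journey" through the tree, backtracks and all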
Also wild that few-shot prompting leads to worse results in reasoning models. OpenAI hinted at that as well, but it's always just a sentence or two, with no benchmarks or specific examples.
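For concreteness, here's what the distinction looks like; these toy prompts are mine, not from any benchmark. The claim is that reasoning models do better with the bare zero-shot form:

# Zero-shot: just ask the question and let the model reason.
zero_shot = "What is 17 * 24? Show your reasoning."

# Few-shot: worked examples prepended, which reportedly hurts reasoning models.
few_shot = """Q: What is 12 * 13?
A: 156

Q: What is 9 * 31?
A: 279

Q: What is 17 * 24?
A:"""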
Note: I wrote yek, so this might be a bit of a shameless plug!
I'm also hoping for progress on mini models. Could you imagine playing Magic: The Gathering against an LLM? It would quickly become impossible, like chess.
Also, it's a lot more fun reading the reasoning chatter. Kinda cute seeing it say "Wait a minute..." a lot.
There are distilled R1 GGUFs for Llama 8B and Qwen 1.5B, 7B, and 14B, and I'm still uploading Llama 70B and Qwen 32B.
Also, I uploaded a 2-bit quant for the large MoE (200GB on disk) to https://huggingface.co/unsloth/DeepSeek-R1-GGUF
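If you only want the 2-bit files rather than the whole repo, something like this should work with the huggingface_hub library; the "*Q2*" filename pattern is a guess on my part, so check the repo's file list first:

from huggingface_hub import snapshot_download

# Download only the 2-bit quant shards from the repo; the "*Q2*"
# pattern is an assumption about how the files are named.
snapshot_download(
    repo_id="unsloth/DeepSeek-R1-GGUF",
    local_dir="DeepSeek-R1-GGUF",
    allow_patterns=["*Q2*"],
)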
At least don’t use the hosted version unless you want your data to go to China