How many proprietary use cases truly need pre-training or even fine-tuning, as opposed to a RAG approach? And at what point does it make sense to pre-train/fine-tune? Curious.
RAG basically gives the LLM a bunch of documents to search through for the answer. What it doesn't do is make the model itself any better: pre-training and fine-tuning improve the LLM's ability to reason about your task.
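To make the distinction concrete, here's a minimal sketch of the RAG flow: retrieve the most relevant documents, then stuff them into the prompt so the model answers from that context. The toy keyword-overlap scorer stands in for a real embedding search, and all names and documents here are illustrative, not any particular library's API.

```python
import re

def tokenize(text: str) -> set[str]:
    """Lowercase and split into word tokens, stripping punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by naive keyword overlap with the query (toy retriever)."""
    q_words = tokenize(query)
    scored = sorted(docs, key=lambda d: -len(q_words & tokenize(d)))
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend the retrieved context; the LLM never changes, only its input."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days.",
    "The API rate limit is 100 requests per minute.",
    "Support hours are 9am to 5pm on weekdays.",
]
print(build_prompt("What is the refund policy?", docs))
```

The point of the sketch: the model's weights are untouched, so its reasoning doesn't improve; only the information available in the prompt changes. Fine-tuning, by contrast, updates the weights themselves.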
RAG is dead