• Hasan

    Administrator
    January 12, 2025 at 12:58 pm

Just to make things clear: in RAG, it is not only the Q&A database that decides the answers, it is the LLM's capabilities + data + prompts.
And if you want to run RAG in production, it is not enough to connect a bunch of questions to an LLM; you will need another layer of verification if the answers are critical for you.
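To show what I mean by a verification layer, here is a minimal sketch in plain Python. The function names (`generate_answer`, `is_grounded`) are hypothetical placeholders for your actual LLM call and checking logic, not any real library's API:

```python
# Sketch of a verification layer wrapped around a RAG pipeline.
# generate_answer and is_grounded are hypothetical stand-ins.

def generate_answer(question: str, context: list[str]) -> str:
    # Placeholder: in a real system this calls your LLM with the
    # retrieved context and your prompt template.
    return f"Answer based on: {context[0]}"

def is_grounded(answer: str, context: list[str]) -> bool:
    # Naive check: require the answer to contain a retrieved chunk.
    # Real systems use entailment models or citation checks instead.
    return any(chunk in answer for chunk in context)

def answer_with_verification(question: str, context: list[str]) -> str:
    answer = generate_answer(question, context)
    if not is_grounded(answer, context):
        # Fall back instead of returning an unverified answer.
        return "I'm not confident enough to answer that."
    return answer
```

The key design point is the fallback: when the check fails, the system refuses rather than passing an unverified answer to the user.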

Anyway, coming back to RelevanceAI: try playing with the prompt and see if the results change. Testing is crucial when setting up AI systems.

And if you want full control over your system (which I prefer too 😅),

maybe you should think about building it with the LangChain libraries, and taking a look at guardrails may help too if you want to create something stable and production-ready.

I don't know how critical your business is or how accurate you need the answers to be; sharing more details will help me point you in the right direction.