I have released an open source RAG-based solution on GitHub, called PHP-RAG.
With it, you can build an AI assistant (or a "chatbot") using a common web stack: PHP combined with Elasticsearch or Solr.
It connects to several LLM APIs, such as OpenAI's (ChatGPT), as well as open source backends like KoboldAI.
So what is RAG anyway? It's short for "Retrieval-Augmented Generation": a way to augment or enrich LLM inference by providing extra context retrieved from a third-party source. I personally think that Lucene-based search engines like Elasticsearch or Solr are a great fit for this, since they are fast and optimized for exactly these kinds of relevance queries.
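To make the idea concrete, here is a minimal sketch of the retrieve-then-generate flow (in Python for brevity, and with both the search engine and the LLM stubbed out; the function names are illustrative, not PHP-RAG's actual API):

```python
# Minimal retrieval-augmented generation sketch.
# A real setup would query Elasticsearch/Solr for the retrieval step
# and send the augmented prompt to an LLM API; here both are stubbed
# so the flow is self-contained.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap (a stand-in for a Lucene query)."""
    terms = set(query.lower().split())
    scored = sorted(corpus, key=lambda doc: -len(terms & set(doc.lower().split())))
    return scored[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Augment the user's question with retrieved context before LLM inference."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "PHP-RAG connects PHP applications to LLM APIs.",
    "Elasticsearch and Solr are built on Apache Lucene.",
    "Bread is made from flour, water, and yeast.",
]
query = "What are Elasticsearch and Solr built on?"
prompt = build_prompt(query, retrieve("Elasticsearch Solr Lucene", corpus))
print(prompt)
```

The LLM then answers from the injected context rather than from its training data alone, which is what lets the assistant cite your own documents.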
Go check the project out; I would much appreciate any feedback on it!