Hi Amber R.
As you noted, we started with RAG and prompt engineering, but the results fell short of what we had hoped for. We then fine-tuned the embedding models with the MNR (Multiple Negatives Ranking) loss and saw a significant improvement in hit rate. Now we are considering fine-tuning a generator model as well. However, we currently only have question–passage pairs and no answer data, so we are exploring ways to fine-tune the generator from these pairs alone.
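For reference, here is a minimal NumPy sketch of the MNR objective we used for the embedding models: each question's paired passage is the positive, and the other passages in the batch serve as in-batch negatives. The function name `mnr_loss` and the `scale` value are illustrative, not the exact training code (in practice a library implementation such as sentence-transformers' `MultipleNegativesRankingLoss` handles this).

```python
import numpy as np

def mnr_loss(q_emb, p_emb, scale=20.0):
    """Multiple Negatives Ranking loss.

    For each question embedding, its paired passage (same row index) is
    the positive; all other passages in the batch act as negatives.
    """
    # L2-normalize so dot products become cosine similarities
    q = q_emb / np.linalg.norm(q_emb, axis=1, keepdims=True)
    p = p_emb / np.linalg.norm(p_emb, axis=1, keepdims=True)
    sim = scale * (q @ p.T)  # (batch, batch) scaled similarity matrix
    # Cross-entropy with the diagonal (true pairs) as the target class
    logits = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

With perfectly aligned question/passage embeddings the loss is near zero, and it grows as pairs are mismatched, which is what drives the hit-rate improvement we observed.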