Setting Up Phoenix for Insights on Llama-13b Predictions
Vincent Abraham [8:03 AM] Hi everyone, I'm a beginner with LLMs and I would like to use Phoenix to gain insights into how my fine-tuned Llama-13b-chat model makes predictions on binary classification problems. Could anyone guide me on how to go about setting up Phoenix for this purpose? This is the article I referred to for fine-tuning: https://medium.com/@geronimo7/finetuning-llama2-mistral-945f9c200611. I have the fine-tuned model locally, and I could upload it to HuggingFace as well. I used the HF Transformers framework for fine-tuning, and I've created some custom datasets for training, validation, and testing, which I'm using for fine-tuning and inference. What I want to do is gain insight into which portions of the user prompt are influencing the model's predictions.
