I see there is a REST endpoint to upload a dataset but how about appending?
To append data to an existing dataset in Phoenix, you can use the append_to_dataset method provided by the Phoenix client. This method appends examples to a dataset on the Phoenix server and accepts data as a pandas DataFrame, a CSV file, or a sequence of dictionaries. When using a DataFrame or CSV file, you must specify input keys and may optionally specify output and metadata keys.

For REST API usage, the /v1/datasets/upload endpoint supports both creating a dataset and appending to an existing one: set the action parameter in the request body to either "create" or "append". The endpoint accepts JSON, CSV, or PyArrow formats for dataset uploads. Would you like more details on how to use these methods or endpoints?

Sources:
- Phoenix Client API
- GitHub - Phoenix Client
- Phoenix REST API Documentation
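As a minimal sketch, the sequence-of-dictionaries form might look like the following. The dataset name and field names ("question"/"answer") are placeholders, and the append_to_dataset call is shown in a comment since it needs a running Phoenix server; check the client docs for the exact signature:

```python
# Example rows to append, in the sequence-of-dictionaries form.
# Field names ("question"/"answer") and the dataset name are placeholders.
inputs = [{"question": "What is Phoenix?"}]
outputs = [{"answer": "An open-source LLM observability tool."}]

# With a running Phoenix server and the client installed
# (pip install arize-phoenix), the append would look like:
#
#   import phoenix as px
#   client = px.Client()
#   client.append_to_dataset(
#       dataset_name="your_dataset_name",
#       inputs=inputs,
#       outputs=outputs,
#   )
```

With a DataFrame or CSV instead, you would pass input_keys (and optionally output_keys / metadata_keys) to select which columns map to each field.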
To use HTTPS for interacting with the Phoenix API without the Python client, you can make HTTP requests directly to the Phoenix REST API endpoints. For example, to upload a dataset, send a POST request to the /v1/datasets/upload endpoint. Include the necessary headers, such as Authorization if authentication is required. Tools like curl, or Python libraries like httpx, work well for this.

Here's an example using curl to upload a dataset (curl's --form option sets the multipart/form-data Content-Type header, including the boundary, automatically):

```shell
curl --request POST \
  --url https://phoenix.com/v1/datasets/upload \
  --header 'Authorization: Bearer YOUR_API_KEY' \
  --form 'file=@/path/to/your/file.csv' \
  --form 'name=your_dataset_name' \
  --form 'input_keys[]=input_column_name' \
  --form 'output_keys[]=output_column_name'
```

Would you like more details on how to structure these requests or handle responses?

Sources:
- GitHub Issue - Datasets Upload
- Phoenix REST API Overview
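The same request can be sketched in Python with only the standard library, using the JSON form of the upload body. The host, API key, dataset name, and the exact JSON field names ("inputs", "outputs") are assumptions mirroring the parameters above; verify them against the Phoenix REST docs:

```python
import json
import urllib.request

# Placeholder values -- replace with your Phoenix host and API key.
PHOENIX_URL = "https://your-phoenix-host/v1/datasets/upload"
API_KEY = "YOUR_API_KEY"

# The field names here are assumptions based on the client parameters.
payload = {
    "action": "append",  # "create" makes a new dataset; "append" adds to one
    "name": "your_dataset_name",
    "inputs": [{"question": "What is Phoenix?"}],
    "outputs": [{"answer": "An open-source LLM observability tool."}],
}

request = urllib.request.Request(
    PHOENIX_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)

# urllib.request.urlopen(request) would send it; a 200 response means success.
```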
ah, perfect. thanks!
