Connecting your stack#
In this section, you’ll learn how to add your own endpoints and datasets to the console.
Custom endpoints#
Prerequisite#
In this section, we’ll assume you have already set up your own LLM endpoints.
If not, one option is to use off-the-shelf endpoints, such as those available in the Azure ML Model Catalog, Vertex AI Model Garden, and AWS Bedrock. Alternatively, you can create and host your own LLM endpoint. There are many ways to do this, but again it's most common to go through one of the major cloud providers.
Regardless of how you set up your LLM endpoints, you'll need to expose them behind an API, and that API must adhere to the OpenAI standard in order to integrate with Unify.
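As a quick sanity check of what "adhering to the OpenAI standard" means in practice, the sketch below shows the minimal request and response shapes a compatible `/chat/completions` route should handle. The field values are illustrative only:

```python
import json

# Minimal OpenAI-style chat-completions request body that a compatible
# endpoint must accept (model name and message content are illustrative).
request_body = {
    "model": "my-model",
    "messages": [
        {"role": "user", "content": "Hello!"},
    ],
}

# ...and the minimal response shape it should return. An OpenAI-compatible
# endpoint wraps each generation in a "choices" entry with a "message".
response_body = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hi!"},
            "finish_reason": "stop",
        },
    ],
}

# Both bodies are plain JSON on the wire:
print(json.dumps(request_body))
```

If your endpoint already serves this shape, no further adaptation should be needed on the Unify side.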
Adding the endpoints#
Once you've got your custom LLM endpoints set up, the next step is to add them to the Endpoints section of the console.
Click on Add Endpoint to upload a new endpoint. You'll have to specify a name and the cloud provider used for the endpoint. You'll also need to include your API key for that provider so we can query your endpoint on your behalf.
That’s all! Your custom endpoints are now available through the Unify API as well as our interfaces, ready to be benchmarked and routed across.
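Once added, a custom endpoint can be queried like any other model through the Unify API. The sketch below builds such a request using only the standard library; the endpoint name is hypothetical, and the base URL and auth-header format are assumptions — check the API reference for the exact model string to use for your endpoint:

```python
import json
import os
import urllib.request

# Hypothetical endpoint name as registered in the console.
ENDPOINT_NAME = "my-custom-endpoint"

payload = {
    "model": ENDPOINT_NAME,
    "messages": [{"role": "user", "content": "Hello!"}],
}

# Assumed base URL for the Unify chat-completions route.
req = urllib.request.Request(
    "https://api.unify.ai/v0/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {os.environ.get('UNIFY_KEY', '')}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# urllib.request.urlopen(req) would send the request; it is omitted here
# so the sketch stays runnable without a network connection or API key.
```

Because the route is OpenAI-compatible, any OpenAI-style client pointed at the Unify base URL should work equally well.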
Custom datasets#
You can add a dataset in the Datasets section of the console. There, click the Add Dataset button.
The resulting screen lets you select a local .jsonl file to upload, containing the prompts you would like to benchmark on. You can also upload a file with reference answers for the prompts if you want to build a labelled dataset.
Once your dataset is uploaded, you can click on it to preview the prompts. For example, the image below shows the preview for a labelled dataset.
Round Up#
That's it! You now know how to upload your own endpoints and datasets. You can now run custom benchmarks, build a custom router, or query your endpoints with the Unify API.