Vortex Admin API: A Comprehensive Guide

The Vortex Admin API is a powerful tool designed to streamline the integration of OpenAI services into your applications. By enabling the programmatic creation of consumers and the issuance of API keys, the Admin API simplifies complex integration processes, offering a flexible alternative to the user interface for advanced use cases.

Prerequisites

Before starting, ensure you have the following:

  • Vortex proxy URL
  • Vortex admin URL
  • Admin API key for Vortex

Refer to the Developers section if you're missing any of these details.
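The prerequisites above can be collected into a small configuration sketch. All values here are placeholders; substitute your own deployment's URLs and admin key:

```javascript
// Placeholder values: replace with your own Vortex deployment details.
const VORTEX_PROXY_URL = "https://your.vortex.base.url.com/api";
const VORTEX_ADMIN_URL = "https://your.vortex.base.url.com/api/admin";
const VORTEX_ADMIN_API_KEY = "your_vortex_admin_api_key";

// Every Admin API call in this guide sends the admin key in the apikey header.
const adminHeaders = {
  "Content-Type": "application/json",
  apikey: VORTEX_ADMIN_API_KEY,
};
```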

Provider ID

The first step is to obtain the provider ID for the LLM provider; you will need it when creating a channel.

Get Provider ID

The following snippets retrieve the provider ID using cURL and JavaScript.

```bash
curl --location 'https://your.vortex.base.url.com/api/admin/services' \
  --header 'Content-Type: application/json' \
  --header 'apikey: VORTEX_ADMIN_API_KEY'
```


```javascript
const VORTEX_ADMIN_URL = "https://your.vortex.base.url.com/api/admin";
const VORTEX_ADMIN_API_KEY = "your_vortex_admin_api_key";

const headers = {
  "Content-Type": "application/json",
  apiKey: VORTEX_ADMIN_API_KEY,
};

const response = await fetch(`${VORTEX_ADMIN_URL}/services`, {
  method: "GET",
  headers,
});
```




From the response, note the id of the openai object; this is the provider ID you will need in the next steps.
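Picking the provider ID out of the response can be sketched with a small helper. The response shape here is an assumption (a flat list of provider objects with id and name fields); adjust the lookup to the shape your deployment actually returns:

```javascript
// Hypothetical response shape: a list of provider objects with id and name.
const sampleServices = [
  { id: "11111111-1111-1111-1111-111111111111", name: "openai" },
  { id: "22222222-2222-2222-2222-222222222222", name: "anthropic" },
];

// Find the provider ID for a given provider name, or null if absent.
function findProviderId(services, providerName) {
  const match = services.find((service) => service.name === providerName);
  return match ? match.id : null;
}

const providerId = findProviderId(sampleServices, "openai");
```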

Channels

Channels serve as pathways for your requests to OpenAI.

The channel name and path should be identical, with the path prefixed with a /.
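That naming rule can be encoded in a small helper that derives both fields from one input. This is a sketch; the payload field names follow the channel-creation request shown later in this section:

```javascript
// Build the payload for creating a channel. The channel name and path must
// match, with the path prefixed by "/", so both are derived from one input.
function buildChannelPayload(channelName) {
  const name = channelName.replace(/^\/+/, ""); // tolerate a leading slash
  return {
    expression: `(http.path ^= "/${name}")`,
    name,
  };
}
```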

The snippets below show how to create a new channel and how to list existing ones.

Creating a Channel

If you want to create a new channel:

```bash
curl --location 'https://your.vortex.base.url.com/api/admin/services/PROVIDER_ID/routes' \
  --header 'Content-Type: application/json' \
  --header 'apikey: VORTEX_ADMIN_API_KEY' \
  --data '{
    "expression": "(http.path ^= \"/your_channel_name\")",
    "name": "your_channel_name"
  }'
```


```javascript
const VORTEX_ADMIN_URL = "https://your.vortex.base.url.com/api/admin";
const VORTEX_ADMIN_API_KEY = "your_vortex_admin_api_key";
const provider_id = ""; // from previous step

const headers = {
  "Content-Type": "application/json",
  apiKey: VORTEX_ADMIN_API_KEY,
};

const data = {
  expression: `(http.path ^= "/your_channel_name")`,
  name: "your_channel_name",
};

const response = await fetch(`${VORTEX_ADMIN_URL}/services/${provider_id}/routes`, {
  method: "POST",
  headers,
  body: JSON.stringify(data),
});
```



Listing Channels

If you want to reuse an existing channel:


```bash
curl --location 'https://your.vortex.base.url.com/api/admin/routes' \
  --header 'Content-Type: application/json' \
  --header 'apikey: VORTEX_ADMIN_API_KEY'
```


```javascript
const VORTEX_ADMIN_URL = "https://your.vortex.base.url.com/api/admin";
const VORTEX_ADMIN_API_KEY = "your_vortex_admin_api_key";

const headers = {
  "Content-Type": "application/json",
  apiKey: VORTEX_ADMIN_API_KEY,
};

const response = await fetch(`${VORTEX_ADMIN_URL}/routes`, {
  method: "GET",
  headers,
});
```



Remember to note the channel ID for future reference.
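As with the provider lookup, the channel ID can be picked out of the listing response. The response shape below is an assumption (a flat list of route objects with id and name); adapt it to what your deployment returns:

```javascript
// Hypothetical /routes response shape: a list of routes with id and name.
const sampleRoutes = [
  { id: "aaaa-1111", name: "my_channel" },
  { id: "bbbb-2222", name: "other_channel" },
];

// Find the channel ID for a given channel name, or null if absent.
function findChannelId(routes, channelName) {
  const match = routes.find((route) => route.name === channelName);
  return match ? match.id : null;
}
```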

Store the LLM Provider API Key

Storing the LLM provider's API key is essential for enabling communication between your application and OpenAI.

  • LLM_PROVIDER_API_KEY is the API key generated on the LLM provider's platform.
  • Several API keys can be linked to a single channel to distribute the load.
```bash
curl --location 'https://your.vortex.base.url.com/api/admin/route-api-keys' \
  --header 'Content-Type: application/json' \
  --header 'Accept: application/json' \
  --header 'apikey: VORTEX_ADMIN_API_KEY' \
  --data '{
    "route_api_key": { "api_key": "LLM_PROVIDER_API_KEY" },
    "route": { "id": "CHANNEL_ID" }
  }'
```


```javascript
const VORTEX_ADMIN_URL = "https://your.vortex.base.url.com/api/admin";
const VORTEX_ADMIN_API_KEY = "your_vortex_admin_api_key";
const llm_provider_api_key = ""; // API key generated on the LLM provider
const channel_id = ""; // from previous step

const headers = {
  "Content-Type": "application/json",
  Accept: "application/json",
  apiKey: VORTEX_ADMIN_API_KEY,
};

const data = {
  route_api_key: { api_key: llm_provider_api_key },
  route: { id: channel_id },
};

const response = await fetch(`${VORTEX_ADMIN_URL}/route-api-keys`, {
  method: "POST",
  headers,
  body: JSON.stringify(data),
});
```
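Since several provider keys can share one channel to distribute load, building one request body per key can be sketched as below. The payload shape is taken from the curl example above; the helper name is illustrative:

```javascript
// Build one /route-api-keys request body per provider API key, so several
// keys can be attached to the same channel to distribute load.
function buildRouteApiKeyPayloads(channelId, providerApiKeys) {
  return providerApiKeys.map((apiKey) => ({
    route_api_key: { api_key: apiKey },
    route: { id: channelId },
  }));
}
```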


Create Consumer

A consumer represents the user or system that will interact with OpenAI through Vortex.

Remember:

  • The username should be unique, but the same email address can be used for multiple consumers.
  • A consumer might be an individual, a team, or a product. For teams or products, utilize a group email and the name of the team or product.


Replace CONSUMER_NAME (e.g. johndoe) and CONSUMER_EMAIL (e.g. johndoe@customer.com):

```bash
curl --location 'https://your.vortex.base.url.com/api/admin/consumers' \
  --header 'Content-Type: application/json' \
  --header 'apikey: VORTEX_ADMIN_API_KEY' \
  --data '{
    "username": "CONSUMER_NAME",
    "tags": ["email:CONSUMER_EMAIL"]
  }'
```


```javascript
const VORTEX_ADMIN_URL = "https://your.vortex.base.url.com/api/admin";
const VORTEX_ADMIN_API_KEY = "your_vortex_admin_api_key";

const headers = {
  "Content-Type": "application/json",
  apiKey: VORTEX_ADMIN_API_KEY,
};

const data = {
  username: "consumer_name", // e.g. johndoe
  tags: ["email:consumer_email"], // e.g. ["email:johndoe@customer.com"]
};

const response = await fetch(`${VORTEX_ADMIN_URL}/consumers`, {
  method: "POST",
  headers,
  body: JSON.stringify(data),
});
```
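The naming rules above can be wrapped in a small builder. This is a sketch; the email-tag format follows the request body shown above, and the helper name is an assumption:

```javascript
// Build a consumer-creation payload. Usernames must be unique; the email is
// stored as an "email:" tag and may be shared by several consumers.
function buildConsumerPayload(username, email) {
  if (!username) {
    throw new Error("username is required and must be unique");
  }
  return {
    username,
    tags: [`email:${email}`],
  };
}
```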


Generate a Vortex API Key

Each consumer requires a unique API key for authentication and access control.

Remember:

  • The CONSUMER_NAME path parameter is the username from the Create Consumer step.
  • KEY_NAME helps identify a Vortex key when tracking usage and costs.
Replace KEY_NAME (e.g. ai-feature-poc):

```bash
curl --location 'https://your.vortex.base.url.com/api/admin/consumers/CONSUMER_NAME/key-auth' \
  --header 'Content-Type: application/json' \
  --header 'apikey: VORTEX_ADMIN_API_KEY' \
  --data '{
    "tags": ["key-name:KEY_NAME"]
  }'
```


```javascript
const VORTEX_ADMIN_URL = "https://your.vortex.base.url.com/api/admin";
const VORTEX_ADMIN_API_KEY = "your_vortex_admin_api_key";
const consumer_name = ""; // username from the Create Consumer step

const headers = {
  "Content-Type": "application/json",
  apiKey: VORTEX_ADMIN_API_KEY,
};

const data = {
  tags: ["key-name:key_name"], // e.g. ["key-name:ai-feature-poc"]
};

const response = await fetch(`${VORTEX_ADMIN_URL}/consumers/${consumer_name}/key-auth`, {
  method: "POST",
  headers,
  body: JSON.stringify(data),
});
```


Make sure to note down the key and the ID of the key for subsequent configuration.

Keys cannot be retrieved later, so it's crucial to copy and secure them immediately.

Configuring Access and Usage Limits for Vortex API Key

New keys don't have access by default: each key must be granted access to a channel, and a usage limit should be set up.

  • Get the CHANNEL_ID of the channel you want (see Listing Channels above).
  • VORTEX_KEY_ID is the id of the Vortex key generated in the previous step.

Important

Ensure the model name exactly matches a valid model name that will be used in the request; otherwise, subsequent requests will be blocked.

Usage limit configuration

To add usage limits for multiple models, repeat the step.


Replace PERIOD_IN_SECONDS (e.g. 60), TOKEN_LIMIT (e.g. 100) and MODEL_NAME (e.g. gpt-3.5-turbo):

```bash
curl --location 'https://your.vortex.base.url.com/api/admin/token-limits' \
  --header 'apikey: VORTEX_ADMIN_API_KEY' \
  --header 'Content-Type: application/json' \
  --header 'Accept: application/json' \
  --data '{
    "route_id": "CHANNEL_ID",
    "period_seconds": PERIOD_IN_SECONDS,
    "limit": TOKEN_LIMIT,
    "keyauth_id": "VORTEX_KEY_ID",
    "model": "MODEL_NAME"
  }'
```


```javascript
const VORTEX_ADMIN_URL = "https://your.vortex.base.url.com/api/admin";
const VORTEX_ADMIN_API_KEY = "your_vortex_admin_api_key";
const channel_id = ""; // from Listing Channels
const vortex_key_id = ""; // id of the Vortex key from the previous step

const headers = {
  "Content-Type": "application/json",
  Accept: "application/json",
  apiKey: VORTEX_ADMIN_API_KEY,
};

const data = {
  route_id: channel_id,
  period_seconds: 60, // e.g. 60
  limit: 100, // e.g. 100
  keyauth_id: vortex_key_id,
  model: "gpt-3.5-turbo", // must exactly match the model used in requests
};

const response = await fetch(`${VORTEX_ADMIN_URL}/token-limits`, {
  method: "POST",
  headers,
  body: JSON.stringify(data),
});
```
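Repeating this step for several models can be sketched as a loop that builds one token-limit body per model. Field names follow the request above; the model list and limit values are purely illustrative:

```javascript
// Build one /token-limits request body per model for a given channel and key.
function buildTokenLimitPayloads(channelId, vortexKeyId, modelLimits) {
  return modelLimits.map(({ model, limit, periodSeconds }) => ({
    route_id: channelId,
    keyauth_id: vortexKeyId,
    model,
    limit,
    period_seconds: periodSeconds,
  }));
}

// Illustrative limits for two models.
const limits = buildTokenLimitPayloads("chan-1", "key-1", [
  { model: "gpt-3.5-turbo", limit: 100, periodSeconds: 60 },
  { model: "gpt-4", limit: 50, periodSeconds: 60 },
]);
```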

Connect to LLMs via Vortex

With all the pieces in place, your application is ready to use OpenAI services through Vortex. For detailed instructions on establishing the connection, refer to Connecting to LLM via Vortex.

Congratulations! By following these steps, you've successfully integrated OpenAI services into your applications via Vortex, unlocking a new realm of possibilities for your projects.

Common Errors

  • You cannot have more than one token-limit for the same combination of api key, route and model.

    This error occurs when you try to add a duplicate usage limit configuration for the same Vortex key. Vortex currently doesn't allow multiple configurations for the same model on one API key.

    ```json
    {
      "error": "Conflict",
      "message": "You cannot have more than one token-limit for the same combination of api key, route and model."
    }
    ```

  • Bad request or schema violation.

    This usually happens if you are missing certain headers.

    ```json
    { "error": "Bad Request", "message": "keyauth_id is invalid or missing" }
    ```

    ```json
    {
      "message": "schema violation ({\n \"tags\":[\"key-name:ai-feature-poc\"]\n}: unknown field)",
      "name": "schema violation",
      "code": 2,
      "fields": { "{\n \"tags\":[\"key-name:ai-feature-poc\"]\n}": "unknown field" }
    }
    ```
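One way to avoid the Conflict error above is a client-side guard that checks for an existing limit with the same key, route, and model before posting. This is a sketch; the shape of the stored limits is an assumption:

```javascript
// Guard against the Conflict error: Vortex allows only one token-limit per
// (api key, route, model) combination, so skip duplicates before posting.
function isDuplicateTokenLimit(existingLimits, candidate) {
  return existingLimits.some(
    (limit) =>
      limit.keyauth_id === candidate.keyauth_id &&
      limit.route_id === candidate.route_id &&
      limit.model === candidate.model
  );
}
```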
