Vortex Proxy

Vortex acts as a proxy between your applications and Large Language Models (LLMs), such as those provided by OpenAI.

Prerequisites

Before initiating the integration process, ensure you have:

  • Vortex Proxy URL: The endpoint through which your application communicates with LLMs. For more information on how to get your Vortex proxy URL, see the Developers section.
  • Consumer API Key for Vortex: A unique key that authenticates requests from your application to Vortex. For more information on how to get your consumer API key, see the Generating Vortex API Keys section.

Establishing a Connection to LLMs via Vortex

With a channel configured and a Vortex API key generated for your consumer, connecting your application to leverage LLM services becomes a streamlined process. While the exact integration steps might vary based on your application's programming language and framework, the primary mechanism involves using the Vortex API key to authenticate requests and ensure a secure connection.

  • VORTEX_CHANNEL_NAME is the name of the channel you have created, e.g. openai.
  • VORTEX_CONSUMER_API_KEY is the Vortex API key generated for your consumer.
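
These two placeholders combine with your proxy URL to form the full request endpoint. A minimal sketch in JavaScript; the helper name is illustrative, not part of any Vortex SDK, and the URL shape mirrors the placeholder proxy URL used in the examples:

```javascript
// Build a Vortex endpoint from its parts. buildVortexEndpoint is an
// illustrative helper, not part of any Vortex SDK.
function buildVortexEndpoint(proxyUrl, channelName, path) {
  // e.g. https://your.vortex.proxy.url.com/openai/chat/completions
  return `${proxyUrl}/${channelName}${path}`;
}

const endpoint = buildVortexEndpoint(
  "https://your.vortex.proxy.url.com", // your Vortex proxy URL
  "openai",                            // VORTEX_CHANNEL_NAME
  "/chat/completions"
);
console.log(endpoint);
```

The channel name segment is what routes your request to the correct upstream LLM provider.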

Below are examples demonstrating how to connect to OpenAI services through Vortex:


curl --location 'https://your.vortex.proxy.url.com/VORTEX_CHANNEL_NAME/chat/completions' \
  --header "Content-Type: application/json" \
  --header "apikey: VORTEX_CONSUMER_API_KEY" \
  --data '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {
        "role": "system",
        "content": "You are a poetic assistant, skilled in explaining complex programming concepts with creative flair."
      },
      {
        "role": "user",
        "content": "Say this is a test."
      }
    ]
  }'


import OpenAI from "openai";

// Configure OpenAI with your Vortex API settings
const openai = new OpenAI({
  apiKey: "VORTEX_CONSUMER_API_KEY",
  baseURL: "https://your.vortex.proxy.url.com/VORTEX_CHANNEL_NAME",
  defaultHeaders: {
    "apikey": "VORTEX_CONSUMER_API_KEY",
  },
});

// Create a chat completion request
const chatCompletion = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Say this is a test" }],
});

// Output the response
console.log(chatCompletion.choices);



These examples illustrate how to send requests to the OpenAI API through Vortex, utilising the proxy URL and your consumer API key for authentication. By following these guidelines, you can integrate OpenAI services into your applications via Vortex.

Common Errors

  • Invalid authentication credentials

    { "message":"Invalid authentication credentials" }

    If you receive this error, check that your Vortex consumer API key is correct and is being sent in the apikey header.

  • no Route matched with those values

    { "message":"no Route matched with those values" }

    This error usually means the request URL does not match a configured channel. Check that the VORTEX_CHANNEL_NAME segment of your proxy URL matches the name of the channel you created.
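
Because Vortex returns these errors as JSON bodies with a message field, a client can map them to troubleshooting hints. A minimal sketch in JavaScript; the hint wording is illustrative, only the message strings come from the error payloads above:

```javascript
// Map known Vortex error messages to troubleshooting hints.
// The message strings match the error payloads shown above;
// the hint text is illustrative, not part of the Vortex API.
function vortexErrorHint(errorBody) {
  const message = errorBody && errorBody.message;
  if (message === "Invalid authentication credentials") {
    return "Check your Vortex consumer API key and the apikey header.";
  }
  if (message === "no Route matched with those values") {
    return "Check the VORTEX_CHANNEL_NAME segment of your proxy URL.";
  }
  return "Unrecognised error: " + message;
}

// Example: inspect a parsed error response body
console.log(vortexErrorHint({ message: "Invalid authentication credentials" }));
```

In practice you would call this with the parsed JSON body of any non-2xx response from the proxy.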
