Managing OpenAI Access with Vortex

Vortex offers a streamlined approach to managing OpenAI API key distribution, enforcing usage limits, monitoring real-time usage, and controlling costs. This guide walks through using Vortex for OpenAI access, enabling your team to innovate while maintaining budgetary control.

Vortex simplifies the management of OpenAI API keys by allowing multiple Vortex keys to be linked to a single OpenAI key.

Issue Vortex Keys to Teams to Access OpenAI

Suppose teams within your organisation are eager to incorporate AI features built on OpenAI's GPT models into their applications. As they develop and test these features, you want to limit usage to keep costs under control.

For teams intending to use the gpt-3.5-turbo, gpt-4, and gpt-4-turbo models for development, the usage limits are as follows:

  • gpt-3.5-turbo: 10000 tokens/day
  • gpt-4: 1000 tokens/day
  • gpt-4-turbo: 2500 tokens/day

This configuration also blocks access to all other models.
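To make the policy concrete, here is a minimal sketch of how per-model daily token limits like the ones above could be enforced. The model names and limits come from this guide; the enforcement logic itself is illustrative and is not Vortex's internal implementation.

```python
from datetime import date

# Limits from this guide: tokens allowed per model per day.
DAILY_LIMITS = {
    "gpt-3.5-turbo": 10_000,
    "gpt-4": 1_000,
    "gpt-4-turbo": 2_500,
}

class TokenBudget:
    """Illustrative daily token budget; not Vortex internals."""

    def __init__(self, limits):
        self.limits = limits
        self.usage = {}  # (model, day) -> tokens consumed today

    def allow(self, model, tokens):
        """Record usage and return True only if the request fits today's budget."""
        if model not in self.limits:  # any unlisted model is blocked outright
            return False
        key = (model, date.today())
        used = self.usage.get(key, 0)
        if used + tokens > self.limits[model]:
            return False
        self.usage[key] = used + tokens
        return True
```

Note that requests for models outside the configured list are rejected immediately, matching the behaviour described above.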

Steps

  1. Start by creating a Channel named openai within Vortex if one does not already exist.

  2. Determine if the team requires one API key for everyone or individual keys. If one key suffices, Create a Consumer for the team. For individual keys, create a consumer for each team member.

  3. Generate a Vortex key for each consumer. Note: to create multiple keys for the same consumer, repeat this step as needed.

    Resource Limits

    1. Model: gpt-3.5-turbo, Token limit: 10000, Duration: 1d
    2. Model: gpt-4, Token limit: 1000, Duration: 1d
    3. Model: gpt-4-turbo, Token limit: 2500, Duration: 1d
  4. Distribute the generated Vortex key(s) and the proxy URL to the relevant teams.
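Once a team has its Vortex key and the proxy URL, it can send OpenAI-style requests through the proxy instead of directly to OpenAI. The sketch below shows one way such a request might be built; the proxy URL, the placeholder key, and the use of the Vortex key as a Bearer token are assumptions for illustration, so substitute the values Vortex actually issues to you.

```python
import json
from urllib.request import Request

PROXY_URL = "https://vortex.example.com/v1"  # placeholder proxy URL
VORTEX_KEY = "vk-example"                    # placeholder Vortex key

def build_chat_request(model, messages):
    """Build an OpenAI-style chat completion request routed via the proxy."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return Request(
        f"{PROXY_URL}/chat/completions",
        data=body,
        headers={
            # Assumption: the Vortex key is presented as a Bearer token.
            "Authorization": f"Bearer {VORTEX_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("gpt-3.5-turbo", [{"role": "user", "content": "Hello"}])
```

Because the request body follows OpenAI's chat completions format, existing client code typically only needs its base URL and API key swapped to go through the proxy.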

Monitoring and Managing Costs

With keys issued to the teams, Vortex provides detailed insight into API usage and costs, which is essential for effective budget management.
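As a rough illustration of the kind of view this enables, the sketch below rolls per-request token counts up into per-consumer totals. The record format here is invented for the example and is not Vortex's reporting schema.

```python
from collections import defaultdict

# Hypothetical usage records, one per proxied request.
records = [
    {"consumer": "team-a", "model": "gpt-4", "tokens": 400},
    {"consumer": "team-a", "model": "gpt-3.5-turbo", "tokens": 3_000},
    {"consumer": "team-b", "model": "gpt-4", "tokens": 250},
]

def tokens_by_consumer(usage_records):
    """Sum token usage per consumer across all models."""
    totals = defaultdict(int)
    for rec in usage_records:
        totals[rec["consumer"]] += rec["tokens"]
    return dict(totals)
```

Totals like these can be compared against the configured daily limits to spot teams approaching their budget.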