scalaone/azure-openai-proxy

Azure-OpenAI-Proxy is an application that serves as a proxy for the OpenAI API. It enables users to request AI-generated text completions for specific prompts, using different models and parameters, and supports GPT-4 models in addition to the other available models. The proxy simplifies interaction with the OpenAI API and makes it easy to manage multiple deployments for your AI-based text generation applications.
Follow these steps to set up Azure-OpenAI-Proxy:
```shell
git clone [***]
cd azure-openai-proxy
npm install
```
Replace the placeholder values in the example request (see the Usage section below) with your actual resource ID, deployment IDs, model names, and API key.
Run the application:
```shell
npm run start
```
The Azure-OpenAI-Proxy will be running on your server and listening for incoming requests.
To send a request, use a curl command to POST the input data to the application's URL. Replace the placeholder values with your actual resource ID, deployment IDs, model names, and API key.
Example request:
```shell
curl -X "POST" "[***]" \
  -H 'Authorization: Bearer YOUR_RESOURCE_ID:YOUR_MODEL_NAME_IDENTIFIERS:YOUR_API_KEY' \
  -H 'Content-Type: application/json; charset=utf-8' \
  -d $'{
    "messages": [
      { "content": "hi", "role": "user" }
    ],
    "temperature": 1,
    "model": "gpt-3.5-turbo",
    "stream": false
  }'
```
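The same request can be issued from Node.js (18+, which ships a global `fetch`). This is a minimal sketch, not part of the proxy itself: `PROXY_URL` is a hypothetical local address standing in for the elided URL above, and the credential placeholders must be replaced with your real values.

```typescript
// Hypothetical proxy address -- substitute your actual deployment URL.
const PROXY_URL = "http://localhost:3000/v1/chat/completions";

// Same request body shape as the curl example above.
const body = {
  messages: [{ content: "hi", role: "user" }],
  temperature: 1,
  model: "gpt-3.5-turbo",
  stream: false,
};

// Sends the chat request through the proxy and returns the reply text.
async function chat(): Promise<string> {
  const res = await fetch(PROXY_URL, {
    method: "POST",
    headers: {
      // Placeholders as in the curl example -- replace with real values.
      Authorization:
        "Bearer YOUR_RESOURCE_ID:YOUR_MODEL_NAME_IDENTIFIERS:YOUR_API_KEY",
      "Content-Type": "application/json; charset=utf-8",
    },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`proxy returned ${res.status}`);
  const data: any = await res.json();
  return data.choices[0].message.content; // assistant reply text
}
```

Calling `chat()` returns the assistant's reply once the proxy is running and real credentials are supplied.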
Replace the following placeholder values:
- YOUR_RESOURCE_ID with the actual resource ID (e.g., hai).
- YOUR_MODEL_NAME_IDENTIFIERS with the actual model and deployment ID mappings (e.g., `gpt-3.5-turbo|gpt-35-turbo,gpt-4|gpt-4,gpt-4-32k|gpt-4-32k`).
- YOUR_API_KEY with your actual API key (e.g., xxxxxx).

The request body contains the following parameters:
- messages: An array of message objects containing content and role properties. The content is the text input, and the role is one of 'system', 'user', or 'assistant'.
- temperature: Controls the randomness of generated completions. Higher values (e.g., 1) result in more random responses, while lower values (e.g., 0.1) produce more focused and deterministic responses.
- model: Specifies the AI model to be used for generating completions. In the example, it is set to 'gpt-3.5-turbo'.
- stream: A boolean value indicating whether the response should be streamed.

This project is licensed under the MIT License. See the LICENSE file for details.
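The Authorization header packs the model-to-deployment mappings into a single comma-separated string of `model|deployment` pairs. The following sketch shows how such a string can be decomposed; `parseModelMapping` is a hypothetical helper written for illustration, and the proxy's internal parsing may differ.

```typescript
// Splits "model|deployment,model|deployment,..." into a lookup table
// from OpenAI model names to Azure deployment IDs.
function parseModelMapping(identifiers: string): Map<string, string> {
  const mapping = new Map<string, string>();
  for (const pair of identifiers.split(",")) {
    const [model, deployment] = pair.split("|");
    if (model && deployment) mapping.set(model.trim(), deployment.trim());
  }
  return mapping;
}

// Example string from the Usage section above.
const mapping = parseModelMapping(
  "gpt-3.5-turbo|gpt-35-turbo,gpt-4|gpt-4,gpt-4-32k|gpt-4-32k"
);
console.log(mapping.get("gpt-3.5-turbo")); // logs "gpt-35-turbo"
```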
Contributions to Azure-OpenAI-Proxy are greatly appreciated. Please make sure to follow the project's coding standards and update documentation accordingly.
For further information, check the OpenAI API documentation.
Thank you for your interest in contributing to Azure-OpenAI-Proxy. Your efforts will help improve the project and benefit the community.