Chat UI documentation


OpenAI

Features:
- Tools
- Multimodal

Chat UI can be used with any API server that supports OpenAI API compatibility, for example text-generation-webui, LocalAI, FastChat, llama-cpp-python, ialacol, and vLLM.

The following example config makes Chat UI work with text-generation-webui. endpoint.baseUrl is the URL of the OpenAI-API-compatible server; it overrides the baseUrl used by the OpenAI instance. endpoint.completion determines which endpoint to use; it defaults to chat_completions, which uses /chat/completions. Change endpoint.completion to completions to use the /completions endpoint instead.

MODELS=`[
  {
    "name": "text-generation-webui",
    "id": "text-generation-webui",
    "parameters": {
      "temperature": 0.9,
      "top_p": 0.95,
      "repetition_penalty": 1.2,
      "top_k": 50,
      "truncate": 1000,
      "max_new_tokens": 1024,
      "stop": []
    },
    "endpoints": [{
      "type" : "openai",
      "baseURL": "https://:8000/v1"
    }]
  }
]`
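The MODELS value between the backticks is a JSON array, so a quick way to catch typos before starting Chat UI is to parse it with standard tools (Chat UI accepts a relaxed syntax, but keeping the value strict JSON makes this possible). A minimal sketch, using a trimmed-down version of the config above with a placeholder localhost URL:

```python
import json

# The MODELS value from .env.local, without the surrounding backticks.
models_json = """
[
  {
    "name": "text-generation-webui",
    "id": "text-generation-webui",
    "endpoints": [{"type": "openai", "baseURL": "http://localhost:8000/v1"}]
  }
]
"""

models = json.loads(models_json)
for model in models:
    for endpoint in model.get("endpoints", []):
        # Every endpoint in this guide uses the "openai" type.
        assert endpoint["type"] == "openai"
```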

The openai type includes the official OpenAI models. You can, for example, add GPT-4 / GPT-3.5 as "openai" models:

OPENAI_API_KEY=#your openai api key here
MODELS=`[{
  "name": "gpt-4",
  "displayName": "GPT 4",
  "endpoints" : [{
    "type": "openai",
    "apiKey": "or your openai api key here"
  }]
},{
  "name": "gpt-3.5-turbo",
  "displayName": "GPT 3.5 Turbo",
  "endpoints" : [{
    "type": "openai",
    "apiKey": "or your openai api key here"
  }]
}]`

We also support models from the o1 series. You need to add a few extra options in the config; here is an example for o1-mini:

MODELS=`[
  {
      "name": "o1-mini",
      "description": "ChatGPT o1-mini",
      "systemRoleSupported": false,
      "parameters": {
        "max_new_tokens": 2048,
      },
      "endpoints" : [{
        "type": "openai",
        "useCompletionTokens": true,
      }]
  }
]`
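The useCompletionTokens flag makes the token limit be sent as max_completion_tokens rather than max_tokens, which is the field the o1 series expects. A sketch of that request-body mapping, with a hypothetical helper name:

```python
def build_request_body(parameters: dict, use_completion_tokens: bool) -> dict:
    """Map Chat UI parameters to an OpenAI-style request body.

    Hypothetical helper illustrating the useCompletionTokens switch.
    """
    body = {}
    limit = parameters.get("max_new_tokens")
    if limit is not None:
        # o1-series models reject max_tokens and require max_completion_tokens.
        key = "max_completion_tokens" if use_completion_tokens else "max_tokens"
        body[key] = limit
    return body

print(build_request_body({"max_new_tokens": 2048}, True))
# {'max_completion_tokens': 2048}
```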

You can also use any model provider that offers an OpenAI-API-compatible endpoint. For example, you can self-host a Portkey gateway and experiment with Claude or GPTs served by Azure OpenAI. Here is an example for Claude from Anthropic:

MODELS=`[{
  "name": "claude-2.1",
  "displayName": "Claude 2.1",
  "description": "Anthropic has been founded by former OpenAI researchers...",
  "parameters": {
    "temperature": 0.5,
    "max_new_tokens": 4096,
  },
  "endpoints": [
    {
      "type": "openai",
      "baseURL": "https://gateway.example.com/v1",
      "defaultHeaders": {
        "x-portkey-config": '{"provider":"anthropic","api_key":"sk-ant-abc...xyz"}'
      }
    }
  ]
}]`

Example of GPT-4 deployed on Azure OpenAI:

MODELS=`[{
  "id": "gpt-4-1106-preview",
  "name": "gpt-4-1106-preview",
  "displayName": "gpt-4-1106-preview",
  "parameters": {
    "temperature": 0.5,
    "max_new_tokens": 4096,
  },
  "endpoints": [
    {
      "type": "openai",
      "baseURL": "https://{resource-name}.openai.azure.com/openai/deployments/{deployment-id}",
      "defaultHeaders": {
        "api-key": "{api-key}"
      },
      "defaultQuery": {
        "api-version": "2023-05-15"
      }
    }
  ]
}]`
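With an Azure configuration like the one above, the final request URL combines the deployment baseURL, the chat completions path, and the mandatory api-version query string. A sketch of how that URL is assembled (the helper and resource names are hypothetical):

```python
from urllib.parse import urlencode


def build_url(base_url: str, default_query: dict) -> str:
    # Azure OpenAI routes requests by deployment; the api-version
    # query parameter is required on every call.
    return f"{base_url}/chat/completions?{urlencode(default_query)}"


url = build_url(
    "https://my-resource.openai.azure.com/openai/deployments/gpt-4-1106-preview",
    {"api-version": "2023-05-15"},
)
print(url)
```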

DeepInfra

Or try Mistral from DeepInfra:

Note that apiKey can either be set custom for each endpoint, or globally using the OPENAI_API_KEY variable.

MODELS=`[{
  "name": "mistral-7b",
  "displayName": "Mistral 7B",
  "description": "A 7B dense Transformer, fast-deployed and easily customisable. Small, yet powerful for a variety of use cases. Supports English and code, and a 8k context window.",
  "parameters": {
    "temperature": 0.5,
    "max_new_tokens": 4096,
  },
  "endpoints": [
    {
      "type": "openai",
      "baseURL": "https://api.deepinfra.com/v1/openai",
      "apiKey": "abc...xyz"
    }
  ]
}]`
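The apiKey lookup described above can be sketched as follows: a per-endpoint key takes precedence, with the global OPENAI_API_KEY environment variable as the fallback (the helper name is hypothetical):

```python
import os
from typing import Optional


def resolve_api_key(endpoint: dict) -> Optional[str]:
    """Per-endpoint apiKey wins; otherwise fall back to the global
    OPENAI_API_KEY environment variable. Hypothetical helper."""
    return endpoint.get("apiKey") or os.environ.get("OPENAI_API_KEY")


os.environ["OPENAI_API_KEY"] = "global-key"
print(resolve_api_key({"apiKey": "abc...xyz"}))  # per-endpoint key wins
print(resolve_api_key({}))                       # falls back to the env var
```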

Non-streaming endpoints

For endpoints that do not support streaming, such as o1 on Azure, you can pass streamingSupported: false in your endpoint config:

MODELS=`[{
  "id": "o1-preview",
  "name": "o1-preview",
  "displayName": "o1-preview",
  "systemRoleSupported": false,
  "endpoints": [
    {
      "type": "openai",
      "baseURL": "https://my-deployment.openai.azure.com/openai/deployments/o1-preview",
      "defaultHeaders": {
        "api-key": "$SECRET"
      },
      "streamingSupported": false,
    }
  ]
}]`
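The effect of streamingSupported: false is that Chat UI requests one complete response instead of a server-sent-event token stream. Sketched here with a hypothetical helper that builds the request options:

```python
def request_kwargs(endpoint: dict) -> dict:
    # Streaming is assumed on unless the endpoint config disables it;
    # non-streaming endpoints return the whole answer in a single response.
    streaming = endpoint.get("streamingSupported", True)
    return {"stream": streaming}


print(request_kwargs({"streamingSupported": False}))  # non-streaming request
print(request_kwargs({}))                             # streaming by default
```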

Other

Some other providers and their baseURL for reference:

- Groq: https://api.groq.com/openai/v1
- Fireworks: https://api.fireworks.ai/inference/v1
