Access Ling / Ring on ZenMux
If you already use ZenMux to centrally manage multiple large models (authentication, quotas, auditing, gray release), you can add Ant Ling as an upstream provider; the application side keeps calling through the OpenAI SDK (or OpenAI-compatible HTTP).
This article is divided into two parts:
- Platform side (ZenMux): configure Ant Ling as an upstream in ZenMux and create an external access credential for the business
- Business side (application): point `base_url` to ZenMux and call `ling` / `ming` / `ring` respectively
What You Need to Prepare
- Ant Ling API Key (see: Prerequisites)
- ZenMux gateway's external address (example: `https://zenmux.ai/api/v1`)
- API Key issued by ZenMux for the business (example: `ZENMUX_API_KEY`)
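It helps to keep both secrets out of source code from the start. A minimal sketch, assuming the environment-variable names `ZENMUX_API_KEY` and `ANT_LING_API_KEY` (this article's own convention, not anything ZenMux mandates):

```python
import os

def load_credentials() -> dict:
    """Read both keys from the environment; fail fast if either is missing."""
    creds = {
        # Issued by ZenMux; used by the application (business side).
        "zenmux_api_key": os.environ.get("ZENMUX_API_KEY"),
        # Issued by Ant Ling; used only on the platform side when
        # configuring the upstream provider in ZenMux.
        "ant_ling_api_key": os.environ.get("ANT_LING_API_KEY"),
    }
    missing = [name for name, value in creds.items() if not value]
    if missing:
        raise RuntimeError(f"Missing credentials: {missing}")
    return creds
```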
Platform Side: Configure Ant Ling as Upstream in ZenMux
Ant Ling provides an OpenAI-compatible interface:
- Upstream Base URL: `https://api.ant-ling.com/v1`
- Upstream authentication: `Authorization: Bearer <ANT_LING_API_KEY>`
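Before wiring the key into ZenMux, it is worth verifying it directly against the upstream. A sketch using only the standard library; it assumes the upstream exposes the OpenAI-compatible `GET /models` endpoint, and the helper names are ours:

```python
import json
import urllib.request

ANT_LING_BASE_URL = "https://api.ant-ling.com/v1"

def ant_ling_auth_header(api_key: str) -> dict:
    """The Bearer scheme shown above, as a request-header dict."""
    return {"Authorization": f"Bearer {api_key}"}

def smoke_test_upstream(api_key: str) -> list:
    """List model IDs from the upstream to confirm the key works
    (assumes an OpenAI-compatible /models endpoint)."""
    req = urllib.request.Request(
        f"{ANT_LING_BASE_URL}/models",
        headers=ant_ling_auth_header(api_key),
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return [m["id"] for m in json.load(resp)["data"]]
```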
Therefore, in ZenMux's "Upstream Provider / Provider / Upstream" configuration, add it as an OpenAI-compatible provider (field names depend on your ZenMux version's UI/configuration):
- provider type: OpenAI Compatible (or OpenAI)
- base_url: `https://api.ant-ling.com/v1`
- api_key: your Ant Ling API Key
- (Optional) timeout/retry/concurrency: set according to business traffic
- (Optional) logging and auditing: enabling request/response metadata logging is recommended (take care to mask sensitive data)
Then create an “external key / token” for the business on ZenMux, and bind this key to the upstream provider or corresponding route.
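Put together, the upstream entry looks roughly like the following. This is an illustrative shape only; as noted above, the actual field names depend on your ZenMux version:

```python
# Illustrative only -- field names vary by ZenMux version.
ant_ling_provider = {
    "name": "ant-ling",
    "type": "openai-compatible",
    "base_url": "https://api.ant-ling.com/v1",
    "api_key": "<ANT_LING_API_KEY>",  # platform-side secret, never given to apps
    "timeout_s": 60,                  # optional: tune to business traffic
    "max_retries": 2,                 # optional
    "log_metadata": True,             # optional: remember to mask sensitive data
}
```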
Business Side: Point OpenAI SDK to ZenMux
Your application no longer accesses `https://api.ant-ling.com/v1` directly; instead it accesses ZenMux's OpenAI-compatible entry point:
- base_url: `https://zenmux.ai/api/v1`
- api_key: the key issued by ZenMux for the business (not Ant Ling's key)
The following examples use the official OpenAI SDK, consistent with the other tutorials on this site. In the calls below, set the `model` field to the model ID displayed on the ZenMux platform.
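Since Ling and Ring suit different tasks (Ring targets reasoning, math, and code, as described below), a small routing helper keeps the model choice in one place. The IDs below are the ones this article uses; confirm them against what the ZenMux platform displays (for example via the SDK's `client.models.list()`):

```python
# Model IDs as displayed on the ZenMux platform (verify against your account).
LING_MODEL = "inclusionai/ling-2.6-flash"  # general chat
RING_MODEL = "inclusionai/ring-1t"         # complex reasoning / math / code

REASONING_TASKS = {"reasoning", "math", "code"}

def pick_model(task: str) -> str:
    """Route reasoning-heavy tasks to Ring, everything else to Ling."""
    return RING_MODEL if task in REASONING_TASKS else LING_MODEL
```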
Call Ling
OpenAI SDK (Python)
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://zenmux.ai/api/v1",
    api_key="<ZENMUX_API_KEY>",
)

# Chat Completions
completion = client.chat.completions.create(
    model="inclusionai/ling-2.6-flash",
    messages=[
        {
            "role": "user",
            "content": "What is the meaning of life?",
        }
    ],
)
print(completion.choices[0].message.content)

# Responses API
responses = client.responses.create(
    model="inclusionai/ling-2.6-flash",
    input="What is the meaning of life?",
)
print(responses)
```
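Because the entry point is OpenAI-compatible, streaming should also work the same way as against OpenAI directly (`stream=True` on Chat Completions). A sketch, written so the accumulation logic can be exercised without a live call:

```python
def accumulate_chat_stream(chunks) -> str:
    """Join the content deltas of a chat-completions stream into one string.
    `chunks` is the iterable returned by
    client.chat.completions.create(..., stream=True)."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:  # some chunks (e.g. role-only or final) carry no content
            parts.append(delta)
    return "".join(parts)

# Usage against ZenMux (network call, not run here):
# stream = client.chat.completions.create(
#     model="inclusionai/ling-2.6-flash",
#     messages=[{"role": "user", "content": "What is the meaning of life?"}],
#     stream=True,
# )
# print(accumulate_chat_stream(stream))
```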
Call Ring (Reasoning / Math / Code)
Ring is better suited to complex reasoning, math, and code tasks. On the business side, the calling method is identical to Ling; the only difference is selecting the appropriate model name:
OpenAI SDK (Python)
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://zenmux.ai/api/v1",
    api_key="<ZENMUX_API_KEY>",
)

# Chat Completions
completion = client.chat.completions.create(
    model="inclusionai/ring-1t",
    messages=[
        {
            "role": "user",
            "content": "What is the meaning of life?",
        }
    ],
)
print(completion.choices[0].message.content)
```
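Some OpenAI-compatible reasoning models return their reasoning trace in a separate message field (often named `reasoning_content`) alongside the final `content`; whether Ring exposes this through ZenMux depends on the upstream and gateway configuration, so read it defensively. The helper name is ours:

```python
def split_answer(message):
    """Return (reasoning, answer) from a chat-completions message.
    `reasoning` is None when the model or gateway does not expose
    a separate reasoning field."""
    return getattr(message, "reasoning_content", None), message.content

# Usage:
# reasoning, answer = split_answer(completion.choices[0].message)
```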
FAQ
What model name should I fill in?
Use the model ID displayed on the ZenMux platform, e.g. `inclusionai/ling-2.6-1t`, `inclusionai/ling-2.6-flash`, `inclusionai/ling-1t`, `inclusionai/ring-1t`, etc.
How to troubleshoot 401/403?
- Confirm you are using ZenMux's key (not Ant Ling's key)
- Confirm the ZenMux key is authorized to access the corresponding route/model
- On the platform side, confirm the upstream key ZenMux uses to access Ant Ling is still valid and has sufficient quota
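The checklist above can be folded into a small diagnostic helper; the hint strings are this article's own, not ZenMux error messages:

```python
def auth_error_hint(status_code: int) -> str:
    """Map a 401/403 from the ZenMux entry point to the most likely fix."""
    if status_code == 401:
        return ("Invalid or missing key at the gateway: confirm the app "
                "sends the ZenMux-issued key, not Ant Ling's key.")
    if status_code == 403:
        return ("Key recognized but not authorized: check that the ZenMux "
                "key is bound to this route/model, and that the platform-"
                "side Ant Ling upstream key is still valid with sufficient "
                "quota.")
    return "Not an auth error; inspect the response body for details."
```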