Anthropic API Compatible
Use this interface to call Ant Ling models, supporting multi-turn conversations, long-running tasks, and other scenarios. The payload format is fully compatible with the Anthropic Messages API, so you can integrate directly using the Anthropic SDK.
Request Address
| Method | URL |
|---|---|
| POST | https://api.ant-ling.com/anthropic/v1/messages |
Authentication
When calling the Anthropic-compatible API, authenticate with either the Authorization header or the x-api-key header; exactly one is required.
Authorization
- Type: HTTP Bearer Auth, value type string
- Required (mutually exclusive with x-api-key)
- Description: Used to verify account information. Go to the API Console and click Create Token to obtain one.

Authorization: Bearer <YOUR_API_KEY>

x-api-key
- Type: string
- Required (mutually exclusive with Authorization)
- Description: Used to verify account information. Go to the API Console and click Create Token to obtain one.

x-api-key: <YOUR_API_KEY>

Request Headers
Content-Type
- Type: enum<string>
- Required, fixed value application/json
- Description: The media type of the request body. Must be application/json.
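Assembled in code, the authentication and content-type headers might be built like this (a minimal sketch; `<YOUR_API_KEY>` is a placeholder, and exactly one of the two auth headers should be set):

```python
def build_headers(api_key: str, use_bearer: bool = True) -> dict:
    """Build request headers for the Anthropic-compatible endpoint.

    Exactly one of Authorization (Bearer) or x-api-key is set,
    per the mutual-exclusion rule above.
    """
    headers = {"Content-Type": "application/json"}
    if use_bearer:
        headers["Authorization"] = f"Bearer {api_key}"
    else:
        headers["x-api-key"] = api_key
    return headers
```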
Request Body
Overview
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| model | string | Yes | — | Model ID |
| messages | object[] | Yes | — | Message list |
| max_tokens | integer | No | 32000 | Maximum output tokens |
| stream | boolean | No | false | Whether to enable streaming output |
| system | object[] | No | — | System prompt |
| tools | object[] | No | — | Tool list (Function Calling) |
| tool_choice | object | No | auto | Tool usage strategy |
| stop_sequences | string[] | No | — | Stop sequences |
| temperature | double | No | 1 | Randomness, range [0.0, 1.0] |
| top_p | double | No | 1 | Nucleus sampling, range (0.0, 1.0] |
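As a quick reference, a minimal request body uses only the two required fields; everything else falls back to the defaults in the table above. A sketch (the model ID is taken from the Options list below):

```python
import json

# Minimal request body: only model and messages are required.
body = {
    "model": "Ling-2.6-flash",
    "messages": [{"role": "user", "content": "Hello"}],
    # Optional fields shown here with their documented defaults:
    "max_tokens": 32000,
    "stream": False,
}
payload = json.dumps(body)
```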
model
- Type: string
- Required
- Description: The ID of the model to call.
- Options: Ling-2.6-1T, Ling-2.6-flash, Ling-2.5-1T, Ling-1T, Ring-2.5-1T, Ring-1T
messages
- Type: object[]
- Required
- Description: The list of messages provided to the model, in conversation order. Supports text, tool_use, and tool_result content types.

[
  { "role": "user", "content": "Hello" }
]

messages.role
- Type: string
- Required
- Description: The role of the message sender.
- Options: user, assistant
messages.content
- Type: string
- Required
- Description: The content of the message sent to the model. Use a string for plain text, or an array of content blocks (for example tool_use or tool_result).
- Example: Hello, Ant Ling
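For multi-turn conversations, user and assistant messages alternate in conversation order (values below are illustrative):

```python
# A three-turn conversation history, oldest message first.
messages = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hello! How can I help you today?"},
    {"role": "user", "content": "Recommend a book on machine learning."},
]
```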
max_tokens
- Type: integer
- Optional, default 32000
- Description: Limits the maximum number of output tokens in a single model response.
stream
- Type: boolean
- Optional, default false
- Description: When set to true, enables streaming output. The server returns chunks in SSE (Server-Sent Events) format.

The timeout for non-streaming calls is 90 seconds. For longer generation tasks, it is recommended to enable stream: true to avoid timeouts.
system
- Type: object[]
- Optional
- Description: System prompt used to set the model’s role or behavior.
[
{
"type": "text",
"text": "You are an intelligent conversation assistant"
}
]

tools
- Type: object[]
- Optional
- Description: Tool list (Function Calling). The model may invoke these tools in its reply.
[
{
"name": "search_city_weather",
"description": "Search city weather",
"input_schema": {
"type": "object",
"properties": {
"city": {
"type": "string",
"description": "City name"
},
"date": {
"type": "string",
"description": "Date in yyyy-mm-dd format"
}
},
"required": ["city", "date"]
}
}
]

tool_choice
- Type: object
- Optional, default { "type": "auto" }
- Description: Controls how the model uses tools.
- Types: auto (model decides), any (must use a tool), tool (use a specific tool), none (disable tools)
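Putting tools and tool_choice together: after the model returns a tool_use content block, the tool's result is sent back as a tool_result block in a user message. A hypothetical round trip with the search_city_weather tool above (the id and weather values are illustrative, not real API output):

```python
# Force the model to call the weather tool from the tools example.
tool_choice = {"type": "tool", "name": "search_city_weather"}

# Hypothetical assistant reply containing a tool_use content block.
assistant_turn = {
    "role": "assistant",
    "content": [
        {
            "type": "tool_use",
            "id": "toolu_01",  # illustrative id
            "name": "search_city_weather",
            "input": {"city": "Shanghai", "date": "2025-06-01"},
        }
    ],
}

# The tool's result goes back in a user message,
# referencing the tool_use id it answers.
tool_result_turn = {
    "role": "user",
    "content": [
        {
            "type": "tool_result",
            "tool_use_id": "toolu_01",
            "content": "Sunny, 24°C",
        }
    ],
}
```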
stop_sequences
- Type: string[]
- Optional
- Description: Stop sequences. The model stops generation early when it emits any string in this list.
temperature
- Type: double
- Optional, default 1
- Description: Controls randomness. Range [0.0, 1.0]; lower values are more deterministic, higher values more diverse.
top_p
- Type: double
- Optional, default 1
- Description: Nucleus sampling threshold. Range (0.0, 1.0]; lower values bias toward higher-probability tokens.
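Both sampling parameters go in the request body alongside the required fields. A common practice (a general habit, not a requirement of this API) is to tune one of temperature or top_p and leave the other at its default:

```python
# Request body tuning temperature only; top_p stays at its default of 1.
body = {
    "model": "Ling-2.6-flash",
    "messages": [{"role": "user", "content": "Write a haiku about spring"}],
    "temperature": 0.2,  # lower value -> more deterministic output
}
```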
Request and Response
Request Example
Streaming
cURL
curl --request POST \
--url https://api.ant-ling.com/anthropic/v1/messages \
--header 'Authorization: Bearer <YOUR_API_KEY>' \
--header 'Content-Type: application/json' \
--data '{
"model": "Ling-2.6-flash",
"stream": true,
"messages": [
{
"role": "user",
"content": "What opportunities and challenges will the Chinese large language model industry face in 2025?"
}
]
}'

Response Example
Streaming response
Streaming responses use SSE (Server-Sent Events). Each event consists of an event line and a data line; the stream ends with a message_stop event.
event: message_start
data: {"message":{"content":[],"id":"0be8c630...","model":"Ling-2.6-flash","role":"assistant","type":"message","usage":{"input_tokens":0,"output_tokens":0}},"type":"message_start"}
event: content_block_start
data: {"content_block":{"text":"","type":"text"},"index":0,"type":"content_block_start"}
event: content_block_delta
data: {"delta":{"text":"Hello","type":"text_delta"},"index":0,"type":"content_block_delta"}
event: content_block_delta
data: {"delta":{"text":"! How can I help you today?","type":"text_delta"},"index":0,"type":"content_block_delta"}
event: content_block_stop
data: {"index":0,"type":"content_block_stop"}
event: message_delta
data: {"delta":{"stop_reason":"end_turn"},"type":"message_delta","usage":{"input_tokens":19,"output_tokens":21}}
event: message_stop
data: {"type":"message_stop"}
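The text deltas in a stream like the one above can be reassembled by reading the data lines and concatenating text_delta payloads. A minimal parser sketch (no network I/O; it operates on raw SSE lines):

```python
import json

def collect_text(sse_lines):
    """Concatenate text from content_block_delta events in an SSE stream."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip event: lines and blank keep-alives
        event = json.loads(line[len("data: "):])
        if event.get("type") == "content_block_delta":
            delta = event.get("delta", {})
            if delta.get("type") == "text_delta":
                parts.append(delta.get("text", ""))
    return "".join(parts)
```

Applied to the data lines in the example above, this yields the full text "Hello! How can I help you today?".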