
Prompt Engineering

Prompt engineering is the skill of explaining a task clearly: the clearer and more verifiable your input to the model, the more stable and reusable its output. This applies equally to Ant Ling.

This document summarizes a practical prompt-writing method: start from “goals and constraints”, then progressively add context, format, examples, and evaluation to make the output more reliable.


Core Concepts

| Dimension | What You Need to Do | Typical Benefits |
| --- | --- | --- |
| Goal | Clarify the task to complete and the success criteria | Less off-topic output, fewer follow-up questions |
| Constraints | Limit output scope, tone, length, prohibited items | More consistent, more controllable |
| Context | Provide the necessary materials (facts/rules/data/code snippets) and mark their boundaries | Fewer hallucinations, higher accuracy |
| Structure | Specify the output format (list/table/JSON/steps) | Easy to parse and automate |
| Examples | Give 1-5 high-quality input-output examples (few-shot) | Stabilizes style, reduces bias |
| Verification | Require self-checks, citations/calculation steps, or structured output constraints | Improves verifiability |

Quick Selection Guide:

  • Fact Q&A / Code / Math: Clear goal + strong constraints + low sampling (see sampling.md)
  • Writing and Creative: Clear goal + style constraints + examples + moderate sampling
  • Information Extraction / Automation: Prioritize structured output (see structured-outputs.md), combine with tool calling if needed
  • Long Document Processing: Boundary markers + segmented tasks + traceable output structure

You can break the prompt into 5 small pieces and use them as needed (you don’t have to include all of them every time):

  1. Task: What you want the model to do
  2. Context/Materials: What information the model needs to reference (wrapped in boundaries)
  3. Constraints: Output language, tone, length, prohibited items, must-cover points
  4. Output Format: Use clearly parseable structure (list/table/JSON)
  5. Examples: Give a few high-quality examples (optional)

Here’s a “ready to copy” example.

You are a rigorous assistant.
【Task】
Rewrite the content provided by the user into Chinese copy suitable for external publication.
【Context Materials (only use the following, do not fabricate)】
<<<
(Paste original materials here)
>>>
【Requirements】
- Language: Simplified Chinese
- Style: Professional, concise, no exaggerated promotional language
- Must include: Product positioning, 3 core selling points, 1 usage scenario
- Prohibited: Introducing features/data not provided; using absolute statements like "best/first"
【Output Format】
Output in Markdown:
1) Title
2) Three selling points (list)
3) Usage scenario (1 paragraph)

Role and Instruction Priority (How to Make It More “Obedient”)

Most conversational interfaces support organizing messages with different roles (e.g., system / developer / user / assistant). The priority is typically:

system/developer (highest) > user > assistant (historical output).

Practical suggestions:

  • Write stable, unchanging rules in system/developer (e.g., output language, compliance boundaries, format constraints)
  • Write variable inputs in user (e.g., text/question to process this time)
  • Avoid embedding key constraints in long context passages; list them as items
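The suggestions above can be sketched as a messages list. This is a minimal sketch: the rule text and the function name are illustrative, and the list would be passed to the SDK call shown later in this page.

```python
# Stable, unchanging rules live in the system message;
# the variable input for this call lives in the user message.
SYSTEM_RULES = (
    "You are a rigorous assistant.\n"
    "- Always answer in Simplified Chinese.\n"
    "- Output Markdown with the sections: Conclusion, Basis, Risks.\n"
    '- Never use absolute claims such as "best" or "first".'
)

def build_messages(user_input: str) -> list[dict]:
    """Separate stable rules (system) from this call's variable input (user)."""
    return [
        {"role": "system", "content": SYSTEM_RULES},
        {"role": "user", "content": user_input},
    ]

messages = build_messages("Please review the attached release notes draft.")
# messages would then be passed to client.chat.completions.create(...)
```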

Using “Boundary Markers” to Prevent Context Confusion

When you need to provide long reference materials (documents, logs, code, contract terms), wrap them in clear boundaries and tell the model it can only cite or infer from content within those boundaries.

Recommended format:

【Reference Materials】
<<<DOC
... very long materials ...
DOC>>>

With matching constraints:

- Only answer based on information in <<<DOC ... DOC>>>
- If the materials are insufficient to draw a conclusion, output: Insufficient information, and specify what's missing

This is very effective at preventing the model from treating materials as instructions, or instructions as materials.
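A small helper can assemble the boundary and the matching constraints in one place, so every call uses the same convention. This is a sketch; the function name is illustrative, and the `<<<DOC ... DOC>>>` marker follows the convention above.

```python
def wrap_with_boundary(materials: str, question: str) -> str:
    """Wrap reference materials in <<<DOC ... DOC>>> and pin the constraints."""
    return (
        "【Reference Materials】\n"
        f"<<<DOC\n{materials}\nDOC>>>\n\n"
        "【Constraints】\n"
        "- Only answer based on information in <<<DOC ... DOC>>>\n"
        "- If the materials are insufficient, output: Insufficient information, "
        "and specify what's missing\n\n"
        f"【Question】\n{question}"
    )

prompt = wrap_with_boundary(
    "Product A supports offline mode.",
    "Does Product A work offline?",
)
```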


Making Output Parseable: Structured and Format Constraints

Markdown Structure (Lightweight, Universal)

Suitable for: Summaries, comparisons, solution reviews, meeting minutes.

【Output Format】
## Conclusion (one sentence)
## Basis
- ...
## Risks
- ...
## Next Steps
1. ...

JSON / Schema (Strong Constraints, Suitable for Production)

Suitable for: Information extraction, rule engine input, automated workflows.

Prioritize using structured output capabilities, see structured-outputs.md.
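When native structured output is not available, a common fallback is to request JSON in the prompt and validate it client-side. The sketch below checks required fields; the field names are illustrative, not part of any real schema.

```python
import json

REQUIRED_FIELDS = {"product", "selling_points", "scenario"}  # hypothetical schema

def parse_and_validate(raw: str) -> dict:
    """Parse model output as JSON and fail loudly if required fields are missing."""
    data = json.loads(raw)  # raises ValueError on non-JSON output
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    return data

# A well-formed model reply passes validation:
ok = parse_and_validate(
    '{"product": "A", "selling_points": ["fast"], "scenario": "offline use"}'
)
```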


Few-shot: Stabilizing Style and Rules with Few Examples

For tasks that resemble human judgment (scoring/classification/rewriting), few-shot examples can significantly improve consistency. The key points:

  • Examples should cover edge cases (easily misclassified examples)
  • Examples should be short, avoid bringing irrelevant information into the pattern
  • When rules change, examples should be updated synchronously

Example (sentiment classification):

You are a classifier, can only output: Positive / Neutral / Negative.
【Example】
Input: The earphone sound quality is good, battery life is also sufficient.
Output: Positive
Input: It's okay, not as good as advertised.
Output: Neutral
Input: Customer service attitude is terrible, never buying again.
Output: Negative
【Start Now】
Input: {User Review}
Output:
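Keeping the examples in one data structure makes the “update examples when rules change” suggestion easy to follow. A sketch; the names are illustrative:

```python
# Few-shot examples live in one place so rule changes update them together.
EXAMPLES = [
    ("The earphone sound quality is good, battery life is also sufficient.", "Positive"),
    ("It's okay, not as good as advertised.", "Neutral"),
    ("Customer service attitude is terrible, never buying again.", "Negative"),
]

def build_fewshot_prompt(review: str) -> str:
    """Render the examples followed by the new input, matching the template above."""
    lines = ["You are a classifier, can only output: Positive / Neutral / Negative.",
             "【Example】"]
    for text, label in EXAMPLES:
        lines.append(f"Input: {text}")
        lines.append(f"Output: {label}")
    lines += ["【Start Now】", f"Input: {review}", "Output:"]
    return "\n".join(lines)

prompt = build_fewshot_prompt("Arrived quickly, works as described.")
```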

Breaking Complex Tasks into Verifiable Steps

Models tend to miss points when facing complex requirements. Break tasks into clear steps and require item-by-item verification.

【Task】
Write a technical solution based on the requirements document.
【Steps】
1) First give your 5 key understandings of the requirements (cite original text snippets)
2) Give 2 candidate solutions (pros/cons, cost, and risks of each)
3) Select the recommended solution and give an implementation plan (milestones + rollback strategy)
【Acceptance Criteria】
- Must cover: Performance, availability, monitoring and alerting, security
- Not allowed: Introducing new goals outside the requirements document

Self-Check and “Insufficient Information” Strategy (Reducing Hallucinations)

In production scenarios, verifiability is most important. You can require the model to:

  • First judge if information is sufficient: If not, clearly state what’s missing
  • Provide basis for key conclusions: Which material, which rule, or which calculation step
  • Self-check before output: Check if format/required fields/prohibited items are met

Example:

Before answering, do two steps:
1) Judge whether the materials are sufficient; if not, output "Insufficient information: missing ..."
2) After the final answer, attach a "self-check list" and mark each item "met / not met"
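The same self-check can also be enforced client-side as a safety net after the model responds. This sketch verifies required sections and prohibited phrases; the checklist items are illustrative, and the substring matching is deliberately naive.

```python
REQUIRED_SECTIONS = ["Title", "Selling points", "Usage scenario"]  # hypothetical
PROHIBITED = ["best", "first"]  # absolute claims banned by the prompt

def self_check(answer: str) -> dict[str, bool]:
    """Return a pass/fail map mirroring the prompt's 'self-check list'."""
    checks = {f"has section: {s}": s in answer for s in REQUIRED_SECTIONS}
    # Naive substring check; a real checker might use word boundaries.
    checks.update(
        {f"avoids '{w}'": w.lower() not in answer.lower() for w in PROHIBITED}
    )
    return checks

report = self_check(
    "Title\nSelling points\nUsage scenario\nA solid choice for commuters."
)
```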

Code Example: Applying Prompt Engineering to API Calls

The following example follows the SDK conventions (base_url + api_key) used in the other tutorials on this site. The focus is on the structure of the messages and the prompt content itself.

from openai import OpenAI

client = OpenAI(
    base_url="https://api.ant-ling.com/v1",
    api_key="YOUR_API_KEY"
)

prompt = """You are a rigorous Chinese assistant.
【Task】
Answer questions based on the given materials.
【Materials (only use the following, do not fabricate)】
<<<DOC
Product A supports offline mode; in offline mode, only cached data can be viewed.
Product A's data cache defaults to 7 days, configurable from 1-30 days.
DOC>>>
【Question】
Can new data be viewed in offline mode? How long can the cache be retained?
【Output Format】
Answer in two key points, each no more than 20 characters.
"""

resp = client.chat.completions.create(
    model="Ling-2.6-flash",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2
)
print(resp.choices[0].message.content)

Common Issues and Anti-Patterns

1) Too Vague Prompts

Anti-example: “Help me optimize this paragraph.”

Improvement: Give goals (audience/channel/style), hard constraints (length/prohibited words/must-cover points), and output format.

2) Combining Multiple Tasks Together

Anti-example: “First summarize, then translate, then extract key points, finally give an action plan.”

Improvement: Break into steps and output item by item, or split into two calls.
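Splitting into two calls can be as simple as feeding the first call's output into the second prompt. In this sketch, `call_model` is a stub standing in for the API call shown earlier; the prompts and names are illustrative.

```python
def call_model(prompt: str) -> str:
    """Stub for the API call; replace with client.chat.completions.create(...)."""
    return f"[model output for: {prompt[:30]}...]"

def summarize_then_translate(text: str) -> str:
    # Call 1: summarize only.
    summary = call_model(
        f"Summarize the following in 3 bullet points:\n<<<DOC\n{text}\nDOC>>>"
    )
    # Call 2: translate only, taking the first call's output as its input.
    return call_model(
        f"Translate the following into English:\n<<<DOC\n{summary}\nDOC>>>"
    )

result = summarize_then_translate("long source text ...")
```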

3) Long Context Without Boundaries

Anti-example: Paste documents directly, without specifying which are materials and which are instructions.

Improvement: Wrap materials with <<< >>>, and clarify “only use material content”.

4) Output Not Verifiable

Anti-example: No format, no required fields, results difficult to parse and compare.

Improvement: Specify structure (table/JSON/segmented headings), and add “self-check list” or schema.
