
Hope to add support for deepseek-v4 #1127

@ZhenyuWANG-PolyU

Description


Before asking

  • I have tried PDFMathTranslate-next and given feedback in PDFMathTranslate-next

Is your feature request related to a problem?

In the Translate with different services function, the default model for DeepSeek is deepseek-chat, which is a model without thinking. However, deepseek-v4 pro and flash have already been published, and in the future deepseek-chat will no longer be supported by DeepSeek. If we want to turn the thinking function off, we must change the original code. So we need to add support for deepseek-v4 pro and flash.

Describe the solution you'd like

According to the DeepSeek API documentation:
The new base URL is https://api.deepseek.com
The way to disable thinking for both pro and flash is:

response = client.chat.completions.create(
    model="deepseek-v4-pro",
    ...
    extra_body={"thinking": {"type": "disabled"}}
)
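A minimal self-contained sketch of what the snippet above implies: a helper that assembles the keyword arguments for `client.chat.completions.create()`, with the thinking switch passed through `extra_body` as the issue describes. The helper name `deepseek_request_kwargs` is hypothetical (not part of any SDK); the model name and URL are taken from the issue.

```python
from typing import Any


def deepseek_request_kwargs(model: str,
                            messages: list[dict[str, str]],
                            thinking: bool) -> dict[str, Any]:
    """Assemble kwargs for client.chat.completions.create().

    Per the DeepSeek docs quoted in this issue, the thinking switch
    must be passed via extra_body when using the OpenAI SDK.
    """
    return {
        "model": model,
        "messages": messages,
        "extra_body": {
            "thinking": {"type": "enabled" if thinking else "disabled"}
        },
    }


# Usage with the OpenAI SDK (makes a network call, so not executed here):
#
# from openai import OpenAI
# client = OpenAI(api_key="<DEEPSEEK_API_KEY>",
#                 base_url="https://api.deepseek.com")
# response = client.chat.completions.create(
#     **deepseek_request_kwargs(
#         "deepseek-v4-pro",
#         [{"role": "user", "content": "Hello"}],
#         thinking=False,
#     )
# )
```

Building the kwargs in one place like this would also make it easy for PDFMathTranslate to expose a single on/off setting instead of hard-coding the model's thinking behavior.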

Additional context

deepseek-chat (to be deprecated on 2026/07/24)
deepseek-reasoner (to be deprecated on 2026/07/24)
(1) The thinking switch defaults to enabled
(2) In thinking mode, the default effort for ordinary requests is high; for some complex agent-style requests (e.g. Claude Code, OpenCode), effort is automatically set to max
(3) In thinking mode, for compatibility, low and medium are mapped to high, and xhigh is mapped to max
When setting the thinking parameter via the OpenAI SDK, you need to pass it inside extra_body:
response = client.chat.completions.create(
    model="deepseek-v4-pro",
    ...
    reasoning_effort="high",
    extra_body={"thinking": {"type": "enabled"}}
)
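The effort-mapping rules in point (3) can be sketched as a small helper. The function name `normalize_reasoning_effort` is hypothetical (not part of the DeepSeek or OpenAI SDKs); the mapping itself is exactly what the quoted docs state.

```python
def normalize_reasoning_effort(effort: str) -> str:
    """Map a requested reasoning_effort to what DeepSeek would use
    in thinking mode, per the compatibility rules quoted above:
    low and medium map to high, xhigh maps to max.
    """
    mapping = {
        "low": "high",
        "medium": "high",
        "high": "high",
        "xhigh": "max",
        "max": "max",
    }
    if effort not in mapping:
        raise ValueError(f"unknown reasoning_effort: {effort!r}")
    return mapping[effort]
```

In practice this means only two effort levels (high and max) are distinguishable in thinking mode, so a translation tool probably only needs to expose those two.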

Metadata

    Labels

    enhancement (New feature or request)
