Automatically Translating Chinese to English with the Free GPT-5 Mini Model via the GitHub Copilot SDK

    I feel that Chinese-language blogging has little hope left: readers are few, and search-engine traffic declines day by day. Yet some programming-related content actually draws more traffic from overseas, even when written in Chinese. So I thought about automatically translating my Chinese blog into English, which might reach a larger audience.

    Previously, I tried Free AI Large Model API Interface: Golang Proxy Implementation for Gemini 3 Flash Preview Version. Even this smaller model performed well, but reaching Google from servers in mainland China is a major hassle. Although I could route through a proxy in Germany, after much hesitation I abandoned that approach: the monthly quota for the Gemini models is low, and I didn't want to maintain yet another proxy service.

    Using the GitHub Copilot SDK for Translation Services

    See the previous article: Unlimited Free Large Model Tokens: GitHub Copilot CLI SDK Installation and Testing. The GitHub Copilot SDK provides free large models such as gpt-5-mini and gpt-4.1, and both are more than sufficient for English translation.

    I am unsure whether deploying the Copilot CLI on a server would get the account banned; using the same Copilot Pro account on multiple machines with different IPs at the same time always feels risky. So I adopted a solution similar to OpenClaw's:

    1. The translation service based on the Github Copilot Golang SDK runs on my local machine, i.e., my personal laptop.
    2. The translation service on this laptop connects to my server’s blog service via WebSocket.
    3. When a piece of Chinese content on the blog needs translation, the translation task is automatically dispatched over the WebSocket.
    4. After the local translation service completes processing, it returns the result to the server. The server then publishes the new content.
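The steps above imply a small message protocol on the WebSocket link between the server and the laptop. A minimal sketch of such a protocol follows; the struct and field names are assumptions for illustration, not the blog's actual wire format:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// TranslationTask is what the blog server pushes to the local worker
// over the WebSocket. Field names are illustrative assumptions.
type TranslationTask struct {
	PostID  int    `json:"post_id"`
	Model   string `json:"model"` // e.g. "gpt-5-mini"
	Chinese string `json:"chinese"`
}

// TranslationResult is what the local worker sends back to the server.
type TranslationResult struct {
	PostID  int    `json:"post_id"`
	English string `json:"english"`
	Err     string `json:"err,omitempty"`
}

// encodeTask serializes a task for transmission over the socket.
func encodeTask(t TranslationTask) ([]byte, error) {
	return json.Marshal(t)
}

// decodeTask parses a task received by the worker.
func decodeTask(data []byte) (TranslationTask, error) {
	var t TranslationTask
	err := json.Unmarshal(data, &t)
	return t, err
}

func main() {
	raw, _ := encodeTask(TranslationTask{PostID: 42, Model: "gpt-5-mini", Chinese: "你好，世界"})
	task, _ := decodeTask(raw)
	fmt.Println(task.PostID, task.Model) // 42 gpt-5-mini
}
```

With a shape like this, the server only needs to match each returned post_id to the pending post before publishing.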

    The effect is quite good.

    Model Selection

    Both gpt-5-mini and gpt-4.1 work well. However, avoid gpt-4o: in my actual testing, gpt-4o consumes the Premium requests quota, and each request counts at a 1x rate, which makes it even more expensive than Gemini 3 Flash. That's frustrating, because the model list in VS Code's Copilot shows gpt-4o as free. I was alarmed when I saw how much quota I had burned through today.
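Since the free/Premium split isn't obvious from the model list, it may be worth hard-coding an allowlist in the translation service so a misconfigured model name can't silently burn Premium quota. The allowlist below reflects only my own testing described in this post, not any official list:

```go
package main

import "fmt"

// freeModels lists models that, in my testing, did not consume the
// Premium requests quota. This is an observed assumption, not an
// official GitHub list.
var freeModels = map[string]bool{
	"gpt-5-mini": true,
	"gpt-4.1":    true,
}

// checkModel rejects models not on the allowlist, so the worker
// refuses to run with e.g. gpt-4o (which consumed Premium quota).
func checkModel(name string) error {
	if !freeModels[name] {
		return fmt.Errorf("model %q is not on the free allowlist; it may consume Premium requests", name)
	}
	return nil
}

func main() {
	fmt.Println(checkModel("gpt-5-mini")) // <nil>
	fmt.Println(checkModel("gpt-4o"))     // prints the rejection error
}
```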

    I found a similar issue on GitHub:

    https://github.com/github/copilot-sdk/issues/334

    Some models are missing from CopilotClient.list_models. The issue reports that several models (Gemini 3 Flash, Grok Code Fast 1, GPT-4o, and Raptor mini) do not appear in CopilotClient.list_models().

    Although that issue doesn't mention the Premium-requests consumption problem, the Copilot SDK's model handling does seem a bit confusing.

    About the Author 🌱

    I am a developer from Yantai, Shandong, China. If you have any interesting topics or software development needs, feel free to email me at zhongwei.sun2008@gmail.com, or follow my WeChat public account "Elephant Tools". See more contact information.