Talking to LLMs with curl
Last year I posted about aichat, an LLM command-line shell. I talked about a wrapper built on top of it to spellcheck a portion of Unicode text. I use it daily with my text editors, Kakoune and Helix.
Yesterday my OpenAI credits expired, so I switched to [OpenRouter][4], a unified API for many AI models.
I had been using the openai/gpt-5-mini model for a while, and I figured it was time to update to something more recent. I tried deepseek/deepseek-v4-flash, but unfortunately the thinking block of the model was included in the output:
$ echo I catn spelll weel | proofreader
<think>
We are asked to proofread the text: "I catn spelll weel"
We need to correct spelling and grammar while preserving structure and phrasing.
...
Thus output: I can’t spell well.
</think>
I can’t spell well.
I tried disabling this behavior in aichat but found no option. OpenRouter’s documentation showed that I could query the model directly with curl. I knew LLMs worked over HTTP, but I hadn’t realized how simple a call could be: all you need is an authentication header and a JSON body.
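As a sketch of how small a request can be — reusing the OPENROUTER_API_TOKEN variable from my script below, with the model name and prompt as placeholder examples — a one-off call looks like this:

```shell
# One-off request: an auth header plus a JSON body is all it takes.
curl -s https://openrouter.ai/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENROUTER_API_TOKEN" \
  -d '{
    "model": "deepseek/deepseek-v4-flash",
    "messages": [{"role": "user", "content": "Say hello"}]
  }' | jq -r '.choices[0].message.content'
```

The response comes back as JSON too, which is why jq pairs so naturally with curl here.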
I rewrote my proofreader script with curl and jq to work with the OpenRouter API:
#!/bin/sh
readonly system_prompt="
You are a proofreader.
Reproduce the text with the correct spelling and grammar.
Keep the structure and the phrasing of the text and only correct mistakes.
Fix any spelling mistake, but if unsure, don't correct.
Fix any grammar mistake, but if unsure, don't correct.
Preserve the Markdown markup and the existing curly quotes.
Turn straight quotes into curly quotes, except in code blocks.
Convert -- in text into en-dash (–).
Convert --- in text into em-dash (—).
Do not provide explanations.
"
payload=$(jq -n \
  --arg user_content "$(cat)" \
  --arg system_prompt "$system_prompt" \
  '{
    model: "deepseek/deepseek-v4-flash",
    messages: [
      {role: "system", content: $system_prompt},
      {role: "user", content: $user_content}
    ],
    stream: false,
    reasoning: {
      exclude: true,
      enabled: true
    }
  }')
curl -s https://openrouter.ai/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENROUTER_API_TOKEN" \
  -d "$payload" | jq -r '.choices[0].message.content'
Reasoning is the model’s internal thought process; I excluded it from the output but kept it enabled to improve the final result. These parameters are specific to OpenRouter; other providers use different ones.
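When reasoning is not excluded, OpenRouter returns the thought process in its own field of the response rather than mixed into the content, so the two can be split with jq. The sample response below is hand-written, and the reasoning field name is my reading of OpenRouter’s docs, so treat this as a sketch:

```shell
# Hand-written miniature of an OpenRouter chat completion response.
response='{"choices": [{"message": {"content": "I can’t spell well.", "reasoning": "We are asked to proofread…"}}]}'

# The final answer:
printf '%s' "$response" | jq -r '.choices[0].message.content'
# → I can’t spell well.

# The thought process; "// empty" prints nothing if the field is absent:
printf '%s' "$response" | jq -r '.choices[0].message.reasoning // empty'
# → We are asked to proofread…
```

This is also why `exclude: true` is enough for my script: the final answer in content stays clean either way.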
In the end, deepseek/deepseek-v4-flash proved too slow, so I settled on qwen/qwen3-32b as my default spellchecking model.