aichat is my LLM toolbox
I jumped on the LLM bandwagon when OpenAI released ChatGPT in late 2022. I started out using the web app to interact with the chatbot. As I learned more about LLMs’ abilities, I came to rely on ChatGPT more and more, using it multiple times a day. After six months, I got tired of the browser. I’m a text-mode hermit; my natural habitat is the command line, and driving LLMs through a browser felt cumbersome. I already use plenty of good text-mode programs to talk to humans; surely there are good terminal tools to talk to chatbots. I was looking for the irssi of LLMs.
I tried and discarded various tools before settling on aichat. It ticks all the boxes for me:
- It supports a wide range of back-ends and models (OpenAI, Anthropic, Google, etc.)
- It’s actively developed, which matters given how quickly LLMs evolve these days: I want to be able to use the latest models and features.
- It has syntax highlighting and good line editing, making the interactive experience nicer than in the browser.
- It supports advanced features like agents, RAG, macros, and function calls.
- It is a lightweight console app written in Rust.
- It’s easy to install since it’s a single binary.
- It feels snappy and polished.
Here are a few tips and tricks I learned using aichat.
By default, aichat doesn’t create a session, so it won’t remember your previous messages, which can be annoying:
$ aichat
Welcome to aichat 0.29.0
Type ".help" for additional help.
> My name is Henry.
Nice to meet you, Henry! How can I help you today?
> What’s my name?
I don’t have access to personal information unless you’ve already told me
during this conversation. Could you please tell me your name?
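Passing --session (or running the .session command inside the REPL) keeps the conversation history around. Here’s a rough sketch of the same exchange with a session enabled, replies paraphrased:

$ aichat --session
> My name is Henry.
Nice to meet you, Henry!
> What’s my name?
Your name is Henry.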
I have a script to automatically start a session with my favorite model (currently Claude 4 Sonnet):
#!/bin/sh
readonly model='claude:claude-sonnet-4-20250514'
role=$1
if test -n "$role"; then
    shift # drop the role name so it is not forwarded to aichat as prompt text
    aichat --model "$model" --session --role "$role" "$@" </dev/tty >/dev/tty
else
    aichat --model "$model" --session </dev/tty >/dev/tty
fi
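I drop the script somewhere on my PATH (ai is just what I happen to call it) and start a chat with or without a role:

chmod +x ~/bin/ai
ai        # session with the default model, no role
ai dev    # same, but with the dev role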
aichat can be used as an interactive chat or as a stand-alone tool. For example, I have a proofreading script that automatically fixes spelling and grammar mistakes. I invoke it from my text editor, Kakoune, by selecting the text I want checked and piping it to the proofreader script:
#!/bin/sh
exec aichat \
    --prompt '
You are a proofreader.
Read the text carefully and reproduce the text with correct spelling and grammar.
Keep the structure and the phrasing of the text and only correct mistakes.
Fix any spelling mistake, but if unsure, do not correct.
Fix any grammar mistake, but if unsure, do not correct.
Preserve the Markdown markup and the existing curly quotes.
Turn straight quotes into curly quotes, except in code blocks.
Convert -- in text into en-dash (–).
Convert --- in text into em-dash (—).
Do not provide explanations.
' \
    --model 'openai:gpt-4.1-mini'
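The script reads text on standard input and writes the corrected version to standard output, so it also works outside the editor (proofread is simply what I named it):

printf 'Their is a speling mistake.' | proofread

In Kakoune, I select the text and type |proofread<ret>, which pipes the selection through the script and replaces it with the output.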
I have custom roles for things I frequently use. For work, I have three roles to help me: dev for general development and tech, k8s-expert when I want information about Kubernetes in particular, and unix for Unix- and Linux-specific questions. I have a copyeditor role to criticize and improve my writing; an editor-rewrite role to edit and clean up my professional writing; a taxcanada role to advise me when it’s tax season; a starcraft2 role to coach me at StarCraft 2; and a cbt role to do Cognitive Behavioral Therapy on the cheap. These roles let me get answers quickly without having to write a long preamble.
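A role is essentially a named system prompt. In recent aichat versions, roles live as Markdown files in the roles/ directory next to the config file (~/.config/aichat/roles/ on my Linux machine), with the prompt as the file body. As a rough illustration (not my exact prompt), the unix role could be as simple as:

~/.config/aichat/roles/unix.md:
You are an experienced Unix and Linux system administrator.
Answer concisely and include portable shell examples when they help.

Then aichat --role unix, or .role unix inside the REPL, activates it.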
There are still features that I haven’t tried, like agents, retrieval-augmented generation (RAG), and macros.
A year in, aichat is my go-to LLM interface—it’s reliable, fast, easy to use, supports various models, and it keeps getting better. If you’re looking for a command-line tool to leverage LLMs, give aichat a try.