By default, the Claude Code CLI authenticates with Anthropic and routes every request to their cloud API. You can override this behavior with a single configuration file that points the CLI at your LM Studio server instead. Once the redirect is in place, the CLI works exactly as normal — tools, MCP integrations, and context handling all behave identically — except the model runs on your own hardware and no requests leave your network.

Prerequisites

  • Node.js 18 or later installed on your local machine
  • LM Studio running on your GPU server with a model loaded and the local server started on port 1234 (see Set up a local LLM with LM Studio)
  • Network connectivity to the server: either localhost (same machine) or a Tailscale IP for a remote server (a quick reachability check follows this list)
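You can confirm the server is reachable before continuing: LM Studio's local server answers an OpenAI-compatible models listing, so a successful JSON response here verifies both the network path and the running server. Replace <SERVER_IP> with localhost or your Tailscale address:
curl http://<SERVER_IP>:1234/v1/models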

1. Install the Claude Code CLI

Install the official Anthropic Claude Code package globally from npm:
npm install -g @anthropic-ai/claude-code
Confirm the installation succeeded:
claude --version
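If the shell reports that claude is not found, the usual cause is npm's global bin directory missing from your PATH. You can locate it with the command below; on macOS and Linux, global executables live under <prefix>/bin:
npm config get prefix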

2. Create the Claude configuration directory

The CLI reads its configuration from ~/.claude/config.json. Create the directory if it does not already exist:
mkdir -p ~/.claude
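If you have run the CLI before, a config.json may already exist. A cautious extra step is to back it up before editing (a no-op when the file is absent):
[ -f ~/.claude/config.json ] && cp ~/.claude/config.json ~/.claude/config.json.bak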

3. Write the config.json file

Create or edit ~/.claude/config.json with the following content. Set ANTHROPIC_BASE_URL to the address of your LM Studio server.
{
  "env": {
    "ANTHROPIC_BASE_URL": "http://100.82.56.40:1234",
    "ANTHROPIC_API_KEY": "local"
  }
}
The Tailscale IP 100.82.56.40 shown above is an example. Replace it with the actual Tailscale address of your GPU server. You can find it by running tailscale ip -4 on the server.
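If you prefer to write the file from the shell rather than an editor, a heredoc does it in one step. This sketch reuses the example address above, so substitute your own server's IP:
cat > ~/.claude/config.json <<'EOF'
{
  "env": {
    "ANTHROPIC_BASE_URL": "http://100.82.56.40:1234",
    "ANTHROPIC_API_KEY": "local"
  }
}
EOF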
How this works: Setting ANTHROPIC_BASE_URL redirects all API calls away from api.anthropic.com to your local server. The CLI never contacts Anthropic’s OAuth service, so no cloud authentication occurs. The ANTHROPIC_API_KEY field is required by the CLI’s schema but its value is not validated by LM Studio — "local" is a safe placeholder.
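Because these are ordinary environment variables, you can also export them in your shell for a one-off session instead of editing the config file (bash/zsh syntax shown; this assumes your CLI version honors environment overrides the same way):
export ANTHROPIC_BASE_URL="http://100.82.56.40:1234"
export ANTHROPIC_API_KEY="local"
claude "Hello from a local model"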

4. Verify the connection

Run a simple prompt to confirm the CLI is talking to your local model:
claude "Hello, are you running locally?"
The model should respond. Because no cloud authentication takes place, no Anthropic sign-in prompt appears, and the reply comes from whichever model is loaded in LM Studio.
You can also check LM Studio’s server logs in the Developer tab. Each request from the CLI appears as an incoming POST to /v1/messages, confirming the traffic is hitting your server.
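For a scripted, repeatable check, the CLI's print mode runs a single prompt non-interactively and exits; a sketch, assuming the -p (print) flag available in current Claude Code releases:
claude -p "Which model are you? Reply in one sentence."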

5. (Optional) Switch back to Anthropic cloud

To restore cloud API behavior, either delete ~/.claude/config.json or remove the ANTHROPIC_BASE_URL key from the env object. The CLI falls back to its default Anthropic endpoint automatically.
rm ~/.claude/config.json
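Deleting the file discards any other settings stored there. To keep the rest of the file and drop only the override, a sketch assuming jq is installed:
jq 'del(.env.ANTHROPIC_BASE_URL)' ~/.claude/config.json > /tmp/claude-config.json \
  && mv /tmp/claude-config.json ~/.claude/config.json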

Troubleshooting

Connection refused or timed out
Confirm that LM Studio's local server is running and bound to 0.0.0.0 on port 1234. Test connectivity directly with nc -vz <SERVER_IP> 1234. If you are using Tailscale, verify that both devices are on the same tailnet with tailscale status.

The CLI still asks for an Anthropic sign-in
This means ANTHROPIC_BASE_URL is not being picked up. Check that ~/.claude/config.json is valid JSON and that the env key sits at the top level of the object. You can validate the file with python3 -m json.tool ~/.claude/config.json.

Responses are slow or never arrive
The model loaded in LM Studio may be too large for your hardware. See Choose the right AI model for guidance on selecting a model that matches your server's memory capacity.

Next steps

With the CLI connected to your local model, you can extend it with custom tools using the Model Context Protocol. Follow Add MCP tools to your AI agent to install and configure your first tools.