Most ApexSpriteAI problems fall into a small number of categories: network connectivity between Claude Code and LM Studio, misconfigured environment variables, Tailscale routing failures, MCP registration errors, and hardware resource exhaustion when loading large models. Work through the relevant section below to isolate and fix your issue. Each section describes how to diagnose the problem, then provides the exact steps to resolve it.

## Documentation Index
Fetch the complete documentation index at: https://reliatrack.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
## Connection refused when Claude Code contacts LM Studio
**Symptoms:** Claude Code returns an error such as `ECONNREFUSED`, `connect ECONNREFUSED 100.82.56.40:1234`, or `Failed to connect to localhost:1234`.

**Diagnosis:** Run a direct port check from your terminal. If this fails, the issue is at the network or server layer, not in Claude Code itself.

**Fixes:**

- Confirm LM Studio is running. Open LM Studio and switch to the Local Server tab. The server status must show green. Click Start Server if it is stopped.
- Check the port. The default port is `1234`. If you changed it in LM Studio, update `ANTHROPIC_BASE_URL` in `~/.claude/config.json` to use the new port number.
- Set the bind address to `0.0.0.0`. If Claude Code runs on a different machine from LM Studio, LM Studio must bind to `0.0.0.0` rather than `127.0.0.1`. In LM Studio, open Local Server → Bind Address and change the value, then restart the server.
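A minimal sketch of the direct port check, assuming the example Tailscale IP and default port used in this guide (substitute your own values, or `127.0.0.1` for a local server):

```shell
HOST=100.82.56.40   # example Tailscale IP; use 127.0.0.1 for a local server
PORT=1234           # LM Studio's default port

# -z: scan only, -v: verbose, -w 3: three-second timeout so a dead
# route fails fast instead of hanging.
if nc -vz -w 3 "$HOST" "$PORT"; then
  echo "port open: the problem is in Claude Code's configuration"
else
  echo "port closed: check LM Studio's server status, port, and bind address"
fi
```

If the check succeeds when run on the GPU server itself but fails from your Mac, skip ahead to the Tailscale section.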
## Claude Code is still connecting to Anthropic's cloud
**Symptoms:** Requests succeed but responses come from Anthropic's API, you are billed for cloud usage, or you receive an Anthropic authentication error when you have no active subscription.

**Diagnosis:** Check whether `ANTHROPIC_BASE_URL` is set in your current shell session. If the output is empty, the environment variable is not active.

**Fixes:**

- Verify `~/.claude/config.json` contains the correct value.
- If you set it in your shell profile, reload the profile.
- Check for a typo. The URL must not have a trailing slash and must not include the `/v1/messages` path. The correct format is `http://<ip>:<port>` only.

The shell environment variable takes precedence over `config.json`. If you have both set, make sure neither one contains a stale Anthropic URL.
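The format rules above can be checked mechanically. A small sketch (the `check_base_url` helper is ours, not part of Claude Code):

```shell
# Validate the base-URL format: http://<ip>:<port>, with no trailing
# slash and no /v1/messages path appended.
check_base_url() {
  case "$1" in
    */) echo "invalid: trailing slash"; return 1 ;;
    */v1/messages*) echo "invalid: includes /v1/messages"; return 1 ;;
    http://*:*) echo "ok"; return 0 ;;
    *) echo "invalid: expected http://<ip>:<port>"; return 1 ;;
  esac
}

check_base_url "http://100.82.56.40:1234"    # → ok
check_base_url "http://100.82.56.40:1234/"   # → invalid: trailing slash
```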
## Tailscale connectivity issues
**Symptoms:** `nc -vz` fails, Claude Code times out, or you can reach LM Studio from the GPU server itself but not from your Mac.

**Diagnosis:** Check Tailscale status on your Mac. The Spark server should appear in the list with a green connected indicator. If it shows as offline, reconnect Tailscale before continuing.

**Fixes:**

- Reconnect Tailscale on both machines if either shows as offline. On macOS, click the Tailscale menu bar icon and select Connect.
- Verify the Tailscale IP on your GPU server, and update `ANTHROPIC_BASE_URL` in your config if the IP has changed.
- Test reachability with netcat.
- Check firewall rules on the Spark server. Port 1234 must not be blocked by the host firewall.
- Confirm LM Studio is bound to `0.0.0.0`, not `127.0.0.1`. Even with Tailscale working, a `127.0.0.1` bind address blocks all remote requests.
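The status, IP, and reachability checks above can be sketched as follows. `tailscale status` and `tailscale ip -4` are real Tailscale CLI subcommands; the guard lets the sketch run even on a machine without Tailscale installed, and the IP is the example used throughout this guide:

```shell
if command -v tailscale >/dev/null; then
  tailscale status   # on the Mac: the Spark server should be listed as connected
  tailscale ip -4    # on the GPU server: prints its current Tailscale IPv4
fi

# From the Mac, test the LM Studio port over the tailnet:
nc -vz -w 3 100.82.56.40 1234 \
  || echo "unreachable: recheck Tailscale, the host firewall, and the bind address"
```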
## MCP tool not found or command not available
**Symptoms:** Claude Code reports that an MCP server failed to start, the tool is unavailable, or the command was not found.

**Diagnosis:** Inspect the current MCP configuration. Check whether the `mcpServers` entry exists and whether the `command` field points to an executable that is available in your `PATH`.

**Fixes:**

- Re-register the MCP server. Remove the existing entry and add it again.
- Confirm the command exists. If the MCP server uses a binary (not `npx`), verify it is installed and on your `PATH`. Install the package if the command is missing, then re-register the MCP server.
- Check that Node.js is installed if the MCP server uses `npx`.
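A sketch of the inspection and re-registration steps. The `claude mcp` subcommands exist in current Claude Code releases, but verify against your installed version; `my-server` and `@example/mcp-server` are placeholders, not real names:

```shell
if command -v claude >/dev/null; then
  claude mcp list                               # show registered MCP servers

  # Re-register a server (name and package are placeholders):
  claude mcp remove my-server
  claude mcp add my-server -- npx -y @example/mcp-server
fi

# Confirm the runtime the server depends on is actually on PATH:
command -v npx || echo "npx missing: install Node.js"
node --version 2>/dev/null || echo "Node.js not installed"
```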
## Model fails to load in LM Studio
**Symptoms:** LM Studio shows an error when loading a model, the loading progress bar stalls, or the application crashes.

**Diagnosis:** Check how much unified memory or VRAM is currently in use, on macOS with Apple Silicon or on the Spark server (Linux + NVIDIA). Compare free memory against the model's expected memory footprint at your chosen quantization level.

**Fixes:**
- Close other memory-intensive applications before loading a large model. Browser tabs, open IDE windows, and other AI tools all compete for the same memory pool.
- Reduce the context window size. A context window of 64k requires significantly more memory than 32k. Lower the context length in Model Settings and try loading again.
- Switch to a smaller or more aggressively quantized model. If Qwen2.5-Coder-32B at Q8 fails to load, try Q4 quantization, or drop to a 16B model such as DeepSeek-Coder-V2-Lite.
- Restart LM Studio to release memory held by a previously loaded model before loading a new one.
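The memory check and the quantization trade-off above can be sketched as follows. `vm_stat` and `nvidia-smi` are the standard tools on each platform; the footprint formula is a rough rule of thumb (parameters × bits-per-weight ÷ 8), not LM Studio's exact accounting, which also includes KV-cache and runtime overhead:

```shell
# Check free memory before loading a model (platform-dependent):
if command -v vm_stat >/dev/null; then
  vm_stat   # macOS: unified-memory page statistics
elif command -v nvidia-smi >/dev/null; then
  nvidia-smi --query-gpu=memory.used,memory.total --format=csv
fi

# Rough weight-footprint estimate:
PARAMS_B=32; BITS=4   # e.g. a 32B model at Q4
echo "approx weights: $(( PARAMS_B * BITS / 8 )) GB"   # → approx weights: 16 GB
```

By the same estimate, the 32B model at Q8 needs roughly 32 GB for weights alone, which is why dropping to Q4 or to a smaller model often resolves a failed load.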
## DNS resolution failures on macOS
**Symptoms:** Hostnames that previously resolved correctly stop working, you can connect by IP address but not by hostname, or network changes do not take effect.

**Diagnosis:** Attempt to resolve the hostname manually. If the response contains a stale or incorrect IP, the DNS cache needs to be cleared.

**Fixes:** Flush the macOS DNS cache, then retry the resolution. If the hostname still resolves incorrectly, query a specific DNS server directly to compare results.
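A sketch of the resolve/flush/compare cycle, using the standard macOS resolver tools (`dscacheutil`, `mDNSResponder`). The hostname is a placeholder, the flush requires admin rights, and `1.1.1.1` is just one example of an external DNS server to compare against:

```shell
HOST_TO_CHECK=spark-server.example.com   # placeholder hostname

# macOS-only commands, guarded so the sketch is portable:
if command -v dscacheutil >/dev/null; then
  dscacheutil -q host -a name "$HOST_TO_CHECK"   # resolve via the system cache
  sudo dscacheutil -flushcache                   # flush the DNS cache
  sudo killall -HUP mDNSResponder                # restart the resolver daemon
  dscacheutil -q host -a name "$HOST_TO_CHECK"   # retry after the flush
fi

# Query a specific DNS server directly to compare results:
dig @1.1.1.1 "$HOST_TO_CHECK" +short || true
```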