# ApexSpriteAI

## Docs

- [How ApexSpriteAI works: a system architecture guide](https://docs-apexspriteai.reliatrack.org/architecture-overview.md): Learn how ApexSpriteAI's four components — CLI, VPN, LM Studio, and MCP tools — work together to deliver private, GPU-accelerated AI agent workflows.
- [AI agent orchestration: how ApexSpriteAI coordinates tasks](https://docs-apexspriteai.reliatrack.org/concepts/agent-orchestration.md): Learn how ApexSpriteAI orchestrates AI agents by coordinating model inference, tool execution, and context management across your workflow.
- [Local AI models in ApexSpriteAI: LM Studio and model tiers](https://docs-apexspriteai.reliatrack.org/concepts/local-ai-models.md): ApexSpriteAI uses LM Studio to run open-source LLMs on your own GPU hardware. Learn which models work best and how they compare in speed and capability.
- [MCP tools: how the Model Context Protocol extends agents](https://docs-apexspriteai.reliatrack.org/concepts/mcp-tools.md): MCP tools let your AI agent interact with files, run commands, and call APIs. Learn how the Model Context Protocol extends your agent's capabilities.
- [Configure Claude Code: config.json and environment variables](https://docs-apexspriteai.reliatrack.org/configuration/environment.md): Set up your shell environment and Claude Code configuration file to connect ApexSpriteAI components. Covers ANTHROPIC_BASE_URL, config.json, and MCP settings.
- [Tune LM Studio model and server settings for ApexSpriteAI](https://docs-apexspriteai.reliatrack.org/configuration/model-settings.md): Tune LM Studio model settings including context window size, temperature, and server binding to optimize ApexSpriteAI performance for your workload.
- [Configure networking between Claude Code and LM Studio](https://docs-apexspriteai.reliatrack.org/configuration/networking.md): Configure your network so Claude Code CLI can reach your LM Studio backend. Covers Tailscale VPN setup, port settings, and connectivity verification.
- [Install, configure, and manage MCP tools in ApexSpriteAI](https://docs-apexspriteai.reliatrack.org/guides/adding-mcp-tools.md): Extend your ApexSpriteAI agent with custom tools using the Model Context Protocol. Install, configure, and verify MCP tools in Claude Code CLI.
- [Route Claude Code CLI to your local LM Studio backend](https://docs-apexspriteai.reliatrack.org/guides/connecting-claude-code.md): Configure the Claude Code CLI to use your local LM Studio backend instead of Anthropic's cloud API. Includes config file setup and verification steps.
- [Install and configure LM Studio for local AI inference](https://docs-apexspriteai.reliatrack.org/guides/local-llm-setup.md): Install and configure LM Studio on your GPU server to serve AI inference requests. This guide covers installation, model loading, and server configuration.
- [Compare AI models: speed, capability, and hardware needs](https://docs-apexspriteai.reliatrack.org/guides/model-selection.md): Compare Qwen2.5-Coder, Llama 3.3, and DeepSeek models in ApexSpriteAI to find the right balance of speed, capability, and hardware requirements.
- [What is ApexSpriteAI? Platform overview and features](https://docs-apexspriteai.reliatrack.org/introduction.md): ApexSpriteAI orchestrates local and cloud LLMs with MCP tool use, letting you run private, GPU-accelerated AI coding assistants on your own hardware.
- [Get ApexSpriteAI running: install, configure, and test](https://docs-apexspriteai.reliatrack.org/quickstart.md): A step-by-step quickstart that gets you from zero to a running AI coding assistant using LM Studio and the Claude Code CLI on your Mac.
- [Fix common ApexSpriteAI connection and configuration errors](https://docs-apexspriteai.reliatrack.org/troubleshooting/common-issues.md): Fix the most frequent ApexSpriteAI problems including connection errors, authentication failures, model loading issues, and MCP tool errors.
- [Reduce latency and improve ApexSpriteAI throughput](https://docs-apexspriteai.reliatrack.org/troubleshooting/performance.md): Reduce latency and improve throughput in ApexSpriteAI by choosing the right model size, optimizing context window, and tuning LM Studio server settings.

## OpenAPI Specs

- [openapi](https://docs-apexspriteai.reliatrack.org/api-reference/openapi.json)