Welcome to ApexSpriteAI, a sophisticated AI agent orchestration platform that puts the power of large language models directly in your hands. Whether you’re running local models on dedicated GPU hardware or integrating cloud AI into your workflows, ApexSpriteAI provides the tools you need to build fast, capable, and extensible AI coding assistants.
Documentation Index
Fetch the complete documentation index at: https://docs-apexspriteai.reliatrack.org/llms.txt
Use this file to discover all available pages before exploring further.
Quick Start
Get your AI agent running in minutes with step-by-step setup instructions.
Architecture Overview
Understand how ApexSpriteAI’s components fit together.
MCP Tools
Extend your AI agent with the Model Context Protocol tool system.
Model Selection
Choose the right model for your speed and capability requirements.
Get up and running
Set up your local AI backend
Install LM Studio and load your preferred model on your GPU hardware. ApexSpriteAI supports models from 16B to 120B parameters.
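As a sketch of this step, the commands below use LM Studio's `lms` command-line tool to start the local server and load a model. The model identifier is an example; substitute whichever model you have downloaded.

```shell
# Start LM Studio's local inference server (listens on port 1234 by default).
lms server start

# Load a model into GPU memory. "qwen2.5-coder-32b-instruct" is an example
# identifier; run `lms ls` to see the models available on your machine.
lms load qwen2.5-coder-32b-instruct

# Confirm the server is running and the model is loaded.
lms status
```

You can also perform the same steps from the LM Studio desktop app if you prefer a GUI over the CLI.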
Configure your network
Connect your local Mac or workstation to your GPU server securely over Tailscale VPN, or run everything on a single machine.
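A minimal sketch of the Tailscale setup, assuming two machines (a workstation and a GPU server); the hostname `gpu-server` is a placeholder for your own machine name:

```shell
# Run on each machine (workstation and GPU server) to join your tailnet.
sudo tailscale up

# On the workstation: list peers and verify the GPU server is reachable.
tailscale status
tailscale ping gpu-server   # placeholder hostname for your GPU machine
```

Once both machines appear in `tailscale status`, you can address the GPU server by its tailnet hostname or IP from anywhere, without opening ports on your LAN.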
Connect Claude Code CLI
Point the Claude Code CLI at your local LM Studio backend to get an AI-powered coding assistant with full MCP tool support.
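One way to sketch this, assuming your local backend exposes an Anthropic-compatible endpoint: Claude Code can be redirected via the `ANTHROPIC_BASE_URL` environment variable. Note that LM Studio's own server speaks the OpenAI-compatible API, so a translation proxy in front of it may be required; the host, port, and token below are placeholders.

```shell
# Point Claude Code at the local backend instead of the hosted Anthropic API.
export ANTHROPIC_BASE_URL="http://gpu-server:1234"   # placeholder host/port
export ANTHROPIC_AUTH_TOKEN="local-dummy-key"        # local backends typically ignore the key

# Launch Claude Code; it will send requests to the URL above.
claude
```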
Why ApexSpriteAI?
Local & Private
Run AI models entirely on your own hardware. No data leaves your network.
GPU Accelerated
Leverage NVIDIA GPU hardware for fast, low-latency inference even with large models.
Extensible via MCP
The Model Context Protocol lets your AI agent use tools like file read/write, shell commands, and custom integrations.
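As an illustration of wiring up an MCP tool, the commands below register the commonly used reference filesystem server with Claude Code; the directory path is a placeholder for a project of your own.

```shell
# Register an MCP server so the agent can read and write files in ~/projects.
claude mcp add filesystem -- npx -y @modelcontextprotocol/server-filesystem ~/projects

# List registered MCP servers to confirm the tool is available to the agent.
claude mcp list
```

The same mechanism works for shell-command servers or any custom MCP server you write yourself.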
Open Model Ecosystem
Works with Qwen2.5-Coder, Llama 3.3, DeepSeek, and any model compatible with LM Studio.