Out of the box, a language model can only read and write text. MCP tools change that. The Model Context Protocol is a standard that lets your AI agent declare a set of callable tools — functions with defined names, inputs, and outputs — that the model can invoke when answering your requests. In ApexSpriteAI, the Claude Code CLI handles all tool execution locally on your machine, so adding a new MCP tool immediately expands what your agent can do without any changes to the model itself.

What MCP tools are

MCP (Model Context Protocol) is a specification for how AI models communicate intent to use external capabilities. When the model wants to read a file or run a command, it does not do so directly. Instead, it returns a structured description of the action it wants to take — a tool call. The Claude Code CLI reads that description, runs the actual operation on your machine, and sends the result back. This design gives you full control. The model decides what to do; your local environment decides how and whether to do it.
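A tool declaration is just structured data: a name, a human-readable description, and a schema for the inputs. As a minimal sketch (the exact field names follow the MCP specification; this example is illustrative, not the authoritative schema):

```python
# A minimal sketch of an MCP-style tool declaration as plain data.
# Field names follow the general shape of the protocol; treat this
# as an illustration, not the authoritative schema.
read_file_tool = {
    "name": "read_file",
    "description": "Read a file from the local file system.",
    "input_schema": {
        "type": "object",
        "properties": {
            "path": {"type": "string", "description": "Path to the file to read"},
        },
        "required": ["path"],
    },
}
```

The model sees this declaration in its context, which is all it needs to produce a well-formed tool call.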

How tool execution works

Step 1: The model returns a tool call

Instead of a plain text response, the model returns a JSON payload that names the tool it wants to use and provides the required arguments.
{
  "type": "tool_use",
  "name": "read_file",
  "input": {
    "path": "src/server.js"
  }
}

Step 2: The CLI executes the tool locally

The Claude Code CLI receives the tool call, runs the corresponding MCP tool on your local machine, and captures the output. Execution always happens on your machine — not on the remote model server.

Step 3: The result is sent back to the model

The CLI appends the tool result to the conversation and sends the updated context back to the model.
{
  "type": "tool_result",
  "tool_use_id": "read_file_1",
  "content": "const express = require('express');\n..."
}

Step 4: The model continues reasoning

With the file content now in context, the model can generate a meaningful response — or issue another tool call if it needs more information.
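Taken together, the four steps form a loop: call the model, execute any tool it requests, append the result, and repeat until the model answers in plain text. A sketch of that control flow, with the model and tool executor passed in as stand-ins (both are assumptions for illustration, not a real client library):

```python
# Sketch of the tool-use loop. `call_model` and `execute_tool` are
# stand-ins for the real model API and local executor; only the
# control flow shown here is the point.
def run_agent(call_model, execute_tool, messages: list) -> str:
    while True:
        reply = call_model(messages)
        if reply["type"] != "tool_use":
            return reply["content"]      # final text answer
        result = execute_tool(reply)     # run the requested tool locally
        messages.append({
            "type": "tool_result",
            "tool_use_id": reply.get("id", reply["name"]),
            "content": result,
        })
```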

Adding MCP tools

You add MCP tools to your agent with a single command. The Claude Code CLI registers the tool and makes it available to the model on all future requests.
claude mcp add <name> <command>
For example, to add a tool that executes shell commands:
claude mcp add shell bash
To list all currently installed tools:
claude mcp list
To remove a tool you no longer need:
claude mcp remove <name>
You do not need to restart your session after adding a tool. The updated tool list is included in the next request you send to the model.

Common MCP tools

File read / write

Lets the model read source files, configuration files, and logs, or write changes directly to disk. Essential for coding tasks.

Shell execution

Lets the model run arbitrary shell commands — build scripts, test runners, git operations — and see the output before responding.

Web search

Lets the model query the web for up-to-date documentation, package versions, or error messages it has not seen before.

API calls

Lets the model call REST or GraphQL endpoints, retrieve data from external services, or trigger webhooks as part of a workflow.
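From the CLI's point of view, each of these tools is just another handler. A hypothetical API-call handler using only the standard library (the name `http_get` is invented; real MCP servers define their own tools):

```python
from urllib.request import urlopen

# Hypothetical handler for an API-call tool: fetches a URL and
# returns the response body as text for the model to read.
def http_get(url: str) -> str:
    with urlopen(url, timeout=10) as resp:  # fetch the endpoint
        return resp.read().decode("utf-8")
```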

Why local execution matters

Because the Claude Code CLI executes every tool call on your own machine, your agent has access to your local file system, running processes, private APIs, and any tool you can run from a terminal — even when the AI model itself is running on a remote GPU server. The model never needs direct network access to your environment; it only needs to describe what it wants the CLI to do.
If a tool call fails, the error output is automatically sent back to the model as the tool result. The model can read the error and decide how to recover — retrying with different arguments, trying a different approach, or asking you for clarification.
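That recovery path is straightforward to sketch: the executor catches the failure and returns the error text as an ordinary result instead of crashing the session. A hypothetical illustration (the function name and result fields are assumptions for this sketch):

```python
# Sketch: a failed tool call is converted into a tool_result carrying
# the error text, so the model can read it and decide how to recover.
def execute_safely(handler, call: dict) -> dict:
    try:
        content = handler(**call["input"])
        is_error = False
    except Exception as exc:  # any failure becomes plain text
        content = f"{type(exc).__name__}: {exc}"
        is_error = True
    return {
        "type": "tool_result",
        "tool_use_id": call.get("id", call["name"]),
        "content": content,
        "is_error": is_error,
    }
```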