Core Tools
The essential tools every coding agent needs. Complete specifications based on Amp's implementation.
Evidence source: Amp Code v0.0.1769212917 (tool definitions with exact schemas and behaviors)
The Essential Six
With these six tools, an agent can handle most coding tasks:
| Tool | Purpose | Why Essential |
|---|---|---|
| Read | See file contents | Can't edit what you can't see |
| edit_file | Modify existing code | The core action |
| create_file | Create new files | Sometimes you need new files |
| Bash | Run commands | Tests, builds, git, verification |
| glob | Find files by pattern | Navigate large codebases |
| Grep | Search file contents | Find relevant code |
Note on naming: Tool names use mixed casing for historical reasons. Some use PascalCase (Read, Bash, Grep), others use snake_case (edit_file, create_file, glob). Both conventions coexist; the LLM is trained on both, and aliases map between them (e.g., Write → create_file, Edit → edit_file).
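The alias mapping described above can be implemented as a plain lookup table. A minimal sketch (only the alias pairs come from the text; the registry structure is an assumption):

```python
# Aliases map PascalCase names onto the snake_case tools they stand for.
TOOL_ALIASES = {
    "Write": "create_file",
    "Edit": "edit_file",
}

def resolve_tool(name: str, registry: dict):
    """Return the tool implementation for name, following aliases."""
    canonical = TOOL_ALIASES.get(name, name)
    if canonical not in registry:
        raise KeyError(f"unknown tool: {name}")
    return registry[canonical]
```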
Tool 1: Read
Purpose: Read file contents or list directory entries.
Schema
{
"type": "object",
"properties": {
"path": {
"type": "string",
"description": "The absolute path to the file or directory (MUST be absolute, not relative)."
},
"read_range": {
"type": "array",
"items": { "type": "number" },
"minItems": 2,
"maxItems": 2,
"description": "Line range [start, end], 1-indexed. Default: [1, 500]. Hard cap: 2,000 lines."
}
},
"required": ["path"]
}
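A hand-rolled validator for these arguments might look like the following sketch (the function is illustrative; a production agent would typically validate against the JSON Schema itself):

```python
import os

def validate_read_args(args: dict) -> dict:
    """Check Read arguments against the schema's constraints."""
    if "path" not in args:
        raise ValueError("path is required")
    if not os.path.isabs(args["path"]):
        raise ValueError("path MUST be absolute, not relative")
    read_range = args.get("read_range")
    if read_range is not None:
        if len(read_range) != 2:
            raise ValueError("read_range must be [start, end]")
        start, end = read_range
        if start < 1 or end < start:
            raise ValueError("read_range is 1-indexed and end must be >= start")
    return args
```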
Output Formats
For files:
1: first line of content
2: second line of content
3: third line...
For directories:
file.txt
subdir/
another-file.js
(Directories have / suffix)
For images (PNG, JPEG, GIF):
{
status: "done",
result: {
type: "image",
media_type: "image/png",
data: string // Base64 encoded
}
}
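Producing that image result takes only a few lines of standard-library code. A sketch (the helper name is hypothetical):

```python
import base64
import mimetypes

def read_image(path: str) -> dict:
    """Return a Read-style image result with base64-encoded data."""
    media_type, _ = mimetypes.guess_type(path)  # e.g. "image/png"
    with open(path, "rb") as f:
        data = base64.b64encode(f.read()).decode("ascii")
    return {
        "status": "done",
        "result": {"type": "image", "media_type": media_type, "data": data},
    }
```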
Error Responses
| Condition | Error |
|---|---|
| File not found | ENOENT: no such file or directory |
| Secret file | Refusing to read env file. Reading secrets is not permitted. |
| Binary file | File appears to be binary and cannot be displayed as text. |
| File too large | File content exceeds maximum allowed size (65536 bytes) |
Constants
| Name | Value | Description |
|---|---|---|
| MAX_FILE_SIZE | 65,536 bytes | Maximum file size for Read tool |
| DEFAULT_LINE_LIMIT | 500 lines | Default read range |
| MAX_BYTES_PER_LINE | 4,096 bytes | Line truncation |
| MAX_READ_LINES | 2,000 lines | Hard cap on returned lines |
Note: This 64KB limit is the hard cap for the Read tool. Token estimation uses a separate 32KB cap (see 06-context-window.md) to be conservative in budget calculations.
Behavior Contract
- If path is a file: Return line-numbered content
- If path is a directory: Return entry list with / suffix for subdirs
- Expand ~ to home directory
- Resolve relative paths against working directory
- For images: Return visual content (multimodal)
- Enforce a hard cap of 2,000 lines even if read_range is larger
- Block reads of secret files (.env, credentials.*, etc.)
Implementation
import os

def execute_read(args):
    path = expand_path(args["path"])

    # Check for secret files
    if is_secret_file(path):
        return {
            "status": "error",
            "error": {
                "errorCode": "reading-secret-file",
                "message": "Refusing to read env file. Reading secrets is not permitted."
            }
        }

    if not os.path.exists(path):
        return {
            "status": "error",
            "error": {
                "message": f"ENOENT: no such file or directory '{path}'",
                "absolutePath": path
            }
        }

    # Directory listing
    if os.path.isdir(path):
        entries = []
        for entry in sorted(os.listdir(path)):
            if os.path.isdir(os.path.join(path, entry)):
                entries.append(f"{entry}/")
            else:
                entries.append(entry)
        return {"status": "done", "result": "\n".join(entries)}

    # File reading
    with open(path, "r", encoding="utf-8", errors="replace") as f:
        lines = f.readlines()

    # Apply read_range, enforcing the 2,000-line hard cap (MAX_READ_LINES)
    start, end = args.get("read_range", [1, 500])
    start = max(1, start) - 1  # Convert to 0-indexed
    end = min(len(lines), end, start + 2000)
    lines = lines[start:end]

    # Format with line numbers
    formatted = []
    for i, line in enumerate(lines, start + 1):
        formatted.append(f"{i}: {line.rstrip()}")
    return {"status": "done", "result": "\n".join(formatted)}
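The implementation above leans on two helpers, expand_path and is_secret_file, that are not shown. A minimal sketch of what they might do (the exact secret-file patterns are an assumption, not Amp's list):

```python
import fnmatch
import os

# Hypothetical deny-list; the real tool's patterns are not specified here.
SECRET_PATTERNS = [".env", ".env.*", "credentials.*", "*.pem", "id_rsa"]

def expand_path(path: str) -> str:
    """Expand ~ and resolve relative paths against the working directory."""
    return os.path.abspath(os.path.expanduser(path))

def is_secret_file(path: str) -> bool:
    """Return True if the file name matches a known secret pattern."""
    name = os.path.basename(path)
    return any(fnmatch.fnmatch(name, pat) for pat in SECRET_PATTERNS)
```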
Tool 2: edit_file
Purpose: Make targeted text replacements in existing files.
Schema
{
"type": "object",
"properties": {
"path": {
"type": "string",
"description": "Absolute path to the file (MUST exist)"
},
"old_str": {
"type": "string",
"description": "Text to search for. Must match exactly."
},
"new_str": {
"type": "string",
"description": "Text to replace old_str with."
},
"replace_all": {
"type": "boolean",
"default": false,
"description": "Replace all occurrences of old_str"
}
},
"required": ["path", "old_str", "new_str"]
}
Success Output
{
status: "done",
result: {
diff: string, // Git-style diff
lineRange: [number, number] // [startLine, endLine]
},
trackFiles: ["/absolute/path/to/file"]
}
Error Responses
| Condition | Error |
|---|---|
| File not found | file not found. Cannot update a file that doesn't exist. |
| No match | Could not find exact match for old_str |
| Multiple matches | found multiple matches for edit... |
| Same strings | old_str and new_str must be different |
Behavior Contract
- File MUST exist (use create_file for new files)
- old_str MUST exist in file content
- old_str != new_str (strings must differ)
- If replace_all: false: old_str must appear exactly once
- If replace_all: true: replace all occurrences
- Attempts fuzzy whitespace matching if exact match fails
- Returns git-style diff showing changes
- Acquires file lock before writing
Implementation
import os

def execute_edit_file(args):
    path = args["path"]
    old_str = args["old_str"]
    new_str = args["new_str"]
    replace_all = args.get("replace_all", False)

    # Validation
    if not os.path.exists(path):
        return {
            "status": "error",
            "error": {"message": "file not found. Cannot update a file that doesn't exist."}
        }
    if old_str == new_str:
        return {
            "status": "error",
            "error": {"message": "old_str and new_str must be different"}
        }

    with open(path, "r", encoding="utf-8") as f:
        content = f.read()

    # Check for matches
    count = content.count(old_str)
    if count == 0:
        # Try fuzzy whitespace matching
        fuzzy_result = try_fuzzy_match(content, old_str)
        if fuzzy_result:
            old_str = fuzzy_result
            count = 1
        else:
            return {
                "status": "error",
                "error": {"message": "Could not find exact match for old_str"}
            }
    if count > 1 and not replace_all:
        return {
            "status": "error",
            "error": {"message": f"found multiple matches for edit ({count} occurrences). Use replace_all or provide more context."}
        }

    # Perform replacement
    if replace_all:
        new_content = content.replace(old_str, new_str)
    else:
        new_content = content.replace(old_str, new_str, 1)

    # Write with lock
    with file_lock(path):
        with open(path, "w", encoding="utf-8") as f:
            f.write(new_content)

    # Generate diff
    diff = generate_diff(content, new_content, path)
    line_range = find_changed_lines(content, new_content)
    return {
        "status": "done",
        "result": {"diff": diff, "lineRange": line_range},
        "trackFiles": [path]
    }
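The try_fuzzy_match and generate_diff helpers used above are not specified. One plausible sketch: match on whitespace-normalized tokens, and build the git-style diff with difflib (both are assumptions about the real behavior, not Amp's code):

```python
import difflib
import re

def try_fuzzy_match(content: str, old_str: str):
    """Find old_str in content, tolerating whitespace differences.

    Returns the exact matching substring from content, or None unless
    there is exactly one match.
    """
    tokens = old_str.split()
    if not tokens:
        return None
    pattern = r"\s+".join(re.escape(t) for t in tokens)
    matches = re.findall(pattern, content)
    return matches[0] if len(matches) == 1 else None

def generate_diff(old: str, new: str, path: str) -> str:
    """Produce a git-style unified diff between two file versions."""
    return "".join(difflib.unified_diff(
        old.splitlines(keepends=True),
        new.splitlines(keepends=True),
        fromfile=f"a/{path.lstrip('/')}",
        tofile=f"b/{path.lstrip('/')}",
    ))
```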
Tool 3: create_file
Purpose: Create new files or overwrite existing files.
Schema
{
"type": "object",
"properties": {
"path": {
"type": "string",
"description": "Absolute path of file to create. If exists, will be overwritten."
},
"content": {
"type": "string",
"description": "The content for the file."
}
},
"required": ["path", "content"]
}
Behavior Contract
- Creates parent directories if they don't exist
- Appends trailing newline if content doesn't end with one
- Overwrites existing files without warning
- Acquires file lock before writing
- Checks for AGENTS.md discovery after creation
Implementation
import os

def execute_create_file(args):
    path = args["path"]
    content = args["content"]

    # Ensure trailing newline
    if content and not content.endswith("\n"):
        content += "\n"

    # Create parent directories
    directory = os.path.dirname(path)
    if directory:
        os.makedirs(directory, exist_ok=True)

    existed = os.path.exists(path)

    # Write with lock
    with file_lock(path):
        with open(path, "w", encoding="utf-8") as f:
            f.write(content)

    # Check for guidance file discovery
    discovered = check_guidance_discovery(path)
    result = f"Successfully {'overwrote' if existed else 'created'} file {path}"
    if discovered:
        return {
            "status": "done",
            "result": {
                "message": result,
                "discoveredGuidanceFiles": discovered
            },
            "trackFiles": [path]
        }
    return {
        "status": "done",
        "result": result,
        "trackFiles": [path]
    }
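Both write paths acquire a file lock first, and create_file also calls check_guidance_discovery. A minimal advisory-lock sketch using fcntl, plus a stub for the AGENTS.md check (both are assumptions; the actual locking and discovery logic are not specified here):

```python
import fcntl
import os
from contextlib import contextmanager

@contextmanager
def file_lock(path: str):
    """Hold an advisory lock on a sidecar .lock file while writing."""
    fd = os.open(path + ".lock", os.O_CREAT | os.O_RDWR)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)
        yield
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)

def check_guidance_discovery(path: str):
    """Report an AGENTS.md file, if any, next to a newly created file."""
    candidate = os.path.join(os.path.dirname(path), "AGENTS.md")
    return [candidate] if os.path.exists(candidate) else []
```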
Tool 4: Bash
Purpose: Execute shell commands.
Schema
{
"type": "object",
"properties": {
"cmd": {
"type": "string",
"description": "The shell command to execute"
},
"cwd": {
"type": "string",
"description": "Working directory (absolute path)"
}
},
"required": ["cmd"]
}
Output Format (XML)
<command>ls -la</command>
<working_directory>/path/to/dir</working_directory>
<output>total 48
drwxr-xr-x 12 user staff 384 Jan 23 10:00 .
...</output>
<exit_code>0</exit_code>
Constants
| Name | Value | Description |
|---|---|---|
| MAX_OUTPUT_CHARS | 50,000 | Output truncation limit |
Behavior Contract
- Uses bash (or sh if bash unavailable)
- Output truncated to last 50,000 characters
- Environment variables do NOT persist between calls
- cd does NOT persist (use cwd parameter)
- Runs serially (not in parallel)
- No timeout (disableTimeout: true)
- Non-zero exit codes are NOT errors (informational)
- Strips trailing & from commands (no background processes)
Preprocessing
Before execution, the Bash tool preprocesses the command:
- Removes trailing & (background process syntax)
- Expands ~ to home directory in cwd
- Converts cd dir && cmd to cwd: dir + cmd
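These three steps can be factored into a small pure function, which makes them easy to test in isolation (the factoring is illustrative, not Amp's code):

```python
import os

def preprocess_command(cmd: str, cwd: str):
    """Apply Bash-tool preprocessing: strip trailing &, expand ~, lift cd."""
    cmd = cmd.strip().rstrip("&").strip()
    cwd = os.path.expanduser(cwd)
    if cmd.startswith("cd ") and "&&" in cmd:
        cd_part, rest = cmd.split("&&", 1)
        cwd = os.path.join(cwd, cd_part[3:].strip())
        cmd = rest.strip()
    return cmd, cwd
```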
Implementation
import os
import subprocess

def execute_bash(args):
    cmd = args["cmd"]
    cwd = args.get("cwd", os.getcwd())

    # Preprocess
    cmd = cmd.rstrip("&").strip()
    cwd = os.path.expanduser(cwd)

    # Extract cd from command
    if cmd.startswith("cd ") and "&&" in cmd:
        cd_part, rest = cmd.split("&&", 1)
        dir_path = cd_part[3:].strip()
        cwd = os.path.join(cwd, dir_path)
        cmd = rest.strip()

    try:
        result = subprocess.run(
            cmd,
            shell=True,
            capture_output=True,
            text=True,
            cwd=cwd,
            executable="/bin/bash"
        )
        output = result.stdout + result.stderr

        # Truncate to last 50,000 chars (MAX_OUTPUT_CHARS)
        if len(output) > 50000:
            output = output[-50000:]
        return {
            "status": "done",
            "result": f"""<command>{cmd}</command>
<working_directory>{cwd}</working_directory>
<output>{output}</output>
<exit_code>{result.returncode}</exit_code>"""
        }
    except Exception as e:
        return {
            "status": "error",
            "error": {"message": str(e)}
        }
Tool 5: glob
Purpose: Find files by name patterns.
Schema
{
"type": "object",
"properties": {
"filePattern": {
"type": "string",
"description": "Glob pattern like \"**/*.js\" or \"src/**/*.ts\""
},
"limit": {
"type": "number",
"description": "Maximum results to return"
},
"offset": {
"type": "number",
"description": "Results to skip (pagination)"
}
},
"required": ["filePattern"],
"additionalProperties": false
}
Success Output
{
status: "done",
result: {
files: string[], // Array of absolute file paths
remaining: number // Count of additional matches not returned
}
}
Pattern Syntax
| Pattern | Matches |
|---|---|
| `**/*.js` | All JavaScript files in any directory |
| `src/**/*.ts` | TypeScript files under src |
| `*.json` | JSON files in current directory only |
| `**/*test*` | Files with "test" in name |
| `**/*.{js,ts}` | JavaScript and TypeScript files |
| `src/[a-z]*/*.ts` | TS files in lowercase subdirs of src |
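One subtlety in the table: brace sets like {js,ts} are supported by ripgrep's --glob but not by Python's built-in glob module, so a pure-Python fallback needs its own expansion step. A sketch:

```python
import re

def expand_braces(pattern: str) -> list:
    """Expand {a,b} alternatives into separate glob patterns."""
    m = re.search(r"\{([^{}]*)\}", pattern)
    if not m:
        return [pattern]  # No braces left: pattern is final
    head, tail = pattern[:m.start()], pattern[m.end():]
    results = []
    for option in m.group(1).split(","):
        # Recurse to handle any remaining brace sets in the tail
        results.extend(expand_braces(head + option + tail))
    return results
```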
Behavior Contract
- Uses ripgrep (rg --files) for file discovery
- Respects .gitignore patterns
- Default limit: 1000 files
Implementation
import glob as glob_module
import os
import subprocess

def execute_glob(args):
    pattern = args["filePattern"]
    limit = args.get("limit", 1000)
    offset = args.get("offset", 0)

    # Use ripgrep for fast file discovery
    try:
        result = subprocess.run(
            ["rg", "--files", "--glob", pattern],
            capture_output=True,
            text=True,
            cwd=os.getcwd()
        )
        files = [f for f in result.stdout.strip().split("\n") if f]
    except FileNotFoundError:
        # Fallback to Python glob if ripgrep is not installed
        files = glob_module.glob(pattern, recursive=True)

    # Apply pagination
    total = len(files)
    files = files[offset:offset + limit]
    remaining = max(0, total - offset - limit)

    # Convert to absolute paths
    files = [os.path.abspath(f) for f in files]
    return {
        "status": "done",
        "result": {
            "files": files,
            "remaining": remaining
        }
    }
Tool 6: Grep
Purpose: Search file contents for patterns using ripgrep.
Schema
{
"type": "object",
"properties": {
"pattern": {
"type": "string",
"description": "Regex pattern to search for"
},
"path": {
"type": "string",
"description": "File or directory path. Cannot use with glob."
},
"glob": {
"type": "string",
"description": "Glob pattern for files. Cannot use with path."
},
"caseSensitive": {
"type": "boolean",
"description": "Case-sensitive search (default: false)"
},
"literal": {
"type": "boolean",
"description": "Treat pattern as literal string, not regex"
}
},
"required": ["pattern"]
}
Success Output
{
status: "done",
result: string[] // Array: "path/file.ts:42: matching line content"
}
No Results (NOT an error)
{
status: "done",
result: [
"No results found.",
"If you meant to search for a literal string, run Grep again with literal:true."
]
}
Constants
| Name | Value | Description |
|---|---|---|
| MAX_MATCHES_PER_FILE | 10 | Limit per file |
| MAX_LINE_LENGTH | 200 | Line truncation |
| MAX_TOTAL_RESULTS | 100 | Total limit |
Behavior Contract
- Uses ripgrep under the hood
- Case-insensitive by default
- Uses Rust-style regex (escape { and } with \)
- Results truncated at 100 matches
- Lines truncated at 200 characters
- No results is NOT an error
Implementation
import os
import subprocess

def execute_grep(args):
    pattern = args["pattern"]
    path = args.get("path", ".")
    glob_pattern = args.get("glob")
    case_sensitive = args.get("caseSensitive", False)
    literal = args.get("literal", False)

    # --max-count enforces MAX_MATCHES_PER_FILE (10)
    cmd = ["rg", "--line-number", "--max-count", "10"]
    if not case_sensitive:
        cmd.append("-i")
    if literal:
        cmd.append("-F")
    if glob_pattern:
        cmd.extend(["--glob", glob_pattern])
    cmd.append(pattern)
    cmd.append(path)

    try:
        result = subprocess.run(
            cmd,
            capture_output=True,
            text=True,
            cwd=os.getcwd()
        )
        if result.returncode == 1:  # No matches
            return {
                "status": "done",
                "result": [
                    "No results found.",
                    "If you meant to search for a literal string, run Grep again with literal:true."
                ]
            }
        if result.returncode >= 2:  # Error
            return {
                "status": "error",
                "error": {"message": f"ripgrep exited with code {result.returncode}"}
            }

        # Parse and limit results (MAX_TOTAL_RESULTS = 100)
        lines = result.stdout.strip().split("\n")[:100]

        # Truncate long lines (MAX_LINE_LENGTH = 200)
        truncated = []
        for line in lines:
            if len(line) > 200:
                truncated.append(line[:200] + "...")
            else:
                truncated.append(line)
        return {"status": "done", "result": truncated}
    except Exception as e:
        return {
            "status": "error",
            "error": {"message": str(e)}
        }
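Each result line follows the path:line:content layout shown in the success output above, so a consumer can split it back apart with two colon splits. A convenience helper (not part of the tool itself):

```python
def parse_match(line: str):
    """Split a 'path:line: content' Grep result into its three parts."""
    path, lineno, text = line.split(":", 2)
    return path, int(lineno), text.lstrip()
```

Splitting with a maximum of two splits keeps colons inside the matched line content intact.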
Additional Tools
Beyond the essential six, Amp has many more tools:
File Operations
- undo_edit - Revert the last edit to a file
- delete_file - Delete a file
- format_file - Auto-format a file
Search
- finder - AI-powered semantic code search (Gemini 3 Flash Preview)
- web_search - Search the web
- read_web_page - Fetch and extract content from URLs
Execution
- Check - Run CI/CD checks (typechecker, linter, tests)
Subagents
- Task - Spawn independent subtasks
- oracle - Deep reasoning (GPT-5.2)
- librarian - Multi-repository search
- kraken - Multi-file refactoring
See 04-tool-system.spec.md for complete specifications of all 45+ tools.
Implementation Checklist
Building your core tools? Ensure:
Read
- Line-numbered output
- Directory listing with / suffix
- Secret file blocking
- Range support
edit_file
- Exact string matching
- Fuzzy whitespace fallback
- Multiple match detection
- Git-style diff output
create_file
- Parent directory creation
- Trailing newline normalization
- File lock acquisition
Bash
- Output truncation
- Working directory support
- No timeout
- Serial execution
glob
- Recursive patterns
- Pagination support
- .gitignore respect
Grep
- Case-insensitive default
- Literal mode option
- Result truncation
- "No results" is success
What's Next
You have the essential tools. Now let's manage the context they consume.
→ 06-context-window.md - Token counting, truncation, handoff