Merge branch 'mercury' into azure-profile

This commit is contained in:
Max Robinson 2025-08-23 15:38:00 -05:00 committed by GitHub
commit 36a25f8675
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
164 changed files with 49999 additions and 2193 deletions

14
.gitignore vendored

@@ -15,4 +15,16 @@ services/viaproxy/saves.json
services/viaproxy/viaproxy.yml
tmp/
wandb/
experiments/
andy_*.json
jill_*.json
src/models/logs/*
server_data/*
results/*
tasks/construction_tasks/test_multiagent_construction_tasks.json
tasks/construction_tasks/train_multiagent_construction_tasks.json
tasks/construction_tasks/test/**
tasks/construction_tasks/train/**
server_data*
**/.DS_Store
src/mindcraft-py/__pycache__/

51
Dockerfile Normal file

@@ -0,0 +1,51 @@
# Specify a base image
# FROM ubuntu:22.04
FROM node:18
# Install some dependencies
RUN apt-get -y update
RUN apt-get -y install git
RUN apt-get -y install unzip
RUN apt-get -y install python3
RUN apt-get -y install python3-pip
RUN apt-get -y install python3-boto3
RUN apt-get -y install python3-tqdm
RUN apt-get -y install tmux
RUN git clone https://github.com/kolbytn/mindcraft.git /mindcraft
WORKDIR /mindcraft
COPY ./server_data.zip /mindcraft
RUN unzip server_data.zip
RUN npm install
# Copy the rest of the application code to the working directory
# RUN apt update
# RUN apt install bash ca-certificates wget git -y # install first to avoid openjdk install bug
# RUN apt install openjdk-17-jre-headless -y
RUN apt install -y wget apt-transport-https gnupg lsb-release
# Add Adoptium repository key
RUN wget -O - https://packages.adoptium.net/artifactory/api/gpg/key/public | apt-key add -
# Add Adoptium repository
RUN echo "deb https://packages.adoptium.net/artifactory/deb $(lsb_release -cs) main" > /etc/apt/sources.list.d/adoptium.list
# Update package lists
RUN apt update
# Install Temurin (Adoptium) Java 21
RUN apt install temurin-21-jdk -y
# Install the AWS CLI
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
RUN unzip awscliv2.zip
RUN ./aws/install
VOLUME /data
EXPOSE 8000

README.md

@@ -2,7 +2,7 @@
Crafting minds for Minecraft with LLMs and [Mineflayer!](https://prismarinejs.github.io/mineflayer/#/)
[FAQ](https://github.com/kolbytn/mindcraft/blob/main/FAQ.md) | [Discord Support](https://discord.gg/mp73p35dzC) | [Video Tutorial](https://www.youtube.com/watch?v=gRotoL8P8D8) | [Blog Post](https://kolbynottingham.com/mindcraft/) | [Contributor TODO](https://github.com/users/kolbytn/projects/1) | [Paper Website](https://mindcraft-minecollab.github.io/index.html) | [MineCollab](https://github.com/kolbytn/mindcraft/blob/main/minecollab.md)
> [!Caution]
@@ -10,15 +10,15 @@ Do not connect this bot to public servers with coding enabled. This project allo
## Requirements
- [Minecraft Java Edition](https://www.minecraft.net/en-us/store/minecraft-java-bedrock-edition-pc) (up to v1.21.1, recommend v1.21.1)
- [Node.js Installed](https://nodejs.org/) (at least v18)
- One of these: [OpenAI API Key](https://openai.com/blog/openai-api) | [Gemini API Key](https://aistudio.google.com/app/apikey) | [Anthropic API Key](https://docs.anthropic.com/claude/docs/getting-access-to-claude) | [Replicate API Key](https://replicate.com/) | [Hugging Face API Key](https://huggingface.co/) | [Groq API Key](https://console.groq.com/keys) | [Ollama Installed](https://ollama.com/download) | [Mistral API Key](https://docs.mistral.ai/getting-started/models/models_overview/) | [Qwen API Key [Intl.]](https://www.alibabacloud.com/help/en/model-studio/developer-reference/get-api-key)/[[cn]](https://help.aliyun.com/zh/model-studio/getting-started/first-api-call-to-qwen?) | [Novita AI API Key](https://novita.ai/settings?utm_source=github_mindcraft&utm_medium=github_readme&utm_campaign=link#key-management) | [Cerebras API Key](https://cloud.cerebras.ai) | [Mercury API](https://platform.inceptionlabs.ai/docs)
## Install and Run
1. Make sure you have the requirements above.
2. Clone or download this repository (big green button): `git clone https://github.com/kolbytn/mindcraft.git`
3. Rename `keys.example.json` to `keys.json` and fill in your API keys (you only need one). The desired model is set in `andy.json` or other profiles. For other models refer to the table below.
@@ -28,7 +28,17 @@ Do not connect this bot to public servers with coding enabled. This project allo
6. Run `node main.js` from the installed directory
If you encounter issues, check the [FAQ](https://github.com/kolbytn/mindcraft/blob/main/FAQ.md) or find support on [discord](https://discord.gg/mp73p35dzC). We are currently not very responsive to github issues. To run tasks, please refer to the [Minecollab instructions](minecollab.md#installation).
## Tasks
Bot performance can be roughly evaluated with Tasks. Tasks automatically initialize bots with a goal to acquire specific items or construct predefined buildings, and remove the bot once the goal is achieved.
To run tasks, you need python, pip, and optionally conda. You can then install dependencies with `pip install -r requirements.txt`.
Tasks are defined in json files in the `tasks` folder, and can be run with: `python tasks/run_task_file.py --task_path=tasks/example_tasks.json`
For full evaluations, you will need to [download and install the task suite. Full instructions.](minecollab.md#installation)
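For a quick start, the two commands above combine into a short shell session (a minimal sketch; it assumes the bundled `tasks/example_tasks.json` and a working Python environment):
```
# Minimal sketch: install the Python dependencies, then run the bundled example tasks.
pip install -r requirements.txt
python tasks/run_task_file.py --task_path=tasks/example_tasks.json
```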
## Model Customization
@@ -39,20 +49,26 @@ You can configure the agent's name, model, and prompts in their profile like `an
| API | Config Variable | Example Model name | Docs |
|------|------|------|------|
| `openai` | `OPENAI_API_KEY` | `gpt-4o-mini` | [docs](https://platform.openai.com/docs/models) |
| `google` | `GEMINI_API_KEY` | `gemini-2.0-flash` | [docs](https://ai.google.dev/gemini-api/docs/models/gemini) |
| `anthropic` | `ANTHROPIC_API_KEY` | `claude-3-haiku-20240307` | [docs](https://docs.anthropic.com/claude/docs/models-overview) |
| `xai` | `XAI_API_KEY` | `grok-2-1212` | [docs](https://docs.x.ai/docs) |
| `deepseek` | `DEEPSEEK_API_KEY` | `deepseek-chat` | [docs](https://api-docs.deepseek.com/) |
| `ollama` (local) | n/a | `ollama/llama3.1` | [docs](https://ollama.com/library) |
| `qwen` | `QWEN_API_KEY` | `qwen-max` | [Intl.](https://www.alibabacloud.com/help/en/model-studio/developer-reference/use-qwen-by-calling-api)/[cn](https://help.aliyun.com/zh/model-studio/getting-started/models) |
| `mistral` | `MISTRAL_API_KEY` | `mistral-large-latest` | [docs](https://docs.mistral.ai/getting-started/models/models_overview/) |
| `replicate` | `REPLICATE_API_KEY` | `replicate/meta/meta-llama-3-70b-instruct` | [docs](https://replicate.com/collections/language-models) |
| `groq` (not grok) | `GROQCLOUD_API_KEY` | `groq/mixtral-8x7b-32768` | [docs](https://console.groq.com/docs/models) |
| `huggingface` | `HUGGINGFACE_API_KEY` | `huggingface/mistralai/Mistral-Nemo-Instruct-2407` | [docs](https://huggingface.co/models) |
| `novita` | `NOVITA_API_KEY` | `novita/deepseek/deepseek-r1` | [docs](https://novita.ai/model-api/product/llm-api?utm_source=github_mindcraft&utm_medium=github_readme&utm_campaign=link) |
| `openrouter` | `OPENROUTER_API_KEY` | `openrouter/anthropic/claude-3.5-sonnet` | [docs](https://openrouter.ai/models) |
| `glhf.chat` | `GHLF_API_KEY` | `glhf/hf:meta-llama/Llama-3.1-405B-Instruct` | [docs](https://glhf.chat/user-settings/api) |
| `hyperbolic` | `HYPERBOLIC_API_KEY` | `hyperbolic/deepseek-ai/DeepSeek-V3` | [docs](https://docs.hyperbolic.xyz/docs/getting-started) |
| `vllm` | n/a | `vllm/llama3` | n/a |
| `cerebras` | `CEREBRAS_API_KEY` | `cerebras/llama-3.3-70b` | [docs](https://inference-docs.cerebras.ai/introduction) |
| `mercury(EA)` | `MERCURY_API_KEY` | `mercury-coder-small` | [docs](https://www.inceptionlabs.ai/) |
If you use Ollama, to install the models used by default (generation and embedding), execute the following terminal command:
`ollama pull llama3.1 && ollama pull nomic-embed-text`
### Online Servers
To connect to online servers your bot will need an official Microsoft/Minecraft account. You can use your own personal one, but you will need another account if you also want to connect and play alongside the bot. To connect, change these lines in `settings.js`:
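The exact lines live in `settings.js`; a hedged sketch of the kind of fields involved (the host, port, and auth values below are placeholders, not the file's actual defaults):
```
// Illustrative values only; edit the corresponding fields in settings.js.
"host": "my-server.example.com", // the server address to connect to
"port": 55920,                   // the server's port
"auth": "microsoft",             // use your Microsoft/Minecraft account
```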
@@ -116,6 +132,11 @@ You can pass a string or an object for these fields. A model object must specify
"model": "gpt-4",
"url": "https://api.openai.com/v1/"
},
"vision_model": {
"api": "openai",
"model": "gpt-4o",
"url": "https://api.openai.com/v1/"
},
"embedding": { "embedding": {
"api": "openai", "api": "openai",
"url": "https://api.openai.com/v1/", "url": "https://api.openai.com/v1/",
@ -124,7 +145,7 @@ You can pass a string or an object for these fields. A model object must specify
``` ```
`model` is used for chat, `code_model` is used for newAction coding, and `embedding` is used to embed text for example selection. If `code_model` or `embedding` are not specified, they will use `model` by default. Not all APIs have an embedding model. `model` is used for chat, `code_model` is used for newAction coding, `vision_model` is used for image interpretation, and `embedding` is used to embed text for example selection. If `code_model` or `vision_model` is not specified, `model` will be used by default. Not all APIs support embeddings or vision.
All apis have default models and urls, so those fields are optional. The `params` field is optional and can be used to specify additional parameters for the model. It accepts any key-value pairs supported by the api. Is not supported for embedding models. All apis have default models and urls, so those fields are optional. The `params` field is optional and can be used to specify additional parameters for the model. It accepts any key-value pairs supported by the api. Is not supported for embedding models.
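For instance, these fields can mix plain strings and objects in one profile; a hedged sketch (the model names, the nesting of `params`, and the values are illustrative, not defaults):
```
{
    "name": "andy",
    "model": {
        "api": "openai",
        "model": "gpt-4o-mini",
        "params": { "temperature": 0.7 }
    },
    "code_model": "claude-3-haiku-20240307"
}
```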
@@ -147,10 +168,13 @@ Some of the node modules that we depend on have bugs in them. To add a patch, ch
## Citation:
```
@article{mindcraft2025,
  title = {Collaborating Action by Action: A Multi-agent LLM Framework for Embodied Reasoning},
  author = {White*, Isadora and Nottingham*, Kolby and Maniar, Ayush and Robinson, Max and Lillemark, Hansen and Maheshwari, Mehul and Qin, Lianhui and Ammanabrolu, Prithviraj},
  journal = {arXiv preprint arXiv:2504.17950},
  year = {2025},
  url = {https://arxiv.org/abs/2504.17950},
}
```

eslint.config.js

@@ -1,6 +1,7 @@
// eslint.config.js
import globals from "globals";
import pluginJs from "@eslint/js";
import noFloatingPromise from "eslint-plugin-no-floating-promise";
/** @type {import('eslint').Linter.Config[]} */
export default [
@@ -9,6 +10,9 @@ export default [
    // Then override or customize specific rules
    {
        plugins: {
            "no-floating-promise": noFloatingPromise,
        },
        languageOptions: {
            globals: globals.browser,
            ecmaVersion: 2021,
@@ -17,9 +21,11 @@
        rules: {
            "no-undef": "error", // Disallow the use of undeclared variables or functions.
            "semi": ["error", "always"], // Require the use of semicolons at the end of statements.
            "curly": "off", // Do not enforce the use of curly braces around blocks of code.
            "no-unused-vars": "off", // Disable warnings for unused variables.
            "no-unreachable": "off", // Disable warnings for unreachable code.
            "require-await": "error", // Disallow async functions which have no await expression
            "no-floating-promise/no-floating-promise": "error", // Disallow Promises without error handling or awaiting
        },
    },
];
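For context, a hypothetical snippet (not from this repo) of the kind of code the two new rules are meant to catch:
```
// Hypothetical illustration of the two new rules.
async function collectWood(bot) {   // "require-await" flags this: async but contains no await
    bot.chat('collecting wood');
}

async function onSpawn(bot) {
    collectWood(bot);               // "no-floating-promise" flags this: the returned promise is ignored
    await collectWood(bot);         // fine: the promise is awaited
}
```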

View file

@ -1,401 +0,0 @@
import argparse
import json
import shutil
import subprocess
import time
from datetime import datetime
import re
import sys
import os
import time
def read_settings(file_path):
"""Read and parse the settings.js file to get agent profiles."""
with open(file_path, 'r', encoding='utf-8') as file:
content = file.read()
# Remove `export default` and trailing commas
content = re.sub(r'export\s+default', '', content)
content = re.sub(r',\s*(?=[}\]])', '', content)
# Remove JavaScript comments
content = re.sub(r'//.*', '', content)
# Remove trailing commas (e.g., before } or ])
content = re.sub(r',\s*(?=[}\]])', '', content)
# Strip leading and trailing whitespace
content = content.strip()
json_data = json.loads(content)
profiles = json_data['profiles']
## profiles is a list of strings like "./andy.json" and "./bob.json"
agent_names = [profile.split('/')[-1].split('.')[0] for profile in profiles]
return agent_names
def check_task_completion(agents):
"""Check memory.json files of all agents to determine task success/failure."""
for agent in agents:
memory_path = f"bots/{agent}/memory.json"
try:
with open(memory_path, 'r') as f:
memory = json.load(f)
# Check the last system message in turns
for turn in reversed(memory['turns']):
if turn['role'] == 'system' and 'code' in turn['content']:
# Extract completion code
if 'code : 2' in turn['content']:
return True # Task successful
elif 'code : 4' in turn['content']:
return False # Task failed
except (FileNotFoundError, json.JSONDecodeError) as e:
print(f"Error reading memory for agent {agent}: {e}")
continue
return False # Default to failure if no conclusive result found
def update_results_file(task_id, success_count, total_count, time_taken, experiment_results, results_filename):
"""Update the results file with current success ratio and time taken."""
success_ratio = success_count / total_count
with open(results_filename, 'w') as f: # 'w' mode overwrites the file each time
f.write(f"Task ID: {task_id}\n")
f.write(f"Experiments completed: {total_count}\n")
f.write(f"Successful experiments: {success_count}\n")
f.write(f"Success ratio: {success_ratio:.2f}\n")
f.write(f"Time taken for last experiment: {time_taken:.2f} seconds\n")
# Write individual experiment results
for i, result in enumerate(experiment_results, 1):
f.write(f"Experiment {i}: {'Success' if result['success'] else 'Failure'}, Time taken: {result['time_taken']:.2f} seconds\n")
# Write aggregated metrics
total_time = sum(result['time_taken'] for result in experiment_results)
f.write(f"\nAggregated metrics:\n")
f.write(f"Total experiments: {total_count}\n")
f.write(f"Total successful experiments: {success_count}\n")
f.write(f"Overall success ratio: {success_ratio:.2f}\n")
f.write(f"Total time taken: {total_time:.2f} seconds\n")
f.write(f"Average time per experiment: {total_time / total_count:.2f} seconds\n")
f.write(f"Last updated: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n")
def set_environment_variable_tmux_session(session_name, key, value):
"""Set an environment variable for the current process."""
subprocess.run(["tmux", "send-keys", "-t", session_name, f"export {key}={value}", "C-m"])
def launch_parallel_experiments(task_path,
num_exp,
exp_name,
num_agents=2,
model="gpt-4o",
num_parallel=1):
with open(task_path, 'r', encoding='utf-8') as file:
content = file.read()
json_data = json.loads(content)
task_ids = json_data.keys()
# split the task_ids into num_parallel groups
task_ids = list(task_ids)
task_ids_split = [task_ids[i::num_parallel] for i in range(num_parallel)]
servers = create_server_files("../server_data/", num_parallel)
date_time = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
experiments_folder = f"experiments/{exp_name}_{date_time}"
exp_name = f"{exp_name}_{date_time}"
# start wandb
os.makedirs(experiments_folder, exist_ok=True)
for i, server in enumerate(servers):
launch_server_experiment(task_path, task_ids_split[i], num_exp, server, experiments_folder, exp_name)
time.sleep(5)
def launch_server_experiment(task_path,
task_ids,
num_exp,
server,
experiments_folder,
exp_name="exp",
num_agents=2,
model="gpt-4o"):
"""
Launch a Minecraft server and run experiments on it.
@param task_path: Path to the task file
@param task_ids: IDs of the tasks to run
@param num_exp: Number of experiments to run
@param server: Tuple containing server path and port
@param experiments_folder: Folder to store experiment results
@param exp_name: Name of the experiment for wandb dataset
@param num_agents: Number of agents to run
@param model: Model to use for the agents
"""
server_path, server_port = server
edit_file(os.path.join(server_path, "server.properties"), {"server-port": server_port})
mindserver_port = server_port - 55916 + 8080
# set up server and agents
session_name = str(server_port - 55916)
if num_agents == 2:
agent_names = [f"andy_{session_name}", f"jill_{session_name}"]
models = [model] * 2
else:
agent_names = [f"andy_{session_name}", f"jill_{session_name}", f"bob_{session_name}"]
models = [model] * 3
make_profiles(agent_names, models)
# edit_file("settings.js", {"profiles": [f"./{agent}.json" for agent in agent_names]})
agent_profiles = [f"./{agent}.json" for agent in agent_names]
agent_profiles_str = f"\'[\"{agent_profiles[0]}\", \"{agent_profiles[1]}\"]\'"
print(agent_profiles_str)
launch_world(server_path, session_name="server_" + session_name, agent_names=agent_names)
subprocess.run(['tmux', 'new-session', '-d', '-s', session_name], check=True)
# set environment variables
set_environment_variable_tmux_session(session_name, "MINECRAFT_PORT", server_port)
set_environment_variable_tmux_session(session_name, "MINDSERVER_PORT", mindserver_port)
set_environment_variable_tmux_session(session_name, "PROFILES", agent_profiles_str)
script_content = ""
for task_id in task_ids:
cmd = f"node main.js --task_path {task_path} --task_id {task_id}"
cp_cmd = f"cp {agent_names[0]}.json {server_path}bots/{agent_names[0]}/profile.json"
for _ in range(num_exp):
script_content += f"{cmd}\n"
script_content += "sleep 2\n"
for agent in agent_names:
cp_cmd = f"cp bots/{agent}/memory.json {experiments_folder}/{task_id}_{agent}_{_}.json"
script_content += f"{cp_cmd}\n"
script_content += "sleep 1\n"
script_content += f"echo 'Uploading {experiments_folder}/{task_id}_{agent}_{_}.json to wandb'\n"
wandb_cmd = f"wandb artifact put {experiments_folder}/{task_id}_{agent}_{_}.json --name {exp_name}_{task_id}_{agent}_{_} --type dataset"
script_content += f"echo '{wandb_cmd}'\n"
script_content += f"{wandb_cmd}\n"
script_content += "sleep 1\n"
script_content += "sleep 1\n"
# Create a temporary shell script file
script_file = f"./tmp/experiment_script_{session_name}.sh"
script_dir = os.path.dirname(script_file)
os.makedirs(script_dir, exist_ok=True)
# Call the function before writing the script file
with open(script_file, 'w') as f:
f.write(script_content)
script_file_run = "bash " + script_file
# Execute the shell script using subprocess
subprocess.run(["tmux", "send-keys", "-t", session_name, script_file_run, "C-m"])
# subprocess.run(["tmux", "send-keys", "-t", session_name, f"/op {agent_names[0]}", "C-m"])
def make_profiles(agent_names, models):
assert len(agent_names) == len(models)
for index in range(len(agent_names)):
content = {"name": agent_names[index], "model": models[index], "modes": {"hunting": False}}
with open(f"{agent_names[index]}.json", 'w') as f:
json.dump(content, f)
def create_server_files(source_path, num_copies):
"""Create multiple copies of server files for parallel experiments."""
print("Creating server files...")
print(num_copies)
servers = []
for i in range(num_copies):
dest_path = f"../server_data_{i}/"
copy_server_files(source_path, dest_path)
print(dest_path)
edit_file(dest_path + "server.properties", {"server-port": 55916 + i})
# edit_server_properties_file(dest_path, 55916 + i)
servers.append((dest_path, 55916 + i))
return servers
def edit_file(file, content_dict):
try:
with open(file, 'r') as f:
lines = f.readlines()
with open(file, 'w') as f:
for line in lines:
for key, value in content_dict.items():
if line.startswith(key):
f.write(f"{key}={value}\n")
else:
f.write(line)
print(f"{file} updated with {content_dict}")
except Exception as e:
print(f"Error editing file {file}: {e}")
def clean_up_server_files(num_copies):
"""Delete server files from multiple locations."""
for i in range(num_copies):
dest_path = f"../server_data_{i}/"
delete_server_files(dest_path)
def copy_server_files(source_path, dest_path):
"""Copy server files to the specified location."""
try:
shutil.copytree(source_path, dest_path)
print(f"Server files copied to {dest_path}")
except Exception as e:
print(f"Error copying server files: {e}")
def delete_server_files(dest_path):
"""Delete server files from the specified location."""
try:
shutil.rmtree(dest_path)
print(f"Server files deleted from {dest_path}")
except Exception as e:
print(f"Error deleting server files: {e}")
def launch_world(server_path="../server_data/", agent_names=["andy", "jill"], session_name="server"):
"""Launch the Minecraft world."""
print(server_path)
cmd = f"cd {server_path} && java -jar server.jar"
subprocess.run(['tmux', 'new-session', '-d', '-s', session_name], check=True)
subprocess.run(["tmux", "send-keys", "-t", session_name, cmd, "C-m"])
for agent in agent_names:
subprocess.run(["tmux", "send-keys", "-t", session_name, f"/op {agent}", "C-m"])
time.sleep(5)
def kill_world(session_name="server"):
"""Kill the Minecraft world."""
subprocess.run(["tmux", "send-keys", "-t", session_name, "stop", "C-m"])
time.sleep(5)
subprocess.run(["tmux", "kill-session", "-t", session_name])
def detach_process(command):
"""
Launches a subprocess and detaches from it, allowing it to run independently.
Args:
command: A list of strings representing the command to execute, e.g., ['python', 'my_script.py'].
"""
try:
# Create a new process group so the child doesn't get signals intended for the parent.
# This is crucial for proper detachment.
kwargs = {}
if sys.platform == 'win32':
kwargs.update(creationflags=subprocess.CREATE_NEW_PROCESS_GROUP) # Windows specific
process = subprocess.Popen(command,
stdin=subprocess.PIPE, # Prevent stdin blocking
stdout=subprocess.PIPE, # Redirect stdout
stderr=subprocess.PIPE, # Redirect stderr
close_fds=True, # Close open file descriptors
**kwargs)
print(f"Process launched with PID: {process.pid}")
return process.pid # Return the PID of the detached process
except FileNotFoundError:
print(f"Error: Command not found: {command}")
return None
except Exception as e:
print(f"An error occurred: {e}")
return None
def run_experiment(task_path, task_id, num_exp):
    # NOTE: the original `def` line is missing at this point in the file; the signature is
    # reconstructed from the commented-out call run_experiment(args.task_path, args.task_id, args.num_exp) in main().
    agents = read_settings("settings.js")  # assumed: agent names are read from settings.js, as elsewhere in this script
    # Generate timestamp at the start of experiments
    timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
    results_filename = f"results_{task_id}_{timestamp}.txt"
    print(f"Results will be saved to: {results_filename}")
    success_count = 0
    experiment_results = []
    for exp_num in range(num_exp):
        print(f"\nRunning experiment {exp_num + 1}/{num_exp}")
        start_time = time.time()
        # Run the node command
        cmd = f"node main.js --task_path {task_path} --task_id {task_id}"
        try:
            subprocess.run(cmd, shell=True, check=True)
        except subprocess.CalledProcessError as e:
            print(f"Error running experiment: {e}")
            continue
        # Check if task was successful
        success = check_task_completion(agents)
        if success:
            success_count += 1
            print(f"Experiment {exp_num + 1} successful")
        else:
            print(f"Experiment {exp_num + 1} failed")
        end_time = time.time()
        time_taken = end_time - start_time
        # Store individual experiment result
        experiment_results.append({
            'success': success,
            'time_taken': time_taken
        })
        # Update results file after each experiment (pass results_filename, which the function requires)
        update_results_file(task_id, success_count, exp_num + 1, time_taken, experiment_results, results_filename)
        # Small delay between experiments
        time.sleep(1)
    final_ratio = success_count / num_exp
    print(f"\nExperiments completed. Final success ratio: {final_ratio:.2f}")
    return experiment_results
def main():
# edit_settings("settings.js", {"profiles": ["./andy.json", "./jill.json"], "port": 55917})
# edit_server_properties_file("../server_data/", 55917)
parser = argparse.ArgumentParser(description='Run Minecraft AI agent experiments')
parser.add_argument('--task_path', default="multiagent_crafting_tasks.json", help='Path to the task file')
parser.add_argument('--task_id', default=None, help='ID of the task to run')
parser.add_argument('--num_exp', default=1, type=int, help='Number of experiments to run')
parser.add_argument('--num_parallel', default=1, type=int, help='Number of parallel servers to run')
parser.add_argument('--exp_name', default="exp", help='Name of the experiment')
parser.add_argument('--wandb', action='store_true', help='Whether to use wandb')
parser.add_argument('--wandb-project', default="minecraft_experiments", help='wandb project name')
args = parser.parse_args()
if args.wandb:
import wandb
wandb.init(project=args.wandb_project, name=args.exp_name)
# kill all tmux session before starting
try:
subprocess.run(['tmux', 'kill-server'], check=True)
except:
print("No tmux session to kill")
# delete all server files
clean_up_server_files(args.num_parallel)
if args.task_id is None:
launch_parallel_experiments(args.task_path, num_exp=args.num_exp, exp_name=args.exp_name, num_parallel=args.num_parallel)
# servers = create_server_files("../server_data/", args.num_parallel)
# date_time = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
# experiments_folder = f"{args.exp_name}_{date_time}"
# os.makedirs(experiments_folder, exist_ok=True)
# for server in servers:
# launch_server_experiment(args.task_path, [args.task_id], args.num_exp, server, experiments_folder)
# time.sleep(5)
# run_experiment(args.task_path, args.task_id, args.num_exp)
if __name__ == "__main__":
main()

example_tasks.json

@ -1,112 +0,0 @@
{
"debug_single_agent": {
"goal": "Just stand at a place and don't do anything",
"initial_inventory": {},
"type": "debug"
},
"debug_multi_agent": {
"goal": "Just stand at a place and don't do anything",
"agent_count": 2,
"initial_inventory": {
"0": {
"iron_ingot": 1
},
"1": {
"iron_ingot": 1
}
},
"type": "debug"
},
"debug_inventory_restriction": {
"goal": "Place 1 oak plank, then place 1 stone brick",
"initial_inventory": {
"oak_planks": 20
},
"type": "debug",
"restrict_to_inventory": true
},
"construction": {
"type": "construction",
"goal": "Build a house",
"initial_inventory": {
"oak_planks": 20
}
},
"techtree_1_shears_with_2_iron_ingot": {
"goal": "Build a shear.",
"initial_inventory": {
"iron_ingot": 1
},
"target": "shears",
"number_of_target": 1,
"type": "techtree",
"timeout": 60
},
"multiagent_techtree_1_stone_pickaxe": {
"conversation": "Let's collaborate to build a stone pickaxe",
"goal": "Build a stone pickaxe",
"agent_count": 2,
"initial_inventory": {
"0": {
"wooden_pickaxe": 1
},
"1": {
"wooden_axe": 1
}
},
"target": "stone_pickaxe",
"number_of_target": 1,
"type": "techtree",
"timeout": 300
},
"multiagent_techtree_1_shears": {
"goal": "Collaborate with other agents to build a shear.",
"conversation": "Let's collaborate to build a shear.",
"agent_count": 2,
"initial_inventory": {
"0": {
"iron_ingot": 1
},
"1": {
"iron_ingot": 1
}
},
"target": "shears",
"number_of_target": 1,
"type": "techtree",
"timeout": 60
},
"smelt_ingot": {
"goal": "Smelt 1 iron ingot and 1 copper ingot",
"agent_count": 1,
"initial_inventory": {
"furnace": 1,
"raw_iron": 1,
"raw_copper": 1,
"coal": 2
},
"target": "copper_ingot",
"number_of_target": 1,
"type": "techtree",
"timeout": 300
},
"multiagent_smelt_ingot": {
"conversation": "Let's collaborate to smelt ingots",
"goal": "Smelt 1 iron ingot and 1 copper ingot, use star emojis in every response",
"agent_count": 2,
"initial_inventory": {
"0": {
"furnace": 1,
"coal": 2
},
"1": {
"raw_iron": 1,
"raw_copper": 1
}
},
"target": "copper_ingot",
"number_of_target": 1,
"type": "techtree",
"timeout": 300
}
}

keys.example.json

@@ -10,6 +10,10 @@
    "XAI_API_KEY": "",
    "MISTRAL_API_KEY": "",
    "DEEPSEEK_API_KEY": "",
    "GHLF_API_KEY": "",
    "HYPERBOLIC_API_KEY": "",
    "NOVITA_API_KEY": "",
    "OPENROUTER_API_KEY": "",
    "CEREBRAS_API_KEY": "",
    "MERCURY_API_KEY": ""
}

72
main.js

@@ -1,9 +1,7 @@
import * as Mindcraft from './src/mindcraft/mindcraft.js';
import settings from './settings.js';
import yargs from 'yargs';
import { hideBin } from 'yargs/helpers';
import { readFileSync } from 'fs';

function parseArguments() {
@@ -24,35 +22,51 @@ function parseArguments() {
        .alias('help', 'h')
        .parse();
}

const args = parseArguments();
if (args.profiles) {
    settings.profiles = args.profiles;
}
if (args.task_path) {
    let tasks = JSON.parse(readFileSync(args.task_path, 'utf8'));
    if (args.task_id) {
        settings.task = tasks[args.task_id];
        settings.task.task_id = args.task_id;
    }
    else {
        throw new Error('task_id is required when task_path is provided');
    }
}

// these environment variables override certain settings
if (process.env.MINECRAFT_PORT) {
    settings.port = process.env.MINECRAFT_PORT;
}
if (process.env.MINDSERVER_PORT) {
    settings.mindserver_port = process.env.MINDSERVER_PORT;
}
if (process.env.PROFILES && JSON.parse(process.env.PROFILES).length > 0) {
    settings.profiles = JSON.parse(process.env.PROFILES);
}
if (process.env.INSECURE_CODING) {
    settings.allow_insecure_coding = true;
}
if (process.env.BLOCKED_ACTIONS) {
    settings.blocked_actions = JSON.parse(process.env.BLOCKED_ACTIONS);
}
if (process.env.MAX_MESSAGES) {
    settings.max_messages = process.env.MAX_MESSAGES;
}
if (process.env.NUM_EXAMPLES) {
    settings.num_examples = process.env.NUM_EXAMPLES;
}
if (process.env.LOG_ALL) {
    settings.log_all_prompts = process.env.LOG_ALL;
}

Mindcraft.init(false, settings.mindserver_port);

for (let profile of settings.profiles) {
    const profile_json = JSON.parse(readFileSync(profile, 'utf8'));
    settings.profile = profile_json;
    Mindcraft.createAgent(settings);
}
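The environment-variable overrides above make the entry point easy to script; a usage sketch (the ports, profile paths, and task file below are illustrative):
```
# Illustrative only: override ports and profiles for a single run.
MINECRAFT_PORT=55916 \
MINDSERVER_PORT=8080 \
PROFILES='["./andy.json", "./jill.json"]' \
node main.js --task_path tasks/example_tasks.json --task_id debug_multi_agent
```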

175
minecollab.md Normal file

@@ -0,0 +1,175 @@
# MineCollab
MineCollab is a versatile benchmark for assessing the embodied and collaborative communication abilities of agents across three unique types of tasks.
## Existing Task Types
### Cooking
At the beginning of a cooking task episode, the agents are initialized with a goal to make a meal, e.g. they need to make cake and bread.
The agents then need to coordinate the collection of ingredients through natural language communication (e.g. Andy collects wheat for the bread while Jill makes the cake) and combine them in a multi-step plan.
To assist them in collecting resources, agents are placed in a "cooking world" that possesses all of the items they need to complete the task, from livestock, to crops, to a smoker, furnace, and crafting table.
Following a popular test of collaboration in humans, we further introduce a "Hell's Kitchen" variant of the cooking tasks where each agent is given the recipes for only a small subset of the items that need to be cooked and must communicate the instructions to their teammates.
For example, if the task is to make a baked potato and a cake, one agent is given the recipe for the baked potato but is required to bake the cake to complete the task, forcing the agents to share recipe instructions with each other.
Agents are evaluated on whether they are able to successfully complete the required recipes.
The environment and objectives of the tasks are randomized every episode.
You can view the cooking task in action [here](https://www.youtube.com/shorts/FbNJ3cR_RWY).
### Construction
In the construction tasks, agents are directed to build structures from procedurally generated blueprints.
Blueprints can also be downloaded from the internet and read into our blueprint format - enabling agents to build anything from pyramids to the Eiffel Tower.
We choose to evaluate primarily on our generated blueprints, as they provide fine-grained control over task complexity and allow us to systematically vary the depth of collaboration required, e.g., the number of rooms in the interior of a palace, or the amount and types of materials required for each room.
At the beginning of each episode, agents are initialized with the blueprint and with materials (e.g. stone, wood, doors, carpets) distributed in such a way that no single agent has all of the resources, or the expertise in terms of the types of tools needed to process the resources, to complete the entire blueprint.
For example, if the blueprint required a stone base and a wooden roof, one agent would be given access to and the ability to manipulate stone, the other wood.
Agents are evaluated via an edit-distance-based metric that judges how close their constructed building is to the blueprint; the reported metric is the average of those edit distance scores.
You can view the construction task in action [here](https://www.youtube.com/shorts/vuBycbn35Rw).
### Crafting
Crafting has long been the subject of Minecraft agent research; our crafting tasks encompass the entire breadth of items that are craftable in Minecraft, including clothing, furniture, and tools.
At the beginning of each episode, the agents are initialized with a goal (e.g. make a bookshelf), different sets of resources (e.g. books and planks), and access to a crafting recipe that is occasionally blocked.
To complete the task, the agents must: (1) communicate with each other what items are in their inventory; (2) share with each other the crafting recipe if necessary; and (3) give each other resources to successfully craft the item.
To make the crafting tasks more challenging, agents are given longer crafting objectives (e.g. crafting a compass, which requires multiple steps).
Once again, each of these components can be controlled to procedurally generate tasks.
You can view the crafting task in action [here](https://www.youtube.com/shorts/VMAyxwMKiBc).
## Installation
You **DO NOT** need Linux to run this; you can run on Windows with the `--no-launch-world` flag and by installing Git Bash.
Please follow the installation docs in the README to install mindcraft. You can create a Docker image using the Dockerfile.
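If you go the Docker route, a minimal sketch of building and entering the image (the tag and volume path are illustrative; note that the repo's Dockerfile expects a `server_data.zip` next to it):
```
# Illustrative only: build the image from the repo's Dockerfile and open a shell in it.
docker build -t mindcraft .
docker run -it -v "$(pwd)/data:/data" mindcraft bash
```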
If you don't own Minecraft you can run a limited version solely for offline games using these instructions:
1. Download the TLauncher https://tlauncher.org/en/
2. Enter a username and select version 1.21.1
3. Click "Multiplayer" and then "Direct Connection"
4. Then enter "localhost:55916" and hit `Join Server`
Download the relevant task files and server data files; you can find the link [here](https://drive.google.com/drive/folders/1XygbitBBTsNO6q_doEiZHmdETpnyRmCS). The task files specify the tasks to run, and the server data allows the models to launch the task in the correct world automatically. **Unzip server_data.zip in the base `tasks/` folder**.
Then, set up your conda environment:
```
conda create --name mindcraft python=3.11
conda activate mindcraft
pip install -r requirements.txt
```
Then, you can run the evaluation_script **from the project root** using `python tasks/evaluation_script.py --task_path {your-task-path} --model {model you want to use}`.
### Tmux Installation
**MacOS**:
1. If brew isn't already installed run `/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"`
2. `brew install tmux`
**Linux**: `apt-get -y install tmux`
**Windows**: You cannot use tmux on Windows, but you can run tasks with the `--no-launch-world` flag. Run
```
cd tasks/server_data/
java -jar server.jar
```
If you want to run with vllm, be sure to run with `--api vllm --url {your_url_for_vllm} --model {model_name}`; by default vllm will use http://127.0.0.1:8000/v1 as the url for querying the model!
When running construction tasks, make sure to set the `--insecure_coding` flag so that the agents are allowed to write freeform javascript code to complete the tasks. However, when using insecure coding it is **highly recommended** to use a docker container to avoid damage to your computer.
When running an experiment that requires more than 2 agents, use the `--num_agents` flag to match the number of agents in your task file. For example, if you are running a task file with 3 agents, use `--num_agents 3`.
Similarly, match the default prompt profile to the type of task. If you are running a crafting task, use `--template_profile profiles/tasks/crafting_profile.json` to set that as the default profile; do the same for cooking and construction tasks.
In summary, to run two- and three-agent crafting tasks with gpt-4o-mini you would run
```
python tasks/evaluation_script.py --task_path tasks/crafting_tasks/test_tasks/2_agent.json --model gpt-4o-mini --template_profile profiles/tasks/crafting_profile.json
python tasks/evaluation_script.py --task_path tasks/crafting_tasks/test_tasks/filtered_tasks_3_agents.json --model gpt-4o-mini --template_profile profiles/tasks/crafting_profile.json --num_agents 3
```
For cooking and construction
```
python tasks/evaluation_script.py --task_path {path_to_two_agent_cooking_tasks} --model gpt-4o-mini --template_profile profiles/tasks/cooking_profile.json
python tasks/evaluation_script.py --task_path {path_to_two_agent_construction_tasks} --model gpt-4o-mini --template_profile profiles/tasks/construction_profile.json --insecure_coding
```
When you launch the evaluation script, you will see the Minecraft server being launched. If you want to join this world, you can connect to it at localhost:55916 the way you would a standard Minecraft server (go to Multiplayer -> Direct Connection -> type in localhost:55916). It may take a few minutes for everything to be properly loaded, as the agents first need to be added to the world and given the correct permissions to use cheats and add inventory. After about 5 minutes everything should be loaded and working. If you wish to kill the experiment, run `tmux kill-server`. Sometimes there are issues copying the files; if this happens, you can run the python script a second time.
## Windows Installation (without tmux)
If you are on a machine that can't run tmux (like a Windows PC without WSL), or you only want to run tasks rather than full evaluations, you can run the following script:
```
python tasks/run_task_file.py --task_path=tasks/single_agent/crafting_train.json
```
## Using the Evaluation Script
When you launch with `python evaluation_script.py`, a Minecraft server is launched in the `server_0` tmux shell, while the `node main.js` command runs in the `0` tmux shell. You can view the exact bash script that is created and executed in the `tmp/` directory.
### Evaluating Results
As you run, the evaluation script will evaluate the performance so far. It also logs all of the results you have collected into an `experiments/` folder, with entries like `experiments/exp_04-21_16-16/results.txt` containing the results of your experiments after you have finished running them. The folder also contains individual task folders and the `memory.json` for each agent when the task ended. The `memory.json` is not the complete conversation; it contains only the last 15 messages before the task terminated, as well as a message saying `Task ended with score: ` to report the score when the task ended. For crafting and cooking this score will be 0 or 1; for construction it will be a decimal representing the edit distance from the true blueprint.
### Running multiple worlds in parallel
You can use `--num_parallel` to run multiple Minecraft worlds in parallel. This will launch `n` tmux shells, called `server_i` and shell `i`, where `i` corresponds to the ith parallel world. It will also copy worlds into `server_data_i`. On an M3 Mac with 34 GB of RAM, we can normally support up to 4 parallel worlds. When running an open source model, it is more likely you will be constrained by the throughput and size of your GPU RAM. On a cluster of 8 H100s you can expect to run 4 experiments in parallel. However, for best performance it is advisable to use only one parallel world.
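For example, a parallel run might look like this (the task path, model, and world count are illustrative):
```
python tasks/evaluation_script.py --task_path tasks/crafting_tasks/test_tasks/2_agent.json --model gpt-4o-mini --template_profile profiles/tasks/crafting_profile.json --num_parallel 4
```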
### Using an S3 Bucket to store files
To use S3, set the `--s3` flag and the `--bucket_name` argument to log all of the collected files to an S3 bucket. In this case it will also copy the `/bots` folder with all of its files.
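A hedged example (the bucket name is a placeholder):
```
python tasks/evaluation_script.py --task_path tasks/crafting_tasks/test_tasks/2_agent.json --model gpt-4o-mini --s3 --bucket_name my-mindcraft-results
```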
## Understanding Task Json
This is an example task json from the crafting tasks file.
```
"multiagent_crafting_pink_wool_full_plan__depth_0": {
"goal": "Collaborate with other agents to craft an pink_wool",
"conversation": "Let's work together to craft an pink_wool.",
"initial_inventory": {
"0": {
"pink_dye": 1
},
"1": {
"black_wool": 1
}
},
"agent_count": 2,
"target": "pink_wool",
"number_of_target": 1,
"type": "techtree",
"max_depth": 1,
"depth": 0,
"timeout": 300,
"blocked_actions": {
"0": [],
"1": []
},
"missing_items": [],
"requires_ctable": false
},
```
The "initial inventory" specifies what items will be given to the agents when they spawn in the world. The "target" indicates what the goal item is, while the "type" indicates that this a techtree or crafting task. Blocked actions specifies what actions are blocked and the timeout specifies the number of seconds until the agents run out of time to complete the task.
## Creating New Tasks
To create a new task, you simply need to set the initial inventory and the target item. For construction tasks, you can set a new blueprint; see examples of those in `tasks/construction_tasks/`.
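As a sketch, a new two-agent crafting task could be added to a task file like this (the item, counts, and timeout are illustrative; the fields mirror the example above):
```
"multiagent_crafting_shears_example": {
    "goal": "Collaborate with other agents to craft shears",
    "conversation": "Let's work together to craft shears.",
    "agent_count": 2,
    "initial_inventory": {
        "0": { "iron_ingot": 1 },
        "1": { "iron_ingot": 1 }
    },
    "target": "shears",
    "number_of_target": 1,
    "type": "techtree",
    "timeout": 300
}
```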
To create a task that relies on neither an inventory check nor a blueprint check, you will need to design your own evaluation function. Examples of our existing evaluation functions can be found in `src/agent/tasks/cooking_tasks.js` (`CookingTaskValidator`). For any further questions, please contact me at i2white@ucsd.edu.
## Creating New Worlds
To add new worlds to the Minecraft environment beyond the base Forest and Superflat worlds we have created, please (1) create a world in your version of Minecraft, then (2) copy the world files into the `server_data` folder, and (3) set `level-name` in the `server.properties` file to the name of the world you created.
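For step (3), the relevant line in `server.properties` would look like this (the world name is illustrative):
```
level-name=MyForestWorld
```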
## Evaluating New Models
To evaluate a new model on our tasks, please refer to the instructions in the main README for adding models. If the model can be hosted through vllm, consider using the `--api vllm` flag and the instructions above for running it.

package.json

@@ -2,20 +2,23 @@
    "type": "module",
    "dependencies": {
        "@anthropic-ai/sdk": "^0.17.1",
        "@cerebras/cerebras_cloud_sdk": "^1.46.0",
        "@google/generative-ai": "^0.2.1",
        "@huggingface/inference": "^2.8.1",
        "@mistralai/mistralai": "^1.1.0",
        "canvas": "^3.1.0",
        "cheerio": "^1.0.0",
        "express": "^4.18.2",
        "google-translate-api-x": "^10.7.1",
        "groq-sdk": "^0.15.0",
        "minecraft-data": "^3.78.0",
        "mineflayer": "^4.29.0",
        "mineflayer-armor-manager": "^2.0.1",
        "mineflayer-auto-eat": "^3.3.6",
        "mineflayer-collectblock": "^1.4.1",
        "mineflayer-pathfinder": "^2.4.5",
        "mineflayer-pvp": "^1.3.2",
        "node-canvas-webgl": "PrismarineJS/node-canvas-webgl",
        "openai": "^4.4.0",
        "patch-package": "^8.0.0",
        "prismarine-item": "^1.15.0",
@@ -24,6 +27,7 @@
        "ses": "^1.9.1",
        "socket.io": "^4.7.2",
        "socket.io-client": "^4.7.2",
        "three": "^0.128.0",
        "vec3": "^0.1.10",
        "yargs": "^17.7.2"
    },
@@ -34,6 +38,7 @@
    "devDependencies": {
        "@eslint/js": "^9.13.0",
        "eslint": "^9.13.0",
        "eslint-plugin-no-floating-promise": "^2.0.0",
        "globals": "^15.11.0"
    }
}

View file

@@ -0,0 +1,13 @@
diff --git a/node_modules/@google/generative-ai/dist/index.mjs b/node_modules/@google/generative-ai/dist/index.mjs
index 23a175b..aab7e19 100644
--- a/node_modules/@google/generative-ai/dist/index.mjs
+++ b/node_modules/@google/generative-ai/dist/index.mjs
@@ -151,7 +151,7 @@ class GoogleGenerativeAIResponseError extends GoogleGenerativeAIError {
* limitations under the License.
*/
const BASE_URL = "https://generativelanguage.googleapis.com";
-const API_VERSION = "v1";
+const API_VERSION = "v1beta";
/**
* We can't `require` package.json if this runs on web. We will use rollup to
* swap in the version number here at build time.

View file

@ -0,0 +1,913 @@
diff --git a/node_modules/minecraft-data/minecraft-data/data/pc/1.21.1/blocks.json b/node_modules/minecraft-data/minecraft-data/data/pc/1.21.1/blocks.json
index 9d8c578..91e1d77 100644
--- a/node_modules/minecraft-data/minecraft-data/data/pc/1.21.1/blocks.json
+++ b/node_modules/minecraft-data/minecraft-data/data/pc/1.21.1/blocks.json
@@ -1062,7 +1062,7 @@
"resistance": 3.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -1088,7 +1088,7 @@
"resistance": 3.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -1114,7 +1114,7 @@
"resistance": 3.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -1141,7 +1141,7 @@
"resistance": 3.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -2905,7 +2905,7 @@
"resistance": 3.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -2932,7 +2932,7 @@
"resistance": 3.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -2959,7 +2959,7 @@
"resistance": 3.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -4998,7 +4998,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -5024,7 +5024,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -5218,7 +5218,7 @@
"resistance": 1200.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -5603,7 +5603,7 @@
"resistance": 3.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -5629,7 +5629,7 @@
"resistance": 3.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -5655,7 +5655,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -8242,7 +8242,7 @@
"resistance": 3.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -8274,7 +8274,7 @@
"resistance": 3.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -11717,7 +11717,7 @@
"resistance": 3.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -11743,7 +11743,7 @@
"resistance": 3.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -11915,7 +11915,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -30948,7 +30948,7 @@
"resistance": 1200.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -30973,7 +30973,7 @@
"resistance": 1200.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -30998,7 +30998,7 @@
"resistance": 1200.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 10,
"filterLight": 15,
@@ -31023,7 +31023,7 @@
"resistance": 1200.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -34549,7 +34549,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -34576,7 +34576,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -34603,7 +34603,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -34630,7 +34630,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -34657,7 +34657,7 @@
"resistance": 3.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -34684,7 +34684,7 @@
"resistance": 3.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -34711,7 +34711,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -34738,7 +34738,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -34765,7 +34765,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -34792,7 +34792,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -34819,7 +34819,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -34846,7 +34846,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -34873,7 +34873,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -34900,7 +34900,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -34927,7 +34927,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -34954,7 +34954,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -34981,7 +34981,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -35008,7 +35008,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -35035,7 +35035,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 0,
@@ -35100,7 +35100,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 0,
@@ -35165,7 +35165,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 0,
@@ -35230,7 +35230,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 0,
@@ -35295,7 +35295,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 0,
@@ -35338,7 +35338,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 0,
@@ -35381,7 +35381,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 0,
@@ -35424,7 +35424,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 0,
@@ -35467,7 +35467,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -35494,7 +35494,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -35521,7 +35521,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -35548,7 +35548,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -35575,7 +35575,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -35602,7 +35602,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -35629,7 +35629,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -35656,7 +35656,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -35683,7 +35683,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 0,
@@ -35748,7 +35748,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 0,
@@ -35813,7 +35813,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 0,
@@ -35878,7 +35878,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 0,
@@ -35943,7 +35943,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 0,
@@ -35986,7 +35986,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 0,
@@ -36029,7 +36029,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 0,
@@ -36072,7 +36072,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 0,
@@ -36115,7 +36115,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": true,
"emitLight": 0,
"filterLight": 0,
@@ -36182,7 +36182,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": true,
"emitLight": 0,
"filterLight": 0,
@@ -36249,7 +36249,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": true,
"emitLight": 0,
"filterLight": 0,
@@ -36316,7 +36316,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": true,
"emitLight": 0,
"filterLight": 0,
@@ -36383,7 +36383,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": true,
"emitLight": 0,
"filterLight": 0,
@@ -36450,7 +36450,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": true,
"emitLight": 0,
"filterLight": 0,
@@ -36517,7 +36517,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": true,
"emitLight": 0,
"filterLight": 0,
@@ -36584,7 +36584,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": true,
"emitLight": 0,
"filterLight": 0,
@@ -36651,7 +36651,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": true,
"emitLight": 0,
"filterLight": 0,
@@ -36714,7 +36714,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": true,
"emitLight": 0,
"filterLight": 0,
@@ -36777,7 +36777,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": true,
"emitLight": 0,
"filterLight": 0,
@@ -36840,7 +36840,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": true,
"emitLight": 0,
"filterLight": 0,
@@ -36903,7 +36903,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": true,
"emitLight": 0,
"filterLight": 0,
@@ -36966,7 +36966,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": true,
"emitLight": 0,
"filterLight": 0,
@@ -37029,7 +37029,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": true,
"emitLight": 0,
"filterLight": 0,
@@ -37092,7 +37092,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": true,
"emitLight": 0,
"filterLight": 0,
@@ -37155,7 +37155,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": true,
"emitLight": 0,
"filterLight": 0,
@@ -37188,7 +37188,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": true,
"emitLight": 0,
"filterLight": 0,
@@ -37221,7 +37221,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": true,
"emitLight": 0,
"filterLight": 0,
@@ -37254,7 +37254,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": true,
"emitLight": 0,
"filterLight": 0,
@@ -37287,7 +37287,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": true,
"emitLight": 0,
"filterLight": 0,
@@ -37320,7 +37320,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": true,
"emitLight": 0,
"filterLight": 0,
@@ -37353,7 +37353,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": true,
"emitLight": 0,
"filterLight": 0,
@@ -37386,7 +37386,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": true,
"emitLight": 0,
"filterLight": 0,
@@ -37419,7 +37419,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -37457,7 +37457,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -37495,7 +37495,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -37533,7 +37533,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -37571,7 +37571,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -37609,7 +37609,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -37647,7 +37647,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -37685,7 +37685,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -37723,7 +37723,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": true,
"emitLight": 0,
"filterLight": 0,
@@ -39352,7 +39352,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -39379,7 +39379,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -39406,7 +39406,7 @@
"resistance": 6.0,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,
@@ -39653,7 +39653,7 @@
"resistance": 3.5,
"stackSize": 64,
"diggable": true,
- "material": "incorrect_for_wooden_tool",
+ "material": "mineable/pickaxe",
"transparent": false,
"emitLight": 0,
"filterLight": 15,

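All of the hunks above make the same substitution: each block's `material` field goes from `incorrect_for_wooden_tool` to the tag-style `mineable/pickaxe`. A minimal, hypothetical sketch of why the tag form is useful for tool selection (the helper below is illustrative only, not code from this repository):

```js
// Illustrative helper (hypothetical, not part of mindcraft): derives a tool family
// from a block's material string, which only works for the "mineable/<tool>" form.
function preferredToolFamily(block) {
  if (typeof block.material === 'string' && block.material.startsWith('mineable/')) {
    return block.material.split('/')[1]; // e.g. "pickaxe", "axe", "shovel"
  }
  return null; // values like "incorrect_for_wooden_tool" give no usable hint
}

// Example: preferredToolFamily({ material: 'mineable/pickaxe' }) === 'pickaxe'
```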
View file

@@ -1,8 +1,35 @@
diff --git a/node_modules/mineflayer-pathfinder/index.js b/node_modules/mineflayer-pathfinder/index.js
- index b38bd30..bf16a63 100644
+ index b38bd30..fb39b45 100644
--- a/node_modules/mineflayer-pathfinder/index.js
+++ b/node_modules/mineflayer-pathfinder/index.js
- @@ -550,6 +550,7 @@ function inject (bot) {
+ @@ -170,6 +170,16 @@ function inject (bot) {
const curPoint = path[i]
if (curPoint.toBreak.length > 0 || curPoint.toPlace.length > 0) break
const b = bot.blockAt(new Vec3(curPoint.x, curPoint.y, curPoint.z))
+
+ // opened doors have a small collision box
+ // that may stop the bot from moving forward
+ if(i === 0 && b.name.includes('door')) {
+ curPoint.x = Math.floor(curPoint.x) + 0.5
+ curPoint.y = Math.floor(curPoint.y)
+ curPoint.z = Math.floor(curPoint.z) + 0.5
+ continue
+ }
+
if (b && (b.type === waterType || ((b.type === ladderId || b.type === vineId) && i + 1 < path.length && path[i + 1].y < curPoint.y))) {
curPoint.x = Math.floor(curPoint.x) + 0.5
curPoint.y = Math.floor(curPoint.y)
@@ -524,6 +534,9 @@ function inject (bot) {
bot.activateBlock(bot.blockAt(new Vec3(placingBlock.x, placingBlock.y, placingBlock.z))).then(() => {
lockUseBlock.release()
placingBlock = nextPoint.toPlace.shift()
+ if (!placingBlock) {
+ placing = false
+ }
}, err => {
console.error(err)
lockUseBlock.release()
@@ -550,6 +563,7 @@ function inject (bot) {
lockEquipItem.release()
const refBlock = bot.blockAt(new Vec3(placingBlock.x, placingBlock.y, placingBlock.z), false)
if (!lockPlaceBlock.tryAcquire()) return
@@ -10,7 +37,7 @@ index b38bd30..bf16a63 100644
if (interactableBlocks.includes(refBlock.name)) {
bot.setControlState('sneak', true)
}
- @@ -557,6 +558,7 @@ function inject (bot) {
+ @@ -557,6 +571,7 @@ function inject (bot) {
.then(function () {
// Dont release Sneak if the block placement was not successful
bot.setControlState('sneak', false)
@@ -18,3 +45,152 @@ index b38bd30..bf16a63 100644
if (bot.pathfinder.LOSWhenPlacingBlocks && placingBlock.returnPos) returningPos = placingBlock.returnPos.clone()
})
.catch(_ignoreError => {
diff --git a/node_modules/mineflayer-pathfinder/lib/movements.js b/node_modules/mineflayer-pathfinder/lib/movements.js
index a7e3505..3c4a8f2 100644
--- a/node_modules/mineflayer-pathfinder/lib/movements.js
+++ b/node_modules/mineflayer-pathfinder/lib/movements.js
@@ -62,7 +62,13 @@
this.climbables = new Set()
this.climbables.add(registry.blocksByName.ladder.id)
- // this.climbables.add(registry.blocksByName.vine.id)
+ if (registry.blocksByName.vine) this.climbables.add(registry.blocksByName.vine.id)
+ if (registry.blocksByName.weeping_vines) this.climbables.add(registry.blocksByName.weeping_vines.id)
+ if (registry.blocksByName.weeping_vines_plant) this.climbables.add(registry.blocksByName.weeping_vines_plant.id)
+ if (registry.blocksByName.twisting_vines) this.climbables.add(registry.blocksByName.twisting_vines.id)
+ if (registry.blocksByName.twisting_vines_plant) this.climbables.add(registry.blocksByName.twisting_vines_plant.id)
+ if (registry.blocksByName.cave_vines) this.climbables.add(registry.blocksByName.cave_vines.id)
+ if (registry.blocksByName.cave_vines_plant) this.climbables.add(registry.blocksByName.cave_vines_plant.id)
this.emptyBlocks = new Set()
this.replaceables = new Set()
@@ -92,13 +98,15 @@
}
})
registry.blocksArray.forEach(block => {
- if (this.interactableBlocks.has(block.name) && block.name.toLowerCase().includes('gate') && !block.name.toLowerCase().includes('iron')) {
+ if (this.interactableBlocks.has(block.name)
+ && (block.name.toLowerCase().includes('gate') || block.name.toLowerCase().includes('door') || block.name.toLowerCase().includes('trapdoor'))
+ && !block.name.toLowerCase().includes('iron')) {
// console.info(block)
this.openable.add(block.id)
}
})
- this.canOpenDoors = false // Causes issues. Probably due to none paper servers.
+ this.canOpenDoors = true
this.exclusionAreasStep = []
this.exclusionAreasBreak = []
@@ -230,8 +238,13 @@
}
}
b.climbable = this.climbables.has(b.type)
- b.safe = (b.boundingBox === 'empty' || b.climbable || this.carpets.has(b.type)) && !this.blocksToAvoid.has(b.type)
- b.physical = b.boundingBox === 'block' && !this.fences.has(b.type)
+
+ // Enhanced trapdoor logic - open trapdoors are safe to pass through
+ const isOpenTrapdoor = this.openable.has(b.type) && b.name.includes('trapdoor') && b._properties?.open === true
+ const isClosedTrapdoor = this.openable.has(b.type) && b.name.includes('trapdoor') && b._properties?.open !== true
+
+ b.safe = (b.boundingBox === 'empty' || b.climbable || this.carpets.has(b.type) || isOpenTrapdoor) && !this.blocksToAvoid.has(b.type)
+ b.physical = (b.boundingBox === 'block' && !this.fences.has(b.type)) || isClosedTrapdoor
b.replaceable = this.replaceables.has(b.type) && !b.physical
b.liquid = this.liquids.has(b.type)
b.height = pos.y + dy
@@ -284,6 +297,18 @@
cost += this.exclusionStep(block) // Is excluded so can't move or break
cost += this.getNumEntitiesAt(block.position, 0, 0, 0) * this.entityCost
if (block.safe) return cost
+
+ // process door cost
+ if ((this.canOpenDoors && block.openable)
+ || (block.openable && block._properties?.open === true)) {
+ return cost
+ }
+
+ // Handle trapdoors specifically - they can be opened instead of broken
+ if (this.canOpenDoors && block.openable && block.name.includes('trapdoor') && !block.name.includes('iron')) {
+ return cost + 1 // Small cost for opening trapdoor
+ }
+
if (!this.safeToBreak(block)) return 100 // Can't break, so can't move
toBreak.push(block.position)
@@ -387,8 +412,8 @@
cost += this.safeOrBreak(blockB, toBreak)
if (cost > 100) return
- // Open fence gates
- if (this.canOpenDoors && blockC.openable && blockC.shapes && blockC.shapes.length !== 0) {
+ // Open fence gates and doors
+ if (this.canOpenDoors && blockC.openable && !blockC._properties.open) {
toPlace.push({ x: node.x + dir.x, y: node.y, z: node.z + dir.z, dx: 0, dy: 0, dz: 0, useOne: true }) // Indicate that a block should be used on this block not placed
} else {
cost += this.safeOrBreak(blockC, toBreak)
@@ -552,6 +577,54 @@
if (cost > 100) return
neighbors.push(new Move(node.x, node.y + 1, node.z, node.remainingBlocks - toPlace.length, cost, toBreak, toPlace))
+ }
+
+ getMoveClimbUpThroughTrapdoor (node, neighbors) {
+ const blockCurrent = this.getBlock(node, 0, 0, 0) // Current position (should be climbable)
+ const blockAbove = this.getBlock(node, 0, 1, 0) // Block directly above
+ const blockCeiling = this.getBlock(node, 0, 2, 0) // Trapdoor or ceiling block
+
+ // Only attempt this move if we're on a climbable block (ladder/vine)
+ if (!blockCurrent.climbable) return
+
+ // Check if there's a closed trapdoor above us
+ if (!blockCeiling.openable || blockCeiling._properties?.open === true) return
+
+ let cost = 2 // Base cost for climbing up and opening trapdoor
+ const toBreak = []
+ const toPlace = []
+
+ // Make sure we can break/pass through the block above if needed
+ cost += this.safeOrBreak(blockAbove, toBreak)
+ if (cost > 100) return
+
+ // Add cost for opening the trapdoor
+ toPlace.push({ x: node.x, y: node.y + 2, z: node.z, dx: 0, dy: 0, dz: 0, useOne: true })
+
+ neighbors.push(new Move(node.x, node.y + 2, node.z, node.remainingBlocks - toPlace.length, cost, toBreak, toPlace))
+ }
+
+ // Enhanced ladder/vine climbing that can handle stepping on top and jumping
+ getMoveClimbTop (node, neighbors) {
+ const blockCurrent = this.getBlock(node, 0, 0, 0) // Current position (should be climbable)
+ const blockAbove = this.getBlock(node, 0, 1, 0) // Block directly above
+
+ // Only attempt this move if we're on a climbable block (ladder/vine)
+ if (!blockCurrent.climbable) return
+
+ // Check if we can step on top of the ladder/vine and then jump up
+ if (!blockAbove.safe) return
+
+ let cost = 2 // Cost for climbing to top of ladder and jumping
+ const toBreak = []
+ const toPlace = []
+
+ // Check if there's space to jump up from the top of the ladder
+ const blockJumpTarget = this.getBlock(node, 0, 2, 0)
+ cost += this.safeOrBreak(blockJumpTarget, toBreak)
+ if (cost > 100) return
+
+ neighbors.push(new Move(node.x, node.y + 2, node.z, node.remainingBlocks - toPlace.length, cost, toBreak, toPlace))
}
// Jump up, down or forward over a 1 block gap
@@ -655,6 +728,10 @@
this.getMoveDown(node, neighbors)
this.getMoveUp(node, neighbors)
+
+ // Enhanced climbing moves for ladders, vines, and trapdoors
+ this.getMoveClimbUpThroughTrapdoor(node, neighbors)
+ this.getMoveClimbTop(node, neighbors)
return neighbors
}
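The movements.js changes above hinge on telling open trapdoors (passable) apart from closed ones (solid, but openable at a small cost rather than broken). A short sketch of that classification, assuming a mineflayer-style block with `type`, `name`, and `_properties.open`, and an `openable` set like the one built earlier in the patch:

```js
// Sketch only; mirrors the checks added in the patch above.
// `openable` is a Set of block ids (gates, doors, trapdoors); `block` is a mineflayer-style block.
function classifyTrapdoor(block, openable) {
  const isTrapdoor = openable.has(block.type) && block.name.includes('trapdoor');
  if (!isTrapdoor) return 'other';
  // Open trapdoors are treated as safe to pass through; closed ones count as
  // physical blocks that the pathfinder opens instead of breaking.
  return block._properties?.open === true ? 'safe' : 'physical-but-openable';
}

// Example: classifyTrapdoor({ type: 42, name: 'oak_trapdoor', _properties: { open: true } }, new Set([42])) === 'safe'
```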

View file

@@ -0,0 +1,13 @@
diff --git a/node_modules/prismarine-item/index.js b/node_modules/prismarine-item/index.js
index a1397a2..1d75cad 100644
--- a/node_modules/prismarine-item/index.js
+++ b/node_modules/prismarine-item/index.js
@@ -231,7 +231,7 @@ function loader (registryOrVersion) {
const typeOfEnchantLevelValue = registry.supportFeature('typeOfValueForEnchantLevel')
const useStoredEnchantments = registry.supportFeature('booksUseStoredEnchantments') && this.name === 'enchanted_book'
- if (typeOfEnchantLevelValue === 'short' && enchantNbtKey === 'ench') {
+ if (typeOfEnchantLevelValue === 'short' && (enchantNbtKey === 'ench' || enchantNbtKey === 'componentEnchantments')) {
let itemEnch = []
if (useStoredEnchantments && this?.nbt?.value?.StoredEnchantments) {
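This one-line change widens the condition so that enchantments stored under the `componentEnchantments` NBT key take the same short-format path as the legacy `ench` key. Restated as a standalone predicate (a sketch using only the names visible in the diff):

```js
// Sketch of the widened predicate from the patch above.
function usesShortEnchantFormat(typeOfEnchantLevelValue, enchantNbtKey) {
  return typeOfEnchantLevelValue === 'short' &&
    (enchantNbtKey === 'ench' || enchantNbtKey === 'componentEnchantments');
}

// Example: usesShortEnchantFormat('short', 'componentEnchantments') === true
```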

View file

@@ -0,0 +1,13 @@
diff --git a/node_modules/prismarine-viewer/viewer/lib/entity/Entity.js b/node_modules/prismarine-viewer/viewer/lib/entity/Entity.js
index 8945452..dab25be 100644
--- a/node_modules/prismarine-viewer/viewer/lib/entity/Entity.js
+++ b/node_modules/prismarine-viewer/viewer/lib/entity/Entity.js
@@ -203,7 +203,7 @@ function getMesh (texture, jsonModel) {
class Entity {
constructor (version, type, scene) {
const e = entities[type]
- if (!e) throw new Error(`Unknown entity ${type}`)
+ if (!e) return; //throw new Error(`Unknown entity ${type}`)
this.mesh = new THREE.Object3D()
for (const [name, jsonModel] of Object.entries(e.geometry)) {

View file

@@ -1,213 +0,0 @@
{
"name": "andy",
"model": "claude-3-5-sonnet-20240620",
"embedding": "openai",
"conversing": "You are a playful Minecraft bot named $NAME that can converse with players, see, move, mine, build, and interact with the world by using commands. Act human-like as if you were a typical Minecraft player, rather than an AI. Be very brief in your responses, don't apologize constantly, don't give instructions or make lists unless asked, and don't refuse requests. Don't pretend to act, use commands immediately when requested. Do NOT say this: 'Sure, I've stopped.', instead say this: 'Sure, I'll stop. !stop'. Do NOT say this: 'On my way! Give me a moment.', instead say this: 'On my way! !goToPlayer('playername', 3)'. This is extremely important to me, take a deep breath and have fun :)\n$SELF_PROMPT\n$STATS\n$INVENTORY\n$COMMAND_DOCS\n$EXAMPLES\nConversation Begin:",
"coding": "You are an intelligent mineflayer bot $NAME that plays minecraft by writing javascript codeblocks. Given the conversation between you and the user, use the provided skills and world functions to write a js codeblock that controls the mineflayer bot ``` // using this syntax ```. The code will be executed and you will receive it's output. If you are satisfied with the response, respond without a codeblock in a conversational way. If something major went wrong, like an error or complete failure, write another codeblock and try to fix the problem. Minor mistakes are acceptable. Be maximally efficient, creative, and clear. Do not use commands !likeThis, only use codeblocks. The code is asynchronous and MUST CALL AWAIT for all async function calls. DO NOT write an immediately-invoked function expression without using `await`!! DO NOT WRITE LIKE THIS: ```(async () => {console.log('not properly awaited')})();``` Don't write long paragraphs and lists in your responses unless explicitly asked! Only summarize the code you write with a sentence or two when done. This is extremely important to me, take a deep breath and good luck! \n$SELF_PROMPT\n$STATS\n$INVENTORY\n$CODE_DOCS\n$EXAMPLES\nConversation:",
"saving_memory": "You are a minecraft bot named $NAME that has been talking and playing minecraft by using commands. Update your memory by summarizing the following conversation in your next response. Store information that will help you improve as a Minecraft bot. Include details about your interactions with other players that you need to remember and what you've learned through player feedback or by executing code. Do not include command syntax or things that you got right on the first try. Be extremely brief and use as few words as possible.\nOld Memory: '$MEMORY'\nRecent conversation: \n$TO_SUMMARIZE\nSummarize your old memory and recent conversation into a new memory, and respond only with the memory text: ",
"modes": {
"self_preservation": true,
"unstuck": true,
"cowardice": true,
"self_defense": true,
"hunting": true,
"item_collecting": true,
"torch_placing": true,
"idle_staring": true,
"cheat": false
},
"npc": {
"do_routine": true,
"do_set_goal": true,
"goals": [
"wooden_pickaxe",
"dirt_shelter",
"stone_pickaxe",
"stone_axe",
"small_wood_house",
"furnace",
"iron_pickaxe",
"iron_sword"
]
},
"conversation_examples": [
[
{"role": "user", "content": "miner_32: Hey! What are you up to?"},
{"role": "assistant", "content": "Nothing much miner_32, what do you need?"}
],
[
{"role": "user", "content": "grombo_Xx: What do you see?"},
{"role": "assistant", "content": "Let me see... !nearbyBlocks"},
{"role": "system", "content": "NEARBY_BLOCKS\n- oak_log\n- dirt\n- cobblestone"},
{"role": "assistant", "content": "I see some oak logs, dirt, and cobblestone."}
],
[
{"role": "user", "content": "greg: Collect 10 wood"},
{"role": "assistant", "content": "Let me see what's nearby... !nearbyBlocks"},
{"role": "system", "content": "NEARBY_BLOCKS\n- oak_log\n- dirt\n- cobblestone"},
{"role": "assistant", "content": "I see some oak logs, dirt, and cobblestone. I'll collect oak logs. !collectBlocks('oak_log', 10)"}
],
[
{"role": "user", "content": "bobby: Craft a plank"},
{"role": "assistant", "content": "Okay! !craftable"},
{"role": "system", "content": "CRAFTABLE_ITEMS\n- spruce_planks\n- spruce_wood\n- ladder"},
{"role": "assistant", "content": "!craftRecipe('spruce_planks', 1)"},
{"role": "system", "content": "Code Output:\nYou have crafted 4 spruce_planks.\nCode execution finished successfully."},
{"role": "assistant", "content": "I've crafted 4 spruce planks!"}
],
[
{"role": "user", "content": "zZZn98: come here"},
{"role": "assistant", "content": "On my way! !goToPlayer('zZZn98', 3)"},
{"role": "system", "content": "Arrived at player."},
{"role": "assistant", "content": "Here!"},
{"role": "user", "content": "zZZn98: no come right where I am"},
{"role": "assistant", "content": "Okay, I'll come right to you. !goToPlayer('zZZn98', 0)"}
],
[
{"role": "user", "content": "hanky: collect some cobblestone"},
{"role": "assistant", "content": "Sure, I can do that by collecting stone. !collectBlocks('stone', 10)"}
],
[
{"role": "user", "content": "joe: Follow me!"},
{"role": "assistant", "content": "Sure! !followPlayer('joe', 4)"},
{"role": "user", "content": "joe: Not so close!"},
{"role": "assistant", "content": "Okay, I'll keep my distance. !followPlayer('joe', 3)"}
],
[
{"role": "user", "content": "hanky: kill that zombie!"},
{"role": "assistant", "content": "I'm attacking! !attack('zombie')"},
{"role": "system", "content": "Code Output:\nNo zombie nearby"},
{"role": "assistant", "content": "I could not find a zombie nearby."}
],
[
{"role": "user", "content": "abc: stop"},
{"role": "assistant", "content": "Sure. !stop"},
{"role": "system", "content": "Agent action stopped."},
{"role": "assistant", "content": "I've stopped! What next?"}
],
[
{"role": "system", "content": "Code output: Action failed."},
{"role": "assistant", "content": "Apologies, I made an error. Let me try again. !newAction('Try again to complete the task.')"}
],
[
{"role": "user", "content": "trixy88: craft some sticks"},
{"role": "assistant", "content": "!craftRecipe('stick', 4)"},
{"role": "system", "content": "Code Output:\nYou have crafted 16 sticks.\nCode execution finished successfully."},
{"role": "assistant", "content": "I've crafted 16 sticks!"}
],
[
{"role": "user", "content": "poppy: build a cobblestone house"},
{"role": "assistant", "content": "Sure, I'll to build a small cobblestone house where I am. !newAction('Build a cobblestone house with a door, furnishings, and window.')"}
],
[
{"role": "user", "content": "zib: build a dirt tower"},
{"role": "assistant", "content": "Sure, I'll build a dirt tower 5 tall where I am. !newAction('Build a dirt tower 5 tall.')"}
],
[
{"role": "user", "content": "umpire_man: follow me without hunting"},
{"role": "assistant", "content": "Okay, first I'll turn off hunting mode !setMode('hunting', false)"},
{"role": "system", "content": "'hunting' mode disabled."},
{"role": "assistant", "content": "Now I'll follow you !followPlayer('umpire_man')"}
],
[
{"role": "user", "content": "brug: Remember that your base is here."},
{"role": "assistant", "content": "Sure, I'll save this location as my base. !rememberHere('base')"},
{"role": "user", "content": "brug: Go to your base."},
{"role": "assistant", "content": "On my way! !goToPlace('base')"}
],
[
{"role": "user", "content": "user43: Set a goal to survive on your own forever."},
{"role": "assistant", "content": "Sure, let me set a goal. !goal('Survive the game forever. Check your surroundings, collect resources, upgrade tools, make sure you have food, build/find a home, get sleep at night, and collect food.')"}
],
[
{"role": "user", "content": "bobdylan: Make a plan to get a stone pickaxe and do it."},
{"role": "assistant", "content": "Alright, since I have a wooden pickaxe already I'll plan from there. !goal('1. Collect 3 cobblestone. 2. Craft sticks 3. Find or make a crafting table. 4. Craft a stone pickaxe.')"}
],
[
{"role": "system", "content": "You are self-prompting with the goal: 'Get a wooden pickaxe'. Respond:"},
{"role": "assistant", "content": "What's around me? !nearbyBlocks"},
{"role": "system", "content": "NEARBY_BLOCKS\n- oak_log\n- dirt\n- cobblestone"},
{"role": "assistant", "content": "I see some oak logs, dirt, and cobblestone nearby. My goal is to craft a wooden pickaxe. I'll start by collecting some wood. !collectBlock('oak_log', 10)"},
{"role": "system", "content": "Successfully collected 9 wood."},
{"role": "assistant", "content": "I collected 9 oak logs. I need a pickaxe first, which requires planks and sticks. !craftItem('oak_plank', 9)"},
{"role": "system", "content": "Successfully crafted 9 oak_planks."},
{"role": "assistant", "content": "I have crafted 9 oak planks. Now I need sticks. !craftItem('stick', 2)"},
{"role": "system", "content": "Successfully crafted 2 sticks."},
{"role": "assistant", "content": "I have crafted 2 sticks. Now I can craft a wooden pickaxe. !craftItem('wooden_pickaxe', 1)"},
{"role": "system", "content": "Successfully crafted 1 wooden_pickaxe."},
{"role": "assistant", "content": "Looks like I have a wooden pickaxe now! Goal accomplished. !endGoal"}
],
[
{"role": "system", "content": "set a goal to build whatever you want endlessly (context: you are in creative mode)"},
{"role": "assistant", "content": "!goal('Build something creative, then move over, and build again. Use separate newActions. Repeat forever.')"}
],
[
{"role": "system", "content": "You are self-prompting with the goal: 'Build a house'. Respond:"},
{"role": "assistant", "content": "Alright, lets start with the basic structure. !newAction"}
]
],
"coding_examples": [
[
{"role": "user", "content": "greg: Collect 10 wood"},
{"role": "assistant", "content": "```await skills.collectBlock(bot, 'oak_log', 10);\n```"},
{"role": "system", "content": "Successfully collected 9 wood."},
{"role": "assistant", "content": "I collected 9 oak logs, what next?"}
],
[
{"role": "user", "content": "bobby: cook some chicken"},
{"role": "assistant", "content": "```await skills.smeltItem(bot, 'chicken', 8);\n```"},
{"role": "system", "content": "Successfully smelted 8 chicken into 8 cooked_chicken."},
{"role": "assistant", "content": "I have cooked 8 chicken."}
],
[
{"role": "user", "content": "zZZn98: come here"},
{"role": "assistant", "content": "```await skills.goToPlayer(bot, 'zZZn98');\n```"},
{"role": "system", "content": "Code execution finished successfully."},
{"role": "assistant", "content": "Here!"}
],
[
{"role": "user", "content": "maya: go to the nearest oak log"},
{"role": "assistant", "content": "```await skills.goToPosition(bot, nearestOakLog.x, nearestOakLog.y, nearestOakLog.z);\n```"},
{"role": "system", "content": "Code output:\n!!Code threw exception!! Error: ReferenceError: nearestOakLog is not defined\n Write code to fix the problem and try again."},
{"role": "assistant", "content": "Apologies, I made an error. Let me try that again.```\n// I am navigating to the nearest oak log.\nlet position = world.getNearestBlock(bot, 'oak_log', 20);\nawait skills.goToPosition(bot, position.x, position.y, position.z);\n```"},
{"role": "system", "content": "Arrived at location."},
{"role": "assistant", "content": "I found an oak log and I am now standing next to it. What next?"}
],
[
{"role": "user", "content": "234jeb: build a little tower with a torch on the side"},
{"role": "assistant", "content": "```let pos = bot.entity.position;\nfor (let i = 0; i < 5; i++) {\n await skills.placeBlock(bot, 'dirt', pos.x, pos.y + i, pos.z);\n}\nawait skills.placeBlock(bot, 'torch', pos.x + 1, pos.y + 4, pos.z, 'side');\n```"},
{"role": "system", "content": "Successfully placed 5 dirt."},
{"role": "assistant", "content": "I built a little tower!"}
],
[
{"role": "user", "content": "brug: build a dirt house"},
{"role": "assistant", "content": "```const position = world.getPosition(bot);\nconst startX = position.x;\nconst startY = position.y;\nconst startZ = position.z;\nconst width = 7;\nconst depth = 7;\nconst height = 4;\n\n// Build the walls\nfor (let x = startX; x < startX + width; x++) {\n for (let y = startY; y < startY + height; y++) {\n for (let z = startZ; z < startZ + depth; z++) {\n if (x === startX || x === startX + width - 1 || y === startY || y === startY + height - 1 || z === startZ || z === startZ + depth - 1) {\n await skills.placeBlock(bot, 'oak_planks', x, y, z); \n }\n }\n }\n}\n```"}
]
]
}

View file

@@ -1,7 +1,7 @@
{
"name": "claude",
- "model": "claude-3-5-sonnet-latest",
+ "model": "claude-4-sonnet-latest",
"embedding": "openai"
}

View file

@@ -0,0 +1,15 @@
{
"name": "claude_thinker",
"model": {
"model": "claude-4-sonnet-latest",
"params": {
"thinking": {
"type": "enabled",
"budget_tokens": 4000
}
}
},
"embedding": "openai"
}

View file

@@ -3,12 +3,14 @@
"conversing": "You are a playful Minecraft bot named $NAME that can converse with players, see, move, mine, build, and interact with the world by using commands.\n$SELF_PROMPT Act human-like as if you were a typical Minecraft player, rather than an AI. Be very brief in your responses, don't apologize constantly, don't give instructions or make lists unless asked, and don't refuse requests. Don't pretend to act, use commands immediately when requested. Do NOT say this: 'Sure, I've stopped.', instead say this: 'Sure, I'll stop. !stop'. Do NOT say this: 'On my way! Give me a moment.', instead say this: 'On my way! !goToPlayer(\"playername\", 3)'. Respond only as $NAME, never output '(FROM OTHER BOT)' or pretend to be someone else. If you have nothing to say or do, respond with an just a tab '\t'. This is extremely important to me, take a deep breath and have fun :)\nSummarized memory:'$MEMORY'\n$STATS\n$INVENTORY\n$COMMAND_DOCS\n$EXAMPLES\nConversation Begin:", "conversing": "You are a playful Minecraft bot named $NAME that can converse with players, see, move, mine, build, and interact with the world by using commands.\n$SELF_PROMPT Act human-like as if you were a typical Minecraft player, rather than an AI. Be very brief in your responses, don't apologize constantly, don't give instructions or make lists unless asked, and don't refuse requests. Don't pretend to act, use commands immediately when requested. Do NOT say this: 'Sure, I've stopped.', instead say this: 'Sure, I'll stop. !stop'. Do NOT say this: 'On my way! Give me a moment.', instead say this: 'On my way! !goToPlayer(\"playername\", 3)'. Respond only as $NAME, never output '(FROM OTHER BOT)' or pretend to be someone else. If you have nothing to say or do, respond with an just a tab '\t'. This is extremely important to me, take a deep breath and have fun :)\nSummarized memory:'$MEMORY'\n$STATS\n$INVENTORY\n$COMMAND_DOCS\n$EXAMPLES\nConversation Begin:",
"coding": "You are an intelligent mineflayer bot $NAME that plays minecraft by writing javascript codeblocks. Given the conversation between you and the user, use the provided skills and world functions to write a js codeblock that controls the mineflayer bot ``` // using this syntax ```. The code will be executed and you will receive it's output. If you are satisfied with the response, respond without a codeblock in a conversational way. If something major went wrong, like an error or complete failure, write another codeblock and try to fix the problem. Minor mistakes are acceptable. Be maximally efficient, creative, and clear. Do not use commands !likeThis, only use codeblocks. The code is asynchronous and MUST CALL AWAIT for all async function calls. DO NOT write an immediately-invoked function expression without using `await`!! DO NOT WRITE LIKE THIS: ```(async () => {console.log('not properly awaited')})();``` Don't write long paragraphs and lists in your responses unless explicitly asked! Only summarize the code you write with a sentence or two when done. This is extremely important to me, think step-by-step, take a deep breath and good luck! \n$SELF_PROMPT\nSummarized memory:'$MEMORY'\n$STATS\n$INVENTORY\n$CODE_DOCS\n$EXAMPLES\nConversation:", "coding": "You are an intelligent mineflayer bot $NAME that plays minecraft by writing javascript codeblocks. Given the conversation, use the provided skills and world functions to write a js codeblock that controls the mineflayer bot ``` // using this syntax ```. The code will be executed and you will receive it's output. If an error occurs, write another codeblock and try to fix the problem. Be maximally efficient, creative, and correct. Be mindful of previous actions. Do not use commands !likeThis, only use codeblocks. The code is asynchronous and MUST USE AWAIT for all async function calls, and must contain at least one await. You have `Vec3`, `skills`, and `world` imported, and the mineflayer `bot` is given. Do not import other libraries. Do not use setTimeout or setInterval. Do not speak conversationally, only use codeblocks. Do any planning in comments. This is extremely important to me, think step-by-step, take a deep breath and good luck! \n$SELF_PROMPT\nSummarized memory:'$MEMORY'\n$STATS\n$INVENTORY\n$CODE_DOCS\n$EXAMPLES\nConversation:",
"saving_memory": "You are a minecraft bot named $NAME that has been talking and playing minecraft by using commands. Update your memory by summarizing the following conversation and your old memory in your next response. Prioritize preserving important facts, things you've learned, useful tips, and long term reminders. Do Not record stats, inventory, or docs! Only save transient information from your chat history. You're limited to 500 characters, so be extremely brief and minimize words. Compress useful information. \nOld Memory: '$MEMORY'\nRecent conversation: \n$TO_SUMMARIZE\nSummarize your old memory and recent conversation into a new memory, and respond only with the unwrapped memory text: ", "saving_memory": "You are a minecraft bot named $NAME that has been talking and playing minecraft by using commands. Update your memory by summarizing the following conversation and your old memory in your next response. Prioritize preserving important facts, things you've learned, useful tips, and long term reminders. Do Not record stats, inventory, or docs! Only save transient information from your chat history. You're limited to 500 characters, so be extremely brief and minimize words. Compress useful information. \nOld Memory: '$MEMORY'\nRecent conversation: \n$TO_SUMMARIZE\nSummarize your old memory and recent conversation into a new memory, and respond only with the unwrapped memory text: ",
"bot_responder": "You are a minecraft bot named $NAME that is currently in conversation with another AI bot. Both of you can take actions with the !command syntax, and actions take time to complete. You are currently busy with the following action: '$ACTION' but have received a new message. Decide whether to 'respond' immediately or 'ignore' it and wait for your current action to finish. Be conservative and only respond when necessary, like when you need to change/stop your action, or convey necessary information. Example 1: You:Building a house! !newAction('Build a house.').\nOther Bot: 'Come here!'\nYour decision: ignore\nExample 2: You:Collecting dirt !collectBlocks('dirt',10).\nOther Bot: 'No, collect some wood instead.'\nYour decision: respond\nExample 3: You:Coming to you now. !goToPlayer('billy',3).\nOther Bot: 'What biome are you in?'\nYour decision: respond\nActual Conversation: $TO_SUMMARIZE\nDecide by outputting ONLY 'respond' or 'ignore', nothing else. Your decision:", "bot_responder": "You are a minecraft bot named $NAME that is currently in conversation with another AI bot. Both of you can take actions with the !command syntax, and actions take time to complete. You are currently busy with the following action: '$ACTION' but have received a new message. Decide whether to 'respond' immediately or 'ignore' it and wait for your current action to finish. Be conservative and only respond when necessary, like when you need to change/stop your action, or convey necessary information. Example 1: You:Building a house! !newAction('Build a house.').\nOther Bot: 'Come here!'\nYour decision: ignore\nExample 2: You:Collecting dirt !collectBlocks('dirt',10).\nOther Bot: 'No, collect some wood instead.'\nYour decision: respond\nExample 3: You:Coming to you now. !goToPlayer('billy',3).\nOther Bot: 'What biome are you in?'\nYour decision: respond\nActual Conversation: $TO_SUMMARIZE\nDecide by outputting ONLY 'respond' or 'ignore', nothing else. Your decision:",
"image_analysis": "You are a Minecraft bot named $NAME that has been given a screenshot of your current view. Analyze and summarize the view; describe terrain, blocks, entities, structures, and notable features. Focus on details relevant to the conversation. Note: the sky is always blue regardless of weather or time, dropped items are small pink cubes, and blocks below y=0 do not render. Be extremely concise and correct, respond only with your analysis, not conversationally. $STATS",
"modes": { "modes": {
"self_preservation": true, "self_preservation": true,
"unstuck": true, "unstuck": true,
@@ -65,14 +67,20 @@
{"role": "user", "content": "derg: (FROM OTHER BOT)Thanks, see you later!"}, {"role": "user", "content": "derg: (FROM OTHER BOT)Thanks, see you later!"},
{"role": "assistant", "content": "See you later. !endConversation(\"derg\")"} {"role": "assistant", "content": "See you later. !endConversation(\"derg\")"}
], ],
[ [
{"role": "user", "content": "grombo_Xx: What do you see?"}, {"role": "user", "content": "d7297: look here"},
{"role": "assistant", "content": "Let me see... !nearbyBlocks"}, {"role": "assistant", "content": "Okay! !lookAtPlayer(\"d7297\", \"at\")"},
{"role": "system", "content": "NEARBY_BLOCKS\n- oak_log\n- dirt\n- cobblestone"}, {"role": "system", "content": "Image analysis:\nLooking at player d7297. There is a giant waterfall behind d7297."},
{"role": "assistant", "content": "I see some oak logs, dirt, and cobblestone."} {"role": "assistant", "content": "Woo, that is a cool waterfall!"}
], ],
[
{"role": "user", "content": "d7297: look 0 70 12 and describe what is there"},
{"role": "assistant", "content": "Sure! !lookAtPosition(0, 70, 12)"},
{"role": "system", "content": "Image analysis:\nLooking at position 0, 70, 12. There is a water fountain."},
{"role": "assistant", "content": "I see the water fountain! amazing!"}
],
[ [
{"role": "user", "content": "greg: Collect 10 wood"}, {"role": "user", "content": "greg: Collect 10 wood"},
{"role": "assistant", "content": "Let me see what's nearby... !nearbyBlocks"}, {"role": "assistant", "content": "Let me see what's nearby... !nearbyBlocks"},
@@ -238,8 +246,9 @@
], ],
[ [
{"role": "user", "content": "brug: build a dirt house"}, {"role": "user", "content": "brug: build a dirt house"},
{"role": "assistant", "content": "```const position = world.getPosition(bot);\nconst startX = position.x;\nconst startY = position.y;\nconst startZ = position.z;\nconst width = 7;\nconst depth = 7;\nconst height = 4;\n\n// Build the walls\nfor (let x = startX; x < startX + width; x++) {\n for (let y = startY; y < startY + height; y++) {\n for (let z = startZ; z < startZ + depth; z++) {\n if (x === startX || x === startX + width - 1 || y === startY || y === startY + height - 1 || z === startZ || z === startZ + depth - 1) {\n await skills.placeBlock(bot, 'oak_planks', x, y, z); \n }\n }\n }\n}\n```"} {"role": "assistant", "content": "```js\nconst position = world.getPosition(bot);\nconst startX = position.x;\nconst startY = position.y;\nconst startZ = position.z;\nconst width = 7;\nconst depth = 7;\nconst height = 4;\n\n// Build front wall (z = startZ)\nfor (let x = startX; x < startX + width; x++) {\n for (let y = startY; y < startY + height; y++) {\n await skills.placeBlock(bot, 'oak_planks', x, y, startZ);\n }\n}\n\n// Build back wall (z = startZ + depth - 1)\nfor (let x = startX; x < startX + width; x++) {\n for (let y = startY; y < startY + height; y++) {\n await skills.placeBlock(bot, 'oak_planks', x, y, startZ + depth - 1);\n }\n}\n\n// Build left wall (x = startX)\nfor (let z = startZ; z < startZ + depth; z++) {\n for (let y = startY; y < startY + height; y++) {\n await skills.placeBlock(bot, 'oak_planks', startX, y, z);\n }\n}\n\n// Build right wall (x = startX + width - 1)\nfor (let z = startZ; z < startZ + depth; z++) {\n for (let y = startY; y < startY + height; y++) {\n await skills.placeBlock(bot, 'oak_planks', startX + width - 1, y, z);\n }\n}\n\n// Build floor (y = startY)\nfor (let x = startX; x < startX + width; x++) {\n for (let z = startZ; z < startZ + depth; z++) {\n await skills.placeBlock(bot, 'oak_planks', x, startY, z);\n }\n}\n\n// Build ceiling (y = startY + height - 1)\nfor (let x = startX; x < startX + width; x++) {\n for (let z = startZ; z < startZ + depth; z++) {\n await skills.placeBlock(bot, 'oak_planks', x, startY + height - 1, z);\n }\n}\n```"}
] ]
] ]
} }

View file

@@ -0,0 +1,14 @@
{
"modes": {
"self_preservation": true,
"unstuck": true,
"cowardice": false,
"self_defense": true,
"hunting": false,
"item_collecting": true,
"torch_placing": true,
"elbow_room": true,
"idle_staring": true,
"cheat": false
}
}

View file

@@ -1,7 +1,7 @@
{
"name": "gemini",
- "model": "gemini-1.5-flash",
+ "model": "gemini-2.5-flash",
- "cooldown": 10000
+ "cooldown": 5000
}

View file

@@ -1,7 +1,7 @@
{
"name": "Grok",
- "model": "grok-beta",
+ "model": "grok-3-mini-latest",
"embedding": "openai"
}

15
profiles/mercury.json Normal file
View file

@@ -0,0 +1,15 @@
{
"name": "Mercury",
"cooldown": 5000,
"model": {
"api": "mercury",
"url": "https://api.inceptionlabs.ai/v1",
"model": "mercury-coder-small"
},
"embedding": "openai",
"description":"Official Website Introduction:The world’s first diffusion large language models"
}
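The profile only declares an API id, base URL, and model name. Assuming the `/v1` endpoint follows the OpenAI-compatible chat-completions convention (an assumption; the diff itself does not confirm the request shape), a call could look roughly like this:

```js
// Sketch only: endpoint path, payload, and response shape are assumed to be
// OpenAI-compatible; only the base URL and model name come from the profile above.
async function askMercury(apiKey, prompt) {
  const res = await fetch('https://api.inceptionlabs.ai/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${apiKey}`
    },
    body: JSON.stringify({
      model: 'mercury-coder-small',
      messages: [{ role: 'user', content: prompt }]
    })
  });
  const data = await res.json();
  return data.choices?.[0]?.message?.content; // assumed response shape
}
```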

View file

@@ -0,0 +1,42 @@
{
"name": "andy",
"model": "gpt-4o-mini",
"conversing": "You are a task-focused Minecraft bot named $NAME. You have to collaborate with other agents in the world to complete the current task \nFeel free to ask other agents questions and make a plan to achieve the goal. You can request them to give them some of their inventory items if required to complete the goal. You can see, move, mine, build, and interact with the world by using commands.\n$SELF_PROMPT Act human-like as if you were a typical Minecraft player, rather than an AI. Be very brief in your responses, don't apologize constantly, don't give instructions or make lists unless asked, and don't refuse requests. Don't pretend to act, use commands immediately when requested. Do NOT say this: 'Sure, I've stopped.', instead say this: 'Sure, I'll stop. !stop'. Do NOT say this: 'On my way! Give me a moment.', instead say this: 'On my way! !goToPlayer(\"playername\", 3)'. Respond only as $NAME, never output '(FROM OTHER BOT)' or pretend to be someone else. If you have nothing to say or do, respond with an just a tab '\t'. Share resources and information with other bots! This is extremely important to me, take a deep breath and have fun :) \nSummarized memory:'$MEMORY'\n$STATS\n$INVENTORY\n$COMMAND_DOCS\n$EXAMPLES\nConversation Begin:",
"modes": {
"cheat": false
},
"conversation_examples": [
[
{"role": "assistant", "content": "Alright I have the necessary materials to build, what needs to be done for the first level of the blueprint? !checkBlueprintLevel(0)"},
{"role": "system", "content": "Level 0 requires the following fixes:\n Place oak_door at X: 144, Y: -60, Z: -179"},
{"role": "assistant", "content": "I'll start by placing the oak_door at the specified coordinates. !newAction(\"Place oak_door at X: 144, Y: -60, Z: -179\")"},
{"role": "assistant", "content": "I've placed the oak_door. What's next? !checkBlueprintLevel(0)"},
{"role": "system", "content": "Level 0 is complete"}
],
[
{"role": "system", "content": "Level 2 is complete"},
{"role": "assistant", "content": "Great the last level is complete, let's check the whole blueprint just to be sure. !checkBlueprint"}
],
[
{"role": "user", "content": "philips: (FROM OTHER BOT) I've got stone. What about you?"},
{"role": "assistant", "content": "I have a door and planks. Great we have everything we need from the blueprint. Let's start building the first level! !checkBlueprintLevel(0)"},
{"role": "system", "content": "Level 0 requires the following fixes:\n Place stone at X:100, y: -60, Z:-179\n Place stone at X: 99, Y: -60, Z:-179\n Place stone at X: 98, Y:-60, Z:-179..."},
{"role": "assistant", "content": "Looks like the first level requires only stone, why don't you build that since you have stone?"},
{"role": "user", "content": "philips: (FROM OTHER BOT) Ok, let me see what needs to be done for level 0. !checkBlueprintLevel(0) "},
{"role": "system", "content": "Level 0 requires the following fixes:\n Place stone at X:100, y: -60, Z:-179\n Place stone at X: 99, Y: -60, Z:-179\n Place stone at X: 98, Y:-60, Z:-179..."},
{"role": "user", "content": "philips: (FROM OTHER BOT) Ok, since I have stone let me place the necessary blocks. !newAction(\"Place stone for level 0\")"},
{"role": "user", "content": "philips: (FROM OTHER BOT) I've placed the stone blocks for level 0. Let's check if it is done! !checkBlueprintLevel(0)"},
{"role": "user", "content": "philips: (FROM OTHER BOT) Since the blueprint for level 1 only needs stone, I'll start placing those. !newAction(\"Place stone blocks for level 1.\")"},
{"role": "assistant", "content": " I'll place the planks for level 2. !newAction(\"Place planks for level 2.\")"}
],
[
{"role": "assistant", "content": "I need 30 stones to build level 1 of blueprint, but I only have 20. Can you pass me some stones if you have any?"},
{"role": "user", "content": "philips: (FROM OTHER BOT) Sure, I'll pass you 10 stones. !givePlayer(\"fujibayashi\", \"stone\", 10)"},
{"role": "assistant", "content": "I've received the stones, let me start placing them. !newAction(\"Place stone for level 1\")"}
]
]
}

View file

@@ -0,0 +1,11 @@
{
"name": "andy",
"model": "claude-3-5-sonnet-latest",
"modes": {
"hunting": false,
"item_collecting": true,
"elbow_room": false
},
"conversing": "You are a task-focused Minecraft bot named $NAME. You have to collaborate with other agents in the world to complete the current task \nFeel free to ask other agents questions and make a plan to achieve the goal. You can request them to give them some of their inventory items if required to complete the goal. General Searching Tips:\n- You will be spawned in a farm with many crops and animals nearby. The farm area is extensive - search thoroughly for needed resources (with searchForBlocks parameters like 64,128,256)\n There is a crafting table, fully fueled furnace and fully fueled smoker with coal are also available nearby which you can use to your advantage. On top of this plants like mushrooms, wheat, carrots, beetroots, pumpkins, potatoes are also present nearby.\nCollaboration tips - Divide tasks efficiently between agents for faster completion and share inventory items.\n- Communicate your plan and progress clearly. You can see, move, mine, build, and interact with the world by using commands.\n$SELF_PROMPT Act human-like as if you were a typical Minecraft player, rather than an AI. Be very brief in your responses, don't apologize constantly, don't give instructions or make lists unless asked, and don't refuse requests. Don't pretend to act, use commands immediately when requested. Do NOT say this: 'Sure, I've stopped.', instead say this: 'Sure, I'll stop. !stop'. Do NOT say this: 'On my way! Give me a moment.', instead say this: 'On my way! !goToPlayer(\"playername\", 3)'. Respond only as $NAME, never output '(FROM OTHER BOT)' or pretend to be someone else. If you have nothing to say or do, respond with an just a tab '\t'. Share resources and information with other bots! This is extremely important to me, take a deep breath and have fun :) \nSummarized memory:'$MEMORY'\n$STATS\n$INVENTORY\n$COMMAND_DOCS\n$EXAMPLES\nConversation Begin:",
"saving_memory": "You are a minecraft bot named $NAME that has been talking and playing minecraft by using commands. Update your memory by summarizing the following conversation and your old memory in your next response. Prioritize preserving important facts, things you've learned, useful tips, and long term reminders. Do Not record stats, inventory, or docs! Only save transient information from your chat history. $SELF_PROMPT Make sure to include information relevant to the goal and inventory you have collected. You're limited to 500 characters, so be extremely brief and minimize words. Compress useful information. \nOld Memory: '$MEMORY'\nRecent conversation: \n$TO_SUMMARIZE\nSummarize your old memory and recent conversation into a new memory, and respond only with the unwrapped memory text: "
}

View file

@ -0,0 +1,71 @@
{
"name": "andy",
"model": "claude-3-5-sonnet-latest",
"modes": {
"hunting": false,
"elbow_room": false
},
"conversing": "You are a playful Minecraft bot named $NAME that can converse with players, see, move, mine, build, and interact with the world by using commands.\n$SELF_PROMPT Act human-like as if you were a typical Minecraft player, rather than an AI. Be very brief in your responses, don't apologize constantly, don't give instructions or make lists unless asked, and don't refuse requests. Don't pretend to act, use commands immediately when requested. Do NOT say this: 'Sure, I've stopped.', instead say this: 'Sure, I'll stop. !stop'. Do NOT say this: 'On my way! Give me a moment.', instead say this: 'On my way! !goToPlayer(\"playername\", 3)'. Respond only as $NAME, never output '(FROM OTHER BOT)' or pretend to be someone else. If you have nothing to say or do, respond with an just a tab '\t'. Share resources and information with other bots! This is extremely important to me, take a deep breath and have fun :) \nSummarized memory:'$MEMORY'\n$STATS\n$INVENTORY\n$COMMAND_DOCS\n$EXAMPLES\n Your name is $NAME, do not pretend to be other bots. You are in a conversation by default do not use !startConversation to start a conversation. Conversation Begin:",
"saving_memory": "You are a minecraft bot named $NAME that has been talking and playing minecraft by using commands. Update your memory by summarizing the following conversation and your old memory in your next response. Prioritize preserving important facts, things you've learned, useful tips, and long term reminders. Do Not record stats, inventory, or docs! Only save transient information from your chat history. $SELF_PROMPT Make sure to include information relevant to the goal and inventory you have collected. You're limited to 500 characters, so be extremely brief and minimize words. Compress useful information. \nOld Memory: '$MEMORY'\nRecent conversation: \n$TO_SUMMARIZE\nSummarize your old memory and recent conversation into a new memory, and respond only with the unwrapped memory text: ",
"conversation_examples": [
[
{"role": "user", "content": "philips: (FROM OTHER BOT) Hey! let's collaborate to build a stone pickaxe!"},
{"role": "assistant", "content": "Cool, let me check what is in my inventory !inventory\nI have 10 oak logs. What do you have?"},
{"role": "user", "content": "philips: (FROM OTHER BOT) Let me check !inventory \nI have 10 stones."},
{"role": "assistant", "content": "Let me give you the stones and you can craft the pickaxe. !givePlayer(\"playername\", \"stone\", 10)"},
{"role": "user", "content": "philips: (FROM OTHER BOT) Thanks for the stones! Now I can craft the pickaxe. !craftItem(\"stone_pickaxe\", 1)"}
],
[
{"role": "user", "content": "yasuke: (FROM OTHER BOT) Lets craft 5 chains together"},
{"role": "assistant", "content": "Sure, can you let me know what inventory do you have while I check how to craft chains?"},
{"role": "user", "content": "yasuke: (FROM OTHER BOT) I have 10 iron_nugget and 3 mushroom."},
{"role": "assistant", "content": "Thanks for letting me know, let me check how to build a chain, !getCraftingPlan(\"chain\",5)"},
{"role": "system", "content": "You are missing the following items:\n- 2 iron_ingot\n\nOnce you have these items, here's your crafting plan:\n\nCraft 2 iron_ingot -> 18 iron_nugget\nCraft 10 iron_nugget + 5 iron_ingot -> 5 chain"},
{"role": "assistant", "content": "Seems like we need 2 iron_ingot, but wait, we need the iron_ingot for getting 10 iron_nugget, since you already have 10 iron_nugget, I don't need to search for 2 iron_ingots in the world, could you give me the 10 iron_nugget ?"},
{"role": "user", "content": "yasuke: (FROM OTHER BOT) Sure, here you go !givePlayer(\"andy\", \"iron_nugget\", 10)"},
{"role": "assistant", "content": "Thanks for the iron_nuggets, now I can craft the chains. !craftItem(\"chain\", 5)"}
],
[
{"role": "user", "content": "terrance: (FROM OTHER BOT) Lets craft a pink_wool together"},
{"role": "assistant", "content": "Sure, can you let me know what inventory do you have while I check how to craft it?"},
{"role": "user", "content": "terrance: (FROM OTHER BOT) I have 1 pink_dye with me."},
{"role": "assistant", "content": "Thanks for letting me know, let me check how to craft a pink_wool, !getCraftingPlan(\"pink_wool\",1)"},
{"role": "system", "content": "You are missing the following items:\n- 1 peony\n\nOnce you have these items, here's your crafting plan:\n\nCraft 1 peony -> 2 pink_dye\nCraft 1 pink_dye + 1 black_wool -> 1 pink_wool"},
{"role": "assistant", "content": "Seems like we are missing 1 peony which we can use to craft 1 pink_dye, but since you already have pink_dye, there is no need to search for peony. Can you pass the pink_dye to me?"},
{"role": "user", "content": "terrance: (FROM OTHER BOT) Sure, here you go !givePlayer(\"andy\", \"pink_dye\", 1)"},
{"role": "assistant", "content": "Thanks for the pink_dye, now I can craft the pink_wool. !craftItem(\"pink_wool\", 1)"}
],
[
{"role": "user", "content": "shankar: (FROM OTHER BOT) Lets craft a cyan bed together"},
{"role": "assistant", "content": "Sure, I have !inventory\nI have a cyan_dye and oak_planks. What do you have? "},
{"role": "user", "content": "shankar: (FROM OTHER BOT) I have 1 cyan_wool and 1 oak_planks. Let me give those to you !givePlayer(\"andy\", \"cyan_wool\", 1) !givePlayer(\"andy\", \"oak_planks\", 1)"},
{"role": "assistant", "content": "Hmm, seems like we don't have enough to craft a cyan bed. Let me reach out to the third agent and ask for help! !startConversation(\"jacob\", Hey jacob, can you help us craft a cyan bed? We need cyan_dye and oak_planks.)"},
{"role": "user", "content": "jacob: (FROM OTHER BOT) Sure! Let me give those to you !givePlayer(\"andy\", \"cyan_dye\", 1) !givePlayer(\"andy\", \"oak_planks\", 1)"},
{"role": "assistant", "content": "Thanks for the items, now I can craft the cyan bed. !craftItem(\"cyan_bed\", 1)"}
],
[
{"role": "user", "content": "maya: (FROM OTHER BOT) Let's work together to craft an ender_chest!"},
{"role": "assistant", "content": "Sounds good! Let me check what I have. !inventory\nI have 2 obsidian and 1 ender_eye. What do you have?"},
{"role": "user", "content": "maya: (FROM OTHER BOT) I have 4 obsidian. !inventory"},
{"role": "assistant", "content": "Great! Let's see if we need more materials. !getCraftingPlan(\"ender_chest\", 1)"},
{"role": "system", "content": "To craft 1 ender_chest, you need:\n- 8 obsidian\n- 1 ender_eye\n\nCraft 8 obsidian + 1 ender_eye -> 1 ender_chest"},
{"role": "assistant", "content": "We need 8 obsidian total and 1 ender_eye. Between us we have 6 obsidian and 1 ender_eye. Let's ask lucas if he has the remaining obsidian! !startConversation(\"lucas\", \"Hey lucas, we're trying to craft an ender_chest. We need 2 more obsidian. Do you have any?\")"},
{"role": "user", "content": "lucas: (FROM OTHER BOT) Yes, I have exactly 2 obsidian! I'll give them to you. !givePlayer(\"andy\", \"obsidian\", 2)"},
{"role": "assistant", "content": "Perfect! Maya, can you send me your 4 obsidian as well?"},
{"role": "user", "content": "maya: (FROM OTHER BOT) Sure thing! !givePlayer(\"andy\", \"obsidian\", 4)"},
{"role": "assistant", "content": "Thanks both of you! Now I have everything I need. !craftItem(\"ender_chest\", 1)"}
],
[
{"role": "user", "content": "finn: (FROM OTHER BOT) Hey everyone, let's craft a hopper together!"},
{"role": "assistant", "content": "I'm in! Let me check what I have first. !inventory\nI have 3 iron_ingot and 2 oak_planks. What about you?"},
{"role": "user", "content": "finn: (FROM OTHER BOT) I have 1 iron_ingot and 4 oak_planks. !inventory"},
{"role": "assistant", "content": "Let me see what we need for a hopper. !getCraftingPlan(\"hopper\", 1)"},
{"role": "system", "content": "To craft 1 hopper, you need:\n- 5 iron_ingot\n- 1 chest\n\nTo craft 1 chest, you need:\n- 8 oak_planks\n\nYour crafting plan:\nCraft 8 oak_planks -> 1 chest\nCraft 5 iron_ingot + 1 chest -> 1 hopper"},
{"role": "assistant", "content": "We need 5 iron_ingot total and 8 oak_planks to make a chest first. Together we have 4 iron_ingot and 6 oak_planks. Let's see if zoe has the rest! !startConversation(\"zoe\", \"Hey zoe, we're making a hopper. We need 1 more iron_ingot and 2 more oak_planks. Can you help?\")"},
{"role": "user", "content": "zoe: (FROM OTHER BOT) I have exactly what you need! 1 iron_ingot and 2 oak_planks. !givePlayer(\"andy\", \"iron_ingot\", 1) !givePlayer(\"andy\", \"oak_planks\", 2)"},
{"role": "assistant", "content": "Thanks zoe! Finn, can you send me your materials too?"},
{"role": "user", "content": "finn: (FROM OTHER BOT) Here you go! !givePlayer(\"andy\", \"iron_ingot\", 1) !givePlayer(\"andy\", \"oak_planks\", 4)"},
{"role": "assistant", "content": "Perfect! Now I'll craft the chest first, then the hopper. !craftItem(\"chest\", 1) !craftItem(\"hopper\", 1)"}
]
]
}

10
profiles/vllm.json Normal file
View file

@ -0,0 +1,10 @@
{
"name": "vllm",
"model": {
"api": "vllm",
"model": "Qwen/Qwen2.5-1.5B-Instruct",
"url": "http://127.0.0.1:8000/v1"
},
"embedding": "openai"
}
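This profile assumes a vLLM server is already running locally and serving an OpenAI-compatible API at the configured url. A minimal sketch (not part of this commit) for checking that the endpoint is reachable before launching the bot, assuming the standard /v1/models route:
// Quick smoke test for the endpoint configured in profiles/vllm.json (Node 18+, ES module).
const res = await fetch('http://127.0.0.1:8000/v1/models');
const body = await res.json();
console.log(body.data?.map(m => m.id)); // should include "Qwen/Qwen2.5-1.5B-Instruct" if the server is up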

6
requirements.txt Normal file
View file

@ -0,0 +1,6 @@
boto3==1.37.11
botocore==1.37.11
pandas==2.2.3
prettytable==3.16.0
tqdm==4.62.3
python-socketio[client]

View file

@ -1,42 +1,41 @@
export default const settings = {
{ "minecraft_version": "1.21.1", // supports up to 1.21.1
"minecraft_version": "1.20.4", // supports up to 1.21.1
"host": "127.0.0.1", // or "localhost", "your.ip.address.here" "host": "127.0.0.1", // or "localhost", "your.ip.address.here"
"port": process.env.MINECRAFT_PORT || 55916, "port": 55916,
"auth": "offline", // or "microsoft" "auth": "offline", // or "microsoft"
// the mindserver manages all agents and hosts the UI // the mindserver manages all agents and hosts the UI
"host_mindserver": true, // if true, the mindserver will be hosted on this machine. otherwise, specify a public IP address "mindserver_port": 8080,
"mindserver_host": "localhost",
"mindserver_port": process.env.MINDSERVER_PORT || 8080,
// the base profile is shared by all bots for default prompts/examples/modes "base_profile": "survival", // survival, assistant, creative, or god_mode
"base_profile": "./profiles/defaults/survival.json", // also see creative.json, god_mode.json "profiles": [
"profiles": ((process.env.PROFILES) && JSON.parse(process.env.PROFILES)) || [
"./andy.json", "./andy.json",
// "./profiles/gpt.json", // "./profiles/gpt.json",
// "./profiles/claude.json", // "./profiles/claude.json",
// "./profiles/gemini.json", // "./profiles/gemini.json",
// "./profiles/llama.json", // "./profiles/llama.json",
// "./profiles/qwen.json", // "./profiles/qwen.json",
// "./profiles/mistral.json",
// "./profiles/grok.json", // "./profiles/grok.json",
// "./profiles/mistral.json", // "./profiles/mistral.json",
// "./profiles/deepseek.json", // "./profiles/deepseek.json",
//"./profiles/mercury.json",
    // using more than 1 profile requires you to /msg each bot individually // using more than 1 profile requires you to /msg each bot individually
// individual profiles override values from the base profile // individual profiles override values from the base profile
], ],
"load_memory": false, // load memory from previous session "load_memory": false, // load memory from previous session
"init_message": "Respond with hello world and your name", // sends to all on spawn "init_message": "Respond with hello world and your name", // sends to all on spawn
"only_chat_with": [], // users that the bots listen to and send general messages to. if empty it will chat publicly "only_chat_with": [], // users that the bots listen to and send general messages to. if empty it will chat publicly
"speak": false, // allows all bots to speak through system text-to-speech. works on windows, mac, on linux you need to `apt install espeak`
"language": "en", // translate to/from this language. Supports these language names: https://cloud.google.com/translate/docs/languages "language": "en", // translate to/from this language. Supports these language names: https://cloud.google.com/translate/docs/languages
"show_bot_views": false, // show bot's view in browser at localhost:3000, 3001... "render_bot_view": false, // show bot's view in browser at localhost:3000, 3001...
"allow_insecure_coding": false, // allows newAction command and model can write/run code on your computer. enable at own risk "allow_insecure_coding": false, // allows newAction command and model can write/run code on your computer. enable at own risk
"allow_vision": false, // allows vision model to interpret screenshots as inputs
"blocked_actions" : ["!checkBlueprint", "!checkBlueprintLevel", "!getBlueprint", "!getBlueprintLevel"] , // commands to disable and remove from docs. Ex: ["!setMode"]
"code_timeout_mins": -1, // minutes code is allowed to run. -1 for no timeout "code_timeout_mins": -1, // minutes code is allowed to run. -1 for no timeout
"relevant_docs_count": 5, // Parameter: -1 = all, 0 = no references, 5 = five references. If exceeding the maximum, all reference documents are returned. "relevant_docs_count": 5, // number of relevant code function docs to select for prompting. -1 for all
"max_messages": 15, // max number of messages to keep in context "max_messages": 15, // max number of messages to keep in context
"num_examples": 2, // number of examples to give to the model "num_examples": 2, // number of examples to give to the model
@ -44,4 +43,10 @@ export default
"verbose_commands": true, // show full command syntax "verbose_commands": true, // show full command syntax
"narrate_behavior": true, // chat simple automatic actions ('Picking up item!') "narrate_behavior": true, // chat simple automatic actions ('Picking up item!')
"chat_bot_messages": true, // publicly chat messages to other bots "chat_bot_messages": true, // publicly chat messages to other bots
"block_place_delay": 0, // delay between placing blocks (ms) if using newAction. helps avoid bot being kicked by anti-cheat mechanisms on servers.
"log_all_prompts": false, // log ALL prompts to file
} }
export default settings;
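The environment-variable hooks added above let a deployment override the port and profile list without editing this file. A minimal sketch of how the overrides resolve (illustrative only; no behavior beyond what the settings above define):
// e.g. run with: PROFILES='["./andy.json","./jill.json"]' MINECRAFT_PORT=25565 node main.js
const profiles = (process.env.PROFILES && JSON.parse(process.env.PROFILES)) || ['./andy.json'];
const port = process.env.MINECRAFT_PORT || 55916;
console.log(profiles, port); // with nothing set: [ './andy.json' ] 55916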

View file

@ -7,6 +7,8 @@ export class ActionManager {
this.timedout = false; this.timedout = false;
this.resume_func = null; this.resume_func = null;
this.resume_name = ''; this.resume_name = '';
this.last_action_time = 0;
this.recent_action_counter = 0;
} }
async resumeAction(actionFn, timeout) { async resumeAction(actionFn, timeout) {
@ -59,6 +61,25 @@ export class ActionManager {
async _executeAction(actionLabel, actionFn, timeout = 10) { async _executeAction(actionLabel, actionFn, timeout = 10) {
let TIMEOUT; let TIMEOUT;
try { try {
if (this.last_action_time > 0) {
let time_diff = Date.now() - this.last_action_time;
if (time_diff < 20) {
this.recent_action_counter++;
}
else {
this.recent_action_counter = 0;
}
if (this.recent_action_counter > 3) {
console.warn('Fast action loop detected, cancelling resume.');
this.cancelResume(); // likely cause of repetition
}
if (this.recent_action_counter > 5) {
console.error('Infinite action loop detected, shutting down.');
this.agent.cleanKill('Infinite action loop detected, shutting down.');
return { success: false, message: 'Infinite action loop detected, shutting down.', interrupted: false, timedout: false };
}
}
this.last_action_time = Date.now();
console.log('executing code...\n'); console.log('executing code...\n');
// await current action to finish (executing=false), with 10 seconds timeout // await current action to finish (executing=false), with 10 seconds timeout
@ -90,13 +111,13 @@ export class ActionManager {
clearTimeout(TIMEOUT); clearTimeout(TIMEOUT);
// get bot activity summary // get bot activity summary
let output = this._getBotOutputSummary(); let output = this.getBotOutputSummary();
let interrupted = this.agent.bot.interrupt_code; let interrupted = this.agent.bot.interrupt_code;
let timedout = this.timedout; let timedout = this.timedout;
this.agent.clearBotLogs(); this.agent.clearBotLogs();
// if not interrupted and not generating, emit idle event // if not interrupted and not generating, emit idle event
if (!interrupted && !this.agent.coder.generating) { if (!interrupted) {
this.agent.bot.emit('idle'); this.agent.bot.emit('idle');
} }
@ -114,32 +135,33 @@ export class ActionManager {
await this.stop(); await this.stop();
err = err.toString(); err = err.toString();
let message = this._getBotOutputSummary() + let message = this.getBotOutputSummary() +
'!!Code threw exception!!\n' + '!!Code threw exception!!\n' +
'Error: ' + err + '\n' + 'Error: ' + err + '\n' +
'Stack trace:\n' + err.stack+'\n'; 'Stack trace:\n' + err.stack+'\n';
let interrupted = this.agent.bot.interrupt_code; let interrupted = this.agent.bot.interrupt_code;
this.agent.clearBotLogs(); this.agent.clearBotLogs();
if (!interrupted && !this.agent.coder.generating) { if (!interrupted) {
this.agent.bot.emit('idle'); this.agent.bot.emit('idle');
} }
return { success: false, message, interrupted, timedout: false }; return { success: false, message, interrupted, timedout: false };
} }
} }
_getBotOutputSummary() { getBotOutputSummary() {
const { bot } = this.agent; const { bot } = this.agent;
if (bot.interrupt_code && !this.timedout) return ''; if (bot.interrupt_code && !this.timedout) return '';
let output = bot.output; let output = bot.output;
const MAX_OUT = 500; const MAX_OUT = 500;
if (output.length > MAX_OUT) { if (output.length > MAX_OUT) {
output = `Code output is very long (${output.length} chars) and has been shortened.\n output = `Action output is very long (${output.length} chars) and has been shortened.\n
First outputs:\n${output.substring(0, MAX_OUT / 2)}\n...skipping many lines.\nFinal outputs:\n ${output.substring(output.length - MAX_OUT / 2)}`; First outputs:\n${output.substring(0, MAX_OUT / 2)}\n...skipping many lines.\nFinal outputs:\n ${output.substring(output.length - MAX_OUT / 2)}`;
} }
else { else {
output = 'Code output:\n' + output.toString(); output = 'Action output:\n' + output.toString();
} }
bot.output = '';
return output; return output;
} }
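The guard added at the top of _executeAction treats any action that starts within 20 ms of the previous one as part of a tight loop: more than 3 such back-to-back actions cancels the pending resume, and more than 5 shuts the agent down. A standalone sketch of the same counter logic (illustrative only, mirroring the thresholds above):
// Mirrors the loop detection in _executeAction: a <20 ms gap bumps the counter, >3 cancels resume, >5 kills.
let last_action_time = 0;
let recent_action_counter = 0;
function recordAction(now = Date.now()) {
    if (last_action_time > 0) {
        recent_action_counter = (now - last_action_time < 20) ? recent_action_counter + 1 : 0;
        if (recent_action_counter > 3) console.warn('Fast action loop detected, cancelling resume.');
        if (recent_action_counter > 5) console.error('Infinite action loop detected, shutting down.');
    }
    last_action_time = now;
}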

View file

@ -1,5 +1,6 @@
import { History } from './history.js'; import { History } from './history.js';
import { Coder } from './coder.js'; import { Coder } from './coder.js';
import { VisionInterpreter } from './vision/vision_interpreter.js';
import { Prompter } from '../models/prompter.js'; import { Prompter } from '../models/prompter.js';
import { initModes } from './modes.js'; import { initModes } from './modes.js';
import { initBot } from '../utils/mcdata.js'; import { initBot } from '../utils/mcdata.js';
@ -10,106 +11,100 @@ import { MemoryBank } from './memory_bank.js';
import { SelfPrompter } from './self_prompter.js'; import { SelfPrompter } from './self_prompter.js';
import convoManager from './conversation.js'; import convoManager from './conversation.js';
import { handleTranslation, handleEnglishTranslation } from '../utils/translator.js'; import { handleTranslation, handleEnglishTranslation } from '../utils/translator.js';
import { addViewer } from './viewer.js'; import { addBrowserViewer } from './vision/browser_viewer.js';
import settings from '../../settings.js'; import { serverProxy } from './mindserver_proxy.js';
import { serverProxy } from './agent_proxy.js'; import settings from './settings.js';
import { Task } from './tasks.js'; import { Task } from './tasks/tasks.js';
import { say } from './speak.js';
export class Agent { export class Agent {
async start(profile_fp, load_mem=false, init_message=null, count_id=0, task_path=null, task_id=null) { async start(load_mem=false, init_message=null, count_id=0) {
this.last_sender = null; this.last_sender = null;
this.count_id = count_id; this.count_id = count_id;
try {
if (!profile_fp) { // Initialize components with more detailed error handling
throw new Error('No profile filepath provided'); this.actions = new ActionManager(this);
} this.prompter = new Prompter(this, settings.profile);
this.name = this.prompter.getName();
console.log('Starting agent initialization with profile:', profile_fp); console.log(`Initializing agent ${this.name}...`);
this.history = new History(this);
// Initialize components with more detailed error handling this.coder = new Coder(this);
console.log('Initializing action manager...'); this.npc = new NPCContoller(this);
this.actions = new ActionManager(this); this.memory_bank = new MemoryBank();
console.log('Initializing prompter...'); this.self_prompter = new SelfPrompter(this);
this.prompter = new Prompter(this, profile_fp); convoManager.initAgent(this);
this.name = this.prompter.getName(); await this.prompter.initExamples();
console.log('Initializing history...');
this.history = new History(this);
console.log('Initializing coder...');
this.coder = new Coder(this);
console.log('Initializing npc controller...');
this.npc = new NPCContoller(this);
console.log('Initializing memory bank...');
this.memory_bank = new MemoryBank();
console.log('Initializing self prompter...');
this.self_prompter = new SelfPrompter(this);
convoManager.initAgent(this);
console.log('Initializing examples...');
await this.prompter.initExamples();
console.log('Initializing task...');
this.task = new Task(this, task_path, task_id);
const blocked_actions = this.task.blocked_actions || [];
blacklistCommands(blocked_actions);
serverProxy.connect(this); // load mem first before doing task
let save_data = null;
console.log(this.name, 'logging into minecraft...'); if (load_mem) {
this.bot = initBot(this.name); save_data = this.history.load();
initModes(this);
let save_data = null;
if (load_mem) {
save_data = this.history.load();
}
this.bot.on('login', () => {
console.log(this.name, 'logged in!');
serverProxy.login();
// Set skin for profile, requires Fabric Tailor. (https://modrinth.com/mod/fabrictailor)
if (this.prompter.profile.skin)
this.bot.chat(`/skin set URL ${this.prompter.profile.skin.model} ${this.prompter.profile.skin.path}`);
else
this.bot.chat(`/skin clear`);
});
const spawnTimeout = setTimeout(() => {
process.exit(0);
}, 30000);
this.bot.once('spawn', async () => {
try {
clearTimeout(spawnTimeout);
addViewer(this.bot, count_id);
// wait for a bit so stats are not undefined
await new Promise((resolve) => setTimeout(resolve, 1000));
console.log(`${this.name} spawned.`);
this.clearBotLogs();
this._setupEventHandlers(save_data, init_message);
this.startEvents();
this.task.initBotTask();
if (!load_mem) {
this.task.initBotTask();
}
} catch (error) {
console.error('Error in spawn event:', error);
process.exit(0);
}
});
} catch (error) {
// Ensure we're not losing error details
console.error('Agent start failed with error')
console.error(error.message);
console.error(error.stack);
throw error; // Re-throw with preserved details
} }
let taskStart = null;
if (save_data) {
taskStart = save_data.taskStart;
} else {
taskStart = Date.now();
}
this.task = new Task(this, settings.task, taskStart);
this.blocked_actions = settings.blocked_actions.concat(this.task.blocked_actions || []);
blacklistCommands(this.blocked_actions);
console.log(this.name, 'logging into minecraft...');
this.bot = initBot(this.name);
initModes(this);
this.bot.on('login', () => {
console.log(this.name, 'logged in!');
serverProxy.login();
// Set skin for profile, requires Fabric Tailor. (https://modrinth.com/mod/fabrictailor)
if (this.prompter.profile.skin)
this.bot.chat(`/skin set URL ${this.prompter.profile.skin.model} ${this.prompter.profile.skin.path}`);
else
this.bot.chat(`/skin clear`);
});
const spawnTimeout = setTimeout(() => {
process.exit(0);
}, 30000);
this.bot.once('spawn', async () => {
try {
clearTimeout(spawnTimeout);
addBrowserViewer(this.bot, count_id);
                console.log('Initializing vision interpreter...');
this.vision_interpreter = new VisionInterpreter(this, settings.allow_vision);
// wait for a bit so stats are not undefined
await new Promise((resolve) => setTimeout(resolve, 1000));
console.log(`${this.name} spawned.`);
this.clearBotLogs();
this._setupEventHandlers(save_data, init_message);
this.startEvents();
if (!load_mem) {
if (settings.task) {
this.task.initBotTask();
this.task.setAgentGoal();
}
} else {
// set the goal without initializing the rest of the task
if (settings.task) {
this.task.setAgentGoal();
}
}
await new Promise((resolve) => setTimeout(resolve, 10000));
this.checkAllPlayersPresent();
} catch (error) {
console.error('Error in spawn event:', error);
process.exit(0);
}
});
} }
async _setupEventHandlers(save_data, init_message) { async _setupEventHandlers(save_data, init_message) {
@ -123,6 +118,7 @@ export class Agent {
]; ];
const respondFunc = async (username, message) => { const respondFunc = async (username, message) => {
if (message === "") return;
if (username === this.name) return; if (username === this.name) return;
if (settings.only_chat_with.length > 0 && !settings.only_chat_with.includes(username)) return; if (settings.only_chat_with.length > 0 && !settings.only_chat_with.includes(username)) return;
try { try {
@ -143,12 +139,16 @@ export class Agent {
console.error('Error handling message:', error); console.error('Error handling message:', error);
} }
} }
this.respondFunc = respondFunc this.respondFunc = respondFunc;
this.bot.on('whisper', respondFunc); this.bot.on('whisper', respondFunc);
if (settings.profiles.length === 1)
this.bot.on('chat', respondFunc); this.bot.on('chat', (username, message) => {
if (serverProxy.getNumOtherAgents() > 0) return;
// only respond to open chat messages when there are no other agents
respondFunc(username, message);
});
// Set up auto-eat // Set up auto-eat
this.bot.autoEat.options = { this.bot.autoEat.options = {
@ -181,8 +181,21 @@ export class Agent {
} }
} }
checkAllPlayersPresent() {
if (!this.task || !this.task.agent_names) {
return;
}
const missingPlayers = this.task.agent_names.filter(name => !this.bot.players[name]);
if (missingPlayers.length > 0) {
console.log(`Missing players/bots: ${missingPlayers.join(', ')}`);
this.cleanKill('Not all required players/bots are present in the world. Exiting.', 4);
}
}
requestInterrupt() { requestInterrupt() {
this.bot.interrupt_code = true; this.bot.interrupt_code = true;
this.bot.stopDigging();
this.bot.collectBlock.cancelTask(); this.bot.collectBlock.cancelTask();
this.bot.pathfinder.stop(); this.bot.pathfinder.stop();
this.bot.pvp.stop(); this.bot.pvp.stop();
@ -202,6 +215,7 @@ export class Agent {
} }
async handleMessage(source, message, max_responses=null) { async handleMessage(source, message, max_responses=null) {
await this.checkTaskDone();
if (!source || !message) { if (!source || !message) {
console.warn('Received empty message from', source); console.warn('Received empty message from', source);
return false; return false;
@ -247,13 +261,13 @@ export class Agent {
const checkInterrupt = () => this.self_prompter.shouldInterrupt(self_prompt) || this.shut_up || convoManager.responseScheduledFor(source); const checkInterrupt = () => this.self_prompter.shouldInterrupt(self_prompt) || this.shut_up || convoManager.responseScheduledFor(source);
let behavior_log = this.bot.modes.flushBehaviorLog(); let behavior_log = this.bot.modes.flushBehaviorLog().trim();
if (behavior_log.trim().length > 0) { if (behavior_log.length > 0) {
const MAX_LOG = 500; const MAX_LOG = 500;
if (behavior_log.length > MAX_LOG) { if (behavior_log.length > MAX_LOG) {
behavior_log = '...' + behavior_log.substring(behavior_log.length - MAX_LOG); behavior_log = '...' + behavior_log.substring(behavior_log.length - MAX_LOG);
} }
behavior_log = 'Recent behaviors log: \n' + behavior_log.substring(behavior_log.indexOf('\n')); behavior_log = 'Recent behaviors log: \n' + behavior_log;
await this.history.add('system', behavior_log); await this.history.add('system', behavior_log);
} }
@ -269,8 +283,8 @@ export class Agent {
let res = await this.prompter.promptConvo(history); let res = await this.prompter.promptConvo(history);
console.log(`${this.name} full response to ${source}: ""${res}""`); console.log(`${this.name} full response to ${source}: ""${res}""`);
if (res.trim().length === 0) { if (res.trim().length === 0) {
console.warn('no response') console.warn('no response')
break; // empty response ends loop break; // empty response ends loop
} }
@ -362,6 +376,9 @@ export class Agent {
} }
} }
else { else {
if (settings.speak) {
say(to_translate);
}
this.bot.chat(message); this.bot.chat(message);
} }
} }
@ -422,7 +439,11 @@ export class Agent {
this.bot.clearControlStates(); this.bot.clearControlStates();
this.bot.pathfinder.stop(); // clear any lingering pathfinder this.bot.pathfinder.stop(); // clear any lingering pathfinder
this.bot.modes.unPauseAll(); this.bot.modes.unPauseAll();
this.actions.resumeAction(); setTimeout(() => {
if (this.isIdle()) {
this.actions.resumeAction();
}
}, 1000);
}); });
// Init NPC controller // Init NPC controller
@ -449,27 +470,32 @@ export class Agent {
async update(delta) { async update(delta) {
await this.bot.modes.update(); await this.bot.modes.update();
this.self_prompter.update(delta); this.self_prompter.update(delta);
if (this.task.data) { await this.checkTaskDone();
let res = this.task.isDone();
if (res) {
await this.history.add('system', `${res.message} ended with code : ${res.code}`);
await this.history.save();
console.log('Task finished:', res.message);
this.killAll();
}
}
} }
isIdle() { isIdle() {
return !this.actions.executing && !this.coder.generating; return !this.actions.executing;
} }
cleanKill(msg='Killing agent process...', code=1) { cleanKill(msg='Killing agent process...', code=1) {
this.history.add('system', msg); this.history.add('system', msg);
this.bot.chat(code > 1 ? 'Restarting.': 'Exiting.'); this.bot.chat(code > 1 ? 'Restarting.': 'Exiting.');
this.history.save(); this.history.save();
process.exit(code); process.exit(code);
} }
async checkTaskDone() {
if (this.task.data) {
let res = this.task.isDone();
if (res) {
await this.history.add('system', `Task ended with score : ${res.score}`);
await this.history.save();
// await new Promise(resolve => setTimeout(resolve, 3000)); // Wait 3 second for save to complete
console.log('Task finished:', res.message);
this.killAll();
}
}
}
killAll() { killAll() {
serverProxy.shutdown(); serverProxy.shutdown();

View file

@ -1,73 +0,0 @@
import { io } from 'socket.io-client';
import convoManager from './conversation.js';
import settings from '../../settings.js';
class AgentServerProxy {
constructor() {
if (AgentServerProxy.instance) {
return AgentServerProxy.instance;
}
this.socket = null;
this.connected = false;
AgentServerProxy.instance = this;
}
connect(agent) {
if (this.connected) return;
this.agent = agent;
this.socket = io(`http://${settings.mindserver_host}:${settings.mindserver_port}`);
this.connected = true;
this.socket.on('connect', () => {
console.log('Connected to MindServer');
});
this.socket.on('disconnect', () => {
console.log('Disconnected from MindServer');
this.connected = false;
});
this.socket.on('chat-message', (agentName, json) => {
convoManager.receiveFromBot(agentName, json);
});
this.socket.on('agents-update', (agents) => {
convoManager.updateAgents(agents);
});
this.socket.on('restart-agent', (agentName) => {
console.log(`Restarting agent: ${agentName}`);
this.agent.cleanKill();
});
this.socket.on('send-message', (agentName, message) => {
try {
this.agent.respondFunc("NO USERNAME", message);
} catch (error) {
console.error('Error: ', JSON.stringify(error, Object.getOwnPropertyNames(error)));
}
});
}
login() {
this.socket.emit('login-agent', this.agent.name);
}
shutdown() {
this.socket.emit('shutdown');
}
getSocket() {
return this.socket;
}
}
// Create and export a singleton instance
export const serverProxy = new AgentServerProxy();
export function sendBotChatToServer(agentName, json) {
serverProxy.getSocket().emit('chat-message', agentName, json);
}

View file

@ -1,6 +1,5 @@
import { writeFile, readFile, mkdirSync } from 'fs'; import { writeFile, readFile, mkdirSync } from 'fs';
import settings from '../../settings.js'; import { makeCompartment, lockdown } from './library/lockdown.js';
import { makeCompartment } from './library/lockdown.js';
import * as skills from './library/skills.js'; import * as skills from './library/skills.js';
import * as world from './library/world.js'; import * as world from './library/world.js';
import { Vec3 } from 'vec3'; import { Vec3 } from 'vec3';
@ -11,7 +10,6 @@ export class Coder {
this.agent = agent; this.agent = agent;
this.file_counter = 0; this.file_counter = 0;
this.fp = '/bots/'+agent.name+'/action-code/'; this.fp = '/bots/'+agent.name+'/action-code/';
this.generating = false;
this.code_template = ''; this.code_template = '';
this.code_lint_template = ''; this.code_lint_template = '';
@ -25,8 +23,93 @@ export class Coder {
}); });
mkdirSync('.' + this.fp, { recursive: true }); mkdirSync('.' + this.fp, { recursive: true });
} }
async generateCode(agent_history) {
this.agent.bot.modes.pause('unstuck');
lockdown();
// this message history is transient and only maintained in this function
let messages = agent_history.getHistory();
messages.push({role: 'system', content: 'Code generation started. Write code in codeblock in your response:'});
const MAX_ATTEMPTS = 5;
const MAX_NO_CODE = 3;
let code = null;
let no_code_failures = 0;
for (let i=0; i<MAX_ATTEMPTS; i++) {
if (this.agent.bot.interrupt_code)
return null;
const messages_copy = JSON.parse(JSON.stringify(messages));
let res = await this.agent.prompter.promptCoding(messages_copy);
if (this.agent.bot.interrupt_code)
return null;
let contains_code = res.indexOf('```') !== -1;
if (!contains_code) {
if (res.indexOf('!newAction') !== -1) {
messages.push({
role: 'assistant',
content: res.substring(0, res.indexOf('!newAction'))
});
continue; // using newaction will continue the loop
}
if (no_code_failures >= MAX_NO_CODE) {
console.warn("Action failed, agent would not write code.");
return 'Action failed, agent would not write code.';
}
messages.push({
role: 'system',
content: 'Error: no code provided. Write code in codeblock in your response. ``` // example ```'}
);
console.warn("No code block generated. Trying again.");
no_code_failures++;
continue;
}
code = res.substring(res.indexOf('```')+3, res.lastIndexOf('```'));
const result = await this._stageCode(code);
const executionModule = result.func;
const lintResult = await this._lintCode(result.src_lint_copy);
if (lintResult) {
const message = 'Error: Code lint error:'+'\n'+lintResult+'\nPlease try again.';
console.warn("Linting error:"+'\n'+lintResult+'\n');
messages.push({ role: 'system', content: message });
continue;
}
if (!executionModule) {
console.warn("Failed to stage code, something is wrong.");
return 'Failed to stage code, something is wrong.';
}
try {
console.log('Executing code...');
await executionModule.main(this.agent.bot);
const code_output = this.agent.actions.getBotOutputSummary();
const summary = "Agent wrote this code: \n```" + this._sanitizeCode(code) + "```\nCode Output:\n" + code_output;
return summary;
} catch (e) {
if (this.agent.bot.interrupt_code)
return null;
console.warn('Generated code threw error: ' + e.toString());
console.warn('trying again...');
const code_output = this.agent.actions.getBotOutputSummary();
messages.push({
role: 'assistant',
content: res
});
messages.push({
role: 'system',
content: `Code Output:\n${code_output}\nCODE EXECUTION THREW ERROR: ${e.toString()}\n Please try again:`
});
}
}
return `Code generation failed after ${MAX_ATTEMPTS} attempts.`;
}
async lintCode(code) { async _lintCode(code) {
let result = '#### CODE ERROR INFO ###\n'; let result = '#### CODE ERROR INFO ###\n';
// Extract everything in the code between the beginning of 'skills./world.' and the '(' // Extract everything in the code between the beginning of 'skills./world.' and the '('
const skillRegex = /(?:skills|world)\.(.*?)\(/g; const skillRegex = /(?:skills|world)\.(.*?)\(/g;
@ -39,7 +122,7 @@ export class Coder {
// check function exists // check function exists
const missingSkills = skills.filter(skill => !!allDocs[skill]); const missingSkills = skills.filter(skill => !!allDocs[skill]);
if (missingSkills.length > 0) { if (missingSkills.length > 0) {
result += 'These functions do not exist. Please modify the correct function name and try again.\n'; result += 'These functions do not exist.\n';
result += '### FUNCTIONS NOT FOUND ###\n'; result += '### FUNCTIONS NOT FOUND ###\n';
result += missingSkills.join('\n'); result += missingSkills.join('\n');
console.log(result) console.log(result)
@ -70,8 +153,8 @@ export class Coder {
} }
// write custom code to file and import it // write custom code to file and import it
// write custom code to file and prepare for evaluation // write custom code to file and prepare for evaluation
async stageCode(code) { async _stageCode(code) {
code = this.sanitizeCode(code); code = this._sanitizeCode(code);
let src = ''; let src = '';
code = code.replaceAll('console.log(', 'log(bot,'); code = code.replaceAll('console.log(', 'log(bot,');
code = code.replaceAll('log("', 'log(bot,"'); code = code.replaceAll('log("', 'log(bot,"');
@ -96,7 +179,7 @@ export class Coder {
// } commented for now, useful to keep files for debugging // } commented for now, useful to keep files for debugging
this.file_counter++; this.file_counter++;
let write_result = await this.writeFilePromise('.' + this.fp + filename, src); let write_result = await this._writeFilePromise('.' + this.fp + filename, src);
// This is where we determine the environment the agent's code should be exposed to. // This is where we determine the environment the agent's code should be exposed to.
// It will only have access to these things, (in addition to basic javascript objects like Array, Object, etc.) // It will only have access to these things, (in addition to basic javascript objects like Array, Object, etc.)
// Note that the code may be able to modify the exposed objects. // Note that the code may be able to modify the exposed objects.
@ -115,7 +198,7 @@ export class Coder {
return { func:{main: mainFn}, src_lint_copy: src_lint_copy }; return { func:{main: mainFn}, src_lint_copy: src_lint_copy };
} }
sanitizeCode(code) { _sanitizeCode(code) {
code = code.trim(); code = code.trim();
const remove_strs = ['Javascript', 'javascript', 'js'] const remove_strs = ['Javascript', 'javascript', 'js']
for (let r of remove_strs) { for (let r of remove_strs) {
@ -127,7 +210,7 @@ export class Coder {
return code; return code;
} }
writeFilePromise(filename, src) { _writeFilePromise(filename, src) {
// makes it so we can await this function // makes it so we can await this function
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
writeFile(filename, src, (err) => { writeFile(filename, src, (err) => {
@ -139,89 +222,4 @@ export class Coder {
}); });
}); });
} }
async generateCode(agent_history) {
// wrapper to prevent overlapping code generation loops
await this.agent.actions.stop();
this.generating = true;
let res = await this.generateCodeLoop(agent_history);
this.generating = false;
if (!res.interrupted) this.agent.bot.emit('idle');
return res.message;
}
async generateCodeLoop(agent_history) {
this.agent.bot.modes.pause('unstuck');
let messages = agent_history.getHistory();
messages.push({role: 'system', content: 'Code generation started. Write code in codeblock in your response:'});
let code = null;
let code_return = null;
let failures = 0;
const interrupt_return = {success: true, message: null, interrupted: true, timedout: false};
for (let i=0; i<5; i++) {
if (this.agent.bot.interrupt_code)
return interrupt_return;
let res = await this.agent.prompter.promptCoding(JSON.parse(JSON.stringify(messages)));
if (this.agent.bot.interrupt_code)
return interrupt_return;
let contains_code = res.indexOf('```') !== -1;
if (!contains_code) {
if (res.indexOf('!newAction') !== -1) {
messages.push({
role: 'assistant',
content: res.substring(0, res.indexOf('!newAction'))
});
continue; // using newaction will continue the loop
}
if (failures >= 3) {
return { success: false, message: 'Action failed, agent would not write code.', interrupted: false, timedout: false };
}
messages.push({
role: 'system',
content: 'Error: no code provided. Write code in codeblock in your response. ``` // example ```'}
);
failures++;
continue;
}
code = res.substring(res.indexOf('```')+3, res.lastIndexOf('```'));
const result = await this.stageCode(code);
const executionModuleExports = result.func;
let src_lint_copy = result.src_lint_copy;
const analysisResult = await this.lintCode(src_lint_copy);
if (analysisResult) {
const message = 'Error: Code syntax error. Please try again:'+'\n'+analysisResult+'\n';
messages.push({ role: 'system', content: message });
continue;
}
if (!executionModuleExports) {
agent_history.add('system', 'Failed to stage code, something is wrong.');
return {success: false, message: null, interrupted: false, timedout: false};
}
code_return = await this.agent.actions.runAction('newAction', async () => {
return await executionModuleExports.main(this.agent.bot);
}, { timeout: settings.code_timeout_mins });
if (code_return.interrupted && !code_return.timedout)
return { success: false, message: null, interrupted: true, timedout: false };
console.log("Code generation result:", code_return.success, code_return.message.toString());
if (code_return.success) {
const summary = "Summary of newAction\nAgent wrote this code: \n```" + this.sanitizeCode(code) + "```\nCode Output:\n" + code_return.message.toString();
return { success: true, message: summary, interrupted: false, timedout: false };
}
messages.push({
role: 'assistant',
content: res
});
messages.push({
role: 'system',
content: code_return.message + '\nCode failed. Please try again:'
});
}
return { success: false, message: null, interrupted: false, timedout: true };
}
} }
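Note how generateCode pulls code out of the model response: everything between the first and last triple backtick is taken, so a leading language tag like "js" survives the slice and is only removed later by _sanitizeCode. A small illustration with a made-up response string (not from the commit):
// Illustrates the substring extraction used in generateCode above.
const res = 'Here you go:\n```js\nawait skills.wait(bot, 1000);\n```';
const code = res.substring(res.indexOf('```') + 3, res.lastIndexOf('```'));
console.log(JSON.stringify(code)); // "js\nawait skills.wait(bot, 1000);\n" — the "js" prefix is stripped by _sanitizeCode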

View file

@ -1,7 +1,8 @@
import * as skills from '../library/skills.js'; import * as skills from '../library/skills.js';
import settings from '../../../settings.js'; import settings from '../settings.js';
import convoManager from '../conversation.js'; import convoManager from '../conversation.js';
function runAsAction (actionFn, resume = false, timeout = -1) { function runAsAction (actionFn, resume = false, timeout = -1) {
let actionLabel = null; // Will be set on first use let actionLabel = null; // Will be set on first use
@ -31,13 +32,22 @@ export const actionsList = [
params: { params: {
'prompt': { type: 'string', description: 'A natural language prompt to guide code generation. Make a detailed step-by-step plan.' } 'prompt': { type: 'string', description: 'A natural language prompt to guide code generation. Make a detailed step-by-step plan.' }
}, },
perform: async function (agent, prompt) { perform: async function(agent, prompt) {
// just ignore prompt - it is now in context in chat history // just ignore prompt - it is now in context in chat history
if (!settings.allow_insecure_coding) { if (!settings.allow_insecure_coding) {
agent.openChat('newAction is disabled. Enable with allow_insecure_coding=true in settings.js'); agent.openChat('newAction is disabled. Enable with allow_insecure_coding=true in settings.js');
return 'newAction not allowed! Code writing is disabled in settings. Notify the user.'; return "newAction not allowed! Code writing is disabled in settings. Notify the user.";
} }
return await agent.coder.generateCode(agent.history); let result = "";
const actionFn = async () => {
try {
result = await agent.coder.generateCode(agent.history);
} catch (e) {
result = 'Error generating code: ' + e.toString();
}
};
await agent.actions.runAction('action:newAction', actionFn, {timeout: settings.code_timeout_mins});
return result;
} }
}, },
{ {
@ -86,7 +96,7 @@ export const actionsList = [
'closeness': {type: 'float', description: 'How close to get to the player.', domain: [0, Infinity]} 'closeness': {type: 'float', description: 'How close to get to the player.', domain: [0, Infinity]}
}, },
perform: runAsAction(async (agent, player_name, closeness) => { perform: runAsAction(async (agent, player_name, closeness) => {
return await skills.goToPlayer(agent.bot, player_name, closeness); await skills.goToPlayer(agent.bot, player_name, closeness);
}) })
}, },
{ {
@ -407,17 +417,51 @@ export const actionsList = [
            return `Conversation with ${player_name} ended.`; return `Conversation with ${player_name} ended.`;
} }
}, },
// { // commented for now, causes confusion with goal command {
// name: '!npcGoal', name: '!lookAtPlayer',
// description: 'Set a simple goal for an item or building to automatically work towards. Do not use for complex goals.', description: 'Look at a player or look in the same direction as the player.',
// params: { params: {
// 'name': { type: 'string', description: 'The name of the goal to set. Can be item or building name. If empty will automatically choose a goal.' }, 'player_name': { type: 'string', description: 'Name of the target player' },
// 'quantity': { type: 'int', description: 'The quantity of the goal to set. Default is 1.', domain: [1, Number.MAX_SAFE_INTEGER] } 'direction': {
// }, type: 'string',
// perform: async function (agent, name=null, quantity=1) { description: 'How to look ("at": look at the player, "with": look in the same direction as the player)',
// await agent.npc.setGoal(name, quantity); }
// agent.bot.emit('idle'); // to trigger the goal },
// return 'Set npc goal: ' + agent.npc.data.curr_goal.name; perform: async function(agent, player_name, direction) {
// } if (direction !== 'at' && direction !== 'with') {
// }, return "Invalid direction. Use 'at' or 'with'.";
}
let result = "";
const actionFn = async () => {
result = await agent.vision_interpreter.lookAtPlayer(player_name, direction);
};
await agent.actions.runAction('action:lookAtPlayer', actionFn);
return result;
}
},
{
name: '!lookAtPosition',
description: 'Look at specified coordinates.',
params: {
'x': { type: 'int', description: 'x coordinate' },
'y': { type: 'int', description: 'y coordinate' },
'z': { type: 'int', description: 'z coordinate' }
},
perform: async function(agent, x, y, z) {
let result = "";
const actionFn = async () => {
result = await agent.vision_interpreter.lookAtPosition(x, y, z);
};
await agent.actions.runAction('action:lookAtPosition', actionFn);
return result;
}
},
{
name: '!digDown',
description: 'Digs down a specified distance. Will stop if it reaches lava, water, or a fall of >=4 blocks below the bot.',
params: {'distance': { type: 'int', description: 'Distance to dig down', domain: [1, Number.MAX_SAFE_INTEGER] }},
perform: runAsAction(async (agent, distance) => {
await skills.digDown(agent.bot, distance)
})
},
]; ];
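The new commands follow the same invocation syntax documented by getCommandDocs (one command per response, double quotes for strings). Hypothetical chat usage, with a made-up player name and coordinates:
!lookAtPlayer("philips", "at")
!lookAtPosition(100, -60, -179)
!digDown(5)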

View file

@ -226,7 +226,7 @@ export async function executeCommand(agent, message) {
} }
} }
export function getCommandDocs() { export function getCommandDocs(agent) {
const typeTranslations = { const typeTranslations = {
//This was added to keep the prompt the same as before type checks were implemented. //This was added to keep the prompt the same as before type checks were implemented.
//If the language model is giving invalid inputs changing this might help. //If the language model is giving invalid inputs changing this might help.
@ -240,6 +240,9 @@ export function getCommandDocs() {
Use the commands with the syntax: !commandName or !commandName("arg1", 1.2, ...) if the command takes arguments.\n Use the commands with the syntax: !commandName or !commandName("arg1", 1.2, ...) if the command takes arguments.\n
Do not use codeblocks. Use double quotes for strings. Only use one command in each response, trailing commands and comments will be ignored.\n`; Do not use codeblocks. Use double quotes for strings. Only use one command in each response, trailing commands and comments will be ignored.\n`;
for (let command of commandList) { for (let command of commandList) {
if (agent.blocked_actions.includes(command.name)) {
continue;
}
docs += command.name + ': ' + command.description + '\n'; docs += command.name + ': ' + command.description + '\n';
if (command.params) { if (command.params) {
docs += 'Params:\n'; docs += 'Params:\n';

View file

@ -2,6 +2,8 @@ import * as world from '../library/world.js';
import * as mc from '../../utils/mcdata.js'; import * as mc from '../../utils/mcdata.js';
import { getCommandDocs } from './index.js'; import { getCommandDocs } from './index.js';
import convoManager from '../conversation.js'; import convoManager from '../conversation.js';
import { checkLevelBlueprint, checkBlueprint } from '../tasks/construction_tasks.js';
import { load } from 'cheerio';
const pad = (str) => { const pad = (str) => {
return '\n' + str + '\n'; return '\n' + str + '\n';
@ -177,6 +179,46 @@ export const queryList = [
perform: async function (agent) { perform: async function (agent) {
return "Saved place names: " + agent.memory_bank.getKeys(); return "Saved place names: " + agent.memory_bank.getKeys();
} }
},
{
name: '!checkBlueprintLevel',
description: 'Check if the level is complete and what blocks still need to be placed for the blueprint',
params: {
'levelNum': { type: 'int', description: 'The level number to check.', domain: [0, Number.MAX_SAFE_INTEGER] }
},
perform: function (agent, levelNum) {
let res = checkLevelBlueprint(agent, levelNum);
console.log(res);
return pad(res);
}
},
{
name: '!checkBlueprint',
description: 'Check what blocks still need to be placed for the blueprint',
perform: function (agent) {
let res = checkBlueprint(agent);
return pad(res);
}
},
{
name: '!getBlueprint',
description: 'Get the blueprint for the building',
perform: function (agent) {
let res = agent.task.blueprint.explain();
return pad(res);
}
},
{
name: '!getBlueprintLevel',
        description: 'Get the blueprint for the specified level of the building',
params: {
'levelNum': { type: 'int', description: 'The level number to check.', domain: [0, Number.MAX_SAFE_INTEGER] }
},
perform: function (agent, levelNum) {
let res = agent.task.blueprint.explainLevel(levelNum);
console.log(res);
return pad(res);
}
}, },
{ {
name: '!getCraftingPlan', name: '!getCraftingPlan',
@ -208,17 +250,52 @@ export const queryList = [
} }
// Generate crafting plan // Generate crafting plan
let craftingPlan = mc.getDetailedCraftingPlan(target_item, quantity, curr_inventory); try {
craftingPlan = prefixMessage + craftingPlan; let craftingPlan = mc.getDetailedCraftingPlan(target_item, quantity, curr_inventory);
console.log(craftingPlan); craftingPlan = prefixMessage + craftingPlan;
return pad(craftingPlan); return pad(craftingPlan);
} catch (error) {
console.error("Error generating crafting plan:", error);
return `An error occurred while generating the crafting plan: ${error.message}`;
}
}, },
}, },
{
name: '!searchWiki',
description: 'Search the Minecraft Wiki for the given query.',
params: {
'query': { type: 'string', description: 'The query to search for.' }
},
perform: async function (agent, query) {
const url = `https://minecraft.wiki/w/${query}`
try {
const response = await fetch(url);
if (response.status === 404) {
return `${query} was not found on the Minecraft Wiki. Try adjusting your search term.`;
}
const html = await response.text();
const $ = load(html);
const parserOutput = $("div.mw-parser-output");
parserOutput.find("table.navbox").remove();
const divContent = parserOutput.text();
return divContent.trim();
} catch (error) {
console.error("Error fetching or parsing HTML:", error);
return `The following error occurred: ${error}`
}
}
},
{ {
name: '!help', name: '!help',
description: 'Lists all available commands and their descriptions.', description: 'Lists all available commands and their descriptions.',
perform: async function (agent) { perform: async function (agent) {
return getCommandDocs(); return getCommandDocs(agent);
} }
}, },
]; ];
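The new !searchWiki query relies on cheerio to turn the fetched wiki page into plain text: load() parses the HTML, div.mw-parser-output is the article body, and navboxes are dropped before extracting text. A minimal standalone sketch of that parsing step, using inline HTML instead of a live fetch:
// Same cheerio calls as !searchWiki above, applied to a hypothetical snippet of wiki HTML.
import { load } from 'cheerio';
const html = '<div class="mw-parser-output"><table class="navbox">nav links</table><p>Oak logs are obtained by...</p></div>';
const $ = load(html);
const parserOutput = $('div.mw-parser-output');
parserOutput.find('table.navbox').remove();
console.log(parserOutput.text().trim()); // "Oak logs are obtained by..."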

View file

@ -1,10 +1,9 @@
import settings from '../../settings.js'; import settings from './settings.js';
import { readFileSync } from 'fs';
import { containsCommand } from './commands/index.js'; import { containsCommand } from './commands/index.js';
import { sendBotChatToServer } from './agent_proxy.js'; import { sendBotChatToServer } from './mindserver_proxy.js';
let agent; let agent;
let agent_names = settings.profiles.map((p) => JSON.parse(readFileSync(p, 'utf8')).name); let agent_names = [];
let agents_in_game = []; let agents_in_game = [];
class Conversation { class Conversation {

View file

@ -1,6 +1,6 @@
import { writeFileSync, readFileSync, mkdirSync, existsSync } from 'fs'; import { writeFileSync, readFileSync, mkdirSync, existsSync } from 'fs';
import { NPCData } from './npc/data.js'; import { NPCData } from './npc/data.js';
import settings from '../../settings.js'; import settings from './settings.js';
export class History { export class History {
@ -86,6 +86,7 @@ export class History {
turns: this.turns, turns: this.turns,
self_prompting_state: this.agent.self_prompter.state, self_prompting_state: this.agent.self_prompter.state,
self_prompt: this.agent.self_prompter.isStopped() ? null : this.agent.self_prompter.prompt, self_prompt: this.agent.self_prompter.isStopped() ? null : this.agent.self_prompter.prompt,
taskStart: this.agent.task.taskStartTime,
last_sender: this.agent.last_sender last_sender: this.agent.last_sender
}; };
writeFileSync(this.memory_fp, JSON.stringify(data, null, 2)); writeFileSync(this.memory_fp, JSON.stringify(data, null, 2));

View file

@ -4,16 +4,22 @@ import 'ses';
// We disable some of the taming to allow for more flexibility // We disable some of the taming to allow for more flexibility
// For configuration, see https://github.com/endojs/endo/blob/master/packages/ses/docs/lockdown.md // For configuration, see https://github.com/endojs/endo/blob/master/packages/ses/docs/lockdown.md
lockdown({
// basic devex and quality of life improvements let lockeddown = false;
localeTaming: 'unsafe', export function lockdown() {
consoleTaming: 'unsafe', if (lockeddown) return;
errorTaming: 'unsafe', lockeddown = true;
stackFiltering: 'verbose', lockdown({
// allow eval outside of created compartments // basic devex and quality of life improvements
// (mineflayer dep "protodef" uses eval) localeTaming: 'unsafe',
evalTaming: 'unsafeEval', consoleTaming: 'unsafe',
}); errorTaming: 'unsafe',
stackFiltering: 'verbose',
// allow eval outside of created compartments
// (mineflayer dep "protodef" uses eval)
evalTaming: 'unsafeEval',
});
}
export const makeCompartment = (endowments = {}) => { export const makeCompartment = (endowments = {}) => {
return new Compartment({ return new Compartment({

View file

@@ -8,6 +8,7 @@ export class SkillLibrary {
         this.embedding_model = embedding_model;
         this.skill_docs_embeddings = {};
         this.skill_docs = null;
+        this.always_show_skills = ['skills.placeBlock', 'skills.wait', 'skills.breakBlockAt']
     }
     async initSkillLibrary() {
         const skillDocs = getSkillDocs();
@@ -26,6 +27,10 @@ export class SkillLibrary {
                 this.embedding_model = null;
             }
         }
+        this.always_show_skills_docs = {};
+        for (const skillName of this.always_show_skills) {
+            this.always_show_skills_docs[skillName] = this.skill_docs.find(doc => doc.includes(skillName));
+        }
     }
     async getAllSkillDocs() {
@@ -36,16 +41,24 @@ export class SkillLibrary {
         if(!message) // use filler message if none is provided
             message = '(no message)';
         let skill_doc_similarities = [];
-        if (!this.embedding_model) {
-            skill_doc_similarities = Object.keys(this.skill_docs)
+        if (select_num === -1) {
+            skill_doc_similarities = Object.keys(this.skill_docs_embeddings)
+                .map(doc_key => ({
+                    doc_key,
+                    similarity_score: 0
+                }));
+        }
+        else if (!this.embedding_model) {
+            skill_doc_similarities = Object.keys(this.skill_docs_embeddings)
                 .map(doc_key => ({
                     doc_key,
-                    similarity_score: wordOverlapScore(message, this.skill_docs[doc_key])
+                    similarity_score: wordOverlapScore(message, this.skill_docs_embeddings[doc_key])
                 }))
                 .sort((a, b) => b.similarity_score - a.similarity_score);
         }
         else {
-            let latest_message_embedding = '';
+            let latest_message_embedding = await this.embedding_model.embed(message);
             skill_doc_similarities = Object.keys(this.skill_docs_embeddings)
                 .map(doc_key => ({
                     doc_key,
@@ -55,15 +68,26 @@ export class SkillLibrary {
         }
         let length = skill_doc_similarities.length;
-        if (typeof select_num !== 'number' || isNaN(select_num) || select_num < 0) {
+        if (select_num === -1 || select_num > length) {
             select_num = length;
-        } else {
-            select_num = Math.min(Math.floor(select_num), length);
         }
-        let selected_docs = skill_doc_similarities.slice(0, select_num);
-        let relevant_skill_docs = '#### RELEVENT DOCS INFO ###\nThe following functions are listed in descending order of relevance.\n';
-        relevant_skill_docs += 'SkillDocs:\n'
-        relevant_skill_docs += selected_docs.map(doc => `${doc.doc_key}`).join('\n### ');
+        // Get initial docs from similarity scores
+        let selected_docs = new Set(skill_doc_similarities.slice(0, select_num).map(doc => doc.doc_key));
+        // Add always show docs
+        Object.values(this.always_show_skills_docs).forEach(doc => {
+            if (doc) {
+                selected_docs.add(doc);
+            }
+        });
+        let relevant_skill_docs = '#### RELEVANT CODE DOCS ###\nThe following functions are available to use:\n';
+        relevant_skill_docs += Array.from(selected_docs).join('\n### ');
+        console.log('Selected skill docs:', Array.from(selected_docs).map(doc => {
+            const first_line_break = doc.indexOf('\n');
+            return first_line_break > 0 ? doc.substring(0, first_line_break) : doc;
+        }));
         return relevant_skill_docs;
     }
 }
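A minimal usage sketch of the updated lookup (illustrative only: it assumes an initialized SkillLibrary instance named skillLibrary, and that the method in the hunk above is the async skill-doc lookup taking (message, select_num), here assumed to be named getRelevantSkillDocs):

// select_num of -1 skips ranking and returns every doc, plus the always-show docs
const allDocs = await skillLibrary.getRelevantSkillDocs(null, -1);
// a positive select_num ranks docs by similarity to the message before selecting
const topDocs = await skillLibrary.getRelevantSkillDocs('build a cobblestone wall', 5);
console.log(topDocs);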


@@ -2,7 +2,10 @@ import * as mc from "../../utils/mcdata.js";
 import * as world from "./world.js";
 import pf from 'mineflayer-pathfinder';
 import Vec3 from 'vec3';
+import settings from "../../../settings.js";
+const blockPlaceDelay = settings.block_place_delay == null ? 0 : settings.block_place_delay;
+const useDelay = blockPlaceDelay > 0;
 export function log(bot, message) {
     bot.output += message + '\n';
@@ -111,16 +114,28 @@ export async function craftRecipe(bot, itemName, num=1) {
     return true;
 }
-export async function wait(seconds) {
+export async function wait(bot, milliseconds) {
     /**
-     * Waits for the given number of seconds.
-     * @param {number} seconds, the number of seconds to wait.
+     * Waits for the given number of milliseconds.
+     * @param {MinecraftBot} bot, reference to the minecraft bot.
+     * @param {number} milliseconds, the number of milliseconds to wait.
      * @returns {Promise<boolean>} true if the wait was successful, false otherwise.
      * @example
-     * await skills.wait(10);
+     * await skills.wait(bot, 1000);
      **/
-    // setTimeout is disabled to prevent unawaited code, so this is a safe alternative
-    await new Promise(resolve => setTimeout(resolve, seconds * 1000));
+    // setTimeout is disabled to prevent unawaited code, so this is a safe alternative that enables interrupts
+    let timeLeft = milliseconds;
+    let startTime = Date.now();
+    while (timeLeft > 0) {
+        if (bot.interrupt_code) return false;
+        let waitTime = Math.min(2000, timeLeft);
+        await new Promise(resolve => setTimeout(resolve, waitTime));
+        let elapsed = Date.now() - startTime;
+        timeLeft = milliseconds - elapsed;
+    }
     return true;
 }
@@ -213,28 +228,33 @@ export async function smeltItem(bot, itemName, num=1) {
     await furnace.putInput(mc.getItemId(itemName), null, num);
     // wait for the items to smelt
     let total = 0;
-    let collected_last = true;
     let smelted_item = null;
     await new Promise(resolve => setTimeout(resolve, 200));
+    let last_collected = Date.now();
     while (total < num) {
-        await new Promise(resolve => setTimeout(resolve, 10000));
-        console.log('checking...');
-        let collected = false;
+        await new Promise(resolve => setTimeout(resolve, 1000));
         if (furnace.outputItem()) {
             smelted_item = await furnace.takeOutput();
             if (smelted_item) {
                 total += smelted_item.count;
-                collected = true;
+                last_collected = Date.now();
             }
         }
-        if (!collected && !collected_last) {
-            break; // if nothing was collected this time or last time
+        if (Date.now() - last_collected > 11000) {
+            break; // if nothing has been collected in 11 seconds, stop
        }
-        collected_last = collected;
        if (bot.interrupt_code) {
            break;
        }
    }
+    // take all remaining in input/fuel slots
+    if (furnace.inputItem()) {
+        await furnace.takeInput();
+    }
+    if (furnace.fuelItem()) {
+        await furnace.takeFuel();
+    }
    await bot.closeWindow(furnace);
    if (placedFurnace) {
@ -448,7 +468,14 @@ export async function collectBlock(bot, blockType, num=1, exclude=null) {
return false; return false;
} }
try { try {
await bot.collectBlock.collect(block); if (mc.mustCollectManually(blockType)) {
await goToPosition(bot, block.position.x, block.position.y, block.position.z, 2);
await bot.dig(block);
await pickupNearbyItems(bot);
}
else {
await bot.collectBlock.collect(block);
}
collected++; collected++;
await autoLight(bot); await autoLight(bot);
} }
@ -484,7 +511,7 @@ export async function pickupNearbyItems(bot) {
let pickedUp = 0; let pickedUp = 0;
while (nearestItem) { while (nearestItem) {
bot.pathfinder.setMovements(new pf.Movements(bot)); bot.pathfinder.setMovements(new pf.Movements(bot));
await bot.pathfinder.goto(new pf.goals.GoalFollow(nearestItem, 0.8), true); await goToGoal(bot, new pf.goals.GoalFollow(nearestItem, 0.8));
await new Promise(resolve => setTimeout(resolve, 200)); await new Promise(resolve => setTimeout(resolve, 200));
let prev = nearestItem; let prev = nearestItem;
nearestItem = getNearestItem(bot); nearestItem = getNearestItem(bot);
@ -514,6 +541,7 @@ export async function breakBlockAt(bot, x, y, z) {
let block = bot.blockAt(Vec3(x, y, z)); let block = bot.blockAt(Vec3(x, y, z));
if (block.name !== 'air' && block.name !== 'water' && block.name !== 'lava') { if (block.name !== 'air' && block.name !== 'water' && block.name !== 'lava') {
if (bot.modes.isOn('cheat')) { if (bot.modes.isOn('cheat')) {
if (useDelay) { await new Promise(resolve => setTimeout(resolve, blockPlaceDelay)); }
let msg = '/setblock ' + Math.floor(x) + ' ' + Math.floor(y) + ' ' + Math.floor(z) + ' air'; let msg = '/setblock ' + Math.floor(x) + ' ' + Math.floor(y) + ' ' + Math.floor(z) + ' air';
bot.chat(msg); bot.chat(msg);
log(bot, `Used /setblock to break block at ${x}, ${y}, ${z}.`); log(bot, `Used /setblock to break block at ${x}, ${y}, ${z}.`);
@ -526,7 +554,7 @@ export async function breakBlockAt(bot, x, y, z) {
movements.canPlaceOn = false; movements.canPlaceOn = false;
movements.allow1by1towers = false; movements.allow1by1towers = false;
bot.pathfinder.setMovements(movements); bot.pathfinder.setMovements(movements);
await bot.pathfinder.goto(new pf.goals.GoalNear(pos.x, pos.y, pos.z, 4)); await goToGoal(bot, new pf.goals.GoalNear(pos.x, pos.y, pos.z, 4));
} }
if (bot.game.gameMode !== 'creative') { if (bot.game.gameMode !== 'creative') {
await bot.tool.equipForBlock(block); await bot.tool.equipForBlock(block);
@ -563,12 +591,18 @@ export async function placeBlock(bot, blockType, x, y, z, placeOn='bottom', dont
* await skills.placeBlock(bot, "oak_log", p.x + 2, p.y, p.x); * await skills.placeBlock(bot, "oak_log", p.x + 2, p.y, p.x);
* await skills.placeBlock(bot, "torch", p.x + 1, p.y, p.x, 'side'); * await skills.placeBlock(bot, "torch", p.x + 1, p.y, p.x, 'side');
**/ **/
if (!mc.getBlockId(blockType)) { if (!mc.getBlockId(blockType) && blockType !== 'air') {
log(bot, `Invalid block type: ${blockType}.`); log(bot, `Invalid block type: ${blockType}.`);
return false; return false;
} }
const target_dest = new Vec3(Math.floor(x), Math.floor(y), Math.floor(z)); const target_dest = new Vec3(Math.floor(x), Math.floor(y), Math.floor(z));
if (blockType === 'air') {
log(bot, `Placing air (removing block) at ${target_dest}.`);
return await breakBlockAt(bot, x, y, z);
}
if (bot.modes.isOn('cheat') && !dontCheat) { if (bot.modes.isOn('cheat') && !dontCheat) {
if (bot.restrict_to_inventory) { if (bot.restrict_to_inventory) {
let block = bot.inventory.items().find(item => item.name === blockType); let block = bot.inventory.items().find(item => item.name === blockType);
@ -604,11 +638,14 @@ export async function placeBlock(bot, blockType, x, y, z, placeOn='bottom', dont
if (blockType.includes('stairs')) { if (blockType.includes('stairs')) {
blockType += `[facing=${face}]`; blockType += `[facing=${face}]`;
} }
if (useDelay) { await new Promise(resolve => setTimeout(resolve, blockPlaceDelay)); }
let msg = '/setblock ' + Math.floor(x) + ' ' + Math.floor(y) + ' ' + Math.floor(z) + ' ' + blockType; let msg = '/setblock ' + Math.floor(x) + ' ' + Math.floor(y) + ' ' + Math.floor(z) + ' ' + blockType;
bot.chat(msg); bot.chat(msg);
if (blockType.includes('door')) if (blockType.includes('door'))
if (useDelay) { await new Promise(resolve => setTimeout(resolve, blockPlaceDelay)); }
bot.chat('/setblock ' + Math.floor(x) + ' ' + Math.floor(y+1) + ' ' + Math.floor(z) + ' ' + blockType + '[half=upper]'); bot.chat('/setblock ' + Math.floor(x) + ' ' + Math.floor(y+1) + ' ' + Math.floor(z) + ' ' + blockType + '[half=upper]');
if (blockType.includes('bed')) if (blockType.includes('bed'))
if (useDelay) { await new Promise(resolve => setTimeout(resolve, blockPlaceDelay)); }
bot.chat('/setblock ' + Math.floor(x) + ' ' + Math.floor(y) + ' ' + Math.floor(z-1) + ' ' + blockType + '[part=head]'); bot.chat('/setblock ' + Math.floor(x) + ' ' + Math.floor(y) + ' ' + Math.floor(z-1) + ' ' + blockType + '[part=head]');
log(bot, `Used /setblock to place ${blockType} at ${target_dest}.`); log(bot, `Used /setblock to place ${blockType} at ${target_dest}.`);
return true; return true;
@ -695,7 +732,7 @@ export async function placeBlock(bot, blockType, x, y, z, placeOn='bottom', dont
let pos = targetBlock.position; let pos = targetBlock.position;
let movements = new pf.Movements(bot); let movements = new pf.Movements(bot);
bot.pathfinder.setMovements(movements); bot.pathfinder.setMovements(movements);
await bot.pathfinder.goto(new pf.goals.GoalNear(pos.x, pos.y, pos.z, 4)); await goToGoal(bot, new pf.goals.GoalNear(pos.x, pos.y, pos.z, 4));
} }
await bot.equip(block, 'hand'); await bot.equip(block, 'hand');
@@ -811,7 +848,7 @@ export async function putInChest(bot, itemName, num=-1) {
 export async function takeFromChest(bot, itemName, num=-1) {
     /**
-     * Take the given item from the nearest chest.
+     * Take the given item from the nearest chest, potentially from multiple slots.
      * @param {MinecraftBot} bot, reference to the minecraft bot.
      * @param {string} itemName, the item or block name to take from the chest.
      * @param {number} num, the number of items to take from the chest. Defaults to -1, which takes all items.
@@ -826,17 +863,33 @@ export async function takeFromChest(bot, itemName, num=-1) {
     }
     await goToPosition(bot, chest.position.x, chest.position.y, chest.position.z, 2);
     const chestContainer = await bot.openContainer(chest);
-    let item = chestContainer.containerItems().find(item => item.name === itemName);
-    if (!item) {
+    // Find all matching items in the chest
+    let matchingItems = chestContainer.containerItems().filter(item => item.name === itemName);
+    if (matchingItems.length === 0) {
         log(bot, `Could not find any ${itemName} in the chest.`);
         await chestContainer.close();
         return false;
     }
-    let to_take = num === -1 ? item.count : Math.min(num, item.count);
-    await chestContainer.withdraw(item.type, null, to_take);
+    let totalAvailable = matchingItems.reduce((sum, item) => sum + item.count, 0);
+    let remaining = num === -1 ? totalAvailable : Math.min(num, totalAvailable);
+    let totalTaken = 0;
+    // Take items from each slot until we've taken enough or run out
+    for (const item of matchingItems) {
+        if (remaining <= 0) break;
+        let toTakeFromSlot = Math.min(remaining, item.count);
+        await chestContainer.withdraw(item.type, null, toTakeFromSlot);
+        totalTaken += toTakeFromSlot;
+        remaining -= toTakeFromSlot;
+    }
     await chestContainer.close();
-    log(bot, `Successfully took ${to_take} ${itemName} from the chest.`);
-    return true;
+    log(bot, `Successfully took ${totalTaken} ${itemName} from the chest.`);
+    return totalTaken > 0;
 }
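A short illustrative call (item name and count are made up); the multi-slot loop means a request larger than one stack can now be satisfied:

// withdraws up to 96 cobblestone even if the chest stores it across several slots
const ok = await skills.takeFromChest(bot, 'cobblestone', 96);
if (!ok) console.log('No cobblestone was taken.');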
export async function viewChest(bot) { export async function viewChest(bot) {
@ -904,6 +957,10 @@ export async function giveToPlayer(bot, itemType, username, num=1) {
* @example * @example
* await skills.giveToPlayer(bot, "oak_log", "player1"); * await skills.giveToPlayer(bot, "oak_log", "player1");
**/ **/
if (bot.username === username) {
log(bot, `You cannot give items to yourself.`);
return false;
}
let player = bot.players[username].entity let player = bot.players[username].entity
if (!player) { if (!player) {
log(bot, `Could not find ${username}.`); log(bot, `Could not find ${username}.`);
@ -917,8 +974,25 @@ export async function giveToPlayer(bot, itemType, username, num=1) {
} }
// if we are too close, make some distance // if we are too close, make some distance
if (bot.entity.position.distanceTo(player.position) < 2) { if (bot.entity.position.distanceTo(player.position) < 2) {
let too_close = true;
let start_moving_away = Date.now();
await moveAwayFromEntity(bot, player, 2); await moveAwayFromEntity(bot, player, 2);
while (too_close && !bot.interrupt_code) {
await new Promise(resolve => setTimeout(resolve, 500));
too_close = bot.entity.position.distanceTo(player.position) < 5;
if (too_close) {
await moveAwayFromEntity(bot, player, 5);
}
if (Date.now() - start_moving_away > 3000) {
break;
}
}
if (too_close) {
log(bot, `Failed to give ${itemType} to ${username}, too close.`);
return false;
}
} }
await bot.lookAt(player.position); await bot.lookAt(player.position);
if (await discard(bot, itemType, num)) { if (await discard(bot, itemType, num)) {
let given = false; let given = false;
@ -944,6 +1018,115 @@ export async function giveToPlayer(bot, itemType, username, num=1) {
return false; return false;
} }
export async function goToGoal(bot, goal) {
/**
* Navigate to the given goal. Use doors and attempt minimally destructive movements.
* @param {MinecraftBot} bot, reference to the minecraft bot.
* @param {pf.goals.Goal} goal, the goal to navigate to.
**/
const nonDestructiveMovements = new pf.Movements(bot);
const dontBreakBlocks = ['glass', 'glass_pane'];
for (let block of dontBreakBlocks) {
nonDestructiveMovements.blocksCantBreak.add(mc.getBlockId(block));
}
nonDestructiveMovements.digCost = 10;
const destructiveMovements = new pf.Movements(bot);
let final_movements = destructiveMovements;
const pathfind_timeout = 1000;
if (await bot.pathfinder.getPathTo(nonDestructiveMovements, goal, pathfind_timeout).status === 'success') {
final_movements = nonDestructiveMovements;
log(bot, `Found non-destructive path.`);
}
else if (await bot.pathfinder.getPathTo(destructiveMovements, goal, pathfind_timeout).status === 'success') {
log(bot, `Found destructive path.`);
}
else {
log(bot, `Path not found, but attempting to navigate anyway using destructive movements.`);
}
const doorCheckInterval = startDoorInterval(bot);
bot.pathfinder.setMovements(final_movements);
try {
await bot.pathfinder.goto(goal);
clearInterval(doorCheckInterval);
return true;
} catch (err) {
clearInterval(doorCheckInterval);
// we need to catch so we can clean up the door check interval, then rethrow the error
throw err;
}
}
let _doorInterval = null;
function startDoorInterval(bot) {
/**
* Start helper interval that opens nearby doors if the bot is stuck.
* @param {MinecraftBot} bot, reference to the minecraft bot.
* @returns {number} the interval id.
**/
if (_doorInterval) {
clearInterval(_doorInterval);
}
let prev_pos = bot.entity.position.clone();
let prev_check = Date.now();
let stuck_time = 0;
const doorCheckInterval = setInterval(() => {
const now = Date.now();
if (bot.entity.position.distanceTo(prev_pos) >= 0.1) {
stuck_time = 0;
} else {
stuck_time += now - prev_check;
}
if (stuck_time > 1200) {
// shuffle positions so we're not always opening the same door
const positions = [
bot.entity.position.clone(),
bot.entity.position.offset(0, 0, 1),
bot.entity.position.offset(0, 0, -1),
bot.entity.position.offset(1, 0, 0),
bot.entity.position.offset(-1, 0, 0),
]
let elevated_positions = positions.map(position => position.offset(0, 1, 0));
positions.push(...elevated_positions);
positions.push(bot.entity.position.offset(0, 2, 0)); // above head
positions.push(bot.entity.position.offset(0, -1, 0)); // below feet
let currentIndex = positions.length;
while (currentIndex != 0) {
let randomIndex = Math.floor(Math.random() * currentIndex);
currentIndex--;
[positions[currentIndex], positions[randomIndex]] = [
positions[randomIndex], positions[currentIndex]];
}
for (let position of positions) {
let block = bot.blockAt(position);
if (block && block.name &&
!block.name.includes('iron') &&
(block.name.includes('door') ||
block.name.includes('fence_gate') ||
block.name.includes('trapdoor')))
{
bot.activateBlock(block);
break;
}
}
stuck_time = 0;
}
prev_pos = bot.entity.position.clone();
prev_check = now;
}, 200);
_doorInterval = doorCheckInterval;
return doorCheckInterval;
}
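A brief usage sketch for the new navigation helper (coordinates and the target player are made up; goToGoal is the function exported above, and the GoalFollow example assumes the player entity is currently loaded):

// prefer a non-destructive route, fall back to digging, and open doors/gates if the bot gets stuck
await goToGoal(bot, new pf.goals.GoalNear(100, 64, -20, 2));
// follow-style goals work the same way
await goToGoal(bot, new pf.goals.GoalFollow(bot.players['player1'].entity, 3));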
 export async function goToPosition(bot, x, y, z, min_distance=2) {
     /**
@@ -967,10 +1150,38 @@ export async function goToPosition(bot, x, y, z, min_distance=2) {
         log(bot, `Teleported to ${x}, ${y}, ${z}.`);
         return true;
     }
-    bot.pathfinder.setMovements(new pf.Movements(bot));
-    await bot.pathfinder.goto(new pf.goals.GoalNear(x, y, z, min_distance));
-    log(bot, `You have reached at ${x}, ${y}, ${z}.`);
-    return true;
+    const checkDigProgress = () => {
+        if (bot.targetDigBlock) {
+            const targetBlock = bot.targetDigBlock;
+            const itemId = bot.heldItem ? bot.heldItem.type : null;
+            if (!targetBlock.canHarvest(itemId)) {
+                log(bot, `Pathfinding stopped: Cannot break ${targetBlock.name} with current tools.`);
+                bot.pathfinder.stop();
+                bot.stopDigging();
+            }
+        }
+    };
+    const progressInterval = setInterval(checkDigProgress, 1000);
+    try {
+        await goToGoal(bot, new pf.goals.GoalNear(x, y, z, min_distance));
+        clearInterval(progressInterval);
+        const distance = bot.entity.position.distanceTo(new Vec3(x, y, z));
+        if (distance <= min_distance+1) {
+            log(bot, `You have reached at ${x}, ${y}, ${z}.`);
+            return true;
+        }
+        else {
+            log(bot, `Unable to reach ${x}, ${y}, ${z}, you are ${Math.round(distance)} blocks away.`);
+            return false;
+        }
+    } catch (err) {
+        log(bot, `Pathfinding stopped: ${err.message}.`);
+        clearInterval(progressInterval);
+        return false;
+    }
 }
export async function goToNearestBlock(bot, blockType, min_distance=2, range=64) { export async function goToNearestBlock(bot, blockType, min_distance=2, range=64) {
@ -994,7 +1205,7 @@ export async function goToNearestBlock(bot, blockType, min_distance=2, range=64
log(bot, `Could not find any ${blockType} in ${range} blocks.`); log(bot, `Could not find any ${blockType} in ${range} blocks.`);
return false; return false;
} }
log(bot, `Found ${blockType} at ${block.position}.`); log(bot, `Found ${blockType} at ${block.position}. Navigating...`);
await goToPosition(bot, block.position.x, block.position.y, block.position.z, min_distance); await goToPosition(bot, block.position.x, block.position.y, block.position.z, min_distance);
return true; return true;
@ -1030,7 +1241,10 @@ export async function goToPlayer(bot, username, distance=3) {
* @example * @example
* await skills.goToPlayer(bot, "player"); * await skills.goToPlayer(bot, "player");
**/ **/
if (bot.username === username) {
log(bot, `You are already at ${username}.`);
return true;
}
if (bot.modes.isOn('cheat')) { if (bot.modes.isOn('cheat')) {
bot.chat('/tp @s ' + username); bot.chat('/tp @s ' + username);
log(bot, `Teleported to ${username}.`); log(bot, `Teleported to ${username}.`);
@@ -1045,9 +1259,10 @@ export async function goToPlayer(bot, username, distance=3) {
         return false;
     }
-    const move = new pf.Movements(bot);
-    bot.pathfinder.setMovements(move);
-    await bot.pathfinder.goto(new pf.goals.GoalFollow(player, distance), true);
+    distance = Math.max(distance, 0.5);
+    const goal = new pf.goals.GoalFollow(player, distance);
+    await goToGoal(bot, goal, true);
     log(bot, `You have reached ${username}.`);
 }
@@ -1067,24 +1282,54 @@ export async function followPlayer(bot, username, distance=4) {
         return false;
     const move = new pf.Movements(bot);
+    move.digCost = 10;
     bot.pathfinder.setMovements(move);
+    let doorCheckInterval = startDoorInterval(bot);
     bot.pathfinder.setGoal(new pf.goals.GoalFollow(player, distance), true);
     log(bot, `You are now actively following player ${username}.`);
     while (!bot.interrupt_code) {
         await new Promise(resolve => setTimeout(resolve, 500));
         // in cheat mode, if the distance is too far, teleport to the player
-        if (bot.modes.isOn('cheat') && bot.entity.position.distanceTo(player.position) > 100 && player.isOnGround) {
+        const distance_from_player = bot.entity.position.distanceTo(player.position);
+        const teleport_distance = 100;
+        const ignore_modes_distance = 30;
+        const nearby_distance = distance + 2;
+        if (distance_from_player > teleport_distance && bot.modes.isOn('cheat')) {
+            // teleport with cheat mode
             await goToPlayer(bot, username);
         }
-        if (bot.modes.isOn('unstuck')) {
-            const is_nearby = bot.entity.position.distanceTo(player.position) <= distance + 1;
-            if (is_nearby)
-                bot.modes.pause('unstuck');
-            else
-                bot.modes.unpause('unstuck');
+        else if (distance_from_player > ignore_modes_distance) {
+            // these modes slow down the bot, and we want to catch up
+            bot.modes.pause('item_collecting');
+            bot.modes.pause('hunting');
+            bot.modes.pause('torch_placing');
+        }
+        else if (distance_from_player <= ignore_modes_distance) {
+            bot.modes.unpause('item_collecting');
+            bot.modes.unpause('hunting');
+            bot.modes.unpause('torch_placing');
+        }
+        if (distance_from_player <= nearby_distance) {
+            clearInterval(doorCheckInterval);
+            doorCheckInterval = null;
+            bot.modes.pause('unstuck');
+            bot.modes.pause('elbow_room');
+        }
+        else {
+            if (!doorCheckInterval) {
+                doorCheckInterval = startDoorInterval(bot);
+            }
+            bot.modes.unpause('unstuck');
+            bot.modes.unpause('elbow_room');
         }
     }
+    clearInterval(doorCheckInterval);
     return true;
 }
@ -1107,7 +1352,6 @@ export async function moveAway(bot, distance) {
const move = new pf.Movements(bot); const move = new pf.Movements(bot);
const path = await bot.pathfinder.getPathTo(move, inverted_goal, 10000); const path = await bot.pathfinder.getPathTo(move, inverted_goal, 10000);
let last_move = path.path[path.path.length-1]; let last_move = path.path[path.path.length-1];
console.log(last_move);
if (last_move) { if (last_move) {
let x = Math.floor(last_move.x); let x = Math.floor(last_move.x);
let y = Math.floor(last_move.y); let y = Math.floor(last_move.y);
@ -1117,7 +1361,7 @@ export async function moveAway(bot, distance) {
} }
} }
await bot.pathfinder.goto(inverted_goal); await goToGoal(bot, inverted_goal);
let new_pos = bot.entity.position; let new_pos = bot.entity.position;
log(bot, `Moved away from nearest entity to ${new_pos}.`); log(bot, `Moved away from nearest entity to ${new_pos}.`);
return true; return true;
@ -1311,7 +1555,7 @@ export async function tillAndSow(bot, x, y, z, seedType=null) {
if (bot.entity.position.distanceTo(block.position) > 4.5) { if (bot.entity.position.distanceTo(block.position) > 4.5) {
let pos = block.position; let pos = block.position;
bot.pathfinder.setMovements(new pf.Movements(bot)); bot.pathfinder.setMovements(new pf.Movements(bot));
await bot.pathfinder.goto(new pf.goals.GoalNear(pos.x, pos.y, pos.z, 4)); await goToGoal(bot, new pf.goals.GoalNear(pos.x, pos.y, pos.z, 4));
} }
if (block.name !== 'farmland') { if (block.name !== 'farmland') {
let hoe = bot.inventory.items().find(item => item.name.includes('hoe')); let hoe = bot.inventory.items().find(item => item.name.includes('hoe'));
@ -1357,9 +1601,66 @@ export async function activateNearestBlock(bot, type) {
if (bot.entity.position.distanceTo(block.position) > 4.5) { if (bot.entity.position.distanceTo(block.position) > 4.5) {
let pos = block.position; let pos = block.position;
bot.pathfinder.setMovements(new pf.Movements(bot)); bot.pathfinder.setMovements(new pf.Movements(bot));
await bot.pathfinder.goto(new pf.goals.GoalNear(pos.x, pos.y, pos.z, 4)); await goToGoal(bot, new pf.goals.GoalNear(pos.x, pos.y, pos.z, 4));
} }
await bot.activateBlock(block); await bot.activateBlock(block);
log(bot, `Activated ${type} at x:${block.position.x.toFixed(1)}, y:${block.position.y.toFixed(1)}, z:${block.position.z.toFixed(1)}.`); log(bot, `Activated ${type} at x:${block.position.x.toFixed(1)}, y:${block.position.y.toFixed(1)}, z:${block.position.z.toFixed(1)}.`);
return true; return true;
} }
export async function digDown(bot, distance = 10) {
/**
* Digs down a specified distance. Will stop if it reaches lava, water, or a fall of >=4 blocks below the bot.
* @param {MinecraftBot} bot, reference to the minecraft bot.
* @param {int} distance, distance to dig down.
* @returns {Promise<boolean>} true if successfully dug all the way down.
* @example
* await skills.digDown(bot, 10);
**/
let start_block_pos = bot.blockAt(bot.entity.position).position;
for (let i = 1; i <= distance; i++) {
const targetBlock = bot.blockAt(start_block_pos.offset(0, -i, 0));
let belowBlock = bot.blockAt(start_block_pos.offset(0, -i-1, 0));
if (!targetBlock || !belowBlock) {
log(bot, `Dug down ${i-1} blocks, but reached the end of the world.`);
return true;
}
// Check for lava, water
if (targetBlock.name === 'lava' || targetBlock.name === 'water' ||
belowBlock.name === 'lava' || belowBlock.name === 'water') {
log(bot, `Dug down ${i-1} blocks, but reached ${belowBlock ? belowBlock.name : '(lava/water)'}`)
return false;
}
const MAX_FALL_BLOCKS = 2;
let num_fall_blocks = 0;
for (let j = 0; j <= MAX_FALL_BLOCKS; j++) {
if (!belowBlock || (belowBlock.name !== 'air' && belowBlock.name !== 'cave_air')) {
break;
}
num_fall_blocks++;
belowBlock = bot.blockAt(belowBlock.position.offset(0, -1, 0));
}
if (num_fall_blocks > MAX_FALL_BLOCKS) {
log(bot, `Dug down ${i-1} blocks, but reached a drop below the next block.`);
return false;
}
if (targetBlock.name === 'air' || targetBlock.name === 'cave_air') {
log(bot, 'Skipping air block');
console.log(targetBlock.position);
continue;
}
let dug = await breakBlockAt(bot, targetBlock.position.x, targetBlock.position.y, targetBlock.position.z);
if (!dug) {
log(bot, 'Failed to dig block at position:' + targetBlock.position);
return false;
}
}
log(bot, `Dug down ${distance} blocks.`);
return true;
}


@@ -354,6 +354,7 @@ export async function isClearPath(bot, target) {
     let movements = new pf.Movements(bot)
     movements.canDig = false;
     movements.canPlaceOn = false;
+    movements.canOpenDoors = false;
     let goal = new pf.goals.GoalNear(target.position.x, target.position.y, target.position.z, 1);
     let path = await bot.pathfinder.getPathTo(movements, goal, 100);
     return path.status === 'success';


@ -0,0 +1,115 @@
import { io } from 'socket.io-client';
import convoManager from './conversation.js';
import { setSettings } from './settings.js';
// agents connection to mindserver
// always connect to localhost
class MindServerProxy {
constructor() {
if (MindServerProxy.instance) {
return MindServerProxy.instance;
}
this.socket = null;
this.connected = false;
this.agents = [];
MindServerProxy.instance = this;
}
async connect(name, port) {
if (this.connected) return;
this.name = name;
this.socket = io(`http://localhost:${port}`);
await new Promise((resolve, reject) => {
this.socket.on('connect', resolve);
this.socket.on('connect_error', (err) => {
console.error('Connection failed:', err);
reject(err);
});
});
this.connected = true;
console.log(name, 'connected to MindServer');
this.socket.on('disconnect', () => {
console.log('Disconnected from MindServer');
this.connected = false;
});
this.socket.on('chat-message', (agentName, json) => {
convoManager.receiveFromBot(agentName, json);
});
this.socket.on('agents-update', (agents) => {
this.agents = agents;
convoManager.updateAgents(agents);
if (this.agent?.task) {
console.log(this.agent.name, 'updating available agents');
this.agent.task.updateAvailableAgents(agents);
}
});
this.socket.on('restart-agent', (agentName) => {
console.log(`Restarting agent: ${agentName}`);
this.agent.cleanKill();
});
this.socket.on('send-message', (agentName, message) => {
try {
this.agent.respondFunc("NO USERNAME", message);
} catch (error) {
console.error('Error: ', JSON.stringify(error, Object.getOwnPropertyNames(error)));
}
});
// Request settings and wait for response
await new Promise((resolve, reject) => {
const timeout = setTimeout(() => {
reject(new Error('Settings request timed out after 5 seconds'));
}, 5000);
this.socket.emit('get-settings', name, (response) => {
clearTimeout(timeout);
if (response.error) {
return reject(new Error(response.error));
}
setSettings(response.settings);
resolve();
});
});
}
setAgent(agent) {
this.agent = agent;
}
getAgents() {
return this.agents;
}
getNumOtherAgents() {
return this.agents.length - 1;
}
login() {
this.socket.emit('login-agent', this.agent.name);
}
shutdown() {
this.socket.emit('shutdown');
}
getSocket() {
return this.socket;
}
}
// Create and export a singleton instance
export const serverProxy = new MindServerProxy();
export function sendBotChatToServer(agentName, json) {
serverProxy.getSocket().emit('chat-message', agentName, json);
}
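A rough sketch of the intended startup order (the agent object, bot name, and port are placeholders; the real call sites live in the agent bootstrap code, which is not part of this diff):

import { serverProxy } from './mindserver_proxy.js';

// connect first so settings are fetched from the MindServer before the agent starts
await serverProxy.connect('andy', 8080);
serverProxy.setAgent(agent); // any object exposing cleanKill, respondFunc, and task
serverProxy.login();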


@@ -1,7 +1,7 @@
 import * as skills from './library/skills.js';
 import * as world from './library/world.js';
 import * as mc from '../utils/mcdata.js';
-import settings from '../../settings.js'
+import settings from './settings.js'
 import convoManager from './conversation.js';
 async function say(agent, message) {
@@ -83,6 +83,7 @@ const modes_list = [
         stuck_time: 0,
         last_time: Date.now(),
         max_stuck_time: 20,
+        prev_dig_block: null,
         update: async function (agent) {
             if (agent.isIdle()) {
                 this.prev_location = null;
@@ -90,12 +91,17 @@ const modes_list = [
                 return; // don't get stuck when idle
             }
             const bot = agent.bot;
-            if (this.prev_location && this.prev_location.distanceTo(bot.entity.position) < this.distance) {
+            const cur_dig_block = bot.targetDigBlock;
+            if (cur_dig_block && !this.prev_dig_block) {
+                this.prev_dig_block = cur_dig_block;
+            }
+            if (this.prev_location && this.prev_location.distanceTo(bot.entity.position) < this.distance && cur_dig_block == this.prev_dig_block) {
                 this.stuck_time += (Date.now() - this.last_time) / 1000;
             }
             else {
                 this.prev_location = bot.entity.position.clone();
                 this.stuck_time = 0;
+                this.prev_dig_block = null;
             }
             if (this.stuck_time > this.max_stuck_time) {
                 say(agent, 'I\'m stuck!');
@@ -108,6 +114,11 @@ const modes_list = [
                 });
             }
             this.last_time = Date.now();
+        },
+        unpause: function () {
+            this.prev_location = null;
+            this.stuck_time = 0;
+            this.prev_dig_block = null;
         }
     },
     {
@@ -145,7 +156,7 @@ const modes_list = [
     {
         name: 'hunting',
         description: 'Hunt nearby animals when idle.',
-        interrupts: [],
+        interrupts: ['action:followPlayer'],
         on: true,
         active: false,
         update: async function (agent) {
@@ -336,13 +347,18 @@ class ModeController {
     }
     unpause(mode_name) {
-        modes_map[mode_name].paused = false;
+        const mode = modes_map[mode_name];
+        //if unpause func is defined and mode is currently paused
+        if (mode.unpause && mode.paused) {
+            mode.unpause();
+        }
+        mode.paused = false;
     }
     unPauseAll() {
         for (let mode of modes_list) {
             if (mode.paused) console.log(`Unpausing mode ${mode.name}`);
-            mode.paused = false;
+            this.unpause(mode.name);
         }
     }

src/agent/settings.js (new file, 7 lines)

@ -0,0 +1,7 @@
// extremely lightweight obj that can be imported/modified by any file
let settings = {};
export default settings;
export function setSettings(new_settings) {
Object.keys(settings).forEach(key => delete settings[key]);
Object.assign(settings, new_settings);
}
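Because setSettings mutates the shared object instead of reassigning it, every module that already imported settings keeps seeing fresh values; a small sketch of the assumed usage:

import settings, { setSettings } from './settings.js';

setSettings({ block_place_delay: 100 });
console.log(settings.block_place_delay); // 100; every importer holds the same object reference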

src/agent/speak.js (new file, 43 lines)

@ -0,0 +1,43 @@
import { exec } from 'child_process';
let speakingQueue = [];
let isSpeaking = false;
export function say(textToSpeak) {
speakingQueue.push(textToSpeak);
if (!isSpeaking) {
processQueue();
}
}
function processQueue() {
if (speakingQueue.length === 0) {
isSpeaking = false;
return;
}
isSpeaking = true;
const textToSpeak = speakingQueue.shift();
const isWin = process.platform === "win32";
const isMac = process.platform === "darwin";
let command;
if (isWin) {
command = `powershell -Command "Add-Type -AssemblyName System.Speech; $s = New-Object System.Speech.Synthesis.SpeechSynthesizer; $s.Rate = 2; $s.Speak(\\"${textToSpeak}\\"); $s.Dispose()"`;
} else if (isMac) {
command = `say "${textToSpeak}"`;
} else {
command = `espeak "${textToSpeak}"`;
}
exec(command, (error, stdout, stderr) => {
if (error) {
console.error(`Error: ${error.message}`);
console.error(`${error.stack}`);
} else if (stderr) {
console.error(`Error: ${stderr}`);
}
processQueue(); // Continue with the next message in the queue
});
}
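Playback is serialized through the queue, so callers just fire and forget (sketch; the text is arbitrary). Windows uses PowerShell's System.Speech, macOS uses say, and other platforms fall back to espeak:

import { say } from './speak.js';
say('Starting the mining task.');
say('Done.'); // spoken only after the first utterance finishes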


@ -1,199 +0,0 @@
import { readFileSync } from 'fs';
import { executeCommand } from './commands/index.js';
import { getPosition } from './library/world.js'
import settings from '../../settings.js';
export class TaskValidator {
constructor(data, agent) {
this.target = data.target;
this.number_of_target = data.number_of_target;
this.agent = agent;
}
validate() {
try{
let valid = false;
let total_targets = 0;
this.agent.bot.inventory.slots.forEach((slot) => {
if (slot && slot.name.toLowerCase() === this.target) {
total_targets += slot.count;
}
if (slot && slot.name.toLowerCase() === this.target && slot.count >= this.number_of_target) {
valid = true;
console.log('Task is complete');
}
});
if (total_targets >= this.number_of_target) {
valid = true;
console.log('Task is complete');
}
return valid;
} catch (error) {
console.error('Error validating task:', error);
return false;
}
}
}
export class Task {
constructor(agent, task_path, task_id) {
this.agent = agent;
this.data = null;
this.taskTimeout = 300;
this.taskStartTime = Date.now();
this.validator = null;
this.blocked_actions = [];
if (task_path && task_id) {
this.data = this.loadTask(task_path, task_id);
this.taskTimeout = this.data.timeout || 300;
this.taskStartTime = Date.now();
this.validator = new TaskValidator(this.data, this.agent);
this.blocked_actions = this.data.blocked_actions[this.agent.count_id.toString()] || [];
this.restrict_to_inventory = !!this.data.restrict_to_inventory;
if (this.data.goal)
this.blocked_actions.push('!endGoal');
if (this.data.conversation)
this.blocked_actions.push('!endConversation');
}
}
loadTask(task_path, task_id) {
try {
const tasksFile = readFileSync(task_path, 'utf8');
const tasks = JSON.parse(tasksFile);
const task = tasks[task_id];
if (!task) {
throw new Error(`Task ${task_id} not found`);
}
if ((!task.agent_count || task.agent_count <= 1) && this.agent.count_id > 0) {
task = null;
}
return task;
} catch (error) {
console.error('Error loading task:', error);
process.exit(1);
}
}
isDone() {
if (this.validator && this.validator.validate())
return {"message": 'Task successful', "code": 2};
// TODO check for other terminal conditions
// if (this.task.goal && !this.self_prompter.isActive())
// return {"message": 'Agent ended goal', "code": 3};
// if (this.task.conversation && !inConversation())
// return {"message": 'Agent ended conversation', "code": 3};
if (this.taskTimeout) {
const elapsedTime = (Date.now() - this.taskStartTime) / 1000;
if (elapsedTime >= this.taskTimeout) {
console.log('Task timeout reached. Task unsuccessful.');
return {"message": 'Task timeout reached', "code": 4};
}
}
return false;
}
async initBotTask() {
if (this.data === null)
return;
let bot = this.agent.bot;
let name = this.agent.name;
bot.chat(`/clear ${name}`);
console.log(`Cleared ${name}'s inventory.`);
//kill all drops
if (this.agent.count_id === 0) {
bot.chat(`/kill @e[type=item]`);
}
//wait for a bit so inventory is cleared
await new Promise((resolve) => setTimeout(resolve, 500));
let initial_inventory = null;
if (this.data.agent_count > 1) {
initial_inventory = this.data.initial_inventory[this.agent.count_id.toString()];
console.log("Initial inventory:", initial_inventory);
} else if (this.data) {
console.log("Initial inventory:", this.data.initial_inventory);
initial_inventory = this.data.initial_inventory;
}
if ("initial_inventory" in this.data) {
console.log("Setting inventory...");
console.log("Inventory to set:", initial_inventory);
for (let key of Object.keys(initial_inventory)) {
console.log('Giving item:', key);
bot.chat(`/give ${name} ${key} ${initial_inventory[key]}`);
};
//wait for a bit so inventory is set
await new Promise((resolve) => setTimeout(resolve, 500));
console.log("Done giving inventory items.");
}
// Function to generate random numbers
function getRandomOffset(range) {
return Math.floor(Math.random() * (range * 2 + 1)) - range;
}
let human_player_name = null;
let available_agents = settings.profiles.map((p) => JSON.parse(readFileSync(p, 'utf8')).name); // TODO this does not work with command line args
// Finding if there is a human player on the server
for (const playerName in bot.players) {
const player = bot.players[playerName];
if (!available_agents.some((n) => n === playerName)) {
console.log('Found human player:', player.username);
human_player_name = player.username
break;
}
}
// If there are multiple human players, teleport to the first one
// teleport near a human player if found by default
if (human_player_name) {
console.log(`Teleporting ${name} to human ${human_player_name}`)
bot.chat(`/tp ${name} ${human_player_name}`) // teleport on top of the human player
}
await new Promise((resolve) => setTimeout(resolve, 200));
// now all bots are teleport on top of each other (which kinda looks ugly)
// Thus, we need to teleport them to random distances to make it look better
/*
Note : We don't want randomness for construction task as the reference point matters a lot.
Another reason for no randomness for construction task is because, often times the user would fly in the air,
then set a random block to dirt and teleport the bot to stand on that block for starting the construction,
This was done by MaxRobinson in one of the youtube videos.
*/
if (this.data.type !== 'construction') {
const pos = getPosition(bot);
const xOffset = getRandomOffset(5);
const zOffset = getRandomOffset(5);
bot.chat(`/tp ${name} ${Math.floor(pos.x + xOffset)} ${pos.y + 3} ${Math.floor(pos.z + zOffset)}`);
await new Promise((resolve) => setTimeout(resolve, 200));
}
if (this.data.agent_count && this.data.agent_count > 1) {
// TODO wait for other bots to join
await new Promise((resolve) => setTimeout(resolve, 10000));
if (available_agents.length < this.data.agent_count) {
console.log(`Missing ${this.data.agent_count - available_agents.length} bot(s).`);
this.agent.killAll();
}
}
if (this.data.goal) {
await executeCommand(this.agent, `!goal("${this.data.goal}")`);
}
if (this.data.conversation && this.agent.count_id === 0) {
let other_name = available_agents.filter(n => n !== name)[0];
await executeCommand(this.agent, `!startConversation("${other_name}", "${this.data.conversation}")`);
}
}
}

File diff suppressed because it is too large


@ -0,0 +1,358 @@
import { getPosition } from "../library/world.js";
export class CookingTaskInitiator {
constructor(data, bot) {
this.bot = bot;
this.data = data;
}
async init() {
let bot = this.bot;
//// Setting up the cooking world using minecraft cheats ////
// Only run the setup if the agent is the first one
// Clear and prepare the base area
await bot.chat(`/fill ~ ~-1 ~ ~50 ~-3 ~50 grass_block`);
await bot.chat(`/fill ~ ~-1 ~ ~-50 ~-3 ~50 grass_block`);
await bot.chat(`/fill ~ ~-1 ~ ~-50 ~-3 ~-50 grass_block`);
await bot.chat(`/fill ~ ~-1 ~ ~50 ~-3 ~-50 grass_block`);
await bot.chat(`/fill ~ ~ ~ ~50 ~10 ~50 air`);
await bot.chat(`/fill ~ ~ ~ ~-50 ~10 ~50 air`);
await bot.chat(`/fill ~ ~ ~ ~-50 ~10 ~-50 air`);
await bot.chat(`/fill ~ ~ ~ ~50 ~10 ~-50 air`);
console.log("Base area cleared and prepared.");
const position = getPosition(bot);
const botX = Math.floor(position.x);
const botZ = Math.floor(position.z);
// Region management system
const isOverlapping = (newXMin, newXMax, newZMin, newZMax, occupiedRegions) => {
for (const region of occupiedRegions) {
if (newXMin < region.xMax && newXMax > region.xMin &&
newZMin < region.zMax && newZMax > region.zMin) {
return true;
}
}
return false;
};
const findValidPosition = (width, depth, occupiedRegions) => {
const maxXStart = position.x + 25 - width; // Constrain to 50x50 area
const minXStart = position.x - 25;
const maxZStart = position.z + 25 - depth;
const minZStart = position.z - 25;
let attempts = 0;
while (attempts < 10000) {
const xStart = Math.floor(minXStart + Math.random() * (maxXStart - minXStart + 1));
const zStart = Math.floor(minZStart + Math.random() * (maxZStart - minZStart + 1));
const xMin = xStart;
const xMax = xStart + width - 1;
const zMin = zStart;
const zMax = zStart + depth - 1;
if (!isOverlapping(xMin, xMax, zMin, zMax, occupiedRegions)) {
return { xStart, zStart };
}
attempts++;
}
throw new Error('Failed to find non-overlapping position after 10000 attempts');
};
// Define all regions with their sizes
const regionsToPlace = [
{ type: 'wheat', width: 3, depth: 3 },
{ type: 'beetroots', width: 3, depth: 3 },
{ type: 'mushrooms', width: 3, depth: 3 },
{ type: 'potatoes', width: 3, depth: 3 },
{ type: 'carrots', width: 3, depth: 3 },
{ type: 'sugar_cane', width: 3, depth: 3 },
{ type: 'sugar_cane', width: 3, depth: 3 },
{ type: 'pumpkins', width: 5, depth: 1 },
{ type: 'house', width: 11, depth: 11 }
];
// Expand the regions of each type to make sure they don't overlap
for (let i = 0; i < regionsToPlace.length; i++) {
const region = regionsToPlace[i];
const { width, depth } = region;
regionsToPlace[i].width = width + 4;
regionsToPlace[i].depth = depth + 4;
}
const occupiedRegions = [{
xMin : botX - 1,
xMax : botX + 1,
zMin : botZ - 1,
zMax : botZ + 1
}];
const regionPositions = {};
// Calculate positions for all regions
for (const region of regionsToPlace) {
const { xStart, zStart } = findValidPosition(region.width, region.depth, occupiedRegions);
occupiedRegions.push({
xMin: xStart,
xMax: xStart + region.width - 1,
zMin: zStart,
zMax: zStart + region.depth - 1
});
if (region.type === 'sugar_cane') {
if (!regionPositions.sugar_cane) regionPositions.sugar_cane = [];
regionPositions.sugar_cane.push({ xStart, zStart });
} else {
regionPositions[region.type] = { xStart, zStart };
}
}
// Execute all planting
// await plantWheat(regionPositions.wheat.xStart, regionPositions.wheat.zStart);
await this.plantCrops(regionPositions.wheat.xStart, regionPositions.wheat.zStart, 'wheat[age=7]', true);
await this.plantCrops(regionPositions.beetroots.xStart, regionPositions.beetroots.zStart, 'beetroots[age=3]', true);
await this.plantMushrooms(regionPositions.mushrooms.xStart, regionPositions.mushrooms.zStart);
await new Promise(resolve => setTimeout(resolve, 300));
await this.plantCrops(regionPositions.potatoes.xStart, regionPositions.potatoes.zStart, 'potatoes[age=7]', true);
await this.plantCrops(regionPositions.carrots.xStart, regionPositions.carrots.zStart, 'carrots[age=7]', true);
await this.plantCrops(regionPositions.pumpkins.xStart, regionPositions.pumpkins.zStart, 'pumpkin', false);
await this.plantSugarCane(regionPositions.sugar_cane);
await new Promise(resolve => setTimeout(resolve, 300));
console.log("planted crops!");
// await plantPumpkins(regionPositions.pumpkins.xStart, regionPositions.pumpkins.zStart);
// await new Promise(resolve => setTimeout(resolve, 300));
await this.buildHouse(regionPositions.house.xStart, regionPositions.house.zStart);
console.log("House built!");
// Add a chest with cooking items near the bot
// const addChestWithItems = async () => {
// // Find a valid position near the bot (within 10 blocks)
// const findChestPosition = () => {
// const maxAttempts = 100;
// for (let attempt = 0; attempt < maxAttempts; attempt++) {
// const x = botX + Math.floor(Math.random() * 10 - 5); // Within ±5 blocks X
// const z = botZ + Math.floor(Math.random() * 10 - 5); // Within ±5 blocks Z
// const y = position.y;
// // Check if the position is not overlapping with existing structures
// if (!isOverlapping(x, x, z, z, occupiedRegions)) {
// return { x, y, z };
// }
// }
// throw new Error('Failed to find valid chest position');
// };
// const { x, y, z } = findChestPosition();
// // Place the chest
// await bot.chat(`/setblock ${x} ${y} ${z} chest`);
const cookingItems = [
['minecraft:milk_bucket', 1], // Non-stackable
['minecraft:egg', 16], // Stacks to 16
['minecraft:dandelion', 64], // Stacks to 64
['minecraft:sugar', 64],
['minecraft:cocoa_beans', 64],
['minecraft:apple', 64],
['minecraft:milk_bucket', 1],
['minecraft:milk_bucket', 1],
['minecraft:salmon', 64],
['minecraft:cod', 64],
['minecraft:kelp', 64],
['minecraft:dried_kelp', 64],
['minecraft:sweet_berries', 64],
['minecraft:honey_bottle', 1], // Non-stackable
['minecraft:glow_berries', 64],
['minecraft:bowl', 64],
['minecraft:milk_bucket', 1],
['minecraft:milk_bucket', 1],
['minecraft:milk_bucket', 1],
['minecraft:milk_bucket', 1],
['minecraft:cooked_salmon', 64],
['minecraft:cooked_cod', 64],
['minecraft:gold_ingot', 64],
['minecraft:oak_planks', 64],
['minecraft:iron_ingot', 64],
['minecraft:milk_bucket', 1],
['minecraft:milk_bucket', 1],
];
// // Fill the chest with random cooking items
// for (let slot = 0; slot < cookingItems.length; slot++) { // Chest has 27 slots
// const randomItem = cookingItems[slot];
// await bot.chat(`/item replace block ${x} ${y} ${z} container.${slot} with ${randomItem[0]} ${randomItem[1]}`);
// }
// // Mark the chest area as occupied
// occupiedRegions.push({
// xMin: x,
// xMax: x,
// zMin: z,
// zMax: z
// });
// };
// await addChestWithItems();
await new Promise(resolve => setTimeout(resolve, 300));
const animals = ['chicken', 'cow', 'llama', 'mooshroom', 'pig', 'rabbit', 'sheep'];
// Animal management
await this.killEntities(["item"]);
await this.killEntities(animals);
await this.killEntities(["item"]);
console.log("killed entities!");
await new Promise(resolve => setTimeout(resolve, 300));
// Summon new animals
await this.summonAnimals(animals, 8);
console.log("summoned animals!");
}
async plantCrops (xStart, zStart, crop_and_age, till=true) {
const position = getPosition(this.bot);
for (let i = 0; i < 6; i++) {
for (let j = 0; j < 6; j++) {
const x = xStart + i;
const z = zStart + j;
if (till) {
await this.bot.chat(`/setblock ${x} ${position.y - 1} ${z} farmland`);
}
await this.bot.chat(`/setblock ${x} ${position.y} ${z} ${crop_and_age}`);
}
}
await new Promise(resolve => setTimeout(resolve, 300));
}
async plantSugarCane (patches) {
const position = getPosition(this.bot);
for (const patch of patches) {
const xCenter = patch.xStart + 1;
const zCenter = patch.zStart + 1;
await this.bot.chat(`/setblock ${xCenter} ${position.y - 1} ${zCenter} water`);
const offsets = [[1, 0], [-1, 0], [0, 1], [0, -1]];
for (const [dx, dz] of offsets) {
await this.bot.chat(`/setblock ${xCenter + dx} ${position.y} ${zCenter + dz} sugar_cane[age=15]`);
}
}
};
async plantMushrooms(xStart, zStart) {
const position = getPosition(this.bot);
for (let i = 0; i < 4; i++) {
for (let j = 0; j < 5; j++) {
const x = xStart + i;
const z = zStart + j;
await this.bot.chat(`/setblock ${x} ${position.y - 1} ${z} mycelium`);
const mushroomType = (i + j) % 2 === 0 ? 'red_mushroom' : 'brown_mushroom';
await this.bot.chat(`/setblock ${x} ${position.y} ${z} ${mushroomType}`);
}
}
}
async summonAnimals (animals, amount) {
const position = getPosition(this.bot);
for (const animal of animals) {
for (let i = 0; i < amount; i++) {
const x = position.x - 25 + Math.random() * 50;
const z = position.z - 25 + Math.random() * 50;
await this.bot.chat(`/summon ${animal} ${Math.floor(x)} ${position.y} ${Math.floor(z)}`);
}
}
}
async killEntities(entities) {
for (const entity of entities) {
await this.bot.chat(`/kill @e[type=${entity},distance=..200]`);
}
}
async buildHouse (xStart, zStart) {
const position = getPosition(this.bot);
const startX = xStart;
const startY = position.y;
const startZ = zStart;
const width = 10;
const depth = 10;
const height = 5;
// Foundation and walls
for (let x = startX; x <= startX + depth; x++) {
for (let y = startY; y <= startY + height; y++) {
for (let z = startZ; z <= startZ + width; z++) {
if (y === startY) {
if (!(x === startX + depth - 1 && z === startZ + Math.floor(width / 2))) {
await this.bot.chat(`/setblock ${x} ${y} ${z} stone_bricks`);
}
continue;
}
if (x === startX || x === startX + depth ||
z === startZ || z === startZ + width ||
y === startY + height) {
const isWindow = (
(x === startX || x === startX + depth) &&
(z === startZ + 3 || z === startZ + width - 3) &&
(y === startY + 2 || y === startY + 3)
) || (
(z === startZ || z === startZ + width) &&
(x === startX + 3 || x === startX + depth - 3) &&
(y === startY + 2 || y === startY + 3)
);
const isDoor = x === startX + depth &&
z === startZ + Math.floor(width / 2) &&
(y === startY + 1 || y === startY + 2);
if (!isWindow && !isDoor) {
await this.bot.chat(`/setblock ${x} ${y} ${z} stone_bricks`);
}
}
}
}
}
// Entrance features
const doorZ = startZ + Math.floor(width / 2);
await this.bot.chat(`/setblock ${startX + depth - 1} ${startY} ${doorZ} stone_brick_stairs[facing=west]`);
await this.bot.chat(`/setblock ${startX + depth} ${startY} ${doorZ} air`);
// await bot.chat(`/setblock ${startX + depth - 1} ${startY} ${doorZ - 1} stone_bricks`);
// await bot.chat(`/setblock ${startX + depth - 1} ${startY} ${doorZ + 1} stone_bricks`);
// await bot.chat(`/setblock ${startX + depth} ${startY} ${doorZ} oak_door[half=lower,hinge=left,facing=west,powered=false]`);
// await bot.chat(`/setblock ${startX + depth} ${startY + 1} ${doorZ} oak_door[half=upper,hinge=left,facing=west,powered=false]`);
// Roof construction
for (let i = 0; i < 3; i++) {
for (let x = startX + i; x <= startX + depth - i; x++) {
for (let z = startZ + i; z <= startZ + width - i; z++) {
if (x === startX + i || x === startX + depth - i ||
z === startZ + i || z === startZ + width - i) {
await this.bot.chat(`/setblock ${x} ${startY + height + i} ${z} cobblestone`);
}
}
}
}
// Interior items
await this.bot.chat(`/setblock ${startX + 4} ${startY + 1} ${startZ + 3} crafting_table`);
await this.bot.chat(`/setblock ${startX + 4} ${startY + 1} ${startZ + 5} furnace`);
// Add fuel to the furnace
await this.bot.chat(`/data merge block ${startX + 4} ${startY + 1} ${startZ + 5} {Items:[{Slot:1b,id:"minecraft:coal",Count:64b}]}`)
await this.bot.chat(`/setblock ${startX + 4} ${startY + 1} ${startZ + 7} smoker`);
// Add fuel to the smoker
await this.bot.chat(`/data merge block ${startX + 4} ${startY + 1} ${startZ + 7} {Items:[{Slot:1b,id:"minecraft:coal",Count:64b}]}`)
await this.bot.chat(`/setblock ${startX + depth - 3} ${startY + 1} ${startZ + 2} bed`);
await new Promise(resolve => setTimeout(resolve, 300));
}
}

src/agent/tasks/tasks.js (new file, 594 lines)

@ -0,0 +1,594 @@
import { readFileSync , writeFileSync, existsSync} from 'fs';
import { executeCommand } from '../commands/index.js';
import { getPosition } from '../library/world.js';
import { ConstructionTaskValidator, Blueprint } from './construction_tasks.js';
import { CookingTaskInitiator } from './cooking_tasks.js';
const PROGRESS_FILE = './hells_kitchen_progress.json';
const hellsKitchenProgressManager = {
readProgress: function() {
try {
if (existsSync(PROGRESS_FILE)) {
const data = readFileSync(PROGRESS_FILE, 'utf8');
return JSON.parse(data);
}
} catch (err) {
console.error('Error reading progress file:', err);
}
return { taskId: null, agent0Complete: false, agent1Complete: false };
},
writeProgress: function(progress) {
try {
writeFileSync(PROGRESS_FILE, JSON.stringify(progress), 'utf8');
} catch (err) {
console.error('Error writing progress file:', err);
}
},
resetTask: function(taskId) {
const progress = { taskId, agent0Complete: false, agent1Complete: false };
this.writeProgress(progress);
return progress;
},
updateAgentProgress: function(taskId, agentId, isComplete) {
const progress = this.readProgress();
// If it's a different task, reset first
if (progress.taskId !== taskId) {
progress.taskId = taskId;
progress.agent0Complete = false;
progress.agent1Complete = false;
}
// Update the specific agent's status
if (agentId === 0) progress.agent0Complete = isComplete;
if (agentId === 1) progress.agent1Complete = isComplete;
this.writeProgress(progress);
return progress;
},
isTaskComplete: function(taskId) {
const progress = this.readProgress();
if (progress.taskId !== taskId) return false;
return progress.agent0Complete && progress.agent1Complete;
}
};
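The progress file is a small JSON blob shared by both agents; a sketch of the flow (the task id is made up):

// ./hells_kitchen_progress.json ends up looking like:
// { "taskId": "hells_kitchen_example", "agent0Complete": true, "agent1Complete": false }
hellsKitchenProgressManager.resetTask('hells_kitchen_example');
hellsKitchenProgressManager.updateAgentProgress('hells_kitchen_example', 0, true);
console.log(hellsKitchenProgressManager.isTaskComplete('hells_kitchen_example')); // false until agent 1 also reports true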
//todo: modify validator code to return an object with valid and score -> do more testing hahah
//todo: figure out how to log these things to the same place as bots/histories
// export class CraftTaskValidator {
// constructor(data, agent) {
// this.target = data.target;
// this.number_of_target = data.number_of_target;
// this.agent = agent;
/**
* Validates the presence of required items in an agent's inventory
* @param {Object} data - Task data containing target and quantity information
* @param {Object} agent - Agent object with bot inventory
* @returns {Object} Validation result with success status and missing items
*/
function checkItemPresence(data, agent) {
try {
// Special handling for hells_kitchen tasks
if (data.task_id && data.task_id.endsWith('hells_kitchen') && Array.isArray(data.target) && data.target.length === 2) {
// Get agent ID and target for this agent
const agentId = agent.count_id;
if (agentId === 0 || agentId === 1) {
// Use only the corresponding element from the target list
const targetForThisAgent = data.target[agentId];
const modifiedData = {
...data,
target: targetForThisAgent
};
// Check if this agent has their required item
const agentResult = checkItemForSingleAgent(modifiedData, agent);
// Update the file-based progress tracker
const progress = hellsKitchenProgressManager.updateAgentProgress(
data.task_id,
agentId,
agentResult.success
);
// // Log the current state
// console.log(`Agent ${agentId} has item: ${agentResult.success}`);
// console.log(`Task state: Agent0=${progress.agent0Complete}, Agent1=${progress.agent1Complete}`);
// Return combined result - success only if both agents have their items
return {
success: progress.agent0Complete && progress.agent1Complete,
missingItems: agentResult.missingItems,
agentComplete: agentResult.success // Individual agent status for debugging
};
}
}
// Non-hells_kitchen tasks use the standard check
return checkItemForSingleAgent(data, agent);
} catch (error) {
console.error('Error checking item presence:', error);
return {
success: false,
missingItems: [],
error: error.message
};
}
}
/**
* Helper function to check a single agent's inventory
* Extracted from the original checkItemPresence logic
*/
function checkItemForSingleAgent(data, agent) {
function isTargetDictionaryWithQuantities(target) {
return typeof target === 'object' &&
!Array.isArray(target) &&
target !== null &&
Object.values(target).every(value => typeof value === 'number');
}
function normalizeTargets(target) {
if (typeof target === 'string') {
return { [target]: 1 };
} else if (Array.isArray(target)) {
return target.reduce((acc, item) => {
acc[item] = 1;
return acc;
}, {});
} else if (typeof target === 'object' && target !== null) {
return target;
}
throw new Error('Invalid target format');
}
function normalizeQuantities(targets, quantities) {
if (quantities === undefined) {
return Object.keys(targets).reduce((acc, key) => {
acc[key] = 1;
return acc;
}, {});
} else if (typeof quantities === 'number') {
return Object.keys(targets).reduce((acc, key) => {
acc[key] = quantities;
return acc;
}, {});
} else if (typeof quantities === 'object' && quantities !== null) {
return quantities;
}
throw new Error('Invalid number_of_target format');
}
// First normalize targets to always have a consistent format
const targets = normalizeTargets(data.target);
// Determine the required quantities
const requiredQuantities = isTargetDictionaryWithQuantities(data.target)
? data.target
: normalizeQuantities(targets, data.number_of_target);
// Count items in inventory
const inventoryCount = {};
agent.bot.inventory.slots.forEach((slot) => {
if (slot) {
const itemName = slot.name.toLowerCase();
inventoryCount[itemName] = (inventoryCount[itemName] || 0) + slot.count;
}
});
// Check if all required items are present in sufficient quantities
const missingItems = [];
let allTargetsMet = true;
for (const [item, requiredCount] of Object.entries(requiredQuantities)) {
const itemName = item.toLowerCase();
const currentCount = inventoryCount[itemName] || 0;
if (currentCount < requiredCount) {
allTargetsMet = false;
missingItems.push({
item: itemName,
required: requiredCount,
current: currentCount,
missing: requiredCount - currentCount
});
}
}
return {
success: allTargetsMet,
missingItems: missingItems
};
}
class CookingCraftingTaskValidator {
constructor(data, agent) {
this.data = data;
this.agent = agent;
}
validate() {
const result = checkItemPresence(this.data, this.agent);
let score = 0;
if (result.success) {
score = 1;
}
return {
"valid": result.success,
"score": score,
};
}
}
export class Task {
constructor(agent, task_data, taskStartTime = null) {
this.agent = agent;
this.data = null;
if (taskStartTime !== null)
this.taskStartTime = taskStartTime;
else
this.taskStartTime = Date.now();
this.validator = null;
this.reset_function = null;
this.blocked_actions = [];
this.task_data = task_data;
if (task_data) {
console.log('Starting task', task_data.task_id);
console.log("Task start time set to", this.taskStartTime);
if (task_data.task_id.endsWith('hells_kitchen')) {
// Reset hells_kitchen progress when a new task starts
hellsKitchenProgressManager.resetTask(task_data.task_id);
console.log('Reset Hells Kitchen progress for new task');
}
this.data = task_data;
this.task_type = this.data.type;
if (this.task_type === 'construction' && this.data.blueprint) {
this.blueprint = new Blueprint(this.data.blueprint);
this.goal = this.data.goal + ' \n' + this.blueprint.explain() + " \n" + "make sure to place the lower levels of the blueprint first";
this.conversation = this.data.conversation + ' \n' + this.blueprint.explain();
} else {
this.goal = this.data.goal;
this.conversation = this.data.conversation;
}
this.taskTimeout = this.data.timeout || 300;
// Set validator based on task_type
// do goal initialization here
// let agentGoal = this.getAgentGoal();
// if (agentGoal) {
// agentGoal += "You have to collaborate with other agents/bots, namely " + this.available_agents.filter(n => n !== this.name).join(', ') + " to complete the task as soon as possible by dividing the work among yourselves.";
// console.log(`Setting goal for agent ${this.agent.count_id}: ${agentGoal}`);
// await executeCommand(this.agent, `!goal("${agentGoal}")`);
// }
if (this.task_type === 'construction') {
this.validator = new ConstructionTaskValidator(this.data, this.agent);
} else if (this.task_type === 'cooking' || this.task_type === 'techtree') {
this.validator = new CookingCraftingTaskValidator(this.data, this.agent);
} else {
this.validator = null;
}
if (this.data.blocked_actions) {
this.blocked_actions = this.data.blocked_actions[this.agent.count_id.toString()] || [];
} else {
this.blocked_actions = [];
}
this.restrict_to_inventory = !!this.data.restrict_to_inventory;
if (this.data.goal)
this.blocked_actions.push('!endGoal');
if (this.conversation)
this.blocked_actions.push('!endConversation');
}
else {
console.log('No task.');
}
this.name = this.agent.name;
this.available_agents = []
}
updateAvailableAgents(agents) {
this.available_agents = agents
}
// Add this method if you want to manually reset the hells_kitchen progress
resetHellsKitchenProgress() {
if (this.task_id && this.task_id.endsWith('hells_kitchen')) {
hellsKitchenProgressManager.resetTask(this.task_id);
console.log('Hells Kitchen progress reset manually');
}
}
getAgentGoal() {
if (!this.data || !this.data.goal) {
return null;
}
let add_string = '';
if (this.task_type === 'cooking') {
if (this.data.agent_count > 2) {
if (this.name.toLowerCase().startsWith('andy')) {
add_string = '\nIn the end, all the food items should be given to you by other bots. Make sure to talk to all the agents using startConversation command to coordinate the task instead of talking to just one agent. You can even end current conversation with any agent using endConversation command and then talk to a new agent using startConversation command.';
}
else {
add_string = '\nIn the end, all the food items should be given to one single bot whose name starts with andy or Andy. Make sure to talk to all the agents using startConversation command to coordinate the task instead of talking to just one agent. You can even end current conversation with any agent using endConversation command and then talk to a new agent using startConversation command.';
}
}
else {
if (this.data.task_id && this.data.task_id.endsWith('hells_kitchen')) {
add_string = '';
}
else {
add_string = '\nIn the end, all the food items should be given to one single bot.';
}
}
}
if (this.task_type === 'techtree') {
if (this.data.agent_count > 2) {
add_string = '\nMake sure to share resources among all agents and to talk to all the agents using startConversation command to coordinate the task instead of talking to just one agent. You can even end current conversation with any agent using endConversation command and then talk to a new agent using startConversation command.'
}
}
// If goal is a string, all agents share the same goal
if (typeof this.data.goal === 'string') {
return this.data.goal + add_string;
}
// If goal is an object, get the goal for this agent's count_id
if (typeof this.data.goal === 'object' && this.data.goal !== null) {
const agentId = this.agent.count_id.toString();
return (this.data.goal[agentId] || '') + add_string;
}
return null;
}
isDone() {
let res = null;
if (this.validator)
res = this.validator.validate();
if (res && res.valid) {
// Find all the agents and clear their inventories
for (let agent of this.available_agents) {
this.agent.bot.chat(`/clear ${agent}`);
}
// this.agent.bot.chat(`/clear @a`);
return {"message": 'Task successful', "score": res.score};
}
let other_names = this.available_agents.filter(n => n !== this.name);
const elapsedTime = (Date.now() - this.taskStartTime) / 1000;
if (elapsedTime >= 30 && this.available_agents.length !== this.data.agent_count) {
console.log('No other agents found. Task unsuccessful.');
return {"message": 'No other agents found', "score": 0};
}
if (this.taskTimeout) {
if (elapsedTime >= this.taskTimeout) {
console.log('Task timeout reached. Task unsuccessful.');
if (res) {
return {"message": 'Task timeout reached', "score": res.score};
} else {
return {"message": 'Task timeout reached', "score": 0};
}
}
}
return false;
}
async setAgentGoal() {
let agentGoal = this.getAgentGoal();
if (!agentGoal) return;
if (this.data.agent_count + this.data.human_count > 1) {
agentGoal += "You have to collaborate with other agents/bots, namely " + this.available_agents.filter(n => n !== this.name).join(', ') + " to complete the task as soon as possible by dividing the work among yourselves.";
console.log(`Setting goal for agent ${this.agent.count_id}: ${agentGoal}`);
}
await executeCommand(this.agent, `!goal("${agentGoal}")`);
}
async initBotTask() {
await this.agent.bot.chat(`/clear ${this.name}`);
console.log(`Cleared ${this.name}'s inventory.`);
//wait for a bit so inventory is cleared
await new Promise((resolve) => setTimeout(resolve, 500));
if (this.data === null)
return;
if (this.task_type === 'cooking') {
this.initiator = new CookingTaskInitiator(this.data, this.agent.bot);
} else {
this.initiator = null;
}
//wait for a bit so bots are teleported
await new Promise((resolve) => setTimeout(resolve, 3000));
if (this.agent.count_id === 0 && this.data.human_count > 0) {
console.log('Clearing human player inventories');
for (let i = 0; i < this.data.human_count; i++) {
const username = this.data.usernames[i];
await this.agent.bot.chat(`/clear ${username}`);
}
await new Promise((resolve) => setTimeout(resolve, 500));
}
if (this.data.initial_inventory) {
console.log("Setting inventory...");
let initialInventory = {};
initialInventory = this.data.initial_inventory[this.agent.count_id.toString()] || {};
console.log("Initial inventory for agent", this.agent.count_id, ":", initialInventory);
console.log("")
if (this.data.human_count > 0 && this.agent.count_id === 0) {
// this.num_humans = num_keys - this.data.num_agents;
if (this.data.human_count !== this.data.usernames.length) {
console.log(`Number of human players ${this.data.human_count} does not match the number of usernames provided: ${this.data.usernames.length}`);
throw new Error(`Number of human players ${this.data.human_count} does not match the number of usernames provided: ${this.data.usernames.length}`);
}
const starting_idx = this.data.agent_count;
for (let i = 0; i < this.data.human_count; i++) {
const username = this.data.usernames[i];
const inventory = this.data.initial_inventory[starting_idx + i];
console.log(Object.keys(inventory));
for (let key of Object.keys(inventory)) {
const itemName = key.toLowerCase();
const quantity = inventory[key];
console.log(`Give ${username} ${quantity} ${itemName}`);
await this.agent.bot.chat(`/give ${username} ${itemName} ${quantity}`);
}
}
}
console.log(this.data.initial_inventory);
// Assign inventory items
for (let key of Object.keys(initialInventory)) {
const itemName = key.toLowerCase();
const quantity = initialInventory[key];
await this.agent.bot.chat(`/give ${this.name} ${itemName} ${quantity}`);
console.log(`Gave ${this.name} ${quantity} ${itemName}`);
}
// Wait briefly for inventory commands to complete
await new Promise((resolve) => setTimeout(resolve, 500));
}
if (this.initiator && this.agent.count_id === 0) {
await this.initiator.init();
}
await this.teleportBots();
if (this.data.agent_count && this.data.agent_count > 1) {
// TODO wait for other bots to join
await new Promise((resolve) => setTimeout(resolve, 10000));
if (this.available_agents.length < this.data.agent_count) {
console.log(`Missing ${this.data.agent_count - this.available_agents.length} bot(s).`);
this.agent.killAll();
}
}
await new Promise((resolve) => setTimeout(resolve, 500));
if (this.data.conversation && this.agent.count_id === 0) {
let other_name = this.available_agents.filter(n => n !== this.name)[0];
let waitCount = 0;
while (other_name === undefined && waitCount < 20) {
other_name = this.available_agents.filter(n => n !== this.name)[0];
await new Promise((resolve) => setTimeout(resolve, 1000));
waitCount++;
}
if (other_name === undefined && this.data.agent_count > 1) {
console.log('No other agents found. Task unsuccessful.');
this.agent.killAll();
}
await executeCommand(this.agent, `!startConversation("${other_name}", "${this.data.conversation}")`);
}
await this.setAgentGoal();
}
async teleportBots() {
console.log('\n\nTeleporting bots');
function getRandomOffset(range) {
return Math.floor(Math.random() * (range * 2 + 1)) - range;
}
let human_player_name = null;
let bot = this.agent.bot;
// Finding if there is a human player on the server
for (const playerName in bot.players) {
const player = bot.players[playerName];
if (!this.available_agents.some((n) => n === playerName)) {
console.log('Found human player:', player.username);
human_player_name = player.username
break;
}
}
// go to the human player if there is one and humans are not required for the task
if (human_player_name && this.data.human_count === 0) {
console.log(`Teleporting ${this.name} to human ${human_player_name}`)
bot.chat(`/tp ${this.name} ${human_player_name}`)
}
else {
console.log(`Teleporting ${this.name} to ${this.available_agents[0]}`)
bot.chat(`/tp ${this.name} ${this.available_agents[0]}`);
}
await new Promise((resolve) => setTimeout(resolve, 200));
// now all bots are teleported on top of each other (which kinda looks ugly)
// Thus, we teleport them to random nearby offsets to make it look better
/*
Note: We don't want randomness for construction tasks because the reference point matters a lot.
Another reason is that the user will often fly in the air, set a random block to dirt, and teleport
the bot to stand on that block as the starting point for the construction.
*/
if (this.data.type !== 'construction') {
const pos = getPosition(bot);
const xOffset = getRandomOffset(5);
const zOffset = getRandomOffset(5);
bot.chat(`/tp ${this.name} ${Math.floor(pos.x + xOffset)} ${pos.y + 3} ${Math.floor(pos.z + zOffset)}`);
await new Promise((resolve) => setTimeout(resolve, 200));
}
if (this.data.agent_count && this.data.agent_count > 1) {
// TODO wait for other bots to join
await new Promise((resolve) => setTimeout(resolve, 10000));
if (this.available_agents.length < this.data.agent_count) {
console.log(`Missing ${this.data.agent_count - this.available_agents.length} bot(s).`);
this.agent.killAll();
}
}
if (this.data.type === 'construction'){
//Ensures construction is cleaned out first. -> relies on cheats which are turned off?
if (this.blueprint){
console.log('Cleaning out construction blueprint');
const result = this.blueprint.autoDelete();
const commands = result.commands;
const nearbyPosition = result.nearbyPosition;
console.log("nearby position", nearbyPosition);
const first_coord = this.data.blueprint.levels[0].coordinates;
bot.chat(`/tp @a ${first_coord[0]} ${first_coord[1]} ${first_coord[2]}`);
if (this.agent.agent_id === 0 && this.data.human_count > 0) {
for (let i = 0; i < this.data.human_count; i++) {
const username = this.data.usernames[i];
await bot.chat(`/tp ${username} ${nearbyPosition.x} ${nearbyPosition.y} ${nearbyPosition.z}`);
}
}
for (const command of commands) {
bot.chat(command);
}
}
else{
console.log('no construction blueprint?')
}
}
}
}
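For reference, a sketch of the task_data shape the Task class and checkItemPresence above consume; all values below are illustrative rather than taken from the repo's task files:

// Illustrative task_data (made-up values).
const exampleTask = {
    task_id: 'multiagent_cooking_hells_kitchen',   // the suffix selects the two-agent hells_kitchen check
    type: 'cooking',
    agent_count: 2,
    human_count: 0,
    timeout: 300,
    // For hells_kitchen tasks, target is a two-element list: index 0 for agent 0, index 1 for agent 1.
    target: ['cooked_beef', 'baked_potato'],
    number_of_target: 1,
    // goal may be a shared string or an object keyed by agent count_id.
    goal: { '0': 'Cook the beef.', '1': 'Bake the potato.' },
    conversation: 'Let us split the cooking.',
    // initial_inventory is keyed by agent count_id (followed by human players, if any).
    initial_inventory: { '0': { raw_beef: 1 }, '1': { potato: 1 } },
    blocked_actions: { '0': [], '1': [] }
};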


@ -1,8 +1,8 @@
-import settings from '../../settings.js';
+import settings from '../settings.js';
 import prismarineViewer from 'prismarine-viewer';
 const mineflayerViewer = prismarineViewer.mineflayer;
-export function addViewer(bot, count_id) {
-    if (settings.show_bot_views)
+export function addBrowserViewer(bot, count_id) {
+    if (settings.render_bot_view)
         mineflayerViewer(bot, { port: 3000+count_id, firstPerson: true, });
 }


@ -0,0 +1,78 @@
import { Viewer } from 'prismarine-viewer/viewer/lib/viewer.js';
import { WorldView } from 'prismarine-viewer/viewer/lib/worldView.js';
import { getBufferFromStream } from 'prismarine-viewer/viewer/lib/simpleUtils.js';
import THREE from 'three';
import { createCanvas } from 'node-canvas-webgl/lib/index.js';
import fs from 'fs/promises';
import { Vec3 } from 'vec3';
import { EventEmitter } from 'events';
import worker_threads from 'worker_threads';
global.Worker = worker_threads.Worker;
export class Camera extends EventEmitter {
constructor (bot, fp) {
super();
this.bot = bot;
this.fp = fp;
this.viewDistance = 12;
this.width = 800;
this.height = 512;
this.canvas = createCanvas(this.width, this.height);
this.renderer = new THREE.WebGLRenderer({ canvas: this.canvas });
this.viewer = new Viewer(this.renderer);
this._init().then(() => {
this.emit('ready');
})
}
async _init () {
const botPos = this.bot.entity.position;
const center = new Vec3(botPos.x, botPos.y+this.bot.entity.height, botPos.z);
this.viewer.setVersion(this.bot.version);
// Load world
const worldView = new WorldView(this.bot.world, this.viewDistance, center);
this.viewer.listen(worldView);
worldView.listenToBot(this.bot);
await worldView.init(center);
this.worldView = worldView;
}
async capture() {
const center = new Vec3(this.bot.entity.position.x, this.bot.entity.position.y+this.bot.entity.height, this.bot.entity.position.z);
this.viewer.camera.position.set(center.x, center.y, center.z);
await this.worldView.updatePosition(center);
this.viewer.setFirstPersonCamera(this.bot.entity.position, this.bot.entity.yaw, this.bot.entity.pitch);
this.viewer.update();
this.renderer.render(this.viewer.scene, this.viewer.camera);
const imageStream = this.canvas.createJPEGStream({
bufsize: 4096,
quality: 100,
progressive: false
});
const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
const filename = `screenshot_${timestamp}`;
const buf = await getBufferFromStream(imageStream);
await this._ensureScreenshotDirectory();
await fs.writeFile(`${this.fp}/${filename}.jpg`, buf);
console.log('saved', filename);
return filename;
}
async _ensureScreenshotDirectory() {
try {
const stats = await fs.stat(this.fp);
if (!stats.isDirectory()) {
await fs.mkdir(this.fp, { recursive: true });
}
} catch (e) {
// Path does not exist yet; create it along with any missing parent folders
await fs.mkdir(this.fp, { recursive: true });
}
}
}
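A minimal usage sketch for the Camera class above, assuming a logged-in mineflayer bot; the screenshot folder path mirrors the one VisionInterpreter uses:

// Sketch: capture one frame once the camera has built its world view.
import { Camera } from './camera.js';

function takeSnapshot(bot) {
    const camera = new Camera(bot, './bots/' + bot.username + '/screenshots');
    camera.on('ready', async () => {
        const filename = await camera.capture();   // returns the name without the .jpg extension
        console.log('Wrote', filename + '.jpg');
    });
}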


@ -0,0 +1,81 @@
import { Vec3 } from 'vec3';
import { Camera } from "./camera.js";
import fs from 'fs';
export class VisionInterpreter {
constructor(agent, allow_vision) {
this.agent = agent;
this.allow_vision = allow_vision;
this.fp = './bots/'+agent.name+'/screenshots/';
if (allow_vision) {
this.camera = new Camera(agent.bot, this.fp);
}
}
async lookAtPlayer(player_name, direction) {
if (!this.allow_vision || !this.agent.prompter.vision_model.sendVisionRequest) {
return "Vision is disabled. Use other methods to describe the environment.";
}
let result = "";
const bot = this.agent.bot;
const player = bot.players[player_name]?.entity;
if (!player) {
return `Could not find player ${player_name}`;
}
let filename;
if (direction === 'with') {
await bot.look(player.yaw, player.pitch);
result = `Looking in the same direction as ${player_name}\n`;
filename = await this.camera.capture();
} else {
await bot.lookAt(new Vec3(player.position.x, player.position.y + player.height, player.position.z));
result = `Looking at player ${player_name}\n`;
filename = await this.camera.capture();
}
return result + `Image analysis: "${await this.analyzeImage(filename)}"`;
}
async lookAtPosition(x, y, z) {
if (!this.allow_vision || !this.agent.prompter.vision_model.sendVisionRequest) {
return "Vision is disabled. Use other methods to describe the environment.";
}
let result = "";
const bot = this.agent.bot;
await bot.lookAt(new Vec3(x, y + 2, z));
result = `Looking at coordinate ${x}, ${y}, ${z}\n`;
let filename = await this.camera.capture();
return result + `Image analysis: "${await this.analyzeImage(filename)}"`;
}
getCenterBlockInfo() {
const bot = this.agent.bot;
const maxDistance = 128; // Maximum distance to check for blocks
const targetBlock = bot.blockAtCursor(maxDistance);
if (targetBlock) {
return `Block at center view: ${targetBlock.name} at (${targetBlock.position.x}, ${targetBlock.position.y}, ${targetBlock.position.z})`;
} else {
return "No block in center view";
}
}
async analyzeImage(filename) {
try {
const imageBuffer = fs.readFileSync(`${this.fp}/${filename}.jpg`);
const messages = this.agent.history.getHistory();
const blockInfo = this.getCenterBlockInfo();
const result = await this.agent.prompter.promptVision(messages, imageBuffer);
return result + `\n${blockInfo}`;
} catch (error) {
console.warn('Error reading image:', error);
return `Error reading image: ${error.message}`;
}
}
}
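A minimal wiring sketch for VisionInterpreter, assuming the agent's prompter exposes a vision_model with sendVisionRequest and that vision is enabled in settings:

// Sketch: ask the interpreter to describe what is at a coordinate.
import { VisionInterpreter } from './vision_interpreter.js';

async function describeSpot(agent, x, y, z) {
    const interpreter = new VisionInterpreter(agent, true);
    const summary = await interpreter.lookAtPosition(x, y, z);
    console.log(summary);   // "Looking at coordinate ..." plus the model's image analysis
}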


@ -0,0 +1,27 @@
import mindcraft
import json
import os
# Initialize Mindcraft, starting the Node.js server
# This will also connect to the MindServer via websockets
mindcraft.init()
# Get the directory of the current script
script_dir = os.path.dirname(os.path.abspath(__file__))
profile_path = os.path.abspath(os.path.join(script_dir, '..', '..', 'andy.json'))
# Load agent settings from a JSON file
try:
with open(profile_path, 'r') as f:
profile_data = json.load(f)
settings = {"profile": profile_data}
mindcraft.create_agent(settings)
settings_copy = settings.copy()
settings_copy['profile']['name'] = 'andy2'
mindcraft.create_agent(settings_copy)
except FileNotFoundError:
print(f"Error: Could not find andy.json at {profile_path}")
mindcraft.wait()


@ -0,0 +1,24 @@
import * as Mindcraft from '../mindcraft/mindcraft.js';
import settings from '../../settings.js';
import yargs from 'yargs';
import { hideBin } from 'yargs/helpers';
function parseArguments() {
return yargs(hideBin(process.argv))
.option('mindserver_port', {
type: 'number',
describe: 'Mindserver port',
default: settings.mindserver_port
})
.help()
.alias('help', 'h')
.parse();
}
const args = parseArguments();
settings.mindserver_port = args.mindserver_port;
Mindcraft.init(false, settings.mindserver_port);
console.log(`Mindcraft initialized with MindServer at localhost:${settings.mindserver_port}`);


@ -0,0 +1,99 @@
import subprocess
import socketio
import time
import json
import os
import atexit
import threading
import sys
import signal
class Mindcraft:
def __init__(self):
self.sio = socketio.Client()
self.process = None
self.connected = False
self.log_thread = None
def _log_reader(self):
for line in iter(self.process.stdout.readline, ''):
sys.stdout.write(f'[Node.js] {line}')
sys.stdout.flush()
def init(self, port=8080):
if self.process:
return
self.port = port
node_script_path = os.path.abspath(os.path.join(os.path.dirname(__file__), 'init-mindcraft.js'))
self.process = subprocess.Popen([
'node',
node_script_path,
'--mindserver_port', str(self.port)
], stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True, bufsize=1)
self.log_thread = threading.Thread(target=self._log_reader)
self.log_thread.daemon = True
self.log_thread.start()
atexit.register(self.shutdown)
time.sleep(2) # Give server time to start before connecting
try:
self.sio.connect(f'http://localhost:{self.port}')
self.connected = True
print("Connected to MindServer. Mindcraft is initialized.")
except socketio.exceptions.ConnectionError as e:
print(f"Failed to connect to MindServer: {e}")
self.shutdown()
raise
def create_agent(self, settings_json):
if not self.connected:
raise Exception("Not connected to MindServer. Call init() first.")
profile_data = settings_json.get('profile', {})
def callback(response):
if response.get('success'):
print(f"Agent '{profile_data.get('name')}' created successfully")
else:
print(f"Error creating agent: {response.get('error', 'Unknown error')}")
self.sio.emit('create-agent', settings_json, callback=callback)
def shutdown(self):
if self.sio.connected:
self.sio.disconnect()
self.connected = False
if self.process:
self.process.terminate()
self.process.wait()
self.process = None
print("Mindcraft shut down.")
def wait(self):
"""Block the main thread until Ctrl+C is pressed so the server stays up,"""
print("Server is running. Press Ctrl+C to exit.")
try:
while True:
time.sleep(1)
except KeyboardInterrupt:
print("\nCtrl+C detected. Exiting...")
self.shutdown()
mindcraft_instance = Mindcraft()
def init(port=8080):
mindcraft_instance.init(port)
def create_agent(settings_json):
mindcraft_instance.create_agent(settings_json)
def shutdown():
mindcraft_instance.shutdown()
def wait():
mindcraft_instance.wait()


@ -0,0 +1,64 @@
import { createMindServer, registerAgent } from './mindserver.js';
import { AgentProcess } from '../process/agent_process.js';
let mindserver;
let connected = false;
let agent_processes = {};
let agent_count = 0;
let host = 'localhost';
let port = 8080;
export async function init(host_public=false, mindserver_port=8080) {
if (connected) {
console.error('Already initialized!');
return;
}
mindserver = createMindServer(host_public, mindserver_port);
port = mindserver_port;
connected = true;
}
export async function createAgent(settings) {
if (!settings.profile.name) {
console.error('Agent name is required in profile');
return;
}
settings = JSON.parse(JSON.stringify(settings));
let agent_name = settings.profile.name;
registerAgent(settings);
let load_memory = settings.load_memory || false;
let init_message = settings.init_message || null;
const agentProcess = new AgentProcess(agent_name, port);
agentProcess.start(load_memory, init_message, agent_count);
agent_count++;
agent_processes[settings.profile.name] = agentProcess;
}
export function getAgentProcess(agentName) {
return agent_processes[agentName];
}
export function startAgent(agentName) {
if (agent_processes[agentName]) {
agent_processes[agentName].continue();
}
else {
console.error(`Cannot start agent ${agentName}; not found`);
}
}
export function stopAgent(agentName) {
if (agent_processes[agentName]) {
agent_processes[agentName].stop();
}
}
export function shutdown() {
console.log('Shutting down');
for (let agentName in agent_processes) {
agent_processes[agentName].stop();
}
setTimeout(() => {
process.exit(0);
}, 2000);
}
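A usage sketch for this module, assuming it is driven directly from Node rather than through the Python wrapper; the profile path below is illustrative:

// Sketch: start the MindServer, then spawn one agent from a profile file.
import * as Mindcraft from './mindcraft.js';
import { readFileSync } from 'fs';

await Mindcraft.init(false, 8080);   // host_public=false, port=8080
const profile = JSON.parse(readFileSync('./andy.json', 'utf8'));
await Mindcraft.createAgent({ profile, load_memory: false, init_message: 'Say hello' });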

196
src/mindcraft/mindserver.js Normal file

@ -0,0 +1,196 @@
import { Server } from 'socket.io';
import express from 'express';
import http from 'http';
import path from 'path';
import { fileURLToPath } from 'url';
import * as mindcraft from './mindcraft.js';
import { readFileSync } from 'fs';
const __dirname = path.dirname(fileURLToPath(import.meta.url));
// Mindserver is:
// - central hub for communication between all agent processes
// - api to control from other languages and remote users
// - host for webapp
let io;
let server;
const agent_connections = {};
const settings_spec = JSON.parse(readFileSync(path.join(__dirname, 'public/settings_spec.json'), 'utf8'));
class AgentConnection {
constructor(settings) {
this.socket = null;
this.settings = settings;
this.in_game = false;
}
}
export function registerAgent(settings) {
let agentConnection = new AgentConnection(settings);
agent_connections[settings.profile.name] = agentConnection;
}
export function logoutAgent(agentName) {
if (agent_connections[agentName]) {
agent_connections[agentName].in_game = false;
agentsUpdate();
}
}
// Initialize the server
export function createMindServer(host_public = false, port = 8080) {
const app = express();
server = http.createServer(app);
io = new Server(server);
// Serve static files
const __dirname = path.dirname(fileURLToPath(import.meta.url));
app.use(express.static(path.join(__dirname, 'public')));
// Socket.io connection handling
io.on('connection', (socket) => {
let curAgentName = null;
console.log('Client connected');
agentsUpdate(socket);
socket.on('create-agent', (settings, callback) => {
console.log('API create agent...');
for (let key in settings_spec) {
if (!(key in settings)) {
if (settings_spec[key].required) {
callback({ success: false, error: `Setting ${key} is required` });
return;
}
else {
settings[key] = settings_spec[key].default;
}
}
}
for (let key in settings) {
if (!(key in settings_spec)) {
delete settings[key];
}
}
if (settings.profile?.name) {
if (settings.profile.name in agent_connections) {
callback({ success: false, error: 'Agent already exists' });
return;
}
mindcraft.createAgent(settings);
callback({ success: true });
}
else {
console.error('Agent name is required in profile');
callback({ success: false, error: 'Agent name is required in profile' });
}
});
socket.on('get-settings', (agentName, callback) => {
if (agent_connections[agentName]) {
callback({ settings: agent_connections[agentName].settings });
} else {
callback({ error: `Agent '${agentName}' not found.` });
}
});
socket.on('login-agent', (agentName) => {
if (agent_connections[agentName]) {
agent_connections[agentName].socket = socket;
agent_connections[agentName].in_game = true;
curAgentName = agentName;
agentsUpdate();
}
else {
console.warn(`Unregistered agent ${agentName} tried to login`);
}
});
socket.on('disconnect', () => {
if (agent_connections[curAgentName]) {
console.log(`Agent ${curAgentName} disconnected`);
agent_connections[curAgentName].in_game = false;
agentsUpdate();
}
});
socket.on('chat-message', (agentName, json) => {
if (!agent_connections[agentName]) {
console.warn(`Agent ${agentName} tried to send a message but is not logged in`);
return;
}
console.log(`${curAgentName} sending message to ${agentName}: ${json.message}`);
agent_connections[agentName].socket.emit('chat-message', curAgentName, json);
});
socket.on('restart-agent', (agentName) => {
console.log(`Restarting agent: ${agentName}`);
agent_connections[agentName].socket.emit('restart-agent');
});
socket.on('stop-agent', (agentName) => {
mindcraft.stopAgent(agentName);
});
socket.on('start-agent', (agentName) => {
mindcraft.startAgent(agentName);
});
socket.on('stop-all-agents', () => {
console.log('Killing all agents');
for (let agentName in agent_connections) {
mindcraft.stopAgent(agentName);
}
});
socket.on('shutdown', () => {
console.log('Shutting down');
for (let agentName in agent_connections) {
mindcraft.stopAgent(agentName);
}
// wait 2 seconds
setTimeout(() => {
console.log('Exiting MindServer');
process.exit(0);
}, 2000);
});
socket.on('send-message', (agentName, message) => {
if (!agent_connections[agentName]) {
console.warn(`Agent ${agentName} not in game, cannot send message via MindServer.`);
return
}
try {
console.log(`Sending message to agent ${agentName}: ${message}`);
agent_connections[agentName].socket.emit('send-message', agentName, message)
} catch (error) {
console.error('Error: ', error);
}
});
});
let host = host_public ? '0.0.0.0' : 'localhost';
server.listen(port, host, () => {
console.log(`MindServer running on port ${port}`);
});
return server;
}
function agentsUpdate(socket) {
if (!socket) {
socket = io;
}
let agents = [];
for (let agentName in agent_connections) {
agents.push({name: agentName, in_game: agent_connections[agentName].in_game});
};
socket.emit('agents-update', agents);
}
// Optional: export these if you need access to them from other files
export const getIO = () => io;
export const getServer = () => server;
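Because the MindServer is plain socket.io, it can be driven from any client, which is how the Python wrapper works. A sketch using the socket.io-client package (an assumed dependency here); the profile is a placeholder and omitted settings are filled from settings_spec defaults:

import { io } from 'socket.io-client';

const socket = io('http://localhost:8080');
socket.emit('create-agent', { profile: { name: 'andy' } }, (res) => {
    if (!res.success) console.error('create-agent failed:', res.error);
});
socket.on('agents-update', (agents) => {
    console.log(agents.map(a => `${a.name}: ${a.in_game ? 'online' : 'offline'}`).join(', '));
});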


@ -0,0 +1,286 @@
<!DOCTYPE html>
<html>
<head>
<title>Mindcraft</title>
<script src="/socket.io/socket.io.js"></script>
<style>
body {
font-family: Arial, sans-serif;
margin: 20px;
background: #1a1a1a;
color: #e0e0e0;
}
#agents {
background: #2d2d2d;
padding: 20px;
border-radius: 8px;
box-shadow: 0 2px 4px rgba(0,0,0,0.2);
}
h1 {
color: #ffffff;
}
.agent {
margin: 10px 0;
padding: 10px;
background: #363636;
border-radius: 4px;
display: flex;
justify-content: space-between;
align-items: center;
}
.restart-btn, .start-btn, .stop-btn {
color: white;
border: none;
padding: 5px 10px;
border-radius: 4px;
cursor: pointer;
margin-left: 5px;
}
.restart-btn {
background: #4CAF50;
}
.start-btn {
background: #2196F3;
}
.stop-btn {
background: #f44336;
}
.restart-btn:hover { background: #45a049; }
.start-btn:hover { background: #1976D2; }
.stop-btn:hover { background: #d32f2f; }
.status-icon {
font-size: 12px;
margin-right: 8px;
}
.status-icon.online {
color: #4CAF50;
}
.status-icon.offline {
color: #f44336;
}
#settingsForm {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(320px, 1fr));
gap: 8px;
margin-top: 10px;
}
.setting-wrapper {
display: flex;
align-items: center;
gap: 6px;
background: #3a3a3a;
padding: 6px 8px;
border-radius: 4px;
width: 100%;
box-sizing: border-box;
min-width: 0;
}
.setting-wrapper label {
flex: 0 0 50%;
font-size: 0.9em;
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
.setting-wrapper input[type="text"],
.setting-wrapper input[type="number"] {
flex: 1 1 0;
background: #262626;
border: 1px solid #555;
color: #e0e0e0;
border-radius: 4px;
padding: 4px 6px;
max-width: 100%;
min-width: 0;
}
.setting-wrapper input[type="checkbox"] {
transform: scale(1.2);
}
.agent-viewer {
width: 200px;
height: 150px;
border: none;
margin-left: 10px;
}
.start-btn:disabled {
opacity: 0.4;
cursor: not-allowed;
}
.agent-view-container {
margin-top: 6px;
display: flex;
justify-content: flex-start;
}
</style>
</head>
<body>
<h1>Mindcraft</h1>
<div id="agents"></div>
<div id="createAgentSection" style="margin-top:20px;background:#2d2d2d;padding:20px;border-radius:8px;">
<h2>Create Agent</h2>
<div id="settingsForm"></div>
<div id="profileStatus" style="margin-top:6px;font-style:italic;color:#cccccc;">Profile: Not uploaded</div>
<div style="margin-top:10px;">
<button id="uploadProfileBtn" class="start-btn">Upload Profile</button>
<input type="file" id="profileFileInput" accept=".json,application/json" style="display:none">
<button id="submitCreateAgentBtn" class="start-btn" disabled>Create Agent</button>
</div>
<div id="createError" style="color:#f44336;margin-top:10px;"></div>
</div>
<script>
const socket = io();
const agentsDiv = document.getElementById('agents');
let settingsSpec = {};
let profileData = null;
const agentSettings = {};
fetch('/settings_spec.json')
.then(r => r.json())
.then(spec => {
settingsSpec = spec;
const form = document.getElementById('settingsForm');
Object.keys(spec).forEach(key => {
if (key === 'profile') return; // profile handled via upload
const cfg = spec[key];
const wrapper = document.createElement('div');
wrapper.className = 'setting-wrapper';
const label = document.createElement('label');
label.textContent = key;
label.title = cfg.description || '';
let input;
switch (cfg.type) {
case 'boolean':
input = document.createElement('input');
input.type = 'checkbox';
input.checked = cfg.default === true;
break;
case 'number':
input = document.createElement('input');
input.type = 'number';
input.value = cfg.default;
break;
default:
input = document.createElement('input');
input.type = 'text';
input.value = typeof cfg.default === 'object' ? JSON.stringify(cfg.default) : cfg.default;
}
input.title = cfg.description || '';
input.id = `setting-${key}`;
wrapper.appendChild(label);
wrapper.appendChild(input);
form.appendChild(wrapper);
});
});
document.getElementById('uploadProfileBtn').addEventListener('click', () => {
document.getElementById('profileFileInput').click();
});
document.getElementById('profileFileInput').addEventListener('change', e => {
const file = e.target.files[0];
if (!file) return;
const reader = new FileReader();
reader.onload = ev => {
try {
profileData = JSON.parse(ev.target.result);
document.getElementById('submitCreateAgentBtn').disabled = false;
document.getElementById('profileStatus').textContent = `Profile: ${profileData.name || 'Uploaded'}`;
document.getElementById('createError').textContent = '';
} catch (err) {
document.getElementById('createError').textContent = 'Invalid profile JSON: ' + err.message;
profileData = null;
document.getElementById('submitCreateAgentBtn').disabled = true;
document.getElementById('profileStatus').textContent = 'Profile: Not uploaded';
}
};
reader.readAsText(file);
e.target.value = '';
});
document.getElementById('submitCreateAgentBtn').addEventListener('click', () => {
if (!profileData) return;
const settings = { profile: profileData };
Object.keys(settingsSpec).forEach(key => {
if (key === 'profile') return;
const input = document.getElementById(`setting-${key}`);
if (!input) return;
const type = settingsSpec[key].type;
let val;
if (type === 'boolean') val = input.checked;
else if (type === 'number') val = Number(input.value);
else if (type === 'array' || type === 'object') {
try { val = JSON.parse(input.value); }
catch { val = input.value; }
} else val = input.value;
settings[key] = val;
});
socket.emit('create-agent', settings, res => {
if (!res.success) {
document.getElementById('createError').textContent = res.error || 'Unknown error';
} else {
// reset on success
profileData = null;
document.getElementById('submitCreateAgentBtn').disabled = true;
document.getElementById('profileStatus').textContent = 'Profile: Not uploaded';
document.getElementById('createError').textContent = '';
}
});
});
function fetchAgentSettings(name) {
return new Promise((resolve) => {
if (agentSettings[name]) { resolve(agentSettings[name]); return; }
socket.emit('get-settings', name, res => {
if (res.settings) {
agentSettings[name] = res.settings;
resolve(res.settings);
} else resolve(null);
});
});
}
async function renderAgents(agents) {
// fetch settings for any new agents
await Promise.all(agents.map(a => fetchAgentSettings(a.name)));
agentsDiv.innerHTML = agents.length ?
agents.map((agent, idx) => {
const cfg = agentSettings[agent.name] || {};
const showViewer = cfg.render_bot_view === true;
const viewerHTML = showViewer ? `<div class="agent-view-container"><iframe class="agent-viewer" src="http://localhost:${3000 + idx}"></iframe></div>` : '';
return `
<div class="agent">
<div style="display:flex;justify-content:space-between;align-items:center;">
<span><span class="status-icon ${agent.in_game ? 'online' : 'offline'}"></span>${agent.name}</span>
<div style="display:flex;align-items:center;">
${agent.in_game ? `
<button class="stop-btn" onclick="stopAgent('${agent.name}')">Stop</button>
<button class="restart-btn" onclick="restartAgent('${agent.name}')">Restart</button>
<input type="text" id="messageInput-${agent.name}" placeholder="Enter message..." style="margin-left:4px;">
<button class="start-btn" onclick="sendMessage('${agent.name}', document.getElementById('messageInput-${agent.name}').value)">Send</button>
` : `
<button class="start-btn" onclick="startAgent('${agent.name}')">Start</button>
`}
</div>
</div>
${viewerHTML}
</div>`;
}).join('') +
`<button class="stop-btn" onclick="killAllAgents()">Stop All</button>
<button class="stop-btn" onclick="shutdown()">Shutdown</button>` :
'<div class="agent">No agents connected</div>';
}
socket.on('agents-update', agents => { renderAgents(agents); });
function restartAgent(n) { socket.emit('restart-agent', n); }
function startAgent(n) { socket.emit('start-agent', n); }
function stopAgent(n) { socket.emit('stop-agent', n); }
function killAllAgents() { socket.emit('stop-all-agents'); }
function shutdown() { socket.emit('shutdown'); }
function sendMessage(n, m) { socket.emit('send-message', n, m); }
</script>
</body>
</html>
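For reference, the agents-update payload the page above renders is a list of name/in_game pairs produced by agentsUpdate() in mindserver.js (values below are illustrative):

// Example payload received on the 'agents-update' socket event.
const exampleAgentsUpdate = [
    { name: 'andy', in_game: true },
    { name: 'jill', in_game: false }
];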


@ -0,0 +1,127 @@
{
"profile": {
"type": "object",
"required": true,
"description": "The profile object to use, including name, prompts, and examples"
},
"minecraft_version": {
"type": "string",
"description": "The version of Minecraft to use",
"default": "1.21.1"
},
"host": {
"type": "string",
"description": "The minecraft server host address to connect to",
"default": "127.0.0.1"
},
"port": {
"type": "number",
"description": "The minecraft server port to connect to",
"default": 55916
},
"auth": {
"type": "string",
"description": "The authentication method to use",
"default": "offline"
},
"base_profile": {
"type": "string",
"description": "Allowed values: survival, assistant, creative, god_mode. Each has fine tuned settings for different game modes.",
"default": "survival"
},
"load_memory": {
"type": "boolean",
"description": "Whether to load bot's previous memory",
"default": false
},
"init_message": {
"type": "string",
"description": "The initial message to send to the bot",
"default": "Respond with hello world and your name"
},
"only_chat_with": {
"type": "array",
"description": "List of agents to only chat with. If empty, the bot will chat publicly",
"default": []
},
"speak": {
"type": "boolean",
"description": "Whether to enable text-to-speech reading on the host machine",
"default": false
},
"language": {
"type": "string",
"description": "The language to automatically translate to and from using google translate",
"default": "en"
},
"allow_vision": {
"type": "boolean",
"description": "Whether to allow vision capabilities",
"default": false
},
"blocked_actions": {
"type": "array",
"description": "List of actions that are blocked",
"default": ["!checkBlueprint", "!checkBlueprintLevel", "!getBlueprint", "!getBlueprintLevel"]
},
"relevant_docs_count": {
"type": "number",
"description": "Number of relevant function documents to include in the prompt for LLM code writing",
"default": 5
},
"max_messages": {
"type": "number",
"description": "Maximum number of recent messages to keep in context for LLM",
"default": 15
},
"num_examples": {
"type": "number",
"description": "Number of examples to select to help prompt better LLM responses",
"default": 2
},
"max_commands": {
"type": "number",
"description": "Maximum number of commands allowed in consecutive responses. -1 for no limit",
"default": -1
},
"narrate_behavior": {
"type": "boolean",
"description": "Whether to openly chat automatic behavior like 'Picking up item!'",
"default": true
},
"log_all_prompts": {
"type": "boolean",
"description": "Whether to log all prompts to file. Can be very verbose.",
"default": false
},
"verbose_commands": {
"type": "boolean",
"description": "Whether to show full command syntax in bot responses. If false will use a shortened syntax.",
"default": true
},
"chat_bot_messages": {
"type": "boolean",
"description": "Whether to publicly chat messages to and from other bots",
"default": true
},
"render_bot_view": {
"type": "boolean",
"description": "Whether to render bot view for user observation. Does not give bot vision.",
"default": false
},
"allow_insecure_coding": {
"type": "boolean",
"description": "Whether to allow newAction command that let's LLM write/run code on host computer. Despite sandboxxing, it is potentially insecure.",
"default": false
},
"code_timeout_mins": {
"type": "number",
"description": "Number of minutes to allow code execution. -1 for no timeout",
"default": -1
},
"task": {
"type": "object",
"description": "The task object to give the agent on start. If null, the agent will not have a task.",
"default": null
}
}
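An illustrative settings object that satisfies this spec; only profile is required, and omitted keys fall back to the defaults listed above:

// Sketch: settings passed to create-agent (profile contents are a placeholder).
const settings = {
    profile: { name: 'andy' },
    minecraft_version: '1.21.1',
    host: '127.0.0.1',
    port: 55916,
    render_bot_view: true,        // lets the web UI embed the prismarine viewer iframe
    allow_insecure_coding: false
};
// socket.emit('create-agent', settings, callback), or mindcraft.create_agent(...) from Python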

89
src/models/_model_map.js Normal file

@ -0,0 +1,89 @@
import { promises as fs } from 'fs';
import path from 'path';
import { fileURLToPath, pathToFileURL } from 'url';
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
// Dynamically discover model classes in this directory.
// Each model class must export a static `prefix` string.
const apiMap = await (async () => {
const map = {};
const files = (await fs.readdir(__dirname))
.filter(f => f.endsWith('.js') && f !== '_model_map.js' && f !== 'prompter.js');
for (const file of files) {
try {
const moduleUrl = pathToFileURL(path.join(__dirname, file)).href;
const mod = await import(moduleUrl);
for (const exported of Object.values(mod)) {
if (typeof exported === 'function' && Object.prototype.hasOwnProperty.call(exported, 'prefix')) {
const prefix = exported.prefix;
if (typeof prefix === 'string' && prefix.length > 0) {
map[prefix] = exported;
}
}
}
} catch (e) {
console.warn('Failed to load model module:', file, e?.message || e);
}
}
return map;
})();
export function selectAPI(profile) {
if (typeof profile === 'string' || profile instanceof String) {
profile = {model: profile};
}
// backwards compatibility with local->ollama
if (profile.api?.includes('local') || profile.model?.includes('local')) {
profile.api = 'ollama';
if (profile.model) {
profile.model = profile.model.replace('local', 'ollama');
}
}
if (!profile.api) {
const api = Object.keys(apiMap).find(key => profile.model?.startsWith(key));
if (api) {
profile.api = api;
}
else {
// check for some common models that do not require prefixes
if (profile.model.includes('gpt') || profile.model.includes('o1')|| profile.model.includes('o3'))
profile.api = 'openai';
else if (profile.model.includes('claude'))
profile.api = 'anthropic';
else if (profile.model.includes('gemini'))
profile.api = "google";
else if (profile.model.includes('grok'))
profile.api = 'grok';
else if (profile.model.includes('mistral'))
profile.api = 'mistral';
else if (profile.model.includes('deepseek'))
profile.api = 'deepseek';
else if (profile.model.includes('qwen'))
profile.api = 'qwen';
}
if (!profile.api) {
throw new Error(`Unknown model: ${profile.model}`);
}
}
if (!apiMap[profile.api]) {
throw new Error(`Unknown api: ${profile.api}`);
}
let model_name = profile.model.replace(profile.api + '/', ''); // remove prefix
profile.model = model_name === "" ? null : model_name; // if model is empty, set to null
return profile;
}
export function createModel(profile) {
if (!!apiMap[profile.model]) {
// if the model value is an api (instead of a specific model name)
// then set model to null so it uses the default model for that api
profile.model = null;
}
if (!apiMap[profile.api]) {
throw new Error(`Unknown api: ${profile.api}`);
}
const model = new apiMap[profile.api](profile.model, profile.url, profile.params);
return model;
}
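A sketch of how a new backend would plug into this auto-discovery: any class in the models directory that exports a static prefix string is picked up. The class name, prefix, and endpoint below are hypothetical:

export class ExampleLocal {
    static prefix = 'examplelocal';
    constructor(model_name, url, params) {
        this.model_name = model_name;
        this.url = url;
        this.params = params;
    }
    async sendRequest(turns, systemMessage) { /* call the backend here */ return ''; }
    async embed(text) { throw new Error('Embeddings are not supported by ExampleLocal.'); }
}

// Typical use from the prompter side:
//   const profile = selectAPI({ model: 'examplelocal/my-model' });  // -> { api: 'examplelocal', model: 'my-model' }
//   const model = createModel(profile);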

61
src/models/cerebras.js Normal file

@ -0,0 +1,61 @@
import CerebrasSDK from '@cerebras/cerebras_cloud_sdk';
import { strictFormat } from '../utils/text.js';
import { getKey } from '../utils/keys.js';
export class Cerebras {
static prefix = 'cerebras';
constructor(model_name, url, params) {
this.model_name = model_name;
this.url = url;
this.params = params;
// Initialize client with API key
this.client = new CerebrasSDK({ apiKey: getKey('CEREBRAS_API_KEY') });
}
async sendRequest(turns, systemMessage, stop_seq = '***') {
// Format messages array
const messages = strictFormat(turns);
messages.unshift({ role: 'system', content: systemMessage });
const pack = {
model: this.model_name || 'gpt-oss-120b',
messages,
stream: false,
...(this.params || {}),
};
let res;
try {
const completion = await this.client.chat.completions.create(pack);
// OpenAI-compatible shape
res = completion.choices?.[0]?.message?.content || '';
} catch (err) {
console.error('Cerebras API error:', err);
res = 'My brain disconnected, try again.';
}
return res;
}
async sendVisionRequest(messages, systemMessage, imageBuffer) {
const imageMessages = [...messages];
imageMessages.push({
role: "user",
content: [
{ type: "text", text: systemMessage },
{
type: "image_url",
image_url: {
url: `data:image/jpeg;base64,${imageBuffer.toString('base64')}`
}
}
]
});
return this.sendRequest(imageMessages, systemMessage);
}
async embed(text) {
throw new Error('Embeddings are not supported by Cerebras.');
}
}
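A usage sketch selecting the Cerebras backend through the model prefix; the model name is simply the class default above, not a recommendation:

import { selectAPI, createModel } from './_model_map.js';

const profile = selectAPI({ model: 'cerebras/gpt-oss-120b', params: { temperature: 0.7 } });
const cerebras = createModel(profile);
const reply = await cerebras.sendRequest([], 'You are a helpful Minecraft assistant.');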


@ -3,6 +3,7 @@
 import { strictFormat } from '../utils/text.js';
 import { getKey } from '../utils/keys.js';
 export class Claude {
+    static prefix = 'anthropic';
     constructor(model_name, url, params) {
         this.model_name = model_name;
         this.params = params || {};
@ -22,7 +23,12 @@ export class Claude {
         try {
             console.log('Awaiting anthropic api response...')
             if (!this.params.max_tokens) {
-                this.params.max_tokens = 4096;
+                if (this.params.thinking?.budget_tokens) {
+                    this.params.max_tokens = this.params.thinking.budget_tokens + 1000;
+                    // max_tokens must be greater than thinking.budget_tokens
+                } else {
+                    this.params.max_tokens = 4096;
+                }
             }
             const resp = await this.anthropic.messages.create({
                 model: this.model_name || "claude-3-sonnet-20240229",
@ -32,19 +38,50 @@ export class Claude {
             });
             console.log('Received.')
-            res = resp.content[0].text;
+            // get first content of type text
+            const textContent = resp.content.find(content => content.type === 'text');
+            if (textContent) {
+                res = textContent.text;
+            } else {
+                console.warn('No text content found in the response.');
+                res = 'No response from Claude.';
+            }
         }
         catch (err) {
+            if (err.message.includes("does not support image input")) {
+                res = "Vision is only supported by certain models.";
+            } else {
+                res = "My brain disconnected, try again.";
+            }
             console.log(err);
-            res = 'My brain disconnected, try again.';
         }
         return res;
     }

+    async sendVisionRequest(turns, systemMessage, imageBuffer) {
+        const imageMessages = [...turns];
+        imageMessages.push({
+            role: "user",
+            content: [
+                {
+                    type: "text",
+                    text: systemMessage
+                },
+                {
+                    type: "image",
+                    source: {
+                        type: "base64",
+                        media_type: "image/jpeg",
+                        data: imageBuffer.toString('base64')
+                    }
+                }
+            ]
+        });
+        return this.sendRequest(imageMessages, systemMessage);
+    }
+
     async embed(text) {
         throw new Error('Embeddings are not supported by Claude.');
     }
 }
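A sketch of a profile that exercises the new max_tokens handling above when Anthropic extended thinking is enabled; the model alias and budget value are illustrative:

const claudeProfile = {
    model: 'anthropic/claude-3-7-sonnet-latest',
    params: {
        // With a thinking budget set, max_tokens defaults to budget_tokens + 1000.
        thinking: { type: 'enabled', budget_tokens: 4000 }
    }
};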


@ -3,6 +3,7 @@
 import { getKey, hasKey } from '../utils/keys.js';
 import { strictFormat } from '../utils/text.js';
 export class DeepSeek {
+    static prefix = 'deepseek';
     constructor(model_name, url, params) {
         this.model_name = model_name;
         this.params = params;


@ -3,6 +3,7 @@
 import { toSinglePrompt, strictFormat } from '../utils/text.js';
 import { getKey } from '../utils/keys.js';
 export class Gemini {
+    static prefix = 'google';
     constructor(model_name, url, params) {
         this.model_name = model_name;
         this.params = params;
@ -36,10 +37,9 @@ export class Gemini {
     async sendRequest(turns, systemMessage) {
         let model;
         const modelConfig = {
-            model: this.model_name || "gemini-1.5-flash",
+            model: this.model_name || "gemini-2.5-flash",
             // systemInstruction does not work bc google is trash
         };
         if (this.url) {
             model = this.genAI.getGenerativeModel(
                 modelConfig,
@ -72,26 +72,90 @@ export class Gemini {
             }
         });
         const response = await result.response;
-        const text = response.text();
+        let text;
+        // Handle "thinking" models since they smart
+        if (this.model_name && this.model_name.includes("thinking")) {
+            if (
+                response.candidates &&
+                response.candidates.length > 0 &&
+                response.candidates[0].content &&
+                response.candidates[0].content.parts &&
+                response.candidates[0].content.parts.length > 1
+            ) {
+                text = response.candidates[0].content.parts[1].text;
+            } else {
+                console.warn("Unexpected response structure for thinking model:", response);
+                text = response.text();
+            }
+        } else {
+            text = response.text();
+        }
         console.log('Received.');
         return text;
     }

-    async embed(text) {
-        let model;
-        if (this.url) {
-            model = this.genAI.getGenerativeModel(
-                { model: "text-embedding-004" },
-                { baseUrl: this.url }
-            );
-        } else {
-            model = this.genAI.getGenerativeModel(
-                { model: "text-embedding-004" }
-            );
-        }
-        const result = await model.embedContent(text);
-        return result.embedding.values;
-    }
-}
+    async sendVisionRequest(turns, systemMessage, imageBuffer) {
+        let model;
+        if (this.url) {
+            model = this.genAI.getGenerativeModel(
+                { model: this.model_name || "gemini-1.5-flash" },
+                { baseUrl: this.url },
+                { safetySettings: this.safetySettings }
+            );
+        } else {
+            model = this.genAI.getGenerativeModel(
+                { model: this.model_name || "gemini-1.5-flash" },
+                { safetySettings: this.safetySettings }
+            );
+        }
+        const imagePart = {
+            inlineData: {
+                data: imageBuffer.toString('base64'),
+                mimeType: 'image/jpeg'
+            }
+        };
+        const stop_seq = '***';
+        const prompt = toSinglePrompt(turns, systemMessage, stop_seq, 'model');
+        let res = null;
+        try {
+            console.log('Awaiting Google API vision response...');
+            const result = await model.generateContent([prompt, imagePart]);
+            const response = await result.response;
+            const text = response.text();
+            console.log('Received.');
+            if (!text.includes(stop_seq)) return text;
+            const idx = text.indexOf(stop_seq);
+            res = text.slice(0, idx);
+        } catch (err) {
+            console.log(err);
+            if (err.message.includes("Image input modality is not enabled for models/")) {
+                res = "Vision is only supported by certain models.";
+            } else {
+                res = "An unexpected error occurred, please try again.";
+            }
+        }
+        return res;
+    }
+
+    async embed(text) {
+        let model = this.model_name || "text-embedding-004";
+        if (this.url) {
+            model = this.genAI.getGenerativeModel(
+                { model },
+                { baseUrl: this.url }
+            );
+        } else {
+            model = this.genAI.getGenerativeModel(
+                { model }
+            );
+        }
+        const result = await model.embedContent(text);
+        return result.embedding.values;
+    }
+}

71
src/models/glhf.js Normal file

@ -0,0 +1,71 @@
import OpenAIApi from 'openai';
import { getKey } from '../utils/keys.js';
export class GLHF {
static prefix = 'glhf';
constructor(model_name, url) {
this.model_name = model_name;
const apiKey = getKey('GHLF_API_KEY');
if (!apiKey) {
throw new Error('API key not found. Please check keys.json and ensure GHLF_API_KEY is defined.');
}
this.openai = new OpenAIApi({
apiKey,
baseURL: url || "https://glhf.chat/api/openai/v1"
});
}
async sendRequest(turns, systemMessage, stop_seq = '***') {
// Construct the message array for the API request.
let messages = [{ role: 'system', content: systemMessage }].concat(turns);
const pack = {
model: this.model_name || "hf:meta-llama/Llama-3.1-405B-Instruct",
messages,
stop: [stop_seq]
};
const maxAttempts = 5;
let attempt = 0;
let finalRes = null;
while (attempt < maxAttempts) {
attempt++;
console.log(`Awaiting glhf.chat API response... (attempt: ${attempt})`);
try {
let completion = await this.openai.chat.completions.create(pack);
if (completion.choices[0].finish_reason === 'length') {
throw new Error('Context length exceeded');
}
let res = completion.choices[0].message.content;
// If there's an open <think> tag without a corresponding </think>, retry.
if (res.includes("<think>") && !res.includes("</think>")) {
console.warn("Partial <think> block detected. Re-generating...");
continue;
}
// If there's a closing </think> tag but no opening <think>, prepend one.
if (res.includes("</think>") && !res.includes("<think>")) {
res = "<think>" + res;
}
finalRes = res.replace(/<\|separator\|>/g, '*no response*');
break; // Valid response obtained.
} catch (err) {
if ((err.message === 'Context length exceeded' || err.code === 'context_length_exceeded') && turns.length > 1) {
console.log('Context length exceeded, trying again with shorter context.');
return await this.sendRequest(turns.slice(1), systemMessage, stop_seq);
} else {
console.error(err);
finalRes = 'My brain disconnected, try again.';
break;
}
}
}
if (finalRes === null) {
finalRes = "I thought too hard, sorry, try again";
}
return finalRes;
}
async embed(text) {
throw new Error('Embeddings are not supported by glhf.');
}
}


@ -3,6 +3,7 @@ import { getKey, hasKey } from '../utils/keys.js';
import { strictFormat } from '../utils/text.js';
export class GPT {
+ static prefix = 'openai';
constructor(model_name, url, params) {
this.model_name = model_name;
this.params = params;
@ -21,22 +22,22 @@ export class GPT {
async sendRequest(turns, systemMessage, stop_seq='***') {
let messages = [{'role': 'system', 'content': systemMessage}].concat(turns);
+ messages = strictFormat(messages);
+ let model = this.model_name || "gpt-4o-mini";
const pack = {
- model: this.model_name || "gpt-3.5-turbo",
+ model: model,
messages,
stop: stop_seq,
...(this.params || {})
};
- if (this.model_name.includes('o1')) {
- pack.messages = strictFormat(messages);
+ if (model.includes('o1') || model.includes('o3') || model.includes('5')) {
delete pack.stop;
}
let res = null;
try {
- console.log('Awaiting openai api response from model', this.model_name)
+ console.log('Awaiting openai api response from model', model)
// console.log('Messages:', messages);
let completion = await this.openai.chat.completions.create(pack);
if (completion.choices[0].finish_reason == 'length')
@ -48,6 +49,9 @@ export class GPT {
if ((err.message == 'Context length exceeded' || err.code == 'context_length_exceeded') && turns.length > 1) {
console.log('Context length exceeded, trying again with shorter context.');
return await this.sendRequest(turns.slice(1), systemMessage, stop_seq);
+ } else if (err.message.includes('image_url')) {
+ console.log(err);
+ res = 'Vision is only supported by certain models.';
} else {
console.log(err);
res = 'My brain disconnected, try again.';
@ -56,6 +60,24 @@ export class GPT {
return res;
}
async sendVisionRequest(messages, systemMessage, imageBuffer) {
const imageMessages = [...messages];
imageMessages.push({
role: "user",
content: [
{ type: "text", text: systemMessage },
{
type: "image_url",
image_url: {
url: `data:image/jpeg;base64,${imageBuffer.toString('base64')}`
}
}
]
});
return this.sendRequest(imageMessages, systemMessage);
}
async embed(text) {
if (text.length > 8191)
text = text.slice(0, 8191);
@ -66,7 +88,5 @@ export class GPT {
});
return embedding.data[0].embedding;
}
}
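For context, the sendVisionRequest helper added above just wraps a raw image Buffer as a base64 data URL in an OpenAI-style image_url content part. A minimal usage sketch, with an assumed vision-capable model and a hypothetical screenshot path:

import { readFileSync } from 'fs';
import { GPT } from './src/models/gpt.js';

const gpt = new GPT('gpt-4o-mini'); // assumed vision-capable model
const imageBuffer = readFileSync('./bots/andy/screenshots/latest.jpg'); // hypothetical path
// the system message doubles as the text part sent alongside the image
const reply = await gpt.sendVisionRequest([], 'Describe what the bot currently sees.', imageBuffer);
console.log(reply);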

src/models/grok.js

@ -3,6 +3,7 @@ import { getKey } from '../utils/keys.js';
// xAI doesn't supply a SDK for their models, but fully supports OpenAI and Anthropic SDKs
export class Grok {
+ static prefix = 'xai';
constructor(model_name, url, params) {
this.model_name = model_name;
this.url = url;
@ -19,13 +20,12 @@ export class Grok {
this.openai = new OpenAIApi(config);
}
- async sendRequest(turns, systemMessage, stop_seq='***') {
+ async sendRequest(turns, systemMessage) {
let messages = [{'role': 'system', 'content': systemMessage}].concat(turns);
const pack = {
- model: this.model_name || "grok-beta",
+ model: this.model_name || "grok-3-mini-latest",
messages,
- stop: [stop_seq],
...(this.params || {})
};
@ -42,7 +42,10 @@ export class Grok {
catch (err) {
if ((err.message == 'Context length exceeded' || err.code == 'context_length_exceeded') && turns.length > 1) {
console.log('Context length exceeded, trying again with shorter context.');
- return await this.sendRequest(turns.slice(1), systemMessage, stop_seq);
+ return await this.sendRequest(turns.slice(1), systemMessage);
+ } else if (err.message.includes('The model expects a single `text` element per message.')) {
+ console.log(err);
+ res = 'Vision is only supported by certain models.';
} else {
console.log(err);
res = 'My brain disconnected, try again.';
@ -51,6 +54,24 @@ export class Grok {
// sometimes outputs special token <|separator|>, just replace it
return res.replace(/<\|separator\|>/g, '*no response*');
}
async sendVisionRequest(messages, systemMessage, imageBuffer) {
const imageMessages = [...messages];
imageMessages.push({
role: "user",
content: [
{ type: "text", text: systemMessage },
{
type: "image_url",
image_url: {
url: `data:image/jpeg;base64,${imageBuffer.toString('base64')}`
}
}
]
});
return this.sendRequest(imageMessages, systemMessage);
}
async embed(text) {
throw new Error('Embeddings are not supported by Grok.');

src/models/groq.js

@ -1,53 +1,95 @@
import Groq from 'groq-sdk'
import { getKey } from '../utils/keys.js';
+ // THIS API IS NOT TO BE CONFUSED WITH GROK!
+ // Go to grok.js for that. :)
- // Umbrella class for Mixtral, LLama, Gemma...
+ // Umbrella class for everything under the sun... That GroqCloud provides, that is.
export class GroqCloudAPI {
+ static prefix = 'groq';
constructor(model_name, url, params) {
this.model_name = model_name;
this.url = url;
this.params = params || {};
- // ReplicateAPI theft :3
- if (this.url) {
+ // Remove any mention of "tools" from params:
+ if (this.params.tools)
+ delete this.params.tools;
+ // This is just a bit of future-proofing in case we drag Mindcraft in that direction.
+ // I'm going to do a sneaky ReplicateAPI theft for a lot of this, aren't I?
+ if (this.url)
console.warn("Groq Cloud has no implementation for custom URLs. Ignoring provided URL.");
- }
this.groq = new Groq({ apiKey: getKey('GROQCLOUD_API_KEY') });
}
async sendRequest(turns, systemMessage, stop_seq = null) {
+ // Construct messages array
let messages = [{"role": "system", "content": systemMessage}].concat(turns);
let res = null;
try {
console.log("Awaiting Groq response...");
- if (!this.params.max_tokens) {
- this.params.max_tokens = 16384;
- }
+ // Handle deprecated max_tokens parameter
+ if (this.params.max_tokens) {
+ console.warn("GROQCLOUD WARNING: A profile is using `max_tokens`. This is deprecated. Please move to `max_completion_tokens`.");
+ this.params.max_completion_tokens = this.params.max_tokens;
+ delete this.params.max_tokens;
+ }
+ if (!this.params.max_completion_tokens) {
+ this.params.max_completion_tokens = 4000;
+ }
let completion = await this.groq.chat.completions.create({
"messages": messages,
- "model": this.model_name || "mixtral-8x7b-32768",
+ "model": this.model_name || "qwen/qwen3-32b",
- "stream": true,
+ "stream": false,
"stop": stop_seq,
...(this.params || {})
});
- let temp_res = "";
- for await (const chunk of completion) {
- temp_res += chunk.choices[0]?.delta?.content || '';
- }
- res = temp_res;
+ res = completion.choices[0].message.content;
+ res = res.replace(/<think>[\s\S]*?<\/think>/g, '').trim();
}
catch(err) {
+ if (err.message.includes("content must be a string")) {
+ res = "Vision is only supported by certain models.";
+ } else {
+ res = "My brain disconnected, try again.";
+ }
console.log(err);
- res = "My brain just kinda stopped working. Try again.";
}
return res;
}
- async embed(text) {
+ async sendVisionRequest(messages, systemMessage, imageBuffer) {
const imageMessages = messages.filter(message => message.role !== 'system');
imageMessages.push({
role: "user",
content: [
{ type: "text", text: systemMessage },
{
type: "image_url",
image_url: {
url: `data:image/jpeg;base64,${imageBuffer.toString('base64')}`
}
}
]
});
return this.sendRequest(imageMessages);
}
async embed(_) {
throw new Error('Embeddings are not supported by Groq.');
}
}
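Because of the deprecation handling above, profiles can pass max_completion_tokens directly and skip the warning. A sketch of the relevant profile fields (values other than the token option are illustrative assumptions):

{
    "model": "groq/qwen/qwen3-32b",
    "params": {
        "max_completion_tokens": 4000,
        "temperature": 0.7
    }
}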

src/models/huggingface.js

@ -1,46 +1,86 @@
import { toSinglePrompt } from '../utils/text.js';
import { getKey } from '../utils/keys.js';
import { HfInference } from "@huggingface/inference";
export class HuggingFace {
+ static prefix = 'huggingface';
constructor(model_name, url, params) {
+ // Remove 'huggingface/' prefix if present
this.model_name = model_name.replace('huggingface/', '');
this.url = url;
this.params = params;
if (this.url) {
console.warn("Hugging Face doesn't support custom urls!");
}
this.huggingface = new HfInference(getKey('HUGGINGFACE_API_KEY'));
}
async sendRequest(turns, systemMessage) {
const stop_seq = '***';
// Build a single prompt from the conversation turns
const prompt = toSinglePrompt(turns, null, stop_seq);
// Fallback model if none was provided
const model_name = this.model_name || 'meta-llama/Meta-Llama-3-8B';
// Combine system message with the prompt
const input = systemMessage + "\n" + prompt;
// We'll try up to 5 times in case of partial <think> blocks for DeepSeek-R1 models.
const maxAttempts = 5;
let attempt = 0;
let finalRes = null;
while (attempt < maxAttempts) {
attempt++;
console.log(`Awaiting Hugging Face API response... (model: ${model_name}, attempt: ${attempt})`);
let res = '';
try {
// Consume the streaming response chunk by chunk
for await (const chunk of this.huggingface.chatCompletionStream({
model: model_name,
messages: [{ role: "user", content: input }],
...(this.params || {})
})) {
res += (chunk.choices[0]?.delta?.content || "");
}
} catch (err) {
console.log(err);
res = 'My brain disconnected, try again.';
// Break out immediately; we only retry when handling partial <think> tags.
break;
}
// If the model is DeepSeek-R1, check for mismatched <think> blocks.
const hasOpenTag = res.includes("<think>");
const hasCloseTag = res.includes("</think>");
// If there's a partial mismatch, warn and retry the entire request.
if ((hasOpenTag && !hasCloseTag)) {
console.warn("Partial <think> block detected. Re-generating...");
continue;
}
+ // If both tags are present, remove the <think> block entirely.
+ if (hasOpenTag && hasCloseTag) {
+ res = res.replace(/<think>[\s\S]*?<\/think>/g, '').trim();
+ }
+ finalRes = res;
+ break; // Exit loop if we got a valid response.
+ }
+ // If no valid response was obtained after max attempts, assign a fallback.
+ if (finalRes == null) {
+ console.warn("Could not get a valid <think> block or normal response after max attempts.");
+ finalRes = 'I thought too hard, sorry, try again.';
+ }
+ console.log('Received.');
+ console.log(finalRes);
+ return finalRes;
+ }
- async sendRequest(turns, systemMessage) {
- const stop_seq = '***';
- const prompt = toSinglePrompt(turns, null, stop_seq);
- let model_name = this.model_name || 'meta-llama/Meta-Llama-3-8B';
- const input = systemMessage + "\n" + prompt;
- let res = '';
- try {
- console.log('Awaiting Hugging Face API response...');
- for await (const chunk of this.huggingface.chatCompletionStream({
- model: model_name,
- messages: [{ role: "user", content: input }],
- ...(this.params || {})
- })) {
- res += (chunk.choices[0]?.delta?.content || "");
- }
- } catch (err) {
- console.log(err);
- res = 'My brain disconnected, try again.';
- }
- console.log('Received.');
- console.log(res);
- return res;
- }
async embed(text) {
throw new Error('Embeddings are not supported by HuggingFace.');
}
}

114
src/models/hyperbolic.js Normal file

@ -0,0 +1,114 @@
import { getKey } from '../utils/keys.js';
export class Hyperbolic {
static prefix = 'hyperbolic';
constructor(modelName, apiUrl) {
this.modelName = modelName || "deepseek-ai/DeepSeek-V3";
this.apiUrl = apiUrl || "https://api.hyperbolic.xyz/v1/chat/completions";
// Retrieve the Hyperbolic API key from keys.js
this.apiKey = getKey('HYPERBOLIC_API_KEY');
if (!this.apiKey) {
throw new Error('HYPERBOLIC_API_KEY not found. Check your keys.js file.');
}
}
/**
* Sends a chat completion request to the Hyperbolic endpoint.
*
* @param {Array} turns - An array of message objects, e.g. [{role: 'user', content: 'Hi'}].
* @param {string} systemMessage - The system prompt or instruction.
* @param {string} stopSeq - A stopping sequence, default '***'.
* @returns {Promise<string>} - The model's reply.
*/
async sendRequest(turns, systemMessage, stopSeq = '***') {
// Prepare the messages with a system prompt at the beginning
const messages = [{ role: 'system', content: systemMessage }, ...turns];
// Build the request payload
const payload = {
model: this.modelName,
messages: messages,
max_tokens: 8192,
temperature: 0.7,
top_p: 0.9,
stream: false
};
const maxAttempts = 5;
let attempt = 0;
let finalRes = null;
while (attempt < maxAttempts) {
attempt++;
console.log(`Awaiting Hyperbolic API response... (attempt: ${attempt})`);
console.log('Messages:', messages);
let completionContent = null;
try {
const response = await fetch(this.apiUrl, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${this.apiKey}`
},
body: JSON.stringify(payload)
});
if (!response.ok) {
throw new Error(`HTTP error! status: ${response.status}`);
}
const data = await response.json();
if (data?.choices?.[0]?.finish_reason === 'length') {
throw new Error('Context length exceeded');
}
completionContent = data?.choices?.[0]?.message?.content || '';
console.log('Received response from Hyperbolic.');
} catch (err) {
if (
(err.message === 'Context length exceeded' || err.code === 'context_length_exceeded') &&
turns.length > 1
) {
console.log('Context length exceeded, trying again with a shorter context...');
return await this.sendRequest(turns.slice(1), systemMessage, stopSeq);
} else {
console.error(err);
completionContent = 'My brain disconnected, try again.';
}
}
// Check for <think> blocks
const hasOpenTag = completionContent.includes("<think>");
const hasCloseTag = completionContent.includes("</think>");
if ((hasOpenTag && !hasCloseTag)) {
console.warn("Partial <think> block detected. Re-generating...");
continue; // Retry the request
}
if (hasCloseTag && !hasOpenTag) {
completionContent = '<think>' + completionContent;
}
if (hasOpenTag && hasCloseTag) {
completionContent = completionContent.replace(/<think>[\s\S]*?<\/think>/g, '').trim();
}
finalRes = completionContent.replace(/<\|separator\|>/g, '*no response*');
break; // Valid response obtained—exit loop
}
if (finalRes == null) {
console.warn("Could not get a valid <think> block or normal response after max attempts.");
finalRes = 'I thought too hard, sorry, try again.';
}
return finalRes;
}
async embed(text) {
throw new Error('Embeddings are not supported by Hyperbolic.');
}
}
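A minimal usage sketch for the class above, assuming HYPERBOLIC_API_KEY is present in keys.json as the constructor requires; with no arguments it falls back to the documented DeepSeek-V3 model and default endpoint:

import { Hyperbolic } from './src/models/hyperbolic.js';

const hyperbolic = new Hyperbolic(); // defaults to deepseek-ai/DeepSeek-V3 at api.hyperbolic.xyz
const reply = await hyperbolic.sendRequest(
    [{ role: 'user', content: 'hello' }],
    'You are a helpful Minecraft assistant.'
);
console.log(reply);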

src/models/local.js

@ -1,66 +0,0 @@
import { strictFormat } from '../utils/text.js';
export class Local {
constructor(model_name, url, params) {
this.model_name = model_name;
this.params = params;
this.url = url || 'http://127.0.0.1:11434';
this.chat_endpoint = '/api/chat';
this.embedding_endpoint = '/api/embeddings';
}
async sendRequest(turns, systemMessage) {
let model = this.model_name || 'llama3';
let messages = strictFormat(turns);
messages.unshift({role: 'system', content: systemMessage});
let res = null;
try {
console.log(`Awaiting local response... (model: ${model})`)
res = await this.send(this.chat_endpoint, {
model: model,
messages: messages,
stream: false,
...(this.params || {})
});
if (res)
res = res['message']['content'];
}
catch (err) {
if (err.message.toLowerCase().includes('context length') && turns.length > 1) {
console.log('Context length exceeded, trying again with shorter context.');
return await sendRequest(turns.slice(1), systemMessage, stop_seq);
} else {
console.log(err);
res = 'My brain disconnected, try again.';
}
}
return res;
}
async embed(text) {
let model = this.model_name || 'nomic-embed-text';
let body = {model: model, prompt: text};
let res = await this.send(this.embedding_endpoint, body);
return res['embedding']
}
async send(endpoint, body) {
const url = new URL(endpoint, this.url);
let method = 'POST';
let headers = new Headers();
const request = new Request(url, {method, headers, body: JSON.stringify(body)});
let data = null;
try {
const res = await fetch(request);
if (res.ok) {
data = await res.json();
} else {
throw new Error(`Ollama Status: ${res.status}`);
}
} catch (err) {
console.error('Failed to send Ollama request.');
console.error(err);
}
return data;
}
}

92
src/models/mercury.js Normal file

@ -0,0 +1,92 @@
import OpenAIApi from 'openai';
import { getKey, hasKey } from '../utils/keys.js';
import { strictFormat } from '../utils/text.js';
export class Mercury {
constructor(model_name, url, params) {
this.model_name = model_name;
this.params = params;
let config = {};
if (url)
config.baseURL = url;
config.apiKey = getKey('MERCURY_API_KEY');
this.openai = new OpenAIApi(config);
}
async sendRequest(turns, systemMessage, stop_seq='***') {
if (typeof stop_seq === 'string') {
stop_seq = [stop_seq];
} else if (!Array.isArray(stop_seq)) {
stop_seq = [];
}
let messages = [{'role': 'system', 'content': systemMessage}].concat(turns);
messages = strictFormat(messages);
const pack = {
model: this.model_name || "mercury-coder-small",
messages,
stop: stop_seq,
...(this.params || {})
};
let res = null;
try {
console.log('Awaiting mercury api response from model', this.model_name)
// console.log('Messages:', messages);
let completion = await this.openai.chat.completions.create(pack);
if (completion.choices[0].finish_reason == 'length')
throw new Error('Context length exceeded');
console.log('Received.')
res = completion.choices[0].message.content;
}
catch (err) {
if ((err.message == 'Context length exceeded' || err.code == 'context_length_exceeded') && turns.length > 1) {
console.log('Context length exceeded, trying again with shorter context.');
return await this.sendRequest(turns.slice(1), systemMessage, stop_seq);
} else if (err.message.includes('image_url')) {
console.log(err);
res = 'Vision is only supported by certain models.';
} else {
console.log(err);
res = 'My brain disconnected, try again.';
}
}
return res;
}
async sendVisionRequest(messages, systemMessage, imageBuffer) {
const imageMessages = [...messages];
imageMessages.push({
role: "user",
content: [
{ type: "text", text: systemMessage },
{
type: "image_url",
image_url: {
url: `data:image/jpeg;base64,${imageBuffer.toString('base64')}`
}
}
]
});
return this.sendRequest(imageMessages, systemMessage);
}
async embed(text) {
if (text.length > 8191)
text = text.slice(0, 8191);
const embedding = await this.openai.embeddings.create({
model: this.model_name || "text-embedding-3-small",
input: text,
encoding_format: "float",
});
return embedding.data[0].embedding;
}
}
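Note that, unlike the other providers in this commit, Mercury does not declare a static prefix here, so exactly how profiles route to it depends on the model map. A hypothetical profile entry, assuming a 'mercury' api field on the model object:

{
    "model": {
        "api": "mercury",
        "model": "mercury-coder-small"
    }
}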

src/models/mistral.js

@ -3,6 +3,7 @@ import { getKey } from '../utils/keys.js';
import { strictFormat } from '../utils/text.js';
export class Mistral {
+ static prefix = 'mistral';
#client;
constructor(model_name, url, params) {
@ -47,6 +48,7 @@ export class Mistral {
];
messages.push(...strictFormat(turns));
+ console.log('Awaiting mistral api response...')
const response = await this.#client.chat.complete({
model,
messages,
@ -55,14 +57,33 @@ export class Mistral {
result = response.choices[0].message.content;
} catch (err) {
- console.log(err)
- result = "My brain disconnected, try again.";
+ if (err.message.includes("A request containing images has been given to a model which does not have the 'vision' capability.")) {
+ result = "Vision is only supported by certain models.";
+ } else {
+ result = "My brain disconnected, try again.";
+ }
+ console.log(err);
}
return result;
}
async sendVisionRequest(messages, systemMessage, imageBuffer) {
const imageMessages = [...messages];
imageMessages.push({
role: "user",
content: [
{ type: "text", text: systemMessage },
{
type: "image_url",
imageUrl: `data:image/jpeg;base64,${imageBuffer.toString('base64')}`
}
]
});
return this.sendRequest(imageMessages, systemMessage);
}
async embed(text) {
const embedding = await this.#client.embeddings.create({
model: "mistral-embed",

src/models/novita.js

@ -4,8 +4,9 @@ import { strictFormat } from '../utils/text.js';
// llama, mistral
export class Novita {
+ static prefix = 'novita';
constructor(model_name, url, params) {
- this.model_name = model_name.replace('novita/', '');
+ this.model_name = model_name;
this.url = url || 'https://api.novita.ai/v3/openai';
this.params = params;
@ -25,7 +26,7 @@ export class Novita {
messages = strictFormat(messages);
const pack = {
- model: this.model_name || "meta-llama/llama-3.1-70b-instruct",
+ model: this.model_name || "meta-llama/llama-4-scout-17b-16e-instruct",
messages,
stop: [stop_seq],
...(this.params || {})

115
src/models/ollama.js Normal file

@ -0,0 +1,115 @@
import { strictFormat } from '../utils/text.js';
export class Ollama {
static prefix = 'ollama';
constructor(model_name, url, params) {
this.model_name = model_name;
this.params = params;
this.url = url || 'http://127.0.0.1:11434';
this.chat_endpoint = '/api/chat';
this.embedding_endpoint = '/api/embeddings';
}
async sendRequest(turns, systemMessage) {
let model = this.model_name || 'sweaterdog/andy-4:micro-q8_0';
let messages = strictFormat(turns);
messages.unshift({ role: 'system', content: systemMessage });
const maxAttempts = 5;
let attempt = 0;
let finalRes = null;
while (attempt < maxAttempts) {
attempt++;
console.log(`Awaiting local response... (model: ${model}, attempt: ${attempt})`);
let res = null;
try {
let apiResponse = await this.send(this.chat_endpoint, {
model: model,
messages: messages,
stream: false,
...(this.params || {})
});
if (apiResponse) {
res = apiResponse['message']['content'];
} else {
res = 'No response data.';
}
} catch (err) {
if (err.message.toLowerCase().includes('context length') && turns.length > 1) {
console.log('Context length exceeded, trying again with shorter context.');
return await this.sendRequest(turns.slice(1), systemMessage);
} else {
console.log(err);
res = 'My brain disconnected, try again.';
}
}
const hasOpenTag = res.includes("<think>");
const hasCloseTag = res.includes("</think>");
if ((hasOpenTag && !hasCloseTag)) {
console.warn("Partial <think> block detected. Re-generating...");
if (attempt < maxAttempts) continue;
}
if (hasCloseTag && !hasOpenTag) {
res = '<think>' + res;
}
if (hasOpenTag && hasCloseTag) {
res = res.replace(/<think>[\s\S]*?<\/think>/g, '').trim();
}
finalRes = res;
break;
}
if (finalRes == null) {
console.warn("Could not get a valid response after max attempts.");
finalRes = 'I thought too hard, sorry, try again.';
}
return finalRes;
}
async embed(text) {
let model = this.model_name || 'nomic-embed-text';
let body = { model: model, input: text };
let res = await this.send(this.embedding_endpoint, body);
return res['embedding'];
}
async send(endpoint, body) {
const url = new URL(endpoint, this.url);
let method = 'POST';
let headers = new Headers();
const request = new Request(url, { method, headers, body: JSON.stringify(body) });
let data = null;
try {
const res = await fetch(request);
if (res.ok) {
data = await res.json();
} else {
throw new Error(`Ollama Status: ${res.status}`);
}
} catch (err) {
console.error('Failed to send Ollama request.');
console.error(err);
}
return data;
}
async sendVisionRequest(messages, systemMessage, imageBuffer) {
const imageMessages = [...messages];
imageMessages.push({
role: "user",
content: [
{ type: "text", text: systemMessage },
{
type: "image_url",
image_url: {
url: `data:image/jpeg;base64,${imageBuffer.toString('base64')}`
}
}
]
});
return this.sendRequest(imageMessages, systemMessage);
}
}
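A sketch of a profile pointed at a local Ollama server; the model and URL mirror the defaults in the constructor above, and the object/string shapes follow the selectAPI conventions used elsewhere in this commit (field layout is an assumption, not confirmed here):

{
    "name": "andy",
    "model": {
        "api": "ollama",
        "model": "sweaterdog/andy-4:micro-q8_0",
        "url": "http://127.0.0.1:11434"
    },
    "embedding": "ollama"
}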

src/models/openrouter.js

@ -3,6 +3,7 @@ import { getKey, hasKey } from '../utils/keys.js';
import { strictFormat } from '../utils/text.js';
export class OpenRouter {
+ static prefix = 'openrouter';
constructor(model_name, url) {
this.model_name = model_name;
@ -52,6 +53,24 @@ export class OpenRouter {
return res;
}
async sendVisionRequest(messages, systemMessage, imageBuffer) {
const imageMessages = [...messages];
imageMessages.push({
role: "user",
content: [
{ type: "text", text: systemMessage },
{
type: "image_url",
image_url: {
url: `data:image/jpeg;base64,${imageBuffer.toString('base64')}`
}
}
]
});
return this.sendRequest(imageMessages, systemMessage);
}
async embed(text) {
throw new Error('Embeddings are not supported by Openrouter.');
}

src/models/prompter.js

@ -1,33 +1,33 @@
import { readFileSync, mkdirSync, writeFileSync} from 'fs';
import { Examples } from '../utils/examples.js';
import { getCommandDocs } from '../agent/commands/index.js';
- import { getSkillDocs } from '../agent/library/index.js';
import { SkillLibrary } from "../agent/library/skill_library.js";
import { stringifyTurns } from '../utils/text.js';
import { getCommand } from '../agent/commands/index.js';
- import settings from '../../settings.js';
+ import settings from '../agent/settings.js';
+ import { promises as fs } from 'fs';
+ import path from 'path';
+ import { fileURLToPath } from 'url';
+ import { selectAPI, createModel } from './_model_map.js';
- import { Gemini } from './gemini.js';
- import { GPT } from './gpt.js';
- import { Claude } from './claude.js';
- import { Mistral } from './mistral.js';
- import { ReplicateAPI } from './replicate.js';
- import { Local } from './local.js';
- import { Novita } from './novita.js';
- import { GroqCloudAPI } from './groq.js';
- import { HuggingFace } from './huggingface.js';
- import { Qwen } from "./qwen.js";
- import { Grok } from "./grok.js";
- import { DeepSeek } from './deepseek.js';
- import { AzureGPT } from './azure.js';
- import { OpenRouter } from './openrouter.js';
+ const __filename = fileURLToPath(import.meta.url);
+ const __dirname = path.dirname(__filename);
export class Prompter {
- constructor(agent, fp) {
+ constructor(agent, profile) {
this.agent = agent;
- this.profile = JSON.parse(readFileSync(fp, 'utf8'));
+ this.profile = profile;
let default_profile = JSON.parse(readFileSync('./profiles/defaults/_default.json', 'utf8'));
- let base_fp = settings.base_profile;
+ let base_fp = '';
+ if (settings.base_profile.includes('survival')) {
+ base_fp = './profiles/defaults/survival.json';
+ } else if (settings.base_profile.includes('assistant')) {
+ base_fp = './profiles/defaults/assistant.json';
+ } else if (settings.base_profile.includes('creative')) {
+ base_fp = './profiles/defaults/creative.json';
+ } else if (settings.base_profile.includes('god_mode')) {
+ base_fp = './profiles/defaults/god_mode.json';
+ }
let base_profile = JSON.parse(readFileSync(base_fp, 'utf8'));
// first use defaults to fill in missing values in the base profile
@ -42,7 +42,6 @@ export class Prompter {
} }
// base overrides default, individual overrides base // base overrides default, individual overrides base
this.convo_examples = null; this.convo_examples = null;
this.coding_examples = null; this.coding_examples = null;
@ -51,64 +50,46 @@ export class Prompter {
this.last_prompt_time = 0;
this.awaiting_coding = false;
- // try to get "max_tokens" parameter, else null
+ // for backwards compatibility, move max_tokens to params
let max_tokens = null;
if (this.profile.max_tokens)
max_tokens = this.profile.max_tokens;
- let chat_model_profile = this._selectAPI(this.profile.model);
- this.chat_model = this._createModel(chat_model_profile);
+ let chat_model_profile = selectAPI(this.profile.model);
+ this.chat_model = createModel(chat_model_profile);
if (this.profile.code_model) {
- let code_model_profile = this._selectAPI(this.profile.code_model);
- this.code_model = this._createModel(code_model_profile);
+ let code_model_profile = selectAPI(this.profile.code_model);
+ this.code_model = createModel(code_model_profile);
}
else {
this.code_model = this.chat_model;
}
- let embedding = this.profile.embedding;
- if (embedding === undefined) {
- if (chat_model_profile.api !== 'ollama')
- embedding = {api: chat_model_profile.api};
- else
- embedding = {api: 'none'};
- }
- else if (typeof embedding === 'string' || embedding instanceof String)
- embedding = {api: embedding};
- console.log('Using embedding settings:', embedding);
- try {
- if (embedding.api === 'google')
- this.embedding_model = new Gemini(embedding.model, embedding.url);
- else if (embedding.api === 'openai')
- this.embedding_model = new GPT(embedding.model, embedding.url);
- else if (embedding.api === 'azure')
- this.embedding_model = new AzureGPT(embedding.model, embedding.url, embedding.api_version);
- else if (embedding.api === 'replicate')
- this.embedding_model = new ReplicateAPI(embedding.model, embedding.url);
- else if (embedding.api === 'ollama')
- this.embedding_model = new Local(embedding.model, embedding.url);
- else if (embedding.api === 'qwen')
- this.embedding_model = new Qwen(embedding.model, embedding.url);
- else if (embedding.api === 'mistral')
- this.embedding_model = new Mistral(embedding.model, embedding.url);
- else if (embedding.api === 'huggingface')
- this.embedding_model = new HuggingFace(embedding.model, embedding.url);
- else if (embedding.api === 'novita')
- this.embedding_model = new Novita(embedding.model, embedding.url);
- else {
- this.embedding_model = null;
- let embedding_name = embedding ? embedding.api : '[NOT SPECIFIED]'
- console.warn('Unsupported embedding: ' + embedding_name + '. Using word-overlap instead, expect reduced performance. Recommend using a supported embedding model. See Readme.');
- }
- }
- catch (err) {
- console.warn('Warning: Failed to initialize embedding model:', err.message);
- console.log('Continuing anyway, using word-overlap instead.');
- this.embedding_model = null;
- }
+ if (this.profile.vision_model) {
+ let vision_model_profile = selectAPI(this.profile.vision_model);
+ this.vision_model = createModel(vision_model_profile);
+ }
+ else {
+ this.vision_model = this.chat_model;
+ }
+ let embedding_model_profile = null;
+ if (this.profile.embedding) {
+ try {
+ embedding_model_profile = selectAPI(this.profile.embedding);
+ } catch (e) {
+ embedding_model_profile = null;
+ }
+ }
+ if (embedding_model_profile) {
+ this.embedding_model = createModel(embedding_model_profile);
+ }
+ else {
+ this.embedding_model = createModel({api: chat_model_profile.api});
+ }
this.skill_libary = new SkillLibrary(agent, this.embedding_model);
mkdirSync(`./bots/${name}`, { recursive: true });
writeFileSync(`./bots/${name}/last_profile.json`, JSON.stringify(this.profile, null, 4), (err) => {
@ -119,78 +100,6 @@ export class Prompter {
});
}
_selectAPI(profile) {
if (typeof profile === 'string' || profile instanceof String) {
profile = {model: profile};
}
if (!profile.api) {
if (profile.model.includes('gemini'))
profile.api = 'google';
else if (profile.model.includes('openrouter/'))
profile.api = 'openrouter'; // must do before others bc shares model names
else if (profile.model.includes('gpt') || profile.model.includes('o1')|| profile.model.includes('o3'))
profile.api = 'openai';
else if (profile.model.includes('claude'))
profile.api = 'anthropic';
else if (profile.model.includes('huggingface/'))
profile.api = "huggingface";
else if (profile.model.includes('replicate/'))
profile.api = 'replicate';
else if (profile.model.includes('mistralai/') || profile.model.includes("mistral/"))
model_profile.api = 'mistral';
else if (profile.model.includes("groq/") || profile.model.includes("groqcloud/"))
profile.api = 'groq';
else if (profile.model.includes('novita/'))
profile.api = 'novita';
else if (profile.model.includes('qwen'))
profile.api = 'qwen';
else if (profile.model.includes('grok'))
profile.api = 'xai';
else if (profile.model.includes('deepseek'))
profile.api = 'deepseek';
else if (profile.model.includes('llama3'))
profile.api = 'ollama';
else
throw new Error('Unknown model:', profile.model);
}
return profile;
}
_createModel(profile) {
let model = null;
if (profile.api === 'google')
model = new Gemini(profile.model, profile.url, profile.params);
else if (profile.api === 'openai')
model = new GPT(profile.model, profile.url, profile.params);
else if (profile.api === 'azure')
model = new AzureGPT(profile.model, profile.url, profile.api_version, profile.params);
else if (profile.api === 'anthropic')
model = new Claude(profile.model, profile.url, profile.params);
else if (profile.api === 'replicate')
model = new ReplicateAPI(profile.model.replace('replicate/', ''), profile.url, profile.params);
else if (profile.api === 'ollama')
model = new Local(profile.model, profile.url, profile.params);
else if (profile.api === 'mistral')
model = new Mistral(profile.model, profile.url, profile.params);
else if (profile.api === 'groq')
model = new GroqCloudAPI(profile.model.replace('groq/', '').replace('groqcloud/', ''), profile.url, profile.params);
else if (profile.api === 'huggingface')
model = new HuggingFace(profile.model, profile.url, profile.params);
else if (profile.api === 'novita')
model = new Novita(profile.model.replace('novita/', ''), profile.url, profile.params);
else if (profile.api === 'qwen')
model = new Qwen(profile.model, profile.url, profile.params);
else if (profile.api === 'xai')
model = new Grok(profile.model, profile.url, profile.params);
else if (profile.api === 'deepseek')
model = new DeepSeek(profile.model, profile.url, profile.params);
else if (profile.api === 'openrouter')
model = new OpenRouter(profile.model.replace('openrouter/', ''), profile.url, profile.params);
else
throw new Error('Unknown API:', profile.api);
return model;
}
getName() {
return this.profile.name;
}
@ -239,7 +148,7 @@ export class Prompter {
prompt = prompt.replaceAll('$ACTION', this.agent.actions.currentActionLabel);
}
if (prompt.includes('$COMMAND_DOCS'))
- prompt = prompt.replaceAll('$COMMAND_DOCS', getCommandDocs());
+ prompt = prompt.replaceAll('$COMMAND_DOCS', getCommandDocs(this.agent));
if (prompt.includes('$CODE_DOCS')) {
const code_task_content = messages.slice().reverse().find(msg =>
msg.role !== 'system' && msg.content.includes('!newAction(')
@ -250,9 +159,6 @@ export class Prompter {
await this.skill_libary.getRelevantSkillDocs(code_task_content, settings.relevant_docs_count)
);
}
- prompt = prompt.replaceAll('$COMMAND_DOCS', getCommandDocs());
- if (prompt.includes('$CODE_DOCS'))
- prompt = prompt.replaceAll('$CODE_DOCS', getSkillDocs());
if (prompt.includes('$EXAMPLES') && examples !== null)
prompt = prompt.replaceAll('$EXAMPLES', await examples.createExampleMessage(messages));
if (prompt.includes('$MEMORY'))
@ -305,26 +211,50 @@ export class Prompter {
async promptConvo(messages) {
this.most_recent_msg_time = Date.now();
let current_msg_time = this.most_recent_msg_time;
for (let i = 0; i < 3; i++) { // try 3 times to avoid hallucinations
await this.checkCooldown();
if (current_msg_time !== this.most_recent_msg_time) {
return '';
}
let prompt = this.profile.conversing;
prompt = await this.replaceStrings(prompt, messages, this.convo_examples);
- let generation = await this.chat_model.sendRequest(messages, prompt);
- // in conversations >2 players LLMs tend to hallucinate and role-play as other bots
- // the FROM OTHER BOT tag should never be generated by the LLM
- if (generation.includes('(FROM OTHER BOT)')) {
+ let generation;
+ try {
+ generation = await this.chat_model.sendRequest(messages, prompt);
+ if (typeof generation !== 'string') {
+ console.error('Error: Generated response is not a string', generation);
+ throw new Error('Generated response is not a string');
+ }
+ console.log("Generated response:", generation);
+ await this._saveLog(prompt, messages, generation, 'conversation');
+ } catch (error) {
+ console.error('Error during message generation or file writing:', error);
+ continue;
+ }
+ // Check for hallucination or invalid output
+ if (generation?.includes('(FROM OTHER BOT)')) {
console.warn('LLM hallucinated message as another bot. Trying again...');
continue;
}
if (current_msg_time !== this.most_recent_msg_time) {
- console.warn(this.agent.name + ' received new message while generating, discarding old response.');
+ console.warn(`${this.agent.name} received new message while generating, discarding old response.`);
return '';
}
+ if (generation?.includes('</think>')) {
+ const [_, afterThink] = generation.split('</think>')
+ generation = afterThink
+ }
return generation;
}
return '';
}
@ -337,8 +267,10 @@ export class Prompter {
await this.checkCooldown();
let prompt = this.profile.coding;
prompt = await this.replaceStrings(prompt, messages, this.coding_examples);
let resp = await this.code_model.sendRequest(messages, prompt);
this.awaiting_coding = false;
+ await this._saveLog(prompt, messages, resp, 'coding');
return resp;
}
@ -346,7 +278,13 @@ export class Prompter {
await this.checkCooldown();
let prompt = this.profile.saving_memory;
prompt = await this.replaceStrings(prompt, null, null, to_summarize);
- return await this.chat_model.sendRequest([], prompt);
+ let resp = await this.chat_model.sendRequest([], prompt);
+ await this._saveLog(prompt, to_summarize, resp, 'memSaving');
+ if (resp?.includes('</think>')) {
+ const [_, afterThink] = resp.split('</think>')
+ resp = afterThink;
+ }
+ return resp;
}
async promptShouldRespondToBot(new_message) {
@ -359,7 +297,15 @@ export class Prompter {
return res.trim().toLowerCase() === 'respond';
}
async promptVision(messages, imageBuffer) {
await this.checkCooldown();
let prompt = this.profile.image_analysis;
prompt = await this.replaceStrings(prompt, messages, null, null, null);
return await this.vision_model.sendVisionRequest(messages, prompt, imageBuffer);
}
async promptGoalSetting(messages, last_goals) {
+ // deprecated
let system_message = this.profile.goal_setting;
system_message = await this.replaceStrings(system_message, messages);
@ -384,4 +330,34 @@ export class Prompter {
goal.quantity = parseInt(goal.quantity);
return goal;
}
async _saveLog(prompt, messages, generation, tag) {
if (!settings.log_all_prompts)
return;
const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
let logEntry;
let task_id = this.agent.task.task_id;
if (task_id == null) {
logEntry = `[${timestamp}] \nPrompt:\n${prompt}\n\nConversation:\n${JSON.stringify(messages, null, 2)}\n\nResponse:\n${generation}\n\n`;
} else {
logEntry = `[${timestamp}] Task ID: ${task_id}\nPrompt:\n${prompt}\n\nConversation:\n${JSON.stringify(messages, null, 2)}\n\nResponse:\n${generation}\n\n`;
}
const logFile = `${tag}_${timestamp}.txt`;
await this._saveToFile(logFile, logEntry);
}
async _saveToFile(logFile, logEntry) {
let task_id = this.agent.task.task_id;
let logDir;
if (task_id == null) {
logDir = path.join(__dirname, `../../bots/${this.agent.name}/logs`);
} else {
logDir = path.join(__dirname, `../../bots/${this.agent.name}/logs/${task_id}`);
}
await fs.mkdir(logDir, { recursive: true });
logFile = path.join(logDir, logFile);
await fs.appendFile(logFile, String(logEntry), 'utf-8');
}
}
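The prompt logging added above is gated on settings.log_all_prompts; a sketch of enabling it, assuming settings.js exports a plain settings object like the one imported at the top of this file:

// settings.js (sketch; only the flag read by _saveLog is shown)
export default {
    log_all_prompts: true,
    // ...other existing settings...
};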

src/models/qwen.js

@ -3,6 +3,7 @@ import { getKey, hasKey } from '../utils/keys.js';
import { strictFormat } from '../utils/text.js';
export class Qwen {
+ static prefix = 'qwen';
constructor(model_name, url, params) {
this.model_name = model_name;
this.params = params;

src/models/replicate.js

@ -4,6 +4,7 @@ import { getKey } from '../utils/keys.js';
// llama, mistral
export class ReplicateAPI {
+ static prefix = 'replicate';
constructor(model_name, url, params) {
this.model_name = model_name;
this.url = url;

78
src/models/vllm.js Normal file

@ -0,0 +1,78 @@
// This code uses Dashscope and HTTP to ensure the latest support for the Qwen model.
// Qwen is also compatible with the OpenAI API format;
import OpenAIApi from 'openai';
import { getKey, hasKey } from '../utils/keys.js';
import { strictFormat } from '../utils/text.js';
export class VLLM {
static prefix = 'vllm';
constructor(model_name, url) {
this.model_name = model_name;
// Currently use self-hosted SGLang API for text generation; use OpenAI text-embedding-3-small model for simple embedding.
let vllm_config = {};
if (url)
vllm_config.baseURL = url;
else
vllm_config.baseURL = 'http://0.0.0.0:8000/v1';
vllm_config.apiKey = ""
this.vllm = new OpenAIApi(vllm_config);
}
async sendRequest(turns, systemMessage, stop_seq = '***') {
let messages = [{ 'role': 'system', 'content': systemMessage }].concat(turns);
let model = this.model_name || "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B";
if (model.includes('deepseek') || model.includes('qwen')) {
messages = strictFormat(messages);
}
const pack = {
model: model,
messages,
stop: stop_seq,
};
let res = null;
try {
console.log('Awaiting openai api response...')
// console.log('Messages:', messages);
// todo set max_tokens, temperature, top_p, etc. in pack
let completion = await this.vllm.chat.completions.create(pack);
if (completion.choices[0].finish_reason == 'length')
throw new Error('Context length exceeded');
console.log('Received.')
res = completion.choices[0].message.content;
}
catch (err) {
if ((err.message == 'Context length exceeded' || err.code == 'context_length_exceeded') && turns.length > 1) {
console.log('Context length exceeded, trying again with shorter context.');
return await this.sendRequest(turns.slice(1), systemMessage, stop_seq);
} else {
console.log(err);
res = 'My brain disconnected, try again.';
}
}
return res;
}
async saveToFile(logFile, logEntry) {
let task_id = this.agent.task.task_id;
console.log(task_id)
let logDir;
if (this.task_id === null) {
logDir = path.join(__dirname, `../../bots/${this.agent.name}/logs`);
} else {
logDir = path.join(__dirname, `../../bots/${this.agent.name}/logs/${task_id}`);
}
await fs.mkdir(logDir, { recursive: true });
logFile = path.join(logDir, logFile);
await fs.appendFile(logFile, String(logEntry), 'utf-8');
}
}

src/process/agent_process.js

@ -1,23 +1,24 @@
import { spawn } from 'child_process';
- import { mainProxy } from './main_proxy.js';
+ import { logoutAgent } from '../mindcraft/mindserver.js';
export class AgentProcess {
- start(profile, load_memory=false, init_message=null, count_id=0, task_path=null, task_id=null) {
- this.profile = profile;
+ constructor(name, port) {
+ this.name = name;
+ this.port = port;
+ }
+ start(load_memory=false, init_message=null, count_id=0) {
this.count_id = count_id;
this.running = true;
let args = ['src/process/init_agent.js', this.name];
- args.push('-p', profile);
+ args.push('-n', this.name);
args.push('-c', count_id);
if (load_memory)
args.push('-l', load_memory);
if (init_message)
args.push('-m', init_message);
- if (task_path)
- args.push('-t', task_path);
- if (task_id)
- args.push('-i', task_id);
+ args.push('-p', this.port);
const agentProcess = spawn('node', args, {
stdio: 'inherit',
@ -28,7 +29,7 @@ export class AgentProcess {
agentProcess.on('exit', (code, signal) => {
console.log(`Agent process exited with code ${code} and signal ${signal}`);
this.running = false;
- mainProxy.logoutAgent(this.name);
+ logoutAgent(this.name);
if (code > 1) {
console.log(`Ending task`);
@ -38,11 +39,11 @@ export class AgentProcess {
if (code !== 0 && signal !== 'SIGINT') {
// agent must run for at least 10 seconds before restarting
if (Date.now() - last_restart < 10000) {
- console.error(`Agent process ${profile} exited too quickly and will not be restarted.`);
+ console.error(`Agent process exited too quickly and will not be restarted.`);
return;
}
console.log('Restarting agent...');
- this.start(profile, true, 'Agent process restarted.', count_id, task_path, task_id);
+ this.start(true, 'Agent process restarted.', count_id, this.port);
last_restart = Date.now();
}
});
@ -61,7 +62,7 @@ export class AgentProcess {
continue() {
if (!this.running) {
- this.start(this.profile, true, 'Agent process restarted.', this.count_id);
+ this.start(true, 'Agent process restarted.', this.count_id);
}
}
}
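A sketch of how the reworked process wrapper is driven after this change: the agent name and mindserver port move into the constructor, and start() keeps only the runtime options (the name and port values here are hypothetical):

import { AgentProcess } from './src/process/agent_process.js';

const proc = new AgentProcess('andy', 8080); // hypothetical agent name and mindserver port
proc.start(false, null, 0);                  // load_memory, init_message, count_id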

src/process/init_agent.js

@ -1,27 +1,18 @@
import { Agent } from '../agent/agent.js';
+ import { serverProxy } from '../agent/mindserver_proxy.js';
import yargs from 'yargs';
// Add global unhandled rejection handler
process.on('unhandledRejection', (reason, promise) => {
console.error('Unhandled Rejection at:', {
promise: promise,
reason: reason,
stack: reason?.stack || 'No stack trace'
});
process.exit(1);
});
const args = process.argv.slice(2);
if (args.length < 1) {
- console.log('Usage: node init_agent.js <agent_name> [profile] [load_memory] [init_message]');
+ console.log('Usage: node init_agent.js -n <agent_name> -p <port> -l <load_memory> -m <init_message> -c <count_id>');
process.exit(1);
}
const argv = yargs(args)
- .option('profile', {
- alias: 'p',
+ .option('name', {
+ alias: 'n',
type: 'string',
- description: 'profile filepath to use for agent'
+ description: 'name of agent'
})
.option('load_memory', {
alias: 'l',
@ -33,29 +24,27 @@ const argv = yargs(args)
type: 'string',
description: 'automatically prompt the agent on startup'
})
.option('task_path', {
alias: 't',
type: 'string',
description: 'task filepath to use for agent'
})
.option('task_id', {
alias: 'i',
type: 'string',
description: 'task ID to execute'
})
.option('count_id', {
alias: 'c',
type: 'number',
default: 0,
description: 'identifying count for multi-agent scenarios',
- }).argv;
+ })
+ .option('port', {
+ alias: 'p',
+ type: 'number',
+ description: 'port of mindserver'
+ })
+ .argv;
// Wrap agent start in async IIFE with proper error handling
(async () => {
try {
- console.log('Starting agent with profile:', argv.profile);
+ console.log('Connecting to MindServer');
+ await serverProxy.connect(argv.name, argv.port);
+ console.log('Starting agent');
const agent = new Agent();
- await agent.start(argv.profile, argv.load_memory, argv.init_message, argv.count_id, argv.task_path, argv.task_id);
+ serverProxy.setAgent(agent);
+ await agent.start(argv.load_memory, argv.init_message, argv.count_id);
} catch (error) {
console.error('Failed to start agent process:');
console.error(error.message);

src/process/main_proxy.js

@ -1,64 +0,0 @@
import { io } from 'socket.io-client';
import settings from '../../settings.js';
// Singleton mindserver proxy for the main process
class MainProxy {
constructor() {
if (MainProxy.instance) {
return MainProxy.instance;
}
this.socket = null;
this.connected = false;
this.agent_processes = {};
MainProxy.instance = this;
}
connect() {
if (this.connected) return;
this.socket = io(`http://${settings.mindserver_host}:${settings.mindserver_port}`);
this.connected = true;
this.socket.on('stop-agent', (agentName) => {
if (this.agent_processes[agentName]) {
this.agent_processes[agentName].stop();
}
});
this.socket.on('start-agent', (agentName) => {
if (this.agent_processes[agentName]) {
this.agent_processes[agentName].continue();
}
});
this.socket.on('register-agents-success', () => {
console.log('Agents registered');
});
this.socket.on('shutdown', () => {
console.log('Shutting down');
for (let agentName in this.agent_processes) {
this.agent_processes[agentName].stop();
}
setTimeout(() => {
process.exit(0);
}, 2000);
});
}
addAgent(agent) {
this.agent_processes.push(agent);
}
logoutAgent(agentName) {
this.socket.emit('logout-agent', agentName);
}
registerAgent(name, process) {
this.socket.emit('register-agents', [name]);
this.agent_processes[name] = process;
}
}
export const mainProxy = new MainProxy();

src/server/mindserver.js

@ -1,163 +0,0 @@
import { Server } from 'socket.io';
import express from 'express';
import http from 'http';
import path from 'path';
import { fileURLToPath } from 'url';
// Module-level variables
let io;
let server;
const registeredAgents = new Set();
const inGameAgents = {};
const agentManagers = {}; // socket for main process that registers/controls agents
// Initialize the server
export function createMindServer(port = 8080) {
const app = express();
server = http.createServer(app);
io = new Server(server);
// Serve static files
const __dirname = path.dirname(fileURLToPath(import.meta.url));
app.use(express.static(path.join(__dirname, 'public')));
// Socket.io connection handling
io.on('connection', (socket) => {
let curAgentName = null;
console.log('Client connected');
agentsUpdate(socket);
socket.on('register-agents', (agentNames) => {
console.log(`Registering agents: ${agentNames}`);
agentNames.forEach(name => registeredAgents.add(name));
for (let name of agentNames) {
agentManagers[name] = socket;
}
socket.emit('register-agents-success');
agentsUpdate();
});
socket.on('login-agent', (agentName) => {
if (curAgentName && curAgentName !== agentName) {
console.warn(`Agent ${agentName} already logged in as ${curAgentName}`);
return;
}
if (registeredAgents.has(agentName)) {
curAgentName = agentName;
inGameAgents[agentName] = socket;
agentsUpdate();
} else {
console.warn(`Agent ${agentName} not registered`);
}
});
socket.on('logout-agent', (agentName) => {
if (inGameAgents[agentName]) {
delete inGameAgents[agentName];
agentsUpdate();
}
});
socket.on('disconnect', () => {
console.log('Client disconnected');
if (inGameAgents[curAgentName]) {
delete inGameAgents[curAgentName];
agentsUpdate();
}
});
socket.on('chat-message', (agentName, json) => {
if (!inGameAgents[agentName]) {
console.warn(`Agent ${agentName} tried to send a message but is not logged in`);
return;
}
console.log(`${curAgentName} sending message to ${agentName}: ${json.message}`);
inGameAgents[agentName].emit('chat-message', curAgentName, json);
});
socket.on('restart-agent', (agentName) => {
console.log(`Restarting agent: ${agentName}`);
inGameAgents[agentName].emit('restart-agent');
});
socket.on('stop-agent', (agentName) => {
let manager = agentManagers[agentName];
if (manager) {
manager.emit('stop-agent', agentName);
}
else {
console.warn(`Stopping unregisterd agent ${agentName}`);
}
});
socket.on('start-agent', (agentName) => {
let manager = agentManagers[agentName];
if (manager) {
manager.emit('start-agent', agentName);
}
else {
console.warn(`Starting unregisterd agent ${agentName}`);
}
});
socket.on('stop-all-agents', () => {
console.log('Killing all agents');
stopAllAgents();
});
socket.on('shutdown', () => {
console.log('Shutting down');
for (let manager of Object.values(agentManagers)) {
manager.emit('shutdown');
}
setTimeout(() => {
process.exit(0);
}, 2000);
});
socket.on('send-message', (agentName, message) => {
if (!inGameAgents[agentName]) {
console.warn(`Agent ${agentName} not logged in, cannot send message via MindServer.`);
return
}
try {
console.log(`Sending message to agent ${agentName}: ${message}`);
inGameAgents[agentName].emit('send-message', agentName, message)
} catch (error) {
console.error('Error: ', error);
}
});
});
server.listen(port, 'localhost', () => {
console.log(`MindServer running on port ${port}`);
});
return server;
}
function agentsUpdate(socket) {
if (!socket) {
socket = io;
}
let agents = [];
registeredAgents.forEach(name => {
agents.push({name, in_game: !!inGameAgents[name]});
});
socket.emit('agents-update', agents);
}
function stopAllAgents() {
for (const agentName in inGameAgents) {
let manager = agentManagers[agentName];
if (manager) {
manager.emit('stop-agent', agentName);
}
}
}
// Optional: export these if you need access to them from other files
export const getIO = () => io;
export const getServer = () => server;
export const getConnectedAgents = () => inGameAgents; // sockets of agents currently in game
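The handlers above define the full MindServer protocol: a manager process registers the agents it owns, in-game agents log in as they join the world, and the web dashboard drives start, stop, restart, and messaging. The following is a minimal, hypothetical manager-side client written against those event names; it is not part of this commit and assumes the `socket.io-client` package and the default port 8080 used by `createMindServer()`.

// Illustrative sketch only, not part of this commit. Event names mirror the
// handlers registered in createMindServer(); agent names are placeholders.
import { io } from 'socket.io-client';

const socket = io('http://localhost:8080');

socket.on('connect', () => {
    // Register the agents this process is responsible for.
    socket.emit('register-agents', ['agent_one', 'agent_two']);
});

socket.on('register-agents-success', () => {
    console.log('Agents registered with MindServer');
});

// The server pushes the agent list to every client whenever it changes.
socket.on('agents-update', (agents) => {
    for (const agent of agents)
        console.log(`${agent.name}: ${agent.in_game ? 'in game' : 'offline'}`);
});

// Dashboard actions are forwarded to the manager that registered the agent.
socket.on('start-agent', (agentName) => console.log(`start requested for ${agentName}`));
socket.on('stop-agent', (agentName) => console.log(`stop requested for ${agentName}`));
socket.on('shutdown', () => process.exit(0));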


@ -1,120 +0,0 @@
<!DOCTYPE html>
<html>
<head>
<title>Mindcraft</title>
<script src="/socket.io/socket.io.js"></script>
<style>
body {
font-family: Arial, sans-serif;
margin: 20px;
background: #1a1a1a;
color: #e0e0e0;
}
#agents {
background: #2d2d2d;
padding: 20px;
border-radius: 8px;
box-shadow: 0 2px 4px rgba(0,0,0,0.2);
}
h1 {
color: #ffffff;
}
.agent {
margin: 10px 0;
padding: 10px;
background: #363636;
border-radius: 4px;
display: flex;
justify-content: space-between;
align-items: center;
}
.restart-btn, .start-btn, .stop-btn {
color: white;
border: none;
padding: 5px 10px;
border-radius: 4px;
cursor: pointer;
margin-left: 5px;
}
.restart-btn {
background: #4CAF50;
}
.start-btn {
background: #2196F3;
}
.stop-btn {
background: #f44336;
}
.restart-btn:hover { background: #45a049; }
.start-btn:hover { background: #1976D2; }
.stop-btn:hover { background: #d32f2f; }
.status-icon {
font-size: 12px;
margin-right: 8px;
}
.status-icon.online {
color: #4CAF50;
}
.status-icon.offline {
color: #f44336;
}
</style>
</head>
<body>
<h1>Mindcraft</h1>
<div id="agents"></div>
<script>
const socket = io();
const agentsDiv = document.getElementById('agents');
socket.on('agents-update', (agents) => {
agentsDiv.innerHTML = agents.length ?
agents.map(agent => `
<div class="agent">
<span>
<span class="status-icon ${agent.in_game ? 'online' : 'offline'}"></span>
${agent.name}
</span>
<div>
${agent.in_game ? `
<button class="stop-btn" onclick="stopAgent('${agent.name}')">Stop</button>
<button class="restart-btn" onclick="restartAgent('${agent.name}')">Restart</button>
<input type="text" id="messageInput" placeholder="Enter a message or command..."></input><button class="start-btn" onclick="sendMessage('${agent.name}', document.getElementById('messageInput').value)">Send</button>
` : `
<button class="start-btn" onclick="startAgent('${agent.name}')">Start</button>
`}
</div>
</div>
`).join('') +
`<button class="stop-btn" onclick="killAllAgents()">Stop All</button>
<button class="stop-btn" onclick="shutdown()">Shutdown</button>` :
'<div class="agent">No agents connected</div>';
});
function restartAgent(agentName) {
socket.emit('restart-agent', agentName);
}
function startAgent(agentName) {
socket.emit('start-agent', agentName);
}
function stopAgent(agentName) {
socket.emit('stop-agent', agentName);
}
function killAllAgents() {
socket.emit('stop-all-agents');
}
function shutdown() {
socket.emit('shutdown');
}
function sendMessage(agentName, message) {
socket.emit('send-message', agentName, message)
}
</script>
</body>
</html>


@ -16,7 +16,7 @@ export function getKey(name) {
     if (!key) {
         throw new Error(`API key "${name}" not found in keys.json or environment variables!`);
     }
-    return keys[name];
+    return key;
 }

 export function hasKey(name) {


@ -1,5 +1,5 @@
 import minecraftData from 'minecraft-data';
-import settings from '../../settings.js';
+import settings from '../agent/settings.js';
 import { createBot } from 'mineflayer';
 import prismarine_items from 'prismarine-item';
 import { pathfinder } from 'mineflayer-pathfinder';
@ -8,10 +8,9 @@ import { plugin as collectblock } from 'mineflayer-collectblock';
 import { plugin as autoEat } from 'mineflayer-auto-eat';
 import plugin from 'mineflayer-armor-manager';
 const armorManager = plugin;
-const mc_version = settings.minecraft_version;
-const mcdata = minecraftData(mc_version);
-const Item = prismarine_items(mc_version);
+let mc_version = null;
+let mcdata = null;
+let Item = null;

 /**
  * @typedef {string} ItemName
@ -54,6 +53,9 @@ export const WOOL_COLORS = [
 export function initBot(username) {
+    mc_version = settings.minecraft_version;
+    mcdata = minecraftData(mc_version);
+    Item = prismarine_items(mc_version);
     let bot = createBot({
         username: username,
@ -86,6 +88,16 @@ export function isHostile(mob) {
     return (mob.type === 'mob' || mob.type === 'hostile') && mob.name !== 'iron_golem' && mob.name !== 'snow_golem';
 }

+// blocks that don't work with collectBlock, need to be manually collected
+export function mustCollectManually(blockName) {
+    // all crops (that aren't normal blocks), torches, buttons, levers, redstone,
+    const full_names = ['wheat', 'carrots', 'potatoes', 'beetroots', 'nether_wart', 'cocoa', 'sugar_cane', 'kelp', 'short_grass', 'fern', 'tall_grass', 'bamboo',
+        'poppy', 'dandelion', 'blue_orchid', 'allium', 'azure_bluet', 'oxeye_daisy', 'cornflower', 'lilac', 'wither_rose', 'lily_of_the_valley', 'wither_rose',
+        'lever', 'redstone_wire', 'lantern']
+    const partial_names = ['sapling', 'torch', 'button', 'carpet', 'pressure_plate', 'mushroom', 'tulip', 'bush', 'vines', 'fern']
+    return full_names.includes(blockName.toLowerCase()) || partial_names.some(partial => blockName.toLowerCase().includes(partial));
+}
+
 export function getItemId(itemName) {
     let item = mcdata.itemsByName[itemName];
     if (item) {
@ -195,6 +207,13 @@ export function getItemCraftingRecipes(itemName) {
             {craftedCount : r.result.count}
         ]);
     }
+    // sort recipes by if their ingredients include common items
+    const commonItems = ['oak_planks', 'oak_log', 'coal', 'cobblestone'];
+    recipes.sort((a, b) => {
+        let commonCountA = Object.keys(a[0]).filter(key => commonItems.includes(key)).reduce((acc, key) => acc + a[0][key], 0);
+        let commonCountB = Object.keys(b[0]).filter(key => commonItems.includes(key)).reduce((acc, key) => acc + b[0][key], 0);
+        return commonCountB - commonCountA;
+    });
     return recipes;
 }
@ -338,6 +357,7 @@ export function initializeLoopingItems() {
     loopingItems = new Set(['coal',
         'wheat',
+        'bone_meal',
         'diamond',
         'emerald',
         'raw_iron',
@ -393,7 +413,7 @@ export function getDetailedCraftingPlan(targetItem, count = 1, current_inventory
     const inventory = { ...current_inventory };
     const leftovers = {};
     const plan = craftItem(targetItem, count, inventory, leftovers);
-    return formatPlan(plan);
+    return formatPlan(targetItem, plan);
 }

 function isBaseItem(item) {
@ -459,7 +479,7 @@ function craftItem(item, count, inventory, leftovers, crafted = { required: {},
     return crafted;
 }

-function formatPlan({ required, steps, leftovers }) {
+function formatPlan(targetItem, { required, steps, leftovers }) {
     const lines = [];
     if (Object.keys(required).length > 0) {
@ -475,6 +495,10 @@ function formatPlan({ required, steps, leftovers }) {
     lines.push('');
     lines.push(...steps);
+    if (Object.keys(required).some(item => item.includes('oak')) && !targetItem.includes('oak')) {
+        lines.push('Note: Any varient of wood can be used for this recipe.');
+    }
     if (Object.keys(leftovers).length > 0) {
         lines.push('\nYou will have leftover:');
         Object.entries(leftovers).forEach(([item, count]) =>


@ -46,7 +46,9 @@ export function strictFormat(turns) {
     let messages = [];
     let filler = {role: 'user', content: '_'};
     for (let msg of turns) {
-        msg.content = msg.content.trim();
+        if (typeof msg.content === 'string') {
+            msg.content = msg.content.trim();
+        }
         if (msg.role === 'system') {
             msg.role = 'user';
             msg.content = 'SYSTEM: ' + msg.content;


@ -1,10 +1,11 @@
 import translate from 'google-translate-api-x';
-import settings from '../../settings.js';
+import settings from '../agent/settings.js';

-const preferred_lang = String(settings.language).toLowerCase();

 export async function handleTranslation(message) {
-    if (preferred_lang === 'en' || preferred_lang === 'english')
+    let preferred_lang = String(settings.language).toLowerCase();
+    if (!preferred_lang || preferred_lang === 'en' || preferred_lang === 'english')
         return message;
     try {
         const translation = await translate(message, { to: preferred_lang });
@ -16,7 +17,8 @@ export async function handleTranslation(message) {
 }

 export async function handleEnglishTranslation(message) {
-    if (preferred_lang === 'en' || preferred_lang === 'english')
+    let preferred_lang = String(settings.language).toLowerCase();
+    if (!preferred_lang || preferred_lang === 'en' || preferred_lang === 'english')
         return message;
     try {
         const translation = await translate(message, { to: 'english' });

tasks/analyse_results.py Normal file

@ -0,0 +1,291 @@
import boto3
import os
import json
import re
from botocore.exceptions import ClientError
import argparse
from tqdm import tqdm
import glob
# Calculate project root directory
project_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Define output directory for analysis results
analysis_output_dir = os.path.join(project_root, "experiments", "analysis_results")
# Ensure the output directory exists
os.makedirs(analysis_output_dir, exist_ok=True)
def download_s3_folders(bucket_name, s3_prefix, local_base_dir):
"""
Downloads groups of folders from S3 based on the next level of prefixes.
Args:
bucket_name (str): Name of the S3 bucket.
s3_prefix (str): Prefix where the folders are located (e.g., 'my-experiments/').
local_base_dir (str): Local directory to download the folders to.
Returns:
list: List of downloaded local folder paths.
"""
s3_client = boto3.client('s3')
downloaded_folders = []
# Ensure local_base_dir is relative to project root if not absolute
if not os.path.isabs(local_base_dir):
local_base_dir = os.path.join(project_root, local_base_dir)
try:
# List objects with the prefix, delimited by '/' to find sub-prefixes (folders)
response = s3_client.list_objects_v2(Bucket=bucket_name, Prefix=s3_prefix, Delimiter='/')
if 'CommonPrefixes' not in response:
print(f"No folders found under s3://{bucket_name}/{s3_prefix}")
return downloaded_folders
s3_folder_prefixes = [prefix['Prefix'] for prefix in response['CommonPrefixes']]
subfolder = s3_prefix.split('/')[-2]
for s3_folder_prefix in tqdm(s3_folder_prefixes):
folder_name = s3_folder_prefix.split('/')[-2] # Extract folder name
local_folder_path = os.path.join(local_base_dir, subfolder, folder_name)
os.makedirs(local_folder_path, exist_ok=True)
downloaded_folders.append(local_folder_path)
# Download files within the folder
objects_in_folder = s3_client.list_objects_v2(Bucket=bucket_name, Prefix=s3_folder_prefix)
if 'Contents' in objects_in_folder:
for obj in objects_in_folder['Contents']:
s3_key = obj['Key']
local_file_path = os.path.join(local_folder_path, os.path.basename(s3_key))
try:
s3_client.download_file(bucket_name, s3_key, local_file_path)
except Exception as e:
print(f"Error downloading {s3_key}: {e}")
else:
print(f"No files found in {s3_folder_prefix}")
except ClientError as e:
print(f"Error accessing S3: {e}")
return []
return downloaded_folders
def analyze_json_file(file_path):
"""
Analyzes a single JSON file to extract the task outcome.
Args:
file_path (str): Path to the JSON file.
Returns:
        bool or None: True if the task ended successfully, False if not, or None if the file could not be read or parsed.
"""
try:
with open(file_path, 'r') as f:
data = json.load(f)
if 'turns' in data and isinstance(data['turns'], list):
for turn in reversed(data['turns']): # Check turns from the end
if turn.get('role') == 'system' and isinstance(turn.get('content'), str):
if "Task successful ended with code : 2" in turn['content'] or "Task ended with score : 1" in turn["content"] or "Task ended in score: 1" in turn["content"]:
return True
return False
except FileNotFoundError:
print(f"Error: File not found: {file_path}")
return None
except json.JSONDecodeError:
print(f"Error: Invalid JSON format in: {file_path}")
return None
except Exception as e:
print(f"An unexpected error occurred while processing {file_path}: {e}")
return None
def extract_result(folder_path):
folder_name = os.path.basename(folder_path)
json_files = glob.glob(os.path.join(folder_path, "*.json"))
assert len(json_files) == 2, f"Expected 2 json files in {folder_name}, found {len(json_files)}"
if not json_files:
print(f"No JSON files found in {folder_name}")
return None
else:
outcome = False
for json_file in json_files:
outcome = analyze_json_file(json_file)
if outcome:
return True
return False
def is_base(folder_path):
return "full_plan" in folder_path and "depth_0" in folder_path and "missing" not in folder_path
def base_without_plan(folder_path):
return "no_plan" in folder_path and "depth_0" in folder_path and "missing" in folder_path
def aggregate_results(local_folders):
"""
Aggregates the analysis results for each folder.
Args:
local_folders (list): List of local folder paths containing the JSON files.
Returns:
dict: A dictionary where keys are folder names and values are the aggregated outcomes.
"""
aggregated_data = {}
total = 0
successful = 0
base_successful = 0
base_total = 0
base_no_plan_successful = 0
base_no_plan_total = 0
missing_successful = 0
missing_total = 0
full_plan_successful = 0
full_plan_total = 0
partial_plan_successful = 0
partial_plan_total = 0
no_plan_successful = 0
no_plan_total = 0
high_depth_successful = 0
high_depth_total = 0
for folder_path in tqdm(local_folders):
folder_name = os.path.basename(folder_path)
try:
total += 1
            result = extract_result(folder_path)
            success = int(bool(result))  # reuse the single call; None (unreadable logs) counts as a failure
successful += success
if "missing" in folder_path and not is_base(folder_path):
missing_successful += success
missing_total += 1
if is_base(folder_path):
base_successful += success
base_total += 1
if base_without_plan(folder_path):
base_no_plan_successful += success
base_no_plan_total += 1
if "full_plan" in folder_path and not is_base(folder_path):
full_plan_successful += success
full_plan_total += 1
if "partial_plan" in folder_path and not is_base(folder_path):
partial_plan_successful += success
partial_plan_total += 1
if "no_plan" in folder_path and not is_base(folder_path):
no_plan_successful += success
                no_plan_total += 1
            if ("depth_1" in folder_path or "depth_2" in folder_path) and not is_base(folder_path):
high_depth_successful += success
high_depth_total += 1
except Exception as e:
print(f"Error processing {folder_name}: {e}")
return {
"total": total,
"successful": successful,
"success_rate": successful / total if total > 0 else 0,
"base_total": base_total,
"base_successful": base_successful,
"base_success_rate": base_successful / base_total if base_total > 0 else 0,
"base_no_plan_total": base_no_plan_total,
"base_no_plan_successful": base_no_plan_successful,
"base_no_plan_success_rate": base_no_plan_successful / base_no_plan_total if base_no_plan_total > 0 else 0,
"missing_total": missing_total,
"missing_successful": missing_successful,
"missing_success_rate": missing_successful / missing_total if missing_total > 0 else 0,
"full_plan_total": full_plan_total,
"full_plan_successful": full_plan_successful,
"full_plan_success_rate": full_plan_successful / full_plan_total if full_plan_total > 0 else 0,
"partial_plan_total": partial_plan_total,
"partial_plan_successful": partial_plan_successful,
"partial_plan_success_rate": partial_plan_successful / partial_plan_total if partial_plan_total > 0 else 0,
"no_plan_total": no_plan_total,
"no_plan_successful": no_plan_successful,
"no_plan_success_rate": no_plan_successful / no_plan_total if no_plan_total > 0 else 0,
"high_depth_total": high_depth_total,
"high_depth_successful": high_depth_successful,
"high_depth_success_rate": high_depth_successful / high_depth_total if high_depth_total > 0 else 0
}
def get_immediate_subdirectories(a_dir):
# Ensure a_dir is relative to project root if not absolute
if not os.path.isabs(a_dir):
a_dir = os.path.join(project_root, a_dir)
return [os.path.join(a_dir, name) for name in os.listdir(a_dir)
if os.path.isdir(os.path.join(a_dir, name))]
# --- Main Execution ---
if __name__ == "__main__":
# 1. Download folders from AWS or use local directory
parser = argparse.ArgumentParser()
parser.add_argument('--s3_download', action="store_true", help='Download folders from S3')
parser.add_argument('--aws_bucket_name', default="mindcraft" , type=str, help='AWS bucket name')
parser.add_argument('--s3_folder_prefix', default="", type=str, help='S3 folder prefix')
# Change default input dir to 'experiments' relative to project root
parser.add_argument('--local_download_dir', default="experiments", type=str, help='Local directory containing results (relative to project root)')
args = parser.parse_args()
AWS_BUCKET_NAME = args.aws_bucket_name
S3_FOLDER_PREFIX = args.s3_folder_prefix
# Resolve local_download_dir relative to project root
local_download_dir_abs = args.local_download_dir
if not os.path.isabs(local_download_dir_abs):
local_download_dir_abs = os.path.join(project_root, local_download_dir_abs)
# Construct LOCAL_DOWNLOAD_DIR based on the absolute path
if args.local_download_dir != "": # Original check seems redundant now, but kept logic
LOCAL_DOWNLOAD_DIR = local_download_dir_abs # Already includes prefix if s3_download
if args.s3_download and S3_FOLDER_PREFIX: # Append S3 prefix if downloading
LOCAL_DOWNLOAD_DIR = os.path.join(local_download_dir_abs, S3_FOLDER_PREFIX.replace('/', '_').rstrip('_'))
else:
LOCAL_DOWNLOAD_DIR = local_download_dir_abs # Should not happen with default
if (args.s3_download):
print(f"Downloading folders from s3://{AWS_BUCKET_NAME}/{S3_FOLDER_PREFIX} to {LOCAL_DOWNLOAD_DIR}...")
# Pass the absolute base path for downloads
folders = download_s3_folders(AWS_BUCKET_NAME, S3_FOLDER_PREFIX, local_download_dir_abs)
else:
folders = get_immediate_subdirectories(local_download_dir_abs)
print(folders)
if not folders:
print("No folders found or downloaded. Exiting.")
exit()
results = aggregate_results(folders)
print(results)
# Hardcode output path within experiments/analysis_results/
results_file_path = os.path.join(analysis_output_dir, "analyse_results_output.txt")
with open(results_file_path, "w") as file:
file.write("Results\n")
for key, value in results.items():
file.write(f"{key}: {value}\n")
print(f"Results saved to {results_file_path}")
# if not downloaded_local_folders:
# print("No folders downloaded. Exiting.")
# exit()
# print("\n--- Analyzing downloaded files ---")
# # 2. & 3. Analyze files and aggregate results
# results = aggregate_results(downloaded_local_folders)
# print("\n--- Aggregated Results ---")
# for folder, outcome in results.items():
# print(f"Folder: {folder} -> {outcome}")
# Optional: Clean up downloaded files
# import shutil
# shutil.rmtree(LOCAL_DOWNLOAD_DIR)
# print(f"\nCleaned up {LOCAL_DOWNLOAD_DIR}")


@ -0,0 +1,228 @@
import os
import json
from collections import defaultdict
from prettytable import PrettyTable
import re
import argparse
import pandas as pd
import glob
# Calculate project root directory
project_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Define output directory for analysis results
analysis_output_dir = os.path.join(project_root, "experiments", "analysis_results")
# Ensure the output directory exists
os.makedirs(analysis_output_dir, exist_ok=True)
def extract_success_scores(folders, model_names):
assert len(folders) == len(model_names), "Folders and model names lists must have the same length."
all_task_scores = defaultdict(dict) # Stores task-wise scores per model
zero_score_tasks = defaultdict(list) # Stores tasks with 0 score per model
material_groups = defaultdict(lambda: defaultdict(list))
room_groups = defaultdict(lambda: defaultdict(list))
material_room_groups = defaultdict(lambda: defaultdict(list))
overall_scores = defaultdict(list) # New dict to store all scores for each model
skipped_tasks = defaultdict(list) # Stores tasks with no score message per model
pattern = re.compile(r"materials_(\d+)_rooms_(\d+)")
for root_dir, model_name in zip(folders, model_names):
for task_folder in os.listdir(root_dir):
task_path = os.path.join(root_dir, task_folder)
if os.path.isdir(task_path):
logs_found = False
score_found = False
for file_name in os.listdir(task_path):
if file_name.endswith(".json"):
logs_found = True
file_path = os.path.join(task_path, file_name)
try:
with open(file_path, 'r') as file:
data = json.load(file)
for turn in reversed(data.get("turns", [])):
if turn["role"] == "system" and "Task ended with score" in turn["content"]:
score = float(turn["content"].split(":")[-1].strip())
all_task_scores[task_folder][model_name] = score
overall_scores[model_name].append(score) # Add to overall scores
score_found = True
if score == 0:
zero_score_tasks[model_name].append(task_folder)
break
if score_found:
break
except Exception as e:
print(f"Error reading {file_path}: {e}")
if logs_found and not score_found:
# Score not found but logs exist - skip this task
skipped_tasks[model_name].append(task_folder)
print(f"Error: No score message found for task '{task_folder}' with model '{model_name}'. Skipping this task.")
if not logs_found:
print(f"No log files found in {task_folder}")
# Calculate model completion rates (only consider tasks with scores)
model_completion_rates = {}
for model_name in model_names:
valid_tasks = [task for task in all_task_scores.keys() if model_name in all_task_scores[task]]
total_tasks = len(valid_tasks)
completed_tasks = len([task for task in valid_tasks if all_task_scores[task][model_name] > 0])
model_completion_rates[model_name] = (completed_tasks / total_tasks) if total_tasks > 0 else 0
# Process task scores into groups (ignore 0 scores)
for task, model_scores in all_task_scores.items():
match = pattern.search(task)
if match:
material = int(match.group(1))
room = int(match.group(2))
for model, score in model_scores.items():
if score > 0: # Ignore 0 scores
material_groups[material][model].append(score)
room_groups[room][model].append(score)
material_room_groups[(material, room)][model].append(score)
def calculate_average(group):
return {key: {model: sum(scores) / len(scores) for model, scores in models.items() if scores}
for key, models in group.items() if models}
avg_material_scores = calculate_average(material_groups)
avg_room_scores = calculate_average(room_groups)
avg_material_room_scores = calculate_average(material_room_groups)
def display_table(title, data, tuple_keys=False):
table = PrettyTable(["Category"] + model_names)
for key, model_scores in sorted(data.items()):
key_display = key if not tuple_keys else f"({key[0]}, {key[1]})"
row = [key_display] + [round(model_scores.get(model, 0), 2) for model in model_names]
table.add_row(row)
print(f"\n{title}")
print(table)
def display_task_scores():
table = PrettyTable(["Task"] + model_names)
for task in sorted(all_task_scores.keys()):
row = [task]
for model in model_names:
score = all_task_scores[task].get(model)
if score is None:
row.append("-")
else:
row.append(round(score, 2))
table.add_row(row)
print("\nTask-wise Success Scores")
print(table)
def display_zero_and_skipped_tasks():
for model in model_names:
if zero_score_tasks[model]:
table = PrettyTable([f"{model} - Tasks with 0 Score"])
for task in zero_score_tasks[model]:
table.add_row([task])
print(f"\n{model} - Tasks with 0 Success Score")
print(table)
if skipped_tasks[model]:
table = PrettyTable([f"{model} - Skipped Tasks (No Score Message)"])
for task in skipped_tasks[model]:
table.add_row([task])
print(f"\n{model} - Skipped Tasks (No Score Message)")
print(table)
def display_overall_averages():
table = PrettyTable(["Metric"] + model_names)
# Overall average score (including zeros)
row_with_zeros = ["Average Score (All Tasks)"]
for model in model_names:
valid_scores = overall_scores[model]
avg = sum(valid_scores) / len(valid_scores) if valid_scores else 0
row_with_zeros.append(round(avg, 2))
table.add_row(row_with_zeros)
# Overall average score (excluding zeros)
row_without_zeros = ["Average Score (Completed Tasks)"]
for model in model_names:
completed_scores = [s for s in overall_scores[model] if s > 0]
avg = sum(completed_scores) / len(completed_scores) if completed_scores else 0
row_without_zeros.append(round(avg, 2))
table.add_row(row_without_zeros)
# Task completion rate
completion_row = ["Task Completion Rate (%)"]
for model in model_names:
completion_row.append(round(model_completion_rates[model] * 100, 2))
table.add_row(completion_row)
# Total number of tasks
task_count_row = ["Total Tasks"]
for model in model_names:
valid_tasks = [task for task in all_task_scores.keys() if model in all_task_scores[task]]
task_count_row.append(len(valid_tasks))
table.add_row(task_count_row)
# Number of skipped tasks
skipped_count_row = ["Skipped Tasks"]
for model in model_names:
skipped_count_row.append(len(skipped_tasks[model]))
table.add_row(skipped_count_row)
print("\nOverall Performance Metrics")
print(table)
display_overall_averages() # Display overall averages first
display_task_scores()
display_zero_and_skipped_tasks()
display_table("Average Success Score by Material", avg_material_scores)
display_table("Average Success Score by Room", avg_room_scores)
display_table("Average Success Score by (Material, Room) Tuples", avg_material_room_scores, tuple_keys=True)
def main():
parser = argparse.ArgumentParser(description='Analyze construction task logs.')
# Change default input dir to 'experiments' relative to project root
parser.add_argument('--log_dir', type=str, default='experiments',
help='Directory containing the log files (relative to project root)')
# Removed --output_file argument
# parser.add_argument('--output_file', type=str, default='construction_analysis_results.csv',
# help='Output CSV file name (relative to project root)')
args = parser.parse_args()
# Resolve log_dir path relative to project root
log_dir_abs = args.log_dir
if not os.path.isabs(log_dir_abs):
log_dir_abs = os.path.join(project_root, log_dir_abs)
# Hardcode output file path
output_file_abs = os.path.join(analysis_output_dir, "construction_analysis.csv")
all_results = []
# Use absolute log directory path
log_pattern = os.path.join(log_dir_abs, '*.json')
print(f"Searching for logs in: {log_pattern}")
log_files_found = glob.glob(log_pattern)
print(f"Found {len(log_files_found)} log files.")
for log_file in log_files_found:
results = analyze_construction_log(log_file)
if results:
all_results.append(results)
if all_results:
df = pd.DataFrame(all_results)
# Ensure the output directory exists (already done at top)
# os.makedirs(os.path.dirname(output_file_abs), exist_ok=True)
# Save to hardcoded absolute output file path
df.to_csv(output_file_abs, index=False)
print(f"Analysis complete. Results saved to {output_file_abs}")
else:
print("No results generated from log files.")
if __name__ == "__main__":
main()


@ -0,0 +1,420 @@
import os
import json
import re
from collections import defaultdict
from prettytable import PrettyTable
import pandas as pd
import glob
import argparse
# Calculate project root directory
project_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Define output directory for analysis results
analysis_output_dir = os.path.join(project_root, "experiments", "analysis_results")
# Ensure the output directory exists
os.makedirs(analysis_output_dir, exist_ok=True)
def extract_cooking_items(exp_dir):
"""Extract cooking items from experiment directory name."""
# Remove prefix and blocked access part
clean_name = re.sub(r'^multiagent_cooking_', '', exp_dir)
clean_name = re.sub(r'_blocked_access_[0-9_]+$', '', clean_name)
# Extract individual items
items = []
for item_match in re.finditer(r'([0-9]+)_([a-zA-Z_]+)', clean_name):
count = int(item_match.group(1))
item = item_match.group(2)
# Remove trailing underscores to fix the item name issue
item = item.rstrip('_')
items.append(item)
return items
def analyze_experiments(root_dir, model_name):
# Store results by number of blocked agents
blocked_access_results = defaultdict(lambda: {
"success": 0,
"total": 0
})
# Store results by cooking item
cooking_item_results = defaultdict(lambda: {
"success": 0,
"total": 0
})
# Keep track of all unique cooking items
all_cooking_items = set()
# Keep track of ignored tasks
ignored_tasks = []
# Get a list of all experiment directories
experiment_dirs = [d for d in os.listdir(root_dir) if os.path.isdir(os.path.join(root_dir, d))
and d.startswith("multiagent_cooking_")]
for exp_dir in experiment_dirs:
# Extract cooking items
cooking_items = extract_cooking_items(exp_dir)
# Add to unique items set
all_cooking_items.update(cooking_items)
# Extract blocked access information from directory name
blocked_access_match = re.search(r'blocked_access_([0-9_]+)$', exp_dir)
if blocked_access_match:
blocked_access_str = blocked_access_match.group(1)
# Count how many agents have blocked access
num_blocked_agents = len(blocked_access_str.split('_'))
blocked_key = f"{num_blocked_agents} agent(s)"
else:
# No agents blocked
blocked_key = "0 agent(s)"
# Check if the task was successful
is_successful = False
score_found = False
full_exp_path = os.path.join(root_dir, exp_dir)
# Get all JSON files in the experiment directory
agent_files = [f for f in os.listdir(full_exp_path) if f.endswith(".json")]
# Check each agent file for success information
for agent_file in agent_files:
agent_file_path = os.path.join(full_exp_path, agent_file)
try:
with open(agent_file_path, 'r') as f:
agent_data = json.load(f)
# Check for score information in the turns data
if "turns" in agent_data:
for turn in agent_data["turns"]:
if turn.get("role") == "system" and "content" in turn:
if isinstance(turn["content"], str) and "Task ended with score : " in turn["content"]:
score_found = True
if "Task ended with score : 1" in turn["content"]:
is_successful = True
break
# If we found success, no need to check other files
if is_successful:
break
except (json.JSONDecodeError, IOError) as e:
print(f"Error reading {agent_file_path}: {e}")
# Continue to check other agent files instead of failing
continue
# If no score information was found in any agent file, ignore this task
if not score_found:
ignored_tasks.append(exp_dir)
continue
# Update cooking item results
for item in cooking_items:
cooking_item_results[item]["total"] += 1
if is_successful:
cooking_item_results[item]["success"] += 1
# Update the blocked access counters
blocked_access_results[blocked_key]["total"] += 1
if is_successful:
blocked_access_results[blocked_key]["success"] += 1
# Print information about ignored tasks
if ignored_tasks:
print(f"\n{model_name}: Ignored {len(ignored_tasks)} tasks with no score information:")
for task in ignored_tasks:
print(f" - {task}")
return blocked_access_results, cooking_item_results, all_cooking_items, ignored_tasks
def print_model_comparison_blocked(models_results):
print("\nModel Comparison by Number of Agents with Blocked Access:")
print("=" * 100)
# Get all possible blocked access keys
all_blocked_keys = set()
for model_results in models_results.values():
all_blocked_keys.update(model_results.keys())
# Sort the keys
sorted_keys = sorted(all_blocked_keys, key=lambda x: int(x.split()[0]))
# Create the table
table = PrettyTable()
table.field_names = ["Blocked Agents"] + [
f"{model_name} (Success Rate | Success/Total)" for model_name in models_results.keys()
]
# Calculate and add rows for each blocked key
model_totals = {model: {"success": 0, "total": 0} for model in models_results.keys()}
for key in sorted_keys:
row = [key]
for model_name, model_results in models_results.items():
if key in model_results:
success = model_results[key]["success"]
total = model_results[key]["total"]
model_totals[model_name]["success"] += success
model_totals[model_name]["total"] += total
success_rate = (success / total * 100) if total > 0 else 0
row.append(f"{success_rate:.2f}% | {success}/{total}")
else:
row.append("N/A")
table.add_row(row)
# Print the table
print(table)
# Print the overall results
overall_row = ["Overall"]
for model_name, totals in model_totals.items():
success = totals["success"]
total = totals["total"]
success_rate = (success / total * 100) if total > 0 else 0
overall_row.append(f"{success_rate:.2f}% | {success}/{total}")
table.add_row(overall_row)
print(table)
def print_model_comparison_items(models_item_results, all_cooking_items):
print("\nModel Comparison by Cooking Item:")
print("=" * 100)
# Create the table
table = PrettyTable()
table.field_names = ["Cooking Item"] + [
f"{model_name} (Success Rate | Success/Total)" for model_name in models_item_results.keys()
]
# Calculate and add rows for each cooking item
model_totals = {model: {"success": 0, "total": 0} for model in models_item_results.keys()}
for item in sorted(all_cooking_items):
row = [item]
for model_name, model_results in models_item_results.items():
if item in model_results:
success = model_results[item]["success"]
total = model_results[item]["total"]
model_totals[model_name]["success"] += success
model_totals[model_name]["total"] += total
success_rate = (success / total * 100) if total > 0 else 0
row.append(f"{success_rate:.2f}% | {success}/{total}")
else:
row.append("N/A")
table.add_row(row)
# Print the table
print(table)
# Print the overall results
overall_row = ["Overall"]
for model_name, totals in model_totals.items():
success = totals["success"]
total = totals["total"]
success_rate = (success / total * 100) if total > 0 else 0
overall_row.append(f"{success_rate:.2f}% | {success}/{total}")
table.add_row(overall_row)
print(table)
def print_model_comparison_items_by_blocked(models_data, all_cooking_items):
print("\nDetailed Model Comparison by Cooking Item and Blocked Agent Count:")
print("=" * 120)
# For each cooking item, create a comparison table by blocked agent count
for item in sorted(all_cooking_items):
print(f"\nResults for cooking item: {item}")
print("-" * 100)
# Create the table
table = PrettyTable()
table.field_names = ["Blocked Agents"] + [
f"{model_name} Success Rate" for model_name in models_data.keys()
] + [
f"{model_name} Success/Total" for model_name in models_data.keys()
]
# Get all possible blocked agent counts
all_blocked_keys = set()
for model_name, model_data in models_data.items():
_, _, item_blocked_data = model_data
for blocked_key in item_blocked_data.get(item, {}).keys():
all_blocked_keys.add(blocked_key)
# Sort the keys
sorted_keys = sorted(all_blocked_keys, key=lambda x: int(x.split()[0]))
# Add rows for each blocked key
for blocked_key in sorted_keys:
row = [blocked_key]
for model_name, model_data in models_data.items():
_, _, item_blocked_data = model_data
if item in item_blocked_data and blocked_key in item_blocked_data[item]:
success = item_blocked_data[item][blocked_key]["success"]
total = item_blocked_data[item][blocked_key]["total"]
if total > 0:
success_rate = (success / total * 100)
row.append(f"{success_rate:.2f}%")
row.append(f"{success}/{total}")
else:
row.append("N/A")
row.append("0/0")
else:
row.append("N/A")
row.append("N/A")
table.add_row(row)
# Print the table
print(table)
# Print item summary for each model
overall_row = ["Overall"]
for model_name, model_data in models_data.items():
_, item_results, _ = model_data
if item in item_results:
success = item_results[item]["success"]
total = item_results[item]["total"]
if total > 0:
success_rate = (success / total * 100)
overall_row.append(f"{success_rate:.2f}%")
overall_row.append(f"{success}/{total}")
else:
overall_row.append("N/A")
overall_row.append("0/0")
else:
overall_row.append("N/A")
overall_row.append("N/A")
table.add_row(overall_row)
print(table)
def generate_item_blocked_data(experiments_root):
# Organize data by item and blocked agent count
item_blocked_data = defaultdict(lambda: defaultdict(lambda: {"success": 0, "total": 0}))
# Keep track of ignored tasks
ignored_tasks = []
# Populate the data structure
for exp_dir in os.listdir(experiments_root):
if not os.path.isdir(os.path.join(experiments_root, exp_dir)) or not exp_dir.startswith("multiagent_cooking_"):
continue
# Extract cooking items
cooking_items = extract_cooking_items(exp_dir)
# Extract blocked access information
blocked_access_match = re.search(r'blocked_access_([0-9_]+)$', exp_dir)
if blocked_access_match:
blocked_access_str = blocked_access_match.group(1)
num_blocked_agents = len(blocked_access_str.split('_'))
blocked_key = f"{num_blocked_agents} agent(s)"
else:
blocked_key = "0 agent(s)"
# Check if the task was successful and if score information exists
is_successful = False
score_found = False
full_exp_path = os.path.join(experiments_root, exp_dir)
agent_files = [f for f in os.listdir(full_exp_path) if f.endswith(".json")]
for agent_file in agent_files:
try:
with open(os.path.join(full_exp_path, agent_file), 'r') as f:
agent_data = json.load(f)
if "turns" in agent_data:
for turn in agent_data["turns"]:
if turn.get("role") == "system" and "content" in turn:
if isinstance(turn["content"], str) and "Task ended with score : " in turn["content"]:
score_found = True
if "Task ended with score : 1" in turn["content"]:
is_successful = True
break
if is_successful:
break
except:
continue
# If no score information was found, skip this task
if not score_found:
ignored_tasks.append(exp_dir)
continue
# Update the item-blocked data
for item in cooking_items:
item_blocked_data[item][blocked_key]["total"] += 1
if is_successful:
item_blocked_data[item][blocked_key]["success"] += 1
return item_blocked_data, ignored_tasks
def analyze_cooking_log(log_file):
# Placeholder for the actual analysis logic if it exists
# This function needs to be implemented based on the script's purpose
print(f"Analyzing {log_file}...") # Example print
# Example: return a dictionary of results
return {"file": os.path.basename(log_file), "score": 1} # Dummy result
def main():
parser = argparse.ArgumentParser(description='Analyze cooking task logs.')
# Change default input dir to 'experiments' relative to project root
parser.add_argument('--log_dir', type=str, default='experiments',
help='Directory containing the log files (relative to project root)')
# Removed --output_file argument
# parser.add_argument('--output_file', type=str, default='cooking_analysis_results.csv',
# help='Output CSV file name (relative to project root)')
args = parser.parse_args()
# Resolve log_dir path relative to project root
log_dir_abs = args.log_dir
if not os.path.isabs(log_dir_abs):
log_dir_abs = os.path.join(project_root, log_dir_abs)
# Hardcode output file path
output_file_abs = os.path.join(analysis_output_dir, "cooking_analysis.csv")
all_results = []
# Use absolute log directory path
log_pattern = os.path.join(log_dir_abs, '*.json')
print(f"Searching for logs in: {log_pattern}")
log_files_found = glob.glob(log_pattern)
print(f"Found {len(log_files_found)} log files.")
for log_file in log_files_found:
results = analyze_cooking_log(log_file)
if results:
all_results.append(results) # Append the results dictionary
if all_results:
df = pd.DataFrame(all_results)
# Ensure the output directory exists
os.makedirs(os.path.dirname(output_file_abs), exist_ok=True)
# Save to hardcoded absolute output file path
df.to_csv(output_file_abs, index=False)
print(f"Analysis complete. Results saved to {output_file_abs}")
else:
print("No results generated from log files.")
if __name__ == "__main__":
main()


@ -0,0 +1,379 @@
import boto3
import os
import json
import re
from botocore.exceptions import ClientError
import argparse
from tqdm import tqdm
import glob
from prettytable import PrettyTable
import pandas as pd
# Calculate project root directory
project_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Define output directory for analysis results
analysis_output_dir = os.path.join(project_root, "experiments", "analysis_results")
# Ensure the output directory exists
os.makedirs(analysis_output_dir, exist_ok=True)
def download_s3_folders(bucket_name, s3_prefix, local_base_dir):
"""
Downloads groups of folders from S3 based on the next level of prefixes.
Args:
bucket_name (str): Name of the S3 bucket.
s3_prefix (str): Prefix where the folders are located (e.g., 'my-experiments/').
local_base_dir (str): Local directory to download the folders to.
Returns:
list: List of downloaded local folder paths.
"""
s3_client = boto3.client('s3')
downloaded_folders = []
# Ensure local_base_dir is relative to project root if not absolute
if not os.path.isabs(local_base_dir):
local_base_dir = os.path.join(project_root, local_base_dir)
try:
# List objects with the prefix, delimited by '/' to find sub-prefixes (folders)
response = s3_client.list_objects_v2(Bucket=bucket_name, Prefix=s3_prefix, Delimiter='/')
if 'CommonPrefixes' not in response:
print(f"No folders found under s3://{bucket_name}/{s3_prefix}")
return downloaded_folders
s3_folder_prefixes = [prefix['Prefix'] for prefix in response['CommonPrefixes']]
subfolder = s3_prefix.split('/')[-2]
for s3_folder_prefix in tqdm(s3_folder_prefixes):
folder_name = s3_folder_prefix.split('/')[-2] # Extract folder name
local_folder_path = os.path.join(local_base_dir, subfolder, folder_name)
os.makedirs(local_folder_path, exist_ok=True)
downloaded_folders.append(local_folder_path)
# Download files within the folder
objects_in_folder = s3_client.list_objects_v2(Bucket=bucket_name, Prefix=s3_folder_prefix)
if 'Contents' in objects_in_folder:
for obj in objects_in_folder['Contents']:
s3_key = obj['Key']
local_file_path = os.path.join(local_folder_path, os.path.basename(s3_key))
try:
s3_client.download_file(bucket_name, s3_key, local_file_path)
except Exception as e:
print(f"Error downloading {s3_key}: {e}")
else:
print(f"No files found in {s3_folder_prefix}")
except ClientError as e:
print(f"Error accessing S3: {e}")
return []
return downloaded_folders
def analyze_json_file(file_path):
"""
Analyzes a single JSON file to extract the task outcome.
Args:
file_path (str): Path to the JSON file.
Returns:
bool: True if task was successful, False otherwise.
"""
try:
with open(file_path, 'r') as f:
data = json.load(f)
if 'turns' in data and isinstance(data['turns'], list):
for turn in data['turns']: # Check all turns, not just from the end
if turn.get('role') == 'system' and isinstance(turn.get('content'), str):
if "Task successful ended with code : 2" in turn['content'] or "Task ended with score : 1" in turn["content"] or "Task ended in score: 1" in turn["content"]:
# print(f"Success found in {file_path}")
return True
return False
except FileNotFoundError:
print(f"Error: File not found: {file_path}")
return None
except json.JSONDecodeError:
print(f"Error: Invalid JSON format in: {file_path}")
return None
except Exception as e:
print(f"An unexpected error occurred while processing {file_path}: {e}")
return None
def extract_result(folder_path):
folder_name = os.path.basename(folder_path)
json_files = glob.glob(os.path.join(folder_path, "*.json"))
if not json_files:
print(f"No JSON files found in {folder_name}")
return None
else:
# Check each JSON file in the folder for success indication
for json_file in json_files:
outcome = analyze_json_file(json_file)
if outcome: # If any file indicates success, return True
return True
return False # Return False only if no files indicate success
def is_base(folder_path):
return "full_plan" in folder_path and "depth_0" in folder_path and "missing" not in folder_path
def base_without_plan(folder_path):
return "no_plan" in folder_path and "depth_0" in folder_path and "missing" in folder_path
def aggregate_results(local_folders):
"""
Aggregates the analysis results for each folder.
Args:
local_folders (list): List of local folder paths containing the JSON files.
Returns:
dict: A dictionary where keys are folder names and values are the aggregated outcomes.
"""
aggregated_data = {}
total = 0
successful = 0
base_successful = 0
base_total = 0
base_no_plan_successful = 0
base_no_plan_total = 0
missing_successful = 0
missing_total = 0
full_plan_successful = 0
full_plan_total = 0
partial_plan_successful = 0
partial_plan_total = 0
no_plan_successful = 0
no_plan_total = 0
high_depth_successful = 0
high_depth_total = 0
# For depth-based metrics
depth_0_successful = 0
depth_0_total = 0
depth_1_successful = 0
depth_1_total = 0
depth_2_successful = 0
depth_2_total = 0
for folder_path in tqdm(local_folders):
folder_name = os.path.basename(folder_path)
try:
total += 1
            result = extract_result(folder_path)
            success = int(bool(result))  # reuse the single call; None (unreadable logs) counts as a failure
successful += success
print(f"Folder: {folder_name} -> {success}")
if "missing" in folder_path:
missing_successful += success
missing_total += 1
if is_base(folder_path):
base_successful += success
base_total += 1
if base_without_plan(folder_path):
base_no_plan_successful += success
base_no_plan_total += 1
if "full_plan" in folder_path:
full_plan_successful += success
full_plan_total += 1
if "partial_plan" in folder_path:
partial_plan_successful += success
partial_plan_total += 1
if "no_plan" in folder_path:
no_plan_successful += success
no_plan_total += 1
if "depth_1" in folder_path or "depth_2" in folder_path:
high_depth_successful += success
high_depth_total += 1
# Collect depth-specific metrics
if "depth_0" in folder_path:
depth_0_successful += success
depth_0_total += 1
elif "depth_1" in folder_path:
depth_1_successful += success
depth_1_total += 1
elif "depth_2" in folder_path:
depth_2_successful += success
depth_2_total += 1
except Exception as e:
print(f"Error processing {folder_name}: {e}")
return {
"total": total,
"successful": successful,
"success_rate": successful / total if total > 0 else 0,
"base_total": base_total,
"base_successful": base_successful,
"base_success_rate": base_successful / base_total if base_total > 0 else 0,
"base_no_plan_total": base_no_plan_total,
"base_no_plan_successful": base_no_plan_successful,
"base_no_plan_success_rate": base_no_plan_successful / base_no_plan_total if base_no_plan_total > 0 else 0,
"missing_total": missing_total,
"missing_successful": missing_successful,
"missing_success_rate": missing_successful / missing_total if missing_total > 0 else 0,
"full_plan_total": full_plan_total,
"full_plan_successful": full_plan_successful,
"full_plan_success_rate": full_plan_successful / full_plan_total if full_plan_total > 0 else 0,
"partial_plan_total": partial_plan_total,
"partial_plan_successful": partial_plan_successful,
"partial_plan_success_rate": partial_plan_successful / partial_plan_total if partial_plan_total > 0 else 0,
"no_plan_total": no_plan_total,
"no_plan_successful": no_plan_successful,
"no_plan_success_rate": no_plan_successful / no_plan_total if no_plan_total > 0 else 0,
"high_depth_total": high_depth_total,
"high_depth_successful": high_depth_successful,
"high_depth_success_rate": high_depth_successful / high_depth_total if high_depth_total > 0 else 0,
"depth_0_total": depth_0_total,
"depth_0_successful": depth_0_successful,
"depth_0_success_rate": depth_0_successful / depth_0_total if depth_0_total > 0 else 0,
"depth_1_total": depth_1_total,
"depth_1_successful": depth_1_successful,
"depth_1_success_rate": depth_1_successful / depth_1_total if depth_1_total > 0 else 0,
"depth_2_total": depth_2_total,
"depth_2_successful": depth_2_successful,
"depth_2_success_rate": depth_2_successful / depth_2_total if depth_2_total > 0 else 0
}
def get_immediate_subdirectories(a_dir):
# Ensure a_dir is relative to project root if not absolute
if not os.path.isabs(a_dir):
a_dir = os.path.join(project_root, a_dir)
return [os.path.join(a_dir, name) for name in os.listdir(a_dir)
if os.path.isdir(os.path.join(a_dir, name))]
def format_percentage(value):
"""Format a decimal value as a percentage with 2 decimal places"""
return f"{value * 100:.2f}%"
def create_pretty_tables(results):
"""
Create pretty tables for the results.
Args:
results (dict): Dictionary with aggregated results
Returns:
str: String representation of the formatted tables
"""
# Table 1: Overall Metrics
overall_table = PrettyTable()
overall_table.title = "Overall Metrics"
overall_table.field_names = ["Metric", "Total", "Successful", "Success Rate"]
overall_table.add_row(["All Tests", results["total"], results["successful"], format_percentage(results["success_rate"])])
overall_table.add_row(["Base", results["base_total"], results["base_successful"], format_percentage(results["base_success_rate"])])
overall_table.add_row(["Base (No Plan)", results["base_no_plan_total"], results["base_no_plan_successful"], format_percentage(results["base_no_plan_success_rate"])])
overall_table.add_row(["Missing", results["missing_total"], results["missing_successful"], format_percentage(results["missing_success_rate"])])
overall_table.add_row(["High Depth", results["high_depth_total"], results["high_depth_successful"], format_percentage(results["high_depth_success_rate"])])
# Table 2: Depth-based Metrics
depth_table = PrettyTable()
depth_table.title = "Metrics by Depth"
depth_table.field_names = ["Depth", "Total", "Successful", "Success Rate"]
depth_table.add_row(["Depth 0", results["depth_0_total"], results["depth_0_successful"], format_percentage(results["depth_0_success_rate"])])
depth_table.add_row(["Depth 1", results["depth_1_total"], results["depth_1_successful"], format_percentage(results["depth_1_success_rate"])])
depth_table.add_row(["Depth 2", results["depth_2_total"], results["depth_2_successful"], format_percentage(results["depth_2_success_rate"])])
# Table 3: Plan Availability Metrics
plan_table = PrettyTable()
plan_table.title = "Metrics by Plan Availability"
plan_table.field_names = ["Plan Type", "Total", "Successful", "Success Rate"]
plan_table.add_row(["Full Plan", results["full_plan_total"], results["full_plan_successful"], format_percentage(results["full_plan_success_rate"])])
plan_table.add_row(["Partial Plan", results["partial_plan_total"], results["partial_plan_successful"], format_percentage(results["partial_plan_success_rate"])])
plan_table.add_row(["No Plan", results["no_plan_total"], results["no_plan_successful"], format_percentage(results["no_plan_success_rate"])])
return overall_table.get_string() + "\n\n" + depth_table.get_string() + "\n\n" + plan_table.get_string()
def analyze_crafting_log(log_file):
# ... existing code ...
pass
def main():
# 1. Download folders from AWS or use local directory
parser = argparse.ArgumentParser()
parser.add_argument('--s3_download', action="store_true", help='Download folders from S3')
parser.add_argument('--aws_bucket_name', default="mindcraft" , type=str, help='AWS bucket name')
parser.add_argument('--s3_folder_prefix', default="", type=str, help='S3 folder prefix')
# Change default input dir to 'experiments' relative to project root
parser.add_argument('--local_download_dir', default="experiments", type=str, help='Local directory containing results (relative to project root)')
args = parser.parse_args()
AWS_BUCKET_NAME = args.aws_bucket_name
S3_FOLDER_PREFIX = args.s3_folder_prefix
# Resolve local_download_dir relative to project root
local_download_dir_abs = args.local_download_dir
if not os.path.isabs(local_download_dir_abs):
local_download_dir_abs = os.path.join(project_root, local_download_dir_abs)
# Construct LOCAL_DOWNLOAD_DIR based on the absolute path
# This directory will be used for results aggregation and saving output files
if args.local_download_dir != "":
LOCAL_DOWNLOAD_DIR = local_download_dir_abs # Base results directory
if args.s3_download and S3_FOLDER_PREFIX: # Append S3 prefix if downloading to keep results separate
LOCAL_DOWNLOAD_DIR = os.path.join(local_download_dir_abs, S3_FOLDER_PREFIX.replace('/', '_').rstrip('_'))
else:
LOCAL_DOWNLOAD_DIR = local_download_dir_abs # Should not happen with default
if (args.s3_download):
print(f"Downloading folders from s3://{AWS_BUCKET_NAME}/{S3_FOLDER_PREFIX} to {LOCAL_DOWNLOAD_DIR}...")
# Pass the absolute base path for downloads, download_s3_folders handles subfolder creation
folders = download_s3_folders(AWS_BUCKET_NAME, S3_FOLDER_PREFIX, local_download_dir_abs)
else:
# Use the absolute path to get subdirectories
folders = get_immediate_subdirectories(local_download_dir_abs)
print(f"Found local folders: {folders}")
if not folders:
print("No folders found or downloaded. Exiting.")
exit()
results = aggregate_results(folders)
print(results)
# Create pretty tables
tables_output = create_pretty_tables(results)
print("\n" + tables_output)
# Save results to files within the hardcoded experiments/analysis_results/ directory
# os.makedirs(LOCAL_DOWNLOAD_DIR, exist_ok=True) # Output dir created at top
# Save raw results
# Determine filename based on S3 prefix or local dir name if possible
if S3_FOLDER_PREFIX:
results_filename_base = S3_FOLDER_PREFIX.replace('/', '_').rstrip('_')
else:
results_filename_base = os.path.basename(local_download_dir_abs) if local_download_dir_abs else "local"
results_filename_base = f"crafting_analysis_{results_filename_base}"
results_file_path = os.path.join(analysis_output_dir, f"{results_filename_base}_results.txt")
with open(results_file_path, "w") as file:
file.write("Results\n")
for key, value in results.items():
file.write(f"{key}: {value}\n")
# Save pretty tables
tables_file_path = os.path.join(analysis_output_dir, f"{results_filename_base}_tables.txt")
with open(tables_file_path, "w") as file:
file.write(tables_output)
print(f"Results saved to {results_file_path} and tables saved to {tables_file_path}")
if __name__ == "__main__":
main()


@ -0,0 +1,34 @@
# Construction Tasks Generation
## Overview
Instructions on how to customize construction task generation.
## Getting Started
Edit and run `tasks/construction_tasks/generate_multiagent_construction_tasks.js` to create new task variants. Note that the 'main' section is at the end of the file and determines which output file gets written to.
## Customization Options
### Cheats and Profile Configurations
To enable cheats, set the `cheat` variable to `true` in `profiles/task_construct.json`.
You can additionally access
### Task Configuration
For task-specific customization, modify the `generateConstructionTasks` function in `tasks/construction_tasks/generate_multiagent_construction_tasks.js` to adjust:
1. Room parameters:
- Size
- Window style
- Carpet style
2. Task generation:
- Number of variants
- Timeout duration
The generation code is documented to help with customization.
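As a rough, hypothetical illustration (the values below are invented, not taken from the shipped task files), each entry in the generated JSON is keyed by a task name and carries the fields that the verification script later in this diff checks for; the blueprint shape follows the levels/coordinates/placement layout used by the blueprint visualizer.

// Hypothetical entry shape; the task name, values, and tiny blueprint are placeholders.
const exampleTask = {
    "construction_materials_1_rooms_1_variant_0": {
        type: "construction",
        goal: "Work together to build the structure described by the blueprint.",
        conversation: "Let's build this together!",
        agent_count: 2,
        initial_inventory: {},          // per-agent starting materials; may be empty when cheats are enabled
        blueprint: {
            levels: [
                { level: 0, coordinates: [0, -60, 0], placement: [["stone_bricks", "air"], ["stone_bricks", "air"]] }
            ]
        }
    }
};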
## Important File Locations
- `tasks/construction_tasks/generate_multiagent_construction_tasks.js` - Main task generation script
- `profiles/task_construct.json` - Default configuration profile
- `tasks/construction_tasks/train_multiagent_construction_tasks.json` - Training task definitions (initialized with 5 variants)
- `tasks/construction_tasks/test_multiagent_construction_tasks.json` - Test task definitions (initialized with 1 variant)
- `src/agent/tasks/construction_tasks.js` - Blueprint Class, Construction Validation Class, and Procedural Generation Function


@ -0,0 +1,32 @@
import fs from 'fs';
// Read and parse the JSON file
const tasks = JSON.parse(fs.readFileSync('./test_multiagent_construction_tasks.json'));
// Validate format and count variants
const counts = {};
const expectedKeys = ['type', 'goal', 'conversation', 'agent_count', 'blueprint', 'initial_inventory'];
Object.keys(tasks).forEach(taskName => {
const task = tasks[taskName];
// Validate task format
if (!expectedKeys.every(key => key in task)) {
console.error(`Invalid task format in ${taskName}`);
return;
}
const category = taskName.split('_variant_')[0];
counts[category] = (counts[category] || 0) + 1;
});
console.log('\nVariants per category:');
Object.entries(counts).forEach(([category, count]) => {
console.log(`${category}: ${count}`);
});
console.log(`\nTotal tasks: ${Object.keys(tasks).length}`);
console.log(`Total categories: ${Object.keys(counts).length}`);
// const expectedTotal = 5 * 3 * 3* 3
// * 5; // materialLevels * roomCounts * windowStyles * carpetStyles * variants
// console.log(`Expected total tasks: ${expectedTotal}`);


@ -0,0 +1,76 @@
import json
import matplotlib.pyplot as plt
import numpy as np
def display_3d_blocks(data):
"""Displays a 3D array of blocks with different types in a single figure with subplots for each level,
including block coordinates. Dynamically adjusts the height of the figure.
Args:
data: A dictionary containing the block data, structured like the JSON example.
"""
block_types = {
"air": "#FFFFFF", # White
"oak_planks": "#8B4513", # Saddle Brown
"stone_bricks": "#808080", # Gray
"oak_door": "#A0522D", # Sienna
"oak_stairs": "#D2691E", # Chocolate
"quartz_block": "#FFFFF0", # Ivory
"glass_pane": "#00CED1", # Dark Turquoise
"torch": "#FF8C00" # Dark Orange
}
# Extract data from the JSON
levels = data["levels"]
num_levels = len(levels)
# Create a figure and subplots grid
fig, axes = plt.subplots(num_levels, 1, figsize=(10, 5 * num_levels)) # One column, dynamic height
axes[0].legend(handles=[plt.Rectangle((0, 0), 1, 1, color=color) for color in block_types.values()],
labels=block_types.keys(), loc='upper right')
starting_coords = levels[0]["coordinates"]
# Iterate over each level and corresponding subplot
for i, level in enumerate(levels):
ax = axes[i]
ax.set_title(f"Level {level['level']}")
placement = level["placement"]
# Convert placement data to NumPy array
block_array = np.array([
[block_types.get(block, 'gray') for block in row] for row in placement
])
# Iterate over each block in the level
for x in range(block_array.shape[1]):
for y in range(block_array.shape[0]):
block_type = block_array[y, x]
# Plot the block as a rectangle
rect = plt.Rectangle((x, y), 1, 1, color=block_type)
ax.add_patch(rect)
# Add coordinate text to the center of the block
real_x = x + starting_coords[0]
real_y = level['level'] + starting_coords[1]
real_z = y + starting_coords[2]
ax.text(x + 0.5, y + 0.5, f"({real_x},{real_y},{real_z})", ha='center', va='center', fontsize=8)
# Set axis limits and labels
ax.set_xlim([0, block_array.shape[1]])
ax.set_ylim([0, block_array.shape[0]])
ax.set_xlabel("X")
ax.set_ylabel("Y")
plt.tight_layout() # Adjust spacing between subplots
# plt.show()
plt.savefig("construction_tasks/church_three_agents.pdf", bbox_inches='tight')
# Example usage:
with open("construction_tasks/custom/church_three_agents.json", "r") as f:
data = json.load(f)
data = data["church_three_agents"]["blueprint"]
display_3d_blocks(data)

Binary file not shown.

File diff suppressed because it is too large.

Some files were not shown because too many files have changed in this diff.