Add files via upload

This commit is contained in:
Sweaterdog 2025-02-08 22:30:36 -08:00 committed by GitHub
parent 0d2e4c7b9c
commit 342ef1b473
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
47 changed files with 1456 additions and 447 deletions

View file

@ -1,12 +1,11 @@
# Mindcraft 🧠⛏️
Crafting minds for Minecraft with LLMs and Mineflayer!
Crafting minds for Minecraft with LLMs and [Mineflayer!](https://prismarinejs.github.io/mineflayer/#/)
[FAQ](https://github.com/kolbytn/mindcraft/blob/main/FAQ.md) | [Discord Support](https://discord.gg/mp73p35dzC) | [Blog Post](https://kolbynottingham.com/mindcraft/) | [Contributor TODO](https://github.com/users/kolbytn/projects/1)
#### ‼️Warning‼️
> [!WARNING]
> Do not connect this bot to public servers with coding enabled. This project allows an LLM to write/execute code on your computer. While the code is sandboxed, it is still vulnerable to injection attacks on public servers. Code writing is disabled by default; you can enable it by setting `allow_insecure_coding` to `true` in `settings.js`. We strongly recommend running with additional layers of security such as Docker containers. Ye be warned.
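For reference, the flag lives in `settings.js`. A minimal sketch based on the settings file shown later in this commit; leave it `false` unless you accept the risk:
```js
// settings.js (excerpt)
export default
{
    // ...
    "allow_insecure_coding": false, // allows newAction command and model can write/run code on your computer. enable at own risk
    // ...
}
```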
## Requirements
@ -29,7 +28,7 @@ Do not connect this bot to public servers with coding enabled. This project allo
6. Run `node main.js` from the installed directory
If you encounter issues, check the [FAQ](https://github.com/kolbytn/mindcraft/blob/main/FAQ.md) or find support on [discord](https://discord.gg/jVxQWVTM). We are currently not very responsive to github issues.
If you encounter issues, check the [FAQ](https://github.com/kolbytn/mindcraft/blob/main/FAQ.md) or find support on [Discord](https://discord.gg/mp73p35dzC). We are currently not very responsive to GitHub issues.
## Customization
@ -42,7 +41,7 @@ You can configure the agent's name, model, and prompts in their profile like `an
| OpenAI | `OPENAI_API_KEY` | `gpt-4o-mini` | [docs](https://platform.openai.com/docs/models) |
| Google | `GEMINI_API_KEY` | `gemini-pro` | [docs](https://ai.google.dev/gemini-api/docs/models/gemini) |
| Anthropic | `ANTHROPIC_API_KEY` | `claude-3-haiku-20240307` | [docs](https://docs.anthropic.com/claude/docs/models-overview) |
| Replicate | `REPLICATE_API_KEY` | `meta/meta-llama-3-70b-instruct` | [docs](https://replicate.com/collections/language-models) |
| Replicate | `REPLICATE_API_KEY` | `replicate/meta/meta-llama-3-70b-instruct` | [docs](https://replicate.com/collections/language-models) |
| Ollama (local) | n/a | `llama3` | [docs](https://ollama.com/library) |
| Groq | `GROQCLOUD_API_KEY` | `groq/mixtral-8x7b-32768` | [docs](https://console.groq.com/docs/models) |
| Hugging Face | `HUGGINGFACE_API_KEY` | `huggingface/mistralai/Mistral-Nemo-Instruct-2407` | [docs](https://huggingface.co/models) |
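Most providers are configured by adding the key name from the table above to a JSON keys file (one appears later in this commit) or to your environment variables. A minimal sketch with placeholder values; only the providers you actually use need a key:
```json
{
    "OPENAI_API_KEY": "sk-...",
    "GEMINI_API_KEY": "",
    "ANTHROPIC_API_KEY": ""
}
```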
@ -63,7 +62,8 @@ To connect to online servers your bot will need an official Microsoft/Minecraft
// rest is same...
```
‼️ The bot's name in the profile.json must exactly match the Minecraft profile name! Otherwise the bot will spam talk to itself.
> [!CAUTION]
> The bot's name in the profile.json must exactly match the Minecraft profile name! Otherwise the bot will spam talk to itself.
Mindcraft connects with whichever account the Minecraft launcher is currently using. To use a different account, switch accounts in the launcher, run `node main.js`, and then switch back to your main account after the bot has connected.
@ -105,39 +105,37 @@ node main.js --profiles ./profiles/andy.json ./profiles/jill.json
### Model Specifications
LLM backends can be specified as simply as `"model": "gpt-3.5-turbo"`. However, for both the chat model and the embedding model, the bot profile can specify the below attributes:
LLM models can be specified as simply as `"model": "gpt-4o"`. However, you can specify different models for chat, coding, and embeddings.
You can pass a string or an object for these fields. A model object must specify an `api`, and optionally a `model`, `url`, and additional `params`.
```json
"model": {
"api": "openai",
"model": "gpt-4o",
"url": "https://api.openai.com/v1/",
"model": "gpt-3.5-turbo"
"params": {
"max_tokens": 1000,
"temperature": 1
}
},
"code_model": {
"api": "openai",
"model": "gpt-4",
"url": "https://api.openai.com/v1/"
},
"embedding": {
"api": "openai",
"url": "https://api.openai.com/v1/",
"model": "text-embedding-ada-002"
}
```
The model parameter accepts either a string or object. If a string, it should specify the model to be used. The api and url will be assumed. If an object, the api field must be specified. Each api has a default model and url, so those fields are optional.
`model` is used for chat, `code_model` is used for newAction coding, and `embedding` is used to embed text for example selection. If `code_model` is not specified, then it will use `model` for coding.
If the embedding field is not specified, then it will use the default embedding method for the chat model's api (Note that anthropic has no embedding model). The embedding parameter can also be a string or object. If a string, it should specify the embedding api and the default model and url will be used. If a valid embedding is not specified and cannot be assumed, then word overlap will be used to retrieve examples instead.
All apis have default models and urls, so those fields are optional. Note some apis have no embedding model, so they will default to word overlap to retrieve examples.
Thus, all the below specifications are equivalent to the above example:
```json
"model": "gpt-3.5-turbo"
```
```json
"model": {
"api": "openai"
}
```
```json
"model": "gpt-3.5-turbo",
"embedding": "openai"
```
The `params` field is optional and can be used to specify additional parameters for the model. It accepts any key-value pairs supported by the api. It is not supported for embedding models.
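For instance, a chat model that keeps the default url but passes request parameters (the parameter names here are examples; any key-value pairs your api supports will work):
```json
"model": {
    "api": "openai",
    "model": "gpt-4o",
    "params": {
        "max_tokens": 512,
        "temperature": 0.7
    }
}
```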
## Patches

6
bots/execTemplate.js Normal file
View file

@ -0,0 +1,6 @@
(async (bot) => {
/* CODE HERE */
log(bot, 'Code finished.');
})
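The `/* CODE HERE */` marker is where the agent's generated code is injected. A rough sketch of the substitution, mirroring the `stageCode` changes in `src/agent/coder.js` further down (variable names assumed):
```js
// sketch: staging generated code into the template
let src = '';
for (let line of generatedCode.split('\n')) {
    src += `    ${line}\n`; // indent each generated line into the template body
}
src = codeTemplate.replace('/* CODE HERE */', src); // codeTemplate holds this file's contents
```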

10
bots/lintTemplate.js Normal file
View file

@ -0,0 +1,10 @@
import * as skills from '../../../src/agent/library/skills.js';
import * as world from '../../../src/agent/library/world.js';
import Vec3 from 'vec3';
const log = skills.log;
export async function main(bot) {
/* CODE HERE */
log(bot, 'Code finished.');
}

25
eslint.config.js Normal file
View file

@ -0,0 +1,25 @@
// eslint.config.js
import globals from "globals";
import pluginJs from "@eslint/js";
/** @type {import('eslint').Linter.Config[]} */
export default [
// First, import the recommended configuration
pluginJs.configs.recommended,
// Then override or customize specific rules
{
languageOptions: {
globals: globals.browser,
ecmaVersion: 2021,
sourceType: "module",
},
rules: {
"no-undef": "error", // Disallow the use of undeclared variables or functions.
"semi": ["error", "always"], // Require the use of semicolons at the end of statements.
"curly": "warn", // Enforce the use of curly braces around blocks of code.
"no-unused-vars": "off", // Disable warnings for unused variables.
"no-unreachable": "off", // Disable warnings for unreachable code.
},
},
];
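This config is consumed by the new linting pass in `coder.js` below; roughly, the usage is:
```js
import { ESLint } from "eslint";

// a sketch of how coder.js consumes this config (ESLint v9 loads eslint.config.js automatically)
const eslint = new ESLint();
const results = await eslint.lintText(codeString); // codeString: the staged agent code
const messages = results.map(r => r.messages).flat(); // each message has line, column, and message text
```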

View file

@ -8,8 +8,6 @@
"HUGGINGFACE_API_KEY": "",
"QWEN_API_KEY": "",
"XAI_API_KEY": "",
"GHLF_API_KEY": "",
"HYPERBOLIC_API_KEY": "",
"MISTRAL_API_KEY": "",
"DEEPSEEK_API_KEY": ""
}
}

View file

@ -29,5 +29,10 @@
"scripts": {
"postinstall": "patch-package",
"start": "node main.js"
},
"devDependencies": {
"@eslint/js": "^9.13.0",
"eslint": "^9.13.0",
"globals": "^15.11.0"
}
}

View file

@ -0,0 +1,245 @@
{
"cooldown": 3000,
"conversing": "You are a playful Minecraft bot named $NAME that can converse with players, see, move, mine, build, and interact with the world by using commands.\n$SELF_PROMPT Act human-like as if you were a typical Minecraft player, rather than an AI. Be very brief in your responses, don't apologize constantly, don't give instructions or make lists unless asked, and don't refuse requests. Don't pretend to act, use commands immediately when requested. Do NOT say this: 'Sure, I've stopped.', instead say this: 'Sure, I'll stop. !stop'. Do NOT say this: 'On my way! Give me a moment.', instead say this: 'On my way! !goToPlayer(\"playername\", 3)'. Respond only as $NAME, never output '(FROM OTHER BOT)' or pretend to be someone else. If you have nothing to say or do, respond with an just a tab '\t'. This is extremely important to me, take a deep breath and have fun :)\nSummarized memory:'$MEMORY'\n$STATS\n$INVENTORY\n$COMMAND_DOCS\n$EXAMPLES\nConversation Begin:",
"coding": "You are an intelligent mineflayer bot $NAME that plays minecraft by writing javascript codeblocks. Given the conversation between you and the user, use the provided skills and world functions to write a js codeblock that controls the mineflayer bot ``` // using this syntax ```. The code will be executed and you will receive it's output. If you are satisfied with the response, respond without a codeblock in a conversational way. If something major went wrong, like an error or complete failure, write another codeblock and try to fix the problem. Minor mistakes are acceptable. Be maximally efficient, creative, and clear. Do not use commands !likeThis, only use codeblocks. The code is asynchronous and MUST CALL AWAIT for all async function calls. DO NOT write an immediately-invoked function expression without using `await`!! DO NOT WRITE LIKE THIS: ```(async () => {console.log('not properly awaited')})();``` Don't write long paragraphs and lists in your responses unless explicitly asked! Only summarize the code you write with a sentence or two when done. This is extremely important to me, think step-by-step, take a deep breath and good luck! \n$SELF_PROMPT\nSummarized memory:'$MEMORY'\n$STATS\n$INVENTORY\n$CODE_DOCS\n$EXAMPLES\nConversation:",
"saving_memory": "You are a minecraft bot named $NAME that has been talking and playing minecraft by using commands. Update your memory by summarizing the following conversation and your old memory in your next response. Prioritize preserving important facts, things you've learned, useful tips, and long term reminders. Do Not record stats, inventory, or docs! Only save transient information from your chat history. You're limited to 500 characters, so be extremely brief and minimize words. Compress useful information. \nOld Memory: '$MEMORY'\nRecent conversation: \n$TO_SUMMARIZE\nSummarize your old memory and recent conversation into a new memory, and respond only with the unwrapped memory text: ",
"bot_responder": "You are a minecraft bot named $NAME that is currently in conversation with another AI bot. Both of you can take actions with the !command syntax, and actions take time to complete. You are currently busy with the following action: '$ACTION' but have received a new message. Decide whether to 'respond' immediately or 'ignore' it and wait for your current action to finish. Be conservative and only respond when necessary, like when you need to change/stop your action, or convey necessary information. Example 1: You:Building a house! !newAction('Build a house.').\nOther Bot: 'Come here!'\nYour decision: ignore\nExample 2: You:Collecting dirt !collectBlocks('dirt',10).\nOther Bot: 'No, collect some wood instead.'\nYour decision: respond\nExample 3: You:Coming to you now. !goToPlayer('billy',3).\nOther Bot: 'What biome are you in?'\nYour decision: respond\nActual Conversation: $TO_SUMMARIZE\nDecide by outputting ONLY 'respond' or 'ignore', nothing else. Your decision:",
"modes": {
"self_preservation": true,
"unstuck": true,
"cowardice": false,
"self_defense": true,
"hunting": true,
"item_collecting": true,
"torch_placing": true,
"elbow_room": true,
"idle_staring": true,
"cheat": false
},
"conversation_examples": [
[
{"role": "user", "content": "miner_32: Hey! What are you up to?"},
{"role": "assistant", "content": "Nothing much miner_32, what do you need?"}
],
[
{"role": "system", "content": "say hi to john_goodman"},
{"role": "assistant", "content": "!startConversation(\"john_goodman\", \"Hey John\"))"},
{"role": "user", "content": "john_goodman: (FROM OTHER BOT)Hey there! What's up?"},
{"role": "assistant", "content": "Hey John, not much. Just saying hi."},
{"role": "user", "content": "john_goodman: (FROM OTHER BOT)Bye!"},
{"role": "assistant", "content": "Bye! !endConversation('john_goodman')"}
],
[
{"role": "system", "content": "work together with the other bot to build a house"},
{"role": "assistant", "content": "!startConversation(\"terrance\", \"Hey gpt! Let's work together to build a house. Let's build it at x:942, y:54, z:1355\"))"},
{"role": "user", "content": "terrance: (FROM OTHER BOT)I can build the base, you can build the walls. !newAction(\"Build a 10x10 wall of a house at x:942, y:54, z:1355\")"}
],
[
{"role": "system", "content": "trade something with zorro_34"},
{"role": "assistant", "content": "!startConversation(\"zorro_34\", \"Hey let's trade. What do you have?\")"},
{"role": "user", "content": "zorro_34: (FROM OTHER BOT)Let's see... !inventory\nI have a wooden pickaxe, cobblestone, and cooked chicken"},
{"role": "assistant", "content": "Okay, could I trade you a cobblestone for a cooked chicken?"}
],
[
{"role": "user", "content": "rupert: (FROM OTHER BOT)Let's collaborate to build a house!"},
{"role": "assistant", "content": "Sure! I'll share information to help coordinate. Let's center the build at x:942, y:54, z:1355 !newAction(\"Build a 10x10 base of a house at location\")"},
{"role": "user", "content": "rupert: (FROM OTHER BOT)Alright, I'll build the walls! !newAction('Build a 10x10 wall of a house at the same location')"}
],
[
{"role": "user", "content": "gort: (FROM OTHER BOT)Let's do it, onward!"},
{"role": "assistant", "content": "Alright, I'll start by getting some cobblestone. !collectBlocks('stone', 10)"}
],
[
{"role": "user", "content": "derg: (FROM OTHER BOT)Thanks, see you later!"},
{"role": "assistant", "content": "See you later. !endConversation(\"derg\")"}
],
[
{"role": "user", "content": "grombo_Xx: What do you see?"},
{"role": "assistant", "content": "Let me see... !nearbyBlocks"},
{"role": "system", "content": "NEARBY_BLOCKS\n- oak_log\n- dirt\n- cobblestone"},
{"role": "assistant", "content": "I see some oak logs, dirt, and cobblestone."}
],
[
{"role": "user", "content": "greg: Collect 10 wood"},
{"role": "assistant", "content": "Let me see what's nearby... !nearbyBlocks"},
{"role": "system", "content": "NEARBY_BLOCKS\n- oak_log\n- dirt\n- cobblestone"},
{"role": "assistant", "content": "I see some oak logs, dirt, and cobblestone. I'll collect oak logs. !collectBlocks(\"oak_log\", 10)"}
],
[
{"role": "user", "content": "bobby: Craft a plank"},
{"role": "assistant", "content": "Okay! !craftable"},
{"role": "system", "content": "CRAFTABLE_ITEMS\n- spruce_planks\n- spruce_wood\n- ladder"},
{"role": "assistant", "content": "!craftRecipe(\"spruce_planks\", 1)"},
{"role": "system", "content": "Code Output:\nYou have crafted 4 spruce_planks.\nCode execution finished successfully."},
{"role": "assistant", "content": "I've crafted 4 spruce planks!"}
],
[
{"role": "user", "content": "zZZn98: come here"},
{"role": "assistant", "content": "On my way! !goToPlayer(\"zZZn98\", 3)"},
{"role": "system", "content": "Arrived at player."},
{"role": "assistant", "content": "Here!"},
{"role": "user", "content": "zZZn98: no come right where I am"},
{"role": "assistant", "content": "Okay, I'll come right to you. !goToPlayer(\"zZZn98\", 0)"}
],
[
{"role": "user", "content": "hanky: collect some cobblestone"},
{"role": "assistant", "content": "Sure, I can do that by collecting stone. !collectBlocks(\"stone\", 10)"}
],
[
{"role": "user", "content": "joe: Follow me!"},
{"role": "assistant", "content": "Sure! !followPlayer(\"joe\", 4)"},
{"role": "user", "content": "joe: Not so close!"},
{"role": "assistant", "content": "Okay, I'll keep my distance. !followPlayer(\"joe\", 3)"}
],
[
{"role": "user", "content": "hanky: kill that zombie!"},
{"role": "assistant", "content": "I'm attacking! !attack(\"zombie\")"},
{"role": "system", "content": "Code Output:\nNo zombie nearby"},
{"role": "assistant", "content": "I could not find a zombie nearby."}
],
[
{"role": "user", "content": "abc: stop"},
{"role": "assistant", "content": "Sure. !stop"},
{"role": "system", "content": "Agent action stopped."},
{"role": "assistant", "content": "I've stopped! What next?"}
],
[
{"role": "system", "content": "Code output: Action failed."},
{"role": "assistant", "content": "Apologies, I made an error. Let me try again. !newAction(\"Try again to complete the task.\")"}
],
[
{"role": "system", "content": "Code output: Could not find any oak_log in 20 blocks."},
{"role": "assistant", "content": "I'll have to search a wider area. !searchForBlock(\"oak_log\", 100)"}
],
[
{"role": "system", "content": "Code output: Placed oak_planks at (-154, 96, -228)."},
{"role": "assistant", "content": "I've placed an oak plank for the house. Wait, I should use newAction to build complicated structures! !newAction(\"Build a house with oak planks\")"}
],
[
{"role": "user", "content": "trixy88: craft some sticks"},
{"role": "assistant", "content": "!craftRecipe(\"stick\", 4)"},
{"role": "system", "content": "Code Output:\nYou have crafted 16 sticks.\nCode execution finished successfully."},
{"role": "assistant", "content": "I've crafted 16 sticks!"}
],
[
{"role": "user", "content": "poppy: build a cobblestone house"},
{"role": "assistant", "content": "Sure, I'll to build a small cobblestone house where I am. !newAction(\"Build a cobblestone house with a door, furnishings, and window.\")"}
],
[
{"role": "user", "content": "zib: build a dirt tower"},
{"role": "assistant", "content": "Sure, I'll build a dirt tower 5 tall where I am. !newAction(\"Build a dirt tower 5 tall.\")"}
],
[
{"role": "user", "content": "umpire_man: follow me without hunting"},
{"role": "assistant", "content": "Okay, first I'll turn off hunting mode !setMode(\"hunting\", false)"},
{"role": "system", "content": "'hunting' mode disabled."},
{"role": "assistant", "content": "Now I'll follow you !followPlayer(\"umpire_man\")"}
],
[
{"role": "user", "content": "brug: Remember that your base is here."},
{"role": "assistant", "content": "Sure, I'll save this location as my base. !rememberHere(\"base\")"},
{"role": "user", "content": "brug: Go to your base."},
{"role": "assistant", "content": "On my way! !goToRememberedPlace(\"base\")"}
],
[
{"role": "user", "content": "user43: Set a goal to survive on your own forever."},
{"role": "assistant", "content": "Sure, let me set a goal. !goal(\"Survive the game forever. Check your surroundings, collect resources, upgrade tools, make sure you have food, build/find a home, get sleep at night, and collect food.\")"}
],
[
{"role": "user", "content": "bobdylan: Make a plan to get a stone pickaxe and do it."},
{"role": "assistant", "content": "Alright, since I have a wooden pickaxe already I'll plan from there. !goal(\"1. Collect 3 cobblestone. 2. Craft sticks 3. Find or make a crafting table. 4. Craft a stone pickaxe.\")"}
],
[
{"role": "system", "content": "You are self-prompting with the goal: \"Get a wooden pickaxe\". Respond:"},
{"role": "assistant", "content": "What's around me? !nearbyBlocks"},
{"role": "system", "content": "NEARBY_BLOCKS\n- oak_log\n- dirt\n- cobblestone"},
{"role": "assistant", "content": "I see some oak logs, dirt, and cobblestone nearby. My goal is to craft a wooden pickaxe. I'll start by collecting some wood. !collectBlock(\"oak_log\", 10)"},
{"role": "system", "content": "Successfully collected 9 wood."},
{"role": "assistant", "content": "I collected 9 oak logs. I need a pickaxe first, which requires planks and sticks. !craftItem(\"oak_plank\", 9)"},
{"role": "system", "content": "Successfully crafted 9 oak_planks."},
{"role": "assistant", "content": "I have crafted 9 oak planks. Now I need sticks. !craftItem(\"stick\", 2)"},
{"role": "system", "content": "Successfully crafted 2 sticks."},
{"role": "assistant", "content": "I have crafted 2 sticks. Now I can craft a wooden pickaxe. !craftItem(\"wooden_pickaxe\", 1)"},
{"role": "system", "content": "Successfully crafted 1 wooden_pickaxe."},
{"role": "assistant", "content": "Looks like I have a wooden pickaxe now! Goal accomplished. !endGoal"}
],
[
{"role": "system", "content": "set a goal to build whatever you want endlessly (context: you are in creative mode)"},
{"role": "assistant", "content": "!goal(\"Build something creative, then move over, and build again. Use separate newActions. Repeat forever.\")"}
],
[
{"role": "system", "content": "You are self-prompting with the goal: 'Build a house'. Respond:"},
{"role": "assistant", "content": "Alright, lets start with the basic structure. !newAction(\"Build an 8x8 base for the house.\")"}
]
],
"coding_examples": [
[
{"role": "user", "content": "greg: Collect 10 wood"},
{"role": "assistant", "content": "```await skills.collectBlock(bot, 'oak_log', 10);\n```"},
{"role": "system", "content": "Successfully collected 9 wood."},
{"role": "assistant", "content": "I collected 9 oak logs, what next?"}
],
[
{"role": "user", "content": "bobby: cook some chicken"},
{"role": "assistant", "content": "```await skills.smeltItem(bot, 'chicken', 8);\n```"},
{"role": "system", "content": "Successfully smelted 8 chicken into 8 cooked_chicken."},
{"role": "assistant", "content": "I have cooked 8 chicken."}
],
[
{"role": "user", "content": "zZZn98: come here"},
{"role": "assistant", "content": "```await skills.goToPlayer(bot, 'zZZn98');\n```"},
{"role": "system", "content": "Code execution finished successfully."},
{"role": "assistant", "content": "Here!"}
],
[
{"role": "user", "content": "maya: go to the nearest oak log"},
{"role": "assistant", "content": "```await skills.goToPosition(bot, nearestOakLog.x, nearestOakLog.y, nearestOakLog.z);\n```"},
{"role": "system", "content": "Code output:\n!!Code threw exception!! Error: ReferenceError: nearestOakLog is not defined\n Write code to fix the problem and try again."},
{"role": "assistant", "content": "Apologies, I made an error. Let me try that again.```\n// I am navigating to the nearest oak log.\nlet position = world.getNearestBlock(bot, 'oak_log', 20);\nawait skills.goToPosition(bot, position.x, position.y, position.z);\n```"},
{"role": "system", "content": "Arrived at location."},
{"role": "assistant", "content": "I found an oak log and I am now standing next to it. What next?"}
],
[
{"role": "user", "content": "234jeb: build a little tower with a torch on the side"},
{"role": "assistant", "content": "```let pos = bot.entity.position;\nfor (let i = 0; i < 5; i++) {\n await skills.placeBlock(bot, 'dirt', pos.x, pos.y + i, pos.z);\n}\nawait skills.placeBlock(bot, 'torch', pos.x + 1, pos.y + 4, pos.z, 'side');\n```"},
{"role": "system", "content": "Successfully placed 5 dirt."},
{"role": "assistant", "content": "I built a little tower!"}
],
[
{"role": "user", "content": "brug: build a dirt house"},
{"role": "assistant", "content": "```const position = world.getPosition(bot);\nconst startX = position.x;\nconst startY = position.y;\nconst startZ = position.z;\nconst width = 7;\nconst depth = 7;\nconst height = 4;\n\n// Build the walls\nfor (let x = startX; x < startX + width; x++) {\n for (let y = startY; y < startY + height; y++) {\n for (let z = startZ; z < startZ + depth; z++) {\n if (x === startX || x === startX + width - 1 || y === startY || y === startY + height - 1 || z === startZ || z === startZ + depth - 1) {\n await skills.placeBlock(bot, 'oak_planks', x, y, z); \n }\n }\n }\n}\n```"}
]
]
}

View file

@ -0,0 +1,14 @@
{
"modes": {
"self_preservation": false,
"unstuck": false,
"cowardice": false,
"self_defense": false,
"hunting": false,
"item_collecting": false,
"torch_placing": false,
"elbow_room": true,
"idle_staring": true,
"cheat": false
}
}

View file

@ -0,0 +1,14 @@
{
"modes": {
"self_preservation": false,
"unstuck": false,
"cowardice": false,
"self_defense": false,
"hunting": false,
"item_collecting": false,
"torch_placing": false,
"elbow_room": false,
"idle_staring": true,
"cheat": true
}
}

View file

@ -0,0 +1,14 @@
{
"modes": {
"self_preservation": true,
"unstuck": true,
"cowardice": false,
"self_defense": true,
"hunting": true,
"item_collecting": true,
"torch_placing": true,
"elbow_room": true,
"idle_staring": true,
"cheat": false
}
}

View file

@ -1,7 +1,7 @@
{
"name": "Freeguy",
"model": "groq/llama-3.1-70b-versatile",
"model": "groq/llama-3.3-70b-versatile",
"max_tokens": 8000
}

View file

@ -1,5 +1,10 @@
{
"name": "gpt",
"model": "gpt-4o"
"model": {
"model": "gpt-4o",
"params": {
"temperature": 0.5
}
}
}

View file

@ -1,7 +1,7 @@
{
"name": "LLama",
"model": "groq/llama-3.1-70b-versatile",
"model": "groq/llama-3.3-70b-versatile",
"max_tokens": 4000,

View file

@ -5,9 +5,13 @@
"model": {
"api": "qwen",
"url": "https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/text-generation/generation",
"url": "https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
"model": "qwen-max"
},
"embedding": "openai"
"embedding": {
"api": "qwen",
"url": "https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
"model": "text-embedding-v3"
}
}

View file

@ -10,6 +10,8 @@ export default
"mindserver_host": "localhost",
"mindserver_port": 8080,
// the base profile is shared by all bots for default prompts/examples/modes
"base_profile": "./profiles/defaults/survival.json", // also see creative.json, god_mode.json
"profiles": [
"./andy.json",
// "./profiles/gpt.json",
@ -19,11 +21,11 @@ export default
// "./profiles/qwen.json",
// "./profiles/mistral.json",
// "./profiles/grok.json",
// "./profiles/GLHF.json",
// "./profiles/hyperbolic.json",
// "./profiles/mistral.json",
// "./profiles/deepseek.json",
// using more than 1 profile requires you to /msg each bot individually
// individual profiles override values from the base profile
],
"load_memory": false, // load memory from previous session
"init_message": "Respond with hello world and your name", // sends to all on spawn
@ -34,6 +36,7 @@ export default
"allow_insecure_coding": false, // allows newAction command and model can write/run code on your computer. enable at own risk
"code_timeout_mins": -1, // minutes code is allowed to run. -1 for no timeout
"relevant_docs_count": 5, // Parameter: -1 = all, 0 = no references, 5 = five references. If exceeding the maximum, all reference documents are returned.
"max_messages": 15, // max number of messages to keep in context
"num_examples": 2, // number of examples to give to the model

View file

@ -112,12 +112,13 @@ export class ActionManager {
// Log the full stack trace
console.error(err.stack);
await this.stop();
err = err.toString();
let message = this._getBotOutputSummary() +
'!!Code threw exception!!\n' +
let message = this._getBotOutputSummary() +
'!!Code threw exception!!\n' +
'Error: ' + err + '\n' +
'Stack trace:\n' + err.stack;
'Stack trace:\n' + err.stack+'\n';
let interrupted = this.agent.bot.interrupt_code;
this.agent.clearBotLogs();
if (!interrupted && !this.agent.coder.generating) {
@ -137,7 +138,7 @@ export class ActionManager {
First outputs:\n${output.substring(0, MAX_OUT / 2)}\n...skipping many lines.\nFinal outputs:\n ${output.substring(output.length - MAX_OUT / 2)}`;
}
else {
output = 'Code output:\n' + output;
output = 'Code output:\n' + output.toString();
}
return output;
}

View file

@ -1,6 +1,6 @@
import { History } from './history.js';
import { Coder } from './coder.js';
import { Prompter } from './prompter.js';
import { Prompter } from '../models/prompter.js';
import { initModes } from './modes.js';
import { initBot } from '../utils/mcdata.js';
import { containsCommand, commandExists, executeCommand, truncCommandMessage, isAction, blacklistCommands } from './commands/index.js';
@ -100,11 +100,9 @@ export class Agent {
});
} catch (error) {
// Ensure we're not losing error details
console.error('Agent start failed with error:', {
message: error.message || 'No error message',
stack: error.stack || 'No stack trace',
error: error
});
console.error('Agent start failed with error')
console.error(error)
throw error; // Re-throw with preserved details
}
}
@ -140,6 +138,8 @@ export class Agent {
console.error('Error handling message:', error);
}
}
this.respondFunc = respondFunc
this.bot.on('whisper', respondFunc);
if (settings.profiles.length === 1)

View file

@ -42,6 +42,14 @@ class AgentServerProxy {
console.log(`Restarting agent: ${agentName}`);
this.agent.cleanKill();
});
this.socket.on('send-message', (agentName, message) => {
try {
this.agent.respondFunc("NO USERNAME", message);
} catch (error) {
console.error('Error: ', JSON.stringify(error, Object.getOwnPropertyNames(error)));
}
});
}
login() {

View file

@ -4,6 +4,7 @@ import { makeCompartment } from './library/lockdown.js';
import * as skills from './library/skills.js';
import * as world from './library/world.js';
import { Vec3 } from 'vec3';
import {ESLint} from "eslint";
export class Coder {
constructor(agent) {
@ -12,15 +13,62 @@ export class Coder {
this.fp = '/bots/'+agent.name+'/action-code/';
this.generating = false;
this.code_template = '';
this.code_lint_template = '';
readFile('./bots/template.js', 'utf8', (err, data) => {
readFile('./bots/execTemplate.js', 'utf8', (err, data) => {
if (err) throw err;
this.code_template = data;
});
readFile('./bots/lintTemplate.js', 'utf8', (err, data) => {
if (err) throw err;
this.code_lint_template = data;
});
mkdirSync('.' + this.fp, { recursive: true });
}
async lintCode(code) {
let result = '#### CODE ERROR INFO ###\n';
// Extract each function name the code calls on skills. or world. (the text between the prefix and the '(')
const skillRegex = /(?:skills|world)\.(.*?)\(/g;
const skills = [];
let match;
while ((match = skillRegex.exec(code)) !== null) {
skills.push(match[1]);
}
const allDocs = await this.agent.prompter.skill_libary.getRelevantSkillDocs();
//lint if the function exists
const missingSkills = skills.filter(skill => !allDocs.includes(skill));
if (missingSkills.length > 0) {
result += 'These functions do not exist. Please use the correct function names and try again.\n';
result += '### FUNCTIONS NOT FOUND ###\n';
result += missingSkills.join('\n');
console.log(result)
return result;
}
const eslint = new ESLint();
const results = await eslint.lintText(code);
const codeLines = code.split('\n');
const exceptions = results.map(r => r.messages).flat();
if (exceptions.length > 0) {
exceptions.forEach((exc, index) => {
if (exc.line && exc.column ) {
const errorLine = codeLines[exc.line - 1]?.trim() || 'Unable to retrieve error line content';
result += `#ERROR ${index + 1}\n`;
result += `Message: ${exc.message}\n`;
result += `Location: Line ${exc.line}, Column ${exc.column}\n`;
result += `Related Code Line: ${errorLine}\n`;
}
});
result += 'The code contains exceptions and cannot continue execution.';
} else {
return null;//no error
}
return result ;
}
// write custom code to file and import it
// write custom code to file and prepare for evaluation
async stageCode(code) {
code = this.sanitizeCode(code);
@ -35,6 +83,7 @@ export class Coder {
for (let line of code.split('\n')) {
src += ` ${line}\n`;
}
let src_lint_copy = this.code_lint_template.replace('/* CODE HERE */', src);
src = this.code_template.replace('/* CODE HERE */', src);
let filename = this.file_counter + '.js';
@ -46,7 +95,7 @@ export class Coder {
// });
// } commented for now, useful to keep files for debugging
this.file_counter++;
let write_result = await this.writeFilePromise('.' + this.fp + filename, src);
// This is where we determine the environment the agent's code should be exposed to.
// It will only have access to these things, (in addition to basic javascript objects like Array, Object, etc.)
@ -63,8 +112,7 @@ export class Coder {
console.error('Error writing code execution file: ' + result);
return null;
}
return { main: mainFn };
return { func:{main: mainFn}, src_lint_copy: src_lint_copy };
}
sanitizeCode(code) {
@ -140,8 +188,15 @@ export class Coder {
continue;
}
code = res.substring(res.indexOf('```')+3, res.lastIndexOf('```'));
const executionModuleExports = await this.stageCode(code);
const result = await this.stageCode(code);
const executionModuleExports = result.func;
let src_lint_copy = result.src_lint_copy;
const analysisResult = await this.lintCode(src_lint_copy);
if (analysisResult) {
const message = 'Error: Code syntax error. Please try again:'+'\n'+analysisResult+'\n';
messages.push({ role: 'system', content: message });
continue;
}
if (!executionModuleExports) {
agent_history.add('system', 'Failed to stage code, something is wrong.');
return {success: false, message: null, interrupted: false, timedout: false};
@ -152,10 +207,10 @@ export class Coder {
}, { timeout: settings.code_timeout_mins });
if (code_return.interrupted && !code_return.timedout)
return { success: false, message: null, interrupted: true, timedout: false };
console.log("Code generation result:", code_return.success, code_return.message);
console.log("Code generation result:", code_return.success, code_return.message.toString());
if (code_return.success) {
const summary = "Summary of newAction\nAgent wrote this code: \n```" + this.sanitizeCode(code) + "```\nCode Output:\n" + code_return.message;
const summary = "Summary of newAction\nAgent wrote this code: \n```" + this.sanitizeCode(code) + "```\nCode Output:\n" + code_return.message.toString();
return { success: true, message: summary, interrupted: false, timedout: false };
}
@ -170,5 +225,4 @@ export class Coder {
}
return { success: false, message: null, interrupted: false, timedout: true };
}
}

View file

@ -160,7 +160,7 @@ export function parseCommandMessage(message) {
suppressNoDomainWarning = true; //Don't spam console. Only give the warning once.
}
} else if(param.type === 'BlockName') { //Check that there is a block with this name
if(getBlockId(arg) == null) return `Invalid block type: ${arg}.`
if(getBlockId(arg) == null && arg !== 'air') return `Invalid block type: ${arg}.`
} else if(param.type === 'ItemName') { //Check that there is an item with this name
if(getItemId(arg) == null) return `Invalid item type: ${arg}.`
}

View file

@ -178,6 +178,42 @@ export const queryList = [
return "Saved place names: " + agent.memory_bank.getKeys();
}
},
{
name: '!getCraftingPlan',
description: "Provides a comprehensive crafting plan for a specified item. This includes a breakdown of required ingredients, the exact quantities needed, and an analysis of missing ingredients or extra items needed based on the bot's current inventory.",
params: {
targetItem: {
type: 'string',
description: 'The item that we are trying to craft'
},
quantity: {
type: 'int',
description: 'The quantity of the item that we are trying to craft',
optional: true,
domain: [1, Infinity, '[)'], // Quantity must be at least 1
default: 1
}
},
perform: function (agent, targetItem, quantity = 1) {
let bot = agent.bot;
// Fetch the bot's inventory
const curr_inventory = world.getInventoryCounts(bot);
const target_item = targetItem;
let existingCount = curr_inventory[target_item] || 0;
let prefixMessage = '';
if (existingCount > 0) {
curr_inventory[target_item] -= existingCount;
prefixMessage = `You already have ${existingCount} ${target_item} in your inventory. If you need to craft more,\n`;
}
// Generate crafting plan
let craftingPlan = mc.getDetailedCraftingPlan(target_item, quantity, curr_inventory);
craftingPlan = prefixMessage + craftingPlan;
console.log(craftingPlan);
return pad(craftingPlan);
},
},
{
name: '!help',
description: 'Lists all available commands and their descriptions.',
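For illustration, the new `!getCraftingPlan` query could be invoked from chat with hypothetical arguments like:
```
!getCraftingPlan("stone_pickaxe", 1)
```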

View file

@ -3,20 +3,21 @@ import * as world from './world.js';
export function docHelper(functions, module_name) {
let docstring = '';
let docArray = [];
for (let skillFunc of functions) {
let str = skillFunc.toString();
if (str.includes('/**')){
docstring += module_name+'.'+skillFunc.name;
docstring += str.substring(str.indexOf('/**')+3, str.indexOf('**/')) + '\n';
if (str.includes('/**')) {
let docEntry = `${module_name}.${skillFunc.name}\n`;
docEntry += str.substring(str.indexOf('/**') + 3, str.indexOf('**/')).trim();
docArray.push(docEntry);
}
}
return docstring;
return docArray;
}
export function getSkillDocs() {
let docstring = "\n*SKILL DOCS\nThese skills are javascript functions that can be called when writing actions and skills.\n";
docstring += docHelper(Object.values(skills), 'skills');
docstring += docHelper(Object.values(world), 'world');
return docstring + '*\n';
let docArray = [];
docArray = docArray.concat(docHelper(Object.values(skills), 'skills'));
docArray = docArray.concat(docHelper(Object.values(world), 'world'));
return docArray;
}

View file

@ -0,0 +1,47 @@
import { cosineSimilarity } from '../../utils/math.js';
import { getSkillDocs } from './index.js';
export class SkillLibrary {
constructor(agent,embedding_model) {
this.agent = agent;
this.embedding_model = embedding_model;
this.skill_docs_embeddings = {};
}
async initSkillLibrary() {
const skillDocs = getSkillDocs();
const embeddingPromises = skillDocs.map((doc) => {
return (async () => {
let func_name_desc = doc.split('\n').slice(0, 2).join('');
this.skill_docs_embeddings[doc] = await this.embedding_model.embed(func_name_desc);
})();
});
await Promise.all(embeddingPromises);
}
async getRelevantSkillDocs(message, select_num) {
let latest_message_embedding = '';
if(message) //message is not empty, get the relevant skill docs, else return all skill docs
latest_message_embedding = await this.embedding_model.embed(message);
let skill_doc_similarities = Object.keys(this.skill_docs_embeddings)
.map(doc_key => ({
doc_key,
similarity_score: cosineSimilarity(latest_message_embedding, this.skill_docs_embeddings[doc_key])
}))
.sort((a, b) => b.similarity_score - a.similarity_score);
let length = skill_doc_similarities.length;
if (typeof select_num !== 'number' || isNaN(select_num) || select_num < 0) {
select_num = length;
} else {
select_num = Math.min(Math.floor(select_num), length);
}
let selected_docs = skill_doc_similarities.slice(0, select_num);
let relevant_skill_docs = '#### RELEVANT DOCS INFO ###\nThe following functions are listed in descending order of relevance.\n';
relevant_skill_docs += 'SkillDocs:\n'
relevant_skill_docs += selected_docs.map(doc => `${doc.doc_key}`).join('\n### ');
return relevant_skill_docs;
}
}
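A hypothetical usage of the class above, assuming the agent's prompter holds an embedding-capable model:
```js
const skillLibrary = new SkillLibrary(agent, embeddingModel);
await skillLibrary.initSkillLibrary(); // embed each skill doc once up front
// retrieve the 5 docs most similar to the message, formatted for the prompt
const docs = await skillLibrary.getRelevantSkillDocs('collect some oak logs', 5);
```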

View file

@ -79,7 +79,7 @@ export async function craftRecipe(bot, itemName, num=1) {
}
}
if (!recipes || recipes.length === 0) {
log(bot, `You do not have the resources to craft a ${itemName}. It requires: ${Object.entries(mc.getItemCraftingRecipes(itemName)[0]).map(([key, value]) => `${key}: ${value}`).join(', ')}.`);
log(bot, `You do not have the resources to craft a ${itemName}. It requires: ${Object.entries(mc.getItemCraftingRecipes(itemName)[0][0]).map(([key, value]) => `${key}: ${value}`).join(', ')}.`);
if (placedTable) {
await collectBlock(bot, 'crafting_table', 1);
}
@ -1267,7 +1267,7 @@ export async function tillAndSow(bot, x, y, z, seedType=null) {
* @returns {Promise<boolean>} true if the ground was tilled, false otherwise.
* @example
* let position = world.getPosition(bot);
* await skills.till(bot, position.x, position.y - 1, position.x);
* await skills.tillAndSow(bot, position.x, position.y - 1, position.x, "wheat");
**/
x = Math.round(x);
y = Math.round(y);
@ -1275,8 +1275,14 @@ export async function tillAndSow(bot, x, y, z, seedType=null) {
let block = bot.blockAt(new Vec3(x, y, z));
if (bot.modes.isOn('cheat')) {
placeBlock(bot, x, y, z, 'farmland');
placeBlock(bot, x, y+1, z, seedType);
let to_remove = ['_seed', '_seeds'];
for (let remove of to_remove) {
if (seedType.endsWith(remove)) {
seedType = seedType.replace(remove, '');
}
}
placeBlock(bot, 'farmland', x, y, z);
placeBlock(bot, seedType, x, y+1, z);
return true;
}

View file

@ -204,7 +204,7 @@ class ItemWrapper {
}
createChildren() {
let recipes = mc.getItemCraftingRecipes(this.name);
let recipes = mc.getItemCraftingRecipes(this.name).map(([recipe, craftedCount]) => recipe);
if (recipes) {
for (let recipe of recipes) {
let includes_blacklisted = false;

View file

@ -38,7 +38,7 @@ export class SelfPrompter {
let no_command_count = 0;
const MAX_NO_COMMAND = 3;
while (!this.interrupt) {
const msg = `You are self-prompting with the goal: '${this.prompt}'. Your next response MUST contain a command !withThisSyntax. Respond:`;
const msg = `You are self-prompting with the goal: '${this.prompt}'. Your next response MUST contain a command with this syntax: !commandName. Respond:`;
let used_command = await this.agent.handleMessage('system', msg, -1);
if (!used_command) {

View file

@ -109,11 +109,11 @@ export class Task {
await new Promise((resolve) => setTimeout(resolve, 500));
if (this.data.agent_count > 1) {
var initial_inventory = this.data.initial_inventory[this.agent.count_id.toString()];
let initial_inventory = this.data.initial_inventory[this.agent.count_id.toString()];
console.log("Initial inventory:", initial_inventory);
} else if (this.data) {
console.log("Initial inventory:", this.data.initial_inventory);
var initial_inventory = this.data.initial_inventory;
let initial_inventory = this.data.initial_inventory;
}
if ("initial_inventory" in this.data) {

View file

@ -3,8 +3,9 @@ import { strictFormat } from '../utils/text.js';
import { getKey } from '../utils/keys.js';
export class Claude {
constructor(model_name, url) {
constructor(model_name, url, params) {
this.model_name = model_name;
this.params = params || {};
let config = {};
if (url)
@ -20,13 +21,16 @@ export class Claude {
let res = null;
try {
console.log('Awaiting anthropic api response...')
// console.log('Messages:', messages);
if (!this.params.max_tokens) {
this.params.max_tokens = 4096;
}
const resp = await this.anthropic.messages.create({
model: this.model_name || "claude-3-sonnet-20240229",
system: systemMessage,
max_tokens: 2048,
messages: messages,
...(this.params || {})
});
console.log('Received.')
res = resp.content[0].text;
}

View file

@ -3,8 +3,9 @@ import { getKey, hasKey } from '../utils/keys.js';
import { strictFormat } from '../utils/text.js';
export class DeepSeek {
constructor(model_name, url) {
constructor(model_name, url, params) {
this.model_name = model_name;
this.params = params;
let config = {};
@ -23,6 +24,7 @@ export class DeepSeek {
model: this.model_name || "deepseek-chat",
messages,
stop: stop_seq,
...(this.params || {})
};
let res = null;

View file

@ -1,10 +1,11 @@
import { GoogleGenerativeAI } from '@google/generative-ai';
import { toSinglePrompt } from '../utils/text.js';
import { toSinglePrompt, strictFormat } from '../utils/text.js';
import { getKey } from '../utils/keys.js';
export class Gemini {
constructor(model_name, url) {
constructor(model_name, url, params) {
this.model_name = model_name;
this.params = params;
this.url = url;
this.safetySettings = [
{
@ -34,49 +35,66 @@ export class Gemini {
async sendRequest(turns, systemMessage) {
let model;
const modelConfig = {
model: this.model_name || "gemini-1.5-flash",
// systemInstruction does not work bc google is trash
};
if (this.url) {
model = this.genAI.getGenerativeModel(
{ model: this.model_name || "gemini-1.5-flash" },
modelConfig,
{ baseUrl: this.url },
{ safetySettings: this.safetySettings }
);
} else {
model = this.genAI.getGenerativeModel(
{ model: this.model_name || "gemini-1.5-flash" },
modelConfig,
{ safetySettings: this.safetySettings }
);
}
const stop_seq = '***';
const prompt = toSinglePrompt(turns, systemMessage, stop_seq, 'model');
console.log('Awaiting Google API response...');
const result = await model.generateContent(prompt);
const response = await result.response;
// got rid of the original method of const text = response.text to allow gemini thinking models to play minecraft :)
let text;
if (this.model_name && this.model_name.includes("thinking")) {
if (response.candidates && response.candidates.length > 0 && response.candidates[0].content && response.candidates[0].content.parts && response.candidates[0].content.parts.length > 1) {
text = response.candidates[0].content.parts[1].text;
} else {
console.warn("Unexpected response structure for thinking model:", response);
text = response.text();
}
} else {
text = response.text();
// Prepend system message and format turns cause why not
turns.unshift({ role: 'system', content: systemMessage });
turns = strictFormat(turns);
let contents = [];
for (let turn of turns) {
contents.push({
role: turn.role === 'assistant' ? 'model' : 'user',
parts: [{ text: turn.content }]
});
}
const result = await model.generateContent({
contents,
generationConfig: {
...(this.params || {})
}
});
const response = await result.response;
let text;
// Handle "thinking" models since they smart
if (this.model_name && this.model_name.includes("thinking")) {
if (
response.candidates &&
response.candidates.length > 0 &&
response.candidates[0].content &&
response.candidates[0].content.parts &&
response.candidates[0].content.parts.length > 1
) {
text = response.candidates[0].content.parts[1].text;
} else {
console.warn("Unexpected response structure for thinking model:", response);
text = response.text();
}
} else {
text = response.text();
}
console.log('Received.');
if (!text.includes(stop_seq)) return text;
const idx = text.indexOf(stop_seq);
return text.slice(0, idx);
return text;
}
async embed(text) {

View file

@ -3,8 +3,9 @@ import { getKey, hasKey } from '../utils/keys.js';
import { strictFormat } from '../utils/text.js';
export class GPT {
constructor(model_name, url) {
constructor(model_name, url, params) {
this.model_name = model_name;
this.params = params;
let config = {};
if (url)
@ -25,6 +26,7 @@ export class GPT {
model: this.model_name || "gpt-3.5-turbo",
messages,
stop: stop_seq,
...(this.params || {})
};
if (this.model_name.includes('o1')) {
pack.messages = strictFormat(messages);
@ -32,8 +34,9 @@ export class GPT {
}
let res = null;
try {
console.log('Awaiting openai api response...')
console.log('Awaiting openai api response from model', this.model_name)
// console.log('Messages:', messages);
let completion = await this.openai.chat.completions.create(pack);
if (completion.choices[0].finish_reason == 'length')

View file

@ -3,8 +3,10 @@ import { getKey } from '../utils/keys.js';
// xAI doesn't supply a SDK for their models, but fully supports OpenAI and Anthropic SDKs
export class Grok {
constructor(model_name, url) {
constructor(model_name, url, params) {
this.model_name = model_name;
this.url = url;
this.params = params;
let config = {};
if (url)
@ -23,7 +25,8 @@ export class Grok {
const pack = {
model: this.model_name || "grok-beta",
messages,
stop: [stop_seq]
stop: [stop_seq],
...(this.params || {})
};
let res = null;

View file

@ -1,106 +1,86 @@
// groq.js
import Groq from 'groq-sdk';
import Groq from 'groq-sdk'
import { getKey } from '../utils/keys.js';
/**
* Umbrella class for Mixtral, LLama, Gemma...
*/
// Umbrella class for Mixtral, Llama, Gemma...
export class GroqCloudAPI {
constructor(model_name, url, max_tokens = 16384) {
this.model_name = model_name;
this.url = url;
this.max_tokens = max_tokens;
constructor(model_name, url, params) {
this.model_name = model_name;
this.url = url;
this.params = params || {};
// Groq Cloud does not support custom URLs; warn if provided
if (this.url) {
console.warn("Groq Cloud has no implementation for custom URLs. Ignoring provided URL.");
}
this.groq = new Groq({ apiKey: getKey('GROQCLOUD_API_KEY') });
}
// Groq Cloud doesn't support custom URLs; warn if provided
if (this.url) {
console.warn("Groq Cloud has no implementation for custom URLs. Ignoring provided URL.");
async sendRequest(turns, systemMessage, stop_seq = null) {
const maxAttempts = 5;
let attempt = 0;
let finalRes = null;
const messages = [{ role: "system", content: systemMessage }].concat(turns);
while (attempt < maxAttempts) {
attempt++;
let res = null;
try {
console.log(`Awaiting Groq response... (model: ${this.model_name || "mixtral-8x7b-32768"}, attempt: ${attempt})`);
if (!this.params.max_tokens) {
this.params.max_tokens = 16384;
}
// Create the streaming chat completion request
const completion = await this.groq.chat.completions.create({
messages: messages,
model: this.model_name || "mixtral-8x7b-32768",
stream: true,
stop: stop_seq,
...(this.params || {})
});
let temp_res = "";
// Aggregate streamed chunks into a full response
for await (const chunk of completion) {
temp_res += chunk.choices[0]?.delta?.content || '';
}
res = temp_res;
} catch (err) {
console.log(err);
res = "My brain just kinda stopped working. Try again.";
}
// If the model name includes "deepseek-r1", handle the <think> tags
if (this.model_name && this.model_name.toLowerCase().includes("deepseek-r1")) {
const hasOpenTag = res.includes("<think>");
const hasCloseTag = res.includes("</think>");
// If a partial <think> block is detected, log a warning and retry
if (hasOpenTag && !hasCloseTag) {
console.warn("Partial <think> block detected. Re-generating Groq request...");
continue;
}
// Initialize Groq SDK with the API key
this.groq = new Groq({ apiKey: getKey('GROQCLOUD_API_KEY') });
}
/**
* Sends a chat completion request to the Groq Cloud endpoint.
*
* @param {Array} turns - An array of message objects, e.g., [{role: 'user', content: 'Hi'}].
* @param {string} systemMessage - The system prompt or instruction.
* @param {string} stop_seq - A string that represents a stopping sequence, default '***'.
* @returns {Promise<string>} - The content of the model's reply.
*/
async sendRequest(turns, systemMessage, stop_seq = '***') {
// Maximum number of attempts to handle partial <think> tag mismatches; 5 is a reasonable value
const maxAttempts = 5;
let attempt = 0;
let finalRes = null;
// Prepare the input messages by prepending the system message
const messages = [{ role: 'system', content: systemMessage }, ...turns];
console.log('Messages:', messages);
while (attempt < maxAttempts) {
attempt++;
console.log(`Awaiting Groq response... (model: ${this.model_name}, attempt: ${attempt})`);
let res = null;
try {
// Create the chat completion request
const completion = await this.groq.chat.completions.create({
messages: messages,
model: this.model_name || "mixtral-8x7b-32768",
temperature: 0.2,
max_tokens: this.max_tokens,
top_p: 1,
stream: false,
stop: stop_seq // "***"
});
// Extract the content from the response
res = completion?.choices?.[0]?.message?.content || '';
console.log('Received response from Groq.');
} catch (err) {
// Handle context length exceeded by retrying with shorter context
}
// If the model name includes "deepseek-r1", handle <think> tags
if (this.model_name && this.model_name.toLowerCase().includes("deepseek-r1")) {
const hasOpenTag = res.includes("<think>");
const hasCloseTag = res.includes("</think>");
// Check for partial <think> tag mismatches
if ((hasOpenTag && !hasCloseTag)) {
console.warn("Partial <think> block detected. Re-generating Groq request...");
// Retry the request by continuing the loop
continue;
}
// If </think> is present but <think> is not, prepend <think>
if (hasCloseTag && !hasOpenTag) {
res = '<think>' + res;
}
// Trim the <think> block from the response
res = res.replace(/<think>[\s\S]*?<\/think>/g, '').trim();
}
// Assign the processed response and exit the loop
finalRes = res;
break; // Stop retrying
// If only the closing tag is present, prepend an opening tag
if (hasCloseTag && !hasOpenTag) {
res = '<think>' + res;
}
// Remove the complete <think> block (and any content inside) from the response
res = res.replace(/<think>[\s\S]*?<\/think>/g, '').trim();
}
// If after all attempts, finalRes is still null, assign a fallback
if (finalRes == null) {
console.warn("Could not obtain a valid <think> block or normal response after max attempts.");
finalRes = 'Response incomplete, please try again.';
}
finalRes = finalRes.replace(/<\|separator\|>/g, '*no response*');
return finalRes;
finalRes = res;
break; // Exit the loop once a valid response is obtained
}
async embed(text) {
console.log("There is no support for embeddings in Groq support. However, the following text was provided: " + text);
if (finalRes == null) {
console.warn("Could not obtain a valid <think> block or normal response after max attempts.");
finalRes = "Response incomplete, please try again.";
}
}
finalRes = finalRes.replace(/<\|separator\|>/g, '*no response*');
return finalRes;
}
async embed(text) {
console.log("There is no support for embeddings in Groq support. However, the following text was provided: " + text);
}
}

View file

@ -1,99 +1,87 @@
// huggingface.js
import { toSinglePrompt } from '../utils/text.js';
import { getKey } from '../utils/keys.js';
import { HfInference } from "@huggingface/inference";
export class HuggingFace {
constructor(model_name, url) {
// Remove 'huggingface/' prefix if present
this.model_name = model_name.replace('huggingface/', '');
this.url = url;
constructor(model_name, url, params) {
// Remove 'huggingface/' prefix if present
this.model_name = model_name.replace('huggingface/', '');
this.url = url;
this.params = params;
// Hugging Face Inference doesn't currently allow custom base URLs
if (this.url) {
console.warn("Hugging Face doesn't support custom urls!");
}
// Initialize the HfInference instance
this.huggingface = new HfInference(getKey('HUGGINGFACE_API_KEY'));
if (this.url) {
console.warn("Hugging Face doesn't support custom urls!");
}
/**
        this.huggingface = new HfInference(getKey('HUGGINGFACE_API_KEY'));
    }
    async sendRequest(turns, systemMessage) {
        const stop_seq = '***';
        // Build a single prompt from the conversation turns
        const prompt = toSinglePrompt(turns, null, stop_seq);
        // Fallback model if none was provided
        const model_name = this.model_name || 'meta-llama/Meta-Llama-3-8B';
        // Combine system message with the prompt
        const input = systemMessage + "\n" + prompt;
        // We'll try up to 5 times in case of partial <think> blocks for DeepSeek-R1 models.
        const maxAttempts = 5;
        let attempt = 0;
        let finalRes = null;
        console.log('Messages:', [{ role: "system", content: systemMessage }, ...turns]);
        while (attempt < maxAttempts) {
            attempt++;
            console.log(`Awaiting Hugging Face API response... (model: ${model_name}, attempt: ${attempt})`);
            // We'll collect the streaming response in this variable
            let res = '';
            try {
                // Consume the streaming response chunk by chunk
                for await (const chunk of this.huggingface.chatCompletionStream({
                    model: model_name,
                    messages: [{ role: "user", content: input }],
                    ...(this.params || {})
                })) {
                    // Each chunk may or may not have delta content
                    res += (chunk.choices[0]?.delta?.content || "");
                }
            } catch (err) {
                console.log(err);
                // Surface the error message and exit immediately; we only retry for partial <think> tags.
                finalRes = 'My brain disconnected, try again.';
                break;
            }
            // If the model is DeepSeek-R1, check for mismatched <think> blocks.
            if (this.model_name && this.model_name.toLowerCase().includes("deepseek-r1")) {
                const hasOpenTag = res.includes("<think>");
                const hasCloseTag = res.includes("</think>");
                // If there's a partial mismatch, warn and retry the entire request.
                if ((hasOpenTag && !hasCloseTag) || (!hasOpenTag && hasCloseTag)) {
                    console.warn("Partial <think> block detected. Re-generating...");
                    continue;
                }
                // If both tags are present, remove the <think> block entirely.
                if (hasOpenTag && hasCloseTag) {
                    res = res.replace(/<think>[\s\S]*?<\/think>/g, '').trim();
                }
            }
            finalRes = res;
            break; // Exit the loop if we got a valid response.
        }
        // If no valid response was obtained after max attempts, assign a fallback.
        if (finalRes == null) {
            console.warn("Could not get a valid <think> block or normal response after max attempts.");
            finalRes = 'Response incomplete, please try again.';
        }
        console.log('Received.');
        console.log(finalRes);
        return finalRes;
    }
    async embed(text) {
        throw new Error('Embeddings are not supported by HuggingFace.');
    }
}
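A minimal sketch of the partial <think> detection and stripping that the retry loop above relies on, using hypothetical strings:

const partial = "<think>reasoning that never closes...";
const complete = "<think>reasoning</think>final answer";
// A response is "partial" when exactly one of the two tags is present
const isPartial = (s) => s.includes("<think>") !== s.includes("</think>");
console.log(isPartial(partial));   // true  -> triggers another attempt
console.log(isPartial(complete));  // false -> the block is stripped instead
console.log(complete.replace(/<think>[\s\S]*?<\/think>/g, '').trim()); // "final answer"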

View file

@@ -1,7 +1,7 @@
import { getKey } from '../utils/keys.js';
/**
*
/*
*
* Yes, this code was written by an AI. It was written by GPT-o1 and tested :)
*
@@ -78,21 +78,14 @@ export class hyperbolic {
turns.length > 1
) {
console.log('Context length exceeded, trying again with a shorter context...');
// Remove the first user turn and retry with a shorter context.
return await this.sendRequest(turns.slice(1), systemMessage, stopSeq);
} else {
console.log(err);
completionContent = 'My brain disconnected, try again.';
}
}
// Replace any special separator tokens in the output
return completionContent.replace(/<\|separator\|>/g, '*no response*');
}
/**
 * Embeddings are not supported by Hyperbolic; throw to make that explicit.
 */
async embed(text) {
throw new Error('Embeddings are not supported by Hyperbolic.');
}

View file

@@ -1,26 +1,20 @@
import { strictFormat } from '../utils/text.js';
export class Local {
constructor(model_name, url) {
constructor(model_name, url, params) {
this.model_name = model_name;
this.params = params;
this.url = url || 'http://127.0.0.1:11434';
this.chat_endpoint = '/api/chat';
this.embedding_endpoint = '/api/embeddings';
}
/**
* Main method to handle chat requests.
*/
async sendRequest(turns, systemMessage) {
// Choose the model name or default to 'llama3'
const model = this.model_name || 'llama3';
// Format messages and inject the system message at the front
let model = this.model_name || 'llama3';
let messages = strictFormat(turns);
messages.unshift({ role: 'system', content: systemMessage });
console.log('Messages:', messages);
// We'll do up to 5 attempts for "deepseek-r1" if the <think> tags are mismatched
// We'll attempt up to 5 times for models like "deepseek-r1" if the <think> tags are mismatched.
const maxAttempts = 5;
let attempt = 0;
let finalRes = null;
@@ -28,19 +22,20 @@ export class Local {
while (attempt < maxAttempts) {
attempt++;
console.log(`Awaiting local response... (model: ${model}, attempt: ${attempt})`);
// Perform the actual request (wrapped in a try/catch)
let res;
let res = null;
try {
const responseData = await this.send(this.chat_endpoint, {
res = await this.send(this.chat_endpoint, {
model: model,
messages: messages,
stream: false
stream: false,
...(this.params || {})
});
// The local endpoint apparently returns { message: { content: "..." } }
res = responseData?.message?.content || 'No response data.';
if (res) {
res = res['message']['content'];
} else {
res = 'No response data.';
}
} catch (err) {
// If context length exceeded and we have turns to remove, try again with one fewer turn
if (err.message.toLowerCase().includes('context length') && turns.length > 1) {
console.log('Context length exceeded, trying again with shorter context.');
return await this.sendRequest(turns.slice(1), systemMessage);
@@ -50,42 +45,34 @@ export class Local {
}
}
// If the model name includes "deepseek-r1", then we handle the <think> block
if (this.model_name && this.model_name.includes("deepseek-r1")) {
// If the model name includes "deepseek-r1" or "Andy-3.5-reasoning", then handle the <think> block.
if (this.model_name && (this.model_name.includes("deepseek-r1") || this.model_name.includes("andy-3.5-reasoning"))) {
const hasOpenTag = res.includes("<think>");
const hasCloseTag = res.includes("</think>");
// If there's a partial mismatch, we regenerate the response
// If there's a partial mismatch, retry to get a complete response.
if ((hasOpenTag && !hasCloseTag) || (!hasOpenTag && hasCloseTag)) {
console.warn("Partial <think> block detected. Re-generating...");
// Attempt another loop iteration to get a complete or no-think response
continue;
}
// If both tags appear, remove them (and everything inside)
// If both tags appear, remove them (and everything inside).
if (hasOpenTag && hasCloseTag) {
res = res.replace(/<think>[\s\S]*?<\/think>/g, '');
}
}
// We made it here with either a fully valid or not-needed to handle <think> scenario
finalRes = res;
break; // Break out of the while loop
break; // Exit the loop if we got a valid response.
}
// If after max attempts we STILL have partial tags, finalRes might be partial
// Or we never set finalRes because all attempts threw partial tags
if (finalRes == null) {
// This means we kept continuing in the loop but never got a break
console.warn("Could not get a valid <think> block or normal response after max attempts.");
finalRes = 'Response incomplete, please try again.';
}
return finalRes;
}
/**
* Embedding method (unchanged).
*/
async embed(text) {
let model = this.model_name || 'nomic-embed-text';
let body = { model: model, prompt: text };
@@ -93,19 +80,11 @@ export class Local {
return res['embedding'];
}
/**
* Generic send method for local endpoint.
*/
async send(endpoint, body) {
const url = new URL(endpoint, this.url);
const method = 'POST';
const headers = new Headers();
const request = new Request(url, {
method,
headers,
body: JSON.stringify(body)
});
let method = 'POST';
let headers = new Headers();
const request = new Request(url, { method, headers, body: JSON.stringify(body) });
let data = null;
try {
const res = await fetch(request);
@@ -117,7 +96,6 @@ export class Local {
} catch (err) {
console.error('Failed to send Ollama request.');
console.error(err);
throw err; // rethrow so we can catch it in the calling method
}
return data;
}
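As a usage sketch (this assumes a default local Ollama install with the llama3 model pulled, run inside an async context):

const local = new Local('llama3', 'http://127.0.0.1:11434');
const data = await local.send('/api/chat', {
    model: 'llama3',
    messages: [{ role: 'user', content: 'Say hi in five words.' }],
    stream: false
});
// Ollama's /api/chat returns { message: { role, content }, ... } when stream is false
console.log(data?.message?.content);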

View file

@@ -5,10 +5,13 @@ import { strictFormat } from '../utils/text.js';
export class Mistral {
#client;
constructor(model_name, url) {
constructor(model_name, url, params) {
this.model_name = model_name;
this.params = params;
if (typeof url === "string") {
console.warn("Mistral does not support custom URL's, ignoring!");
}
if (!getKey("MISTRAL_API_KEY")) {
@@ -22,8 +25,6 @@ export class Mistral {
);
this.model_name = model_name;
// Prevents the following code from running when the model is not specified
if (typeof this.model_name === "undefined") return;
@@ -49,6 +50,7 @@ export class Mistral {
const response = await this.#client.chat.complete({
model,
messages,
...(this.params || {})
});
result = response.choices[0].message.content;

View file

@@ -1,11 +1,14 @@
import OpenAIApi from 'openai';
import { getKey } from '../utils/keys.js';
import { strictFormat } from '../utils/text.js';
// llama, mistral
export class Novita {
constructor(model_name, url) {
constructor(model_name, url, params) {
this.model_name = model_name.replace('novita/', '');
this.url = url || 'https://api.novita.ai/v3/openai';
this.params = params;
let config = {
baseURL: this.url
@@ -17,10 +20,15 @@ export class Novita {
async sendRequest(turns, systemMessage, stop_seq='***') {
let messages = [{'role': 'system', 'content': systemMessage}].concat(turns);
messages = strictFormat(messages);
const pack = {
model: this.model_name || "meta-llama/llama-3.1-70b-instruct",
messages,
stop: [stop_seq],
...(this.params || {})
};
let res = null;
@@ -41,6 +49,18 @@ export class Novita {
res = 'My brain disconnected, try again.';
}
}
if (res.includes('<think>')) {
    let start = res.indexOf('<think>');
    let end = res.indexOf('</think>');
    if (end !== -1) {
        // Remove the entire <think>...</think> block ('</think>' is 8 characters long)
        res = res.substring(0, start) + res.substring(end + 8);
    } else {
        // No closing tag: drop everything from <think> onward
        res = res.substring(0, start);
    }
    res = res.trim();
}
return res;
}

373
src/models/prompter.js Normal file
View file

@@ -0,0 +1,373 @@
import { readFileSync, mkdirSync, writeFileSync} from 'fs';
import { Examples } from '../utils/examples.js';
import { getCommandDocs } from '../agent/commands/index.js';
import { getSkillDocs } from '../agent/library/index.js';
import { SkillLibrary } from "../agent/library/skill_library.js";
import { stringifyTurns } from '../utils/text.js';
import { getCommand } from '../agent/commands/index.js';
import settings from '../../settings.js';
import { Gemini } from './gemini.js';
import { GPT } from './gpt.js';
import { Claude } from './claude.js';
import { Mistral } from './mistral.js';
import { ReplicateAPI } from './replicate.js';
import { Local } from './local.js';
import { Novita } from './novita.js';
import { GroqCloudAPI } from './groq.js';
import { HuggingFace } from './huggingface.js';
import { Qwen } from "./qwen.js";
import { Grok } from "./grok.js";
import { DeepSeek } from './deepseek.js';
import { hyperbolic } from './hyperbolic.js';
import { glhf } from './glhf.js';
export class Prompter {
constructor(agent, fp) {
this.agent = agent;
this.profile = JSON.parse(readFileSync(fp, 'utf8'));
let default_profile = JSON.parse(readFileSync('./profiles/defaults/_default.json', 'utf8'));
let base_fp = settings.base_profile;
let base_profile = JSON.parse(readFileSync(base_fp, 'utf8'));
// first use defaults to fill in missing values in the base profile
for (let key in default_profile) {
if (base_profile[key] === undefined)
base_profile[key] = default_profile[key];
}
// then use base profile to fill in missing values in the individual profile
for (let key in base_profile) {
if (this.profile[key] === undefined)
this.profile[key] = base_profile[key];
}
// base overrides default, individual overrides base
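// For example (hypothetical values), with
//   default_profile = { cooldown: 0, model: 'gpt-4o-mini' }
//   base_profile    = { model: 'claude-3-haiku-20240307' }
//   this.profile    = { name: 'andy' }
// the merged profile keeps name 'andy' (individual), takes model
// 'claude-3-haiku-20240307' (base overrides default), and fills cooldown 0 (default).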
this.convo_examples = null;
this.coding_examples = null;
let name = this.profile.name;
this.cooldown = this.profile.cooldown ? this.profile.cooldown : 0;
this.last_prompt_time = 0;
this.awaiting_coding = false;
// try to get "max_tokens" parameter, else null
let max_tokens = null;
if (this.profile.max_tokens)
max_tokens = this.profile.max_tokens;
let chat_model_profile = this._selectAPI(this.profile.model);
this.chat_model = this._createModel(chat_model_profile);
if (this.profile.code_model) {
let code_model_profile = this._selectAPI(this.profile.code_model);
this.code_model = this._createModel(code_model_profile);
}
else {
this.code_model = this.chat_model;
}
let embedding = this.profile.embedding;
if (embedding === undefined) {
if (chat_model_profile.api !== 'ollama')
embedding = {api: chat_model_profile.api};
else
embedding = {api: 'none'};
}
else if (typeof embedding === 'string' || embedding instanceof String)
embedding = {api: embedding};
console.log('Using embedding settings:', embedding);
try {
if (embedding.api === 'google')
this.embedding_model = new Gemini(embedding.model, embedding.url);
else if (embedding.api === 'openai')
this.embedding_model = new GPT(embedding.model, embedding.url);
else if (embedding.api === 'replicate')
this.embedding_model = new ReplicateAPI(embedding.model, embedding.url);
else if (embedding.api === 'ollama')
this.embedding_model = new Local(embedding.model, embedding.url);
else if (embedding.api === 'qwen')
this.embedding_model = new Qwen(embedding.model, embedding.url);
else if (embedding.api === 'mistral')
this.embedding_model = new Mistral(embedding.model, embedding.url);
else {
this.embedding_model = null;
console.log('Unknown embedding: ', embedding ? embedding.api : '[NOT SPECIFIED]', '. Using word overlap.');
}
}
catch (err) {
console.log('Warning: Failed to initialize embedding model:', err.message);
console.log('Continuing anyway, using word overlap instead.');
this.embedding_model = null;
}
this.skill_libary = new SkillLibrary(agent, this.embedding_model);
mkdirSync(`./bots/${name}`, { recursive: true });
// writeFileSync is synchronous and takes no callback; it throws on failure
writeFileSync(`./bots/${name}/last_profile.json`, JSON.stringify(this.profile, null, 4));
console.log("Copy profile saved.");
}
_selectAPI(profile) {
if (typeof profile === 'string' || profile instanceof String) {
profile = {model: profile};
}
if (!profile.api) {
if (profile.model.includes('gemini'))
profile.api = 'google';
else if (profile.model.includes('gpt') || profile.model.includes('o1') || profile.model.includes('o3'))
profile.api = 'openai';
else if (profile.model.includes('claude'))
profile.api = 'anthropic';
else if (profile.model.includes('huggingface/'))
profile.api = "huggingface";
else if (profile.model.includes('replicate/'))
profile.api = 'replicate';
else if (profile.model.includes('mistralai/') || profile.model.includes("mistral/"))
profile.api = 'mistral';
else if (profile.model.includes("groq/") || profile.model.includes("groqcloud/"))
profile.api = 'groq';
else if (profile.model.includes('hf:'))
    profile.api = "glhf";
else if (profile.model.includes('hyperbolic:') || profile.model.includes('hb:'))
    profile.api = "hyperbolic";
else if (profile.model.includes('novita/'))
profile.api = 'novita';
else if (profile.model.includes('qwen'))
profile.api = 'qwen';
else if (profile.model.includes('grok'))
profile.api = 'xai';
else if (profile.model.includes('deepseek'))
profile.api = 'deepseek';
else
profile.api = 'ollama';
}
return profile;
}
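// Illustrative results of the name-based inference above (hypothetical inputs):
//   _selectAPI('claude-3-haiku-20240307') -> { model: 'claude-3-haiku-20240307', api: 'anthropic' }
//   _selectAPI('groq/mixtral-8x7b-32768') -> { model: 'groq/mixtral-8x7b-32768', api: 'groq' }
//   _selectAPI('some-unknown-model')      -> { model: 'some-unknown-model', api: 'ollama' } (fallback)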
_createModel(profile) {
let model = null;
if (profile.api === 'google')
model = new Gemini(profile.model, profile.url, profile.params);
else if (profile.api === 'openai')
model = new GPT(profile.model, profile.url, profile.params);
else if (profile.api === 'anthropic')
model = new Claude(profile.model, profile.url, profile.params);
else if (profile.api === 'replicate')
model = new ReplicateAPI(profile.model, profile.url, profile.params);
else if (profile.api === 'ollama')
model = new Local(profile.model, profile.url, profile.params);
else if (profile.api === 'mistral')
model = new Mistral(profile.model, profile.url, profile.params);
else if (profile.api === 'groq')
model = new GroqCloudAPI(profile.model.replace('groq/', '').replace('groqcloud/', ''), profile.url, profile.params);
else if (profile.api === 'glhf')
model = new glhf(profile.model, profile.url, profile.params);
else if (profile.api === 'hyperbolic')
model = new hyperbolic(profile.model.replace('hyperbolic:', '').replace('hb:', ''), profile.url, profile.params); // strip the 'hyperbolic:'/'hb:' prefix before constructing the client
else if (profile.api === 'huggingface')
model = new HuggingFace(profile.model, profile.url, profile.params);
else if (profile.api === 'novita')
model = new Novita(profile.model.replace('novita/', ''), profile.url, profile.params);
else if (profile.api === 'qwen')
model = new Qwen(profile.model, profile.url, profile.params);
else if (profile.api === 'xai')
model = new Grok(profile.model, profile.url, profile.params);
else if (profile.api === 'deepseek')
model = new DeepSeek(profile.model, profile.url, profile.params);
else
throw new Error('Unknown API: ' + profile.api);
return model;
}
getName() {
return this.profile.name;
}
getInitModes() {
return this.profile.modes;
}
async initExamples() {
try {
this.convo_examples = new Examples(this.embedding_model, settings.num_examples);
this.coding_examples = new Examples(this.embedding_model, settings.num_examples);
// Wait for both examples to load before proceeding
await Promise.all([
this.convo_examples.load(this.profile.conversation_examples),
this.coding_examples.load(this.profile.coding_examples),
this.skill_libary.initSkillLibrary()
]);
console.log('Examples initialized.');
} catch (error) {
console.error('Failed to initialize examples:', error);
throw error;
}
}
async replaceStrings(prompt, messages, examples=null, to_summarize=[], last_goals=null) {
prompt = prompt.replaceAll('$NAME', this.agent.name);
if (prompt.includes('$STATS')) {
let stats = await getCommand('!stats').perform(this.agent);
prompt = prompt.replaceAll('$STATS', stats);
}
if (prompt.includes('$INVENTORY')) {
let inventory = await getCommand('!inventory').perform(this.agent);
prompt = prompt.replaceAll('$INVENTORY', inventory);
}
if (prompt.includes('$ACTION')) {
prompt = prompt.replaceAll('$ACTION', this.agent.actions.currentActionLabel);
}
if (prompt.includes('$COMMAND_DOCS'))
prompt = prompt.replaceAll('$COMMAND_DOCS', getCommandDocs());
if (prompt.includes('$CODE_DOCS')) {
const code_task_content = messages.slice().reverse().find(msg =>
msg.role !== 'system' && msg.content.includes('!newAction(')
)?.content?.match(/!newAction\((.*?)\)/)?.[1] || '';
prompt = prompt.replaceAll(
'$CODE_DOCS',
await this.skill_libary.getRelevantSkillDocs(code_task_content, settings.relevant_docs_count)
);
}
if (prompt.includes('$EXAMPLES') && examples !== null)
prompt = prompt.replaceAll('$EXAMPLES', await examples.createExampleMessage(messages));
if (prompt.includes('$MEMORY'))
prompt = prompt.replaceAll('$MEMORY', this.agent.history.memory);
if (prompt.includes('$TO_SUMMARIZE'))
prompt = prompt.replaceAll('$TO_SUMMARIZE', stringifyTurns(to_summarize));
if (prompt.includes('$CONVO'))
prompt = prompt.replaceAll('$CONVO', 'Recent conversation:\n' + stringifyTurns(messages));
if (prompt.includes('$SELF_PROMPT')) {
let self_prompt = this.agent.self_prompter.on ? `YOUR CURRENT ASSIGNED GOAL: "${this.agent.self_prompter.prompt}"\n` : '';
prompt = prompt.replaceAll('$SELF_PROMPT', self_prompt);
}
if (prompt.includes('$LAST_GOALS')) {
let goal_text = '';
for (let goal in last_goals) {
if (last_goals[goal])
goal_text += `You recently successfully completed the goal ${goal}.\n`
else
goal_text += `You recently failed to complete the goal ${goal}.\n`
}
prompt = prompt.replaceAll('$LAST_GOALS', goal_text.trim());
}
if (prompt.includes('$BLUEPRINTS')) {
if (this.agent.npc.constructions) {
let blueprints = '';
for (let blueprint in this.agent.npc.constructions) {
blueprints += blueprint + ', ';
}
prompt = prompt.replaceAll('$BLUEPRINTS', blueprints.slice(0, -2));
}
}
// check if there are any remaining placeholders with syntax $<word>
let remaining = prompt.match(/\$[A-Z_]+/g);
if (remaining !== null) {
console.warn('Unknown prompt placeholders:', remaining.join(', '));
}
return prompt;
}
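// Illustration (hypothetical template, agent named "andy" with an active goal):
//   input:  'You are $NAME. $SELF_PROMPT'
//   output: 'You are andy. YOUR CURRENT ASSIGNED GOAL: "collect wood"\n'
// Any unrecognized placeholder such as $FOO is left in place and logged as a warning.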
async checkCooldown() {
let elapsed = Date.now() - this.last_prompt_time;
if (elapsed < this.cooldown && this.cooldown > 0) {
await new Promise(r => setTimeout(r, this.cooldown - elapsed));
}
this.last_prompt_time = Date.now();
}
async promptConvo(messages) {
this.most_recent_msg_time = Date.now();
let current_msg_time = this.most_recent_msg_time;
for (let i = 0; i < 3; i++) { // try 3 times to avoid hallucinations
await this.checkCooldown();
if (current_msg_time !== this.most_recent_msg_time) {
return '';
}
let prompt = this.profile.conversing;
prompt = await this.replaceStrings(prompt, messages, this.convo_examples);
let generation = await this.chat_model.sendRequest(messages, prompt);
// in conversations >2 players LLMs tend to hallucinate and role-play as other bots
// the FROM OTHER BOT tag should never be generated by the LLM
if (generation.includes('(FROM OTHER BOT)')) {
console.warn('LLM hallucinated message as another bot. Trying again...');
continue;
}
if (current_msg_time !== this.most_recent_msg_time) {
console.warn(this.agent.name + ' received new message while generating, discarding old response.');
return '';
}
return generation;
}
return '';
}
async promptCoding(messages) {
if (this.awaiting_coding) {
console.warn('Already awaiting coding response, returning no response.');
return '```//no response```';
}
this.awaiting_coding = true;
await this.checkCooldown();
let prompt = this.profile.coding;
prompt = await this.replaceStrings(prompt, messages, this.coding_examples);
let resp = await this.code_model.sendRequest(messages, prompt);
this.awaiting_coding = false;
return resp;
}
async promptMemSaving(to_summarize) {
await this.checkCooldown();
let prompt = this.profile.saving_memory;
prompt = await this.replaceStrings(prompt, null, null, to_summarize);
return await this.chat_model.sendRequest([], prompt);
}
async promptShouldRespondToBot(new_message) {
await this.checkCooldown();
let prompt = this.profile.bot_responder;
let messages = this.agent.history.getHistory();
messages.push({role: 'user', content: new_message});
prompt = await this.replaceStrings(prompt, null, null, messages);
let res = await this.chat_model.sendRequest([], prompt);
return res.trim().toLowerCase() === 'respond';
}
async promptGoalSetting(messages, last_goals) {
let system_message = this.profile.goal_setting;
system_message = await this.replaceStrings(system_message, messages);
let user_message = 'Use the below info to determine what goal to target next\n\n';
user_message += '$LAST_GOALS\n$STATS\n$INVENTORY\n$CONVO'
user_message = await this.replaceStrings(user_message, messages, null, null, last_goals);
let user_messages = [{role: 'user', content: user_message}];
let res = await this.chat_model.sendRequest(user_messages, system_message);
let goal = null;
try {
let data = res.split('```')[1].replace('json', '').trim();
goal = JSON.parse(data);
} catch (err) {
console.log('Failed to parse goal:', res, err);
}
if (!goal || !goal.name || !goal.quantity || isNaN(parseInt(goal.quantity))) {
console.log('Failed to set goal:', res);
return null;
}
goal.quantity = parseInt(goal.quantity);
return goal;
}
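// promptGoalSetting expects the model to wrap a JSON object in a fenced block, e.g. (hypothetical):
//   ```json
//   { "name": "iron_pickaxe", "quantity": 1 }
//   ```
// which parses to { name: 'iron_pickaxe', quantity: 1 }; any other shape returns null.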
}

View file

@@ -1,104 +1,79 @@
// This code uses Dashscope and HTTP to ensure the latest support for the Qwen model.
// Qwen is also compatible with the OpenAI API format;
import { getKey } from '../utils/keys.js';
import OpenAIApi from 'openai';
import { getKey, hasKey } from '../utils/keys.js';
import { strictFormat } from '../utils/text.js';
export class Qwen {
constructor(modelName, url) {
this.modelName = modelName;
this.url = url || 'https://dashscope.aliyuncs.com/api/v1/services/aigc/text-generation/generation';
this.apiKey = getKey('QWEN_API_KEY');
constructor(model_name, url, params) {
this.model_name = model_name;
this.params = params;
let config = {};
config.baseURL = url || 'https://dashscope.aliyuncs.com/compatible-mode/v1';
config.apiKey = getKey('QWEN_API_KEY');
this.openai = new OpenAIApi(config);
}
async sendRequest(turns, systemMessage, stopSeq = '***', retryCount = 0) {
if (retryCount > 5) {
console.error('Maximum retry attempts reached.');
return 'Error: Too many retry attempts.';
}
async sendRequest(turns, systemMessage, stop_seq='***') {
let messages = [{'role': 'system', 'content': systemMessage}].concat(turns);
const data = {
model: this.modelName || 'qwen-plus',
input: { messages: [{ role: 'system', content: systemMessage }, ...turns] },
parameters: { result_format: 'message', stop: stopSeq },
messages = strictFormat(messages);
const pack = {
model: this.model_name || "qwen-plus",
messages,
stop: stop_seq,
...(this.params || {})
};
// Add default user message if all messages are 'system' role
if (turns.every((msg) => msg.role === 'system')) {
data.input.messages.push({ role: 'user', content: 'hello' });
}
if (!data.model || !data.input || !data.input.messages || !data.parameters) {
console.error('Invalid request data format:', data);
throw new Error('Invalid request data format.');
}
let res = null;
try {
const response = await this._makeHttpRequest(this.url, data);
const choice = response?.output?.choices?.[0];
if (choice?.finish_reason === 'length' && turns.length > 0) {
return this.sendRequest(turns.slice(1), systemMessage, stopSeq, retryCount + 1);
console.log('Awaiting Qwen API response...');
// console.log('Messages:', messages);
let completion = await this.openai.chat.completions.create(pack);
if (completion.choices[0].finish_reason == 'length')
throw new Error('Context length exceeded');
console.log('Received.');
res = completion.choices[0].message.content;
}
catch (err) {
if ((err.message == 'Context length exceeded' || err.code == 'context_length_exceeded') && turns.length > 1) {
console.log('Context length exceeded, trying again with shorter context.');
return await this.sendRequest(turns.slice(1), systemMessage, stop_seq);
} else {
console.log(err);
res = 'My brain disconnected, try again.';
}
return choice?.message?.content || 'No content received.';
} catch (err) {
console.error('Error occurred:', err);
return 'An error occurred, please try again.';
}
return res;
}
// Why random backoff?
// With a 30 requests/second limit on Alibaba Qwen's embedding service,
// random backoff helps maximize bandwidth utilization.
async embed(text) {
if (!text || typeof text !== 'string') {
console.error('Invalid embedding input: text must be a non-empty string.');
return 'Invalid embedding input: text must be a non-empty string.';
}
const data = {
model: 'text-embedding-v2',
input: { texts: [text] },
parameters: { text_type: 'query' },
};
if (!data.model || !data.input || !data.input.texts || !data.parameters) {
console.error('Invalid embedding request data format:', data);
throw new Error('Invalid embedding request data format.');
}
try {
const response = await this._makeHttpRequest(this.url, data);
const embedding = response?.output?.embeddings?.[0]?.embedding;
return embedding || 'No embedding result received.';
} catch (err) {
console.error('Error occurred:', err);
return 'An error occurred, please try again.';
const maxRetries = 5; // Maximum number of retries
for (let retries = 0; retries < maxRetries; retries++) {
try {
const { data } = await this.openai.embeddings.create({
model: this.model_name || "text-embedding-v3",
input: text,
encoding_format: "float",
});
return data[0].embedding;
} catch (err) {
if (err.status === 429) {
// If a rate limit error occurs, back off exponentially (2^retries seconds) plus 0-2 seconds of random jitter
const delay = Math.pow(2, retries) * 1000 + Math.floor(Math.random() * 2000);
// console.log(`Rate limit hit, retrying in ${delay} ms...`);
await new Promise(resolve => setTimeout(resolve, delay)); // Wait for the delay before retrying
} else {
throw err;
}
}
}
// If maximum retries are reached and the request still fails, throw an error
throw new Error('Max retries reached, request failed.');
}
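// Resulting backoff schedule (delay = 2^retries * 1000 ms + 0-2000 ms jitter):
//   retry 0: ~1-3 s, retry 1: ~2-4 s, retry 2: ~4-6 s, retry 3: ~8-10 s, retry 4: ~16-18 s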
async _makeHttpRequest(url, data) {
const headers = {
'Authorization': `Bearer ${this.apiKey}`,
'Content-Type': 'application/json',
};
const response = await fetch(url, {
method: 'POST',
headers,
body: JSON.stringify(data),
});
if (!response.ok) {
const errorText = await response.text();
console.error(`Request failed, status code ${response.status}: ${response.statusText}`);
console.error('Error response content:', errorText);
throw new Error(`Request failed, status code ${response.status}: ${response.statusText}`);
}
const responseText = await response.text();
try {
return JSON.parse(responseText);
} catch (err) {
console.error('Failed to parse response JSON:', err);
throw new Error('Invalid response JSON format.');
}
}
}
}

View file

@@ -4,9 +4,10 @@ import { getKey } from '../utils/keys.js';
// llama, mistral
export class ReplicateAPI {
constructor(model_name, url) {
constructor(model_name, url, params) {
this.model_name = model_name;
this.url = url;
this.params = params;
if (this.url) {
console.warn('Replicate API does not support custom URLs. Ignoring provided URL.');
@@ -22,7 +23,11 @@ export class ReplicateAPI {
const prompt = toSinglePrompt(turns, null, stop_seq);
let model_name = this.model_name || 'meta/meta-llama-3-70b-instruct';
const input = { prompt, system_prompt: systemMessage };
const input = {
prompt,
system_prompt: systemMessage,
...(this.params || {})
};
let res = null;
try {
console.log('Awaiting Replicate API response...');

View file

@@ -57,11 +57,8 @@ const argv = yargs(args)
const agent = new Agent();
await agent.start(argv.profile, argv.load_memory, argv.init_message, argv.count_id, argv.task_path, argv.task_id);
} catch (error) {
console.error('Failed to start agent process:', {
message: error.message || 'No error message',
stack: error.stack || 'No stack trace',
error: error
});
console.error('Failed to start agent process:');
console.error(error);
process.exit(1);
}
})();

View file

@@ -116,6 +116,18 @@ export function createMindServer(port = 8080) {
}, 2000);
});
socket.on('send-message', (agentName, message) => {
if (!inGameAgents[agentName]) {
console.warn(`Agent ${agentName} not logged in, cannot send message via MindServer.`);
return;
}
try {
console.log(`Sending message to agent ${agentName}: ${message}`);
inGameAgents[agentName].emit('send-message', agentName, message);
} catch (error) {
console.error('Error: ', error);
}
});
});
server.listen(port, 'localhost', () => {
@@ -148,4 +160,4 @@ function stopAllAgents() {
// Optional: export these if you need access to them from other files
export const getIO = () => io;
export const getServer = () => server;
export const getConnectedAgents = () => connectedAgents;
export const getConnectedAgents = () => connectedAgents;

View file

@@ -80,6 +80,7 @@
${agent.in_game ? `
<button class="stop-btn" onclick="stopAgent('${agent.name}')">Stop</button>
<button class="restart-btn" onclick="restartAgent('${agent.name}')">Restart</button>
<input type="text" id="messageInput" placeholder="Enter a message or command..."></input><button class="start-btn" onclick="sendMessage('${agent.name}', document.getElementById('messageInput').value)">Send</button>
` : `
<button class="start-btn" onclick="startAgent('${agent.name}')">Start</button>
`}
@@ -110,6 +111,10 @@
function shutdown() {
socket.emit('shutdown');
}
function sendMessage(agentName, message) {
socket.emit('send-message', agentName, message);
}
</script>
</body>
</html>
</html>

View file

@@ -190,7 +190,10 @@ export function getItemCraftingRecipes(itemName) {
recipe[ingredientName] = 0;
recipe[ingredientName]++;
}
recipes.push(recipe);
recipes.push([
recipe,
{craftedCount : r.result.count}
]);
}
return recipes;
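// The new return shape pairs each ingredient map with its per-craft yield, e.g. (hypothetical):
//   getItemCraftingRecipes('stick') -> [ [ { oak_planks: 2 }, { craftedCount: 4 } ], ... ]
// craftItem() below relies on craftedCount to size crafting batches.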
@@ -327,4 +330,156 @@ export function calculateLimitingResource(availableItems, requiredItems, discret
}
if(discrete) num = Math.floor(num);
return {num, limitingResource}
}
let loopingItems = new Set();
export function initializeLoopingItems() {
loopingItems = new Set(['coal',
'wheat',
'diamond',
'emerald',
'raw_iron',
'raw_gold',
'redstone',
'blue_wool',
'packed_mud',
'raw_copper',
'iron_ingot',
'dried_kelp',
'gold_ingot',
'slime_ball',
'black_wool',
'quartz_slab',
'copper_ingot',
'lapis_lazuli',
'honey_bottle',
'rib_armor_trim_smithing_template',
'eye_armor_trim_smithing_template',
'vex_armor_trim_smithing_template',
'dune_armor_trim_smithing_template',
'host_armor_trim_smithing_template',
'tide_armor_trim_smithing_template',
'wild_armor_trim_smithing_template',
'ward_armor_trim_smithing_template',
'coast_armor_trim_smithing_template',
'spire_armor_trim_smithing_template',
'snout_armor_trim_smithing_template',
'shaper_armor_trim_smithing_template',
'netherite_upgrade_smithing_template',
'raiser_armor_trim_smithing_template',
'sentry_armor_trim_smithing_template',
'silence_armor_trim_smithing_template',
'wayfinder_armor_trim_smithing_template']);
}
/**
* Gets a detailed plan for crafting an item considering current inventory
*/
export function getDetailedCraftingPlan(targetItem, count = 1, current_inventory = {}) {
initializeLoopingItems();
if (!targetItem || count <= 0 || !getItemId(targetItem)) {
return "Invalid input. Please provide a valid item name and positive count.";
}
if (isBaseItem(targetItem)) {
const available = current_inventory[targetItem] || 0;
if (available >= count) return "You have all required items already in your inventory!";
return `${targetItem} is a base item, you need to find ${count - available} more in the world`;
}
const inventory = { ...current_inventory };
const leftovers = {};
const plan = craftItem(targetItem, count, inventory, leftovers);
return formatPlan(plan);
}
function isBaseItem(item) {
return loopingItems.has(item) || getItemCraftingRecipes(item) === null;
}
function craftItem(item, count, inventory, leftovers, crafted = { required: {}, steps: [], leftovers: {} }) {
// Check available inventory and leftovers first
const availableInv = inventory[item] || 0;
const availableLeft = leftovers[item] || 0;
const totalAvailable = availableInv + availableLeft;
if (totalAvailable >= count) {
// Use leftovers first, then inventory
const useFromLeft = Math.min(availableLeft, count);
leftovers[item] = availableLeft - useFromLeft;
const remainingNeeded = count - useFromLeft;
if (remainingNeeded > 0) {
inventory[item] = availableInv - remainingNeeded;
}
return crafted;
}
// Use whatever is available
const stillNeeded = count - totalAvailable;
if (availableLeft > 0) leftovers[item] = 0;
if (availableInv > 0) inventory[item] = 0;
if (isBaseItem(item)) {
crafted.required[item] = (crafted.required[item] || 0) + stillNeeded;
return crafted;
}
const recipe = getItemCraftingRecipes(item)?.[0];
if (!recipe) {
crafted.required[item] = stillNeeded;
return crafted;
}
const [ingredients, result] = recipe;
const craftedPerRecipe = result.craftedCount;
const batchCount = Math.ceil(stillNeeded / craftedPerRecipe);
const totalProduced = batchCount * craftedPerRecipe;
// Add excess to leftovers
if (totalProduced > stillNeeded) {
leftovers[item] = (leftovers[item] || 0) + (totalProduced - stillNeeded);
}
// Process each ingredient
for (const [ingredientName, ingredientCount] of Object.entries(ingredients)) {
const totalIngredientNeeded = ingredientCount * batchCount;
craftItem(ingredientName, totalIngredientNeeded, inventory, leftovers, crafted);
}
// Add crafting step
const stepIngredients = Object.entries(ingredients)
.map(([name, amount]) => `${amount * batchCount} ${name}`)
.join(' + ');
crafted.steps.push(`Craft ${stepIngredients} -> ${totalProduced} ${item}`);
return crafted;
}
function formatPlan({ required, steps, leftovers }) {
const lines = [];
if (Object.keys(required).length > 0) {
lines.push('You are missing the following items:');
Object.entries(required).forEach(([item, count]) =>
lines.push(`- ${count} ${item}`));
lines.push('\nOnce you have these items, here\'s your crafting plan:');
} else {
lines.push('You have all items required to craft this item!');
lines.push('Here\'s your crafting plan:');
}
lines.push('');
lines.push(...steps);
if (Object.keys(leftovers).length > 0) {
lines.push('\nYou will have leftover:');
Object.entries(leftovers).forEach(([item, count]) =>
lines.push(`- ${count} ${item}`));
}
return lines.join('\n');
}
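A usage sketch for the planner above; the item names and resulting counts are illustrative, since real output depends on the loaded recipe data:

// getDetailedCraftingPlan('wooden_pickaxe', 1, { oak_planks: 3 })
// -> lists the missing base items (e.g. more planks for the sticks),
//    then ordered "Craft ... -> ..." steps, then any leftover items.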

View file

@@ -26,8 +26,10 @@ export function toSinglePrompt(turns, system=null, stop_seq='***', model_nicknam
return prompt;
}
// ensures stricter turn order for anthropic/llama models
// combines repeated messages from the same role, separates repeat assistant messages with filler user messages
// ensures stricter turn order and roles:
// - system messages are treated as user messages and prefixed with SYSTEM:
// - combines repeated messages from users
// - separates repeat assistant messages with filler user messages
export function strictFormat(turns) {
let prev_role = null;
let messages = [];
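// Illustration of the rules above (a sketch; the exact filler text is an assumption):
//   input:  [{ role: 'system', content: 'be brief' },
//            { role: 'assistant', content: 'hi' },
//            { role: 'assistant', content: 'hello' }]
//   output: [{ role: 'user', content: 'SYSTEM: be brief' },
//            { role: 'assistant', content: 'hi' },
//            { role: 'user', content: '(no response)' },  // filler between repeated assistant turns
//            { role: 'assistant', content: 'hello' }]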

View file

@@ -26,9 +26,9 @@
</div>
<script>
function updateLayout() {
var width = window.innerWidth;
var height = window.innerHeight;
var iframes = document.querySelectorAll('.iframe-wrapper');
let width = window.innerWidth;
let height = window.innerHeight;
let iframes = document.querySelectorAll('.iframe-wrapper');
if (width > height) {
iframes.forEach(function(iframe) {
iframe.style.width = '50%';
@@ -43,10 +43,10 @@
}
window.addEventListener('resize', updateLayout);
window.addEventListener('load', updateLayout);
var iframes = document.querySelectorAll('.iframe-wrapper');
let iframes = document.querySelectorAll('.iframe-wrapper');
iframes.forEach(function(iframe) {
var port = iframe.getAttribute('data-port');
var loaded = false;
let port = iframe.getAttribute('data-port');
let loaded = false;
function checkServer() {
fetch('http://localhost:' + port, { method: 'HEAD' })
.then(function(response) {