mirror of https://github.com/kolbytn/mindcraft.git
synced 2025-07-24 00:45:23 +02:00

Merge branch 'main' into evaluation_parallelization
commit 7a19f34e22
53 changed files with 992 additions and 389 deletions

README.md (99 changes)
@@ -1,13 +1,12 @@
 # Mindcraft 🧠⛏️

-Crafting minds for Minecraft with LLMs and Mineflayer!
+Crafting minds for Minecraft with LLMs and [Mineflayer!](https://prismarinejs.github.io/mineflayer/#/)

-[FAQ](https://github.com/kolbytn/mindcraft/blob/main/FAQ.md) | [Discord Support](https://discord.gg/mp73p35dzC) | [Blog Post](https://kolbynottingham.com/mindcraft/) | [Contributor TODO](https://github.com/users/kolbytn/projects/1)
+[FAQ](https://github.com/kolbytn/mindcraft/blob/main/FAQ.md) | [Discord Support](https://discord.gg/mp73p35dzC) | [Video Tutorial](https://www.youtube.com/watch?v=gRotoL8P8D8) | [Blog Post](https://kolbynottingham.com/mindcraft/) | [Contributor TODO](https://github.com/users/kolbytn/projects/1)

-#### ‼️Warning‼️
-Do not connect this bot to public servers with coding enabled. This project allows an LLM to write/execute code on your computer. While the code is sandboxed, it is still vulnerable to injection attacks on public servers. Code writing is disabled by default, you can enable it by setting `allow_insecure_coding` to `true` in `settings.js`. We strongly recommend running with additional layers of security such as docker containers. Ye be warned.
+> [!Caution]
+> Do not connect this bot to public servers with coding enabled. This project allows an LLM to write/execute code on your computer. The code is sandboxed, but still vulnerable to injection attacks. Code writing is disabled by default, you can enable it by setting `allow_insecure_coding` to `true` in `settings.js`. Ye be warned.

 ## Requirements
@@ -29,32 +28,33 @@ Do not connect this bot to public servers with coding enabled. This project allo
 6. Run `node main.js` from the installed directory

-If you encounter issues, check the [FAQ](https://github.com/kolbytn/mindcraft/blob/main/FAQ.md) or find support on [discord](https://discord.gg/jVxQWVTM). We are currently not very responsive to github issues.
+If you encounter issues, check the [FAQ](https://github.com/kolbytn/mindcraft/blob/main/FAQ.md) or find support on [discord](https://discord.gg/mp73p35dzC). We are currently not very responsive to github issues.

-## Customization
+## Model Customization

 You can configure project details in `settings.js`. [See file.](settings.js)

-You can configure the agent's name, model, and prompts in their profile like `andy.json`.
+You can configure the agent's name, model, and prompts in their profile like `andy.json` with the `model` field. For comprehensive details, see [Model Specifications](#model-specifications).

 | API | Config Variable | Example Model name | Docs |
 |------|------|------|------|
-| OpenAI | `OPENAI_API_KEY` | `gpt-4o-mini` | [docs](https://platform.openai.com/docs/models) |
-| Google | `GEMINI_API_KEY` | `gemini-pro` | [docs](https://ai.google.dev/gemini-api/docs/models/gemini) |
-| Anthropic | `ANTHROPIC_API_KEY` | `claude-3-haiku-20240307` | [docs](https://docs.anthropic.com/claude/docs/models-overview) |
-| Replicate | `REPLICATE_API_KEY` | `meta/meta-llama-3-70b-instruct` | [docs](https://replicate.com/collections/language-models) |
-| Ollama (local) | n/a | `llama3` | [docs](https://ollama.com/library) |
-| Groq | `GROQCLOUD_API_KEY` | `groq/mixtral-8x7b-32768` | [docs](https://console.groq.com/docs/models) |
-| Hugging Face | `HUGGINGFACE_API_KEY` | `huggingface/mistralai/Mistral-Nemo-Instruct-2407` | [docs](https://huggingface.co/models) |
-| Novita AI | `NOVITA_API_KEY` | `gryphe/mythomax-l2-13b` | [docs](https://novita.ai/model-api/product/llm-api?utm_source=github_mindcraft&utm_medium=github_readme&utm_campaign=link) |
-| Qwen | `QWEN_API_KEY` | `qwen-max` | [Intl.](https://www.alibabacloud.com/help/en/model-studio/developer-reference/use-qwen-by-calling-api)/[cn](https://help.aliyun.com/zh/model-studio/getting-started/models) |
-| Mistral | `MISTRAL_API_KEY` | `mistral-large-latest` | [docs](https://docs.mistral.ai/getting-started/models/models_overview/) |
-| xAI | `XAI_API_KEY` | `grok-beta` | [docs](https://docs.x.ai/docs) |
+| `openai` | `OPENAI_API_KEY` | `gpt-4o-mini` | [docs](https://platform.openai.com/docs/models) |
+| `google` | `GEMINI_API_KEY` | `gemini-pro` | [docs](https://ai.google.dev/gemini-api/docs/models/gemini) |
+| `anthropic` | `ANTHROPIC_API_KEY` | `claude-3-haiku-20240307` | [docs](https://docs.anthropic.com/claude/docs/models-overview) |
+| `replicate` | `REPLICATE_API_KEY` | `replicate/meta/meta-llama-3-70b-instruct` | [docs](https://replicate.com/collections/language-models) |
+| `ollama` (local) | n/a | `llama3` | [docs](https://ollama.com/library) |
+| `groq` | `GROQCLOUD_API_KEY` | `groq/mixtral-8x7b-32768` | [docs](https://console.groq.com/docs/models) |
+| `huggingface` | `HUGGINGFACE_API_KEY` | `huggingface/mistralai/Mistral-Nemo-Instruct-2407` | [docs](https://huggingface.co/models) |
+| `novita` | `NOVITA_API_KEY` | `gryphe/mythomax-l2-13b` | [docs](https://novita.ai/model-api/product/llm-api?utm_source=github_mindcraft&utm_medium=github_readme&utm_campaign=link) |
+| `qwen` | `QWEN_API_KEY` | `qwen-max` | [Intl.](https://www.alibabacloud.com/help/en/model-studio/developer-reference/use-qwen-by-calling-api)/[cn](https://help.aliyun.com/zh/model-studio/getting-started/models) |
+| `mistral` | `MISTRAL_API_KEY` | `mistral-large-latest` | [docs](https://docs.mistral.ai/getting-started/models/models_overview/) |
+| `xai` | `XAI_API_KEY` | `grok-beta` | [docs](https://docs.x.ai/docs) |
+| `deepseek` | `DEEPSEEK_API_KEY` | `deepseek-chat` | [docs](https://api-docs.deepseek.com/) |
+| `openrouter` | `OPENROUTER_API_KEY` | `openrouter/anthropic/claude-3.5-sonnet` | [docs](https://openrouter.ai/models) |

(Editor's note: the extracted table had its row labels shifted; the `mistral`, `xai`, and `deepseek` rows above are re-aligned from the surrounding key names, and the `deepseek-chat` example model is taken from the default in the deepseek.js hunk later in this commit.)

 If you use Ollama, to install the models used by default (generation and embedding), execute the following terminal command:
 `ollama pull llama3 && ollama pull nomic-embed-text`

-## Online Servers
+### Online Servers
 To connect to online servers your bot will need an official Microsoft/Minecraft account. You can use your own personal one, but will need another account if you want to connect too and play with it. To connect, change these lines in `settings.js`:
 ```javascript
 "host": "111.222.333.444",
@@ -63,7 +63,8 @@ To connect to online servers your bot will need an official Microsoft/Minecraft
 // rest is same...
 ```
-‼️ The bot's name in the profile.json must exactly match the Minecraft profile name! Otherwise the bot will spam talk to itself.
+> [!Important]
+> The bot's name in the profile.json must exactly match the Minecraft profile name! Otherwise the bot will spam talk to itself.

 To use different accounts, Mindcraft will connect with the account that the Minecraft launcher is currently using. You can switch accounts in the launcher, then run `node main.js`, then switch to your main account after the bot has connected.
@@ -87,57 +88,57 @@ When running in docker, if you want the bot to join your local minecraft server,
 To connect to an unsupported minecraft version, you can try to use [viaproxy](services/viaproxy/README.md)

-## Bot Profiles
+# Bot Profiles

 Bot profiles are json files (such as `andy.json`) that define:

-1. Bot backend LLMs to use for chat and embeddings.
+1. Bot backend LLMs to use for talking, coding, and embedding.
 2. Prompts used to influence the bot's behavior.
 3. Examples that help the bot perform tasks.

-### Specifying Profiles via Command Line
+## Model Specifications

-By default, the program will use the profiles specified in `settings.js`. You can specify one or more agent profiles using the `--profiles` argument:
-
-```bash
-node main.js --profiles ./profiles/andy.json ./profiles/jill.json
-```
-
-### Model Specifications
-
-LLM backends can be specified as simply as `"model": "gpt-3.5-turbo"`. However, for both the chat model and the embedding model, the bot profile can specify the below attributes:
+LLM models can be specified simply as `"model": "gpt-4o"`. However, you can use different models for chat, coding, and embeddings.
+You can pass a string or an object for these fields. A model object must specify an `api`, and optionally a `model`, `url`, and additional `params`.

 ```json
 "model": {
     "api": "openai",
+    "model": "gpt-4o",
     "url": "https://api.openai.com/v1/",
-    "model": "gpt-3.5-turbo"
+    "params": {
+        "max_tokens": 1000,
+        "temperature": 1
+    }
+},
+"code_model": {
+    "api": "openai",
+    "model": "gpt-4",
+    "url": "https://api.openai.com/v1/"
+},
+"embedding": {
+    "api": "openai",
+    "url": "https://api.openai.com/v1/",
+    "model": "text-embedding-ada-002"
+}
 ```

-The model parameter accepts either a string or object. If a string, it should specify the model to be used. The api and url will be assumed. If an object, the api field must be specified. Each api has a default model and url, so those fields are optional.
+`model` is used for chat, `code_model` is used for newAction coding, and `embedding` is used to embed text for example selection. If `code_model` or `embedding` are not specified, they will use `model` by default. Not all APIs have an embedding model.

-If the embedding field is not specified, then it will use the default embedding method for the chat model's api (Note that anthropic has no embedding model). The embedding parameter can also be a string or object. If a string, it should specify the embedding api and the default model and url will be used. If a valid embedding is not specified and cannot be assumed, then word overlap will be used to retrieve examples instead.
+All apis have default models and urls, so those fields are optional. The `params` field is optional and can be used to specify additional parameters for the model. It accepts any key-value pairs supported by the api. It is not supported for embedding models.
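(For illustration: per the text above, the string and object forms should be interchangeable. A minimal sketch, assuming the `openai` api is inferred from the model name as the README describes; this example is not itself part of the diff:)

```json
"model": "gpt-4o"
```

is expected to behave the same as:

```json
"model": {
    "api": "openai",
    "model": "gpt-4o"
}
```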
-Thus, all the below specifications are equivalent to the above example:
-
-```json
-"model": "gpt-3.5-turbo"
-```
-```json
-"model": {
-    "api": "openai"
-}
-```
-```json
-"model": "gpt-3.5-turbo",
-"embedding": "openai"
-```
+## Embedding Models
+
+Embedding models are used to embed and efficiently select relevant examples for conversation and coding.
+
+Supported Embedding APIs: `openai`, `google`, `replicate`, `huggingface`, `novita`
+
+If you try to use an unsupported model, then it will default to a simple word-overlap method. Expect reduced performance; we recommend mixing APIs to ensure embedding support.
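(For illustration: a profile can pin an explicit embedding provider instead of relying on the fallback. A minimal sketch; the field shape and values are copied from the qwen profile changed later in this commit:)

```json
"embedding": {
    "api": "qwen",
    "url": "https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
    "model": "text-embedding-v3"
}
```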
+## Specifying Profiles via Command Line
+
+By default, the program will use the profiles specified in `settings.js`. You can specify one or more agent profiles using the `--profiles` argument: `node main.js --profiles ./profiles/andy.json ./profiles/jill.json`

 ## Patches
bots/lintTemplate.js (new file)
@@ -0,0 +1,10 @@
+import * as skills from '../../../src/agent/library/skills.js';
+import * as world from '../../../src/agent/library/world.js';
+import Vec3 from 'vec3';
+
+const log = skills.log;
+
+export async function main(bot) {
+    /* CODE HERE */
+    log(bot, 'Code finished.');
+}
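(For illustration: `Coder.stageCode` in this commit splices generated code into this template before linting. A minimal sketch of that substitution, using the indentation loop visible in the coder.js hunks below; `code` and `code_lint_template` stand in for the real instance fields:)

```javascript
// Indent the generated body, then replace the template's placeholder
// so ESLint sees a complete, well-formed module.
let src = '';
for (let line of code.split('\n')) {
    src += `    ${line}\n`;
}
const lintSource = code_lint_template.replace('/* CODE HERE */', src);
```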
eslint.config.js (new file)
@@ -0,0 +1,25 @@
+// eslint.config.js
+import globals from "globals";
+import pluginJs from "@eslint/js";
+
+/** @type {import('eslint').Linter.Config[]} */
+export default [
+    // First, import the recommended configuration
+    pluginJs.configs.recommended,
+
+    // Then override or customize specific rules
+    {
+        languageOptions: {
+            globals: globals.browser,
+            ecmaVersion: 2021,
+            sourceType: "module",
+        },
+        rules: {
+            "no-undef": "error", // Disallow the use of undeclared variables or functions.
+            "semi": ["error", "always"], // Require the use of semicolons at the end of statements.
+            "curly": "warn", // Enforce the use of curly braces around blocks of code.
+            "no-unused-vars": "off", // Disable warnings for unused variables.
+            "no-unreachable": "off", // Disable warnings for unreachable code.
+        },
+    },
+];
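(For illustration: the coder.js changes below consume this config through ESLint's programmatic API. A minimal sketch mirroring the `lintText` usage in `lintCode`; `lintSource` is assumed to hold the staged code:)

```javascript
import { ESLint } from 'eslint';

// lintText resolves to an array of results, one per virtual file;
// each result carries { line, column, message } entries in `messages`.
const eslint = new ESLint();
const results = await eslint.lintText(lintSource);
const problems = results.map(r => r.messages).flat();
```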
@@ -59,14 +59,11 @@ def check_task_completion(agents):
     return False  # Default to failure if no conclusive result found

-def update_results_file(task_id, success_count, total_count, time_taken, experiment_results):
+def update_results_file(task_id, success_count, total_count, time_taken, experiment_results, results_filename):
     """Update the results file with current success ratio and time taken."""
-    timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
-    filename = f"results_{task_id}_{timestamp}.txt"
-
     success_ratio = success_count / total_count
-    with open(filename, 'w') as f:
+    with open(results_filename, 'w') as f:  # 'w' mode overwrites the file each time
         f.write(f"Task ID: {task_id}\n")
         f.write(f"Experiments completed: {total_count}\n")
         f.write(f"Successful experiments: {success_count}\n")
@@ -87,6 +84,7 @@ def update_results_file(task_id, success_count, total_count, time_taken, experim
     f.write(f"Average time per experiment: {total_time / total_count:.2f} seconds\n")
     f.write(f"Last updated: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n")

 def set_environment_variable_tmux_session(session_name, key, value):
     """Set an environment variable for the current process."""
     subprocess.run(["tmux", "send-keys", "-t", session_name, f"export {key}={value}", "C-m"])
@@ -309,12 +307,11 @@ def detach_process(command):
     return None

 def run_experiment(task_path, task_id, num_exp):
     """Run the specified number of experiments and track results."""
     # Read agent profiles from settings.js
     agents = read_settings(file_path="settings.js")
     print(f"Detected agents: {agents}")

+    # Generate timestamp at the start of experiments
+    timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
+    results_filename = f"results_{task_id}_{timestamp}.txt"
+    print(f"Results will be saved to: {results_filename}")
+
     success_count = 0
     experiment_results = []
@@ -340,6 +337,7 @@ def run_experiment(task_path, task_id, num_exp):
             print(f"Experiment {exp_num + 1} failed")

         end_time = time.time()
         time_taken = end_time - start_time

         # Store individual experiment result
@@ -350,6 +348,7 @@ def run_experiment(task_path, task_id, num_exp):
         # Update results file after each experiment
-        update_results_file(task_id, success_count, exp_num + 1, time_taken, experiment_results)
+        update_results_file(task_id, success_count, exp_num + 1, time_taken, experiment_results, results_filename)

         # Small delay between experiments
         time.sleep(1)
@@ -74,5 +74,19 @@
         "number_of_target": 1,
         "type": "techtree",
         "timeout": 60
     },
+    "smelt_ingot": {
+        "goal": "Smelt 1 iron ingot and 1 copper ingot",
+        "agent_count": 1,
+        "initial_inventory": {
+            "furnace": 1,
+            "raw_iron": 1,
+            "raw_copper": 1,
+            "coal": 2
+        },
+        "target": "copper_ingot",
+        "number_of_target": 1,
+        "type": "techtree",
+        "timeout": 300
+    }
 }
@@ -9,5 +9,7 @@
     "QWEN_API_KEY": "",
     "XAI_API_KEY": "",
     "MISTRAL_API_KEY": "",
-    "DEEPSEEK_API_KEY": ""
+    "DEEPSEEK_API_KEY": "",
+    "NOVITA_API_KEY": "",
+    "OPENROUTER_API_KEY": ""
 }
package.json (14 changes)
@@ -5,6 +5,8 @@
     "@google/generative-ai": "^0.2.1",
     "@huggingface/inference": "^2.8.1",
     "@mistralai/mistralai": "^1.1.0",
+    "canvas": "^3.1.0",
+    "express": "^4.18.2",
     "google-translate-api-x": "^10.7.1",
     "groq-sdk": "^0.5.0",
     "minecraft-data": "^3.78.0",
@@ -17,17 +19,21 @@
     "openai": "^4.4.0",
     "patch-package": "^8.0.0",
     "prismarine-item": "^1.15.0",
-    "prismarine-viewer": "^1.28.0",
+    "prismarine-viewer": "^1.32.0",
     "replicate": "^0.29.4",
     "ses": "^1.9.1",
-    "vec3": "^0.1.10",
-    "yargs": "^17.7.2",
     "socket.io": "^4.7.2",
     "socket.io-client": "^4.7.2",
-    "express": "^4.18.2"
+    "vec3": "^0.1.10",
+    "yargs": "^17.7.2"
 },
 "scripts": {
+    "postinstall": "patch-package",
     "start": "node main.js"
 },
+"devDependencies": {
+    "@eslint/js": "^9.13.0",
+    "eslint": "^9.13.0",
+    "globals": "^15.11.0"
+}
 }
@@ -1,35 +0,0 @@ (file deleted; it was itself a patch for mineflayer-collectblock)
-diff --git a/node_modules/mineflayer-collectblock/lib/CollectBlock.js b/node_modules/mineflayer-collectblock/lib/CollectBlock.js
-index 2c11e8c..bb49c11 100644
---- a/node_modules/mineflayer-collectblock/lib/CollectBlock.js
-+++ b/node_modules/mineflayer-collectblock/lib/CollectBlock.js
-@@ -77,10 +77,11 @@ function mineBlock(bot, block, options) {
-     }
-     yield bot.tool.equipForBlock(block, equipToolOptions);
-     // @ts-expect-error
--    if (!block.canHarvest(bot.heldItem)) {
-+    if (bot.heldItem !== null && !block.canHarvest(bot.heldItem.type)) {
-         options.targets.removeTarget(block);
-         return;
-     }
-+
-     const tempEvents = new TemporarySubscriber_1.TemporarySubscriber(bot);
-     tempEvents.subscribeTo('itemDrop', (entity) => {
-         if (entity.position.distanceTo(block.position.offset(0.5, 0.5, 0.5)) <= 0.5) {
-@@ -92,7 +93,7 @@ function mineBlock(bot, block, options) {
-     // Waiting for items to drop
-     yield new Promise(resolve => {
-         let remainingTicks = 10;
--        tempEvents.subscribeTo('physicTick', () => {
-+        tempEvents.subscribeTo('physicsTick', () => {
-             remainingTicks--;
-             if (remainingTicks <= 0) {
-                 tempEvents.cleanup();
-@@ -195,6 +196,8 @@ class CollectBlock {
-         throw (0, Util_1.error)('UnresolvedDependency', 'The mineflayer-collectblock plugin relies on the mineflayer-tool plugin to run!');
-     }
-     if (this.movements != null) {
-+        this.movements.dontMineUnderFallingBlock = false;
-+        this.movements.dontCreateFlow = false;
-         this.bot.pathfinder.setMovements(this.movements);
-     }
-     if (!optionsFull.append)
profiles/defaults/creative.json (new file)
@@ -0,0 +1,14 @@
+{
+    "modes": {
+        "self_preservation": false,
+        "unstuck": false,
+        "cowardice": false,
+        "self_defense": false,
+        "hunting": false,
+        "item_collecting": false,
+        "torch_placing": false,
+        "elbow_room": true,
+        "idle_staring": true,
+        "cheat": false
+    }
+}
profiles/defaults/god_mode.json (new file)
@@ -0,0 +1,14 @@
+{
+    "modes": {
+        "self_preservation": false,
+        "unstuck": false,
+        "cowardice": false,
+        "self_defense": false,
+        "hunting": false,
+        "item_collecting": false,
+        "torch_placing": false,
+        "elbow_room": false,
+        "idle_staring": true,
+        "cheat": true
+    }
+}
profiles/defaults/survival.json (new file)
@@ -0,0 +1,14 @@
+{
+    "modes": {
+        "self_preservation": true,
+        "unstuck": true,
+        "cowardice": false,
+        "self_defense": true,
+        "hunting": true,
+        "item_collecting": true,
+        "torch_placing": true,
+        "elbow_room": true,
+        "idle_staring": true,
+        "cheat": false
+    }
+}
@@ -1,7 +1,7 @@
 {
     "name": "Freeguy",

-    "model": "groq/llama-3.1-70b-versatile",
+    "model": "groq/llama-3.3-70b-versatile",

     "max_tokens": 8000
 }
@@ -1,5 +1,10 @@
 {
     "name": "gpt",

-    "model": "gpt-4o"
+    "model": {
+        "model": "gpt-4o",
+        "params": {
+            "temperature": 0.5
+        }
+    }
 }
@@ -1,7 +1,7 @@
 {
     "name": "LLama",

-    "model": "groq/llama-3.1-70b-versatile",
+    "model": "groq/llama-3.3-70b-versatile",

     "max_tokens": 4000,
@@ -5,9 +5,13 @@
     "model": {
         "api": "qwen",
-        "url": "https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/text-generation/generation",
+        "url": "https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
         "model": "qwen-max"
     },

-    "embedding": "openai"
+    "embedding": {
+        "api": "qwen",
+        "url": "https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
+        "model": "text-embedding-v3"
+    }
 }
settings.js
@@ -10,6 +10,8 @@ export default
     "mindserver_host": "localhost",
     "mindserver_port": process.env.MINDSERVER_PORT || 8080,

+    // the base profile is shared by all bots for default prompts/examples/modes
+    "base_profile": "./profiles/defaults/survival.json", // also see creative.json, god_mode.json
     "profiles": ((process.env.PROFILES) && JSON.parse(process.env.PROFILES)) || [
         "./andy.json",
         // "./profiles/gpt.json",
@@ -23,6 +25,7 @@ export default
         // "./profiles/deepseek.json",

         // using more than 1 profile requires you to /msg each bot individually
+        // individual profiles override values from the base profile
     ],
     "load_memory": false, // load memory from previous session
     "init_message": "Respond with hello world and your name", // sends to all on spawn
@@ -33,6 +36,7 @@ export default
     "allow_insecure_coding": false, // allows newAction command and model can write/run code on your computer. enable at own risk
     "code_timeout_mins": -1, // minutes code is allowed to run. -1 for no timeout
+    "relevant_docs_count": 5, // Parameter: -1 = all, 0 = no references, 5 = five references. If exceeding the maximum, all reference documents are returned.

     "max_messages": 15, // max number of messages to keep in context
     "num_examples": 2, // number of examples to give to the model
@@ -112,12 +112,13 @@ export class ActionManager {
         // Log the full stack trace
         console.error(err.stack);
         await this.stop();
+        err = err.toString();

-        let message = this._getBotOutputSummary() +
-            '!!Code threw exception!!\n' +
-            'Error: ' + err + '\n' +
-            'Stack trace:\n' + err.stack;
+        let message = this._getBotOutputSummary() +
+            '!!Code threw exception!!\n' +
+            'Error: ' + err + '\n' +
+            'Stack trace:\n' + err.stack+'\n';

         let interrupted = this.agent.bot.interrupt_code;
         this.agent.clearBotLogs();
         if (!interrupted && !this.agent.coder.generating) {
@@ -137,7 +138,7 @@ export class ActionManager {
             First outputs:\n${output.substring(0, MAX_OUT / 2)}\n...skipping many lines.\nFinal outputs:\n ${output.substring(output.length - MAX_OUT / 2)}`;
         }
         else {
-            output = 'Code output:\n' + output;
+            output = 'Code output:\n' + output.toString();
         }
         return output;
     }
@@ -1,6 +1,6 @@
 import { History } from './history.js';
 import { Coder } from './coder.js';
-import { Prompter } from './prompter.js';
+import { Prompter } from '../models/prompter.js';
 import { initModes } from './modes.js';
 import { initBot } from '../utils/mcdata.js';
 import { containsCommand, commandExists, executeCommand, truncCommandMessage, isAction, blacklistCommands } from './commands/index.js';
@@ -92,7 +92,11 @@ export class Agent {
                 this.startEvents();

-                this.task.initBotTask();
+                if (!load_mem) {
+                    this.task.initBotTask();
+                }

             } catch (error) {
                 console.error('Error in spawn event:', error);
                 process.exit(0);
@@ -100,11 +104,10 @@ export class Agent {
             });
         } catch (error) {
-            // Ensure we're not losing error details
-            console.error('Agent start failed with error:', {
-                message: error.message || 'No error message',
-                stack: error.stack || 'No stack trace',
-                error: error
-            });
+            console.error('Agent start failed with error')
+            console.error(error.message);
+            console.error(error.stack);

             throw error; // Re-throw with preserved details
         }
     }
@@ -140,6 +143,8 @@ export class Agent {
                 console.error('Error handling message:', error);
             }
         }
+
+        this.respondFunc = respondFunc

         this.bot.on('whisper', respondFunc);
         if (settings.profiles.length === 1)
@@ -447,6 +452,8 @@ export class Agent {
         if (this.task.data) {
             let res = this.task.isDone();
             if (res) {
+                await this.history.add('system', `${res.message} ended with code : ${res.code}`);
+                await this.history.save();
                 console.log('Task finished:', res.message);
                 this.killAll();
             }
@@ -42,6 +42,14 @@ class AgentServerProxy {
             console.log(`Restarting agent: ${agentName}`);
             this.agent.cleanKill();
         });
+
+        this.socket.on('send-message', (agentName, message) => {
+            try {
+                this.agent.respondFunc("NO USERNAME", message);
+            } catch (error) {
+                console.error('Error: ', JSON.stringify(error, Object.getOwnPropertyNames(error)));
+            }
+        });
     }

     login() {
@@ -4,6 +4,7 @@ import { makeCompartment } from './library/lockdown.js';
 import * as skills from './library/skills.js';
 import * as world from './library/world.js';
 import { Vec3 } from 'vec3';
+import {ESLint} from "eslint";

 export class Coder {
     constructor(agent) {
@@ -12,15 +13,62 @@ export class Coder {
         this.fp = '/bots/'+agent.name+'/action-code/';
         this.generating = false;
         this.code_template = '';
+        this.code_lint_template = '';

-        readFile('./bots/template.js', 'utf8', (err, data) => {
+        readFile('./bots/execTemplate.js', 'utf8', (err, data) => {
             if (err) throw err;
             this.code_template = data;
         });
+        readFile('./bots/lintTemplate.js', 'utf8', (err, data) => {
+            if (err) throw err;
+            this.code_lint_template = data;
+        });
         mkdirSync('.' + this.fp, { recursive: true });
     }

+    async lintCode(code) {
+        let result = '#### CODE ERROR INFO ###\n';
+        // Extract everything in the code between the beginning of 'skills./world.' and the '('
+        const skillRegex = /(?:skills|world)\.(.*?)\(/g;
+        const skills = [];
+        let match;
+        while ((match = skillRegex.exec(code)) !== null) {
+            skills.push(match[1]);
+        }
+        const allDocs = await this.agent.prompter.skill_libary.getAllSkillDocs();
+        // check function exists
+        const missingSkills = skills.filter(skill => !!allDocs[skill]);
+        if (missingSkills.length > 0) {
+            result += 'These functions do not exist. Please modify the correct function name and try again.\n';
+            result += '### FUNCTIONS NOT FOUND ###\n';
+            result += missingSkills.join('\n');
+            console.log(result)
+            return result;
+        }
+
+        const eslint = new ESLint();
+        const results = await eslint.lintText(code);
+        const codeLines = code.split('\n');
+        const exceptions = results.map(r => r.messages).flat();
+
+        if (exceptions.length > 0) {
+            exceptions.forEach((exc, index) => {
+                if (exc.line && exc.column ) {
+                    const errorLine = codeLines[exc.line - 1]?.trim() || 'Unable to retrieve error line content';
+                    result += `#ERROR ${index + 1}\n`;
+                    result += `Message: ${exc.message}\n`;
+                    result += `Location: Line ${exc.line}, Column ${exc.column}\n`;
+                    result += `Related Code Line: ${errorLine}\n`;
+                }
+            });
+            result += 'The code contains exceptions and cannot continue execution.';
+        } else {
+            return null;//no error
+        }
+
+        return result ;
+    }
-    // write custom code to file and import it
+    // write custom code to file and prepare for evaluation
     async stageCode(code) {
         code = this.sanitizeCode(code);
@@ -35,6 +83,7 @@ export class Coder {
         for (let line of code.split('\n')) {
             src += `    ${line}\n`;
         }
+        let src_lint_copy = this.code_lint_template.replace('/* CODE HERE */', src);
         src = this.code_template.replace('/* CODE HERE */', src);

         let filename = this.file_counter + '.js';
@@ -46,7 +95,7 @@ export class Coder {
         // });
         // } commented for now, useful to keep files for debugging
         this.file_counter++;

         let write_result = await this.writeFilePromise('.' + this.fp + filename, src);
         // This is where we determine the environment the agent's code should be exposed to.
         // It will only have access to these things, (in addition to basic javascript objects like Array, Object, etc.)
@@ -63,8 +112,7 @@ export class Coder {
             console.error('Error writing code execution file: ' + result);
             return null;
         }
-
-        return { main: mainFn };
+        return { func:{main: mainFn}, src_lint_copy: src_lint_copy };
     }

     sanitizeCode(code) {
@@ -115,7 +163,6 @@ export class Coder {
         for (let i=0; i<5; i++) {
             if (this.agent.bot.interrupt_code)
                 return interrupt_return;
-            console.log(messages)
             let res = await this.agent.prompter.promptCoding(JSON.parse(JSON.stringify(messages)));
             if (this.agent.bot.interrupt_code)
                 return interrupt_return;
@@ -140,8 +187,15 @@ export class Coder {
                 continue;
             }
             code = res.substring(res.indexOf('```')+3, res.lastIndexOf('```'));

-            const executionModuleExports = await this.stageCode(code);
+            const result = await this.stageCode(code);
+            const executionModuleExports = result.func;
+            let src_lint_copy = result.src_lint_copy;
+            const analysisResult = await this.lintCode(src_lint_copy);
+            if (analysisResult) {
+                const message = 'Error: Code syntax error. Please try again:'+'\n'+analysisResult+'\n';
+                messages.push({ role: 'system', content: message });
+                continue;
+            }
             if (!executionModuleExports) {
                 agent_history.add('system', 'Failed to stage code, something is wrong.');
                 return {success: false, message: null, interrupted: false, timedout: false};
@@ -152,10 +206,10 @@ export class Coder {
             }, { timeout: settings.code_timeout_mins });
             if (code_return.interrupted && !code_return.timedout)
                 return { success: false, message: null, interrupted: true, timedout: false };
-            console.log("Code generation result:", code_return.success, code_return.message);
+            console.log("Code generation result:", code_return.success, code_return.message.toString());

             if (code_return.success) {
-                const summary = "Summary of newAction\nAgent wrote this code: \n```" + this.sanitizeCode(code) + "```\nCode Output:\n" + code_return.message;
+                const summary = "Summary of newAction\nAgent wrote this code: \n```" + this.sanitizeCode(code) + "```\nCode Output:\n" + code_return.message.toString();
                 return { success: true, message: summary, interrupted: false, timedout: false };
             }
@@ -170,5 +224,4 @@ export class Coder {
         }
         return { success: false, message: null, interrupted: false, timedout: true };
     }
-
 }
@@ -33,8 +33,10 @@ export const actionsList = [
     },
     perform: async function (agent, prompt) {
         // just ignore prompt - it is now in context in chat history
-        if (!settings.allow_insecure_coding)
+        if (!settings.allow_insecure_coding) {
+            agent.openChat('newAction is disabled. Enable with allow_insecure_coding=true in settings.js');
             return 'newAction not allowed! Code writing is disabled in settings. Notify the user.';
+        }
         return await agent.coder.generateCode(agent.history);
     }
 },
@@ -160,7 +160,7 @@ export function parseCommandMessage(message) {
             suppressNoDomainWarning = true; //Don't spam console. Only give the warning once.
         }
     } else if(param.type === 'BlockName') { //Check that there is a block with this name
-        if(getBlockId(arg) == null) return `Invalid block type: ${arg}.`
+        if(getBlockId(arg) == null && arg !== 'air') return `Invalid block type: ${arg}.`
     } else if(param.type === 'ItemName') { //Check that there is an item with this name
         if(getItemId(arg) == null) return `Invalid item type: ${arg}.`
     }
@@ -178,6 +178,42 @@ export const queryList = [
         return "Saved place names: " + agent.memory_bank.getKeys();
     }
 },
+{
+    name: '!getCraftingPlan',
+    description: "Provides a comprehensive crafting plan for a specified item. This includes a breakdown of required ingredients, the exact quantities needed, and an analysis of missing ingredients or extra items needed based on the bot's current inventory.",
+    params: {
+        targetItem: {
+            type: 'string',
+            description: 'The item that we are trying to craft'
+        },
+        quantity: {
+            type: 'int',
+            description: 'The quantity of the item that we are trying to craft',
+            optional: true,
+            domain: [1, Infinity, '[)'], // Quantity must be at least 1,
+            default: 1
+        }
+    },
+    perform: function (agent, targetItem, quantity = 1) {
+        let bot = agent.bot;
+
+        // Fetch the bot's inventory
+        const curr_inventory = world.getInventoryCounts(bot);
+        const target_item = targetItem;
+        let existingCount = curr_inventory[target_item] || 0;
+        let prefixMessage = '';
+        if (existingCount > 0) {
+            curr_inventory[target_item] -= existingCount;
+            prefixMessage = `You already have ${existingCount} ${target_item} in your inventory. If you need to craft more,\n`;
+        }
+
+        // Generate crafting plan
+        let craftingPlan = mc.getDetailedCraftingPlan(target_item, quantity, curr_inventory);
+        craftingPlan = prefixMessage + craftingPlan;
+        console.log(craftingPlan);
+        return pad(craftingPlan);
+    },
+},
 {
     name: '!help',
     description: 'Lists all available commands and their descriptions.',
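(For illustration: a hedged sketch of exercising the new query's handler directly; in normal play it is triggered by the in-game `!getCraftingPlan` command, and the item name here is illustrative:)

```javascript
// Look up the query added above and run its perform handler.
const craftingQuery = queryList.find(q => q.name === '!getCraftingPlan');
const plan = craftingQuery.perform(agent, 'iron_pickaxe', 1);
console.log(plan);
```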
@@ -42,7 +42,7 @@ export class History {
         console.log("Memory updated to: ", this.memory);
     }

-    appendFullHistory(to_store) {
+    async appendFullHistory(to_store) {
         if (this.full_history_fp === undefined) {
             const string_timestamp = new Date().toLocaleString().replace(/[/:]/g, '-').replace(/ /g, '').replace(/,/g, '_');
             this.full_history_fp = `./bots/${this.name}/histories/${string_timestamp}.json`;
@@ -75,7 +75,7 @@ export class History {
             chunk.push(this.turns.shift()); // remove until turns starts with system/user message

         await this.summarizeMemories(chunk);
-        this.appendFullHistory(chunk);
+        await this.appendFullHistory(chunk);
     }
 }
@@ -3,20 +3,21 @@ import * as world from './world.js';

 export function docHelper(functions, module_name) {
-    let docstring = '';
+    let docArray = [];
     for (let skillFunc of functions) {
         let str = skillFunc.toString();
-        if (str.includes('/**')){
-            docstring += module_name+'.'+skillFunc.name;
-            docstring += str.substring(str.indexOf('/**')+3, str.indexOf('**/')) + '\n';
+        if (str.includes('/**')) {
+            let docEntry = `${module_name}.${skillFunc.name}\n`;
+            docEntry += str.substring(str.indexOf('/**') + 3, str.indexOf('**/')).trim();
+            docArray.push(docEntry);
         }
     }
-    return docstring;
+    return docArray;
 }

 export function getSkillDocs() {
-    let docstring = "\n*SKILL DOCS\nThese skills are javascript functions that can be called when writing actions and skills.\n";
-    docstring += docHelper(Object.values(skills), 'skills');
-    docstring += docHelper(Object.values(world), 'world');
-    return docstring + '*\n';
+    let docArray = [];
+    docArray = docArray.concat(docHelper(Object.values(skills), 'skills'));
+    docArray = docArray.concat(docHelper(Object.values(world), 'world'));
+    return docArray;
 }
src/agent/library/skill_library.js (new file)
@@ -0,0 +1,69 @@
+import { cosineSimilarity } from '../../utils/math.js';
+import { getSkillDocs } from './index.js';
+import { wordOverlapScore } from '../../utils/text.js';
+
+export class SkillLibrary {
+    constructor(agent, embedding_model) {
+        this.agent = agent;
+        this.embedding_model = embedding_model;
+        this.skill_docs_embeddings = {};
+        this.skill_docs = null;
+    }
+    async initSkillLibrary() {
+        const skillDocs = getSkillDocs();
+        this.skill_docs = skillDocs;
+        if (this.embedding_model) {
+            try {
+                const embeddingPromises = skillDocs.map((doc) => {
+                    return (async () => {
+                        let func_name_desc = doc.split('\n').slice(0, 2).join('');
+                        this.skill_docs_embeddings[doc] = await this.embedding_model.embed(func_name_desc);
+                    })();
+                });
+                await Promise.all(embeddingPromises);
+            } catch (error) {
+                console.warn('Error with embedding model, using word-overlap instead.');
+                this.embedding_model = null;
+            }
+        }
+    }
+
+    async getAllSkillDocs() {
+        return this.skill_docs;
+    }
+
+    async getRelevantSkillDocs(message, select_num) {
+        if(!message) // use filler message if none is provided
+            message = '(no message)';
+        let skill_doc_similarities = [];
+        if (!this.embedding_model) {
+            skill_doc_similarities = Object.keys(this.skill_docs)
+                .map(doc_key => ({
+                    doc_key,
+                    similarity_score: wordOverlapScore(message, this.skill_docs[doc_key])
+                }))
+                .sort((a, b) => b.similarity_score - a.similarity_score);
+        }
+        else {
+            let latest_message_embedding = '';
+            skill_doc_similarities = Object.keys(this.skill_docs_embeddings)
+                .map(doc_key => ({
+                    doc_key,
+                    similarity_score: cosineSimilarity(latest_message_embedding, this.skill_docs_embeddings[doc_key])
+                }))
+                .sort((a, b) => b.similarity_score - a.similarity_score);
+        }
+
+        let length = skill_doc_similarities.length;
+        if (typeof select_num !== 'number' || isNaN(select_num) || select_num < 0) {
+            select_num = length;
+        } else {
+            select_num = Math.min(Math.floor(select_num), length);
+        }
+        let selected_docs = skill_doc_similarities.slice(0, select_num);
+        let relevant_skill_docs = '#### RELEVENT DOCS INFO ###\nThe following functions are listed in descending order of relevance.\n';
+        relevant_skill_docs += 'SkillDocs:\n'
+        relevant_skill_docs += selected_docs.map(doc => `${doc.doc_key}`).join('\n### ');
+        return relevant_skill_docs;
+    }
+}
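(For illustration: a hedged sketch of the lifecycle this class expects, based only on the prompter and coder hunks elsewhere in this commit; the query string is illustrative:)

```javascript
// Build the library, embed all skill docs, then rank the top matches.
const skillLibrary = new SkillLibrary(agent, embedding_model);
await skillLibrary.initSkillLibrary();

// Falls back to word-overlap scoring when no embedding model is available.
const docs = await skillLibrary.getRelevantSkillDocs('collect wood and craft a table', 5);
```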
@@ -79,7 +79,7 @@ export async function craftRecipe(bot, itemName, num=1) {
         }
     }
     if (!recipes || recipes.length === 0) {
-        log(bot, `You do not have the resources to craft a ${itemName}. It requires: ${Object.entries(mc.getItemCraftingRecipes(itemName)[0]).map(([key, value]) => `${key}: ${value}`).join(', ')}.`);
+        log(bot, `You do not have the resources to craft a ${itemName}. It requires: ${Object.entries(mc.getItemCraftingRecipes(itemName)[0][0]).map(([key, value]) => `${key}: ${value}`).join(', ')}.`);
         if (placedTable) {
             await collectBlock(bot, 'crafting_table', 1);
         }
@@ -111,6 +111,18 @@ export async function craftRecipe(bot, itemName, num=1) {
     return true;
 }

+export async function wait(seconds) {
+    /**
+     * Waits for the given number of seconds.
+     * @param {number} seconds, the number of seconds to wait.
+     * @returns {Promise<boolean>} true if the wait was successful, false otherwise.
+     * @example
+     * await skills.wait(10);
+     **/
+    // setTimeout is disabled to prevent unawaited code, so this is a safe alternative
+    await new Promise(resolve => setTimeout(resolve, seconds * 1000));
+    return true;
+}

 export async function smeltItem(bot, itemName, num=1) {
     /**
@@ -1267,7 +1279,7 @@ export async function tillAndSow(bot, x, y, z, seedType=null) {
     * @returns {Promise<boolean>} true if the ground was tilled, false otherwise.
     * @example
     * let position = world.getPosition(bot);
-    * await skills.till(bot, position.x, position.y - 1, position.x);
+    * await skills.tillAndSow(bot, position.x, position.y - 1, position.x, "wheat");
     **/
     x = Math.round(x);
     y = Math.round(y);
@@ -1275,8 +1287,14 @@ export async function tillAndSow(bot, x, y, z, seedType=null) {
     let block = bot.blockAt(new Vec3(x, y, z));

     if (bot.modes.isOn('cheat')) {
-        placeBlock(bot, x, y, z, 'farmland');
-        placeBlock(bot, x, y+1, z, seedType);
+        let to_remove = ['_seed', '_seeds'];
+        for (let remove of to_remove) {
+            if (seedType.endsWith(remove)) {
+                seedType = seedType.replace(remove, '');
+            }
+        }
+        placeBlock(bot, 'farmland', x, y, z);
+        placeBlock(bot, seedType, x, y+1, z);
         return true;
     }
@@ -204,7 +204,7 @@ class ItemWrapper {
     }

     createChildren() {
-        let recipes = mc.getItemCraftingRecipes(this.name);
+        let recipes = mc.getItemCraftingRecipes(this.name).map(([recipe, craftedCount]) => recipe);
         if (recipes) {
             for (let recipe of recipes) {
                 let includes_blacklisted = false;
@@ -38,7 +38,7 @@ export class SelfPrompter {
         let no_command_count = 0;
         const MAX_NO_COMMAND = 3;
         while (!this.interrupt) {
-            const msg = `You are self-prompting with the goal: '${this.prompt}'. Your next response MUST contain a command !withThisSyntax. Respond:`;
+            const msg = `You are self-prompting with the goal: '${this.prompt}'. Your next response MUST contain a command with this syntax: !commandName. Respond:`;

             let used_command = await this.agent.handleMessage('system', msg, -1);
             if (!used_command) {
@@ -100,7 +100,7 @@ export class Task {
             return;
         let bot = this.agent.bot;
         let name = this.agent.name;

         bot.chat(`/clear ${name}`);
         console.log(`Cleared ${name}'s inventory.`);
@@ -110,13 +110,13 @@ export class Task {
         }
         //wait for a bit so inventory is cleared
         await new Promise((resolve) => setTimeout(resolve, 500));
+        let initial_inventory = null;
         if (this.data.agent_count > 1) {
-            var initial_inventory = this.data.initial_inventory[this.agent.count_id.toString()];
+            initial_inventory = this.data.initial_inventory[this.agent.count_id.toString()];
             console.log("Initial inventory:", initial_inventory);
         } else if (this.data) {
             console.log("Initial inventory:", this.data.initial_inventory);
-            var initial_inventory = this.data.initial_inventory;
+            initial_inventory = this.data.initial_inventory;
         }

         if ("initial_inventory" in this.data) {
@@ -3,8 +3,9 @@ import { strictFormat } from '../utils/text.js';
 import { getKey } from '../utils/keys.js';

 export class Claude {
-    constructor(model_name, url) {
+    constructor(model_name, url, params) {
         this.model_name = model_name;
+        this.params = params || {};

         let config = {};
         if (url)
@@ -20,13 +21,16 @@ export class Claude {
         let res = null;
         try {
             console.log('Awaiting anthropic api response...')
             // console.log('Messages:', messages);
+            if (!this.params.max_tokens) {
+                this.params.max_tokens = 4096;
+            }
             const resp = await this.anthropic.messages.create({
                 model: this.model_name || "claude-3-sonnet-20240229",
                 system: systemMessage,
-                max_tokens: 2048,
                 messages: messages,
+                ...(this.params || {})
             });

             console.log('Received.')
             res = resp.content[0].text;
         }
@@ -3,8 +3,9 @@ import { getKey, hasKey } from '../utils/keys.js';
 import { strictFormat } from '../utils/text.js';

 export class DeepSeek {
-    constructor(model_name, url) {
+    constructor(model_name, url, params) {
         this.model_name = model_name;
+        this.params = params;

         let config = {};

@@ -23,6 +24,7 @@ export class DeepSeek {
             model: this.model_name || "deepseek-chat",
             messages,
             stop: stop_seq,
+            ...(this.params || {})
         };

         let res = null;
@@ -1,10 +1,11 @@
 import { GoogleGenerativeAI } from '@google/generative-ai';
-import { toSinglePrompt } from '../utils/text.js';
+import { toSinglePrompt, strictFormat } from '../utils/text.js';
 import { getKey } from '../utils/keys.js';

 export class Gemini {
-    constructor(model_name, url) {
+    constructor(model_name, url, params) {
         this.model_name = model_name;
+        this.params = params;
         this.url = url;
         this.safetySettings = [
             {
@@ -34,29 +35,47 @@ export class Gemini {

     async sendRequest(turns, systemMessage) {
         let model;
+        const modelConfig = {
+            model: this.model_name || "gemini-1.5-flash",
+            // systemInstruction does not work bc google is trash
+        };
+
         if (this.url) {
             model = this.genAI.getGenerativeModel(
-                { model: this.model_name || "gemini-1.5-flash" },
+                modelConfig,
                 { baseUrl: this.url },
                 { safetySettings: this.safetySettings }
             );
         } else {
             model = this.genAI.getGenerativeModel(
-                { model: this.model_name || "gemini-1.5-flash" },
+                modelConfig,
                 { safetySettings: this.safetySettings }
             );
         }

-        const stop_seq = '***';
-        const prompt = toSinglePrompt(turns, systemMessage, stop_seq, 'model');
         console.log('Awaiting Google API response...');
-        const result = await model.generateContent(prompt);
+
+        turns.unshift({ role: 'system', content: systemMessage });
+        turns = strictFormat(turns);
+        let contents = [];
+        for (let turn of turns) {
+            contents.push({
+                role: turn.role === 'assistant' ? 'model' : 'user',
+                parts: [{ text: turn.content }]
+            });
+        }
+
+        const result = await model.generateContent({
+            contents,
+            generationConfig: {
+                ...(this.params || {})
+            }
+        });
         const response = await result.response;
         const text = response.text();
         console.log('Received.');
-        if (!text.includes(stop_seq)) return text;
-        const idx = text.indexOf(stop_seq);
-        return text.slice(0, idx);
+        return text;
     }

     async embed(text) {
@@ -3,8 +3,9 @@ import { getKey, hasKey } from '../utils/keys.js';
 import { strictFormat } from '../utils/text.js';

 export class GPT {
-    constructor(model_name, url) {
+    constructor(model_name, url, params) {
         this.model_name = model_name;
+        this.params = params;

         let config = {};
         if (url)
@@ -25,6 +26,7 @@ export class GPT {
             model: this.model_name || "gpt-3.5-turbo",
             messages,
             stop: stop_seq,
+            ...(this.params || {})
         };
         if (this.model_name.includes('o1')) {
             pack.messages = strictFormat(messages);
@@ -32,8 +34,9 @@ export class GPT {
         }

         let res = null;
+
         try {
-            console.log('Awaiting openai api response...')
+            console.log('Awaiting openai api response from model', this.model_name)
             // console.log('Messages:', messages);
             let completion = await this.openai.chat.completions.create(pack);
             if (completion.choices[0].finish_reason == 'length')
@@ -3,8 +3,10 @@ import { getKey } from '../utils/keys.js';

 // xAI doesn't supply a SDK for their models, but fully supports OpenAI and Anthropic SDKs
 export class Grok {
-    constructor(model_name, url) {
+    constructor(model_name, url, params) {
         this.model_name = model_name;
         this.url = url;
+        this.params = params;

         let config = {};
         if (url)
@@ -23,7 +25,8 @@ export class Grok {
         const pack = {
             model: this.model_name || "grok-beta",
             messages,
-            stop: [stop_seq]
+            stop: [stop_seq],
+            ...(this.params || {})
         };

         let res = null;
@@ -4,12 +4,13 @@ import { getKey } from '../utils/keys.js';

 // Umbrella class for Mixtral, LLama, Gemma...
 export class GroqCloudAPI {
-    constructor(model_name, url, max_tokens=16384) {
+    constructor(model_name, url, params) {
         this.model_name = model_name;
         this.url = url;
-        this.max_tokens = max_tokens;
+        this.params = params || {};
         // ReplicateAPI theft :3
         if (this.url) {
             console.warn("Groq Cloud has no implementation for custom URLs. Ignoring provided URL.");
         }
         this.groq = new Groq({ apiKey: getKey('GROQCLOUD_API_KEY') });
@@ -20,14 +21,15 @@ export class GroqCloudAPI {
         let res = null;
         try {
             console.log("Awaiting Groq response...");
+            if (!this.params.max_tokens) {
+                this.params.max_tokens = 16384;
+            }
             let completion = await this.groq.chat.completions.create({
                 "messages": messages,
                 "model": this.model_name || "mixtral-8x7b-32768",
-                "temperature": 0.2,
-                "max_tokens": this.max_tokens, // maximum token limit, differs from model to model
-                "top_p": 1,
                 "stream": true,
-                "stop": stop_seq // "***"
+                "stop": stop_seq,
+                ...(this.params || {})
             });

             let temp_res = "";
@@ -46,6 +48,6 @@ export class GroqCloudAPI {
     }

     async embed(text) {
-        console.log("There is no support for embeddings in Groq support. However, the following text was provided: " + text);
+        throw new Error('Embeddings are not supported by Groq.');
     }
 }
@@ -3,9 +3,10 @@ import {getKey} from '../utils/keys.js';
 import {HfInference} from "@huggingface/inference";

 export class HuggingFace {
-    constructor(model_name, url) {
+    constructor(model_name, url, params) {
         this.model_name = model_name.replace('huggingface/','');
         this.url = url;
+        this.params = params;

         if (this.url) {
             console.warn("Hugging Face doesn't support custom urls!");
@@ -25,7 +26,8 @@ export class HuggingFace {
         console.log('Awaiting Hugging Face API response...');
         for await (const chunk of this.huggingface.chatCompletionStream({
             model: model_name,
-            messages: [{ role: "user", content: input }]
+            messages: [{ role: "user", content: input }],
+            ...(this.params || {})
         })) {
             res += (chunk.choices[0]?.delta?.content || "");
         }
@@ -1,8 +1,9 @@
 import { strictFormat } from '../utils/text.js';

 export class Local {
-    constructor(model_name, url) {
+    constructor(model_name, url, params) {
         this.model_name = model_name;
+        this.params = params;
         this.url = url || 'http://127.0.0.1:11434';
         this.chat_endpoint = '/api/chat';
         this.embedding_endpoint = '/api/embeddings';
@@ -15,7 +16,12 @@ export class Local {
         let res = null;
         try {
             console.log(`Awaiting local response... (model: ${model})`)
-            res = await this.send(this.chat_endpoint, {model: model, messages: messages, stream: false});
+            res = await this.send(this.chat_endpoint, {
+                model: model,
+                messages: messages,
+                stream: false,
+                ...(this.params || {})
+            });
             if (res)
                 res = res['message']['content'];
         }
@@ -5,10 +5,13 @@ import { strictFormat } from '../utils/text.js';
 export class Mistral {
     #client;

-    constructor(model_name, url) {
+    constructor(model_name, url, params) {
+        this.model_name = model_name;
+        this.params = params;

         if (typeof url === "string") {
             console.warn("Mistral does not support custom URL's, ignoring!");
         }

         if (!getKey("MISTRAL_API_KEY")) {
@@ -22,8 +25,6 @@ export class Mistral {
         );

-        this.model_name = model_name;
-
         // Prevents the following code from running when model not specified
         if (typeof this.model_name === "undefined") return;

@@ -49,6 +50,7 @@ export class Mistral {
         const response = await this.#client.chat.complete({
             model,
             messages,
+            ...(this.params || {})
         });

         result = response.choices[0].message.content;
@@ -1,11 +1,14 @@
 import OpenAIApi from 'openai';
 import { getKey } from '../utils/keys.js';
+import { strictFormat } from '../utils/text.js';

 // llama, mistral
 export class Novita {
-    constructor(model_name, url) {
+    constructor(model_name, url, params) {
         this.model_name = model_name.replace('novita/', '');
         this.url = url || 'https://api.novita.ai/v3/openai';
+        this.params = params;

         let config = {
             baseURL: this.url
@@ -17,10 +20,15 @@ export class Novita {

     async sendRequest(turns, systemMessage, stop_seq='***') {
         let messages = [{'role': 'system', 'content': systemMessage}].concat(turns);

+        messages = strictFormat(messages);
+
         const pack = {
             model: this.model_name || "meta-llama/llama-3.1-70b-instruct",
             messages,
             stop: [stop_seq],
+            ...(this.params || {})
         };

         let res = null;
@@ -41,6 +49,18 @@ export class Novita {
             res = 'My brain disconnected, try again.';
         }
     }
+        if (res.includes('<think>')) {
+            let start = res.indexOf('<think>');
+            let end = res.indexOf('</think>') + 8;
+            if (start != -1) {
+                if (end != -1) {
+                    res = res.substring(0, start) + res.substring(end);
+                } else {
+                    res = res.substring(0, start+7);
+                }
+            }
+            res = res.trim();
+        }
     return res;
 }
58
src/models/openrouter.js
Normal file
58
src/models/openrouter.js
Normal file
|
@@ -0,0 +1,58 @@
import OpenAIApi from 'openai';
import { getKey, hasKey } from '../utils/keys.js';
import { strictFormat } from '../utils/text.js';

export class OpenRouter {
    constructor(model_name, url) {
        this.model_name = model_name;

        let config = {};
        config.baseURL = url || 'https://openrouter.ai/api/v1';

        const apiKey = getKey('OPENROUTER_API_KEY');
        if (!apiKey) {
            console.error('Error: OPENROUTER_API_KEY not found. Make sure it is set properly.');
        }

        // Pass the API key to OpenAI compatible Api
        config.apiKey = apiKey;

        this.openai = new OpenAIApi(config);
    }

    async sendRequest(turns, systemMessage, stop_seq='*') {
        let messages = [{ role: 'system', content: systemMessage }, ...turns];
        messages = strictFormat(messages);

        // Choose a valid model from openrouter.ai (for example, "openai/gpt-4o")
        const pack = {
            model: this.model_name,
            messages,
            stop: stop_seq
        };

        let res = null;
        try {
            console.log('Awaiting openrouter api response...');
            let completion = await this.openai.chat.completions.create(pack);
            if (!completion?.choices?.[0]) {
                console.error('No completion or choices returned:', completion);
                return 'No response received.';
            }
            if (completion.choices[0].finish_reason === 'length') {
                throw new Error('Context length exceeded');
            }
            console.log('Received.');
            res = completion.choices[0].message.content;
        } catch (err) {
            console.error('Error while awaiting response:', err);
            // If the error indicates a context-length problem, we can slice the turns array, etc.
            res = 'My brain disconnected, try again.';
        }
        return res;
    }

    async embed(text) {
        throw new Error('Embeddings are not supported by Openrouter.');
    }
}
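Usage sketch for the new class (model slug illustrative): with the prompter changes later in this commit, an `openrouter/`-prefixed model name in a profile routes here, and the prefix is stripped before the request is built.

    { "name": "andy", "model": "openrouter/openai/gpt-4o" }
    // _selectAPI maps the 'openrouter/' prefix to api: 'openrouter';
    // _createModel strips it, so this class sends model: 'openai/gpt-4o'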
@@ -1,40 +1,51 @@
import { readFileSync, mkdirSync, writeFileSync} from 'fs';
import { Examples } from '../utils/examples.js';
import { getCommandDocs } from './commands/index.js';
import { getSkillDocs } from './library/index.js';
import { getCommandDocs } from '../agent/commands/index.js';
import { getSkillDocs } from '../agent/library/index.js';
import { SkillLibrary } from "../agent/library/skill_library.js";
import { stringifyTurns } from '../utils/text.js';
import { getCommand } from './commands/index.js';
import { getCommand } from '../agent/commands/index.js';
import settings from '../../settings.js';

import { Gemini } from '../models/gemini.js';
import { GPT } from '../models/gpt.js';
import { Claude } from '../models/claude.js';
import { Mistral } from '../models/mistral.js';
import { ReplicateAPI } from '../models/replicate.js';
import { Local } from '../models/local.js';
import { Novita } from '../models/novita.js';
import { GroqCloudAPI } from '../models/groq.js';
import { HuggingFace } from '../models/huggingface.js';
import { Qwen } from "../models/qwen.js";
import { Grok } from "../models/grok.js";
import { DeepSeek } from '../models/deepseek.js';
import { Gemini } from './gemini.js';
import { GPT } from './gpt.js';
import { Claude } from './claude.js';
import { Mistral } from './mistral.js';
import { ReplicateAPI } from './replicate.js';
import { Local } from './local.js';
import { Novita } from './novita.js';
import { GroqCloudAPI } from './groq.js';
import { HuggingFace } from './huggingface.js';
import { Qwen } from "./qwen.js";
import { Grok } from "./grok.js";
import { DeepSeek } from './deepseek.js';
import { OpenRouter } from './openrouter.js';

export class Prompter {
    constructor(agent, fp) {
        this.agent = agent;
        this.profile = JSON.parse(readFileSync(fp, 'utf8'));
        this.default_profile = JSON.parse(readFileSync('./profiles/_default.json', 'utf8'));
        let default_profile = JSON.parse(readFileSync('./profiles/defaults/_default.json', 'utf8'));
        let base_fp = settings.base_profile;
        let base_profile = JSON.parse(readFileSync(base_fp, 'utf8'));

        for (let key in this.default_profile) {
            if (this.profile[key] === undefined)
                this.profile[key] = this.default_profile[key];
        // first use defaults to fill in missing values in the base profile
        for (let key in default_profile) {
            if (base_profile[key] === undefined)
                base_profile[key] = default_profile[key];
        }
        // then use base profile to fill in missing values in the individual profile
        for (let key in base_profile) {
            if (this.profile[key] === undefined)
                this.profile[key] = base_profile[key];
        }
        // base overrides default, individual overrides base

        this.convo_examples = null;
        this.coding_examples = null;

        let name = this.profile.name;
        let chat = this.profile.model;
        this.cooldown = this.profile.cooldown ? this.profile.cooldown : 0;
        this.last_prompt_time = 0;
        this.awaiting_coding = false;
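A worked sketch of the three-layer merge, with hypothetical file contents:

    // defaults/_default.json : { "cooldown": 0, "conversing": "<default prompt>" }
    // settings.base_profile  : { "cooldown": 500 }
    // individual profile     : { "name": "andy", "model": "gpt-4o-mini" }
    //
    // merged result: cooldown = 500 (base overrides default), name and model
    // from the individual profile, conversing inherited from the defaults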
@@ -43,68 +54,22 @@ export class Prompter {
        let max_tokens = null;
        if (this.profile.max_tokens)
            max_tokens = this.profile.max_tokens;
        if (typeof chat === 'string' || chat instanceof String) {
            chat = {model: chat};
            if (chat.model.includes('gemini'))
                chat.api = 'google';
            else if (chat.model.includes('gpt') || chat.model.includes('o1'))
                chat.api = 'openai';
            else if (chat.model.includes('claude'))
                chat.api = 'anthropic';
            else if (chat.model.includes('huggingface/'))
                chat.api = "huggingface";
            else if (chat.model.includes('meta/') || chat.model.includes('replicate/'))
                chat.api = 'replicate';
            else if (chat.model.includes('mistralai/') || chat.model.includes("mistral/"))
                chat.api = 'mistral';
            else if (chat.model.includes("groq/") || chat.model.includes("groqcloud/"))
                chat.api = 'groq';
            else if (chat.model.includes('novita/'))
                chat.api = 'novita';
            else if (chat.model.includes('qwen'))
                chat.api = 'qwen';
            else if (chat.model.includes('grok'))
                chat.api = 'xai';
            else if (chat.model.includes('deepseek'))
                chat.api = 'deepseek';
            else
                chat.api = 'ollama';
        }

        console.log('Using chat settings:', chat);
        let chat_model_profile = this._selectAPI(this.profile.model);
        this.chat_model = this._createModel(chat_model_profile);

        if (chat.api === 'google')
            this.chat_model = new Gemini(chat.model, chat.url);
        else if (chat.api === 'openai')
            this.chat_model = new GPT(chat.model, chat.url);
        else if (chat.api === 'anthropic')
            this.chat_model = new Claude(chat.model, chat.url);
        else if (chat.api === 'replicate')
            this.chat_model = new ReplicateAPI(chat.model, chat.url);
        else if (chat.api === 'ollama')
            this.chat_model = new Local(chat.model, chat.url);
        else if (chat.api === 'mistral')
            this.chat_model = new Mistral(chat.model, chat.url);
        else if (chat.api === 'groq') {
            this.chat_model = new GroqCloudAPI(chat.model.replace('groq/', '').replace('groqcloud/', ''), chat.url, max_tokens ? max_tokens : 8192);
        if (this.profile.code_model) {
            let code_model_profile = this._selectAPI(this.profile.code_model);
            this.code_model = this._createModel(code_model_profile);
        }
        else {
            this.code_model = this.chat_model;
        }
        else if (chat.api === 'huggingface')
            this.chat_model = new HuggingFace(chat.model, chat.url);
        else if (chat.api === 'novita')
            this.chat_model = new Novita(chat.model.replace('novita/', ''), chat.url);
        else if (chat.api === 'qwen')
            this.chat_model = new Qwen(chat.model, chat.url);
        else if (chat.api === 'xai')
            this.chat_model = new Grok(chat.model, chat.url);
        else if (chat.api === 'deepseek')
            this.chat_model = new DeepSeek(chat.model, chat.url);
        else
            throw new Error('Unknown API:', api);

        let embedding = this.profile.embedding;
        if (embedding === undefined) {
            if (chat.api !== 'ollama')
                embedding = {api: chat.api};
            if (chat_model_profile.api !== 'ollama')
                embedding = {api: chat_model_profile.api};
            else
                embedding = {api: 'none'};
        }
@@ -126,17 +91,22 @@ export class Prompter {
            this.embedding_model = new Qwen(embedding.model, embedding.url);
        else if (embedding.api === 'mistral')
            this.embedding_model = new Mistral(embedding.model, embedding.url);
        else if (embedding.api === 'huggingface')
            this.embedding_model = new HuggingFace(embedding.model, embedding.url);
        else if (embedding.api === 'novita')
            this.embedding_model = new Novita(embedding.model, embedding.url);
        else {
            this.embedding_model = null;
            console.log('Unknown embedding: ', embedding ? embedding.api : '[NOT SPECIFIED]', '. Using word overlap.');
            let embedding_name = embedding ? embedding.api : '[NOT SPECIFIED]'
            console.warn('Unsupported embedding: ' + embedding_name + '. Using word-overlap instead, expect reduced performance. Recommend using a supported embedding model. See Readme.');
        }
    }
    catch (err) {
        console.log('Warning: Failed to initialize embedding model:', err.message);
        console.log('Continuing anyway, using word overlap instead.');
        console.warn('Warning: Failed to initialize embedding model:', err.message);
        console.log('Continuing anyway, using word-overlap instead.');
        this.embedding_model = null;
    }

    this.skill_libary = new SkillLibrary(agent, this.embedding_model);
    mkdirSync(`./bots/${name}`, { recursive: true });
    writeFileSync(`./bots/${name}/last_profile.json`, JSON.stringify(this.profile, null, 4), (err) => {
        if (err) {
@@ -146,6 +116,76 @@ export class Prompter {
        });
    }

    _selectAPI(profile) {
        if (typeof profile === 'string' || profile instanceof String) {
            profile = {model: profile};
        }
        if (!profile.api) {
            if (profile.model.includes('gemini'))
                profile.api = 'google';
            else if (profile.model.includes('openrouter/'))
                profile.api = 'openrouter'; // must do before others bc shares model names
            else if (profile.model.includes('gpt') || profile.model.includes('o1') || profile.model.includes('o3'))
                profile.api = 'openai';
            else if (profile.model.includes('claude'))
                profile.api = 'anthropic';
            else if (profile.model.includes('huggingface/'))
                profile.api = "huggingface";
            else if (profile.model.includes('replicate/'))
                profile.api = 'replicate';
            else if (profile.model.includes('mistralai/') || profile.model.includes("mistral/"))
                profile.api = 'mistral';
            else if (profile.model.includes("groq/") || profile.model.includes("groqcloud/"))
                profile.api = 'groq';
            else if (profile.model.includes('novita/'))
                profile.api = 'novita';
            else if (profile.model.includes('qwen'))
                profile.api = 'qwen';
            else if (profile.model.includes('grok'))
                profile.api = 'xai';
            else if (profile.model.includes('deepseek'))
                profile.api = 'deepseek';
            else if (profile.model.includes('llama3'))
                profile.api = 'ollama';
            else
                throw new Error('Unknown model:', profile.model);
        }
        return profile;
    }

    _createModel(profile) {
        let model = null;
        if (profile.api === 'google')
            model = new Gemini(profile.model, profile.url, profile.params);
        else if (profile.api === 'openai')
            model = new GPT(profile.model, profile.url, profile.params);
        else if (profile.api === 'anthropic')
            model = new Claude(profile.model, profile.url, profile.params);
        else if (profile.api === 'replicate')
            model = new ReplicateAPI(profile.model.replace('replicate/', ''), profile.url, profile.params);
        else if (profile.api === 'ollama')
            model = new Local(profile.model, profile.url, profile.params);
        else if (profile.api === 'mistral')
            model = new Mistral(profile.model, profile.url, profile.params);
        else if (profile.api === 'groq')
            model = new GroqCloudAPI(profile.model.replace('groq/', '').replace('groqcloud/', ''), profile.url, profile.params);
        else if (profile.api === 'huggingface')
            model = new HuggingFace(profile.model, profile.url, profile.params);
        else if (profile.api === 'novita')
            model = new Novita(profile.model.replace('novita/', ''), profile.url, profile.params);
        else if (profile.api === 'qwen')
            model = new Qwen(profile.model, profile.url, profile.params);
        else if (profile.api === 'xai')
            model = new Grok(profile.model, profile.url, profile.params);
        else if (profile.api === 'deepseek')
            model = new DeepSeek(profile.model, profile.url, profile.params);
        else if (profile.api === 'openrouter')
            model = new OpenRouter(profile.model.replace('openrouter/', ''), profile.url, profile.params);
        else
            throw new Error('Unknown API:', profile.api);
        return model;
    }

    getName() {
        return this.profile.name;
    }
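Taken together, a sketch of how the two helpers behave (model names illustrative):

    // _selectAPI('claude-3-haiku-20240307')
    //   -> { model: 'claude-3-haiku-20240307', api: 'anthropic' }
    // _selectAPI({ model: 'groq/llama-3.1-70b', params: { temperature: 0.5 } })
    //   -> same object, with api: 'groq' inferred from the prefix
    // _createModel(profile) then dispatches on profile.api, stripping vendor
    // prefixes like 'groq/', 'novita/', 'replicate/', 'openrouter/' and
    // passing profile.params straight through to the wrapper's constructor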
@@ -162,13 +202,20 @@ export class Prompter {
            // Wait for both examples to load before proceeding
            await Promise.all([
                this.convo_examples.load(this.profile.conversation_examples),
                this.coding_examples.load(this.profile.coding_examples)
            ]);
                this.coding_examples.load(this.profile.coding_examples),
                this.skill_libary.initSkillLibrary()
            ]).catch(error => {
                // Preserve error details
                console.error('Failed to initialize examples. Error details:', error);
                console.error('Stack trace:', error.stack);
                throw error;
            });

            console.log('Examples initialized.');
        } catch (error) {
            console.error('Failed to initialize examples:', error);
            throw error;
            console.error('Stack trace:', error.stack);
            throw error; // Re-throw with preserved details
        }
    }
@@ -188,6 +235,17 @@ export class Prompter {
        }
        if (prompt.includes('$COMMAND_DOCS'))
            prompt = prompt.replaceAll('$COMMAND_DOCS', getCommandDocs());
        if (prompt.includes('$CODE_DOCS')) {
            const code_task_content = messages.slice().reverse().find(msg =>
                msg.role !== 'system' && msg.content.includes('!newAction(')
            )?.content?.match(/!newAction\((.*?)\)/)?.[1] || '';

            prompt = prompt.replaceAll(
                '$CODE_DOCS',
                await this.skill_libary.getRelevantSkillDocs(code_task_content, settings.relevant_docs_count)
            );
        }
        prompt = prompt.replaceAll('$COMMAND_DOCS', getCommandDocs());
        if (prompt.includes('$CODE_DOCS'))
            prompt = prompt.replaceAll('$CODE_DOCS', getSkillDocs());
        if (prompt.includes('$EXAMPLES') && examples !== null)
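The `$CODE_DOCS` lookup above keys skill-doc retrieval to the latest coding request instead of dumping all skill docs. A sketch of what the regex extracts (message content hypothetical):

    // last non-system message containing '!newAction(':
    //   'Sure! !newAction("build a small cabin")'
    // /!newAction\((.*?)\)/ captures '"build a small cabin"', which becomes
    // the query passed to getRelevantSkillDocs(..., settings.relevant_docs_count)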
@@ -273,7 +331,7 @@ export class Prompter {
        await this.checkCooldown();
        let prompt = this.profile.coding;
        prompt = await this.replaceStrings(prompt, messages, this.coding_examples);
        let resp = await this.chat_model.sendRequest(messages, prompt);
        let resp = await this.code_model.sendRequest(messages, prompt);
        this.awaiting_coding = false;
        return resp;
    }
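With `code_model` falling back to the chat model when unset (see the constructor changes above), a profile can now split the two; a minimal sketch (model names illustrative):

    {
        "name": "andy",
        "model": "claude-3-haiku-20240307",
        "code_model": "gpt-4o"
    }
    // promptCoding() uses code_model; all other prompts keep using model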
@@ -1,104 +1,79 @@
// This code uses Dashscope and HTTP to ensure the latest support for the Qwen model.
// Qwen is also compatible with the OpenAI API format;

import { getKey } from '../utils/keys.js';
import OpenAIApi from 'openai';
import { getKey, hasKey } from '../utils/keys.js';
import { strictFormat } from '../utils/text.js';

export class Qwen {
    constructor(modelName, url) {
        this.modelName = modelName;
        this.url = url || 'https://dashscope.aliyuncs.com/api/v1/services/aigc/text-generation/generation';
        this.apiKey = getKey('QWEN_API_KEY');
    constructor(model_name, url, params) {
        this.model_name = model_name;
        this.params = params;
        let config = {};

        config.baseURL = url || 'https://dashscope.aliyuncs.com/compatible-mode/v1';
        config.apiKey = getKey('QWEN_API_KEY');

        this.openai = new OpenAIApi(config);
    }

    async sendRequest(turns, systemMessage, stopSeq = '***', retryCount = 0) {
        if (retryCount > 5) {
            console.error('Maximum retry attempts reached.');
            return 'Error: Too many retry attempts.';
        }
    async sendRequest(turns, systemMessage, stop_seq='***') {
        let messages = [{'role': 'system', 'content': systemMessage}].concat(turns);

        const data = {
            model: this.modelName || 'qwen-plus',
            input: { messages: [{ role: 'system', content: systemMessage }, ...turns] },
            parameters: { result_format: 'message', stop: stopSeq },
        messages = strictFormat(messages);

        const pack = {
            model: this.model_name || "qwen-plus",
            messages,
            stop: stop_seq,
            ...(this.params || {})
        };

        // Add default user message if all messages are 'system' role
        if (turns.every((msg) => msg.role === 'system')) {
            data.input.messages.push({ role: 'user', content: 'hello' });
        }

        if (!data.model || !data.input || !data.input.messages || !data.parameters) {
            console.error('Invalid request data format:', data);
            throw new Error('Invalid request data format.');
        }

        let res = null;
        try {
            const response = await this._makeHttpRequest(this.url, data);
            const choice = response?.output?.choices?.[0];

            if (choice?.finish_reason === 'length' && turns.length > 0) {
                return this.sendRequest(turns.slice(1), systemMessage, stopSeq, retryCount + 1);
            console.log('Awaiting Qwen api response...');
            // console.log('Messages:', messages);
            let completion = await this.openai.chat.completions.create(pack);
            if (completion.choices[0].finish_reason == 'length')
                throw new Error('Context length exceeded');
            console.log('Received.');
            res = completion.choices[0].message.content;
        }
        catch (err) {
            if ((err.message == 'Context length exceeded' || err.code == 'context_length_exceeded') && turns.length > 1) {
                console.log('Context length exceeded, trying again with shorter context.');
                return await this.sendRequest(turns.slice(1), systemMessage, stop_seq);
            } else {
                console.log(err);
                res = 'My brain disconnected, try again.';
            }

            return choice?.message?.content || 'No content received.';
        } catch (err) {
            console.error('Error occurred:', err);
            return 'An error occurred, please try again.';
        }
        return res;
    }

    // Why random backoff?
    // With a 30 requests/second limit on Alibaba Qwen's embedding service,
    // random backoff helps maximize bandwidth utilization.
    async embed(text) {
        if (!text || typeof text !== 'string') {
            console.error('Invalid embedding input: text must be a non-empty string.');
            return 'Invalid embedding input: text must be a non-empty string.';
        }

        const data = {
            model: 'text-embedding-v2',
            input: { texts: [text] },
            parameters: { text_type: 'query' },
        };

        if (!data.model || !data.input || !data.input.texts || !data.parameters) {
            console.error('Invalid embedding request data format:', data);
            throw new Error('Invalid embedding request data format.');
        }

        try {
            const response = await this._makeHttpRequest(this.url, data);
            const embedding = response?.output?.embeddings?.[0]?.embedding;
            return embedding || 'No embedding result received.';
        } catch (err) {
            console.error('Error occurred:', err);
            return 'An error occurred, please try again.';
        const maxRetries = 5; // Maximum number of retries
        for (let retries = 0; retries < maxRetries; retries++) {
            try {
                const { data } = await this.openai.embeddings.create({
                    model: this.model_name || "text-embedding-v3",
                    input: text,
                    encoding_format: "float",
                });
                return data[0].embedding;
            } catch (err) {
                if (err.status === 429) {
                    // If a rate limit error occurs, calculate the exponential backoff with a random delay (1-5 seconds)
                    const delay = Math.pow(2, retries) * 1000 + Math.floor(Math.random() * 2000);
                    // console.log(`Rate limit hit, retrying in ${delay} ms...`);
                    await new Promise(resolve => setTimeout(resolve, delay)); // Wait for the delay before retrying
                } else {
                    throw err;
                }
            }
        }
        // If maximum retries are reached and the request still fails, throw an error
        throw new Error('Max retries reached, request failed.');
    }

    async _makeHttpRequest(url, data) {
        const headers = {
            'Authorization': `Bearer ${this.apiKey}`,
            'Content-Type': 'application/json',
        };

        const response = await fetch(url, {
            method: 'POST',
            headers,
            body: JSON.stringify(data),
        });

        if (!response.ok) {
            const errorText = await response.text();
            console.error(`Request failed, status code ${response.status}: ${response.statusText}`);
            console.error('Error response content:', errorText);
            throw new Error(`Request failed, status code ${response.status}: ${response.statusText}`);
        }

        const responseText = await response.text();
        try {
            return JSON.parse(responseText);
        } catch (err) {
            console.error('Failed to parse response JSON:', err);
            throw new Error('Invalid response JSON format.');
        }
    }
}
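Per the formula in the retry loop above (2^retries * 1000 ms plus up to roughly 2 s of random jitter), the embedding retry schedule works out to approximately:

    // retries = 0 -> ~1 s  + jitter
    // retries = 1 -> ~2 s  + jitter
    // retries = 2 -> ~4 s  + jitter
    // retries = 3 -> ~8 s  + jitter
    // retries = 4 -> ~16 s + jitter, after which the loop exits and throws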
@@ -4,9 +4,10 @@ import { getKey } from '../utils/keys.js';

// llama, mistral
export class ReplicateAPI {
    constructor(model_name, url) {
    constructor(model_name, url, params) {
        this.model_name = model_name;
        this.url = url;
        this.params = params;

        if (this.url) {
            console.warn('Replicate API does not support custom URLs. Ignoring provided URL.');
@@ -22,7 +23,11 @@ export class ReplicateAPI {
        const prompt = toSinglePrompt(turns, null, stop_seq);
        let model_name = this.model_name || 'meta/meta-llama-3-70b-instruct';

        const input = { prompt, system_prompt: systemMessage };
        const input = {
            prompt,
            system_prompt: systemMessage,
            ...(this.params || {})
        };
        let res = null;
        try {
            console.log('Awaiting Replicate API response...');
@@ -57,11 +57,9 @@ const argv = yargs(args)
        const agent = new Agent();
        await agent.start(argv.profile, argv.load_memory, argv.init_message, argv.count_id, argv.task_path, argv.task_id);
    } catch (error) {
        console.error('Failed to start agent process:', {
            message: error.message || 'No error message',
            stack: error.stack || 'No stack trace',
            error: error
        });
        console.error('Failed to start agent process:');
        console.error(error.message);
        console.error(error.stack);
        process.exit(1);
    }
})();
@@ -116,6 +116,18 @@ export function createMindServer(port = 8080) {
            }, 2000);
        });

        socket.on('send-message', (agentName, message) => {
            if (!inGameAgents[agentName]) {
                console.warn(`Agent ${agentName} not logged in, cannot send message via MindServer.`);
                return
            }
            try {
                console.log(`Sending message to agent ${agentName}: ${message}`);
                inGameAgents[agentName].emit('send-message', agentName, message)
            } catch (error) {
                console.error('Error: ', error);
            }
        });
    });

    server.listen(port, 'localhost', () => {
@@ -148,4 +160,4 @@ function stopAllAgents() {
// Optional: export these if you need access to them from other files
export const getIO = () => io;
export const getServer = () => server;
export const getConnectedAgents = () => connectedAgents;
|
@ -80,6 +80,7 @@
|
|||
${agent.in_game ? `
|
||||
<button class="stop-btn" onclick="stopAgent('${agent.name}')">Stop</button>
|
||||
<button class="restart-btn" onclick="restartAgent('${agent.name}')">Restart</button>
|
||||
<input type="text" id="messageInput" placeholder="Enter a message or command..."></input><button class="start-btn" onclick="sendMessage('${agent.name}', document.getElementById('messageInput').value)">Send</button>
|
||||
` : `
|
||||
<button class="start-btn" onclick="startAgent('${agent.name}')">Start</button>
|
||||
`}
|
||||
|
@@ -110,6 +111,10 @@
        function shutdown() {
            socket.emit('shutdown');
        }

        function sendMessage(agentName, message) {
            socket.emit('send-message', agentName, message)
        }
    </script>
</body>
</html>
|
@ -1,5 +1,5 @@
|
|||
import { cosineSimilarity } from './math.js';
|
||||
import { stringifyTurns } from './text.js';
|
||||
import { stringifyTurns, wordOverlapScore } from './text.js';
|
||||
|
||||
export class Examples {
|
||||
constructor(model, select_num=2) {
|
||||
|
@@ -18,17 +18,6 @@ export class Examples {
        return messages.trim();
    }

    getWords(text) {
        return text.replace(/[^a-zA-Z ]/g, '').toLowerCase().split(' ');
    }

    wordOverlapScore(text1, text2) {
        const words1 = this.getWords(text1);
        const words2 = this.getWords(text2);
        const intersection = words1.filter(word => words2.includes(word));
        return intersection.length / (words1.length + words2.length - intersection.length);
    }

    async load(examples) {
        this.examples = examples;
        if (!this.model) return; // Early return if no embedding model
@@ -49,7 +38,7 @@ export class Examples {
            // Wait for all embeddings to complete
            await Promise.all(embeddingPromises);
        } catch (err) {
            console.warn('Error with embedding model, using word overlap instead:', err);
            console.warn('Error with embedding model, using word-overlap instead.');
            this.model = null;
        }
    }
@@ -68,8 +57,8 @@ export class Examples {
        }
        else {
            this.examples.sort((a, b) =>
                this.wordOverlapScore(turn_text, this.turnsToText(b)) -
                this.wordOverlapScore(turn_text, this.turnsToText(a))
                wordOverlapScore(turn_text, this.turnsToText(b)) -
                wordOverlapScore(turn_text, this.turnsToText(a))
            );
        }
        let selected = this.examples.slice(0, this.select_num);
@@ -190,7 +190,10 @@ export function getItemCraftingRecipes(itemName) {
                recipe[ingredientName] = 0;
            recipe[ingredientName]++;
        }
        recipes.push(recipe);
        recipes.push([
            recipe,
            {craftedCount : r.result.count}
        ]);
    }

    return recipes;
@@ -327,4 +330,156 @@ export function calculateLimitingResource(availableItems, requiredItems, discrete
    }
    if(discrete) num = Math.floor(num);
    return {num, limitingResource}
}

let loopingItems = new Set();

export function initializeLoopingItems() {
    loopingItems = new Set(['coal',
        'wheat',
        'diamond',
        'emerald',
        'raw_iron',
        'raw_gold',
        'redstone',
        'blue_wool',
        'packed_mud',
        'raw_copper',
        'iron_ingot',
        'dried_kelp',
        'gold_ingot',
        'slime_ball',
        'black_wool',
        'quartz_slab',
        'copper_ingot',
        'lapis_lazuli',
        'honey_bottle',
        'rib_armor_trim_smithing_template',
        'eye_armor_trim_smithing_template',
        'vex_armor_trim_smithing_template',
        'dune_armor_trim_smithing_template',
        'host_armor_trim_smithing_template',
        'tide_armor_trim_smithing_template',
        'wild_armor_trim_smithing_template',
        'ward_armor_trim_smithing_template',
        'coast_armor_trim_smithing_template',
        'spire_armor_trim_smithing_template',
        'snout_armor_trim_smithing_template',
        'shaper_armor_trim_smithing_template',
        'netherite_upgrade_smithing_template',
        'raiser_armor_trim_smithing_template',
        'sentry_armor_trim_smithing_template',
        'silence_armor_trim_smithing_template',
        'wayfinder_armor_trim_smithing_template']);
}


/**
 * Gets a detailed plan for crafting an item considering current inventory
 */
export function getDetailedCraftingPlan(targetItem, count = 1, current_inventory = {}) {
    initializeLoopingItems();
    if (!targetItem || count <= 0 || !getItemId(targetItem)) {
        return "Invalid input. Please provide a valid item name and positive count.";
    }

    if (isBaseItem(targetItem)) {
        const available = current_inventory[targetItem] || 0;
        if (available >= count) return "You have all required items already in your inventory!";
        return `${targetItem} is a base item, you need to find ${count - available} more in the world`;
    }

    const inventory = { ...current_inventory };
    const leftovers = {};
    const plan = craftItem(targetItem, count, inventory, leftovers);
    return formatPlan(plan);
}

function isBaseItem(item) {
    return loopingItems.has(item) || getItemCraftingRecipes(item) === null;
}

function craftItem(item, count, inventory, leftovers, crafted = { required: {}, steps: [], leftovers: {} }) {
    // Check available inventory and leftovers first
    const availableInv = inventory[item] || 0;
    const availableLeft = leftovers[item] || 0;
    const totalAvailable = availableInv + availableLeft;

    if (totalAvailable >= count) {
        // Use leftovers first, then inventory
        const useFromLeft = Math.min(availableLeft, count);
        leftovers[item] = availableLeft - useFromLeft;

        const remainingNeeded = count - useFromLeft;
        if (remainingNeeded > 0) {
            inventory[item] = availableInv - remainingNeeded;
        }
        return crafted;
    }

    // Use whatever is available
    const stillNeeded = count - totalAvailable;
    if (availableLeft > 0) leftovers[item] = 0;
    if (availableInv > 0) inventory[item] = 0;

    if (isBaseItem(item)) {
        crafted.required[item] = (crafted.required[item] || 0) + stillNeeded;
        return crafted;
    }

    const recipe = getItemCraftingRecipes(item)?.[0];
    if (!recipe) {
        crafted.required[item] = stillNeeded;
        return crafted;
    }

    const [ingredients, result] = recipe;
    const craftedPerRecipe = result.craftedCount;
    const batchCount = Math.ceil(stillNeeded / craftedPerRecipe);
    const totalProduced = batchCount * craftedPerRecipe;

    // Add excess to leftovers
    if (totalProduced > stillNeeded) {
        leftovers[item] = (leftovers[item] || 0) + (totalProduced - stillNeeded);
    }

    // Process each ingredient
    for (const [ingredientName, ingredientCount] of Object.entries(ingredients)) {
        const totalIngredientNeeded = ingredientCount * batchCount;
        craftItem(ingredientName, totalIngredientNeeded, inventory, leftovers, crafted);
    }

    // Add crafting step
    const stepIngredients = Object.entries(ingredients)
        .map(([name, amount]) => `${amount * batchCount} ${name}`)
        .join(' + ');
    crafted.steps.push(`Craft ${stepIngredients} -> ${totalProduced} ${item}`);

    return crafted;
}

function formatPlan({ required, steps, leftovers }) {
    const lines = [];

    if (Object.keys(required).length > 0) {
        lines.push('You are missing the following items:');
        Object.entries(required).forEach(([item, count]) =>
            lines.push(`- ${count} ${item}`));
        lines.push('\nOnce you have these items, here\'s your crafting plan:');
    } else {
        lines.push('You have all items required to craft this item!');
        lines.push('Here\'s your crafting plan:');
    }

    lines.push('');
    lines.push(...steps);

    if (Object.keys(leftovers).length > 0) {
        lines.push('\nYou will have leftover:');
        Object.entries(leftovers).forEach(([item, count]) =>
            lines.push(`- ${count} ${item}`));
    }

    return lines.join('\n');
}
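An end-to-end sketch of the planner, assuming vanilla recipes (1 oak_log -> 4 oak_planks, 2 oak_planks -> 4 stick) and a hypothetical starting inventory:

    // getDetailedCraftingPlan('stick', 4, { oak_planks: 1 })
    //
    // You are missing the following items:
    // - 1 oak_log
    //
    // Once you have these items, here's your crafting plan:
    //
    // Craft 1 oak_log -> 4 oak_planks
    // Craft 2 oak_planks -> 4 stick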
@@ -26,8 +26,21 @@ export function toSinglePrompt(turns, system=null, stop_seq='***', model_nickname
    return prompt;
}

// ensures stricter turn order for anthropic/llama models
// combines repeated messages from the same role, separates repeat assistant messages with filler user messages
function _getWords(text) {
    return text.replace(/[^a-zA-Z ]/g, '').toLowerCase().split(' ');
}

export function wordOverlapScore(text1, text2) {
    const words1 = _getWords(text1);
    const words2 = _getWords(text2);
    const intersection = words1.filter(word => words2.includes(word));
    return intersection.length / (words1.length + words2.length - intersection.length);
}

// ensures stricter turn order and roles:
// - system messages are treated as user messages and prefixed with SYSTEM:
// - combines repeated messages from users
// - separates repeat assistant messages with filler user messages
export function strictFormat(turns) {
    let prev_role = null;
    let messages = [];
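A quick worked example of the relocated score (for duplicate-free word lists this is exactly the Jaccard index):

    // wordOverlapScore('craft a wooden pickaxe', 'craft a stone pickaxe')
    // words1 = [craft, a, wooden, pickaxe]; words2 = [craft, a, stone, pickaxe]
    // intersection = [craft, a, pickaxe] -> 3 / (4 + 4 - 3) = 0.6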
viewer.html (12 lines)
@@ -26,9 +26,9 @@
    </div>
    <script>
        function updateLayout() {
            var width = window.innerWidth;
            var height = window.innerHeight;
            var iframes = document.querySelectorAll('.iframe-wrapper');
            let width = window.innerWidth;
            let height = window.innerHeight;
            let iframes = document.querySelectorAll('.iframe-wrapper');
            if (width > height) {
                iframes.forEach(function(iframe) {
                    iframe.style.width = '50%';
@@ -43,10 +43,10 @@
        }
        window.addEventListener('resize', updateLayout);
        window.addEventListener('load', updateLayout);
        var iframes = document.querySelectorAll('.iframe-wrapper');
        let iframes = document.querySelectorAll('.iframe-wrapper');
        iframes.forEach(function(iframe) {
            var port = iframe.getAttribute('data-port');
            var loaded = false;
            let port = iframe.getAttribute('data-port');
            let loaded = false;
            function checkServer() {
                fetch('http://localhost:' + port, { method: 'HEAD' })
                    .then(function(response) {