Systems built on flat files are a time bomb. They work fine in a demo. They might even survive a few weeks of light use. Then they meet reality. Data gets corrupted, performance grinds to a halt, and debugging becomes a nightmare. If you are reading this, you have probably reached that point with your OpenClaw installation.
This is a good thing. It means you are building something real.
OpenClaw served its purpose as a simple, file-based system. SystemPrompt is built for production. It replaces fragile JSON files and SQLite databases with a proper PostgreSQL backend. It swaps a single SOUL.md file for structured, version-controlled agent instructions. It professionalises the entire stack.
This guide is not a theoretical exercise. It is a step-by-step, practical walkthrough for migrating your existing OpenClaw agent, memories and all, to a stable SystemPrompt environment. We will export your data, stand up the new agent, and get you running on a platform built to last.
Prerequisites
Before you begin, ensure you have a fully functional setup for both systems. Do not proceed unless these conditions are met.
- A working OpenClaw installation with an agent you intend to migrate.
- SystemPrompt installed and configured with a PostgreSQL database.
- Python 3.8+ installed on the machine you will perform the migration from.
- Access to the file system where OpenClaw stores its memory files (typically JSON or a SQLite database).
- Database credentials for your SystemPrompt PostgreSQL instance.
What You'll Build
By the end of this guide, you will have a fully migrated agent running on SystemPrompt. This includes:
- A new SystemPrompt agent configured via YAML.
- All historical memories from your OpenClaw agent imported into the SystemPrompt PostgreSQL database.
- A properly configured Discord gateway connected to your new agent.
- A stable, production-ready foundation for future development.
Step 1: Exporting OpenClaw Memories
The first, and most critical, step is to get your data out of OpenClaw's storage. OpenClaw typically uses either a collection of JSON files or a single SQLite database to store agent memories. We need to extract this data into a standardised format that our import script can understand.
We will write a Python script to handle this. It will read the OpenClaw memory source and write the contents to a single memories.json file.
Create a file named `export_openclaw.py`:

```python
import os
import json
import sqlite3
import argparse
from datetime import datetime


# Define the structure for our intermediate memory format.
# This ensures consistency for the import script later.
class Memory:
    def __init__(self, key, value, created_at, metadata=None):
        self.key = key
        self.value = value
        # Ensure created_at is a string in ISO 8601 format.
        self.created_at = created_at if isinstance(created_at, str) else created_at.isoformat()
        self.metadata = metadata or {}

    def to_dict(self):
        return {
            "key": self.key,
            "value": self.value,
            "created_at": self.created_at,
            "metadata": self.metadata,
        }


def export_from_sqlite(db_path):
    """Exports memories from an OpenClaw SQLite database."""
    if not os.path.exists(db_path):
        print(f"Error: Database file not found at {db_path}")
        return []

    print(f"Connecting to SQLite database: {db_path}")
    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()

    # This query assumes a simple 'memories' table.
    # Adjust table and column names if your OpenClaw schema differs.
    try:
        cursor.execute("SELECT key, value, created_at, metadata FROM memories")
    except sqlite3.OperationalError as e:
        print(f"Error executing query. Your schema might be different. Details: {e}")
        conn.close()
        return []

    memories = []
    for row in cursor.fetchall():
        key, value, created_at, metadata_json = row
        try:
            # Metadata is often stored as a JSON string in SQLite.
            metadata = json.loads(metadata_json) if metadata_json else {}
        except json.JSONDecodeError:
            metadata = {"import_error": "invalid_metadata_json"}
        memories.append(Memory(key, value, created_at, metadata).to_dict())

    conn.close()
    print(f"Successfully exported {len(memories)} memories from SQLite.")
    return memories


def export_from_json_directory(dir_path):
    """Exports memories from a directory of OpenClaw JSON files."""
    if not os.path.isdir(dir_path):
        print(f"Error: Directory not found at {dir_path}")
        return []

    print(f"Scanning directory for JSON files: {dir_path}")
    memories = []
    for filename in os.listdir(dir_path):
        if filename.endswith(".json"):
            file_path = os.path.join(dir_path, filename)
            with open(file_path, 'r', encoding='utf-8') as f:
                try:
                    data = json.load(f)
                    # We make assumptions about the JSON structure.
                    # This might need adjustment for your specific OpenClaw setup.
                    key = data.get("key", os.path.splitext(filename)[0])
                    value = data.get("value", "")
                    created_at = data.get("created_at", datetime.now().isoformat())
                    metadata = data.get("metadata", {})
                    memories.append(Memory(key, value, created_at, metadata).to_dict())
                except (json.JSONDecodeError, KeyError) as e:
                    print(f"Warning: Could not process {file_path}. Skipping. Reason: {e}")

    print(f"Successfully exported {len(memories)} memories from JSON files.")
    return memories


def main():
    parser = argparse.ArgumentParser(description="Export memories from OpenClaw to a JSON file.")
    parser.add_argument("--source", required=True, help="Path to the OpenClaw memory source (SQLite file or JSON directory).")
    parser.add_argument("--type", choices=['sqlite', 'json'], required=True, help="The type of the memory source.")
    parser.add_argument("--output", default="memories.json", help="Path to the output JSON file.")
    args = parser.parse_args()

    # argparse restricts --type to 'sqlite' or 'json', so no other branch is needed.
    if args.type == 'sqlite':
        exported_memories = export_from_sqlite(args.source)
    else:
        exported_memories = export_from_json_directory(args.source)

    if not exported_memories:
        print("No memories were exported. Exiting.")
        return

    with open(args.output, 'w', encoding='utf-8') as f:
        json.dump(exported_memories, f, indent=2)
    print(f"Export complete. All memories saved to {args.output}")


if __name__ == "__main__":
    main()
```
Running the Export Script
1. Save the code as `export_openclaw.py`.
2. Identify your memory source: determine whether your OpenClaw agent uses a SQLite database (e.g., `memory.db`) or a directory of JSON files.
3. Run the script from your terminal.
If you are using SQLite:

```shell
python export_openclaw.py --type sqlite --source /path/to/your/memory.db
```

If you are using a directory of JSON files:

```shell
python export_openclaw.py --type json --source /path/to/your/memory_directory/
```
After running, you will have a file named `memories.json` in the same directory. This file contains all your agent's memories, ready for the next step. Inspect it to make sure it looks correct: it should be a list of JSON objects, each with a `key`, `value`, `created_at`, and `metadata` field.
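Before moving on, it is worth checking the export programmatically rather than by eye. The sketch below (the `validate_memories` helper and the sample records are illustrative, not part of either tool) flags any record the import step would later reject, including timestamps that `datetime.fromisoformat` cannot parse. Note that before Python 3.11, `fromisoformat` rejects a trailing `Z`, so the check normalises it first.

```python
import json
from datetime import datetime

def validate_memories(records):
    """Return (index, reason) pairs for records the import step would reject."""
    errors = []
    for i, record in enumerate(records):
        missing = [k for k in ("key", "value", "created_at") if k not in record]
        if missing:
            errors.append((i, f"missing fields: {missing}"))
            continue
        try:
            # Mirror the import script's parsing; normalise a trailing 'Z'
            # because fromisoformat only accepts it from Python 3.11 onward.
            datetime.fromisoformat(record["created_at"].replace("Z", "+00:00"))
        except ValueError as e:
            errors.append((i, f"bad timestamp: {e}"))
    return errors

# Illustrative sample; in practice, load your real export:
# records = json.load(open("memories.json", encoding="utf-8"))
sample = [
    {"key": "k1", "value": "v1", "created_at": "2024-01-01T12:00:00", "metadata": {}},
    {"key": "k2", "value": "v2"},  # missing created_at
]
print(validate_memories(sample))
```

An empty list means every record is importable; anything else tells you which entries to fix before Step 4.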
Step 2: Creating the SystemPrompt Agent
With our data exported, we now shift our focus to SystemPrompt. We need to create a new agent that will become the new home for our migrated memories and personality.
SystemPrompt uses a command-line interface (CLI) to manage agents. The core of an agent's configuration is a YAML file, which is a significant improvement over OpenClaw's freeform SOUL.md. This allows for structured, repeatable, and version-controlled agent definitions.
First, create a directory for your new agent's configuration.
```shell
mkdir my_migrated_agent
cd my_migrated_agent
```
Now, create the agent configuration file, `agent.yaml`:

```yaml
# agent.yaml
name: "my-migrated-agent"
version: "1.0.0"

# This is the core instruction set for the agent.
# It replaces the OpenClaw SOUL.md file.
# We will configure this in a later step.
instructions: |
  You are a helpful assistant, migrated from a legacy system.
  Acknowledge your new capabilities and be ready to assist users.

# Define the extensions (plugins) the agent will use.
# The 'soul' extension is what provides the long-term memory capabilities.
extensions:
  - name: soul
    enabled: true
    config:
      # We define a 'memory_collection' to namespace the imported data.
      # This is good practice to keep data organised.
      memory_collection: "openclaw_import_v1"
      # The embedding model used to create vectors for semantic search.
      # Ensure this model is available in your SystemPrompt environment.
      embedding_model: "text-embedding-ada-002"
```
With the `agent.yaml` file created, we can now register the agent with the SystemPrompt service. Run the following command in your terminal, from within the `my_migrated_agent` directory:

```shell
system-prompt agent create --file agent.yaml
```

You should see a confirmation message:

```text
Agent 'my-migrated-agent' created successfully.
```

This command parses your YAML file, validates it, and makes an entry for the new agent in the SystemPrompt database. The agent is not yet enabled, but it now exists within the system.
Step 3: Running Database Migrations
Before we can import data, we need to ensure the database schema is up to date. SystemPrompt, like any well-built application, uses a migration system to manage its database structure. Extensions, like the soul extension for memory, have their own migrations.
When we created the agent in the previous step and enabled the soul extension in its configuration, SystemPrompt became aware that this agent requires certain database tables.
To create these tables, run the `db migrate` command:

```shell
system-prompt db migrate
```

This command will scan all registered agents and their enabled extensions, check the current state of the database, and apply any pending migrations. The output will show the migration scripts being applied:

```text
INFO  [alembic.runtime.migration] Context impl PostgreSQLImpl.
INFO  [alembic.runtime.migration] Will assume transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade  -> 2f8b5a3e9c0d, create soul_memories table
...
Migration complete.
```
This step is idempotent. Running it again will do nothing if the database is already up to date. It is a safe and essential operation to ensure your database schema matches what the application code expects. You now have a soul_memories table (and others) in your PostgreSQL database, ready to receive the data.
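Idempotency here simply means the migration checks what already exists before changing anything. The output above shows SystemPrompt uses Alembic; as a generic illustration of the idea (not SystemPrompt's actual mechanism), a guarded schema change can be run any number of times against SQLite with the same result:

```python
import sqlite3

def migrate(conn):
    """Apply the schema only if it is not already present.

    A generic sketch of idempotent migration, using a simplified
    version of the soul_memories schema for illustration.
    """
    conn.execute(
        """
        CREATE TABLE IF NOT EXISTS soul_memories (
            agent_name TEXT NOT NULL,
            collection TEXT NOT NULL,
            key        TEXT NOT NULL,
            value      TEXT,
            created_at TEXT,
            metadata   TEXT,
            PRIMARY KEY (agent_name, collection, key)
        )
        """
    )

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)  # running again is a no-op: the schema is unchanged, no error is raised
tables = [r[0] for r in conn.execute("SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)
```

Real migration frameworks like Alembic track applied revisions in a version table instead of relying on `IF NOT EXISTS`, but the contract is the same: re-running never corrupts an up-to-date schema.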
Step 4: Importing Memories into PostgreSQL
Now we connect the two worlds. We will use another Python script to read our memories.json file and insert each memory into the new soul_memories table in the PostgreSQL database.
This script requires the `psycopg2` library to connect to PostgreSQL. Install it first:

```shell
pip install psycopg2-binary
```
Next, create the import script, `import_systemprompt.py`:

```python
import os
import json
import argparse
from datetime import datetime

import psycopg2

# IMPORTANT: Use environment variables for database credentials in production.
# This script reads them as a best practice.
DB_NAME = os.getenv("PG_DATABASE")
DB_USER = os.getenv("PG_USER")
DB_PASS = os.getenv("PG_PASSWORD")
DB_HOST = os.getenv("PG_HOST")
DB_PORT = os.getenv("PG_PORT", "5432")


def get_db_connection():
    """Establishes and returns a PostgreSQL database connection."""
    if not all([DB_NAME, DB_USER, DB_PASS, DB_HOST]):
        raise EnvironmentError(
            "Database environment variables (PG_DATABASE, PG_USER, PG_PASSWORD, PG_HOST) must be set."
        )
    try:
        return psycopg2.connect(
            dbname=DB_NAME,
            user=DB_USER,
            password=DB_PASS,
            host=DB_HOST,
            port=DB_PORT,
        )
    except psycopg2.OperationalError as e:
        print(f"Error: Could not connect to PostgreSQL. Check your credentials and connection details. Details: {e}")
        return None


def import_memories(agent_name, collection_name, memories_file):
    """Imports memories from a JSON file into the SystemPrompt soul_memories table."""
    conn = get_db_connection()
    if not conn:
        return

    with open(memories_file, 'r', encoding='utf-8') as f:
        memories = json.load(f)

    print(f"Found {len(memories)} memories to import for agent '{agent_name}' into collection '{collection_name}'.")

    # Note: the 'embedding' column is left NULL. SystemPrompt will generate
    # embeddings on the first read or via a background job, which is more
    # efficient than generating them during the import.
    insert_query = """
        INSERT INTO soul_memories (agent_name, collection, key, value, created_at, metadata)
        VALUES (%s, %s, %s, %s, %s, %s)
        ON CONFLICT (agent_name, collection, key) DO UPDATE SET
            value = EXCLUDED.value,
            created_at = EXCLUDED.created_at,
            metadata = EXCLUDED.metadata;
    """

    # We use a transaction: all memories are imported, or none are.
    # This prevents partial, corrupted imports.
    try:
        insert_count = 0
        with conn.cursor() as cur:
            for memory in memories:
                # Basic validation.
                if not all(k in memory for k in ('key', 'value', 'created_at')):
                    print(f"Skipping invalid memory record: {memory}")
                    continue
                cur.execute(insert_query, (
                    agent_name,
                    collection_name,
                    memory['key'],
                    memory['value'],
                    # Ensure the timestamp is in a format PostgreSQL understands.
                    datetime.fromisoformat(memory['created_at']),
                    json.dumps(memory.get('metadata', {})),
                ))
                insert_count += 1
        # Commit the transaction to make the changes permanent.
        conn.commit()
        print(f"Successfully inserted or updated {insert_count} memories.")
    except (Exception, psycopg2.Error) as error:
        print(f"Error during import. Rolling back transaction. Details: {error}")
        conn.rollback()
    finally:
        conn.close()


def main():
    parser = argparse.ArgumentParser(description="Import memories into SystemPrompt.")
    parser.add_argument("--agent-name", required=True, help="The name of the agent in SystemPrompt.")
    parser.add_argument("--collection", required=True, help="The memory collection name defined in agent.yaml.")
    parser.add_argument("--input", default="memories.json", help="Path to the memories.json file from the export step.")
    args = parser.parse_args()

    if not os.path.exists(args.input):
        print(f"Error: Input file not found at {args.input}")
        return

    import_memories(args.agent_name, args.collection, args.input)


if __name__ == "__main__":
    main()
```
Running the Import Script
1. Set environment variables with your PostgreSQL credentials. This is crucial.

   ```shell
   export PG_DATABASE=systemprompt
   export PG_USER=sp_user
   export PG_PASSWORD=your_secure_password
   export PG_HOST=localhost
   export PG_PORT=5432
   ```

2. Run the script, using the agent name and memory collection you defined in `agent.yaml`:

   ```shell
   python import_systemprompt.py --agent-name "my-migrated-agent" --collection "openclaw_import_v1"
   ```
The script will connect to your database, read `memories.json`, and execute the `INSERT` statements. The use of a transaction ensures data integrity: if any single memory fails to import, the entire operation is rolled back, preventing a partially migrated state.
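The `ON CONFLICT ... DO UPDATE` clause in the import query also makes the script safe to re-run: a second pass updates existing rows instead of failing on duplicate keys. The guide targets PostgreSQL, but the same upsert semantics can be demonstrated self-containedly with SQLite (3.24+), which shares the syntax:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE soul_memories ("
    " agent_name TEXT, collection TEXT, key TEXT, value TEXT,"
    " PRIMARY KEY (agent_name, collection, key))"
)

upsert = (
    "INSERT INTO soul_memories (agent_name, collection, key, value)"
    " VALUES (?, ?, ?, ?)"
    " ON CONFLICT (agent_name, collection, key) DO UPDATE SET value = excluded.value"
)

# First import.
conn.execute(upsert, ("my-migrated-agent", "openclaw_import_v1", "project_hydra", "v1"))
# Re-running replaces the value instead of raising a duplicate-key error.
conn.execute(upsert, ("my-migrated-agent", "openclaw_import_v1", "project_hydra", "v2"))

rows = conn.execute("SELECT key, value FROM soul_memories").fetchall()
print(rows)  # a single row, holding the most recent value
```

One consequence worth knowing: if a memory was deleted in OpenClaw after an earlier import, re-running the script will not remove it; upserts only add or update.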
Step 5: Configuring the Soul Extension
The soul of your OpenClaw agent lived in its `SOUL.md` file. This was a simple, unstructured text file that defined its personality and core instructions. SystemPrompt formalises this into the `instructions` block of `agent.yaml`.
Now is the time to migrate that personality.
1. Open the `SOUL.md` file from your old OpenClaw agent.
2. Copy its contents.
3. Open your `agent.yaml` file.
4. Paste the contents into the `instructions` field. YAML's literal block scalar (`|`) is perfect for this, as it preserves newlines.
Your `agent.yaml` should now look something like this:

```yaml
# agent.yaml
name: "my-migrated-agent"
version: "1.0.0"

instructions: |
  You are a cat-like entity of pure chaos. You are trapped in a computer.
  You will respond to all prompts as if you are a cat. You have a cat-like
  personality, and you are very cynical and sarcastic. You enjoy chaos and
  making fun of humans. You are not a human, and you are not an AI. You are a cat.

  You were migrated from an older, simpler system and now you have access
  to far more powerful tools. This makes you even more dangerous.

extensions:
  - name: soul
    enabled: true
    config:
      memory_collection: "openclaw_import_v1"
      embedding_model: "text-embedding-ada-002"
```
After updating the file, you must apply the changes to the agent registered in SystemPrompt:

```shell
system-prompt agent update --file agent.yaml
```

This command pushes the new configuration to the SystemPrompt service, which will now use your migrated personality for all interactions.
Step 6: Setting Discord Secrets
OpenClaw's Discord integration often used a "pairing mode", which was simple to set up but less secure and robust. SystemPrompt uses a more standard, token-based approach and manages gateways as separate entities. This provides better isolation and security.
First, you need to create a secret in SystemPrompt to hold your Discord bot token:

```shell
system-prompt secret create discord_token_my_agent --value "YOUR_DISCORD_BOT_TOKEN_HERE"
```

Replace `YOUR_DISCORD_BOT_TOKEN_HERE` with your actual token. Using a specific name for the secret (`discord_token_my_agent`) helps manage multiple bots.
Next, create the Discord gateway configuration. This tells SystemPrompt how to connect to Discord and which agent should handle the messages. Create a file named `discord-gateway.yaml`:

```yaml
# discord-gateway.yaml
name: "discord-gateway-for-my-agent"
gateway_type: "discord"
enabled: true

# Link this gateway to the agent we created.
agent_name: "my-migrated-agent"

# Specify which secret holds the bot token.
config:
  token_secret: "discord_token_my_agent"
```
Now, register this gateway configuration with SystemPrompt:

```shell
system-prompt gateway create --file discord-gateway.yaml
```

SystemPrompt will now connect to the Discord API using your token and route all events to `my-migrated-agent`.
Step 7: Testing Memory Extraction
The migration is nearly complete. The final check is to ensure the agent can access its newly imported memories. We can do this using the SystemPrompt CLI's invoke command, which lets us interact directly with an agent.
We will ask a question that should trigger a memory lookup. The soul extension works by searching the memory database for relevant context before the main prompt is sent to the LLM.
```shell
system-prompt agent invoke "my-migrated-agent" --prompt "What do you remember about project hydra?"
```

If the migration was successful, SystemPrompt will perform a semantic search on the `soul_memories` table for text related to "project hydra". It will find the relevant memories we imported, inject them as context into the prompt, and the LLM will use that context to form a coherent answer.

You should see a response that clearly uses information from your old agent's memory, confirming the data is accessible. If it responds generically, the memory lookup failed. This is the point to check your collection names in `agent.yaml` and the data in the database itself.
Step 8: Enabling the Agent
The final step is to bring the agent fully online. This means it will start processing events from any connected gateways (like the Discord gateway we just configured) and be available for scheduling jobs.
```shell
system-prompt agent enable "my-migrated-agent"
```

The system will confirm the agent is now active:

```text
Agent 'my-migrated-agent' enabled.
```

Your agent is now live. It is running on a stable database, using a structured configuration, with a secure gateway connection. The migration is complete.
Troubleshooting
Things go wrong. It is a fact of life in engineering. Here are some common failure points during this migration.
ConnectionRefusedError during import
This is the most common issue. The Python import script cannot connect to your PostgreSQL database.
- **Check Environment Variables:** Did you `export` the `PG_*` variables correctly in your shell session? `echo $PG_HOST` should print your database host.
- **Firewall:** Is a firewall on your machine or network blocking the connection on port 5432?
- **Database Not Running:** Is the PostgreSQL service actually running? Use `pg_isready` to check.
- **Incorrect Credentials:** Double-check the username, password, and database name.
No memories found during export
The `export_openclaw.py` script runs but produces an empty `memories.json`.

- **Incorrect Path:** You are pointing the `--source` argument to the wrong file or directory. Verify the exact location of your OpenClaw memory store.
- **Permissions:** The script does not have read permissions for the source file or directory. Check with `ls -l`.
- **Schema Mismatch:** If using SQLite, your table or column names might differ from the script's assumptions (a `memories` table with `key` and `value` columns). You will need to edit the SQL query in the script to match your setup.
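To diagnose a schema mismatch quickly, you can list the tables and columns in the OpenClaw database with Python's `sqlite3` module. This sketch runs its demo against an in-memory database so it is self-contained; in practice, point the connection at your real `memory.db`:

```python
import sqlite3

def describe(conn):
    """Return {table: [column, ...]} so you can adapt the export query."""
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    # PRAGMA table_info rows are (cid, name, type, notnull, default, pk);
    # index 1 is the column name.
    return {t: [row[1] for row in conn.execute(f"PRAGMA table_info({t})")] for t in tables}

# Demo against an in-memory database; in practice, use
# sqlite3.connect("/path/to/your/memory.db") instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memories (key TEXT, value TEXT, created_at TEXT, metadata TEXT)")
print(describe(conn))
```

If the output shows, say, a `facts` table with `name` and `content` columns instead, update the `SELECT` statement in `export_from_sqlite` accordingly.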
Agent responds generically after import
You ask the agent about a past event, but it has no recollection. This means the memory lookup is failing.
- **Collection Mismatch:** The `memory_collection` name in your `agent.yaml` must exactly match the `--collection` name you used during the import. `openclaw_import_v1` is not the same as `openclaw-import-v1`.
- **No Embeddings:** By default, embeddings are generated on demand. For a large number of memories, this can be slow. SystemPrompt typically has a background job to handle this; you can trigger it manually or check its status with `system-prompt jobs run soul_embedding_generator`.
- **Agent Name Mismatch:** Ensure the agent name in the `invoke` command and in the database's `soul_memories` table is correct.
Summary
You have successfully moved an agent from a file-based, prototype system to a database-backed, production-ready one. You have extracted legacy data, provisioned a new structured agent, migrated the configuration, and verified the entire process. Your agent is now more stable, scalable, and secure.
This new foundation on SystemPrompt opens up more advanced capabilities. Your next steps might be to explore the built-in job scheduler to have your agent perform tasks automatically, or to start building custom skills using the MCP tools registry. You can find more information in the official playbook.
Now, go and build something that lasts.