StudioMeyer Memory API
Memory for your AI. As an API.
A backend that remembers. Store facts, search semantically, build knowledge graphs, import ChatGPT, Claude and Gemini history. Your LLM provider stays your choice — we only ship the memory.
50+
MCP Tools
10+
REST Endpoints
EU
Hosting
curl -X POST https://memory.studiomeyer.io/api/search \
  -H "Authorization: Bearer sm_live_xxx" \
  -d '{"query": "what do we know about Q1 sales"}'

{
  "success": true,
  "count": 3,
  "results": [
    { "type": "learning", "content": "Q1 revenue up 34% YoY...", "rank": 2.4 },
    { "type": "decision", "content": "Q1 pricing policy...", ... }
  ]
}

What this is
Long-term memory as a backend service
We host the memory backend for your AI agents. Semantic search, knowledge graph, confidence decay, contradiction detection, multi-tenant isolation — all addressable via REST and MCP. You POST facts, we index with pgvector + trigram + full-text. You query, we return relevant hits. Your LLM does the rest. Ideal for Claude/GPT agents, chatbots with long-term context, internal knowledge tools and anything where memory matters.
Why memory as a service
What you save by not building it yourself
No RAG pipeline
No chunking. No embeddings. No vector DB to host. No hybrid-search magic. You POST JSON, we store, index, and return relevant hits.
Knowledge graph included
Entities, observations, relations. Bi-temporal model. Automatic dedup via gatekeeper. Contradiction detection. All without a Neo4j setup.
Import from ChatGPT/Claude/Gemini
POST your export files, we parse, structure, and write to your memory. ChatGPT JSON, Claude Projects, Gemini History — all supported.
REST Endpoints
Memory over HTTP
All endpoints on memory.studiomeyer.io. Bearer token in Authorization header. Plus MCP endpoint at /mcp for Claude Desktop, Cursor, Codex.
/api/learn
/api/search
/api/decide
/api/entity
/api/import
/api/backfill
/api/export
/api/account

Pricing
Plans for Solo, Team and Scale
30-day free trial. Cancel anytime. Memory reads are unlimited on every plan.
Use Cases
What teams use it for
Claude/GPT agent with long-term context
Your agent forgets nothing. Customer data, project history, earlier decisions — all in memory, available every session. Via MCP in Claude Desktop/Cursor or via REST in your agent framework.
Internal knowledge assistant
Company knowledge once via /api/import or /api/learn, then query via /api/search. Employees get consistent answers from the same pool. Tenant isolation per team or per department.
Chatbot with CRM awareness
Store customer profiles, history, preferences. On every conversation the bot pulls context via /api/search. Feels like a support rep who knows the customer — because it does.
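The loop above can be sketched in a few lines: turn a customer profile into a /api/learn fact, then pull context via /api/search on each conversation turn. The `profileToFact` helper and its input shape are illustrative assumptions; the `category`/`content`/`tags` fields match the integration example below.

```javascript
// Hypothetical helper: the 'profile' input shape is an assumption;
// the output fields match the /api/learn example on this page.
function profileToFact(profile) {
  return {
    category: 'insight',
    content: `${profile.name} prefers ${profile.channel}; note: ${profile.note}`,
    tags: ['crm', profile.id]
  };
}

// On each conversation turn, pull context before the bot answers
// (assumed response shape: { results: [...] }, as in the search example above).
async function contextFor(customerName, token) {
  const res = await fetch('https://memory.studiomeyer.io/api/search', {
    method: 'POST',
    headers: { 'Authorization': `Bearer ${token}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: `what do we know about ${customerName}?` })
  });
  return (await res.json()).results;
}
```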
Multi-agent shared memory
Multiple agents, one shared memory. Research agent stores findings, writer agent reads them, reviewer agent adds comments. All tenant-isolated, bi-temporally versioned.
Integration
Integrated in 5 minutes
One HTTP request. Any language, any framework. No SDK required. Or: drop the MCP endpoint into Claude Desktop — done.
// 1. Store a fact
const headers = {
  'Authorization': 'Bearer sm_live_xxx',
  'Content-Type': 'application/json'
};

await fetch('https://memory.studiomeyer.io/api/learn', {
  method: 'POST',
  headers,
  body: JSON.stringify({
    category: 'insight',
    content: 'Acme Corp prefers video calls over phone',
    tags: ['acme', 'preferences']
  })
});

// 2. Search later
const res = await fetch('https://memory.studiomeyer.io/api/search', {
  method: 'POST',
  headers,
  body: JSON.stringify({ query: 'how does Acme prefer to meet?' })
});

FAQ
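For the MCP route, one common pattern for pointing Claude Desktop at a remote endpoint is the mcp-remote bridge. A sketch of claude_desktop_config.json under that assumption (the server name, bridge choice, and header flag are ours, not from these docs):

```json
{
  "mcpServers": {
    "studiomeyer-memory": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://memory.studiomeyer.io/mcp",
        "--header",
        "Authorization: Bearer sm_live_xxx"
      ]
    }
  }
}
```

Clients with native remote-MCP support (e.g. newer Cursor builds) may instead accept the /mcp URL directly; check your client's docs.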