AI & Automation · May 4, 2026 · 10 min read · by Matthias Meyer

Why I Built matthiasmeyer.tech

Another domain alongside studiomeyer.io for the open-source half of the work. 22 repos, 6 deep dives, 3D force graph hero, AI-ready discovery from day one, build-time GitHub stats. Live in a Sunday session.

I shipped another domain today. Not a sub-page, not a subdomain, but a separate site at matthiasmeyer.tech that exists for one purpose: to explain the open-source repos I publish, on their own terms, without competing with the studio that pays the bills. It went live in one Sunday session. This post is part build report, part argument for why solo founders running a company should keep their personal-brand surface separate from the company surface, and part walk through what is actually inside: twenty-two repositories, six explainer essays, a 3D force graph as the hero, build-time GitHub stats so the numbers stay fresh, and the AI-ready discovery layer that every site I touch now ships by default.

There was a moment around midnight when the live URL came up clean for the first time, all twenty-two repos visible in the graph, the cyan particles flowing along the edges, and I realised the site was both technically tighter than studiomeyer.io and smaller in scope. That is the whole point. Studio is for clients. Academy is for learners. matthiasmeyer.tech is for the open-source half of the work, and that work has been getting cluttered as a sub-page on the company site for too long.

This article is a walk through what got built, why it lives where it lives, and the patterns I lifted from the studio stack that made the whole thing fit in a single session.

Why another domain

The argument for separating personal from company surface is not new. Guillermo Rauch runs rauchg.com next to vercel.com. Lee Robinson runs leerob.com next to vercel.com too. Dax Raad runs dax.dev next to sst.dev. The pattern works for them because their personal voice and their company voice serve different audiences. Vercel sells deployment. Rauch writes about systems thinking and React internals. The two reinforce each other without diluting either.

For me the split looks like this. studiomeyer.io is a German-Spanish-English studio site that sells custom websites and AI systems to clients in DACH and on Mallorca. It has 1,500 Bing Copilot citations, a long-form blog in three locales, and a concrete pricing page. studiomeyer.academy is the learning side: recipes and lessons for builders who want to understand the AI stack we use. matthiasmeyer.tech is the third leg. It is the open-source half: the repos themselves, explained in first person, with architecture notes and trade-offs, without the sales scaffolding that makes the studio site work.

The SEO purists will tell you to consolidate everything into subdirectories under studiomeyer.io for authority. They are right about authority transfer in the abstract. They are wrong about audience. A developer who lands on local-memory-mcp via npm and clicks through to read about the architecture is not the same person who books a custom website project. Putting both in the same surface forces compromises in both directions. Two domains with explicit cross-links in headers and footers solve the audience problem and accept the small authority hit as a cost.

What is actually inside

Twenty-two repositories sit on the GitHub org. Eight of them are cornerstones in the sense that everything else extends or pairs with one of them. The other fourteen are conformance harnesses, n8n bridges, SaaS connectors and security extensions that hang off the cornerstones.

The memory cluster has one repo. local-memory-mcp is a thirteen-tool MCP server that gives Claude, Cursor and Codex persistent memory backed by SQLite, FTS5 and a small knowledge graph. It runs locally, no cloud, no API keys. The hosted SaaS variant is studiomeyer-memory at memory.studiomeyer.io for builders who want multi-tenant.

The agent cluster has three cornerstones plus one connector. mcp-personal-suite is a forty-nine-tool kit covering email, calendar, messaging, search and image generation, BYOK, no signup. agent-fleet is the orchestrator that runs specialised agents in parallel for research, critique and analysis. darwin-agents is the experimentation layer that evolves prompts via A/B testing and judge-arbitrated scoring. The connector mcp-studiomeyer-agents lets Pro-tier customers of the StudioMeyer Agents service read their audit data and tweak agent configs from their own Claude or Cursor.

The security cluster has two cornerstones plus the new Python port and an attestation layer. ai-shield is a zero-dependency LLM security toolkit: prompt-injection detection, PII masking, cost tracking, tool policies, sub-25ms scans. ai-shield-py is the Python port I shipped two days ago for FastAPI and LangChain projects, same defense surface, different framework hooks. mcp-armor is the Rust sidecar that wraps any MCP server and validates Ed25519-signed manifests, with sub-5ms p99 overhead, defending against the supply-chain CVEs that the OX Security advisory documented in April. mcp-server-attestation is the TypeScript companion to mcp-armor for teams that prefer staying in Node.

The media cluster has one repo. mcp-video wraps ffmpeg and Playwright behind eight tools for recording, editing, captions, TTS and smart screenshots. It is the last MCP server I would have predicted writing two years ago, and it ended up being the most useful for marketing automation.

The workflow cluster has one cornerstone, two extensions. n8n-templates ships hardened workflows with cross-session memory baked in, voice agents, customer support, personal assistants, multi-provider LLM routing. n8n-nodes-studiomeyer-memory is the n8n community node that bridges those templates to the memory backend. n8n-workflows is the memoryless production-pattern variant for teams that do not need cross-session state.

The factory cluster is three test harnesses that exist because shipping MCP servers to a marketplace turned out to require more discipline than the spec implies. mcp-protocol-conformance validates JSON-RPC 2.0, OAuth 2.1 PKCE, tool schemas and capabilities across spec versions 2024-11-05, 2025-03-26 and 2025-06-18. mcp-hook-conformance audits Claude Code v2.1.118 lifecycle hooks for idempotency, latency and side-effects. mcp-tenant-pair is the foundation library for multi-user tenancy with bi-temporal storage and SQLite plus Postgres adapters, used by anything multi-user we ship.

The SaaS-connector cluster is six docs-only mirrors of the four hosted SaaS products plus the marketplace and the academy. studiomeyer-memory, studiomeyer-crm, studiomeyer-geo and studiomeyer-crew each have a public read-only mirror with the documentation and tool reference. studiomeyer-marketplace bundles all four for Claude Code via Magic Link auth. mcp-academy is the npm-distributed connector for the StudioMeyer Academy lesson and recipe API.

That is the full landscape. Twenty-two repos, nineteen total stars at launch, five different programming languages, all MIT or Apache.

A 3D force graph as the hero

A list of repos is not a hero. A list of repos is the part that nobody reads. The hero is the thing that signals what kind of site this is, and a personal hub for open-source tools should signal that the tools are interconnected, that they belong to a coherent stack, that picking one of them puts you in conversation with the others.

The hero on matthiasmeyer.tech is a 3D force graph rendered with Three.js and the 3d-force-graph library. Twenty-two nodes, twenty-two edges. Hero repos are larger and saturated, secondary repos are smaller and softer. Group colours map to function: cyan for memory, amber for agents, red for security, purple for media, emerald for workflow, slate for factory tooling, blue for SaaS connectors. Cyan particles flow along the edges in the direction of the dependency. Hovering a node opens a preview panel with name, group and description. Clicking pins it. Auto-rotate is on by default at 0.4 radians per second. Filter chips above the canvas toggle clusters on and off, and a search box filters by name and tagline.
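The data model behind the chips and the search box is simpler than the canvas suggests. As a sketch, with hypothetical type names and fields that are my assumptions rather than the site's actual lib/repos.ts:

```typescript
// Hypothetical node and edge shapes for the repo graph (assumed, not the
// site's actual source). The group drives the colour mapping described above.
type Group =
  | "memory" | "agents" | "security" | "media"
  | "workflow" | "factory" | "saas";

interface RepoNode {
  id: string;       // repo slug, e.g. "local-memory-mcp"
  group: Group;
  tagline: string;
  hero: boolean;    // hero repos render larger and more saturated
}

interface RepoEdge {
  source: string;   // particles flow along the edge: source -> target
  target: string;
}

// Pure filter shared by the chips and the search box: a node survives if its
// group is enabled and its slug or tagline matches the query.
function filterNodes(
  nodes: RepoNode[],
  enabledGroups: Set<Group>,
  query: string,
): RepoNode[] {
  const q = query.trim().toLowerCase();
  return nodes.filter(
    (n) =>
      enabledGroups.has(n.group) &&
      (q === "" ||
        n.id.toLowerCase().includes(q) ||
        n.tagline.toLowerCase().includes(q)),
  );
}
```

Keeping the filter pure means the same function can drive both the Three.js scene and the screen-reader fallback list without duplicating logic.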

Crawlers see a screen-reader-only repo list immediately below the canvas with all twenty-two entries as plain text. The graph is interactive but not exclusionary. Bingbot and ClaudeBot read the same content as a sighted user with a mouse.

The pattern is a direct lift from our Memory 3D demo on studiomeyer.io. The Three.js scene, the hover-pin pattern with a 350ms auto-close timer, the sr-only fallback, the cinema background gradient with corner glows in cyan and purple. None of it is original to matthiasmeyer.tech. All of it is reused, which is the point of having a studio stack in the first place.

The concept layer

Six explainer essays sit alongside the repos. They are repo-agnostic, written in first person, no marketing wrapper. "What is MCP, actually" explains the protocol without the marketing layer. "Stdio vs HTTP for MCP" is a transport decision tree based on real deployment scars. "Memory architectures compared" walks through three systems I shipped this year and which architecture fits which agent shape. "Agent-to-Agent protocol v1.0 RC" is a short note on what an agent-card.json buys you and what it does not. "WebMCP for browser agents" is the hygiene-play piece on the W3C Community Group Draft. "Agent orchestration patterns" breaks down single agent versus sequential pipeline versus parallel fleet versus judge-arbitrated, with cost-of-orchestration heuristics from production traffic.

The concept layer is the part of the site I expect to perform best in AI citations. Repo pages have natural traffic from npm and GitHub. Concept posts have to earn their traffic from search and from links, and the way they earn it is by being the answer to a question that someone is going to type into ChatGPT or Perplexity. Each post closes with a takeaway, names libraries by name, and gives concrete numbers when there are concrete numbers to give.

The AI-Ready layer

Every site I build ships the same discovery chain on day one. matthiasmeyer.tech is no exception. Layer one is semantic HTML5 (header, main, article, section, nav, footer), no div-soup wrapped in ARIA. Layer two is JSON-LD in the head: Person plus WebSite plus twenty-two SoftwareSourceCode entries plus Article schema on every concept post. Layer three is /llms.txt as the plain-text site overview with cross-references to the rest of the discovery chain. Layer four is /.well-known/agents.json with three callable tools for AI agents, plus /.well-known/agent-card.json for the A2A v1.0 RC skill descriptors, plus /.well-known/webmcp for the W3C CG draft browser-agent manifest. Layer five is /robots.txt with explicit Allow rules for fourteen AI bot user agents, from GPTBot to MistralAI-User. Layer six is /sitemap.xml with the discovery URLs included so crawl-based agents can find the endpoints.
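The layer-two JSON-LD can be sketched as a single @graph object. Everything below is illustrative: the property values and the GitHub URLs are my assumptions, not copied from the live head.

```typescript
// Minimal sketch of the layer-two JSON-LD: Person + WebSite + one
// SoftwareSourceCode entry (the real site carries twenty-two of them).
// URLs and values are illustrative assumptions.
const jsonLd = {
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Person",
      name: "Matthias Meyer",
      url: "https://matthiasmeyer.tech",
      sameAs: ["https://github.com/studiomeyer-io"], // assumed org URL
    },
    {
      "@type": "WebSite",
      name: "matthiasmeyer.tech",
      url: "https://matthiasmeyer.tech",
    },
    {
      "@type": "SoftwareSourceCode",
      name: "local-memory-mcp",
      programmingLanguage: "TypeScript",
      codeRepository: "https://github.com/studiomeyer-io/local-memory-mcp", // assumed
      license: "https://opensource.org/licenses/MIT",
    },
    // ...one SoftwareSourceCode entry per repo, plus Article on each concept post
  ],
};

// Serialised into a <script type="application/ld+json"> tag in the head.
const jsonLdScript = JSON.stringify(jsonLd);
```

Keeping the whole graph in one object means one script tag per page instead of a scatter of fragments, which is easier to validate.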

The pattern is documented in our internal AI-ready brand bible. Every customer site we ship runs it. matthiasmeyer.tech runs it because not running it would mean the AI half of the open-source story is invisible to the AI tools that are supposed to find it.

Build-time GitHub stats

The repos in lib/repos.ts have descriptions, taglines, group assignments, edge relationships. Those are editorial decisions. They do not change unless I rewrite a paragraph. The stars, the last-updated date, the primary language and the tool count change all the time. Hardcoding those would mean updating a TypeScript file every time someone stars a repo. Not acceptable.

The fix is a pre-build script. A small Node script in scripts/fetch-github-stats.ts shells out to the gh CLI, pulls the latest twenty-two repos for the studiomeyer-io org, writes the result to data/github-stats.json. The lib/repos.ts module imports that JSON at module load and merges the live values into the editorial base. Every build picks up fresh stars and fresh updated-at timestamps. The script also runs a drift check by reading the slug list out of lib/repos.ts with a regex and warning when GitHub has a new repo that is not yet in the editorial layer. The first time I ran the drift check it told me ai-shield-py was missing. Two minutes later it was added.
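The merge and drift-check steps could look roughly like this. The interface names and field shapes are my assumptions, not the actual lib/repos.ts or fetch script:

```typescript
// Hypothetical shapes: editorial entries are hand-written in lib/repos.ts,
// stats come from data/github-stats.json written by the pre-build script.
interface EditorialRepo {
  slug: string;
  tagline: string;
  group: string;
}

interface RepoStats {
  slug: string;
  stars: number;
  updatedAt: string; // ISO date from the GitHub API
  language: string;
}

// Merge live stats over the editorial base. Repos without fetched stats get
// zeroed defaults so a fresh clone still builds before the first fetch.
function mergeStats(
  editorial: EditorialRepo[],
  stats: RepoStats[],
): Array<EditorialRepo & { stars: number; updatedAt: string | null }> {
  const bySlug = new Map<string, RepoStats>(
    stats.map((s): [string, RepoStats] => [s.slug, s]),
  );
  return editorial.map((repo) => {
    const live = bySlug.get(repo.slug);
    return { ...repo, stars: live?.stars ?? 0, updatedAt: live?.updatedAt ?? null };
  });
}

// Drift check: report GitHub repos missing from the editorial layer.
function driftCheck(editorial: EditorialRepo[], stats: RepoStats[]): string[] {
  const known = new Set(editorial.map((r) => r.slug));
  return stats.map((s) => s.slug).filter((slug) => !known.has(slug));
}
```

A drift check of this shape is exactly how a freshly published repo like ai-shield-py would surface as a warning on the next build.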

The script is gated by gh CLI availability so the build does not break when run somewhere without auth, like inside a Docker container. The build sequence is npm run fetch-stats followed by docker compose build, with the JSON committed to the repo so a fresh clone has data on day one even before fetching.
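The availability gate is a one-function check. A minimal sketch, assuming the script probes `gh auth status` and treats any spawn failure as "not available":

```typescript
import { spawnSync } from "node:child_process";

// Returns true only when the gh CLI exists on PATH and is authenticated.
// Any failure (binary missing, not logged in) means the fetch is skipped
// and the build falls back to the committed data/github-stats.json.
function ghAvailable(): boolean {
  const result = spawnSync("gh", ["auth", "status"], { stdio: "ignore" });
  return result.error === undefined && result.status === 0;
}
```

Failing closed like this is what keeps the Docker build green: inside a container without gh, the function returns false and the committed JSON carries the data.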

Where this goes next

The site is live, the discovery chain is wired, the search engines have the sitemap, the tracking is in place. What it does not have yet is sustained content. Six concept posts is a starting set. Each one needs to earn its traffic, and the way that happens is by writing more of them as new patterns surface in the work. The next batch is queued: a deeper dive on Memory architectures, a stdio-trap post on the failure modes I have hit twice, a piece on agent orchestration cost-of-complexity from production data.

The repos themselves keep growing. Two days ago ai-shield-py joined the family. Whatever ships next will get added to lib/repos.ts in five lines and to the graph as a new node, the Python or Rust port slotted next to its sibling, the edges drawn through the cousin-of relationship. The build-time fetch script picks up the rest.

If you are running a company site and you have an open-source half that has been getting cluttered as a sub-page, separate it. Two domains, clear cross-links in headers and footers, AI-ready stack on both sides. The audiences sort themselves out and the writing finds its right voice in each place.

Matthias Meyer

Founder & AI Director

Founder & AI Director at StudioMeyer. Has been building websites and AI systems for 10+ years. Living on Mallorca for 15 years, running an AI-first digital studio with its own agent fleet, 680+ MCP tools and 5 SaaS products for SMBs and agencies across DACH and Spain.

open-source · mcp · personal-brand · ai-ready · discovery-chain · 3d-force-graph · next-js · build-time-fetch