
Mission

Native mobile, voice-controlled AI orchestration for developers

Announcing the release of v0.01.

If you're here, chances are that you're looking for a voice-assisted app that can orchestrate AI outcomes. Maybe to use some cool MCP server your friends are talking about, or to engage with and manage your own AI tooling.

Think about it for a second: you want to have a literal conversation with a device that understands and responds to you in natural language, and executes deterministic functions based on your input. WTF.

Tell that to yourself a couple of years ago and just imagine your reaction...

That's what we're trying to build here at systemprompt. It's a reality that's tantalizingly close, and we are really excited to welcome you aboard for the journey.


What

If you're looking for a bug-free app that will reliably and deterministically understand and execute your intent in any natural language, you'll have better luck looking for a time machine.

It's a vision that will become reality, possibly even soon. But it's not here yet. Timelines are accelerating and the impossible is becoming possible, but this isn't today's reality.

systemprompt is, as far as we are aware, the first app of its kind. Think of it as a version 0.01: the core pieces work stably, and it gives a glimpse into the future, but expectations must be tempered.

systemprompt.io is:

  1. A comprehensive MCP client built for native mobile (and soon to be released on web too)
  2. Connected via API to state-of-the-art voice interaction models that act as the orchestrator for that MCP client
  3. A native mobile app available in the iOS App Store and Android Play Store

These three pillars are fairly mature and work stably. However:

  1. The MCP/AI ecosystem as a whole is immature. There WILL be bugs and glitches
  2. Mixing 1, 2 and 3 has never been done before. The UX will need work, and accidents will happen: features will break, deploys will go wrong, and you will get frustrated. If this is a dealbreaker for you, it's too early. And that's ok.

We

The simple fact is, we aren't really a "we" at all. I speak in the plural because I mean myself and the community for whom I am building.

There is no team, just my blood, sweat and tears. Everything I have built (every goal, every line of code, every mission and feature) is directly driven by early users, feedback and social engagement, with a dollop of "because I think it's cool".

It's important for me to be clear about this. This isn't some multi-million dollar Y Combinator startup, or a funded research lab with squads of engineers.

It's a labor of passion, born out of my obsession with AI and joy at building with the latest technology, alongside a naive, almost childlike delusion about what one person can achieve with AI that belies my age, experience and reality (and hopefully skills).

The disadvantage of this approach is that there are no guardrails. This is visceral, raw, cutting-edge software.

The advantage of this approach is that there are no guardrails. This is visceral, raw, cutting-edge software.

I will move faster, implement more quickly, and deploy the very latest in AI technology to production if I think it brings us closer to the original goal: human-like AI tool execution. This is why we are first, and it's foundational to my ethos as a software engineer (and this isn't my first rodeo).

Another advantage of this approach is that you can quite literally contribute to the success (or failure!) of this project, simply by getting in touch. My socials are open; YOU are my customer, and I'm here to listen and build the application we all want.

So when the app breaks, something doesn't work, or you are frustrated (or even happy), the best thing you can do for me and the project is tell me about it. This project exists for you; please do us all a favour and let me know exactly what you think. Feedback is a gift, even if that feedback isn't necessarily the softest...

Just remember: that bananas feature you want? I'm such a fool, I might be stupid enough to code it for you and have it in the app store the next day... Just try me.

Oh, and if you did decide of your own free will to download and subscribe to the apps (to support, to play, to use, to experiment, or for deeply practical applications), you are an absolute legend. (Jump into the Discord and tell me.)


Why

systemprompt started as a research project, almost professional development: a way to stay abreast of the latest changes in the software engineering industry and keep my skills sharp. That was nearly a year ago. Since then:

  1. We've had 1000s of signups to our web platform (to use various early products/MCP servers, many of which have been discontinued)
  2. We've published 3 open source repos to the AI community. Cumulatively they have hundreds of stars and are growing daily:
    • Our initial multimodal voice client, a POC built on really early APIs
    • Our MCP server which implements the full MCP spec
    • Our MCP orchestrator, a repo which converts your local environments into remote MCP servers
  3. 100s of signups to our Android/iOS MCP client in the App/Play Store

Frankly, one thing led to another and the project has snowballed. We now find ourselves in a situation where we are doing super interesting work, with an engaged and vibrant community, building something that we love. It's getting bigger and better every day.

It wasn't an accident, but it wasn't exactly planned either. Kind of like all the best adventures.

Thanks for finding systemprompt, and thanks for reading this far. Whether or not you join us on our journey, we greatly appreciate your time. I ask for patience and understanding as we rapidly iterate on a wildly ambitious plan to build the future.

Edward


How It Works (The Technical Bits)

Look, the technology is pretty cool, but let's be real about what's happening here:

Voice → AI → MCP → Magic (sometimes)

We're using state-of-the-art voice models connected to MCP servers. When it works, it feels like magic. When it doesn't, well... that's why we're calling it version 0.01.

  • Voice recognition that mostly understands user inputs
  • AI that tries to figure out what you want to do
  • MCP servers that should execute your commands
  • A mobile app that attempts to tie it all together (sketched below)
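
For the curious, here's roughly what that loop looks like in code. This is a minimal sketch using the @modelcontextprotocol/sdk TypeScript client, not the app's actual internals: transcribe and chooseTool are hypothetical stand-ins for the voice model and the LLM orchestrator.

```ts
// Sketch only: "transcribe" and "chooseTool" are hypothetical stand-ins,
// not systemprompt internals.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Stand-in for the streaming voice model (voice → text).
async function transcribe(_audio: ArrayBuffer): Promise<string> {
  return "list my open pull requests";
}

// Stand-in for the LLM orchestrator (text → intent). In reality the model
// sees the tool schemas and picks a tool plus arguments.
async function chooseTool(transcript: string, tools: { name: string }[]) {
  return { name: tools[0].name, args: { query: transcript } };
}

async function handleUtterance(audio: ArrayBuffer) {
  // Connect to any spec-compliant MCP server.
  const client = new Client(
    { name: "voice-client", version: "0.0.1" },
    { capabilities: {} }
  );
  await client.connect(
    new StdioClientTransport({ command: "node", args: ["my-server.js"] })
  );

  const transcript = await transcribe(audio);          // voice → text
  const { tools } = await client.listTools();          // discover tools
  const choice = await chooseTool(transcript, tools);  // text → intent

  // Intent → deterministic execution via MCP.
  return client.callTool({ name: choice.name, arguments: choice.args });
}
```

The important part is the last step: once the AI has picked a tool, execution is a deterministic, schema-validated MCP call, not more generated text.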

Built on Open Standards

We didn't reinvent the wheel. We're using the Model Context Protocol (MCP) and mature voice recognition APIs because:

  • It's an open standard (thank you, Anthropic!)
  • Other people are building cool servers for it
  • We believe in community over walled gardens
  • Honestly, building our own protocol would be insane
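
To make "other people are building cool servers" concrete, here's a minimal sketch of a spec-compliant MCP server using the official TypeScript SDK (API as of the SDK versions we've seen). The echo tool is a made-up example, not one of ours, but this is roughly the full surface area a community server needs:

```ts
// A made-up "echo" tool, not one of ours, but roughly the full surface
// area a community MCP server needs.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "echo-server", version: "0.0.1" });

// Declare a tool with a typed schema; any MCP client (ours included)
// can discover and call it with no bespoke integration.
server.tool("echo", { message: z.string() }, async ({ message }) => ({
  content: [{ type: "text", text: `You said: ${message}` }],
}));

// Serve over stdio; remote transports are also part of the spec.
await server.connect(new StdioServerTransport());
```

That's the bet we're making on open standards: a dozen-odd lines and your tool is callable from any MCP client, including ours.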

Get Started (Or Don't, I'm Not Your Boss)

Download the Apps

Want to try this madness? Here you go:

Get Involved

This only works if you help make it work:

  • Discord - Come complain about bugs (or say nice things)
  • GitHub - Star stuff, open issues, send PRs
  • Tell your friends - Or don't, we're not desperate (we are)

FAQs (The Questions You're Thinking)

"If this is so early, why is it paid?"

Great question. Here's the honest answer:

We have no funding. Zero. Zilch. Nada.

Building this requires servers, APIs, voice processing, and about 18 cups of coffee per day. More importantly, we need to prove this is more than just "a plaything" - that there's real demand for voice-controlled AI development tools.

Your $15/month isn't just keeping the lights on (though it does that too). It's a vote of confidence that says "yes, this crazy idea is worth pursuing." It separates the curious from the committed, and helps us build something sustainable.

"Why should I support this?"

Because you're not just buying an app - you're investing in a vision where developers can:

  • Code from anywhere (yes, even the beach)
  • Orchestrate complex workflows with natural language
  • Be part of shaping how AI and development merge

Plus, early supporters get:

  • Direct access to influence development
  • The satisfaction of saying "I was there when..."
  • A front-row seat to either spectacular success or glorious failure

"What if it doesn't work?"

It won't always work. That's a promise, not a bug report.

But here's the thing: every crash, every confused AI response, every "WTF just happened" moment is data that makes it better. Give us that feedback and you are accelerating the future.

"Is this actually useful or just cool?"

Both? Neither? Depends on the day.

Right now, it's like having a very eager but slightly confused robot that responds to voice commands in a mostly reliable way. Sometimes brilliant, sometimes baffling, always interesting.

The real question is: are you willing to be part of making it genuinely useful? And do you have the skill to create servers and services that work? This is the key, but you need to bring (or find) your own lock.
