When Sentry explained why they chose the Business Source License, it crystallized something we had been thinking about.
Open source built the modern software stack. Linux, PostgreSQL, Git, React. The evidence for sharing code is overwhelming. But the economics that made it work assumed human effort was the bottleneck. Copying code required understanding it. Understanding required time. Time created natural protection.
AI changes that equation. Models trained on public repositories can understand architectures instantly. Competitors can replicate years of work in weeks. The moat that complexity once provided is gone.
This is why we chose BSL-1.1 for systemprompt.io. Not because we reject open source values, but because we think those values need new mechanisms to survive the AI era.
The Economics That Made Open Source Work
Open source is a powerful idea. Software should be free to use, modify, and distribute. Transparency builds trust. Collaboration beats isolation. Communities outperform corporations.
The evidence was overwhelming. Linux runs the internet. Git changed how we build software. PostgreSQL powers companies that could afford any database. React, Django, Kubernetes, Rust: the modern tech stack is an open source monument.
┌─────────────────────────────────────────┐
│            Your Application             │
├─────────────────────────────────────────┤
│  Frameworks (React, Django, Rails...)   │
├─────────────────────────────────────────┤
│  Languages (Python, Rust, Go...)        │
├─────────────────────────────────────────┤
│  Databases (PostgreSQL, Redis...)       │
├─────────────────────────────────────────┤
│  Operating System (Linux)               │
└─────────────────────────────────────────┘
            ↑ All open source ↑
The economics made sense too. You give code, you get contributions. Bugs found by thousands of eyes. Features built by people who actually use the software. A moat made of community rather than lawyers.
But those economics had an unspoken assumption: copying and understanding code required effort. Human effort was the bottleneck. Sharing accelerated everyone because everyone had to do the work themselves.
What happens when that assumption breaks?
Training Data Extraction
The GitHub Copilot class action lawsuit made it official: AI companies view open source repositories as training data to be harvested.
Microsoft trained Copilot on public GitHub repositories. Billions of lines of code, ingested without explicit consent, used to build a commercial product that competes with the very developers whose code it learned from. The lawyers are still arguing about whether this is legal. But legality is not the point.
The point is the economics changed.
When The Pile dataset was assembled for training large language models, it included a GitHub scrape. MIT-licensed code, shared so others could learn, build, and contribute back, became training data for systems that will never submit a pull request.
Traditional Open Source Value Flow
┌──────────────────────────────────────────────────┐
│                   Developer A                    │
│                       │                          │
│                       ▼ shares code              │
│               Open Source Project                │
│                       │                          │
│                       ▼ used by                  │
│                   Developer B                    │
│                       │                          │
│                       ▼ contributes improvements │
│           Open Source Project (better)           │
│                       │                          │
│                       ▼ benefits                 │
│            Developer A (and everyone)            │
└──────────────────────────────────────────────────┘
AI Era Extraction Flow
┌──────────────────────────────────────────────────┐
│                   Developer A                    │
│                       │                          │
│                       ▼ shares code              │
│               Open Source Project                │
│                       │                          │
│                       ▼ scraped by               │
│               AI Training Pipeline               │
│                       │                          │
│                       ▼ produces                 │
│          AI Model (commercial product)           │
│                       │                          │
│                       ▼ competes with            │
│                   Developer A                    │
└──────────────────────────────────────────────────┘
The old bargain was: share freely, benefit collectively. The new reality is: share freely, get extracted by those with compute.
Commoditisation at AI Speed
Open source used to protect you through complexity. Understanding a codebase took time. Grasping the architecture required reading thousands of lines, tracing call paths, building mental models. Contributing meaningfully took even longer.
That complexity was a form of defence. Competitors could not just copy your work; they had to understand it. Understanding took the same human effort for them as it did for you.
AI compresses that effort to nearly zero.
A model can read your entire repository, understand your architecture, generate documentation, and explain the design decisions to a junior developer in an afternoon. Tools like Devin can take a codebase and implement features autonomously. Cursor, Claude, and Copilot make it trivial to navigate unfamiliar code.
Time to Replicate Open Source Project
Pre-AI Era
┌────────────────────────────────────────────────────────┐
│ Read code        ████████████████████    2-4 weeks     │
│ Understand arch  ████████████████████████  4-8 weeks   │
│ Implement clone  ████████████████████████████  months  │
└────────────────────────────────────────────────────────┘
AI-Assisted Era
┌────────────────────────────────────────────────────────┐
│ AI reads code    █  hours                              │
│ AI explains arch █  hours                              │
│ AI-assisted impl ████  days to weeks                   │
└────────────────────────────────────────────────────────┘
The complexity moat only works when complexity takes time to traverse. AI makes traversal instant.
The Sustainability Crisis
Open source already had a sustainability problem. The Roads and Bridges report documented it in 2016: critical infrastructure maintained by unpaid volunteers, burning out while companies worth billions depend on their work.
Then came Log4j.
In December 2021, a critical vulnerability was discovered in Log4j, a logging library used by millions of Java applications. The severity score was the maximum possible: CVSS 10.0. The blast radius was enormous. The project was maintained by a handful of volunteers.
One of those maintainers posted on Twitter: "Log4j maintainers have been working sleeplessly on mitigation measures; fixes, docs, CVE, replies to inquiries, etc. Yet nothing is stopping people to mass-mail us, mass-mail our employers asking for faster answers, mass-create issues... It's Christmas soon, we have lives too."
Companies with trillion-dollar market caps depended on software maintained by people working for free. Nobody had noticed until everything was on fire.
AI accelerates this problem. The extraction rate increases while the contribution rate stays flat. AI does not sponsor maintainers. The companies training on open source code are not proportionally increasing their contributions to the projects they depend on.
The Turning Point
In August 2023, HashiCorp switched Terraform, Vault, and their other products from MPL 2.0 to the Business Source License. Their reasoning was blunt: cloud providers were taking their open source code and offering managed services that competed directly with HashiCorp, without contributing meaningfully to development.
The open source community's reaction was predictable. "Betrayal." "Bait and switch." "They built on open source and now they're pulling up the ladder."
But we saw a company articulating what we had been observing across the industry.
Elastic had done the same thing in 2021, switching to SSPL after AWS launched a competing Elasticsearch service. Redis changed licenses for similar reasons. Sentry explained their rationale with unusual clarity.
A pattern was emerging. Companies that had invested years building valuable open source software were discovering that permissive licenses let well-resourced competitors capture the value without contributing to the costs.
AI accelerates this dynamic. It enables anyone with compute to understand, replicate, and compete with open source projects at unprecedented speed.
What We Chose
systemprompt.io uses the Business Source License 1.1.
Here is what that means:
The core is source-available, not open source. You can read every line of code. You can run it on your infrastructure. You can modify it for your own use. You can learn from it, audit it, extend it. What you cannot do is offer systemprompt.io as a competing service.
The template layer is MIT. When you download systemprompt.io and build on it, everything you create (your agents, your services, your business logic) is yours. MIT licensed. No restrictions. We never see it.
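The split between the two layers is easiest to see as a directory layout. This is an illustrative sketch, not the actual repository structure:

```text
systemprompt/
├── core/            # BSL-1.1: source-available, converts to Apache 2.0
│   └── LICENSE      # Business Source License 1.1
└── templates/       # MIT: your agents, services, business logic
    └── LICENSE      # MIT License
```

Each layer carries its own LICENSE file, so what you build on the template layer is unambiguously yours.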
Everything converts to Apache 2.0 after four years. Code released today becomes fully open source in 2030. This is not a permanent restriction. It is a sustainability window.
License Timeline
┌────────────────────────────────────────────────────────┐
│ Year 0 ───┬──► BSL-1.1: Use freely, can't compete      │
│           │                                            │
│ Year 1    │    Development continues                   │
│           │                                            │
│ Year 2    │    Community grows                         │
│           │                                            │
│ Year 3    │    Ecosystem matures                       │
│           │                                            │
│ Year 4 ───┴──► Apache 2.0: Full open source            │
│                No restrictions                         │
└────────────────────────────────────────────────────────┘
The Additional Use Grant permits broad usage. You can build commercial products on systemprompt.io. You can deploy it in production. You can customise it however you want. The only restriction is on commoditising our work directly: offering systemprompt.io itself as a service before we have had time to build a sustainable business.
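BSL-1.1 is a parameterised license: each adopter fills in a Licensor, a Licensed Work, an Additional Use Grant, a Change Date, and a Change License. A sketch of what such a parameter block looks like (the values here are illustrative, not our actual terms):

```text
Business Source License 1.1

Parameters

Licensor:             Example Co
Licensed Work:        systemprompt.io core
Additional Use Grant: Production and commercial use is permitted,
                      except offering the Licensed Work as a
                      hosted or managed service.
Change Date:          Four years from each version's release
Change License:       Apache License, Version 2.0
```

On the Change Date, the Change License automatically replaces the BSL terms for that version of the code. No relicensing action is needed; the conversion is built into the license itself.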
Addressing the Objection
"But it's not really open source."
Correct. That is the point.
The Open Source Definition was written in 1998. The term "open source" was chosen specifically to be more business-friendly than "free software." The definition encodes assumptions from that era: that the primary barrier to software use was access to source code, that contribution would flow naturally from use, that the main threat was proprietary vendors withholding code.
Those assumptions made sense in 1998. They made sense in 2008. By 2018 they were showing strain. By 2025 they have broken down.
The definition of open source was written before AI existed. Before training runs that treat public code as raw material. Before assistants that can understand any codebase instantly. Before the economics of software development fundamentally changed.
We can either pretend the definition is timeless and watch open source get strip-mined into irrelevance, or we can acknowledge that definitions should serve communities rather than the reverse.
Source-available licenses like BSL preserve what matters about open source:
| What Matters | Open Source | BSL-1.1 |
|---|---|---|
| Transparency | Yes | Yes |
| Self-hosting | Yes | Yes |
| Learning from code | Yes | Yes |
| Modification for own use | Yes | Yes |
| Community contribution | Yes | Yes |
| Prevention of extraction at scale | No | Yes |
| Sustainable development | Maybe | More likely |
What BSL restricts is the ability to extract value at scale without contributing back. That is the part that was killing open source anyway.
The Broader Implication
We do not think we are special. We think we are early.
The AI era changes the economics of software in ways that require new licensing models. Projects that stick with permissive licenses will increasingly find their work absorbed into training data, their architectures replicated by AI-assisted competitors, and their maintainers burning out from extraction without contribution.
Some will survive through scale. Linux is not going anywhere. Some will survive through foundation support. Some will survive through corporate backing that creates its own conflicts.
But for independent projects trying to build sustainable businesses while contributing to the commons, the traditional open source model is becoming a trap. You put in the work. AI extracts the value. Someone with more resources uses that extraction to compete with you.
Heather Meeker, an open source licensing lawyer closely associated with the BSL, understood this. The license was designed specifically for companies that want to be open but need protection from commoditisation. It is not a compromise. It is an evolution.
We expect more projects to make this choice. Not because they have sold out. Because they have done the math.
What This Means For You
If you are building on systemprompt.io:
Nothing changes. The code is visible. You can self-host. You can modify it for your needs. You can build commercial products. The template layer is MIT. Your code is yours, full stop.
If you are considering your own licensing:
Ask the hard questions. What happens when your code becomes training data? What happens when an AI-assisted competitor can replicate your work in weeks instead of years? What is your path to sustainability when extraction scales but contribution does not?
The answer might still be permissive open source. Projects with foundation backing, corporate sponsorship, or sufficient scale can absorb the extraction. Projects that are truly infrastructure, where even competitors have incentives to contribute, may sustain the traditional model.
But if you are an independent project trying to build something sustainable, you should at least consider the alternative. The ideological commitment to "truly open" licensing does not help if the project dies from lack of resources.
If you are part of the broader community:
The conversation needs to happen. Not in terms of ideology (open source good, source-available bad) but in terms of economics. What licensing models actually serve communities when AI changes everything?
The maintainers of critical infrastructure are burning out. The companies benefiting most from open source are contributing least. The AI companies scraping repositories for training data are not proportionally funding the projects they depend on.
Something has to change. BSL is one answer. There may be others. But "just use MIT and hope for the best" is not working anymore.
Looking Forward
Four years from now, the code we are writing today becomes Apache 2.0. Fully open source by any definition. The restriction is temporary. The sustainability it enables is not.
The AI era requires new thinking about old models. BSL is our answer. We think it preserves what matters about open source (transparency, self-hosting, learning, modification, community) while adding what the AI era demands: protection from extraction at scale.
If you are building on systemprompt.io, nothing changes for you. If you are thinking about licensing for your own project, we hope this gives you something to consider.
systemprompt.io is source-available. The core is BSL-1.1, converting to Apache 2.0 after four years. The template layer is MIT: yours to use, modify, and own. Read our licensing documentation for details.