
Why the First True Agentic AI Had to Come from Open Source, Not Big Tech
App Web Dev Ltd
26 March 2026
Why the first truly agentic AI that acts in the real world had to emerge from open source — and why Big Tech's legal and commercial constraints made it impossible for them to get there first.

Think back to February 2026 and the WIRED feature on OpenClaw. It was one of those pieces that felt genuinely surprising, not because an AI agent was doing useful things, but because of who built it. Not Google. Not Microsoft. Not OpenAI. A community project, stitched together by developers who wanted something that actually worked in the real world, on their terms, with their data, without asking permission from a product roadmap committee somewhere in Silicon Valley.
That moment crystallised something that had been quietly building for over a year: the first genuinely agentic AI — not a chatbot with extra steps, but a system that independently reads your calendar, sends emails, browses the web, manages tasks, and makes decisions across your digital life — had to come from open source. It could not have come from Big Tech. The question is why.
The answer has everything to do with permission, legal liability, and the difference between building something people can actually use versus building something a corporation can actually ship.
What "Agentic" Actually Means
Before we get into the legal weeds, it's worth being precise about the term. "Agentic AI" has become marketing-speak to the point of meaning almost nothing. Every chatbot with a tool integration now calls itself an AI agent.
True agentic AI is something different. It is a system that can pursue goals across multiple steps, multiple tools, and multiple sessions — without a human in the loop for each decision. It reads your inbox and decides which emails matter. It books a meeting by checking three people's calendars, drafting a message, and firing it off. It monitors a project status and escalates when something looks wrong. It does not wait to be asked. It acts.
That kind of autonomy requires the system to have access to things: your email, your files, your calendars, your APIs, your financial data if needed. It needs to be able to write, send, and delete things on your behalf. And that is precisely where Big Tech hits a wall.
The Open Source Acceleration
As of early 2026, there are more than fifty notable open-source agent frameworks and projects in active development. The landscape covered by lists like AIMultiple's "Best 50+ Open Source AI Agents" ranges from lightweight single-task runners to full orchestration platforms with persistent memory, multi-agent coordination, and tool use across dozens of integrations.
OpenClaw sits at the more ambitious end of that spectrum. It connects to messaging platforms, manages files, schedules tasks, fires notifications, controls browser sessions, and coordinates with other AI agents. It runs wherever you choose to host it. You own the data. You control what it can and cannot do. The code is there, publicly, for anyone to inspect, fork, or extend.
This is not an accident. Open-source communities move the way they do precisely because there is no legal department signing off on features. No product manager asking whether this use case exposes the company to litigation. No terms-of-service team nervous about what happens when the agent sends an email on a user's behalf and the recipient complains. When a developer in Manchester decides to add a feature that lets their agent monitor a client's inbox and draft replies, they ship it. Nobody stops them.
The result is that genuinely agentic behaviour, the kind that requires the system to actually touch real data and take real actions, has proliferated in open-source projects years ahead of anything comparable from a major commercial provider.

Microsoft's Agent Framework, announced in October 2025, is a serious piece of engineering. AutoGen has been iterating since its 2023 release. But look at what they actually let users do in commercially available products versus what an open-source agent can do out of the box, and the gap is striking. Microsoft's commercial agents operate within tightly constrained sandboxes. They suggest actions rather than taking them. They require confirmation for anything consequential. They are designed, above all, to minimise legal exposure.
That design philosophy is rational from a corporate standpoint. It is terrible for actually building useful agents.
Why Big Tech Cannot Ship What Open Source Already Has
Here is the problem Big Tech faces in plain terms. An AI agent that genuinely acts on your behalf is, in certain contexts, acting as your agent in a legal sense. It is doing things in your name. When it sends an email, that email came from you. When it agrees to a meeting, you are agreeing to that meeting. When it processes a payment or files something or deletes something, the consequences are real.
For a company like Google or Microsoft, deploying a system at scale that does these things on behalf of hundreds of millions of users creates an almost incalculable liability surface. The moment a user's agent books a flight, sends an email that damages a business relationship, or deletes something that turns out to have been evidence in a lawsuit, someone is going to argue that the company should have prevented it. And at scale, these edge cases are not rare. They are daily.
The UK regulatory environment amplifies this considerably. The Law Society and the Solicitors Regulation Authority have both published guidance in recent years on AI in legal practice, flagging particular concern around autonomous decision-making. The concern is not theoretical. Professionals who use AI agents that take actions without adequate supervision may find themselves in breach of their regulatory obligations. Firms offering agentic AI tools to legal or financial sector clients face scrutiny about whether those tools facilitate unauthorised practice, breach professional indemnity requirements, or conflict with data protection obligations under UK GDPR.
For a startup or open-source community, these are risks that land on the person deploying the tool. They are manageable, configurable, and context-specific. For a company like Microsoft or Google, they are risks that land on the company, multiplied by every user, in every jurisdiction, across every use case simultaneously. The legal team's answer is always going to be the same: scope it down, add confirmation steps, restrict what it can do.
The commercial incentive pulls in the same direction. Big Tech's agentic AI products live inside existing product suites. Copilot lives inside Microsoft 365. Gemini lives inside Google Workspace. The agent capabilities are shaped by the product boundaries, the billing model, the enterprise sales cycle, and the need to maintain backward compatibility with existing enterprise contracts. None of this encourages radical autonomy.
The UK Legal Picture: What You Actually Need to Know
If you are a business in Manchester or anywhere in the UK thinking about deploying agentic AI, the regulatory landscape is real but navigable, provided you approach it sensibly.
The UK's AI governance framework as of 2026 remains sectoral rather than prescriptive. There is no single AI Act equivalent to the EU regulation, though the AI Safety Act passed in 2025 introduced some reporting requirements for high-capability models. For most business use cases, the relevant obligations come from existing law: UK GDPR, the Computer Misuse Act, and sector-specific regulations for finance, health, and legal services.
The practical implications for AI agents are fairly clear. If your agent handles personal data, you need a lawful basis, retention policies, and the ability to demonstrate that the processing is proportionate. If your agent takes actions in regulated sectors, you need to be confident that the level of human oversight meets whatever your sector requires. If your agent sends communications on behalf of your business, those communications are legally yours, and you are responsible for their content.
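A retention policy only counts if it is actually enforced somewhere. A minimal sketch of what that enforcement can look like for a self-hosted agent (the 30-day window, record shape, and function names are our illustration, not legal advice and not OpenClaw's real API):

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention window; the right figure depends on your lawful basis.
RETENTION = timedelta(days=30)

def prune(records: list, now=None) -> list:
    """Drop stored records older than the retention window, so the
    deployment can demonstrate a concrete, enforced retention policy."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["stored_at"] <= RETENTION]

now = datetime.now(timezone.utc)
records = [
    {"id": 1, "stored_at": now - timedelta(days=5)},
    {"id": 2, "stored_at": now - timedelta(days=45)},
]
kept = prune(records, now=now)
print([r["id"] for r in kept])  # [1]
```

Run on a schedule, a function like this is something you can point an auditor at, which is harder to do with a managed service's opaque storage.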
None of this makes agentic AI unusable; it just demands some thought. And the advantage of open-source systems here is that you can actually read the code, audit what data is stored, configure exactly what the agent can and cannot do, and run it on infrastructure you control. That auditability is the foundation of a compliant deployment. You cannot audit a black-box commercial service in the same way.
This is not a criticism of commercial AI products. It is an observation about where genuine compliance capability lives. When you need to demonstrate to an auditor or a regulator that your AI system does not retain certain data, does not take certain actions without authorisation, and does not transmit information outside specific boundaries, you need to be able to show the working. Open-source systems let you do that.
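"Showing the working" can be as concrete as an append-only log of everything the agent did, kept on infrastructure you control. A minimal, hypothetical sketch — the class and field names are ours, not OpenClaw's real logging API:

```python
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only record of agent actions, exportable for an auditor."""
    def __init__(self):
        self.entries = []

    def record(self, action: str, target: str, outcome: str) -> None:
        self.entries.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "target": target,
            "outcome": outcome,
        })

    def export(self) -> str:
        """Serialise the full trail for inspection."""
        return json.dumps(self.entries, indent=2)

log = AuditLog()
log.record("read_inbox", "enquiries@example.co.uk", "ok")
log.record("draft_reply", "message-1042", "held_for_review")
print(log.export())
```

Because the source is open, you can verify that every action path actually writes to the log, rather than taking a vendor's word for it.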
The Manchester SME Playbook: Deploying an Agent That Actually Works
For a small or medium business in Manchester, the practical question is not whether agentic AI is legal or interesting in the abstract. It is whether it is deployable, affordable, and genuinely useful right now.
The answer is yes, with the right approach.
An agent stack built around something like OpenClaw can be hosted on a modest VPS for a few hundred pounds a year. The agent can handle incoming enquiries, manage a calendar, draft and send routine correspondence, monitor a shared inbox, and log customer interactions to a CRM, all without requiring a dedicated developer to manage it day-to-day once it is configured. For a small agency, a solicitor's practice, or an e-commerce operation, that kind of automation is the difference between spending your evenings on admin and spending your evenings on growth.
The configuration is the key investment. Getting the agent set up correctly, with appropriate boundaries around what it can act on autonomously versus what it flags for human review, requires some expertise. But it is a one-time setup cost, not an ongoing one. And because the system is open source, there is no vendor lock-in, no annual renewal negotiation, and no risk that a pricing change makes the economics collapse.
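The boundary between what the agent handles autonomously and what it flags for review can be written down as a small default-deny policy. A sketch of the idea in Python — the action names and structure are purely illustrative, not OpenClaw's actual configuration format:

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    # Actions the agent may take without asking (kept deliberately narrow).
    allowed: set = field(default_factory=set)
    # Actions the agent prepares but holds for a human decision.
    review: set = field(default_factory=set)

    def decide(self, action: str) -> str:
        """Default-deny: anything not explicitly listed is blocked."""
        if action in self.allowed:
            return "allow"
        if action in self.review:
            return "review"
        return "deny"

policy = AgentPolicy(
    allowed={"read_inbox", "draft_reply", "log_to_crm"},
    review={"send_email", "delete_file"},
)
print(policy.decide("draft_reply"))     # allow
print(policy.decide("send_email"))      # review
print(policy.decide("transfer_funds"))  # deny
```

The design choice that matters is the last line of `decide`: with default-deny, widening the agent's scope requires an explicit edit, which is exactly the one-time configuration investment described above.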
The compliance piece is manageable with the right guidance. A straightforward privacy notice update, clear internal documentation of what the agent does and how long it retains data, and a simple review process for any automated communications sent on the business's behalf will cover the GDPR obligations for most non-regulated sectors. Legal and financial services need additional care, but even there, the framework is workable.

Risks Worth Taking Seriously
None of this is to say agentic AI is risk-free. The WIRED feature on OpenClaw spent considerable time on the ClawdBot incident, where a misconfigured agent was used to send spam at scale. That is a real risk. An agent with broad permissions and poor configuration is a liability, not an asset.
The risks break down into a few categories. First, scope creep in permissions: if you give the agent access to everything from the start because it is easier than thinking through what it actually needs, you have created an attack surface and a compliance problem simultaneously. The right approach is to start narrow and expand deliberately.
Second, hallucination in consequential contexts: agents built on large language models can and do generate plausible but incorrect information. An agent that autonomously drafts a contract clause or gives a client a cost estimate needs guardrails or human review before anything goes out. The automation is for the routine; the exceptions need human eyes.
Third, over-reliance without oversight: the point of an agent is to reduce the time you spend on tasks, not to remove you from the loop entirely. Periodic review of what the agent is doing, particularly for communications and customer-facing actions, keeps you in control and catches problems before they compound.
Managing these risks is primarily a configuration and governance question, not a technology question. The tools are mature enough. The question is whether the deployment is thoughtful.
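Two of those risks reduce to small, checkable mechanisms: a gate that routes consequential drafts to a human, and a sampler for periodic spot-checks. The sketch below is purely illustrative — the regex, sampling rate, and function names are our assumptions, and a real deployment would tune them to the business:

```python
import random
import re

# Crude heuristic: drafts mentioning money, quotes, or commitments go to a human.
CONSEQUENTIAL = re.compile(r"£\s?\d|quote|estimate|contract|agree|refund", re.IGNORECASE)

def route_draft(draft: str) -> str:
    """Return 'send' for routine text, 'human_review' for anything consequential."""
    return "human_review" if CONSEQUENTIAL.search(draft) else "send"

def sample_for_review(actions: list, rate: float = 0.1, seed=None) -> list:
    """Pick a random subset of recent agent actions for human spot-checking."""
    rng = random.Random(seed)
    k = max(1, int(len(actions) * rate))
    return rng.sample(actions, k)

print(route_draft("Thanks, we received your message and will reply shortly."))  # send
print(route_draft("We can do the full build for £4,500."))  # human_review
print(len(sample_for_review(list(range(50)), rate=0.1, seed=7)))  # 5
```

Neither mechanism is sophisticated, and that is the point: most of the governance here is a few deliberate rules, not new technology.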
The Bigger Picture: Why This Matters
The reason the first practical agentic AI had to come from open source is not a historical accident. It reflects something structural about how innovation works when the stakes involve real-world autonomy.
Big Tech companies are extraordinarily good at building things that hundreds of millions of people can use safely, at scale, with predictable behaviour and reliable uptime. Those constraints produce excellent consumer products and terrible agents. An agent that is safe enough for every user in every context is an agent that cannot do very much.
Open-source communities, by contrast, can build for specific contexts, specific users, and specific use cases. They can take on the liability of real-world action because that liability lands on the people making the choices about their own deployment. They can move faster because they are not simultaneously serving the enterprise sales cycle, the consumer protection team, the legal department, and the quarterly earnings call.
The result is that the genuinely useful agentic AI of 2026 is not a product you subscribe to from a big company. It is a system you deploy, configure, and own. And that means the barrier to entry is a bit of technical knowledge, or a relationship with someone who has it.
For UK businesses, particularly in a city like Manchester where there is real appetite for practical AI adoption without the Silicon Valley mystique, that is actually good news. The technology is accessible. The costs are manageable. The compliance requirements are navigable. What is needed is a guide through the configuration choices and a clear view of what the agent should and should not be doing.
That is precisely the kind of work we do at App Web Dev. We have built agentic systems for real businesses, navigated the UK compliance questions, and learned how to set up something that is genuinely useful rather than just impressive in a demo. If you are curious about what an agent could actually do for your business, the best starting point is a conversation.
Get in touch at appwebdev.co.uk and we can talk through your specific situation, what automation would actually save you time, and what a sensible, compliant deployment looks like in practice. No jargon, no overselling, just an honest assessment of what is possible and what it would take.
The first truly agentic AI came from open source because open source had the freedom to build it. Now that it exists, you have the freedom to use it.
About App Web Dev Ltd
UK-based AI agency specialising in business automation and intelligent AI solutions
Related Articles

OpenClaw Hosting in the UK: Why GDPR-Compliant AI Matters for British Businesses
Why hosting your AI assistant in the UK matters for GDPR compliance, data sovereignty, and business trust — and how App Web Dev handles this for UK clients.

Why You Need an Expert to Set Up and Secure Your OpenClaw Instance
OpenClaw is powerful but dangerously easy to misconfigure. Learn why Manchester businesses trust App Web Dev to set up, harden, and manage their AI assistant securely.

OpenClaw vs DIY AI Automation: The Hidden Costs of Going It Alone
Building your own AI automation stack sounds appealing until you hit the security gaps, maintenance overhead, and integration headaches. Here is why businesses choose a managed OpenClaw setup.