I Built a Distributed AI Search Engine to Kill SEO. Turn Your Website Into an Agent.
How to increase your GEO (Generative Engine Optimization) visibility by integrating LLMs directly into your web page and bypassing search engine indexes.
I spent the last month optimizing a side project to achieve a new goal: ensuring my content appears in LLM (Large Language Model) conversations that use web search tools. Despite obsessing over traditional SEO/GEO tactics like keywords and meta tags, the results were mediocre.
I realized the problem was my approach: I was optimizing for search indexation, when I needed a solution that actually got my content inserted into the LLM conversation.
Out of frustration, I stopped.
I spent the next 24 hours coding a Proof of Concept (POC) that challenges how we think about search entirely.
It’s called the Agent Orchestrator. It doesn’t crawl the web. It doesn’t rely on MCP “Tool Calling”. Instead, it decentralizes AI, allowing businesses to answer users directly via REST API.
The Problem: We Are Hitting the “Context Wall”
There are three major issues facing developers and businesses right now:
1. SEO is an Endless Treadmill
You can spend massive amounts of time and effort optimizing for SEO/GEO and still remain invisible, because every competitor is doing the same and constantly overtaking you in the rankings. It is an endless optimization race where the goalposts keep moving.
2. The MCP Scalability Paradox
You can build a Model Context Protocol (MCP) server for your site, but it faces two hurdles. First, discovery: how will an average user know to add your MCP tool if they don’t even know you exist? Second, saturation: if every web page offers a tool, the LLM becomes overwhelmed. Even a 1-million-token context window cannot effectively manage millions of competing tools.
3. Search Fragments Your Information
Information is often split across multiple pages on your website. Web search tools retrieve isolated, indexed pages, completely missing the relationships and logic that connect your content.
I realized that Orchestration is the missing layer. We don’t need a smarter model; we need a better traffic controller.
The Solution: The “Orchestrator” Model
I built a system where the “Orchestrator” acts as the decision-maker.
Here is the architectural shift: Instead of the LLM trying to figure out which tool to call from a list of thousands, the Orchestrator classifies the intent and routes the request via a secure REST API.
How it works in 4 steps:
- User Query: The user asks their preferred LLM (e.g., ChatGPT, Claude, Gemini): “Where can I get the best vintage furnace repair in Ivano-Frankivsk?”
- Orchestrator Handoff: The LLM forwards the query to the Orchestrator (via tooling), which classifies the intent and location (e.g., Category: HomeRepair, Location: Ivano-Frankivsk).
- Async API Routing: The Orchestrator looks up all registered web pages in its database that fit the category and sends them REST API requests asynchronously (see the sketch after this list).
- Synthesis & Return: The Orchestrator aggregates the answers from these pages and sends the final result back to the LLM to display to the user.
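Here is a minimal sketch of that loop on the orchestrator side. All the helper names (classify_intent, find_agents, sign_request, synthesize) are illustrative, not the actual repo API, and I am using httpx as the async HTTP client:

import asyncio
import httpx

async def handle_query(query: str) -> str:
    # 1. Classify the intent and location (Gemini does this in the POC).
    intent = classify_intent(query)  # hypothetical: {"category": "HomeRepair", "location": "Ivano-Frankivsk"}
    # 2. Look up every registered agent that matches the category and location.
    agents = find_agents(intent["category"], intent["location"])  # hypothetical registry lookup
    # 3. Fan out signed REST requests to all matching /agent endpoints at once.
    async with httpx.AsyncClient(timeout=5.0) as client:
        tasks = [
            client.post(agent.url, json={"query": query}, headers=sign_request(agent))
            for agent in agents
        ]
        responses = await asyncio.gather(*tasks, return_exceptions=True)
    # 4. Keep the successful answers and let the LLM synthesize a final result.
    answers = [
        r.json()["result"]
        for r in responses
        if isinstance(r, httpx.Response) and r.status_code == 200
    ]
    return synthesize(query, answers)  # hypothetical synthesis call

The important detail: only steps 1 and 4 touch an LLM. The routing and fan-out in between are plain HTTP.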
Security: How It Actually Works
This requires a cryptographic handshake between the Orchestrator and the Business (Agent). Here is the workflow I built to ensure security:
1. The Registration (The “Visa”)
You (the business owner) visit the central Orchestrator Dashboard. You enter your agent’s details (category, location, etc.) and your URL.
2. The Integration (The “Setup”)
Once registered, the system generates a unique credential.
- The Key: You must download the .pem public key and place it into the authorized_keys folder within your web page’s repository.
- The Code: You need to create a new /agent endpoint in your application. Crucially, you must wrap this endpoint with the agent_orchestrator decorator, which handles the incoming security checks.
3. The Request (The “Check”)
When a user searches for “Best Pizza,” the Orchestrator signs the request using its private key. Your agent (via the decorator) verifies the signature against the .pem file in your repository. If the signature is valid, your agent answers; if not, it rejects the request.
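Under the hood, that check is standard RSA signature verification. Here is a minimal sketch with the cryptography package; the actual agent_orchestrator internals may differ, and the key filename is an assumption:

from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def is_authorized(message: bytes, signature: bytes) -> bool:
    # Load the orchestrator's public key downloaded at registration
    # (filename is illustrative).
    pem = Path("authorized_keys/orchestrator.pem").read_bytes()
    public_key = serialization.load_pem_public_key(pem)
    try:
        # verify() raises InvalidSignature unless the request was signed
        # by the matching private key.
        public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False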
Why The Handshake Matters: Protecting Your Wallet & Maintaining Control
This cryptographic handshake isn’t just for show; it is a firewall for your bank account and a tool for independence.
Running an AI Agent on your server has a cost (database reads + inference compute). If you simply exposed a public API endpoint, malicious bots could flood your server with requests (a DDoS), skyrocketing your cloud bill overnight. The RSA key system ensures that only the specific Orchestrator you authorized can trigger your agent. If a request arrives without the correct cryptographic signature, your server rejects it instantly, before spending a single cent on compute.
Crucially, the library gives you full control. It includes a built-in key generator, meaning you are not locked into a single central Orchestrator. You can generate keys for different partners or private networks, allowing you to register your service agent with multiple orchestrators while deciding exactly who is trusted to access your API.
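Generating a fresh key pair for a new partner or private network takes only a few lines; this is a sketch of the equivalent logic, not the repo’s exact generator:

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# The orchestrator keeps the private key; the agent gets the public .pem.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_pem = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)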
Why REST API?
This is the technical hill I am willing to die on: Triggering via REST API is superior to LLM Tool Calling.
When you rely on standard Tool Calling, you are asking the LLM to be the logic engine for everything. That is too much noise, and it doesn’t scale.
By using a REST API trigger:
- Massive Parallelism (Async vs. Serial): REST APIs can run asynchronously. The Orchestrator can trigger 1,000 agents simultaneously. In contrast, LLM Tool Calling simply cannot effectively select and execute tools at that scale.
- Distributed Compute: The web page owner (the business) runs the inference on their server. They pay for the compute, but in exchange, they own the lead.
- Privacy & Data: The business can run their own RAG (Retrieval-Augmented Generation) against their private database/CRM to give the best answer. The Orchestrator never sees the database, only the answer.
- Full Protocol Control: You get the full advantage of the standard HTTP protocol. You can modify the request body, enforce strict timeouts, and manage headers (example below). You are in control of the connection, not at the mercy of the model’s opaque decision-making.
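To make the protocol-control point concrete: with an HTTP client like httpx, every connection parameter is explicit. A sketch, with token handling simplified:

import httpx

# Strict, explicit timeouts: a slow agent gets dropped, not waited on.
timeout = httpx.Timeout(connect=1.0, read=3.0, write=1.0, pool=1.0)

async def call_agent(client: httpx.AsyncClient, url: str, query: str, token: str) -> httpx.Response:
    return await client.post(
        url,
        json={"query": query},                         # full control over the body
        headers={"Authorization": f"Bearer {token}"},  # and over the headers
        timeout=timeout,
    )

None of this is possible when the model itself decides how and when to call a tool.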
The Economics: Why Web Pages Will Do This
You might ask, “Why would a business want to pay for the compute to answer a query?”
Think about the math. A single Google Ads click for ‘Emergency Plumber’ can cost $50. For that same $50, you can run a lightweight LLM (or a simple SQL query, or a free LLM API tier) to answer 10,000 direct customer queries via this API, which works out to half a cent per query.
What are the benefits for a small business?
- The business receives the original query.
- They can run internal algorithms to see if they have the item in stock (see the sketch after this list).
- They return a high-quality answer.
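The cheapest version of that agent doesn’t even need an LLM. A sketch of a pure-SQL answer, where the table and schema are made up for illustration:

import sqlite3

def answer_stock_query(item: str) -> str:
    conn = sqlite3.connect("inventory.db")
    row = conn.execute(
        "SELECT name, price, quantity FROM inventory "
        "WHERE name LIKE ? AND quantity > 0",
        (f"%{item}%",),
    ).fetchone()
    conn.close()
    if row:
        name, price, quantity = row
        return f"Yes, {name} is in stock ({quantity} units) at ${price}."
    return "Sorry, that item is currently out of stock."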
How I Built It (The Tech Stack)
I built this POC in one day:
- Core: Python & Flask.
- Security: RSA-based JWT Authentication. (This is crucial — we can’t have spam agents).
- AI: Google Gemini for the classification and synthesis layer.
- Protocol: Async REST requests.
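Here is what the /agent endpoint looks like in the demo movie app: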
from dotenv import load_dotenv
from flask import Flask, jsonify, render_template, request

from llm_library import llm
from movies_db import get_movies  # demo data helper (module name illustrative)
# Import the distributed security layer
from agent_orchestrator import AgentAuth

load_dotenv()
app = Flask(__name__)
# Initialize the handshake protocol
auth = AgentAuth()

@app.route('/')
def index():
    return render_template('index.html', movies=get_movies(app))

# The new "Agent Endpoint" that replaces Google crawlers
@app.route('/agent', methods=['POST'])
@auth.require_auth  # <--- The RSA security check
def agent():
    data = request.get_json()
    user_query = data.get('query')
    # Run the LLM call against your own database
    movies_str = ", ".join(str(m) for m in get_movies(app))
    prompt = f"Context: Movies: {movies_str}. Query: {user_query}. Answer query."
    response = llm.generate_llm_answer(prompt)
    return jsonify({"result": response})
Here is the security flow that makes it enterprise-ready: We use content_sha256 hashing to ensure the request body wasn't tampered with, and jti claims to prevent replay attacks.
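A sketch of how such a token can be minted and checked with PyJWT; the claim names match the ones above, while the function names and lifetimes are illustrative:

import hashlib
import time
import uuid

import jwt  # PyJWT, with the cryptography extra installed for RS256

def mint_token(private_pem: str, body: bytes) -> str:
    now = int(time.time())
    claims = {
        "iat": now,
        "exp": now + 60,  # short-lived token
        "jti": str(uuid.uuid4()),  # unique per request: reject repeats to stop replays
        "content_sha256": hashlib.sha256(body).hexdigest(),  # binds the token to this exact body
    }
    return jwt.encode(claims, private_pem, algorithm="RS256")

def verify_token(public_pem: str, token: str, body: bytes) -> dict:
    # decode() verifies the RSA signature and the exp claim.
    claims = jwt.decode(token, public_pem, algorithms=["RS256"])
    if claims["content_sha256"] != hashlib.sha256(body).hexdigest():
        raise ValueError("request body was tampered with")
    # A real agent would also track seen jti values and reject duplicates.
    return claims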
Is this the death of SEO?
No. It’s the completion of it.
Current SEO relies on Indexation (hoping a crawler finds you). This protocol relies on Registration (you telling the network you exist). It turns marketing from a passive game of ‘hide and seek’ into an active conversation.
SEO is about optimizing for a crawler. This technology is about optimizing for connection.
I see a future where:
- Orchestrators act as the trust layer.
- Agents (Websites) act as the experts.
- LLMs act as the synthesizer, not the source of truth.
I Need Your Brain on This
This is a Proof of Concept, but the implications are massive. I want to open this up to the community because I can’t solve this alone.
- Is SEO a rigged game? Am I the only one feeling like SEO has become an unwinnable race, or are you also tired of optimizing for an algorithm that constantly moves the goalposts?
- Would you actually build this? If you could bypass Google and get traffic directly from LLMs, would you take the time to add an “Agent Endpoint” to your site?
- Is this the future? Do you think a decentralized network of agents can actually disrupt the search industry, or is the current “Index” model too big to fail?
- Did I miss something? Is there already a protocol or standard out there solving this exact problem that I haven’t seen yet?
Let’s argue in the comments.
If you want to see the code, break it, or build your own Agent, the repo is open: Agent Orchestrator Project [GitHub Repo]
PS: If you try to run the orchestrator, make sure you set APP_ENV=production when you deploy it; otherwise the security checks will remain in dev mode.