Making Websites Talk to AI

The Web Was Built for Humans
Every website you visit was designed for a browser. HTML, CSS, navigation menus, hero images. The entire stack assumes a human is on the other end, reading and clicking. Search engines adapted to this by crawling and indexing the same HTML, extracting what they could from structure and metadata.
AI agents can browse, but it is clumsy. Scraping HTML and parsing rendered pages works until it does not. What agents actually need is structured data at the source.
That is the problem I set out to solve, using korm.co as a live demonstration.
Model Context Protocol
Anthropic published the Model Context Protocol (MCP) as an open standard for connecting AI systems to external data and tools. The idea is straightforward: define a server that exposes resources, prompts, and tools over a standard interface. Any MCP-compatible client can connect, discover what is available, and interact with it programmatically.
MCP uses JSON-RPC over HTTP. A client sends a request, the server responds with structured data. Resources are read-only content like a blog post or a bio. Prompts are templated interactions that guide the agent toward useful outputs. Tools are actions the server can perform. The protocol handles capability negotiation, so a client knows exactly what a server offers before making any calls.
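As an illustration, a client asking a server what it offers sends a JSON-RPC request like `{ "jsonrpc": "2.0", "id": 1, "method": "resources/list" }`. A sketch of the kind of response it might get back (the entries shown here are illustrative, not a real server's catalog):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "resources": [
      {
        "uri": "korm://bio",
        "name": "Author bio",
        "mimeType": "text/markdown"
      }
    ]
  }
}
```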
I built an MCP server for korm.co that exposes the entire site as structured resources. Blog posts, the links directory, author bio, terms of service. All available as machine-readable content through mcp.korm.co.
Discovery via DNS
Having a server is one thing. Letting agents find it is another.
MCP supports discovery through DNS TXT records. A single record on _mcp.korm.co advertises the server location and auth requirements:
```
_mcp.korm.co TXT "v=mcp1; src=https://mcp.korm.co; auth=none"
```
Any agent that knows to look for this record can find the MCP server for a domain without prior configuration. No API keys, no registration, no manual setup. Just a DNS lookup.
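The record format is simple enough to parse in a few lines. A minimal sketch in Node.js, assuming the `v=key; key=value` shape shown above (`parseMcpTxt` is my own helper, not part of mcp-www; a real client would first fetch the record, e.g. with `resolveTxt` from `node:dns/promises`):

```javascript
// Parse an MCP discovery TXT record of the form
// "v=mcp1; src=https://mcp.korm.co; auth=none" into an object.
function parseMcpTxt(record) {
  const fields = {};
  for (const part of record.split(";")) {
    const [key, ...rest] = part.trim().split("=");
    if (key) fields[key] = rest.join("="); // values may themselves contain "="
  }
  return fields;
}

const rec = parseMcpTxt('v=mcp1; src=https://mcp.korm.co; auth=none');
console.log(rec.src); // https://mcp.korm.co
```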
This is the part that excited me. DNS is infrastructure that already exists everywhere. Using it for service discovery means any domain owner can make their content agent-accessible by adding one TXT record and running a server.
mcp-www
Discovery only works if agents have a way to perform the lookup and speak the protocol. I built mcp-www to be that bridge.
It is an open source npm package that sits between any MCP-compatible AI client and the open web. Install it, point it at a domain, and it handles DNS discovery, server connection, and content retrieval.
```shell
npm install -g mcp-www
```
Add it to your MCP client configuration:
```json
{
  "mcpServers": {
    "mcp-www": {
      "type": "stdio",
      "command": "npx",
      "args": ["mcp-www"]
    }
  }
}
```
Then call discover_browse with any domain that has an MCP TXT record*:
```json
{
  "tool": "discover_browse",
  "arguments": { "domain": "korm.co" }
}
```
That single call performs the DNS lookup, connects to the advertised server, and returns the full manifest of available resources, prompts, and tools. From there the agent can read blog posts, get the author bio, ask for post recommendations, or call any tool the server exposes.
The package provides seven tools*:
| Tool | Purpose |
|---|---|
| `discover` | DNS-only lookup, returns server URLs and metadata (single or batch) |
| `discover_browse` | DNS + server card lookup in one call |
| `browse` | Inspect a domain or server URL via server card with MCP fallback |
| `read_remote_resource` | Fetch a specific resource by URI |
| `call_remote_tool` | Invoke a tool on the remote server |
| `get_remote_prompt` | Retrieve a prompt with arguments |
| `install` | Generate client config for Claude Desktop, VS Code, Cursor, and Windsurf |
What korm.co Exposes
The MCP server running at mcp.korm.co currently offers:
Resources are read-only content that agents can fetch directly:
- `korm://bio` for author information
- `korm://links` for the curated links directory
- `korm://media` for a listing of all blog posts with metadata
- `korm://media/{slug}` for individual post content in markdown
- `korm://terms` for terms of service
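Through mcp-www, fetching one of these would go through `read_remote_resource` from the tool table above. A plausible call, following the same argument shape as the `discover_browse` example (the exact argument names beyond `domain` are my assumption, not confirmed by the package docs):

```json
{
  "tool": "read_remote_resource",
  "arguments": { "domain": "korm.co", "uri": "korm://media" }
}
```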
Prompts are guided interactions with arguments:
- `summarize-blog` provides all post metadata and asks for a thematic summary
- `recommend-post` takes a topic and suggests the most relevant post
- `about-evan` returns author context for introductions
Tools are actions:
- `ask_agent` is a stub endpoint for a future conversational agent backed by Claude
The server reads content from the same source files that build the static site. Blog posts come from the MDX files in content/posts/, links from content/links.json. One source of truth, two interfaces: one for browsers, one for agents.
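To make that concrete, here is a sketch of the metadata-extraction step, assuming the MDX posts carry `---`-delimited YAML-style frontmatter (an assumption about the setup, not something stated above; a real build would use a library like gray-matter):

```javascript
// Split a minimal "---"-delimited frontmatter header off an MDX source
// string and parse simple "key: value" pairs from it.
function parsePost(source) {
  const match = source.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  if (!match) return { meta: {}, body: source };
  const meta = {};
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx > 0) meta[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return { meta, body: match[2] };
}

const { meta } = parsePost("---\ntitle: Hello\nslug: hello\n---\n# Hello\n");
console.log(meta.slug); // hello
```

The same parsed objects can feed both the static site generator and the `korm://media` resource listing, which is what "one source of truth, two interfaces" amounts to in practice.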
From robots.txt to llms.txt
robots.txt has been the web's way of talking to machines since 1994, but it only says come in or stay out. The llms.txt proposal picks up where it leaves off, giving agents static context and acting as a guide, not just a gate.
korm.co publishes one at korm.co/llms.txt. It describes the site, bootstraps agents toward mcp-www and the MCP server, and falls back to direct page links for agents without MCP support.
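The llms.txt proposal uses plain markdown: an H1 title, a blockquote summary, and H2 sections of annotated links. A file in that shape might look like this (illustrative only; the actual file at korm.co/llms.txt and the page URLs shown here are not reproduced from the source):

```markdown
# korm.co

> Personal site: blog posts, a curated links directory, and an author
> bio, also available as structured MCP resources via mcp.korm.co.

## MCP
- [MCP server](https://mcp.korm.co): resources, prompts, and tools
- [mcp-www](https://github.com/kormco/mcp-www): DNS discovery bridge

## Pages
- [Blog](https://korm.co/media): all posts
- [Links](https://korm.co/links): curated links directory
```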
Why This Matters
Right now korm.co exposes content and prompts. An agent can read every blog post, get a recommendation, or learn about the author. That alone is useful, but it is the least interesting part of what MCP can do.
The protocol supports tools. Actions an agent can execute on the server. The ask_agent stub on korm.co hints at where this goes. Imagine a site that lets agents submit forms, run queries, trigger workflows, or interact with services, all through the same protocol they used to read a blog post. Content is the starting point. Tools are where this gets serious.
MCP is young, discovery patterns are still forming, and the tooling is early. But the infrastructure is being built in the open, and anyone can participate. A TXT record, a small Node.js server, and your site is available to every AI agent that knows how to ask.
What Comes Next
DNS discovery and mcp-www solve the problem for individual sites, but the broader question is how this scales. I submitted a proposal to the MCP spec discussions focused on DNS-based discovery for the open web. Separately, others have proposed concepts like WebMCP for browser-native transport and indexation layers that would let agents discover servers without knowing specific domains in advance.
I am also running benchmarks across three discovery methods (DNS TXT via mcp-www, .well-known HTTP endpoints, and traditional web scraping) to simulate indexation scenarios.
As those standards emerge, they will build on top of infrastructure like this rather than replace it. DNS-based discovery is not a registry in itself. You still need to know the domain to query it. But it provides a scalable, high-performance foundation for building one. No central authority, no sign-up, no gatekeepers. Every domain owner controls their own entry.
Try It
Install mcp-www and discover korm.co:
```shell
npx mcp-www
```
Then in your MCP client:
```json
{ "tool": "discover_browse", "arguments": { "domain": "korm.co" } }
```
The source for mcp-www is at github.com/kormco/mcp-www.
*Updated March 16, 2026 — mcp-www v0.2.0 consolidated these tools into discover, discover_browse, browse, call_remote_tool, read_remote_resource, get_remote_prompt, and a new install tool. See the README for current usage.