Written by Ayushman

Recently, I had the opportunity to work on a proof of concept where we needed to let LLM chatbots fetch courses from our catalog. With that context, the bots could recommend options tailored to each person’s activity, goals, and learning history.
That's when I came across MCP - the Model Context Protocol. It’s a simple but powerful way to personalise LLMs by giving them access to your own data or functionality whenever they need it.
In ChatGPT and Claude, all the services I use daily are connected through what they call connectors. Under the hood, those connectors are powered by MCP. Thanks to that, ChatGPT can now pull insights directly from my Notion pages or Google Calendar whenever I ask.
And just a few weeks ago, ChatGPT introduced developer mode for custom MCP servers, which means you can now add your own tools to GPT as well.
MCP standardises how LLMs receive context and call functions, providing a single interface that works across models.

The basic setup is refreshingly straightforward. You register your tools and provide context - a title, description, parameters, and a callback function. Here's what that looks like in practice:
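Below is a minimal sketch of that registration using the official TypeScript SDK (`@modelcontextprotocol/sdk`). The `send_slack_message` tool and the `postToSlack` helper are placeholders for illustration:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Placeholder: swap in a real Slack API call (e.g. chat.postMessage).
async function postToSlack(channel: string, text: string): Promise<void> {
  console.log(`[stub] would post to ${channel}: ${text}`);
}

const server = new McpServer({ name: "demo-server", version: "1.0.0" });

// A tool is a title, a description, typed parameters, and a callback.
server.registerTool(
  "send_slack_message",
  {
    title: "Send Slack Message",
    description: "Post a message to a Slack channel",
    inputSchema: {
      channel: z.string().describe("Channel name, e.g. #general"),
      text: z.string().describe("Message body"),
    },
  },
  async ({ channel, text }) => {
    await postToSlack(channel, text);
    return { content: [{ type: "text", text: `Sent to ${channel}` }] };
  }
);

// Connect over stdio for local development.
await server.connect(new StdioServerTransport());
```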
The example above is neat, but it isn’t realistic. We don’t want just anyone sending Slack messages or pulling data from our servers - proper authentication is a must.
MCP uses OAuth 2.1 with PKCE for authentication, but with some extra rules that make it different from what you might expect. Specifically, MCP requires discovery documents, resource indicators, and strict audience checks to keep tokens bound to the right server.
To secure our MCP server, we need an authorisation layer. Without it, anyone could hit our endpoints and start pulling data or sending requests. MCP builds this on top of OAuth 2.1, so the flow might look familiar if you’ve worked with modern auth systems.

### The `.well-known` Endpoint

The first step is to let clients know how to talk to our authorization server. MCP follows the OAuth 2.0 Protected Resource Metadata (RFC 9728) and Authorization Server Metadata (RFC 8414) standards, which means clients look for metadata at predictable paths under `.well-known`.
For example, a request to:
https://our-domain.com/.well-known/oauth-authorization-server
should return JSON describing metadata for our auth server’s capabilities:
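Something along these lines (field names are from RFC 8414; the values are illustrative):

```json
{
  "issuer": "https://our-domain.com",
  "authorization_endpoint": "https://our-domain.com/authorize",
  "token_endpoint": "https://our-domain.com/token",
  "jwks_uri": "https://our-domain.com/jwks.json",
  "response_types_supported": ["code"],
  "grant_types_supported": ["authorization_code", "refresh_token"],
  "code_challenge_methods_supported": ["S256"],
  "scopes_supported": ["openid", "profile", "mcp"]
}
```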
This document tells the MCP client where to send users for login (`/authorize`), where to swap codes for tokens (`/token`), and which scopes and flows are supported.
### `/authorize` – Starting Authentication

The `/authorize` endpoint is where the login flow begins. When the MCP host starts the auth flow, it redirects the user to our IdP (Google, Notion, GitHub, etc.) through this endpoint.
We need to generate a PKCE challenge for security and include the `resource=https://our-domain.com/mcp` parameter so the token is bound to our MCP server only.
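Here's a rough sketch of building that redirect URL with Node's built-in `crypto` module (the client ID and paths are placeholders):

```typescript
import { createHash, randomBytes } from "node:crypto";

// PKCE: a random verifier, plus its SHA-256 challenge (base64url-encoded).
const codeVerifier = randomBytes(32).toString("base64url");
const codeChallenge = createHash("sha256")
  .update(codeVerifier)
  .digest("base64url");

const authorizeUrl = new URL("https://our-domain.com/authorize");
authorizeUrl.search = new URLSearchParams({
  response_type: "code",
  client_id: "mcp-client", // placeholder
  redirect_uri: "https://our-domain.com/idp/callback",
  code_challenge: codeChallenge,
  code_challenge_method: "S256",
  // Resource indicator (RFC 8707): bind the issued token to our MCP server.
  resource: "https://our-domain.com/mcp",
}).toString();

// Redirect the user's browser to authorizeUrl to begin login.
```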
### `/idp/callback` – Handling Redirects

After the user logs into the IdP (Identity Provider), they’re redirected back to our server at a callback endpoint, such as `/idp/callback`. This is where we exchange the authorization code for an access token. That token is proof of who the user is and what they’re allowed to do.
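A sketch of that exchange using Express and the global `fetch` (Node 18+); the IdP's token URL is a placeholder, and `codeVerifier` is the PKCE verifier generated in the previous step:

```typescript
import express from "express";

const app = express();

// The IdP redirects back here with ?code=... after login.
app.get("/idp/callback", async (req, res) => {
  const tokenResponse = await fetch("https://idp.example.com/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code: String(req.query.code),
      redirect_uri: "https://our-domain.com/idp/callback",
      code_verifier: codeVerifier, // proves we started this flow
    }),
  });
  const { access_token, refresh_token } = await tokenResponse.json();
  // Hand the tokens back to the MCP client (details depend on your flow).
  res.json({ access_token, refresh_token });
});
```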
### `/token` – Issuing and Refreshing Tokens

Finally, our `/token` endpoint issues tokens (access + refresh) and handles renewals. Every request the MCP client makes to our server will carry:

`Authorization: Bearer <access-token>`
Our server must validate:
- the token signature, against our published keys (`jwks.json`)
- the issuer: `https://our-domain.com`
- the audience: `https://our-domain.com/mcp`

If the token is invalid or expired, return `401`.
### `/mcp` – Protecting the MCP Endpoint

We need to protect our MCP endpoint with middleware that checks the token before connecting the client.
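A sketch of that middleware using the `jose` library, checking exactly the three things listed above (and assuming the Express `app` from the callback snippet):

```typescript
import { createRemoteJWKSet, jwtVerify } from "jose";
import type { NextFunction, Request, Response } from "express";

const jwks = createRemoteJWKSet(new URL("https://our-domain.com/jwks.json"));

// Reject any request to /mcp that lacks a valid bearer token.
async function requireAuth(req: Request, res: Response, next: NextFunction) {
  const token = req.headers.authorization?.replace(/^Bearer /, "");
  if (!token) return res.status(401).json({ error: "missing token" });
  try {
    await jwtVerify(token, jwks, {
      issuer: "https://our-domain.com",
      audience: "https://our-domain.com/mcp",
    });
    next();
  } catch {
    res.status(401).json({ error: "invalid or expired token" });
  }
}

app.use("/mcp", requireAuth);
```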
Now that we’ve got authentication and the /mcp endpoint in place, let’s look at how to actually use this setup.
We’ve got multiple options for deploying this service. Since it’s plain JavaScript, we can spin it up with Node, Deno, or any other JS runtime - either locally or on a remote server.
- **Locally:** run `node http.js` (MCP server) and `node auth.ts` (auth server), or use Docker. In this setup, you’ll usually have `http://localhost:3000` for the MCP server and `http://localhost:4000` for the auth server.
- **Remotely:** instead of `localhost`, you’ll point to your domain, e.g. `https://mcp.your-domain.com` and `https://auth.your-domain.com`.

The only hard requirement: your `.well-known` endpoints and `/mcp` must be reachable over HTTPS.
Here are the different ways you can configure and connect to your MCP server depending on your deployment scenario:
| Scenario | Config Example | Notes |
|---|---|---|
| Local HTTP (dev) | `mcp-remote http://localhost:3000/mcp 9696 --transport http-only` | MCP on `localhost:3000`, Auth on `localhost:4000`. Great for testing. |
| Remote HTTPS | `mcp-remote https://mcp.your-domain.com/mcp 9696 --transport http-only` | Use when the server is deployed to Vercel, Railway, EC2, etc. Requires HTTPS + `.well-known`. |
| Pure STDIO (no remote) | `node ./server.js --transport=stdio` | Direct connection to your server's stdio. Fastest dev loop. No HTTP bridge. |
| LangGraph / Agentic AI | `new MultiServerMCPClient({ mcpServers: { ... }})` | Works with both `mcp-remote` (HTTP bridge) and pure stdio. Auto-discovers tools (see the sketch below). |
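For that last row, here's roughly what the LangGraph-side config looks like with the `@langchain/mcp-adapters` package; the server names and exact option fields are from my setup, so treat this as a sketch:

```typescript
import { MultiServerMCPClient } from "@langchain/mcp-adapters";

// One client, many servers: a local stdio process and a remote HTTPS one.
const client = new MultiServerMCPClient({
  mcpServers: {
    local: {
      transport: "stdio",
      command: "node",
      args: ["./server.js", "--transport=stdio"],
    },
    remote: {
      transport: "http",
      url: "https://mcp.your-domain.com/mcp",
    },
  },
});

// Auto-discovers every tool the connected servers expose.
const tools = await client.getTools();
```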
What started as a simple proof of concept - letting a chatbot recommend courses - has now become a pattern I use everywhere. By building an authenticated MCP server, I’ve been able to give LLMs secure access to tools and data that actually matter: Slack for communication, Notion for notes, Google Calendar for scheduling, or even custom APIs inside a company.
The amazing thing about MCP is that it’s plug-and-play. Write once, and your server works across Claude, ChatGPT, Cursor, or even your own in-house chatbot. Instead of reinventing integrations, you simply register tools and let the LLM call them securely.
Of course, it wasn’t all smooth sailing. While testing with Cursor, I kept running into an annoying issue: every time I changed a tool on my server, I had to restart both Cursor and the server before the updates would show up. It really slowed me down in the early stages. I eventually solved this by adding nodemon to automatically restart the server whenever files changed, and then triggering the tools update method from the MCP SDK so that changes were detected without having to reconnect or restart Cursor.
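If you hit the same problem, the relevant call in the TypeScript SDK is the tool-list-changed notification (using the `server` instance from the first snippet):

```typescript
// Tell connected clients (e.g. Cursor) that the tool list changed,
// so they re-fetch tools without a full reconnect or restart.
server.sendToolListChanged();
```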
And the possibilities are still wide open. Imagine LLMs booking meetings directly into your calendar, summarizing Notion or Obsidian notes, or managing deployment pipelines — all gated by OAuth and standardized by MCP. It’s also a great way for products, services, and apps to extend their knowledge and functionality into the world of LLMs.