Last year, Anthropic released the Model Context Protocol (MCP), a new protocol that lets AI processes like LLMs and AI agents communicate predictably with external resources. Previously, developers would feed an API’s OpenAPI specification to an LLM. While descriptive, these specifications didn’t convey the intent behind an API. MCP emerged to fill that gap: a protocol that organizes pre-packaged prompt templates, tools, and data for an external resource like Salesforce, a document repository, or another agentic system.
Additionally, authorization flows for MCP are distinct from those of a traditional API. Typically, APIs rely on signed credentials granted to a human (e.g. via email and password). AI agents and LLMs aren’t humans, yet they still need to obtain authorization without a human present. Hardcoding an email and password into an AI agent is a naive and risky approach; instead, we need a context-aware authorization approach for AI agents and MCP.
Today, we’ll dive into best practices for MCP authorization, detailing OAuth 2.1, Proof Key for Code Exchange (PKCE), Dynamic Client Registration, and authorization frameworks. Before getting into the details, however, let’s review how MCP actually works.
What is Model Context Protocol (MCP)?
Model Context Protocol (MCP) is an open standard developed by Anthropic that standardizes how applications provide context to Large Language Models (LLMs). The common, potentially overused analogy for MCP is that it’s a “USB-C port for AI applications”; however, I prefer describing it as a cake mix box: it tells you what’s inside, how to use it, and the more complex things you can create with it.
MCP is distinct from strictly descriptive API protocols like OpenAPI. With an OpenAPI specification, AI agents must figure out what to do from scratch. With MCP, AI agents can explore the application’s purpose, common usage patterns, and tools that bundle actions that would otherwise take multiple API calls. When humans interface with an application, they lean on its API, the API’s documentation, and step-by-step tutorials online; with MCP, AI agents get all of that in one place.
How does MCP work?
MCP defines a client-server architecture. Each AI application acts as a host that runs multiple MCP clients. When an MCP host wants to connect to an MCP server (i.e. an external resource), it’ll create an MCP client to manage that relationship.
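To make this concrete, here’s a minimal sketch of a host spinning up an MCP client, written against the official MCP Python SDK’s stdio transport (the server command and script name are hypothetical):

```python
# A minimal sketch of a host creating an MCP client; assumes the official
# Python SDK (package: "mcp"). The server script name is hypothetical.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # The host launches the MCP server as a subprocess and talks to it over stdio.
    server = StdioServerParameters(command="python", args=["my_server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()          # handshake: negotiate capabilities
            tools = await session.list_tools()  # discover the server's tools
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```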
MCP servers expose three main components:
- Tools, specific functionalities akin to API routes that the AI can invoke.
- Resources, specific files or data that the AI can access from integrated applications.
- Prompts, pre-written instructions that assist in particular situations.
In other words, unlike traditional APIs that only explain what can be done, developers can use MCP to communicate the business context of their application, making it easier for AI systems to understand when and why to use specific tools.
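As a sketch of what that looks like in practice, here’s a toy MCP server exposing one of each component, written against the official Python SDK’s FastMCP helper (the CRM domain and every name in it are invented for illustration):

```python
# A minimal sketch of an MCP server exposing a tool, a resource, and a prompt;
# assumes the official Python SDK's FastMCP helper. The CRM domain is invented.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm-demo")

@mcp.tool()
def create_lead(name: str, email: str) -> str:
    """Create a new sales lead (bundles what would be several raw API calls)."""
    return f"Created lead for {name} <{email}>"

@mcp.resource("crm://leads/recent")
def recent_leads() -> str:
    """Expose recent leads as a readable resource."""
    return "Alice <alice@example.com>\nBob <bob@example.com>"

@mcp.prompt()
def qualify_lead(lead_name: str) -> str:
    """A pre-written instruction template for qualifying a lead."""
    return f"Review the CRM history for {lead_name} and score their purchase intent."

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```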
How does MCP handle authorization?
While a few public MCP servers might not require authorization (such as a public resource like a WHOIS system), a large number of MCP servers need to authorize clients—either to rate-limit requests (e.g. a CC0 images repository) or to expose otherwise confidential data (e.g. Salesforce or Snowflake).
There are a few techniques to handle authorization. The simplest is to use API keys, ideally with a key management solution (e.g. Infisical) for safety. However, API keys have inherent limitations: they typically provide broad, service-level access rather than fine-grained, user-specific permissions. They’re also only suitable for services that support API key authentication.
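As a minimal sketch of the API-key approach done safely, the key lives in the environment (populated by a secrets manager at deploy time) rather than in the agent’s codebase; the variable name, URL, and header scheme below are hypothetical:

```python
# A minimal sketch of keeping an API key out of the agent's codebase: read it
# from the environment, which a secrets manager (e.g. Infisical) populates at
# deploy time. The variable name, URL, and header scheme are hypothetical.
import os

import httpx

API_KEY = os.environ["IMAGE_REPO_API_KEY"]  # injected at runtime, never hardcoded

response = httpx.get(
    "https://api.example.com/v1/images",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
response.raise_for_status()
```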
Instead, most applications require a user-delegated authorization flow like OAuth 2.0. However, OAuth for AI agents is more complex: agents aren’t humans, and manually storing credentials in an agent’s codebase is dangerous.
To work around this, developers can use Dynamic Client Registration (DCR) to pre-authorize AI agents identified by their attributes. Before diving into those details, though, it’s best to start with the basics: OAuth 2.1, MCP’s chosen standard for authorization.
What is OAuth 2.1?
OAuth 2.1 is a proposed IETF specification that builds on OAuth 2.0 while addressing several of its security shortcomings. One of the critical components of OAuth 2.1 is that MCP clients and servers can delegate authorization, where a third-party service handles authorization on behalf of the MCP client and server.
There are a few key distinctions between OAuth 2.1 and OAuth 2.0. Three matter most for MCP: mandatory PKCE, Metadata Discovery, and Dynamic Client Registration (DCR).
Let’s discuss each of these in-depth.
Proof Key for Code Exchange (PKCE)
Originally an optional extension to OAuth 2.0, PKCE adds an extra layer of security to the step where submitted credentials are turned into an access token.
Let’s remind ourselves of how OAuth 2.0 works:
- The user provides valid credentials.
- The authorization server grants the client an authorization code.
- The client can then trade that code for an access token.
- With the access token, the client can access a protected resource.
However, this poses a vulnerability: if an attacker intercepts the authorization code, they can trade it for an access token!
To address this, PKCE creates an additional set of steps:
- Before credentials are submitted, the client generates a random string (a.k.a. a verifier) and derives a code challenge from it.
- When the client requests the authorization code, it submits the challenge alongside the request.
- When the client exchanges the authorization code for an access token, it includes the verifier.
- The access token is only issued if the verifier matches the code challenge.
Because the client never transmits the verifier over the wire until the access token request, the authorization server can be confident it’s talking to the same client that started the flow, not an attacker.
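Here’s a minimal sketch of PKCE from the client’s side, following RFC 7636; the client_id, redirect URI, and placeholder code value are hypothetical:

```python
# A minimal sketch of client-side PKCE (RFC 7636). The client_id and
# redirect_uri are hypothetical placeholders.
import base64
import hashlib
import secrets

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as PKCE requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Step 1: generate the verifier and derive the challenge before any redirect.
code_verifier = b64url(secrets.token_bytes(32))                            # random 43-char string
code_challenge = b64url(hashlib.sha256(code_verifier.encode()).digest())  # S256 transform

# Step 2: the challenge rides along on the authorization request...
authorize_params = {
    "response_type": "code",
    "client_id": "my-mcp-client",
    "redirect_uri": "http://localhost:8765/callback",
    "code_challenge": code_challenge,
    "code_challenge_method": "S256",
}

# Steps 3-4: ...and the verifier is revealed only when trading the code for a token.
token_params = {
    "grant_type": "authorization_code",
    "code": "<authorization code from the redirect>",
    "redirect_uri": "http://localhost:8765/callback",
    "code_verifier": code_verifier,
}
```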
Metadata Discovery
Because humans are not manually wiring up connections between MCP clients and MCP servers, there needs to be a deterministic way for an MCP client to determine which authorization server to connect to. The solution is for authorization servers to expose metadata that informs MCP clients about themselves.
In particular, this apparatus is necessary for Dynamic Client Registration, where MCP clients automatically register with authorization servers without a human present.
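Concretely, this is OAuth 2.0 Authorization Server Metadata (RFC 8414): a JSON document served at a well-known URL. A minimal sketch of a client fetching it (the issuer URL is a hypothetical placeholder):

```python
# A minimal sketch of authorization server metadata discovery (RFC 8414).
# The issuer URL is a hypothetical placeholder.
import httpx

issuer = "https://auth.example.com"
metadata = httpx.get(f"{issuer}/.well-known/oauth-authorization-server").json()

# The metadata document tells the MCP client everything it needs to start a flow.
print(metadata["authorization_endpoint"])     # where to send the authorization request
print(metadata["token_endpoint"])             # where to trade a code for a token
print(metadata.get("registration_endpoint"))  # where DCR happens, if supported
```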
Dynamic Client Registration
Dynamic Client Registration (DCR) is another protocol extension for OAuth 2.0 that MCP’s OAuth 2.1-based authorization adopts. With DCR, MCP clients automatically register with new authorization servers without requiring a user to be present. Given that AI agents often don’t know what resources they’ll need ahead of time, DCR lets an MCP client register with authorization servers the user didn’t know about when they originally instantiated the agent.
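A minimal sketch of a DCR request per RFC 7591, using the registration_endpoint discovered above (the endpoint URL and all client metadata values are hypothetical):

```python
# A minimal sketch of Dynamic Client Registration (RFC 7591): the MCP client
# POSTs its own metadata and receives a client_id back, no human involved.
# The registration endpoint and metadata values are hypothetical.
import httpx

registration = httpx.post(
    "https://auth.example.com/register",
    json={
        "client_name": "my-mcp-client",
        "redirect_uris": ["http://localhost:8765/callback"],
        "grant_types": ["authorization_code"],
        "response_types": ["code"],
        "token_endpoint_auth_method": "none",  # public client; PKCE protects the exchange
    },
)
registration.raise_for_status()
client_id = registration.json()["client_id"]  # used in the PKCE flow sketched earlier
```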
Beyond OAuth 2.1, how is authorization practically implemented?
OAuth 2.1 handles the problem of getting a valid access token for an MCP client. But authorization doesn’t stop at the token. Once an identity has been established, systems still need to determine what that identity is actually allowed to do.
This is where authorization frameworks come in. In modern systems, most approaches fall into three categories:
- Role-Based Access Control (RBAC), where permissions are grouped into roles like admin, editor, or viewer.
- Relationship-Based Access Control (ReBAC), where access depends on graph-like relationships between entities (e.g. user owns dataset, employee reports to manager).
- Attribute-Based Access Control (ABAC), where permissions might depend on any attributes of the user or resource, such as user identity, device, resource type, or request context.
Developers might combine these frameworks (sometimes referred to as “AnyBAC”) and implement policy-as-code engines such as Oso or Open Policy Agent (OPA) to manage enforcement.
For MCP specifically, OAuth 2.1 securely authorizes clients. Afterward, RBAC, ReBAC, or ABAC schemes define what resources an MCP client can touch, under what conditions, and how the system logs and audits actions. In other words, OAuth decides who gets in, while these frameworks decide what they can do once inside.
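To make that division of labor concrete, here’s a hypothetical sketch of layering RBAC and ABAC checks on top of claims from an already-validated token; every name and claim shape here is invented for illustration:

```python
# A hypothetical sketch of layering RBAC and ABAC checks on top of a validated
# OAuth token. All names and claim shapes are invented for illustration.
from dataclasses import dataclass

ROLE_PERMISSIONS = {  # RBAC: roles bundle permissions
    "admin": {"read", "write", "delete"},
    "viewer": {"read"},
}

@dataclass
class TokenClaims:
    subject: str
    role: str
    device_trusted: bool  # an attribute carried in (or derived from) the token

def authorize(claims: TokenClaims, action: str, resource_sensitive: bool) -> bool:
    # RBAC: does the caller's role grant this action at all?
    if action not in ROLE_PERMISSIONS.get(claims.role, set()):
        return False
    # ABAC: sensitive resources additionally require a trusted device.
    if resource_sensitive and not claims.device_trusted:
        return False
    return True

claims = TokenClaims(subject="agent-42", role="viewer", device_trusted=False)
print(authorize(claims, "read", resource_sensitive=False))  # True
print(authorize(claims, "read", resource_sensitive=True))   # False: untrusted device
```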
What is Oso?
Oso is a policy-as-code framework designed to help developers implement fine-grained authorization directly into their applications. Instead of scattering permission checks across code, Oso centralizes them into policies written in a declarative language called Polar. These policies can capture everything from simple role-based access controls (RBAC) to more complex, attribute- or relationship-based models (ABAC, ReBAC).
In practice, Oso acts as a decision engine. When an MCP client presents a token, Oso evaluates whether that token’s identity can perform a given action on a resource. For example, a Polar policy could state that only the owner of a document may edit it, or that access to sensitive data requires both the correct role and a trusted device.
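As a sketch of that document-owner example, here’s roughly what it looks like with the open-source oso Python library (the v0.x API shown here is assumed from that library’s docs; Oso Cloud’s client differs):

```python
# A sketch of an owner-may-edit policy with the open-source oso library (v0.x).
# The API shown (register_class, load_str, is_allowed) is assumed from its docs.
from dataclasses import dataclass

from oso import Oso

@dataclass
class User:
    name: str

@dataclass
class Document:
    owner: str

oso = Oso()
oso.register_class(User)
oso.register_class(Document)

# Polar policy: only a document's owner may edit it.
oso.load_str('allow(user: User, "edit", doc: Document) if doc.owner = user.name;')

doc = Document(owner="alice")
print(oso.is_allowed(User(name="alice"), "edit", doc))  # True: alice owns the doc
print(oso.is_allowed(User(name="bob"), "edit", doc))    # False: bob does not
```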
If you are curious about Oso’s work, and how we’re automating least privilege for AI agents, learn more by clicking here.
