Why a Standard JWT Access Token Matters

Khalid Abuhakmeh

When OAuth 2.0 was published as RFC 6749 in 2012, it was deliberately silent on the format of access tokens. The spec described them as opaque strings: the client receives one, attaches it to API requests, and never looks inside. That abstraction was intentional, but it created a vacuum.

As the industry adopted JWTs for access tokens (a practice accelerated by the publication of the JWT specification itself as RFC 7519 in 2015), every identity provider filled that vacuum in its own way. Auth0 put scopes in an array. Azure AD used roles claims and application ID URIs. IdentityServer emitted individual scope claims. Okta had its own structure. The token was always a JWT, but beyond that, nothing was consistent.

The consequences were predictable. If you built an API that consumed tokens from a single provider, you wrote validation logic tailored to that provider's JWT structure. If your organization then migrated providers, or if you needed to accept tokens from multiple issuers (a common scenario in B2B integrations and microservice architectures), you had to write custom parsing logic for each one.

Claim names varied. Scope formats clashed. Some providers included an audience claim; others didn't. Some set a typ header; most left it as the default JWT. Every integration was a bespoke affair, and every bespoke affair was a surface for security bugs.

RFC 9068, published in September 2021, ended the guesswork. It defines a concrete JWT profile for OAuth 2.0 access tokens: which claims must be present, how they're formatted, and how validators should process them. It gives us a common language. When your identity provider issues an RFC 9068-compliant access token and your API validates one, both sides agree on the structure: the typ header reads at+jwt, the scope claim is a space-delimited string, the aud claim names the intended API, and so on. The timeline from silence to standard took nearly a decade: OAuth 2.0 in 2012, JWT in 2015, the draft profile in 2019, and the finalized RFC in 2021.

Today, every major identity provider supports it, and there is no longer a good reason to invent your own access token format.

Anatomy of a Compliant Access Token


An RFC 9068-compliant access token is a signed JWT with a specific header and a defined set of required claims. Understanding each piece is essential for both issuing conformant tokens and validating them correctly on the API side.

{
  "typ": "at+jwt",
  "alg": "RS256",
  "kid": "key-id-123"
}

The header is where the single most important field lives: typ: "at+jwt". This two-part media type subtype tells any validator, unambiguously, that this JWT is an OAuth 2.0 access token. It is not an identity token (JWT), not a logout token (logout+jwt), not a DPoP proof (dpop+jwt). It is an access token, full stop.

Before RFC 9068, most JWTs shipped with typ: "JWT" or omitted the field entirely, making it impossible for a validator to distinguish an access token from an identity token based on the header alone.

That ambiguity enabled an entire class of token-confusion attacks, in which an attacker could present an identity token to an API endpoint and have it accepted because the signature was valid and the claims appeared plausible. The at+jwt type header eliminates that attack vector. Your API must check it. If the typ doesn't match, reject the token before inspecting anything else.
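As an illustration of what that check involves, the typ field can be read without any JWT library by Base64Url-decoding the header segment. The class and method names below are invented for this sketch; in ASP.NET Core the same check is performed for you when ValidTypes is set on TokenValidationParameters.

```csharp
using System;
using System.Text;
using System.Text.Json;

public static class AccessTokenTypeCheck
{
    // Decodes the JWT header (the first Base64Url segment) and checks
    // that typ is "at+jwt" before any other validation runs.
    public static bool HasAccessTokenType(string jwt)
    {
        var segments = jwt.Split('.');
        if (segments.Length < 2) return false;

        // Base64Url -> Base64: swap the URL-safe alphabet and restore padding
        var b64 = segments[0].Replace('-', '+').Replace('_', '/');
        b64 = b64.PadRight(b64.Length + (4 - b64.Length % 4) % 4, '=');

        var json = Encoding.UTF8.GetString(Convert.FromBase64String(b64));
        using var doc = JsonDocument.Parse(json);

        // Media type subtypes compare case-insensitively
        return doc.RootElement.TryGetProperty("typ", out var typ)
            && string.Equals(typ.GetString(), "at+jwt", StringComparison.OrdinalIgnoreCase);
    }
}
```

A token whose header decodes to typ: "JWT" (or has no typ at all) fails this check before any signature work happens, which is exactly the ordering the profile intends.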

The alg field specifies the signing algorithm. RS256 (RSA with SHA-256) is the most common choice, though ES256 (ECDSA with P-256) is increasingly preferred for its smaller key and signature sizes. The kid (key ID) field tells the validator which key from the issuer's JSON Web Key Set (JWKS) to use for signature verification. Together, these three fields give the validator everything it needs to determine what the token is, how it was signed, and which key to use to verify it.

Required Claims (Payload)

{
  "iss": "https://identity.example.com",
  "exp": 1735689600,
  "aud": "https://api.example.com",
  "sub": "user-123",
  "client_id": "mobile-app",
  "iat": 1735686000,
  "jti": "unique-token-id-456",
  "scope": "openid profile api:read api:write"
}

The payload carries the claims that define the token's authorization context. RFC 9068 mandates the following:

Claim | Purpose | Notes
iss | Token issuer | Must match the expected authority URL exactly
exp | Expiration time | Unix timestamp; reject tokens past this time
aud | Intended audience | The API identifier(s) this token is valid for
sub | Subject | The user's unique ID, or the client ID for M2M flows
client_id | OAuth client identifier | Always present, regardless of flow
iat | Issued-at time | Unix timestamp; preferred over nbf for freshness
jti | JWT ID | A unique identifier for replay detection
scope | Granted scopes | A single space-delimited string

The iss (issuer) claim identifies the authorization server that produced the token. Your API should compare this value against a known, trusted issuer URL. A mismatch means the token originated from an unexpected source, and it should be rejected immediately.

The exp (expiration) and iat (issued-at) claims bound the token's lifetime. RFC 9068 prefers iat over the older nbf (not-before) claim for indicating when the token was created. Together, iat and exp let your validator check both that the token hasn't expired and that its lifetime is reasonable: if your authorization server issues 5-minute tokens, a token whose exp minus iat spans an hour deserves suspicion even though exp has not yet passed.
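This lifetime sanity check reduces to a pure function over the two timestamps. The helper below is a sketch; its name and the idea of passing your server's known maximum lifetime as a bound are assumptions, not part of the RFC.

```csharp
using System;

public static class TokenLifetime
{
    // Returns true when the token is current (not expired, not issued in
    // the future) AND its total lifetime (exp - iat) is within the bound
    // your authorization server is known to configure.
    public static bool IsReasonable(long iat, long exp, long nowUnixSeconds, TimeSpan maxLifetime)
    {
        if (exp <= nowUnixSeconds) return false;              // already expired
        if (iat > nowUnixSeconds) return false;               // issued in the future
        return (exp - iat) <= (long)maxLifetime.TotalSeconds; // lifetime sanity bound
    }
}
```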

The jti (JWT ID) claim provides a globally unique identifier for the token. Its primary purpose is replay detection: if your API needs to ensure a token is used only once (or to track token usage), the jti provides a value to store and check. Not every API needs replay detection, but the claim's presence is mandatory so that APIs that do need it have a reliable identifier to work with.
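If you do need replay detection, a minimal in-memory sketch looks like the following. This is illustrative only: the class name is invented, and a real deployment behind multiple API instances would need a shared store such as Redis rather than a per-process dictionary.

```csharp
using System;
using System.Collections.Concurrent;

public sealed class JtiReplayCache
{
    private readonly ConcurrentDictionary<string, DateTimeOffset> _seen = new();

    // Returns true the first time a given jti is presented; false when the
    // same jti is replayed. Entries are evicted once the token they belong
    // to has expired, so the cache only needs to span one token lifetime.
    public bool TryRegister(string jti, DateTimeOffset tokenExpiry, DateTimeOffset now)
    {
        foreach (var entry in _seen)
        {
            if (entry.Value <= now)
            {
                _seen.TryRemove(entry.Key, out _);
            }
        }

        return _seen.TryAdd(jti, tokenExpiry);
    }
}
```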

The client_id claim identifies which OAuth client requested the token. This claim is always present (in both user-interactive flows and machine-to-machine flows), and it is distinct from sub. We'll examine the relationship between sub and client_id in the next section, because getting this wrong is one of the most common authorization bugs in API code.

Note that not all identity providers issue JWT access tokens; some use opaque (reference) tokens that require introspection. If your APIs need to handle both formats, you'll need a unified validation strategy that can inspect the token format at runtime and route to the appropriate handler.

The sub Claim: Dual Semantics

The sub claim in RFC 9068 carries different meanings depending on the OAuth flow that produced the token, and this duality was one of the most debated design decisions during the specification's development.

In a user-interactive flow (authorization code, device code, or any flow where a human authenticates), sub contains the unique identifier of the authenticated user. This aligns with how OpenID Connect defines sub in identity tokens, and it's what most developers expect. Your API uses sub to know who is making the request.

In a client credentials flow (the machine-to-machine grant, where no user is involved), sub contains the client identifier. There is no user in the picture, so the token's subject is the application itself. The value of sub will match the value of client_id.

The concern was that overloading a single claim with two different meanings creates ambiguity: API code that reads sub and assumes it is always a user ID will silently misattribute actions when the token was issued via the client credentials flow. An M2M token's sub is a client identifier, not a person, and treating it as a user ID in audit logs, database ownership records, or authorization policies can produce subtle and dangerous bugs.

Critics argued for separate, unambiguous claims. The working group acknowledged this concern but ultimately chose unification for pragmatic reasons: the sub claim was already deeply embedded in the JWT ecosystem through OpenID Connect, and defining a parallel claim would have further fractured implementations. The standard was published with dual semantics, and that decision is now settled.

The practical implication is that your API code must never assume sub is always a user. Defensive coding means always inspecting client_id alongside sub and branching your authorization logic accordingly:

using System.Security.Claims;

public static class TokenIdentity
{
    /// <summary>
    /// Determines token type and extracts the appropriate subject identity.
    /// </summary>
    public static (bool IsM2M, string Subject, string ClientId) GetIdentity(ClaimsPrincipal user)
    {
        var sub = user.FindFirstValue("sub");
        var clientId = user.FindFirstValue("client_id");

        if (string.IsNullOrEmpty(sub) || string.IsNullOrEmpty(clientId))
        {
            throw new InvalidOperationException(
                "Token is missing required claims. Both 'sub' and 'client_id' must be present per RFC 9068.");
        }

        // When sub equals client_id, this is a machine-to-machine token
        // issued via the client credentials grant
        var isM2M = string.Equals(sub, clientId, StringComparison.Ordinal);

        return (isM2M, sub, clientId);
    }
}

// Usage in a minimal API endpoint. IDocumentService is a hypothetical
// application service; ASP.NET Core binds it and the logger as endpoint
// parameters alongside the ClaimsPrincipal.
app.MapGet("/api/documents", (ClaimsPrincipal user,
    ILogger<Program> logger,
    IDocumentService documentService) =>
{
    var (isM2M, subject, clientId) = TokenIdentity.GetIdentity(user);

    if (isM2M)
    {
        // Machine-to-machine: authorize based on client identity and scopes.
        // Do NOT treat 'sub' as a user ID for ownership queries.
        logger.LogInformation("M2M access by client {ClientId}", clientId);
        return documentService.GetByClientPermissions(clientId);
    }

    // User-interactive: sub is the authenticated user's unique identifier
    logger.LogInformation("User access by {UserId} via client {ClientId}", subject, clientId);
    return documentService.GetByOwner(subject);
});

The key defensive principle: never use sub alone for authorization decisions. Always check client_id explicitly.

Log both values. A silent misattribution is far worse than a noisy type check. These misattributions can lead to audit log poisoning, incorrect data ownership assignments and security checks, billing and metering errors, and other operational headaches.

By the time you discover this problem, you may have weeks or months of corrupted data. If your API serves both user-interactive and M2M clients, as most APIs do, build the branch early and make the type check obvious in your code.

The aud Claim: Mandatory Audience Validation

RFC 9068 requires the aud (audience) claim in every access token, and it requires every resource server to validate it. This was another point of contention during the specification's development. Some implementers felt audience validation added complexity without sufficient benefit, but the working group made it mandatory for a straightforward security reason: without audience validation, a token issued for one API can be replayed against a different API protected by the same authorization server.

Consider a scenario where your organization runs both a document API and a billing API, both of which trust the same identity provider. Without audience validation, an access token issued for the document API would also be accepted by the billing API. The signature is valid (same issuer, same keys), the claims look fine, and the billing API happily grants access to an endpoint the user was never authorized to reach.

The aud claim prevents this by naming the intended recipient. The document API's token includes "aud": "https://documents.example.com", and the billing API rejects it because its own identifier doesn't appear in the audience.

When a token targets a single resource server, aud is a simple string:

{
  "aud": "https://api.example.com"
}

When a token targets multiple resource servers (less common, but valid), aud becomes a JSON array:

{
  "aud": ["https://api.example.com", "https://reports.example.com"]
}

Your API must validate that its own identifier appears in the aud value, whether that value is a string or an array. In ASP.NET Core, the AddJwtBearer middleware handles this automatically when you set options.Audience.
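The string-or-array duality is easy to get wrong in hand-rolled validation. Here is a standalone sketch of the comparison over a decoded payload (the helper name is invented for illustration):

```csharp
using System.Linq;
using System.Text.Json;

public static class AudienceCheck
{
    // Returns true when the expected API identifier appears in the aud
    // claim, whether aud is a single string or a JSON array of strings.
    public static bool Matches(string payloadJson, string expectedAudience)
    {
        using var doc = JsonDocument.Parse(payloadJson);
        if (!doc.RootElement.TryGetProperty("aud", out var aud)) return false;

        return aud.ValueKind switch
        {
            JsonValueKind.String => aud.GetString() == expectedAudience,
            JsonValueKind.Array  => aud.EnumerateArray()
                                       .Any(a => a.GetString() == expectedAudience),
            _ => false
        };
    }
}
```

Note that the comparison is an exact string match: a trailing slash or scheme difference between the configured audience and the issued aud value is enough to fail validation.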

The aud claim works hand in hand with RFC 8707 (Resource Indicators for OAuth 2.0), which defines the resource parameter for authorization and token requests. When a client includes resource=https://api.example.com in its token request, the authorization server knows to set the aud claim to that value. This gives clients explicit control over which API a token is scoped to and prevents the authorization server from issuing overly broad tokens valid for every API in the ecosystem.

If you're using Duende IdentityServer, resource indicators are supported natively through API resource configuration. Clients request a specific resource, the server sets the audience, and the receiving API validates it, forming a clean chain of trust from request to validation.

Scope: The Space-Delimited String Problem

RFC 9068 specifies that the scope claim is a single space-delimited string: "openid profile api:read api:write". This is consistent with how OAuth 2.0 itself represents scopes in protocol messages (the scope parameter in authorization requests has always been space-delimited), but it created a practical problem for .NET developers because many identity providers historically emitted scopes differently.

Some providers issued scope as a JSON array: ["openid", "profile", "api:read"]. Others, including older versions of IdentityServer, emitted individual scope claims, one claim per scope value. The .NET claims model maps naturally to both of those formats: User.FindAll("scope") returns multiple claims, and authorization policies can check for specific values with RequireClaim("scope", "api:read").

But when scope arrives as a single space-delimited string, that model breaks. User.FindFirstValue("scope") returns "openid profile api:read api:write" as one monolithic value, and RequireClaim("scope", "api:read") fails because it's comparing against the entire concatenated string, not the individual scope values.

The fix is a claims transformation that splits the space-delimited string into individual claims at the authentication boundary, before your authorization policies ever see them. Register an IClaimsTransformation that runs after the JWT bearer handler authenticates the token, finds any scope claim whose value contains spaces, splits it, and replaces the single claim with multiple individual scope claims:

using System.Security.Claims;
using Microsoft.AspNetCore.Authentication;

public class ScopeClaimsTransformation : IClaimsTransformation
{
    public Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal)
    {
        var identity = principal.Identity as ClaimsIdentity;
        if (identity is null)
            return Task.FromResult(principal);

        var scopeClaims = identity.FindAll("scope").ToList();
        var needsTransformation = scopeClaims.Any(c => c.Value.Contains(' '));

        if (!needsTransformation)
            return Task.FromResult(principal);

        // Remove the original space-delimited scope claims
        foreach (var claim in scopeClaims)
        {
            identity.RemoveClaim(claim);
        }

        // Split and add individual scope claims
        var individualScopes = scopeClaims
            .SelectMany(c => c.Value.Split(' ', StringSplitOptions.RemoveEmptyEntries))
            .Distinct(StringComparer.Ordinal);

        foreach (var scope in individualScopes)
        {
            identity.AddClaim(new Claim("scope", scope, ClaimValueTypes.String));
        }

        return Task.FromResult(principal);
    }
}

// Register in Program.cs
builder.Services.AddTransient<IClaimsTransformation, ScopeClaimsTransformation>();

After the transformation, User.FindAll("scope") returns ["openid", "profile", "api:read", "api:write"] as separate claims, and standard authorization policies work as expected. Your [Authorize(Policy = "...")] attributes and manual claim checks work uniformly regardless of whether the incoming token follows RFC 9068's space-delimited format or the older array/multi-claim format.
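With scopes split into individual claims, policy registration works against single scope values. The fragment below is a sketch: the policy names and endpoint are examples, and AddAuthorizationBuilder assumes .NET 7 or later.

```csharp
// Policies now match individual scope values, regardless of how
// the provider formatted the scope claim on the wire.
builder.Services.AddAuthorizationBuilder()
    .AddPolicy("RequireApiRead", policy => policy.RequireClaim("scope", "api:read"))
    .AddPolicy("RequireApiWrite", policy => policy.RequireClaim("scope", "api:write"));

app.MapGet("/api/documents", () => Results.Ok())
    .RequireAuthorization("RequireApiRead");
```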

Validation in ASP.NET Core 10

Validating an RFC 9068-compliant access token in ASP.NET Core 10 requires configuring the JWT bearer handler with the right settings. Here's the full configuration with commentary on each decision:

builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        // The authority is your identity provider's base URL.
        // The middleware fetches the OpenID Connect discovery document
        // from {Authority}/.well-known/openid-configuration to discover
        // the issuer identifier and the JWKS endpoint for signing keys.
        options.Authority = "https://identity.example.com";

        // The audience value must match the aud claim in incoming tokens.
        // This is how your API identifies itself. Tokens issued for a
        // different API will be rejected even if the signature is valid.
        options.Audience = "https://api.example.com";

        options.TokenValidationParameters = new TokenValidationParameters
        {
            // RFC 9068 compliance: validate the at+jwt type header.
            // This is your first line of defense against token confusion
            // attacks. An identity token, logout token, or DPoP proof
            // will be rejected immediately because its typ won't match.
            ValidTypes = ["at+jwt"],

            // Standard validations: all should be true in production.
            // ValidateIssuer ensures the iss claim matches the Authority.
            // ValidateAudience ensures the aud claim matches your API.
            // ValidateLifetime checks exp (and nbf if present).
            // ValidateIssuerSigningKey verifies the signature against
            // the issuer's published JWKS keys.
            ValidateIssuer = true,
            ValidateAudience = true,
            ValidateLifetime = true,
            ValidateIssuerSigningKey = true,

            // Clock skew compensates for small time differences between
            // the identity provider's clock and your API server's clock.
            // The default is 5 minutes, which is generous. One minute is
            // sufficient for most deployments; zero is ideal if both
            // systems use NTP reliably.
            ClockSkew = TimeSpan.FromMinutes(1),
        };
    });

// Register the scope transformation so space-delimited scopes
// are split into individual claims for authorization policies
builder.Services.AddTransient<IClaimsTransformation, ScopeClaimsTransformation>();

Each setting maps directly to a security requirement. ValidTypes = ["at+jwt"] enforces the RFC 9068 type header. Without this, your API is vulnerable to token confusion attacks in which an identity token or another JWT type is accepted as an access token.

ValidateIssuer and ValidateAudience ensure the token was issued by your trusted authority and intended for your API specifically. ValidateLifetime rejects expired tokens. ValidateIssuerSigningKey confirms the token's cryptographic signature matches a key published in the issuer's JWKS document.

A common mistake is leaving ValidateAudience = false during development and forgetting to re-enable it. This silently disables one of the most important checks in the validation pipeline. Another common mistake is setting ClockSkew to TimeSpan.Zero in environments where the authorization server and the API server drift by even a few seconds; tokens will be intermittently rejected, producing confusing errors under load. One minute of skew tolerance is a reasonable default.

For a deeper discussion of JWT validation security, including algorithm restrictions, key management, and defending against known attack patterns, consult RFC 8725 (JWT Best Current Practices).

Provider Conformance Matrix

RFC 9068 adoption is now widespread among major identity providers, though the degree of conformance and the required configuration vary. The following matrix summarizes the current state:

  • Duende IdentityServer v7: Full conformance out of the box. RFC 9068 is the default token format.
  • Microsoft Entra ID: ⚠️ Uses the application ID URI (e.g., api://client-id) as the aud value rather than a URL-style resource identifier. Ensure your Audience configuration matches. v2.0 endpoints required.
  • Auth0: Conformant when a custom API audience is configured. Without one, tokens use an opaque format.
  • Keycloak: ⚠️ Requires explicit configuration to emit typ: at+jwt; the default type header is Bearer.
  • Okta: Conformant with custom authorization servers. The org authorization server uses an opaque token format.

A few observations worth noting:

  • Entra ID's aud handling is the most common source of confusion for .NET developers: if you configure options.Audience = "https://api.example.com" but Entra ID issues tokens with "aud": "api://your-client-id", validation will fail. Always check your app registration's "Application ID URI" and use that exact value.
  • Keycloak requires the most manual configuration to achieve full conformance; the default token format predates RFC 9068, and you need to explicitly opt in to the standard type header.
  • Auth0 and Okta both require that you define a custom API resource; their default or org-level tokens don't follow the profile.

The overall trend is positive. As of 2026, every provider in this list can produce RFC 9068-compliant tokens with the right configuration, and most do so by default.

Optional Claims: auth_time, amr, and acr

Beyond the required claims, RFC 9068 defines several optional claims that carry additional authentication context. These claims aren't needed for basic access control, but they become essential when your API needs to make authorization decisions based on how or when the user authenticated, not just who they are.

The auth_time claim records the Unix timestamp of when the user's authentication event occurred, the moment they entered their password, scanned their fingerprint, or otherwise proved their identity. This is distinct from iat, which records when the token was issued. A user might authenticate once and then receive multiple tokens over the course of a long session.

If your API needs to enforce that the user has been authenticated recently (for example, before approving a high-value financial transaction), you check auth_time against your freshness threshold. If the authentication is too old, you reject the request and force a re-authentication through the max_age parameter or a step-up authentication flow.
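The freshness gate over auth_time reduces to simple timestamp arithmetic. The helper name and the way the threshold is passed in are illustrative choices, not prescribed by any spec.

```csharp
using System;

public static class AuthenticationFreshness
{
    // Returns true when the authentication event recorded in auth_time
    // happened within maxAge of 'now'. A stale result should trigger
    // re-authentication or step-up rather than a hard failure.
    public static bool IsFresh(long authTimeUnixSeconds, long nowUnixSeconds, TimeSpan maxAge)
    {
        var age = nowUnixSeconds - authTimeUnixSeconds;
        return age >= 0 && age <= (long)maxAge.TotalSeconds;
    }
}
```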

The amr (Authentication Methods References) claim is an array of strings indicating which authentication methods were used. Common values include pwd (password), mfa (multi-factor authentication), otp (one-time password), fpt (fingerprint), and sms (short message service). When your API needs to verify that multi-factor authentication was performed (a common requirement for sensitive operations), you inspect amr for the presence of mfa or a combination of factors:

var amr = User.FindAll("amr").Select(c => c.Value).ToList();
if (!amr.Contains("mfa"))
{
    return Results.Forbid(); // Require MFA for this operation
}

The acr (Authentication Context Class Reference) claim takes this a step further. Rather than listing individual methods, acr names a specific authentication context class, a policy that defines what level of assurance was achieved. For instance, urn:mace:incommon:iap:silver might indicate standard authentication, while urn:mace:incommon:iap:gold might require hardware-backed MFA.

Step-up authentication patterns rely heavily on acr: a user might browse your application with a silver context, but when they attempt to change their password or approve a payment, your API checks the acr claim and, if it's insufficient, triggers an incremental authorization request that demands a gold context. The authorization server then prompts the user for stronger authentication, issues a new token with the elevated acr, and the user retries the request.
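In API code, the gate itself is a plain claim comparison; how the insufficient-context response should be signalled back to the client is defined by RFC 9470 (OAuth 2.0 Step Up Authentication Challenge Protocol). The fragment below is a sketch: the acr value, endpoint, and bare 401 response are illustrative simplifications.

```csharp
app.MapPost("/api/payments", (ClaimsPrincipal user) =>
{
    var acr = user.FindFirstValue("acr");
    if (acr != "urn:mace:incommon:iap:gold")
    {
        // Per RFC 9470, a full implementation would also emit a
        // WWW-Authenticate challenge carrying acr_values so the client
        // can re-run authorization at the required assurance level.
        return Results.Unauthorized();
    }

    return Results.Ok();
});
```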

Custom claims (roles, permissions, tenant identifiers, organization memberships) are also commonly included in access tokens. RFC 9068 doesn't prescribe these, but it doesn't prohibit them either. The standard's required claims establish the baseline; your domain-specific claims build on top of it.

Interop Testing Checklist

Before deploying an API that consumes RFC 9068 access tokens in production, or before switching identity providers, run through this checklist against actual tokens from your authorization server. These checks verify that both sides (the issuer and the validator) agree on the token format. Automated integration tests that decode a real token and assert each of these properties will save you from subtle misconfigurations that only surface under production traffic.

1. Type header is at+jwt. Decode the token header (the first Base64Url-encoded segment) and confirm that the typ field is exactly at+jwt. If it reads JWT, Bearer, or anything else, the token is not RFC 9068-compliant. Check your identity provider's configuration for JWT profile settings.

2. All required claims are present. The payload must contain iss, exp, aud, sub, client_id, iat, and jti. Missing claims indicate a configuration issue on the authorization server. The scope claim should also be present if scopes were requested.

3. Scope is a space-delimited string. If your identity provider emits scope as a JSON array or as multiple claims, it is not conformant with RFC 9068. Verify that the scope claim is a single string with values separated by spaces: "openid profile api:read".

4. Audience matches your API identifier. The aud claim must contain the exact string your API expects. Pay particular attention to trailing slashes, protocol schemes, and the difference between URL-style identifiers and application ID URIs (common with Entra ID).

5. Signature validates against the issuer's JWKS. Fetch the issuer's JWKS endpoint (typically {issuer}/.well-known/openid-configuration points to it) and confirm that the token's kid matches a published key and the signature verifies with that key. If the issuer rotates keys, verify that your API picks up new keys without a restart.

6. Timestamps are reasonable. Confirm that iat is in the past, exp is in the near future (not days or weeks out), and the lifetime (exp minus iat) matches your authorization server's configured token lifetime.

7. jti is unique. If you're implementing replay detection, issue two tokens in quick succession and confirm their jti values differ. Even if you're not implementing replay detection today, verifying jti uniqueness confirms the authorization server is generating them correctly.
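Several of these checks are pure functions over the decoded payload, which makes them easy to automate. As an example, item 2 (claim presence) can be sketched as follows; the helper name is invented:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Text.Json;

public static class RequiredClaims
{
    private static readonly string[] Names =
        ["iss", "exp", "aud", "sub", "client_id", "iat", "jti"];

    // Returns the RFC 9068 required claims absent from the payload;
    // an empty list means the check passes.
    public static List<string> Missing(string payloadJson)
    {
        using var doc = JsonDocument.Parse(payloadJson);
        return Names
            .Where(name => !doc.RootElement.TryGetProperty(name, out _))
            .ToList();
    }
}
```

Running this against a freshly issued token in an integration test catches authorization-server misconfiguration long before it surfaces as a production validation failure.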

Conclusion

RFC 9068 resolved a decade-long gap in the OAuth ecosystem. Before its publication, each identity provider invented its own JWT access token format, and every API that consumed those tokens had to implement custom validation logic tailored to the specific provider. That era is over. The standard defines a concrete, interoperable format: a mandatory at+jwt type header that prevents token confusion attacks, required claims that cover issuer verification, audience restriction, and replay detection, and a consistent scope representation that works across providers.

For .NET developers building APIs in 2026, the path is clear: configure the JWT bearer handler to validate the at+jwt type, set your audience, register the scope transformation, and test against real tokens from your identity provider. That's the baseline. For deeper security hardening (algorithm restrictions, key rotation strategies, and defense against the full catalog of JWT attack patterns), consult RFC 8725 (JWT Best Current Practices).


Thanks for stopping by!

We hope this post helped you on your identity and security journey. If you need a hand with implementation, our docs are always open. For everything else, come hang out with the team and other developers on GitHub.

If you want to get early access to new features and products while collaborating with experts in security and identity standards, join us in our Duende Product Insiders program. And if you prefer your tech content in video form, our YouTube channel is the place to be. Don't forget to like and subscribe!

Questions? Comments? Just want to say hi? Leave a comment below and let's start a conversation.