At Duende, we’re best known as the .NET identity company. While security is our primary expertise, software development is a close second. We recently went through an optimization cycle on our website, duendesoftware.com, which runs on ASP.NET Core, to improve the user experience and help search engines rank our pages higher in the results.
If you also run your organization’s site, or even your Duende IdentityServer instance, in a public environment, here are five things you should consider implementing in your solution to deliver the best possible user experience.
Note: it’s best to take these approaches one at a time and measure the impact of each change on the performance of the target application. In some cases, a given strategy may even be counterproductive for your particular use case.
Optimizing Assets for Websites
The first, and likely easiest, option is to optimize the assets delivered from the server to your user’s browser. What does optimization mean? Well, it can mean a few things depending on the asset itself. Let’s go through the big three asset types you are likely to encounter when building a web application: scripts, images, and fonts.
Optimizing JavaScript
When a browser encounters JavaScript in a script tag, it must first download the file over the network, parse it, and then execute the contained JavaScript. Knowing that high-level process, we can optimize JavaScript in three ways: reduce the physical size of the files, mitigate the impact of network latency, and eliminate unnecessary JavaScript.
For .NET developers, to optimize JavaScript file sizes, you can use tools such as WebOptimizer to minify the contents of the files. Typically referred to as “uglifying”, the process can dramatically reduce the size of your JavaScript files by removing comments, unnecessary whitespace, and renaming internal variables to shortened names.
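If you’re using WebOptimizer, minification can be switched on with a couple of lines in Program.cs. The snippet below is a sketch based on the LigerShark.WebOptimizer.Core package; adjust it to your own pipeline.

```csharp
// Program.cs - sketch; requires the LigerShark.WebOptimizer.Core NuGet package
builder.Services.AddWebOptimizer(pipeline =>
{
    pipeline.MinifyJsFiles();  // minify all .js files served from wwwroot
    pipeline.MinifyCssFiles(); // minify all .css files as well
});

// Register before UseStaticFiles so the minified output is what gets served
app.UseWebOptimizer();
app.UseStaticFiles();
```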
Bundling is still a common practice among developers, but with HTTP/2, most browser vendors no longer recommend bundling assets. The protocol now supports multiplexing, which means grouping assets by their change frequency is preferable, or even avoiding bundling altogether. A simple grouping strategy could involve serving versioned third-party libraries independently, grouping site-wide JavaScript, and keeping page-specific JavaScript as separate files. Keep in mind that the goal is to reduce the number of extra files a page requires to render.
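As a sketch of that grouping strategy, WebOptimizer can also build bundles along change-frequency lines. The file names below are illustrative, not from our actual site.

```csharp
builder.Services.AddWebOptimizer(pipeline =>
{
    // Site-wide scripts that change together get one bundle...
    pipeline.AddJavaScriptBundle("/js/site.js", "js/nav.js", "js/theme.js");

    // ...while versioned third-party libraries and page-specific
    // scripts are left as separate files.
});
```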
You can avoid network requests entirely by choosing to inline page-specific behavior using tag helpers provided by WebOptimizer.
<script src="/any/file.js" inline></script>
Using the inline tag helper takes the contents of the file and embeds them directly into the current page.
Finally, optimize your JavaScript delivery to only send the necessary script to your pages. It’s very easy and convenient to include unnecessary third-party JavaScript libraries on every page, but it can be detrimental to the overall user experience. Consider breaking up files into smaller deliverable chunks, or inline code on only the pages that require that functionality. The right way of doing this depends on your particular use case.
Optimizing Images
This one is relatively straightforward, but it’s an easy mistake for many developers to make: sending high-resolution, lossless images to the client only for them to be rendered in low-visibility scenarios. For example, do you need that 3000px by 3000px corporate logo if you only render it at 100px by 100px? Probably not. Use an image optimizer to scale images appropriately, and use modern compression formats such as WebP and AVIF to keep image quality high and image size low.
And speaking of image sizes, it’s crucial to add width and height attributes to img elements; without them, the rendering engine of most modern browsers must recalculate the entire page layout once the client loads these images from the network.
<img src="/img/duende-logo.webp" width="100" height="50" />
Finally, any images that are not immediately visible should set the loading attribute to lazy, so they are only loaded once they enter the client’s viewport.
<img src="/img/important-diagram.webp" loading="lazy" width="500" height="50" />
Optimizing Fonts
This one is highly dependent on your use case, but if you find yourself downloading many variants of a particular font family, you can end up sending multiple files to a client. Here you have two great options to improve a site’s performance:
- Use a font already on the user’s machine.
- Switch to variable fonts.
The first option is the simplest, as it removes a network call and a font dependency from your website. It’s incredible how much faster a site can feel when fonts don’t pop in as they load, causing that dreaded “flash of unstyled text” many sites experience.
The use of variable fonts is also a great option, since variable fonts can include all possible permutations of a font family in a single file, including regular, italic, bold, and other styles. If you’re including variable fonts in your CSS, you can add them using the following CSS rule.
@font-face {
    font-family: 'Roboto';
    src: url('../fonts/Roboto Variable Font.woff2') format('woff2-variations'),
         url('../fonts/Roboto-VariableFont_wdth,wght.ttf') format('truetype-variations');
    font-weight: 300 700;
    font-style: normal;
    font-display: swap;
}
The capabilities of each font will depend on the original font file, but you can find many great options for variable fonts. To learn more, check out Mozilla’s excellent documentation on the subject.
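Once the @font-face rule above is registered, any weight within the declared 300–700 range resolves from that single file. A short, illustrative usage:

```css
/* All of these weights come from the one variable font file */
body   { font-family: 'Roboto', sans-serif; font-weight: 400; }
strong { font-weight: 650; } /* intermediate weight, no extra download */
```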
Preloading Assets
As the server streams HTML to the client, the browser takes note of which assets to download, but it can only do so when encountering them within the incoming response. This wait-and-see approach can dramatically slow down the page’s rendering time. Luckily, we can give the browser hints to preload upcoming content before it encounters it on the page. With a few extra link elements in the head tag, we can dramatically improve our page’s performance.
<head>
    <link rel="preload" as="script" href="critical.js">
</head>
The link will instruct the browser to preload the asset, informing the client that it will need it later in the page. Note that the href is the key, so it must match the asset exactly, including any versioning querystring variables.
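Preloading is especially effective for fonts, which the browser otherwise discovers late, only after parsing your CSS. One caveat: font preloads must include the crossorigin attribute, even for same-origin files, or the preloaded copy won’t be reused. The path below is illustrative.

```html
<link rel="preload" as="font" type="font/woff2" crossorigin
      href="/fonts/Roboto-VariableFont.woff2">
```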
You can also use preconnect and dns-prefetch hints to establish connections and resolve DNS early for third-party origins such as Google Tag Manager.
<!-- Resource Hints for Performance -->
<link rel="preconnect" href="https://www.googletagmanager.com">
<link rel="preconnect" href="https://www.google-analytics.com">
<link rel="preconnect" href="https://code.jquery.com">
<link rel="dns-prefetch" href="//www.googletagmanager.com">
<link rel="dns-prefetch" href="//www.google-analytics.com">
<link rel="dns-prefetch" href="//code.jquery.com">
<!-- Deferred resource hints for non-critical third-party scripts -->
<link rel="dns-prefetch" href="//www.clarity.ms">
<link rel="dns-prefetch" href="//js.hs-scripts.com">
You can read more on Google’s blog about this optimization and how it can improve your site’s overall performance.
SVGs and Spritesheets
The following technique can dramatically improve your site’s overall performance, especially if your pages use icons. Products like FontAwesome and Bootstrap Icons enable developers to incorporate visual iconography into otherwise dull pages using font files, albeit at the cost of larger downloads. Instead of using font files, you can take advantage of SVG spritesheets to get a similar experience.
An SVG spritesheet typically includes all the images you need in a single SVG file, with each asset having an id attribute.
<?xml version="1.0" encoding="UTF-8"?>
<svg width="250" height="250" viewBox="0 0 250 250" xmlns="http://www.w3.org/2000/svg">
    <path id="Dog" fill="currentColor" fill-rule="evenodd" stroke="none" d="..."/>
    <path id="Star" fill="currentColor" fill-rule="evenodd" stroke="none" d="..."/>
    <path id="Doughnut" fill="currentColor" fill-rule="evenodd" stroke="none" d="..."/>
    <path id="Bolt" fill="currentColor" fill-rule="evenodd" stroke="none" d="..."/>
</svg>
By embedding this SVG directly into the page, you can then reference the elements by identifier.
<svg viewBox="0 0 250 250" height="2em" width="2em" class="star-hotpink">
    <use href="#Star"></use>
</svg>
<svg viewBox="0 0 250 250" height="2em" width="2em" style="color:orange">
    <use href="#Doughnut"></use>
</svg>
<svg viewBox="294 0 294 294" height="4em" width="4em" style="color:deeppink">
    <use href="#Doughnut"></use>
</svg>
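Because each path uses fill="currentColor", recoloring an icon takes only a small CSS rule. The class below matches the star-hotpink class in the first example above:

```css
/* The SVG fill follows currentColor, so setting color recolors the icon */
.star-hotpink {
    color: hotpink;
}
```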
We used a custom tag helper to embed this asset directly on the page to improve network transfer.
<svg viewBox="0 0 0 0" style="display:none">
    <embed href="/svg/icons.svg" />
</svg>
And the tag helper code in C# is relatively simple.
using Microsoft.AspNetCore.Mvc.Rendering;
using Microsoft.AspNetCore.Mvc.Routing;
using Microsoft.AspNetCore.Mvc.ViewFeatures;
using Microsoft.AspNetCore.Razor.TagHelpers;

namespace SeoOptimizations.TagHelpers;

[HtmlTargetElement("embed", ParentTag = "svg", TagStructure = TagStructure.NormalOrSelfClosing)]
public class SvgEmbedTagHelper(IWebHostEnvironment env) : TagHelper
{
    [HtmlAttributeName("href")] public string Href { get; set; } = string.Empty;

    [ViewContext] public ViewContext ViewContext { get; set; } = null!;

    public override async Task ProcessAsync(TagHelperContext context, TagHelperOutput output)
    {
        var urlHelper = new UrlHelper(ViewContext);
        var path = urlHelper.Content(Href).TrimStart('/');
        var filePath = Path.Combine(env.WebRootPath, path);

        // Replace the embed element entirely with the file contents
        output.TagName = "";
        output.Content.Clear();

        if (File.Exists(filePath))
        {
            var svgContent = await File.ReadAllTextAsync(filePath);
            output.Content.SetHtmlContent(svgContent);
        }
        else
        {
            output.Content.SetHtmlContent($"<!-- File not found: {Href} -->");
        }
    }
}
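One step worth calling out: Razor only processes custom tag helpers that have been registered, typically in _ViewImports.cshtml. Here we assume the assembly name matches the SeoOptimizations namespace above.

```
@addTagHelper *, SeoOptimizations
```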
What was once a few megabytes of font files is now a small embedded SVG asset in the kilobytes, with near-instant rendering and configurability options on the client. SVGs can also use CSS variables, allowing you to make icon colors and variations configurable with a few targeted CSS classes. Next time you reach for font icons, consider replacing them with an SVG spritesheet.
Response Compression and Caching
When working with an ASP.NET Core application, you can optimize the bits going over the network and reduce request frequency with client-side caching. These two tools go hand-in-hand: response compression and response caching. While you can use each technique independently, they provide the most value when used together.
Let’s start with response compression. The two most commonly used compression algorithms are GZIP and Brotli: the former enjoys the widest client support, while the latter offers the best compression ratios to date. It is up to the client to tell the server, via the Accept-Encoding header, which algorithms it supports.
Let’s configure your ASP.NET Core app to use response compression in a few lines of code. Add the following code to your Program.cs file.
using System.IO.Compression;
using Microsoft.AspNetCore.ResponseCompression;

builder.Services.AddResponseCompression(options =>
{
    // Enable compression for HTTPS requests (disabled by default)
    options.EnableForHttps = true;
    options.Providers.Add<BrotliCompressionProvider>();
    options.Providers.Add<GzipCompressionProvider>();
});

// Configure compression levels if needed
builder.Services.Configure<BrotliCompressionProviderOptions>(options =>
{
    options.Level = CompressionLevel.Fastest;
});
builder.Services.Configure<GzipCompressionProviderOptions>(options =>
{
    options.Level = CompressionLevel.Fastest;
});
Then, you’ll want to add the ResponseCompressionMiddleware to your ASP.NET Core request pipeline.
// Enable response compression - place it at the beginning of the pipeline
app.UseResponseCompression();
app.UseHttpsRedirection();
app.UseRouting();
Now that we have response compression, let’s add response caching headers for our assets.
using Microsoft.Net.Http.Headers;

app.UseStaticFiles(new StaticFileOptions
{
    OnPrepareResponse = ctx =>
    {
        if (ctx.File.PhysicalPath is { } path)
        {
            path = path.ToLowerInvariant();

            // Different cache durations based on file type
            if (path.EndsWith(".css") || path.EndsWith(".js") || path.EndsWith(".woff2"))
            {
                // Long cache for CSS, JS, and fonts - typically versioned resources
                ctx.Context.Response.Headers[HeaderNames.CacheControl] =
                    $"public,max-age={60 * 60 * 24 * 365}"; // 1 year
            }
            else if (path.EndsWith(".jpg") || path.EndsWith(".png") || path.EndsWith(".gif"))
            {
                // Medium cache for images
                ctx.Context.Response.Headers[HeaderNames.CacheControl] =
                    $"public,max-age={60 * 60 * 24 * 30}"; // 30 days
            }
            else
            {
                // Shorter cache for other resources
                ctx.Context.Response.Headers[HeaderNames.CacheControl] =
                    $"public,max-age={60 * 60 * 24 * 7}"; // 7 days
            }
        }
    }
});
I’ve included some logic based on the asset type, but you could also generally set the cache headers to a typical number of days. Adding Cache-Control headers enables the client to decide how long to store assets on a user’s device and when to retrieve newer versions.
An additional pro tip is to add version numbers to asset file names to force newer assets to download when changes occur.
<!-- annoying -->
<img src="/img/hello.png" alt="saying hello" />
<!-- good practice -->
<img src="/img/hello_v1.png" alt="saying hello" />
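If you’d rather not rename files by hand, ASP.NET Core’s built-in tag helpers can append a version hash to the URL for you in Razor views via the asp-append-version attribute:

```html
<img src="/img/hello.png" alt="saying hello" asp-append-version="true" />
```

The helper appends a querystring based on the file’s contents, so the URL changes automatically whenever the file does.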
Your ASP.NET Core application should now be an asset-serving powerhouse.
Output Caching and Static Pages
I’d be remiss if I didn’t mention the following tricks, but they do come with some substantial trade-offs. For optimal performance, consider output caching or pre-rendering your HTML and storing the resulting assets under wwwroot. Let’s start with the more straightforward of the two approaches, output caching.
First, let’s create a cache policy we can use across our application. In the Program.cs file, add the following snippet.
builder.Services.AddOutputCache(options =>
{
    options.AddPolicy("Content", b =>
        b.Cache().Expire(
            builder.Environment.IsDevelopment()
                ? TimeSpan.FromSeconds(1)
                : TimeSpan.FromDays(1)
        )
    );
});
In production, that means the server will render each page once, hold the result in server memory, and serve it for the next day.
Then we can decorate our MVC actions or Razor Pages with the OutputCacheAttribute.
using Microsoft.AspNetCore.Mvc.RazorPages;
using Microsoft.AspNetCore.OutputCaching;

namespace SeoOptimizations.Pages;

[OutputCache(PolicyName = "Content")]
public class IndexModel(ILogger<IndexModel> logger) : PageModel
{
    public void OnGet()
    {
    }
}
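Note that the policy only takes effect once the output caching middleware is in the pipeline. In Program.cs, after routing:

```csharp
app.UseRouting();
app.UseOutputCache(); // must come after UseRouting
app.MapRazorPages();
```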
Another, more extreme, option to eliminate potential performance issues entirely is to pre-render pages and place them in wwwroot at the direct path of the page. You might need to reconfigure your ASP.NET Core pipeline so that it attempts to serve any index.html files first before hitting an endpoint.
// will always serve /wwwroot/**/index.html files first
app.UseDefaultFiles();
app.UseStaticFiles();
app.UseRouting();
You will need to figure out how to prebuild your pages, but it could be as simple as scraping your existing site. Again, this is an extreme measure, but for those looking to reduce load on heavily visited pages, it can be a great option.
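As a sketch of the scraping approach, a recursive wget against your running site can produce the static tree. The URL and output directory below are placeholders for your own setup.

```shell
# Mirror the rendered pages (plus their assets) into wwwroot
wget --mirror --page-requisites --no-host-directories \
     --directory-prefix=wwwroot https://localhost:5001/
```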
Conclusion
Web application optimization is a fun activity: you can typically build a list of tasks and gradually work your way toward a better-functioning application. Luckily, browser tooling makes it easy to see what you’re doing suboptimally and which steps to take to improve your performance scores. As you saw in this post, even low-effort changes can lead to significant improvements in your ASP.NET Core applications, including the one hosting your Duende IdentityServer instance.
If you have any other performance tricks you’d like to share or have any questions regarding this post or Duende products, please let us know.