GETA SEO Sitemaps – SitemapIndex Generation

This excellent module is used on a large number of Optimizely sites; I think I have used it on just about every site I have delivered. Today, however, I discovered a feature that I didn’t know existed.

Perhaps I am late to the party, but I had a requirement for a client who was having problems with their sitemap. The catalog contained well over 50,000 products, which is above the limit for a single sitemap file, so we needed to generate multiple sitemap files.

Whilst I could reference these files in the robots.txt file, I really wanted to generate a single sitemapindex.xml file and just reference this.

I fired up dotPeek to have a look around and work out the best way of implementing this requirement myself. I had various ideas until I stumbled upon the ‘GetaSitemapIndexController’. It turns out the functionality already exists.

Configure your sitemaps

In the example below I have created multiple sitemaps, and for the commerce ones I have also specified the node id, which represents the category to be included in that sitemap.

Example sitemap configuration

Generated index file

The sitemap index file will automatically be returned in response to the request for sitemapindex.xml.

<sitemapindex xmlns="">
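A populated index file follows the standard sitemaps.org schema. An illustrative example (the hostname and file names below are placeholders, not the module’s actual output):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://www.example.com/sitemap-pages.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://www.example.com/sitemap-commerce.xml</loc>
  </sitemap>
</sitemapindex>
```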


Whilst this may not be a common requirement for all sites, it is really useful for larger eCommerce sites. Hopefully, someone finds this post useful.

Creating a cross platform package – Part 2



In the previous post, I covered the steps required to migrate your project to the new format. In this post, I am going to move to the next stage and cover how you adapt the solution to target multiple frameworks.

Project Changes

The first step is to modify the project and specify which frameworks you want to target. This is a simple change: just modify the <TargetFramework> element to be <TargetFrameworks> and then specify the frameworks you wish to target.
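For example (the exact target framework monikers will depend on the frameworks you support):

```xml
<PropertyGroup>
  <TargetFrameworks>net461;net5.0</TargetFrameworks>
</PropertyGroup>
```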


After the project has been modified you will also need to update the package references, ensuring they target the correct framework. This is straightforward: simply add a condition to the parent <ItemGroup>.

<ItemGroup Condition="'$(TargetFramework)' == 'net5.0'">
    <PackageReference Include="EPiServer.CMS.UI.Core" Version="[12.0.3,13)" />
    <PackageReference Include="EPiServer.Framework.AspNetCore" Version="[12.0.3,13)" />
    <PackageReference Include="EPiServer.Framework" Version="[12.0.3,13)" />
    <PackageReference Include="Microsoft.AspNetCore.Http" Version="2.0" />
    <PackageReference Include="Microsoft.AspNetCore.Http.Abstractions" Version="2.0" />
    <PackageReference Include="Microsoft.Extensions.DependencyInjection" Version="5.0" />
</ItemGroup>
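A matching group for the .NET Framework target can sit alongside it; the packages and version ranges below are illustrative (taken from the CMS 11 references used elsewhere in this series):

```xml
<ItemGroup Condition="'$(TargetFramework)' == 'net461'">
  <PackageReference Include="EPiServer.Framework" Version="[11.1.0,12)" />
  <PackageReference Include="EPiServer.Framework.AspNet" Version="[11.1.0,12)" />
  <PackageReference Include="EPiServer.CMS.UI.Core" Version="[11.1.0,12)" />
</ItemGroup>
```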

There may also be other sections of your project file that require these condition clauses.

Code Changes

After changing the project to target multiple frameworks you will get compilation errors. You will need to fix these by creating different implementations of your code and wrapping each implementation in a preprocessor directive to indicate which framework the code targets.

#if NET461
 // Code specific to .NET Framework 4.6.1
#elif NET5_0
 // Code specific to .NET 5.0
#else
 // Code for anything else
#endif

You may just need to alter a couple of lines within a class, or in some cases you will need to take a completely different approach. A good example is middleware replacing a .NET Framework HttpModule.
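As a sketch of that kind of split (the class and header names below are illustrative, not from any specific module), the same response-header logic might be delivered as an HttpModule for .NET Framework and as middleware for .NET 5:

```csharp
#if NET461
using System.Web;

// .NET Framework: delivered as an IHttpModule
public class ExampleHeaderModule : IHttpModule
{
    public void Init(HttpApplication context) =>
        context.PreSendRequestHeaders += (s, e) =>
            ((HttpApplication)s).Response.Headers["X-Example"] = "value";

    public void Dispose() { }
}
#elif NET5_0
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// .NET 5: the same behaviour delivered as middleware
public class ExampleHeaderMiddleware
{
    private readonly RequestDelegate _next;

    public ExampleHeaderMiddleware(RequestDelegate next) => _next = next;

    public Task InvokeAsync(HttpContext context)
    {
        context.Response.Headers["X-Example"] = "value";
        return _next(context);
    }
}
#endif
```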

Wrapping it all up

Everyone’s journey whilst converting their module will differ. The type of module, and whether it has a UI etc., will determine the complexity.

Whilst you are modifying the code base I would strongly recommend:

  1. Keep to the ‘DRY’ principle and refactor your code when necessary so that you are not repeating sections of code.
  2. If you have an interface that uses WebForms then it is probably better to replace this with an interface that works for all the different frameworks rather than trying to maintain two different interfaces.

I hope this post helps you migrate your project.

Creating a cross platform package – Part 1

With Optimizely’s transition to .NET 5 last year, developers of add-on packages will need to follow suit.

The complexity of delivering a package that supports both frameworks will vary depending on the type of package you are trying to migrate. For example, if you have delivered an admin module based on WebForms you will need to rewrite it so that it is accessed via the main navigation. In that case, it is probably best to use the rewritten interface in the .NET Framework version as well. In short, you will need to really consider how you refactor your module to support both environments.

This multi-part blog post will take you through the process, with part 1 focusing on converting your existing project to use the new project format and part 2 focusing on how to modify the code and project to support multiple targets.

Migrate to the new project format

The easiest way to do this is probably to create a new project and then bring your code across.

Set the correct framework version


When the project is created it will target either net5.0 or net6.0. This needs to be changed to match the framework version of the original project, e.g. net471.
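In the project file that means (assuming the original project targeted .NET Framework 4.7.1):

```xml
<PropertyGroup>
  <TargetFramework>net471</TargetFramework>
</PropertyGroup>
```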


Move nuget package references

The NuGet package references are no longer managed in the ‘packages.config’ file; they are now part of the project file.

It is straightforward to migrate the references across: ‘packages‘ becomes ‘ItemGroup‘ and ‘package‘ becomes ‘PackageReference‘.
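So a reference that previously lived in packages.config (the version and targetFramework values below are illustrative):

```xml
<packages>
  <package id="EPiServer.Framework" version="11.1.0" targetFramework="net471" />
</packages>
```

becomes a PackageReference entry in the project file.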

<ItemGroup>
    <PackageReference Include="EPiServer.Framework" Version="[11.1.0,12)" />
    <PackageReference Include="EPiServer.Framework.AspNet" Version="[11.1.0,12)" />
    <PackageReference Include="EPiServer.CMS.UI.Core" Version="[11.1.0,12)" />
</ItemGroup>

Remove nuspec file

Again, this information is now included in the project file. For the most part, transitioning to the new format is relatively straightforward, with similar approaches, but some areas (such as adding files created during the build) have to be done differently.


This file contains the information about the NuGet package.

<?xml version="1.0" encoding="utf-8"?>
<package xmlns="">
    <!-- Required elements -->
    <!-- Optional elements -->
    <!-- ... -->
    <!-- Optional 'files' node -->
</package>

changes to

<Project Sdk="Microsoft.NET.Sdk.Razor">
    <!-- Required elements -->
    <!-- Optional elements -->
</Project>

Content Files

You may need to include additional files in, or remove files from, the NuGet package. This was handled in the nuspec file with the <files> and <contentFiles> nodes.

    <file src="bin\Debug\*.dll" target="lib" exclude="*.txt" />

     <!-- Include everything in the scripts folder except exe files -->
     <files include="cs/net45/scripts/*" exclude="**/*.exe"  
            buildAction="None" copyToOutput="true" />

changes to

  <Content Remove="src\**" />
  <Content Remove="node_modules\**" />
  <Content Remove="*.json" /> 

  <Content Include="deploy\**" Exclude="src\**\*" />

NOTE: If you want to include files that are created during the build process (e.g. the output of a separate front-end build) then you need to take a different approach.

You will need to create a targets file and reference it in your project. The targets file should have the same name as the built project.

<Content Include="build\net461\<project-name>.targets" PackagePath="build\net461\<project-name>.targets" />


<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="" ToolsVersion="4.0">
    <ItemGroup>
        <SourceScripts Include="$(MSBuildThisFileDirectory)..\..\contentFiles\any\any\modules\_protected\**\*" />
    </ItemGroup>

    <Target Name="CopyFiles" BeforeTargets="Build">
        <!-- Copy the files into the consuming project -->
    </Target>
</Project>

Additional project settings


When set to true, this will automatically create the NuGet package when the project is built.


This is required when the project includes Razor files.


Can be used to set the location of the package sources; these can be either external or from the local file system.
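Assuming the standard MSBuild/NuGet properties are the ones being described here (GeneratePackageOnBuild, AddRazorSupportForMvc and RestoreSources), a combined PropertyGroup might look like the following; the values and paths are illustrative:

```xml
<PropertyGroup>
  <!-- Create the NuGet package on every build -->
  <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
  <!-- Required when the project includes Razor files -->
  <AddRazorSupportForMvc>true</AddRazorSupportForMvc>
  <!-- Package sources: external feeds or local folders -->
  <RestoreSources>$(RestoreSources);https://api.nuget.org/v3/index.json;C:\local-packages</RestoreSources>
</PropertyGroup>
```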

Build and Test

Build the project and resolve any issues you encounter; these should be minor.

Once the package is generated you should test to ensure that it contains the correct content.

Wrapping things up

At this point, you should have a solution that builds your project and creates a NuGet package, but still targets a single framework.

In the next part, I will cover how to convert to target multiple frameworks.

Optimizely Data Platform Visitor Groups

The Optimizely Data Platform (ODP) builds a picture of a customer, their interactions, and their behavior in comparison to other customers on a site.

This module exposes these insights in the form of visitor groups, which can then be used to personalise content.


There are currently five different visitor groups available. These are accessed via the ‘Data Platform’ group.

Real-Time Segments

Real-Time segments are new and are different from the ‘Calculated’ segments that are currently available on the platform.

Real-Time segments are based on the last 30 days of data, whereas ‘Calculated’ segments are based on all the stored customer data and are recalculated at regular intervals; the latter are more suited to reporting and journey orchestration.


Note: You need to contact Optimizely to get Real-Time Segments enabled on your instance and there is currently no interface to create them.

Note 2: ‘Calculated’ Segments are not available via this visitor group criterion. 

Engagement Rank

This metric allows you to build personalisation based on how engaged the customer/visitor is with your site/brand.  This is biased toward more recent visits rather than historical visits.


This metric is calculated every 24 hrs.

Order Likelihood

As the name suggests, this criterion returns the likelihood that the customer will place an order.  

The possible values are:

  • Unlikely
  • Likely
  • Very Likely
  • Extremely Likely

This metric is calculated every 24 hrs.

Winback Zone

Returns the ‘Winback Zone’ for the current customer. This can be used to identify when a customer is altering their normal interaction patterns with the site; for example, when they are disengaging.

The options are:

  • Churned Customers
  • Winback Customers
  • Engaged Customers

This metric is calculated every 24hrs.


This criterion can be used to build personalisation around 3 different customer order metrics.

  • Total Revenue
  • Order Count
  • Average Order Revenue


This metric is calculated every 24hrs.


Install the package directly from the Optimizely NuGet repository.

dotnet add package UNRVLD.ODP.VisitorGroups
Install-Package UNRVLD.ODP.VisitorGroups

Configuration (.NET 5.0)


// Adds the registration for visitor groups

appsettings.json (all settings are optional, apart from the PrivateApiKey):

   "EPiServer": {
      //Other config
      "OdpVisitorGroupOptions": {
         "OdpCookieName": "vuid",
         "CacheTimeoutSeconds": 10,
         "EndPoint": "",
         "PrivateApiKey": "key-lives-here"
      }
   }

Configuration (.Net Framework)

web.config (all settings are optional, apart from the PrivateApiKey):

    <add key="episerver:setoption:UNRVLD.ODP.OdpVisitorGroupOptions.OdpCookieName, UNRVLD.ODP.VisitorGroups" value="vuid" />
    <add key="episerver:setoption:UNRVLD.ODP.OdpVisitorGroupOptions.CacheTimeoutSeconds, UNRVLD.ODP.VisitorGroups" value="1" />
    <add key="episerver:setoption:UNRVLD.ODP.OdpVisitorGroupOptions.EndPoint, UNRVLD.ODP.VisitorGroups" value="" />
    <add key="episerver:setoption:UNRVLD.ODP.OdpVisitorGroupOptions.PrivateApiKey, UNRVLD.ODP.VisitorGroups" value="key-lives-here" />


I cannot take all the credit for this module; it was co-developed with David Knipe. Thanks for all the help.

Jhoose Security – Updated to support Episerver 11

I have updated the Jhoose Security module to support any Episerver 11 site; the only dependency is .NET Framework 4.7.1.


Install the package directly from the Optimizely NuGet repository. This will install the admin interface along with the HTTP module to add the CSP header to the response.


dotnet add package Jhoose.Security.Admin
Install-Package Jhoose.Security.Admin


The installation process will add the following nodes to the web.config file within your solution.

	<sectionGroup name="JhooseSecurity" type="Jhoose.Security.Configuration.JhooseSecurityOptionsConfigurationSectionGroup, Jhoose.Security">
		<section name="Headers" type="Jhoose.Security.Configuration.HeadersSection, Jhoose.Security" />
		<section name="Options" type="Jhoose.Security.Configuration.OptionsSection, Jhoose.Security" />
	</sectionGroup>

Register the module with the .Net pipeline

	<modules runAllManagedModulesForAllRequests="true">
		<add name="JhooseSecurityModule" type="Jhoose.Security.HttpModules.JhooseSecurityModule, Jhoose.Security" />
	</modules>

Configuration options for the module

	<Options httpsRedirect="true">
		<Exclusions>
			<add path="/episerver" />
		</Exclusions>
		<StrictTransportSecurityHeader enabled="true" maxAge="31536000" />
		<XFrameOptionsHeader enabled="true" mode="Deny|SameOrigin|AllowFrom" domain="" />
		<XContentTypeOptionsHeader enabled="true" />
		<XPermittedCrossDomainPoliciesHeader enabled="true" mode="None|MasterOnly|ByContentType|All" />
		<ReferrerPolicyHeader enabled="true" mode="NoReferrer|NoReferrerWhenDownGrade|Origin|OriginWhenCrossOrigin|SameOrigin|StrictOrigin|StrictOriginWhenCrossOrigin|UnsafeUrl" />
		<CrossOriginEmbedderPolicyHeader enabled="true" mode="UnSafeNone|RequireCorp" />
		<CrossOriginOpenerPolicyHeader enabled="true" mode="UnSafeNone|SameOriginAllowPopups|SameOrigin" />
		<CrossOriginResourcePolicyHeader enabled="true" mode="SameSite|SameOrigin|CrossOrigin" />
	</Options>

Exclusions: Any request which starts with a path specified in this property will not include the CSP header. 

httpsRedirect: This attribute controls whether all requests should be upgraded to HTTPS.

Nonce HTML helper

It is possible to get a nonce added to your inline <script> and <style> tags.

@using Jhoose.Security.Core.HtmlHelpers;
<script @Html.AddNonce() src="/assets/js/jquery.min.js"></script>

Response Headers

The response headers can be controlled within the web.config

Server Header and X-Powered-By Header

These aren’t removed by the module, for two reasons:

  1. When hosting within Optimizely DXP, the CDN will obfuscate the server value anyway.
  2. The header cannot be removed programmatically.
IIS 10
<!-- web.config -->
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
	<system.webServer>
		<security>
			<requestFiltering removeServerHeader="true" />
		</security>
		<httpProtocol>
			<customHeaders>
				<clear />
				<remove name="X-Powered-By" />
			</customHeaders>
		</httpProtocol>
	</system.webServer>
</configuration>

Jhoose Security – Update to include recommended security headers.

I have updated the module to automatically output the OWASP recommended security headers.

Example response headers

These headers are automatically added to the response but can be configured as required, or even disabled.

Code Configuration

        services.AddJhooseSecurity(_configuration, (securityOptions) => {
            // define the XFrame Options mode
            securityOptions.XFrameOptions.Mode = XFrameOptionsEnum.SameOrigin;
            // disable HSTS
            securityOptions.StrictTransportSecurity.Enabled = false;
        });

Configuration via appSettings

"JhooseSecurity": {
      "ExclusionPaths": [],
      "HttpsRedirection": true,
      "StrictTransportSecurity": {
        "MaxAge": 31536000,
        "IncludeSubDomains": true
      },
      "XFrameOptions": {
        "Enabled": false,
        "Mode": 0,
        "Domain": ""
      },
      "XPermittedCrossDomainPolicies": {
        "Mode": 0
      },
      "ReferrerPolicy": {
        "Mode": 0
      },
      "CrossOriginEmbedderPolicy": {
        "Mode": 1
      },
      "CrossOriginOpenerPolicy": {
        "Mode": 2
      },
      "CrossOriginResourcePolicy": {
        "Mode": 1
      }
}

Managing the server header

The security module doesn’t remove the ‘Server’ header. This may seem strange, but the approach differs depending on how you are hosting your site; I have included some examples below.

Another consideration: if you are hosting your solution with Optimizely DXP then the CDN will automatically remove the header.


return Host.CreateDefaultBuilder(args)
    .ConfigureWebHostDefaults(webBuilder =>
    {
        webBuilder.ConfigureKestrel(o => o.AddServerHeader = false);
    });


<?xml version="1.0" encoding="UTF-8"?>
<configuration>
	<system.webServer>
		<security>
			<requestFiltering removeServerHeader="true" />
		</security>
	</system.webServer>
</configuration>


dotnet add package Jhoose.Security.Admin  --version

Introducing Jhoose Security – A module to manage your Content Security Policy

It has always been difficult to manage the CSP on a website. This new module for Optimizely aims to make the process easier, giving control back to advanced editors.


  • Interface to manage policies.
  • Global ‘report only’ mode, or specify for each policy.
  • Add ‘nonce’ to inline script or style tags.
  • Ability to specify paths that are excluded from outputting the policy header.


Once the module is installed you will see a new ‘Security’ menu item within the top menu.


This screen gives you access to the global settings of the module, allowing the module to be enabled/disabled or switched into ‘Report Only’ mode.

It is also possible to specify an endpoint for a reporting service.

Module Settings

View Policies

All security policies are listed, with a summary of the policy configuration. A user is then able to click on a policy to view the policy in greater detail or amend it as required.

List of all policies

Edit Policy

This screen allows an individual policy to be managed by the user; changes are saved when the ‘OK’ button is pressed.

When changes are made it is recommended that they are tested in ‘Report Only’ mode to ensure that nothing is adversely impacted by the new configuration.

Edit individual policy


Install the package directly from the Optimizely NuGet repository. This will install the admin interface along with the middleware to add the CSP header to the response.


dotnet add package Jhoose.Security.Admin



services.AddJhooseSecurity(IConfiguration configuration, Action<SecurityOptions> options = null);

The Action<SecurityOptions> parameter is optional; if it is not specified then the defaults will be used.

  "JhooseSecurity": {
    "ExclusionPaths": []
  }

ExclusionPaths: Any request which starts with a path specified in this property will not include the CSP header.


Nonce Tag Helper

It is possible to get a nonce added to your inline <script> and <style> tags.


@addTagHelper *, Jhoose.Security.Core

<script nonce src="/assets/js/jquery.min.js"></script>

Managing the ‘Content Security Policy’ of your site


The ‘Content Security Policy’ response header is used to enhance a website’s security. It allows you to control how resources are loaded, whitelisting trusted domains.

Any security audit will highlight this as a key recommendation.

Content-Security-Policy: default-src 'self' ; script-src; img-src; connect-src;
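In practice each directive carries its own source list; a more complete (purely illustrative) policy might be:

```
Content-Security-Policy: default-src 'self'; script-src 'self' https://cdn.example.com; img-src 'self' data:; connect-src 'self' https://api.example.com
```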



Content Security Policy is an extra layer of security that protects against certain types of attack vectors (XSS, Click Jacking), anything you can do to protect the end-user is beneficial.

It is common practice these days for developers to draw in 3rd party packages to deliver a feature and these packages will also draw in other packages. This means that malicious code can get drawn into your solution without your knowledge.



It is possible to be very granular when configuring the security policy. This gives you a high level of control of what can be loaded and where data can be sent.

Although the Content Security Policy can be added as a meta tag, it is typically delivered in a response header. Additionally, it is possible to enable specific inline scripts by adding a nonce value.
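As an illustration (the nonce value below is a placeholder; in practice it must be a fresh random value generated per response), a policy and a matching inline script look like this:

```html
<!-- Response header: Content-Security-Policy: script-src 'self' 'nonce-rAnd0m123' -->
<script nonce="rAnd0m123">
  console.log('allowed because the nonce matches the policy');
</script>
```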

Testing your policy

Changes to the policy can inadvertently disable features on your site: YouTube videos can fail to load, analytics stops sending data, JavaScript components can fail. To help mitigate this you should either make any changes in ‘report only’ mode and monitor the output, or configure the policy to report issues. This means that any issues are either reported within the browser console or sent to a 3rd party reporting service.


You have deployed your policy; it’s been tested and signed off, but then things start to fail. Unfortunately, things do change: a dependency can update the endpoints it uses, or, more typically, a new tool is added via Google Tag Manager.

Managing the content security policy can be challenging. A common approach is to create a URL Rewrite outbound rule, which means that when the policy needs to change you have to update your source code and redeploy your site, which is not very efficient.
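For reference, such an outbound rule in web.config typically looks something like this (the rule name and policy value are illustrative):

```xml
<rewrite>
  <outboundRules>
    <rule name="AddContentSecurityPolicy">
      <match serverVariable="RESPONSE_Content-Security-Policy" pattern=".*" />
      <action type="Rewrite" value="default-src 'self'" />
    </rule>
  </outboundRules>
</rewrite>
```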

Is there an easier way?

Read my next post where I will introduce the new Episerver / Optimizely module that simplifies the management of the Content Security Policy.


Clone images from your production website using an ImageResizer Plugin


My requirement arose because I had updated the development environment to use a recent copy of the production database, and for various reasons it wasn’t feasible to get a copy of the images.

It is assumed that you are already using ImageResizer, and the EPiServerBlobReaderPlugin in your site.


As my solution used ImageResizer, I wanted to see whether I could create a plugin that would allow me to download an asset if it was missing. It turns out that there is an ImageMissing event that you can attach to, and this was perfect for my needs.

        public IPlugin Install(Config config)
        {
            config.Pipeline.ImageMissing += Pipeline_ImageMissing;
            return this;
        }


This event allowed me to detect when an image is missing, and to download it and store it in the BlobStorage.

        private void Pipeline_ImageMissing(IHttpModule sender, HttpContext context, IUrlEventArgs e)
        {
            var productionFile = $"{hostUrl}{e.VirtualPath}";
            var blobImage = GetBlobFile(e.VirtualPath, e.QueryString);

            using (var c = new HttpClient())
            {
                var fileStream = c.GetStreamAsync(productionFile).Result;

                // Store the downloaded stream in blob storage
                blobImage.Blob.Write(fileStream);
            }
        }


        private EPiServerBlobFile GetBlobFile(string virtualPath, NameValueCollection queryString)
        {
            var blobFile = new EPiServerBlobFile(virtualPath, queryString);

            return blobFile;
        }

This worked well until I changed the configuration of the solution to store the assets within an Azure Storage container.

It turns out that Episerver/Optimizely didn’t detect that the image was missing; I just saw a lot of 404 errors when attempting to get the asset from Azure. To handle this, I had to develop another plugin that correctly checks the Azure storage and then triggers the ImageMissing event.

        protected virtual CloudBlobContainer GetContainer()
        {
            return CloudStorageAccount.Parse(this.connectionString)
                .CreateCloudBlobClient()
                .GetContainerReference(this.containerName); // container name comes from configuration
        }

        public bool FileExists(string virtualPath, NameValueCollection queryString)
        {
            bool fileExists;

            try
            {
                var blobFile = new EPiServerBlobFile(virtualPath, queryString);

                if (blobFile.Blob is AzureBlob)
                {
                    var cloudBlobContainer = this.GetContainer();

                    fileExists = cloudBlobContainer.GetBlobReference(blobFile.Blob.ID.PathAndQuery).Exists();
                }
                else
                {
                    fileExists = blobFile.BlobExists;
                }
            }
            catch
            {
                fileExists = false;
            }

            return fileExists;
        }

The code above gets a CloudBlobContainer and uses this to check whether the file exists; if not, the ImageMissing event is fired to trigger downloading the asset and storing it in the Azure Storage blob.


      <add name="EPiServerAzureBlobReaderPlugin" />
      <add name="PatchImagePlugin" azureMode="true|false" hostUrl="https://source.of.images"/>

You will need to replace the existing plugins with the new ones you have created. You should also use the full namespace rather than just the class name.

  • azureMode (true|false) = set to true when you are writing the assets to Azure Blob storage.
  • hostUrl = this is the hostname (including protocol) of the site you want to download the images from.


You can access the full example code from

I see this as a useful option to have when developing, especially when transferring the assets between environments is challenging.

I hope that the example proves useful, even if it just demonstrates how to write a simple plugin for ImageResizer.