Azure AI Content Safety – Text Moderation on a String property in Optimizely CMS

In this article, I demonstrate how the text detection feature of the Azure AI Content Safety service can be used to moderate text on a string property as it is being published on an Optimizely content type.

The Text Detection API, which is part of the Azure AI Content Safety service, scans text for sexual content, violence, hate, and self-harm, and returns a severity level for each category. More information about the severity levels can be found in Microsoft's documentation.

One important restriction of the Text Detection API is that the submitted text cannot exceed 1000 characters. If you want to analyse a longer text, you can divide it into smaller parts based on punctuation or spaces and send them separately.
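The splitting step can be sketched as follows. `SplitText` is a hypothetical helper, not part of the package, that breaks text into chunks of at most 1000 characters, preferring to break at the last space before the limit so words are not cut in half.

```csharp
using System;
using System.Collections.Generic;

public static class TextChunker
{
    // Splits text into chunks of at most maxLength characters,
    // breaking at the last space before the limit where possible.
    public static List<string> SplitText(string text, int maxLength = 1000)
    {
        var chunks = new List<string>();
        int start = 0;
        while (start < text.Length)
        {
            int length = Math.Min(maxLength, text.Length - start);
            if (start + length < text.Length)
            {
                // Look for a space inside the current window to break on.
                int lastSpace = text.LastIndexOf(' ', start + length - 1, length);
                if (lastSpace > start)
                    length = lastSpace - start;
            }
            chunks.Add(text.Substring(start, length).Trim());
            start += length;
        }
        return chunks;
    }
}
```

Each chunk can then be sent to the API as a separate request.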

Text moderation on a string property can be enabled by installing the NuGet package “Patel.AzureAIContentSafety.Optimizely”, which is available from both the Optimizely NuGet feed and nuget.org.

After installing the NuGet package and completing the initial configuration/setup steps, you need to add a boolean property with the [TextAnalysisAllowed] attribute to the Start Page type in Optimizely to activate this functionality.

The next step is to create severity level integer properties, using the [SeverityLevel] attribute, on the Start Page type in Optimizely for each harm category. These properties determine what level of harmful content is allowed within the CMS.
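The Start Page setup from the two steps above might look like the sketch below. The [TextAnalysisAllowed] and [SeverityLevel] attribute names come from the package; the property names, display names, and GUID are hypothetical, and how the package maps each severity property to its harm category is defined in the package documentation.

```csharp
using System.ComponentModel.DataAnnotations;
using EPiServer.Core;
using EPiServer.DataAnnotations;

// Hypothetical GUID for illustration only.
[ContentType(GUID = "c8f3f1a2-5b6d-4e7f-9a0b-1c2d3e4f5a6b")]
public class StartPage : PageData
{
    // Master switch: when true, text analysis runs on publish.
    [TextAnalysisAllowed]
    [Display(Name = "Allow text analysis")]
    public virtual bool TextAnalysisAllowed { get; set; }

    // Maximum tolerated severity per harm category.
    [SeverityLevel]
    [Display(Name = "Hate severity limit")]
    public virtual int HateSeverityLevel { get; set; }

    [SeverityLevel]
    [Display(Name = "Sexual severity limit")]
    public virtual int SexualSeverityLevel { get; set; }

    // ...similar properties for the SelfHarm and Violence categories.
}
```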

After finishing the previous step, create a string property with the [TextAnalysis] attribute. This property can be added to any CMS page or block type.
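On a page type, this could look like the following sketch; the page type, property name, and GUID are hypothetical, while the [TextAnalysis] attribute is the one supplied by the package.

```csharp
using System.ComponentModel.DataAnnotations;
using EPiServer.Core;
using EPiServer.DataAnnotations;

// Hypothetical GUID for illustration only.
[ContentType(GUID = "3d2a1b0c-9e8f-4a5b-8c7d-6e5f4a3b2c1d")]
public class ArticlePage : PageData
{
    // [TextAnalysis] marks the property whose text is sent
    // to the Text Detection API when the content is published.
    [TextAnalysis]
    [Display(Name = "Main body")]
    public virtual string MainBody { get; set; }
}
```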

Once you have completed the previous steps, the last thing to do is to fill in the text property (the one decorated with the [TextAnalysis] attribute) with some content and publish it.

When the content is published, the package sends the property's text to the Text Detection API for moderation.
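The package's internals are not reproduced here, but the underlying call can be sketched with Microsoft's Azure.AI.ContentSafety .NET SDK, which returns per-category severities like those in the console output below. The endpoint and key are placeholders for your own Content Safety resource values.

```csharp
using System;
using Azure;
using Azure.AI.ContentSafety;

class Program
{
    static void Main()
    {
        // Placeholder credentials: substitute your resource endpoint and key.
        var client = new ContentSafetyClient(
            new Uri("https://<resource>.cognitiveservices.azure.com/"),
            new AzureKeyCredential("<key>"));

        // Analyse one piece of text (respecting the character limit per request).
        Response<AnalyzeTextResult> response =
            client.AnalyzeText(new AnalyzeTextOptions("Text from the CMS property"));

        // Each harm category comes back with its own severity level.
        foreach (TextCategoriesAnalysis category in response.Value.CategoriesAnalysis)
        {
            Console.WriteLine($"{category.Category} severity: {category.Severity}");
        }
    }
}
```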

Response from the API via the Console

Azure AI Content Safety – Text Analysis complete
Hate severity: 0
SelfHarm severity: 0
Sexual severity: 4
Violence severity: 0

The Text Detection API returns severity levels for the submitted content. In this case, it found a severity level of 4 for sexual content. If a detected level is higher than the corresponding limit defined with the [SeverityLevel] attribute, publishing is blocked and an error message is shown in the editing interface.

If the level detected is lower than the defined limit, the content will be published and available in the CMS.
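The publish-or-block decision described above can be sketched as a simple comparison of each detected severity against its configured limit; `SeverityGate` and its dictionary shapes are hypothetical, not the package's actual implementation.

```csharp
using System;
using System.Collections.Generic;

public static class SeverityGate
{
    // Returns true when every detected severity is within its configured
    // limit, i.e. the content is allowed to publish.
    public static bool CanPublish(
        IReadOnlyDictionary<string, int> detected,
        IReadOnlyDictionary<string, int> limits)
    {
        foreach (var (category, severity) in detected)
        {
            // A severity above the configured limit blocks publishing,
            // e.g. Sexual severity 4 against a limit of 2.
            if (limits.TryGetValue(category, out var limit) && severity > limit)
                return false;
        }
        return true;
    }
}
```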