Azure AI Content Safety – Text Moderation on multiple string properties in Optimizely CMS

In this article, I will show how the text moderation feature offered by the Azure AI Content Safety service can be used to moderate multiple string properties on an Optimizely content type as they are being published.

This is similar to my earlier article, Text Moderation on a String property in Optimizely CMS. The difference here is that I apply the same concept to multiple string properties.

Text moderation on multiple string properties requires the NuGet package “Patel.AzureAIContentSafety.Optimizely”, which is available from the Optimizely NuGet feed or from the public NuGet feed.
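
For reference, the package can be installed with the standard .NET CLI command:

dotnet add package Patel.AzureAIContentSafety.Optimizely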

After installing the NuGet package and completing the initial configuration/setup steps, you need to add a boolean property decorated with the [TextAnalysisAllowed] attribute to the Start Page type in Optimizely to activate this functionality. You can find more details about this here.
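
As a rough sketch, the Start Page type could look something like the following. The [TextAnalysisAllowed] attribute comes from the package; the property name, the GUID, and the attribute's namespace are my own illustrative assumptions.

using System.ComponentModel.DataAnnotations;
using EPiServer.Core;
using EPiServer.DataAbstraction;
using EPiServer.DataAnnotations;
using Patel.AzureAIContentSafety.Optimizely; // assumed namespace for the package's attributes

[ContentType(DisplayName = "Start Page", GUID = "9f1d2b8a-6c3e-4f7a-8d21-5b0c9e4a7f36")] // illustrative GUID
public class StartPage : PageData
{
    // Boolean switch that activates Azure AI Content Safety text analysis.
    [TextAnalysisAllowed]
    [Display(Name = "Text Analysis Allowed", GroupName = SystemTabNames.Content)]
    public virtual bool TextAnalysisAllowed { get; set; }
}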

The next step is to create integer severity-level properties on the Start Page type in Optimizely, one for each harm category, using the [SeverityLevel] attribute. These properties determine what level of harmful content is allowed within the CMS. For more details about the [SeverityLevel] attribute, click here. The image below shows this process.
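
Continuing the sketch above, the thresholds might be modelled as shown below. How the [SeverityLevel] attribute identifies its harm category is an assumption on my part (a string argument here); check the package documentation for the exact signature.

// One integer threshold per harm category, added to the Start Page type.
// Content whose detected severity exceeds a threshold will be rejected.
[SeverityLevel("Hate")]
public virtual int HateSeverityLevel { get; set; }

[SeverityLevel("SelfHarm")]
public virtual int SelfHarmSeverityLevel { get; set; }

[SeverityLevel("Sexual")]
public virtual int SexualSeverityLevel { get; set; }

[SeverityLevel("Violence")]
public virtual int ViolenceSeverityLevel { get; set; }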

After you finish the previous step, create multiple string properties and decorate each one with the [TextAnalysis] attribute. These properties can live on any CMS page or block type. To learn more, you can visit this link.
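
As an example, a page type could opt several string properties in to moderation like this; the property names are illustrative, while the [TextAnalysis] attribute is provided by the package.

// Any page or block type can carry multiple moderated string properties.
[TextAnalysis]
public virtual string Heading { get; set; }

[TextAnalysis]
public virtual string Introduction { get; set; }

[TextAnalysis]
public virtual string BodyText { get; set; }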

Once the earlier steps are complete, the final step is to fill the string properties marked with the [TextAnalysis] attribute with the content you want to publish.

Below is an example of me carrying out these steps so that the Azure AI Content Safety service can moderate the content. The results are shown in the console output that follows.

Response from the API via the Console

Azure AI Content Safety – Text Analysis complete
Hate severity: 2
SelfHarm severity: 0
Sexual severity: 0
Violence severity: 4

Azure AI Content Safety – Text Analysis complete
Hate severity: 4
SelfHarm severity: 0
Sexual severity: 0
Violence severity: 0

Azure AI Content Safety – Text Analysis complete
Hate severity: 0
SelfHarm severity: 0
Sexual severity: 4
Violence severity: 0

Azure AI Content Safety – Text Analysis complete
Hate severity: 0
SelfHarm severity: 4
Sexual severity: 0
Violence severity: 0

The text analysis API returns severity levels for the content being published. In this case, it detected one count of violent content, one count of hate content, one count of sexual content, and one count of self-harm content. If a detected level is higher than the integer value defined with the [SeverityLevel] attribute, an error message is shown, as seen in the screenshot.
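
Conceptually, the check behind that error message can be pictured as follows. This is only a sketch of the idea, not the package's actual implementation; it calls the Azure.AI.ContentSafety SDK's AnalyzeText method and compares each returned severity against a configured threshold.

using System.Collections.Generic;
using Azure;
using Azure.AI.ContentSafety;

public static class TextModerationCheck
{
    // Returns true when every detected severity is at or below its threshold.
    public static bool IsPublishable(
        ContentSafetyClient client,
        string text,
        IReadOnlyDictionary<TextCategory, int> thresholds)
    {
        Response<AnalyzeTextResult> response =
            client.AnalyzeText(new AnalyzeTextOptions(text));

        foreach (TextCategoriesAnalysis analysis in response.Value.CategoriesAnalysis)
        {
            if (thresholds.TryGetValue(analysis.Category, out int limit) &&
                (analysis.Severity ?? 0) > limit)
            {
                // Severity above the configured [SeverityLevel] threshold:
                // surface an error and block publishing.
                return false;
            }
        }

        return true;
    }
}

With a threshold of 3 for every category, for example, the first result above (Violence severity 4) would be rejected, while a result with all severities at 2 or below would pass.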

If the detected levels of hate, self-harm, sexual, and violent content are all lower than the defined limits, the content will be published and available in the CMS.