Azure AI Content Safety – Image Moderation on Images in Optimizely CMS

In this article, I demonstrate how the Image moderation feature of the Azure AI Content Safety service can be used to moderate images as they are uploaded to Optimizely CMS.

Image moderation, a key feature of the Azure AI Content Safety service, scans images for sexual content, violence, hate, and self-harm, and returns a severity level for each category. More information about the severity levels can be found here.

The Azure AI Content Safety service uses Microsoft's Florence foundation model for computer vision. The model has been trained on billions of text-image pairs, allowing it to adapt to a range of computer vision tasks, and it lets Azure AI Content Safety flag inappropriate images in near real time. There are a few submission limits: the maximum image size is 4 MB, image dimensions must be between 50 x 50 and 2,048 x 2,048 pixels, and supported formats are JPEG, PNG, GIF, BMP, TIFF, and WEBP.
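These limits can be checked before an image is ever sent to the service. The helper below is a minimal sketch of such a pre-flight check; the class and method names are illustrative and not part of any SDK.

public static class ImageSubmissionRules
{
    // Documented service limits: 4 MB maximum size, 50 x 50 to 2,048 x 2,048 pixels.
    public const long MaxBytes = 4 * 1024 * 1024;
    public const int MinDimension = 50;
    public const int MaxDimension = 2048;

    private static readonly string[] AllowedExtensions =
        { ".jpg", ".jpeg", ".png", ".gif", ".bmp", ".tif", ".tiff", ".webp" };

    public static bool IsSubmittable(string fileName, long sizeInBytes, int width, int height)
    {
        var extension = System.IO.Path.GetExtension(fileName).ToLowerInvariant();

        return System.Array.IndexOf(AllowedExtensions, extension) >= 0
               && sizeInBytes <= MaxBytes
               && width >= MinDimension && width <= MaxDimension
               && height >= MinDimension && height <= MaxDimension;
    }
}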

Image moderation can be enabled by installing the “Patel.AzureAIContentSafety.Optimizely” NuGet package, which is available from both the Optimizely NuGet feed and the public NuGet feed.
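For example, assuming the feed you want to use is already registered as a package source, the package can be installed from the command line:

dotnet add package Patel.AzureAIContentSafety.Optimizely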

After the NuGet package has been installed and the initial configuration/setup steps have been completed, add a boolean property decorated with the [ImageAnalysisAllowed] attribute to the Start Page type in Optimizely to activate this functionality. You can find more details about this here.
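A minimal sketch of what that might look like on an existing Start Page type is shown below; the [ImageAnalysisAllowed] attribute comes from the package, while the property name and display text are simply illustrative choices.

using System.ComponentModel.DataAnnotations;
using EPiServer.Core;

public class StartPage : PageData
{
    // ...existing Start Page properties...

    // Turns image moderation on or off for uploads in this site.
    [ImageAnalysisAllowed]
    [Display(Name = "Allow image analysis")]
    public virtual bool ImageAnalysisAllowed { get; set; }
}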

The next step is to add an integer severity level property, decorated with the [SeverityLevel] attribute, to the Start Page type for each harm category. These properties define the maximum level of harmful content allowed within the CMS. For more details about the [SeverityLevel] attribute, click here. The image below shows this process.
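As a rough code-level sketch of that step, the four properties might look like this; the property names are illustrative, and whether the [SeverityLevel] attribute takes any parameters should be checked against the package documentation.

using System.ComponentModel.DataAnnotations;
using EPiServer.Core;

public class StartPage : PageData
{
    // ...ImageAnalysisAllowed property from the previous step...

    // Maximum accepted severity per harm category; anything above is rejected.
    [SeverityLevel]
    [Display(Name = "Hate severity level")]
    public virtual int HateSeverityLevel { get; set; }

    [SeverityLevel]
    [Display(Name = "Self-harm severity level")]
    public virtual int SelfHarmSeverityLevel { get; set; }

    [SeverityLevel]
    [Display(Name = "Sexual severity level")]
    public virtual int SexualSeverityLevel { get; set; }

    [SeverityLevel]
    [Display(Name = "Violence severity level")]
    public virtual int ViolenceSeverityLevel { get; set; }
}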

After you add the required properties, you can moderate images being uploaded in Optimizely CMS. The screenshot below shows the code used for this.
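The package handles this wiring for you, so the screenshot is the authoritative reference; purely as an illustration of the moving parts, the sketch below hooks Optimizely's content events and calls the Azure AI Content Safety .NET SDK (Azure.AI.ContentSafety) directly. The endpoint, key, and hard-coded threshold are placeholders, and this is not the package's actual implementation.

using System;
using Azure;
using Azure.AI.ContentSafety;
using EPiServer.Core;
using EPiServer.Framework;
using EPiServer.Framework.Initialization;
using EPiServer.ServiceLocation;

[InitializableModule]
[ModuleDependency(typeof(EPiServer.Web.InitializationModule))]
public class ImageModerationModule : IInitializableModule
{
    public void Initialize(InitializationEngine context)
    {
        var events = ServiceLocator.Current.GetInstance<IContentEvents>();
        events.CreatingContent += OnCreatingContent;
    }

    public void Uninitialize(InitializationEngine context)
    {
        var events = ServiceLocator.Current.GetInstance<IContentEvents>();
        events.CreatingContent -= OnCreatingContent;
    }

    private void OnCreatingContent(object sender, ContentEventArgs e)
    {
        // Only images are analysed; other content types pass straight through.
        if (e.Content is not ImageData image || image.BinaryData == null)
            return;

        var client = new ContentSafetyClient(
            new Uri("https://<your-resource>.cognitiveservices.azure.com/"), // placeholder endpoint
            new AzureKeyCredential("<your-key>"));                            // placeholder key

        // Read the uploaded blob and send it to the Image moderation API.
        using var stream = image.BinaryData.OpenRead();
        var request = new AnalyzeImageOptions(new ContentSafetyImageData(BinaryData.FromStream(stream)));
        var response = client.AnalyzeImage(request);

        Console.WriteLine("Azure AI Content Safety – Image Analysis complete");

        // Compare each category's severity with the configured limit
        // (hard-coded here; the package reads the [SeverityLevel] properties instead).
        const int allowedSeverity = 4;
        foreach (var category in response.Value.CategoriesAnalysis)
        {
            Console.WriteLine($"{category.Category} severity: {category.Severity}");

            if (category.Severity > allowedSeverity)
            {
                // Cancelling the event blocks the upload and surfaces an error in the CMS UI.
                e.CancelAction = true;
                e.CancelReason = $"Image rejected: {category.Category} content is above the configured severity level.";
            }
        }
    }
}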

Response from the API via the Console

Azure AI Content Safety – Image Analysis complete
Hate severity: 0
SelfHarm severity: 0
Sexual severity: 0
Violence severity: 6

After an image was uploaded, the Image moderation feature returned a severity level of 6 for violent content. If the returned level is higher than the integer value defined on the Start Page, an error message is displayed and the upload is blocked. Check the video below for an example.

If the level detected is lower than the defined limit, the image will be uploaded and available in the CMS.