Thursday, November 30, 2023

Azure AI Content Safety Service

Microsoft introduced a new AI service called “Azure AI Content Safety Service” at the Build conference in May 2023.  This service inspects content for questionable material in any of the following categories:

  1. Violent content
  2. Hateful content
  3. Sexual content
  4. Self-harm content


The Content Safety service is intended to protect customers’ web sites and social media apps from receiving questionable comments or images.

Content may be text, images, audio, video, or a combination of these (i.e., multimodal).


Users can apply filters to tweak the severity levels.  For example, an outdoor equipment provider may allow images of knives or guns to be uploaded to its social media, but a school or church may want to block those images.  Filters are set to Medium by default and can be made stricter.  Making the filters less restrictive, or turning them off, requires a written application to Microsoft to ensure the customer is trusted and low risk.
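
Applications can then act on the returned severity scores with their own policy.  As a rough sketch (the MaxAllowedSeverity constant and IsAllowed helper below are my own illustration, not part of the SDK; severity starts at 0 for safe content and increases with harmfulness):

// Hypothetical moderation policy: accept content only at or below a chosen severity.
const int MaxAllowedSeverity = 2;

static bool IsAllowed(int? severity) => (severity ?? 0) <= MaxAllowedSeverity;

// Example usage: block an image upload when its Violence score exceeds the policy.
// bool accepted = IsAllowed(response.Value.ViolenceResult.Severity);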


The AI Content Safety Service is built into Azure OpenAI and most Microsoft AI products.  It’s used internally at Microsoft as well as in public products like Bing Chat.  Its purpose is to uphold Microsoft’s responsible AI principles.


Code Example

  1. The Azure AI Content Safety Service is accessible through the Azure portal.  After logging into the portal, simply create a “Content Safety” resource in an existing resource group or a new one.


  2. Once the resource is created, the keys and endpoint will be accessible in the Resource Management pane (they can also be retrieved with the Azure CLI, as shown below).

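For those who prefer the command line, the same values can be fetched with the Azure CLI.  A minimal sketch, assuming a resource named my-content-safety in resource group my-rg (both placeholder names):

az cognitiveservices account show --name my-content-safety --resource-group my-rg --query properties.endpoint
az cognitiveservices account keys list --name my-content-safety --resource-group my-rg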

  3. To access the Content Safety API, I created a console application with the following NuGet packages: Azure.AI.ContentSafety and Microsoft.Extensions.Configuration.Json (added with the commands below).

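The packages can be added from a terminal in the project directory.  A minimal sketch, assuming the Content Safety SDK was still in preview at the time of writing (hence the --prerelease flag):

dotnet add package Azure.AI.ContentSafety --prerelease
dotnet add package Microsoft.Extensions.Configuration.Json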

  4. The code below makes the API call, using the key and endpoint from step 2.


using Azure;
using Azure.AI.ContentSafety;
using Microsoft.Extensions.Configuration;
using System.Reflection;

namespace AIContenSafety.ConsoleApp
{
    internal class Program
    {
        static void Main(string[] args)
        {
            // Read the Content Safety endpoint and key from appsettings.json.
            var config = new ConfigurationBuilder().AddJsonFile("appsettings.json").Build();
            string endpoint = config["AppSettings:endpoint"];
            string key = config["AppSettings:key"];

            // Load the test image that ships with the solution.
            string datapath = Path.Combine(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location), "Images", "TestImage1.jpg");

            ImageData image = new ImageData();
            image.Content = BinaryData.FromBytes(File.ReadAllBytes(datapath));

            var request = new AnalyzeImageOptions(image);

            Response<AnalyzeImageResult> response;
            try
            {
                // Create the client and submit the image for analysis.
                ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
                response = client.AnalyzeImage(request);
            }
            catch (RequestFailedException ex)
            {
                Console.WriteLine("Analyze image failed.\nStatus code: {0}, Error code: {1}, Error message: {2}", ex.Status, ex.ErrorCode, ex.Message);
                throw;
            }

            // Report the severity detected for each harm category.
            Console.WriteLine("Hate severity: {0}", response.Value.HateResult.Severity);
            Console.WriteLine("SelfHarm severity: {0}", response.Value.SelfHarmResult.Severity);
            Console.WriteLine("Sexual severity: {0}", response.Value.SexualResult.Severity);
            Console.WriteLine("Violence severity: {0}", response.Value.ViolenceResult.Severity);
        }
    }
}
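
The endpoint and key above come from an appsettings.json file in the project (set to copy to the output directory so AddJsonFile can find it at runtime).  A minimal sketch matching the AppSettings keys the code reads, with placeholder values:

{
  "AppSettings": {
    "endpoint": "https://<your-resource-name>.cognitiveservices.azure.com/",
    "key": "<your-content-safety-key>"
  }
}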


Testing the Application

Provided in the solution is a folder containing a test image called TestImage1.jpg (shown below).  Naturally, this image should be classified as violent content.


Running the Application

Executing the console application loads the test image specified above.  The results are written to the console window, showing each category of content that violates the safety guidelines along with its severity level.


Additional Resources

Get started in Studio: https://aka.ms/contentsafetystudio

Visit the product page to learn more: https://aka.ms/contentsafety

Read the eBook: https://aka.ms/contentsafetyebook