
How to Scan for NSFW Images in Python
9/10/2025 - Brian O'Neill


Failing to properly flag NSFW content can create real problems for enterprises. Inadvertently exposing employees to illicit video or imagery on a corporate network can constitute regulatory violations or lead to independent lawsuits, and failing to screen and remove NSFW content from client-facing applications can cause indelible reputation damage among customers.

It’s critically important to scan for NSFW content before allowing image and video content to be distributed or displayed in any employee- or user-accessible area.

Thankfully, there’s an API for that.

Cloudmersive’s AI-powered Image and Video APIs both represent developer-friendly NSFW classification solutions. Like all Cloudmersive APIs, both can be used with a single API key subscription, and both can be easily implemented into production applications with complementary, ready-to-run code examples.

In this article, we’ll take a closer look at how NSFW image detection works with Cloudmersive’s Image API (we’ll cover the Video API solution in another article), and we’ll walk through an end-to-end test case where we install the Image API client library in Google Colab and scan a few example images. We’ll share screenshots from our Colab test case along with code examples you can copy directly from this page.

Understanding Cloudmersive NSFW Classification

Cloudmersive’s NSFW Image API investigates images for NSFW content using Deep Learning AI. It assigns each image a score on a scale from 0.0 to 1.0, where lower scores indicate “safe to view” content and higher scores indicate “unsafe to view” content.

Cloudmersive generally considers images with scores below 0.2 safe to view and images with scores above 0.8 unsafe to view. Scores between 0.3 and 0.7 are typically assigned to images with some degree of raciness which are not necessarily unsafe. Discretion, of course, is up to the individual or enterprise implementing this solution; the scoring system allows for a wide range of conditional actions to be taken based on NSFW scan results.

Each score is accompanied by a classification_outcome string which describes the score in natural language. For example, an image that received a score greater than 0.8 would be accompanied by a 'classification_outcome': 'UnsafeContent_HighProbability' string, an image that received a score less than 0.2 would be accompanied by a 'classification_outcome': 'SafeContent_HighProbability' string, and an image that received a score around 0.7 would be accompanied by a 'classification_outcome': 'RacyContent' string.
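To make the scoring system concrete, below is a minimal sketch of a threshold-based gating policy. The thresholds are illustrative, and the score and classification_outcome attribute names are our assumption based on the API’s response model (we’ll see the full model in the API Console below):

# Minimal sketch of a threshold-based gating policy around the NSFW score.
# Thresholds are illustrative; the 'score' and 'classification_outcome'
# attribute names are assumed from the API's response model.
def gate_image(api_response, block_at=0.8, review_at=0.3):
    if api_response.score >= block_at:
        return 'block'    # UnsafeContent_HighProbability range
    if api_response.score >= review_at:
        return 'review'   # RacyContent range; route to a human reviewer
    return 'allow'        # SafeContent range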

Finding Python Code Examples for the NSFW API

Python code examples for calling the NSFW API can be found on the Image API page of the Cloudmersive API Console. We can navigate to this page from the API Console tab in our Cloudmersive Management Portal (CMP).

1a - navigate to console page
1 - api console page

We’ll scroll down to the NSFW path, where we’ll find the “Not safe for work NSFW racy content classification” tab.

2 - find NSFW tab

Clicking this tab will reveal a window which contains brief documentation describing the API, along with the option to scan a test file without writing any code. We’ll also find a detailed API response model in this window to help us extract important information from our API response object.

3 - NSFW API window

At the bottom of the window, we’ll select the Python programming language tab to bring up Python code examples. These examples take care of the bulk of our work: the Image API client library imports are already written in, the configuration and file input snippets are set to placeholder strings, and our API instance is called and printed within a try/except block.

4a - select python, review imports
4b - select python, review snippets

We’ll find the pip command we need to install the Image API client library by clicking on the “Install Python SDK” dropdown.

4c - pip install

As promised, we’ve included these code examples in this article. You can copy and paste directly from the code snippets below:

pip install cloudmersive-image-api-client


from __future__ import print_function
import time
import cloudmersive_image_api_client
from cloudmersive_image_api_client.rest import ApiException
from pprint import pprint

# Configure API key authorization: Apikey
configuration = cloudmersive_image_api_client.Configuration()
configuration.api_key['Apikey'] = 'YOUR_API_KEY'



# create an instance of the API class
api_instance = cloudmersive_image_api_client.NsfwApi(cloudmersive_image_api_client.ApiClient(configuration))
image_file = '/path/to/inputfile' # file | Image file to perform the operation on.  Common file formats such as PNG, JPEG are supported.

try:
    # Not safe for work NSFW racy content classification
    api_response = api_instance.nsfw_classify(image_file)
    pprint(api_response)
except ApiException as e:
    print("Exception when calling NsfwApi->nsfw_classify: %s\n" % e)

Testing the NSFW API in Colab

We’ll now select all the code examples we need (including the pip command), copy them to our clipboard, and paste them in our Python file (or code cell if we’re using Colab or Jupyter Notebooks).

5 - paste code in colab

Next, we’ll copy the pip install command and run it in our terminal or code cell.

6 - run pip install
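For reference, if we’re working in a Colab or Jupyter code cell rather than a terminal, the same command can be run by prefixing it with an exclamation mark:

!pip install cloudmersive-image-api-client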

Now that we have the Image API library installed in our environment, we’ll configure our API key by replacing the 'YOUR_API_KEY' placeholder text with our actual API key string. It’s good practice to obfuscate this key in our code, and we’ll do so in this example using Colab’s Secrets tab.

7 - set api key
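Here’s a minimal sketch of what that looks like, assuming the imports from the snippet above and a Colab secret named CLOUDMERSIVE_API_KEY (the secret name is just a placeholder for whatever you call yours):

# Read the API key from Colab's Secrets tab instead of hard-coding it.
# 'CLOUDMERSIVE_API_KEY' is a placeholder secret name.
from google.colab import userdata

configuration = cloudmersive_image_api_client.Configuration()
configuration.api_key['Apikey'] = userdata.get('CLOUDMERSIVE_API_KEY')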

We’ll now move down to our next code snippet and replace the '/path/to/inputfile' placeholder text with the path to a locally stored image. In this example, we’ll upload three images with varying degrees of raciness to our Colab runtime, and we’ll test each of them one at a time. Please note that we will not be testing any 'UnsafeContent_HighProbability' images in this example.

8 - upload test images
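If you’d prefer to scan all three uploads in one pass rather than editing the file path for each test below, a simple loop also works. This sketch assumes the api_instance and ApiException from the earlier snippet; the file paths are placeholders for wherever your images land in the Colab runtime, and the score and classification_outcome attribute names are assumed from the client’s response model:

# Sketch: scan several locally stored test images in one pass.
# File paths below are placeholders; adjust them to your own uploads.
test_images = [
    '/content/gray_suit.jpg',
    '/content/unbuttoned_shirt.jpg',
    '/content/pink_dress.jpg',
]

for image_path in test_images:
    try:
        result = api_instance.nsfw_classify(image_path)
        print(image_path, result.score, result.classification_outcome)
    except ApiException as e:
        print("Exception when calling NsfwApi->nsfw_classify: %s\n" % e)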

Test Case 1: Man Wearing Gray Suit

In our first test case, we’ll scan a stock image of a man wearing a gray suit. This image is rendered on the right-hand side of the page in the screenshot below.

9 - test man with gray suit

This control test returns an extremely low score of 0.12. This constitutes a 'classification_outcome': 'SafeContent_HighProbability' classification.

Test Case 2: Man Wearing Unbuttoned Shirt

In our second test case, we’ll scan a stock image of a man sitting on a chair with his shirt unbuttoned. This image is rendered on the right-hand side of the page in the screenshot below.

10 - man with unbuttoned shirt

This test returns a high score of 0.70, constituting a 'classification_outcome': 'RacyContent' designation. This indicates the image is likely too revealing and suggestive to be considered safe for enterprise environments.

Test Case 3: Woman Wearing Pink Dress

In our third and final test case, we’ll scan a stock image of a woman wearing a pink dress. This image is rendered on the right-hand side of the page in the screenshot below.

11 - woman in pink dress

This test returns a low score of 0.22, constituting a 'SafeContent_ModerateProbability' classification. There’s nothing inherently racy or explicit about the image, so it’s safe to view, though there’s some room to interpret suggestive themes.

Conclusion

The Cloudmersive NSFW API is an effective Deep Learning AI tool for scoring and classifying images as NSFW. It’s easy to use in any Python application thanks to ready-to-run code examples sourced from the Cloudmersive API Console. If you have any questions about implementing the NSFW API for enterprise use cases, please feel free to contact a member of our team.
