STORAGE PROTECT FOR GOOGLE CLOUD
Data is the most precious commodity for any online business. However, that data can sometimes work against us, harboring hidden threats and policy violations. What if a few new files in your Google Cloud storage instance are hiding ‘dormant’ viruses or malware? What if a website user uploaded a product review photo containing sexually explicit content, violating your company’s NSFW (not safe for work) policy? These deceptive, threat-bearing files create big problems when they’re mistakenly trusted and opened. It’s easy to assume that files are safe simply because they already sit inside your own storage environment rather than outside of it. For that reason, it’s best to check your data for hidden threats once it lands in storage.
Thankfully, with Cloudmersive Storage Protect for Google Cloud, you can easily scan new files as they enter Google Cloud storage to find out whether any viruses or malware lie hidden within them. Further, you can configure an AI Content Moderation feature that scans image files and determines whether they contain racy or pornographic (NSFW) content.
VIRUS SCANNING API
Equipped with more than 17 million virus and malware signatures and bolstered by regular cloud-based updates, the underlying Cloudmersive Virus Scanning API in Storage Protect looks under the ‘hood’ of each new file entering Google Cloud storage to determine whether it contains a threat.
This flagship content security feature can be configured to perform either Basic or Advanced scans: the former covers virus and malware detection, while the latter expands that coverage into 360-degree content protection, detecting executables, invalid files, and scripts, and enforcing restrictions on accepted file types.
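As a rough illustration of what happens behind the scenes, the sketch below submits a file to the Cloudmersive virus scan endpoint over HTTP. The endpoint URL, the Apikey header, and the response fields shown are assumptions drawn from Cloudmersive’s public API documentation rather than part of Storage Protect’s own configuration, and the API key is a placeholder.

    import requests

    API_KEY = "YOUR-CLOUDMERSIVE-API-KEY"  # placeholder; keys are issued from the Cloudmersive portal

    def scan_file(path):
        """Submit a local file for a basic virus/malware scan and return the JSON verdict."""
        with open(path, "rb") as f:
            response = requests.post(
                "https://api.cloudmersive.com/virus/scan/file",  # assumed basic-scan endpoint
                headers={"Apikey": API_KEY},
                files={"inputFile": f},
                timeout=60,
            )
        response.raise_for_status()
        # Assumed response shape, e.g. {"CleanResult": true, "FoundViruses": null}
        return response.json()

    result = scan_file("upload.pdf")
    print("clean" if result.get("CleanResult") else "threat detected")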
After each scan, problematic files can be assigned special outcome actions, including the option to quarantine or delete those files outright. Clean files can be easily filtered and tagged as well, ensuring you have full control on either end of the spectrum.
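To make the quarantine outcome concrete, here is a minimal sketch of the equivalent manual step using the google-cloud-storage client library: copy the flagged object into a separate quarantine bucket, then delete the original. Storage Protect applies this kind of outcome action for you automatically; the bucket and object names below are hypothetical.

    from google.cloud import storage

    def quarantine_blob(source_bucket_name, blob_name, quarantine_bucket_name):
        """Move an object flagged by a scan into a quarantine bucket."""
        client = storage.Client()
        source_bucket = client.bucket(source_bucket_name)
        blob = source_bucket.blob(blob_name)
        quarantine_bucket = client.bucket(quarantine_bucket_name)

        # Copy the object into the quarantine bucket under the same name...
        source_bucket.copy_blob(blob, quarantine_bucket, blob_name)
        # ...then remove the threat from the production bucket.
        blob.delete()

    quarantine_blob("prod-uploads", "reviews/photo-123.jpg", "prod-uploads-quarantine")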
A.I. CONTENT MODERATION API
The Cloudmersive NSFW Image Classification API is deployed alongside the Cloudmersive Virus Scanning API in Storage Protect, leveraging machine learning to verify that image files in storage are safe and appropriate for everyone to view (based on each customer’s own NSFW policy). This NSFW Classification feature is configured by enabling the Advanced Scan feature of Storage Protect.
With the NSFW scanning option configured, new image files entering cloud storage are analyzed and categorized based on the degree of racy or pornographic content they contain. To quantify that content, a “Profanity Score Result” is automatically provided for each scanned image, with values ranging from 0.0 to 1.0. Scores at the bottom of that range indicate a lower likelihood that the image contains racy or pornographic content, while higher scores indicate that such content is likely present. Accompanying the Profanity Score Result, a natural language description indicates whether the image has a Low (0.0 – 0.2), Medium (0.2 – 0.8), or High (0.8 – 1.0) probability of containing NSFW content.
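As a small sketch of how those bands line up with the score, the snippet below maps a Profanity Score Result to its Low/Medium/High description using the thresholds quoted above; the function name and the handling of exact boundary values are illustrative assumptions.

    def nsfw_band(score):
        """Map a 0.0-1.0 Profanity Score Result to its probability band."""
        if score < 0.2:
            return "Low"     # racy or pornographic content unlikely
        if score < 0.8:
            return "Medium"  # possible NSFW content; may warrant review
        return "High"        # NSFW content very likely present

    for score in (0.05, 0.45, 0.92):
        print(score, nsfw_band(score))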
Storage Protect is also currently available for AWS S3, Azure Blob Storage, and SharePoint Online Site Drive storage instances.